Quote Originally Posted by sergiu View Post
Now, the worst possible scenario is indeed using an SSD as a cache, but keep in mind that the usage model for a cache is random writes plus random reads, where, depending on the scenario, you might see more reads than writes, which lowers the average write rate significantly and gives the NAND cells a recovery period. If realistic enterprise usage testing is desired, then the SSDs should be tested such that each page that was written is also read at least once. Pure random writes across the entire space would be torture, not real-life usage.
Even as a cache device it won't see the 4KB random writes that IOMeter unleashes on a drive. A typical block size for ZFS or an Oracle database is much larger than 4KB. Caches are also typically tuned to merge smaller writes into bigger ones, because in-memory buffers sit in front of the SSD (so it sees 128KB random writes instead of 4KB ones). On top of that, if the caching algorithm is any good, your cache will be read far more often than it is written to.
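To make the write-merging point concrete, here's a minimal sketch (hypothetical, illustrative only; the class and constants are my own, not from any real caching layer) of an in-memory buffer that coalesces 4KB application writes into 128KB device writes:

```python
BLOCK = 4 * 1024          # size of each incoming small write
FLUSH_SIZE = 128 * 1024   # coalesced write size issued to the "SSD"

class CoalescingBuffer:
    """Accumulates small writes in RAM and flushes them in large chunks."""
    def __init__(self):
        self.pending = bytearray()
        self.device_writes = []   # each entry is one write sent to the device

    def write(self, data: bytes):
        self.pending += data
        # Flush whenever a full 128KB chunk has accumulated.
        while len(self.pending) >= FLUSH_SIZE:
            chunk = bytes(self.pending[:FLUSH_SIZE])
            self.pending = self.pending[FLUSH_SIZE:]
            self.device_writes.append(chunk)

buf = CoalescingBuffer()
for _ in range(64):                  # 64 x 4KB = 256KB of small random writes
    buf.write(b"x" * BLOCK)

print(len(buf.device_writes))        # -> 2 (two 128KB device writes)
print(len(buf.device_writes[0]))     # -> 131072
```

So 64 small writes reach the SSD as just two large ones, which is much gentler on the NAND than the 4KB random-write pattern a synthetic benchmark generates.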