That complicates things needlessly. Just how compressible is a "realistic load"? And how much does SandForce actually compress when given typical data? (Not very much, I suspect.) Better to stick with incompressible data, so the test is repeatable and we know exactly how much is actually written to the flash.
I think 100% incompressible is the best test for NAND durability (and the parity scheme, I suppose). But if you're entering an SF drive into the test, one of the big features that sets it apart is compression, and negating it doesn't represent what the SF can do for most usage models.
I'd like to see both tested, of course, but the call for MOAR in testing is always there.
Take what you have on a drive, copy it to another, compress the files with the second-lowest RAR compression level (the lowest being 0 compression, which just makes a single file out of many), and observe the compressibility!
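If you'd rather script that than run RAR by hand, here's a minimal sketch of the same idea in Python, using zlib at its lowest real compression level as a rough stand-in for RAR's second-lowest setting (the ratios won't match RAR exactly, and the path is a placeholder):

```python
import zlib
from pathlib import Path

def estimate_compressibility(root, level=1, chunk_size=1 << 20):
    """Rough compressibility estimate: compress every file under root
    in 1 MiB chunks at a low compression level and compare sizes."""
    raw, packed = 0, 0
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                raw += len(chunk)
                packed += len(zlib.compress(chunk, level))
    return 1 - packed / raw if raw else 0.0

# Placeholder path -- point it at the copied data set.
print(f"~{estimate_compressibility('/path/to/copy') * 100:.0f}% compressible")
```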
I kept hIOmon running, so the stats below reflect data from both versions of the app; straight away, though, the max response times have jumped up following the switch to incompressible data.
I believe the Max Control (4.69s) is a TRIM-related operation(?)
Throughput is swinging between 20 and 40 MB/s.
Got to pop out for a bit; I'll switch to more compressible data later.
AnandTech mentioned in the Vertex 3 preview, I believe, that all of their in-house SandForce drives had a write amplification of about 0.6x. So assuming a write amplification of 1.1 for incompressible data (which seems reasonable, since that's what Intel and other good controllers without compression/dedupe can achieve), as an OS drive it can compress host writes by about 45% on average.
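To spell out the arithmetic behind that ~45% figure (a quick sketch using the numbers above; the 1.1x baseline is an assumption, not a measurement):

```python
# Figures from the post: SandForce WA ~0.6x vs. an assumed ~1.1x baseline
# for a good controller without compression/dedupe.
wa_sandforce = 0.6
wa_baseline = 1.1

# If both controllers see the same host writes, the ratio of their NAND
# writes approximates how much the compression shrinks the data.
reduction = 1 - wa_sandforce / wa_baseline
print(f"Effective compression of host writes: ~{reduction:.0%}")  # ~45%
```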
I think it would be far more valuable to see how well the SF controller deals with more realistic workloads like this than with completely compressible or incompressible data. But that's just my $0.02.
I read that, too. The problem is that it's likely a bogus number, since Anand's drives get a lot of benchmarks run on them, and those benchmarks write highly compressible (and unrealistic) data.
I think what you are talking about should be a separate experiment. You could put monitoring software on some computers to record months of writes, then find a way to play those writes back and determine how much SandForce can compress them (a rough sketch of the replay idea is below).
But for this experiment, I think it is important to know how much data is actually written to the flash, and the only way to know that with a SandForce drive is to use random data.
Random data also has the benefit of being a worst-case scenario for SandForce, and I think that is what is most valuable to know, since then you can state with confidence that a more typical workload will likely result in the drive lasting AT LEAST X amount of writes. With anything other than random data, you have to make wishy-washy statements like: well, it might last that long, but if you write less compressible data, it will probably last a shorter time.
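For what it's worth, the record-and-replay idea might look something like this. Everything here is hypothetical for illustration: a trace.csv with one "offset,length" line per recorded host write, and a capture.bin holding the written bytes concatenated in the same order.

```python
import csv

# Minimal sketch of replaying a recorded write trace against a target.
def replay(trace_path, capture_path, target_path):
    with open(trace_path, newline="") as trace, \
         open(capture_path, "rb") as capture, \
         open(target_path, "r+b") as target:
        for offset, length in csv.reader(trace):
            target.seek(int(offset))                 # original write offset, in bytes
            target.write(capture.read(int(length)))  # replay the original data

# Usage (hypothetical paths; writing to a raw device needs root):
# replay("trace.csv", "capture.bin", "/dev/sdX")
```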
I got the impression from the article that these weren't review drives. So the WA should be typical of what a regular user would see if they used one as an OS drive.
http://www.anandtech.com/show/4159/o...t-sf2500-ssd/2
"Thankfully one of the unwritten policies at AnandTech is to actually use anything we recommend. If we're going to suggest you spend your money on something, we're going to use it ourselves. Not in testbeds, but in primary systems. Within the company we have 5 SandForce drives deployed in real, every day systems. The longest of which has been running, without TRIM, for the past eight months at between 90 and 100% of its capacity."
You're right about it being a good indicator of worst-case durability, though, and we could still extrapolate from this to more ideal situations with more compressible data.
I read that, too. I still think you are wrong about those drives seeing "typical" use. Do you really think that the guys using those drives aren't playing around and running a boatload of benchmarks on them? I seriously doubt Anand's crew bought Sandforce drives and gave them to "typical" users to work with, without playing with them themselves first.
Here are the options in Anvil's app. I could run at 100% incompressible for the same length of time that I ran the 0-fill, which would put the overall mix at ~50% (quick check below); I could then go to the 46% option, or just carry on with 100% incompressible.
I'm inclined to keep it at 100% incompressible, but if there is a consensus otherwise I can do whatever.
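A quick sanity check on that ~50% figure, assuming (hypothetically) that roughly the same amount of data gets written in the 0-fill run and in an equally long incompressible run:

```python
# Assumed totals for two equal-length phases; swap in real TB figures.
tb_zero_fill = 10.0        # data written during the 0-fill phase
tb_incompressible = 10.0   # data written at 100% incompressible
mix = tb_incompressible / (tb_zero_fill + tb_incompressible)
print(f"Incompressible share of total writes: ~{mix:.0%}")  # ~50%
```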