
Thread: SSD Write Endurance 25nm Vs 34nm


  1. #1
    Admin (Vapor) · Join Date: Feb 2005 · Location: Ann Arbor, MI · Posts: 12,338
    Quote Originally Posted by Ao1:
    Anvil, I've just switched to the new app you sent me. I used the 100% incompressible setting and things look very different. Avg ~42 MB/s. Those files in the first app must have been near 100% compressible.
    If the peanut gallery gets a vote, I'd like to see a combination load. Realistic loads are neither fully incompressible nor fully compressible.


  2. #2
    Banned (johnw) · Join Date: Jan 2010 · Location: Las Vegas · Posts: 936
    Quote Originally Posted by Vapor:
    If the peanut gallery gets a vote, I'd like to see a combination load. Realistic loads are neither fully incompressible nor fully compressible.
    That complicates things needlessly. Just how compressible is a "realistic load"? And how much does the Sandforce actually compress when given typical data? (not very much, I think). Better to stick with incompressible data so that the test is repeatable and we know how much is actually written to the flash.

  3. #3
    Admin (Vapor) · Join Date: Feb 2005 · Location: Ann Arbor, MI · Posts: 12,338
    Quote Originally Posted by johnw:
    I think 100% incompressible is the best choice, at least for a first test. That way anyone who wants to try to repeat your measurement can do so.
    I think 100% incompressible is the best test of NAND durability (and of the parity scheme, I suppose). But if you're entering an SF drive into the test, the compression is one of the big features that sets it apart, and negating it doesn't represent what the SF can do for most usage models.

    I'd like to see both tested, of course, but the call for MOAR testing is always there.

    Quote Originally Posted by johnw:
    That complicates things needlessly. Just how compressible is a "realistic load"? And how much does the Sandforce actually compress when given typical data? (not very much, I think). Better to stick with incompressible data so that the test is repeatable and we know how much is actually written to the flash.
    Take what you have on a drive, copy it to another, compress the files with the second-lowest level of RAR compression (the lowest being 0 compression, which just packs many files into one), and observe the compressibility.
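
    As a rough sketch of that check (my own approximation, not the poster's exact procedure: Python's zlib at its fastest level stands in for RAR's lowest real compression setting), something like this reports how compressible a folder of files is:

    Code:
    # Estimate how compressible a set of files is, using zlib level 1 as a
    # stand-in for RAR's lowest non-store setting. The directory argument and
    # the 1 MB chunk size are illustrative choices, not from the thread.
    import os
    import sys
    import zlib

    def measure_compressibility(root, chunk_size=1 << 20):
        raw = packed = 0
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                try:
                    with open(os.path.join(dirpath, name), "rb") as f:
                        while True:
                            chunk = f.read(chunk_size)
                            if not chunk:
                                break
                            raw += len(chunk)
                            packed += len(zlib.compress(chunk, 1))
                except OSError:
                    continue  # skip files we can't read
        return raw, packed

    if __name__ == "__main__":
        raw, packed = measure_compressibility(sys.argv[1] if len(sys.argv) > 1 else ".")
        if raw:
            print(f"{raw / 1e9:.2f} GB read, {packed / 1e9:.2f} GB compressed "
                  f"({100 * (1 - packed / raw):.1f}% saved)")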

  4. #4
    Xtreme Mentor · Join Date: Feb 2009 · Posts: 2,597
    I kept hIOmon running, so the stats below reflect the data from both versions of the app; straight away, though, the max response times have jumped up following the switch to incompressible data.

    I believe the Max Control (4.69 s) is a TRIM-related operation (?)

    Throughput is swinging from 20 to 40 MB/s.

    Got to pop out for a bit; I'll switch to more compressible data later.
    [Attached thumbnail: Untitled.png]

  5. #5
    Xtreme Member (frostedflakes) · Join Date: Oct 2004 · Posts: 300
    Quote Originally Posted by johnw:
    That complicates things needlessly. Just how compressible is a "realistic load"? And how much does the Sandforce actually compress when given typical data? (not very much, I think). Better to stick with incompressible data so that the test is repeatable and we know how much is actually written to the flash.
    AnandTech mentioned in the Vertex 3 preview, I believe, that all of their in-house SandForce drives had a write amplification of about 0.6x. So assuming a write amplification of 1.1 for incompressible data (which seems reasonable, since that's what Intel and other good controllers without compression/dedupe can achieve), as an OS drive it can compress host writes by about 45% on average.

    I think it would be far more valuable to see how well the SF controller deals with more realistic workloads like this than with completely compressible or incompressible data. But that's just my $0.02.
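
    To spell out the arithmetic behind that ~45% figure (a back-of-the-envelope check of my own, using only the 0.6 and 1.1 numbers above):

    Code:
    # If incompressible data reaches the flash with a write amplification of
    # ~1.1 and everyday OS-drive data with ~0.6, the controller is shrinking
    # host writes by roughly 1 - 0.6 / 1.1.
    wa_incompressible = 1.1  # typical WA without compression (quoted above)
    wa_observed = 0.6        # AnandTech's reported in-house SandForce WA
    reduction = 1 - wa_observed / wa_incompressible
    print(f"Effective reduction of host writes: ~{reduction:.0%}")  # ~45%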
    Last edited by frostedflakes; 05-21-2011 at 09:53 AM.

  6. #6
    Banned (johnw) · Join Date: Jan 2010 · Location: Las Vegas · Posts: 936
    Quote Originally Posted by frostedflakes:
    AnandTech mentioned in the Vertex 3 preview, I believe, that all of their in-house SandForce drives had a write amplification of about 0.6x. So assuming a write amplification of 1.1 for incompressible data (which seems reasonable, since that's what Intel and other good controllers without compression/dedupe can achieve), as an OS drive it can compress host writes by about 45% on average.
    I read that, too. The problem is that it is likely a bogus number, since Anand's drives get a lot of benchmarks run on them which write highly compressible (and unrealistic) data.

    I think what you are talking about should be a separate experiment. You could put monitoring programs on computers to record months of writes, and then find a way to play those writes back and determine how much Sandforce can compress them.

    But for this experiment, I think it is important to know how much data is actually written to the flash, and the only way to know that with a Sandforce drive is to use random data.

    Random data also has the benefit of being a worst-case scenario for Sandforce, and I think that is what is most valuable to know, since then you can state with confidence that a more typical workload will likely result in a drive lasting AT LEAST X amount of writes. Anything other than random data and you have to make wishy-washy statements like, well, it might last that long, but if you write data that is less compressible, it will probably last a shorter time.
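
    A minimal illustration of that point (my own sketch; the buffer size and names are arbitrary): a buffer of random bytes simply does not compress, so the host-write count becomes a floor on what reaches the flash, whereas a zero-fill buffer collapses to almost nothing.

    Code:
    # Compare how a random buffer and a zero-filled buffer of the same size
    # respond to compression. The 4 MiB size is arbitrary.
    import os
    import zlib

    size = 4 << 20  # 4 MiB
    buffers = {"random": os.urandom(size), "zero-fill": bytes(size)}

    for label, buf in buffers.items():
        ratio = len(zlib.compress(buf, 1)) / len(buf)
        print(f"{label:9s}: compressed to {ratio:.2%} of original size")
    # random stays at ~100% (slightly above, due to framing overhead), while
    # zero-fill drops to a small fraction of a percent.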
    Last edited by johnw; 05-21-2011 at 10:05 AM.

  7. #7
    Xtreme Member (frostedflakes) · Join Date: Oct 2004 · Posts: 300
    I got the impression from the article that these weren't review drives. So the WA should be typical of what a regular user would see if they used it as an OS drive.

    Thankfully one of the unwritten policies at AnandTech is to actually use anything we recommend. If we're going to suggest you spend your money on something, we're going to use it ourselves. Not in testbeds, but in primary systems. Within the company we have 5 SandForce drives deployed in real, every day systems. The longest of which has been running, without TRIM, for the past eight months at between 90 and 100% of its capacity.
    http://www.anandtech.com/show/4159/o...t-sf2500-ssd/2

    You're right about it being a good indicator of worst-case durability, though, and from this we could still extrapolate to more ideal situations with more compressible data.

  8. #8
    Banned (johnw) · Join Date: Jan 2010 · Location: Las Vegas · Posts: 936
    Quote Originally Posted by frostedflakes:
    I got the impression from the article that these weren't review drives. So the WA should be typical of what a regular user would see if they used it as an OS drive.
    I read that, too. I still think you are wrong about those drives seeing "typical" use. Do you really think that the guys using those drives aren't playing around and running a boatload of benchmarks on them? I seriously doubt Anand's crew bought Sandforce drives and gave them to "typical" users to work with, without playing with them themselves first.

  9. #9
    Xtreme Mentor · Join Date: Feb 2009 · Posts: 2,597
    Here are the options in Anvil's app. I could run at 100% incompressible for the same length of time that I ran the 0-fill, which would put me at ~50%. I could then go to the 46% setting, or I could just carry on with 100% incompressible?

    I'm inclined to stick with 100% incompressible, but if there's a consensus otherwise I can do whatever.
    [Attached thumbnail: Untitled.png]
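
    For anyone wanting to approximate a middle setting like the 46% option outside Anvil's app, one hypothetical approach (not how Anvil's tool necessarily builds its patterns) is to mix random and constant bytes within each block:

    Code:
    # Build a write buffer that is only partly compressible by filling part of
    # each 4 KiB block with random bytes and padding the rest with zeros.
    # Hypothetical illustration only.
    import os
    import zlib

    def mixed_block(block_size=4096, compressible_fraction=0.46):
        random_bytes = int(block_size * (1 - compressible_fraction))
        return os.urandom(random_bytes) + bytes(block_size - random_bytes)

    buf = b"".join(mixed_block() for _ in range(1024))  # ~4 MiB of 4 KiB blocks
    ratio = len(zlib.compress(buf, 1)) / len(buf)
    print(f"Compressed size: {ratio:.0%} of original")  # roughly mid-50s percent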
