
Thread: SSD Write Endurance 25nm Vs 34nm

  1. #101
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by Ao1 View Post
    Here is a screen shot on a fresh run with the 100% non compressible option.

    Anvil, are you sure the xfers are 100% non-compressible?
    The 100% Incompressible option results in a file that is 101% of the original size; it's not even possible to deduplicate that file. (I've checked with 7Zip.)

    I'll do another test and see if something has changed.

    I expect you can do about 40-50MB/s on that 40GB drive using incompressible data; I don't know if/how throttling would make it worse.

    edit:
    Here are a few samples of the original vs the compressed file.
    100perc.png

    I'll do a few tests, but it looks like it's working as designed.


    @Vapor
    The 40GB should cost < $100.
    Last edited by Anvil; 05-21-2011 at 01:58 PM.

  2. #102
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    The 100% Incompressible option results in a file that is 101% of the original size; it's not even possible to deduplicate that file. (I've checked with 7Zip.)
    How much random data does your program have, and how much does it write before the same random pattern repeats (if it repeats)?

    In other words, a bad way to do it would be to have 512 Bytes of random data and write that same 512 Bytes over and over. A better way would be to have many megabytes of random data before it repeats, or even to generate the random data on the fly (if it can be done fast enough) and not have any repetition at all. Based on your data, I guess any repetition must be at least 11MB or more apart, since the 11,432KB file cannot be compressed by 7Zip. But what about the smaller files, like the 1KB one? If it happens to write two small files in a row, say 1KB and then 4KB, is the random data in the second file different from the random data in the first one?
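    To make the distinction concrete, here is a minimal Python sketch of the two approaches described above. It is purely illustrative and is not code from Anvil's app; os.urandom and zlib merely stand in for the test app's generator and for whatever the controller does internally.

    Code:
    import os, zlib

    def bad_stream(total_bytes):
        """The 'bad way': repeat the same 512 bytes of random data over and over."""
        block = os.urandom(512)
        return block * (total_bytes // 512)

    def better_stream(total_bytes):
        """The 'better way': fresh random data for the whole stream, no repetition."""
        return os.urandom(total_bytes)

    for name, data in (("repeating 512B block", bad_stream(1 << 20)),
                       ("fresh random data", better_stream(1 << 20))):
        ratio = len(zlib.compress(data, 6)) / len(data)
        print(f"{name}: compresses to {ratio:.0%} of original")

    The repeating stream collapses to a tiny fraction of its size, while the fresh random stream stays at roughly 100%, which is also what a compressing controller would see.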
    Last edited by johnw; 05-21-2011 at 03:01 PM.

  3. #103
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The data is created on the fly; nothing is repeated. If it were, it wouldn't work.

    As long as the random data generator is active it creates fresh random data for every write.

    So yes, it looks like you've got the point about repeating data right.

    The issue here might be that the CPU is already a bottleneck for generating the random data; I'll have to do some tests.
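    One rough way to check whether random-data generation alone can keep up with a ~40-50MB/s drive is to time something like the sketch below. It assumes a 4MB chunk per write and uses os.urandom as a stand-in for the app's generator; it is not Anvil's code.

    Code:
    import os, time

    CHUNK = 4 * 1024 * 1024          # 4MB of fresh data per "write"
    TOTAL = 1024 * 1024 * 1024       # generate 1GB in total

    start = time.perf_counter()
    produced = 0
    while produced < TOTAL:
        buf = os.urandom(CHUNK)      # fresh random data for every write
        produced += len(buf)
    elapsed = time.perf_counter() - start
    print(f"generated {produced / 2**20:.0f} MB at {produced / 2**20 / elapsed:.0f} MB/s")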

  4. #104
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    The data is created on the fly; nothing is repeated. If it were, it wouldn't work.
    Repeating could work, if you had a large enough data set. You could pre-generate, say, 1GB of random data, load it into memory, and then just step through it, 512B at a time. Theoretically, some deduplication algorithms could compress that once you got through the first GB and started repeating, but I seriously doubt the Sandforce controller can do that, especially if you changed the offset on successive runs through the data. (But it might be able to if you repeated after 4KB, say)
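    A minimal sketch of this pre-generated-pool idea follows; it is illustrative only, and the 64MB pool size and the per-pass offset shift are arbitrary choices rather than anything from Anvil's app.

    Code:
    import os

    POOL_SIZE = 64 * 1024 * 1024     # 64MB pool stands in for the 1GB suggested above
    SLICE = 512
    pool = os.urandom(POOL_SIZE)     # generated once, up front

    def slices(pass_number):
        """Yield 512B slices, starting at a different offset on each pass through the pool."""
        offset = (pass_number * 97) % SLICE          # arbitrary shift per pass
        for pos in range(offset, POOL_SIZE - SLICE + 1, SLICE):
            yield pool[pos:pos + SLICE]

    # Example: count how much data two passes through the pool would supply.
    written = sum(len(s) for p in range(2) for s in slices(p))
    print(f"two passes supply {written / 2**20:.0f} MB from a {POOL_SIZE / 2**20:.0f} MB pool")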

  5. #105
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The buffer is 4MB in size; the random data generator generates only what's needed, so if the file is supposed to be 4KB it generates exactly that.

    Huge in-memory data sets could possibly work, but there would be overhead with that model as well.

    I have no idea how advanced the SF controller is, but I bet it's pretty good; I'm also quite sure that we're already overdoing it.
    The 100% option does produce 101% files using 7Zip, and that is a 1% penalty for the SF controller; not much for a few small files, but for millions of files it makes a difference.

    I'm pretty sure that at some given level of compression it will simply store the data as is.

    SF did once say that a full W7 + Office installation that normally would generate 25GB of writes ended up as 11GB of "stored" data.

    There is also the option of using encrypted data; I haven't really read up on how that impacts the SF controller.
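    For illustration, here is a tiny "compress, but store as-is if it doesn't shrink" check along the lines described above. zlib is only a stand-in for whatever SandForce actually does, and the ~11MB size is borrowed from the sample file mentioned earlier in the thread.

    Code:
    import os, zlib

    def stored_size(data):
        """Keep the compressed form only if it is actually smaller (like a ZIP 'stored' entry)."""
        compressed = zlib.compress(data, 6)
        return min(len(compressed), len(data))

    random_file = os.urandom(11432 * 1024)   # ~11MB of incompressible data
    zero_file   = bytes(11432 * 1024)        # 0-fill, maximally compressible

    for name, data in (("random", random_file), ("0-fill", zero_file)):
        print(f"{name}: {stored_size(data) * 100 // len(data)}% of original kept")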

  6. #106
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is a comparative shot when writing 100% incompressible data. It's the same scale as the screenshot in post #66 (0.1), but I have zoomed in to get more detail.

    I can see the same "hang" just after a new loop starts.

    Avg write is 48.32MB/s (according to hIOmon). The highest ResponseTime_Max_Control is 5.215s.

    Despite the overwhelming evidence I'm a bit suspicious that somehow compression is still occurring to some extent.

    5.76TB writes/ 100% life
    [Attached screenshot: Untitled.png]

  7. #107
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Oops, I just noticed that in the first screenshot in post #66 the logical disk was being monitored. In post #106 I captured the physical disk level. They both read the same anyway.
    Last edited by Ao1; 05-21-2011 at 05:13 PM.

  8. #108
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    Huge in-memory data sets could possibly work, but there would be overhead with that model as well.
    Yes, infinite speed is not possible.
    But memory bandwidth is many GB/s -- well over 10 GB/s on recent DDR3 systems.
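    For a crude order-of-magnitude check of memory-copy bandwidth from Python (nothing like a proper STREAM benchmark, just a sketch), something like this can be run:

    Code:
    import os, time

    buf = os.urandom(256 * 1024 * 1024)       # 256MB source buffer
    copies = 4

    start = time.perf_counter()
    for _ in range(copies):
        dst = bytearray(buf)                  # forces a real copy of the buffer
    elapsed = time.perf_counter() - start
    print(f"copy bandwidth ~ {copies * len(buf) / 2**30 / elapsed:.1f} GB/s")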

  9. #109
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    The buffer is 4MB in size; the random data generator generates only what's needed, so if the file is supposed to be 4KB it generates exactly that.
    One more question. Are you sure you are not re-seeding the random number generator (with the same seed) each time you generate a new file? That would obviously result in repeated data.
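    A quick illustration of the pitfall being asked about; this is not Anvil's code, just a demonstration of the principle using Python's random module (randbytes needs Python 3.9+).

    Code:
    import random

    def make_file_reseeded(size, seed=12345):
        rng = random.Random(seed)            # same seed on every call -> identical data
        return rng.randbytes(size)

    rng_once = random.Random(12345)          # seeded once, then left running
    def make_file_continuous(size):
        return rng_once.randbytes(size)

    a, b = make_file_reseeded(4096), make_file_reseeded(4096)
    c, d = make_file_continuous(4096), make_file_continuous(4096)
    print("re-seeded files identical:     ", a == b)   # True  -> repeated data
    print("continuously drawn files equal:", c == d)   # False -> fresh data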

  10. #110
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    Quote Originally Posted by overthere View Post
    However, each of these three write file I/O operations can actually be to non-contiguous locations upon the device. And so, hIOmon observes the three write I/O operations to the device as "random" write I/O operations.
    On an SSD, whether the logical LBAs are sequential or not plays absolutely no role - the physical placement is so scattered that even 64 consecutive 4KB writes can end up in 64 different (and thus random) locations.
    What I am trying to say is that whether some I/O is considered random by hIOmon on an SSD has little to do with whether that I/O is truly random.
    Since the app is single-threaded, essentially any write <=4KB can be considered random and anything larger would be sequential.

    Filesystem (re)allocation of clusters can also be a factor - but I don't want to go off topic here.
    It doesn't matter whether TRIM is in effect. Whether the FS allocates them in another free cluster or an existing free cluster is of no importance at the SSD's physical level, literally.
    I believe NTFS has CPU optimizations in that area for SSDs (i.e. it takes the road with the fewest CPU cycles consumed vs. the "best placement" approach that would be used for HDDs).

    Quote Originally Posted by Anvil View Post
    The incompressible level you selected is actually 101% as it's totally incompressible. (it would result in a file of 101% of the original using 7Zip)
    A side note: 7Zip uses an extreme compression technique; SF probably uses plain ZIP or LZ like NTFS. The maximum data block is likely 64KB or maybe even less (a random presumption for SF; it's 64KB for NTFS). If the data can't be compressed it just isn't - the controller won't store more than the original. 7Zip can, depending on the compression settings.

    Quote Originally Posted by Vapor View Post
    I think 100% is the best test for NAND durability (and parity scheme, I suppose). But if you're entering a SF into the test, one of its big features that sets it apart is the compression and negating it doesn't represent what the SF can do (for most usage models).
    But testing with even somewhat compressible data will reduce the amount of data written to the NAND, which in turn defeats the purpose of these tests.
    Even though it can write the compressible data at 140MB/s, what is to stop it from actually compressing it to 1% of its size if it's that compressible?

    Take what you have on a drive, copy it to another, compress the files with the 2nd lowest level of RAR compression (the lowest being no compression at all, just making a single file out of many), and observe the compressibility.
    Then look at the CPU utilization and the "speed" of compression compared to what the NAND can do.
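    A rough sketch of that experiment, with zlib level 1 standing in for the second-lowest RAR level; the file path is a hypothetical placeholder to be pointed at real data on your own drive.

    Code:
    import time, zlib

    PATH = r"C:\some\real\file.bin"          # placeholder -- substitute a real file

    data = open(PATH, "rb").read()
    start = time.perf_counter()
    compressed = zlib.compress(data, 1)      # fastest "real" compression level
    elapsed = time.perf_counter() - start

    print(f"compressed to {len(compressed) * 100 // len(data)}% of original")
    print(f"compression speed: {len(data) / 2**20 / elapsed:.0f} MB/s")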

  11. #111
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I left the endurance test running overnight with non-compressible data. Avg MB/s has not changed (according to hIOmon); it's still 48MB/s.

    7.168TB writes / 100% life. The only thing that has changed:

    Delta between most-worn and least-worn Flash blocks: 2 (used to be 0)

    Something does not seem right. The drive is not performing as expected, i.e. writes are showing no sign of being throttled and the blocks aren't wearing.

    According to a post by tony:

    "Let me put this into plain English. You buy a 60GB drive, you install win7 to it. Average install can be around 16 to 20GB. You then bench the drive with a benchmark such as CDM or AS-SSD that writes incompressible data.

    What happens?

    1st all writes go thru the OP on the drive, on a 60GB drive OP is around 8GB, if the benchmark writes more than 8GB the drive will now be erasing blocks before writing to blocks as only OP area is kept clean.

    2nd, The drive sees a constant stream of high volume writes, Duraclass kicks in to limit write speed to ensure longevity of the nand on the drive. It does this for a fixed period of time, it can be hours or days.

    3rd, the drive now goes into recovery mode...this is dictated by the Duraclass and the amount of writes the drive has seen in a fixed period of time.

    This is what end users see as a speed loss, there is also an additional drop in speed from an unmapped drive to a mapped drive regardless of whether the drive has seen short burst high volume writes but I disregard that drop as Duraclass is a needed function of the drive and its effect is inevitable...
    I prefer to use the terms used drive performance and hammered drive performance.

    used = fully mapped day to day speed.

    hammered = benched to full, massive amounts of incompressible data written in a short time period."

  12. #112
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    OK, I stopped the test and ran AS SSD. Clearly the SF drive is somehow compressing the test files in Anvil's app.
    [Attached screenshot: Untitled.png]

  13. #113
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by johnw View Post
    One more question. Are you sure you are not re-seeding the random number generator (with the same seed) each time you generate a new file? That would obviously result in repeated data.
    Yes, I'm sure

    I just checked and it is not being seeded at all during the Endurance test.
    (meaning it's using the default seed in the test)

    I've been working a lot with this exact problem when creating the SSD Benchmark that's getting close to being ready for beta.
    I've been through all the possible phases of buffer/blocksize vs repeating blocks and 7Zip is a pretty good check for this.

    I'll do some tests on the files using standard Zip or LZ like alfaunits suggested.
    I'm pretty sure the result will be that it's already being overdone; I might even remove some of the steps from the loop.

  14. #114
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @Ao1

    Could you try running CDM on that drive? The AS SSD test is a good indication but not a reference.
    AS SSD uses a 16MB "file" for testing the sequential speed.
    Also, random 4K is performing better than sequential writes, which is a bit odd.

    As long as you selected 100% it's doing exactly that.

    I have yet to see any of my SF drives showing < 100%, so it's not exceptional considering that you did run quite a few TB with easily compressible data.

  15. #115
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    How shall I run CDM? (0 fill or full?)

  16. #116
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    As it's all about wear you can do both, but 0-Fill = easily compressible data and Full (normal) = incompressible data.

    Also, I've checked with ZIP and LZ and they both agree with 7Zip; the file is larger than the original using ZIP and is simply stored using LZ.

    I'll send you an update anyway, as I've added a 67% compression option and done a bit of fine-tuning in the loop wrt compression.

    Here is a Perfmon of the transition from Writing->Deleting->Start Writing files again.

    perfmon_arrow.PNG

    That period does look a lot different from mine; imho, that is TRIM working on the SF drive.

    I'm using the Intel driver btw, are you using the Windows default or the Intel one?
    (could be significant as there is a "bug" in the MS one)
    Last edited by Anvil; 05-22-2011 at 02:28 AM.

  17. #117
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I ran CDM with non-compressible data whilst waiting. I'm running 0-fill now for comparison.
    [Attached screenshot: Untitled.png]

  18. #118
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    0-fill option. So it seems that compressible data does not get throttled.

    This really seems to imply that the endurance test data is somehow being compressed, as speeds have not changed throughout.
    [Attached screenshot: Untitled.png]
    Last edited by Ao1; 05-22-2011 at 02:42 AM. Reason: typo

  19. #119
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Yes, I'm 100% sure it's working.

    The CDM run confirms that it's working.

    So, where is the throttling? 40-50MB/s is pretty good for that small drive if it's being throttled.

    Did you have a look at my perfmon graph?

  20. #120
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post
    That period does look a lot different from mine; imho, that is TRIM working on the SF drive.

    I'm using the Intel driver btw, are you using the Windows default or the Intel one?
    (could be significant as there is a "bug" in the MS one)
    I was going to ask you to run perfmon. What I notice is that the hang happens just after the loop starts. It looks very much like a TRIM operation is causing the "hang". The hang duration coincides with the ~5 seconds recorded against ResponseTime_Max_Control, which I believe is TRIM related. (Hopefully overthere can confirm.) Either way, the hang seems to be much more than just the test files being deleted.

    I'm running RST (10.5.0.1022)

  21. #121
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Time for another update

    16_85_tb_cdi.PNG

    Media Wearout has just dropped to 89.

    > 5.7 million files have been created.

    edit:
    I'm using RST 10.5.0.1026/27

  22. #122
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    A new take on the graph

    Endurance_XY.png

    I don't think the Wearout Indicator is the right one to use, but it will have to do until there's something better.

  23. #123
    Xtreme Enthusiast
    Join Date
    Jun 2004
    Location
    plan3t 3@rth
    Posts
    987
    Quote Originally Posted by Ao1 View Post
    I was going to ask you to run perfmon. What I notice is that the hang happens just after the loop starts. It looks very much like a TRIM operation is causing the "hang". The hang duration coincides with the ~5 seconds recorded against ResponseTime_Max_Control, which I believe is TRIM related. (Hopefully overthere can confirm.) Either way, the hang seems to be much more than just the test files being deleted.

    I'm running RST (10.5.0.1022)
    My SSDs hung... I never understood why. :l Thanks for making this post, guys.

  24. #124
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    20.5TB. 90 wear indicator.

  25. #125
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I'm on Version 3 of the SSD slayer now.

    I'm going to keep running it on the incompressible option; otherwise this experiment will never end. I'm just coming up to 8TB and I'm still at 100%.

    Some observations/ thoughts:

    • When TRIM executes it seems to lock the drive until the command has finished.
    • The hang duration whilst the TRIM command is being executed seems to be the same regardless of whether the data is compressed or not.
    • It must be a bit tricky for the SF controller to translate the OS TRIM command to the compressed data on the SSD. Either way, a TRIM command on an Intel SSD seems more seamless. Perhaps it TRIMs more frequently, or it can do it without locking up the drive at the same time.
    • SF throttling seems to only really affect sequential speeds and 4K random speeds at high queue depth. 4K reads @ QD1 seem unaffected, although 4K random writes take a hit.
    • Strangely, SF drives throttle both read & write sequential speeds (why reads?).
    • Even in a throttled state the V2 is faster than the 320 & the X25-V (based on what is being reported by Anvil's app).
    • The X25-V and the 320 are exhibiting near-linear wear based on the SMART info.
    • Either the V2 has significantly better wear levelling or its SMART data is not as accurate.

    EDIT:

    Not sure what happens in Zone B. The loop finishes at point A. It appears to start a new loop, but a couple of seconds later it locks up. Anvil's graph shows the same peak just after the loop finishes, but there is then no delay like the one that can be seen in zone C below. It's the same pattern every time, and it only happens just after the new loop starts. Sometimes it's more pronounced, other times less so.

    One_Hertz, any chance of running perfmon to see how the 320 handles it?
    [Attached screenshot: Untitled.png]
    Last edited by Ao1; 05-22-2011 at 08:12 AM. Reason: better clarification
