
Thread: SSD Write Endurance 25nm Vs 34nm

  1. #2076
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Kingston SSDNow 40GB (X25-V)

    378.94TB Host writes
    Reallocated sectors : 11 (up 1)
    MD5 OK

    33.27MiB/s on avg (~83 hours)

    --

    Corsair Force 3 120GB

    01 94/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 57 (Wear range delta)
    E6 100 (Life curve status)
    E7 66 (SSD Life left)
    E9 133617 (Raw writes)
    F1 177979 (Host writes)

    109.80MiB/s on avg (~90 hours)

    power on hours : 509

    Wear range delta is still decreasing.

    The drive looks stable, I've been using the system a bit today and it doesn't seem to care.

  2. #2077
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Ao1,

    This could explain why my Mushkin's Wear Range Delta has dropped. The drive disconnects and has plenty of idle time. Of course, it also has 5000 P/E rated NAND instead of the 3000 P/E NAND in the Force 3.

    Now that the Force 3 isn't suffering from disconnects, it can't shuffle static data around as efficiently.

    Now that I think about it, the speed increases seem to correlate more with the Wear Range Delta than anything else, but it wasn't this fast when new.
    Last edited by Christopher; 10-08-2011 at 09:40 AM.

  3. #2078
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    If I remember correctly my delta got to around 11 before LTT kicked in. It only dropped after the drive was left to idle.

    11 = [(550) / 5,000] x 100

    EDIT, currently the value is 0 for both SF drives that I tested. (They idle most of the time now)
    Last edited by Ao1; 10-08-2011 at 09:43 AM.

  4. #2079
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by Ao1 View Post
    Anvil I’m intrigued by this. AFAIK the SF controller is supposed to prevent the delta between the least worn and most worn block from growing to more than a few % of the maximum lifetime wear rating of the Flash memory.

    I believe it is calculated something like this:

     Wear Range Delta = [(MW - LW) / MRW] x 100
     MW = P-E Cycles experienced by Most Worn block
     LW = P-E Cycles experienced by Least Worn block
     MRW = Max Rated Wear = P-E Cycle rating for the Flash memory

    Your Delta is 58. If I assume a P/E rating of 5,000 the difference between least and most worn is significant and certainly well in excess of what the controller is designed to prevent.

     58 = [(2,900) / 5,000] x 100

    Or 3,000 P/E

     58 = [(1,740) / 3,000] x 100

    I suspect that static wear levelling can only occur when the drive is in idle mode. Most likely the controller cannot accept any other commands whilst it moves blocks around and flushes out any invalid data contained within the static data block.

    Between post 2,060 & post 2,072 did your F3 have any power on idle time?

    If static data cannot be rotated whilst the drive is active it means that the endurance app is inducing wear well above the rate that would occur if the data could be rotated.

    Btw I believe that SF drives will issue a SMART trip once the reserve block count drops below the minimum allowable threshold.
    There has been no Idle time at all, even when the drive was tested on the X58 there were at most a few minutes of idle time while connecting the drive and later disconnecting it.

    I'm not sure if it's doing static wear leveling but it sure looks like it, there is 49GB free space and 12GB Min free space so there is ample space for doing "normal" wear leveling.

    There is one strange thing that I have observed, it is not capable of writing random data during the loop, it looks like the clean-up part is not enough for it to catch up on the deletion of files.
    In fact the Kingston has written more random data during the same period. (they were started minutes apart)


    @Christopher
    When the drive disconnects it is most likely doing nothing at all; it stops counting Power On Hours and imho it does nothing until it is physically reconnected, either by power cycling the drive or reconnecting the SATA connector.
    In my case I do both as it sits in a tray for easy removal.

    I'll most likely make a change at 96 hours, nothing interesting happening

  5. #2080
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    In two weeks the Mushkin has had almost a full 20 hours of idle time, as that's the time lost to disconnects.


    EDIT
    Surely the drive isn't off when it disconnects... it stops counting power on hours, but the drive is still getting power. It's got to be doing something...
    Last edited by Christopher; 10-08-2011 at 09:54 AM.

  6. #2081
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Imho it's frozen, dead or whatever one wants to call it

    It has made no difference to my wear leveling delta at all, the longest period of time it was frozen was ~5-6 hours and I've checked it every time it was frozen.

  7. #2082
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post
    There has been no Idle time at all, even when the drive was tested on the X58 there were at most a few minutes of idle time while connecting the drive and later disconnecting it.

    I'm not sure if it's doing static wear leveling but it sure looks like it, there is 49GB free space and 12GB Min free space so there is ample space for doing "normal" wear leveling.

    There is one strange thing that I have observed, it is not capable of writing random data during the loop, it looks like the clean-up part is not enough for it to catch up on the deletion of files.
    In fact the Kingston has written more random data during the same period. (they were started minutes apart)
    I did wonder if the “TRIM” hang might be something to do with moving static data around as well.

    I believe (not 100% sure though) that when a TRIM command is being executed no other commands can be executed. It would make sense that the same thing occurs if static data is being relocated, otherwise things could get messy.

    One thing for sure however is that high delta ranges are something the SF controller is supposed to prevent, so static data rotation is not working as intended.

    It would be worth looking at historical data to see how the delta value increased over time. It would also be interesting to see how much idle time was required before the delta returned to zero.

  8. #2083
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    There have been a few changes since your drive was tested.

    Currently it idles for 250ms for every 500 files deleted; still, it sort of hangs while deleting the files.
    There is a 10 second pause, but that doesn't occur until after the random writes are done.

    The random write part writes (or tries to write) for 6 seconds; if it can't write it just skips and moves on.
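
    Very roughly, the loop as described would look something like the sketch below. This is illustration only; the file-creation step, paths and sizes are placeholders, not the actual ASU code:

      import os, time

      def delete_files(paths):
          # Delete the test files, idling 250 ms for every 500 files removed
          for i, path in enumerate(paths, 1):
              os.remove(path)
              if i % 500 == 0:
                  time.sleep(0.250)

      def random_writes(target_dir, seconds=6, block=4096):
          # Write random 4 KiB blocks for up to `seconds`; if nothing can be
          # written (e.g. no free space left), just skip and move on
          deadline = time.time() + seconds
          try:
              with open(os.path.join(target_dir, "random.bin"), "ab") as f:
                  while time.time() < deadline:
                      f.write(os.urandom(block))
          except OSError:
              pass

      def endurance_loop(target_dir, create_files):
          # `create_files` stands in for the sequential fill phase of the loop
          while True:
              files = create_files(target_dir)
              delete_files(files)
              random_writes(target_dir, seconds=6)
              time.sleep(10)   # the 10 second pause comes after the random writes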

  9. #2084
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    So when it disconnects it's in a Zombie state??

    That makes sense I guess. I knew that power on hours weren't increasing, but I figured it was still doing something...

    Guess Not.


    I suppose until I figure something else out to try, I'm just going to reboot every day after I post updates.

    I need to get either another Mushkin, or a new motherboard. With another Mushkin I can test to see if the drive disconnects under normal usage, and with another motherboard I might be able to eliminate disconnects...

  10. #2085
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    How about 12 hours of endurance and 12 hours of power on idle time? That should bring the wear delta down quite quickly and also give the drive a rest to see if that helps with the disconnects.

  11. #2086
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'll have a think about having a one-off pause when it hits 96 hours; by leaving it overnight it should get 5-7 hours of rest.

    One can tweak the loop to give it more rest, not sure that I'm ready to change course right now
    (I'd have to make a change so that one can set the pause in seconds or minutes instead of ms, should be an easy fix)

  12. #2087
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    I think there might be some connection between TRIM and the SF bug.

  13. #2088
    SynbiosVyse
    Guest
    I was unable to get my Force drive detected after unplugging it and rebooting several times. This is very bad news, as I would have thought that after the drive "dies" you should in theory at least be able to read data off the drive.

    I have not tried a different machine but as of right now I'm considering it completely dead, there's no reason why it wouldn't work in my machine.

  14. #2089
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by SynbiosVyse View Post
    I was unable to get my Force drive detected after unplugging it and rebooting several times. This is very bad news, as I would have thought that after the drive "dies" you should in theory at least be able to read data off the drive.

    I have not tried a different machine but as of right now I'm considering it completely dead, there's no reason why it wouldn't work in my machine.
    Most probably it is a controller failure, as it is highly unlikely that all the NAND dies are dead. How many power-on hours did it have? This could be an indicator for a "controller burn test".

  15. #2090
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    The Force F40-A's NAND isn't worn out, it's just dead.

    The Samsung is what happens when the NAND becomes "read only".

    Your drive just died, but in some scenarios it could possibly be fixed, just not by an end user.

  16. #2091
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Mushkin Chronos Deluxe 60 Update, Day 17

    05 2 (Retired Block Count)
    B1 9 (Wear Range Delta)
    F1 157862 (Host Writes)
    E9 121706 (NAND Writes)
    E6 100 (Life Curve)
    E7 36 (Life Left)

    128.08MB/s on avg
    RST drivers, Intel DP67BG P67

    391 hours of work (24hrs since the last update)
    Time: 16 days 7 hours

    10GiB minimum free space, 11720 files per loop

    SSDlife expects 8 days to 0 MWI
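
    As an aside, the ratio of E9 (NAND writes) to F1 (host writes) is often read as an effective write-amplification figure; treating both raw values as the same unit (an assumption about how the vendor reports them), the numbers above work out to below 1.0:

      nand_writes = 121706    # attribute E9
      host_writes = 157862    # attribute F1
      print(round(nand_writes / host_writes, 2))   # ~0.77
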
    [Attached screenshot: Mushkin Day 17.JPG]

  17. #2092
    SynbiosVyse
    Guest
    Quote Originally Posted by sergiu View Post
    Most probably it is a controller failure, as it is highly unlikely to get all NAND dies dead. How many power on hours had? This could be a indicator for "controller burn test".
    It was only online for about 2 months. Keep in mind though that my drive also did not have very much static data (just a few MB). I agree that this was probably a case of the controller pooping out.

  18. #2093
    Xtreme Enthusiast
    Join Date
    Jun 2011
    Location
    Norway
    Posts
    609
    Update:
    m4
    632.6673 TiB
    2303 hours
    Avg speed 91.02 MiB/s.
    AD gone from 254 to 245.
    P/E 11010.
    MD5 OK.
    Still no reallocated sectors
    [Attached screenshots: M4-CT064 M4SSD2 SATA Disk Device_64GB_1GB-20111009-1526-3.png and -2.png]

    Kingston V+100
    It dropped out during the night. I won't be able to reconnect it until tomorrow since I'm away this weekend. Anvil was talking about an updated ASU so we can restore the log ourselves when this happens. I'll ask him to help me unless the new version of ASU is finished.
    [Attached screenshot: KINGSTON SVP100S264G SATA Disk Device_64GB_1GB-20111009-1526-error.PNG]
    1: AMD FX-8150-Sabertooth 990FX-8GB Corsair XMS3-C300 256GB-Gainward GTX 570-HX-750
    2: Phenom II X6 1100T-Asus M4A89TD Pro/usb3-8GB Corsair Dominator-Gainward GTX 460SE/-X25-V 40GB-(Crucial m4 64GB /Intel X25-M G1 80GB/X25-E 64GB/Mtron 7025/Vertex 1 donated to endurance testing)
    3: Asus U31JG - X25-M G2 160GB

  19. #2094
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Both drives are now running on the X58 which is configured w/o power savings and runs OROM 11 + RST 11 alpha.

    I'll play a bit with the Z68 setting during this session.

    Both drives were allowed to idle for 12 hours, so there isn't much progress.
    Wear Range Delta is stuck at 57, so no decrease while idling.

    Kingston SSDNow 40GB (X25-V)

    380.19TB Host writes
    Reallocated sectors : 11
    MD5 OK

    38.69MiB/s on avg (~4.5 hours)

    --

    Corsair Force 3 120GB

    01 92/50 (Raw read error rate)
    05 2 (Retired Block count)
    B1 56 (Wear range delta)
    E6 100 (Life curve status)
    E7 66 (SSD Life left)
    E9 136592 (Raw writes)
    F1 181934 (Host writes)

    106.33MiB/s on avg (~4.5 hours)

    It has dropped a bit in avg speed, but that is mainly because I have activated MD5 for this session.

    power on hours : 544
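
    For context, per-file MD5 verification of the kind being switched on here is simple but adds a read pass over every file, which fits the small drop in average speed. A minimal sketch, not the actual ASU implementation:

      import hashlib

      def md5_of(path, chunk=1024 * 1024):
          # Hash the file in 1 MiB chunks so large test files need not fit in RAM
          h = hashlib.md5()
          with open(path, "rb") as f:
              for block in iter(lambda: f.read(chunk), b""):
                  h.update(block)
          return h.hexdigest()

      def verify(path, expected_hex):
          # Compare against the digest recorded when the file was written
          return md5_of(path) == expected_hex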

  20. #2095
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Quote Originally Posted by Anvil View Post
    I'll see what I can do about that summary.
    PM/email me what you'd like to have in that summary.

    Yeah, I don't think most people are actually getting how much data is written.
    I've got one Kingston that's been running for more than 10,000 hours and it's still short of 1TB Host Writes. (running as a boot drive on a server, not much happening but still it's running 24/7)
    I'll check the two other 40GB drives I've got (both Intels), they are both used as boot drives as well but not running 24/7.

    I'm pretty sure that 10-20TB Host Writes is what most of these drives will ever see during their normal life-span (2-3 years), unless they are used in non-standard environments.
    I had a Fusion-io ioXtreme that had 307.8 TB of writes. DOM was Jan 4, 2010, and I posted it on Nov 9, 2010.

    See thread: http://www.xtremesystems.org/forums/...=1#post4619346

    My current SSDs that I'm testing (Vertex 3 240 GB and RevoDrive 3 240 GB) are at about 1.4 TB write already and I've only had it for 5 days, so I'm at about 300 GB/day (at least for the RevoDrive 3). If I didn't have unexpected power losses, I'm sure that it would be higher by now.

    So....14 TB of data is NOTHING to me.

    *edit*
    a) The Fusion-io ioXtreme was only 80 GB. And b) I think it was on a PCIe x4 connector (same as my RevoDrive 3 now).
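
    For scale, a quick back-of-the-envelope on the two usage patterns quoted above (decimal TB/GB assumed; the figures come straight from the posts):

      heavy_gb_per_day = 1.4 * 1000 / 5                    # 1.4 TB over 5 days -> 280 GB/day
      heavy_tb_per_year = heavy_gb_per_day * 365 / 1000    # ~102 TB/year at that pace

      # Anvil's 24/7 Kingston boot drive: <1 TB host writes in >10,000 power-on hours
      light_gb_per_day = 1.0 * 1000 / (10000 / 24)         # ~2.4 GB/day

      print(round(heavy_gb_per_day), round(heavy_tb_per_year), round(light_gb_per_day, 1))
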
    Last edited by alpha754293; 10-09-2011 at 09:24 AM.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  21. #2096
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Anvil I’ve been trying to find out a bit more about how the static data levelling process works. This white paper is the only one I can find that talks about how static wear levelling is implemented, but even here it is not discussed in great detail. If I read the following correctly the process requires idle time before it can start. (I guess you could also read the 1st trigger as the period the static data had sat idle, rather than how long the drive had sat idle).

    The reason I think static data levelling is not working (as well as intended) on the SF drive is the high difference between the most worn and least worn blocks. The SF controller is supposed to keep this to a very low threshold, and the only reason I can think why it is not working is a lack of idle time. Obviously, to get back to a low delta you would need to write, pause, write, pause etc. until the wear could be evenly distributed, which would take a lot of time once the delta has gone beyond a certain point, as appears to have happened with your drive.

    “Static wear levelling addresses the blocks that are inactive and have data stored in them. Unlike dynamic wear levelling, which is evaluated each time a write flush buffer command is executed, static wear levelling has two trigger mechanisms that are periodically evaluated. The first trigger condition evaluates the idle stage period of inactive blocks. If this period is greater than the set threshold, then a scan of the ECT is initiated.

    The scan searches for the minimum erase count block in the data pool and the maximum erase count block in the free pool. Once the scan is complete, the second level of triggering is checked by taking the difference between the maximum erase count block found in the free pool and the minimum erase count block found in the data pool, and checking if that result is greater than a set wear-level threshold. If it is greater, then a block swap is initiated by first writing the content of the minimum erase count block found in the data pool to the maximum erase count block found in the free pool.

    Next, each block is re-associated in the opposite pool. The minimum erase count block found in the data pool is erased and placed in the free pool, and the maximum erase count block, which now has the contents of the other block’s data, is now associated in the data block pool. With the block swap complete, the re-mapping of the logical block address to the new physical block address is completed in the FTL. Finally, the ECT is updated by associating each block to its new groups”.
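
    Read literally, that two-trigger mechanism could be sketched roughly as below. This is pure illustration: the dict/set data structures, thresholds and names are mine, not from the paper or from any actual firmware.

      def static_wear_level(blocks, data_pool, free_pool, idle_threshold, wear_threshold):
          # `blocks` maps block id -> {"erase_count": int, "idle_time": int, "data": bytes}
          # `data_pool` / `free_pool` are sets of block ids

          # Trigger 1: has any inactive (static) block sat idle longer than the threshold?
          if not any(blocks[b]["idle_time"] >= idle_threshold for b in data_pool):
              return None

          # Scan the erase count table: least worn block holding data, most worn free block
          coldest = min(data_pool, key=lambda b: blocks[b]["erase_count"])
          hottest = max(free_pool, key=lambda b: blocks[b]["erase_count"])

          # Trigger 2: only swap if the wear spread exceeds the wear-level threshold
          if blocks[hottest]["erase_count"] - blocks[coldest]["erase_count"] <= wear_threshold:
              return None

          # Block swap: copy the static data onto the worn block, erase the cold one,
          # then re-associate each block with the opposite pool
          blocks[hottest]["data"] = blocks[coldest]["data"]
          blocks[coldest]["data"] = None
          blocks[coldest]["erase_count"] += 1
          data_pool.remove(coldest); free_pool.add(coldest)
          free_pool.remove(hottest); data_pool.add(hottest)

          # The FTL would now remap the logical address to the new physical block
          return {"remapped_to": hottest, "freed": coldest}

    On that reading the first trigger is about how long the inactive blocks have sat, which is not necessarily the same thing as drive idle time, as noted above.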


    Anyway I was going to experiment with wear levelling today but my V2 got hit with the time warp bug. First I noticed that files within folders had lost their integrity. It was as if I had run a SE from within Windows, but had not then rebooted. The file structure remained, but I could not copy or open files.

    The drive then “disappeared”. It could not be seen in the bios or by the OS. After physically disconnecting and then reconnecting the drive it reappeared. The folder structure was complete and all files could be opened/ copied. Only one problem; the file structure had reverted to a previous time, i.e. recently created folders had disappeared.
    Last edited by Ao1; 10-09-2011 at 10:46 AM.

  22. #2097
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Static data rotation might just be working, but in a totally different way than everybody expects for SF. I interpret B1 (WRD) as the difference in percent between the most worn and least worn block. For 136592GB written, if we consider 128GB as one complete cycle, we have ~1067 P/E cycles on average. At that average there might be a block with 1300 P/E cycles and another one with 572 cycles: (1300-572)/1300 = 0.56.
    Now, the controller might be programmed to postpone data rotation as much as possible to avoid increased wear, but to achieve a wear range delta of 5% (or any other value) at the end of the estimated P/E cycles. This would explain why the value increased suddenly and is now slowly decreasing.
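
    The arithmetic sergiu describes, with his hypothetical 1300/572 block counts:

      raw_writes_gb = 136592          # attribute E9 on the Force 3
      capacity_gb = 128               # one full-drive pass ~= one average P/E cycle
      print(round(raw_writes_gb / capacity_gb))               # ~1067 average P/E cycles

      most_worn, least_worn = 1300, 572                       # hypothetical spread around that average
      print(round((most_worn - least_worn) / most_worn, 2))   # 0.56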

  23. #2098
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    My Mushkin dropped from about 26 down to 9 - 10, but maybe we're misinterpreting. Maybe even 50+ is nothing on a 120GB drive. If my drive peaked at 27, and the 120GB Force 3 peaks at around twice that (as in that's the high water mark before it drops back down), then perhaps that's normal.

    I haven't seen much in the way of interpreting Wear Range Delta.

  24. #2099
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Ao1,
    I've read that paper before (or something similar). Not sure how to interpret the lack of WRD movement while idling; I even restarted the computer and there was nothing monitoring the drives, no CDI or SSDLife, so it was definitely idling. (And it was a secondary/spare drive.)

    And, if it wasn't doing static wear leveling the WRD would make no sense imho.

    I think it's somewhere along the lines of what sergiu explains, and there should be no problem in addressing static wear leveling on the fly.

    Let's see if it keeps on decreasing; it's still at 56 and it's been writing for >9 hours since moving it to the other rig.

    I've read about the time warp, have you been using the drive or has it been sitting there just idling?
    Keep us updated on that issue.

  25. #2100
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by alpha754293 View Post
    ...
    My current SSDs that I'm testing (Vertex 3 240 GB and RevoDrive 3 240 GB) are at about 1.4 TB write already and I've only had it for 5 days, so I'm at about 300 GB/day (at least for the RevoDrive 3). If I didn't have unexpected power losses, I'm sure that it would be higher by now.

    So....14 TB of data is NOTHING to me.
    ...
    1.4TB in 5 days is quite a bit and certainly not normal, unless you're doing lots of benchmarks and testing drives.
    Let us know how it works out in a week/month or so.
