
Thread: SSD Write Endurance 25nm Vs 34nm

  1. #1576
    Xtreme Member
    Join Date
    Oct 2004
    Posts
    300
    Quote Originally Posted by Meo
    How is it that these drives last so long, when theoretically a 25nm SSD should die after 3000 rewrites plus some reserve?
    Manufacturer's P/E rating assumes no recovery period between writes. If you allow for a recovery period, though, the write durability can be increased by quite a bit. Here's a paper on it; I think this was posted much earlier in the thread, but even if it was, it's worth reposting.

    http://www.usenix.org/event/hotstora...pers/Mohan.pdf

    With a recovery period of about 100 seconds they observed a 10-fold increase in write endurance for 2-bit 50nm MLC. With a recovery period of 3-4 hours, you're looking at a 100-fold increase in write endurance (so MLC NAND that's rated for 10k P/E cycles would be able to handle closer to 1 million).

    This is something to keep in mind when looking at how long the drives being tested here last. All the drives here are being written to very aggressively, so there is less of a recovery period. In theory, under a more modest desktop workload where the drives aren't being written to as rapidly, you could expect even greater write endurance than the results in this thread suggest.

    For example, during testing the NAND in the 64GB Samsung 470 was being overwritten once every 115 seconds or so. That isn't a very long recovery period; based on the durability increase for 50nm MLC in the study, you could expect roughly a 9.3x increase in endurance over the manufacturer's rating. The 34nm NAND is rated for 5k P/E cycles, which means it should be able to handle about 46.5k cycles in practice. This agrees reasonably well with where the drive actually died (about 39k P/E cycles). Smaller-geometry NAND probably benefits less from the same recovery period, which could be why endurance ended up being lower.
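
    To make the arithmetic explicit, here's a quick back-of-the-envelope sketch in Python. The 9.3x figure is just my reading of the paper's 50nm MLC curve for a ~115-second interval, not a number the paper states directly:

    Code:
    def effective_pe_cycles(rated_pe, recovery_multiplier):
        # Effective endurance = manufacturer P/E rating x recovery-period multiplier,
        # with the multiplier read off the Mohan et al. curve for the recovery time.
        return rated_pe * recovery_multiplier

    # Samsung 470 64GB in this test: NAND overwritten roughly every 115 s,
    # which I read off the 50nm MLC curve as roughly a 9.3x multiplier (assumption).
    print(effective_pe_cycles(5000, 9.3))     # ~46,500 P/E cycles
    # Paper's example: 10k-rated MLC with a 3-4 hour recovery period (~100x)
    print(effective_pe_cycles(10000, 100))    # ~1,000,000 P/E cycles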

    Also I think I've mentioned this before, but just wanted to say thanks again to all those sacrificing their time and money to make this possible. This thread is a wealth of knowledge, lots of great information here on real world write endurance and SSDs in general.
    Last edited by frostedflakes; 09-10-2011 at 07:42 AM.

  2. #1577
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    Excellent points on endurance there with respect to recovery times; that is surely a huge factor in the longevity of these devices. I do feel this testing is a great point of reference, though.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  3. #1578
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by One_Hertz View Post
    MAJOR UPDATE:

    12 hours ago the reallocated sector count was at 105 and reserve space was at 99%.

    Now, my reallocated sector count is at 4071!!! and reserve space is at 27%. This SSD has hours left.... EXTREMELY sudden failure. I am at 395.7TB right now.
    For this sudden increase in bad blocks I would say it is more likely that a NAND die failed completely, but we will probably find out soon enough. If it's just a failed die, then we might still see a few hundred more TB written.

  4. #1579
    Xtreme Member
    Join Date
    May 2009
    Posts
    201
    Quote Originally Posted by sergiu View Post
    For this sudden increase in bad blocks I would say it is more likely that a NAND die failed completely, but we will probably find out soon enough. If it's just a failed die, then we might still see a few hundred more TB written.
    That's most likely what it is. But it doesn't bode well for the wear-leveling algorithm. Or maybe that particular die was just not as good as the other ones. We will know soon, I guess.

  5. #1580
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    FWIW, 4071 sectors is just ~2MiB (if Intel counts a sector as an LBA sector).
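
    Quick sanity check, assuming the raw value counts standard 512-byte LBA sectors:

    Code:
    # 4071 reallocated sectors, assuming 512-byte LBA sectors (not documented by Intel)
    print(4071 * 512 / 2**20)   # ~1.99 MiB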

  6. #1581
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Vapor View Post
    FWIW, 4071 sectors is just ~2MiB (if Intel counts a sector as an LBA sector).
    If reserve space decreased to 27%, I'm pretty sure it cannot be LBA sectors, and not pages either. For both pages and sectors, the sum would be far too low compared to the actual spare space (assuming the spare-space SMART parameter indeed scales with the real values).
    Spare area, taking the 40GiB-to-40GB difference: ~2813MiB
    73% * 2813MiB = ~2053MiB = ~16Gib. I don't know the exact geometry of the NAND die, but I guess the 64 or 32 Gib models are made by stacking smaller dies on top of each other, so this is probably a complete part of a die. I have already seen something like this on a Corsair Force 240GB. Now, if my assumption is true, then either 2GiB of data has been lost, or it was successfully recovered using parity data (the latter being most likely).
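
    Spelling the arithmetic out (this assumes the spare area really is just the 40GiB-to-40GB gap and that the reserve-space SMART value maps linearly onto it):

    Code:
    # Assumed spare area: 40 GiB of raw NAND minus 40 GB of user-visible LBA space
    spare_mib = (40 * 2**30 - 40 * 10**9) / 2**20
    consumed_mib = 0.73 * spare_mib                # reserve space fell from ~100% to 27%
    consumed_gib_bits = consumed_mib * 2**20 * 8 / 2**30
    print(spare_mib)          # ~2813 MiB
    print(consumed_mib)       # ~2053 MiB
    print(consumed_gib_bits)  # ~16 Gib -- plausibly one complete die (or stacked sub-die)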
    Last edited by sergiu; 09-10-2011 at 12:13 PM.

  7. #1582
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    I think you guys are correct about an entire NAND chip failing in my 320... The number of reallocated sectors has not changed since the last update.

    Also, the average speed went UP by 1.5MB/s since the big change in reallocated sectors...

    Oh and MD5 checks of my 6GB file are still passing.
    Last edited by One_Hertz; 09-10-2011 at 01:45 PM.

  8. #1583
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by One_Hertz View Post
    Also, the average speed went UP by 1.5MB/s since the big change in reallocated sectors...
    Interesting... Personally, I am trying to understand what tradeoffs have been made in these SSDs, and this seems to be another clue. Assuming the SSD does not have any wear-leveling algorithm, if you throw some write requests at it, you might get either a very high or a very low write speed depending on the state of the pages, and also high WA. Now, if you add an advanced wear-leveling algorithm, it adds overhead and decreases throughput, because it needs to keep an updated list of pages that can be written. This seems easy at first sight, but it is not: if you keep a list ordered by least-written pages, any freed page has to be inserted in sorted order, and that is a compute-intensive task. Now that the spare area has decreased significantly, the wear-leveling algorithm takes less time to execute, and that would explain a sudden increase in write speed.
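
    Just to illustrate the kind of bookkeeping I mean, here is a toy sketch of a free-block pool ordered by erase count. Real controller firmware obviously uses very different structures, so treat this purely as an illustration of why the work shrinks as the pool of spare blocks shrinks:

    Code:
    import heapq

    class FreeBlockPool:
        """Toy pool of erased blocks; the least-worn block is handed out first."""

        def __init__(self, block_ids):
            self._heap = [(0, block_id) for block_id in block_ids]  # (erase_count, id)
            heapq.heapify(self._heap)

        def take_least_worn(self):
            # O(log n) pop of the block with the lowest erase count
            return heapq.heappop(self._heap)

        def put_back(self, block_id, erase_count):
            # Re-inserting a freshly erased block in sorted order is the overhead;
            # it gets cheaper as the pool (the usable spare area) shrinks.
            heapq.heappush(self._heap, (erase_count + 1, block_id))

    pool = FreeBlockPool(range(1024))
    wear, block = pool.take_least_worn()
    pool.put_back(block, wear)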
    Last edited by sergiu; 09-10-2011 at 03:37 PM.

  9. #1584
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by frostedflakes View Post
    Manufacturer's P/E rating assumes no recovery period between writes. If you allow for a recovery period, though, the write durability can be increased by quite a bit. Here's a paper on it; I think this was posted much earlier in the thread, but even if it was, it's worth reposting.

    http://www.usenix.org/event/hotstora...pers/Mohan.pdf

    With a recovery period of about 100 seconds they observed a 10-fold increase in write endurance for 2-bit 50nm MLC. With a recovery period of 3-4 hours, you're looking at a 100-fold increase in write endurance (so MLC NAND that's rated for 10k P/E cycles would be able to handle closer to 1 million).

    This is something to keep in mind when looking at how long the drives being tested here last. All the drives here are being written to very aggressively, so there is less of a recovery period. In theory, under a more modest desktop workload where the drives aren't being written to as rapidly, you could expect even greater write endurance than the results in this thread suggest.

    For example, during testing the NAND in the 64GB Samsung 470 was being overwritten once every 115 seconds or so. That isn't a very long recovery period; based on the durability increase for 50nm MLC in the study, you could expect roughly a 9.3x increase in endurance over the manufacturer's rating. The 34nm NAND is rated for 5k P/E cycles, which means it should be able to handle about 46.5k cycles in practice. This agrees reasonably well with where the drive actually died (about 39k P/E cycles). Smaller-geometry NAND probably benefits less from the same recovery period, which could be why endurance ended up being lower.

    Also I think I've mentioned this before, but just wanted to say thanks again to all those sacrificing their time and money to make this possible. This thread is a wealth of knowledge, lots of great information here on real world write endurance and SSDs in general.

    If that ends up being anywhere near true, a large-capacity or slow-writing drive would die of boredom before exhausting its P/E cycles. It probably takes the X25-V much, much longer to do what the Samsung did every ~115 seconds (not sure if that takes WA into account). If the recovery period really exists, the X25-V will be around for a while... or in the 470's case, it could just be that Samsung flash is really, really good. I guess you could make the case that the recovery period for the 470 overcame substantially higher write amplification, and had it been on par with the others at ~1.1 WA it would still be chugging along.

    ..that reminds me...

    Most of the available controllers on the market are already being tested -- except for the new SF, which I believe Anvil has covered. So there's the Toshiba, Samsung, Indilinx, Micron, SF1200, (possibly the SF2281), and the Intels. The only controllers I can think of that aren't being tested are the Phison and JMicron. The JMicron is terrible, and the Phison is only used in the Patriot Torqx 2 (with 32nm NAND). I can't really think of any other unique controller/flash combos that would help to diversify the test. I've bought several older drives in the past week as they've been on sale, but the drives are either already in the test (like the X25-V and Vertex Turbo), pretty similar (an Agility60 w/ 34nm Intel), or not really appropriate (like an X25-E). I'd be willing to put up the new Agility60 for destruction, but I think the Patriot Torqx might be more interesting. If anyone has any ideas for a good 32GB-64GB drive to test, I'll order one to throw on the fire.
    Last edited by Christopher; 09-11-2011 at 01:07 AM.

  10. #1585
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Actually, I found some Western Digital Silicon Edge Blue 64GB drives for $60 plus shipping. They use a WD-branded controller with Samsung flash, but the controller may be a custom JMicron unit with 512MB of DDR2. Also showing up at retailers, the new Vertex Plus drives pair an Indilinx controller running Arrowana FW with 25nm IMFT NAND. They're not very fast, but supposedly the Vertex Arrowana FW that was scheduled to be released as an upgrade for OCZ Indilinx Vertices and Agilities vastly improved some performance aspects (OCZ claimed a 500% increase in small randoms back in May). So those drives and the Phison-controlled Patriots are the only oddball SSDs I can think of at the moment. I'm going to buy one of these drives tomorrow night or Monday morning, unless someone really wants me to test a brand new Agility60 (1.6 FW, 34nm IMFT) instead. I want something different from what is already being tested, but with a combination of good write speed and capacity that will wear the drive out before the end of time. I think Anvil has a 32GB SLC WD Silicon Edge that uses the same controller as the MLC version, but IIRC the results were unsatisfactory under the write load.
    Last edited by Christopher; 09-11-2011 at 01:13 AM.

  11. #1586
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    It would be interesting to bring some SLC drives in here just so we can see if they really do last 10 times as long as the MLC drives.

  12. #1587
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Christopher View Post
    I've bought several older drives in the past week as they've been on sale, but the drives are either already in the test (like the X25-V and Vertex Turbo) or pretty similar (an Agility60 w/ 34nm Intel) or not really appropriate (like an X25 E). I'd be willing to put up the new Agility60 for destruction, but I think that the Patriot Torqx might be more interesting. If anyone has any ideas for a good 32GB -64GB to test, I'll order one to throw on the fire.
    Your Agility60 would be interesting from another point of view: we could compare the evolution of failed blocks from two different batches of Intel 34nm and we could see how much reliability improves over time. Most probably we could trace manufacturing date (or at least an approximation) based on SSD manufacturing date and maybe NAND batch number if it has something like that.

  13. #1588
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by bulanula View Post
    It would be interesting to bring some SLC drives in here just so we can see if they really do last 10 times as long as the MLC drives.
    I totally agree, but... if there is no electronic failure (controller, RAM buffer, SATA interface, etc.) and recovery time really works as in the model reposted a few posts above, I am afraid we would need to leave the test as a legacy to our grandchildren.

  14. #1589
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    The 320 and the M4 are STILL going? That's insane... insanely great, lol. Keep it up guys!

    I do have a few X25-Es lying around... hmmm
    Anyone interested PM me
    Last edited by jcool; 09-11-2011 at 07:03 AM.


  15. #1590
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    308.08TB Host writes
    Reallocated sectors : 6
    MD5 OK

    32.7MiB/s on avg (80 hours)

  16. #1591
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @One_Hertz
    I can see the excitement in the sudden rise, but I'm pretty sure it will last quite a bit longer.

    Quote Originally Posted by Christopher View Post
    Actually, I found some Western Digital Silicon Edge Blue 64gb drives for $60 plus shipping.
    ...
    I want something different from what was already being tested, but with a combination of good write speed and capacity to wear the drive out before the end of time. I think Anvil has a 32GB SLC WD Silicon Edge that uses the same controller as the MLC version, but I think the results were unsatisfactory with the write load, IIRC.
    Although the WD is a fine drive in general, it is not a drive for this test; it is slow, in fact slower than my X25-V.
    (And the SMART attributes are useless.)

    I'm preparing a Corsair Force 3 120GB and I'm just playing a bit before going "live"; the 120GB should be an interesting one (LTT?, large capacity, ...).
    I have not decided yet on what level of compression to use; as SMART displays both RAW and host writes, I'm leaning towards 46% or 67%.
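
    (For reference, one rough way to picture what a 46% or 67% fill means: mix incompressible random data with zero-fill in the right proportion. The sketch below is only an illustration, not how the actual test application builds its data.)

    Code:
    import os, zlib

    def make_test_buffer(size, incompressible_fraction):
        # Mix incompressible (random) bytes with highly compressible (zero) bytes.
        cut = int(size * incompressible_fraction)
        return os.urandom(cut) + bytes(size - cut)

    buf = make_test_buffer(1 << 20, 0.46)
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"compresses to ~{ratio:.0%} of original size")   # roughly 46-47%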

    If LTT is not set on the Corsair drives, all their SF-based drives would be interesting; both async and synchronous "drives" in the SF-2XXX series are prime candidates for this test.
    I'd say more of the latest stuff, e.g. the Intel 510 or something similar (or one of the new drives that are supposed to ship later this year).

    Quote Originally Posted by jcool View Post
    The 320 and the M4 are STILL going? That's insane... insanely great, lol. Keep it up guys!

    I do have a few X25-Es lying around... hmmm
    Anyone interested PM me
    I've got a few E's as well; imho the E isn't that interesting, as it's neither a typical nor a widespread drive, and it could take years for anything interesting to happen.

    It is a superb drive though, no doubt about it.

  17. #1592
    NooB MOD
    Join Date
    Jan 2006
    Location
    South Africa
    Posts
    5,799
    Hmmm, I've been trying to get an X25-E for a few years now; they just aren't available in SA. Is anyone willing to sell one at a good price?

  18. #1593
    Xtreme Member
    Join Date
    May 2009
    Posts
    201
    It's been a while since the last update for the M4. We are spoiled little brats over here...

  19. #1594
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Quote Originally Posted by devsk View Post
    It's been a while since the last update for the M4. We are spoiled little brats over here...
    I guess you'll just have to wait.....

    Quote Originally Posted by B.A.T View Post
    Next update from me will be monday.
    Last edited by bluestang; 09-11-2011 at 01:59 PM.


  20. #1595
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Actually, I bought the X25-E 32GB for $88. It only had 280GB of host writes on it when the post office dropped it off Thursday.

    We call that a win where I'm from.

    The "new" Agility60 comes in the cheaper plastic case, and in my testing with it, scales terribly in softraid with my older one (There were only a couple in stock, but the 30GB models around still). Writes scale great, but reads don't. My older Agility has only about 1.46TB of host writes, but has an average PE count of >1600 (which would be equivalent to around ~100,000GBs. There are a couple reasons for that, but mainly because its been abused in different un-Indilinx friendly conditions -- Win7 with trim is the only way to go.

    On the other hand, my 120GB Vertex Turbo should arrive in the mail on Tuesday. I'm waiting to get it before buying another one, but after seeing the impressive results of the M225 > Vturbo, I couldn't pass up the opportunity to buy one new for $1/GB.
    I also bought a new X25-V cheap last weekend at a brick-and-mortar store, but my feeling is that endurance testing another one won't really say much. And I'm in the mood for destruction.

    I do have a 510 120GB, and it would certainly put down high average numbers as well, but my plan is to use my laptop for endurance testing. I live in a tiny urban apartment, and I can just close the lid, stick it under the couch, and pull it out to check its progress (it's only SATA II on a C2D, but with AHCI).

    Besides the two 6Gbps controllers, I don't really see much out there that's different, but I want to do something. The Phison-controlled Torqx 2 might have a decent average speed under endurance testing as well (and it's certainly different), but I wanted some group consideration before jumping in. When do the new Samsung 6Gbps drives come out at retail? I think they're shipping in OEM laptops right now.
    Last edited by Christopher; 09-11-2011 at 02:49 PM.

  21. #1596
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    Maybe we can see if anyone is willing to donate an Intel X25-E so we can find out if SLC is really that much better. Or maybe start a separate thread for SLC drives?

  22. #1597
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by sergiu View Post
    Your Agility60 would be interesting from another point of view: we could compare the evolution of failed blocks from two different batches of Intel 34nm and we could see how much reliability improves over time. Most probably we could trace manufacturing date (or at least an approximation) based on SSD manufacturing date and maybe NAND batch number if it has something like that.
    They both use Intel 34nm, but have different product numbers. RyderOCZ was kind enough to tell me which NAND they used:

    New
    JS29F32G08AAMDB

    Old
    JS29F64G08CAMDB

    I'm not sure what the differing parts of those part numbers represent.

    The old one had higher correctable bit errors and much higher WA, but that was due in large part to me using it in sub-optimal conditions. I recently tried updating to the 1.6 FW to try to reduce those. I'm not sure what causes those SMART-reported bit errors, but it seems excessive. I have an Excel spreadsheet of SMART attributes that converts the raw read/write sector counts into GiB of host reads/writes, etc. I was really surprised once I started looking at it in detail -- I used that drive to clone drives, install random Linux distros, and then used it in a laptop with Vista and no TRIM for quite some time.
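
    The conversion itself is nothing fancy, something along these lines (assuming the attributes count 512-byte sectors, which is my reading rather than anything OCZ documents):

    Code:
    def sectors_to_gib(raw_sector_count, sector_bytes=512):
        # Convert a SMART raw sector count into GiB, assuming 512-byte sectors.
        return raw_sector_count * sector_bytes / 2**30

    # Hypothetical raw value, just to show the conversion:
    print(sectors_to_gib(2_097_152))   # 1.0 GiB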


    If the throttling situation was sorted out, I'd pick up a 60GB SF2200 25nm drive in a heartbeat for endurance testing.
    Last edited by Christopher; 09-11-2011 at 03:23 PM.

  23. #1598
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    The Intel 311 20GB is a readily available, low-ish priced SLC drive if anyone is really intent on putting an SLC drive through its paces. Probably fast (for an SLC device) to die too, considering how little NAND it has.


    C300 Update

    348.1TiB host writes, 1 MWI, 5872 raw wear, 2048/1 reallocations, 63.05MiB/sec, MD5 OK


    SF-1200 nLTT Update

    208.688TiB host writes, 151.406TiB NAND writes, 27 MWI, 2442.5 raw wear (equiv), wear range delta 3, 56.15MiB/sec, MD5 OK
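
    (For anyone following along, the WA those two numbers imply is just their ratio; the raw-wear estimate below assumes roughly 64GiB of NAND behind this drive, which is my assumption, not something SMART reports directly.)

    Code:
    host_writes_tib = 208.688
    nand_writes_tib = 151.406
    wa = nand_writes_tib / host_writes_tib
    print(f"write amplification ~{wa:.2f}")        # ~0.73 thanks to compression

    nand_gib = 64                                   # assumed raw NAND behind the drive
    raw_wear = nand_writes_tib * 1024 / nand_gib
    print(f"~{raw_wear:.0f} average P/E cycles")    # ~2423, close to the reported 2442.5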

  24. #1599
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    The write speed is cut in half compared to the X25-Es, but that's still pretty high, especially when you consider average write speed relative to capacity. It's basically the X25-V of the SLC world, with its reduced channel population taking a big chunk out of performance. I just don't think it's possible to wear the drive out in any sort of reasonable time frame.

    It would be killer to have a triplet of Larson Creeks in RAID 0... You'd have like 600MB/s reads and 300MB/s writes in 60GB of inexhaustible awesomeness. That's a commitment though - it would take decades to wear them out (probably).
    Last edited by Christopher; 09-11-2011 at 09:22 PM.

  25. #1600
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by sergiu View Post
    Interesting...snip.
    I think you are correct.

    From post #1362 it looks like erase cycles get slower and programming gets faster as the P/E cycles move towards the end game. Along the same lines, this chart from SF shows the controller overhead that is incurred as the P/E cycle count increases. I'd guess it would be the same for all SSDs that are good at reducing WA, so when the blocks with high wear are retired, write speed should increase.

    [Attached image: Untitled.png -- SandForce chart of controller overhead vs. P/E cycle count]

