:up:
Thanks for the chart updates....Have a great weekend everyone!
Thank you for the graph updates. They were really needed.
Thank you all for doing this for the community. Are the Intels still in the running? I'm excited by how close all these SSDs are getting towards 1PiB before crapping out!
MAJOR UPDATE:
12 hours ago the reallocated sector count was at 105 and reserve space was at 99%.
Now, my reallocated sector count is at 4071!!! and reserve space is at 27%. This SSD has hours left.... EXTREMELY sudden failure. I am at 395.7TB right now.
Attachment 119876
Crikey!
(also, that's going to break my charts!!) :lol:
Holy cow, more excitement! Seriously, I can't believe the Intel is going out this quickly... even though the normalized charts show it further along than the others. Worth noting that it still outlasted the 34nm drive when normalization of capacity is taken into consideration :)
How is it that these drives last so long, when theoretically 25nm NAND should die after 3000 rewrites plus some reserve?
So now I am just waiting for the results of:
M4 vs C300
and
X25-V vs Intel 320
so we can see how good (or bad) 25nm really is.
Manufacturer's P/E rating assumes no recovery period between writes. If you allow for a recovery period, though, the write durability can be increased by quite a bit. Here's a paper on it; I think this was posted much earlier in the thread, but even if it was, it's worth reposting.
Quote:
Originally Posted by Meo
http://www.usenix.org/event/hotstora...pers/Mohan.pdf
With a recovery period of about 100 seconds they observed a 10-fold increase in write endurance for 2-bit 50nm MLC. With a recovery period of 3-4 hours, you're looking at a 100-fold increase in write endurance (so MLC NAND that's rated for 10k P/E cycles would be able to handle closer to 1 million).
This is something to keep in mind when looking at how long these drives being tested last. All the drives here are being written to very aggressively, so there will be less of a recovery period. In theory, under a more modest desktop workload where the drives aren't being written to as rapidly, you could expect even greater write endurance than the results in this thread suggest.
For example, during testing the NAND in the 64GB Samsung 470 was being overwritten once every 115 seconds or so. That isn't a very long recovery period; based on the durability increase for 50nm MLC in the study, you could expect roughly a 9.3x increase in endurance over the manufacturer's rating. The 34nm NAND is rated for 5k, which means it should be able to handle about 46.5k P/E cycles in practice. This agrees reasonably well with where the drive actually died (about 39k P/E cycles). Smaller-geometry NAND probably benefits less from the same recovery period, which could be why endurance ended up being lower.
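To make that arithmetic concrete, here's a minimal sketch. The 9.3x multiplier and the 5k/39k figures come from the posts above (the multiplier being the poster's estimate for a ~115 s recovery period based on the 50nm MLC data in the Mohan paper); only the bookkeeping is mine:

```python
# Back-of-the-envelope endurance estimate from the post above.
rated_pe = 5000            # manufacturer's P/E rating for 34nm NAND
recovery_multiplier = 9.3  # estimated endurance gain at ~115 s recovery

effective_pe = rated_pe * recovery_multiplier
print(f"predicted P/E cycles: {effective_pe:,.0f}")  # ~46,500

observed_pe = 39000        # roughly where the Samsung 470 actually died
print(f"observed/predicted: {observed_pe / effective_pe:.0%}")
```

So the drive reached roughly 84% of the recovery-adjusted prediction, which is in the right ballpark given how rough the multiplier is.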
Also I think I've mentioned this before, but just wanted to say thanks again to all those sacrificing their time and money to make this possible. This thread is a wealth of knowledge, lots of great information here on real world write endurance and SSDs in general.
excellent points on the endurance there with respect to recovery times. that is surely a huge factor in the longevity of devices. I do feel this testing is a great point of reference though.
FWIW, 4071 sectors is just ~2MiB (if Intel counts a sector as an LBA sector).
If reserve space decreased to 27%, I'm pretty sure these can't be LBA sectors, and not pages either. For both pages and sectors, the sum would be too low compared to the actual spare space (assuming the spare-space SMART attribute indeed scales with real values).
Spare area, taking the 40GiB-to-40GB difference: ~2813MiB
73% * 2813MiB = 2053MiB = ~16Gib. I don't know the exact geometry of the NAND die, but I guess the 64 or 32 Gib models are made by stacking several smaller dies on top of each other, so this is probably a complete part of a die. I have already seen something like this on a Corsair Force 240GB. Now, if my assumption is true, then either 2GiB of data has been lost, or it was successfully recovered using parity data (the latter being most likely).
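The spare-area arithmetic above can be sanity-checked in a few lines. This just reproduces the post's own numbers; the assumption that the spare area equals the 40GiB-to-40GB capacity gap is the poster's, not mine:

```python
GiB = 1024 ** 3
GB = 1000 ** 3
MiB = 1024 ** 2

# Spare area assumed to be the binary/decimal capacity gap on a 40GB drive
spare_bytes = 40 * GiB - 40 * GB
print(f"spare area: {spare_bytes / MiB:.0f} MiB")   # ~2813 MiB

# 73% of the spare area was consumed by the failure
lost_bytes = 0.73 * spare_bytes
print(f"consumed: {lost_bytes / MiB:.0f} MiB")      # ~2054 MiB, i.e. ~2 GiB
print(f"consumed: {lost_bytes * 8 / GiB:.1f} Gib")  # ~16 Gib, one die-sized chunk?
```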
I think you guys are correct about an entire NAND chip failing in my 320... The number of reallocated sectors has not changed since the last update.
Also, the average speed went UP by 1.5MB/s since the big change in reallocated sectors...
Oh and MD5 checks of my 6GB file are still passing.
Interesting... Personally, I am trying to understand what tradeoffs have been made in these SSDs, and this seems to be another clue. Assuming the SSD had no wear-leveling algorithm, a burst of write requests could see either a very high or a very low write speed depending on the state of the pages, along with high WA. Adding an advanced wear-leveling algorithm introduces overhead that reduces throughput, because the controller needs to maintain an updated list of pages that can be written. That seems easy at first sight but isn't: if you keep a list ordered by least-written pages, every freed page has to be inserted in sorted order, which is a compute-intensive task. Now that the spare area has shrunk significantly, the wear-leveling algorithm takes less time to execute, and that would explain the sudden increase in write speed.
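For what it's worth, the "ordered list of least-written pages" doesn't have to be kept fully sorted: a min-heap gives O(log n) inserts and pops. A toy sketch of the idea (purely my own illustration, not Intel's actual algorithm; block numbers and erase counts are made up):

```python
import heapq

class WearLeveler:
    """Toy wear-leveling allocator: always hand out the least-erased free block."""
    def __init__(self):
        self._free = []  # min-heap of (erase_count, block_id)

    def release(self, block_id, erase_count):
        # O(log n): a freed block is pushed onto the heap,
        # not inserted into a fully sorted list
        heapq.heappush(self._free, (erase_count, block_id))

    def allocate(self):
        # O(log n): pop the free block with the lowest erase count
        erase_count, block_id = heapq.heappop(self._free)
        return block_id, erase_count

wl = WearLeveler()
for blk, erases in [(7, 120), (3, 15), (9, 64)]:
    wl.release(blk, erases)

print(wl.allocate())  # (3, 15) -- the least-worn block comes out first
```

Even so, the bookkeeping isn't free, which is consistent with the throughput observation above.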
If that ends up being anywhere near true, a large-capacity or slow-writing drive would die of boredom before exhausting its P/E cycles. It probably takes the X25-V much, much longer to do what the Samsung did every ~115 seconds (not sure if that takes WA into the equation). If the recovery period really exists, the X25-V will be around for a while... or in the 470's case, it could just be that Samsung flash is really, really good. I guess you could make the case that the recovery period for the 470 overcame substantially higher write amplification, and had it been on par with the others at ~1.1 WA it would still be chugging along.
..that reminds me...
Most of the available controllers on the market are already being tested -- except for the new SF, which I believe Anvil has covered. So there's the Toshiba, Samsung, Indilinx, Micron, SF1200 (possibly the SF2281), and the Intels. The only controllers I can think of that aren't being tested are the Phison and JMicron. The JMicron is terrible, and the Phison is only used in the Patriot Torqx 2 (with 32nm NAND). I can't really think of any other unique controller/flash combos that would help diversify the test. I've bought several older drives in the past week as they've been on sale, but the drives are either already in the test (like the X25-V and Vertex Turbo), or pretty similar (an Agility60 w/ 34nm Intel), or not really appropriate (like an X25-E). I'd be willing to put up the new Agility60 for destruction, but I think the Patriot Torqx might be more interesting. If anyone has any ideas for a good 32GB to 64GB drive to test, I'll order one to throw on the fire.
Actually, I found some Western Digital Silicon Edge Blue 64GB drives for $60 plus shipping. They use a WD-branded controller with Samsung flash, but the controller may be a custom JMicron unit with 512MB of DDR2. Also showing up at retailers, the new Vertex Plus drives pair an Indilinx controller running Arrowana FW with 25nm IMFT NAND. They're not very fast, but supposedly the Arrowana FW that was scheduled to be released as an upgrade for OCZ Indilinx Vertices and Agilities vastly improved some performance aspects (OCZ claimed a 500% increase in small randoms back in May). So those drives and the Phison-controlled Patriots are the only oddball SSDs I can think of at the moment. I'm going to buy one of these drives tomorrow night or Monday morning, unless someone really wants me to test a brand-new Agility60 1.6FW 34nm IMFT instead. I want something different from what is already being tested, but with a combination of write speed and capacity good enough to wear the drive out before the end of time. I think Anvil has a 32GB SLC WD Silicon Edge that uses the same controller as the MLC version, but IIRC the results were unsatisfactory under this write load.
It would be interesting to bring some SLC drives into here just so we can compare if they really do last 10 times as much as the MLC drives etc.
Your Agility60 would be interesting from another point of view: we could compare the evolution of failed blocks across two different batches of Intel 34nm NAND, and see how much reliability improves over time. We could most probably trace the manufacturing date (or at least an approximation) from the SSD's manufacturing date, and maybe from a NAND batch number if it carries something like that.
The 320 and the M4 are STILL going? That's insane... insanely great, lol. Keep it up guys!
I do have a few X25-Es lying around... hmmm :D
Anyone interested PM me ;)
308.08TB Host writes
Reallocated sectors : 6
MD5 OK
32.7MiB/s on avg (80 hours)
@One_Hertz
I can see the excitement in the sudden rise but I'm pretty sure it will last quite a bit longer :)
Although the WD is a fine drive in general, it is not a drive for this test; it is slow, in fact slower than my X25-V.
(and the SMART attributes are useless)
I'm preparing a Corsair Force 3 120GB and I'm just playing a bit before going "live", the 120GB should be an interesting one. (LTT?, large capacity,...)
I have not decided yet on what level of compression to use, as SMART displays both RAW and host writes I'm leaning towards 46% or 67%.
If LTT is not set on the Corsair drives, all their SF-based drives would be interesting; both async and synchronous "drives" in the SF-2XXX series are prime candidates for this test.
I'd say more of the latest stuff, e.g. the Intel 510 or something similar. (or one of the new drives that are supposed to ship later this year)
I've got a few E's as well; imho the E isn't that interesting, as it's neither a typical nor a widespread drive, and it could take years for anything interesting to happen.
It is a superb drive though, no doubt about it.