Could it be that WA is actually counting what was written in excess of 1? Like 0.10 meaning a real WA of 1.1?
Okay, I've been neglecting my SSD testing duties, at least on the reporting side -- I promise to rectify the situation shortly. I've been collecting log information and taking screenshots, but I've been travelling non-stop for a while now and it's getting old.
Secondly, I have a new 64GB drive to test. I've been using it in my laptop to shake it down, but I have high hopes for it. Could this be the 64GB MLC drive that hits 1PB? I don't even know why I care, but I do.
Lastly, I was going to test the Imation S-Class SLC too, but after waiting weeks for the drive to show up in the mail, I ended up getting the MLC-based M-Class instead (I was sent the wrong drive), which is Samsung NAND + a JMicron controller, no NCQ or TRIM. Very disappointed -- an older SLC drive would be interesting to test, as the high WA would help get the job done sooner. God only knows how long a good SLC drive would take to go tits-up under these conditions, but a Phison-controlled drive would have massive WA.
Kingston SSDNow 40GB (X25-V)
593.59TB Host writes
Reallocated sectors : 05 17
Available Reserved Space : E8 99
POH 5372
MD5 OK
32.53MiB/s on avg (~115 hours)
--
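For scale, an average like the 32.53MiB/s above translates into daily write volume as follows (a quick sketch, nothing drive-specific):

```python
# Convert an average write speed in MiB/s into TiB written per day.
def tib_per_day(mib_per_s: float) -> float:
    seconds_per_day = 86_400
    return mib_per_s * seconds_per_day / (1024 * 1024)  # MiB -> TiB

print(f"{tib_per_day(32.53):.2f} TiB/day")  # ~2.68 TiB/day at 32.53 MiB/s
```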
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 43 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 612226 (Raw writes) ->598TiB
F1 814872 (Host writes) ->796TiB
MD5 OK
104.49MiB/s on avg (~115 hours)
power on hours : 2353
--
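The TiB figures above follow from the raw counters; on SandForce drives the E9 and F1 attributes count in GiB (as the arrows above suggest), so the conversion and the apparent write amplification work out as:

```python
# SandForce SMART attributes E9 (raw/NAND writes) and F1 (host writes)
# report in GiB; divide by 1024 to get the TiB figures quoted above.
raw_writes_gib = 612_226   # E9
host_writes_gib = 814_872  # F1

raw_tib = raw_writes_gib / 1024    # ~598 TiB
host_tib = host_writes_gib / 1024  # ~796 TiB

# With compressible data the NAND sees fewer writes than the host sends,
# so the apparent write amplification drops below 1.0.
wa = raw_writes_gib / host_writes_gib
print(f"raw {raw_tib:.0f} TiB, host {host_tib:.0f} TiB, WA {wa:.2f}")
```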
I'll be rebooting and doing some cleaning shortly. (SSD Toolbox on the X25-V to regain some speed)
Kingston SSDNow 40GB (X25-V)
595.71TB Host writes
Reallocated sectors : 05 18
Available Reserved Space : E8 99
POH 5390
MD5 OK
35.02MiB/s on avg (~17 hours)
--
Corsair Force 3 120GB
01 94/50 (Raw read error rate)
05 2 (Retired Block count)
B1 45 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 617211 (Raw writes) ->603TiB
F1 821510 (Host writes) ->802TiB
MD5 OK
107.06MiB/s on avg (~17 hours)
power on hours : 2372
--
The Force 3 is idling for a few hours in preparation for a data retention test (which will last through Sunday).
I’m jumping back in with an 830 (in a week or so), but with a couple of twists, depending on whether I can establish exactly when the theoretical P/E cycle count expires. Twist one: the workload will be 4K random, full span. Twist two: once the theoretical P/E count has expired I will fill the drive up with data and checksum it. I’m then going to unplug the drive and leave it, and I will check the data integrity every 3 months.
As a quick recap of the JEDEC spec:
The SSD manufacturer shall establish an endurance rating for an SSD that represents the maximum number of terabytes that may be written by a host to the SSD, such that:
1) The SSD maintains its capacity
2) The SSD maintains the required UBER for its application class
3) The SSD meets the required functional failure requirement (FFR) for its application class
4) The SSD retains data with power off for the required time for its application class
The data retention requirement in a powered-off condition is specified as 1 year for Client applications and 3 months for Enterprise (subject to temperature boundaries).
I’m not sure of the basis for the different requirements of 3 months for Enterprise and 12 months for Client applications. Presumably the variation factors in the difference between how rapidly the SSD is written to, as opposed to how much is written.
I'll be entering two more drives as well, and both will have twists: one drive with extra spare area and one SF-based drive with a different compression ratio.
Just for the record, I changed to 46% on the Intel some time ago. It shouldn't matter for endurance, as 0-fill is just as valid as any other value, but it means that all drives are performing the same test.
I suggest that all new drives are set to 46%; it would mean that all drives are using the same compression ratio/data pattern.
I'll decide how to perform retention testing on the new drives, but I won't stop when the MWI is exhausted (as it would take a year to verify that it sticks to JEDEC). Instead I'll probably write n times NAND exhaustion so that the retention test (idle) period can be set to e.g. 1 month or 14 days.
I have the impression it is not so much about how fast the SSD is written to as simply how much. I think a significant portion of the extra erase cycles that are specified for e-MLC come from them only needing to have 3 months data retention. In other words, if the consumer MLC only had to have 3 months data retention, I think they could specify significantly higher erase cycles than they do.
I'm not saying that there is no difference between standard MLC and e-MLC. But I think that if both were specified for 3 months data retention, then MLC might have 10,000 or 15,000 erase cycles, as compared to 30,000 for e-MLC.
By the way, the test you propose sounds interesting. Too bad we have to wait a year to find out if it passes the JEDEC specification! Anvil has an interesting idea to try to speed up the time by writing some multiple of what is required to exhaust MWI. However, I seriously doubt the decay in data retention time is linear with the multiple of MWI written (I suspect it accelerates a lot at some point). Maybe it is roughly linear at first, but I would not go past a multiple of 4 if I were doing it. Then it would still take 3 months to test the data retention.
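To put that multiple-of-MWI idea in numbers: under the assumption of roughly linear decay (doubtful, as noted above, since the decay probably accelerates at some point), the trade-off between over-writing and test duration would look like this:

```python
# Naive linear model: a drive rated for 12 months retention at 1x its P/E
# spec would keep roughly 12/m months after writing m times the spec.
# This is an assumption for illustration, not a measured relationship.
def retention_months(rated_months: float, multiple: float) -> float:
    return rated_months / multiple

for m in (1, 2, 4):
    print(f"{m}x the P/E spec -> ~{retention_months(12, m):.0f} months to verify")
```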
Why did you decide on a Samsung 830? If the problems Christopher had are indeed a firmware bug, you may have trouble telling when you have exhausted the MWI on the Samsung.
I'm going for drives with rich and proven SMART attributes; a pity that the Samsungs are "hiding" some of the more interesting ones.
I'll most likely go for 4-6x the spec. (both new drives are 25nm)
Testing for data retention failures raises a few interesting questions. If one powers on the SSD every 3 months, I'd expect it to somewhat interfere with the deterioration of the data and possibly help stabilize it. The data won't be reprogrammed unless one writes (of course), but it will most likely make a difference.
If this weren't the case, then the drive could just be left powered on.
What about rotating data? I expect none of the consumer drives (probably none at all) do "auto rotating" except while doing GC, meaning that it would have to be done manually.
If I were to keep an offline copy of some data, I'd most likely refresh the copy every 3-6 months; less frequent updates would leave most data out of date.
I’d also guess that P/E exhaustion and data retention are not linear. The 830 is attractive for a few reasons. Selfishly, it would be an interesting SSD to play with for a few hours before testing, but more importantly it will only take a couple of days to exhaust the theoretical P/E count. The downside of course is that I might not know when that is, but I can take that risk.
Potentially waiting a year plus is a pain, but then again the 320 and X25-V have been running for what, nearly 7 months? Who knows, they might still be running before I’m done :rofl:
Data retention is the big unknown for me. AFAIK data retention is established with accelerated testing using high temperatures. I will just leave the 830 in my PC case so it will always remain at normal operating temperatures.
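For reference, that accelerated high-temperature testing is typically modelled with an Arrhenius acceleration factor. A sketch, assuming a commonly quoted activation energy of 1.1 eV for NAND retention (the activation energy and temperatures here are illustrative assumptions, not spec figures):

```python
import math

# Arrhenius acceleration factor: AF = exp((Ea/k) * (1/T_use - 1/T_stress)),
# with temperatures in kelvin. Ea = 1.1 eV is a commonly assumed activation
# energy for NAND data retention; treat this as an illustration, not a spec.
K_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c: float, t_stress_c: float,
                        ea_ev: float = 1.1) -> float:
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_EV) * (1.0 / t_use - 1.0 / t_stress))

# A short bake at e.g. 85C stands in for much longer storage at 40C.
print(f"{acceleration_factor(40, 85):.0f}x acceleration")
```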
Has anyone looked at SMART attributes on the 830 using GSmartControl? (Ref this post that revealed "hidden" info)
I'll test one of my 830's shortly, just need to find the drive and download/install gsmartcontrol.
Thanks Anvil :) If you hover your mouse over an attribute in the GSmartControl GUI, does a pop-up appear with more info? (I think it is a Java-based script, so Java might need to be installed for it to appear).
Edit: I see that B1/177 is supposed to update continuously. Hmmm.
Does SSD Life predict the MWI?
Today's update:
Kingston V+100
It dropped out again. I'll restart it later today.
http://www.diskusjon.no/index.php?ap...tach_id=473820
Intel X25-M G1 80GB
190.9270 TiB
19953 hours
Reallocated sectors : 00
MWI=150 to 148
MD5 =OK
47.22 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=473818http://www.diskusjon.no/index.php?ap...tach_id=473816
m4
156.4539 TiB
570 hours
Avg speed 80.87MiB/s.
AD gone from 18 to 09.
P/E 2749.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473819http://www.diskusjon.no/index.php?ap...tach_id=473817
Almost at MWI=1% now :)
Hey everyone :)
Had to deal with a family situation since early October. I flew to Connecticut and took care of a lot of things there for the past few months (the 10 day power outage was fun, not) and have only been at my place in Michigan for ~14 total days since early October (and those were awfully busy days). Fortunately everyone is okay (enough) now and I'm back in Michigan indefinitely. In the next few days I hope to try to catch up on the charting and everything. I may just veg out and play video games though...it will be nice to just relax for once.
To those who emailed me while I was away, thank you for your support and I'm sorry I never returned emails (I brought a laptop, but its battery and keyboard don't work, so I was basically PC-less for the past few months).
I haven't so much as browsed XS since early October but this thread still looks very active so I bet I have a ton to catch up on in this thread alone :p:
At some point along the way, the PC testing my C300 and SF-1200 nLTT restarted. The SF-1200 nLTT is no longer recognized and the C300 still is (but it hasn't been in-testing for an unknown amount of time). The SF-1200 may just need some TLC to get recognized again (maybe even just a simple power cycle).
Hope everyone is doing well :)
Welcome back Vapor :)
Great to see that you are doing well!
We have missed both you and your charts, and there have been numerous requests for the C300 in particular.
(and then there is the attachment situation...)
Anyways, looking forward to an update on the charts (when you are ready!)
Welcome back Vapor! You have been missed :welcome:
Welcome back Vapor! :welcome:
I think the vegging out for a little while sounds well deserved :)
Kingston SSDNow 40GB (X25-V)
599.49TB Host writes
Reallocated sectors : 05 19
Available Reserved Space : E8 99
POH 5422
MD5 OK
33.63MiB/s on avg (~29 hours)
--
The Corsair Force 3 120GB is on a short retention test; it's 29 hours into the "test".
Next update will be next year :)
(the X25-V almost made 600TiB during 2011; by midnight it will be about 130GiB short of 600TiB)
http://www.ssdaddict.com/ss/Endurance_cr_20111231.png
Last update of the year. 6 months of testing are over and 12 new ones can start :)
Kingston V+100
317.7995 TiB
7995 hours
Avg speed 24.53 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473923http://www.diskusjon.no/index.php?ap...tach_id=473920
Intel X25-M G1 80GB
193.7977 TiB
19970 hours
Reallocated sectors : 00
MWI=148 to 147
MD5 =OK
45.87 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=473922http://www.diskusjon.no/index.php?ap...tach_id=473919
m4
161.5540 TiB
588 hours
Avg speed 78.99MiB/s.
AD gone from 09 to 06.
P/E 2837.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473924http://www.diskusjon.no/index.php?ap...tach_id=473921
Happy new year everyone :toast:
Happy new year to everyone and welcome back Vapor !
Good to hear the C300 will hopefully be back in action, so we can see if it is indeed better than the m4.
Happy New Year Everyone.
Anvil, it would be cool to have a graph that is normalised to reflect differences in (raw) drive capacity and theoretical P/E cycle capability. This would highlight the ability of the controller to limit write amplification and/or benefit from compression.
For example:
• Kingston (Intel) X25-V 40GB x 5,000 P/E = 200,000 GB. P/E expired at 180,000 GB
• Corsair F3 120GB x 3,000 P/E = 360,000 GB. P/E expired at 438,000 GB
The F3 has a distinct advantage, although in this case it was using compressible data.
As another example:
• Kingston (Intel) X25-V 40GB x 5,000 P/E = 200,000 GB. P/E expired at 180,000 GB
• Kingston (JMF618) V100 64GB x 5,000 P/E = 320,000 GB. P/E expired at 196,000 GB
A huge difference that is not readily evident from the graph.
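Those comparisons can be tabulated directly; a quick sketch using the figures above:

```python
# Theoretical write capacity = raw capacity (GB) x rated P/E cycles,
# compared with the writes actually needed to expire the P/E count.
# (GB, rated P/E, GB written at P/E expiry), figures as quoted above.
drives = {
    "Kingston (Intel) X25-V 40GB": (40, 5000, 180_000),
    "Corsair F3 120GB":            (120, 3000, 438_000),
    "Kingston (JMF618) V100 64GB": (64, 5000, 196_000),
}
for name, (gb, pe, actual_gb) in drives.items():
    theoretical_gb = gb * pe
    ratio = actual_gb / theoretical_gb  # >1 means compression helped
    print(f"{name}: {theoretical_gb:,} GB theoretical, "
          f"{actual_gb:,} GB actual ({ratio:.2f}x)")
```

The F3's ratio lands above 1.0 only because it was fed compressible data; the others fall short of their theoretical capacity, reflecting WA above 1.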
It also seems that the X25-M G1 drive is really suffering due to a lack of TRIM.
By the way I have asked Samsung support if they can confirm if 177 is the P/E count. I’m not holding my breath for an answer. Regardless I’ve got one on order, although it will be annoying if I can’t determine when the theoretical P/E count expires.
The more I think about it, the more I think data retention is the true Achilles heel of SSDs, rather than write endurance.
Vapor, was your SF drive left unpowered? I can’t remember how much you had written past MWI 10, but as you mention that the drive is no longer recognized, presumably it lost its ability to retain data in just under 3 months.
Happy New Year!
@Ao1
Will have a look at that graph.
The Intel G1 is really looking bad. I noticed the extremely low P/E count at MWI exhaustion; it would be interesting to see what a G2 would do without TRIM.
This is not normalised, but it shows the difference between theoretical and actual writes required to deplete the P/E cycles.
I can’t recall what Vapor used for compression, but presumably the workload was a lot less compressible than the 46% used for the F3.
I’ve guessed the 830's actual writes, but for sure Samsung SSDs are really bad at controlling WA, followed by the JMicron/Toshiba controller. The impact of a lack of TRIM on the Intel G1 drive is also savage. It would be interesting to test a G2 without TRIM.
http://img819.imageshack.us/img819/4...omparison1.png
Now...how to work out normalised values for actual write capacity.....
OK, a normalised chart. I’m not sure how accurate this is. I normalised the theoretical write capacity based on all drives having a capacity of 120GB with 5,000 P/E cycles. I then scaled each drive's actual writes by the ratio between the normalised and non-normalised theoretical write capacities to get normalised actual write values.
http://img31.imageshack.us/img31/661...arisonnorm.png
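My reading of that method as code; a sketch only, since interpreting the scaling factor as the ratio of baseline to drive-specific theoretical capacity is an assumption on my part:

```python
# Normalise every drive to a common 120GB / 5,000 P/E baseline, then scale
# its actual writes by the same factor as its theoretical write capacity.
BASELINE_GB, BASELINE_PE = 120, 5000  # 600,000 GB theoretical baseline

def normalised_actual_writes(capacity_gb: float, pe_cycles: float,
                             actual_writes_gb: float) -> float:
    theoretical_gb = capacity_gb * pe_cycles
    factor = (BASELINE_GB * BASELINE_PE) / theoretical_gb
    return factor * actual_writes_gb

# e.g. the X25-V (40GB x 5,000 P/E, expired at 180,000 GB): factor 3x
print(f"{normalised_actual_writes(40, 5000, 180_000):,.0f} GB")  # 540,000 GB
```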