Could it be that WA is actually counting what was written in excess of 1? Like 0.10 meaning a real WA of 1.1?
Okay, I've been neglecting my SSD testing duties, at least on the reporting side -- I promise to rectify the situation shortly. I've been collecting log information and taking screenshots, but I've been travelling non-stop for a while now and it's getting old.
Secondly, I have a new 64GB drive to test. I've been using it in my laptop to shake it down, but I have high hopes for it. Could this be the 64GB MLC drive that hits 1PB? I don't even know why I care, but I do.
Lastly, I was going to test the Imation S-Class SLC too. But after waiting weeks for the drive to show up in the mail, I ended up getting the MLC-based M-Class instead (I was sent the wrong drive), which is Samsung NAND + a JMicron controller, no NCQ or TRIM. Very disappointed -- an older SLC drive would be interesting to test, as the high WA would help get the job done sooner. God only knows how long a good SLC drive would take to go tits-up under these conditions, but a Phison-controlled drive would have massive WA.
Kingston SSDNow 40GB (X25-V)
593.59TB Host writes
Reallocated sectors : 05 17
Available Reserved Space : E8 99
POH 5372
MD5 OK
32.53MiB/s on avg (~115 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 43 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 612226 (Raw writes) ->598TiB
F1 814872 (Host writes) ->796TiB
MD5 OK
104.49MiB/s on avg (~115 hours)
power on hours : 2353
--
I'll be rebooting and doing some cleaning shortly. (SSD Toolbox on the X25-V to regain some speed)
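By the way, the E9/F1 raw values map to the TiB figures above by a straight GiB-to-TiB division; a minimal sketch (assuming the raw units really are GiB, which the numbers above bear out):

```python
def gib_to_tib(raw_gib: int) -> float:
    """The SandForce E9/F1 raw values count GiB written; convert to TiB."""
    return raw_gib / 1024

# Figures from the Force 3 update above
print(round(gib_to_tib(612226)))  # E9 raw writes  -> 598 TiB
print(round(gib_to_tib(814872)))  # F1 host writes -> 796 TiB
```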
Kingston SSDNow 40GB (X25-V)
595.71TB Host writes
Reallocated sectors : 05 18
Available Reserved Space : E8 99
POH 5390
MD5 OK
35.02MiB/s on avg (~17 hours)
--
Corsair Force 3 120GB
01 94/50 (Raw read error rate)
05 2 (Retired Block count)
B1 45 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 617211 (Raw writes) ->603TiB
F1 821510 (Host writes) ->802TiB
MD5 OK
107.06MiB/s on avg (~17 hours)
power on hours : 2372
--
The Force 3 is idling for a few hours ahead of a data retention test (which will last through Sunday).
I'm jumping back in with an 830 (in a week or so)... but with a couple of twists, depending on whether I can establish exactly when the theoretical P/E cycle count expires. Twist one: the workload will be 4K random full span. Twist two: once the theoretical P/E count has expired I will fill the drive up with data and checksum it. I'm then going to unplug the drive and leave it, checking the data integrity every 3 months.
As a quick recap of the JEDEC spec:
The SSD manufacturer shall establish an endurance rating for an SSD that represents the maximum number of terabytes that may be written by a host to the SSD, while:
1) The SSD maintains its capacity
2) The SSD maintains the required UBER for its application class
3) The SSD meets the required functional failure requirement (FFR) for its application class
4) The SSD retains data with power off for the required time for its application class
The functional failure requirement for retention of data in a powered off condition is specified as 1 year for Client applications and 3 months for Enterprise (subject to temperature boundaries).
I'm not sure of the basis for the different requirements: 3 months for enterprise versus 12 months for client applications. Presumably the variation factors in how rapidly the SSD is written to, as opposed to how much is written.
I'll be entering a few more drives as well and both will have twists.
1 drive with an extra spare area and one SF based drive with a different compression ratio.
Just for the record, I changed to 46% on the Intel (some time ago). It shouldn't matter for endurance, as 0-fill is just as valid as any other value, but it does mean that all drives are performing the same test.
I suggest that all new drives are set to 46%; that way all drives are using the same compression ratio/data pattern.
I'll decide how to perform retention testing on the new drives, but I won't stop when the MWI is exhausted (as it would take a year to verify that it sticks to JEDEC); I'll probably write n times NAND exhaustion so that the retention test (idle) period can be set to e.g. 1 month or 14 days.
I have the impression it is not so much about how fast the SSD is written to as simply how much. I think a significant portion of the extra erase cycles that are specified for e-MLC come from them only needing to have 3 months data retention. In other words, if the consumer MLC only had to have 3 months data retention, I think they could specify significantly higher erase cycles than they do.
I'm not saying that there is no difference between standard MLC and e-MLC. But I think that if both were specified for 3 months data retention, then MLC might have 10,000 or 15,000 erase cycles, as compared to 30,000 for e-MLC.
By the way, the test you propose sounds interesting. Too bad we have to wait a year to find out if it passes the JEDEC specification! Anvil has an interesting idea to try to speed up the time by writing some multiple of what is required to exhaust MWI. However, I seriously doubt the decay in data retention time is linear with the multiple of MWI written (I suspect it accelerates a lot at some point). Maybe it is roughly linear at first, but I would not go past a multiple of 4 if I were doing it. Then it would still take 3 months to test the data retention.
Why did you decide on a Samsung 830? If the problems Christopher had are indeed a firmware bug, you may have trouble telling when you have exhausted the MWI on the Samsung.
I'm going for drives with rich and proven SMART attributes; a pity that the Samsungs are "hiding" some of the more interesting ones.
I'll most likely go for 4-6x the spec. (both new drives are 25nm)
Testing for data retention failures raises a few interesting questions. If one powers on the SSD every 3 months, I'd expect it to interfere somewhat with the deterioration of the data and possibly help stabilize it. The data won't be reprogrammed unless one writes (of course), but it will most likely make a difference.
If this wasn't the case then the drive could just be left powered On.
What about rotating data? I expect none of the consumer drives (probably none at all) do "auto rotating" except while doing GC (meaning it would have to be done manually).
If I were to keep an offline copy of some data, I'd most likely refresh the copy every 3-6 months; less frequent updates would leave most data out of date anyway.
I'd also guess that P/E exhaustion and data retention are not linear. The 830 is attractive for a few reasons. Selfishly, it would be an interesting SSD to play with for a few hours before testing, but more importantly it will only take a couple of days to exhaust the theoretical P/E count. The downside of course is that I might not know when that is, but I can take that risk.
Potentially waiting a year plus is a pain, but then again the 320 and X25-V have been running for what, nearly 7 months? Who knows, they might still be running when I'm done :rofl:
Data retention is the big unknown for me. AFAIK data retention is established with accelerated testing using high temperatures. I will just leave the 830 in my PC case so it will always remain at normal operating temperatures.
Has anyone looked at the SMART attributes on the 830 using GSmartControl? (Ref this post that revealed "hidden" info)
I'll test one of my 830's shortly, just need to find the drive and download/install gsmartcontrol.
Thanks Anvil :) If you hover your mouse over an attribute in the GSmartControl GUI, does a pop-up appear with more info? (I think it is a Java-based script, so Java might need to be installed for it to appear.)
Edit: I see that B1/177 is supposed to update continuously. Hmmm.
Does SSD Life predict the MWI?
Today's update:
Kingston V+100
It dropped out again. I'll restart it later today.
http://www.diskusjon.no/index.php?ap...tach_id=473820
Intel X25-M G1 80GB
190.9270 TiB
19953 hours
Reallocated sectors : 00
MWI=150 to 148
MD5 =OK
47.22 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=473818http://www.diskusjon.no/index.php?ap...tach_id=473816
m4
156.4539 TiB
570 hours
Avg speed 80.87MiB/s.
AD gone from 18 to 09.
P/E 2749.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473819http://www.diskusjon.no/index.php?ap...tach_id=473817
Almost at MWI=1% now :)
Hey everyone :)
Had to deal with a family situation since early October. I flew to Connecticut and took care of a lot of things there for the past few months (the 10 day power outage was fun, not) and have only been at my place in Michigan for ~14 total days since early October (and those were awfully busy days). Fortunately everyone is okay (enough) now and I'm back in Michigan indefinitely. In the next few days I hope to try to catch up on the charting and everything. I may just veg out and play video games though...it will be nice to just relax for once.
To those who emailed me while I was away, thank you for your support and I'm sorry I never returned emails (I brought a laptop, but its battery and keyboard don't work, so I was basically PC-less for past few months).
I haven't so much as browsed XS since early October but this thread still looks very active so I bet I have a ton to catch up on in this thread alone :p:
At some point along the way, the PC testing my C300 and SF-1200 nLTT restarted. The SF-1200 nLTT is no longer recognized and the C300 still is (but it hasn't been in-testing for an unknown amount of time). The SF-1200 may just need some TLC to get recognized again (maybe even just a simple power cycle).
Hope everyone is doing well :)
Welcome back Vapor :)
Great to see that you are doing well!
We have missed both you and your charts, and there have been numerous requests for the C300 in particular.
(and then there is the attachment situation...)
Anyways, looking forward to an update on the charts (when you are ready!)
Welcome back Vapor! You have been missed :welcome:
Welcome back Vapor! :welcome:
I think the vegging out for a little while sounds well deserved :)
Kingston SSDNow 40GB (X25-V)
599.49TB Host writes
Reallocated sectors : 05 19
Available Reserved Space : E8 99
POH 5422
MD5 OK
33.63MiB/s on avg (~29 hours)
--
Corsair Force 3 120GB is on a short retention test, it's 29 hours into the "test".
Next update will be next year :)
(the X25-V almost made 600TB during 2011, by midnight it will be about 130GiB short of 600TiB)
http://www.ssdaddict.com/ss/Endurance_cr_20111231.png
Last update of the year. 6 months of testing are over and 12 new ones can start :)
Kingston V+100
317.7995 TiB
7995 hours
Avg speed 24.53 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473923http://www.diskusjon.no/index.php?ap...tach_id=473920
Intel X25-M G1 80GB
193.7977 TiB
19970 hours
Reallocated sectors : 00
MWI=148 to 147
MD5 =OK
45.87 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=473922http://www.diskusjon.no/index.php?ap...tach_id=473919
m4
161.5540 TiB
588 hours
Avg speed 78.99MiB/s.
AD gone from 09 to 06.
P/E 2837.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473924http://www.diskusjon.no/index.php?ap...tach_id=473921
Happy new year everyone :toast:
Happy new year to everyone and welcome back Vapor !
Good news to hear the C300 will hopefully be back in action so we can see if it is indeed better than the M4.
Happy New Year Everyone.
Anvil, it would be cool to have a graph that is normalised to reflect differences in (raw) drive capacity and theoretical P/E cycle capability. This would highlight the ability of the controller to limit write amplification and/or benefit from compression.
For example:
• Kingston (Intel) X25-V 40GB x 5,000P/E = 200,000 GB. P/E expired at 180,000 GB
• Corsair F3 120GB x 3,000P/E = 360,000 GB. P/E expired at 438,000 GB
The F3 has a distinct advantage, although in this case it was using compressible data.
As another example:
• Kingston (Intel) X25-V 40GB x 5,000P/E = 200,000 GB. P/E expired at 180,000 GB
• Kingston (JMF618) V100 64GB x 5,000 = 320,000 GB P/E expired at 196,000 GB
A huge difference, that is not readily evident from the graph.
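The theoretical write capacities above are just capacity x rated P/E; a quick sketch of the ratio of actual to theoretical writes, using the figures from the examples above:

```python
def theoretical_gb(capacity_gb: int, pe_cycles: int) -> int:
    """Ideal write capacity in GB if write amplification were exactly 1."""
    return capacity_gb * pe_cycles

# (capacity GB, rated P/E, GB written at P/E expiry) from the examples above
drives = {
    "X25-V 40GB":    (40, 5000, 180_000),
    "Force 3 120GB": (120, 3000, 438_000),
    "V100 64GB":     (64, 5000, 196_000),
}
for name, (cap, pe, actual) in drives.items():
    print(f"{name}: {actual / theoretical_gb(cap, pe):.2f}x theoretical")
```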
It also seems that the X25-M G1 drive is really suffering due to a lack of TRIM.
By the way I have asked Samsung support if they can confirm if 177 is the P/E count. I’m not holding my breath for an answer. Regardless I’ve got one on order, although it will be annoying if I can’t determine when the theoretical P/E count expires.
The more I think about it, the more I think data retention is the true Achilles' heel of SSDs, rather than write endurance.
Vapor, was your SF drive left unpowered? I can’t remember how much you had written past MWI 10, but as you mention that the drive no longer works presumably it lost its ability to retain data in just under 3 months.
Happy New Year!
@Ao1
Will have a look at that graph.
The Intel G1 is really looking bad, I noticed the extremely low P/E count at MWI exhaustion, would be interesting to see what a G2 would do without TRIM.
This is not normalised, but it shows the difference between theoretical and actual writes required to deplete the P/E cycles.
I can’t recall what Vapor used for compression, but presumably the workload was a lot less compressible than the 46% used for the F3.
I've guessed the 830 actual writes, but for sure Samsung SSDs are really bad at controlling WA, followed by the JMicron/Toshiba controller. The impact of a lack of TRIM on the Intel G1 drive is also savage. It would be interesting to test a G2 without TRIM.
http://img819.imageshack.us/img819/4...omparison1.png
Now...how to work out normalised values for actual write capacity.....
OK, a normalised chart. I'm not sure how accurate this is. I normalised the theoretical write capacity based on all drives having a capacity of 120GB with 5,000 P/E cycles. I then multiplied the actual write values by the ratio between the normalised and non-normalised theoretical write capacities to get a normalised actual write value.
http://img31.imageshack.us/img31/661...arisonnorm.png
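If I've described the method right above, it boils down to one scale factor per drive; a sketch (the Force 3 figures plugged in are the ones from the earlier post, and the printed result is illustrative, not read off the chart):

```python
REF_THEORETICAL_GB = 120 * 5000  # normalise everything to 120GB @ 5,000 P/E

def normalise_actual(capacity_gb: int, pe_cycles: int, actual_gb: float) -> float:
    """Scale a drive's actual writes by the ratio of the reference
    theoretical write capacity to the drive's own theoretical capacity."""
    factor = REF_THEORETICAL_GB / (capacity_gb * pe_cycles)
    return actual_gb * factor

# Force 3 120GB at 3,000 P/E, ~438,000 GB written at P/E expiry
print(round(normalise_actual(120, 3000, 438_000)))  # -> 730000
```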
Kingston SSDNow 40GB (X25-V)
602.48TB Host writes
Reallocated sectors : 05 19
Available Reserved Space : E8 99
POH 5449
MD5 OK
33.21MiB/s on avg (~56 hours)
--
Corsair Force 3 120GB
01 120/50 (Raw read error rate)
05 2 (Retired Block count)
B1 45 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 617211 (Raw writes) ->603TiB
F1 821510 (Host writes) ->802TiB
MD5 --
--.--MiB/s on avg (~-- hours)
power on hours : 2374
--
The F3 has just started testing again having been disconnected for 56+ hours and the X25-V is about to have a short data retention test. (48 hours)
First update of the year. :)
Kingston V+100
318.0055 TiB
1687 hours
Avg speed 25.89 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=474177http://www.diskusjon.no/index.php?ap...tach_id=474175
It dropped out again last night. It's the second time in 24 hours.
Intel X25-M G1 80GB
198.2124 TiB
19998 hours
Reallocated sectors : 00
MWI=147 to 146
MD5 =OK
52.72 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=474179http://www.diskusjon.no/index.php?ap...tach_id=474174
m4
168.9476 TiB
616 hours
Avg speed 78.09 MiB/s.
AD gone from 06 to 02.
P/E 2965.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=474180http://www.diskusjon.no/index.php?ap...tach_id=474176
Almost there. Now the fun begins :D
Damn, I'm glad to be home. I'll unpack, then post an update after doing a manual power cycle of the 830 to get current PE values.
I will check out gSmartControl with the 830. Then I'll be prepping a new drive to start testing.
I still would very much like to know what happened to the 830... its behaviour is not at all what I would expect. I'm hoping a firmware update can fix the problem, but I don't know that Samsung will release one anytime soon.
Good to see you back Vapor!
Samsung 830 64GB Update, Day 27
FW:CXM01B1Q
GiB written:
175509.38 ASU
176717.9 Total
Avg MB/s
63.65
PE Cycles
11046
Reallocated Sectors
32768
16 Blocks
https://www.box.com/shared/static/7z...13z6mij1rn.png
https://www.box.com/shared/static/vm...vejl9pmmc5.png
Okay -- about the PE cycle count -- it initially increased over the first 10 days, but very slowly. Before the power cycle I just performed, PE cycles were the same as they had been for two weeks, at 6742. It's now at 11046 after the power cycle. I've been away for the past two weeks, so I was unable to power cycle the system.
The SMART attribute you refer to as the P/E cycle count is not the P/E cycle count; the value has been decreasing from 100 down to 1 while its raw value has been increasing.
The value could be working more like an MWI counter, but its raw value is something completely different.
Have you been logging with SMARTLOG?
Yes, I have. What is it, if not PE cycles? It went from 100 to 1 (I shut down the system after about 10 days). It looks a lot like PE cycles, and the Samsung Magician software refers to it as a PE cycle attribute. Of course, you can't log the PE cycles, since the drive has to be power cycled.
If the raw value is a P/E cycle counter then WA is close to 4.
GiB written / capacity in GiB should be close to 3000, and so should the P/E count if WA were 1 :)
It is weird that the counter does not change during "normal" usage.
edit:
Could you expand the raw value column next time, Total Count of Write Sectors is not fully shown :)
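To spell out the arithmetic: if the raw 177 value is a P/E counter, WA is simply actual cycles divided by ideal cycles (GiB written / capacity). A sketch using the Day 27 figures:

```python
def estimated_wa(gib_written: float, capacity_gib: int, pe_raw: int) -> float:
    """WA = actual P/E cycles consumed / ideal cycles (writes spread evenly, WA=1)."""
    ideal_cycles = gib_written / capacity_gib
    return pe_raw / ideal_cycles

# Day 27 figures: 176,717.9 GiB total written, raw 177 at 11,046, 64GiB drive
print(f"WA ~ {estimated_wa(176717.9, 64, 11046):.1f}")  # -> WA ~ 4.0
```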
Kingston SSDNow 40GB (X25-V)
Is 22 hours into a data retention test!
602.48TB Host writes
Reallocated sectors : 05 19
Available Reserved Space : E8 99
POH 5449
MD5 OK
--.--MiB/s on avg (~-- hours)
--
Corsair Force 3 120GB
01 94/50 (Raw read error rate)
05 2 (Retired Block count)
B1 49 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 623290 (Raw writes) ->609TiB
F1 829599 (Host writes) ->810TiB
MD5 OK
106.76MiB/s on avg (~22 hours)
power on hours : 2395
--
Today's update:
Kingston V+100
319.7069 TiB
1707 hours
Avg speed 25.44 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=474251http://www.diskusjon.no/index.php?ap...tach_id=474248
Intel X25-M G1 80GB
201.4006 TiB
20017 hours
Reallocated sectors : 00
MWI=146 to 145
MD5 =OK
46.65 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=474250http://www.diskusjon.no/index.php?ap...tach_id=474247
m4
174.3919 TiB
636 hours
Avg speed 80.17 MiB/s.
AD gone from 02 to 255.
P/E 3059.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=474252http://www.diskusjon.no/index.php?ap...tach_id=474249
We passed 3000 P/E at 09:30 am today :)
^^ So what are you guys gonna do with this M4 now? :p Leave it powered off to check the volatility, or continue to attempt 1PB?
Samsung 830 64GB Update, Day 28
FW:CXM01B1Q
GiB written:
180960.49
Avg MB/s
64.61
PE Cycles
11418, up from 11046 yesterday
Reallocated Sectors
32768
16 Blocks
682 Hours
https://www.box.com/shared/static/gi...386p401o07.png
https://www.box.com/shared/static/mx...cr3e76dv22.png
B.A.T.
What is the status of the MTRON? And what kind of MTRON is it?
Christopher, if I look at the last two screenshots that you have posted, you have written 5,472 GB of data and the RAW value of 177 has increased by 372. With no WA you should have been able to write 23,808 GB with 372 P/E cycles, so if the RAW value of 177 is counting P/E depletion, the write amplification factor is 4.35.
That sounds horrific, but I’m leaning towards believing it might be correct based on how quickly Johnw’s 470 burnt out.
I think it is out of line that Samsung doesn't provide an MWI, given that data retention becomes an issue once you start exceeding it. :down:
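To show the working behind the 4.35 figure: 372 cycles on a 64GB drive could have absorbed 372 x 64 = 23,808 GB at WA=1, against 5,472 GB actually written. A sketch:

```python
def wa_from_cycle_delta(cycles_used: int, capacity_gb: int, host_gb: float) -> float:
    """WA = writes the consumed cycles could have absorbed at WA=1 / host writes."""
    return cycles_used * capacity_gb / host_gb

# 372 cycles of raw 177 consumed while 5,472 GB was written to a 64GB 830
print(f"WA = {wa_from_cycle_delta(372, 64, 5472):.2f}")  # -> WA = 4.35
```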
I don't know what to make of the Samsung. Last night, I waited until exactly 24:00 to stop ASU and then power cycled the Samsung to get the number of PE cycles used in that period. I can't seriously believe that WA is really that high on the 830 -- and there is still the problem of the drive being stuck in a lower performance state. I was hoping there was a new FW update for it, but alas, there is not.
The Samsung should be doing around 9000GB a day, but it's still stuck in whatever performance degradation mode its in. Now it does about 5300GB a day, using about ~370 PE cycles in the process.
Kingston SSDNow 40GB (X25-V)
Is 47.5 hours into the data retention test!
602.48TB Host writes
Reallocated sectors : 05 19
Available Reserved Space : E8 99
POH 5449
MD5 OK
--.--MiB/s on avg (~-- hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 54 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 630653 (Raw writes) ->616TiB
F1 839397 (Host writes) ->820TiB
MD5 OK
106.70MiB/s on avg (~48 hours)
power on hours : 2421
--
Will be re-attaching the X25-V in 30 minutes.
Just mentioning that my X25 has been powered off since Saturday, doing a retention test.
Will it last for a year? ;)
(it's 4.5 months left)
edit:
My X25-V is OK, it's back running the test. (disconnected for 48 hours + a few minutes)
http://www.ssdaddict.com/ss/Endurance_cr_20120103.png
B.A.T.
I remember all that about the MTRON... I just was wondering if you had got it back yet.
Since I got whored out of the S-Class 32GB SLC (I was sent a 64GB M-Class MLC, JMicron + Samsung NAND, instead), I picked up a new MTRON 16GB. It should be here in a few days.
I didn't get anything back, no, so we just have to enjoy what we have here :)
Power is out again so the update will have to wait until the morning.
Well, don't freeze to death ;)
I don't even know the first thing about MTRONs...
I found this when I started digging in October.
MTRON MOBI 3000 PATA
MTRON MOBI 3000 SATA
MTRON MOBI 3500 SATA
MTRON PRO 7000 SATA
MTRON PRO 7500 SATA
Samsung 830 64GB Update, Day 29
FW:CXM01B1Q
GiB written:
186406.84
Avg MB/s
63.86
PE Cycles
11799, up from 11418 yesterday
Reallocated Sectors
36864
18 Blocks
706 Hours
https://www.box.com/shared/static/mi...hc6kk4l2mp.png
https://www.box.com/shared/static/yv...hql2o3r0n0.png
The MTRON I referred to is an MTRON PRO 7000 SLC as I've discovered. Thanks B.A.T.
Kingston SSDNow 40GB (X25-V)
605.34TB Host writes
Reallocated sectors : 05 19
Available Reserved Space : E8 99
POH 5473
MD5 OK
35.05MiB/s on avg (~24 hours)
--
Corsair Force 3 120GB
01 82/50 (Raw read error rate)
05 2 (Retired Block count)
B1 58 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 637543 (Raw writes) ->623TiB
F1 848558 (Host writes) ->829TiB
MD5 OK
106.69MiB/s on avg (~72 hours)
power on hours : 2446
Today's update:
Kingston V+100
322.3106 TiB
1738 hours
Avg speed 25.23MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=474533http://www.diskusjon.no/index.php?ap...tach_id=474530
Intel X25-M G1 80GB
205,8065 TiB
20048 hours
Reallocated sectors : 00
MWI=145 to 144
MD5 =OK
42.27 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=474532http://www.diskusjon.no/index.php?ap...tach_id=474531
m4
182.3537 TiB
667 hours
Avg speed 78.03 MiB/s.
AD gone from 255 to 250.
P/E 3206.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=474534http://www.diskusjon.no/index.php?ap...tach_id=474529
All SSDs are back up and we have power again. 12 hours this time; hopefully the power company has fixed all the equipment broken by the storm now.
ummm, wilderness here fellas. From what I am reading there is a known bug that is causing M4s to die at 5200 hours of use! How far in is this one?? Interesting!
http://forums.crucial.com/t5/Solid-S...p/77298#M23632
http://forum.crucial.com/t5/Solid-St...4GB/td-p/76392
thoughts?
Only 667 hours. The first one died after 2749 hours, but the cause there was data retention loss after a power failure.
First time I've heard about that problem.
Not so far and I've been using the m4s for a long time, will have to read the whole thread and it looks a bit confusing...
The first few posts look similar to the freeze, which is cured in most cases by the LPM fix.
We need a new thread for further discussion of those reported m4 issues (except for B.A.T's drive) :)
true, sorry for derail. just figured i would ask the experts :)
Quote:
We need a new thread for further discussion of those reported m4 issues
Also, I needed to know the uptime of the endurance testing.
I doubt that any of my m4s are close to 5000 hours, I have been using my m4s a lot. (except for the ones that are waiting for a new fw for the raid-card issues)
From reading the first 8 pages all I can say is it looks weird...
edit: (all pages read)
As usual a lot of people are jumping to conclusions and mixing up issues, but there still looks to be something going on for the ones that have been using the drives for 5000+ hours.
Why don't you create that thread CT :)
Samsung 830 64GB Update, Day 30
FW:CXM01B1Q
GiB written:
191703.19
Avg MB/s
63.15
PE Cycles
12167, up from 11799 yesterday
Reallocated Sectors
36864
18 Blocks
730 Hours
https://www.box.com/shared/static/zy...o5yd1bte5b.png
https://www.box.com/shared/static/s9...zjqndel2ty.png
Alright -- so I know that one or two of you have experience with the MTRON PRO 7000 -- or at least something like it.
Would it be feasible to test it in a reasonable amount of time? MTRON claims you could write 50GB a day for 140 years. That is roughly 2.5PB, and the drive is not very fast.
I won't have it here for a few more days, but I don't know what else to do with a 16GB MTRON other than endurance test it.
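For what it's worth, MTRON's claim checks out arithmetically: 50 GB/day for 140 years is about 2.5 PB. How long a test would take depends entirely on the sustained write speed, which I don't know yet; the rate below is just a placeholder assumption:

```python
ENDURANCE_GB = 50 * 365 * 140   # MTRON's claim: 50 GB/day for 140 years

# Placeholder assumption: sustained write speed in MiB/s (not yet measured)
ASSUMED_MIB_S = 80
gb_per_day = ASSUMED_MIB_S * 86400 / 1024  # MiB/s -> GiB/day (close enough to GB)

print(ENDURANCE_GB)                             # -> 2555000 GB, i.e. ~2.5 PB
print(f"{ENDURANCE_GB / gb_per_day:.0f} days")  # test duration at the assumed speed
```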
Does this 5200 hours or 5000+ hours issue also affect the C300 or just the M4 ?
Still waiting for my 830 to be delivered. :rolleyes: Someone provided data on an 830 used in a production environment, which can be compared to the output from Christopher’s drive. (The 1st table is Christopher’s drive; the 2nd is the production drive)
http://img842.imageshack.us/img842/9...ison830rev.png
Uploaded to ImageShack on account of XS not allowing you to upload images :down:
I've been hassling Samsung for info on the 177 value. What they have stated in items 2 & 3 below does not make sense, and I don't think I will get any further clarification.
1) Attribute 177 can update while the unit is on
2) The raw value of this attribute is how many times it can prolong the life of a specific block
3) The indicator ranges over a wider set of numbers, as 1-100 is not enough.
Data from the second table seems to show that 177 can update without a power off. Maybe that is due to the different workload, or maybe it's just a configuration quirk of Christopher's setup. Let's see what happens with my drive.
I still can’t be sure that 177 is the MWI, but for the purpose of the endurance test I’m going to assume it is. When it gets to 1 I will stop the test, disconnect the power and leave it for 3 months. I’m going to run a few trace benchmarks and then I will let the standard endurance test run for a day or so. I will then ramp things up with the 4K random workload.
Thanks for the chart Ao1.
For the record, the 830 is being tested on an H67. Initially, I was using the RST drivers... but after the "Great Samsung 830 Performance Apocalypse of 2011" I switched to MSAHCI. I had no idea that 177 was at 1 because I didn't power off the drive for over 10 days. 177 data after 10 days was up to ~130... then after a power cycle it was over 5400+. So I saw it go from 96/0/130ish to 1/0/5400ish+ after turning the system off then back on.
Also remember -- and I can't stress this enough -- that the drive is not working as it should, at least performance-wise. It's performing more like a 64GB C300 than 830, so there is no telling what other issues may be lurking. I'm thinking about getting another one.
Are you getting the 64GB 830 in the mail? (or is it a different capacity?)
If you fit a line to the 177 RAW vs. 177 normalized values, the slope comes to about 30 raw per 1 normalized. And 100 * 30 = 3000. So it seems likely 177 is counting down, in normalized units, from 3000. Which is likely the expected number of cycles for the flash.
For comparison, with the Samsung 470, normalized 177 attribute hit 1 when the raw value was about 5017, so it seems like the 470 was using flash with an estimated 5000 cycle lifetime.
It is interesting that the WA comes out less than 2 for the "production environment". I wonder if the reason we measured 4 or 5 with the endurance test is because with continuous writing the Samsung never does much GC and so ends up having to do a lot more block erases. But maybe in the production environment, there are sufficient pauses for Samsung GC to kick in? Just a guess.
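To illustrate the fitting approach (the sample points below are made up to mimic a counter falling 100 to 1 while the raw value climbs toward ~3,000; they are not logged data):

```python
# Hypothetical (normalized 177, raw 177) samples, for illustration only:
# the counter falls 100 -> 1 while the raw value climbs toward ~3,000
points = [(100, 0), (67, 990), (34, 1980), (1, 2970)]

# Least-squares slope of raw vs. normalized (exact here: the points are collinear)
n = len(points)
mx = sum(x for x, _ in points) / n
my = sum(y for _, y in points) / n
slope = sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x, _ in points)

print(abs(slope))        # raw counts per 1 normalized step -> 30.0
print(abs(slope) * 100)  # implied rated cycle count -> 3000.0
```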
Kingston SSDNow 40GB (X25-V)
607.98TB Host writes
Reallocated sectors : 05 19
Available Reserved Space : E8 99
POH 5496
MD5 OK
33.98MiB/s on avg (~47 hours)
--
Corsair Force 3 120GB
01 94/50 (Raw read error rate)
05 2 (Retired Block count)
B1 63 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 644147 (Raw writes) ->629TiB
F1 857347 (Host writes) ->837TiB
MD5 OK
106.74MiB/s on avg (~95 hours)
power on hours : 2469
Quick (and old) question about a previous test of the Crucial M225 drive with Samsung 51nm NAND. The P/E listed is 5000/10000... isn't all 51nm created equal, or what am I missing?
I'm interested cuz I have an old OCZ Apex 64GB (ex-OS) drive that I now use as a scratch drive, and it uses 51nm Samsung NAND as well. It uses two of the infamous stuttering (w/XP) JMicron 602 rev. B controllers (they doubled the cache to a whopping 8MB) internally RAIDed. I wish I knew the mileage, but the only SMART attribute it supports is a fixed 44C temp reading. I hope I get even close to the writes of the M225, but I doubt the JM controller manages NAND as efficiently as the Indilinx does.
That is what I would guess as well. “Normal” client use might be a completely different picture. It would be helpful if anyone reading this thread, that is using a 470 or an 830 in a client environment, could post their SMART data.
@Christopher, I'll be using a Z68, which has a Marvell controller for SATA III and an Intel chip for SATA II. I'm leaning towards using the Marvell controller. I've ordered the 64GB version; touch wood it will arrive tomorrow morning, otherwise I'll have to wait until Monday.
Today's update. The power was cut for 10 minutes this morning and I didn't notice until after work. Everything is back up and running as normal.
Kingston V+100
323.4132 TiB
1753 hours
Avg speed 24.70MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=474705http://www.diskusjon.no/index.php?ap...tach_id=474702
Intel X25-M G1 80GB
208.1707 TiB
20061 hours
Reallocated sectors : 00
MWI=144 to 143
MD5 =OK
50.56 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=474704http://www.diskusjon.no/index.php?ap...tach_id=474703
m4
185.8111 TiB
681 hours
Avg speed 78.16 MiB/s.
AD gone from 250 to 248.
P/E 3268.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=474706http://www.diskusjon.no/index.php?ap...tach_id=474701
Not that I think it matters much, but what Z68 board has no Intel SATA III ports on it? Some Z68 boards use Marvell for two extra 6Gbps ports in addition to the two Intel 6Gbps ports, and for the purposes of testing it shouldn't really matter, but I am kinda curious as to why Z68 SATA III ports would get axed on any 1155 board. Do you get two extra SATA II ports or something?
There are 2 native SATA III ports on every Intel H67/Z68 motherboard. There is a Marvell controller on most Z68s as well, but it mostly sucks for RAID usage; it can do pretty OK for single drives.
(There are a few exceptions to the Marvell controller performing badly, I just can't remember which one it was, the 9182 or the 9128.)
As far as I know, all 830s are using the same firmware, so I'd be curious as to see what happens with the new 830. I was contemplating buying another one myself.
Samsung 830 64GB Update, Day 31
FW:CXM01B1Q
GiB written:
1917196.90
Avg MB/s
64.88
PE Cycles
12503, up from 12167 yesterday
Reallocated Sectors
40960
20 Blocks, up from 18
755 Hours
https://www.box.com/shared/static/pl...gzr031fb8i.png
https://www.box.com/shared/static/1m...fzz12lszxl.png
Here is a bonus pic, depicting the current state of performance of the 830:
https://www.box.com/shared/static/cc...a7mkd0ru1x.png
Yes, it really is working in SATA III.
This is what it was like before the performance fell off a cliff:
https://www.box.com/shared/static/cf...okrk2111uz.jpg
I think I'll try secure erasing it again... but I expect no miracles this time either. I could RMA it, but I'm not sure the fundamentals of this test are affected, other than the speed of the 830's demise.
On the other hand, I've been playing with another drive I am considering testing. I bought two similar drives, but both were acting a tad strange out of the box. Here is one of the drives:
Notice how similar it and the 830 are currently performing (see the 830 ASU shot in the post above, the last pic from today's update) in ASU...
https://www.box.com/shared/static/yd...t0yhrl4vh9.jpg
@Christopher
The Samsung 830 is quite interesting... I've been holding off on the 830 512GB since I've seen your posts regarding the drop in performance (which has yet to recover) and the write amplification calculated by Ao1. What worries me most is the sudden and severe change of the MWI-like SMART attribute after the power cycle. Back to the Intel 320 Series for now; I don't plan to touch new controllers on a laptop used for production.
Just curious, have you verified that the motherboard, the SATA controller and the cables are still working correctly, i.e. you could reproduce SATA-III speed with a "good" SSD?
@Ao1
The log of the production drive in your chart suggests that B1/177 serves as both MWI and WL-count at the same time. If you fit the raw value against the normalized value with a simple linear regression, it matches the assumption that the expected P/E is around 3,000 when the MWI is exhausted.
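The extrapolation can be sketched like this. The sample points below are made up purely for illustration (a drive losing 1 MWI point per 30 P/E cycles), not taken from Ao1's actual log:

```python
# Hypothetical (raw P/E count, normalized MWI) samples -- illustrative
# only, NOT real log data from the thread.
samples = [(0, 100), (150, 95), (300, 90), (600, 80), (900, 70)]

xs = [s[0] for s in samples]
ys = [s[1] for s in samples]
n = len(samples)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares fit: MWI = slope * raw + intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Raw P/E count at which the fitted MWI line hits 0 (MWI exhausted).
pe_at_exhaustion = -intercept / slope
print(round(pe_at_exhaustion))  # -> 3000 with these sample points
```

With real SMART snapshots in place of the sample list, the same fit gives the P/E count the drive is expected to report when the MWI runs out.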
Look, it's pretty much impossible to get this to happen naturally. I made some changes to the amount of free space after the second day, as the drive had become quite slow. I bumped it back up where it was, and it was fine for another 8 days or so... until I reduced the amount of free space left again. I let it run longer to see how bad performance would get, and this time secure erasing it wouldn't even fix it, much less the more extreme methods I tried.
I can put the drive into another system, and it does the same thing in every system I've tried, and with different drivers as well.
If this was a widespread problem, Dell, Lenovo, and Samsung would have stumbled upon it already.
The WA was 0.1 because 177/Wear Leveling Count was not being reported correctly... it only increased by 130 over the first 10 or so days, but after I disconnected the drive, 177 reported correctly. It does not really increase on my drive while the drive is powered on... you have to manually power cycle it for the Wear Leveling Count to increase. For the past few days I've been power cycling the drive or system before taking the SMART screenshot.
You mean the 177/WL-count was increasing, but not increasing correctly, while you were not power cycling it? This is what looks strange to me.
OK, here's my speculation of what happened to your 830:
Assume that the P/E spec is 3000. Assume that the true WA is 4 under ASU stress. The expected endurance for host writes is: 64GB/(1.024^3)*3000/4 = 44703GB
In such case, could it be that the 830 degraded fast after you passed 44703GB of host writes?
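The arithmetic behind that estimate, as a quick sanity check (the 3,000 P/E spec and WA of 4 are the post's stated assumptions, not measured values):

```python
# Re-derive the 44,703 GB endurance estimate from the assumptions above:
# a 64 GB (decimal) drive, a 3,000 P/E spec, and a write amplification
# of 4 under the ASU workload.
capacity_gb = 64     # decimal GB, as labelled on the drive
pe_spec = 3000       # assumed P/E rating
wa = 4               # assumed write amplification

# Convert decimal GB to binary GiB (divide by 1.024^3) before scaling,
# since host writes in the thread are tracked in GiB.
expected_host_writes = capacity_gb / (1.024 ** 3) * pe_spec / wa
print(round(expected_host_writes))  # -> 44703
```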
Unfortunately there is some crucial info missing from Christopher's run, as that exact attribute failed to update; let's hope that's not the norm.
I have a question regarding the data retention test:
How to verify data retention? Do I set the disk to "Offline" in Windows Disk Management, then use WinHex to calculate an MD5 or SHA-1 of the entire physical disk, then power off the disk for, say, 10 days, then power it on and re-calculate the MD5/SHA-1? I worry that each time I connect and mount the disk, something will be written to the disk by the OS, hence resulting in different hash value of the entire disk.
Edit: just confirmed that Windows 7 writes something to the disk each time it's plugged in, resulting in a different hash value for the entire disk. It looks like the data retention test will have to be done on the hash value of a large file instead.
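Hashing one big file instead of the raw device sidesteps the problem of the OS writing to the disk on every mount. A minimal sketch in Python (the path in the comment is a placeholder, not a real test file):

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """MD5 of one large file, read in 1 MiB chunks so the whole
    file never has to sit in RAM at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

# Usage idea: hash the test file, store the digest on a different disk,
# power the SSD down for the retention period, then re-hash and compare.
# digest_before = file_md5("E:/retention_test.bin")  # placeholder path
```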
Eagerly awaiting some more C300 results. Hopefully one day it will beat that M4.
At last it has arrived.
http://img15.imageshack.us/img15/8983/cdmsmart.png
http://img838.imageshack.us/img838/7536/wp000013c.jpg
Here goes. I'm starting off with the normal endurance workload to see how things progress with the SMART data before ramping up to 4K random full span.
http://img210.imageshack.us/img210/9724/startaw.png
Does your 177 update properly, Ao1?
I took a look at the ASU pic you posted, and it seems... slow. Especially for the beginning of the loop.
It’s too early to say if 177 is updating properly. So far ~470 GiB and no movement on the raw value. If it hasn’t changed by 2,000 GiB I’ll do a power cycle.
Avg write speed is ~64
Yeah, you should be pulling down 110MB/s avg... that's why I say it's looking slow... it's looking like my 830 now.
The 830 didn't get down into the 64MB/s range until after the performance problems. It was doing about 9000GB a day, averaging around 110MB/s.
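Those two figures are consistent with each other; a one-line check:

```python
# 110 MB/s sustained over 24 hours, expressed in GB/day.
mb_per_s = 110
gb_per_day = mb_per_s * 86400 / 1000
print(round(gb_per_day))  # -> 9504, i.e. roughly the quoted ~9000GB/day
```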
Ao1
Did you disable AV scanning on the test folder?
Also, try running smartmontools; it woke up some of my attributes on the Kingston (X25-V). If that doesn't help, a bit of idling may trigger it.
(worth a try as we need to figure out more on the Samsung)
I’ve just disabled MS Security Essentials on the drive and write speeds have improved. At the end of the first loop avg speed was ~83MiB/s. At the end of the second loop I was up to ~86MiB/s. Nice :up:
I also took the opportunity to reboot and there was no change in the 177 values. I’ll try a power cycle if it doesn’t budge sometime soon.
EDIT: ~ 700GiB. Power cycle. No change in 177. Avg write ~100 MiB/s just in from the start of the first loop.
Wow, Ao1... you really should be getting closer to 160MB/s at the beginning of the loop and staying above 100MB/s for the whole loop.
Just to be sure, you turned the system off (or unplugged the drive) and there was no change in 177?
I promise I'm not crazy.
It will most likely take 4-5TiB of writes for 177 to change; I'm not sure the raw value will change until the normalized value is depleted (reaches 1).
I had movement in 177 for the first 10 days. The value increased from 0 to 134 over the first 10 days without a power cycle, and it went from 100 to 96 over that period as well. After the first power cycle, 177 jumped from 96/0/124 to 1/0/5444 I think.
Now I'm really intrigued. I'm thinking about getting another 830 as well. Or another Chronos Deluxe... which would go much more smoothly with 3.3.2FW.
I left the drive to idle for a while to see if 177 would change. It didn’t, but write speeds improved significantly when I restarted the ASU.
http://img16.imageshack.us/img16/8708/withidle.png
Here is a shot at the same point in the second loop. Seems idle time enables the drive to clean itself up. I’ll check with hIOmon later to see if/when the TRIM commands are being executed.
http://img855.imageshack.us/img855/4...dle2ndloop.png
Christopher, have you tried letting your drive idle for couple of hours?
Ao1,
I did let the 830 idle for about 8 hours. But if an SE won't help, then I don't think idling will either.
See, the drive did start out fast, but running with the amount of static data + the min free space caused the drive to get slow after 48 hours. I changed it, and it was able to keep the speed up for the next 8 days -- until I messed with min free space again. This time, nothing would help. Not secure erasing, or full formats, or idle time. But I could try letting it idle again for a few hours. I'll let it idle for the next 5 hours, at which time I will post an update.
Kingston SSDNow 40GB (X25-V)
610.82TB Host writes
Reallocated sectors : 05 21
Available Reserved Space : E8 99
POH 5521
MD5 OK
33.62MiB/s on avg (~72 hours)
--
Corsair Force 3 120GB
01 94/50 (Raw read error rate)
05 2 (Retired Block count)
B1 64 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 651239 (Raw writes) ->636TiB
F1 866775 (Host writes) ->846TiB
MD5 OK
106.77MiB/s on avg (~120 hours)
power on hours : 2494
--
http://www.ssdaddict.com/ss/Endurance_cr_20120106.png
OK, we have some movement (without a reboot or a power cycle).
64GiB * P/E Cycles (1) = 64
Actual writes = 106
WA = 0.60 :rolleyes:
http://img853.imageshack.us/img853/4387/177movement.png
EDIT:
As Anvil has pointed out in a PM, it looks like the LBA count was not updating correctly either. The jump occurred after a reboot and coincided with the increase in write speeds. Very strange.
And after a power cycle:
64GiB * P/E Cycles (40) = 2,560
Actual writes = 684
WA = 3.74 :rolleyes:
http://img822.imageshack.us/img822/1068/powercycle.png
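The before/after WA figures come from the same simple ratio of NAND writes to host writes; a small sketch using the values from the two calculations above:

```python
def write_amplification(pe_cycles, capacity_gib, host_writes_gib):
    """WA = NAND writes (P/E cycles x capacity) / host writes."""
    return pe_cycles * capacity_gib / host_writes_gib

# Before the power cycle, 177 reported only 1 P/E cycle, giving the
# impossible-looking 0.60; after the power cycle it reported 40.
print(round(write_amplification(1, 64, 106), 2))   # -> 0.6
print(round(write_amplification(40, 64, 684), 2))  # -> 3.74
```

The under-reporting of attribute 177 between power cycles is what makes the first figure land below 1.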
And let the punishment begin. The 177 decimal value has increased by only 1 since starting with the 4K workload, so it seems that it does not update properly (at least not while the drive is under a continuous load).
http://img809.imageshack.us/img809/6...eendurance.png
I'll get you an updated version sometime this weekend with more options for write duration.
Let's hope it survives the testing :)