Me too. Remember, we are doing all of this in the name of science :p:! It is these kinds of "hands-on" tests that make XS what it is. I don't recall anything like this before. Keep it up, guys! More volunteers welcome. We are making history.
Thanks Ao1. Then there is no doubt that my M4 is rated for 3000 P/E cycles :up:
Evening update:
203 hours, 59.5699TiB, Wear Leveling Count and Percentage of the rated lifetime used have gone from 69 to 66.
Avg speed for all 203 hours is roughly 85.47 MiB/s
Attachment 117467
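(Sanity check on that average, for anyone following along: it's just total writes over elapsed time, i.e. 59.5699 TiB × 1,048,576 MiB/TiB ÷ (203 h × 3,600 s/h) ≈ 85.47 MiB/s.)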
C300 Update, charts next post :)
21.43TiB, 93 MWI, 358 raw, 0 reallocated, 62.15MiB/s
Attachment 117466
Updated charts :)
One_Hertz, is there a raw wear indicator for the 320? Hate to think that the only graph it'll be left participating on is just the Host Writes So Far bar graph :eh:
Host Writes So Far
Attachment 117468
(bars with a border = testing stopped/completed)
Raw data graphs
Writes vs. Wear:
Attachment 117469
MWI Exhaustion:
Attachment 117470
Writes vs. NAND Cycles:
Attachment 117471
Normalized data graphs
The SSDs are not all the same size; these charts normalize to 25GiB of onboard NAND.
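(The normalization is linear: normalized writes = actual writes × (25 ÷ NAND GiB). For a hypothetical drive with 64GiB of NAND onboard, 21.43 TiB written would count as 21.43 × 25/64 ≈ 8.37 TiB.)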
Writes vs. Wear:
Attachment 117472
MWI Exhaustion:
Attachment 117473
Write-days data graphs
Not all SSDs write at the same speed; these charts factor out write speeds and look at endurance as a function of time.
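(A write-day here is simply 24 hours of continuous writing; for example, a drive that has been writing for 214 hours straight has accumulated 214 ÷ 24 ≈ 8.9 write-days, regardless of its MiB/s pace.)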
Writes vs. Wear:
Attachment 117474
MWI Exhaustion:
Attachment 117475
Well, there is the reallocated blocks vs. TiB written chart. Unfortunately, the Samsung cannot participate in that (unless one of the two unknown SMART attributes turns out to track that number, but if it did, I would have expected some change in those attributes before now).
SSDs seem really durable these days; the people on other sites are right that you can use an SSD like a normal hard drive. They said the drive will break before the rewrite limit is reached.
So does this mean doing all the stuff to limit SSD writes is a waste of time?
Also good work from all the people working on this experiment.:up:
The 320 can report another value for "wearout"
Look at post #799. Could be that I've already been there, as I've done a few extra tests on the 320 Series; I'll check when I get back home later tonight.
Morning update:
214 hours, 62.8925TiB, Wear Leveling Count and Percentage of the rated lifetime used have gone from 66 to 64.
Avg speed for all 214 hours is roughly 85.6 MiB/s
Attachment 117496
Too bad the Samsung and the Crucial have crippled SMART data. Intel is still the best at this point in time. Hoping they get their act together and develop their own 6Gbps controller with 34nm NAND, tuned first for 4K random read/write (only a few care about sequential)!
Evening update:
225.5 hours, 66.3894 TiB, Wear Leveling Count and Percentage of the rated lifetime used have gone from 64 to 62.
Avg speed for all 225.5 hours is roughly 85.75 MiB/s
Attachment 117512
Morning update:
237 hours, 69.87084 TiB, Wear Leveling Count and Percentage of the rated lifetime used have gone from 62 to 60.
Avg speed for all 237 hours is roughly 85.87 MiB/s
Attachment 117533
Hi Vapor, any chance of showing the writes that have occurred following notification that the MWI is exhausted? Maybe a hatched extension on the bar in the MWI Exhaustion graph?
I'm surprised at how much the Samsung 470 has been able to write after MWI exhaustion. At this rate it will be able to double the amount of data it took to exhaust the MWI.
Is anyone else going to get a SF2xxx drive to test? If not, I might pick one up. One "hacked" and one with throttling enabled would be interesting.
I'll find out on mine as well, just need to get an opportunity to power down for a few minutes.
^ look in your PM.
:)
I'll try to get a special build for you later today, in order to find that special SMART attribute for reporting wearout.
I've figured out the Host writes on the 320 Series using totally undocumented vendor-specific info returned by WMI.
--
149.40TB Host writes
MWI 18
Reallocated sectors: 6
MD5, all tests were OK.
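For anyone who wants to poke at this themselves, here's a minimal sketch, assuming Python plus the third-party wmi package on Windows. It is not the actual tool used here; it just dumps the raw SMART block the way WMI exposes it, following the standard attribute layout rather than anything Intel-specific:
[CODE]
# Minimal sketch: dump raw SMART attributes via WMI (root\wmi namespace).
# Assumes Windows and the third-party "wmi" package (pip install wmi).
# Not the actual code used in this thread; attribute meanings vary by vendor.
import wmi

c = wmi.WMI(namespace="root\\wmi")
for drive in c.MSStorageDriver_ATAPISmartData():
    data = drive.VendorSpecific  # 512-byte SMART attribute block
    # Attribute table starts at byte 2; each of the 30 entries is 12 bytes:
    # id, 2 flag bytes, current value, worst value, 6 raw bytes, 1 reserved.
    for offset in range(2, 2 + 30 * 12, 12):
        attr_id = data[offset]
        if attr_id == 0:
            continue  # empty slot
        value = data[offset + 3]
        raw = int.from_bytes(bytes(data[offset + 5:offset + 11]), "little")
        print(f"attr {attr_id:#04x}: value={value} raw={raw}")
[/CODE]
On Intel drives the host-writes counter is reportedly raw attribute 0xE1 in 32MiB units, but treat that as an assumption and sanity-check it against a known figure.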
@Vapor
A P/E count chart (TiB written / capacity) would be useful in general.
Is nobody going to take up my offer and start testing endurance in terms of secure erases? A secure erase is just like writing the whole capacity of the SSD in a couple of seconds, as it "zaps" the SSD NAND cells and resets them to 0. Anyone interested, so we can see how durable this mechanism is and how many times a drive can be secure erased before it fails? Maybe an automated hdparm script or something. Anyone???
This really is a valid point that also needs to be tested if we are talking about SSD endurance.
Or maybe test it on the V2 drive, so we can see how many secure erases it can take, since the standard endurance test failed because of throttling etc.?
^^
You need to repower the drive every time you do a secure erase. Nobody is going to sit there and do it.
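True, the power cycle itself can't be scripted away, but the per-cycle commands are trivial to automate. A rough sketch, assuming Linux, root, hdparm, and a placeholder device node (destructive, so it's deliberately left pointing at /dev/sdX):
[CODE]
# Sketch of a semi-automated secure-erase loop (Python + hdparm).
# Assumptions: Linux, root privileges, hdparm installed, drive not frozen.
# DESTRUCTIVE: erases the whole drive each cycle. The power cycle between
# erases (noted above) still has to be done by hand.
import subprocess
import sys

DEVICE = "/dev/sdX"  # placeholder; point at the sacrificial test drive
PASSWORD = "p"       # throwaway ATA security password

def secure_erase(device: str) -> None:
    # Set a temporary ATA security password, then issue the secure erase.
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-set-pass", PASSWORD, device], check=True)
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-erase", PASSWORD, device], check=True)

cycles = int(sys.argv[1]) if len(sys.argv) > 1 else 1
for i in range(cycles):
    secure_erase(DEVICE)
    print(f"secure erase #{i + 1} done")
    input("Power-cycle the drive, then press Enter to continue...")
[/CODE]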
That was the intention when I first made it; thanks for reminding me :up:
Anvil, do you mean a bar chart with P/E cycles? Or a bar chart with normalized writes? Unfortunately, only the Crucials, the Samsung, and the SandForce show anything directly related to NAND writes (and therefore P/E cycles).
C300 update from earlier today, didn't have a chance to post.
29.971TiB, 90 MWI, 505 P/E cycles, 61.8MiB/s, ~240/0 MD5 runs/mismatches
Attachment 117544
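(Rough cross-check on those 505 cycles: if this C300 has 64GiB of NAND onboard, which is an assumption on my part, then 29.971 TiB of host writes is 29.971 × 1024 / 64 ≈ 480 capacity-fills, putting write amplification somewhere around 505/480 ≈ 1.05.)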
Evening update:
248.5 hours, 72.6775 TiB, Wear Leveling Count and Percentage of the rated lifetime used have gone from 60 to 59.
Avg speed for all 248.5 hours is roughly 85.18 MiB/s (avg has gone down some due to 30 min of Windows Update)
Attachment 117546
No, I meant "Host writes" / Capacity, which can be used for all* drives, and it should be pretty close to the P/E count for the Intels.
(*all drives if using the running total option)
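To put a number on it: assuming the 320 in this test is the 40GB model (an assumption), the 149.40TB of host writes reported above gives roughly 149,400 / 40 ≈ 3,735 capacity-writes, i.e. a ballpark P/E figure if write amplification is close to 1.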
We will need something once the counters stop telling us what's going on, and that time has come for the 320. Even if we find some way to get to the other wear-out counter, that would still leave the X25 series out in the cold, as there is no extra wear-out counter on those.
It's not much but it's something.