Does nobody think the secure erase endurance testing is as important as what we are doing now ???
Actually, I think it is more important because it stresses the SSD and its cells much more than the continuous writing we are doing now etc.
@bulanula
No, at the moment this is the test that's going on and it will continue to run for quite some time, so, all my resources are bound to this test method for quite some time.
You don't need to ask every other post, I do read every word that is written in this thread and you are the only one pushing that test "pattern".
As One_Hertz mentioned, it will most likely be a manual process and that would make the test time-consuming.
--
160.46TB Host writes
MWI 12
Reallocated sectors: 6
MD5, no errors.
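The MD5 checks reported in these updates can be sketched roughly like this; a minimal Python example, assuming test data is written to files whose known-good hashes are kept for comparison (function names are illustrative, not Anvil's actual tool):

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large test files don't exhaust RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_md5):
    """Return True if the file on the SSD still matches its known-good hash."""
    return md5_of_file(path) == expected_md5
```

A mismatch here would indicate silent data corruption on the drive, which is why "MD5 OK" is reported alongside the write totals.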
I just want to point out that, even though the test would be interesting, it has little relevance compared to a real-life test. Testing random 4K writes simulates heavy OS paging, while testing continuous erases has no comparable real-world usage pattern. It might also be implemented differently from one manufacturer to another. For example, if I were writing SSD firmware, I would definitely add an "if" statement that skips erasing blocks that are already erased. Writing a few MiB/GiB of data before each erase also gives you no guarantee of hitting all blocks, because of the way different controllers choose to cluster pages.
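The hypothetical firmware shortcut described above might look something like this toy model (purely illustrative Python; block structure and names are assumptions, not any vendor's firmware):

```python
# Toy model of a controller that tracks which blocks are already erased
# and skips redundant erases, so repeated "secure erase" passes on an
# already-blank drive would accumulate no extra wear.

class FlashBlock:
    def __init__(self):
        self.erased = True   # fresh NAND starts in the erased state
        self.pe_cycles = 0   # program/erase cycle counter

class Controller:
    def __init__(self, num_blocks):
        self.blocks = [FlashBlock() for _ in range(num_blocks)]

    def erase_block(self, i):
        blk = self.blocks[i]
        if blk.erased:
            return  # already erased: skip, no wear incurred
        blk.erased = True
        blk.pe_cycles += 1

    def program_block(self, i):
        blk = self.blocks[i]
        if not blk.erased:
            self.erase_block(i)  # NAND must be erased before programming
        blk.erased = False
```

Under this model, an erase-only endurance loop would stop incurring wear after the first pass, which is exactly why the results could differ wildly between controllers.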
C300 Update
50.51TiB, 83 MWI, 850 raw wear indicator, 61.8MiB/sec, MD5 OK.
Attachment 117681
Updated charts :)
Host Writes So Far
Attachment 117690
Attachment 117691
(bars with a border = testing stopped/completed)
Raw data graphs
Writes vs. Wear:
Attachment 117692
MWI Exhaustion:
Attachment 117693
Writes vs. NAND Cycles:
Attachment 117694
Normalized data graphs
The SSDs are not all the same size, these charts normalize for total NAND capacity.
Writes vs. Wear:
Attachment 117695
MWI Exhaustion:
Attachment 117696
Write-days data graphs
Not all SSDs write at the same speed, these charts factor out write speeds and look at endurance as a function of time.
Writes vs. Wear:
Attachment 117697
MWI Exhaustion:
Attachment 117698
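The "write-days" normalization above can be reproduced with a little arithmetic; a quick Python sketch using figures posted in this thread (100.7112 TiB at an average of 90.32 MiB/s):

```python
def write_days(host_writes_tib, avg_mib_per_s):
    """Days of continuous writing implied by a total and an average speed."""
    mib_written = host_writes_tib * 1024 * 1024   # TiB -> MiB
    seconds = mib_written / avg_mib_per_s
    return seconds / 86400                        # seconds -> days

# write_days(100.7112, 90.32) comes out to roughly 13.5 days,
# close to the ~343 hours reported (the drive also spends some
# time on verification passes rather than pure writing).
```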
SandForce really depends on the SSD manufacturers (OCZ, Kingston, Patriot, etc.) to enforce their own quality testing and production procedures. While this limits SandForce's financial risk and production investment, that sword cuts both ways: it opens the door for drive manufacturers to release improperly validated designs whose failures then reflect back on SandForce's own reputation.
But in all honesty, knowing all that I know, if I wanted to buy an SSD today,
I would still buy an OCZ SandForce-based SSD (3-year warranty)..
they are typically the best bang for your buck.
And I wouldn't be one of those who complain about getting a 25nm vs 34nm drive..
As with any type of mass storage, just make sure you back up your vital data..
Not "Vertex 3" but does 256GB SF-2582 count?
https://lh6.googleusercontent.com/-8...9305699018.jpg
OEMs :)
It's quite the opposite, actually.
Secure Erase stresses the cells less than continuous writing,
unless you are talking about a drive that has Mil-spec Secure Erase features (and none of you have that).
Evening update:
100.7112 TiB
343 hours
Avg speed 90.32 MiB/s.
AD gone from 44 to 42.
P/E 1755.
MD5 OK
Attachment 117710
Looks like the Samsung is still pulling through. Impressive to say the least, considering it has quite a bit higher WA than the Intel / Crucial drives. Maybe their NAND is rated for more cycles than Intel / Micron ONFI NAND?
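Write amplification here can be estimated from the SMART figures: total NAND writes (average P/E cycle count times NAND capacity) divided by host writes. A hedged Python sketch, assuming a 64 GiB drive purely for illustration:

```python
def write_amplification(pe_cycles, nand_capacity_gib, host_writes_tib):
    """Estimate WA as total NAND writes divided by host writes.

    Total NAND writes ~= average P/E cycle count * NAND capacity.
    """
    nand_writes_tib = pe_cycles * nand_capacity_gib / 1024  # GiB -> TiB
    return nand_writes_tib / host_writes_tib

# Hypothetical numbers: 1755 P/E cycles on an assumed 64 GiB of NAND
# after 100.7 TiB of host writes works out to a WA just over 1.0.
```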
Did you get to try it? (no hurry)
How is the 320 doing :)
--
Having some minor issues on the AMD rig, not sure what's going on and so I've moved the drive to an Intel rig just to make sure that there are no issues with the drive.
(no SMART errors reported but the drive is dropped and is logged with reference to a "controller error", happened twice in 10 minutes)
So, either it's wearing out the IO subsystem on the motherboard or something is developing on the drive.
I've run diagnostics (Intel Toolbox) and there were no issues, it's running the "Endurance test" now just fine although it's been just a few loops.
I'm going to let it run a few more loops before I'm moving it back.
161.82TB Host writes
MWI 12
Reallocated sector count: 6
MD5 OK
ooohhh, high drama! Interesting!
Quote:
Having some minor issues on the AMD rig, not sure what's going on and so I've moved the drive to an Intel rig just to make sure that there are no issues with the drive.
*grabs popcorn*
Doesn't look like the drama is caused by the SSD, it's been > 45 minutes and no issues/errors.
It's late and so I'll just have to move it back to the dedicated test-rig, will know for sure what's going on in 6-7 hours :)
edit:
OK, I've stopped the testing and ran a Full Diagnostics scan, all OK.
Attachment 117714
Odd that the motherboard would start taking issue with the testing before the SSD :wth: :lol:
:)
That port has handled some 100TB of writes.
Well, let's see what tomorrow brings, moving it back to the test-rig right now.
Amazing thread and information, and great to finally have solid evidence that SSDs will last much longer than most would ever expect... especially since I have been saying as much since '07.
Hate that I came in late and am now trying to catch up on it all, but it would be great to have a chart showing total TB written before end of life, followed by the number of years that would equal at the daily write estimate their calculations are based on.
Personally, I see this as much bigger than most would think, and I have at least 3 manufacturer reps looking this over after sending them the link earlier. It truly is the closest we have seen to reliable endurance estimates, and it sure puts to rest any thought that SSDs have an impractically short life span.
Just my two cents, and thanks all, especially Anvil for that benchmark which, IMHO, is the best going right now; I have used it in several reviews... Thank you very much Anvil!
Morning update:
104.0416 TiB
353 hours
Avg speed 89.90 MiB/s.
AD gone from 42 to 40.
P/E 1812.
MD5 OK
Attachment 117721
Not sure what really happened, it has to be related to the test-rig in some way as the drive was moved within minutes to the other rig and it had no idle time and diagnostics were all OK.
(could be caused by a lot of factors like, cabling, heat, OS, drivers,...)
I'll just have to keep an eye on it, it's been running as usual since I restarted the test, no MD5 errors, nothing.
163.07TB Host writes
MWI 11
Reallocated sectors: 6
MD5 -> OK
Evening update:
107.9518 TiB
366 hours
Avg speed 89.60 MiB/s.
AD gone from 40 to 38.
P/E 1880.
MD5 OK.
Attachment 117754
C300 Update
56.83TiB, 81 MWI, 957 raw wear indicator, 61.8MiB/sec, MD5 OK
Attachment 117756
Nice to see the Intel almost going past the 200TB mark!