Can someone explain what the attributes "CA-Total Count of Error bits from flash" and "CB-Total Count of Read Sectors with Correctable Bit Errors" are all about?
This white paper is a bit dated, but it explains why ECC is necessary and how it works.
http://www.imation.com/PageFiles/83/...hite-Paper.pdf
226.72TB Host writes
Reallocated sectors : 6
MD5 OK
34.28MiB/s on avg (33.45 hours)
And it just keeps going on. Just got to love'em :up:
m4 update:
266.7679 TiB
883hours
Avg speed 89.39 MiB/s.
AD gone from 205 to 201.
P/E 4668.
MD5 OK.
Still no reallocated sectors
Attachment 118792
Hello all
I am new here. :)
Running a Corsair Force 40 GB. The drive was completely virgin and is a slave drive obviously. Initial screenshots after NTFS format with a 4096-byte allocation unit size:
Attachment 118795Attachment 118796
Min GiB free set to 36 GiB.
so far:
51.27 GiB written
0.26 hours
37.41 MB/s average
Where do you get P/E cycles? Please let me know if you need anything else.
P.S. I didn't realize the MD5 check was built into the utility until after I ran it, so I generated my own checksums manually, but I will enable it if I ever stop the utility (stopping and resuming afterward would be okay, right?)
292.75TB. 48 reallocated sectors. MD5 OK. The value which I think is the wear indicator is at 146.
Ooo, a new drive :)
Need to dot some I's and cross some T's first though :)
What kind of static data do you have? Most of us (all of us?) are running with our drives ~60-70% full of static data and with minimum free space of 12GiB in Anvil's app (Anvil's app will fill the drive until only that much is free, then delete and refill). I see you're running 36GiB free, which leads me to believe 0 static data and very short run lengths of ~1.25GiB (your drive has only ~37.25GiB user capacity, AFAIK).
What compression setting in Anvil's app? Anything but 0-fill (and probably 8%) is preferred for SandForce drives. Incompressible isn't necessary as SMART attribute E9 tracks NAND writes. Based on NAND writes, we can figure out P/E cycles; it isn't a SMART attribute with SandForce drives.
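As a rough sketch of that P/E arithmetic (my own example, not from the thread): SandForce's E9 raw value reports NAND writes in GiB, so average P/E cycles are just total NAND GiB written divided by raw NAND capacity. The 64 GiB capacity figure below is an assumption matching the SF-1200 updates elsewhere in this thread.

```python
# Hypothetical helper: estimate average P/E cycles for a SandForce drive
# from SMART attribute E9 (raw value = NAND writes in GiB).

def estimated_pe_cycles(nand_writes_gib, nand_capacity_gib):
    """Average P/E cycles = total NAND GiB written / raw NAND capacity in GiB."""
    return nand_writes_gib / nand_capacity_gib

# Example using the SF-1200 nLTT numbers from this thread:
# 35.25 TiB of NAND writes on an assumed 64 GiB of raw NAND.
nand_writes_gib = 35.25 * 1024  # TiB -> GiB
print(round(estimated_pe_cycles(nand_writes_gib, 64)))  # prints 564
```

That 564 lines up with the "564 raw wear (equiv)" figure reported for the SF-1200 drive.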
MD5 can be enabled at any time :)
I see that's actually a Force 40-A. I don't know what the -A means for the 40GB model, but every other -A version is 25nm IMFT. I wasn't aware there were 40GB 25nm drives in the wild, could be interesting :)
And last but not least, are you okay with Lifetime Throttle (LTT) kicking in? It's likely that in a couple weeks your drive will slow to a crawl and will attempt to force the drive to last for a very long time (no idea what Corsair set this timespan for, however).
C300 Update
183.345TiB host writes, 38 MWI, 3101 raw wear, 2048/1 reallocations, 62.2MiB/sec, MD5 OK
SF-1200 nLTT Update (from much earlier today)
59.94TiB host writes, 35.25TiB NAND writes, 85 MWI, 564 raw wear (equiv), wear range delta 3, 56.2MiB/sec, MD5 OK.
Charts Updated :)
All bar charts are sorted by their respective equivalent of Writes So Far.
Host Writes So Far
Attachment 118797
Attachment 118798
Normalized Writes So Far
The SSDs are not all the same size, these charts normalize for available NAND capacity.
Attachment 118799
Attachment 118800
Write Days So Far
Not all SSDs write at the same speed, these charts factor out write speeds and look at endurance as a function of time.
Attachment 118801
Attachment 118802
Host Writes vs. NAND Writes and Write Amplification
Based on reported or calculated NAND cycles from wear SMART values divided by total writes.
Attachment 118803
Attachment 118804
Attachment 118805
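The write-amplification math behind that last chart is simple enough to sketch (a minimal example of my own, using figures from the SF-1200 nLTT update above):

```python
# Write amplification = NAND writes / host writes.
# A value below 1.0 means the controller wrote less to NAND than the
# host sent (SandForce compression); above 1.0 means extra NAND wear.

def write_amplification(nand_writes_tib, host_writes_tib):
    return nand_writes_tib / host_writes_tib

# SF-1200 nLTT figures from this thread: 40.75 TiB NAND, 66.938 TiB host.
print(round(write_amplification(40.75, 66.938), 2))  # prints 0.61
```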
Thanks for your reply Vapor.
I only have a few files on the drive: a word file, an excel file, and an image file. In total they are consuming approximately 2.2 MiB.
Currently:
3500.91 GiB written
14.53 hours
68.21 MB/s average
I will double check this but afaik the F40-A is not 25nm based. There was a slight change in B.O.M with regards to some other chipset components which also gave rise to the firmware versions beyond 2.0. All of the original Force drives are at 2.0. Depending on what new B.O.M. you have, you'll have either 2.1b or 2.2.
Also, afaik, the SMART data is mostly broken in the firmware versions beyond 1.1. The health, for example, is rigged to always say 100% (if somebody else finds otherwise, please let me know).
I have no problem with LTT kicking in. In fact, it was one of my plans to figure out just when exactly that does activate. The drive was meant to be expendable, so I have no problem ruining it. I also plan on secure erasing the drive after it kicks in to see if that would restore any of the performance. From your experience, you think that it would most likely not do anything right?
Lastly:
1) What do you recommend I set the compression to?
2) What do you recommend I set the GiB free setting to?
3) Should I hit stop to change these settings, as well as enable MD5?
Thanks. :)
m4 update:
273.4735 TiB
905hours
Avg speed 89.43 MiB/s.
AD gone from 201 to 197.
P/E 4786.
MD5 OK.
Still no reallocated sectors
Attachment 118835
I'll be joining with a second ssd within a week or two. I just ordered a Kingston SSDNow V+ 100 64GB.
@SynbiosVysem, the rest of us are keeping a lot more static data on the drive than that...basically the same amount as a compact Windows install.
SMART isn't broken, the algorithm is just changed. MWI will drop below 100% after 5-10TiB of NAND writes, then count down normally from there.
When LTT kicks in, the only thing that will restore performance is idle time connected to a PC. A secure erase won't do anything (other than restore the drive to a fresh state compared to a used/steady state...but LTT is independent of burst throttling and new/used/steady state). Will be interesting to see what lifetime length your drive is set for (will be able to tell once LTT kicks in). The Vertex 2 40GB (34nm) was set for 1 year with 5000 P/E cycle NAND but we have no idea what Corsair set theirs for (or even what NAND you have).
I'd recommend either 46% or 67% compression. 46% acts an awful lot like Windows/applications/caches and 67% acts an awful lot like documents/pictures (DOCX, XLSX, DNG, PNG, and JPEG).
I'd recommend GiB free set to 12 like the rest of us.
And yeah, feel free to stop/start at will (you don't even need to have "Keep running totals" checked because your drive tracks both host and NAND writes).
@B.A.T, very excited for the V+100!
I think at first (while MWI=100), twice a day should be good (if you can). That way we get a decent idea of how long MWI stayed at 100. After that, once a day or even less frequently is probably fine as MWI drops linearly. With LTT looming, checking in periodically on speeds (even if you don't report) doesn't hurt :)
229.87TB Host writes
Reallocated sectors : 6
MD5 OK
I threw the M225->Vertex Turbo 64GB on a Windows 7 Enterprise 64bit system late this afternoon before I left work to see how it reacts with trim.
I ripped through 20 loops and was at 99.98 MB/s Avg when I left :yepp:
Can't wait to see what numbers it brings in the morning :)
C300 Update
189.23TiB host writes, 36 MWI, 3200 raw wear, 2048/1 reallocations, 62.3MiB/sec, MD5 OK
SF-1200 nLTT Update
66.938TiB host writes, 40.75TiB NAND writes, 82 MWI, 652 raw wear (equiv), 56MiB/sec, MD5 OK
I would agree with that statement; the terrible WA on that drive is just indicative of how resilient this NAND is. Quote:
Samsung is breaking the 400TiB barrier tonight. Wow! I don't care if its WA is 5 or 5000. This drive is a winner!
If we take into consideration the WA of the Samsung, and compare that to the WA of the other drives, what would the expected amount of writes for the other drives be? They should basically last forever. lol.
If the write amplification of the Samsung is at 5.14 and it is passing 400TiB, would I be wrong to extrapolate that the NAND itself would last *at least* until 1.9PB for one of the drives which have a WA of 1?
My thought process is that this test is hammering that NAND at a much harder rate than the others are.
Well, 400TiB * 5.14x = 2.01PiB. That's pretty epic already.
If the other 60/64GB drives survive to 2PiB with their 1.00-1.11x WA and 55-90MiB/sec speeds, we're going to be at this for a very, very long time.
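That extrapolation is easy to sanity-check (assumed figures from this exchange; it ignores NAND differences between drives, so treat it as back-of-the-envelope only):

```python
# If the Samsung's NAND has absorbed 400 TiB of host writes at WA 5.14,
# the NAND itself has seen ~400 * 5.14 TiB of writes. A drive with
# WA 1.0 and equally resilient NAND could in theory take that many
# TiB of *host* writes before hitting the same wear.

samsung_host_tib = 400
samsung_wa = 5.14

nand_writes_tib = samsung_host_tib * samsung_wa  # 2056 TiB
print(round(nand_writes_tib / 1024, 2))          # prints 2.01 (PiB)
```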
Wasn't part of the impetus for this test some site torturing a drive with over 1PB of writes and people doubting it? Looking more and more plausible now.