Probably the drive just needs extra overprovisioning. Those 60GB are only enough to make the drive enter MLC-1 mode. To keep it there, I would guess you would need even more overprovisioning, like 70-75GB.
Secure erase fixed it:
Attachment 128683
At what point did it start slowing down?
(history might repeat itself)
Here are today's updates:
m4
1264.4222 TiB
5272 hours
Avg speed 74.35 MiB/s.
AD 147 to 144
P/E 21747.
C3 4457
CE 58
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=498120
Intel X25-M G1 80GB
813.02 TiB
24559 hours
Reallocated sectors : 134 to 153
Available Reserved space: 45 to 44
MWI= 169 to 152
MD5 =OK
31.39 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=498119
Intel X25-E 64GB
556.88 TiB
1740-30=1710 hours
Reallocated sectors : 0
Available Reserved space: 100
MWI= 96
MD5 =OK
91.16 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=498121
Samsung 830 256GB Day 125
(GiB) 3,071,152
(TiB) 2,999
(PiB) 2.95
(Avg) 298.87 MB/s
(B1) Wear Leveling Count: 13,304
(B6) Erase Fail Count: 9
(05) Reallocated Sectors: 36864
(POH) 3045
-------------------
The drive actually has 3012 TiB in LBA writes. I lost 13 TiB in ASU along the way.
Intel 330 120GB
600.30TB Host writes
2.12TiB Host reads
Reallocated sectors : 05 24 +2
Available Reserved Space : E8 100
MWI 30
[B5] 12
[B6] 12
[F1] Total LBAs Written 19670569
[F2] Total LBAs Read 69534
[F9] Total NAND Writes 433332GB // ~423TiB
POH 1451
MD5 OK
125.21MiB/s on avg (~324 hours)
--
Kingston SSDNow 40GB (X25-V)
1141.37TB Host writes (37400662*32)
Reallocated sectors : 05 65
Available Reserved Space : E8 98
POH 10186
MD5 OK
33.34MiB/s on avg (~324 hours)
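For reference, the figure in parentheses is the raw SMART host-writes counter; on these drives it appears to tick once per 32 MiB, which is why it is shown as raw * 32. A minimal sketch of the conversion (the 32 MiB unit is an assumption based on how the numbers line up):
Code:
# Sketch: raw SMART host-writes counter -> TiB, assuming 32 MiB per raw unit.
def host_writes_tib(raw_counter, unit_mib=32):
    mib_written = raw_counter * unit_mib
    return mib_written / 1024 / 1024  # MiB -> GiB -> TiB

print(round(host_writes_tib(37400662), 2))  # ~1141.4, in line with the report above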
--
Sandisk Extreme G25 - 120gb - Day 49
Drive hours: 1173
ASU GiB written: 506,710.51 GiB (494.83 TiB)
Avg MB/s: 125.81 MB/s (7.19 hours)
MD5: OK
Host GB written (F1): 510,518 GiB (498.55 TiB)
NAND writes (E9): 381,058 GiB (372.13 TiB)
Retired Block Count (05): 4 raw
Failure count (AB, AC): 1 program, 1 erase
Raw Error Rate (01): 109 normalized
Media Wearout Indicator (E7): 10 normalized
Wear Range Delta (B1): 3 raw
Reported Uncorrectable Errors (BB): 0 raw
Vertex 4 - 50% OP - 128gb - PAUSED
Drive hours: 101
ASU GiB written: 61,811.98 GiB (60.36 TiB)
Avg MB/s: 0 MB/s (0 hours)
MD5: OK
Host GB written (E8): 62,941.02 GiB (61.47 TiB, 131996886814 raw)
Raw Read Error Rate (01): 5 raw
Reallocated Block Count (05): 0 raw
Remaining Life (E9): 84 normalised
I can't seem to keep the vertex 4 out of storage mode. Secure wipe only helped for half an hour or so, then performance dropped again. I'll try a firmware reflash followed by another secure wipe. See if that helps.
Strange thing is that it ran fine for 3 days .....
If I can't get this working right, I may need to abandon breaking this V4 ... which would mean I would need to go find another victim in its place.
So, having secure erased it performs as expected in benchmarks?
Can you get it to perform in benchmarks once it has entered the so-called storage mode, by normal means, e.g. by trimming a large file?
It always performs fine in benchmarks ... benchmarks simply don't push enough writes to the drive.
You see my graphs above showing performance degradation while running ASU ... the first 10gig or so of writes was always ultra-quick, even when the run was being slowed significantly by the storage mode later in each cycle.
My ASU runs always keep 5 gig free, and I am running a 61000meg partition.
But I'll try smaller partitions if I have problems again. (I've just reflashed and secure erased, then ran WEI after creating the partition to make sure Windows knows it is an SSD and uses TRIM.)
My rig froze 2 hours ago; I can't find what caused it, nothing in the Event Log.
I suspect it was caused by the X25-V, as there were 0 files in the test folder; as in, all files had been deleted and it was just about to start a new loop.
Both drives look to be OK, I'll keep an extra eye on the rig this evening.
Intel 330 120GB
604.10TB Host writes
2.22TiB Host reads
Reallocated sectors : 05 26 +2
Available Reserved Space : E8 100
MWI 29
[B5] 13
[B6] 13
[F1] Total LBAs Written 19795323
[F2] Total LBAs Read 72844
[F9] Total NAND Writes 436080GB // ~426TiB
POH 1460
MD5 OK
125.29MiB/s on avg (~3 hours)
Once restarted, the 330 quickly threw a few reallocated sectors.
Attachment 128713
--
Kingston SSDNow 40GB (X25-V)
1142.45TB Host writes (37435802*32)
Reallocated sectors : 05 65
Available Reserved Space : E8 98
POH 10195
MD5 OK
38.44MiB/s on avg (~3 hours)
--
Attachment 128712
Only 40 more hours to 3PiB. That feels like a monumental number, but that would be the equivalent of a 64GB drive hitting 775TiB -- i.e. 3/4 of a PiB.
Samsung 830 256GB Day 126
(GiB) 3,098,472
(TiB) 3,025
(PiB) 2.97
(Avg) 299.06 MB/s
(B1) Wear Leveling Count: 13,422
(B6) Erase Fail Count: 9
(05) Reallocated Sectors: 36864
(POH) 3071
-------------------
The 830 actually has 3040TiB on it, so a little over 24 hours from now should be 3PiB
About 14 hours later...
Intel 330 120GB
609.96TB Host writes
2.24TiB Host reads
Reallocated sectors : 05 31 +5
Available Reserved Space : E8 100
MWI 29
[B5] 15
[B6] 16
[F1] Total LBAs Written 19987316
[F2] Total LBAs Read 73298
[F9] Total NAND Writes 440309GB // ~430TiB
POH 1474
MD5 OK
125.21MiB/s on avg (~17 hours)
This time, B5 is not equal to B6.
--
Kingston SSDNow 40GB (X25-V)
1144.07TB Host writes (37488837*32)
Reallocated sectors : 05 65
Available Reserved Space : E8 98
POH 10209
MD5 OK
35.39MiB/s on avg (~17 hours)
--
Here are today's updates:
m4
1276.6606 TiB
5320 hours
Avg speed 74.35 MiB/s.
AD 144 to 137
P/E 21954.
C3 4457
CE 58
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=498318
Intel X25-M G1 80GB
818.04 TiB
24607 hours
Reallocated sectors : 153 to 201
Available Reserved space: 44 to 41
MWI= 152 to 135
MD5 =OK
31.07 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=498319
Intel X25-E 64GB
572.00 TiB
1788-30=1758 hours
Reallocated sectors : 0
Available Reserved space: 100
MWI= 96
MD5 =OK
91.22 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=498317
Mtron Pro 7025 32GB
Paused until new firmware is updated.
Samsung 830 256GB Day 127
(GiB) 3,122,132
(TiB) 3,048
(PiB) 3.00
(Avg) 298.61 MB/s
(B1) Wear Leveling Count: 13,524
(B6) Erase Fail Count: 9
(05) Reallocated Sectors: 36864
(POH) 3094
-------------------
It's behaving nicely :)
I'll make sure to check what chips are used on both drives. (eventually)
Intel 330 120GB
620.21TB Host writes
2.26TiB Host reads
Reallocated sectors : 05 33 +2
Available Reserved Space : E8 100
MWI 27
[B5] 16
[B6] 17
[F1] Total LBAs Written 20323087
[F2] Total LBAs Read 74149
[F9] Total NAND Writes 447708GB // ~437TiB
POH 1497
MD5 OK
125.20MiB/s on avg (~41 hours)
--
Kingston SSDNow 40GB (X25-V)
1146.79TB Host writes (37578229*32)
Reallocated sectors : 05 65
Available Reserved Space : E8 98
POH 10233
MD5 OK
34.12MiB/s on avg (~41 hours)
--
Sandisk Extreme G25 - 120gb - Day 51
Drive hours: 1221
ASU GiB written: 527,275.97 GiB (514.92 TiB)
Avg MB/s: 125.81 MB/s (7.19 hours)
MD5: OK
Host GB written (F1): 531,263 GiB (518.81 TiB)
NAND writes (E9): 396,545 GiB (387.25 TiB)
Retired Block Count (05): 4 raw
Failure count (AB, AC): 1 program, 1 erase
Raw Error Rate (01): 100 normalized
Media Wearout Indicator (E7): 10 normalized
Wear Range Delta (B1): 4 raw
Reported Uncorrectable Errors (BB): 0 raw
Vertex 4 - 50% OP - 128gb - PAUSED
Drive hours: 101
ASU GiB written: 62,800.31 GiB (61.33 TiB)
Avg MB/s: 0 MB/s (0 hours)
MD5: OK
Host GB written (E8): 63,967.47 GiB (62.47 TiB, 134149509504 raw)
Raw Read Error Rate (01): 5 raw
Reallocated Block Count (05): 0 raw
Remaining Life (E9): 84 normalised
My last attempt to get the vertex 4 working in performance mode failed after only 20 minutes or so. I also tried increasing the time between runs to 30 seconds to give the drive more recovery time, but that didn't help.
I am now testing with a 50,000 Megabyte partition, which is about what an enterprise-focused SSD would get on a 64gig SLC drive.
It's too bad about the V4. I honestly think the 128 is one of the most interesting drives on the market, but not giving end users a choice to stick with pre 1.4fw if they so choose is a bit off-putting. 1.4+ is not for everyone IMHO.
Both drives produced a few reallocations.
Intel 330 120GB
623.95TB Host writes
2.27TiB Host reads
Reallocated sectors : 05 35 +2
Available Reserved Space : E8 100
MWI 27
[B5] 17
[B6] 18
[F1] Total LBAs Written 20445743
[F2] Total LBAs Read 74490
[F9] Total NAND Writes 450411GB // ~440TiB
POH 1506
MD5 OK
125.15MiB/s on avg (~49 hours)
Attachment 128827
--
Kingston SSDNow 40GB (X25-V)
1147.79TB Host writes (37610723*32)
Reallocated sectors : 05 67 +2
Available Reserved Space : E8 98
POH 10241
MD5 OK
33.95MiB/s on avg (~49 hours)
--
Attachment 128824
Attachment 128825
Here are today's updates:
m4
1282.7254 TiB
5344 hours
Avg speed 74.38 MiB/s.
AD 137 to 133
P/E 22056.
C3 4457
CE 58
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=498387
Intel X25-M G1 80GB
820.52 TiB
24631 hours
Reallocated sectors : 201 to 226
Available Reserved space: 41 to 40
MWI= 135 to 128
MD5 =OK
31.00 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=498388
Intel X25-E 64GB
579.46 TiB
1812-30=1782 hours
Reallocated sectors : 0
Available Reserved space: 100 to 99
MWI= 96
MD5 =OK
91.26 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=498386
Mtron Pro 7025 32GB
Paused until new firmware is updated.
1000TB of host writes seems like a lot!
I wonder if the Corsair Force GT 240GB can handle that kind of write endurance?
Intel 330 120GB
634.53TB Host writes
2.30TiB Host reads
Reallocated sectors : 05 43 +8
Available Reserved Space : E8 100
MWI 26
[B5] 21
[B6] 22
[F1] Total LBAs Written 20792174
[F2] Total LBAs Read 75341
[F9] Total NAND Writes 458046GB // ~447TiB
POH 1530
MD5 OK
125.21MiB/s on avg (~74 hours)
--
Kingston SSDNow 40GB (X25-V)
1150.59TB Host writes (37702684*32)
Reallocated sectors : 05 67
Available Reserved Space : E8 98
POH 10266
MD5 OK
33.70MiB/s on avg (~74 hours)
--
Samsung 830 256GB Day 128
(GiB) 3,159,768
(TiB) 3,085
(PiB) 3.03
(Avg) 298.40 MB/s
(B1) Wear Leveling Count: 13,686
(B6) Erase Fail Count: 9
(05) Reallocated Sectors: 36864
(POH) 3130
-------------------
Bah, my vertex 4 is still not performing right.
Will decide whether to just kill it as is, or just keep it for something else.
Yeah, a bugfixed 1.3 firmware would have been good ... but I think the vertex 4 is solely focused on write benchmarks at all costs. It kinda has tunnel vision when it comes to other SSD requirements.
The plextor/liteon drives are a much better buy at the moment for performance.
I still have an unopened Vertex Turbo 64GB laying around somewhere, maybe the third time is the charm?
...Probably not.
Intel 330 120GB
641.41TB Host writes
2.32TiB Host reads
Reallocated sectors : 05 43
Available Reserved Space : E8 100
MWI 25
[B5] 21
[B6] 22
[F1] Total LBAs Written 21017815
[F2] Total LBAs Read 75912
[F9] Total NAND Writes 463017GB // ~452TiB
POH 1546
MD5 OK
125.50MiB/s on avg (~11 hours)
--
Kingston SSDNow 40GB (X25-V)
1152.56TB Host writes (37767241*32)
Reallocated sectors : 05 67
Available Reserved Space : E8 98
POH 10282
MD5 OK
36.89MiB/s on avg (~11 hours)
--
I needed the SATA ports last night so both drives had a 10 minute break. (gave them a cleaning as well by triggering TRIM)
--
Attachment 128870
I've changed the filling for the drives that are paused.
Based on current figures this might be the outcome at MWI 0
Attachment 128871
We used a Vertex 3 MaxIOPS 240GB at work for a critical server.
We had write errors from Oracle, so we switched the SSD. We discovered that even with almost perfect SMART data (2TB written, 200TB read, maximum difference between most worn cell and best cell = 1), it was impossible to copy an 8GB file entirely; it blocks when there is 760MB left. All the other data was intact.
How can this be possible? Isn't the controller supposed to mark bad blocks and not use them? Or is it because the SSD doesn't have idle time?
Yesterday I upgraded the firmware to 2.22. It had been on 2.15 since it was installed.
We are really not a support service for OCZ, especially since I'm here to kill OCZ drives, but there seems to be something wrong with either your SSDs or your server. I've never seen such a situation on SSDs here, and I've copied some whopper 20gig files to and from them.
Kensiko...this thread is dedicated to endurance testing, maybe you should post in the hardware-storage section.
Here are today's updates:
m4
1295.2153 TiB
5392 hours
Avg speed 74.43 MiB/s.
AD 133 to 126
P/E 22267.
C3 4457
CE 58
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=498606
Intel X25-M G1 80GB
825.25 TiB
24679 hours
Reallocated sectors : 226 to 255 to 17
Available Reserved space: 40 to 37
MWI= 128 to 111
MD5 =OK
30.75 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=498607
Intel X25-E 64GB
594.85 TiB
1860-30=1830 hours
Reallocated sectors : 0
Available Reserved space: 99
MWI= 96
MD5 =OK
91.32 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=498605
Mtron Pro 7025 32GB
Paused until new firmware is updated.
Samsung 830 256GB Day 129
(GiB) 3,197,663
(TiB) 3,122
(PiB) 3.07
(Avg) 298.46 MB/s
(B1) Wear Leveling Count: 13,849
(B6) Erase Fail Count: 9
(05) Reallocated Sectors: 36864
(POH) 3177
-------------------
4PB, here I come. It's odd that it's been almost 1.5PB without an erase failure, but hey, I'll take it.
Intel 330 120GB
651.40TB Host writes
2.34TiB Host reads
Reallocated sectors : 05 45 +2
Available Reserved Space : E8 100
MWI 24
[B5] 22
[B6] 23
[F1] Total LBAs Written 21345238
[F2] Total LBAs Read 76706
[F9] Total NAND Writes 470230GB // ~459TiB
POH 1569
MD5 OK
125.28MiB/s on avg (~35 hours)
--
Kingston SSDNow 40GB (X25-V)
1155.21TB Host writes (37854060*32)
Reallocated sectors : 05 67
Available Reserved Space : E8 98
POH 10305
MD5 OK
34.38MiB/s on avg (~35 hours)
--
Sandisk Extreme G25 - 120gb - Day 54
Drive hours: 1291
ASU GiB written: 558,052.55 GiB (544.97 TiB)
Avg MB/s: 125.86 MB/s (69.64 hours)
MD5: OK
Host GB written (F1): 562,193 GiB (549.02 TiB)
NAND writes (E9): 419,639 GiB (409.80 TiB)
Retired Block Count (05): 4 raw
Failure count (AB, AC): 1 program, 1 erase
Raw Error Rate (01): 100 normalized
Media Wearout Indicator (E7): 10 normalized
Wear Range Delta (B1): 4 raw
Reported Uncorrectable Errors (BB): 0 raw
Vertex 4 - 50% OP - 128gb - PAUSED
Drive hours: 214
ASU GiB written: 96,525.65 GiB (94.26 TiB)
Avg MB/s: 0 MB/s (0 hours)
MD5: OK
Host GB written (E8): 97,836.11 GiB (95.54 TiB, 205177187947 raw)
Raw Read Error Rate (01): 5 raw
Reallocated Block Count (05): 0 raw
Remaining Life (E9): 77 normalised
I decided to let the V4 go for a while, but the loop started degrading to sub-100MB/s speeds (after the initial burst of performance).
Drive just doesn't want to do what I want it to do.
Here are today's updates:
m4
1301.0084 TiB
5415 hours
Avg speed 74.71 MiB/s.
AD 126 to 123
P/E 22365.
C3 4457
CE 58
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=498690
Intel X25-M G1 80GB
827.79 TiB
24702 hours
Reallocated sectors : 17 to 42
Available Reserved space: 37 to 35
MWI= 111 to 104
MD5 =OK
29.87 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=498691
Intel X25-E 64GB
602.03 TiB
1883-30=1853 hours
Reallocated sectors : 0
Available Reserved space: 99
MWI= 96
MD5 =OK
92.44 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=498689
Mtron Pro 7025 32GB
Paused until new firmware is updated.
I'm going on vacation for 3 weeks, so my updates may be kind of sporadic. I've installed a UPS to prevent a shutdown from power failures and will be crossing my fingers :)
I think the G1 might soon die on me. It drops reserve space by 1-2% every day; I just hope it stays in there until I'm back again.
B.A.T. -- I don't know about you, but my drives only like to die when I'm out of town. I'd say it's a guarantee that something will die while you're gone. That's just how it works.
I'll just have to wait and see :D
Intel 330 120GB
661.85TB Host writes
2.37TiB Host reads
Reallocated sectors : 05 52 +7
Available Reserved Space : E8 100
MWI 22
[B5] 25
[B6] 27
[F1] Total LBAs Written 21687618
[F2] Total LBAs Read 77614
[F9] Total NAND Writes 477773GB // ~466TiB
POH 1593
MD5 OK
125.31MiB/s on avg (~59 hours)
--
Kingston SSDNow 40GB (X25-V)
1157.99TB Host writes (37945037*32)
Reallocated sectors : 05 67
Available Reserved Space : E8 98
POH 10329
MD5 OK
33.89MiB/s on avg (~59 hours)
--
Attachment 128910
Intel 330 120GB
672.40TB Host writes
2.39TiB Host reads
Reallocated sectors : 05 60 +8
Available Reserved Space : E8 100
MWI 21
[B5] 29
[B6] 31
[F1] Total LBAs Written 22033283
[F2] Total LBAs Read 78465
[F9] Total NAND Writes 485387GB // ~474TiB
POH 1617
MD5 OK
125.41MiB/s on avg (~83 hours)
--
Kingston SSDNow 40GB (X25-V)
1160.78TB Host writes (38036572*32)
Reallocated sectors : 05 68 +1
Available Reserved Space : E8 98
POH 10354
MD5 OK
33.69MiB/s on avg (~83 hours)
--
Sandisk Extreme G25 - 120gb - Day 56
Drive hours: 1343
ASU GiB written: 580,836.71 GiB (567.22 TiB)
Avg MB/s: 125.13 MB/s (121.85 hours)
MD5: OK
Host GB written (F1): 585,091 GiB (571.38 TiB)
NAND writes (E9): 436,741 GiB (426.50 TiB)
Retired Block Count (05): 4 raw
Failure count (AB, AC): 1 program, 1 erase
Raw Error Rate (01): 116 normalized
Media Wearout Indicator (E7): 10 normalized
Wear Range Delta (B1): 4 raw
Reported Uncorrectable Errors (BB): 0 raw
I gave both drives a cleaning (trim) last night, doesn't look like it helped.
Intel 330 120GB
682.46TB Host writes
2.42TiB Host reads
Reallocated sectors : 05 74 +14
Available Reserved Space : E8 100
MWI 20
[B5] 36
[B6] 38
[F1] Total LBAs Written 22362756
[F2] Total LBAs Read 79261
[F9] Total NAND Writes 492645GB // ~481TiB
POH 1640
MD5 OK
125.22MiB/s on avg (~12 hours)
--
Kingston SSDNow 40GB (X25-V)
1163.53TB Host writes (38126614*32)
Reallocated sectors : 05 68
Available Reserved Space : E8 98
POH 10377
MD5 OK
35.29MiB/s on avg (~12 hours)
--
Reallocated sectors are currently at 80 on the 330; it does look controlled, but it is a bit early for a 120GB drive.
According to the recent review at Anandtech it is populated with "cheaper, lower endurance MLC NAND".
Attachment 128999
Attachment 128998
The odd thing about Anand's review is his short endurance test. He has 7629 GiB of NAND writes to a 60GB 330, but MWI is still at 100:
http://images.anandtech.com/reviews/...ter47hours.jpg
If the 330 flash has 3000 P/E cycles, then I would expect each decrement in MWI to be 30 (=3000/100) cycles. But if there is 64GiB of flash on board, then 7629 GiB comes to 119.2 (=7629/64) cycles. The MWI should be at 97 or 96. But it is still at 100.
If the MWI is linear (and not buggy), then at least 119.2 cycles per MWI comes to at least 11920 P/E cycles for the flash, which is hard to believe.
I wonder if there is a bug in Intel's MWI computation. Maybe they are using host writes instead of NAND writes. Since Anand's host writes are only 1.55 TiB, that comes to only 24.8 (=1.55x1024/64) cycles if we incorrectly assume write amplification is 1.0. Since 3000 P/E cycle flash would be 30 cycles per MWI decrement, then we would expect the MWI to still be at 100 if Intel is incorrectly using host writes rather than NAND writes to compute MWI.
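A quick sketch of both interpretations side by side, using the same assumptions as above (3000-cycle flash, 64GiB of NAND on board, linear MWI):
Code:
# Expected MWI for Anand's 60GB 330, assuming 3000-cycle flash, 64 GiB of NAND
# and a linear MWI that drops one point per 30 P/E cycles.
RATED_CYCLES = 3000
FLASH_GIB = 64
CYCLES_PER_MWI_POINT = RATED_CYCLES / 100  # 30 cycles per MWI decrement

def expected_mwi(gib_written):
    avg_cycles = gib_written / FLASH_GIB
    return 100 - int(avg_cycles / CYCLES_PER_MWI_POINT)

print(expected_mwi(7629))         # from NAND writes -> 97, not the 100 reported
print(expected_mwi(1.55 * 1024))  # from host writes -> 100, matching the report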
In contrast, the MWI for Anvil's 120GB 330 looks like it is counting for about 5000 cycles:
492645GiB/128GiB = 3848.8 cycles.
Quote:
682.46TB Host writes
2.42TiB Host reads
Reallocated sectors : 05 74 +14
Available Reserved Space : E8 100
MWI 20
[B5] 36
[B6] 38
[F1] Total LBAs Written 22362756
[F2] Total LBAs Read 79261
[F9] Total NAND Writes 492645GB // ~481TiB
If MWI wears out at 0, then it looks like it is counting to:
3848.8 x 100 / (100 - 20) = 4811 P/E cycles
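Spelled out as a small sketch (again assuming 128GiB of NAND and a linear MWI):
Code:
# Implied P/E rating for Anvil's 120GB 330, extrapolated linearly to MWI 0.
nand_writes_gib = 492645
flash_gib = 128
mwi_now = 20

avg_cycles = nand_writes_gib / flash_gib             # ~3848.8 cycles so far
implied_rating = avg_cycles * 100 / (100 - mwi_now)  # projection to MWI 0
print(round(avg_cycles, 1), round(implied_rating))   # 3848.8 4811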
There is a grace period just like in the original SF firmware.
It took 14.87 TiB of NAND writes (21.12TiB of Host Writes) before it turned 99.
It averages (from MWI 98 and down) 8.29TiB (host) / 5.84TiB (nand) writes between each MWI decrement.
At the current rate, my drive looks to point to about 5400 (writes / capacity "P/E cycles") and on average 53.53 cycles per MWI.
(figures are changing as reallocated-sectors increase)
Based on NAND writes
Attachment 129016
That seems like a bug to me. I can see having a "grace period" for not throttling the drive under heavy writes. But it makes no sense to have a "grace period" for MWI. I have seen no evidence that the flash wears out more slowly during the first few percent of flash writes.
I'm surprised Intel did not fix that bug. Maybe Intel does not really have full access to the Sandforce firmware.
As SMART is adapted to Intel's way of reporting there could of course be side-effects.
Will have to check how the other (non-throttled) SF based drives worked during the first few MWI decrements.
The Force 3 changed to 99 at about 15TiB and then again to 98 at about 20TiB. (Link)
Here are today's updates:
m4
1320.3993 TiB
5491 hours
Avg speed 74.58 MiB/s.
AD 123 to 112
P/E 22695.
C3 4457
CE 58
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=498933
Intel X25-M G1 80GB
835.05 TiB
24778 hours
Reallocated sectors : 42 to 115
Available Reserved space: 35 to 31
MWI= 104
MD5 =OK
27.79 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=498931
Intel X25-E 64GB
625.97 TiB
1959-30=1929 hours
Reallocated sectors : 0 to 2
Available Reserved space: 99
MWI= 96 to 95
MD5 =OK
91.68 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=498935
Mtron Pro 7025 32GB
Paused until new firmware is updated.
The G1 continues its path to death and the E had its first 2 reallocated sectors. The m4 just steams on :)
Intel 330 120GB
692.78TB Host writes
2.44TiB Host reads
Reallocated sectors : 05 100 +26
Available Reserved Space : E8 100
MWI 18
[B5] Program fail count 49
[B6] Erase fail count 51
[F1] Total LBAs Written 22700885
[F2] Total LBAs Read 80112
[F9] Total NAND Writes 500094GB // ~488TiB
POH 1664
MD5 OK
125.36MiB/s on avg (~35 hours)
--
Kingston SSDNow 40GB (X25-V)
1166.27TB Host writes (38216332*32)
Reallocated sectors : 05 68
Available Reserved Space : E8 98
POH 10401
MD5 OK
33.90MiB/s on avg (~35 hours)
--
Sandisk Extreme G25 - 120gb - Day 58
Drive hours: 1343
ASU GiB written: 600,727.94 GiB (586.65 TiB)
Avg MB/s: 124.72 MB/s (167.61 hours)
MD5: OK
Host GB written (F1): 605,082 GiB (590.90 TiB)
NAND writes (E9): 451,673 GiB (441.08 TiB)
Retired Block Count (05): 4 raw
Failure count (AB, AC): 1 program, 1 erase
Raw Error Rate (01): 118 normalized
Media Wearout Indicator (E7): 10 normalized
Wear Range Delta (B1): 4 raw
Reported Uncorrectable Errors (BB): 0 raw
Samsung 830 256GB Day 134
(GiB) 3,308,292
(TiB) 3,230
(PiB) 3.17
(Avg) 297.92 MB/s
(B1) Wear Leveling Count: 14,328
(B6) Erase Fail Count: 9
(05) Reallocated Sectors: 36864
(POH) 3275
-------------------
Intel 330 120GB
698.40TB Host writes
2.46TiB Host reads
Reallocated sectors : 05 118 +18
Available Reserved Space : E8 100
MWI 18
[B5] Program fail count 58
[B6] Erase fail count 60
[F1] Total LBAs Written 22885243
[F2] Total LBAs Read 80556
[F9] Total NAND Writes 504156GB // ~492TiB
POH 1677
MD5 OK
125.42MiB/s on avg (~48 hours)
--
Kingston SSDNow 40GB (X25-V)
1167.76TB Host writes (38265203*32)
Reallocated sectors : 05 69 +1
Available Reserved Space : E8 98
POH 10414
MD5 OK
33.69MiB/s on avg (~48 hours)
--
Attachment 129042
--
Not sure what to think about the 330; it's accumulating a lot of reallocated sectors. The Force 3 120GB had just 2 when it died, but it did suffer from bad FW most of the time.
The 830 looks to handle lots of reallocations, so there is still hope for the 330.
Some drives, like the vertex 4/octane and samsung, record whole erase blocks as bad when they are found.
Other drives, like the sandforce drives, only record individual pages when they go bad ... for example, on the Intel 330, each page is 8kb, so when a write/erase error occurs on a block and the page is marked bad, the bad block count increases by 2 (as each 8kb page counts as 2x4kb blocks).
If there is an erase failure on a SandForce, it doesn't mark the whole erase block as bad? Just one page? I assume if there is one bad page in an erase block, it would just dump the whole block, not just a page. That's kinda weird, but then the 830 hasn't had anything but erase failures.
Intel 330 120GB
703.51TB Host writes
2.47TiB Host reads
Reallocated sectors : 05 133 +15
Available Reserved Space : E8 99 -1
MWI 17
[B5] Program fail count 65
[B6] Erase fail count 68
[F1] Total LBAs Written 23052662
[F2] Total LBAs Read 81020
[F9] Total NAND Writes 507849GB // ~496TiB
POH 1689
MD5 OK
125.34MiB/s on avg (~60 hours)
--
Kingston SSDNow 40GB (X25-V)
1169.12TB Host writes (38309652*32)
Reallocated sectors : 05 71 +2
Available Reserved Space : E8 98
POH 10426
MD5 OK
33.59MiB/s on avg (~60 hours)
--
@christopher
Isn't [05] Reallocated sectors (in your case 36864) listed in SMART or is that calculated?
In any case, it's on a roll, it's steadily increasing and triggered a movement on [E9] Available Reserved Space
Anvil -- reallocated sectors are tracked in SMART data. 1 reallocation event = one 2MB erase block, or 4096 512b sectors. So 9 erase block failures equals 18MB reallocated, or 36864 sectors. The M3P went through almost 2GB of reallocations.
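A minimal sketch of that bookkeeping (2MB erase blocks and 512-byte sectors, as described above):
Code:
# One erase-block failure on the 830 reallocates a whole 2 MB block,
# which SMART reports in 512-byte sectors.
ERASE_BLOCK_BYTES = 2 * 1024 * 1024
SECTOR_BYTES = 512

def reallocated_sectors(erase_block_failures):
    return erase_block_failures * ERASE_BLOCK_BYTES // SECTOR_BYTES

print(reallocated_sectors(9))  # 36864 sectors, i.e. 18 MB reallocated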
Today JEDEC released some Standard SSD Endurance Workloads that they feel are relevant for endurance testing.
This is just a bit of it, very detailed, reading now.
Quote:
The enterprise endurance workload shall be comprised of random data with the following payload size distribution:
512 bytes (0.5K) 4%
1024 bytes (1K) 1%
1536 bytes (1.5K) 1%
2048 bytes (2K) 1%
2560 bytes (2.5K) 1%
3072 bytes (3K) 1%
3584 bytes (3.5K) 1%
4096 bytes (4K) 67%
8192 bytes (8K) 10%
16,384 bytes (16K) 7%
32,768 bytes (32K) 3%
65,536 bytes (64K) 3%
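For anyone who wants to play with it, here is a minimal sketch of a payload-size generator following the distribution quoted above (only the size mix; the spec's alignment and access-pattern rules are not modelled here):
Code:
import random

# JEDEC enterprise endurance workload payload-size mix (weights in percent).
SIZES   = [512, 1024, 1536, 2048, 2560, 3072, 3584, 4096, 8192, 16384, 32768, 65536]
WEIGHTS = [  4,    1,    1,    1,    1,    1,    1,   67,   10,     7,     3,     3]

def next_payload_size():
    """Draw one payload size according to the quoted distribution."""
    return random.choices(SIZES, weights=WEIGHTS, k=1)[0]

print([next_payload_size() for _ in range(10)])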
Samsung 830 256GB Day 136
(GiB) 3,375,642
(TiB) 3,296
(PiB) 3.24
(Avg) 298.04 MB/s
(B1) Wear Leveling Count: 14,619
(B6) Erase Fail Count: 9
(05) Reallocated Sectors: 36864
(POH) 3339
-------------------
It's all downhill for the 330, I'd be surprised if it lasts another week.
Intel 330 120GB
724.63TB Host writes
2.52TiB Host reads
Reallocated sectors : 05 199 +66
Available Reserved Space : E8 69 -30
MWI 14
[B5] Program fail count 96
[B6] Erase fail count 103
[F1] Total LBAs Written 23744862
[F2] Total LBAs Read 82722
[F9] Total NAND Writes 523106GB // ~511TiB
POH 1737
MD5 OK
125.39MiB/s on avg (~109 hours)
--
Kingston SSDNow 40GB (X25-V)
1174.70TB Host writes (38492805*32)
Reallocated sectors : 05 71
Available Reserved Space : E8 98
POH 10474
MD5 OK
33.39MiB/s on avg (~109 hours)
Intel 330 120GB
734.08TB Host writes
2.55TiB Host reads
Reallocated sectors : 05 228 +29
Available Reserved Space : E8 58 -11
MWI 13
[B5] Program fail count 110
[B6] Erase fail count 118
[F1] Total LBAs Written 24054404
[F2] Total LBAs Read 83516
[F9] Total NAND Writes 529922GB // ~517TiB
POH 1759
MD5 OK
125.42MiB/s on avg (~131 hours)
--
Kingston SSDNow 40GB (X25-V)
1177.20TB Host writes (38574684*32)
Reallocated sectors : 05 72 +1
Available Reserved Space : E8 98
POH 10496
MD5 OK
33.35MiB/s on avg (~131 hours)
Here are today's updates:
m4
1352.2965 TiB
5616 hours
Avg speed 74.59 MiB/s.
AD 112 to 94
P/E 23236.
C3 4457
CE 58
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=499551
Intel X25-M G1 80GB
846.46 TiB
24902 hours
Reallocated sectors : 115 to 255 to 8
Available Reserved space: 31 to 22
MWI= 104
MD5 =OK
27.05 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=499552
Intel X25-E 64GB
665.36 TiB
2083-30=2053 hours
Reallocated sectors : 2 to 7
Available Reserved space: 99
MWI= 95
MD5 =OK
91.73 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=499550
Mtron Pro 7025 32GB
Paused until new firmware is updated.
The G1 continues its path to death and the E had 5 more reallocated sectors. The m4 just steams on
It would appear that TRIM and 'modern' firmware are worth their weight in gold. Imagine the X25-E and G1 with TRIM -- they'd take forever to kill. And the G1 has already made it quite a ways.
Very good point Christopher. Although we're talking a totally different animal, seeing these trimless drives w/o GC doing this well sustains my confidence that my 60GB OCZ Apex with 51nm NAND may live an exceptionally long life, all things considered. Biggest gripe I have is that SMART only has one attribute, the temp, which is locked at 44C. No lifetime reads/writes, no POH, no MWI... no nothing, it doesn't even support SE! Guess that's the price paid for an internally raided gen1 SSD. Gotta give it props though... after consolidating free space and a run of AS_Cleaner 1.0 (aka FreeSpaceCleaner) w/FF, dubbed 'Tony Trim', about once a month, it still benches just as fast as the day I bought it. :up:
The system had halted within 30 minutes of the latest report; I found out 6 hours later.
Intel 330 120GB
741.81TB Host writes
2.57TiB Host reads
Reallocated sectors : 05 260 +32
Available Reserved Space : E8 46 -12
MWI 12
[B5] Program fail count 125
[B6] Erase fail count 135
[F1] Total LBAs Written 24307515
[F2] Total LBAs Read 84087
[F9] Total NAND Writes 535502GB // ~523TiB
POH 1776
MD5 OK
124.82MiB/s on avg (~18 hours)
--
Kingston SSDNow 40GB (X25-V)
1179.37TB Host writes (38645807*32)
Reallocated sectors : 05 72
Available Reserved Space : E8 98
POH 10514
MD5 OK
35.04MiB/s on avg (~18 hours)
Sandisk Extreme G25 - 120gb - Day 63
Drive hours: 1446
ASU GiB written: 625,108.60 GiB (610.46 TiB)
Avg MB/s: 124.05 MB/s (23.08 hours)
MD5: OK
Host GB written (F1): 629,613 GiB (614.86 TiB)
NAND writes (E9): 469,999 GiB (458.98 TiB)
Retired Block Count (05): 4 raw
Failure count (AB, AC): 1 program, 1 erase
Raw Error Rate (01): 108 normalized
Media Wearout Indicator (E7): 10 normalized
Wear Range Delta (B1): 3 raw
Reported Uncorrectable Errors (BB): 0 raw
I did a retention test for 3.5 days. Drive didn't complain! Drive might outlive the Intel 330, which I really wasn't expecting :P
I might do another drive (maybe a cheap 60gig sandforce) after 5.0.3 firmware is released with the TRIM fix. Could be interesting to see if the drive reverts to the old wear range delta behaviour
My Chronos Deluxe 60GB 2281/32nm Toggle on 3.3.0 and then 3.3.2 never went above 11 I think. But due to the 5K NAND, I think the WRD acted differently. Like it allowed a larger percentage of P/E cycle deltas than 3K flash or something. Either that, or it just wear-leveled better.
No, I'm absolutely certain your 60gig Chronos Deluxe drive was programmed with 3K cycles, not 5K cycles, from where it landed at MWI=10.
In any case, the Sandisk Extreme hasn't done worse than 4 WRD in its entire life. That is a huge improvement over the old Corsair Force 3, and even your 60gig Chronos Deluxe.
Samsung 830 256GB Day 137
(GiB) 3,428,092
(TiB) 3,347
(PiB) 3.29
(Avg) 297.86 MB/s
(B1) Wear Leveling Count: 14,845
(B6) Erase Fail Count: 9
(05) Reallocated Sectors: 36864
(POH) 3390
-----------------------------------
canthearu -- I don't know what the FW was programmed with for certain, but the NAND was 5K rated. Not that it matters a whole lot, but it could just be that Mushkin didn't bother changing the FW for 5K and 3K models (Chronos and Chronos Deluxe). OCZ did the same thing with the 1.6FW for Indilinx.
I expected the 330 to outlast my Force 3, it doesn't look like that is going to happen.
No signs of slowdowns during benchmarks though.
Attachment 129160
...the rig had rebooted; frequent reboots and/or frozen systems are signs of a drive going bad.
Here's the 330 as of 12 hours ago.
Attachment 129168
and finally an update.
Attachment 129167
Intel 330 120GB
749.94TB Host writes
2.60TiB Host reads
Reallocated sectors : 05 305 +45
Available Reserved Space : E8 27 -19
MWI 11
[B5] Program fail count 147
[B6] Erase fail count 158
[F1] Total LBAs Written 24574169
[F2] Total LBAs Read 85096
[F9] Total NAND Writes 541377GB // ~529TiB
POH 1800
MD5 OK
125.84MiB/s on avg (~14 hours)
--
Kingston SSDNow 40GB (X25-V)
1181.71TB Host writes (38722319*32)
Reallocated sectors : 05 72
Available Reserved Space : E8 98
POH 10534
MD5 OK
36.93MiB/s on avg (~14 hours)
It's getting close...
Attachment 129203
Attachment 129204
Anvil, that really is (maybe not very) surprising. It feels like the previous SFs died almost exactly when they were supposed to. It's strange that they seem to be dying so consistently. I think maybe some proprietary mojo from the Intel, Samsung, and Crucial/Marvells may be missing.
While it's a bit disappointing I prefer being warned about the failure.
In 30 minutes, available reserved space went from 9 to 7 so it might enter read-only mode sometime within the next few hours. (it's midnight here)
Maybe this is the one that behaves as expected wrt read-only mode :)
Who knows? Seeing a RO SSD is like seeing a unicorn -- there isn't much reason to believe they actually exist. I'm not saying it's disappointing, just that it isn't too different. SF is supposed to hit RO at MWI 1, which is why it stops at 10. I also know that the Intel and SF FW report reallocations much differently.
We'll know in a few hours, it's already at 1. (available reserved space)
There looks to be a bug with available reserved space.
It's back to 100, so it looks to continue, but like some of the other SF drives, speed is increasing as it wears out, so the signs of a dying drive are still there.
ASR did not correlate to the movements of any other SMART attribute, so I suspected something was wrong; anyway, it carries on writing.
--
Intel 330 120GB
760.52TB Host writes
2.63TiB Host reads
Reallocated sectors : 05 387 +82
Available Reserved Space : E8 100 "-27"
MWI 9
[B5] Program fail count 186
[B6] Erase fail count 201
[F1] Total LBAs Written 24920807
[F2] Total LBAs Read 86201
[F9] Total NAND Writes 549015GB // ~536TiB
POH 1824
MD5 OK
126.35MiB/s on avg (~9 hours)
--
Kingston SSDNow 40GB (X25-V)
1184.51TB Host writes (38814090*32)
Reallocated sectors : 05 73 +1
Available Reserved Space : E8 98
POH 10559
MD5 OK
33.57MiB/s on avg (~9 hours)
Sandisk Extreme G25 - 120gb - Day 65
Drive hours: 1495
ASU GiB written: 646,449.96 GiB (631.30 TiB)
Avg MB/s: 125.68 MB/s (45.97 hours)
MD5: OK
Host GB written (F1): 651,060 GiB (635.80 TiB)
NAND writes (E9): 486,014 GiB (474.62 TiB)
Retired Block Count (05): 4 raw
Failure count (AB, AC): 1 program, 1 erase
Raw Error Rate (01): 110 normalized
Media Wearout Indicator (E7): 10 normalized
Wear Range Delta (B1): 3 raw
Reported Uncorrectable Errors (BB): 0 raw
So the Intel 330 said "haha, fooled ya :P". Intel seems to have botched the ASR counter in the sandforce firmware, although I haven't seen any sandforce drive go through bad block death yet to verify that the Retired Block count Normalized values work correctly either.
I bet the 330's counter cycles twice before it dies. I contend it's not about erase failures as much as read failures anyway, until you have no ARS left.
I wish it was
There is no change to ARS, it's still at 100.
MWI is at 9, will take another week or so for it to either stop "working" (0) or possibly restart at 100.
The M3P went through almost 3GB of flash, but it was the read failures that killed it -- along with every OCZ, and probably a couple other drives too.
Samsung 830 256GB Day 140
(GiB) 3,495,821
(TiB) 3,413
(PiB) 3.35
(Avg) 297.61 MB/s
(B1) Wear Leveling Count: 15,138
(B6) Erase Fail Count: 9
(05) Reallocated Sectors: 36864
(POH) 3456
-------------------
Samsung 830 256GB Day 141
(GiB) 3,541,007
(TiB) 3,458
(PiB) 3.40
(Avg) 297.53 MB/s
(B1) Wear Leveling Count: 15,333
(B6) Erase Fail Count: 9
(05) Reallocated Sectors: 36864
(POH) 3500
-------------------
Here are today's updates:
m4
1382.3581 TiB
5734 hours
Avg speed 74.55 MiB/s.
AD 94 to 77
P/E 23748.
C3 4457 to 4647
CE 58
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=500009
Intel X25-M G1 80GB
856.49 TiB
25020 hours
Reallocated sectors : 8 to 188
Available Reserved space: 22 to 11
MWI= 104 to 68
MD5 =OK
26.16 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=500010
Intel X25-E 64GB
702.41 TiB
2201-30=2171 hours
Reallocated sectors : 7 to 27
Available Reserved space: 99
MWI= 95
MD5 =OK
91.60 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=500008
Mtron Pro 7025 32GB
Paused until new firmware is updated.
The G1 continues its path to death and the E had 20 more reallocated sectors. The m4 just steams on.
Wow, it's great that you guys are tackling SSD endurance. Not many sites pay attention to this at all, and it makes me very glad I chose a Samsung 830 as well, although mine is the 128gig, not the 256gig model.
^love checking this thread
Sandisk Extreme G25 - 120gb - Day 68
Drive hours: 1566
ASU GiB written: 678,050.50 GiB (662.16 TiB)
Avg MB/s: 125.69 MB/s (117.42 hours)
MD5: OK
Host GB written (F1): 682,819 GiB (666.82 TiB)
NAND writes (E9): 509,731 GiB (497.78 TiB)
Retired Block Count (05): 4 raw
Failure count (AB, AC): 1 program, 1 erase
Raw Error Rate (01): 115 normalized
Media Wearout Indicator (E7): 10 normalized
Wear Range Delta (B1): 3 raw
Reported Uncorrectable Errors (BB): 0 raw
Sandisk Extreme G25 - 120gb - Day 70
Drive hours: 1616
ASU GiB written: 699,914.35 GiB (683.51 TiB)
Avg MB/s: 125.20 MB/s (167.61 hours)
MD5: OK
Host GB written (F1): 704,792 GiB (688.27 TiB)
NAND writes (E9): 526,156 GiB (513.82 TiB)
Retired Block Count (05): 4 raw
Failure count (AB, AC): 1 program, 1 erase
Raw Error Rate (01): 117 normalized
Media Wearout Indicator (E7): 10 normalized
Wear Range Delta (B1): 4 raw
Reported Uncorrectable Errors (BB): 0 raw