Kingston SSDNow 40GB (X25-V)
555.03TB Host writes
Reallocated sectors : 05 14
Available Reserved Space : E8 99
POH 5032
MD5 OK
33.54MiB/s on avg (~18 hours)
--
Corsair Force 3 120GB
It has been disconnected for 18 hours.
Thanks! ...and same to you since this would not be as possible without ASU. :up:
Thanks! ...I might mount this one on a plaque and hang it in my office. :)
Not sure yet on running the other one, if I do it will be with FW 1.7.
Thanks! ...and yeah, I'm a little upset with not reaching 1 PB. :shakes:
If I do put the other M225 into the test, it won't be till after the New Year.
So I modified the Min GB Free the other day. It took a while, but the 830 is now unable to recover its speed. I ran the Magician Optimization 6 times, but there was no increase in speed. I'm going to have to SE.
I was using 18.3GB of static data, 16GB min Free, so that leaves 26GB per loop. If I move it down to 12GB Min Free, it can't keep its speed up, and worse, can't be fixed.
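To spell that out, here's a minimal sketch (Python) of how the static data and Min Free settings translate into the span each loop writes over; the ~59.6GB formatted capacity is my own assumption rather than a measured figure, only the static-data and Min Free numbers come from the settings above:

```python
# Rough per-loop write span from the ASU endurance settings.
# Only the static-data and Min Free numbers come from the post above;
# the ~59.6GB formatted capacity of the 64GB 830 is an assumption.

def span_per_loop(capacity_gb, static_gb, min_free_gb):
    """GB of test files each loop can write before hitting Min Free."""
    return capacity_gb - static_gb - min_free_gb

capacity = 59.6   # assumed formatted capacity of the 64GB 830
static = 18.3     # static data used in the test

print(span_per_loop(capacity, static, 16))  # ~25.3 GB per loop (roughly the 26GB above)
print(span_per_loop(capacity, static, 12))  # ~29.3 GB per loop after dropping Min Free to 12GB
```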
https://www.box.com/shared/static/tx...8b69uaqlh6.jpg
Before SEing the drive, I tried a few things. The Samsung Magician's performance optimization is worthless. Those numbers above are abysmal, and it all started when I lowered the min free space to 12GB. It took about a day, but the controller seems to have backed itself into a corner here. Something is wrong though... it's not fixable by traditional means, so I'm going to be forced to SE it. Just deleting the volume didn't do much. Large sequential writes didn't do much. Samsung's software didn't do anything, even when run 8 times at about 4 minutes each.
EDIT
https://www.box.com/shared/static/xu...2qqrt2qezy.jpg
This is after a SE. It looks pretty sad.
Major update. For real this time. Sort of...
but first, today's update:
Samsung 830 64GB Update, Day 10
FW:CXM01B1Q
GiB written:
87061.03
Actual Writes:
87476.6
Avg MB/s
81.54, down from 105MB/s
Per-day average:
8706GB
PE Cycles
138
Reallocated Sector Count
20480 (10 blocks)
8K pages, 1MiB blocks
243 hours
https://www.box.com/shared/static/ub...g1yq1pnntm.png
https://www.box.com/shared/static/ti...j1cu94xdgr.jpg
Okay. Notice the 138 PE cycles?
From the reboot, or the SE, things changed. You can see the before and after SE CDM numbers.
Now look at this:
https://www.box.com/shared/static/ip...p7fzbkt7q1.jpg
The PE cycles are now at 5444. That's 1/1/5444.
So the PE cycle/wear leveling does work properly and can be used as a life indicator.
But I can't get performance back to near-new levels to save my life.
Truth is, I don't put a lot of faith in either number (100+ PE cycles, then a jump to 5444). Looking at the total writes versus the total count of sectors written, you don't see much WA, around 1.06 when I calculated yesterday.
I'm more concerned that a SE didn't fix the profound performance drop.
One possible, if not probable, explanation is that several of the cells are borderline and the controller is having a hard time dealing with them. Maybe the Samsung SE isn't a real SE. Who knows? I deleted the volume first and that made no performance difference. See a couple of posts up for before and after pics.
You could be right, but my gut tells me neither wear leveling count could be accurate. And I still can't account for the crazy performance drop. The drive didn't go from ~100 PE cycles to over 5000 overnight, it happened instantly. So you could be right, I just have no way of knowing.
The PE cycles haven't increased all day either since hitting 1/1/5444... the threshold is 0, but maybe it's at the end of the line.
Right now it's on track to write 200MB an hour... that's not good.
I meant that the actual value of 5444 erase cycles makes sense. The fact that it stuck around 100, then increased to 5444, and now is not increasing, does NOT make sense. That certainly seems like a firmware bug.
Combined with your weird write-speed decrease, which could be another firmware bug. I suppose it could also be some sort of throttling that comes in after the SSD depletes MWI, but my first guess is a firmware bug.
I may try reflashing the FW, but the Samsung software may not let me. And it was slower yesterday... The SE helped a lot, but it's still nowhere close.
UPDATE
I SE'd it again. Put a partition on it, and ran CDM again:
https://www.box.com/shared/static/ec...sevk4l36yb.jpg
The writes are bad but where did the reads go?!
Running a 100MB test would normally be on the small side, and as the Samsungs have a cache larger than that, you should do at least 500MB.
I'd let it idle for a day or so, it might rely on background GC although a SE should normally take care of all such anomalies.
Could you try running HDTune Read Benchmark using 64KB and 4MB?
It really looks to me like the drive throttles down its writes because it's past the MWI... Read speed should be unaffected though when testing CDM at 1000MB.
I'm very curious though about how much total writes the drive can take.
With respect to the CDM numbers, look at the past couple of posts with CDM shots. The first two were with 1000MB, 5 reps, and the best numbers are being displayed. One run out of 5 would be much better, and the other 4 would range from low to super low. As far as the writes go, they look better than the reads, but it's not sustainable. Running the endurance loop, the instantaneous speeds at the start of a loop begin at 100MB/s and drop into the 50MB/s range.
Secondly, I SE'd the drive many times, in addition to multiple Magician performance optimization runs. Then, just grasping at straws, I tried one of the secure drive wipes in Acronis Drive Cleanser (the fast one that just writes zeros to every LBA). Sequential writes seemed to help at first, but didn't really do anything.
I tried running the drive with Intel RST and MSAHCI, and on two different systems.
I ran HDTune several different ways, but regardless of which benchmark I run, there is something wrong here.
64KB
https://www.box.com/shared/static/gu...omdof2yuc5.jpg
4MB
https://www.box.com/shared/static/i5...od54n4oqz5.jpg
I will let the drive idle the rest of the day, but I don't see why that would work but secure erasing doesn't.
Then it def looks like something is wrong, read speeds should be unaffected...
If you look at the above 4MB HD Tune shot, the depressed reads roughly correspond to the amount of static data on the drive. However, most of the CDM shots are with no data on the drive.
Man, the days just seem so boring for me now for some reason. :shrug: LOL
Christopher, your results really make me regret buying a Samsung 830 128GB and not going for the Crucial M4. Can it be that this drive is so bad that, after exhausting the MWI, it drops to ridiculously slow speeds for BOTH write AND read?
I'm not sure what happened, but the drive is awesome.
Are you planning on writing 128TB a week? If not, you should be fine. Somehow, when I made it so more of the drive was being written to each testing loop, the drive got in a bad way. But I also think it has to do with writing so much without a reboot.
The 64GB 830 has written 90TB so far, so some drives will take it in stride while others won't.
But this drive is hard up at the moment. This is after idling for six hours
https://www.box.com/shared/static/ti...pz1qukvt1q.jpg
I actually just got a 60GB Vertex EX SLC in the mail today -- brand new, 1.3FW -- for $66... if I didn't think it would either take excruciatingly long to endurance test or die in an embarrassingly short time, I'd do it in a heartbeat... it gives your M225 Turbo a run for its performance money.
I also have a new Imation S-class SLC coming in the mail, but with 120 4K write IOPS, it's probably nothing to get excited about. WA would go through the ceiling with that drive, and I think I could kill it pretty quick.
Either way, I'm looking past the 830 at this stage. Maybe I can get it back to health... maybe not.
EDIT
I SE'd the drive again with HDDERASE, and it made no difference.
An endurance loop with no static data (just after the SE) on the drive is just as slow as before. CDM still shows terrible results.
I have a theory... but it's not a good one yet. I'm going to continue with the testing.
@Christopher
Could you try formatting the drive?
Make sure to deselect the Quick Format option, as it makes a huge difference. You can stop at 10-20% and run ATTO to check whether it works for you; of course, if you abort the full format you need to perform a quick format in order to run tests.
(a quick format just won't do sometimes, even if it should include a TRIM)
I've heard of this mystical Full Format option (yeah, I still remember how to perform a full format with Windows). I'm doing it now, but I already did something very similar.
UPDATE
Full format didn't do anything.
When using CDM 1000MB x5, the first read result is 80MB/s every time. 80/320/100/270/119 is common for the sequential reads.
I think the firmware is broken, but I can't reflash it. You have to use Magician to manage FW updates, and I can't reflash the same version.
Until Samsung releases a new fw I'm stuck with this drive...
Unless this is some crazy wear leveling scheme, and nothing is broken at all. Perhaps this is what happens when too much wear occurs on some cells, in which case running the drive for a few days without any static data may help.
Samsung 830 64GB Update, Day 12
FW:CXM01B1Q
GiB written:
96974.70
Actual Writes:
97765.9
Avg MB/s
75.96MB/s, down from 105MB/s
PE Cycles
6090
Reallocated Sector Count
20480 (10 blocks)
8K pages, 1MiB blocks
292 hours
https://www.box.com/shared/static/d8...484z5qcj85.png
https://www.box.com/shared/static/xx...7imojnzuat.png
--------------------------------------------------------------------------------
I'm currently hoping that running the loops with no static data will help.
I think this has something to do with the wear leveling algorithms.
Finally, after 3 days of Christmas frenzy trying to get all the details fixed before the big day, here is the latest update :)
Kingston V+100
299.9157 TiB
1469 hours
Avg speed 25.37 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=472288http://www.diskusjon.no/index.php?ap...tach_id=472284
Intel X25-M G1 80GB
147.5687 TiB
19688 hours
Reallocated sectors : 00
MWI=15 to 0 and up to 100%
MD5 =OK
51.23 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=472287http://www.diskusjon.no/index.php?ap...tach_id=472283
m4
83.9249 TiB
304 hours
Avg speed 81.20 MiB/s.
AD gone from 65 to 52.
P/E 1461.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=472286http://www.diskusjon.no/index.php?ap...tach_id=472285
The Force 3 was back on power again late Friday night, so it was disconnected for ~54-56 hours; it has been idling most of the time since it came back online.
I'll make a report in a few hours.
Bluestang,
I've changed my mind about Indilinx 1.7 firmware. I've been having problems with the one drive I put on 1.7, but the amount of WA I was experiencing with earlier FW is probably worth the risk.
My 128GB Turbo running 1.6 did lose some 4K read performance, but seemed to gain more 4K write. Two Agility 60s didn't really change at all, and the random performance of the 60GB Vertex EX is so good I didn't want to risk upgrading it. If the M225 had been used in a normal (normal for other people :rolleyes:) way, I think it would be a no-brainer. Under the endurance testing loads, I think it would probably be good for just enough to get you over the 1PB hump. My Turbo 128 on 1.6FW has less than 120GB of host writes, but the PE cycles have exploded and MWI is dropping quickly compared to the Agility 60 on 1.7, which was being used without TRIM.
Kingston SSDNow 40GB (X25-V)
564.87TB Host writes
Reallocated sectors : 05 14
Available Reserved Space : E8 99
POH 5117
MD5 OK
35.92MiB/s on avg (~19 hours)
--
Corsair Force 3 120GB
01 88/50 (Raw read error rate)
05 2 (Retired Block count)
B1 47 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 544724 (Raw writes) ->532TiB
F1 725061 (Host writes) ->708TiB
MD5 OK
106.53MiB/s on avg (~19 hours)
power on hours : 2098
--
I had to restart the computer about 19 hours ago as something was off, and so almost nothing had been written between Friday night and Saturday night (it was just very slow).
Alright, the 830 is really starting to piss me off. Something is not right with it, but as far as I can tell, speed is the only thing affected.
It could just be some crazy amount of write amplification, but I don't understand why reads are affected.
Samsung 830 64GB Update, Day 13
FW:CXM01B1Q
GiB written:
103395.55
Actual Writes:
97765.9
Avg MB/s
65.83MB/s
PE Cycles
6407
Reallocated Sector Count
24576 (12 blocks)
8K pages, 1MiB blocks
320 hours
https://www.box.com/shared/static/58...iy80g57nxn.png
https://www.box.com/shared/static/cs...vfolm31l8j.png
I added 39GB of static data to the drive to see what would happen.
I'll try to keep the updates coming in while I'm out of town for Christmas. I won't be back for almost two weeks, so I hope the drive and system can make it without me being here physically.
Kingston SSDNow 40GB (X25-V)
565.88TB Host writes
Reallocated sectors : 05 14
Available Reserved Space : E8 99
POH 5126
MD5 OK
34.86MiB/s on avg (~28 hours)
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 47 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 547268 (Raw writes) ->534TiB
F1 728447 (Host writes) ->711TiB
MD5 OK
106.52MiB/s on avg (~28 hours)
power on hours : 2107
--
I had expected the 830 to perform more or less like the 470; the one thing I did expect was that the avg write speed would increase, not the other way around.
Anvil,
There is something wrong with it. I thought about RMA'ing it and starting over, but I was hoping... something... would happen to 'fix' it. It's okay. I have a new drive to test come January.
I was thinking the RMA word as well; they might be interested in finding out what's gone wrong with this particular drive.
I think I may try to contact Samsung support, but I don't know if I have the patience to explain the situation to whoever the hell is on the other end. Still, I think you could conceivably break any Samsung drive under the right conditions, so I'll try anyway. I doubt anyone other than me has encountered this, since they'd be yanking them out of Dells and Lenovos overnight if it had happened to too many other people.
It's clearly not some kind of throttling, as reads are more affected than writes, though the writes are pretty bad too.
As such, I believe it to be a firmware bug, something that could only be fixed with a reflash. Possibly not even a firmware update could stop it.
Maybe don't mention that MWI is exhausted until you've explained everything else and let them respond to the other issues. I suspect once you mention that MWI is exhausted (or how much you have written), the support person will blame everything on that, even though there is clearly a firmware bug.
Today's update:
Kingston V+100
302.1915 TiB
1503 hours
Avg speed 25.39 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=472522http://www.diskusjon.no/index.php?ap...tach_id=472519
Intel X25-M G1 80GB
152.7166 TiB
19722 hours
Reallocated sectors : 00
MWI=160 to 159.
MD5 =OK
50.03 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=472521http://www.diskusjon.no/index.php?ap...tach_id=472518
m4
93.4232 TiB
338 hours
Avg speed 81.16 MiB/s.
AD gone from 52 to 46.
P/E 1627.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=472523http://www.diskusjon.no/index.php?ap...tach_id=472520
Kingston SSDNow 40GB (X25-V)
568.61TB Host writes
Reallocated sectors : 05 14
Available Reserved Space : E8 99
POH 5150
MD5 OK
33.85MiB/s on avg (~52 hours)
--
Corsair Force 3 120GB
01 94/50 (Raw read error rate)
05 2 (Retired Block count)
B1 45 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 554109 (Raw writes) ->541TiB
F1 737553 (Host writes) ->720TiB
MD5 OK
106.61MiB/s on avg (~52 hours)
power on hours : 2132
--
Samsung 830 64GB Update, Day 15
FW:CXM01B1Q
GiB written:
111566.24
Actual Writes:
112445.4
Avg MB/s
62.27MB/s
PE Cycles
6672
Reallocated Sector Count
24576 (12 blocks)
8K pages, 1MiB blocks
358 hours
https://www.box.com/shared/static/eq...gzaukg3vay.png
https://www.box.com/shared/static/jm...q00675qfkm.png
I have a different view of the Samsung's performance: it could just be dead, but instead it works fine, and I don't mind leaving it unattended over the holidays.
Today's update:
Kingston V+100
304.3754 TiB
1528 hours
Avg speed 25.37 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=472633http://www.diskusjon.no/index.php?ap...tach_id=472630
Intel X25-M G1 80GB
156.5384 TiB
19747 hours
Reallocated sectors : 00
MWI=159 to 157.
MD5 =OK
49.40 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=472632http://www.diskusjon.no/index.php?ap...tach_id=472629
m4
100.4250 TiB
363 hours
Avg speed 81.14 MiB/s.
AD gone from 46 to 42.
P/E 1749.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=472634http://www.diskusjon.no/index.php?ap...tach_id=472631
My X25-E doesn't do that, but maybe I just haven't noticed since it only has about 300GB of host writes. It does kinda seem like the X25-Es I've seen have had a lot of unsafe shutdown counts.
Today's update:
Kingston V+100
306.6465 TiB
1554 hours
Avg speed 25.38 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=472779http://www.diskusjon.no/index.php?ap...tach_id=472776
Intel X25-M G1 80GB
160.5192 TiB
19773 hours
Reallocated sectors : 00
MWI=157 to 156.
MD5 =OK
48.90 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=472778http://www.diskusjon.no/index.php?ap...tach_id=472775
m4
107.6948 TiB
389 hours
Avg speed 81.16 MiB/s.
AD gone from 42 to 38.
P/E 1876.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=472780http://www.diskusjon.no/index.php?ap...tach_id=472777
If you still have access to your X25-E, could you please post a screenshot of the SMART readings? Mine looks like this:
http://img607.imageshack.us/img607/6530/x25esmart.jpg
I've got a few E's, will check later tonight, I've seen the issue but haven't checked those drives.
--
Kingston SSDNow 40GB (X25-V)
573.97TB Host writes
Reallocated sectors : 05 14
Available Reserved Space : E8 99
POH 5197
MD5 OK
33.30MiB/s on avg (~100 hours)
--
Corsair Force 3 120GB
01 89/50 (Raw read error rate)
05 2 (Retired Block count)
B1 44 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 563571 (Raw writes) ->550TiB
F1 750143 (Host writes) ->733TiB
MD5 OK
106.69MiB/s on avg (~25 hours)
power on hours : 2179
Somehow I've managed to hit the Stop button while it was running (probably while hitting a button to get the screen out of screen-saver mode) and so it was idling for about 10 hours without me noticing. (next time I'll use the mouse :))
I'm pretty confident that this is the first drive that will get past 1PB; it will take another 30 days plus 4-5 days of retention testing, so about 5 weeks in total.
Holy ****! Anvil, what have you done in ASU? I think I just ****ed up my 3rd M4 512GB (in my 3rd computer)! The 4th loop was the death loop (46% application fill), and the computer halted with a **** load of write errors displayed in ASU. Is it a problem with the quality control of the flash ICs in the 512GB model, or is there a flaw in the firmware that doesn't do wear-leveling properly?! I don't think I can be so unlucky as to have three (3) bad M4 512GBs in a row!!!
There is just one version of the Endurance test that is publicly available and we are all using it.
There is nothing special about the 46% or any of the other compression levels; it's just a write pattern that results in that particular compression ratio.
It does sound weird that several 512GB are failing, they should behave just like any other m4 drive.
Just for the record, what settings are you using: static file size, min free space, etc.?
The page size of the 256GB/512GB M4 is 8k, while that of the 64GB/128GB M4 is 4k. My speculation is that the M4's firmware/controller isn't doing a great job of handling the 8k-page models, which might lead to unexpected wear on certain blocks. It might also be a reasonable assumption that the cutting-edge capacity (currently 512GB or 600GB for a 2.5" SATA SSD) always comes with a higher failure rate, just like mechanical hard drives (e.g. 4TB drives shouldn't be that reliable at the moment because the manufacturing process isn't that mature yet).
Here are some details of these three M4 512GB samples, each on a different machine (I'll try to recall as much as I can; please let me know what else you'd like to know).
Sample 1:
Model: M4 512GB
Factory firmware: 0001
Testing firmware: 0009
Motherboard: ThinkPad X220 SATA III (Intel native). This machine didn't have any problem with Intel X25-E 64GB.
Settings: all default
Static file size: around 380GB
Min free space: 12GB
Avg speed: 100MB/s
Error description: At the beginning I really liked this SSD. It looked very stable. I updated the firmware to 0002 before I installed the OS on it. Then one day I started AIDA64 and ran the looped linear read test in the disk benchmark together with the system stability test with disk stress, and the laptop halted within minutes each time I did this. The SSD was not detected by the BIOS after reboot and each time I had to use the power-cycling method to recover it. Then I updated the BIOS of the laptop from 1.19 to 1.24, the laptop no longer halted under such tests, and I thought it had all just been the laptop's fault. Later on, while installing a virtual machine one day, I got 0xF4 BSODs several times, each time upon a burst of writes, and none of the crash dumps were successfully written to the SSD, indicating that the SSD had gone offline. Then I updated the firmware of the SSD to 0009, which seemed to cure the BSOD problem. I had lost trust in the SSD, so I decided to run ASU's Endurance testing on it. Unfortunately, with around 380GB of static files and the default settings (0-fill, 12GB min free etc.), the SSD could easily be forced offline within the first loop. The "05 Reallocated Sector Count" and the "C4 Reallocation Event Count" increased like crazy. Then I ran "diskpart clean" and used Windows 7 to do a *slow* format (hence 0 static files), and ASU wasn't able to reproduce any problem for several loops, which I guess was because Windows marked the bad sectors. Then I did a quick format and ASU again wasn't able to reproduce any problem for several loops, which I guess was because Windows copied the information about the marked bad sectors. Then I used WinHex to do a DoD fill of the SSD, and it was easily forced offline in the first loop. After that, ASU was able to reproduce the problem easily within 1 loop, until after several failures the SSD was no longer recognized by the BIOS at all, regardless of the power-cycling recovery method.
Sample 2:
Model: M4 512GB
Factory firmware: 0002
Testing firmware: 0009
Motherboard: EVGA X58 E760 SATA II (Intel native). This machine didn't have any problem with Intel 320 600GB.
Settings: all default
Static file size: around 320GB
Min free space: 12GB
Avg speed: 80MB/s
Error description: At the beginning the SSD seemed to be stable, until I decided to upgrade the firmware from 0002 to 0009. I then noticed 0xF4 BSODs upon bursts of reads/writes, e.g. when verifying the local cache of a Steam game. Then I started AIDA64 and ran the looped linear read test and the system stability test with disk stress at the same time, and I could easily reproduce the BSOD within tens of minutes each time. Then I ran ASU's Endurance testing with default settings, and the BSOD could be reproduced very quickly (at a very early stage of the 1st loop). The "05 Reallocated Sector Count" and the "C4 Reallocation Event Count" both stayed at 0.
Sample 3:
Model: M4 512GB
Factory firmware: 0002
Testing firmware: 0002
Motherboard: ASUS P8H67-I SATA III (Intel native)
Settings: all default, except 46% application fill and 1GB min free space
Static file size: around 120GB
Min free space: 1GB
Avg speed: 119MB/s
Error description: This sample didn't seem to be that troublesome, but since my partner has been using it and never stressed it like I did, I guess it just never had the chance to show any problems. I ran ASU's Endurance testing for around 3 loops of 0-fill with 12GB min free space and didn't notice any problems. Then I ran it for another night with 46% application fill and 1GB min free space, and the computer lost its video signal (according to the log the computer halted during the 4th loop). At this stage the static data was around 310GB (because ASU hadn't had a chance to clean up the test files yet), and when I tried to continue running ASU's Endurance testing I got write errors straight away (with a very slow avg speed displayed). I clicked "Stop" and all test files were deleted. After that, ASU's Endurance testing wasn't able to reproduce the problem for another 3 loops. I'm still running more tests to see what happens next. The "05 Reallocated Sector Count", the "C4 Reallocation Event Count" and the "CA MWI" are all still at 0 at the moment.
The m4 really should have no problem running the Endurance test, the smaller 64GB did remarkably well and looks to do just as well in round 2.
The amount of static data should be relative to the capacity, so 350-400GB would be OK, but 12GB as the minimum free space is on the low side for the 256/512GB capacities; you might consider increasing that to at least 16-32GB, as that would be more realistic for a drive on the large side.
I keep at least 20-30GB of free space on 256GB drives and that is actually out of my comfort zone.
The relatively slow avg speed might be an indication that the drive is struggling with the workload; I had expected it to be 2x the speed of the m4 64GB.
(the first m4 64GB had an avg speed of 88MiB/s)
Anyways, it should not fail this test in just a few loops!
Personally I'd probably approach the test carefully or stay away from testing that particular drive/model as multiple drives have shown issues with the workload.
So, you could try increasing the minimum free space to 20-30GB, as that would be more life-like for client workloads; 1GB is much too low, even for a small drive.
If I can arrange to take one of my C300 or m4 256GB drives out of "production" I'll do a short test sometime during the Christmas holidays.
Today's update:
Kingston V+100
308.6083 TiB
1577 hours
Avg speed 25.37 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=472854http://www.diskusjon.no/index.php?ap...tach_id=472851
Intel X25-M G1 80GB
163.9794 TiB
19796 hours
Reallocated sectors : 00
MWI=156 to 155.
MD5 =OK
48.55 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=472853http://www.diskusjon.no/index.php?ap...tach_id=472850
m4
113.9955 TiB
412 hours
Avg speed 81.18 MiB/s.
AD gone from 38 to 34.
P/E 1986.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=472855http://www.diskusjon.no/index.php?ap...tach_id=472852
Any news of those charts, the C300 and Vapor?
I don't have access to my X25-E at the moment, and the only pics I have access to are some benchmarks.
The power cycle count and unsafe shutdown counts are not 1:1, but there are quite a few. Mine only has around 300GB on it though.
I checked my X25-E drives and all have unsafe shutdowns, looks like 20-30% of the shutdowns are unsafe.
I performed a few tests last night and when Unsafe shutdown increased, the Power Cycle Count did not.
(I had expected Power Cycle Count to increase on every power Off and it didn't)
But at the same time, it seems as though most drives don't act this way... the Intel's power cycle count may equal 1 power-on + 1 ATA shutdown. If the unsafe shutdown count increases, the power cycle count doesn't. Other drives just assume that the drive has to be on to get turned off ;)
So the true number of power cycles for an Intel is Power Cycle Count + Unsafe Shutdown.
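To put that in code, here's a small sketch of the accounting, not anything Intel documents; the raw values below are placeholders rather than readings from a real drive:

```python
# Sketch of the accounting described above: on these Intel drives the Power
# Cycle Count appears to increment only on a clean ATA shutdown, so the unsafe
# shutdowns have to be added back to get the real number of power-ons.
# The attribute values below are placeholders, not readings from a specific drive.

def true_power_cycles(power_cycle_count, unsafe_shutdown_count):
    return power_cycle_count + unsafe_shutdown_count

print(true_power_cycles(120, 35))  # hypothetical 0C and C0 raw values -> 155
```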
I believe that if you enter the BIOS/UEFI and then exit, the unsafe shutdown count will increase. I enter the BIOS and UEFI all the time, so some drives are worse off than others. For example, I had been reflashing and upgrading the FW on all of my Indilinx drives, and I would remove all of the other drives from the system, as it could take a lot of reboots and UEFI changes and I'd prefer not to have other drives in the system when doing FW updates. I don't think the unsafe shutdown counts are problematic, but it's probably best to avoid them if you can do so without much trouble. Basically, anything other than a soft shutdown with a modern OS will result in an unsafe shutdown count.
All of the drive failures I've experienced have occurred in Windows, not on a restart. The sample size is pretty small though.
Strange, now I cannot reproduce any problem on the 3rd sample of the M4 512GB. Today when I woke up it was the first time I saw the MWI lose its virginity (no reallocated sectors though, strange? Is it because MWI is calculated from the wear-leveling count and the P/E spec of the flash?):
http://img221.imageshack.us/img221/4...atadevice5.jpg
http://img21.imageshack.us/img21/5889/smartsp.jpg
http://img803.imageshack.us/img803/1576/statsk.jpg
OK, never mind, I've figured it out myself. Looking at the figures in the OP of this thread, it is very obvious that the MWI is simply calculated from the P/E spec of the flash. For the M4, the P/E spec is 3000 cycles, so the MWI of the M4 64GB should last for 59.5GB * 3000 / 1024 = 174.3 TB, which is pretty close to the test result of the first M4 64GB sample in this thread (and also to the trend of the 2nd sample). It also works the same for the Samsung 470 64GB. I'm just a bit slow to follow you guys here :)
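For anyone who wants to repeat the arithmetic, here's a minimal sketch of that calculation; the WA=1 assumption is mine (the endurance workload here is mostly sequential, so it should be close):

```python
# Expected host writes before the MWI hits 0, as worked out in the post above.
# Assumes write amplification ~1 for this mostly sequential workload.

def writes_at_mwi_zero_tib(usable_gib, pe_spec, wa=1.0):
    """Approximate host writes (TiB) when the rated P/E cycles are used up."""
    return usable_gib * pe_spec / wa / 1024

print(round(writes_at_mwi_zero_tib(59.5, 3000), 1))  # m4 64GB: ~174.3, matching the post
```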
Can I ask you guys what firmware your X25-E's are on? Mine is 045C8850. This disk has been used in 4 different computers during the past 2 years and I don't know which computer(s) might have done most of the "harm" to it.
I've done some simple tests of the "0C Power Cycle Count" and the "C0 Unsafe Shutdown Count" here in the current computer (ThinkPad X220, BIOS 1.25):
0C Power Cycle Count / C0 Unsafe Shutdown Count:
start:         1474 / 1033
soft shutdown: 1475 / 1034
soft reboot:   1475 / 1034
sleep:         1476 / 1035
hibernate:     1477 / 1036
It looks like the difference between my 0C and C0 is locked :( I'll test it in another computer and see if it's the same problem.
Edit: just confirmed that my desktop E760 motherboard doesn't increase the "C0 Unsafe Shutdown Count" when doing soft shutdowns in Windows. There's obviously something wrong with the ThinkPad X220.
I've not had the opportunity to observe my M4 decrease MWI, but other drives operate differently. Also, the NAND in the M4 is rated at 3000 P/E cycles, but it really should be rated at 5000.
An Indilinx drive typically has high write amplification with random data, so MWI on a 120GB Turbo may drop after 100MB of host writes (when used as an OS drive, sometimes less). So running ASU will typically take many more host writes to reduce MWI by 1. Actually, MWI on Indilinx drives is kind of wonky anyway, so perhaps they're not the best example.
I just opened a brand new 64GB Vertex Turbo last night, and it was at 97 percent out of the box (1.7FW). A brand-new Vertex EX 60 was at 89 MWI out of the box when I unsealed it last week (1.3FW).
The 64GB "Real Turbo" is not half as awesome as Bluestang's M225 Turbo, at least not on 1.7FW. It also had 5 erase failures out of the box.
That doesn't mean the MWI is a good indicator of anything in and of itself. Taken in context with other factors it can help give you a picture of what is happening, but it's just part of the picture and doesn't mean much on its own.
I just noticed that the MWI of Intel 320 Series may also work differently.
I did some simple tests of my Intel 600GB drive and have yet to observe the MWI decrease (I thought the MWI was supposed to change by 1 for every 5.5TB written, if the P/E spec is 1000?):
http://img810.imageshack.us/img810/3...600g3atade.png
http://img543.imageshack.us/img543/4054/smartpn.jpg
Maybe the wear-leveling algorithm of the Intel 320 Series isn't that good, and without over-provisioning (which means merely relying on the factory reserved space) the drive could wear out pretty fast on some blocks?
Then what are we looking for in this thread? When the MWI is exhausted on a drive, the flash becomes volatile to some extent. How do you verify that it's still working as it's supposed to? (It's obviously not practical to leave it without power for a year and then verify the integrity of the data on it.) A drive may continue to work to 50% beyond its spec'ed P/E cycles, but how do you know how long the data can currently be held without power?
The 320 Series 600GB would need about 17.5TiB written for every decrease of MWI if P/E is at 3000.
E9 starts at 100 and stops decreasing at 1.
There's nothing wrong with Intel's reporting :), it's just a huge drive.
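A quick sketch of that estimate, treating the 600GB as roughly 600GiB of NAND and using the 3000-cycle rating (both of which are assumptions from this discussion, not Intel figures):

```python
# GiB of writes per 1-point drop of the normalized MWI (E9) on the 320 600GB.
# Treating the capacity as 600GiB of NAND and the P/E rating as 3000 are assumptions.

capacity_gib = 600
pe_spec = 3000
tib_per_mwi_point = capacity_gib * pe_spec / 100 / 1024   # MWI runs from 100 down to 1
print(round(tib_per_mwi_point, 1))  # ~17.6 TiB per point, i.e. roughly the 17.5TiB above
```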
--
edit
All my E's except for one are 8850, the last one has 8860 which is not available as a download.
Today's update:
Kingston V+100
310.3283 TiB
1597 hours
Avg speed 25.36 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=472937http://www.diskusjon.no/index.php?ap...tach_id=472934
Intel X25-M G1 80GB
167.5932 TiB
19815 hours
Reallocated sectors : 00
MWI=155 to 154.
MD5 =OK
52.75 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=472936http://www.diskusjon.no/index.php?ap...tach_id=472933
m4
119.5655 TiB
432 hours
Avg speed 81.20 MiB/s.
AD gone from 34 to 31.
P/E 2083.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=472938http://www.diskusjon.no/index.php?ap...tach_id=472935
Kingston SSDNow 40GB (X25-V)
577.73TB Host writes
Reallocated sectors : 05 14
Available Reserved Space : E8 99
POH 5231
MD5 OK
33.15MiB/s on avg (~133 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 48 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 572928 (Raw writes) ->560TiB
F1 762590 (Host writes) ->745TiB
MD5 OK
106.68MiB/s on avg (~58 hours)
power on hours : 2212
Chart on the Force 3, showing WRD vs GiB written over the last few days.
http://www.ssdaddict.com/ss/Force3/F...2011_12_23.png
It shows that WRD climbs rapidly compared to the amount of GiB needed to bring WRD back down to the same level.
Nice chart Anvil.
Are you using logged attributes or doing it manually?
Christmas update:
Kingston V+100
311.8668 TiB
1614 hours
Avg speed 25.35 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473000http://www.diskusjon.no/index.php?ap...tach_id=472997
Intel X25-M G1 80GB
170.8411 TiB
19833 hours
Reallocated sectors : 00
MWI=154
MD5 =OK
52.80 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=472999http://www.diskusjon.no/index.php?ap...tach_id=472996
m4
124.5444 TiB
449 hours
Avg speed 81.21 MiB/s.
AD gone from 31 to 28.
P/E 2169.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473001http://www.diskusjon.no/index.php?ap...tach_id=472998
My next update will be late tomorrow night. Now it's time to enjoy Christmas with my family.
Merry Christmas to you all :)
Yeah, Merry Christmas and Happy New Year to all!!!
merry christmas~!
Merry Christmas to you all!
I bought myself a Christmas present - Samsung 830 512GB, and I wish everyone here a happy new year :)
Christmas update:
Kingston V+100
313.9284 TiB
1638 hours
Avg speed 25.33 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473086http://www.diskusjon.no/index.php?ap...tach_id=473083
Intel X25-M G1 80GB
175.1643 TiB
19857 hours
Reallocated sectors : 00
MWI=154 to 152
MD5 =OK
52.68 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=473085http://www.diskusjon.no/index.php?ap...tach_id=473084
m4
131.2455 TiB
473 hours
Avg speed 81.22 MiB/s.
AD gone from 28 to 24.
P/E 2286.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473087http://www.diskusjon.no/index.php?ap...tach_id=473082
Kingston SSDNow 40GB (X25-V)
583.01TB Host writes
Reallocated sectors : 05 15 Increased by 1
Available Reserved Space : E8 99
POH 5277
MD5 OK
34.18MiB/s on avg (~20 hours)
--
Corsair Force 3 120GB
01 95/50 (Raw read error rate)
05 2 (Retired Block count)
B1 49 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 585940 (Raw writes) ->572TiB
F1 779900 (Host writes) ->762TiB
MD5 OK
106.87MiB/s on avg (~20 hours)
power on hours : 2258
--
I had to update a utility, so the testing on both drives was restarted about 20 hours ago.
An update on WRD, and then a new one that includes the Raw Read Error Rate for the Force 3.
http://www.ssdaddict.com/ss/Force3/F...2011_12_24.png
http://www.ssdaddict.com/ss/Force3/F...2011_12_25.png
A brief summary so far.
http://www.ssdaddict.com/ss/Endurance_failed_drives.png
http://www.ssdaddict.com/ss/Enduranc...ly_running.png
...
Christmas will be over here in seven minutes, but I thought I would wish you all a merry Christmas before it is over.
Well then, let me be the first to wish you a happy new year. Hopefully the Samsung will die soon, as the testing rig is going to start getting crowded in 2012.
Due to a power outage I have no update today. The power has been out since 23:00 yesterday and is still not back. With the bad weather, I hope the rig is OK and that I can continue tomorrow.
The weather's been pretty bad the past few days, pretty OK today though. (and no power outage in my area)
@B.A.T and Anvil...stay safe and good luck.
Thank you Bluestang :) The worst part is over (the wind) but still no power. 20 hours and counting.....
And we're back again. Everything is OK and the test is running again. All SSDs got an unexpected 24h retention test and they all passed :)
I have a question here: it looks like ASU's Endurance testing drastically increases the NTFS MFT size, which is probably responsible for WinHex being extremely slow to traverse a partition for a snapshot. How do I shrink the MFT if I no longer plan to run Endurance testing?
Paragon looks to do the trick. (the free version should work)
If you ran the test without static files on a large-capacity drive then one can expect the MFT to grow beyond its normal size; for normal usage it should not be an issue.
Link
Kingston SSDNow 40GB (X25-V)
588.58TB Host writes
Reallocated sectors : 05 16 Increased by 1
Available Reserved Space : E8 99
POH 5326
MD5 OK
33.14MiB/s on avg (~69 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 46 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 599788 (Raw writes) ->586TiB
F1 798326 (Host writes) ->780TiB
MD5 OK
106.05MiB/s on avg (~69 hours)
power on hours : 2308
I’ve just been trying to figure out the Samsung 830 data.
I’ve assumed:
• 5K P/E cycles.
• Host writes = F1*512/1,073,741,824 = GiB
• B1 RAW = P/E cycles count
It seems strange that the Host Writes per P/E cycle ratio remained consistently wrong until the theoretical P/E cycle count had expired. Once the theoretical P/E cycle count had expired, the WA came out at an average of 3.93.
It would be interesting to see how these results compare to another 64GB 830; however, it seems it is possible to calculate host writes and approximate the MWI/NAND writes & WA using the basic SMART data that is provided. WA is quite savage, but then again so are the sustained write speeds (at least until you get to MWI = 0).
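For reference, here is a minimal sketch of those assumptions in code. The F1/B1 interpretation and the 64GiB NAND figure are just the assumptions listed above, not anything confirmed by Samsung, and the raw values used are hypothetical:

```python
# Host writes and rough write amplification from the 830's SMART raw values,
# using the assumptions above: F1 raw = LBAs written (512 bytes each),
# B1 raw = average P/E cycle count, ~64GiB of NAND. Values are hypothetical.

def host_writes_gib(f1_raw_lbas):
    return f1_raw_lbas * 512 / 1_073_741_824       # LBAs -> GiB

def write_amplification(b1_pe_cycles, f1_raw_lbas, nand_gib=64):
    nand_writes_gib = b1_pe_cycles * nand_gib      # approx. GiB written to NAND
    return nand_writes_gib / host_writes_gib(f1_raw_lbas)

f1 = 183_000_000_000                               # hypothetical F1 raw value
print(round(host_writes_gib(f1)))                  # ~87,261 GiB of host writes
print(round(write_amplification(5444, f1), 2))     # ~3.99, in the same ballpark as above
```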
Did anything come of the 470 autopsy? Is the 320 still going?
http://img543.imageshack.us/img543/8...mycompared.png
The 830 has a serious defect in the sense that PE cycle counts do not increase until a reboot (essentially, though they do increase slightly). To get the 'true' PE count I'd have to reboot, which I will do now. I'm not there physically, so if anything were to go wrong, it would be several days before I could be there to fix it.
6672 is what it's at now, but a reboot did not do anything. It needs a power cycle, which I cannot perform until I get back.
Christmas update:
Kingston V+100
315.6061 TiB
1638 hours
Avg speed 25.38 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473330http://www.diskusjon.no/index.php?ap...tach_id=473327
Intel X25-M G1 80GB
178.5906 TiB
19877 hours
Reallocated sectors : 00
MWI=152 to 151
MD5 =OK
46.71 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=473329http://www.diskusjon.no/index.php?ap...tach_id=473326
m4
135.3233 TiB
494 hours
Avg speed 78.39 MiB/s.
AD gone from 24 to 21.
P/E 2380.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473331http://www.diskusjon.no/index.php?ap...tach_id=473328
The power cut off again from 03:00 to 14:00 today, but everything is back up now. The only strange thing is that the X25-M and the m4 have dropped their write speeds by a couple of MiB/s.
320 is still going...
7425 reallocated sectors (up by 1), 684.5TB.
Did you have to power cycle every day to get the revised B1 values that you have reported? What I was trying to point out is that the host writes per P/E count came out consistently at ~630GiB at every reading, until the P/E count hit 5,000. From that point onwards it came out consistently at ~16GiB.
The approx. theoretical write capacity = 64 x 5,000 = 320,000 GiB, which was exceeded by the time host writes had reached 87,476GiB, giving a WA factor of 3.98 (avg WA after 5,000 P/E = 3.93).
Kingston SSDNow 40GB (X25-V)
590.94TB Host writes
Reallocated sectors : 05 17 Increased by 1 again, up 3 during the last week.
Available Reserved Space : E8 99
POH 5347
MD5 OK
33.03MiB/s on avg (~90 hours)
--
Corsair Force 3 120GB
01 94/50 (Raw read error rate)
05 2 (Retired Block count)
B1 44 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 605659 (Raw writes) ->591TiB
F1 806136 (Host writes) ->787TiB
MD5 OK
105.98MiB/s on avg (~90 hours)
power on hours : 2329
--
@Ao1
Any particular reason why you are using 5K P/E for the Samsung 830? The 470 series points to 1K or 3K, hard to tell though.
It's an interesting drive, the 830's I've got are all for OS X so I won't be doing any testing on the drives I've currently got. (I think :))
http://www.ssdaddict.com/ss/Endurance_cr_20111228.png
Chart(s) are updated/placed in post#1
I'm looking into making some of the other charts that Vapor made.
Hmm… seems it uses 32nm with 3K P/E cycles.
“The 470 Series uses Samsung K8HDGD8U5M flash chips manufactured by Samsung on a 32-nano fabrication process. Moving to finer process technologies allows chip makers to squeeze more NAND dies out of a single wafer, but it also reduces the lifespan of those chips. Flash fabbed on a 50-nano process typically carries a write-erase endurance of 10,000 cycles, while 34-nano parts are generally rated for 5,000 cycles. According to Samsung, the 470 Series' flash chips have a write-erase endurance of only 3,000 cycles, which is on par with the longevity of 25-nano flash chips currently rolling off the line at Micron”.
http://techreport.com/articles.x/20087
If the WA for the endurance app workload is 3.93, the drive could "only" write ~50,000GiB of host writes before the MWI expired. If the B1 value only changes after a power cycle, it would be worth retesting with frequent power cycles to more closely monitor changes to B1.
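Spelled out (with the 64GiB NAND figure and the 3000-cycle rating as assumptions):

```python
# Host writes available before the MWI expires, given the measured WA above.
# The 64GiB NAND capacity and 3000-cycle P/E rating are assumptions.

nand_gib, pe_spec, wa = 64, 3000, 3.93
host_gib_before_mwi_zero = nand_gib * pe_spec / wa
print(round(host_gib_before_mwi_zero))  # ~48,855 GiB, i.e. roughly the 50,000GiB figure
```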
Ao1
I'll consider running the Endurance test on the 128GB Samsung 830 for a few hours if that could help shed some light on the Samsung 830 series.
It should be the same NAND process, but there might be unknown differences in e.g. page size.
I'll send you a copy of the logged attributes on my drives.
I have considered enabling SMART logging in ASU as well but it won't work on system drives. (for some strange reason)
The work is done, it just needs to be stored somewhere; it could be a supplement to the logging done by SMARTLOG.
Here is a graph using SMARTLOG data collected by Anvil. It's not showing anything of interest yet, as it only covers data collected over a couple of days; however, over time it would hopefully be possible to see correlating events, such as reallocation events and error counts.
(BTW, I shifted the decimal place by 6 for On-the-fly ECC & Soft ECC values to keep a reasonable scale. Writes are on the second axis for the same reason).
http://img7.imageshack.us/img7/3498/smartlog.png
Today's update:
Kingston V+100
317.5828 TiB
1682 hours
Avg speed 25.37 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473438http://www.diskusjon.no/index.php?ap...tach_id=473436
Intel X25-M G1 80GB
182.2931 TiB
19900 hours
Reallocated sectors : 00
MWI=151 to 150
MD5 =OK
49.70 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=473437http://www.diskusjon.no/index.php?ap...tach_id=473435
m4
141.6388 TiB
517 hours
Avg speed 80.42 MiB/s.
AD gone from 21 to 18.
P/E 2489.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=473439http://www.diskusjon.no/index.php?ap...tach_id=473434
This one is still in the works as I need to check the accurate GiB values for some of the drives.
http://www.ssdaddict.com/ss/Endurance_PEsummary.png