http://img41.imageshack.us/img41/1503/whoopsr.png
Today's update.
Kingston V+100
331.6785 TiB
1856 hours
Avg speed 29.95 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=475642
http://www.diskusjon.no/index.php?ap...tach_id=475637
Intel X25-M G1 80GB
235.0894 TiB
20228 hours
Reallocated sectors : 00
MWI=137 to 136
MD5 =OK
47.50 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=475640
http://www.diskusjon.no/index.php?ap...tach_id=475639
m4
229.9788 TiB
850 hours
Avg speed 82.57 MiB/s.
AD gone from 224 to 221.
P/E 4052.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=475641
http://www.diskusjon.no/index.php?ap...tach_id=475638
Windows Update rebooted sometime during the night, so all SSDs have been idling for a few hours. Otherwise they are OK.
B.A.T,
I never thought the Kingston V+100 would turn out to be the slowest drive in the test. Still no fw updates?
@Ao1
Did you back up the drive? :D
Estimated lifetime : 14 days, is that a new feature in HD Sentinel?
I checked during Christmas but nothing has been released. It's painstakingly slow, but there's not much I can do about it.
No, but my replacement 830 has just turned up. :D
The relocation event ties in exactly with an increase in write speed. The relocation occurred during idle time following the Windoze update. I thought the increase was due to the idle time, but the drive has still maintained faster speeds a day later. I therefore suspect that both read and write speeds will be impacted before the sectors get relocated, and I would guess that this is exactly what caused Christopher's drive to slow down.
@Ao1 @Anvil
HDDSentinel is calibrated for specific numbers of bad sectors. It declares SSDs/HDDs almost dead if the bad sector value increases past a specific point.
I do believe that the Samsung doesn't flag as early as some drives would, which could help explain the reduction in reads and writes, but it isn't that alone. Plus, the 830 still hasn't flagged any more blocks as bad for some time now, even though it probably should have. It's been stuck at 20/40960 for far longer than I think it should be, especially since I think some of the blocks have to be marginal by now. The drive had been flagging a block or two every couple of days since 6 Dec, I think -- but not in the past week.
Christopher, I might be completely wrong in my assumption. Let me try to better explain where I was coming from. With a HDD, I believe numerous attempts will be made to read/write to a sector before it is finally marked as defective and relocated. My assumption was that before failing sectors were relocated, it might be feasible that read/write performance would suffer. Once the sectors were relocated, read/write performance would revert back to normal. This seemed consistent with what I could observe on my drive, but in truth I don't know if or how badly performance might be impacted before the sector is taken out of the equation.
How many sectors on your drive got relocated in one hit?
You really have to admire how the Intel drives have performed in this test. They may not be the fastest but they are the best engineered by a long shot IMHO.
I would only get 1 block at a time, which is 2048 sectors. The drive has used 20 of its reserve blocks: 40960 reallocated sectors. If you look at the SMART data from the last few updates, notice that there are no program fails, just 10 erase fails and 10 "runtime" bad blocks, for a "total" of twenty. But the runtime bad blocks should be program + erase fails together, which makes me think the attributes could be mislabeled or something. In reality, only 10 blocks should be counted, but total used reserve blocks are 20.
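The block-to-sector arithmetic above checks out, assuming 512-byte sectors and a 1 MiB erase block (2048 sectors), which is what the SMART numbers imply:

```python
SECTOR_BYTES = 512
SECTORS_PER_BLOCK = 2048           # one erase block = 2048 * 512 B = 1 MiB
used_reserve_blocks = 20

reallocated_sectors = used_reserve_blocks * SECTORS_PER_BLOCK
print(reallocated_sectors)                           # 40960, matching the SMART value
print(SECTORS_PER_BLOCK * SECTOR_BYTES // 2**20)     # 1 (MiB per erase block)
```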
I understand what you are saying. I know from experience with another drive, one which I will start testing on Monday, that if there are bad blocks which haven't been flagged, it will take much longer to read and write to the nand for the affected region. MLC drives' NAND do get slower over time (at least in theory) as it has to try harder to resolve errors and sort out the Vth readings, which is one more reason why I love my X25-Es.
I totally agree about the Intel drives. They might not be fast, but I think they're as close to bulletproof as you can find. I don't personally know of anyone who has owned an Intel drive that has suffered a failure, nor have I really read too many horror stories. I believe that the two Intels in this test -- 40GB drives no less -- might end up being the real story here. Who would have imagined almost 700TB from a 40GB MLC drive? Especially one with 25nm... that's effectively as fast now as it was new, and there are no signs that either drive is on its way to the grave yet.
New Firmware 0309 for Crucial M4 just posted to fix the 5184 Hour Bug...
http://www.crucial.com/support/firmware.aspx
Release Date: 01/13/2012
Change Log:
•Changes made in version 0002 (m4 can be updated to revision 0309 directly from either revision 0001, 0002, or 0009)
•Correct a condition where an incorrect response to a SMART counter will cause the m4 drive to become unresponsive after 5184 hours of Power-on time. The drive will recover after a power cycle, however, this failure will repeat once per hour after reaching this point. The condition will allow the end user to successfully update firmware, and poses no risk to user or system data stored on the drive.
This firmware update is STRONGLY RECOMMENDED for drives in the field. Although the failure mode due to the SMART Power On Hours counter poses no risk to saved user data, the failure mode can become repetitive, and pose a nuisance to the end user. If the end user has not yet observed this failure mode, this update is required to prevent it from happening.
If you are using a SAS Expander please do not download this Firmware. As soon as we have a Firmware Update that will work in these applications we will release it.
Samsung 830 64GB Update, Day 39
FW:CXM02B1Q
TiB Written:
246.17
GiB written:
252079.37
Avg MB/s
122.45, up from 120.54
PE Cycles
14153 up from 14109 a couple of days ago -- it just started updating today in real time, but I'm sure the actual number is about 600 PEs higher -- I'll still need to do a power cycle when I get home to get the "true" number.
Reallocated Sectors
40960
20 Blocks, holding steady
941 Hours
https://www.box.com/shared/static/dv...n85gx9buax.png
https://www.box.com/shared/static/q9...yxnmp6mo0a.png
Speed is up slightly, and a few hours ago PE cycles started updating as they should. Total bonus!
UPDATE
1.8 hours later, PE cycles have increased by 14 to 14167. That is fantastic. I don't know why it just started, but I'm pretty damn glad it did.
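Back-of-the-envelope, 14 cycles in 1.8 hours works out to roughly 7.8 P/E cycles per hour, or about 187 per day if this workload keeps up:

```python
delta_pe = 14167 - 14153    # cycles gained since the last reading
hours = 1.8

per_hour = delta_pe / hours
print(round(per_hour, 1))   # ~7.8 P/E cycles per hour
print(round(per_hour * 24)) # ~187 per day at this pace
```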
Kingston SSDNow 40GB (X25-V)
630.17TB Host writes
Reallocated sectors : 05 21
Available Reserved Space : E8 99
POH 5691
MD5 OK
32.99MiB/s on avg (~99 hours)
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 62 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 699440 (Raw writes) ->683TiB
F1 930867 (Host writes) ->909TiB
MD5 OK
106.92MiB/s on avg (~99 hours)
power on hours : 2664
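Assuming the E9/F1 raw values count GiB (as the arrows in the post suggest), the TiB conversions check out, and the ratio also shows the SandForce controller's compression at work, since NAND writes come out *below* host writes:

```python
raw_writes_gib  = 699_440   # E9: NAND (raw) writes, in GiB
host_writes_gib = 930_867   # F1: writes from the host, in GiB

print(raw_writes_gib  // 1024)   # 683 TiB, matching the post
print(host_writes_gib // 1024)   # 909 TiB, matching the post
# Write amplification = NAND writes / host writes; < 1 thanks to compression:
print(round(raw_writes_gib / host_writes_gib, 2))   # ~0.75
```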
Host writes: 10,277 GiB
MWI: 61
P/E Cycles: 1,403
POH: 187
MB/s: 6.22
Relocated sectors: 4096
Samsung 830 64GB Update, Day 40
FW:CXM02B1Q
TiB Written:
256.6940
GiB written:
262854.63
Avg MB/s
124.15, up from 122.45
PE Cycles
14339 up from 14153
Reallocated Sectors
40960
20 Blocks, holding steady
965 Hours
https://www.box.com/shared/static/sj...hc69nqe0di.png
https://www.box.com/shared/static/gd...ezgvkz6gmz.png
I'm most pleased that PE cycles are now reporting correctly, and that speed continues to increase slightly.
POR Recovery count has to be unsafe shutdown count.
Kingston SSDNow 40GB (X25-V)
632.76TB Host writes
Reallocated sectors : 05 21
Available Reserved Space : E8 99
POH 5714
MD5 OK
32.94MiB/s on avg (~122 hours)
--
Corsair Force 3 120GB
01 80/50 (Raw read error rate)
05 2 (Retired Block count)
B1 61 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 705944 (Raw writes) ->689TiB
F1 939520 (Host writes) ->918TiB
MD5 OK
106.92MiB/s on avg (~122 hours)
power on hours : 2687
I think something may have happened to my endurance testing rig... I was logged in remotely earlier from my tablet, and now I'm no longer able to reach it. I hope it hasn't crashed in my absence.
Actually, I hope my apartment wasn't destroyed or robbed. If the system has crashed, I can handle it. If all of my worldly possessions have been stolen or destroyed, I might not handle it so well.
Today's update.
Kingston V+100
334.1927 TiB
1885 hours
Avg speed 25.15 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=475827
http://www.diskusjon.no/index.php?ap...tach_id=475823
Intel X25-M G1 80GB
240.5493 TiB
20265 hours
Reallocated sectors : 00
MWI=136 to 134
MD5 =OK
48.81 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=475826
http://www.diskusjon.no/index.php?ap...tach_id=475822
m4
240.1694 TiB
887 hours
Avg speed 78.19 MiB/s.
AD gone from 221 to 216.
P/E 4229.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=475825
http://www.diskusjon.no/index.php?ap...tach_id=475824
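As a sanity check on the posted averages, the write rate between the two V+100 snapshots (331.6785 TiB at 1856 hours vs 334.1927 TiB at 1885 hours) can be derived directly; it lands at about 25 MiB/s, consistent with the 25.15 MiB/s reported:

```python
def interval_mib_s(tib_start, hours_start, tib_end, hours_end):
    """Average write speed between two (TiB written, power-on hours) snapshots."""
    mib = (tib_end - tib_start) * 1024 * 1024     # TiB -> MiB
    seconds = (hours_end - hours_start) * 3600
    return mib / seconds

print(round(interval_mib_s(331.6785, 1856, 334.1927, 1885), 2))  # ~25.25 MiB/s
```

The small difference from the posted figure is expected, since power-on hours include idle time (like the Windows Update reboot) rather than pure write time.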
The 320 is DEAD.
I plugged it in today after giving it a 2 week break sitting there unplugged. I can see it in Windows in Device Manager, but there doesn't seem to be any data on it and I can't format it. I can tell Windows to format it, but it does absolutely nothing. Also, my PC fails to boot roughly 50% of the time with the SSD plugged in (it hangs when initializing the SATA controllers).
Sounds exactly like how my M225 acted when it hit the wall.
That's a shame!
It did well though. :)
Did you save the latest stats?
Damn, I jinxed it the other day when I said the 320 was showing no signs of being on its last legs.
I was thinking it would be a little longer yet before the Intels started going tits-up.
Bluestang,
are you planning on running the other M225?