http://img41.imageshack.us/img41/1503/whoopsr.png
Today's update.
Kingston V+100
331.6785 TiB
1856 hours
Avg speed 29.95 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=475642
http://www.diskusjon.no/index.php?ap...tach_id=475637
Intel X25-M G1 80GB
235.0894 TiB
20228 hours
Reallocated sectors : 00
MWI=137 to 136
MD5 =OK
47.50 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=475640
http://www.diskusjon.no/index.php?ap...tach_id=475639
m4
229.9788 TiB
850 hours
Avg speed 82.57 MiB/s.
AD gone from 224 to 221.
P/E 4052.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=475641
http://www.diskusjon.no/index.php?ap...tach_id=475638
Windows update rebooted the machine sometime during the night, so all SSDs have been idling for a few hours. Otherwise they are OK.
B.A.T,
I never thought the Kingston V+100 would turn out to be the slowest drive in the test. Still no fw updates?
@Ao1
Did you back up the drive :D
Estimated lifetime : 14 days, is that a new feature in HD Sentinel?
I checked during Christmas but nothing has been released. It's painfully slow, but there's not much I can do about it.
No, but my replacement 830 has just turned up. :D
The relocation event ties in exactly with an increase in write speed. The relocation occurred during idle time following the Windoze update. I thought the increase was due to the idle time, but the drive has still maintained faster speeds a day later. I therefore suspect that both read and write speeds will be impacted before the sectors get relocated, and I would guess that this is exactly what caused Christopher's drive to slow down.
@Ao1 @Anvil
HD Sentinel is calibrated for specific numbers of bad sectors. It declares SSDs/HDDs almost dead if the bad sector value increases past a specific point.
I do believe that the Samsung doesn't flag as early as some drives would, which could help explain the reduction in reads and writes, but it isn't that alone. Plus, the 830 still hasn't flagged any more blocks as bad for some time now, even though it probably should have. It's been stuck at 20/40960 for far longer than I think it should be, especially since I think some of the blocks have to be marginal by now. The drive had been flagging a block or two every couple of days since 6 Dec, I think -- but not in the past week.
Christopher, I might be completely wrong in my assumption. Let me try to better explain where I was coming from. With a HDD I believe numerous attempts will be made to read/write a sector before it is finally marked as defective and relocated. My assumption was that before failing sectors were relocated, it might be feasible that read/write performance would suffer. Once the sectors were relocated, read/write performance would revert back to normal. This seemed consistent with what I could observe on my drive, but in truth I don't know if or how badly performance might be impacted before the sector is taken out of the equation.
How many sectors on your drive got relocated in one hit?
You really have to admire how the Intel drives have performed in this test. They may not be the fastest but they are the best engineered by a long shot IMHO.
I would only get 1 block at a time, which is 2048 sectors. The drive has used 20 of its reserve blocks, i.e. 40960 reallocated sectors. If you look at the SMART data from the last few updates, notice that there are no program fails, just 10 erase fails and 10 "runtime" bad blocks, for a "total" of twenty. But the runtime bad blocks should be program + erase fails together, which makes me think the attributes could be mislabeled or something. In reality, only 10 blocks should be counted, but the total of used reserve blocks is 20.
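For anyone who wants to sanity-check the sector/block numbers, the conversion is trivial (plain Python; the 2048-sectors-per-block figure is the one quoted above):

# Relating the 830's reallocated-sector count to reserve blocks,
# using the 2048 sectors-per-block figure quoted above.
SECTORS_PER_BLOCK = 2048

reallocated_sectors = 40960
print(reallocated_sectors // SECTORS_PER_BLOCK)  # -> 20 reserve blocks consumed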
I understand what you are saying. I know from experience with another drive, one which I will start testing on Monday, that if there are bad blocks which haven't been flagged, it will take much longer to read and write to the NAND in the affected region. MLC NAND does get slower over time (at least in theory) as the controller has to try harder to resolve errors and sort out the Vth readings, which is one more reason why I love my X25-Es.
I totally agree about the Intel drives. They might not be fast, but I think they're as close to bulletproof as you can find. I don't personally know of anyone who has owned an Intel drive that has suffered a failure, nor have I really read too many horror stories. I believe that the two Intels in this test -- 40GB drives no less -- might end up being the real story here. Who would have imagined almost 700TB from a 40GB MLC drive? Especially one with 25nm NAND... and it's effectively as fast now as it was new, with no signs that either drive is on its way to the grave yet.
New Firmware 0309 for Crucial M4 just posted to fix the 5184 Hour Bug...
http://www.crucial.com/support/firmware.aspx
Release Date: 01/13/2012
Change Log:
•Changes made in version 0002 (m4 can be updated to revision 0309 directly from either revision 0001, 0002, or 0009)
•Correct a condition where an incorrect response to a SMART counter will cause the m4 drive to become unresponsive after 5184 hours of Power-on time. The drive will recover after a power cycle, however, this failure will repeat once per hour after reaching this point. The condition will allow the end user to successfully update firmware, and poses no risk to user or system data stored on the drive.
This firmware update is STRONGLY RECOMMENDED for drives in the field. Although the failure mode due to the SMART Power On Hours counter poses no risk to saved user data, the failure mode can become repetitive, and pose a nuisance to the end user. If the end user has not yet observed this failure mode, this update is required to prevent it from happening.
If you are using a SAS Expander please do not download this Firmware. As soon as we have a Firmware Update that will work in these applications we will release it.
Samsung 830 64GB Update, Day 39
FW:CXM02B1Q
TiB Written:
246.17
GiB written:
252079.37
Avg MB/s
122.45, up from 120.54
PE Cycles
14153, up from 14109 a couple of days ago -- it just started updating today in real time, but the actual number is probably about 600 PEs higher -- I'll still need to do a power cycle when I get home to get the "true" number.
Reallocated Sectors
40960
20 Blocks, holding steady
941 Hours
https://www.box.com/shared/static/dv...n85gx9buax.png
https://www.box.com/shared/static/q9...yxnmp6mo0a.png
Speed is up slightly, and a few hours ago PE cycles started updating as they should. Total bonus!
UPDATE
1.8 hours later, PE cycles have increased by 14 to 14167. That is fantastic. I don't know why it just started, but I'm pretty damn glad it did.
Kingston SSDNow 40GB (X25-V)
630.17TB Host writes
Reallocated sectors : 05 21
Available Reserved Space : E8 99
POH 5691
MD5 OK
32.99MiB/s on avg (~99 hours)
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 62 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 699440 (Raw writes) ->683TiB
F1 930867 (Host writes) ->909TiB
MD5 OK
106.92MiB/s on avg (~99 hours)
power on hours : 2664
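As a side note on where the ->TiB figures come from: the SandForce E9/F1 raw values above appear to be reported in GiB, so the conversion (and the apparent write amplification from compression) is just:

# The ->TiB figures above, assuming the SandForce E9/F1 raw values are in GiB.
raw_writes_gib  = 699440   # E9, writes to NAND
host_writes_gib = 930867   # F1, writes from the host

print(raw_writes_gib / 1024)             # ~683 TiB of NAND writes
print(host_writes_gib / 1024)            # ~909 TiB of host writes
print(raw_writes_gib / host_writes_gib)  # ~0.75, apparent write amplification below 1 thanks to compression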
Host writes: 10,277 GiB
MWI: 61
P/E Cycles: 1,403
POH: 187
MB/s: 6.22
Relocated sectors: 4096
Samsung 830 64GB Update, Day 40
FW:CXM02B1Q
TiB Written:
256.6940
GiB written:
262854.63
Avg MB/s
124.15, up from 122.45
PE Cycles
14339 up from 14153
Reallocated Sectors
40960
20 Blocks, holding steady
965 Hours
https://www.box.com/shared/static/sj...hc69nqe0di.png
https://www.box.com/shared/static/gd...ezgvkz6gmz.png
I'm most pleased that PE cycles are updating correctly, and that speed continues to increase slightly.
POR Recovery count has to be unsafe shutdown count.
Kingston SSDNow 40GB (X25-V)
632.76TB Host writes
Reallocated sectors : 05 21
Available Reserved Space : E8 99
POH 5714
MD5 OK
32.94MiB/s on avg (~122 hours)
--
Corsair Force 3 120GB
01 80/50 (Raw read error rate)
05 2 (Retired Block count)
B1 61 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 705944 (Raw writes) ->689TiB
F1 939520 (Host writes) ->918TiB
MD5 OK
106.92MiB/s on avg (~122 hours)
power on hours : 2687
I think something may have happened to my Endurance Testing rig... I was logged in earlier remotely from my tablet, and I'm no longer able to reach it. I hope it hasn't crashed in my absence.
Actually, I hope my apartment wasn't destroyed or robbed. If the system has crashed, I can handle it.. If all of my worldly possessions have been stolen or destroyed, I might not handle it so well.
Today's update.
Kingston V+100
334.1927 TiB
1885 hours
Avg speed 25.15 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=475827
http://www.diskusjon.no/index.php?ap...tach_id=475823
Intel X25-M G1 80GB
240.5493 TiB
20265 hours
Reallocated sectors : 00
MWI=136 to 134
MD5 =OK
48.81 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=475826
http://www.diskusjon.no/index.php?ap...tach_id=475822
m4
240.1694 TiB
887 hours
Avg speed 78.19 MiB/s.
AD gone from 221 to 216.
P/E 4229.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=475825
http://www.diskusjon.no/index.php?ap...tach_id=475824
The 320 is DEAD.
I plugged it in today after giving it a 2 week break sitting there unplugged. I can see it in Windows in Device Manager, but there doesn't seem to be any data on it and I can't format it. I can tell Windows to format it, but it does absolutely nothing. Also, my PC fails to boot roughly 50% of the time with the SSD plugged in (it hangs when initializing the SATA controllers).
Sounds exactly like how my M225 acted when it hit the wall.
That's a shame!
It did well though. :)
Did you save the latest stats?
Damn, I jinxed it the other day when I said the 320 was showing no signs of being on its last legs.
I was thinking it would be a little longer yet before the Intels started going tits-up.
Bluestang,
are you planning on running the other M225?
Host writes: 10,666 GiB
MWI: 57
P/E Cycles: 1,544
POH: 211
Relocated sectors: 4,096
I lost another 6 hours due to an auto restart. :mad:
As soon as I get back home, there will be at least one new drive in the test.
I am thinking of getting a new SSD... either a 128GB m4 or a 128GB 830. From your testing, which do you guys think would be better!?
I think it's hard to argue against the m4. But there are the Plextor and Corsair drives to consider as well -- the Marvell controller plus 32nm toggle NAND, for the same price. The 830 does have a big edge on writes, similar to the toggle NAND Marvell drives. I don't think you can go wrong with either, and all those drives are similar in price and read performance. They all have 3 year warranties too.
No more updates until Tuesday evening... I've lost contact with my endurance testing rig back home, and haven't been able to raise it for 24hrs now.
Kingston SSDNow 40GB (X25-V)
635.49TB Host writes
Reallocated sectors : 05 21
Available Reserved Space : E8 99
POH 5738
MD5 OK
32.90MiB/s on avg (~146 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 59 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 712831 (Raw writes) ->696TiB
F1 948675 (Host writes) ->926TiB
MD5 OK
106.92MiB/s on avg (~146 hours)
power on hours : 2711
That's a real shame, isn't it? Whatever happened to the SSD promise that drives would become read-only instead of dying the way all SSDs have died so far? So none of the players implemented any fail-safes in any drive. What a shame! I would be better served with 200TiB of writes and read-only for the next 5 years instead of 400TiB and dead-for-good a week after that.
Host writes: 11,055 GiB
MWI: 53
P/E Cycles: 1,688
POH: 234
Relocated sectors: 4,096
More time lost due to a fricking Windows auto restart. Auto update is now off.
^^Ao1, could you eliminate the possibility of a BSOD-triggered reboot by disabling auto-reboot upon BSOD? It can be found in System -> Advanced system settings -> Startup and recovery settings -> System failure -> Automatically restart.
I suspect that the disk went offline with some 0xF4 BSOD?
Intels have no read-only point like some others do.
Once MWI 1 is reached, data retention is not guaranteed and may be reduced.
When you reach MWI 1 on the Intels (and many of the others tested here) it is your call:
1) When to back it up.
2) Stop writing to it and use it as a giant read-only thumb drive.
3) Keep on writing.
I don't know exactly what the final write numbers will be on the Intel 320 but if memory serves it will be in the 600-800TB range.
MWI was exhausted at 190TB.
If this was a data retention test, I think One_Hertz might have been overly aggressive with the writes.
Since this is an endurance test it died like a "Spartan".
Ao1,
Just out of curiosity, is the Z68 system you're using a dedicated endurance rig or is it a multipurpose system?
My dedicated uATX H67/ 1155 Celeron system is perfect for an endurance testing rig, and had been completely stable -- until the other day when I lost contact. I'm not sure if the remote desktop client crashed, or if the whole system is down. I'm still out of town, should be home tomorrow to find out. I've also purchased another 16GB MTRON, so I'm going to start testing one of the two (one is a 3.5" MTRON PRO 7000 16GB, the other is a 2.5" MTRON MOBI 3000 16GB). I'm not convinced that it's even possible to test one in a reasonable timeframe, but I guess we'll see, won't we? Perhaps the MOBI will surprise me with more than one SMART attribute.
Still debating. It's in a laptop right now not doing much since I don't use it that much. But when I do, it's nice to be in Win7 in 15-20 seconds!
Also, very busy right now with my physical inventory from the end of December. Pricing WIP for the bean counters just sucks and is very time consuming. I should be done by the end of Feb.
I'm sure I'll break down at some point and re-enter....I just miss you guys so much! :)
Well Bluestang, the reason I was asking is because I obtained a brand-new Vertex Turbo 64GB... and I was wondering if you were going to run the other M225. I basically wanted your blessing before running it in the endurance test.
I was planning on running one of the MTRON 16GB drives and the 64GB Vertex Turbo... but I was hoping the Samsung would die soon to make room for other drives in the test. If you have no objections to me running this Vertex Turbo in the test, I will do so. I was kinda waiting around to see if you were going to run the other drive before I jump in.
Kingston SSDNow 40GB (X25-V)
637.96TB Host writes
Reallocated sectors : 05 21
Available Reserved Space : E8 99
POH 5760
MD5 OK
32.88MiB/s on avg (~168 hours)
--
Corsair Force 3 120GB
01 88/50 (Raw read error rate)
05 2 (Retired Block count)
B1 58 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 719007 (Raw writes) ->702TiB
F1 956891 (Host writes) ->934TiB
MD5 OK
106.90MiB/s on avg (~168 hours)
power on hours : 2733
Don't need my permission...go for it. :up:
Plus, then we can see how well an OCZ Turbo stacks up against a Crucial Turbo. :)
Halfway house. MWI is 50 and P/E cycles are at 1,803
MB/s: Steady 6.27MB/s
@Christopher, I'm running on my main work rig. It's 100% stable, but using it for the endurance test is a pain. It runs along in the background and doesn't take up much in the way of resources (even when running 4K), but it's still a pain in the butt (no AV, restarts for app/Windoze updates etc.). I'm just glad I didn't pick the X25-V or 320. :D
I will say that I'm concerned about running three drives with three instances of ASU. The 1155 Celeron G530/H67 combo is pegged around 30 percent, but I think some of that is from the Splashtop remote desktop app (which is how I access the system, over my tablet, so it may drop down whilst I'm not looking) -- either way, it might not have the juice to run 3 drives. It's virtually silent, which is a plus too. I mainly use it headless, but I do have it connected to my LED TV if I need to access the UEFI or something.
I looked at resource utilization from the Splashtop remote client on my main SB rig, but it uses nVidia GPUs (it's optimized for nVidia and may leverage CUDA -- the endurance rig uses 1155 SB video), and CPU usage was practically 0.
Each instance should be about 2-3%, must be something else that's consuming the bulk of the 30%.
Actually Bluestang, perhaps you can help me with something.
I found a brand new VT64, but it's one of the newer ones with 1.7 FW. It was at 97 percent MWI out of the box. I tried D-flashing it again, but nothing happened, so I flashed regular Vertex FW, used the NAND cleaner, then did Vertex Turbo FW again. This time, all the factory bad blocks disappeared and MWI went back to 100. But after a few moments MWI dropped back to 97 as the bad blocks got flagged again. I didn't know MWI was affected by runtime bad blocks. I'll post screenshots tomorrow.
Today's update.
Kingston V+100
And again it has dropped out.....:shrug:
http://www.diskusjon.no/index.php?ap...tach_id=476115
Intel X25-M G1 80GB
248.02841 TiB
20311 hours
Reallocated sectors : 00
MWI=134 to 132
MD5 =OK
46.82 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=476113
http://www.diskusjon.no/index.php?ap...tach_id=476112
m4
252.9766 TiB
933 hours
Avg speed 80.48 MiB/s.
AD gone from 216 to 208.
P/E 4453.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=476114
http://www.diskusjon.no/index.php?ap...tach_id=476111
I know why it's pegged at 30 percent all the time... it's just at a lower clock level. The single-core 1155 Celerons don't use P-states and don't downclock or reduce voltage, but the G530 does. It's always running at 1.4GHz because ASU and the RDC client only use a couple of percent combined at 2.4GHz on both cores, so it never really gets a chance to hang out at 2.4, as everything possible is disabled on the endurance rig. It's about as stripped down as a modern Win7 system can be.
Host writes: 11,549 GiB
MWI: 48
P/E Cycles: 1,886
POH: 256
Relocated sectors: 4,096
MB/s: 5.97
Kingston SSDNow 40GB (X25-V)
640.57TB Host writes
Reallocated sectors : 05 21
Available Reserved Space : E8 99
POH 5782
MD5 OK
34.02MiB/s on avg (~13 hours)
--
Corsair Force 3 120GB
01 88/50 (Raw read error rate)
05 2 (Retired Block count)
B1 56 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 725327 (Raw writes) ->708TiB
F1 965294 (Host writes) ->943TiB
MD5 OK
106.87MiB/s on avg (~13 hours)
power on hours : 2756
Okay... just got home. Somehow, the system went to sleep even though sleep was disabled. So I lost the past few days and it's basically right where my last update was.
UPDATE
I power cycled the drive and got 14764 PE cycles, so that would be the number for the last update -- at least they had started updating in real time, but it was lagging behind.
Oh and POR RECOVERY COUNT is unsafe shutdown count. I just tested it.
Samsung 830 64GB Update, Day 43
FW:CXM02B1Q
TiB Written:
258.3029
GiB written:
264502.22
Avg MB/s
126.54
PE Cycles
14764
Reallocated Sectors
40960
20 Blocks, holding steady
1041 Hours
https://www.box.com/shared/static/s8...nbzycqc42j.png
https://www.box.com/shared/static/ye...sj5z7ji179.png
The drive essentially idled for 76 hours; it remained powered on while the system was sleeping.
Annoyingly, the drive will no longer update Wear Leveling Count! Argh!
Host writes: 12,118 GiB
MWI: 43
P/E Cycles: 2,071
POH: 283
Relocated sectors: 4,096
MB/s: 6.31
**Enter a new challenger**
****OCZ Vertex Turbo 64****
Here is ASU and SSDLife.
https://www.box.com/shared/static/sd...8ofib8zpg2.jpg
https://www.box.com/shared/static/ge...slcmqsd1b6.png
I ran the NAND cleaner on it again, but I don't think the NAND cleaner does what I think it does.
Have fun and Good Luck :)
Thanks Bluestang!
But I should never have run the NAND cleaner. Speed is great at first, but slows down as it struggles with the marginal blocks which have to be re-flagged.
I was always under the impression that the cleaner would help identify bad blocks, not hide them. I guess if you have a drive with a lot of bad blocks, you may have a hard time destructive-flashing firmware -- which I did, incidentally, even though it's a brand-new drive.
I found this in a Toshiba product spec sheet. It would have been a lot nicer if they had included a scale, but it's the only thing I have come across that shows the relationship between P/E cycles and data retention, so it's a start at least.
http://img716.imageshack.us/img716/5...aretention.png
Hmmm, it seems like they are saying that data retention time is constant up to "X" erase cycles, then decreases geometrically (a power law, something like C^-0.3) until 10X erase cycles is reached (retention having dropped to about half its initial value at 10X cycles), and then decreases at an even faster geometric rate beyond 10X erase cycles.
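To illustrate that reading (assuming the ~C^-0.3 slope is roughly right, which is only an eyeball estimate from the chart), the halving at 10X falls straight out of the exponent:

# Illustrating the power-law reading above (assumption: retention scales
# roughly as cycles**-0.3 beyond the rated point "X").
retention_at_X = 1.0  # normalised retention time at X erase cycles

print(retention_at_X * 10 ** -0.3)   # ~0.50 -> about half the retention at 10X cycles
print(retention_at_X * 100 ** -0.3)  # ~0.25 -> and a quarter by 100X, if the same slope held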
I think they just made that up. I don't think they have any actual data to support that chart, except in the most abstract sense.
Maybe what the super-vague Toshiba chart is showing is a 0 percent chance of losing data retention up to PE cycles = rated, an increasing chance up to 2x the rated count, and a greater chance (greatly reduced retention time) between 2x and 3x rated PE cycles. Once a drive hits 3x rated PE cycles, the chance of retention failure is very high (and/or the retention time rapidly decreases)...
But I don't think it works like that. And the chart could be referring to SLC which is probably quite different from MLC in that respect.
Christopher
Could you fill in the details (I've posted a preliminary update on the drives in post #1) on the new drive?
I expect you did not physically verify NAND?
Kingston SSDNow 40GB (X25-V)
643.21TB Host writes
Reallocated sectors : 05 21
Available Reserved Space : E8 99
POH 5806
MD5 OK
33.23MiB/s on avg (~37 hours)
--
Corsair Force 3 120GB
01 82/50 (Raw read error rate)
05 2 (Retired Block count)
B1 54 (Wear range delta)
B6 1 (Erase Fail Count)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 731957 (Raw writes) ->715TiB
F1 974108 (Host writes) ->951TiB
MD5 OK
106.86MiB/s on avg (~37 hours)
power on hours : 2779
http://www.ssdaddict.com/ss/Endurance_cr_20120118.png
No, your interpretation makes no sense. It is clearly a log10 vs. log10 chart, with the decades clearly visible, which is why we are looking at X, 10X, 100X, and not X, 2X, and 3X as you say.
More importantly, data retention DOES decrease significantly before MWI is exhausted. Very roughly, data retention should be at least 10 years at 10% of MWI, 5 years at 50% of MWI, and 1 year at 100% of MWI.
I don't think it makes sense at all... That's my interpretation.
That graph can be found in 6 out of the 6 product datasheets I have acquired for Toshiba drives. As content of a product datasheet, I can't imagine they made it up, as any form of misleading claim would be problematic.
I found a Samsung datasheet that stated 10 years for data retention. I suspect 10 years from new and 1 year after the MWI has run out.
Maybe :D
Today's update.
Kingston V+100
338.6542 TiB
1939 hours
Avg speed 25.03 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=476293
http://www.diskusjon.no/index.php?ap...tach_id=476290
Intel X25-M G1 80GB
253.2724 TiB
20347 hours
Reallocated sectors : 00
MWI=132 to 131
MD5 =OK
42.33 MiB/s on avg
http://www.diskusjon.no/index.php?ap...tach_id=476292
http://www.diskusjon.no/index.php?ap...tach_id=476291
m4
262.4960 TiB
969 hours
Avg speed 79.73 MiB/s.
AD gone from 208 to 203.
P/E 4618.
MD5 OK.
Reallocated sectors : 00
http://www.diskusjon.no/index.php?ap...tach_id=476294
http://www.diskusjon.no/index.php?ap...tach_id=476289
The Kingston is back up and running. Everything looks OK for now. I also did some maintenance on the Intel G1 after the screenshot to regain some write speed.
Samsung 830 64GB Update, Day 45
FW:CXM02B1Q
TiB Written:
268.6338
GiB written:
275081.03
Avg MB/s
126.54
PE Cycles
14881 <-- not the true count as it stopped realtime updates
Reallocated Sectors
40960
20 Blocks, holding steady
1066Hours
https://www.box.com/shared/static/51...87ib0gf4o6.png
https://www.box.com/shared/static/th...codh8s57g4.png
So there are still many questions about the SMART attributes, but at least POR recovery count is unsafe shutdown count, and wear leveling is PE cycles and MWI. But I'm puzzled at the lack of new bad blocks. That, and why PE cycles won't update now -- I think it should start in a couple of days.
OCZ Vertex Turbo 64 Update Day 0
TiB 3.4471
GiB 3529.87
Avg MB/s 82.50
Avg Erase Count
62 up from 1
MWI 97 down from 100
1 Program Fail
13 Erase Fail
0 Read Fail
12 Hours
https://www.box.com/shared/static/1t...bnhm60co3s.png
https://www.box.com/shared/static/iq...dm2fqbezeh.png
I've also decided to start testing one of the MTRON 16GB SLCs. I have the 3.5" 7000 PRO, and the MOBI 3000 should be here in a couple days. I think I'll test the MOBI since it's a 2.5" drive and will be easier to cram in the endurance rig.
Neither drive will have much SMART info... but I don't really have a use for a 16GB drive without NCQ, TRIM, or much speed, other than endurance testing. I think performance between the two drives should be almost identical. They're not really good for much, and installing Win7 on the 7000 PRO took hours. Booting takes several times longer than it should, though it doesn't stutter and the drive does have pretty high write speeds relative to its capacity. I'll take the drive apart to verify how much and what kind of NAND is onboard -- it's not clear whether the drives use OP of any kind, but based on its Korean origin I would assume Samsung SLC similar to what is in my Vertex EX.
Host writes: 12,632 GiB
MWI: 37
P/E Cycles: 2,283
POH: 306
Relocated sectors: 4,096
MB/s: 5.97
New firmware for Samsung 830 available today from Samsung: CXM03B1Q
Christopher
Did that VT64 come brand new, sealed with FW 1.7 already on it?
Also, if it came with 13 bad blocks out of the box, they should have been listed under "Initial Bad Block Count". No?
Bluestang,
As a matter of fact you are correct. But mine, when unsealed, had 18 initial bad blocks and 5 erase failures. I had a picture of this, but the earliest one I can find was after a few GB of writes from ASU and CDM.
It only had a couple of KB of writes on it new out of the box. But that, combined with the runtime bad blocks, is why MWI was at 97 out of the box. I've been trying to get some answers about this for quite some time over on the OCZ forums, but didn't get much in the way of help.
For instance, I didn't know that bad blocks factored into MWI -- but that's clearly what happened. I also didn't know that the purpose of the NAND cleaner was to mark bad blocks as good so that FW could be reflashed -- which makes me wonder if you could have gotten a couple more GB out of the M225 with a NAND cleaning and D-flash, since at one point I couldn't D-flash because there were too many bad blocks -- which doesn't make any sense. If they're marked as bad from the factory, it's like they never existed. If they're not, and they pop up at runtime, it does affect MWI.
Anyway, I screwed up last night and accidentally stopped the Vertex, so it's lost about 8 hours of running time. It's averaging about 82MB/s. But right after the NAND cleaning, it averaged about 140MB/s for a couple of loops.
Kingston SSDNow 40GB (X25-V)
645.92TB Host writes
Reallocated sectors : 05 21
Available Reserved Space : E8 99
POH 5830
MD5 OK
33.03MiB/s on avg (~61 hours)
--
Corsair Force 3 120GB
01 90/50 (Raw read error rate)
05 2 (Retired Block count)
B1 52 (Wear range delta)
B6 1 (Erase Fail Count)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 738755 (Raw writes) ->721TiB
F1 983142 (Host writes) ->960TiB
MD5 OK
106.83MiB/s on avg (~61 hours)
power on hours : 2803
Host writes: 13,162 GiB
MWI: 33
P/E Cycles: 2,446
Relocated sectors: 4,096
MB/s: 6.27
OK I have a theory. Samsung support stated that:
The raw value on this attribute [177] is how many times it can prolong the life of a specific block.
Assuming the normalised value is the MWI, the raw value should be 3,000 by the time it gets to 1. It's not quite working out that way. I will end up with more than 3,000 by the time I get to MWI 1. I think the raw value above 3,000 might therefore reflect the number of times that a P/E cycle was avoided due to block management.
What do you reckon?
For my data retention test I have to decide; do I get to MWI 1 or do I use the RAW value of 3,000?
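Just to make the expectation explicit, here is the simple linear model I have in mind (a sketch only; it assumes 3,000 rated P/E cycles and a linear MWI scale, neither of which Samsung has confirmed):

# Sketch only: assumes 3,000 rated P/E cycles and a linear MWI scale
# (100 when new, 1 at the rated count). Neither assumption is confirmed.
RATED_PE = 3000

def expected_mwi(raw_pe_count):
    return max(1, 100 - round(99 * raw_pe_count / RATED_PE))

print(expected_mwi(0))     # 100
print(expected_mwi(1500))  # ~50
print(expected_mwi(3000))  # 1 -> if the raw value ends up well above 3,000 at MWI 1,
                           #      something other than simple linear counting is going on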
The threshold is 0
Samsung 830 64GB Update, Day 47
FW:CXM02B1Q
TiB Written:
285.9084
GiB written:
292770.25
Avg MB/s
125.90
PE Cycles
14881 <-- not the true count as it stopped realtime updates
Reallocated Sectors
40960
20 Blocks, holding steady
1106 hours
https://www.box.com/shared/static/r4...lozmq5t4ya.png
https://www.box.com/shared/static/9n...pqimjj3cbp.png
OCZ Vertex Turbo 64 Update Day 3
TiB 11.3107
GiB 11582.13
Avg MB/s 82.86
Avg Erase Count
205 up from 62
MWI 96 down from 97
1 Program Fail
14 Erase Fail
0 Read Fail
49 Hours
https://www.box.com/shared/static/p1...31p6742qnf.png
https://www.box.com/shared/static/bo...srhkx8yoie.png
The Vertex Turbo is slow, and the lack of new failed blocks on the Samsung puzzles me greatly.
It's been almost a month since the last block got reallocated on the Samsung...
How does a Vertex Turbo come with 1.7 already loaded? Isn't 1.7 fairly recent, and isn't the Turbo an older drive?
You might want to pop that sucker open and take a looksee!
Yes. It's a brand-new drive. Came with 1.7 FW on it. It's also using the new-style plastic chassis (what I call revision 2 -- the first plastic chassis was not sized right and won't fit in many laptops).
I bought a 128GB Turbo last October as well. It had the mis-sized plastic chassis and came with 1.7FW.
BUT -- I second your concern. I asked OCZ about the 128GB Turbo, and got a response -- just a bizarre one. What I was trying to figure out is if OCZ is taking RMAs and returns, doing a new "factory flash" and putting the drives in a new plastic chassis (the metal ones get scratched really easily). I'm not saying that's what they're doing, but I at least consider it a possibility. They would have some drives in stock for RMAs/warranties, so it could be that they do maintain some old stock.
Kingston SSDNow 40GB (X25-V)
648.80TB Host writes
Reallocated sectors : 05 21
Available Reserved Space : E8 99
POH 5855
MD5 OK
32.95MiB/s on avg (~86 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 49 (Wear range delta)
B6 1 (Erase Fail Count)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 745973 (Raw writes) ->728TiB
F1 992732 (Host writes) ->969TiB
MD5 OK
106.83MiB/s on avg (~86 hours)
power on hours : 2828
I've not been power cycling the Samsung for PE count updates because I thought maybe 177 would start working... and a few hours ago, it did. So right now, if I power cycle the drive, I could get a true PE count... but then the drive would stop updating for a few days. It takes maybe 60-72 hours of continuous usage after the 02FW update for the drive to update WLCount in real time.
I'm trying to order a 520 right now... A couple places have them in stock here.
One place that had some 60GB models took them down after midnight here; I ordered a 120GB model from another retailer. If I can find the 60GB model, it's getting tested!
Host writes: 13,691 GiB
MWI: 27
P/E Cycles: 2,636
POH: 355
Relocated sectors: 4,096
MB/s: 6.27
The test rig was unresponsive when I checked, so I had to reboot.
From what I can find there are no problems, but it was caused by the Force 3.
There were 0 files left on the drive, so it is related to the cleanup part of the loop, i.e. delete -> TRIM.
Looks like it happened last night so there are a few hours lost, will be re-starting the test shortly, just performing a few checks.
edit:
I re-checked and it stopped working 5 hours ago; not that bad. It's been months since there were any issues, though.
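For anyone new to the thread, this is roughly the shape of the loop the endurance apps run -- not the actual ASU code, just a sketch of the write / MD5-verify / delete (TRIM) cycle, with a made-up target path and file size:

# Not ASU's actual code -- just a sketch of the write / verify / cleanup cycle.
# The target path and file size are made up for illustration.
import hashlib, os

TARGET = r"X:\endurance"           # hypothetical test volume
FILE_SIZE = 512 * 1024 * 1024      # hypothetical 512 MiB per file
CHUNK = 1 << 20

def write_file(path):
    md5 = hashlib.md5()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // CHUNK):
            data = os.urandom(CHUNK)   # (the real tools vary the compressibility)
            md5.update(data)
            f.write(data)
    return md5.hexdigest()

def md5_of(path):
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for data in iter(lambda: f.read(CHUNK), b""):
            md5.update(data)
    return md5.hexdigest()

while True:
    files = {os.path.join(TARGET, f"loop_{i}.bin"): None for i in range(10)}
    for p in files:                                          # fill phase
        files[p] = write_file(p)
    assert all(md5_of(p) == h for p, h in files.items())     # the "MD5 OK" check
    for p in files:                                          # cleanup phase: delete, which the OS follows with TRIM
        os.remove(p)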
Kingston SSDNow 40GB (X25-V)
651.07TB Host writes
Reallocated sectors : 05 21
Available Reserved Space : E8 99
POH 5875
MD5 OK
38.28MiB/s on avg (~5 hours)
--
Corsair Force 3 120GB
01 94/50 (Raw read error rate)
05 2 (Retired Block count)
B1 48 (Wear range delta)
B6 1 (Erase Fail Count)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 751464 (Raw writes) ->734TiB
F1 1000026 (Host writes) ->977TiB
MD5 OK
106.96MiB/s on avg (~5 hours)
power on hours : 2848
It's finally past 1PB but there's still a few days more until 1PiB is reached.
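The PB vs PiB gap is just the decimal/binary unit difference; using the F1 host-write counter from the update above (assumed to be reported in GiB):

# Decimal petabytes vs binary pebibytes, from the F1 host-write counter above
# (assumed to be reported in GiB).
host_writes_gib = 1000026

bytes_written = host_writes_gib * 2**30
print(bytes_written / 10**15)   # ~1.07 PB  -> already past 1 PB
print(bytes_written / 2**50)    # ~0.95 PiB -> still a few days short of 1 PiB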
1000TB is pretty impressive, but the X25-V is the better story. If it can make it for another four months or so, it will have been a year.
Samsung 830 64GB Update, Day 48
FW:CXM02B1Q
TiB Written:
300.0096
GiB written:
307209.83
Avg MB/s
129.20
PE Cycles
15107 <-- not the true count as it stopped realtime updates
Reallocated Sectors
40960
20 Blocks, holding steady
1138 hours
https://www.box.com/shared/static/xq...k3nqyktr4p.png
https://www.box.com/shared/static/qf...r8iqlblglf.png
OCZ Vertex Turbo 64 Update Day 3
TiB 20.4309
GiB 20921.24
Avg MB/s 83.03
Avg Erase Count
370 up from 205
MWI 93 down from 96
1 Program Fail
16 Erase Fail (up from 14)
0 Read Fail
81 Hours
https://www.box.com/shared/static/oz...t27qb25mf5.png
https://www.box.com/shared/static/ql...laix2hj8oz.png
The 830's PE cycle count is working. The Turbo is still much slower than I was hoping it would be.
Still no more reallocations on the Samsung, and it just turned over 300TiB.
Host writes: 14,202 GiB
MWI: 21
P/E Cycles: 2,853
POH: 378
Relocated sectors: 8,192
MB/s: 5.96
From HD Sentinel:
“There are 8192 bad sectors on the disk surface. The contents of these sectors were moved to the spare area. 2 errors reported during write to the device”.
Since upgrading to 02FW I've not had another bad block. Incidentally, I also believe there was something wrong with the 01FW's wear leveling and... possibly everything else too.
The first thirty days saw 20 reallocated blocks, and that's where it's been for the past 120TiB. I also figured wear leveling count was not average erase count but possibly maximum erase count, but you'd figure there would be an upper boundary to PE cycles either way. I just find it hard to believe the Samsung, using 27nm NAND and traditional ECC, could hit those kinds of numbers (15,000+ PE cycles at the moment).
One of my X25-M’s has 7 bad sectors. MWI = 99 and I only have 4,990 GB of host writes on it. I can’t be 100% sure, but I believe the 7 bad sectors occurred after I switched the drive from being an OS drive to a static data drive.
I think that the 01FW was really whacky on the 830, and that WA/WL were not as awesome as I'd hoped. Also, the drive while much faster now, uses about the same PE cycles as it did when it was only writing 5TB a day.
3.3698TiB was written (since the last update) with 59 PE cycles, which means the drive consumes one cycle for every 58.49GiB. This is much, much better than the drive was doing on 01FW. I assume this is partly why the drive isn't flagging blocks like it used to. Speed continues to increase a little every day.
The Turbo uses 1 PE for every 56.83GiB. I also believe this means I shouldn't expect much of a speed increase over time (maybe a little more).
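For reference, the per-cycle figures are just the unit conversion:

# GiB written per P/E cycle consumed, as quoted above.
def gib_per_pe(tib_written, pe_cycles_used):
    return tib_written * 1024 / pe_cycles_used

print(gib_per_pe(3.3698, 59))   # ~58.49 GiB per cycle for the 830 since the last update
# The Turbo's 56.83 GiB/cycle figure comes from the same calculation over its own interval.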
From a Micron spec sheet:
1 page = 4,096 bytes (4 KiB)
1 block = 1 MiB
1 LUN = 4,096 MiB
Min number of valid blocks per LUN = 3,996
Difference between max and minimum = 100 blocks or 100 MiB
An invalid block is one that contains at least one page that has more bad bits than can be corrected by the minimum required ECC.
If a sector = one page (4k), 40,960 pages (sectors) = 40MiB.
Even if all the bad sectors were on one LUN (unlikely), the LUN would still be operating within acceptable parameters.
http://img14.imageshack.us/img14/892...management.png
I think the Samsung uses 8K pages (based on some info on a far flung part of the Samsung website that I can't seem to find at the moment), but I'm not positive. 20 blocks is nothing... (it's still at 99/0/20). I was just expecting some more, especially if PE count is accurate.
Also, if you look at the smart data for my 830, notice that Program Fail = 0, Erase Fail = 10, Runtime Fails = 10 blocks, but the total number of reallocated blocks is 20. This could mean that the attributes are mislabeled, but I don't think so. Replaced blocks just = 2x Runtime fails.