573.3TB. 7424 reallocated sectors.
Basically, the reallocated sectors completely stopped increasing a couple of days back...
Today's update:
Kingston V+100
263.3827 TiB
1054 hours
Avg speed 26.82 MiB/s
AD still 1.
168= 1 (SATA PHY Error Count)
P/E?
MD5 OK.
Reallocated sectors : 00
For some strange reason my write speed has dropped like a stone. I've tried a secure erase and deleting, but I get the same results. Stranger still, when I copy all 40GB back to the Kingston the speed is around 130 MiB/s....
Intel X25-M G1 80GB
67.5969 TiB
19909 hours
Reallocated sectors : 00
MWI=113 to 110
MD5 =OK
46.86 MiB/s on avg
1: AMD FX-8150-Sabertooth 990FX-8GB Corsair XMS3-C300 256GB-Gainward GTX 570-HX-750
2: Phenom II X6 1100T-Asus M4A89TD Pro/usb3-8GB Corsair Dominator-Gainward GTX 460SE/-X25-V 40GB-(Crucial m4 64GB /Intel X25-M G1 80GB/X25-E 64GB/Mtron 7025/Vertex 1 donated to endurance testing)
3: Asus U31JG - X25-M G2 160GB
Unless I'm going soft in the head in my twilight years (of my 20s), the M225 --> Vertex Turbo is using 51nm Samsung NAND.
Anvil, I'll probably do a few loops with the 311 this weekend. What I really like about it (and part of why I really wish you could get a 40GB version) is the fact it has TRIM, and all the 3.0 Toolbox goodies.
I've been playing with the new Mushkin I bought, a 120GB Deluxe.
M225 --> Vertex Turbo:
Last edited by Christopher; 11-20-2011 at 12:10 AM.
There was another BSOD 101 this morning, so I checked and it looks like it had restored an old/default setup (there's what looks like a 2-hour pause).
I have corrected the config and it should be OK.
Kingston SSDNow 40GB (X25-V)
487.70TB Host writes
Reallocated sectors : 05 12
Available Reserved Space : E8 99
MD5 --
--.--MiB/s on avg (~- hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 55 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 396935 (Raw writes) ->388TiB
F1 528336 (Host writes) ->516TiB
MD5 --
---.--MiB/s on avg (~-- hours)
power on hours : 1531
B1 is at 55. (3 up from last reporting)
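For anyone following along, the TiB figures next to E9 and F1 come straight from the raw SMART values, which SandForce drives report in GiB. A minimal sketch using the numbers reported above (the units-in-GiB assumption matches how these figures were derived here, but is worth double-checking per drive/firmware):

```python
# Sketch: converting SandForce SMART write counters (raw values assumed
# to be in GiB) to TiB, using the Force 3 120GB numbers reported above.

raw_writes_gib = 396_935   # E9: writes committed to NAND
host_writes_gib = 528_336  # F1: writes received from the host

print(f"NAND writes: {raw_writes_gib / 1024:.0f} TiB")   # -> 388 TiB
print(f"Host writes: {host_writes_gib / 1024:.0f} TiB")  # -> 516 TiB

# Raw/host below 1.0 means compression is saving NAND wear.
wa = raw_writes_gib / host_writes_gib
print(f"Effective write amplification: {wa:.2f}")  # -> 0.75
```

Note the raw-writes counter being lower than host writes is the SandForce compression at work on the ASU test data.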
-
Hardware:
@Christopher
The 120GB looks great!
It looks like C States are disabled or some background app is running?
Anvil,
The only app that should be running is Google Chrome. The first thing I did after installing the drive was put a partition on it and run CDM (as a secondary drive). I cloned a Windows installation to the drive and later I ran ASU (as the only drive in the system). There, I believe Skyrim is minimized in the background. But all C states are enabled. The CDM shot is fresh out of box performance. Nothing had been run before it.
EDIT
John,
here is an AS SSD shot. 4Ks are basically halved.
The CDM shot was new out of box (and overprovisioned too). CDM shows the highest run, while AS SSD shows the average.
Compared to every other bench, AS SSD makes every drive look like a dog. The ASU pic was with 46 percent [Applications] compression.
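The best-run-versus-average distinction explains a lot of the gap on its own. A toy illustration with made-up pass numbers (not measurements from any drive here):

```python
# Toy illustration (hypothetical numbers): a benchmark that reports the
# best of several passes (like CDM) will always read at least as high as
# one that reports the average (like AS SSD), on identical hardware.
passes_mb_s = [21.0, 19.4, 12.2, 18.7, 20.5]  # made-up 4K read passes

best = max(passes_mb_s)
average = sum(passes_mb_s) / len(passes_mb_s)

print(f"CDM-style (best pass):  {best:.1f} MB/s")
print(f"AS SSD-style (average): {average:.1f} MB/s")
```

One slow pass (a background hiccup, a C-state wakeup) drags the average down while leaving the best pass untouched.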
I'm working on the 311 now. It's pretty impressive.
Last edited by Christopher; 11-20-2011 at 01:48 PM.
Last edited by johnw; 11-20-2011 at 01:24 PM.
I added AS SSD results above. I don't think the Chronos' 4K reads were particularly high with CDM, but AS SSD's seem excessively low. My M4 only scores about 12MB/s reads @ 4K with AS SSD.
Christopher
That is low. My Intel 320 120GB is right at 19MB/s @ 4K on AS SSD.
I'm using the iastor driver on an Intel P55 SATA 3.0Gb/s Asus board (P7P55D-E PRO).
A good board but nothing special.
CrystalDiskMark is 21MB/s @ 4K.
Last edited by Hopalong X; 11-20-2011 at 03:48 PM.
I'm not a big AS SSD fan to begin with. I ran the M4 and got really low 4K reads on an ICH8 SATA II, so I think it's just AS SSD. The 120GB Chronos is silly fast.
I've never been a SF fan until recently, but with the two Mushkins and now a Vertex LE in the mail, I've become a believer. I don't need benchmarks to tell me how fast the Chronos DX 120 is. The speed increase over the Deluxe 60 is intensely tangible.
That seems to be the opinion of many that have used AS SSD.
It gives you a comparison between drives on the same motherboard setup. That is about all.
Almost as useful as the WEI score.
I've been playing around with the 311. Not really sure what the best way to run ASU on it is. I'm not sure how to mix static data and free space on an 18GB drive. Maybe 6GB static, 6GB min free space?
40GB with 25nm SLC for the same price. It's all I ask, Intel. Well, that and maybe you send me a couple gratis.
Last edited by Christopher; 11-20-2011 at 06:27 PM.
This proves one and only thing yet again :
-do not rely on SF drives and avoid like the plague these doomed controllers
-34nm will always be better than 25nm (just like the 50nm Vertex has beaten everything so far, but it will probably be beaten by a capable C300 of the same size due to lower WA in the C300; no amount of controller magic will be able to overcome the physical limitations of 25nm NAND)
The industry is trying to fool us with 22nm and even lower. They will soon realise that unless another material is used, things cannot scale below a certain limit without severe consequences. The game is up for the SSD and semiconductor industry. No coincidence the Intel SB-E has so many problems with virtualization and the like.
Kingston SSDNow 40GB (X25-V)
490.58TB Host writes
Reallocated sectors : 05 12
Available Reserved Space : E8 99
MD5 OK
35.53MiB/s on avg (~24 hours)
--
Corsair Force 3 120GB
01 92/50 (Raw read error rate)
05 2 (Retired Block count)
B1 53 (Wear range delta)
E6 100 (Life curve status)
E7 10 (SSD Life left)
E9 403527 (Raw writes) ->394TiB <-- corrected from 388
F1 537108 (Host writes) ->525TiB
MD5 OK
106.38MiB/s on avg (~23 hours)
power on hours : 1555
B1 is down again from 55 to 53.
Last edited by Anvil; 11-21-2011 at 02:12 PM.
I'm not actually certain I know what you're talking about. Yes, 5xnm is better than 3xnm is better than 2xnm (at least as far as endurance is concerned, though not necessarily speed). But the one controller that can really help mitigate that loss of endurance is the SandForce products (which you think are terrible). If you only have to write 70 percent to NAND as opposed to a traditional controller, you can overcome much of the endurance deficit right there. Intel's 25nm HET MLC is scheduled to roll out for all of their SSDs in the roadmap, and I can't see how that's necessarily a terrible thing. I've been stockpiling older drives to the best of my ability, but the SSD as we know it today is just a bridge to some other technology a few years down the road.
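The "write 70 percent to NAND" point can be put in rough numbers. A back-of-the-envelope sketch; the P/E cycle figures below are illustrative assumptions, not specs for any particular drive:

```python
# Sketch of the endurance argument: if a compressing controller only
# commits 70% of host data to NAND, host-level endurance scales up by
# 1/0.70. P/E cycle numbers here are assumed for illustration only.

pe_cycles_34nm = 5000   # assumed rated P/E cycles for 34nm MLC
pe_cycles_25nm = 3000   # assumed rated P/E cycles for 25nm MLC
nand_fraction = 0.70    # fraction of host writes that actually hit NAND

effective_25nm = pe_cycles_25nm / nand_fraction
print(f"25nm + compression: ~{effective_25nm:.0f} host-level cycles")  # ~4286
print(f"34nm, no compression: {pe_cycles_34nm} host-level cycles")
```

Under these assumed figures, compression claws back a large share (though not all) of the 34nm-to-25nm endurance gap.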
I wasn't a big SF fan until very recently, so I was always suspicious. Soon there will be even more drives/controller/nand combinations on the market, but SF is right now probably the best overall. My Chronos DX died from severe abuse, not normal use. The Force F40-A died in the same manner. All that really says is don't endurance test your drives if you don't want them to die, and that's not news. My 120GB Vertex Turbo cost about 6x the price I paid for mine when new, and if every drive used 50nm Samsung NAND still, I might not have one because they'd be so damn expensive. Compared to the prices I'm seeing for mechanical storage, 1$/GB seems super reasonable to me. I'd rather have a drive that might only last 8 years in normal usage than a drive I couldn't afford [and the economics of nand production make that possible].
Who the hell can keep track of what Intel does? VT-d is not the first feature I look for when CPU shopping, but I have to say the most disappointing thing about SB-E is that Cherryville didn't launch with it - complete with 25nm NAND.
Last edited by Christopher; 11-21-2011 at 05:16 AM.
?
What are you basing your thoughts on?
The SF drives are great imho and I rely on them daily.
Intel SB-E has just the same or better support for VT as earlier desktop chipsets/processors had; you are reading too much into rumors.
If you check with Intel, VT-d is supported and it wasn't on Gulftown.
If there was some truth to the rumors (about VT-d) there are virtually no applications that support VT-d for desktop usage.
Anyways, Intel clearly states that VT-d is supported on C1. (Link)
Last edited by Anvil; 11-21-2011 at 04:31 AM.
M225->Vertex Turbo 64GB Update: Drive is Dead EDIT: Maybe Not...see end of post!
Came in to work this morning to an ASU error. The drive can no longer be seen by the OS (W7 or XP). Red LED(s) are on inside the drive case, FWIW. Took a screenshot before I rebooted to see if it helped...
Last info available according to CDI...
823.31 TiB (905.23 TB) total
21xx hrs (Torture), 2943 hrs (Power-On)
16070 Raw Wear
118.89 MB/s avg for the last 56.73 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 17.
1=Bnk 6/Blk 2406
2=Bnk 3/Blk 3925
3=Bnk 0/Blk 1766
4=Bnk 0/Blk 829
5=Bnk 4/Blk 3191
6=Bnk 7/Blk 937
7=Bnk 7/Blk 1980
8=Bnk 7/Blk 442
9=Bnk 7/Blk 700
10=Bnk 2/Blk 1066
11=Bnk 7/Blk 85
12=Bnk 4/Blk 3192
13=Bnk 7/Blk 280
14=Bnk 3/Blk 2375
15=Bnk 7/Blk 768
16=Bnk 7/Blk 765
17=Bnk 7/Blk 182
Bank 7 has 9 bad Blocks.
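The per-bank distribution in that list is easy to tally programmatically; a quick sketch using the bank/block pairs transcribed from the CDI readout above:

```python
# Tally the failed-block list above by bank, confirming the per-bank count.
from collections import Counter

failed_blocks = [  # (bank, block) pairs from the CDI list
    (6, 2406), (3, 3925), (0, 1766), (0, 829), (4, 3191), (7, 937),
    (7, 1980), (7, 442), (7, 700), (2, 1066), (7, 85), (4, 3192),
    (7, 280), (3, 2375), (7, 768), (7, 765), (7, 182),
]

per_bank = Counter(bank for bank, _ in failed_blocks)
print(per_bank.most_common(1))  # -> [(7, 9)]: Bank 7 leads with 9 bad blocks
```

The heavy clustering on one bank (9 of 17 failures) is consistent with one die/channel wearing out ahead of the rest.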
I'll boot back in to W7 to see if there are any errors in the event viewer or anything, but like the others, it looks like it just died without too much warning.
R.I.P. M225
EDIT: So I rebooted back in to W7 to check the logs, but nothing was really telling. The AHCI BIOS paused for about 15-20 seconds trying that port before finding nothing and continuing, and the drive was still not recognized in the OS. So I figured what the hell, unplugged the drive while still in W7, waited a few seconds, and plugged it back in and.....
What do you know, the drive is recognized and accessible. WTF, I guess I spoke too soon. Well, we'll see, as I'm running MD5 on the static data right now. I'll post back in a bit when it's done.
Last edited by bluestang; 11-21-2011 at 06:14 AM. Reason: Re-Incarnation
24/7 Cruncher #1
Crosshair VII Hero, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer II 420 AIO, 4x8GB GSKILL 3600MHz C15, ASUS TUF 3090 OC
Samsung 980 1TB NVMe, Samsung 870 QVO 1TB, 2x10TB WD Red RAID1, Win 10 Pro, Enthoo Luxe TG, EVGA SuperNOVA 1200W P2
24/7 Cruncher #2
ASRock X470 Taichi, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer 280 AIO, 2x16GB GSKILL NEO 3600MHz C16, EVGA 3080ti FTW3 Ultra
Samsung 970 EVO 250GB NVMe, Samsung 870 EVO 500GBWin 10 Ent, Enthoo Pro, Seasonic FOCUS Plus 850W
24/7 Cruncher #3
GA-P67A-UD4-B3 BIOS F8 mod, 2600k (L051B138) @ 4.5 GHz, 1.260v full load, Arctic Liquid 120, (Boots Win @ 5.6 GHz per Massman binning)
Samsung Green 4x4GB @2133 C10, EVGA 2080ti FTW3 Hybrid, Samsung 870 EVO 500GB, 2x1TB WD Red RAID1, Win10 Ent, Rosewill Rise, EVGA SuperNOVA 1300W G2
24/7 Cruncher #4 ... Crucial M225 64GB SSD Donated to Endurance Testing (Died at 968 TB of writes...no that is not a typo!)
GA-EP45T-UD3LR BIOS F10 modded, Q6600 G0 VID 1.212 (L731B536), 3.6 GHz 9x400 @ 1.312v full load, Zerotherm Zen FZ120
OCZ 2x2GB DDR3-1600MHz C7, Gigabyte 7950 @1200/1250, Crucial MX100 128GB, 2x1TB WD Red RAID1, Win10 Ent, Centurion 590, XFX PRO650W
Music System
SB Server->SB Touch w/Android Tablet as a remote->Denon AVR-X3300W->JBL Studio Series Floorstanding Speakers, JBL LS Center, 2x SVS SB-2000 Subs
My 2 cents. For enterprise applications SF drives are a great solution. That is what they were designed and optimised for. For client applications you get a great boost from the spare area freed up by compressing the OS and application installs, which saves around 4GB of NAND writes. After that, however, the savings start to evaporate. We know that SF can easily compress zeros, but it struggles to compress anything else in client-based applications. For sure SF cannot compress anything close to the theoretical compressibility of data in client applications at low QD, so I would argue quite strongly that the theoretical compressibility of application data is nothing like what can be achieved in real life. No-one (to my knowledge) using SF drives for normal client-based activities has been able to demonstrate a significant difference between host and NAND writes.
If the data can’t be compressed, read and write performance can suck. As soon as the drive is in a steady state performance also drops.
With regards to endurance: now that it appears that expiry of the MWI puts data retention at risk, I'm much more interested in how well the SSD performs before it gets to MWI 1 (or MWI 10 in the case of SF drives). The endurance advantage does not show up in the tests to date, even when data can (in theory) be compressed by 46%. For an enterprise workload, however, I bet it works just great.
Admittedly I'm not a fan of SF drives. I just can't see the advantage of a SF solution against any of the current gen drives. If you take reliability and the incompressibility of data into account, I can only see a disadvantage versus current gen drives. It will be interesting to see what Intel do with SF. If they have been able to tweak the firmware it might be interesting, but if it's stock SF firmware I'm struggling to understand why Intel would want to mess with them.
Last edited by Ao1; 11-21-2011 at 06:16 AM. Reason: typos
Ode for the dearly departed.
"Good Night, Good night! Parting is such sweet sorrow, that I shall say good night till it be morrow."
Whoa, Bluestang! Don't write it off so soon. On the same day the Mushkin died, my Agility 60 croaked. After several hours and 8 D flash attempts I was able to resurrect it, but I had to try it in many different systems. Of course, the Agility has 3TB on it...
Please clarify. Have you renamed the M225 to "Jesus, 64GB"?
Last edited by Christopher; 11-21-2011 at 06:22 AM.
It would be interesting to see what happens if you heated it up a bit. Not too much, say around 30 to 40°C for 12 to 24 hours.
SEE UPDATED POST ABOVE.
Ran it last week and everything showed good on the static data.
Not looking all that great, MD5 test seems stuck and SSDLife shows it staying at 2.6 GB of reads for a while now during the MD5 test. Also, CDI now shows that C5 "Read Failure Block Count" went from 0 to 1.
Last edited by bluestang; 11-21-2011 at 06:28 AM.
@Ao1
In my case (VMs), the typical compression ratio is very close to the 46% ratio used in ASU; the more "data" the better the ratio ("data" as in database servers).
If the drive was mostly used for incompressible data I expect I would select a different controller as well, although the SF2 series is a much better performer than the SF1 series was.
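Compressibility of a workload is something you can estimate yourself before picking a controller. A minimal sketch, with the caveat that zlib is only a stand-in here: SandForce's actual compression scheme is proprietary and operates on small blocks, so real results will differ.

```python
# Sketch: estimating how compressible a dataset is, the rough property a
# compressing controller exploits. zlib is a stand-in; the SandForce
# algorithm itself is proprietary and works on much smaller blocks.
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size as a fraction of original size (lower = more savings)."""
    return len(zlib.compress(data)) / len(data)

zeros = bytes(4096)                        # trivially compressible
text = b"typical database row, " * 200    # repetitive, 'data'-like payload

print(f"zeros: {compression_ratio(zeros):.2%}")
print(f"text:  {compression_ratio(text):.2%}")
```

Running something like this against a sample of your own VM images or database files gives a quick sanity check on whether a compressing controller would actually save NAND writes for your workload.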