05: 0
B1: 17
E7: 87%
E9: 25536
EA/F1: 40128
F2: 64
118.50 hours
39.0078 TiB
58.66 MB/s avg
MD5 Ok
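For anyone decoding these dumps: the attribute IDs appear to follow the usual SandForce reporting, where E9 is lifetime NAND writes in GiB and EA/F1 is lifetime host writes in GiB (my reading of the convention, not vendor documentation). A quick sketch of what this update implies:

Code:
# Assumed SandForce SMART mapping: 05 = retired block count,
# B1 = wear range delta, E7 = SSD life left (%), E9 = NAND writes (GiB),
# EA/F1 = host writes (GiB), F2 = host reads (GiB).
smart = {"05": 0, "B1": 17, "E7": 87, "E9": 25536, "EA/F1": 40128}

# With compressible test data the controller writes less to NAND than
# the host sends, so write amplification comes out below 1.0x.
wa = smart["E9"] / smart["EA/F1"]
print(f"Write amplification: {wa:.3f}x")  # ~0.636x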
C300 Update
225.165TiB host writes, 24 MWI, 3803 raw wear indicator, 2048/1 reallocations, 62.9MiB/sec, MD5 OK
SF-1200 nLTT Update
97.38TiB host writes, 64.406TiB NAND writes, 70 MWI, 1030.5 raw wear (equiv), wear range delta 3, 56.35MiB/sec, MD5 OK
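For reference, the "raw wear (equiv)" figure appears to be lifetime NAND writes divided by total NAND capacity; the 64GiB capacity below is my assumption, chosen because it makes the numbers line up:

Code:
# Sketch of how the equivalent raw wear figure seems to be derived
# (assumed 64GiB of NAND behind the SF-1200 controller).
nand_writes_tib = 64.406
nand_capacity_gib = 64

equiv_cycles = nand_writes_tib * 1024 / nand_capacity_gib
print(f"Equivalent P/E cycles: {equiv_cycles:.1f}")  # ~1030.5, matching the update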
244.04TB Host writes
Reallocated sectors 6
MD5 OK
Been travelling this week, will try to catch up this weekend
-
Hardware:
1: AMD FX-8150-Sabertooth 990FX-8GB Corsair XMS3-C300 256GB-Gainward GTX 570-HX-750
2: Phenom II X6 1100T-Asus M4A89TD Pro/usb3-8GB Corsair Dominator-Gainward GTX 460SE/-X25-V 40GB-(Crucial m4 64GB /Intel X25-M G1 80GB/X25-E 64GB/Mtron 7025/Vertex 1 donated to endurance testing)
3: Asus U31JG - X25-M G2 160GB
M225->Vertex Turbo 64GB Update:
Just hit 100.00 TiB
431.8 hours
MWI 44 (drops by 1 for every 50 raw wear)
2809 Raw Wear
110.85 MB/s avg for the last 16.9 hours (on W7 x64)
MD5 OK
C4-Erase Failure Block Count (Realloc Sectors) at 1 (BANK 6, Block 2406)
I wish I were still on the FW that listed 10,000 P/E cycles for this 51nm Samsung NAND.
Then MWI wouldn't bottom out until ~350+ TiB on this drive.
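For anyone checking the math, here's a sketch of how the MWI and the ~350 TiB figure fall out (my reconstruction of the reasoning, not anything the firmware documents):

Code:
# Back-of-envelope math for the M225/Vertex Turbo MWI figures above.
raw_wear = 2809          # average P/E cycles reported by the drive
host_writes_tib = 100.0  # host writes so far

# MWI drops 1 point per 50 cycles, i.e. the current FW rates the NAND
# at 100 * 50 = 5,000 P/E cycles.
mwi = 100 - raw_wear // 50
print(f"MWI: {mwi}")  # 44, matching the update

# If the FW still listed 10,000 P/E cycles for this 51nm Samsung NAND,
# the same write rate would put MWI = 0 at roughly:
tib_at_10k = host_writes_tib * 10_000 / raw_wear
print(f"~{tib_at_10k:.0f} TiB")  # ~356 TiB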
C300 Update
231.027TiB host writes, 22 MWI, 3901 raw wear indicator, 2048/1 reallocations, 63MiB/sec (it's going up?), MD5 OK
SF-1200 nLTT Update from earlier today
102.94TiB host writes, 68.72TiB NAND writes, 68 MWI, 1099.5 raw wear (equiv), 56.4MiB/sec (also going up?), MD5 OK
I'd be surprised if there were more than 18 hours left (from your most recent update/post).
And on top of that, it seems like the drive is still poised to live on after SA177 bottoms out, sheesh.
I wonder if the slight slowdown has something to do with the graph below. (The graph is based on SLC with 100K P/E.)
Either way, I can't wait to find out what happens when the NAND can no longer erase or hold a charge. If the NAND can't erase, how can it hold a charge? What happens when data can no longer be erased or loses its charge? Will the OS become unstable? Will SMART pop up with a warning and then render the drive read-only?
C300 Update
233.917TiB host writes, 21 MWI, 3950 raw wear indicator, 2048/1 reallocations, 63MiB/sec, MD5 OK
SF-1200 nLTT Update
106.063TiB host writes, 71.156TiB NAND writes, 67 MWI, 1138.5 raw wear (equiv), wear range delta 3 (still), 56.45MiB/sec (still speeding up...), MD5 OK
Holy cow, maybe there will be some excitement! I want to see smoke LOL
"Lurking" Since 1977
Jesus Saves, God Backs-Up *I come to the news section to ban people, not read complaints.*-[XC]GomelerDon't believe Squish, his hardware does control him!
05: 0
B1: 23
E7: 82%
E9: 33024
EA/F1: 49472
F2: 64
163.77 hours
58.67 MB/s avg
MD5 Ok
Very interesting that the wear range delta for the SF-1200s with no static data is so much worse than with static data. I know with mine it got to 8 when I had no static data and dropped down to 3 (and has since stayed there) once I added ~35GiB of static data.
With yours, SynbiosVyse, wear range delta has been climbing steadily the entire time (up to 23 now) and you have very little static data.
Is that because more blocks are being written to throughout the drive? With 20-30% static data, there are regions of the drive that are essentially dormant during these tests.
What effects does the wear range delta have (if any) on performance, life, etc?
One would expect wear range delta to be higher with static data but the opposite is true so far. My SF-1200 is ~63% full with static data and my wear range delta shrunk when I added it. You have nearly 0 static data and yours continues to grow. With static data, NAND seems to be evenly used; without static data, NAND doesn't seem to be evenly used.
It's almost like it expects static data as it tries to do wear leveling.
Hard to know the effects of wear range delta...we don't know the units. Is it a percentage between 90th percentile usage and 10th percentile usage? Is it P/E cycles difference between 99th percentile usage and 1st percentile usage? If it's just P/E cycles between most used and least used, a difference of 23 cycles between least and most isn't a big deal when the average is 688 (as yours is now). If it's 23% wear difference between most/least, that's pretty notable and probably will have an effect on endurance. If it's 23% difference between 90th and 10th percentile, that could be really bad for endurance.
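To make that concrete, here's the same delta under the two simplest readings (both hypothetical, since the units are undocumented):

Code:
# Two hypothetical readings of wear range delta (B1) for the F40-A.
delta = 23      # B1 from the latest dump
avg_wear = 688  # average P/E cycles so far

# Reading 1: delta is raw P/E cycles between most- and least-worn
# blocks. 23 cycles against a 688-cycle average is a ~3% spread: trivial.
print(f"As cycles: {delta / avg_wear:.1%} of average wear")

# Reading 2: delta is already a percentage difference in wear. A 23%
# spread would meaningfully shorten the life of the hottest blocks.
print(f"As a percentage: {delta}% spread")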
Also of note from your F40-A: LTT hasn't kicked in yet even though ~20% of your NAND's minimum rated lifetime has been used in less than a dozen days. It seems the target lifetime was set to something very short, maybe even impossibly short (basically deactivating LTT). And even if it does kick in, write speeds may not drop that much.
This seems counter-intuitive to me. If you have static data, wouldn't the NAND be unevenly used since some of the drive is dormant, and hence show a higher wear range delta? There will be a portion of the drive undergoing very few, if any, P/E cycles.
If you don't have static data, the drive will be evenly used because practically the whole drive is being utilized.
477.199 TiB, 1298 hours, sa177: 1/1/39010, sa178: 4/4/968, 103.76 MB/s
The other unknown SMART attribute 235 is still at 99/99/2, just as it was when the SSD was fresh out of the box.
64GB Samsung 470
sa178 is almost there... geez, sa235 still hasn't moved. What do we have to do to get that one to change??
If I remember correctly, the delta between most-worn and least-worn was higher on the V2 when I used static data and remained lower on the V3 with no static data. SF drives rotate static data, so presumably that would increase WA.
The really weird thing though is that the drive has not hit LTT.
Could you please run Anvil's app on 100% incompressible data, just to make sure LTT hasn't kicked in and isn't being masked by compression? I'm sure write speeds would have dropped if LTT had kicked in, although at 46% it would not be by as much as I saw.
This is a retail drive?
Off topic a bit, but I've noticed that one of my Intel drives has gone from 98% back to 100% in the 12 months or so since I switched it from being an OS drive to one that just holds static data. Quite strange, but it has definitely reverted back to 100%.
I don't have a good explanation, I'm just translating what the data says and it says with static data, NAND seems to be evenly used; without static data, NAND doesn't seem to be evenly used.
I don't think there's any modern SSD that doesn't actively rotate static data; it's extremely important for wear leveling. Even the 'old' Indilinx M225->Vertex Turbo has its reported wear range (based on max, average, and min reported P/E cycles) stay fairly constant over time: Max Wear started at 255 cycles above Avg (257 vs. 2), and now Avg is at 2809 with Max just 332 cycles higher.
I think that's exactly what's going on.
With my compression tests I observed a WA of 0.732x with the 46% setting, no static data, and 12GiB free. With the 46% setting, 34.85GiB of static data, and 12GiB free, WA has been 0.782x. With the F40-A, no static data, and 1GiB free, it's been 0.803x. So the F40-A has the highest WA and a growing wear range delta; maybe it's the 1GiB-free setting?
Agreed that it's very odd LTT hasn't activated yet. But the F40-A has been writing to NAND at 47.1MiB/sec, so I really doubt LTT is hiding behind compression, considering the NAND write speed is high and unchanged.
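For anyone following the arithmetic, that 47.1MiB/sec NAND rate looks like the host write speed scaled by the measured WA (my reconstruction; note the thread mixes MB and MiB loosely):

Code:
# Hypothetical reconstruction of the NAND write rate quoted above.
host_speed = 58.67   # MB/s avg from the latest F40-A SMART dump
wa = 0.803           # measured write amplification for the F40-A run

nand_speed = host_speed * wa
print(f"NAND write rate: ~{nand_speed:.1f} MiB/sec")  # ~47.1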
I wonder if it has ANYTHING left to give