Great work there Ao1
Will be interesting to see some more SMART info from other users, both on the SF1 and SF2 controllers.
-
Hardware:
24/7 Cruncher #1
Crosshair VII Hero, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer II 420 AIO, 4x8GB GSKILL 3600MHz C15, ASUS TUF 3090 OC
Samsung 980 1TB NVMe, Samsung 870 QVO 1TB, 2x10TB WD Red RAID1, Win 10 Pro, Enthoo Luxe TG, EVGA SuperNOVA 1200W P2
24/7 Cruncher #2
ASRock X470 Taichi, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer 280 AIO, 2x16GB GSKILL NEO 3600MHz C16, EVGA 3080ti FTW3 Ultra
Samsung 970 EVO 250GB NVMe, Samsung 870 EVO 500GB, Win 10 Ent, Enthoo Pro, Seasonic FOCUS Plus 850W
24/7 Cruncher #3
GA-P67A-UD4-B3 BIOS F8 mod, 2600k (L051B138) @ 4.5 GHz, 1.260v full load, Arctic Liquid 120, (Boots Win @ 5.6 GHz per Massman binning)
Samsung Green 4x4GB @2133 C10, EVGA 2080ti FTW3 Hybrid, Samsung 870 EVO 500GB, 2x1TB WD Red RAID1, Win10 Ent, Rosewill Rise, EVGA SuperNOVA 1300W G2
24/7 Cruncher #4 ... Crucial M225 64GB SSD Donated to Endurance Testing (Died at 968 TB of writes...no that is not a typo!)
GA-EP45T-UD3LR BIOS F10 modded, Q6600 G0 VID 1.212 (L731B536), 3.6 GHz 9x400 @ 1.312v full load, Zerotherm Zen FZ120
OCZ 2x2GB DDR3-1600MHz C7, Gigabyte 7950 @1200/1250, Crucial MX100 128GB, 2x1TB WD Red RAID1, Win10 Ent, Centurion 590, XFX PRO650W
Music System
SB Server->SB Touch w/Android Tablet as a remote->Denon AVR-X3300W->JBL Studio Series Floorstanding Speakers, JBL LS Center, 2x SVS SB-2000 Subs
Running some math here....somebody please correct me if it's wrong
Looks like the lifetime throttle (hereafter, LTT) rate at the NAND level for Ao1's SF-1200 drive is ~7MB/s (a little over 6MB/s of host writes * 1.12x WA; as an aside, for some reason it didn't kick in for a couple of weeks). If LTT is meant to sustain the life of the drive until its warranty expires, and we know the warranty length, we can see how long OCZ/SandForce think the drive should live as a minimum.
Based on that, once LTT kicks in, it's limiting writes to the NAND at
7MB/s,
420MB/min,
24.6GB/hr,
590.6GB/day,
4.04TB/week,
17.3TB/month (30-day),
210.5TB/year.
I'm not sure how long the warranty is for Ao1's drive, so I'll just run through all the popular options (2, 3, and 5yr).
So for a two year warranty at this rate, that's ~421TB.
Three year warranty at this rate, that's ~631.5TB.
Five year warranty at this rate, that's ~1.03PB.
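Running the same numbers as a quick sanity check (binary units throughout, i.e. 1GB = 1024MB, matching the figures above; the 7MB/s NAND rate is the estimate from the previous post):

```python
# LTT extrapolation: ~7MB/s of NAND writes, binary units (1GB = 1024MB).
NAND_RATE_MBPS = 7  # ~6MB/s host writes x ~1.12 WA

per_day_gb = NAND_RATE_MBPS * 60 * 60 * 24 / 1024  # ~590.6 GB/day
per_year_tb = per_day_gb * 365 / 1024              # ~210.5 TB/year

for years in (2, 3, 5):
    print(f"{years}yr warranty: ~{per_year_tb * years:.1f} TB of NAND writes")
```

The 5-year total comes out to ~1052.6TB, i.e. the ~1.03PB quoted above.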
Those all seem like such large, such abnormally large numbers for lifetime usage, even at such piddling speeds. I can kinda see why SF is implementing it: without the LTT, writes would be ~6x higher. I can also see why nobody has ever experienced LTT before; hitting the writes required takes some very abnormal usage.
134.65GB of 4K 100% random writes with iometer (4 hour run). Avg speed: 3.19MB/s (binary). TRIM enabled. Pseudo-random data.
#233 at start = 38,208GB
#233 after = 38,656GB
Difference = 448GB
#241 at start = 40,064GB
#241 after = 40,192GB
Difference = 128GB
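For what it's worth, the write amplification implied by those deltas (assuming #233 really is flash writes and #241 host writes, which is how they're being read in this thread):

```python
# WA from the SMART deltas: #233 = flash (NAND) writes, #241 = host writes.
# Both attributes only update in 64GB steps, so this is coarse.
host_written_gb = 134.65        # actual host writes from the 4h iometer run
nand_delta_gb = 38656 - 38208   # attribute #233 delta = 448GB
wa = nand_delta_gb / host_written_gb
print(f"write amplification ~= {wa:.2f}")  # ~= 3.33
```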
@Vapor
Write amplification is surely part of that formula, so best case (~0.6x WA, host -> flash writes) those totals could be ~250TB, ~380TB, ~630TB.
Even with WA those figures look high; it might come to a complete halt if it were left running at "full" speed for months.
@Ao1
I'm pretty sure that is due to the random writes. I don't think that drive is feeling well.
What size was the iometer test file?
I set it to run for 4 hours. I got the write stats from the csv file it created. The MB/s was not a surprise; #233 was, though. The silly thing is that it's sequential speeds that get hit most, but it's 4K that does the damage.
Already took into account WA, if the #233 value = true flash writes (sounds like it is). Ao1 has been writing at low 6s (MB/s) with incompressible data, so a 1.12x WA = ~7MB/s at the NAND level.
If we're expecting users to use non-incompressible data (good assumption), then we can divide all the values I showed by your ~0.6 (or multiply by ~1.67x) to show how much data (from the OS POV) gets written. Looks like a 3yr warranty would line up with just about 1PB.
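To make the divide-by-~0.6 step concrete, a tiny sketch (the NAND-level warranty totals are the ones worked out earlier in the thread):

```python
# Host-level (OS POV) writes = NAND writes / ~0.6 WA for typical
# compressible data. NAND totals are from the earlier warranty math.
nand_totals_tb = {2: 421.0, 3: 631.5, 5: 1052.6}
for years, nand_tb in nand_totals_tb.items():
    print(f"{years}yr: ~{nand_tb / 0.6:.0f} TB of host writes")
# the 3yr figure works out to ~1053TB, i.e. just about 1PB
```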
Looks like random writes are brutal for WA though, damn.
non-incompressible data?? You mean compressible data?
Anyway, I think 0.6 is a BAD assumption for write amplification for most normal usage. You can only use the data from the SSD 233 attribute to compute WA if the SSD has NEVER, NOT ONCE, been benchmarked with a program that writes easily compressible data.
The problem is that normal usage has VERY low writes. So if you run JUST ONE benchmark that writes easily compressible data (ATTO, CDM-0-fill, IOMeter with repeating or pseudo-random data, etc.), that will totally dominate the normal usage writes for most people. Note that Anvil did not claim that he never benchmarked his 0.6 WA drive. I guess he probably benchmarked it once or twice!
Yeah, compressible data
Good points all around. Guess we can't know how much OS-level data they're expecting, but it does look like they are aiming for ~200TB/yr of NAND writes (for just the 40GB drive), which is a ton.
Or maybe they have some bottom limit for throttled speed that they don't want to go below. At some point they may as well just make the drive read only for a while and that sort of thing is interesting enough to create controversy. The headlines would just be too good: "Sandforce drives shut themselves down if you write too much". I don't think we can surmise much about what Sandforce or OCZ expects in terms of write endurance based on the throttled speed. There are just too many other factors involved. I would assume that they are not really trying to protect themselves against 24/7 writes, at least in the 40 GB drives.
I asked for a manual check because I am a programmer and I expect to see bugs everywhere. For a manual check, if you really have 2MB free at the beginning of the partition, then you should see C partition having an offset of 4096 sectors. Do you have NtfsLastAccessUpdate disabled?
As a note, I have Windows XP SP3 with an Ubuntu distribution on dual boot. I keep my laptop in standby sometimes, but never to hibernate and I have NtfsLastAccessUpdate disabled. And until recently, I also had memory paging disabled.
Back to topic, the random write result is very surprising. Was the drive throttled when you started testing? At a write amplification of around 3.3, even if throttled, you could still wear flash cells at a rate of about 20MB/s.
Yes it was throttled. The write speed was consistent throughout. Tomorrow the V2 is going to get 4 hours of random 512B.
512B is a sadistic way to torture an SSD. Results will be interesting from a speed point of view. Could you also add to your test list a 4 hour session of completely random 4KB data, but with the drive in an unaligned state?
I think it would be a good idea to update the first post to expand on the SMART findings and their relation to WA/compression and other findings. Basically a compilation of known facts. I have linked this thread to others, but at 167 posts and climbing quickly, it will be hard to sift out the pertinent details. An overview of sorts?
I'm thinking of implementing a "Steady State" benchmark in my app, shouldn't take that long as most things are already there.
Saving throughput every n minutes, exporting the result set to excel when done.
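Not Anvil's app, obviously, but a rough sketch of how such a steady-state logger could look (file names, interval, and chunk size are all placeholders; the CSV opens straight into Excel):

```python
import csv
import os
import time

def steady_state(data_path, csv_path, run_secs, sample_secs, chunk_mib=4):
    """Write continuously, record throughput every sample_secs, export to CSV."""
    chunk = b"\0" * (chunk_mib << 20)
    samples = []
    start = time.time()
    window_bytes, window_start = 0, start
    with open(data_path, "wb") as f:
        while time.time() - start < run_secs:
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force the write to the drive, not the OS cache
            window_bytes += len(chunk)
            now = time.time()
            if now - window_start >= sample_secs:
                mbps = window_bytes / (now - window_start) / 2**20
                samples.append((round(now - start, 1), round(mbps, 2)))
                window_bytes, window_start = 0, now
    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["elapsed_s", "MBps"])
        writer.writerows(samples)
    return samples
```

The fsync per chunk is the important design choice: without it you'd mostly be benchmarking the OS write cache rather than the drive reaching steady state.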
-
Hardware:
"Red Dwarf", SFF gaming PC
Winner of the ASUS Xtreme Design Competition
Sponsors...ASUS, Swiftech, Intel, Samsung, G.Skill, Antec, Razer
Hardware..[Maximus III GENE, Core i7-860 @ 4.1Ghz, 4GB DDR3-2200, HD5870, 256GB SSD]
Water.......[Apogee XT CPU, MCW60-R2 GPU, 2x 240mm radiators, MCP350 pump]
So, it seems that on one side the delta between #233 & #241 can demonstrate compression factors, but on the other hand write amplification etc. offsets any saving due to compression.
Random 512B writes are quite slow, so after 8 hours I only incurred 128GB of host writes, which is not enough considering the attributes only update every 64GB.
Time to give the V2 a rest.
#233 at start = 38,656GB
#233 after = 38,848GB
Difference = 192GB
#241 at start = 40,192GB
#241 after = 40,320GB
Difference = 128GB
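Same arithmetic as the earlier 4K run (and again coarse, given the 64GB attribute granularity):

```python
# 512B random run: WA implied by the SMART deltas above.
flash_gb = 38848 - 38656  # #233 delta = 192GB of NAND writes
host_gb = 40320 - 40192   # #241 delta = 128GB of host writes
wa = flash_gb / host_gb
print(f"WA ~= {wa:.1f}")  # ~= 1.5
```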
Yes, it's an offset of 4096 sectors. And I'm on XP SP3. Also, I believe my WA is skewed from two AS-SSD runs and one ATTO run in the first couple months. Forgot to mention that earlier.
Edit: hibernation off, system managed pagefile on SSD (currently 3.5GB), system restore off on all drives, recycling bin for SSD set to remove files immediately.
Last edited by bluestang; 06-30-2011 at 12:28 PM. Reason: added info
I’ve been thinking about how to work out the compression factor for SF drives.
Due to the SMART update frequency of V2 drives it would be a lot more accurate to work with a V3.
The problem with trying to work out compression:
• #233 records compression and WA
• Compression could be related to xfer size and not just data format
• QD might also play into it
To help eliminate WA the drive would have to be in a fresh state and the test file would need to be less than the drive capacity, but large enough to mitigate the 1GB reporting frequency of a V3.
• A test file should only consist of one xfer size with a known compressibility factor
• Write speeds should be recorded during the xfer
• Tests should be repeated at different QDs
I suspect that for compression to be effective the xfer size needs to be 128K or above.
Anyone with a V3 fancy trying that out?
Last edited by Ao1; 07-01-2011 at 05:24 AM.
I'd do it, but my Agility had to be returned; it had issues from day one, and not being recognized was what tipped the scale.
Not sure if I'll be accepting another one in return, might try to get a Vertex 3 60GB or some V2.
(won't be getting the replacement for another week, probably more like 10 days)