Here's a comparison between ATTO and AS SSD, with the drive in a throttled state. ATTO 0fill sequential write = 260MB/s. AS SSD 100% fill = 7MB/s.
Attachment 116857
Great work there Ao1
Will be interesting to see some more SMART info from other users, both on the SF1 and SF2 controllers.
Running some math here....somebody please correct me if it's wrong :p:
Looks like the lifetime throttle (hereafter, LTT) rate at the NAND level for Ao1's SF-1200 drive is ~7MB/s (a little over 6MB/s of host writes x 1.12 WA; as an aside, for some reason it didn't kick in for a couple of weeks). If LTT is meant to sustain the life of the drive until its warranty expires, and we know the warranty length, we can see how much OCZ/SandForce think the drive should endure as a minimum.
Based on that, once LTT kicks in, it's limiting writes to the NAND at
7MB/s,
420MB/min,
24.6GB/hr,
590.6GB/day,
4.04TB/week,
17.3TB/month (30-day),
210.5TB/year.
I'm not sure how long the warranty is for Ao1's drive, so I'll just run through all the popular options (2, 3, and 5yr).
So for a two year warranty at this rate, that's ~421TB.
Three year warranty at this rate, that's ~631.5TB.
Five year warranty at this rate, that's ~1.03PB.
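For anyone who wants to replay the arithmetic, here's a quick sketch of the projection (same assumptions as above: a flat 7MB/s NAND-level rate and binary unit steps; nothing official from SF/OCZ, just my numbers):

Code:
# Projection of NAND writes under lifetime throttling, assuming a flat
# 7 MB/s NAND-level write rate and binary (1024-based) unit conversions,
# matching the figures above. Purely illustrative.

RATE_MB_S = 7.0

per_hour_gb = RATE_MB_S * 3600 / 1024        # ~24.6 GB/hr
per_day_gb  = per_hour_gb * 24               # ~590.6 GB/day
per_year_tb = per_day_gb * 365 / 1024        # ~210.5 TB/yr

for years in (2, 3, 5):
    total_tb = per_year_tb * years
    print(f"{years}yr warranty: ~{total_tb:,.1f} TB ({total_tb / 1024:.2f} PB)")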
Those are abnormally large numbers for lifetime usage, even at such piddling speeds. I can kinda see why SF is implementing it...without the LTT, writes would be ~6x higher :eek: I can also see why nobody has ever experienced LTT before; hitting the writes required takes some very abnormal usage.
134.65GB of 4K 100% random writes with Iometer (4 hour run, avg speed 3.19MB/s binary, TRIM enabled, pseudo-random data).
#233 at start = 38,208GB
#233 after = 38,656GB
Difference = 448GB :eek:
#241 at start = 40,064GB
#241 after = 40,192GB
Difference = 128GB
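If #241 really is host writes and #233 NAND writes (both in GB, updating in 64GB steps), the WA for that run falls straight out of the deltas. A minimal sketch, assuming that interpretation of the attributes is right:

Code:
# Write amplification from the SMART deltas above, assuming attribute
# #241 = host (OS-level) writes and #233 = NAND-level writes, both in GB.
# The 64GB reporting granularity means the result is only approximate.

def write_amplification(nand_start, nand_end, host_start, host_end):
    nand_delta = nand_end - nand_start
    host_delta = host_end - host_start
    return nand_delta, host_delta, nand_delta / host_delta

nand, host, wa = write_amplification(38_208, 38_656, 40_064, 40_192)
print(f"NAND writes: {nand}GB, host writes: {host}GB, WA ~ {wa:.2f}")
# -> NAND writes: 448GB, host writes: 128GB, WA ~ 3.50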
@Vapor
Write Amplification is surely part of that formula, so best case (0.6 WA -> flash writes) those figures could be more like ~250TB, ~380TB, ~600TB.
Even with WA those figures look high; it might come to a complete halt if it was left running at "full" speed for months.
@Ao1
I'm pretty sure that is due to the random writes, I don't think that drive is feeling well :)
What size was the iometer test file?
I set it to run for 4 hours. I got the write stats from the csv file it created. The MB/s was not a surprise. #233 was though. The silly thing is that it's the sequential speeds that get hit most, but it's 4K that does the damage.
Already took into account WA if the 233 value = true writes (sounds like it is). Ao1 has been writing at low 6s (MB/s) with incompressible, then a 1.12x WA = ~7MB/s at the NAND level. :)
If we're expecting users to use non-incompressible data (good assumption), then we can divide all the values I showed by your ~.6 (or multiply by ~1.6x) to show how much data (from the OS POV) gets written. Looks like a 3yr warranty would line up with just about 1PB :eek:
Looks like random writes are brutal for WA though, damn.
non-incompressible data?? You mean compressible data?
Anyway, I think 0.6 is a BAD assumption for write amplification for most normal usage. You can only use the data from the SSD 233 attribute to compute WA if the SSD has NEVER, NOT ONCE, been benchmarked with a program that writes easily compressible data.
The problem is that normal usage has VERY low writes. So if you run JUST ONE benchmark that writes easily compressible data (ATTO, CDM-0-fill, IOMeter with repeating or pseudo-random data, etc.), that will totally dominate the normal usage writes for most people. Note that Anvil did not claim that he never benchmarked his 0.6 WA drive. I guess he probably benchmarked it once or twice! ;)
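To put some (entirely made-up) numbers on that: even one modest compressible-data benchmark drags the lifetime #233/#241 ratio well below the WA of everyday usage. A rough sketch, all figures hypothetical:

Code:
# Hypothetical illustration: one compressible-data benchmark can skew the
# lifetime #233/#241 ratio. All numbers below are invented for the example.

normal_host_gb, normal_wa = 200, 1.6   # everyday (mostly incompressible) usage
bench_host_gb,  bench_wa  = 100, 0.15  # a single 0-fill / ATTO style run

nand_gb = normal_host_gb * normal_wa + bench_host_gb * bench_wa
host_gb = normal_host_gb + bench_host_gb
print(f"Lifetime WA as read from SMART: {nand_gb / host_gb:.2f} "
      f"vs real-world WA of {normal_wa}")
# -> ~1.12 vs 1.6: the benchmark drags the lifetime figure well down.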
Yeah, compressible data :hitself:
Good points all around :) Guess we can't know how much OS-level data they're expecting, but it does look like they are aiming for ~200TB/yr of NAND writes (for just the 40GB drive), which is a ton.
Or maybe they have some bottom limit for throttled speed that they don't want to go below. At some point they may as well just make the drive read only for a while and that sort of thing is interesting enough to create controversy. The headlines would just be too good: "Sandforce drives shut themselves down if you write too much". I don't think we can surmise much about what Sandforce or OCZ expects in terms of write endurance based on the throttled speed. There are just too many other factors involved. I would assume that they are not really trying to protect themselves against 24/7 writes, at least in the 40 GB drives.
Here is a summary of what has been posted on 233/241.
I noticed Gogeta uses sleep a lot. When I use sleep it puts about 2GB of data on my drive. Hibernate would also put loads of writes on a drive.
Still, it's a mixed bag so far.
Attachment 116870
I asked for a manual check because I am a programmer and I expect to see bugs everywhere. For a manual check: if you really have 2MB free at the beginning of the partition, then you should see the C partition having an offset of 4096 sectors. Do you have NtfsLastAccessUpdate disabled?
As a note, I have Windows XP SP3 with an Ubuntu distribution on dual boot. I keep my laptop in standby sometimes, but never to hibernate and I have NtfsLastAccessUpdate disabled. And until recently, I also had memory paging disabled.
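As a side note, the offset maths is easy to verify: 2MB of free space in front of the partition is 2 x 1024 x 1024 / 512 = 4096 sectors. A trivial sketch (on Windows the starting offset in bytes can be read with e.g. wmic partition get StartingOffset):

Code:
# Quick alignment check: convert a partition starting offset (in bytes) to
# 512-byte sectors and see whether it lands on a 4KB boundary.

def check_offset(offset_bytes, sector_size=512):
    sectors = offset_bytes // sector_size
    aligned_4k = offset_bytes % 4096 == 0
    return sectors, aligned_4k

sectors, ok = check_offset(2 * 1024 * 1024)   # 2MB free space before C:
print(f"Offset = {sectors} sectors, 4K aligned: {ok}")
# -> Offset = 4096 sectors, 4K aligned: True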
Back to topic, the random write result is very surprising. Was the drive throttled when you started testing? At a write amplification of around 3.3, if throttled, you could still be wearing flash cells at a rate of about 20MB/s.
Yes it was throttled. The write speed was consistent throughout. Tomorrow the V2 is going to get 4 hours of random 512B.
512B is a sadistic way to torture an SSD :D. Results will be interesting from a speed point of view. Could you also add to your test list a 4 hour session with completely random 4KB data, but with the drive in an unaligned state?
I think it would be a good idea to update the first post with the SMART findings and their relation to WA/compression, plus the other findings. Basically a compilation of known facts... I have linked this thread in other threads, but at 167 posts and climbing quickly it will be hard to sift out the pertinent details. An overview of sorts?
I'm thinking of implementing a "Steady State" benchmark in my app, shouldn't take that long as most things are already there.
Saving throughput every n minutes, exporting the result set to excel when done.
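The idea is simple enough to sketch in a few lines: hammer a scratch file with incompressible writes, log average throughput every n minutes, and dump the samples to a CSV that Excel opens directly. A rough sketch (not Anvil's app; the file path, sizes and intervals are placeholders):

Code:
# Minimal "steady state" logger sketch: continuous incompressible writes,
# throughput sampled every SAMPLE_MIN minutes, results exported as CSV.
import csv, os, time

TEST_FILE  = "steady_state.bin"   # scratch file on the drive under test
BLOCK      = os.urandom(1 << 20)  # 1 MiB of incompressible data
FILE_GB    = 8                    # wrap the file at this size so the disk doesn't fill
SAMPLE_MIN = 1                    # sampling interval in minutes
RUN_MIN    = 60                   # total run length in minutes

samples, start = [], time.time()
with open(TEST_FILE, "wb") as f:
    interval_start, written = start, 0
    while time.time() - start < RUN_MIN * 60:
        f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())                 # force it to the drive, not the page cache
        written += len(BLOCK)
        if f.tell() >= FILE_GB * (1 << 30):
            f.seek(0)
        now = time.time()
        if now - interval_start >= SAMPLE_MIN * 60:
            samples.append((round((now - start) / 60, 1),
                            round(written / (1 << 20) / (now - interval_start), 2)))
            interval_start, written = now, 0

with open("steady_state.csv", "w", newline="") as out:
    csv.writer(out).writerows([("elapsed_min", "MB_per_s")] + samples)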
So it seems that on one hand the delta between #233 & #241 can demonstrate the compression factor, but on the other hand write amplification etc. offsets any saving due to compression.
Random 512B writes are quite slow, so after 8 hours I only incurred 128GB of host writes, which is not enough considering the attributes only update every 64GB.
Time to give the V2 a rest. :D
#233 at start = 38,656GB
#233 after = 38,848GB
Difference = 192GB
#241 at start = 40,192GB
#241 after = 40,320GB
Difference = 128GB
http://img808.imageshack.us/img808/4950/unled2uh.jpg
Yes, it's an offset of 4096 sectors. And I'm on XP SP3. Also, I believe my WA is skewed from two AS-SSD runs and one ATTO run in the first couple months. Forgot to mention that earlier.
Edit: hibernation off, system managed pagefile on SSD (currently 3.5GB), system restore off on all drives, recycling bin for SSD set to remove files immediately.
I’ve been thinking about how to work out the compression factor for SF drives.
Due to the SMART update frequency of V2 drives it would be a lot more accurate to work with a V3.
The problem with trying to work out compression:
• #233 records compression and WA
• Compression could be related to xfer size and not just data format
• QD might also play into it
To help eliminate WA the drive would have to be in a fresh state and the test file would need to be less than the drive capacity, but large enough to mitigate the 1GB reporting frequency of a V3.
A test file should only consist of one xfer size with a known compressibility factor
Write speeds should be recorded during the xfer
Tests should be repeated at different QD's.
I suspect that for compression to be effective the xfer size needs to be 128K or above.
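If someone with a V3 does try it, one way to build a test file with a single xfer size and a roughly known compressibility would be something like this (a rough sketch, not any particular tool; each block gets fresh random data so the controller can't simply dedupe repeats):

Code:
# Build a test file out of fixed-size transfers with a rough target
# compressibility: each block is part random bytes (incompressible) and
# part zero fill, so 0.5 should compress to roughly half under an ideal
# compressor. File name, sizes and ratio are just example values.
import os

def write_test_file(path, xfer_size, n_blocks, incompressible_fraction):
    rand_len = int(xfer_size * incompressible_fraction)
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(os.urandom(rand_len) + b"\x00" * (xfer_size - rand_len))

# 1GB file made of 128K transfers, ~50% compressible
write_test_file("compress_test.bin", 128 * 1024, 8 * 1024, 0.5)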
Anyone with a V3 fancy trying that out?
I'd do it, but my Agility had to be returned; it had issues from day one, and not being recognized was what tipped the scale.
Not sure if I'll be accepting another one in return, might try to get a Vertex 3 60GB or some V2.
(won't be getting the replacement for another week, probably more like 10 days)