ETA :p My V3 should arrive tomorrow. Is it an SF vendor-wide solution or a vendor-bespoke solution?
This is not what I wrote. I wrote that compression of the ATTO testfile is so high that the 6G SATA bus becomes the limiting factor, not the compression itself. This implies that hardly any actual data is being transferred, especially when looking at the result of ZIP/LZMA/NTFS LZ77 compression on that very same file.
The flaw in that argument is that I don't know the real maximum throughput (minus protocol overhead) of the 6G bus. Maybe someone in the know can chime in on that?
We would need a "normal" compressible testfile to get an idea of the compression ratios compared to NTFS. If we don't care about the comparison, then the ASSD compression test (going from 0% compressible to 100%) should give an idea of how well Sandforce compresses.
We should be able to work the compression factor out with the V3. Some help to work out the best method to do that would be appreciated.
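As one possible starting point, here is a minimal Python sketch (not anything ASSD actually uses, just the same idea) that generates test files sweeping from fully compressible to fully incompressible by mixing zeros with random data. File names, sizes and the block size are arbitrary choices.

```python
# Sketch (Python): generate test files of roughly 0%..100% "compressibility"
# by mixing runs of zeros (compressible) with random bytes (incompressible).
import os

def make_testfile(path, size_mb, zero_fraction, block_kb=64):
    """Write size_mb MiB where zero_fraction of each block is zeros,
    the rest random data."""
    block = block_kb * 1024
    zeros = int(block * zero_fraction)
    target = size_mb * 1024 * 1024
    with open(path, "wb") as f:
        written = 0
        while written < target:
            f.write(b"\x00" * zeros)            # compressible part
            f.write(os.urandom(block - zeros))  # incompressible part
            written += block

for pct in range(0, 101, 25):
    make_testfile(f"test_{pct:03d}pct.bin", size_mb=64, zero_fraction=pct / 100)
```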
Synthetic or real data?
For example I have a Documents folder that is 5.08 GB (5,455,376,384 bytes). As I write I am compressing the folder with WinRar on the best setting. The compression ratio is fluctuating between 82% & 89% (It's not finished yet).
I'd like to get a methodology in place so I can start testing as soon as the drive arrives tomorrow.
The Vertex 3 is listed at 550 MB/s read speed and there are screenshots of ATTO benchmarks at 560 MB/s. That sounds pretty close to your 570 MB/s number. Where did you get that number from?
Timur, there is an overhead factor on SATA 3, but unlike SATA 2 that overhead is unknown. (At least I have not been able to find it).
Regardless, we should be able to work out the SF compression factor using attribute #233.
Suggestions welcome :)
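One possible suggestion, as a rough Python sketch: automate the #233 route with smartmontools. It assumes smartctl is available and that #233 reports NAND writes and #241 host writes, both in GiB; both attribute mappings (and the device name) are assumptions that would need to be verified on the V3 first.

```python
# Rough sketch (Python), assuming smartctl is installed and that on this
# drive attribute #233 reports NAND (flash) writes and #241 host writes,
# both in GiB -- worth double-checking before trusting any result.
import re
import subprocess

def read_attrs(device, ids=(233, 241)):
    """Return {attribute id: raw value} from 'smartctl -A'.
    Naive parsing: grabs the last number on each attribute line."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    vals = {}
    for line in out.splitlines():
        m = re.match(r"\s*(\d+)\s+\S+.*\s(\d+)\s*$", line)
        if m and int(m.group(1)) in ids:
            vals[int(m.group(1))] = int(m.group(2))
    return vals

before = read_attrs("/dev/sdb")   # device name is a placeholder
# ... copy the test data onto the drive here ...
after = read_attrs("/dev/sdb")

nand_gib = after[233] - before[233]   # assumed: GiB written to flash
host_gib = after[241] - before[241]   # assumed: GiB written by the host
if host_gib:
    print(f"approx. flash/host write ratio: {nand_gib / host_gib:.2f}")
```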
Whoa, that is a bit disappointing. Took a while as well.
Attachment 117816
Ao1: Yes, unfortunately the overhead is unknown, but something around those 560-570 MB/s seems reasonable. I am wondering whether I can get a benchmark to read/write directly from my M4's onboard cache (256 MB)!?
Before we work out the Sandforce compression ratio with our own methods: What does the ASSD compression benchmark return for Sandforce based drives? Would be good to get readings from different drives like Vertex 3 vs V3 maxiops vs Corsair etc.
Btw: my former system partition (OS + applications + games + personal data + some small sample libraries for Ableton Live and Sonar) compressed down by approx. 30% via NTFS, with some directories getting close to 50% and more (asset files of Civilization 4) and others staying below 10% (audio sample files).
Unfortunately NTFS compression is seriously bottlenecked for anything that is not highly compressible (80%+ on the ASSD graph), so it cannot be used as a substitute for Sandforce compression on non-Sandforce drives.
Even worse....my "Pictures" folder. Squeezing out 5% took around 4 minutes!
Attachment 117817
Here is ASSD on the V2
Attachment 117818
Do you mean the maximum compression factor? That should be easy to measure. Just write a stream of zeros and see how the amount written to flash compares to the host writes.
If you mean the typical compression factor, then you first need typical data. You could take one of your system drives and create an UNCOMPRESSED archive out of all the files on it, and then repeatedly copy the archive to the V3, noting flash writes and host writes.
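To make that concrete, here is a small Python sketch of how the two payloads could be prepared. The paths and sizes are placeholders; the tar archive is deliberately left uncompressed so the controller sees the data exactly as it is stored.

```python
# Sketch (Python): prepare the two test payloads described above.
import tarfile

# 1) Zero-fill file for the maximum compression factor (4 GiB,
#    written in 64 MiB chunks to keep memory use low).
chunk = b"\x00" * (64 * 1024 * 1024)
with open("zerofill.bin", "wb") as f:
    for _ in range(64):
        f.write(chunk)

# 2) UNCOMPRESSED archive of "typical" data (mode "w" = no compression),
#    to be copied to the V3 repeatedly while watching the write counters.
with tarfile.open("typical_data.tar", mode="w") as tar:
    tar.add("C:\\Users\\yourname\\Documents", arcname="Documents")
```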
You are missing the point here. We were discussing the compression *ratio* and how we could come up with reasonable numbers. My statement was that 10% seems a bit low for *compressible* data. Problem is that we can hardly define "compressible", but the ATTO benchmark that is usually used for that measurement is soooo extremely compressible that we have to consider this fact when talking about "compressible".
I'm not saying that the Sandforce chip doesn't take its time to do the compression (as every compressor does), but obviously it is fast enough on *de*compression (aka reading) that it seems to reach the practical limits of the 6G connection with the ATTO testfile.
This, on the other hand, implies that the compression ratio is so good that we are merely measuring the Sandforce's compression speed on writes and the 6G limit on reads, and *not* the compression ratio (nor the speed of the flash chips and controller apart from compression).
No, I got the point. You seem confused. You do realize that 10% means compressed from 100% to 10%, right? We are talking about compression FACTORS. 10% means a factor of 10 compression.
If the Sandforce SSD's maximum sustained host write speed is 500MB/s (for example, when fed a stream of zeros), but the flash sustained write speed is only 90MB/s (for example, for a 60GB V3), then the compression factor is at least 5.6, or 18%. It could be 10%, but we cannot say for certain using the throughput ratio method.
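Spelling that arithmetic out (the speeds are the example figures from above, not measurements):

```python
# Lower bound on the compression factor from sustained throughputs alone.
host_mb_s = 500    # example sustained host write speed (zero-fill)
flash_mb_s = 90    # example sustained flash write speed (60GB V3)
factor = host_mb_s / flash_mb_s
print(f"factor >= {factor:.1f}x, i.e. data shrinks to <= {100 / factor:.0f}%")
```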
But it is moot anyway, since Ao1 already measured it for a V2. IIRC, he got something like 8 - 12% or so for zero-fill.
I found a screenshot of ASSD compression speed on http://thessdreview.com/our-reviews/...-as-ssd-tests/
At 100% compressible (which is close to the ATTO file) it shows about 410 MB/s write speed. So, as I suspected in my last post, we only get to see the speed of the compression engine (or controller/flash speed), not the compression ratio. Otherwise that 100% figure should be closer to the 6G limit.
Curiously the read speed is "limited" to 520 MB/s in this test as well, which is a good deal below the 550-560 MB/s of the ATTO screenshots. This could be due to different calculation methods, or to different systems/setups (CPU and SATA controller) that the benchmarks ran on. What ATTO readings do you get on your system compared to your ASSD graph?
No, that cannot be true. Note that the rated sequential write speed for the 480GB V3 is 450 MB/s. Why would the compression engine be slower on the 480GB model than the 120GB or 240GB models?
Anyway, as I said (and Ao1 has said), Ao1 already measured the zero-fill compression ratio for a V2, and it was about 10%. He will no doubt measure it for his V3, and we will see if it differs from the V2.
The original argument was neither about "ratio" nor "factor", but about Ao1's assumption that "If 0fill is around 10% NAND wear". So when writing 100% zeros, around 10% of that would show up as NAND wear. I found 10% too high for writing nothing but zeros, based on my experience with different compression engines.
There are two pieces of information missing from my own argument:
1) How well can a compression engine that reaches a throughput of over 400 MB/s actually compress? That is amazingly fast for doing compression. Still, I would expect compressing nothing but zeros to get quite close to 100% with any compression engine (a quick check follows after this list). To support my argument I mentioned the rather poor and bottlenecked compression of NTFS.
2) Ao1 wrote "NAND wear", not "NAND fill". Once all NAND pages have been written to at least once, we can assume that the "wear" for writing new (even compressed) data is higher than just the space needed to store that data, because some blocks have to be erased at some point to make room for the new data.
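On point 1, here is a quick Python check of what a general-purpose compressor (zlib, purely for illustration) does with pure zeros:

```python
# Sketch (Python): zlib on 16 MiB of zeros -- a general-purpose compressor
# squeezes this to well under 1% of the original size, which is why ~10%
# NAND writes for a 0-fill looked surprisingly high to me.
import zlib

data = b"\x00" * (16 * 1024 * 1024)
packed = zlib.compress(data)
print(f"compressed to {len(packed) / len(data):.3%} of the original size")
```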
The typical ATTO bench is using QD 4 (overlapped IO)
Just do a test at QD 1.
You should really look up the posts where Ao1 measured it. The SMART attribute on the V2 appears to show how much data was actually written to flash. On a large sequential write, there is no reason to assume any significant overhead or write-amplification of the type that you refer to above. So Ao1's measurement is looking at the compression factor that was achieved, or a close approximation. His measurement is correct, within the limitation that the increment of the attribute was rather large.
I just compressed my current system partition (OS + Civilization 4 + applications + some small sample libraries + some pics) via 7Z's "fastest" method (only 64 KB dictionary size). It squeezed the whole 300,000+ files / 59 GB down to less than 60%.
I read from the SSD and wrote to an HDD, with throughput only around 20 MB/s, even though my 8 logical cores were nowhere near maxed out. One has to consider that 7Z very likely only uses small block sizes for reads and/or that most of these 300k files are small files. And 20 MB/s is around what my M4 can deliver for random 4 KB (depending on CPU setup). hIOmon should be able to tell us, right? (I have not dived into its manual yet.)
Hm, I'm currently looking at the size distribution of my system partition. It seems like many files aren't that small, so the 20 MB/s must come down to how 7Z reads the files.
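In case it helps, a quick Python sketch for eyeballing that size distribution; the path and the bucket limits are arbitrary:

```python
# Sketch (Python): count files per size bucket to see how much of the
# partition consists of small files that 7Z has to read one by one.
import os
from collections import Counter

buckets = Counter()
for root, _dirs, files in os.walk("C:\\"):
    for name in files:
        try:
            size = os.path.getsize(os.path.join(root, name))
        except OSError:
            continue                      # skip files we cannot stat
        if size < 64 * 1024:
            buckets["< 64 KB"] += 1
        elif size < 1024 * 1024:
            buckets["64 KB - 1 MB"] += 1
        else:
            buckets[">= 1 MB"] += 1

for label, count in buckets.most_common():
    print(f"{label:>12}: {count}")
```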
Post #151 compared an ATTO run to an AS SSD run when the drive was in a throttled state. Clearly ATTO is highly compressible. With uncompressed data I could only get 7MB/s, but with highly compressible data I could get 250MB/s.
The problems with the previous testing were that 1) the V2 only reports every 64 GB and 2) unless the drive is in a fresh state, WA can distort the 233 reading.
I can avoid both those factors with the V3.
Here are a few more shots of ATTO for what they are worth. It does not appear to let you run a bench at QD1.
Attachment 117819
Attachment 117820
Attachment 117821
Attachment 117822
Hmm, I thought that random run might have been a fluke so I ran it again.
Attachment 117823
Attachment 117824
Here are the same folders compressed with 7z. Better compression, although the Pictures folder took around 6 minutes.
Ok, here is what I will do unless someone has a better idea:
1) Install Win 7 and Office, then check the 233 and host write readings.
2) SE.
3) Copy my Documents folder, then check the 233 and host write readings.
4) SE, and then do the same for my Video folder and my Pictures folder.
These folders hold my day-to-day working data, so they should be a good representation.
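For the bookkeeping, a minimal Python sketch of how the per-folder ratios could be worked out from before/after readings; all numbers are placeholders to be filled in with the actual #233 and host-write values:

```python
# Sketch (Python): per-folder flash/host write ratio from the #233 and
# host-write readings taken immediately before and after each copy.
# All numbers are placeholders, not measurements.
readings = {
    # folder: ((nand_before, host_before), (nand_after, host_after)) in GB
    "Documents": ((0, 0), (0, 0)),
    "Video":     ((0, 0), (0, 0)),
    "Pictures":  ((0, 0), (0, 0)),
}

for folder, ((nand0, host0), (nand1, host1)) in readings.items():
    host = host1 - host0
    nand = nand1 - nand0
    if host > 0:
        print(f"{folder}: {nand / host:.2f} GB flash per GB host write")
```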
Attachment 117827