ETA :p: My V3 should arrive tomorrow. Is it an SF vendor-wide solution or a vendor-bespoke one?
This is not what I wrote. I wrote that compression on the ATTO testfile is so high that the 6g SATA bus becomes the limiting factor, not compression. This implies that nearly no actual data is being transferred, especially when looking at the result of ZIP/LZMA/NTFS LZ77 compression on that very same file.
The flaw in that argument is that I don't know the real maximum throughput (minus protocol overhead) of the 6G bus. Maybe someone in the know can chime in on that?
We would need a "normal" compressible testfile to get an idea of the compression ratios compared to NTFS. If we don't care for the comparison then the ASSD's compression test (going from 0% compressible to 100%) should give an idea of how good Sandforce compresses.
We should be able to work the compression factor out with the V3. Some help to work out the best method to do that would be appreciated.
Synthetic or real data?
For example I have a Documents folder that is 5.08 GB (5,455,376,384 bytes). As I write I am compressing the folder with WinRar on the best setting. The compression ratio is fluctuating between 82% & 89% (It's not finished yet).
I'd like to get a methodology in place so I can start testing as soon as the drive arrives tomorrow.
The Vertex 3 is listed at 550 mb/s read speed and there are screenshots of ATTO benchmarks at 560 mb/s. This sounds pretty close to your 570 mb/s number. Where did you get that number from?
Timur, there is an overhead factor on SATA 3, but unlike SATA 2 that overhead is unknown. (At least I have not been able to find it).
Regardless, we should be able to work out the SF compression factor using attribute #233.
Suggestions welcome :)
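As a starting point, here is a minimal sketch of the bookkeeping I have in mind (assuming #233 reports NAND/flash writes and #241 reports host writes, both in GB, and that the readings are taken by hand from a SMART tool before and after each copy; the numbers below are placeholders):

```python
# Minimal sketch: estimate the SandForce compression factor from SMART
# deltas. Assumes #233 = NAND (flash) writes in GB and #241 = host writes
# in GB, read by hand before and after a copy test. Numbers are made up.

def compression_factor(nand_before, nand_after, host_before, host_after):
    """Return NAND writes / host writes over the interval (lower = better)."""
    nand_written = nand_after - nand_before
    host_written = host_after - host_before
    if host_written <= 0:
        raise ValueError("no host writes recorded over the interval")
    return nand_written / host_written

# Placeholder example: 64 GB written by the host, 8 GB landing on NAND
# -> factor 0.125, i.e. data stored at ~12.5% of its original size.
print(compression_factor(nand_before=100, nand_after=108,
                         host_before=500, host_after=564))
```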
Whoa, that is a bit disappointing. Took a while as well.
Attachment 117816
Ao1: Yes, unfortunately the overhead is unknown, but something around those 560-570 mb/s seems reasonable. I'm wondering if I can get a benchmark to read/write directly from my M4's onboard cache (256 mb)!?
Before we work out the Sandforce compression ratio with our own methods: What does the ASSD compression benchmark return for Sandforce based drives? Would be good to get readings from different drives like Vertex 3 vs V3 maxiops vs Corsair etc.
Btw: My former system partition (OS + applications + games + personal data + some small sample libraries of Ableton Live and Sonar) compressed down by approx. 30% via NTFS, with some directories getting close to 50% and more (asset files of Civilization 4) and others below 10% (audio sample files).
Unfortunately NTFS is seriously bottlenecked for anything that is not highly compressible (80%+ on the AS SSD graph), so it cannot be used as a substitute for Sandforce compression on non-Sandforce drives.
Even worse....my "Pictures" folder. Squeezing out 5% took around 4 minutes!
Attachment 117817
Here is AS SSD on the V2
Attachment 117818
Do you mean the maximum compression factor? That should be easy to measure. Just write a stream of zeros, and see how the flash writes compare to the host writes.
If you mean the typical compression factor, then you first need typical data. You could take one of your system drives and create an UNCOMPRESSED archive out of all the files on it, and then repeatedly copy the archive to the V3, noting flash writes and host writes.
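Something along these lines should do for the repeat-copy part; the paths, archive name and target size are placeholders, and the SMART readings still have to be noted by hand between iterations:

```python
# Sketch of the repeat-copy idea: copy one uncompressed archive onto the
# SandForce drive until ~100 GiB of host writes accumulate, pausing so the
# SMART counters can be noted by hand. SRC/DST and the target are placeholders.
import os
import shutil

SRC = r"D:\system_backup.tar"        # uncompressed archive of a system drive
DST_DIR = r"E:\sf_test"              # folder on the SandForce drive
TARGET_BYTES = 100 * 1024**3         # stop after ~100 GiB of host writes

os.makedirs(DST_DIR, exist_ok=True)
written, i = 0, 0
while written < TARGET_BYTES:
    dst = os.path.join(DST_DIR, f"copy_{i:03d}.tar")
    shutil.copyfile(SRC, dst)
    written += os.path.getsize(dst)
    i += 1
    print(f"{written / 1024**3:.1f} GiB copied - note #233/#241 now")
```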
You are missing the point here. We were discussing compression *ratio* and how we could come up with reasonable numbers. My statement was that 10% seems a bit little for *compressible* data. Problem is that we can hardly define "compressible", but the ATTO benchmark that is usually used for that measurement is soooo extremely compressible that we have to consider this fact when talking about "compressible".
I don't say that the Sandforce chip doesn't need its time to do the compression (so do all other compressors), but obviously it is fast enough on *de*compression (aka reading) that it seems to reach the practical limits of the 6G connection with the ATTO testfile.
This on the other hand implies that compression ratio is so good that we are merely measuring the Sandforce's compression speed on writes and 6G limit on read and *not* the compression ratio (nor speed of the flash chips and controller apart from compression).
No, I got the point. You seem confused. You do realize that 10% means compressed from 100% to 10%, right? We are talking about compression FACTORS. 10% means a factor of 10 compression.
If the Sandforce SSD's maximum sustained host write speed is 500MB/s (for example, when fed a stream of zeros), but the flash sustained write speed is only 90MB/s (for example, for a 60GB V3), then the compression factor is at least 5.6, or 18%. It could be 10%, but we cannot say for certain using the throughput ratio method.
But it is moot anyway, since Ao1 already measured it for a V2. IIRC, he got something like 8 - 12% or so for zero-fill.
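For reference, the lower-bound arithmetic from the example above, spelled out:

```python
# Lower bound from the throughput ratio, with the example numbers above.
host_speed = 500    # MB/s sustained host writes on zero-fill
flash_speed = 90    # MB/s sustained flash writes (60GB V3 example)

factor = host_speed / flash_speed          # at least ~5.6x
percent = flash_speed / host_speed * 100   # i.e. stored at ~18% of original
print(f"compression factor >= {factor:.1f}x (~{percent:.0f}%)")
```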
I found a screenshot of ASSD compression speed on http://thessdreview.com/our-reviews/...-as-ssd-tests/
At 100% compressible (which is close to the ATTO file) it shows about 410 mb/s write-speed. So like I suspected in my last post we only get to know the speed of the compression engine (or controller/flash speed), not the compression ratio. Else those 100% should be closer to the 6G limits.
Curiously the read-speed is "limited" to 520 mb/s in this test as well, which is a good deal below the 550-560 mb/s of the ATTO screenshots. This could be due to different calculation methods, or due to different systems/setups (CPU and SATA controller) that the benchmarks ran on. What ATTO readings do you get on your system compared to your ASSD graph?
No, that cannot be true. Note that the rated sequential write speed for the 480GB V3 is 450 MB/s. Why would the compression engine be slower on the 480GB model than the 120GB or 240GB models?
Anyway, as I said (and Ao1 has said), Ao1 already measured the zero-fill compression ratio for a V2, and it was about 10%. He will no doubt measure it for his V3, and we will see if it differs from the V2.
The original argument was neither about "ratio" nor "factor", but Ao1's assumption that "If 0fill is around 10% NAND wear". So when writing 100% zeros around 10% NAND would wear out. I found 10% too high for only filling in zeros based on my experience with different compression engines.
There are two pieces of information missing from my own argument:
1) How well can an engine compress when it reaches a throughput of over 400 mb/s? That is amazingly fast for compression. Still, I like to think that compressing only zeros should get quite close to 100% with any compression engine. To support my argument I mentioned the rather poor and bottlenecked compression of NTFS.
2) Ao1 wrote "NAND wear", not "NAND fill". Once all NAND pages have been written to at least once we can assume that "wear" for writing new (even when compressed) data is higher than just the space needed to save that data, because some blocks have to be deleted at some point to make room for that new data.
The typical ATTO bench is using QD 4 (overlapped IO)
Just do a test at QD 1.
You should really look up the posts where Ao1 measured it. The SMART attribute on the V2 appears to show how much data was actually written to flash. On a large sequential write, there is no reason to assume any significant overhead or write-amplification of the type that you refer to above. So Ao1's measurement is looking at the compression factor that was achieved, or a close approximation. His measurement is correct, within the limitation that the increment of the attribute was rather large.
I just compressed my current system partition (OS + Civilization 4 + applications + some small sample libraries + some pics) via 7Z's "fastest" method (only 64 kb dictionary size). Squeezed the whole 300,000+ files / 59 gb down to less than 60%.
I read from SSD and wrote to HD with throughput only being around 20 mb/s even when my 8 logical cores were nowhere near maxed out on load. One has to consider that very likely 7Z only uses small block sizes for reads and/or most of these 300k files are small files. And 20 mb/s is around what my M4 can deliver for random 4 kb (depending on CPU setup). hIOmon should be able to tell us, right? (did not dive into its manual yet)
Hm, I'm currently looking at size distribution of my system partition. Seems like many files ain't that small, so the 20 mb/s must come down to how 7Z reads the files.
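In case anyone wants to reproduce that check, a rough sketch of gathering such a size distribution (the root path is a placeholder, and this is not necessarily how I collected mine):

```python
# Rough sketch: walk a directory tree and bucket files by size.
# ROOT is a placeholder; point it at the partition or folder to scan.
import os
from collections import Counter

ROOT = "C:\\"
buckets = Counter()

for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        try:
            size = os.path.getsize(os.path.join(dirpath, name))
        except OSError:
            continue                    # skip files we cannot stat
        if size < 4 * 1024:
            buckets["< 4 KiB"] += 1
        elif size < 64 * 1024:
            buckets["4-64 KiB"] += 1
        elif size < 1024 * 1024:
            buckets["64 KiB - 1 MiB"] += 1
        else:
            buckets["> 1 MiB"] += 1

for label, count in buckets.most_common():
    print(f"{label:>15}: {count}")
```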
Post #151 compared an ATTO run to an AS SSD run when the drive was in a throttled state. Clearly ATTO is highly compressible. With uncompressed data I could only get 7MB/s, but with highly compressible data I could get 250MB/s.
The problem with previous testing was that 1) the V2 only reports every 64GB & 2) unless the drive is in a fresh state WA can distort the 233 reading.
I can avoid both those factors with the V3.
Here are a few more shots of ATTO for what they are worth. It does not appear to let you run a bench at QD1.
Attachment 117819
Attachment 117820
Attachment 117821
Attachment 117822
Hmm, I thought that random run might have been a fluke so I ran it again.
Attachment 117823
Attachment 117824
Here are the same folders compressed with 7z. Better compression, although the Pictures folder took around 6 minutes.
Ok, here is what I will do unless someone has a better idea. Install Win 7 and Office and then check the 233 and host write readings.
SE
Copy my Documents folder and then check the 233 and host write readings
SE, and then do the same for my Video folder and Picture Folder.
These folders have my day to day working data so they should be a good representation.
Attachment 117827
Like johnw said, just store the files with no compression.
Using my app to create the file should be doable as well, 0-Fill is really 0-Fill and you can create the test file manually from the menu.
(it will create the same test-file as if it was running the benchmark and it will be of the size you've selected)
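If someone would rather hand-roll the file than use the app, a minimal sketch that writes a zero-filled file of a chosen size (path and size are placeholders; this is just the same idea, not the app's implementation):

```python
# Hand-rolled 0-fill test file of a chosen size.
CHUNK = 1024 * 1024                 # write in 1 MiB blocks
SIZE_GIB = 4                        # size of the test file

with open(r"E:\zero_fill.bin", "wb") as f:
    for _ in range(SIZE_GIB * 1024):
        f.write(b"\x00" * CHUNK)
```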
edit:
Installing Windows and Office is the one to start with imho.
Choosing "Neither" in ATTO should be QD 1, at least that's how I understood it.
Problem with ATTO really is that it's not just "highly" compressible, it's compressible nearly to "non-existent". Compressing a 2 GB (!) ATTO testfile via BZip2 squeezes it down to 14 kb (!!) at around 320 mb/s (all r/w happening inside the RAM cache, 8 logical cores at 3.1 GHz). This is a ratio of 1 : 6.67572021484375e-6! Compressing the standard 256 mb ATTO testfile via the same settings results in a 2 kb file, which for all practical purposes is the same.
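A rough way to reproduce that kind of result in Python, using a zero-filled buffer as a stand-in for the ATTO test data (the real ATTO file differs, so the exact ratio will too):

```python
# Reproducing that kind of ratio with Python's bz2 on a zero-filled buffer
# (a stand-in for the ATTO test data, which differs in detail).
import bz2

data = b"\x00" * (256 * 1024 * 1024)            # 256 MiB of zeros
compressed = bz2.compress(data, compresslevel=9)

print(f"original:   {len(data):>12,} bytes")
print(f"compressed: {len(compressed):>12,} bytes")
print(f"ratio:      {len(compressed) / len(data):.2e}")
```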
So the 250-260 mb/s you are measuring with ATTO in throttled state again likely is just the limit of the 3G link in combination with the limit of the compression engine.
What I am trying to say is that ATTO is not suitable for deciding on NAND wear and compression rate of Sandforce drives, because at close to 100% compression ratio we don't really get any of this information out of it.
It's a good start! However, to have an accurate measurement, you would need to copy the directory several times until you hit around 100GiB host writes. This will give you an error margin of less than 1%. Also, do the same with video/mp3 files. I am pretty sure there will be no compression and probably a WA of at least 1.1
What settings have you used for 7zip? Best results are achieved with LZMA2, a dictionary of at least 64/128MB, solid archive.
I tried running "Neither" as it looked to be QD1 but it doesn't look to result in more than about QD0.5 afaiks.
Also one needs to keep in mind de-duplication on the SF controller.
There is no doubt about the SF2 being "stronger" than the SF1, there are few signs of weakness wrt reads (at QD1), writes however are affected.
I'm testing the WildFire now and incompressible data is written at ~50% of full speed, full speed being 480-500MB/s at QD1.
480-500 mb/s write speed @ QD1 with incompressible data?
No, full speed is highly compressible data (480-500MB/s), incompressible is written at a rate of ~50% of full speed :)
oh i see :) just the QD1 that confused me :)
According to Windows Performance-Monitor the maximum QD is always 1 less than what you set up in ATTO, so the 0.5 QD reading with "Neither" likely is connected to that. Whether this is a measurement flaw of Performance or Resource Monitor or a flaw of ATTO I cannot say.
The 500 mb/s maximum vs. 250 mb/s (50%) with incompressible data unfortunately again only tells us that the drive handles the compressed data at a maximum rate of 500 mb/s, not that it's been compressed to 50%.
Consider the 6,67572021484375e-6 factor at 320 mb/s that I got with BZip2 on an i7 running 8 logical cores at 3.1 gHz. In order to reach a compression factor of only 10% with the ATTO testfile the computational power of the Sandforce compression engine would have to be 15000 times worse than what BZip2 does on my i7. Quite a big difference just to push the speed from 320 mb/s to around 560 mb/s, which is not even doubled.
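Spelling out that arithmetic:

```python
# The arithmetic behind the "15000 times" remark: BZip2 got the 2 GB ATTO
# file down to ~14 kb, i.e. a ratio of ~6.7e-6; a SandForce result of 10%
# would be worse by roughly this factor.
bzip2_ratio = 14 * 1024 / (2 * 1024**3)   # ~6.68e-6
sf_ratio = 0.10                           # the assumed 10% figure
print(sf_ratio / bzip2_ratio)             # ~15,000
```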
Other compression algorithms are slower and less efficient for that test-file though. And I absolutely admit that I don't have the slightest idea how fast you can make a dedicated compression engine on such a controller chip without burning the poor little thing. I don't even know if something like a "dedicated" compression logic exists or is possible, instead of just running compression software/firmware on an all-around processor. Anyone in the know about how Sandforce *really* accomplishes compression under the hood?
At 250MB/s the data is stored. (worst case, as in incompressible data)
The question is at which compression rate it switches from compressing to just storing the data. There could be just a few algorithms used, meaning a few thresholds where it takes a different direction and/or de-duplicates the data.
I'm pretty confident that as long as the data is somewhere between, say, 60-67% and 100% (as in 100% incompressible) then the data is just stored.
Trying to compress this sort of data would lead to slower writes, well, we'll find out.
(those figures looks to be what the SF1 controller does)
@Timur
Consider that hardware compression has to meet strict requirements and data must be easily accessible. A very good software algorithm can compress 1GiB of data to less than 1KiB (if mathematically possible) because it has access to all of it at once. To do the same, a controller would need a large cache to store and compress the data, and this would lead to high latencies for pending writes while everything is compressed. Also, consider that you want to access 512 bytes starting from position 512000512. In this case, the controller would need to unzip the complete file to find out what is at that position, which would be a complete waste of resources. Most probably, if the controller is hit with a high write load, it will cluster pages and archive them. From one test done earlier in this thread it seems that it groups at least 8 pages of 4KiB, and this is where the compression factor of 8-12% came from.
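To spell out where the floor comes from under that (unconfirmed) 8-pages-of-4KiB grouping assumption:

```python
# If the controller really compresses groups of eight 4 KiB pages and the
# best case is fitting a whole group into a single page, the measurable
# floor is 1/8 = 12.5% -- in the ballpark of the 8-12% seen earlier,
# no matter how compressible the data itself is. (The grouping is a guess.)
pages_per_group = 8
page_size = 4 * 1024

group_bytes = pages_per_group * page_size   # 32 KiB in
best_case_bytes = page_size                 # 1 page out
print(f"floor = {best_case_bytes / group_bytes:.1%}")
```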
@Anvil
I would say that, if it can save even one page, it will save it, so most probably it will switch to just storing when the archived data cannot be stored in a smaller number of pages; I would bet it would still archive even if the result has a compression factor of 80-85%. The problem is that it needs to store some parity data (RAISE) in other pages, so I expect to see WA equal to 1 in these scenarios.
Again, I will say that that cannot be true. The 480GB Vertex 3 is rated at only 450MB/s sequential write. It makes no sense for the "compression engine", as you call it, to be slower on the 480GB model than on the lower capacity models. Therefore, whatever is limiting the sequential write speed to 450MB/s (or whatever) is not the compression engine.
But once again I will say, your point is moot. Ao1 already measured the zero-fill compression factor for a V2, and it was in the 8-12% range. Soon Ao1 should be able to tell us what it is for a V3.
There could be a number of reasons to have lower rated specs, for example a lower controller clock-rate to accommodate the increased amount of flash on the board (even if just to cut production cost by getting rid of a single capacitor).
And Ao1 did *not* measure the zero-fill compression factor, but NAND wear over the course of 64 gb. NAND wear is used as an indication of the compression ratio, but it's not necessarily a foolproof one.
@Ao1: Did you do the measurements on a new drive or on one that was already filled before? If the latter then how do we know SMART wear reports are not just from garbage collection doing its work at any time during the last 64 gb (be it because of the 0fill or not)?
Due to its nature, the Sandforce compression factor cannot be measured below a threshold limit, which is what it can fit into a page. It does not matter if it can compress 32KiB to 10 bytes or 4095 bytes, because it is using a full page and needs to write it to flash cells. It cannot wait forever for something that matches the available space to arrive. I am pretty sure it can theoretically compress several-GB files made of zeroes down to a few bytes, but it never knows what it is receiving next, so it must assume the worst possible scenario. And for this reason it will never try to archive more than a fixed amount.
Regarding the Sandforce GC, it is supposed to erase blocks, which would not increase the write count. Also a small but maybe useful bit of info: I was able to run CrystalDiskMark (both compressible and incompressible test data) on an OCZ Vertex 3 240GB model last week. I looked at the SMART parameters, but because I set the test size to 100MB I did not expect to see any significant changes, so I did not note the exact initial values for #233 and #241. But to my surprise, I saw something like a 30GB change for incompressible data and 5GB for zero fill. Now the values might not be exact because I did not note them; this is just what I remember from my poor memory. I still have access to the drive but I cannot do any tests, at least not now, because it is running some VMs.
I have saved the results for later comparison when drive is used:
Attachment 117836Attachment 117837
Page deleting *is* NAND wear, isn't it?
And you can perfectly well compress several GB of zeros or even repeated pseudo-"random" data without knowing what comes next, it just has to fit into what you've already got and you need enough memory to hold that information before finally writing them. Think of it like: 1 sheep, 2 sheep, 3 sheep, 4 sheep, 5 sheep.... 102043434 sheep. It's still just X sheep.
I'm really not trying to rain on anyone's parade here, but measurements don't get anywhere when "interpretation" of uncertain information is taken for hard evidence. Just keep your mind open about what (mostly undocumented) complexities might happen under the hood. I put some counter-arguments into the ring to inspire a better understanding.
And originally really just wondered how 0fills can only be compressed down to 10%. Whatever the truth is, Sandforce is ultimately limited by its processing power and onboard RAM. Too bad there ain't any specs out there on these things, or are there?
Guys, those ATTO benches in #224. There are a number of test patterns you can select, but they are all highly compressible.
Look at the read/ write speeds in the random test pattern.
Does a random test pattern mean a random selection of highly compressible patterns?
If so why the big drop in performance? Something is capping those write speeds. It's almost like the randomness has caused the data to become uncompressible.
Timur, I agree there is a lot of second guessing in the absence of hard facts. Tomorrow I will try to clear things up by using real data and carefully monitoring SMART attributes. I will also be careful not to write anything in excess of the drive capacity without running a SE first.
The only hard fact I have seen is a graph in an AnandTech review that compared host writes against NAND writes for a Win/Office install.
There are most likely a lot of variables so it will be hard to get to the bottom of it, but we can try. :)
Hey Ao1, you're doing good here! Sorry for my overly questioning attitude. ;)
Considering ATTO: When you do that I/O comparison test with random pattern then the data in the testfile itself changes. It becomes less compressible (between 5% and over 90% depending on compression settings)!
V3 - 60GB
I ran a SE between each file copy exercise.
Attachment 117880
I ran a quick comparison on ATTO. I'm not sure if the SF cpu is clipping the writes, or if it is due to the data becoming non compressible when random.
Endurance app now up and running
Attachment 117876
The CDM screen shot was taken just after I started the endurance app. I kept screen shots for each of the xfers above. If anyone wants to see them send me a pm.
Attachment 117877
Attachment 117878
Great work there Ao1, and you got the drive this morning?
Did you run any of the standard benchmarks, just wondering about the incompressible seq. write speed, looks to be 76MiB/s on the Endurance test, I'd guess 85-90MiB/s?
Hi Anvil, yes I got it this morning, but I had all my ducks lined up to crank out the testing.
I just stopped the endurance app to run AS SSD......(I'm on SATA 2 so sequential read speeds are clipped)
I suspect the ATTO random is down to the SF controller.
Attachment 117879
I expect it's already in steady-state and so the figures look OK, about the same seq writes as a 34nm 60GB drive.
Not sure about it being held back due to the SATA 3Gb/s ports on writes, shouldn't matter much as throughput is far from stressing the 3Gb/s interface.
(it does matter for drives like the C300 though, but they don't do compression)
The seq. read speed is very nice compared to the Agility 3.
Yep it would have been in a steady state on the AS SSD run. I didn't open up the case to see what NAND was being used in case I got any BSOD weirdness and had to return it. So far so good though. Once I get it to a throttled state I will open up the case.
I'm still getting the TRIM hang btw. Doesn't seem any different to the V2. :shrug:
AVG speed now 64.81MB/s (0.22TB)
Is it as noticeable as it was on the V2, meaning 10 seconds or so?
It would still be interesting to test it on SATA 3. I'm curious to see if writing zeroes would max out the interface. Also, could you do same tests trim vs non trim and look for WA?
Like I wrote before, the ATTO testfile itself becomes a lot less compressible when you use that random I/O comparison (only down to around 90% via 7Z fastest w/o solid). So the reduced write performance is likely due to SF's compression working hard on that file.
This would also fit in comparison to AS SSD, which is even less compressible than the ATTO random file and thus shows less sequential write performance.
What I wonder about is: Is the resulting write performance due to slow NAND writes of the SF drive, or due to the SF trying to compress the data even when it's not really compressible?
Anvil suggested that SF may only compress data that's at least 66% compressible, but I doubt that SF is able to detect that before trying. Because in order to do that it would have to analyze the data on a "content" basis (i.e. don't try JPG, MP3, ZIP, etc.), which I don't think it can do (at least not without a proper RAM buffer for pre-analysis). 7Z/LZMA2 tries to avoid recompressing already compressed files and still it has to go through the whole file at full compression time first (even knowing its ZIP extension).
I assume you have LPM Slumber disabled in order to identify TRIM hang from LPM hang? ;)
The "hang" is a known issue on the SF controller (deleting large files or a lot of files will result in a "hang" during TRIM), it's got nothing to do with LPM.
I was hoping that this one was gone but apparently not.
(personally I always apply the "LPM fix" on SnB systems)
One of the benefits of the MSAHCI driver over the Intel one is that you can switch LPM states (+Slumber activation time) via power-profiles without needing to reboot. It even allows different settings for battery operation. OS X uses a Slumber timer of 40 ms by the way; Windows "Balanced" and "Power Saver" profiles are set to 100 ms, while "High Performance" turns LPM off (only with the MS driver).
The ~67% is based on the compression ratio of a file (or a collection of files) found using 7Zip "Fastest"; when such a file is written, the E9 and F1 SMART attributes increase at the same rate, which indicates that "no" compression is performed (WA is ~1.0).
The SF1 series controller told us in 64GB steps how the data was written, now with the SF2 series it is considerably easier to measure as the counters are showing raw/host writes in 1GB steps.
In real life it won't get a constant stream of some "known" type of data, it will most likely be a mix where in general the smallest package is ~4K and the largest package is ~128KB.
(there will be extremes like 512B and multi-MB packets but those aren't typical)
The SF controller sees only packages of data in streams, it has no concept of files and so looking for extensions is out of the question.
Of course looking for extensions is out of the question. That is why I wrote that SF would have to be "content" aware (analyze and guess content) and that even 7Z's LZMA2 needs to go through *all* of the content, even though it could be extension aware, which obviously it is not. Software compressors usually are content aware in that they use different algorithms for different source files (RAR, 7Z and current WinZIP all do that unless you force them not to).
So when 7Z, with the power of an i7 and 7 gb RAM at its disposal, has to compress through the whole incompressible file just to reach 100% (aka no compression, aka just store), then I doubt that SF with its lack of memory can do much better (still no idea about its "dedicated" computing power).
My somewhat educated guess (!) is that SF does nothing of that sort, but just tries to compress everything coming in with some algorithm that, similar to LZMA2, avoids ever going over 100%. In that light the 67% limit you identified would be the turning point from which SF's compression engine fails to get any more compression out of the 7Z file.
And since the difference between compressing to 100% and storing at 100% is throughput, I wonder if SF could be faster with incompressible data if it didn't try to compress it in the first place?! Why else would incompressible data only write at around 70 mb/s? Or do you think SF is so much slower at pushing uncompressed data to its NAND than other SSD processors (i.e. Marvell)? Maybe it is, but with all the computing put into compression there should be enough juice for just storing it away.
The SF2 series controller is capable of writing incompressible data at about 280-290MiB/s, that is, with the optimal NAND/die "setup".
OCZ lists the specifications for the incompressible data rate, it's not a secret at all and ~70MiB/s is in line with the specs for the 60GB V3.
(depending on the block-size)
Attachment 117890
In my argument I forgot that those Vertex 3 240 gb drives reach over 240 mb/s when fresh. So my last argument isn't as valid as it seemed.
Edit: You posted just before I hit the Post button. ;)
+1 poor TRIM implementation. For those that missed it, it seems a fix is inbound. Quote:
The "hang" is a known issue on the SF controller (deleting large files or a lot of files will result in a "hang" during TRIM), it's got nothing to do with LPM.
SATA 3.1
Just noticed, the V3 has a number of new SMART attributes. 230 (E6) should be interesting.
Attachment 117903
@Ao1
Could you do some more zero fill tests? I would be interested to see how much data you need to write to see 80 increments in the #233 smart parameter. The data from post #245 is interesting and odd at the same time compared to the V2.
Sure, but let me get the drive to a throttled state first.
Don't forget for the V3 I only wrote to clean NAND, so WA was excluded.
I don't know why the installations compressed by 50%, but the data folders stayed the same. The documents folder should have been quite compressible. (Office docs, pdf docs and three web sites).
Are you going to test the WA of the other settings from Anvil's app before you hit LTT?
Probably because some of the files in the Windows installation had very easily compressible data, like bytes repeated over and over, while the document folders did not have anything so simple. I've always thought that Sandforce's compression algorithm is basic, and cannot compress anything more sophisticated than a short repeating pattern like zero-fill or repeated bytes.
Not sure what you mean?
Currently Anvil's app is running with non compressible data and whatever the default settings are for random writes.
Once I have followed John's method I can do loads more on compression. The extra capacity of the V3 plus 1GB reporting will make it a lot easier to experiment with.
If I am reading his posts correctly, Ao1 is talking about his documents folder which totals 5,444,933,198 bytes. Ao1 showed a RAR archive of the documents that was 3,155,165,184 bytes. That comes to a compression ratio of 57.9%. Seems reasonable for a documents folder. But Sandforce's compression algorithm cannot compress the documents much at all. Which agrees with what I have often said, that Sandforce's compression algorithm is quite basic and can only compress simple repeating byte patterns well.
There's 5 (or is it 6?) settings for data compressibility with Anvil's app, I only see you've published ~WA with 0-fill setting
You should still have many TiB of wiggle room before LTT kicks in, why not try seeing the compressibility of more things? Try out Anvil's various settings, take your current Program Files directories and copy it onto the V3 and other stuff :p:
It's hard to tell if something is quicker without the stopwatch :)
It's probably not much though, if anything at all.
I reinstalled the OS (W7) on one of my bench-pc's and ~6GB writes is very close to what I found as well so the data looks fine.
Using my benchmark one can easily observe that the different compression ratios result in different scores, but as I've said, there's not much difference between incompressible and 67%, and not much between 67% and 46% either. It does depend on NAND and capacity though.
Guess it's time to create that Benchmark thread so that more people can make some input on the various SF drives.
Nice :up:
My Windows install was Home Premium. No updates etc, just a basic install. MS Office 2007 Enterprise Edition - standard install.
Outside of installs and benchmarks I need to be persuaded that compression works in even a minor way. [for the data I work with anyway]
The 233/241 ratio should be a good indication if a workload is using compression or not.
A real life test that I would propose: copy the openjdk 7 source code from http://download.java.net/openjdk/jdk7/ , unzip it, do multiple copies to your ssd, then look at how the SMART parameters evolve. The uncompressed archive is 273MiB but is reported to use 333MiB on disk. It is delivered as an 83MiB download but it is easily compressible using 7zip to 30-50MiB depending on settings.
I checked as well before going online for updates, my W7 is the x64 Enterprise, should be very close in size anyways.
If your work files are mostly multimedia there won't be much compression at all.
In my case it's mostly OS and VM's containing OS's + apps + databases and they all compress.
I made a few checks when cleaning the drives and the results are pretty clear...
These two drives have been running 2R0 as a pair for, well basically 99% of the time and have been used as boot drives running 1-2 VM's for testing purposes.
Attachment 117906 Attachment 117907
I found an interesting article on SF compression here:
"For this article I took a consumer SSD that has a SandForce 1222 controller and ran some throughput tests against it using IOzone. IOzone enabled me to control the level of data compressibility, which IOzone calls dedupability, so I could test the impact on performance. I tested write and read performance as well as random read, random write, fwrite, and fread performance. I ran each test 10 times and reported the average and standard deviation.
The three write tests all exhibited the same general behavior. More specifically:
• As the level of compressibility decreases, the performance drops off fairly quickly.
• As the level of compressibility decreases, there is little variation in performance as the record size changes (over the record sizes tested).
The absolute values of the performance varied for each test, but for the general write test, the performance went from about 260 MB/s (close to the rated performance) at 98% data compression to about 97 MB/s at 2% data compression for a record size of 1 MB.
The three read tests also exhibited the same general behavior. Specifically,
•The performance drops only slightly with decreasing compressibility (dedupability)
•As the level of compressibility decreases, the performance for larger record sizes actually increases
•As the level of compressibility decreases, there is little performance variation between record sizes
Again, the absolute performance varies for each test, but the trends are the same. But basically, the real-time data compression does not affect the read performance as much as it does the write performance.
The important observation from these tests is that the performance does vary with data compressibility. I believe that SandForce took a number of applications from their target markets and studied the data quite closely and realized that it was pretty compressible and designed their algorithms for those data patterns. While SandForce hasn't stated which markets they are targeting I think to understand the potential performance impact for your data requires that you study your data. Remember that you're not studying the compressibility of the data file as a whole but rather the chunks of data that a SandForce controller SSD would encounter. So think small chunks of data."
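For anyone without IOzone at hand, a crude sketch of generating data with a chosen compressibility in the same spirit as IOzone's dedupability knob (this is not how IOzone builds its buffers; path, block size and the zero/random mix are placeholders):

```python
# Crude stand-in for IOzone's compressibility knob: each block is part
# zeros (compressible) and part random bytes (incompressible).
import os

def make_block(block_size=1024 * 1024, compressible_fraction=0.5):
    """One block: leading zeros, then random bytes."""
    zeros = int(block_size * compressible_fraction)
    return b"\x00" * zeros + os.urandom(block_size - zeros)

# Write 1 GiB of ~50% compressible data to the test drive.
with open(r"E:\half_compressible.bin", "wb") as f:
    for _ in range(1024):
        f.write(make_block(compressible_fraction=0.5))
```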
Attachment 117926
Attachment 117927
Attachment 117928
Attachment 117929
Attachment 117930
Attachment 117931
Just under 24 hours and ~5.3TB. No reduction in the MWI or life curve, so I'm still within the credit zone.
AVG write speed = 65.17MB/s
Attachment 117925
excellent find A01:up: IOzone looks very interesting as well.
this would make quite a bit of sense: like johnw says, and I agree, the SF being a low-wattage proc, it has to have a very rudimentary compression engine. If they did design their algorithms for certain patterns (tweaked it :) ) then that would help to explain why they pull off so much performance with so little power. Quote:
I believe that SandForce took a number of applications from their target markets and studied the data quite closely and realized that it was pretty compressible and designed their algorithms for those data patterns.
Good find!
I would like to see how much write throughput IOzone reports on a Vertex 3 or other current SF controller based SSD.
E6 Life Curve Status
New: 00786400000064 (16 hex)
Current: 00626400000064 (16 hex)/ 0 25188 0 100 (Dec 2byte)/ 0 98 100 0 0 0 100 (Dec 1byte)
Does it match 231?
No. 231 is still at 100%, which is impossible considering I'm nearly at 6TB. 98% is much more realistic.
EDIT and the OCZ Toolbox SMART value for 230 (E6) still reports 100. I think the raw value is Dec 1byte
Strange, could it be due to the initial "credit"?
That is my guess :)
EDIT: Let's say 2% of NAND PE = Credit. That means the remaining 98% is distributed across the 100% period of the life curve
Does the "credit" serve any purpose other than allowing shiny specs and fooling review sites who are not in the know?
Let's see how it develops, if they start moving in "sync" then there is a connection.
Makes sense! :)
E6 100%
E7 100%
E9 6,497
E6 raw values
00616400000064 (16 hex)
0 24932 0 100 (2byte)
0 97 100 0 0 0 100 (1byte)
E6 100%
E7 100%
E9 6681
E6 raw values
00606400000064 (16 hex)
0 24676 0 100 (2byte)
0 96 100 0 0 0 100 (1byte)
Can anyone convert the raw value when new into 2byte & 1 byte values? 00786400000064 (16 hex)
2byte:
0 30820 0 100
1byte:
0 120 100 0 0 0 100
If E6/230 has anything to do with LTT, you seem to have a loooong way to go until it kicks in :p:
78 = 120
64 = 100
7864 = 30820
You can just use the Windows calculator: select View->Programmer and then enter the value in Hex mode; to convert to Dec just click on Dec.
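Or, if a script is handier than the calculator, a small sketch that reproduces the same 1-byte/2-byte split of the raw value:

```python
# Split the E6/230 raw value into the 1-byte and 2-byte views quoted above.
raw = "00786400000064"                          # value when new

one_byte = list(bytes.fromhex(raw))             # [0, 120, 100, 0, 0, 0, 100]
padded = bytes.fromhex(raw.rjust(16, "0"))      # pad to 8 bytes for word view
two_byte = [int.from_bytes(padded[i:i + 2], "big") for i in range(0, 8, 2)]

print("1byte:", one_byte)
print("2byte:", two_byte)                       # [0, 30820, 0, 100]
```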
Thanks....maybe 100 is the base line. 120 was initial credit and now I am below the life curve at 96. If correct at some stage it will try to get back to 100. Bah, I wish the V2 had this attribute.
Although the PE cycles are not in theory supposed to go below the life line they clearly did with the V2 and it looks like it will be the same for V3.
Maybe it will become clearer when the MWI gets to 99%
E6 100%
E7 100%
E9 6829 GB
E6 raw values
0 95 100 0 0 0 100 (1byte)
230 (E6) is supposed to show if the drive is operating under extreme conditions requiring protection measures to be activated.
I've just passed 8,154GB. MWI is still 100. 230 is showing as 100, but the raw value is showing 91.
SPECULATION :)
It could be nothing more than coincidence, but if you trend the raw-value reduction against the data written so far, you end up with ~35TB.
Then again once the credit PE cycles have run out maybe it all changes.
Attachment 117937
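The back-of-envelope trend behind that figure, purely speculative and assuming the decrement rate stays linear:

```python
# Speculative extrapolation: E6's 1-byte field dropped from 120 to 91 over
# ~8,154 GB of writes; projecting the same linear rate down to zero.
start, now = 120, 91
written_gb = 8154

gb_per_tick = written_gb / (start - now)     # ~281 GB per decrement
projected_tb = start * gb_per_tick / 1000    # ~34 TB if the trend holds
print(f"~{projected_tb:.0f} TB")
```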
Interesting, a great finding if that is the case!
Still running at 65MiB/s?