
Thread: Sandforce Life Time Throttling

1. #201 Xtreme Mentor (Join Date: Feb 2009, Posts: 2,597)
Quote Originally Posted by Computurd
    *off topic*
    there are whispers of user configurable LTT firmware coming very soon.

    ETA: My V3 should arrive tomorrow. Is it an SF vendor-wide solution or a vendor-bespoke one?

2. #202 Timur (Guest)
Quote Originally Posted by johnw
    So, unless you consider 1/6 compression "close to zero", then no, it does not imply data compression to "close to zero".
    This is not what I wrote. I wrote that compression on the ATTO testfile is so high that the 6G SATA bus becomes the limiting factor, not the compression. This implies that nearly no actual data is being transferred, especially when looking at the result of ZIP/LZMA/NTFS LZ77 compression on that very same file.

    The flaw in that argument is that I don't know the real maximum throughput (minus protocol overhead) of the 6G bus. Maybe someone in the know can chime in on that?

    We would need a "normal" compressible testfile to get an idea of the compression ratios compared to NTFS. If we don't care for the comparison, then the AS SSD compression test (going from 0% compressible to 100%) should give an idea of how well Sandforce compresses, as sketched below.
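    For what it's worth, the kind of sweep AS SSD does can be approximated by mixing incompressible random bytes with zeros in a controlled proportion. A minimal Python sketch (my own approximation, not AS SSD's actual generator; zlib stands in for the drive's compression engine):

```python
import os
import zlib

def make_buffer(size: int, compressible_fraction: float) -> bytes:
    """Build a buffer where roughly compressible_fraction of the bytes are
    zeros (trivially compressible) and the rest are random (incompressible)."""
    n_random = int(size * (1.0 - compressible_fraction))
    return os.urandom(n_random) + b"\x00" * (size - n_random)

# Sweep from 0% to 100% compressible, like the AS SSD compression test,
# and report what a software compressor achieves on each buffer.
for pct in (0, 25, 50, 75, 100):
    buf = make_buffer(1 << 20, pct / 100.0)  # 1 MiB test buffer
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{pct:3d}% compressible -> zlib output is {ratio:.0%} of original")
```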

3. #203 Banned (Join Date: Jan 2010, Location: Las Vegas, Posts: 936)
Quote Originally Posted by Timur
    This is not what I wrote. I wrote that compression on the ATTO testfile is so high that the 6G SATA bus becomes the limiting factor, not the compression.
    Except that is not true either. 6Gbps SATA should be capable of at least 570MB/s, and no Sandforce SSD that I have seen writes that fast.

4. #204 Xtreme Mentor (Join Date: Feb 2009, Posts: 2,597)
We should be able to work out the compression factor with the V3. Help with settling on the best method to do that would be appreciated.

    Synthetic or real data?

    For example, I have a Documents folder that is 5.08 GB (5,455,376,384 bytes). As I write, I am compressing the folder with WinRAR on the best setting. The compression ratio is fluctuating between 82% and 89% (it's not finished yet).

    I'd like to get a methodology in place so I can start testing as soon as the drive arrives tomorrow.

5. #205 Timur (Guest)
The Vertex 3 is listed at 550 MB/s read speed, and there are screenshots of ATTO benchmarks at 560 MB/s. That sounds pretty close to your 570 MB/s number. Where do you get that number from?

6. #206 Xtreme Mentor (Join Date: Feb 2009, Posts: 2,597)
Timur, there is an overhead factor on SATA 3, but unlike SATA 2 that overhead is unknown (at least I have not been able to find it).

    Regardless, we should be able to work out the SF compression factor using SMART attribute #233.

    Suggestions welcome; one possible starting point is sketched below.
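    For when the V3 arrives, here is one way the #233-based calculation could be scripted with smartmontools. A rough sketch under stated assumptions: #233 is taken (per this thread) as GiB written to NAND, #241 as GiB written by the host, and the device path is a placeholder; check your drive's actual attribute table first.

```python
import subprocess

def smart_raw(device: str, attr_id: int) -> int:
    """Return one SMART attribute's raw value via smartctl
    (needs smartmontools installed and sufficient privileges)."""
    out = subprocess.check_output(["smartctl", "-A", device], text=True)
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == str(attr_id):
            return int(fields[9])  # RAW_VALUE column of the attribute table
    raise ValueError(f"attribute {attr_id} not reported by {device}")

# Assumptions: #233 = GiB written to NAND (updated only in 64GB steps on
# the V2, per Ao1), #241 = GiB written by the host. Device path is a placeholder.
dev = "/dev/sda"
nand, host = smart_raw(dev, 233), smart_raw(dev, 241)
print(f"NAND/host writes = {nand}/{host} = {nand / host:.2f} (compression x WA)")
```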

7. #207 Xtreme Mentor (Join Date: Feb 2009, Posts: 2,597)
    Whoa, that is a bit disappointing. Took a while as well.

[Attachment: Untitled.png]

8. #208 Timur (Guest)
Ao1: Yes, unfortunately the overhead is unknown, but something around those 560-570 MB/s seems reasonable. I am wondering if I can get a benchmark to read/write directly from my M4's onboard cache (256 MB)!?

    Before we work out the Sandforce compression ratio with our own methods: what does the AS SSD compression benchmark return for Sandforce-based drives? It would be good to get readings from different drives, like Vertex 3 vs. Vertex 3 Max IOPS vs. Corsair etc.

    Btw: my former system partition (OS + applications + games + personal data + some small sample libraries of Ableton Live and Sonar) compressed down by approx. 30% via NTFS, with some directories coming close to 50% and more (asset files of Civilization 4) and others below 10% (audio sample files).

    Unfortunately NTFS is seriously bottlenecked for anything that is not highly compressible (80%+ on the AS SSD graph), so it cannot be used as a substitute for Sandforce compression on non-Sandforce drives.
    Last edited by Timur; 07-18-2011 at 01:46 AM.

9. #209 Banned (Join Date: Jan 2010, Location: Las Vegas, Posts: 936)
Quote Originally Posted by Timur
    The Vertex 3 is listed at 550 MB/s read speed, and there are screenshots of ATTO benchmarks at 560 MB/s. That sounds pretty close to your 570 MB/s number.
    No, the compression is done on writes, obviously, so the relevant number is sequential write speed. The rated maximum sequential write speeds for the Vertex 3 are 495, 500, 520 and 450 MB/s for the 60, 120, 240 and 480GB models. Obviously NOT limited by SATA 6Gbps.

10. #210 Xtreme Mentor (Join Date: Feb 2009, Posts: 2,597)
Even worse... my "Pictures" folder. Squeezing out 5% took around 4 minutes!

[Attachment: Untitled.png]

11. #211 Xtreme Mentor (Join Date: Feb 2009, Posts: 2,597)
Here is AS SSD on the V2.

[Attachment: Untitled.png]

12. #212 Banned (Join Date: Jan 2010, Location: Las Vegas, Posts: 936)
Quote Originally Posted by Ao1
    We should be able to work out the compression factor with the V3. Help with settling on the best method to do that would be appreciated.
    Do you mean the maximum compression factor? That should be easy to measure. Just write a stream of zeros and see how the flash writes compare to the host writes.

    If you mean the typical compression factor, then you first need typical data. You could take one of your system drives and create an UNCOMPRESSED archive out of all the files on it, and then repeatedly copy the archive to the V3, noting flash writes and host writes.
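    A hedged sketch of the zero-fill half of that procedure (file path and size are placeholders; SMART readings would be taken before and after the write, e.g. with a helper like the smartctl sketch earlier):

```python
import os

def write_zero_stream(path: str, total_gib: int, chunk_mib: int = 64) -> None:
    """Write total_gib GiB of zeros in chunks, then fsync so the data
    actually reaches the drive instead of sitting in the page cache."""
    chunk = b"\x00" * (chunk_mib << 20)
    with open(path, "wb") as f:
        for _ in range((total_gib << 30) // len(chunk)):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())

# Placeholder target on the Sandforce drive; 64 GiB matches the V2's
# reporting granularity, so at least one #233 increment should register.
write_zero_stream("Z:/zerofill.bin", 64)
```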

13. #213 Timur (Guest)
You are missing the point here. We were discussing compression *ratio* and how we could come up with reasonable numbers. My statement was that 10% seems rather low for *compressible* data. The problem is that we can hardly define "compressible", but the ATTO testfile that is usually used for that measurement is so extremely compressible that we have to keep that in mind when talking about "compressible".

    I am not saying that the Sandforce chip doesn't need its time to do the compression (so do all other compressors), but it is obviously fast enough on *de*compression (i.e. reading) that it seems to reach the practical limits of the 6G connection with the ATTO testfile.

    This in turn implies that the compression ratio is so good that we are merely measuring the Sandforce's compression speed on writes and the 6G limit on reads, and *not* the compression ratio (nor the speed of the flash chips and controller apart from compression).

14. #214 Banned (Join Date: Jan 2010, Location: Las Vegas, Posts: 936)
Quote Originally Posted by Timur
    You are missing the point here. We were discussing compression *ratio* and how we could come up with reasonable numbers. My statement was that 10% seems rather low for *compressible* data.
    No, I got the point. You seem confused. You do realize that 10% means compressed from 100% to 10%, right? We are talking about compression FACTORS. 10% means a factor of 10 compression.

    If the Sandforce SSD's maximum sustained host write speed is 500MB/s (for example, when fed a stream of zeros), but the flash sustained write speed is only 90MB/s (for example, for a 60GB V3), then the compression factor is at least 5.6, or 18%. It could be 10%, but we cannot say for certain using the throughput ratio method.
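    That lower bound is just the ratio of the two sustained speeds; a quick sketch using the example figures above:

```python
def min_compression_factor(host_mb_s: float, flash_mb_s: float) -> float:
    """If the host can sustain host_mb_s while the flash only absorbs
    flash_mb_s, compression must shrink the stream by at least this
    factor, or the flash could not keep up."""
    return host_mb_s / flash_mb_s

f = min_compression_factor(500.0, 90.0)  # the 60GB V3 example above
print(f"factor >= {f:.1f}, i.e. output <= {1 / f:.0%} of input")  # 5.6, 18%
```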

    But it is moot anyway, since Ao1 already measured it for a V2. IIRC, he got something like 8 - 12% or so for zero-fill.

15. #215 Timur (Guest)
Quote Originally Posted by Ao1
    Here is AS SSD on the V2.
    I found a screenshot of AS SSD compression speed on http://thessdreview.com/our-reviews/...-as-ssd-tests/

    At 100% compressible (which is close to the ATTO file) it shows about 410 MB/s write speed. So, as I suspected in my last post, we only get to learn the speed of the compression engine (or controller/flash speed), not the compression ratio. Otherwise those 100% results should be closer to the 6G limits.

    Curiously, the read speed is "limited" to 520 MB/s in this test as well, which is a good deal below the 550-560 MB/s of the ATTO screenshots. This could be due to different calculation methods, or to different systems/setups (CPU and SATA controller) that the benchmarks ran on. What ATTO readings do you get on your system compared to your AS SSD graph?

16. #216 Banned (Join Date: Jan 2010, Location: Las Vegas, Posts: 936)
Quote Originally Posted by Timur
    So, as I suspected in my last post, we only get to learn the speed of the compression engine (or controller/flash speed), not the compression ratio. Otherwise those 100% results should be closer to the 6G limits.
    No, that cannot be true. Note that the rated sequential write speed for the 480GB V3 is 450 MB/s. Why would the compression engine be slower on the 480GB model than the 120GB or 240GB models?

    Anyway, as I said (and Ao1 has said), Ao1 already measured the zero-fill compression ratio for a V2, and it was about 10%. He will no doubt measure it for his V3, and we will see if it differs from the V2.

17. #217 Timur (Guest)
Quote Originally Posted by johnw
    No, I got the point. You seem confused. You do realize that 10% means compressed from 100% to 10%, right? We are talking about compression FACTORS. 10% means a factor of 10 compression.
The original argument was neither about "ratio" nor "factor", but about Ao1's assumption that "If 0fill is around 10% NAND wear". So when writing 100% zeros, around 10% of the NAND would wear. I found 10% too high for writing nothing but zeros, based on my experience with different compression engines.

    There are two pieces of information missing from my own argument:

    1) How well can an engine compress when it reaches a throughput of over 400 MB/s? That is amazingly fast for compression. Still, I'd like to think that compressing nothing but zeros should get quite close to 100% with any compression engine. To support my argument I mentioned the rather poor and bottlenecked compression of NTFS.

    2) Ao1 wrote "NAND wear", not "NAND fill". Once all NAND pages have been written to at least once, we can assume that the "wear" for writing new (even compressed) data is higher than just the space needed to store that data, because some blocks have to be erased at some point to make room for the new data.
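    Point 2) can be put as simple arithmetic (my framing of the distinction, with illustrative numbers only): what reaches the NAND is the host data scaled down by the compression ratio and scaled back up by write amplification from garbage collection.

```python
def nand_wear_gib(host_gib: float, compression_ratio: float, write_amp: float) -> float:
    """GiB actually written to NAND: host data scaled by the compression
    ratio, then inflated by write amplification from garbage collection."""
    return host_gib * compression_ratio * write_amp

print(nand_wear_gib(100, 0.10, 1.0))  # fresh drive: 10.0 GiB ("fill")
print(nand_wear_gib(100, 0.10, 1.5))  # full drive, GC active: 15.0 GiB ("wear")
```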

18. #218 Xtreme X.I.P. (Join Date: Apr 2008, Location: Norway, Posts: 2,838)
The typical ATTO bench uses QD 4 (overlapped I/O).

    Just do a test at QD 1.

19. #219 Banned (Join Date: Jan 2010, Location: Las Vegas, Posts: 936)
Quote Originally Posted by Timur
    2) Ao1 wrote "NAND wear", not "NAND fill". Once all NAND pages have been written to at least once, we can assume that the "wear" for writing new (even compressed) data is higher than just the space needed to store that data, because some blocks have to be erased at some point to make room for the new data.
    You should really look up the posts where Ao1 measured it. The SMART attribute on the V2 appears to show how much data was actually written to flash. On a large sequential write, there is no reason to assume any significant overhead or write-amplification of the type that you refer to above. So Ao1's measurement is looking at the compression factor that was achieved, or a close approximation. His measurement is correct, within the limitation that the increment of the attribute was rather large.

20. #220 Timur (Guest)
Quote Originally Posted by Ao1
    Whoa, that is a bit disappointing. Took a while as well.
I just compressed my current system partition (OS + Civilization 4 + applications + some small sample libraries + some pics) via 7z's "fastest" method (only a 64 KB dictionary size). It squeezed the whole 300,000+ files / 59 GB down to less than 60%.

    I read from the SSD and wrote to an HDD, with throughput only around 20 MB/s even though my 8 logical cores were nowhere near maxed out. One has to consider that 7z very likely uses only small block sizes for reads, and/or most of these 300k files are small files. And 20 MB/s is around what my M4 can deliver for random 4 KB reads (depending on CPU setup). hIOmon should be able to tell us, right? (I did not dive into its manual yet.)

21. #221 Timur (Guest)
Quote Originally Posted by Anvil
    The typical ATTO bench uses QD 4 (overlapped I/O).

    Just do a test at QD 1.
    Good point!

22. #222 Timur (Guest)
Hm, I'm currently looking at the size distribution of my system partition. It seems many files aren't that small, so the 20 MB/s must come down to how 7z reads the files.
    Last edited by Timur; 07-18-2011 at 02:43 AM.

23. #223 Xtreme Mentor (Join Date: Feb 2009, Posts: 2,597)
Post #151 compared an ATTO run to an AS SSD run when the drive was in a throttled state. Clearly the ATTO test data is highly compressible. With uncompressed data I could only get 7MB/s, but with highly compressible data I could get 250MB/s.

    The problem with previous testing was that 1) the V2 only reports attribute #233 in 64GB increments and 2) unless the drive is in a fresh state, write amplification (WA) can distort the #233 reading.

    I can avoid both those factors with the V3.

    Here are a few more shots of ATTO for what they are worth. It does not appear to let you run a bench at QD1.


[Attachments: io comp.png, neither.png, overlapped.png, random.png]

24. #224 Xtreme Mentor (Join Date: Feb 2009, Posts: 2,597)
    Hmm, I thought that random run might have been a fluke so I ran it again.

[Attachments: FF00.png, ran 2.png]

25. #225 Xtreme Mentor (Join Date: Feb 2009, Posts: 2,597)
Here are the same folders compressed with 7z. Better compression, although the Pictures folder took around 6 minutes.

    Ok, here is what I will do unless someone has a better idea (see the sketch after this list):

    1. Install Win 7 and Office, then check the #233 and host write readings.
    2. Secure erase (SE).
    3. Copy my Documents folder, then check the #233 and host write readings.
    4. SE, and then do the same for my Videos folder and Pictures folder.

    These folders hold my day-to-day working data, so they should be a good representation.
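    The bookkeeping for each step reduces to a delta ratio; a tiny sketch (the readings are made-up placeholders, and the V2's 64GB reporting granularity caveat should not apply to the V3):

```python
def step_compression(nand_before: int, nand_after: int,
                     host_before: int, host_after: int) -> float:
    """Per-step compression (including any WA): delta of NAND GiB written
    divided by delta of host GiB written across one copy step."""
    return (nand_after - nand_before) / (host_after - host_before)

# Hypothetical readings around the Documents-folder copy step:
print(f"{step_compression(128, 132, 500, 505):.0%}")  # -> 80%
```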

[Attachment: w7.png]

