
Thread: Sandforce Life Time Throttling

  1. #226
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Like johnw said, just store the files with no compression.

    Using my app to create the file should be doable as well; 0-Fill really is 0-Fill, and you can create the test file manually from the menu.
    (it will create the same test-file as if it was running the benchmark and it will be of the size you've selected)

    edit:
    Installing Windows and Office is the one to start with imho.
    Last edited by Anvil; 07-18-2011 at 03:44 AM.

  2. #227
    Timur
    Guest
    Choosing "Neither" in ATTO should be QD 1, at least that's how I understood it.

    The problem with ATTO really is that its data isn't just "highly" compressible, it's compressible nearly to non-existence. Compressing a 2 GB (!) ATTO testfile via BZip2 squeezes it down to 14 KB (!!) at around 320 MB/s (all r/w happening inside the RAM cache, 8 logical cores at 3.1 GHz). That is a ratio of 1 : 6.68e-6! Compressing the standard 256 MB ATTO testfile with the same settings results in a 2 KB file, which for all practical purposes is the same.
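
    For illustration, a minimal sketch (using Python's bz2 module as a stand-in for the BZip2 tool, with sizes picked to match the testfile above) shows the same collapse on 0-fill data, while random data barely compresses at all:

        import bz2, os

        SIZE = 256 * 1024 * 1024                 # 256 MB, like the standard ATTO testfile
        zeros = b"\x00" * SIZE                   # 0-fill, ATTO-style "data"
        noise = os.urandom(SIZE)                 # incompressible reference

        for name, buf in (("zero-fill", zeros), ("random", noise)):
            out = bz2.compress(buf)
            print(f"{name}: {len(buf)} -> {len(out)} bytes "
                  f"(ratio {len(out) / len(buf):.2e})")
        # zero-fill ends up in the KB range (a ratio of ~1e-5 or smaller);
        # random stays at ~100% of the original size.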

    So the 250-260 MB/s you are measuring with ATTO in the throttled state is again most likely just the limit of the 3G link combined with the limit of the compression engine.

    What I am trying to say is that ATTO is not suitable for judging the NAND wear or compression ratio of Sandforce drives, because at a compression ratio of close to 100% we don't really get any of that information out of it.

  3. #228
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1 View Post
    Here are the same folders compressed with 7z. Better compression, although the Pictures folder took around 6 minutes.

    Ok, here is what I will do unless someone has a better idea. Install Win 7 and Office and then check the #233 and host write readings.

    SE (secure erase)

    Copy my Documents folder and then check the #233 and host write readings

    SE, and then do the same for my Video folder and Pictures folder.

    These folders have my day to day working data, so they should be a good representation.
    It's a good start! However, to get an accurate measurement you would need to copy the directory several times until you hit around 100GiB of host writes. That gives you an error margin of less than 1%. Also, do the same with video/MP3 files; I am pretty sure there will be no compression there, and probably a WA of at least 1.1.

    What settings did you use for 7zip? Best results are achieved with LZMA2, a dictionary of at least 64/128MB, and a solid archive.
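
    The arithmetic behind that 100GiB figure, as a rough sketch (the SMART deltas below are made-up numbers; the point is that attributes #233/#241 tick in 1GiB steps, so the relative error shrinks with the amount written):

        # SMART attribute deltas in GiB; 1 GiB granularity => ~1 GiB error on each
        host_writes = 100          # delta of #241 after the repeated copies
        nand_writes = 85           # delta of #233 (hypothetical value)

        wa = nand_writes / host_writes       # write amplification estimate
        err = 1 / host_writes                # ~1% at 100 GiB, ~10% at 10 GiB
        print(f"WA ~ {wa:.2f}, granularity error ~ {err:.0%}")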
    Last edited by sergiu; 07-18-2011 at 03:52 AM.

  4. #229
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I tried running "Neither" as it looked to be QD 1, but it doesn't seem to result in more than about QD 0.5 afaics.

    Also, one needs to keep the de-duplication on the SF controller in mind.
    There is no doubt that the SF2 is "stronger" than the SF1; there are few signs of weakness wrt reads (at QD1), but writes are affected.
    I'm testing the WildFire now, and incompressible data is written at ~50% of full speed, full speed being 480-500MB/s at QD1.

  5. #230
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    480-500 MB/s write speed @ QD1 with incompressible data?
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  6. #231
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    No, full speed (480-500MB/s) is with highly compressible data; incompressible data is written at ~50% of full speed.

  7. #232
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    Oh I see, it was just the QD1 that confused me.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  8. #233
    Timur
    Guest
    According to the Windows Performance Monitor, the maximum QD is always 1 less than what you set in ATTO, so the QD 0.5 reading with "Neither" is likely connected to that. Whether this is a measurement flaw of the Performance/Resource Monitor or a flaw of ATTO I cannot say.

    The 500 MB/s maximum vs. 250 MB/s (50%) with incompressible data unfortunately again only tells us that the drive handles compressible data at a maximum rate of 500 MB/s, not that the data has been compressed to 50%.

    Consider the ~6.68e-6 ratio at 320 MB/s that I got with BZip2 on an i7 running 8 logical cores at 3.1 GHz. For the ATTO testfile to come out at a compression factor of as much as 10%, the Sandforce compression engine's ratio would have to be about 15,000 times worse than what BZip2 achieves on my i7 (0.1 / 6.68e-6 ≈ 15,000). That is quite a big difference just to push the speed from 320 MB/s to around 560 MB/s, which is not even double.
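
    As a back-of-the-envelope check of that figure (nothing more than the two ratios divided):

        bzip2_ratio = 14 * 1024 / (2 * 1024**3)   # 14 KB out of 2 GB, ~6.68e-6
        drive_ratio = 0.10                        # assumed 10% compression factor on the SSD
        print(drive_ratio / bzip2_ratio)          # ~15000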

    Other compression algorithms are slower and less efficient on that test file, though. And I absolutely admit that I don't have the slightest idea how fast a dedicated compression engine can be made on such a controller chip without burning the poor little thing. I don't even know whether "dedicated" compression logic exists or is possible, as opposed to just running compression firmware on a general-purpose processor. Anyone in the know about how Sandforce *really* accomplishes compression under the hood?
    Last edited by Timur; 07-18-2011 at 04:27 AM.

  9. #234
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    At 250MB/s the data is just stored (worst case, as in incompressible data).
    The question is at which compression ratio it switches from compressing to simply storing the data. There could be just a few algorithms in use, meaning a few thresholds where it takes a different direction and/or de-duplicates the data.

    I'm pretty confident that as long as the data is somewhere between, say, 60-67% and 100% incompressible, it is just stored.
    Trying to compress that sort of data would lead to slower writes. Well, we'll find out.
    (those figures look to be what the SF1 controller does)
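
    A purely speculative sketch of such a threshold (the real firmware logic is undocumented; zlib is only a placeholder codec): compress a chunk and keep the result only if it saves at least one flash page:

        import zlib

        PAGE = 4096  # assumed flash page size

        def pages(n: int) -> int:
            """Number of 4 KiB pages needed for n bytes (ceiling division)."""
            return -(-n // PAGE)

        def write_chunk(chunk: bytes) -> bytes:
            packed = zlib.compress(chunk)
            # Keep the compressed form only if it occupies fewer pages;
            # otherwise just store the raw data, as suspected above for
            # data that is ~60-100% incompressible.
            return packed if pages(len(packed)) < pages(len(chunk)) else chunk

        # 32 KiB of zeroes compresses far below one page, so it is kept packed:
        print(len(write_chunk(b"\x00" * 32768)))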

  10. #235
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    @Timur
    Consider that hardware compression has to meet strict requirements and the data must remain easily accessible. A very good software algorithm can compress 1 GiB of data to less than 1 KiB (if mathematically possible) because it has access to all of it at once. To do the same, a controller would need a large cache to store and compress the data, and that would lead to high latencies for pending writes while everything is compressed. Also, consider that you want to access 512 bytes starting at position 512000512. In that case the controller would need to unpack the complete file to find out what is at that position, which would be a complete waste of resources. Most probably, if the controller is hit with a high write load, it will cluster pages and compress them as a group. From one test done earlier in this thread it seems that it groups at least 8 pages of 4 KiB, and this is where the compression factor of 8-12% comes from.
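
    To make that page-granularity floor concrete, a sketch assuming 4 KiB pages and the 8-page grouping inferred above:

        PAGE = 4096
        GROUP = 8 * PAGE                  # 8 x 4 KiB pages clustered together

        def measurable_ratio(compressed_size: int) -> float:
            """NAND written for one group, as a fraction of the group size."""
            pages_written = max(1, -(-compressed_size // PAGE))
            return pages_written * PAGE / GROUP

        print(measurable_ratio(10))       # 0.125 -> 12.5%, even for 10 bytes
        print(measurable_ratio(4095))     # still 12.5%: one full page either way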

    @Anvil
    I would say that if it can save even one page, it will save it; so most probably it switches to just storing the data when the compressed result cannot fit into a smaller number of pages. I would bet it still compresses even if the result has a compression factor of 80-85%. The problem is that it needs to store some parity data (RAISE) in other pages, so I expect to see a WA equal to 1 in these scenarios.

  11. #236
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Timur View Post
    The 500 MB/s maximum vs. 250 MB/s (50%) with incompressible data unfortunately again only tells us that the drive handles compressible data at a maximum rate of 500 MB/s, not that the data has been compressed to 50%.
    Again, I will say that that cannot be true. The 480GB Vertex 3 is rated at only 450MB/s sequential write. It makes no sense for the "compression engine", as you call it, to be slower on the 480GB model than on the lower capacity models. Therefore, whatever is limiting the sequential write speed to 450MB/s (or whatever) is not the compression engine.

    But once again I will say, your point is moot. Ao1 already measured the zero-fill compression factor for a V2, and it was in the 8-12% range. Soon Ao1 should be able to tell us what it is for a V3.
    Last edited by johnw; 07-18-2011 at 01:15 PM.

  12. #237
    Timur
    Guest
    There could be a number of reasons for the lower rated specs, for example a lower controller clock rate to accommodate the larger amount of flash on the board (even if just to cut production cost by getting rid of a single capacitor).

    And Ao1 did *not* measure the zero-fill compression factor, but NAND wear over the course of 64 GB. NAND wear is used as an indication of the compression ratio, but it's not necessarily a foolproof one.

    @Ao1: Did you do the measurements on a new drive or on one that had already been filled before? If the latter, how do we know the SMART wear readings aren't just from garbage collection doing its work at some point during those last 64 GB (whether triggered by the 0-fill or not)?

  13. #238
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Timur View Post
    And Ao1 did *not* measure the zero-fill compression factor, but NAND wear over the course of 64 GB. NAND wear is used as an indication of the compression ratio, but it's not necessarily a foolproof one.
    That's absurd. Of course Ao1 measured the zero-fill compression ratio. It may not be a perfect measurement, but that is what he measured.

    I'm going to stop responding to you on this issue now, since you are beating a dead horse here.

  14. #239
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Timur View Post

    And Ao1 did *not* measure the zero-fill compression factor, but NAND wear over the course of 64 GB. NAND wear is used as an indication of the compression ratio, but it's not necessarily a foolproof one.

    @Ao1: Did you do the measurements on a new drive or on one that had already been filled before? If the latter, how do we know the SMART wear readings aren't just from garbage collection doing its work at some point during those last 64 GB (whether triggered by the 0-fill or not)?
    Due to its nature, the Sandforce compression factor cannot be measured below a threshold, which is whatever fits into a page. It does not matter whether it can compress 32KiB to 10 bytes or to 4095 bytes, because either way it uses a full page and has to write it to the flash cells. It cannot wait forever for more data that happens to match the available space. I am pretty sure it could theoretically compress several GB of zeroes to a few bytes, but it never knows what it is receiving next, so it must assume the worst possible scenario, and for this reason it will never try to pack more than a fixed amount of data at once.
    Regarding the Sandforce GC: it is supposed to erase blocks, which would not increase the write count. Also, a small but maybe useful piece of info: I was able to run CrystalDiskMark (with both compressible and incompressible test data) on an OCZ Vertex 3 240GB model last week. I looked at the SMART parameters, but because I set the test size to 100MB I did not expect to see any significant changes, so I did not note the exact initial values of #233 and #241. To my surprise, I saw something like a 30GB change for incompressible data and 5GB for zero-fill. The values might not be exact since I did not write them down; this is just from my poor memory. I still have access to the drive, but I cannot run any tests right now because it is hosting some VMs.

    I have saved the results for later comparison once the drive has seen some use:
    [Attachment: CristalDiskMark_zeroFill.png]
    [Attachment: CristalDiskMark_random.png]
    Last edited by sergiu; 07-18-2011 at 01:56 PM. Reason: Adding results

  15. #240
    Timur
    Guest
    Quote Originally Posted by sergiu View Post
    Regarding the Sandforce GC: it is supposed to erase blocks, which would not increase the write count.
    Erasing blocks *is* NAND wear, isn't it?

    And you can perfectly well compress several GB of zeroes, or even repeating pseudo-"random" data, without knowing what comes next; the new data just has to fit into what you've already got, and you need enough memory to hold that state before finally writing it out. Think of it like: 1 sheep, 2 sheep, 3 sheep, 4 sheep, 5 sheep... 102043434 sheep. It's still just X sheep.
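
    That is exactly how streaming compressors work. A minimal sketch with Python's incremental bz2 interface, which holds only its internal state in memory no matter how many GB of zeroes pass through:

        import bz2

        comp = bz2.BZ2Compressor()
        chunk = b"\x00" * (4 * 1024 * 1024)   # 4 MB of zeroes per iteration
        total_out = 0
        for _ in range(1024):                 # 4 GB in total, never all in RAM
            total_out += len(comp.compress(chunk))
        total_out += len(comp.flush())
        print(total_out)                      # a few KB for 4 GB of input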

    I'm really not trying to rain on anyone's parade here, but measurements don't get anywhere when an "interpretation" of uncertain information is taken for hard evidence. Just keep your mind open about what (mostly undocumented) complexities might be happening under the hood. I put some counter-arguments into the ring to inspire a better understanding.

    And originally I really just wondered how 0-fills can only be compressed down to 10%. Whatever the truth is, Sandforce is ultimately limited by its processing power and onboard RAM. Too bad there aren't any specs out there on these things, or are there?

  16. #241
    Timur
    Guest
    Quote Originally Posted by johnw View Post
    Maybe it has something to do with the data CDM is using (maybe it is easy for the V3 to compress it?).
    I just posted in a dedicated thread that CDM data is a lot easier to compress than AS SSD data (did not check IOmeter yet).

  17. #242
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Guys, those ATTO benches in #224: there are a number of test patterns you can select, but they are all highly compressible.
    Look at the read/write speeds with the random test pattern.

    Does a random test pattern mean a random selection of highly compressible patterns?

    If so, why the big drop in performance? Something is capping those write speeds. It's almost as if the randomness has caused the data to become incompressible.

  18. #243
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Timur, I agree there is a lot of second-guessing in the absence of hard facts. Tomorrow I will try to clear things up by using real data and carefully monitoring the SMART attributes. I will also be careful not to write anything in excess of the drive capacity without running a SE first.

    The only hard fact I have seen is a graph in an AnandTech review that showed host writes versus NAND writes for a Win/Office install.

    There are most likely a lot of variables, so it will be hard to get to the bottom of it, but we can try.

  19. #244
    Timur
    Guest
    Hey Ao1, you're doing good work here! Sorry for my overly questioning attitude.

    Regarding ATTO: when you do that I/O comparison test with the random pattern, the data in the testfile itself changes. It becomes less compressible (between 5% and over 90%, depending on compression settings)!

  20. #245
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    V3 - 60GB

    I ran a SE between each file copy exercise.

    [Attachment: Untitled.png]
    Last edited by Ao1; 07-19-2011 at 02:23 AM. Reason: added compression %

  21. #246
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I ran a quick comparison in ATTO. I'm not sure if the SF CPU is clipping the writes, or if it is due to the data becoming incompressible when random.

    Endurance app now up and running


    [Attachment: A.png]

  22. #247
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    The CDM screenshot was taken just after I started the endurance app. I kept screenshots for each of the xfers above. If anyone wants to see them, send me a PM.

    [Attachment: app start.png]
    [Attachment: Untitled.png]

  23. #248
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Great work there Ao1, and you got the drive just this morning?

    Did you run any of the standard benchmarks? Just wondering about the incompressible seq. write speed; it looks to be 76MiB/s on the Endurance test, so I'd guess 85-90MiB/s?

  24. #249
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Hi Anvil, yes I got it this morning, but I had all my ducks lined up to crank out the testing.

    I just stopped the endurance app to run AS SSD... (I'm on SATA 2, so sequential read speeds are clipped.)

    I suspect the ATTO random result is down to the SF controller.

    [Attachment: as-ssd-bench OCZ-VERTEX3 19.07.2011 10-58-13.png]

  25. #250
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I expect it's already in steady state, so the figures look OK; about the same seq. writes as a 34nm 60GB drive.

    Not sure about it being held back by the SATA 3Gb/s port on writes; it shouldn't matter much, as the throughput is far from stressing the 3Gb/s interface.
    (it does matter for drives like the C300, though, but they don't do compression)

    The seq. read speed is very nice compared to the Agility 3.

