
Thread: Sandforce Life Time Throttling

  1. #176
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1 View Post
    I suspect that for compression to be effective the xfer size needs to be 128K or above.
    In one of the tests where you wrote zeroes, you were able to use only 128GB for 1024GB of host writes. From this I assumed that, if the controller is hit in bursts, it will group 8 pages and compress them, with the best case being a single page. But because of the granularity of the counter, and other internal writes which we did not consider but are counted, this group size might be higher and not necessarily a multiple of 2. It would be interesting to find out how writes are grouped. For example, let's say random pages (with compressible content) are written and every 8 pages are compressed into one. Then a sequential read is issued for pages that were compressed into different clusters. In this case the controller might need to read one or more pages, decompress them to maybe 32KB or more, and retrieve only a single 4KB piece that was part of the sequential read.

    Also, some interesting stuff. Below is SMART data from a Corsair Force F240 that a friend bought second hand. Historically, the drive was used in a high end gaming machine for probably 6 to 9 months. What is most interesting is the "Retired Block Count" value. If it counts 4K pages, then the normalized value would probably reach 0 at a raw value of 32768, which corresponds to only 128MB, well below the spare area. On the other hand, if it counts 512KB erase blocks, then this is almost a full broken die (32Gbit), which would indicate that the controller can easily handle up to 4 broken flash dies. As a note, I personally believe drives with Sandforce controllers are the safest to buy second hand, even blindly: because of lifetime throttling we can be sure that a one year old drive with a 5 year warranty has at least 80% of its flash life left.
    Attached image: CorsairF240.jpg (SMART data from the Corsair Force F240)
    Last edited by sergiu; 07-02-2011 at 05:50 PM.

  2. #177
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    A 240GB Sandforce drive probably has 256GiB of flash on board. 16GiB of that is probably used for RAISE.
    So non-RAISE flash should be 240GiB.

    reserved flash: 240GiB - 240GB = 17.698GB

    retired flash: 8064 x 512 KiB = 4.228GB

    fraction of reserved flash that is retired: 4.228GB / 17.698GB = 0.2389

    If the normalized value goes from 100 down to 0, then we would expect

    100 x (1 - 0.2389) = 76.1

    Seems reasonable.
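
    For anyone who wants to redo that with other SMART values, here is a minimal sketch of the same arithmetic in Python. The 256GiB total / 16GiB RAISE / 512KiB block figures are the assumptions above, and the linear 100-to-0 scaling of the normalized value is also just an assumption:

        # Sketch of the retired-flash arithmetic, assuming the normalized
        # Retired Block Count scales linearly from 100 (nothing retired)
        # down to 0 (all reserved flash retired).
        GiB = 1024**3
        GB = 1000**3
        KiB = 1024

        total_flash = 256 * GiB      # assumed raw NAND on a 240GB SandForce drive
        raise_flash = 16 * GiB       # assumed to be set aside for RAISE
        user_capacity = 240 * GB     # advertised capacity

        reserved = (total_flash - raise_flash) - user_capacity   # non-RAISE flash minus user space
        retired = 8064 * 512 * KiB                               # raw SMART value x 512KiB blocks

        normalized = 100 * (1 - retired / reserved)
        print(round(reserved / GB, 3), round(retired / GB, 3), round(normalized, 1))
        # -> 17.698 4.228 76.1
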
    Last edited by johnw; 07-02-2011 at 07:12 PM.

  3. #178
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I'm going to get a V3 60GB. What I would like to determine is the max amount of writes per day that can be achieved without triggering life time throttling.

    Any suggestions as to how I could go about that? Whilst it took 30TB and 7 days to get the V2 into a throttled state, that duration included a grace period.

    I could assume a 2% grace period and use that up in one hit. I could then write say 20GB per day and see what happens.

    Any suggestions would be welcome.

  4. #179
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    It should not be difficult to determine. Basically, you need to determine the slope of the throttle line, after the initial credit is used up. So:

    1) Use up the initial credit by writing continuously until the speed slows down and settles at a constant, low value

    2) Let the SSD idle with the power on for 24 hours (be sure to disable sleep or hibernation on the computer, and check the SMART attribute to make sure power-on hours increased by 24)

    3) Start writing continuously again and see how much you can write before throttling kicks in again: you need two numbers, amount written before throttling, W, and the hours written until throttling kicked in, H

    Then you can compute how much you are permitted to write per day:

    W x 24 / (24 + H)
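
    A minimal sketch of that formula in Python, with hypothetical numbers purely for illustration:

        def daily_write_budget(written_gib, hours_writing, idle_hours=24):
            """Average GiB/day the drive tolerates without throttling.

            written_gib:   amount written at full speed before throttling kicked in (W)
            hours_writing: hours of writing until throttling kicked in (H)
            idle_hours:    idle period that preceded the test (24h in the procedure above)
            """
            return written_gib * 24 / (idle_hours + hours_writing)

        # Hypothetical numbers, just to show the shape of the calculation:
        print(daily_write_budget(written_gib=700, hours_writing=2))   # ~646 GiB/day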

  5. #180
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I suggest the first thing you do is install Windows 7 and check whether the raw NAND writes are 0.5 or 0.6 of the host writes.

  6. #181
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    Maybe leave it powered on for a month, then start. Or two weeks idle, then start; that should give you enough leeway to adjust the parameters once you get started. At least one would think...
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  7. #182
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    ^ I thought about that as well, but I believe the initial performance credit is based on PE cycles rather than power on hours.

    What John suggests sounds like it would work.

    Before I start to eat up PE cycles with Anvil's app, however, I will look at compression.

    Again I'm open to suggestions on how best to do this. To eliminate WA I will need to make sure that the drive is in a fresh state and that whatever I test will not put the drive into a steady state (i.e. writes will have to be less than the drive capacity to avoid WA being factored).

    For sure I can do a Win/ Office install.

  8. #183
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    We'll figure something out.

    I could probably make you a special build where you can set the MB/s to write, either per second, hour or day if that would make it easier.
    Once I get SMART working it will make monitoring events pretty easy, like raw nand writes vs host writes.

  9. #184
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by Ao1 View Post
    I'm going to get a V3 60GB. What I would like to determine is the max amount of writes per day that can be achieved without triggering life time throttling.

    Any suggestions as to how I could go about that? Whilst it took 30TB and 7 days to get the V2 into a throttled state, that duration included a grace period.

    I could assume a 2% grace period and use that up in one hit. I could then write say 20GB per day and see what happens.

    Any suggestions would be welcome.
    I like johnw's method, but I think it should be even easier if LTT actually does what it says it does.

    Burn up the initial credits (on the compression tests I guess?), then observe writes/day with LTT interfering. I would think the slope of the throttle line = performance when LTT is interfering.

    Let's say it limits you to 8MiB/sec when LTT kicks in; that means it's happy for the rest of its life to write at 8MiB/sec (8MiB/sec = 28.125GiB/hr = 675GiB/day). Writes/day with LTT = the maximum the drive allows itself to write. This should also equal the maximum allowed (on average) before LTT kicks in, so long as there is an idle time buffer (so if you wait 24hrs, you should be able to write 675+GiB at full speed before LTT kicks back in).

    If LTT kicks back in before the hypothetical 675GiB, then LTT's functioning isn't quite faithful to its intention.

    Ultimately, it seems like johnw's method is necessary to see max writes/day allowed, because it's not a guarantee that the throttle line = LTT performance. johnw's method will allow you to see actual max writes/day and also any discrepancy from deriving it from LTT performance.
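
    As a quick sketch, the unit conversion used above:

        # 8 MiB/s of throttled writes expressed as an hourly and daily budget.
        def throttled_rate_to_budget(mib_per_sec):
            gib_per_hr = mib_per_sec * 3600 / 1024
            return gib_per_hr, gib_per_hr * 24

        print(throttled_rate_to_budget(8))   # (28.125, 675.0) -> 28.125 GiB/hr, 675 GiB/day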

  10. #185
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    I like johnw's method, but I think it should be even easier if LTT actually does what it says it does.

    Burn up the initial credits (on the compression tests I guess?), then observe writes/day with LTT interfering. I would think the slope of the throttle line = performance when LTT is interfering.
    As you say, there are possible problems with this method, which is why I suggested the other.

    The main problem I see with the method you mentioned is that I suspect throttling is actually more of a sawtooth than a straight line. It is probably implemented to let the SSD write at full speed until it notices that the current state has dropped below the warranty line. At that point, I suspect that the throttle will kick in with a limit even lower than the slope of the warranty line. Once the state crosses back over the warranty line (as it must, since the throttled slope is less than the warranty line), then full speed is allowed again for a short time until the state crosses below the warranty line again.

    If I am right about the sawtooth, then the simple method of measuring the average speed in throttle may give an approximation to the slope of the warranty line, but depending on exactly how the sawtooth is implemented (how often does it check the state? how far will it let you go on either side of the line before it throttles or releases the throttle?) it could give a biased number.

    So, I think measuring the time and amount written until throttling, after 24 hours of idling, is probably the most accurate method. Of course, it is easy enough to do both methods, but if they do not give the same number, I know which method I will trust more.
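
    A toy model of the suspected sawtooth, purely illustrative: the rates, the hourly check interval and the lack of any hysteresis are made-up assumptions, not anything SandForce has documented.

        # Toy sawtooth model: write at full speed until cumulative writes cross the
        # "warranty line", then throttle at a rate below the line's slope until the
        # state is back above the line, then release the throttle again.
        def simulate(hours, full_speed, throttled_speed, line_slope):
            written = 0.0
            throttled = False
            history = []
            for h in range(hours):
                allowed = line_slope * (h + 1)           # warranty line at end of this hour
                rate = throttled_speed if throttled else full_speed
                written += rate
                throttled = written > allowed            # re-check once per hour (assumed)
                history.append((h + 1, round(written, 1), throttled))
            return history

        # Hypothetical rates in GiB/hour:
        for hour, total, is_throttled in simulate(12, full_speed=100, throttled_speed=5, line_slope=20):
            print(hour, total, "throttled" if is_throttled else "full speed")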

  11. #186
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by johnw View Post
    So, I think measuring the time and amount written until throttling, after 24 hours of idling, is probably the most accurate method. Of course, it is easy enough to do both methods, but if they do not give the same number, I know which method I will trust more.
    Agreed, your method is the accurate one, but I don't necessarily agree on the sawtooth.

    I was thinking of the opposite issue: if there's a flaw with "LTT performance = throttle slope", it's that LTT performance might be better than the throttle/lifetime slope, and that you would need more than 24 hours of idle recovery to write, at full speed, the equivalent of 24hrs of LTT writes.

    We saw with the Vertex 2 40GB that it was writing at ~6.25MiB/sec with LTT, which works out to ~11 P/E cycles a day (with a WA of just 1.00x, which we deduced is too optimistic for 100% incompressible data). A three year warranty at that rate would work out to ~12,000 P/E cycles, which is what I was getting at when I said "LTT functioning isn't quite faithful to its intention"... its intention is to drag the life of the drive out to the end of the warranty period, but 12,000 P/E cycles is 2.4x what the NAND is rated at and seems optimistic for a warranty.

    We'll know more as more testing gets done, but that's my theory on why LTT performance would not equal the throttle/lifetime slope... and if it does, then observing LTT performance is a nice and quick way to figure out the daily write budget.
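
    A sketch of that arithmetic; the ~48GiB of raw NAND in the Vertex 2 40GB and the WA of 1.00x are assumptions used here just to reproduce the figures quoted above:

        # "~6.25 MiB/s under LTT works out to ~11 P/E cycles a day" and
        # "~12,000 P/E cycles over a three year warranty".
        ltt_rate_mib_s = 6.25
        nand_gib = 48          # assumed raw NAND in the Vertex 2 40GB
        wa = 1.00              # assumed write amplification

        gib_per_day = ltt_rate_mib_s * 86400 / 1024        # ~527 GiB/day
        pe_per_day = gib_per_day * wa / nand_gib           # ~11 P/E cycles/day
        pe_over_warranty = pe_per_day * 365 * 3            # ~12,000 P/E cycles

        print(round(gib_per_day), round(pe_per_day, 1), round(pe_over_warranty))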

  12. #187
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    We saw with the Vertex 2 40GB that it was writing at ~6.25MiB/sec with LTT, which works out to ~11 P/E cycles a day (with a WA of just 1.00x, which we deduced is too optimistic for 100% incompressible data). A three year warranty at that rate would work out to ~12,000 P/E cycles, which is what I was getting at when I said "LTT functioning isn't quite faithful to its intention"... its intention is to drag the life of the drive out to the end of the warranty period, but 12,000 P/E cycles is 2.4x what the NAND is rated at and seems optimistic for a warranty.
    How did you determine that the SSD in question was set for a 3 year warranty and not a 1 year warranty?

    By the way, I'm not sure what you are disagreeing with about the sawtooth implementation of throttling. I did NOT say that the average write speed would be less (or more) than the warranty slope. I guess you were confused by the statement that the slope of the throttled part of the sawtooth is less than the warranty line slope. That is self-evidently true. But that proves nothing about the average write speed of the SSD, since it depends on the slope of both portions of the sawtooth, as well as the length of each edge of the teeth.
    Last edited by johnw; 07-16-2011 at 02:11 PM.

  13. #188
    Xtreme Member
    Join Date
    Aug 2008
    Location
    SF bay area, CA
    Posts
    262
    Quote Originally Posted by johnw View Post
    ...
    The main problem I see with the method you mentioned is that I suspect throttling is actually more of a sawtooth than a straight line. It is probably implemented to let the SSD write at full speed until it notices that the current state has dropped below the warranty line.
    ...
    If you get fine granularity with your monitoring, it is indeed slightly sawtooth but won't actually let you pass below the lifetime.
    "Red Dwarf", SFF gaming PC
    Winner of the ASUS Xtreme Design Competition
    Sponsors...ASUS, Swiftech, Intel, Samsung, G.Skill, Antec, Razer
    Hardware..[Maximus III GENE, Core i7-860 @ 4.1Ghz, 4GB DDR3-2200, HD5870, 256GB SSD]
    Water.......[Apogee XT CPU, MCW60-R2 GPU, 2x 240mm radiators, MCP350 pump]

  14. #189
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by zads View Post
    If you get fine granularity with your monitoring, it is indeed slightly sawtooth but won't actually let you pass below the lifetime.
    How much above the "lifetime" line does the current state have to be for the throttling portion of the sawtooth to end and the full-speed portion of the sawtooth to be permitted? In other words, after you have run full-speed into the "lifetime" line, and the throttling kicks in, how long does the throttling continue until it checks again and finds that you are now above the "lifetime" line?

  15. #190
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    The more I think about it the more I think that the implementation of LTT is flawed.

    In theory LTT is supposed to prevent PE depletion below the life curve, but it did not appear to do that in the endurance test. It kicked in too late and it did not then restrict writes enough to protect the life curve.

    Assume a default initial credit period of 2% of a 3 year warranty = ~525 power-on hours / ~22 days. The 2% credit is based on PE cycles that don't count towards the life curve.

    In the endurance test I was able to incur 28,352GB of writes. Approx 4,096GB of those writes were 0fill. Let's assume that equates to 25,000GB of incompressible data over 7 days or 168 hours (roughly 150GB per power-on hour).

    I was able to use 2% PE credits and 9% of the PE life curve in 168 power on hours.

    Any way you look at it, LTT should have prevented 9% depletion of PE cycles in 168 hours.

    I also believe write speeds should have been reduced to 3MB/s (or less) to prevent PE depletion, based on the load that was being applied.

    It will be interesting to see what happens with the V3. The V2 had a life curve based on 5,000 PE cycles. The V3 will have a life curve based on 3,000 PE cycles. Unless the V3 can better compress data or reduce WA LTT should kick in quicker.
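
    The grace period and write rate figures above, as a quick sketch (the 2% and the 25,000GB estimate are the same rough numbers as in the post, not measured constants):

        # 2% of a 3 year warranty expressed in power-on hours and days,
        # and the average host write rate during the endurance test.
        warranty_hours = 3 * 365 * 24          # 26,280 power-on hours
        credit_hours = warranty_hours * 0.02   # ~525 hours
        print(round(credit_hours), round(credit_hours / 24))   # -> ~526 hours, ~22 days

        incompressible_gb = 25000              # assumed incompressible writes in the test
        test_hours = 168
        print(round(incompressible_gb / test_hours))           # ~149 GB per power-on hour
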
    Last edited by Ao1; 07-17-2011 at 04:21 AM. Reason: typos

  16. #191
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1 View Post
    I was able to use 2% PE credits and 9% of the PE life curve in 168 power on hours.

    Any way you look at it, LTT should have prevented 9% depletion of PE cycles in 168 hours.

    I also believe write speeds should have been reduced to 3MB/s (or less) to prevent PE depletion, based on the load that was being applied.
    I suspect OCZ chose to release the drive with many more credits, like 8%. It would make sense to have different configurations based on capacity, especially to ensure that some sustained short-term testing does not trigger the LTT. Also, I believe the 3MB/s limit would probably have been reached if the drive had been stressed more. How much more is another question; I would assume at least a week or two.

  17. #192
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I thought about that. There are a couple of options I guess. If NAND PE cycles are based on minimum specs you could "front load" an assumption of additional life without reducing the spec'd PE count for the life cycle curve.

    The other (more likely) option would be to use 10% PE cycles for the credit period and then spread the remaining 90% over the life time curve. Use 20% and then spread 80% over the life time curve etc. [EDIT. i.e. On one hand you increase the period in which you will not be throttled, but on the other hand you increase the chances of being throttled further down the line]

    In the endurance thread I got to 11.6TB before the MWI dropped to 99%.

    Out of that 11.6TB, approx 4TB was 0fill. If 0fill causes around 10% of normal NAND wear, that would work out to 8TB of effective writes.

    8TB = Credit plus 1% MWI.
    If 1% = approx 2TB then the credit was for 6TB.

    It certainly looks like OCZ did not use the default 2% and went much higher.

    Whatever the credit period was, however, once it expired it should not have let me go from 100% to 91% within 5 days. At that stage the PE cycles must have been below the life time curve.
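
    A sketch of that estimate; the 10% effective wear for 0fill and the ~2TB per 1% of MWI are the rough figures from the posts above, not exact values:

        # Host writes until MWI dropped to 99%, adjusted for the 0fill portion,
        # then split into the 1% MWI step and the implied credit.
        total_writes_tb = 11.6       # host writes until MWI hit 99%
        zero_fill_tb = 4.0           # portion that was 0fill
        zero_fill_wear = 0.10        # assumed effective wear of 0fill data

        effective_tb = (total_writes_tb - zero_fill_tb) + zero_fill_tb * zero_fill_wear
        tb_per_mwi_percent = 2.0     # assumed NAND writes per 1% of MWI
        credit_tb = effective_tb - tb_per_mwi_percent
        print(round(effective_tb, 1), round(credit_tb, 1))   # -> 8.0 6.0
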
    Last edited by Ao1; 07-17-2011 at 08:49 AM.

  18. #193
    Timur
    Guest
    10% NAND wear for 0fill seems a bit high?! How do you get to that number?

    Good compression squeezes the 256MB (or bigger) ATTO test file down to less than 0.5KB; even the non-stellar NTFS compression puts the whole test file into a single cluster (= less than 4KB = less than 0.002%). I expect the Sandforce compression to be better than NTFS.

  19. #194
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    ^ An educated guess based on posts #91 & #141, which looked at the difference between host and NAND SMART attributes. Admittedly a rough calculation, as SMART is only updated every 64GB on the V2 drives.

    When I get the V3 I can look into compression with better refinement as SMART updates more frequently. (1GB intervals I believe).

  20. #195
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1 View Post

    In the endurance thread I got to 11.6TB before the MWI dropped to 99%.

    Out of that 11.6TB, approx 4TB was 0fill. If 0fill causes around 10% of normal NAND wear, that would work out to 8TB of effective writes.

    8TB = Credit plus 1% MWI.
    If 1% = approx 2TB then the credit was for 6TB.

    It certainly looks like OCZ did not use the default 2% and went much higher.

    Whatever the credit period was, however, once it expired it should not have let me go from 100% to 91% within 5 days. At that stage the PE cycles must have been below the life time curve.
    The anomaly in MWI counting was strange from the beginning. What I assumed was the following scenario: if there is an initial 10% allowed to be written, the controller will behave normally until the flash cells reach that level of wear. That would mean the wear level decreases as in normal usage. However, once that level is reached, the guard named "lifetime throttle" wakes up and enforces its rules if needed. This however does not explain the missing 6TB from the MWI counting.

  21. #196
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Timur View Post
    I expect the Sandforce compression to be better than NTFS.
    I expect Sandforce compression to be much worse than "NTFS". Sandforce has only a fraction of the power to use for compression that a modern CPU has, and Sandforce must do the compression at a rate of 270MB/s or greater.

  22. #197
    Timur
    Guest
    Don't forget that NTFS compression is from a time when CPUs were less powerful, and it is set to only use a fraction of the available CPU load. On the other hand, the 270MB/s argument is very valid. Still, for data like the ATTO test file, where even the size of the source file doesn't matter for the compressed destination file, it shouldn't matter much. 1E10 x 0 is just that; there isn't much to calculate about it.

    One advantage that NTFS compression has over Sandforce is that it may begin at a lower compression (or even non-compression) stage when you originally write a file and then compress the file afterwards (when it's already on the disc). Unfortunately that doesn't help its bad throughput performance, neither for reads nor writes (I originally planned to use NTFS compression for my Crucial SSD). On the other hand, Sandforce has dedicated compression logic, so I don't see why it should not be able to match or beat the level of NTFS compression at high speed.

    I wonder how big Sandforce's internal buffers for compression are?! Since it doesn't make use of cache memory, there have to be at least some registers to deal with that. This could be more of a contributing factor to its effectiveness (little memory = small word and dictionary sizes).

    I'm no compression expert though, so I may get it all wrong. But the fact that ATTO tests on current Sandforce drives seem to run close to the practical limits of the SATA 6G connection implies that the test data has been compressed down close to zero.
    Last edited by Timur; 07-17-2011 at 02:24 PM.

  23. #198
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Timur View Post
    But the fact that ATTO tests on current Sandforce drives seem to run close to the practical limits of the SATA 6G connection implies that the test data has been compressed down close to zero.
    This statement is either wrong, or so ambiguous as to be meaningless. The fastest writes with the Sandforce 2XXX controllers are a little over 500MB/s. If we assume that the SSDs can write incompressible data at 130MB/s, then that would imply that the compression factor for a 500MB/s write is at least 3.8. Maybe you could even assume about a 6x compression factor for a 60GB SSD that writes highly compressible data at 500MB/s (since a 60GB SSD can probably only write incompressible data at 90MB/s). So, unless you consider 1/6 compression "close to zero", then no, it does not imply data compression to "close to zero".
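
    The implied compression factors, as a quick sketch (the 130MB/s and 90MB/s incompressible write speeds are the assumptions stated above):

        # Ratio of compressible to incompressible write speed gives a lower bound
        # on the compression factor needed to hit the observed speed.
        def implied_compression_factor(compressible_mb_s, incompressible_mb_s):
            return compressible_mb_s / incompressible_mb_s

        print(round(implied_compression_factor(500, 130), 1))   # ~3.8x
        print(round(implied_compression_factor(500, 90), 1))    # ~5.6x for a 60GB drive
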
    Last edited by johnw; 07-17-2011 at 02:46 PM.

  24. #199
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    *off topic*
    There are whispers of user-configurable LTT firmware coming very soon.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  25. #200
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by Computurd View Post
    *off topic*
    there are whispers of user configurable LTT firmware coming very soon.
    That's very much on topic of "Sandforce Life Time Throttling", IMO

    I wouldn't mind seeing a DIP switch on the PCB, accessible only by opening it up and voiding the warranty... but a user-configurable firmware flash is very interesting as well. Personally, I wouldn't do it; I don't use my drives nearly enough to trigger LTT, but it's still interesting to have the option.

