Page 1 of 24
Results 1 to 25 of 598

Thread: Sandforce Life Time Throttling

  1. #1
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597

    Sandforce Life Time Throttling/ Compression and RAISE

    Performance throttling SF1200 drives

    Performance throttling ensures that sufficient PE cycles are available for the duration of the warranty and is based on a PE life curve over time.

    Time is based on "power on hours", so power on hours without write activity allow credits to accrue.

    There are four choices that a SF vendor can choose from for the warranty duration:

     No throttling
     1 year warranty
     3 year warranty
     5 year warranty

    If no throttling is selected the controller will not ensure that the PE cycles last for a predefined duration. (From testing carried out to date in the endurance thread only OCZ have elected to implement LTT. Drives tested from Corsair and Mushkin in the endurance thread have not had LTT).

    When a drive is brand new the controller is configured to allow "credits". Credits allow PE wear without inducing throttling. The credit duration can also be configured. As standard it is 2% of the drive's warranty period. During this period the drive will not be performance throttled regardless of the warranty duration.

    Once those credits have expired throttling is implemented if PE cycles exceed the warranty duration.

    Throttling will never allow PE cycles to fall below the life curve. This can result in a significant slowdown in write speeds. Once this has occurred the only way to recover the drive is to leave it to idle until the PE cycles are aligned back with the life curve.

    Throttling is not deactivated at the end of the defined warranty period.

    There is no information on workloads that can induce throttling, however it is important to note that for typical Client workloads it is unlikely that the drive will be subject to throttling.

    Wear is based on PE cycles and is not related to host writes. If highly compressible data is written it has little impact on PE cycles. This enables a fully throttled drive to still achieve peak performance if highly compressible data is written.

    The recovery period (idle time) can be quite short before throttling is released. If significant write activity occurs again however the drive will quickly revert back to a throttled state.

    For non compressible data the impact of throttling resulted in write speeds being capped at ~6MB/s. For 0-fill data, write speeds are not impacted.

    In addition to life time throttling there is also a throttling mechanism that throttles burst activity. So far this has not been investigated in this thread.

    The best estimate for the amount of writes that can be incurred without LTT becoming an issue: (Assuming the SSD is powered up 24/7)

    • 0.60TiB per day with 1 year throttle/ 25.6GiB per power on hour/ 7.28MiB/s
    • 0.2TiB per day with 3 year throttle/ 8.53GiB per power on hour/ 2.42MiB/s
    • 0.12TiB per day with 5 year throttle/ 5.12GiB per power on hour/ 1.45MiB/s
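    The per-hour and per-second figures above are just unit conversions of the per-day budgets. A quick sketch (the TiB/day values are taken from the list; everything else is arithmetic):

```python
# Convert the per-day write budgets into per-hour and per-second figures.
# The TiB/day numbers are the estimates quoted above; the rest is unit math.
budgets_tib_per_day = {"1 year": 0.60, "3 year": 0.20, "5 year": 0.12}

for warranty, tib_day in budgets_tib_per_day.items():
    gib_per_hour = tib_day * 1024 / 24          # TiB/day -> GiB per power-on hour
    mib_per_sec = gib_per_hour * 1024 / 3600    # GiB/hour -> MiB/s
    print(f"{warranty}: {gib_per_hour:.2f} GiB/h, {mib_per_sec:.2f} MiB/s")
```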

    Power throttling

    Similar in operation to performance throttling, but here power consumption is monitored, based on flash program, erase or read operation activity. Variable settings can be configured by the vendor, from no power throttling to frugal power throttling.

    SMART attributes #233 (E9) and #242 (F1)

    Attribute #233 records the write to NAND.
    Attribute #242 records host writes.

    Dividing #233 by #242 will show the compressibility and write amplification factors that the controller has achieved against writes incurred by the host.

    For SF1 drives these attributes are only updated every 64GB.
    For SF2 drives these attributes are updated every GB.

    The lower the ratio the higher the compression factor = less writes to NAND and faster write speeds.

    Uncompressible 4K random writes induce significantly more writes to NAND than writes incurred by the host system. (Amplification factor of 3.3)
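    As a sketch of the ratio calculation described above (the function name and sample values are illustrative only, not readings from a real drive):

```python
# Ratio of NAND writes (#233 / E9) to host writes (#242 / F1).
# A ratio below 1 means compression is saving NAND writes; a ratio
# above 1 means write amplification.
def nand_to_host_ratio(nand_writes_gb, host_writes_gb):
    return nand_writes_gb / host_writes_gb

# Highly compressible workload: fewer NAND writes than host writes.
print(nand_to_host_ratio(20000, 30000))   # ratio < 1, compression winning

# Incompressible 4K random workload: amplification of ~3.3 as noted above.
print(nand_to_host_ratio(33000, 10000))
```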

    [Attached image: comp.png]

    TRIM
    It would appear that without TRIM more writes to NAND are incurred.

    It would appear that write performance is enhanced without TRIM until the drive becomes degraded. Once the drive is degraded write speeds drop to around 50% below what could be expected if TRIM was activated. This "impact" is however highly dependent on workload, so YMMV.

    Performance throttling SF2xxx drives

    General note: SMART readings are taken at a point in time and not necessarily when an attribute value changed. Consequently variations between readings can be expected. Readings can however be averaged out.

    E6 raw value

    The Vertex 3 has a new SMART attribute - E6. The raw 1-byte value appears to indicate a reduction in the life curve count, ranging from 120 to 0. During the endurance test the reduction in the life curve attribute was linear against the writes being incurred.

    The average amount of writes to generate a drop in the E6 value = 261GB.

    This is based on the workload from the endurance app. A more onerous workload would reduce this figure and a less onerous work load would increase it.

    The total amount of data that can be written between 120 & 0 values = 34TB
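    A quick check of that projection (one assumption on my part: the thread mixes GB and GiB, and treating the 261 figure as GiB is what reproduces the quoted ~34TB):

```python
# Project total writes over the full E6 range, assuming the average
# interval per one-step drop holds. Treating 261 as GiB (assumption)
# lands near the 34TB figure quoted above.
gib_per_e6_tick = 261          # average writes per one-step drop in E6
ticks = 120                    # E6 raw value counts down from 120 to 0

total_bytes = ticks * gib_per_e6_tick * 2**30
print(f"{total_bytes / 1e12:.1f} TB over the full E6 range")
```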

    E7 Life left (MWI)

    It would appear that E7 - Life Left is directly linked to E6. If the PE cycles recorded by E6 exceed what the life curve allows for maintaining PE cycles over the warranty duration, the drive will be throttled until E7 can recalibrate.

    The MWI dropped to 99% at 11,078GB. Averaging out subsequent drops in the MWI vs writes to establish a theoretical figure for 100% resulted in a figure of 8,592GB. It would appear that 8,592GB was therefore the amount of data that could be written within the credit period.

    If this figure is excluded the MWI projection for write capacity = 283TB

    The ratio difference in projections between E6 & E7 = 8.32.

    It would appear that the write load is 8.32 times higher than the life time curve can accommodate to maintain the warranty duration.
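    The 8.32 figure is just the quotient of the two projections quoted earlier in this post:

```python
# Both projections are quoted earlier in this post.
e6_projection_tb = 34     # writes over the full E6 life-curve range
e7_projection_tb = 283    # MWI-based projection, credit period excluded

ratio = e7_projection_tb / e6_projection_tb
print(f"{ratio:.2f}x")    # ~8.32x
```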

    [Attached image: 1.png]
    [Attached image: 2.png]

    Compression

    The V3 update frequency for SMART attributes E9 & F1 is 1GB. This allows compression factors to be better calculated.

    [Attached image: Untitled.png]

    Write speeds (and to a lesser extent read speeds) are dependent on how compressible the data is. Write speeds were determined using Anvil's Benchmark App. Interestingly the V3 was slower at sequential writes than an X25-M if the data was not highly compressible, but for 4K it was faster regardless of the compression factor.

    [Attached image: Untitled.png]

    Depending on the compressibility factor of the data a SF drive can be faster or slower than drives that don't use compression.

    [Attached image: Untitled.png]

    Testing by Vapor

    [Attached image: Untitled.png]

    RAISE

    RAISE - Redundant Array of Independent Silicon Elements - writes data across multiple flash die to enable recovery from a failure in a sector, page or entire block, just like the concept of multi-drive RAID used in disk-based storage, but RAISE only requires a single drive.

    SSDs are built using flash die that are assembled up to 8 die per package. For optimum capacity the SSD can be assembled with up to 16 packages. That puts 128 individual die in one SSD. If the failure rate (unrecoverable read error) of one MLC die is conservatively 1,000 PPM (a failure probability of 0.1%) then using the probability formula for 128 devices the failure rate increases to 12.0% over the life of the SSD.

    Using RAISE technology in a SandForce Driven SSD reduces the probability of a single unrecoverable read error by 100 times to 0.001%. Applying that same formula, the failure rate of the SSD drops from 12.0% to a mere 0.13%, nearly 100 times lower.
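    The quoted failure rates follow from the standard at-least-one-failure probability formula. A quick sketch (the 0.1% per-die rate, the 128-die count and the ~100x reduction are the figures from the text above):

```python
# Per-die unrecoverable read error probability (conservative figure
# from the text) and die count for a fully populated SSD.
p_die = 0.001
n_dies = 128

# Probability that at least one of the 128 dies fails.
p_ssd = 1 - (1 - p_die) ** n_dies
print(f"Without RAISE: {p_ssd:.1%}")            # ~12%

# RAISE reduces the effective per-die error probability ~100x.
p_die_raise = p_die / 100
p_ssd_raise = 1 - (1 - p_die_raise) ** n_dies
print(f"With RAISE: {p_ssd_raise:.2%}")         # ~0.13%
```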

    SF2xxx drives now have an option to disable RAISE when the drive is configured at the factory.

    OCZ V3 60GB drives have disabled RAISE.
    OCZ V3 120GB & 240GB drives have RAISE enabled.
    All of OCZ V2 drives have RAISE enabled.
    Last edited by Ao1; 10-04-2011 at 05:56 AM. Reason: Revised to provide thread summary

  2. #2
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    (The V2 was previously induced into a throttled state before this thread started. See the endurance thread for details).

    The V2 has been left to idle for a week. I formatted the drive and ran a 16GB test file. Performance seems OK. I have not SE'd the drive, only formatted it so I can test on a drive with no other data on it.

    Anvil's Endurance app is now running as below.

    [Attached image: Anvil.png]
    [Attached image: smart.png]
    [Attached image: Start.png]
    Last edited by Ao1; 06-30-2011 at 11:43 AM.

  3. #3
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    [Attached image: Anvil 1.png]

  4. #4
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    [Attached image: 1.83.png]

  5. #5
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Could you post SMART parameters taken using another tool, like HDDSentinel? I am using this tool and I see two more, 233 and 234. Value for 234 is always equal with 241 while value for 233 is lower and is increasing slower if I write compressible data. I speculate that value for 233 is indicating real flash writes.
    Last edited by sergiu; 06-25-2011 at 07:26 AM.

  6. #6
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    [Attached image: 2.2.png]

  7. #7
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I'm still running Anvil's apps so these are a few minutes different.

    [Attached image: sent.png]
    [Attached image: sent 1.png]
    Last edited by Ao1; 06-25-2011 at 07:44 AM.

  8. #8
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    2.5 hours in now. Previously the drive returned to a fully throttled state after ~380GB. The drive has had much longer to idle this time, so let's see how much longer it lasts.

    [Attached image: 2.5.png]

  9. #9
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    • After the drive reached a throttled state I ran tests with compressible and non compressible data. Non compressible data was not throttled. Compressible data for both reads and writes was throttled.
    Did you write that incorrectly? I thought the incompressible data was limited to 7MB/s write speed, but the compressible data (i.e., a stream of zeros) could still "write" quickly since it does not actually write much to the flash memory.

  10. #10
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Oops, sorry about that. Will change 1st post. Highly compressible data (0-fill) is not impacted, with reads or writes. I'll link the posts from the other thread.

    Still no change

    [Attached image: 3.26.png]
    Last edited by Ao1; 06-25-2011 at 08:42 AM.

  11. #11
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post

    • Throttling only became apparent after 7 days of continually running Anvil's app.

    ...

    My V2 has been idling for some time, so I would expect throttling has been released to some degree.
    It is my guess that the amount that will need to be written to any particular Sandforce SSD before lifetime throttling kicks in will depend on both the amount of power-on time the SSD has had and on the total writes to the SSD before beginning the "current" test.

    So, if someone were to try to repeat Ao1's results, I predict that the quickest way to get lifetime throttling to kick in is to start with a brand new drive (minimal power on hours) and run Anvil's write app continuously. In contrast, if someone were to try the test on a Sandforce drive that has been powered on for several thousand hours, but has seen very light usage (only a few hundred GBs of writes, say), then I predict that lifetime throttling will take a lot longer to kick in when running Anvil's app on the SSD.

    EDIT: In addition to the above, I think the amount of time before lifetime throttling begins should depend on the capacity of the SSD, the write speed, and the model (V2 or V3, etc.). Lifetime throttling should begin after a lower amount of total writes on a smaller SSD. So, to get to a lifetime throttled state as quickly as possible, one would need to buy a brand new SSD, the smallest capacity available, and run Anvil's program continuously.

    One interesting test would be the difference between say, a new 60GB Vertex 2 and a new 60GB Agility 3. How long running Anvil's program until lifetime throttling begins? I cannot predict the answer, since the two SSD's probably have different write speeds, and definitely have different firmware.
    Last edited by johnw; 06-25-2011 at 09:05 AM.

  12. #12
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    4.21 hours and things are starting to slow down. I've got to go out soon for around 5 hours, so I'll have to leave it running.

    [Attached image: 4.21.png]

  13. #13
    Xtreme Infrastructure Eng
    Join Date
    Feb 2004
    Posts
    1,184
    What will you be using as the alternative method for issuing a secure erase? Please consider using hdparm with a linux live CD such as System Rescue CD - https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase. I have always used this method and have had good results.

    Edit: I can't comment on this method's effectiveness of resetting SF throttling, however.
    Last edited by Gogeta; 06-25-2011 at 11:43 AM.

  14. #14
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Ao1

    Great to see a new thread for this subject

    How come you are running with 6-7GB free space?
    (I just noticed that free space was a bit low in a few of the screen-shots)

    @johnw
    I'm running most of my SF drives in raid-0 and so each drive doesn't get that much writes, not a problem vs other drives, so far.
    Still, the ghost of restricted writes is annoying, particularly when one knows that the drives need to be powered on all the time just to stay on the safe side.

  15. #15
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    407
    Quote Originally Posted by johnw
    Lifetime throttling should begin after a lower amount of total writes on a smaller SSD.
    We don't know how complex the DuraClass write detection algorithms are. 7 MB/sec of continuous writing @ 31,536,000 seconds per year results in 210.5 TB per year or 631.5 TB per 3 year period, which greatly exceeds even a 5000 p/e cycle rating on a 40 GB drive. It should work out just about right for a 120 GB drive though and for a 240 GB drive it would be overkill. Maybe they decided to use the same algorithm for all drive sizes and base it on a 120 GB size. OTOH, maybe the algorithm does allow for varying drive sizes and always kicks in at around 30 TB of continuous writing to a new drive, but only drops to 20 MB/sec on a 120 GB drive and 40 MB/sec on a 240 GB drive. Or maybe you get a 7 day grace period no matter what and the larger, faster drives will just write a lot more data in that 7 days. And of course each drive vendor may have a different DuraClass setting in the firmware. I would be very surprised for instance if OWC did not have a more aggressive life throttle with their 5 year warranties. That would make for an interesting test actually.

  16. #16
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by gojirasan View Post
    We don't know how complex the DuraClass write detection algorithms are. 7 MB/sec of continuous writing @ 31,536,000 seconds per year results in 210.5 TB per year or 631.5 TB per 3 year period, which greatly exceeds even a 5000 p/e cycle rating on a 40 GB drive.
    I think I mentioned this in the other monster thread, but I think that actually makes sense. I calculate about 15,000 erase cycles, over 3 years and 40GiB of flash, at 7MB/s.

    The lifetime throttling algorithm, lacking a battery-powered clock to tell time, may just make an assumption like 8 hours of power-on time = 24 hours of calendar time. There is your factor of 3 to get from 15,000 to 5,000 erase cycles.

    That is just a guess on my part, obviously. But I think it is plausible.
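    johnw's estimate can be reproduced in a few lines (the 8-hours-per-day duty factor is his guess, flagged as such above, not confirmed firmware behaviour):

```python
# 7 MB/s sustained for 3 calendar years, spread over 40 GiB of flash.
write_rate = 7e6                       # bytes/s (throttled write speed)
seconds_3yr = 3 * 365 * 24 * 3600
flash_bytes = 40 * 2**30

cycles = write_rate * seconds_3yr / flash_bytes
print(f"Erase cycles over 3 calendar years: {cycles:,.0f}")   # ~15,000

# If the firmware assumes 8 power-on hours per 24h day, calendar
# time is 3x power-on time, bringing this down to ~5,000 cycles.
print(f"With the 8h/24h assumption: {cycles / 3:,.0f}")
```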
    Last edited by johnw; 06-25-2011 at 03:14 PM.

  17. #17
    Xtreme Addict
    Join Date
    May 2006
    Posts
    1,315
    So is it true that Mushkin's SF2xxx Chronos drives use truly throttle-free firmware?

  18. #18
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Brahmzy View Post
    So is it true that Mushkin's SF2xxx Chronos drives use truly throttle-free firmware?
    Probably not. This was addressed in the other thread. They were referring to the IOPS throttle (none), not the lifetime throttle.

  19. #19
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Interesting value for SMART 233. If the value is indeed indicating flash writes, then it would mean a write amplification much higher than 1 for non compressible data, probably around 1.2-1.3. Could you post some more values as your test progresses? I would be interested to see how this SMART parameter evolves in time.

  20. #20
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by sergiu View Post
    Interesting value for SMART 233. If the value is indeed indicating flash writes, then it would mean a write amplification much higher than 1 for non compressible data, probably around 1.2-1.3. Could you post some more values as your test progresses? I would be interested to see how this SMART parameter evolves in time.
    FWIW, 8080 divided by 7C40 is 1.034x

    8080 = 32896 GB, 7C40 = 31808 GB
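    The hex-to-decimal conversion can be checked directly (these are just the raw values quoted above, interpreted as GB):

```python
# Raw SMART values from the post above, converted from hex to decimal GB.
nand = int("8080", 16)   # NAND writes attribute raw value
host = int("7C40", 16)   # host writes attribute raw value

print(nand, host)                 # 32896 31808
print(f"{nand / host:.3f}x")      # 1.034x
```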

  21. #21
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Brahmzy View Post
    So is it true that Mushkin's SF2xxx Chronos drives use truly throttle-free firmware?
    This is what Mushkin have stated:
    "DuraClass management functionality is still active. “Unthrottled” in this context refers to write IOPS bursting up to 90,000+ but being governed down to 20,000 after a few seconds which is typical behavior with standard firmware with SF-2281. The firmware we have on the Chronos and Chronos Deluxe drives will not have that governor activated."

  22. #22
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Gogeta View Post
    What will you be using as the alternative method for issuing a secure erase?
    I have the nuke option, which I will leave to last if nothing else works.

    By definition a SE should be a SE, however. I can't really see why there would be any difference between HDDErase, the OCZ Toolbox or any other method, but we will soon see.

    I'm willing to throw the kitchen sink at it if necessary, so I may try your suggestion if nothing else works.

  23. #23
    Xtreme Member
    Join Date
    Aug 2008
    Location
    SF bay area, CA
    Posts
    262
    Quote Originally Posted by gojirasan View Post
    We don't know how complex the DuraClass write detection algorithms are. 7 MB/sec of continuous writing @ 31,536,000 seconds per year results in 210.5 TB per year or 631.5 TB per 3 year period, which greatly exceeds even a 5000 p/e cycle rating on a 40 GB drive
    ...


    (210,500 GB/year) / (40 GB drive) = 5,262.5 cycles per year on a 40GB drive... If the flash in the drive in question is 5,000 cycle flash..
    Who wants to guess how many years the write throttle setting is at?
    Nobody says the warranty period on the box has to match the 'warranty throttle period' of the drive firmware. It's all basically detectable from a linear slope of power-on hours of the drive versus Media Wear Indicator.

    Given a warranty period of 1 year:
    365 days/year x 24 hours/day = 8760 hours. (power on hours raw value)
    At 10% of that lifetime, 876 hours, you will experience lifetime throttling if you are at around 90 (90%) MWI.
    At 50% of that lifetime, 4380 hours, you will experience lifetime throttling if you are at around 50 (50%) MWI.. get it?
    There is a slight leeway here, but the drive will gradually correct to this slope. (allowing for some initial abuse from benchmark whores *ahem* to not cripple their SSD performance by running their drives hard)

    And yes that means if you have a drive with 1 year lifetime/warranty throttling period, and just let it idle for 8760 hours, you essentially have an unthrottled drive. :P
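    The linear life-curve slope described above can be sketched as a simple function (the linear model and the 1-year period are assumptions from this post, not confirmed firmware behaviour):

```python
# Linear life-curve model (an assumption, not confirmed firmware
# behaviour): MWI is expected to fall linearly from 100 to 0 over
# the throttle period's power-on hours.
WARRANTY_HOURS = 365 * 24   # 1-year throttle period = 8760 power-on hours

def expected_mwi(power_on_hours):
    """MWI the life curve 'expects' at a given number of power-on hours."""
    return max(0, round(100 * (1 - power_on_hours / WARRANTY_HOURS)))

# 876 h (10% of the period) -> expected MWI 90; 4380 h (50%) -> 50.
# In this model, throttling kicks in when actual MWI drops below
# the expected value for the current power-on hours.
```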
    Last edited by zads; 06-25-2011 at 11:22 PM.

  24. #24
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    There is nothing to say that 7MB/s was the lowest the drive could be throttled down to. The problem was that when it got to that state it made my PC unresponsive, so it was difficult to keep running it whilst trying to use my PC as normal. The drop in speed occurred more or less instantly. It could be that if I had kept running it, at some stage it would have dropped again.

    The issue however is that the drive is more or less unusable by the time it drops to 7MB/s.

  25. #25
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    When I got back in last night Anvil's app was still running as normal. I tried a reboot to see if that would cause throttling to kick in, but it didn't. This morning ~16 hours after starting the app is still going strong.

    A couple of items I have noticed. When the drive entered a throttled state in the endurance test the delta between most worn and least worn was 12. After leaving the drive to idle the delta has gone back to 2. (See first post)

    Looking back on the Endurance threads I stopped running Anvils app on the 28th May. Just before throttling kicked in:

    Delta between most-worn and least-worn Flash blocks: 12
    Approximate SSD life Remaining: 91%
    Number of bytes written to SSD: 28,352 GB

    Between then and the start of running Anvil's app the number of bytes written has increased to 31,488GB, which equates to 3,136GB over 28 days.

    I have set Anvil's app to only leave 1GB free. That means on each cycle the drive starts empty and then gradually fills up leaving only 1GB free. The files are then deleted, which generates a TRIM hang as the full expanse of the drive is TRIMed. The process then starts again.

    In the endurance test I had static data on it and I set Anvil's app to leave 12GB.

    I would imagine that delta between most worn and least worn will stay the same due to the way I am currently running Anvil's app. I'm going to leave it running that way for a while to see what happens.
    Last edited by Ao1; 06-25-2011 at 11:54 PM.

