Thread: Sandforce Life Time Throttling

  1. #1
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    I just want to clarify something there will surely be questions about. Yes, groberts101 has been banned for his general conduct in the Storage section; the ban is medium length.

    How things are said is important, both in terms of getting your point across effectively and in terms of staying within the rules, so that one can continue to post here.

    Thankfully everyone else kept their cool here.

  2. #2
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    I just want to clarify something there will surely be questions about. Yes, groberts101 has been banned for his general conduct in the Storage section; the ban is medium length.
    Doh! Now we can't try to convince him to use one of his 8000 hour SSDs in an endurance test!

  3. #3
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Hi sergiu,

    OK, I've just started running Anvil's app with no TRIM after a secure erase.

    In the first loop the MB/s speed is noticeably faster. I'm getting around 60MB/s compared to around 45MB/s with TRIM. The blocks are of course clean at the moment, so I doubt it will last.


    [Attachment: Untitled.png]

    [Attachment: cmd.png]

  4. #4
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is the end of the first loop. The V2 was significantly faster without TRIM.

    [Attachment: Untitled.png]

  5. #5
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is the 2nd loop. Still faster without TRIM. (No TRIM hang either, which makes it faster still)

    [Attachment: Untitled.png]

  6. #6
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    One hour in. Anvil's app is becoming unresponsive. I don't think lifetime throttling has kicked in yet.

    [Attachment: Untitled.png]

  7. #7
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Nearly 2 hours in. Anvil's app periodically becomes unresponsive for a few seconds and then starts responding again.

    I've got to go out and do some work now, so I'll have to leave it running and report back when I get back. (3 to 4 hours)

    [Attachment: Untitled.png]

  8. #8
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Hi Vapor, any chance of posting your #233 (E9) and #242 (F1) stats and power-on hours? Have your drives always been in RAID 0? What stripe size? It would be interesting to see how the #233 & #242 values compare between drives.

  9. #9
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Looking at the run without TRIM, Anvil's app generated 293.47GB of writes.

    #233 started at 37,312 and ended at 37,696. Difference = 384GB. Even if you allow 64GB for the delayed reporting, that still comes out at 320GB = 26GB more than what was written.
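    The arithmetic above can be sketched as a small calculation: write amplification estimated from the delta in attribute #233 (GB written to NAND) against what the benchmark reported writing, with the 64GB allowance bracketing the coarse, delayed attribute reporting. A rough illustration only, using the figures from this post.

```python
# Estimate write amplification from SMART #233 (GB written to NAND)
# before/after a run, versus the GB the benchmark reports writing.
# The 64 GB slack models the delayed/coarse attribute reporting.

def write_amplification(nand_start_gb, nand_end_gb, host_written_gb,
                        slack_gb=64):
    """Return (low, high) bounds on write amplification."""
    delta = nand_end_gb - nand_start_gb          # 384 GB in the post
    low = (delta - slack_gb) / host_written_gb   # best case: full slack applies
    high = delta / host_written_gb               # worst case: no slack
    return low, high

low, high = write_amplification(37_312, 37_696, 293.47)
print(f"WA between {low:.2f} and {high:.2f}")    # roughly 1.09 to 1.31
```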

  10. #10
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1 View Post
    Looking at the run without TRIM Anvil's app generated 293.47GB of writes.

    #233 started at 37,312 and ended at 37,696. Difference = 384GB. Even if you allow 64GB for the delayed reporting that still comes out at 320GB = 26GB more than what was written.
    What you see is absolutely normal. If data cannot be compressed, write amplification is certainly greater than one. Based on earlier measurements in this thread, it looked like a write amplification of 1.12 for incompressible data written with the application. I would also have another request if possible: do some pure random writes and look at the SMART parameters before and after the test to see how write amplification evolves with this kind of load. For the test to be statistically correct, you would probably need to write at least 2-3TB of data, so you would probably need to keep it running for a few days.

  11. #11
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    1TB of 8% fill.

    #233 at start = 37,696GB
    #233 after 1TB 8% fill = 37,824GB
    Difference = 128GB

    #241 at start = 37,120GB
    #241 after 1TB 0fill = 38,144GB
    Difference = 1,024GB

    @ sergiu, I'll run a 46% fill for 1TB and then I'll do 100% 4K random writes.

  12. #12
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1 View Post
    1TB of 8% fill.

    #233 at start = 37,696GB
    #233 after 1TB 8% fill = 37,824GB
    Difference = 128GB

    #241 at start = 37,120GB
    #241 after 1TB 0fill = 38,144GB
    Difference = 1,024GB

    @ sergiu, I'll run a 46% fill for 1TB and then I'll do 100% 4K random writes.
    Could you also repeat the test with 8% compression, but writing 4TB of data? Technically, the result gives us a 0.125 write amplification, which is a little too good to be true considering that the controller is theoretically also writing some data to ensure proper error correction. With 4TB written I believe we can approximate the write amplification with a lower margin of error.

    @Anvil: Would it be possible to modify your application to detect SMART parameter changes (something like polling the drive every 30s)? You could then easily compute write amplification (with a good approximation) on SandForce-based SSDs after the first detected changes on the #233 and #241 parameters.
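    A minimal sketch of what such polling could look like outside the app: sample the two attributes with smartctl every 30 seconds and print a write-amplification estimate once both raw values have moved. The parsing of smartctl's output is an assumption (column layout varies between versions), and `/dev/sda` is a placeholder device.

```python
import re
import subprocess
import time

def wa(delta_nand_gb, delta_host_gb):
    """Write amplification = NAND writes / host writes."""
    return delta_nand_gb / delta_host_gb

def read_attrs(device, wanted=(233, 241)):
    # Assumes smartctl's tabular "-A" output: the attribute ID is the
    # first column and the raw value is the last. Adjust as needed for
    # your smartctl version.
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    attrs = {}
    for line in out.splitlines():
        m = re.match(r"\s*(\d+)\s", line)
        if m and int(m.group(1)) in wanted:
            attrs[int(m.group(1))] = int(line.split()[-1])
    return attrs

def poll(device="/dev/sda", interval=30):
    base = read_attrs(device)
    while True:
        time.sleep(interval)
        cur = read_attrs(device)
        d_nand = cur[233] - base[233]   # GB written to NAND
        d_host = cur[241] - base[241]   # GB written by host
        if d_nand and d_host:
            print(f"write amplification ~ {wa(d_nand, d_host):.2f}")
```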

  13. #13
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by sergiu View Post
    Could you repeat also the test with 8% compression, but writing 4TB of data?
    Got cut short, so only 2.79TB.

    2.79TB of 8% fill.

    #233 at start = 37,696GB
    #233 after 2.79TB 8% fill = 38,144GB
    Difference = 448GB

    #241 at start = 37,120GB
    #241 after 2.79TB 0fill = 40,000GB
    Difference = 2,880GB
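    A quick back-of-the-envelope check of what those deltas imply:

```python
# Effective write amplification for the 2.79TB run, from the SMART deltas.
nand_delta_gb = 38_144 - 37_696   # attribute #233: 448 GB reached NAND
host_delta_gb = 40_000 - 37_120   # attribute #241: 2,880 GB sent by host
wa = nand_delta_gb / host_delta_gb
print(f"effective WA ~ {wa:.3f}")  # ~0.156, i.e. roughly 6:1 compression
```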

  14. #14
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    As throttling is based on #233, it was possible to write at close to full speed with 0-fill.

    With 8% fill the drive started off at ~120MB/s, but went down to ~45MB/s when it went back into a throttled state. Incompressible throttled write speeds are ~6MB/s.

    For the 46% fill the drive is at ~10MB/s, 4MB/s above the throttled-state speed of ~6MB/s.

    These write speeds, measured relative to the throttled-state floor, should make it possible to check the % fill assumptions Anvil based on zip files against the compression the SF controller can actually achieve.
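    One hedged way to read those numbers: if lifetime throttling caps the rate at which data reaches NAND (the ~6MB/s incompressible floor), then the host-side speed under throttle implies how much of the data survives compression. A sketch with the rough figures from this post; the assumption that the cap applies to NAND-side rather than host-side writes is mine, not confirmed in the thread.

```python
# If the throttle caps NAND-side writes at ~6 MB/s, the fraction of host
# data that actually reaches NAND is nand_cap / host_speed.
def implied_nand_fraction(host_mb_s, nand_cap_mb_s=6.0):
    return nand_cap_mb_s / host_mb_s

# 46% fill ran at ~10 MB/s under throttle in this post.
print(f"46% fill: ~{implied_nand_fraction(10):.0%} of host data reaches NAND")
```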

    [Attachment: anvil app.png]

  15. #15
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by Ao1 View Post
    Hi Vapor, any chance of posting your #233 (E9) and #242 (F1) stats and power on hours? Have your drives always been in raid 0? What stripe size? It would be interesting to see how the #233 & #242 values compare between drives.
    They have not always been in RAID 0. When I first got them I tested single vs. RAID, so one V2 has noticeably more wear. I use a 32KB stripe.

    Anyway, here's the info:
    [Attachment: V2a.PNG]

    [Attachment: V2b.PNG]

    Seems compression/write-amp levels are similar-ish (what with the low resolution and uneven usage), though you bring up a really good question: does RAID (except R1) compromise some of the compression ability?

  16. #16
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Vapor View Post
    They have not always been in RAID 0. When I first got them I tested single vs. RAID, so one V2 has noticeably more wear. I use a 32KB stripe.

    Anyway, here's the info:

    Seems compression/write-amp levels are similar-ish (what with the low resolution and uneven usage), though you bring up a really good question: does RAID (except R1) compromise some of the compression ability?
    Thanks for posting. The differences between #233 & #241 are quite low on both drives, so it does not appear that much of your workload could be compressed.

    It would be great if more people could post their SMART values for these attributes to get a better picture.

  17. #17
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Quote Originally Posted by Ao1 View Post
    It would be great if more people could post their SMART values for these attributes to get a better picture.
    My V2 60GB...hope it helps...


  18. #18
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by bluestang View Post
    My V2 60GB...hope it helps...
    Thanks for posting. Lifetime writes < writes to NAND.

  19. #19
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by Ao1
    Thanks for posting. The differences between #233 & #241 are quite low on both drives, so it does not appear that much of your workload could be compressed.

    It would be great if more people could post their SMART values for these attributes to get a better picture.
    Yeah, I was surprised at the low differences compared to 'expected.' All that is on these drives is the OS and applications, things that should be very compressible. All documents (from .doc to .jpg/RAW) are on a different SSD.

    Part of the pagefile is on this R0 array (part of the pagefile is on the other SSD, too), but with 18GB of RAM I'm not sure how much either gets used (usually in the 8-13GB of RAM usage range). Firmware is 1.10 on both V2s.

    I am curious why my write amplification (233 vs. 241) values are higher compared to others' values. Usage, firmware, and RAID are probably the three leading candidates in my mind, but I'd be surprised if it's my usage.

    Seeing as I'm very far away from hitting a lifetime throttle state, I could run Anvil's app a few nights with the different compressibility levels and try to decode some of it.

    EDIT:
    Quote Originally Posted by bluestang View Post
    My V2 60GB...hope it helps...

    image snip
    Whoa, looks like a write amplification of over 2x there...I'm wondering if different firmwares have broken reporting now.
    Last edited by Vapor; 06-28-2011 at 10:48 AM. Reason: edit

  20. #20
    Xtreme Member
    Join Date
    Aug 2008
    Location
    SF bay area, CA
    Posts
    262
    Weird, my last post didn't go through? Maybe I fell asleep last night before submitting it lol

    Quote Originally Posted by One_Hertz View Post
    I am still pondering on the reasoning why this warranty throttle even exists and why SF couldn't just warranty their SSDs based on the amount of data written to them (like Intel does it). Could it be there to prevent enterprise users from using the cheaper consumer SF drives instead of their expensive enterprise models?
    Enterprise drives can be throttled too.
    Only certain customers get unthrottled drives.
    It's much easier to sell a warranty based on time than a warranty that varies in length with your workload.
    It's also much easier because you don't have to check the MWI on every RMA.

    Quote Originally Posted by johnw View Post
    No, as I said in the other thread, all that is required is a SF drive that has around a year of power-on hours, but minimal writes. It will have built up a large gap between current writes and the warranty-throttle line, so it should be able to be written hundreds of TBs before warranty-throttle begins.
    Quote Originally Posted by Anvil View Post
    ...
    So, without a "throttling disabled" SF drive that's pretty much it for the Endurance test.
    You could buy unthrottled drives through me but you would have to start again at MWI=100

    Quote Originally Posted by Ao1 View Post
    ...
    The downside for vendors is that they would have to check all RMAs to verify the workload = time/cost/money. Still, it's surprising that none of the small vendors have started to sell un-throttled drives.

    The other issue, I guess, is that it would not look good for an emerging technology if people started reporting their NAND had burnt out "prematurely".
    Right on the money. Sell unthrottled drives and those retail vendors will get 1 egg reviews on newegg "it died after only 1 month. I was only benchmarking it 24/7!!"

  21. #21
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by zads View Post
    Right on the money. Sell unthrottled drives and those retail vendors will get 1 egg reviews on newegg "it died after only 1 month. I was only benchmarking it 24/7!!"
    Doesn't every non-SandForce drive have this 'problem' then? They seem to be getting on fine.

  22. #22
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by zads View Post
    You could buy unthrottled drives through me but you would have to start again at MWI=100
    Or you could post your test results.

  23. #23
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by zads View Post
    You could buy unthrottled drives through me but you would have to start again at MWI=100
    Let me think about that, could be interesting.

    So, it would just slow down to whatever rate the controller is capable of handling?
