
Thread: Sandforce Life Time Throttling

  1. #151
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is a comparison between ATTO and AS SSD with the drive in a throttled state. ATTO 0-fill sequential write = 260MB/s; AS SSD 100% fill (incompressible) = 7MB/s.

    [Attached screenshot: Atto.png — ATTO benchmark result]

  2. #152
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1:
    Got cut short, so only 2.79TB

    2.79TB of 8% fill.

    #233 at start = 37,696GB
    #233 after 2.79TB 8% fill = 38,144GB
    Difference = 448GB

    #241 at start = 37,120GB
    #241 after 2.79TB 8% fill = 40,000GB
    Difference = 2,880GB
    This is much better. Now a write amplification of 0.155 (448GB of NAND writes / 2,880GB of host writes) is more realistic for the load.
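    For reference, the arithmetic behind that figure as a minimal Python sketch (the helper is mine and the attribute values are typed in by hand from a SMART tool; nothing here queries the drive):

```python
# SandForce SMART attributes: #233 = lifetime NAND (flash) writes in GB,
# #241 = lifetime host writes in GB. Both update in 64GB steps, so small
# deltas carry quantization error.

def write_amplification(nand_start_gb, nand_end_gb, host_start_gb, host_end_gb):
    return (nand_end_gb - nand_start_gb) / (host_end_gb - host_start_gb)

# The 2.79TB 8% fill run above:
wa = write_amplification(37_696, 38_144, 37_120, 40_000)
print(f"WA = {wa:.3f}")  # -> 0.156; below 1.0 because compression wins
```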

  3. #153
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Great work there, Ao1.

    Will be interesting to see some more SMART info from other users, both on the SF1 and SF2 controllers.

  4. #154
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Quote Originally Posted by sergiu:
    Could you also check manually whether the partition is aligned? You need to look at the sector offset with a tool like Active@ Partition Manager.


  5. #155
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Running some math here... somebody please correct me if it's wrong.

    Looks like the lifetime throttle (hereafter, LTT) rate at the NAND level for Ao1's SF-1200 drive is ~7MB/s (a little over 6MB/s * 1.12x WA; as an aside, for some reason it didn't kick in for a couple of weeks). If LTT is meant to sustain the life of the drive until its warranty expires, and we know the warranty, we can see how long OCZ/SandForce think the drive should live as a minimum.

    Based on that, once LTT kicks in, it's limiting writes to the NAND at
    7MB/s,
    420MB/min,
    24.6GB/hr,
    590.6GB/day,
    4.04TB/week,
    17.3TB/month (30-day),
    210.5TB/year.

    I'm not sure how long the warranty is for Ao1's drive, so I'll just run through all the popular options (2, 3, and 5yr).
    So for a two year warranty at this rate, that's ~421TB.
    Three year warranty at this rate, that's ~631.5TB.
    Five year warranty at this rate, that's ~1.03PB.

    Those all seem like abnormally large numbers for lifetime usage, even at such piddling speeds. I can kinda see why SF is implementing it... without the LTT, writes would be ~6x higher. I can also see why nobody has ever experienced LTT before; hitting the required writes takes some very abnormal usage.
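    A quick sketch of that budget math (my own illustration of the rate x time arithmetic, not anything from SandForce's firmware):

```python
# If lifetime throttling caps NAND writes at ~7MB/s (~6MB/s host writes
# * ~1.12 WA, per the post above), lifetime NAND writes are rate * time.

THROTTLED_MBPS = 7

def lifetime_nand_writes_tb(years, mb_per_s=THROTTLED_MBPS):
    seconds = years * 365 * 24 * 3600
    return mb_per_s * seconds / 1024**2  # MB -> TB, binary units as above

for years in (2, 3, 5):
    print(f"{years}yr: ~{lifetime_nand_writes_tb(years):.1f}TB")
# -> 2yr: ~421.1TB, 3yr: ~631.6TB, 5yr: ~1052.6TB (~1.03PB)
```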

  6. #156
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    134.65GB of 4K 100% random writes with Iometer (4 hour run, avg speed 3.19MB/s binary, TRIM enabled, pseudo-random data).

    #233 at start = 38,208GB
    #233 after = 38,656GB

    Difference = 448GB

    #241 at start = 40,064GB
    #241 after = 40,192GB

    Difference = 128GB

  7. #157
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @Vapor

    Write Amplification is surely part of that formula, so best case the totals could be ~250TB, ~380TB, ~600TB of flash writes (0.6 WA -> flash writes).
    Even with WA those figures look high; it might come to a complete halt if it was left running at "full" speed for months.

    @Ao1
    I'm pretty sure that is due to the random writes. I don't think that drive is feeling well.

    What size was the iometer test file?

  8. #158
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I set it to run for 4 hours. I got the write stats from the csv file it created. The MB/s was not a surprise; #233 was, though. The silly thing is that it's sequential speeds that get hit most, but it's 4K that does the damage.

  9. #159
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by Anvil:
    @Vapor

    Write Amplification is surely part of that formula, so best case the totals could be ~250TB, ~380TB, ~600TB of flash writes (0.6 WA -> flash writes).
    Even with WA those figures look high; it might come to a complete halt if it was left running at "full" speed for months.
    Already took WA into account, if the #233 value = true NAND writes (sounds like it is). Ao1 has been writing at low-6 MB/s with incompressible data; with a 1.12x WA that's ~7MB/s at the NAND level.

    If we're expecting users to use non-incompressible data (good assumption), then we can divide all the values I showed by your ~.6 (or multiply by ~1.6x) to show how much data (from the OS POV) gets written. Looks like a 3yr warranty would line up with just about 1PB

    Looks like random writes are brutal for WA though, damn.
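    Spelling out that divide-by-0.6 with the 3yr figure (my arithmetic, not Vapor's):

```python
# With compressible data at ~0.6 WA, host-level writes = NAND writes / WA.
nand_tb_3yr = 631.5
print(f"~{nand_tb_3yr / 0.6:.1f}TB host writes over 3yr")  # -> ~1052.5TB, ~1PB
```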

  10. #160
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor:
    If we're expecting users to use non-incompressible data (good assumption), then we can divide all the values I showed by your ~.6 (or multiply by ~1.6x) to show how much data (from the OS POV) gets written. Looks like a 3yr warranty would line up with just about 1PB
    non-incompressible data?? You mean compressible data?

    Anyway, I think 0.6 is a BAD assumption for write amplification for most normal usage. You can only use the data from the SSD 233 attribute to compute WA if the SSD has NEVER, NOT ONCE, been benchmarked with a program that writes easily compressible data.

    The problem is that normal usage has VERY low writes. So if you run JUST ONE benchmark that writes easily compressible data (ATTO, CDM-0-fill, IOMeter with repeating or pseudo-random data, etc.), that will totally dominate the normal usage writes for most people. Note that Anvil did not claim that he never benchmarked his 0.6 WA drive. I guess he probably benchmarked it once or twice!

  11. #161
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Yeah, compressible data

    Good points all around. Guess we can't know how much OS-level data they're expecting, but it does look like they are aiming for ~200TB/yr of NAND writes (for just the 40GB drive), which is a ton.

  12. #162
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    407
    Or maybe they have some bottom limit for throttled speed that they don't want to go below. At some point they may as well just make the drive read-only for a while, and that sort of thing is interesting enough to create controversy. The headlines would just be too good: "Sandforce drives shut themselves down if you write too much". I don't think we can surmise much about what Sandforce or OCZ expects in terms of write endurance based on the throttled speed; there are just too many other factors involved. I would assume that they are not really trying to protect themselves against 24/7 writes, at least in the 40GB drives.

  13. #163
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is a summary of what has been posted on 233/241.

    I noticed Gogeta uses sleep a lot. When I use sleep it puts about 2GB of data on my drive. Hibernate would also put loads of writes on a drive.

    Still, it's a mixed bag so far.

    [Attached screenshot: Untitled.png — summary table of reported #233/#241 values]

  14. #164
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by bluestang:
    I asked for a manual check because I am a programmer and I expect to see bugs everywhere. For a manual check: if you really have 2MB free at the beginning of the drive, then you should see the C: partition with an offset of 4096 sectors. Do you have NtfsLastAccessUpdate disabled?

  15. #165
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1:
    Here is a summary of what has been posted on 233/241.

    I noticed Gogeta uses sleep a lot. When I use sleep it puts about 2GB of data on my drive. Hibernate would also put loads of writes on a drive.

    Still, it's a mixed bag so far.

    [Attached screenshot: Untitled.png]
    As a note, I have Windows XP SP3 with an Ubuntu distribution in dual boot. I sometimes put my laptop in standby, but never hibernate, and I have NtfsLastAccessUpdate disabled. Until recently, I also had memory paging disabled.

    Back on topic, the random write result is very surprising. Was the drive throttled when you started testing? At a write amplification of around 3.3, even if throttled you could still be wearing flash cells at a rate of about 20MB/s.

  16. #166
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Yes it was throttled. The write speed was consistent throughout. Tomorrow the V2 is going to get 4 hours of random 512B.

  17. #167
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    512B is a sadistic way to torture an SSD. Results will be interesting from a speed point of view. Could you also add to your test list a 4 hour session with completely random 4KB data, but with the drive in an unaligned state?

  18. #168
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    i think it would be a good idea to update the first post to expand on the SMART findings and the relation to WA/compression and other findings. Basically a compilation of known facts... i have linked this thread to others, but at 167 posts and climbing quickly, it will be hard to sift out the pertinent details. an overview of sorts?

  19. #169
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'm thinking of implementing a "Steady State" benchmark in my app; it shouldn't take that long as most things are already there.
    It would save throughput every n minutes and export the result set to Excel when done.
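    To illustrate the idea (Anvil's app isn't shown in this thread, so this is just a hedged sketch of such a logging loop, with made-up paths and intervals):

```python
# Steady-state write logger: write incompressible data in a loop and log
# MB/s once per sampling interval, then dump a CSV that Excel can open.
# Note: the OS write cache still buffers here; a real tool would use
# direct/unbuffered I/O to measure the drive itself.

import csv, os, time

SAMPLE_SECS = 60              # "every n minutes" -> n = 1 here
BLOCK = 4 * 1024 * 1024       # 4MB per write call
RUN_SECS = 3600               # 1 hour run
TARGET = "X:/testfile.bin"    # hypothetical path on the drive under test

buf = os.urandom(BLOCK)       # incompressible data, worst case for SandForce
rows, written = [], 0
start = mark = time.time()

with open(TARGET, "wb", buffering=0) as f:
    while time.time() - start < RUN_SECS:
        if f.tell() >= 8 * 1024**3:  # wrap at 8GB so the file stays bounded
            f.seek(0)
        f.write(buf)
        written += BLOCK
        now = time.time()
        if now - mark >= SAMPLE_SECS:
            rows.append((round(now - start), written / (now - mark) / 1024**2))
            written, mark = 0, now

with open("throughput.csv", "w", newline="") as f:
    csv.writer(f).writerows([("elapsed_s", "MB_per_s"), *rows])
```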

  20. #170
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Computurd:
    i think it would be a good idea to update the first post to expand on the SMART findings and the relation to WA/compression and other findings. Basically a compilation of known facts... i have linked this thread to others, but at 167 posts and climbing quickly, it will be hard to sift out the pertinent details. an overview of sorts?
    Done. I have tried to summarise the key findings and observations based on the tests that have been carried out. If anyone disagrees, has comments, or sees omissions, let me know and I will update accordingly.

  21. #171
    Xtreme Member
    Join Date
    Aug 2008
    Location
    SF bay area, CA
    Posts
    262
    Quote Originally Posted by Vapor:
    Yeah, compressible data

    Good points all around. Guess we can't know how much OS-level data they're expecting, but it does look like they are aiming for ~200TB/yr of NAND writes (for just the 40GB drive), which is a ton.
    Yeah, 3K is plenty of write cycles for 99.9% of people out there.
    What does it take to wear out that drive in 1 year's time?
    Even with 25nm 3K P/E-cycle NAND flash on a small 40GB SSD,
    you're talking about writing over 328GB PER DAY (assuming WA=1)...
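    That per-day figure falls straight out of the idealized math (WA = 1, perfect wear leveling, the nominal 3K P/E rating):

```python
capacity_gb, pe_cycles = 40, 3000
total_writes_gb = capacity_gb * pe_cycles    # 120,000GB of NAND endurance
print(f"{total_writes_gb / 365:.1f}GB/day")  # -> 328.8GB/day to kill it in a year
```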

  22. #172
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    So it seems that, on one hand, the delta between #233 & #241 can demonstrate the compression factor, but on the other hand write amplification etc. offsets any saving due to compression.

    Random 512B writes are quite slow, so after 8 hours I had only incurred 128GB of host writes, which is not enough considering the attributes only update every 64GB.

    Time to give the V2 a rest.

    #233 at start = 38,656GB
    #233 after = 38,848GB

    Difference = 192GB

    #241 at start = 40,192GB
    #241 after = 40,320GB

    Difference = 128GB

  23. #173
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Quote Originally Posted by sergiu:
    I asked for a manual check because I am a programmer and I expect to see bugs everywhere. For a manual check: if you really have 2MB free at the beginning of the drive, then you should see the C: partition with an offset of 4096 sectors. Do you have NtfsLastAccessUpdate disabled?

    Yes, it's an offset of 4096 sectors. And I'm on XP SP3. Also, I believe my WA is skewed from two AS-SSD runs and one ATTO run in the first couple months. Forgot to mention that earlier.

    Edit: hibernation off, system managed pagefile on SSD (currently 3.5GB), system restore off on all drives, recycling bin for SSD set to remove files immediately.


  24. #174
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I’ve been thinking about how to work out the compression factor for SF drives.

    Due to the SMART update frequency of V2 drives it would be a lot more accurate to work with a V3.

    The problem with trying to work out compression:

    • #233 records compression and WA
    • Compression could be related to xfer size and not just data format
    • QD might also play into it

    To help eliminate WA the drive would have to be in a fresh state and the test file would need to be less than the drive capacity, but large enough to mitigate the 1GB reporting frequency of a V3.

    A test file should only consist of one xfer size with a known compressibility factor.

    Write speeds should be recorded during the xfer.

    Tests should be repeated at different QD's.

    I suspect that for compression to be effective the xfer size needs to be 128K or above.

    Anyone with a V3 fancy trying that out?
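    In case it helps whoever tries it, here is a minimal sketch of generating such a test file (the filename, 128K xfer size, and 50/50 split are just example choices, not part of the proposal):

```python
# Build a test file from a single xfer size where each block has a roughly
# known compressibility: half random bytes, half zeros.

import os

XFER = 128 * 1024       # one xfer size per file, e.g. 128K
BLOCKS = 8192           # 8192 x 128K = 1GB, large vs. the V3's 1GB reporting step
RANDOM_FRACTION = 0.5   # target: ~50% of each block incompressible

rand_len = int(XFER * RANDOM_FRACTION)
zeros = b"\x00" * (XFER - rand_len)  # trivially compressible remainder

with open("compress_test.bin", "wb") as f:
    for _ in range(BLOCKS):
        f.write(os.urandom(rand_len))  # incompressible part
        f.write(zeros)

# Write the file to the SF drive at different queue depths, record write
# speed during the xfer, then compare the #233 delta afterwards.
```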

  25. #175
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'd do it, but my Agility had to be returned; it had issues from day one, and not being recognized was what tipped the scale.

    Not sure if I'll be accepting another one in return; I might try to get a Vertex 3 60GB or some V2.
    (Won't be getting the replacement for another week, probably more like 10 days.)
