
Thread: Sandforce Life Time Throttling

  1. #551
    Xtreme Member
    Join Date
    Aug 2006
    Posts
    187
    So does the Intel 520 suffer from SF throttling?
    Intel S1155 Core i7 2600K Quad Core CPU
    Gigabyte GA-Z68X-UD3R-B3 Socket 1155
    DDR3 16GB (4x4G) G.Skill Ripjaws 1600MHz RAM Kit
    128GB Crucial M4 2.5" SATA 3 Solid State Drive (SSD)
    2TB Western Digital BLACK edition 64M SATA HDD
    1TB Western Digital Green 64M SATA HDD
    NVIDIA GTX560 1GB Gigabyte OC PCIe Video Card
    23.6" BenQ XL2410T 3D LED Monitor
    CoolerMaster RC-922M-KKN1 HAF Mid ATX Case Black
    Thermaltake 775 Watt Toughpower XT ATX PSU
    LG BH10LS30 Blu-Ray Writer
    Corsair Hydro H70 High Performance Liquid Cooling System

  2. #552
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    The Intel 520 does not seem to suffer from Lifetime throttling.

  3. #553
    Xtreme Member
    Join Date
    Aug 2006
    Posts
    187
    Quote Originally Posted by canthearu View Post
    The Intel 520 does not seem to suffer from Lifetime throttling.
    wow so that would be the first SF drive that doesn't?

  4. #554
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Day 6

    [Attachment: day 6.png]

    @canthearu Tests to try to find out how SF compresses data have so far been based on writing significant volumes of data with various levels of compressibility. What I try to do is look at how SF performs with a “normal” workload that excludes things like installations. The bottom line, at least in my case, is that it does not perform well at all. Perhaps there is something in my configuration that is skewing the results. If you want to compare, just use your SSD as normal, but try to avoid installations, which, as can be seen above, skew the results.
    By the way, I don’t use a page file. I will see if that makes a difference later.
    Last edited by Ao1; 03-10-2012 at 01:07 AM.

  5. #555
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by therat View Post
    wow so that would be the first SF drive that doesn't?
    OCZ are about the only company that throttle.

  6. #556
    Xtreme Member
    Join Date
    Aug 2006
    Posts
    187
    Quote Originally Posted by Ao1 View Post
    OCZ are about the only company that throttle.
    Really? Why would they do that when it is so unpopular?

  7. #557
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Day 7.

    It’s impossible to know if the data was being compressed (outside of the install on day 6). Even if it was, any benefit from compression was blown out of the water by WA. This confirms my suspicion that SF drives are about the worst you can use for a client-based workload. They are like an F1 race car: theoretically really fast on the race track, but put them on a country lane and things don’t look so good. As SSD performance is underutilised in a client environment, speed differences between brands of SSD are never going to be noticed anyway, except perhaps in very rare instances. My 2c.

    [Attachment: day 7.png]

  8. #558
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    I have also noticed recently that WA was close to 1 for all writes over the last 4 months on my SSD. In client scenarios I believe it behaves better only when the drive is nearly full. In that situation, 10% of space saved through compression translates into 10% extra over-provisioning and better performance. The difference due to compression should be easily observed in full-span 4K writes with highly compressible data (a pure database scenario), but that is a server load.

  9. #559
    Xtreme Member
    Join Date
    Feb 2012
    Location
    Perth, Australia
    Posts
    467
    Quote Originally Posted by Ao1 View Post
    Even if it was, any benefit from compression was blown out of the water by WA. This confirms my suspicion that SF drives are about the worst you can use for a client-based workload.
    I disagree on more than just a few levels.

    My desktop SandForce drive is 2 years old and used in a standard desktop. Its stats indicate:

    11227 Power on hours.
    3456 GiB NAND writes
    4032 GiB LBA writes
    12736 GiB LBA reads

    Many, many of my power-on hours have been low intensity, yet I’m still definitely running below 1 write amplification (3456 GiB of NAND writes against 4032 GiB of LBA writes ≈ 0.86), and just about all the SandForce drives I’ve seen are running below 1 write amplification.

    Furthermore, have you even checked the same workload on any other SSDs, like the m4 or drives using the Barefoot controller? I know that SSDs based on the Barefoot controller regularly burn out their NAND in 2 years under a moderate desktop load due to their write amplification.

    My HTPC SandForce 2 drive has accumulated 1529 hours and 469 GiB of NAND writes, mostly because I’ve been a bit mean to it (I’ve done a hundred gigs or so of endurance testing and plenty of benchmarks). It is again tracking well below your 55 gig for 80 hours figure.

    There is something about your particular workload that is causing poor write amplification, and I bet that if you were able to perform this test on other SSDs, you would see the same or worse figures.

  10. #560
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Low-intensity writes are also impacted by page size. Most first-generation SandForce drives have 4K pages, while most second-generation drives have 8K pages. People usually use 4K clusters in Windows, which would explain high WA at idle. This should also impact other SSDs.

  11. #561
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    If you get a WA of just under 1 then either compression is not working well or the internal WA is high enough to offset the benefit of compression. If WA comes out at around 0.86 you are not getting anything close to the benefit of 0-fill read/write speeds, because either the data can’t be compressed or WA is adding to the workload and slowing things down. 50% compression combined with a 1.1 WA (the WA figure Intel quote they can achieve) does not come out at 1; it would be closer to 0.55.

    There is undoubtedly a trade-off between compression and WA, which will of course fluctuate depending on the workload.
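
    As a rough sketch of that arithmetic (the 50% compression and the 1.1 internal WA are just the figures discussed above, used for illustration, not measurements):

    def measured_wa(compressed_fraction, internal_wa):
        # fraction of host data still written to NAND after compression,
        # multiplied by the controller's internal write amplification
        return compressed_fraction * internal_wa

    print(measured_wa(0.50, 1.1))   # 0.55 -> what 50% compression plus a 1.1 WA would give
    print(0.86 / 1.1)               # ~0.78 -> fraction actually kept if the observed WA is 0.86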

    SF drives are the only drives that report LBA vs NAND writes, so I can’t compare with other SSDs. I can only report and draw conclusions from what I can observe with my setup, but of course YMMV.

    Remember though that I have excluded the huge difference that a full installation would have made (i.e. a 50% reduction in NAND vs LBA writes). A full installation (OS & apps) would significantly change the WA factor over a short duration because it represents a large volume of data that can be compressed by 50%. To offset that I avoided copying large data files that I know can’t be compressed (like MP3s, AVIs, etc.).

    If you have a chance it would be great to compare your stats on a daily basis, but I think it is fair to say this:

    • SF drives were designed for enterprise workloads, and the compression algorithms are optimised for enterprise rather than client workloads.
    • If you can’t compress data by at least 46%, both read and write speeds drop well below other SSDs.
    • SF drives can’t compress to anything like the theoretical compressibility of the data unless you are in the 0-fill to 8%-fill range. If you are not in that range, write speeds drop significantly.

    On the plus side, SF drives can compress OS/app installations by 50%, but what benefit does the end user get from that?

    Edit:

    Quote Originally Posted by canthearu View Post
    <snip> done a hundred gig or so endurance test and plenty of benchmarks) It again is tracking well below your 55 gig for 80 hours figure.
    BTW if you are benchmarking using 0-fill you will be seriously skewing your results: 0-fill means lots of LBA writes and hardly any NAND writes. The same applies if you have been using ASU with anything less than 100% incompressible data.
    Last edited by Ao1; 03-11-2012 at 06:34 AM.

  12. #562
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by sergiu View Post
    Low intensity writes are also impacted by page size. Most first generation Sandforce drives have 4K pages while in second generation, most of them have 8K pages. People usually use 4K clusters in Windows and this would explain high WA in idle. This should also impact other SSDs.
    There are still a lot of unknowns. Can SF write-combine with no [onboard] cache? If it can, does it need a high QD? Can SF only compress xfers above a certain size? How random does the data have to be? What is the ratio between the compressibility of the data and WA?

    @Anvil, a special version of ASU that could drip-feed xfers at specific sizes might help to isolate exactly how the SF controller works.
    Last edited by Ao1; 03-11-2012 at 06:08 AM.

  13. #563
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1 View Post
    There are still a lot of unknowns. Can SF write combine with no [onboard] cache? If they can do they need a high QD? Can SF only compress xfers above a certain size? How random does the data have to be? What is the ratio between compressibility of data and WA?

    @ Anvil, a special version of ASU that could drip feed xfers at specific sizes might help to isolate exactly how the SF controller works.
    A wild guess is that it can write-combine, as it can easily use the SRAM allocated for archiving as a buffer, something like: “if size < 32K and not timed out, then wait, else archive and flush”. The controller does not use external memory, but any quantity of internal memory could be used as a buffer. Also... strange things with SandForce second generation: my workload is over 50% database related, and this translates to about 150GB of writes in a “job” that runs for a few hours. During the job run I see a WA of around 0.2-0.3. However, at idle (over the complete Christmas holiday) I saw a WA of over 1.5.
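
    A minimal sketch of the buffering behaviour being guessed at here, purely illustrative (the 32K threshold and the timeout are the poster's guess, not anything SandForce have documented):

    import time

    class WriteCombiner:
        # Toy model: buffer small host writes in on-chip SRAM and only
        # "archive and flush" (compress + program to NAND) when the buffer
        # reaches a size threshold or has sat idle past a timeout.
        def __init__(self, flush_size=32 * 1024, idle_timeout=0.01):
            self.buf = bytearray()
            self.flush_size = flush_size
            self.idle_timeout = idle_timeout
            self.last_write = time.monotonic()

        def host_write(self, data):
            self.buf += data
            self.last_write = time.monotonic()
            if len(self.buf) >= self.flush_size:
                self.flush()

        def poll(self):
            # called periodically by the firmware's idle loop
            if self.buf and time.monotonic() - self.last_write > self.idle_timeout:
                self.flush()

        def flush(self):
            print(f"archive and flush: {len(self.buf)} bytes programmed as one unit")
            self.buf.clear()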

  14. #564
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    What xfer sizes are typically being generated by your database? What is the avg QD? What is the split between random and sequential access? This would really help to understand a workload that the SF controller is able to manage well.

  15. #565
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by canthearu View Post
    My HTPC SandForce 2 drive has accumulated 1529 hours and 469 GiB of NAND writes, mostly because I’ve been a bit mean to it (I’ve done a hundred gigs or so of endurance testing and plenty of benchmarks). It is again tracking well below your 55 gig for 80 hours figure.
    Which proves nothing with respect to how a Sandforce SSD will behave with typical client loads. Benchmarks are one of the worst things you can run on an SSD that is supposed to be simulating typical client loads, since the benchmarks write a LOT of data (often overwhelming the amount of "typical" data written), and the data written by the benchmarks is usually far from "typical".

  16. #566
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    SF drives are the only drives that report LBA vs nand writes so I can’t compare with other SSD’s. I can only report and make conclusions on what I can observe with my set up but of course YMMV.
    But it is possible to estimate WA on some other SSDs. Many SSDs include an erase block count (sometimes called wear leveling count) which can be used to estimate flash writes. Divide estimated flash writes by the host writes (either as reported by the SSD SMART attributes, or from some sort of IO monitoring program), and you have an estimate of WA. Not perfect, but may be good enough to compare with SF under a load that causes high WA with SF.
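
    As a rough sketch of that estimate (the attribute interpretation and the numbers below are assumptions for illustration; how the erase/wear-levelling count maps to bytes written varies by drive):

    # Estimate WA: flash writes ~= average P/E (erase) count * NAND capacity,
    # then divide by host writes reported by SMART or by an IO monitor.
    avg_pe_cycles     = 50      # e.g. wear levelling count (average block erases)
    nand_capacity_gib = 64      # NAND swept per full average erase cycle (approx.)
    host_writes_gib   = 2500    # host (LBA) writes over the same period

    est_flash_writes_gib = avg_pe_cycles * nand_capacity_gib    # ~3200 GiB
    est_wa = est_flash_writes_gib / host_writes_gib
    print(f"Estimated WA ~= {est_wa:.2f}")                      # ~1.28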
    Last edited by johnw; 03-11-2012 at 11:19 AM.

  17. #567
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I will image the 530 and put it on the 830 to see what happens. Should be an interesting comparison.

  18. #568
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Ao1 View Post
    What xfer sizes are typically being generated by your database? What is the avg QD? What is the split between random and sequential access? This would really help to understand a workload that the SF controller is able to manage well.
    I have only monitored the load with HDDSentinel. The reported load of the SSD is ~5-7% while sustained speed is 3.5-4MB/s. The max theoretical QD for my load cannot be higher than 8, however I believe the average is somewhere between 2 and 4. Block size is 16K and as for sequential/random distribution, I have no idea. However, for small block sizes, I believe there is no difference between sequential/random on Sandforce based SSD, as there is no external buffer.

  19. #569
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    I really wonder if SF 3 shouldn't have an external cache... I'm thinking it could certainly use it. Despite protestations to the contrary, I think you really have to know your workload before deciding whether SF is right for you.

    I'm still of the opinion that the 2281 needs OP like the first 1200s, but so far I've not seen a reduction in the non-FOB performance degradation even when OP'd.

  20. #570
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by sergiu View Post
    I have only monitored the load with HDDSentinel. The reported load of the SSD is ~5-7% while sustained speed is 3.5-4MB/s. The max theoretical QD for my load cannot be higher than 8, however I believe the average is somewhere between 2 and 4. Block size is 16K and as for sequential/random distribution, I have no idea. However, for small block sizes, I believe there is no difference between sequential/random on Sandforce based SSD, as there is no external buffer.
    This is already helpful info. Are all xfers 16K? Can you check the QD with perfmon?

  21. #571
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by johnw View Post
    But it is possible to estimate WA on some other SSDs. Many SSDs include an erase block count (sometimes called wear leveling count) which can be used to estimate flash writes. Divide estimated flash writes by the host writes (either as reported by the SSD SMART attributes, or from some sort of IO monitoring program), and you have an estimate of WA. Not perfect, but may be good enough to compare with SF under a load that causes high WA with SF.
    This method will not have the same level of granularity, as updates can only occur at 60GB increments of NAND writes. I’ll get the exact LBA count the next time the PE count increases and then I can start from there.

    [Attachment: 830.png]

  22. #572
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Ao1 View Post
    This method will not have the same level of granularity, as updates can only occur at 60GB increments of nand writes. I’ll get the exact LBA count next time the PE increases and then I can start from there.
    Hmmm, I wonder if you should completely fill the SSD 100% with data and then delete it (and repeat the fill/delete to cover the OP), and then restart the test.

    I am concerned that there could be some blocks on the SSD that have never been written to (or not since the last secure erase), and if so the SSD will be able to write some pages without doing a block erase, which would result in an inaccurate estimate of flash writes from the block erase count.
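
    A minimal sketch of that fill/delete pass, assuming a small Python script run against the drive under test (the path and chunk size are arbitrary; random data is used so a SandForce drive can't simply compress it away):

    import os

    FILL_PATH = r"D:\fill.bin"     # hypothetical file on the drive under test
    CHUNK = 16 * 1024 * 1024       # 16 MiB of random (incompressible) data per write

    # Write random data until the volume is full...
    with open(FILL_PATH, "wb") as f:
        try:
            while True:
                f.write(os.urandom(CHUNK))
        except OSError:
            pass                   # disk full
    # ...then delete the file; repeat the fill/delete pass so the spare area gets hit too.
    os.remove(FILL_PATH)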

  23. #573
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Neither the 530 nor the 830 was SE’d. Both were formatted before installing the OS & apps. I have reset the start point to a change in the PE count, so both drives are now in a similar test condition.

    I’m more interested in the 530 though. I would like to establish the best workload to minimise WA; 12K xfers at QD4 seems a good place to start.

    [Attachment: 830 12.png]

  24. #574
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Most databases use an 8KB-64KB block size; using 12K would be quite strange. It's either 4, 8, 16, 32, 64, ...

    I've got a few SF2 drives sitting in an array; I'll check host vs NAND writes in a few weeks.
    I'll probably try the 520 on the MacBook as well. Not sure if it's really compatible, but if it works out it will be running as a boot drive with 1 VM (the other VMs are on a different drive).

  25. #575
    the jedi master
    Join Date
    Jun 2002
    Location
    Manchester uk/Sunnyvale CA
    Posts
    3,884
    Are you guys telling me Intel does not set a warranty period on their drives?

    If that's correct, just fill your Intel test drives with incompressible data continually and watch them die LOL

    Come on guys, everyone throttles SF drives; we can set the warranty period to whatever we want with mptool, 1 day to 10000 days or more if we wanted. Obviously if I set 1 day the throttle would take forever to kick in; if I set 10000 days it would kick in MUCH faster.

    Consider the following: if I intentionally slow the write speeds of a drive, I can hide the throttling, because the throttled speed would be the same as the normal write speed.
    Got a problem with your OCZ product....?
    Have a look over here
    Tony AKA BigToe


    Tuning PCs for speed... run what's fast, not what you think is fast
