So does the Intel 520 suffer from SF throttling?
The Intel 520 does not seem to suffer from Lifetime throttling.
Day 6
Attachment 124479
@canthearu Tests to try and find out how SF compresses data have so far been based on writing significant volumes of data with various levels of compressibility. What I try to do is look at how SF performs with a “normal” workload that excludes things like installations. The bottom line, at least in my case, is that it does not perform well at all. Perhaps there is something in my configuration that is skewing results. If you want to compare, just use your SSD as normal, but try to avoid installations, which, as can be seen above, skew results.
By the way I don’t use a page file. I will see if that makes a difference later.
Day 7.
It’s impossible to know if the data was being compressed (outside of the install on day 6). Even if it was, any benefit from compression was blown out of the water by WA. This confirms my suspicion that SF drives are about the worst you can use for a client-based workload. They are like an F1 race car: theoretically really fast on the race track, but put them on a country lane and things don’t look so good. As SSD performance is underutilised in a client environment, speed differences between different brands of SSD are never going to be noticed anyway, except perhaps in very rare instances. My 2c.
Attachment 124556
I have also noticed recently that WA was close to 1 for all writes in the last 4 months on my SSD. In client scenarios I believe it behaves better only when the drive is near full. In this situation, 10% of space saved through compression translates into 10% extra over-provisioning and better performance. The difference due to compression should be easily observed in full-span 4K writes with highly compressible data (a pure database scenario), however that is a server load.
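To put a rough number on that, here is a back-of-the-envelope sketch in Python. The 120 GiB drive and the fill levels are hypothetical figures for illustration, not measurements:
[CODE]
# Back-of-the-envelope: how compression savings behave like extra
# over-provisioning on a nearly full drive. Figures are illustrative only.
def effective_spare(raw_capacity_gib, user_data_gib, compression_ratio):
    """compression_ratio = physical bytes stored / logical bytes written."""
    physical_used = user_data_gib * compression_ratio
    return (raw_capacity_gib - physical_used) / raw_capacity_gib

print(effective_spare(120, 110, 1.0))   # ~0.08 -> ~8% spare area with no compression
print(effective_spare(120, 110, 0.9))   # ~0.18 -> ~18% spare if data compresses by 10%
[/CODE]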
I disagree on more than just a few levels.
My desktop SandForce drive is 2 years old and used in a standard desktop. Its stats indicate:
11227 Power on hours.
3456 GiB NAND writes
4032 GiB LBA writes
12736 GiB LBA reads
Many, many of my Power on Hours have been low intensity. Yet still I'm definitely running below 1 write amplification, and just about all the sandforce drives I've seen are running below 1 write amplification.
Furthermore, have you even checked the same workload on any other SSDs, like the m4 or a Barefoot-controller drive? I know that SSDs based on the Barefoot controller regularly burn out their NAND in 2 years on a moderate desktop load due to their write amplification.
My HTPC SandForce 2 drive has accumulated 1529 hours and 469 GiB of NAND writes, mostly because I've been a bit mean to it (done a hundred gig or so of endurance testing and plenty of benchmarks). It too is tracking well below your 55 gig per 80 hours figure.
There is something about your particular workload that is causing a poor write amplification ... and I bet if you were able to perform this test on other SSDs, you would see the same or worse figures.
Low-intensity writes are also impacted by page size. Most first-generation SandForce drives have 4K pages, while in the second generation most of them have 8K pages. People usually use 4K clusters in Windows, and this would explain high WA at idle. This should also impact other SSDs.
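A quick way to see why the page size matters. This is my own worked example, assuming isolated 4K writes that the controller cannot combine with anything else:
[CODE]
# Floor on WA when a host write is smaller than the NAND page and
# cannot be combined with other writes. Illustrative only.
import math

def min_wa(host_write_kib, page_kib):
    pages_touched = math.ceil(host_write_kib / page_kib)
    return (pages_touched * page_kib) / host_write_kib

print(min_wa(4, 4))   # 1.0 -- 4K cluster on a 4K-page (first-gen) drive
print(min_wa(4, 8))   # 2.0 -- 4K cluster on an 8K-page (second-gen) drive
[/CODE]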
If you get a WA of just under 1 then either compression is not working well or WA is high enough to offset the benefit of compression. If WA is coming out around 0.86 you are not getting anything close to the benefit of 0-fill read/write speeds, because either the data can’t be compressed or WA is adding to the workload and slowing things down. 50% compression combined with a WA of 1.1 (I use the WA that Intel quote they can achieve) would give roughly 0.55, not the ~0.86 being observed.
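To put numbers on that, here is my reading of the arithmetic, treating the NAND/LBA ratio as the observed WA. The formula and the example values are my interpretation, not anything SandForce or Intel publish:
[CODE]
# Observed NAND/LBA ratio ~= compression ratio x internal WA (my own model).
# Values below are examples only.
def observed_ratio(compression_ratio, internal_wa):
    return compression_ratio * internal_wa

print(observed_ratio(0.5, 1.1))    # 0.55 -- what 50% compression + the quoted 1.1 WA would give
print(observed_ratio(0.78, 1.1))   # ~0.86 -- only ~22% compression at the quoted WA
print(observed_ratio(0.5, 1.72))   # ~0.86 -- or 50% compression but a much higher internal WA
[/CODE]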
There is undoubtedly a trade-off between compression and WA, which will of course fluctuate depending on workload.
SF drives are the only drives that report LBA vs NAND writes, so I can’t compare with other SSDs. I can only report and draw conclusions from what I can observe with my setup, but of course YMMV.
Remember though that I have excluded the huge difference that a full installation would have made (i.e. a 50% reduction in NAND vs LBA writes). A full installation (OS & apps) would significantly change the WA factor over a short duration because it represents a large volume of data that can be compressed by 50%. To offset that I avoided copying large data files that I know can’t be compressed (like MP3s, AVIs etc.).
If you get the chance it would be great to compare your stats on a daily basis; however, I think it is fair to say this:
• SF drives were designed for enterprise workloads and the compression algorithms are optimised for enterprise not client workloads
• If you can’t compress data by at least 46% both read and write speeds drop well below other SSDs
• SF drives can’t compress to anything like the theoretical compressibility of data unless you are in the 0 fill to 8% fill range. If you are not in that range write speeds drop significantly
On the plus side SF drives can compress OS/ App installations by 50% but what benefit does the end user get from that?
Edit:
BTW if you are benchmarking using 0-fill you will be seriously skewing your results. 0-fill = lots of LBA writes and hardly any NAND writes. The same applies if you have been using ASU with anything less than 100% incompressible data.
There are still a lot of unknowns. :( Can SF write-combine with no [onboard] cache? If it can, does it need a high QD? Can SF only compress xfers above a certain size? How random does the data have to be? What is the ratio between compressibility of data and WA?
@ Anvil, a special version of ASU that could drip feed xfers at specific sizes might help to isolate exactly how the SF controller works.
A wild guess is that it can write-combine, as it can easily use the SRAM allocated for archiving as a buffer, something like "if size < 32K and not timeout, then wait, else archive and flush". The controller does not use external memory, but any quantity of internal memory could be used as a buffer. Also... strange things with second-generation SandForce: my workload is over 50% database related, and this translates to about 150GB of writes in a "job" that runs for a few hours. During the job run I see WA of around 0.2-0.3. However, at idle (the complete Christmas holiday) I saw WA of over 1.5.
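Purely to illustrate that guess, here is a toy model of the buffering rule in Python. The 32K threshold, the timeout and all the names are my assumptions, not anything SandForce have documented:
[CODE]
# Toy model of the guessed rule: "if size < 32K and not timeout, then wait,
# else archive and flush". All thresholds and names are assumptions.
import time

COMBINE_LIMIT = 32 * 1024     # assumed combine threshold in bytes
TIMEOUT_S = 0.01              # assumed flush deadline

class WriteCombiner:
    def __init__(self, flush):
        self.flush = flush            # callback that would compress + commit to NAND
        self.pending = bytearray()
        self.first_arrival = None

    def write(self, data: bytes):
        if self.first_arrival is None:
            self.first_arrival = time.monotonic()
        self.pending += data
        timed_out = (time.monotonic() - self.first_arrival) >= TIMEOUT_S
        if len(self.pending) >= COMBINE_LIMIT or timed_out:
            self.flush(bytes(self.pending))   # "archive and flush"
            self.pending.clear()
            self.first_arrival = None

# Example: small 4K writes are held back until 32K has accumulated
# (or the timeout expires), then flushed as one chunk.
wc = WriteCombiner(lambda blob: print(f"flushed {len(blob)} bytes"))
for _ in range(8):
    wc.write(b"\x00" * 4096)
[/CODE]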
What xfer sizes are typically being generated by your database? What is the avg QD? What is the split between random and sequential access? This would really help to understand a workload that the SF controller is able to manage well. :up:
Which proves nothing with respect to how a Sandforce SSD will behave with typical client loads. Benchmarks are one of the worst things you can run on an SSD that is supposed to be simulating typical client loads, since the benchmarks write a LOT of data (often overwhelming the amount of "typical" data written), and the data written by the benchmarks is usually far from "typical".
But it is possible to estimate WA on some other SSDs. Many SSDs include an erase block count (sometimes called wear leveling count) which can be used to estimate flash writes. Divide estimated flash writes by the host writes (either as reported by the SSD SMART attributes, or from some sort of IO monitoring program), and you have an estimate of WA. Not perfect, but may be good enough to compare with SF under a load that causes high WA with SF.
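A minimal sketch of that calculation. The capacity, erase-block size and numbers are examples only; check what your drive's SMART attributes actually report:
[CODE]
# Estimate WA from the change in average erase count, as described above.
# The erase-block size and drive capacity below are examples only.
def estimate_wa(erases, block_size_mib, host_writes_gib):
    flash_writes_gib = erases * block_size_mib / 1024
    return flash_writes_gib / host_writes_gib

# e.g. a 256 GiB drive with 4 MiB erase blocks = 65536 blocks; the average
# erase count rose by 20 while the host wrote 4000 GiB:
blocks = 256 * 1024 // 4
print(estimate_wa(20 * blocks, 4, 4000))   # ~1.3
[/CODE]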
I will image the 530 and put it on the 830 to see what happens. Should be an interesting comparison. :D
I have only monitored the load with HDD Sentinel. The reported load of the SSD is ~5-7% while sustained speed is 3.5-4MB/s. The max theoretical QD for my load cannot be higher than 8, but I believe the average is somewhere between 2 and 4. Block size is 16K and as for the sequential/random distribution, I have no idea. However, for small block sizes I believe there is no difference between sequential and random on a SandForce-based SSD, as there is no external buffer.
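For what it's worth, a quick sanity check on those figures, just arithmetic on the numbers quoted above:
[CODE]
# Sustained throughput divided by the 16K block size gives the rough IOPS
# the controller is seeing. Uses the figures reported above.
for mb_per_s in (3.5, 4.0):
    iops = mb_per_s * 1024 / 16          # 16 KiB per IO
    print(f"{mb_per_s} MB/s at 16K -> ~{iops:.0f} IOPS")
# ~224-256 IOPS -- a light load, consistent with the reported 5-7% utilisation.
[/CODE]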
I really wonder if SF 3 shouldn't have external cache... I'm thinking it could certainly use it. Despite protestations to the contrary, I think you really have to know your workload before deciding whether SF is right for you.
I'm still of the opinion that the 2281 needs OP like the first 1200s, but so far I've not seen a reduction in the non-FOB performance degradation even when OP'd.
This method will not have the same level of granularity, as updates only occur at 60GB increments of NAND writes. I'll get the exact LBA count the next time the PE count increases and then I can start from there.
Attachment 124573
Hmmm, I wonder if you should completely fill the SSD 100% with data and then delete it (and repeat the fill/delete to get the OP), and then restart the test.
I am concerned that there could be some blocks on the SSD that have never been written to before (or since the last secure erase), and if so the SSD will be able to write to some pages without doing a block erase, which would result in an inaccurate estimate of flash writes from block erase count.
Neither the 530 nor the 830 were SE’d. Both were formatted before installing the OS & apps. I have reset the start point to a change in the PE count. Both drives are now in a similar test condition.
I’m more interested in the 530 though. I would like to establish the best work load to minimise WA. 12K xfers at QD4 seem a good place to start.
Attachment 124579
Most databases use an 8KB-64KB block size; using 12K would be quite strange, it's usually 4, 8, 16, 32, 64, ...
I've got a few SF2 drives sitting in an array, will check Host vs NAND writes in a few weeks.
I'll probably try the 520 on the MacBook as well. Not sure if it's really compatible, but if it works out it will be running as a boot drive with 1 VM (the other VMs are on a different drive).
Are you guys telling me Intel does not set a warranty period on their drives?
If that's correct, just fill your Intel test drives with incompressible data continually and watch them die LOL
Come on guys, everyone throttles SF drives; we can set a warranty period to whatever we want with mptool, 1 day to 10000 days or more if we wanted. Obviously if I set 1 day the throttle would take forever to kick in; if I set 10000 days it would kick in MUCH faster.
Consider the following: if I intentionally slow the write speeds of a drive, I can hide the throttle speed, because the throttle speed would be the same as the normal write speed.