It would be more realistic for the default of ASU to use 100% incompressible data.
I think 0 fill should be illegal. You should get carted off to the gaol for using 0-fill.
I don't even run non-SF drives with 0 fill in the endurance test.
Hi guys,
Hopefully I have done this correctly — there should be 3 images attached. First off, a big thanks to Anvil for this great tool, which I now have to learn how to use... back in the queue, mate!!!
Now if someone could write an optimisation script so you could get all your drives up to speed, that would be brilliant.
Can someone help me interpret these images, e.g. the numbers they represent, and tell me how I can improve on them, please?
thanks
Henrik
A Dane Down Under
Current systems:
EVGA Classified SR-2
Lian Li PC-V2120 Black, Antec 1200 PSU,
2x X5650 (20x 190 APPROX 4.2GHZ), CPU Cooling: Noctua NH-D14
(48gb) 6x 8Gb Kingston ECC 1333 KVR1333D3D4R9S/8GI, Boot: 8R0 SAMSUNG 830 129GB ARECA 1882IX-4GB CACHE - Scratch disk: 2x6R0 INTEL 520 120GB's, 2x IBM M1015/LSI 9240-8i, Asus GTX-580
ASUS P5W64 WS PRO, QX-6700 (Extreme Quadcore) 2.66Ghz, 4x2GB HyberX, various hard drives and GT-7600
Tyan S2895 K8WE 2x 285 Opteron's 8x 2gb DDR400 1x nVidia GT-8800 2x 1 TB Samsung F1 (not very nice) Chenbro SR-107 case
Monitors: NEC 2690v2 & Dell 2405 & 2x ASUS VE246H
nice cache!! :-)
Henrik
0 Fill won't be default for the next release.
100% incompressible is not normal though, except for media files. (there will always be a mix of compressible and incompressible)
I'll probably make an option where one can disable the continuous re-generating of random data. (it will be less cpu intensive and won't matter for drives that don't do compression)
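Generating a buffer with a target compressibility can be sketched roughly like this — a hypothetical illustration only, not ASU's actual implementation: mixing random bytes with zero-fill gives a buffer whose compressed size tracks the random fraction.

```python
import os
import zlib

def make_buffer(size, incompressible_fraction):
    """Build a test buffer that is roughly `incompressible_fraction` random
    bytes and the rest zero-fill, similar in spirit to ASU's presets."""
    n_random = int(size * incompressible_fraction)
    return os.urandom(n_random) + b"\x00" * (size - n_random)

# A ~46% buffer: zlib should shrink it to roughly 46% of its original size,
# since the random portion barely compresses and the zero-fill vanishes.
buf = make_buffer(1 << 20, 0.46)
ratio = len(zlib.compress(buf)) / len(buf)
```

Skipping the per-write regeneration (as suggested above) just means reusing one such buffer instead of rebuilding it for every write.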
@tived
I'll have a look at your tests, at first glance they do look normal based on your info.
That is incorrect. Most user day-to-day data will look close to 100% incompressible to Sandforce SSDs.
OS and program installs have data that can be significantly compressed by Sandforce, but most people only install those once, so it is not a good indication of day-to-day saved data, especially with the bigger SSDs.
I just think that when benchmarking, it's ridiculous to show only zero fill. It's a matter of principle more than anything.
That's why I like ASU -- I can bench an SF drive at every compressibility level and then weight the results as I please. 47% to 67% is a far more realistic average than 0% or 100%, though 67% is pretty much incompressible to SF, I think.
I'd have to double check, but I did break out a SF2281 the other day to upgrade some FW and do a SE. I was pleased with its incompressible performance.
johnw,
It's not black and white.
e.g. loading applications will result in reading compressible files; how much writes are affected depends on what type of files one is working with.
Databases take "compression" to the extreme, as most are highly compressible. I might end up endurance testing the Force 3 I've still got using the database setting, as it would make sense for my kind of usage.
I've still got the Vertex 3's running my VM's and one of these days I'll check how they have developed.
From what I've seen WA is well below 1.0. (based on the SMART data, how that translates to real WA is of course not known)
Mushkin Chronos Deluxe 120GB, 5.02ABB0 Reference FW
AS-SSD runs attached: Zero Fill, 46%, Incompressible
-----------------------------------
I think 46% should be the ASU default. Anything other than 0-fill, though. 67% is almost incompressible to SF -- surely whatever compression it can achieve is offset by overhead from the not-so-compressible data. While my personal experience is that much of the daily writes are compressible to a degree, it's generally not enough to offset the larger incompressible writes. My workload generates an average WA of ~1.2, but my workload is hardly universal.
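The "weight the results as I please" approach mentioned above can be sketched as a simple weighted average. All numbers here are hypothetical, just to show the arithmetic; the weights encode how often each compressibility pattern is assumed to occur in a workload.

```python
# Hypothetical per-compressibility sequential-write results (MB/s) for one
# drive, and made-up workload weights that must sum to 1.0.
results = {"0-fill": 500.0, "46%": 280.0, "67%": 230.0, "100%": 210.0}
weights = {"0-fill": 0.05, "46%": 0.30, "67%": 0.35, "100%": 0.30}

# Single workload-weighted score instead of quoting only the 0-fill number.
weighted = sum(results[k] * weights[k] for k in results)
```

Shifting weight toward 46-67% (per the discussion above) penalizes drives whose headline numbers depend on highly compressible data.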
Last edited by Christopher; 04-04-2012 at 11:01 AM.
How was the 67% on the Mushkin?
I know from earlier tests (also confirmed by Vapor's tests) that there is not much difference between 46% and 100% on 2281-based drives; earlier SF controllers suffer more, and async NAND will still make a difference for all SF-based drives. (on current controllers)
Unless you want ASU to cater to Enterprise users, it would be a bad idea to base the defaults on database writes, since very few non-Enterprise users have a lot of database writes to their SSDs.
For typical non-Enterprise users, the best thing to use for a benchmark to correspond to day-to-day usage is 100% incompressible / random data. That should be the default. If you start arbitrarily choosing "randomness" of less than 100%, then your benchmark will be arbitrary and not suitable for objective comparisons. There is a reason why the SNIA tests specify random data. It is fine to have a choice, but the default should be 100% incompressible.
I didn't include the 67%, but it's just about the same as the incompressible. I swear, I never remembered this drive pulling more than 70 MB/s QD1 4K RW in CDM, but here it is hitting 111 MB/s in AS-SSD. Not too shabby.
Without a radical redesign of the SF controller/FW, I really think the next-gen SF should go back to 28% OP and should probably remove RAISE. Just having proper OP seems to really even out the sequential writes. Newer SFs have a funky waveform-like write pattern, a problem I don't think my Vertex LE 100 has.
---------------
Here is the 67%
Hmm. The last time I checked (last year) this drive was just about even at 100% and 67%, but I see now it looks closer to 47% than 100%. That could be the 5.xx series FW at work. The writes are a good bit higher than they were on 3.xx FW.
Last edited by Christopher; 04-04-2012 at 11:28 AM.
The default won't be Database, I've just not decided.
100% would be worst case for SF-based drives, and I'm not sure that's fair vs other non-compressing controllers; there is a portion of compressible data in any workload, and real-life tests show that SF drives are generally as fast as, and sometimes faster than, most drives. (up till now, that is)
So it will be in the range of 46-100%.
Where does SNIA say that random data means incompressible data?
Thanks, Anvil
Henrik
hmm, I am looking in MSM, but for some reason I can't find where to get into the properties of the controller and turn "write caching" on.
Henrik
Also, I am a bit disappointed with the performance of my boot disk with 4x Intel 520 in RAID-0, only giving me the 590s — would that be because it's on the SATA-II controller?? This one has write cache turned on in Windows; on my two arrays on the M1015 I can't enable this.
Henrik
My personal belief is that the default should be either 46 or 67.
I think some folk are getting write cache confused.
Disk cache policy on/off is for the disk drive's own cache (SSD or HDD); I've found it's best to turn this off in RAID-0 arrays.
You can find this in MSM under 'Logical', then the array you want to change the cache policy on. (right click on it)
A reboot is needed for this to take effect.
This is the only cache policy available on the IBM M1015 !!
For any single drives on the M1015 you can change this in Device Manager (windoze)
With cached controllers you of course get:
Read Policy, Read ahead on/off
Write policy, Write through, Write back and Write back with BBU
I/O policy, direct I/O or cached I/O
There are lots of applications that generate random test data at the application-level, this kind of data is normal data used during application testing.
Neat if one can't export/import current systems.
If you have a look at SNIA's specs they are testing with other patterns and are also debating "how random is random enough".
"3.6 Data Patterns
All tests shall be run with a random data pattern. The Test Operator may execute additional runs with non-random data patterns. If non-random data patterns are used, the Test Operator must report the data pattern.
Note: Some SSS devices look for and optimize certain data patterns in the data payloads written to the device. It is not feasible to test for all possible kinds of optimizations, which are vendor specific and often market segment specific. The SSS TWG is still trying to characterize 'how random is random enough' with respect to data patterns."
I have read the SNIA SSS documents, no need to quote them to me, unless you have a point you are trying to make. I'm not sure what your point is here.
I think it is clear what is being referred to in the passage you quoted. If the data stream consists of repeated blocks of the same "random" data, then how large a block size is necessary in order to fool all SSS devices into thinking it is a continuous stream of truly random data? The answer obviously depends on the compression and de-duplication algorithms used by various SSS devices, so it is difficult for SNIA to come up with a universally applicable definition of a sufficiently random data stream. Nevertheless, it is obvious that if the data stream can be compressed significantly by a specific SSS device, then the data stream is not "random enough" to be used for the mandatory random data stream.
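The point about repeated blocks can be illustrated with an ordinary compressor standing in for a device's (unknown, vendor-specific) algorithm: a stream built by repeating one "random" block compresses dramatically, while fresh random data does not, so the repeated stream is not "random enough". This uses zlib purely as a stand-in; real SSS devices may use very different compression or de-duplication.

```python
import os
import zlib

block = os.urandom(4096)
repeated = block * 256           # 1 MiB stream: one "random" 4 KiB block repeated
fresh = os.urandom(4096 * 256)   # 1 MiB of continuously fresh random data

# zlib's 32 KiB window easily spans the 4 KiB repeat distance, so the
# repeated stream collapses; the fresh stream stays near its original size.
r_repeated = len(zlib.compress(repeated)) / len(repeated)
r_fresh = len(zlib.compress(fresh)) / len(fresh)
```

A larger repeat block defeats zlib once it exceeds the 32 KiB window, which is exactly the "how large a block is enough" question — except each device's window and algorithm are unknown.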
Last edited by johnw; 04-10-2012 at 11:01 AM.