
Thread: Anvil's Storage Utilities

  1. #201
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    It would be more realistic for the default of ASU to use 100% incompressible data.

  2. #202
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    I think 0 fill should be illegal. You should get carted off to the gaol for using 0-fill.

    I don't even run non-SF drives with 0 fill in the endurance test.

  3. #203
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252

    My initial setup with Intel 520's and some Seagate SV-35

    [Attachment: IBM ServeRAID M1015 SCSI Disk Device_15991GB_1GB-20120404-1507_8xR0_Seagate_SV-35_2TB_7200rpm.png]
    [Attachment: IBM ServeRAID M1015 SCSI Disk Device_951GB_1GB-20120404-1503_8xR0_Intel_520_120GB.png]
    [Attachment: Bootdisk_480GB_1GB-20120404-1500_4xRaid-0_Intel 520_120gb.png]

    Hi guys,

    Hopefully I have done this correctly; there should be 3 images attached. First off, a big thank you to Anvil for this great tool, which I now have to learn how to use... back in the queue, mate!!!

    Now if someone could write an optimisation script so that you could get all your drives up to speed, that would be brilliant.

    Can someone help me interpret these images, e.g. the numbers they represent, and tell me how I can improve on them, please?

    thanks

    Henrik
    Henrik
    A Dane Down Under

    Current systems:
    EVGA Classified SR-2
    Lian Li PC-V2120 Black, Antec 1200 PSU,
    2x X5650 (20x 190 APPROX 4.2GHZ), CPU Cooling: Noctua NH-D14
    (48gb) 6x 8Gb Kingston ECC 1333 KVR1333D3D4R9S/8GI, Boot: 8R0 SAMSUNG 830 129GB ARECA 1882IX-4GB CACHE - Scratch disk: 2x6R0 INTEL 520 120GB's, 2x IBM M1015/LSI 9240-8i, Asus GTX-580

    ASUS P5W64 WS PRO, QX-6700 (Extreme Quadcore) 2.66Ghz, 4x2GB HyperX, various hard drives and GT-7600

    Tyan S2895 K8WE 2x 285 Opteron's 8x 2gb DDR400 1x nVidia GT-8800 2x 1 TB Samsung F1 (not very nice) Chenbro SR-107 case

    Monitors: NEC 2690v2 & Dell 2405 & 2x ASUS VE246H

  4. #204
    Xtreme Enthusiast
    Join Date
    Dec 2004
    Location
    Switzerland
    Posts
    546
    Here's a result from my new workstation:



    www.ocaholic.ch


  5. #205
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252
    nice cache!! :-)

  6. #206
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by johnw View Post
    It would be more realistic for the default of ASU to use 100% incompressible data.
    0 Fill won't be default for the next release.

    100% incompressible is not normal though, except for media files. (there will always be a mix of compressible and incompressible)

    I'll probably make an option where one can disable the continuous re-generating of random data. (it will be less cpu intensive and won't matter for drives that don't do compression)
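    If you're wondering why the re-generating matters, here's a rough Python sketch of the trade-off (not ASU's actual code; buffer size and write count are made up):

        # Hypothetical sketch: regenerating random data for every write
        # vs. reusing one pre-generated buffer. Not ASU's actual code.
        import os, time

        CHUNK = 1 << 20              # 1 MiB per write (made-up size)
        WRITES = 256

        # Fresh random data per write: defeats controller-side
        # compression/dedup, but burns CPU.
        t0 = time.perf_counter()
        for _ in range(WRITES):
            buf = os.urandom(CHUNK)
        t_regen = time.perf_counter() - t0

        # Generate once, reuse: nearly free on the CPU. Fine for drives
        # that don't compress; a SandForce-style drive may dedup it.
        buf = os.urandom(CHUNK)
        t0 = time.perf_counter()
        for _ in range(WRITES):
            _ = buf                  # would be handed to the write call as-is
        t_reuse = time.perf_counter() - t0

        print(f"regenerate: {t_regen:.3f}s  reuse: {t_reuse:.6f}s")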

  7. #207
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @tived

    I'll have a look at your tests, at first glance they do look normal based on your info.

  8. #208
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    100% incompressible is not normal though, except for media files. (there will always be a mix of compressible and incompressible)
    That is incorrect. Most user day-to-day data will look close to 100% incompressible to Sandforce SSDs.

    OS and program installs have data that can be significantly compressed by Sandforce, but most people only install those once, so it is not a good indication of day-to-day saved data, especially with the bigger SSDs.

  9. #209
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    I just think that when benchmarking, it's ridiculous to show only zero fill. It's the principle more than anything.

    That's why I like ASU -- I can bench an SF drive at every compressibility level and then weight the results as I please. 46% to 67% is a far more realistic average than 0%/100%, but 67% is pretty much incompressible to SF, I think.

    I'd have to double-check, but I did break out an SF2281 the other day to upgrade some FW and do a SE. I was pleased with its incompressible performance.
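    If anyone is curious how a level like 46% can even be produced, here's a rough Python sketch of one way to do it -- purely a guess, I have no idea how Anvil actually builds his buffers:

        # Hypothetical sketch: build a buffer that is roughly 46% compressible
        # by mixing runs of zeros with random bytes, then sanity-check the
        # ratio with zlib. ASU's real data model may be entirely different.
        import os, zlib

        def make_buffer(size, compressible=0.46, block=4096):
            out = bytearray()
            while len(out) < size:
                z = int(block * compressible)          # compressible part
                out += b"\x00" * z + os.urandom(block - z)
            return bytes(out[:size])

        buf = make_buffer(1 << 20)                     # 1 MiB test buffer
        ratio = 1 - len(zlib.compress(buf)) / len(buf)
        print(f"zlib sees ~{ratio:.0%} compressible")  # roughly the target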

  10. #210
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    johnw,

    It's not black and white.

    E.g. loading applications will result in reading compressible files; how much writes are affected depends on what type of files one is working with.
    Databases take "compression" to the extreme, as most are highly compressible. I might end up endurance testing the Force 3 I've still got using the database setting, as it would make sense for my kind of usage.

    I've still got the Vertex 3's running my VMs, and one of these days I'll check how they have developed.
    From what I've seen, WA is well below 1.0. (based on the SMART data; how that translates to real WA is of course not known)
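    For what it's worth, the arithmetic behind that estimate is just a ratio of two SMART counters. A sketch -- every number below is a placeholder, and the attribute IDs vary by vendor, so check your drive:

        # Hypothetical sketch: estimated write amplification from SMART.
        # Example numbers only; read the real values from your drive's
        # host-writes and NAND-writes attributes (IDs are vendor specific).
        host_writes_gib = 5120    # writes from host, per SMART (example)
        nand_writes_gib = 4350    # writes to NAND, per SMART (example)

        wa = nand_writes_gib / host_writes_gib
        print(f"estimated WA: {wa:.2f}")   # < 1.0 means compression is winning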

  11. #211
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Mushkin Chronos Deluxe 120GB 5.02ABB0 Reference FW


    Zero Fill
    [Attachment: Chronos 502a ASU zerofill.PNG]

    46%
    [Attachment: Chronos 502a ASU 46.png]

    Incompressible
    [Attachment: Chronos 502a ASU 0.png]


    AS-SSD
    [Attachment: Chronos 502a asssd after SE 2.PNG]

    -----------------------------------
    I think 46% should be the ASU default -- anything other than 0-fill, though. 67% is almost incompressible to SF; surely whatever compression it can effect is offset by overhead from the not-so-compressible data. While my personal experience is that much of the daily write traffic is compressible to a degree, it's generally not enough to offset the larger incompressible writes. My workload generates an average WA of ~1.2, but my workload is hardly universal.
    Last edited by Christopher; 04-04-2012 at 11:01 AM.

  12. #212
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    How was the 67% on the Mushkin?

    I know from earlier tests (also confirmed by Vapor's tests) that there is not much difference from 46% to 100% on 2281-based drives; earlier SF controllers will suffer more, and async NAND will still make a difference for all SF-based drives. (on current controllers)

  13. #213
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    Databases takes "compression" to the extreme as most are highly compressible, I might end up Endurance testing the Force 3 I've still got using the database setting as it would make sense for my kind of usage.
    Unless you want ASU to cater to Enterprise users, it would be a bad idea to base the defaults on database writes, since very few non-Enterprise users have a lot of database writes to their SSDs.

    For typical non-Enterprise users, the best thing to use for a benchmark to correspond to day-to-day usage is 100% incompressible / random data. That should be the default. If you start arbitrarily choosing "randomness" of less than 100%, then your benchmark will be arbitrary and not suitable for objective comparisons. There is a reason why the SNIA tests specify random data. It is fine to have a choice, but the default should be 100% incompressible.

  14. #214
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    I didn't include the 67%, but it's just about the same as the incompressible. I swear, I never remembered this drive pulling more than 70 MB/s QD1 4K random write in CDM, but here it is hitting 111 MB/s in AS-SSD. Not too shabby.

    Without a radical redesign of the SF controller/FW, I really think the next-gen SF should go back to 28% OP and should probably remove RAISE. Just having proper OP seems to really even out the sequential writes. Newer SFs have a funky waveform-like write pattern, a problem I don't think my Vertex LE 100 has.
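    (For anyone wondering where the 28% figure comes from: it's the loose marketing convention of comparing the raw NAND capacity number with the user capacity number directly. A quick sanity check in Python, with the classic SandForce tiers as example numbers:)

        # Hypothetical sketch of the usual OP arithmetic. The convention
        # loosely treats GiB of raw NAND and GB of user capacity as the
        # same unit; the tiers below are the well-known examples.
        def op_percent(raw_nand, user_capacity):
            return (raw_nand - user_capacity) / user_capacity * 100

        print(op_percent(128, 120))   # ~6.7% -> marketed as "7% OP"
        print(op_percent(128, 100))   # 28.0% -> the "28% OP" tier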

    ---------------
    Here is the 67%
    [Attachment: Chronos 502a ASU 67.png]

    Hmm. The last time I checked (last year) this drive was just about even at 100% and 67%, but I see now it looks closer to 46% than 100%. That could be the 5.xx series FW at work. The writes are a good bit higher than they were on the 3.xx FW.
    Last edited by Christopher; 04-04-2012 at 11:28 AM.

  15. #215
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by johnw View Post
    Unless you want ASU to cater to Enterprise users, it would be a bad idea to base the defaults on database writes, since very few non-Enterprise users have a lot of database writes to their SSDs.

    For typical non-Enterprise users, the best thing to use for a benchmark to correspond to day-to-day usage is 100% incompressible / random data. That should be the default. If you start arbitrarily choosing "randomness" of less than 100%, then your benchmark will be arbitrary and not suitable for objective comparisons. There is a reason why the SNIA tests specify random data. It is fine to have a choice, but the default should be 100% incompressible.
    The default won't be Database; I've just not decided.

    100% would be the worst case for SF-based drives, and I'm not sure that it's fair vs other non-compressing controllers; there is a portion of compressible data in any workload, and real-life tests show that SF drives are generally as fast as, and sometimes faster than, most drives. (up till now, that is)

    So it will be in the range of 46-100%.

    Where does SNIA say that random data means incompressible data?

  16. #216
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    Where does SNIA say that random data means incompressible data?
    Random data is incompressible. SNIA does not need to say it. It is a basic fact of information theory.
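    If anyone doubts it, here is a two-line demonstration with a general-purpose compressor (zlib via Python; any compressor will do):

        # Random bytes don't compress; constant bytes collapse to almost nothing.
        import os, zlib

        random_buf = os.urandom(1 << 20)          # 1 MiB of random data
        zero_buf = bytes(1 << 20)                 # 1 MiB of zeros

        print(len(zlib.compress(random_buf)))     # ~1 MiB, slightly LARGER than input
        print(len(zlib.compress(zero_buf)))       # ~1 KiB, essentially gone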

  17. #217
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252
    Thanks, Anvil

    Henrik

  18. #218
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    231
    Quote Originally Posted by tived View Post
    Can someone help me interpret these images, e.g. the numbers they represent, and tell me how I can improve on them, please?
    You don't have write caching turned on; that's why your score is so low. You should be in the 9000-point range.

  19. #219
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252
    Hmm, I am looking in MSM, but for some reason I cannot find where to get into the controller properties and turn "write caching" on.

    Henrik

  20. #220
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252
    Also, I am a bit disappointed with the performance of my boot disk with 4x Intel 520 in RAID-0, only giving me 590s. Would that be because it's on the SATA-II controller? That one has write cache turned on in Windows; on my two arrays on the M1015 I can't enable it.

    Henrik

  21. #221
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    My personal belief is that the default should be either 46% or 67%.

  22. #222
    Xtreme Member
    Join Date
    Nov 2011
    Posts
    124
    I think some folk are getting write cache confused.

    Disk cache policy on/off is for the disk drive's own cache (SSD or HDD); I found it's best to turn this off in RAID-0 arrays.
    You can find this in MSM under 'Logical', then the array you want to change the cache policy on. (right-click on it)
    A reboot is needed for this to take effect.
    This is the only cache policy available on the IBM M1015!!
    For any single drives on the M1015 you can change this in Device Manager. (windoze)

    With cached controllers you of course get:
    Read policy: read ahead on/off
    Write policy: write through, write back, and write back with BBU
    I/O policy: direct I/O or cached I/O

  23. #223
    Registered User
    Join Date
    Jul 2008
    Location
    Naples
    Posts
    22
    Plextor M3 Pro, 2x 128GB in RAID-0





  24. #224
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by johnw View Post
    Random data is incompressible. SNIA does not need to say it. It is a basic fact of information theory.
    There are lots of applications that generate random test data at the application level; this kind of data is normal data used during application testing. Neat if one can't export/import data from current systems.

    If you have a look at SNIA's specs they are testing with other patterns and are also debating "how random is random enough".

    3.6 Data Patterns
    All tests shall be run with a random data pattern. The Test Operator may execute additional
    runs with non-random data patterns. If non-random data patterns are used, the Test Operator
    must report the data pattern.
    Note: Some SSS devices look for and optimize certain data patterns in the data payloads
    written to the device. It is not feasible to test for all possible kinds of optimizations, which are
    vendor specific and often market segment specific. The SSS TWG is still trying to characterize
    “how random is random enough” with respect to data patterns.

  25. #225
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Anvil View Post
    If you have a look at SNIA's specs they are testing with other patterns and are also debating "how random is random enough".
    I have read the SNIA SSS documents; there's no need to quote them to me unless you have a point you are trying to make, and I'm not sure what your point is here.

    I think it is clear what is being referred to in the passage you quoted. If the data stream consists of repeated blocks of the same "random" data, then how large a block size is necessary in order to fool all SSS devices into thinking it is a continuous stream of truly random data? The answer obviously depends on the compression and de-duplication algorithms used by various SSS devices, so it is difficult for SNIA to come up with a universally applicable definition of a sufficiently random data stream. Nevertheless, it is obvious that if the data stream can be compressed significantly by a specific SSS device, then the data stream is not "random enough" to be used for the mandatory random data stream.
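    To make that concrete, here is a sketch (block size and repeat count invented for the example):

        # A "random" stream built by repeating one random block is trivially
        # compressible once the compressor's window spans more than one block.
        import os, zlib

        block = os.urandom(4096)      # one 4 KiB block of true random data
        stream = block * 256          # 1 MiB stream: random-looking, but repetitive

        print(len(zlib.compress(stream)))                    # a small fraction of 1 MiB
        print(len(zlib.compress(os.urandom(len(stream)))))   # ~1 MiB: truly random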
    Last edited by johnw; 04-10-2012 at 11:01 AM.

