
Thread: Anvil's Storage Utilities

  1. #351
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by One_Hertz View Post
    Anvil - any luck changing the read tests to test actual data that is on the SSD instead of the test file?
    Yepp, it's been working for quite some time.
    What is left is making it presentable.

    I'll PM you when it's ready for testing.


    @B Gates

I've updated my V4s but haven't had time to play just yet; I've been close to that result, though.

  2. #352
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Excellent! Now that is the first large advantage over IOMeter

  3. #353
    I am Xtreme
    Join Date
    Jan 2006
    Location
    Australia! :)
    Posts
    6,096
    could someone show me how I can use ASU to do something along the lines of what I wanted to do in this thread:

    http://www.xtremesystems.org/forums/...ing&highlight=

    read: I want to thrash a HDD non-stop, 24x7 for about a week straight. Basically fill the HDD up, delete, rinse & repeat so-to-speak!

    DNA = Design Not Accident
    DNA = Darwin Not Accurate

    heatware / ebay
    HARDWARE I only own Xeons, Extreme Editions & Lian Li's
    https://prism-break.org/

  4. #354
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    Quote Originally Posted by tiro_uspsss View Post
    could someone show me how I can use ASU to do something along the lines of what I wanted to do in this thread:

    http://www.xtremesystems.org/forums/...ing&highlight=

    read: I want to thrash a HDD non-stop, 24x7 for about a week straight. Basically fill the HDD up, delete, rinse & repeat so-to-speak!

Just use the Endurance test in ASU. Go into Settings, check 'keep running totals', and change the 4K write duration to 0 ms. You may need to experiment with adding time per 1,000 files deleted. Lastly, you don't need the 10-second break between loops on a mechanical HDD either.
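    For reference, the gist of such a fill/delete cycle could also be scripted directly; a minimal sketch, just an illustration and not ASU's actual code (the target folder and sizes are placeholders):

```python
# Minimal fill/delete thrash loop -- an illustration of the idea only, not
# ASU's actual implementation.  Fills a target folder with files of random
# data until the volume is nearly full, deletes them all, and repeats.
import os
import shutil
import time

TARGET_DIR = r"D:\thrash"        # placeholder: folder on the drive under test
FILE_SIZE  = 16 * 1024 * 1024    # 16 MiB per file
MIN_FREE   = 2 * 1024 ** 3       # stop filling when < ~2 GiB remains free
DURATION   = 7 * 24 * 3600       # keep looping for roughly a week

os.makedirs(TARGET_DIR, exist_ok=True)
deadline = time.time() + DURATION
loop = 0

while time.time() < deadline:
    # Fill phase: write files until the volume is nearly full.
    written = []
    while shutil.disk_usage(TARGET_DIR).free > MIN_FREE:
        path = os.path.join(TARGET_DIR, f"fill_{loop}_{len(written)}.bin")
        with open(path, "wb") as f:
            f.write(os.urandom(FILE_SIZE))
        written.append(path)

    # Delete phase: remove everything, then rinse and repeat.
    for path in written:
        os.remove(path)

    loop += 1
    print(f"loop {loop}: {len(written)} files written and deleted")
```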

  5. #355
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by tiro_uspsss View Post
    could someone show me how I can use ASU to do something along the lines of what I wanted to do in this thread:

    http://www.xtremesystems.org/forums/...ing&highlight=

    read: I want to thrash a HDD non-stop, 24x7 for about a week straight. Basically fill the HDD up, delete, rinse & repeat so-to-speak!

Stress testing HDDs like this is not the best idea, tbh... A prominent failure point is spin-up/spin-down, when the heads have to unpark/park and the drive has to run a bunch of its firmware initialization procedures. Also, if you are just using it 24/7 its temperature will be static; changes in temperature cause a lot of internal changes in the drive (it automatically adjusts some firmware parameters based on temperature) and can be a cause of failure as well. Drives also run internal clean-up procedures (mostly to look for and reallocate weak/bad sectors) when given time to idle, so never letting them idle isn't great either. Plain reading/writing is a part of it, of course, but I question how useful it is for what you are trying to do. It is more or less impossible to predict failure accurately... this is why backup systems and RAIDs exist.
    Last edited by One_Hertz; 07-22-2012 at 06:30 AM.

  6. #356
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
When running the ASU test, the pause between loops is needed (and then some).

    I ran a few tests on a 2TB drive (a bad drive starting to accumulate bad sectors) and it is painfully slow; with 500,000 files per loop it takes hours (I'll check) just deleting the files.

    It might be because all the files are in a single folder, which is extremely slow. A few days of writing did not help accumulate more bad sectors, so it looks like I'll have to let it die the natural way or try to get it replaced.
    (It's a secondary backup drive and I've already cleaned it off.)
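    For what it's worth, a common workaround for a huge single directory is to spread the files over many smaller subfolders; a minimal sketch of that idea (not how ASU lays out its files; the path and counts are made up):

```python
# Sketch of spreading files over many small subfolders so no single
# directory ever holds hundreds of thousands of entries -- a workaround
# idea, not ASU's actual file layout.  Paths and counts are placeholders.
import os
import shutil

BASE_DIR      = r"E:\thrash"   # placeholder target folder
FILES_PER_DIR = 5000           # keep each directory small
TOTAL_FILES   = 500_000

for i in range(TOTAL_FILES):
    bucket = os.path.join(BASE_DIR, f"bucket_{i // FILES_PER_DIR:04d}")
    os.makedirs(bucket, exist_ok=True)
    with open(os.path.join(bucket, f"file_{i}.bin"), "wb") as f:
        f.write(b"\0" * 4096)          # 4 KiB placeholder payload

# Deleting bucket by bucket is far cheaper than enumerating one
# directory with 500,000 entries in it.
shutil.rmtree(BASE_DIR)
```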

Temperature during ASU testing was more or less fixed in the low 40s °C (42-43).

  7. #357
    Xtreme Member
    Join Date
    Jul 2008
    Location
    In the space between...
    Posts
    345
    Quote Originally Posted by tiro_uspsss View Post
    could someone show me how I can use ASU to do something along the lines of what I wanted to do in this thread:

    http://www.xtremesystems.org/forums/...ing&highlight=

    read: I want to thrash a HDD non-stop, 24x7 for about a week straight. Basically fill the HDD up, delete, rinse & repeat so-to-speak!

The most practical purpose of putting a drive through 'hell week', imo, would be to check initial drive quality before committing a new HDD to a RAID-0 array (or any array, I guess). Most drives fail either in the first few months or after ~3 years or more, from a story I read (no, I can't remember where... lol). I've always made sure to 'break in' my HDDs for a couple of months before putting them in RAID... never tried the 'hammer test', so to speak.
    'Best Bang For The Buck' Build - CM Storm Sniper - CM V8 GTS HSF
    2500K @ 4.5GHz 24/7 - Asus P8Z68-V Pro Gen3 - GSkill 2x4GB DDR3-2400 C10
    Sapphire Vapor-X 7770 OC Edition - PC Power & Cooling Silencer MkIII 600W
    Boot: 2x 64GB SuperSSpeed S301 SLC Raid 0 Work: Intel 520 120GB
    Storage: Crucial M500 1TB - Ocz Vertex 4 128GB - 4x 50GB Ocz Vertex 2
    HDDs: 2 x 1TB WD RE4 Raid0 - Ext.Backup: 2 x 1.5TB WD Blacks Raid 1

  8. #358
    SSDabuser
    Join Date
    Aug 2011
    Location
    The Rocket City
    Posts
    1,434
    I've never tried running ASU endurance on a HDD, and I don't much see the point.

  9. #359
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    231
HyperX 3K 240 RAID 0, 8KB stripe:



First I've seen over 10,000.

  10. #360
    Xtreme Enthusiast
    Join Date
    Jan 2008
    Location
    Athens -> Hellas
    Posts
    944
    @Anvil,

Is there any chance that the AHCI/RAID OROM and driver version can be displayed on AMD boards?

    Thank you for the time, effort and all the updates.

  11. #361
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'll have a look at it!

    To find out, I'll probably have to re-install my AMD rig so I can't say when.

  12. #362
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    231
Pair of Force GS 240GB, 64KB stripe:



    the review will be up soon at http://www.rwlabs.com/

  13. #363
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Looks like I need to try those "GS" drives!

    "Soon" as in today or this week

  14. #364
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    231
    Quote Originally Posted by Anvil View Post
    Looks like I need to try those "GS" drives!

    "Soon" as in today or this week
    as in next week

  15. #365
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    231
How come I can't thank anyone?

  16. #366
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    pretty stanky Bill
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  17. #367
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    231
    Quote Originally Posted by Computurd View Post
    pretty stanky Bill
    thanks comp

  18. #368
    Registered User
    Join Date
    Jul 2005
    Posts
    6
Anvil, could you please add the possibility of copying all results to the clipboard as text? A screenshot isn't always convenient, and the "copy results to clipboard" function grabs only the summarized scores, not the read/write ratings in MB/s.

  19. #369
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252
    Hi guys,
Couldn't sleep, so I ran these tests just to see how my IBM M1015 was performing together with 8 Intel 520s, in RAID-0 from 2 to 8 disks and as JBOD.

    I will be replacing this controller, or at least moving it down the food chain, with an Areca 1882ix 24-channel with 4GB cache and a battery backup unit, which will hopefully give me better performance.

    Anyway, I thought I'd share this with you all.

    Attached screenshots: IBM M1015 RAID-0 with 2-8 drives (237GB / 337GB / 443GB / 594GB / 713GB / 775GB / 951GB), a single Intel SSDSC2CW12 120GB, and a Windows 2R0 of two Intel SSDSC2CW12 120GB drives (all 1GB test size, 2012-09-01).

    Probably should do a graph ;-) naaahh just kiddin

Oh, and it didn't help that Anvil's program RC 2 expired in the middle of these tests, so I set the system date forward and was able to finish.

    Henrik
    Last edited by tived; 09-01-2012 at 09:53 AM.
    Henrik
    A Dane Down Under

    Current systems:
    EVGA Classified SR-2
    Lian Li PC-V2120 Black, Antec 1200 PSU,
    2x X5650 (20x 190 APPROX 4.2GHZ), CPU Cooling: Noctua NH-D14
    (48gb) 6x 8Gb Kingston ECC 1333 KVR1333D3D4R9S/8GI, Boot: 8R0 SAMSUNG 830 129GB ARECA 1882IX-4GB CACHE - Scratch disk: 2x6R0 INTEL 520 120GB's, 2x IBM M1015/LSI 9240-8i, Asus GTX-580

    ASUS P5W64 WS PRO, QX-6700 (Extreme Quadcore) 2.66Ghz, 4x2GB HyberX, various hard drives and GT-7600

    Tyan S2895 K8WE 2x 285 Opteron's 8x 2gb DDR400 1x nVidia GT-8800 2x 1 TB Samsung F1 (not very nice) Chenbro SR-107 case

    Monitors: NEC 2690v2 & Dell 2405 & 2x ASUS VE246H

  20. #370
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
If you are using software RAID, disable "write cache buffer flushing" in "disk properties" for each drive you have connected to the M1015. That helped me with performance, if I remember correctly :p

Finally got my last M3P on the LSI 9207.
    8x R0 software RAID, Server 2008 R2.

    It suffers from having no cache, but it is still pretty fast.

    ATTO, for comparison:
    Last edited by Nizzen; 09-01-2012 at 10:18 AM.

  21. #371
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252

IBM M1015 8x Intel 520 SSD 120GB test results

I know I was joking, but here are the results tabulated in a Google spreadsheet - haven't got MS Office installed yet. Figures marked in red are the fastest, but it's clear that the scaling on this controller is far from linear (a quick way to put a number on that is sketched below).

    Attached: IBM M1015 8x Intel 520 SSD 120GB test result table (screenshot).

    :-)
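    To put a rough number on the scaling, one can divide the measured n-drive throughput by n times the single-drive figure; a minimal sketch (the MB/s values here are placeholders, not the figures from the table above):

```python
# RAID-0 scaling efficiency: measured throughput vs. ideal (n x single drive).
# The MB/s values are placeholders for illustration, not the figures from
# the spreadsheet above.
single_drive = 480.0                              # hypothetical 1-drive MB/s
measured = {2: 900.0, 4: 1600.0, 8: 2600.0}       # hypothetical n-drive MB/s

for n, mbps in sorted(measured.items()):
    efficiency = mbps / (n * single_drive)
    print(f"{n} drives: {mbps:.0f} MB/s -> {efficiency:.0%} of ideal scaling")
```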

    Henrik

  22. #372
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252
    Quote Originally Posted by Nizzen View Post
If you are using software RAID, disable "write cache buffer flushing" in "disk properties" for each drive you have connected to the M1015. That helped me with performance, if I remember correctly :p
Thanks Nizzen. I only wanted to use it as a reference, so just to note, I did not do that; bear that in mind when reading the results.

    I would be very interested in how I can improve the performance on this controller

    thanks

    Henrik
    Henrik
    A Dane Down Under

    Current systems:
    EVGA Classified SR-2
    Lian Li PC-V2120 Black, Antec 1200 PSU,
    2x X5650 (20x 190 APPROX 4.2GHZ), CPU Cooling: Noctua NH-D14
    (48gb) 6x 8Gb Kingston ECC 1333 KVR1333D3D4R9S/8GI, Boot: 8R0 SAMSUNG 830 129GB ARECA 1882IX-4GB CACHE - Scratch disk: 2x6R0 INTEL 520 120GB's, 2x IBM M1015/LSI 9240-8i, Asus GTX-580

    ASUS P5W64 WS PRO, QX-6700 (Extreme Quadcore) 2.66Ghz, 4x2GB HyberX, various hard drives and GT-7600

    Tyan S2895 K8WE 2x 285 Opteron's 8x 2gb DDR400 1x nVidia GT-8800 2x 1 TB Samsung F1 (not very nice) Chenbro SR-107 case

    Monitors: NEC 2690v2 & Dell 2405 & 2x ASUS VE246H

  23. #373
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252
    Holy crap!!! :-)

What drives are you using... (OK, it's the Plextors.) I'd better have another play with this... Did I just spend on an Areca because I hadn't educated myself well enough? Those are some excellent numbers.

    thanks - I am going to have a try tomorrow night ;-)

    Henrik

    Quote Originally Posted by Nizzen View Post
If you are using software RAID, disable "write cache buffer flushing" in "disk properties" for each drive you have connected to the M1015. That helped me with performance, if I remember correctly :p

    Finally got my last M3P on the LSI 9207.
    8x R0 software RAID, Server 2008 R2.

    It suffers from having no cache, but it is still pretty fast.

    ATTO, for comparison:
    Last edited by tived; 09-02-2012 at 07:58 AM.
    Henrik
    A Dane Down Under

    Current systems:
    EVGA Classified SR-2
    Lian Li PC-V2120 Black, Antec 1200 PSU,
    2x X5650 (20x 190 APPROX 4.2GHZ), CPU Cooling: Noctua NH-D14
    (48gb) 6x 8Gb Kingston ECC 1333 KVR1333D3D4R9S/8GI, Boot: 8R0 SAMSUNG 830 129GB ARECA 1882IX-4GB CACHE - Scratch disk: 2x6R0 INTEL 520 120GB's, 2x IBM M1015/LSI 9240-8i, Asus GTX-580

    ASUS P5W64 WS PRO, QX-6700 (Extreme Quadcore) 2.66Ghz, 4x2GB HyberX, various hard drives and GT-7600

    Tyan S2895 K8WE 2x 285 Opteron's 8x 2gb DDR400 1x nVidia GT-8800 2x 1 TB Samsung F1 (not very nice) Chenbro SR-107 case

    Monitors: NEC 2690v2 & Dell 2405 & 2x ASUS VE246H

  24. #374
    Xtreme Member
    Join Date
    Aug 2010
    Location
    perth, west oz
    Posts
    252
Hi Nizzen,

This is what I get when looking at both my M1015s: it's disabled, and those are the settings that were used for my tests yesterday.

    What am I doing wrong? Is it something in how I set it up in MegaRAID Storage Manager? I am using the default settings.

    Henrik
    Attached: hardware cache policy_win7.jpg (Windows disk cache policy screenshot).
    Henrik
    A Dane Down Under

    Current systems:
    EVGA Classified SR-2
    Lian Li PC-V2120 Black, Antec 1200 PSU,
    2x X5650 (20x 190 APPROX 4.2GHZ), CPU Cooling: Noctua NH-D14
    (48gb) 6x 8Gb Kingston ECC 1333 KVR1333D3D4R9S/8GI, Boot: 8R0 SAMSUNG 830 129GB ARECA 1882IX-4GB CACHE - Scratch disk: 2x6R0 INTEL 520 120GB's, 2x IBM M1015/LSI 9240-8i, Asus GTX-580

    ASUS P5W64 WS PRO, QX-6700 (Extreme Quadcore) 2.66Ghz, 4x2GB HyberX, various hard drives and GT-7600

    Tyan S2895 K8WE 2x 285 Opteron's 8x 2gb DDR400 1x nVidia GT-8800 2x 1 TB Samsung F1 (not very nice) Chenbro SR-107 case

    Monitors: NEC 2690v2 & Dell 2405 & 2x ASUS VE246H

  25. #375
    Xtreme Member
    Join Date
    Nov 2011
    Posts
    124
    Quote Originally Posted by tived View Post
Hi Nizzen,

This is what I get when looking at both my M1015s: it's disabled, and those are the settings that were used for my tests yesterday.

    What am I doing wrong? Is it something in how I set it up in MegaRAID Storage Manager? I am using the default settings.

    Henrik
The drive cache can only be changed on the LSI controller or via MSM.

    Did you reboot after setting up each RAID test?

    What's your ATTO speed with R0 x8?

    Remember the M1015 is a cheap 8-port 6Gbps controller; the Areca controller you are looking at will cost how many times more? Will you get the same performance boost?
    If you want RAID-0 with the M1015, let Windows do the striping and the M1015 do the controlling (best with the card in LSI 9211 IT mode) - a rough sketch of that setup follows.
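    In case it helps, a rough sketch of setting up a Windows dynamic-disk stripe across the M1015's drives with a diskpart script, driven from Python (the disk numbers are placeholders; check "list disk" first, and note this wipes the selected disks):

```python
# Rough sketch: build a Windows dynamic-disk striped volume (software RAID-0)
# across the drives on the M1015 by feeding a script to diskpart.
# Disk numbers are placeholders -- check "list disk" first, run elevated,
# and note this destroys any existing data on the selected disks.
import subprocess
import tempfile

DISKS = [1, 2, 3, 4]   # placeholder disk numbers as reported by "list disk"

lines = []
for d in DISKS:
    lines += [f"select disk {d}", "clean", "convert dynamic"]
lines.append("create volume stripe disk=" + ",".join(str(d) for d in DISKS))
lines += ["format fs=ntfs quick", "assign letter=S"]

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(lines) + "\n")
    script = f.name

subprocess.run(["diskpart", "/s", script], check=True)
```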
    Last edited by mobilenvidia; 09-02-2012 at 10:31 AM.
    ASUS P8Z77 WS
    Intel Core i5-3470T
    16GB 1333Mhz RAM
    PNY GeForce GTX470
    LSI 9266-8i CacheCade pro v2.0/fastpath
    IBM ServeRAID M5016
    IBM ServeRAID M1015 LSI SAS Controller (IR mode)
    4x 60GB OCZ Solid 3 SSDs
    6x Hitachi 2TB 7k2000 HDs
