Excellent! That alone is a big advantage over IOMeter.
Could someone show me how to use ASU to do something along the lines of what I wanted to do in this thread:
http://www.xtremesystems.org/forums/...ing&highlight=
In short: I want to thrash a HDD non-stop, 24x7, for about a week straight. Basically fill the HDD up, delete everything, rinse and repeat, so to speak!
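ASU's endurance test handles this for you, but as a rough sketch of the fill/delete cycle being asked for, something like the following would work. This is my own minimal illustration, not ASU's implementation; the function name and parameters are made up for the example.

```python
import os
import shutil
import time

def thrash(target_dir, file_size=64 * 1024 * 1024,
           min_free=256 * 1024 * 1024, loops=None, pause=0):
    """Fill the drive with files, delete them all, and repeat.

    Writes `file_size`-byte files until free space drops below
    `min_free`, then removes them and starts over.
    `loops=None` runs until interrupted.
    """
    os.makedirs(target_dir, exist_ok=True)
    block = os.urandom(1024 * 1024)  # 1 MiB buffer of random data, reused
    loop = 0
    while loops is None or loop < loops:
        count = 0
        # fill phase: keep writing until the free-space floor is reached
        while shutil.disk_usage(target_dir).free > min_free + file_size:
            path = os.path.join(target_dir, f"fill_{count:06d}.bin")
            with open(path, "wb") as f:
                for _ in range(file_size // len(block)):
                    f.write(block)
                f.flush()
                os.fsync(f.fileno())  # force the data out to the platters
            count += 1
        # delete phase: remove everything and go again
        for name in os.listdir(target_dir):
            os.remove(os.path.join(target_dir, name))
        loop += 1
        time.sleep(pause)  # optional idle time between loops
    return loop
```

Calling `thrash("E:\\burnin", loops=None)` would hammer the drive indefinitely; the `os.fsync` call matters, since otherwise Windows would satisfy much of the workload from cache rather than the disk.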
DNA = Design Not Accident
DNA = Darwin Not Accurate
heatware / ebay
HARDWARE I only own Xeons, Extreme Editions & Lian Li's
https://prism-break.org/
Just use the endurance test in ASU. Go into Settings and check 'keep running totals', and change the 4K write duration to 0 ms. You may need to experiment with adding time per 1,000 files deleted. Lastly, you don't need the 10-second break between loops on a mechanical HDD either.
Stress testing HDDs like this is not the best idea, to be honest. A prominent failure point is spin-up/spin-down, when the heads have to unpark/park and the drive runs through a bunch of its firmware initialization procedures. Also, if you run it 24/7 its temperature will be static; changes in temperature cause a lot of internal changes in the drive (it automatically adjusts some firmware parameters based on temperature) and can be a cause of failure as well. Drives also run internal clean-up procedures (mostly looking for and reallocating weak/bad sectors) when given time to idle, so never letting them idle isn't ideal either. Reading and writing is part of the picture, of course, but I question how useful it is for what you are trying to do. It is more or less impossible to predict failure accurately; this is why backups and RAID exist.
Last edited by One_Hertz; 07-22-2012 at 06:30 AM.
When running the ASU test, the pause between loops is needed (and then some).
I ran a few tests on a 2TB drive (a bad drive that has started to accumulate bad sectors) and it is painfully slow; with 500,000 files per loop, deleting the files takes hours (I'll check exactly how long).
It might be because all the files end up in a single folder. It is extremely slow, and a few days of writing did not help accumulate more bad sectors, so it looks like I'll have to let it die the natural way or try to get it replaced.
(It's a secondary backup drive and I've already cleaned it off.)
Temperature during ASU testing was more or less fixed in the low 40s C (42-43).
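On the slow deletes: hundreds of thousands of files in one flat folder is a worst case for most filesystems, since each removal has to update an enormous directory index. A common workaround is to fan the files out into numbered subdirectories. This is my own sketch, not anything ASU does; the helper name and the fan-out factor are invented for the example.

```python
import os

def fanout_path(root, index, fan=1000):
    """Return a path for file number `index`, spread across numbered
    subdirectories so that no directory ever holds more than `fan` files.
    Deleting (or even listing) hundreds of thousands of files in one flat
    folder is far slower than walking a modest tree of smaller ones."""
    subdir = os.path.join(root, f"{index // fan:05d}")
    os.makedirs(subdir, exist_ok=True)
    return os.path.join(subdir, f"file_{index:08d}.bin")
```

With the defaults, files 0-999 land in `00000/`, files 1000-1999 in `00001/`, and so on, so each directory stays small and cheap to delete.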
-
Hardware:
The most practical purpose of putting a drive through 'hell week', imo, would be to check initial drive quality before committing a new HDD to a RAID 0 array (or any array, I guess). Most drives fail either in the first few months or after roughly three years or more, from a story I read (no, I can't remember where... lol). I've always made sure to 'break in' my HDDs for a couple of months before putting them in RAID; I've never tried the 'hammer test', so to speak.
'Best Bang For The Buck' Build - CM Storm Sniper - CM V8 GTS HSF
2500K @ 4.5GHz 24/7 - Asus P8Z68-V Pro Gen3 - GSkill 2x4GB DDR3-2400 C10
Sapphire Vapor-X 7770 OC Edition - PC Power & Cooling Silencer MkIII 600W
Boot: 2x 64GB SuperSSpeed S301 SLC Raid 0 Work: Intel 520 120GB
Storage: Crucial M500 1TB - Ocz Vertex 4 128GB - 4x 50GB Ocz Vertex 2
HDDs: 2 x 1TB WD RE4 Raid0 - Ext.Backup: 2 x 1.5TB WD Blacks Raid 1
I've never tried running ASU endurance on a HDD, and I don't much see the point.
HyperX 3K 240 RAID 0, 8KB stripes:
First score I've seen over 10,000.
@Anvil,
Is there any chance the AHCI/RAID OROM and driver versions can be displayed on AMD boards?
Thank you for the time, effort and all the updates.
I'll have a look at it!
To find out, I'll probably have to re-install my AMD rig so I can't say when.
Pair of Force GS 240GB, 64KB stripe:
the review will be up soon at http://www.rwlabs.com/
Looks like I need to try those "GS" drives!
"Soon" as in today or this week
How come I can't thank anyone?
pretty stanky Bill
"Lurking" Since 1977
Jesus Saves, God Backs-Up
*I come to the news section to ban people, not read complaints.* -[XC]Gomeler
Don't believe Squish, his hardware does control him!
Anvil, could you please add the option to copy all results to the clipboard as text? A screenshot isn't always convenient, and the "copy results to clipboard" function grabs only the summarized scores, not the read/write ratings in MB/s.
Hi guys,
couldn't sleep, so I ran these tests just to see how my IBM M1015 was performing together with 8 Intel 520s, in RAID 0 from 2 to 8 disks and in JBOD.
I will be replacing this controller, or at least moving it down the food chain, with an Areca 1882ix (24 channels, 4GB cache, battery backup unit), which will hopefully give me better performance.
Anyway, I thought I'd share this with you all.
Probably should do a graph ;-) naaahh, just kidding.
Oh, and it didn't help that Anvil's program (RC 2) expired in the middle of these tests, so I reset the date to a later one and was able to finish.
Henrik
Last edited by tived; 09-01-2012 at 09:53 AM.
Henrik
A Dane Down Under
Current systems:
EVGA Classified SR-2
Lian Li PC-V2120 Black, Antec 1200 PSU,
2x X5650 (20x 190 APPROX 4.2GHZ), CPU Cooling: Noctua NH-D14
(48gb) 6x 8Gb Kingston ECC 1333 KVR1333D3D4R9S/8GI, Boot: 8R0 SAMSUNG 830 129GB ARECA 1882IX-4GB CACHE - Scratch disk: 2x6R0 INTEL 520 120GB's, 2x IBM M1015/LSI 9240-8i, Asus GTX-580
ASUS P5W64 WS PRO, QX-6700 (Extreme Quad-core) 2.66GHz, 4x2GB HyperX, various hard drives and GT-7600
Tyan S2895 K8WE 2x 285 Opteron's 8x 2gb DDR400 1x nVidia GT-8800 2x 1 TB Samsung F1 (not very nice) Chenbro SR-107 case
Monitors: NEC 2690v2 & Dell 2405 & 2x ASUS VE246H
If you are using software RAID, disable "write cache buffer flushing" in Disk Properties for each drive connected to the M1015. That helped me with performance, if I remember correctly :p
Finally got my last M3P on an LSI 9207.
8x R0 software RAID, Server 2008 R2.
It suffers from having no cache, but it is still pretty fast.
ATTO, for comparison:
Last edited by Nizzen; 09-01-2012 at 10:18 AM.
Holy crap!!! :-)
What drives are you using... (OK, it's the Plextors.) I'd better have another play with this... Did I just spend money on an Areca because I haven't educated myself well enough? Those are some excellent numbers.
Thanks - I am going to have a try tomorrow night ;-)
Henrik
Last edited by tived; 09-02-2012 at 07:58 AM.
The drive cache can only be changed on the LSI controller or via MSM.
Did you reboot after setting up each RAID test?
What's your ATTO speed with R0 x8?
Remember, the M1015 is a cheap 8-port 6Gbps controller; the Areca controller you are looking at will cost how many times more?
Will you get a matching performance boost?
If you want RAID 0 on the M1015, let Windows do the striping and the M1015 do the controlling (best in LSI 9211 IT mode).
Last edited by mobilenvidia; 09-02-2012 at 10:31 AM.
ASUS P8Z77 WS
Intel Core i5-3470T
16GB 1333MHz RAM
PNY GeForce GTX470
LSI 9266-8i CacheCade pro v2.0/fastpath
IBM ServeRAID M5016
IBM ServeRAID M1015 LSI SAS Controller (IR mode)
4x 60GB OCZ Solid 3 SSDs
6x Hitachi 2TB 7k2000 HDs