Yepp, it's been working for quite some time.
What is left is making it presentable.
I'll PM you when it's ready for testing.
@B Gates
I've updated my V4s but haven't had time to play just yet; I've been close to that result though :)
Excellent! Now that is the first large advantage over IOMeter :)
Could someone show me how to use ASU to do something along the lines of what I wanted to do in this thread:
http://www.xtremesystems.org/forums/...ing&highlight=
Read: I want to thrash a HDD non-stop, 24x7, for about a week straight. Basically fill the HDD up, delete, rinse & repeat, so to speak! :)
:up:
Just use the endurance test in ASU. Go into Settings and check 'keep running totals', and change the 4K write duration to 0 ms. You may need to experiment with adding time per 1,000 files deleted. Lastly, you don't need the 10-second break between loops on a mechanical HDD either.
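If you'd rather script the fill/delete cycle yourself instead of using ASU, a rough sketch in Python follows. The path, file sizes and free-space floor are placeholders you'd tune for the drive under test; this is a sketch of the idea, not ASU's actual implementation:

```python
import os
import shutil
import time

def fill_and_delete(target_dir, file_size=64 * 1024 ** 2,
                    min_free=2 * 1024 ** 3, loops=None,
                    max_files=None, pause=10):
    """Fill a drive with dummy files, delete them all, and repeat."""
    os.makedirs(target_dir, exist_ok=True)
    block = os.urandom(1024 ** 2)  # 1 MiB of incompressible data
    loop = 0
    while loops is None or loop < loops:
        count = 0
        # Fill phase: write files until free space drops below min_free
        # (or until max_files, if set).
        while (shutil.disk_usage(target_dir).free > min_free
               and (max_files is None or count < max_files)):
            path = os.path.join(target_dir, f"fill_{count:06d}.bin")
            with open(path, "wb") as f:
                for _ in range(max(1, file_size // len(block))):
                    f.write(block)
            count += 1
        # Delete phase: remove everything we wrote, then start over.
        for name in os.listdir(target_dir):
            os.remove(os.path.join(target_dir, name))
        loop += 1
        time.sleep(pause)  # brief rest between loops

# Example (placeholder path): fill_and_delete("X:/thrash")  # runs until stopped
```

Note that everything lands in a single folder here, which gets slow at high file counts; splitting files across subfolders would help on NTFS.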
Stress testing HDDs like this is not the best idea, tbh. A prominent failure point is spin-up/spin-down, when the heads have to unpark/park and the drive has to run a bunch of its firmware initialization procedures.

Also, if you are just using it 24/7 then its temperature will be static. Changes in temperature cause a lot of internal changes in the drive (it automatically adjusts some firmware parameters based on its temperature) and can be a cause of failure as well. Drives also run internal clean-up procedures (mostly to look for and reallocate weak/bad sectors) when given time to idle; never letting them idle isn't ideal either.

Just reading/writing is part of it, of course, but I question how useful it is for what you are trying to do. It is more or less impossible to predict failure accurately... this is why backups and RAID exist.
When running the ASU test, the pause between loops is needed (and then some).
I ran a few tests on a 2TB drive (a bad one, starting to accumulate bad sectors) and it is painfully slow; with 500,000 files per loop it takes hours (I'll check) just deleting the files.
It might be because all the files are in a single folder; it is extremely slow. A few days of writing did not accumulate any more bad sectors, so it looks like I'll have to let it die the natural way or try to get it replaced.
(it's a secondary backup drive and I've already cleaned the drive)
Temperature during ASU testing was more or less fixed in the low 40's (C). (42-43)
The most practical purpose of putting a drive through 'hell week', imo, would be to check initial drive quality before committing a new HDD to a RAID 0 array (or any array, I guess). Most drives fail either in the first few months or after ~3 years or more, from a story I read (no, I can't remember where... lol). I've always made sure to 'break in' my HDDs for a couple of months before putting them in RAID... never tried the 'hammer test', so to speak.
I've never tried running ASU endurance on a HDD, and I don't much see the point.
HyperX 3K 240 RAID 0 8kb stripes:
http://i876.photobucket.com/albums/a...liar/050-3.png
First I've seen over 10,000.
@Anvil,
Is there any chance the AHCI/RAID OROM and driver versions can be displayed on AMD boards?
Thank you for the time, effort and all the updates.
I'll have a look at it!
To find out, I'll probably have to re-install my AMD rig, so I can't say when.
Pair Force GS 240GB 64kb stripe:
http://i876.photobucket.com/albums/a...liar/050-5.png
the review will be up soon at http://www.rwlabs.com/
Looks like I need to try those "GS" drives!
"Soon" as in today or this week :)
How come I can't thank anyone?
pretty stanky Bill :)
Anvil, could you please add the possibility of copying all results to the clipboard as text? A screenshot isn't always convenient, and the "copy results to clipboard" function grabs only the summarized scores, not the read/write ratings in MB/s.
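Something like a tab-separated dump would cover it. A minimal sketch of the kind of plain-text export being requested (the field names and numbers here are made up purely to illustrate the format, not ASU's real output):

```python
def results_as_text(results):
    """Format benchmark results as tab-separated text, ready for the clipboard.

    `results` maps a test name to a (MB/s, IOPS, score) tuple -- hypothetical
    fields chosen just to show per-test MB/s alongside the summary score.
    """
    lines = ["Test\tMB/s\tIOPS\tScore"]
    for name, (mbs, iops, score) in results.items():
        lines.append(f"{name}\t{mbs:.2f}\t{iops}\t{score}")
    return "\n".join(lines)

# Illustrative values only:
print(results_as_text({"Seq 4MB read": (520.33, 130, 260.17)}))
```

Tab-separated text has the advantage of pasting straight into a spreadsheet as columns.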
Hi guys,
Couldn't sleep, so I ran these tests just to see how my IBM M1015 was performing with 8 Intel 520s in RAID 0 from 2 to 8 disks, plus JBOD.
I will be replacing this controller, or at least moving it down the food chain, with an Areca 1882ix 24-channel with 4GB cache and a battery backup unit, which will hopefully give me better performance.
Anyway, I thought I'd share this with you all.
Attachment 129780 Attachment 129779 Attachment 129781 Attachment 129782 Attachment 129783 Attachment 129784 Attachment 129785 Attachment 129786 Attachment 129787
Probably should do a graph ;-) naaahh just kiddin
Ohh, it didn't help that Anvil's program RC 2 expired in the middle of these tests, so I set the system date forward and was able to finish.
Henrik
If you are using software RAID, disable "write-cache buffer flushing" in "disk properties" on each drive you have connected to the M1015. That helped me with performance, if I remember correctly :p
Finally got my last M3p @ LSI 9207
8x R0 software RAID, srv08r2
It suffers from having no cache, but it is still pretty fast :)
http://i413.photobucket.com/albums/p...Anvil8xm3p.png
ATTO, for comparison:
http://i413.photobucket.com/albums/p...server08r2.png
I know, I was joking, but here are the results tabulated in a Google spreadsheet (haven't got MS Office installed yet). Figures marked in red are the fastest, but it's clear that the scaling on this controller is far from linear.
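The non-linearity is easy to quantify: divide each array's throughput by the single-drive figure times the drive count. A quick sketch, with placeholder numbers rather than the measured results from the spreadsheet:

```python
def scaling_efficiency(throughput, single):
    """Fraction of ideal linear scaling achieved at each array size.

    `throughput` maps drive count -> measured MB/s; `single` is the
    one-drive baseline.  1.0 means perfectly linear scaling.
    """
    return {n: t / (single * n) for n, t in throughput.items()}

# Placeholder numbers, not the measured results:
eff = scaling_efficiency({2: 900, 4: 1600, 8: 2400}, single=500)
for n, e in sorted(eff.items()):
    print(f"{n} drives: {e:.0%} of linear")
```

A column like this in the spreadsheet makes it obvious where the controller tops out.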
Attachment 129788
:-)
Henrik
Holy crap!!! :-)
What drives are you using... (OK, it's the Plextors.) I'd better have another play with this... Did I just splash out on an Areca because I hadn't educated myself well enough? Those are some excellent numbers.
thanks - I am going to have a try tomorrow night ;-)
Henrik
Hi Nizzan,
This is what I get when looking at both my M1015s: it's disabled, and those are the settings that were used in my tests yesterday.
What am I doing wrong? Is it something I set up in MegaRAID Storage Manager? I am using the default settings.
Henrik
Attachment 129795
The drive cache can only be changed on the LSI controller or via MSM
Did you reboot after setting up each RAID test?
What's your ATTO speed with R0x8?
Remember, the M1015 is a cheap 8-port 6Gbps controller; the Areca controller you are looking at will cost how many times more?
Will you get the same performance boost?
If you want RAID 0 on the M1015, let Windows do the striping and the M1015 do the controlling (best in LSI 9211 IT mode).
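For the Windows side of that, the stripe can be created with a diskpart script; a minimal sketch, assuming the member disks show up as disks 1 and 2 (the disk numbers and drive letter are placeholders, so check them with `list disk` first, and note `convert dynamic` turns the disks into dynamic disks):

```
rem stripe.txt -- run with: diskpart /s stripe.txt
select disk 1
convert dynamic
select disk 2
convert dynamic
create volume stripe disk=1,2
format fs=ntfs quick
assign letter=S
```

The same end result is also reachable interactively through Disk Management.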