Sh:@t dude!
Way to tease! :D
nice thread! :yepp: :toast: :bows: :up:
Thank you for the time it's going to take to run all of these benches and compile the results.
With regard to I/O testing, how does bonnie++ compare? The OP probably won't run Linux for his testing, but I think it's a valid question while he enjoys some vacation time.
Bonnie++ is a fast test (i.e., akin to HDTune, ATTO, CrystalDiskMark, et al. for Windows that are shown here often), not something that will give you really good data for comparison. I use it mainly as a first step to get a 50,000-foot view and then start running real tests tailored toward what I am trying to bench. If you are running it yourself, normally what you want to do is run it 5-9 times, throw out the best and worst runs, and average the rest to help remove some of the fluctuation.
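If you do want to script that, on the Linux side a rough sketch like this is all it takes (the directory, size, run count, and user are just examples; -q makes bonnie++ dump the one-line CSV you would average later):
# run it several times, then throw out the best and worst lines and average the rest
for i in 1 2 3 4 5 6 7; do
    bonnie++ -q -d /mnt/array/bench -s 32g -n 0 -u nobody >> bonnie_runs.csv
done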
That was a file named "32GiB" (which was 32GiB in size) that I created on that partition. The syntax should be the same. Or is it blowing up because you don't have a file created there for it to use?
Yeah, didn't have the file. It was odd, as it seemed to create the file the very first time I ran it, but it's been failing since then. Maybe I accidentally had a file of that name the first time from something else... Do you create the file ahead of time on the target, then? Simple text file?
NCQ on, off, or post results for both if time permits?
Yeah, sort of. I created it on my unix box (dd if=/dev/zero of=32GiB bs=1M count=32768) as that's where I was at the time, and then just moved it over. It doesn't really matter where you create it or what's in it; it's just a pointer to blocks that xdd will operate on. I left NCQ on for the drives I have here; if you have time it would be interesting to see what type of effect it has.
Here are the variables I am looking at testing to see their impact. I obviously can't vary all of them against each other without taking two months to complete the testing (see the quick count after the list), so I need to choose a few to hold constant and not mess with. Any suggestions?
Areca Settings:
SATA NCQ Support: on/off
HDD Read Ahead Cache: enabled/disabled
Volume Data Read Ahead: disabled/conservative/normal/aggressive
HDD Queue Depth: 1/2/4/8/16/32
Disk Write Cache Mode: Auto/Enabled/Disabled
Volume Cache Mode: Write Through/Write Back
Volume Stripe Size: 32/64/128 KB
OS (Vista 64) Settings:
Enable Write Caching On Disk: on/off
Enable Advanced Performance: on/off
Also:
Can try with the BBU plugged in or unplugged (for the 512MB cache)
Can compare 512MB cache results to 2GB (2GB only with the BBU plugged in)
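(Quick back-of-the-envelope, if I counted right: the Areca settings alone are 2 x 2 x 4 x 6 x 3 x 2 x 3 = 1,728 combinations, times 2 x 2 = 4 for the OS settings, so roughly 6,900 runs per benchmark for the full matrix. Hence wanting to pin most of these down.)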
Looks like a good overview for just the base settings and their effects. Stripe sizes, file system alignments, et al. will, as you mentioned, take quite a bit longer to plot out (I know; it takes about 4-6 months to do a full-gamut test when I do them on arrays here).
The BBU itself won't provide anything different except for data integrity, which you are not testing. You can change the cache sizes and then manually set your "Disk Write Cache Mode". Auto means "turn on the write cache if a BBU is present," so you can just turn it on or off manually on the card for testing purposes.
You probably want to reboot when setting these items just to avoid any issues with the firmware not applying settings immediately. Volume stripe sizes are going to be a killer; you probably want to either save those for another test or at least do them last, unless you are planning on limiting them to a specific RAID type (to get real value you will want to do all permutations, which is what takes a lot of time). I would probably approach it in two modes: first with a RAID 0, and use that to test all combinations of the Areca settings (with OS settings off, then on, for each test) with the exception of stripe size (use the default of 64k). Then do the same with a pass-through disk (a single disk, but attached to the Areca); this gives you a baseline and shows the multiplication effect of striping by comparison. After that, do tests with the other RAID levels (as the base settings are not germane to a RAID type but are low-level and would apply equally).
Alright, so I just want to run a test case to solve my problem with the non-existent target.
I have a folder called xdd on my G: drive (the RAID 10 drive) with the xdd bin folder and everything else in it.
g:\xdd\bin>xdd.exe -op read -targets 1 _________
I want to target that drive; both the volume and the logical drive are named RAID10. No combo I have tried gets me anything other than "The system cannot find the file specified". I have tried files in the g:\xdd\bin directory and the drive itself.
I know I'm missing something simple. I'm pretty sure it's a syntax issue.
When you say "newest firmware" are you talking about firmware on your RAID controller?
Awesome post, man. :)
Thanks,
-Dean
@speederlander: you just need to create a file for it to run against. You can grab a Windows version of dd at http://www.chrysocome.net/dd or use any other tool to create a file for xdd to point to. Or you can use the -createnewfiles option, which will take more time (it actually creates a new file per pass). So something like:
xdd.exe -verbose -op read -targets 1 S32GiB -createnewfiles -dio -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 64000000 -queuedepth 128
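Or, if I remember the syntax of that Windows dd port correctly, creating the file up front and pointing xdd at it would look something like this (the path is just an example, put the file wherever you like on the array):
rem create a 32GiB file of zeros (32768 x 1MB blocks); the contents don't matter, xdd just needs the blocks
dd if=/dev/zero of=g:\xdd\bin\32GiB bs=1M count=32768
rem then run the read test against that file
xdd.exe -verbose -op read -targets 1 g:\xdd\bin\32GiB -dio -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3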
Speederlander, will you be posting updates with results as the tests complete, or one big summary at the end of testing?
Alright, I obviously have lost my ability to use the command line. Hence iometer for the near-term. :p:
Disclosure: I freely admit I am not an expert when it comes to the nitty-gritty details of storage access patterns, etc.
Do you have any specific pattern(s) you would like to see on iometer?
I need some feedback as to what is most appropriate for good characterization.
How about:
IOMeter Parameters:
Maximum Disk Size: 16777216
Starting Disk Sector: 0
# of Outstanding I/Os: 1
Test Connection Rate: skip
Access Pattern:
Transfer Request Size: 4KB
Percent of Access Specification: 100%
Percent Read/Write Distribution: 67% Read/33% Write
Percent Random/Sequential Distribution: 100% Random/0% Seq
Burstiness: 0ms/1 IOs
Align I/Os on Sector Boundaries
Reply Size: No Reply
Test Set-up:
Run Time: 3 Minutes
Ramp Up: 30 seconds
Cycling Options: Cycle # of Outstanding I/Os -- run step outstanding I/Os on all disks at a time
# of Outstanding I/Os: start: 1 end: 32 power: 2, exponential stepping
TEST RUN:
Quote:
Alright, I obviously have lost my ability to use the command line.
Blame it on the early New Year's partying. :)
As for iometer, I haven't used it much, but you seem to have the general gist. I would suggest checking a couple of request sizes: the 4K you have is good, as that's the page size on non-Itanium Intel chips (i.e., similar to what you would see with, say, pagefile access). 64K would be good for a file server (SMB1) with a large client base, and then add whatever shows how it performs with your normal (or modeled) request size.
I would also allow the queue depth to go to 256, or at least 128. The Areca will spread that out, and the 1680 has a queue depth of 256 (most drives allow for at least 32 each); this will basically saturate the drives (worst case).
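(With the power-of-2 stepping you already have set up, bumping the end value to 128 just means running QD 1, 2, 4, 8, 16, 32, 64, 128, i.e. eight points instead of six, or nine if you take it all the way to 256.)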
The read/write distribution you have (67/33) is decent for a file server, but you probably want pure 100/0 and 0/100 tests for comparison's sake.
Test Schedule: (may throw in stripe variations as well...)
First results posted to reserved post spots at the beginning of the thread.
Next test to be RAID 0 with disk write caching off and OS caching off.
Comparison of RAID 0 results:
*removed for editing*
You may want to mention in your column headers (for future lookups) "RAID 0 Write Cache [On|Off]" so people will know it's not the read cache or whatnot. Also, just for clarification, you have MBps, which indicates mega = 1,000,000 bytes. If that's what you mean, great; if you actually mean 1,048,576 bytes, you should use the MiBps abbreviation. I haven't had a chance to go through the data in detail, but it looks good so far. :)
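(For what it's worth, the difference is small but real at these numbers: 100 MBps works out to roughly 95.4 MiBps, i.e. 100,000,000 / 1,048,576.)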
Thanks. The MBps header is the iometer header from their output csv file.
With respect to the headers, you mean the thing I posted in post #45? That was just a quick-and-dirty one; it's going away. The information in the first posts of the thread where I am putting all the data is sufficient, yes?
Added results for caching off and NCQ off. Interesting impact. NCQ stays on in any event for the remainder of the tests.
RAID 10 next I think.
Should I be comparing the 8-drive RAID 10 to a 4-drive RAID 0 to see the RAID 10 scaling?