
Thread: RAID 0 - Stripe Size For Best All Around Performance?

  1. #1
    Registered User
    Join Date
    Sep 2007
    Posts
    43

    RAID 0 - Stripe Size For Best All Around Performance?

    Hi there,

I am about to create a RAID 0 array with two WD6400AAKS HDDs on an ASUS P5K-E WIFI/AP (ICH9R) motherboard.

    The plan is to create a first slice of 300GB and a second slice with all the remaining space.

    Most of the data consists of small files, roughly 5-6MB each.

    What stripe size should I choose to get the best all-around performance? 64KB?
    I am particularly interested in the fastest possible boot times. The operating system will be WinXP Pro.

    Is it true that the smaller the stripe size, the better the boot performance?
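    For a rough feel for what striping does with files of that size, here is an illustrative sketch (Python; the stripe size, drive count, and file size are just the numbers from this thread, not anything specific to the ICH9R):

    ```python
    # Illustrative sketch of RAID 0 striping: data is split into fixed-size
    # stripes dealt out to the disks in turn. Numbers match the thread's
    # example (two drives, 64 KB stripes, ~5 MB files), nothing more.

    STRIPE_SIZE = 64 * 1024      # 64 KB stripe
    NUM_DISKS = 2                # two WD6400AAKS drives

    def disk_for_offset(byte_offset):
        """Return (disk index, stripe number) that holds the given byte."""
        stripe = byte_offset // STRIPE_SIZE
        return stripe % NUM_DISKS, stripe

    file_size = 5 * 1024 * 1024               # a typical ~5 MB file
    stripes = -(-file_size // STRIPE_SIZE)    # ceiling division -> 80 stripes
    print(f"{file_size // 1024} KB file spans {stripes} stripes,"
          f" ~{stripes // NUM_DISKS} per disk")
    ```

    A ~5 MB file spans dozens of stripes at any reasonable stripe size, so both disks share the transfer either way; many tiny boot-time files, by contrast, fit inside a single stripe and only ever touch one disk.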
    MOBO: ASUS Maximus IX Hero Z270
    CPU: i7-7700K @4,8ghz (1,25 vcore) (cooled by Noctua NH-D15)
    RAM: Hyperx Fury Black (2400mhz, CL15) 2x8gb @3000mhz (16-18-18-36, 1.30v)
    SSD: Samsung 850 PRO 512GB
    VGA: MSI GeForce GTX 1080 ARMOR OC 8GB
    PSU: Corsair AX850
    OS: Windows 10 (x64)

  2. #2
    Xtreme Cruncher
    Join Date
    Nov 2006
    Location
    'burb of Cleveland, Ohio
    Posts
    2,871
I've tried two stripe sizes with my RAID 0 boot array. 64k had the best performance overall, considering both writes and reads. When I switched to 32k after a format, my read speed went up a bit, maybe 5%, but my writes took a huge hit and were much slower.

    I don't think a boot volume's files average 5-6MB... Windoze files are much smaller. I say go for 64k.

  3. #3
    I am Xtreme zanzabar's Avatar
    Join Date
    Jul 2007
    Location
    SF bay area, CA
    Posts
    15,871
It depends on what you're doing and what drives/controller you have. I have a stripe size of 128 right now and it seems better than 64 for mine.

    But small files will most likely do better with 64.
    5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
    samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi

  4. #4
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    32 or 64 for general use. 32 if you don't do too much writing, which is what I use.

  5. #5
    Registered User
    Join Date
    Sep 2007
    Posts
    43
    Quote Originally Posted by zanzabar View Post
    It depends on what you're doing and what drives/controller you have
I won't have a separate RAID controller. As I already said, I will be running RAID 0 on an ASUS P5K-E WIFI/AP motherboard, which has the ICH9R chipset.

    Will it make a difference if I create one large RAID 0 array from the two WD6400AAKS HDDs instead of two slices (300GB/900GB)?

    Sure, HD Tune and HD Tach will give much better results on the smaller slice (lower access times and higher read/write MB/s) if I make one smaller and one larger slice, but will slicing (or not slicing) affect boot times and app load times?

    If slicing is essential, what should the maximum size of the smaller slice be?

    Would it be fine to make it like this:

    Slice No. 1 - 300GB,
    Slice No. 2 - 900GB?

  6. #6
    Xtreme Cruncher
    Join Date
    Nov 2006
    Location
    'burb of Cleveland, Ohio
    Posts
    2,871
    I can't imagine you installing 300GB worth of programs. 300GB and 900GB will work just fine.

  7. #7
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    I'd go 64k - it's a universal default for a reason. The rule of thumb is pretty much unless you have a specific reason to change it, keep the default.
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  8. #8
    Registered User
    Join Date
    Sep 2007
    Posts
    43
    Quote Originally Posted by Polizei View Post
    I can't imagine you installing 300GB worth of programs. 300GB and 900GB will work just fine.
I agree; however, the thing that bugs me is the "cooperation" between slices in Intel Matrix RAID. For instance, copying a file from slice to slice takes much longer than copying from one HDD to another outside of RAID. You know what I'm saying? Even copying from a slice to a USB flash drive is faster.

    I remember first having two Caviar RE2 500GB drives in Intel Matrix RAID with slice No. 1 at 100GB and slice No. 2 at 900GB. If I had to move/copy files from No. 1 to No. 2 or the other way around, it would take forever, and that is why I am thinking of having a RAID 0 without slices.

    Data loss is not an issue. I have run RAID 0 for years and never lost data, and I back up all of my important stuff anyway.

  9. #9
    Xtreme Member
    Join Date
    May 2007
    Location
    Portland, Oregon
    Posts
    234
    Can you use different stripe sizes in RAID 0? For example, a 32 or 64 stripe size for the operating system and programs, and another stripe size for data. I will be writing larger video files, so I am thinking of a 128 (or larger) stripe size for those. So, two partitions.
    Asus P5Q Deluxe (P5QD1406) || Q9400 at 3.4GHz || 2x2GB Mushkin Redline PC2-8000 || HIS HD 4870 1GB w/ Scythe Musashi (silent) || 501GB Samsung HD || Zalman ZM850-HP || Antec P182 || Windows 7 x64 RTM

  10. #10
    Xtreme Member
    Join Date
    Jul 2007
    Location
    now
    Posts
    242
    Quote Originally Posted by doakh View Post
    Can you use different stripe sizes in RAID 0? ... So, two partitions.
    Sounds like you're talking about Matrix RAID with two volumes? If so, then yes, each RAID volume can have a different stripe size.
    ------media machine------
    Q6600 L737B || EVGA 780i || 8GB G.Skill DDR2-1000 PQ || 2 x EVGA 9800 GX2
    150GB Raptor (Vista x64 SP1) || 3 x 640GB WD6400AAKS RAID0

    LG Blu-ray/HD-DVD Combo || Dell 3008WFP || Corsair HX1000W || Rocketfish
    Air Cooled with ThermalRight Ultra 120 Extreme

  11. #11
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Dorset, UK
    Posts
    439
    Quote Originally Posted by -JJ- View Post
    I agree; however, the thing that bugs me is the "cooperation" between slices in Intel Matrix RAID. For instance, copying a file from slice to slice takes much longer than copying from one HDD to another outside of RAID. You know what I'm saying? Even copying from a slice to a USB flash drive is faster.

    I remember first having two Caviar RE2 500GB drives in Intel Matrix RAID with slice No. 1 at 100GB and slice No. 2 at 900GB. If I had to move/copy files from No. 1 to No. 2 or the other way around, it would take forever
    Well, DUH! Disk read from one "end" of the disk, disk write to the other "end", followed by a read of the next chunk of data, then write, then read, then write... Of course it's slow: the disk heads are continuously seeking from one end to the other, and that's the slowest part of the process, with tens of milliseconds lost on every seek.

    Copying from one disk to another avoids the continual re-seeking because then the reads and writes are contiguous. That's not a "bug" or a deficiency in Matrix RAID; it's just unavoidable physics when you are copying files between different parts of the SAME disk(s).
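    The physics can be put in rough numbers. A back-of-the-envelope sketch (Python; the seek time, sequential rate, and copy chunk size are all assumed round figures, not measurements):

    ```python
    # Rough model of the seek penalty described above: copying between two
    # volumes that live on the SAME physical disks forces a head seek from
    # the read region to the write region (and back) for every chunk.

    SEEK_TIME = 0.015        # ~15 ms per long seek on a 7200 rpm drive (assumed)
    SEQ_SPEED = 90e6         # ~90 MB/s sustained sequential rate (assumed)
    CHUNK = 1024 * 1024      # copy granularity of ~1 MB (assumed)

    def same_disk_rate():
        # read chunk, seek to write area, write chunk, seek back
        cycle = 2 * (CHUNK / SEQ_SPEED) + 2 * SEEK_TIME
        return CHUNK / cycle

    def two_disk_rate():
        # separate source and target disks: reads and writes overlap,
        # so throughput approaches the sequential rate with no extra seeks
        return SEQ_SPEED

    print(f"same-disk copy: ~{same_disk_rate() / 1e6:.0f} MB/s")
    print(f"disk-to-disk copy: ~{two_disk_rate() / 1e6:.0f} MB/s")
    ```

    Even with these generous assumptions the same-disk copy loses most of its throughput to seeks, which matches the slice-to-slice behavior complained about above.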

  12. #12
    Xtreme Addict
    Join Date
    Jul 2006
    Location
    Vancouver, BC
    Posts
    2,061
    Quote Originally Posted by Serra View Post
    I'd go 64k - it's a universal default for a reason. The rule of thumb is pretty much unless you have a specific reason to change it, keep the default.
    The number of drives must play a factor.

    What about 64KB divided by the drives in the array... 32KB for dual drive arrays, 16KB for four drive arrays, etc.?

  13. #13
    Registered User
    Join Date
    Sep 2007
    Posts
    43


    Quote Originally Posted by IanB View Post
    Well, DUH! Disk read from one "end" of the disk, disk write to the other "end", followed by a read of the next chunk of data, then write, then read, then write... Of course it's slow: the disk heads are continuously seeking from one end to the other, and that's the slowest part of the process, with tens of milliseconds lost on every seek.

    Copying from one disk to another avoids the continual re-seeking because then the reads and writes are contiguous. That's not a "bug" or a deficiency in Matrix RAID; it's just unavoidable physics when you are copying files between different parts of the SAME disk(s).
    I got it... now it seems so obvious.

  14. #14
    Registered User
    Join Date
    Oct 2005
    Location
    Warsaw, Poland
    Posts
    64
    Quote Originally Posted by virtualrain View Post
    The number of drives must play a factor.

    What about 64KB divided by the drives in the array... 32KB for dual drive arrays, 16KB for four drive arrays, etc.?
    Yes, that is true but I think that 64 is for 2 drives, 32 for 4, etc.
    ▪ RocketFish FullTower Case
    ▪ DFI LanParty UT 680i LT SLI-T2R/G (N5FD320 BIOS) | Thermalright HR-09S Type 3
    ▪ Q6600 3000Mhz (9x333)@stock | Thermalright Ultra-120 Extreme + Noctua NH-P12 +Arctic Cooling TX-2
    ▪ 4x1GB Cellshock CS2221440 D9GKX 667Mhz@stock
    ▪ 2xSeagate Barracuda 7200.7 ST3120827AS 120GB SATA NCQ [RAID0] | 2xSeagate Barracuda 7200.10 ST3160815AS 160GB SATA II NCQ [RAID0]
    ▪ BFG NVIDIA GeForce 8800 GTX OC [SLI] | Samsung SyncMaster™ 205BW 20.1"
    ▪ Creative Sound Blaster X-Fi XtremeGamer Fatal1ty Pro | SpeedLink Medusa 5.1 ProGamer Edition (SL-8973)
    ▪ Silverstone Olympia OP1000
    ▪ MS Vista x64 sp1/Gentoo ~AMD64


  15. #15
    Registered User
    Join Date
    Sep 2007
    Posts
    43
Just for info, I will post the end results of my WD6400AAKS x 2 Intel Matrix RAID setup:

    So here are two WD6400AAKS drives in Intel Matrix RAID. One RAID 0 slice is 200GB (for the OS) and the other, with all the remaining space, is a 1TB RAID 0 slice. Benchmarks were done with HD Tach 3.0.4.0 (long bench, 32MB). Both slices have a stripe size of 64KB.

    Slice No. 1 - 200GB


    Slice No. 2 - 1TB
    Last edited by -JJ-; 04-22-2008 at 11:16 PM.

  16. #16
    Xtreme Member
    Join Date
    Oct 2006
    Posts
    412
    Quote Originally Posted by AlphaHeX View Post
    Yes, that is true but I think that 64 is for 2 drives, 32 for 4, etc.
Me: 5-drive array, 64k all the way. Use the KISS principle. If the purposes differ, I set up a separate array specifically for that purpose.

    Mind you, I do not have anything near the experience some guys here have in this department.
    Last edited by swiftex; 04-23-2008 at 05:25 AM.

  17. #17
    Registered User
    Join Date
    Jan 2007
    Posts
    11
    Here is an example of how block size affects a WD6400AAKS's read speed, albeit on a single drive:



    ~Ibrahim~
    Last edited by ikjadoon; 04-23-2008 at 05:12 PM.

  18. #18
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,842
    Quote Originally Posted by virtualrain View Post
    The number of drives must play a factor.

    What about 64KB divided by the drives in the array... 32KB for dual drive arrays, 16KB for four drive arrays, etc.?
I probably never would have noticed this if there hadn't been a thread resurrection... but yeah, offhand (with no results to look at) that sounds extremely reasonable to me.

    The only thing I'd change is that since RAID 0 inherently requires 2 disks, and since 64k is the default that every manufacturer has independently determined to be the best *overall* size, it would have to go: 64k for dual drives, 32k for quad, 16k for octo... again, all *on average*.

    The actual trick is to determine what works best for the size of the files you're reading/writing... which pretty much means running real-world bench tests on the activities you actually perform (a benchmark only shows how fast the benchmark runs). Unfortunately, I'm also willing to bet that for most applications the difference would be so small it would be nearly impossible to stopwatch accurately, so it's a bit of a tossup either way.
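    One way to follow the "stopwatch your own workload" advice is a small timing harness. A minimal, hypothetical sketch (Python; the path and repeat count are placeholders you would swap for your own workload):

    ```python
    # Time a task you actually perform (here, reading every file under a
    # directory tree) instead of trusting synthetic benchmark numbers.
    import os
    import time

    def time_workload(root, repeats=3):
        """Read every file under `root`; return (bytes read, best wall time)."""
        best = float("inf")
        total = 0
        for _ in range(repeats):
            start = time.perf_counter()
            total = 0
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    try:
                        with open(os.path.join(dirpath, name), "rb") as fh:
                            total += len(fh.read())
                    except OSError:
                        pass  # skip locked/unreadable files
            best = min(best, time.perf_counter() - start)
        return total, best

    # Example with a placeholder path; rerun after each stripe-size rebuild:
    # bytes_read, seconds = time_workload(r"C:\Program Files")
    ```

    One caveat: repeated runs hit the OS file cache, so compare stripe sizes from a cold cache (e.g. right after a reboot) for honest numbers.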

  19. #19
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Generally yes: take your average (80th-percentile rule) request size, then set your stripe size so that stripe_size * (stripe_width - parity_drives) <= request_size.

    Now, this is _NOT_ linear; it scales better when your requests have a higher degree of parallelism. The streaming tests that are common here generally run at a queue length close to 1, which is not what you see in heavy-use environments, and that is one reason you really have to watch your workloads (request size, queue depth, request type (random/sequential), et al.).

    Generally, if you take the time to map out all the combinations of stripe size and request size, the differences are minimal; performance is more a function of queue depth and request size.
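    The rule above can be sketched numerically. This is an illustrative calculation only (the drive counts and request sizes are made up, and real controllers only expose a handful of stripe sizes):

    ```python
    # Sketch of the rule: choose the largest power-of-two stripe size with
    # stripe_size * (width - parity_drives) <= request_size, so that one
    # typical request keeps every data disk busy. Illustrative numbers only.

    def max_stripe_size(request_size, width, parity_disks=0, minimum=4096):
        """Largest power-of-two stripe satisfying the rule (floored at minimum)."""
        data_disks = width - parity_disks
        stripe = minimum
        while stripe * 2 * data_disks <= request_size:
            stripe *= 2
        return stripe

    # ~5 MB average requests on a 2-drive RAID 0:
    print(max_stripe_size(5 * 1024 * 1024, width=2))               # 2097152 (2 MiB)
    # ~1 MiB requests on a 24-drive RAID 6 (2 parity drives):
    print(max_stripe_size(1024 * 1024, width=24, parity_disks=2))  # 32768 (32 KiB)
    ```

    In practice you would then pick the closest stripe size your controller actually offers, which for desktop chipsets like the ICH9R is a much shorter list.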

    I've attached an example for a RAID 6 array (24 drives, all combinations of stripe sizes and request sizes from 64KiB to 16GiB). It was a quick test and the only one I had handy on my USB key, as I had this argument recently at work.
    Attached Files

    |.Server/Storage System.............|.Gaming/Work System..............................|.Sundry...................|
    |.Supermicro X8DTH-6f...............|.Asus Z9PE-D8 WS.................................|.HP LP3065 30" LCD Monitor|
    |.(2) Xeon X5690....................|.2x E5-2643 v2...................................|.Minolta magicolor 7450...|
    |.(192GB) Samsung PC10600 ECC.......|.2x EVGA nVidia GTX670 4GB.......................|.Nikon Coolscan 9000......|
    |.800W Redundant PSU................|.(8x8GB) Kingston DDR3-1600 ECC..................|.Quantum LTO-4HH..........|
    |.NEC Slimline DVD RW DL............|.Corsair AX1200..................................|..........................|
    |.(..6) LSI 9200-8e HBAs............|.Lite-On iHBS112.................................|.Dell D820 Laptop.........|
    |.(..8) ST9300653SS (300GB) (RAID0).|.PA120.3, Apogee, MCW N&S bridge.................|...2.33GHz; 8GB RAM;......|
    |.(112) ST2000DL003 (2TB) (RAIDZ2)..|.(1) Areca ARC1880ix-8 512MiB Cache..............|...DVDRW; 128GB SSD.......|
    |.(..2) ST9146803SS (146GB) (RAID-1)|.(8) Intel SSD 520 240GB (RAID6).................|...Ubuntu 12.04 64bit.....|
    |.Ubuntu 12.04 64bit Server.........|.Windows 7 x64 Pro...............................|..........................|

  20. #20
    Registered User
    Join Date
    Sep 2007
    Posts
    43
More results... this time with a smaller first slice and the same 64KB stripe size.

    So, two WD6400AAKS drives in Intel Matrix RAID, with two slices (100GB & 1.05TB).


    Slice No. 1 --> 100GB


    Slice No. 2 --> 1.05TB
    Attached Thumbnails: HD Tach 100GB long test, write cache enabled (JPG) | HD Tach 1200GB long test, write cache enabled (JPG)
