
Thread: RAID And You (A Guide To RAID-0/1/5/6/xx)


  1. #1
    Registered User | Join Date: Oct 2006 | Location: Kirghudu, Cowjackingstan | Posts: 462
    Thanks to GullLars for his elaborate posts regarding SSD stripe sizes on Anandtech. Thought I'd add them here.
    1. Intel SSDs can do 90-100% of their sequential bandwidth with 16-32KB blocks @ QD 1, and at higher queue depths they can reach it with 8KB blocks. Hard disks, on the other hand, reach their maximum bandwidth around 64-128KB sequential blocks, and do not benefit noticeably from increasing the queue depth.

    When you RAID-0, files larger than the stripe size get split into chunks equal in size to the stripe size and distributed among the units in the RAID. Say you have a 128KB file (or want to read a 128KB chunk of a larger file): this gets divided into 8 pieces when the stripe size is 16KB, and with 3 SSDs in the RAID that means 3 chunks for two of the SSDs and 2 chunks for the third. When you read this file, you will read 16KB blocks from all 3 SSDs at queue depths 2 and 3. If you check out ATTO, you will see that 2x 16KB @ QD 3 + 1x 16KB @ QD 2 sums to higher bandwidth than 1x 128KB @ QD 1.
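    To make the chunk/queue-depth arithmetic concrete, here's a small Python sketch of my own (a plain round-robin split, which is the usual RAID-0 layout; nothing here is specific to any particular controller):
    Code:
    def split_read(read_size_kb, stripe_kb, num_drives):
        """Return how many stripe-sized chunks of one read each drive services."""
        chunks = read_size_kb // stripe_kb      # whole chunks in the request
        per_drive = [0] * num_drives
        for i in range(chunks):
            per_drive[i % num_drives] += 1      # round-robin across the array
        return per_drive

    print(split_read(128, 16, 3))    # -> [3, 3, 2]: 16KB blocks at QD 3, 3 and 2
    print(split_read(1024, 16, 3))   # -> [22, 21, 21]: a 1MB read keeps every drive busy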

    The bandwidth when reading or writing files equal to or smaller than the stripe size will not be affected by the RAID. The sequential bandwidth of blocks of 1MB or larger will also be the same regardless of stripe size, since the data is striped over all the SSDs in blocks that are either large enough, or numerous enough, for each SSD to reach its maximum bandwidth.

    So to summarize, the benefits and drawbacks of using a small stripe size:
    + Higher performance for files/blocks above the stripe size but still relatively small (<1MB)
    - Additional computational overhead from managing more blocks in flight, although this is negligible for RAID-0.
    The added performance for small-to-medium files/blocks from a small stripe size can make a difference for OS/apps, and can be measured in PCMark Vantage.

    2. Regarding the "Most SSDs have a native block size of 32KB when erasing..." quote, this is simply false.
    Most SSDs have 4KB pages and 512KB erase blocks. Anyway, as long as you have LBA-to-physical-block abstraction, dynamic wear leveling, and garbage collection, you can forget about erase blocks and only think of pages.
    This is true for Intel's SSDs, and for most newer SSDs (2009 and newer).
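    As an illustration only (my own toy Python, using the 4KB page / 512KB erase-block figures above), here's roughly why the LBA-to-physical remapping lets you ignore erase blocks: an overwrite just lands on the next clean page and the stale copy is left for GC, so a small host write never forces a 512KB erase by itself.
    Code:
    PAGE_KB, ERASE_BLOCK_KB = 4, 512
    pages_per_erase_block = ERASE_BLOCK_KB // PAGE_KB   # 128 pages per erase block

    mapping = {}       # logical page -> (erase block, page index)
    next_free = 0      # next pre-erased page, handed out sequentially

    def write_logical_page(lba_page):
        """Redirect a logical page to the next clean physical page."""
        global next_free
        mapping[lba_page] = divmod(next_free, pages_per_erase_block)
        next_free += 1  # the old copy (if any) just goes stale; GC cleans it later

    write_logical_page(42)
    write_logical_page(42)   # overwrite goes to a fresh page, no in-place erase
    print(mapping[42])       # -> (0, 1): second page of erase block 0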

    These SSDs have "pools" of pre-erased blocks which are written to, so you don't have to erase every time you write. The garbage collection is responsible for cleaning dirty or partially dirty erase blocks and combining them into fully valid blocks in new locations; the old blocks then enter the "clean" pool.

    Most SSDs are capable of writing faster than their garbage collection can clean, and therefore you get a lower "sustained" write speed than the maximum. It will, however, return to maximum once the GC has had some time to replenish the clean pool. Some SSDs will sacrifice write amplification (by using more aggressive GC) to increase sustained sequential write.
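    Here's a rough toy model in Python (mine, not anything from actual firmware) of that clean-pool behaviour: writes run at full speed while pre-erased blocks last, then settle at whatever rate the GC can replenish them.
    Code:
    def simulate(pool_blocks, write_rate, gc_rate, steps):
        """write_rate / gc_rate are blocks per time step; returns achieved writes per step."""
        achieved = []
        pool = pool_blocks
        for _ in range(steps):
            pool += gc_rate                   # GC frees some blocks each step
            done = min(write_rate, pool)      # can only write into clean blocks
            pool -= done
            achieved.append(done)
        return achieved

    # Bursting at 10 blocks/step with GC cleaning 4 blocks/step:
    # full speed while the pool lasts, then it settles at the GC rate.
    print(simulate(pool_blocks=50, write_rate=10, gc_rate=4, steps=15))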

    Intel, on the other hand, has focused on maximizing random write performance in a way that also minimizes write amplification. This means either high temporary and really low sustained write, or, as Intel has done, fairly low sequential write that does not degrade much (this has to do with write placement, wear leveling, and garbage collection).

    This technique is what allows the X25-V to have random write equal (or close) to its sequential write: 40MB/s random write vs. 45MB/s sequential write. The X25-M could probably also get a random:sequential write ratio close to 1:1, but its controller doesn't have enough computational power to deliver that high a random write using Intel's technique.

    3. Anyway, I thought I'd post it here so everyone could see:
    The numbers he's referring to show the 16KB stripe as superior performance-wise.
    Here are the PCMark Vantage HDD scores of 3 X25-Vs in RAID-0 by stripe size:
    16KB: 74 164
    32KB: 70 364
    64KB: 63 710
    128KB: 55 045
    For those wondering, 16KB shows 540MB/s read and 131MB/s write in CrystalDiskMark 3.0, while 128KB shows 520MB/s read and 131MB/s write (1000MB length, 5 runs).

    Also, here are the AS SSD total scores by stripe size for 3 X25-Vs in RAID-0:
    16KB: 809
    32KB: 797
    64KB: 795
    128KB: 774

    Multiplying the PCMark Vantage scores by 2/3, I'd guess Anand used a 128KB stripe.
    If he'd used a 16KB stripe, the numbers would likely be around 48-49,000.
    This is supported by benchmarking done by the user Anvil, who got 47,980 points in the Vantage HDD test with 2 X25-Vs in RAID-0 off ICH10R with a 16KB stripe size (IRST 9.6 driver, write-back cache disabled).
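    For reference, here's that 2/3 scaling worked out in Python from the 3-drive scores above (the 2-drive numbers are extrapolations, not benchmarks):
    Code:
    three_drive = {16: 74164, 32: 70364, 64: 63710, 128: 55045}   # stripe KB -> Vantage HDD score
    for stripe_kb, score in three_drive.items():
        print(f"{stripe_kb:>3}KB stripe -> est. 2-drive score: {score * 2 / 3:,.0f}")
    # 16KB  -> ~49,443 (close to Anvil's measured 47,980 with 2 drives)
    # 128KB -> ~36,697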

    Sony KDL40 // ASRock P67 Extreme4 1.40 // Core i5 2500K //
    G.Skill Ripjaws 1600 4x2Gb // HD6950 2GB // Intel Gigabit CT PCIe //
    M-Audio Delta 2496 // Crucial-M4 128Gb // Hitachi 2TB // TRUE-120 //
    Antec Quattro 850W // Antec 1200 // Win7 64 bit

  2. #2
    Xtreme Member | Join Date: Feb 2009 | Location: Argentina | Posts: 388
    Hello everyone!
    I hope this is the right place to ask.
    I'm about to set up my first RAID configuration, but I have one big doubt.
    The last time the idea of building a RAID came to mind, I thought about a RAID-0 for the OS and assorted software in order to improve performance.
    But since I also like OCing, many people claimed at the time that the RAID setup could be affected and even broken by an unstable OC.

    Is this still the case with the new P67 motherboards?
    I have an Asus Maximus IV Extreme and, alongside the OC, I want to set up a RAID-0.

    What do you think?
    i7 2600k / G1. Sniper 2 / 8Gb Sniper 1600 / GTX 580 / 3.4Tb / AX1200 *Mod. / v2120 / Rheobus Extreme *Mod.

    HF 14 Livingstone / Thermochill Pa 120.3 / Bitspower Water Tank Z-Multi 250ml / MCP 355 + XSPC Laing DDC Acetal Top / Bitspower Matt Black Fittings Army / NoiseBlocker Blacksilent Fans

  3. #3
    Registered User | Join Date: Apr 2008 | Location: Up State New York | Posts: 94
    Quote Originally Posted by Osterman
    Hello everyone!
    I hope this is the right place to ask.
    I'm about to set up my first RAID configuration, but I have one big doubt.
    The last time the idea of building a RAID came to mind, I thought about a RAID-0 for the OS and assorted software in order to improve performance.
    But since I also like OCing, many people claimed at the time that the RAID setup could be affected and even broken by an unstable OC.

    Is this still the case with the new P67 motherboards?
    I have an Asus Maximus IV Extreme and, alongside the OC, I want to set up a RAID-0.

    What do you think?
    As long as it's just for overclocking, fine, but if you have important stuff you CANNOT afford to lose, put it on a separate hard drive. Then if you lose the RAID-0 you won't lose the important info. Not sure why, but above a certain point Asus boards seem to lose the hard drives when overclocking, like on the RIVE board; plus it's not SATA 6Gb/s like they said it was when it was new. I have a broken RAID currently: it still boots, but in the BIOS during boot the Intel controller shows one of the discs as bad.
    RIVE/3930K Water
    16GB Gskill Ripjaw X 2133
    2X Intel CherryVille 520 60GB RAID 0
    2X HD 7970 VisionTec
    Corsair AX1200

    Biostar TP45HP, GA-X48-DQ6 , Maximus IV Gene Z
