
Thread: RAID And You (A Guide To RAID-0/1/5/6/xx)

  1. #76
    Xtreme Member
    Join Date
    Mar 2005
    Location
    Belgium
    Posts
    293
    Quote Originally Posted by Serra View Post

    4c. RAID-0 Performance Scaling With # of Drives
    One of the things I have always hated seeing here is people who have RAID-0 arrays with 7 or 8 drives. Aside from the clear danger of disk failure, RAID-0 may scale well with a few drives, but much less so as you add more. For example, versus one disk and assuming theoretical maximums:

    1 Disk = Baseline
    2 Disks = 1/2 Time Decrease = 50% performance increase vs. 1 disk
    3 Disks = 2/3 Time Decrease = 16% performance increase vs. 2 disks
    4 Disks = 3/4 Time Decrease = 9% performance increase vs. 3 disks
    5 Disks = 4/5 Time Decrease = 5% performance increase vs. 4 disks
    6 Disks = 5/6 Time Decrease = 3% performance increase vs. 5 disks

    And one must take into account the fact that as the number of drives increases, so too does the minimum size of the file required to be considered a "large" file. Add in the fact that overhead alone accounts for a few % of performance and you can see that past 3 disks your *theoretical maximum* increase is sitting in the low to mid single digit range.
    This is so not true. RAID 0 scales far better with more drives. Only the access time gets higher, and that's the only downfall in performance.
    E6400@3.2GHZ | P5B deluxe |8800GTS 512MB | 2*250GB Hitachi + 500GB WD|
    Aerocool masstige | Seasonic 600W |
    MX510@450HZ | 24" LG 245WP
    Cooled by: Swiftech storm - MCP 655 - Maze4/EK vodni block - Nexxos Dual Extreme

    A64 3000+@2.9ghz // XP M 2500+@2.5ghz // XP M 2600+@2.6ghz benchable at 2.9ghz superpi 35s //
    AMD opteron 146@3GHZ CABYE APMW batch 0076

  2. #77
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    The problem is generally the test environment. What Serra observes is very likely due to both a stripe size that is too large and not enough outstanding commands in the queue. Properly sizing an array is not as simple as just throwing disks at it (though even our SAN team here has fallen into that trap numerous times). There is no single correct setup/design that will work for all situations. Empirical evidence is very alluring but is very limited in application. (I always think of it as the separation between physicists and engineers. :P )

    |.Server/Storage System.............|.Gaming/Work System..............................|.Sundry...................|
    |.Supermicro X8DTH-6f...............|.Asus Z9PE-D8 WS.................................|.HP LP3065 30" LCD Monitor|
    |.(2) Xeon X5690....................|.2xE5-2643 v2....................................|.Minolta magicolor 7450...|
    |.(192GB) Samsung PC10600 ECC.......|.2xEVGA nVidia GTX670 4GB........................|.Nikon Coolscan 9000......|
    |.800W Redundant PSU................|.(8x8GB) Kingston DDR3-1600 ECC..................|.Quantum LTO-4HH..........|
    |.NEC Slimline DVD RW DL............|.Corsair AX1200..................................|..........................|
    |.(..6) LSI 9200-8e HBAs............|.Lite-On iHBS112.................................|.Dell D820 Laptop.........|
    |.(..8) ST9300653SS (300GB) (RAID0).|.PA120.3, Apogee, MCW N&S bridge.................|...2.33GHz; 8GB RAM;......|
    |.(112) ST2000DL003 (2TB) (RAIDZ2)..|.(1) Areca ARC1880ix-8 512MiB Cache..............|...DVDRW; 128GB SSD.......|
    |.(..2) ST9146803SS (146GB) (RAID-1)|.(8) Intel SSD 520 240GB (RAID6).................|...Ubuntu 12.04 64bit.....|
    |.Ubuntu 12.04 64bit Server.........|.Windows 7 x64 Pro...............................|..........................|

  3. #78
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,982
    Quote Originally Posted by 187(V)URD@ View Post
    This is so not true. RAID 0 scales far better with more drives. Only the access time gets higher, and that's the only downfall in performance.
    What part do you have a particular issue with? The theoretical changes were just plain math, and the fact is that 4 disks vs. 3 disks (for example) *cannot* offer more than a 9% speed increase... which isn't really that high. Once you shave off a percent or two for overhead (and yes, this will depend on the hardware setup)... yeah, that's low to mid single digits. If you have a method of math that suggests otherwise, I would be interested to see it.
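    For what it's worth, the "plain math" here can be checked with a short sketch (a hypothetical model, not from the guide: single-disk transfer time normalized to 1.0, perfect striping assumed):

    ```python
    # Theoretical RAID-0 large-sequential-transfer time, normalized so one
    # disk takes 1.0 time units. With perfect striping, n disks take 1/n.
    def transfer_time(n_disks: int) -> float:
        return 1.0 / n_disks

    # Marginal time saved by adding one more disk, expressed as a share of
    # the single-disk baseline -- the figures quoted in the guide.
    for n in range(2, 7):
        saved = transfer_time(n - 1) - transfer_time(n)
        print(f"{n} disks: saves {saved:.1%} of single-disk time vs {n-1} disks")
    ```

    This prints roughly 50%, 17%, 8%, 5%, and 3% for 2 through 6 disks, i.e. the diminishing returns in the guide's table.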
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  4. #79
    Xtreme Enthusiast
    Join Date
    Dec 2005
    Location
    Peoples Republic of Berkeley (PRB), USA
    Posts
    928
    Thoughts on RAID-50 and 60?

  5. #80
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,982
    Quote Originally Posted by halo112358 View Post
    Thoughts on RAID-50 and 60?
    Implement if you have the cash

    Really though, it depends on what your budget is and what your goals are. If I may be so bold as to make assumptions about your budget: since you need to buy a minimum of 6 drives and a hardware controller to get it set up, I expect you won't be buying SSDs. I'll further suggest that, because of the enormous storage capacity of even relatively inexpensive platter-based hard drives, you won't be using more than a fraction of the resultant storage capacity. As a result, I would propose the question: "Do you require a parity RAID level, or would it be equally or more effective to just buy a single, large hard drive to regularly back up full drive images to?" (though you may plan to do that in addition).

    The question comes up because a lot of people don't really need parity-based RAID levels for home solutions. Most people can stand a longer wait when a drive fails, and if they can, it is often cheaper (for the same performance level) to simply perform regular backups. At the same time, backups are on occasion more effective than RAID solutions because they offer protection against viruses, some corruption issues, etc.

    On a bit of an aside as far as the parity calculations go, I should also mention something I had only touched on in the guide but did not go into depth on at the time of writing. Mostly I'm thinking about the increasing read/write speeds of drives and the extremely low access times offered by SSDs: read/write speeds have effectively more than doubled over what was available even a year ago (high-end SSD vs. a WD Raptor) and access times have dropped dramatically (same drive comparison), so the chance of bottlenecking a hardware RAID card has increased noticeably. There have been a few new chips made in the last year or two that could probably handle most setups, but what I'm saying is simply this: take a good look at the hardware controller you plan to use and make sure it can handle whatever kind of disk you plan on using.

  6. #81
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    A question came up recently about network file server disk I/O load and performance under different RAID types. I didn't have anything saved previously, so I ran some simple tests with their request size (this was for large files shared over an SMB v1 network, so block sizes are limited to 64KiB) to illustrate some RAID points to them. Figured it may be of interest here, as it shows nicely the performance of the RAID levels with random workloads and how they handle them, everything else being equal.

    This is not intended to be absolute (it depends on the RAID implementation, among other things), but it is good for illustration purposes: it helps people get their minds around concepts like data-instance redundancy (i.e. RAID-1 or 10 has more IOPS than any other RAID for reads) and where the write penalties show up for the parity RAIDs, et al.

    All OS and drive write caching is disabled; only the Areca card has a write cache (with BBU) and normal read-ahead, but the data size is 24GiB against just 512MiB of card cache, to mitigate that. This is more akin to showing how it would perform under a proper deployment.
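    One quick way to read the runs below: divide each level's average read IOPS by its average write IOPS to get an *apparent* write penalty (a rough sketch using the averages reported in this post; it ignores cache and seek effects, and for RAID 10 the read side is inflated because reads can be served from either mirror copy):

    ```python
    # 100%-random averages from the xdd runs in this post (64 KiB requests).
    measured = {
        "RAID0":  {"read": 788.40,  "write": 535.71},
        "RAID10": {"read": 1301.88, "write": 487.72},
        "RAID5":  {"read": 766.79,  "write": 227.60},
    }

    for level, io in measured.items():
        # Ratio of read to write IOPS approximates the per-write disk-op
        # multiplier for that level (very roughly).
        print(f"{level}: apparent write penalty ~ {io['read'] / io['write']:.1f}")
    ```

    RAID 5 lands near its textbook write penalty of 4 (read-modify-write of data plus parity), while RAID 0 stays close to 1, which is the pattern the raw logs below bear out.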


    RAID 0:
    Code:
    Areca 1680ix-24 512MB Cache; 
    HDD Read Ahead Cache: Enabled
    Volume Data Read Ahead: Normal
    HDD Queue Depth: 32
    Disk Write Cache Mode: Disabled
    
    RAID 0; 8KiB Stripe Size; 8 x ST973451SS (73GB 2.5" SFF SAS 15K Savvio)
    
    Windows XP x64 SP2, NTFS, 32GB Partition; no alignment (file system start sector 63)
    =====================================================================================================
    
    H:\xdd\bin>xdd.exe -verbose -op read -targets 1 S24GiB -dio -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 50000000 -queuedepth 128
    
    IOIOIOIOIOIOIOIOIOIOI XDD version 6.5.013007.0001 IOIOIOIOIOIOIOIOIOIOIOI
    xdd - I/O Performance Inc. Copyright 1992-2007
    Starting time for this run, Mon Dec 22 11:23:07 2008
    
    ID for this run, 'No ID Specified'
    Maximum Process Priority, disabled
    Passes, 3
    Pass Delay in seconds, 0
    Maximum Error Threshold, 0
    Target Offset, 0
    I/O Synchronization, 0
    Total run-time limit in seconds, 0
    Output file name, stdout
    CSV output file name,
    Error output file name, stderr
    Pass seek randomization, disabled
    File write synchronization, disabled
    Pass synchronization barriers, enabled
    Number of Targets, 1
    Number of I/O Threads, 128
    
    Computer Name, ANALYTICAL, User Name, stcost
    Operating System Info: NT 5.2 Build 3790 Service Pack 2
    Page size in bytes, 4096
    Number of processors on this system, 4
    Megabytes of physical memory, 4095
    Seconds before starting, 0
            Target[0] Q[0], S24GiB
                    Target directory, "./"
                    Process ID, 2376
                    Thread ID, 2376
                    Processor, all/any
                    Read/write ratio, 100.00,  0.00
                    Throttle in MB/sec,   0.00
                    Per-pass time limit in seconds, 0
                    Blocksize in bytes, 512
                    Request size, 128, blocks, 65536, bytes
                    Number of Requests, 1024
                    Start offset, 0
                    Number of MegaBytes, 8192
                    Pass Offset in blocks, 0
                    I/O memory buffer is a normal memory buffer
                    I/O memory buffer alignment in bytes, 4096
                    Data pattern in buffer, '0x00'
                    Data buffer verification is disabled.
                    Direct I/O, enabled
                    Seek pattern, queued_interleaved
                    Seek range, 50000000
                    Preallocation, 0
                    Queue Depth, 128
                    Timestamping, disabled
                    Delete file, disabled
    
                         T  Q       Bytes      Ops    Time      Rate      IOPS   Latency     %CPU  OP_Type    ReqSize
    TARGET   PASS0001    0 128    8589934592 131072 166.256    51.667    788.38    0.0013     2.36   read       65536
    TARGET   PASS0002    0 128    8589934592 131072 166.541    51.578    787.02    0.0013     2.33   read       65536
    TARGET   PASS0003    0 128    8589934592 131072 166.186    51.689    788.71    0.0013     2.72   read       65536
    TARGET   Average     0 128   25769803776 393216 498.751    51.669    788.40    0.0013     2.47   read       65536
             Combined    1 128   25769803776 393216 498.751    51.669    788.40    0.0013     2.43   read       65536
    Ending time for this run, Mon Dec 22 11:31:28 2008
    
    
    H:\xdd\bin>xdd.exe -verbose -op write -targets 1 S24GiB -dio -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 50000000 -queuedepth 128
    
    IOIOIOIOIOIOIOIOIOIOI XDD version 6.5.013007.0001 IOIOIOIOIOIOIOIOIOIOIOI
    xdd - I/O Performance Inc. Copyright 1992-2007
    Starting time for this run, Mon Dec 22 11:35:06 2008
    
    ID for this run, 'No ID Specified'
    Maximum Process Priority, disabled
    Passes, 3
    Pass Delay in seconds, 0
    Maximum Error Threshold, 0
    Target Offset, 0
    I/O Synchronization, 0
    Total run-time limit in seconds, 0
    Output file name, stdout
    CSV output file name,
    Error output file name, stderr
    Pass seek randomization, disabled
    File write synchronization, disabled
    Pass synchronization barriers, enabled
    Number of Targets, 1
    Number of I/O Threads, 128
    
    Computer Name, ANALYTICAL, User Name, stcost
    Operating System Info: NT 5.2 Build 3790 Service Pack 2
    Page size in bytes, 4096
    Number of processors on this system, 4
    Megabytes of physical memory, 4095
    Seconds before starting, 0
            Target[0] Q[0], S24GiB
                    Target directory, "./"
                    Process ID, 2464
                    Thread ID, 2464
                    Processor, all/any
                    Read/write ratio,  0.00, 100.00
                    Throttle in MB/sec,   0.00
                    Per-pass time limit in seconds, 0
                    Blocksize in bytes, 512
                    Request size, 128, blocks, 65536, bytes
                    Number of Requests, 1024
                    Start offset, 0
                    Number of MegaBytes, 8192
                    Pass Offset in blocks, 0
                    I/O memory buffer is a normal memory buffer
                    I/O memory buffer alignment in bytes, 4096
                    Data pattern in buffer, '0x00'
                    Data buffer verification is disabled.
                    Direct I/O, enabled
                    Seek pattern, queued_interleaved
                    Seek range, 50000000
                    Preallocation, 0
                    Queue Depth, 128
                    Timestamping, disabled
                    Delete file, disabled
    
                         T  Q       Bytes      Ops    Time      Rate      IOPS   Latency     %CPU  OP_Type    ReqSize
    TARGET   PASS0001    0 128    8589934592 131072 240.651    35.695    544.66    0.0018     2.10   write       65536
    TARGET   PASS0002    0 128    8589934592 131072 246.432    34.857    531.88    0.0019     2.27   write       65536
    TARGET   PASS0003    0 128    8589934592 131072 246.922    34.788    530.82    0.0019     1.97   write       65536
    TARGET   Average     0 128   25769803776 393216 734.004    35.109    535.71    0.0019     2.12   write       65536
             Combined    1 128   25769803776 393216 734.004    35.109    535.71    0.0019     2.12   write       65536
    Ending time for this run, Mon Dec 22 11:47:21 2008
    RAID 1+0 (10):
    Code:
    Areca 1680ix-24 512MB Cache; 
    HDD Read Ahead Cache: Enabled
    Volume Data Read Ahead: Normal
    HDD Queue Depth: 32
    Disk Write Cache Mode: Disabled
    
    RAID 1+0; 8KiB Stripe Size; 8 x ST973451SS (73GB 2.5" SFF SAS 15K Savvio)
    
    Windows XP x64 SP2, NTFS, 32GB Partition; no alignment (file system start sector 63)
    =====================================================================================================
    
    
    
    H:\xdd\bin>xdd.exe -verbose -op read -targets 1 S24GiB -dio -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 50000000 -queuedepth 128
    
    IOIOIOIOIOIOIOIOIOIOI XDD version 6.5.013007.0001 IOIOIOIOIOIOIOIOIOIOIOI
    xdd - I/O Performance Inc. Copyright 1992-2007
    Starting time for this run, Sun Dec 21 15:37:11 2008
    
    ID for this run, 'No ID Specified'
    Maximum Process Priority, disabled
    Passes, 3
    Pass Delay in seconds, 0
    Maximum Error Threshold, 0
    Target Offset, 0
    I/O Synchronization, 0
    Total run-time limit in seconds, 0
    Output file name, stdout
    CSV output file name,
    Error output file name, stderr
    Pass seek randomization, disabled
    File write synchronization, disabled
    Pass synchronization barriers, enabled
    Number of Targets, 1
    Number of I/O Threads, 128
    
    Computer Name, ANALYTICAL, User Name, stcost
    Operating System Info: NT 5.2 Build 3790 Service Pack 2
    Page size in bytes, 4096
    Number of processors on this system, 4
    Megabytes of physical memory, 4095
    Seconds before starting, 0
            Target[0] Q[0], S24GiB
                    Target directory, "./"
                    Process ID, 3952
                    Thread ID, 3952
                    Processor, all/any
                    Read/write ratio, 100.00,  0.00
                    Throttle in MB/sec,   0.00
                    Per-pass time limit in seconds, 0
                    Blocksize in bytes, 512
                    Request size, 128, blocks, 65536, bytes
                    Number of Requests, 1024
                    Start offset, 0
                    Number of MegaBytes, 8192
                    Pass Offset in blocks, 0
                    I/O memory buffer is a normal memory buffer
                    I/O memory buffer alignment in bytes, 4096
                    Data pattern in buffer, '0x00'
                    Data buffer verification is disabled.
                    Direct I/O, enabled
                    Seek pattern, queued_interleaved
                    Seek range, 50000000
                    Preallocation, 0
                    Queue Depth, 128
                    Timestamping, disabled
                    Delete file, disabled
    
                         T  Q       Bytes      Ops    Time      Rate      IOPS   Latency     %CPU  OP_Type    ReqSize
    TARGET   PASS0001    0 128    8589934592 131072 100.944    85.096    1298.46    0.0008     3.46   read       65536
    TARGET   PASS0002    0 128    8589934592 131072 101.129    84.941    1296.09    0.0008     3.55   read       65536
    TARGET   PASS0003    0 128    8589934592 131072 100.580    85.404    1303.16    0.0008     3.61   read       65536
    TARGET   Average     0 128   25769803776 393216 302.036    85.320    1301.88    0.0008     3.54   read       65536
             Combined    1 128   25769803776 393216 302.036    85.320    1301.88    0.0008     3.49   read       65536
    Ending time for this run, Sun Dec 21 15:42:15 2008
    
    
    H:\xdd\bin>xdd.exe -verbose -op write -targets 1 S24GiB -dio -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 50000000 -queuedepth 128
    
    IOIOIOIOIOIOIOIOIOIOI XDD version 6.5.013007.0001 IOIOIOIOIOIOIOIOIOIOIOI
    xdd - I/O Performance Inc. Copyright 1992-2007
    Starting time for this run, Sun Dec 21 15:44:02 2008
    
    ID for this run, 'No ID Specified'
    Maximum Process Priority, disabled
    Passes, 3
    Pass Delay in seconds, 0
    Maximum Error Threshold, 0
    Target Offset, 0
    I/O Synchronization, 0
    Total run-time limit in seconds, 0
    Output file name, stdout
    CSV output file name,
    Error output file name, stderr
    Pass seek randomization, disabled
    File write synchronization, disabled
    Pass synchronization barriers, enabled
    Number of Targets, 1
    Number of I/O Threads, 128
    
    Computer Name, ANALYTICAL, User Name, stcost
    Operating System Info: NT 5.2 Build 3790 Service Pack 2
    Page size in bytes, 4096
    Number of processors on this system, 4
    Megabytes of physical memory, 4095
    Seconds before starting, 0
            Target[0] Q[0], S24GiB
                    Target directory, "./"
                    Process ID, 1600
                    Thread ID, 1600
                    Processor, all/any
                    Read/write ratio,  0.00, 100.00
                    Throttle in MB/sec,   0.00
                    Per-pass time limit in seconds, 0
                    Blocksize in bytes, 512
                    Request size, 128, blocks, 65536, bytes
                    Number of Requests, 1024
                    Start offset, 0
                    Number of MegaBytes, 8192
                    Pass Offset in blocks, 0
                    I/O memory buffer is a normal memory buffer
                    I/O memory buffer alignment in bytes, 4096
                    Data pattern in buffer, '0x00'
                    Data buffer verification is disabled.
                    Direct I/O, enabled
                    Seek pattern, queued_interleaved
                    Seek range, 50000000
                    Preallocation, 0
                    Queue Depth, 128
                    Timestamping, disabled
                    Delete file, disabled
    
                         T  Q       Bytes      Ops    Time      Rate      IOPS   Latency     %CPU  OP_Type    ReqSize
    TARGET   PASS0001    0 128    8589934592 131072 267.955    32.057    489.16    0.0020     1.79   write       65536
    TARGET   PASS0002    0 128    8589934592 131072 269.735    31.846    485.93    0.0021     1.77   write       65536
    TARGET   PASS0003    0 128    8589934592 131072 268.538    31.988    488.09    0.0020     1.80   write       65536
    TARGET   Average     0 128   25769803776 393216 806.226    31.964    487.72    0.0021     1.79   write       65536
             Combined    1 128   25769803776 393216 806.226    31.964    487.72    0.0021     1.78   write       65536
    Ending time for this run, Sun Dec 21 15:57:30 2008
    RAID 5:
    Code:
    Areca 1680ix-24 512MB Cache; 
    HDD Read Ahead Cache: Enabled
    Volume Data Read Ahead: Normal
    HDD Queue Depth: 32
    Disk Write Cache Mode: Disabled
    
    
    RAID 5; 8KiB Stripe Size; 8 x ST973451SS (73GB 2.5" SFF SAS 15K Savvio)
    
    Windows XP x64 SP2, NTFS, 32GB Partition; no alignment (file system start sector 63)
    =====================================================================================================
    
    
    H:\xdd\bin>xdd.exe -verbose -op read -targets 1 S24GiB -dio -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 50000000 -queuedepth 128
    
    IOIOIOIOIOIOIOIOIOIOI XDD version 6.5.013007.0001 IOIOIOIOIOIOIOIOIOIOIOI
    xdd - I/O Performance Inc. Copyright 1992-2007
    Starting time for this run, Sun Dec 21 22:25:06 2008
    
    ID for this run, 'No ID Specified'
    Maximum Process Priority, disabled
    Passes, 3
    Pass Delay in seconds, 0
    Maximum Error Threshold, 0
    Target Offset, 0
    I/O Synchronization, 0
    Total run-time limit in seconds, 0
    Output file name, stdout
    CSV output file name,
    Error output file name, stderr
    Pass seek randomization, disabled
    File write synchronization, disabled
    Pass synchronization barriers, enabled
    Number of Targets, 1
    Number of I/O Threads, 128
    
    Computer Name, ANALYTICAL, User Name, stcost
    Operating System Info: NT 5.2 Build 3790 Service Pack 2
    Page size in bytes, 4096
    Number of processors on this system, 4
    Megabytes of physical memory, 4095
    Seconds before starting, 0
            Target[0] Q[0], S24GiB
                    Target directory, "./"
                    Process ID, 3892
                    Thread ID, 3892
                    Processor, all/any
                    Read/write ratio, 100.00,  0.00
                    Throttle in MB/sec,   0.00
                    Per-pass time limit in seconds, 0
                    Blocksize in bytes, 512
                    Request size, 128, blocks, 65536, bytes
                    Number of Requests, 1024
                    Start offset, 0
                    Number of MegaBytes, 8192
                    Pass Offset in blocks, 0
                    I/O memory buffer is a normal memory buffer
                    I/O memory buffer alignment in bytes, 4096
                    Data pattern in buffer, '0x00'
                    Data buffer verification is disabled.
                    Direct I/O, enabled
                    Seek pattern, queued_interleaved
                    Seek range, 50000000
                    Preallocation, 0
                    Queue Depth, 128
                    Timestamping, disabled
                    Delete file, disabled
    
                         T  Q       Bytes      Ops    Time      Rate      IOPS   Latency     %CPU  OP_Type    ReqSize
    TARGET   PASS0001    0 128    8589934592 131072 171.366    50.126    764.87    0.0013     2.41   read       65536
    TARGET   PASS0002    0 128    8589934592 131072 170.789    50.296    767.45    0.0013     2.49   read       65536
    TARGET   PASS0003    0 128    8589934592 131072 171.240    50.163    765.43    0.0013     2.43   read       65536
    TARGET   Average     0 128   25769803776 393216 512.806    50.253    766.79    0.0013     2.44   read       65536
             Combined    1 128   25769803776 393216 513.000    50.234    766.50    0.0013     2.40   read       65536
    Ending time for this run, Sun Dec 21 22:33:40 2008
    
    
    H:\xdd\bin>xdd.exe -verbose -op write -targets 1 S24GiB -dio -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 50000000 -queuedepth 128
    
    IOIOIOIOIOIOIOIOIOIOI XDD version 6.5.013007.0001 IOIOIOIOIOIOIOIOIOIOIOI
    xdd - I/O Performance Inc. Copyright 1992-2007
    Starting time for this run, Sun Dec 21 22:33:59 2008
    
    ID for this run, 'No ID Specified'
    Maximum Process Priority, disabled
    Passes, 3
    Pass Delay in seconds, 0
    Maximum Error Threshold, 0
    Target Offset, 0
    I/O Synchronization, 0
    Total run-time limit in seconds, 0
    Output file name, stdout
    CSV output file name,
    Error output file name, stderr
    Pass seek randomization, disabled
    File write synchronization, disabled
    Pass synchronization barriers, enabled
    Number of Targets, 1
    Number of I/O Threads, 128
    
    Computer Name, ANALYTICAL, User Name, stcost
    Operating System Info: NT 5.2 Build 3790 Service Pack 2
    Page size in bytes, 4096
    Number of processors on this system, 4
    Megabytes of physical memory, 4095
    Seconds before starting, 0
            Target[0] Q[0], S24GiB
                    Target directory, "./"
                    Process ID, 2364
                    Thread ID, 2364
                    Processor, all/any
                    Read/write ratio,  0.00, 100.00
                    Throttle in MB/sec,   0.00
                    Per-pass time limit in seconds, 0
                    Blocksize in bytes, 512
                    Request size, 128, blocks, 65536, bytes
                    Number of Requests, 1024
                    Start offset, 0
                    Number of MegaBytes, 8192
                    Pass Offset in blocks, 0
                    I/O memory buffer is a normal memory buffer
                    I/O memory buffer alignment in bytes, 4096
                    Data pattern in buffer, '0x00'
                    Data buffer verification is disabled.
                    Direct I/O, enabled
                    Seek pattern, queued_interleaved
                    Seek range, 50000000
                    Preallocation, 0
                    Queue Depth, 128
                    Timestamping, disabled
                    Delete file, disabled
    
                         T  Q       Bytes      Ops    Time      Rate      IOPS   Latency     %CPU  OP_Type    ReqSize
    TARGET   PASS0001    0 128    8589934592 131072 572.414    15.007    228.98    0.0044     0.89   write       65536
    TARGET   PASS0002    0 128    8589934592 131072 578.774    14.842    226.46    0.0044     0.87   write       65536
    TARGET   PASS0003    0 128    8589934592 131072 576.499    14.900    227.36    0.0044     0.89   write       65536
    TARGET   Average     0 128   25769803776 393216 1727.685    14.916    227.60    0.0044     0.88   write       65536
             Combined    1 128   25769803776 393216 1727.685    14.916    227.60    0.0044     0.88   write       65536
    Ending time for this run, Sun Dec 21 23:02:49 2008
    RAID 6:
    Code:
    Areca 1680ix-24 512MB Cache; 
    HDD Read Ahead Cache: Enabled
    Volume Data Read Ahead: Normal
    HDD Queue Depth: 32
    Disk Write Cache Mode: Disabled
    
    
    RAID 6; 8KiB Stripe Size; 8 x ST973451SS (73GB 2.5" SFF SAS 15K Savvio)
    
    Windows XP x64 SP2, NTFS, 32GB Partition; no alignment (file system start sector 63)
    =====================================================================================================
    
    
    H:\xdd\bin>xdd.exe -verbose -op read -targets 1 S24GiB -dio -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 50000000 -queuedepth 128
    
    IOIOIOIOIOIOIOIOIOIOI XDD version 6.5.013007.0001 IOIOIOIOIOIOIOIOIOIOIOI
    xdd - I/O Performance Inc. Copyright 1992-2007
    Starting time for this run, Sun Dec 21 19:30:13 2008
    
    ID for this run, 'No ID Specified'
    Maximum Process Priority, disabled
    Passes, 3
    Pass Delay in seconds, 0
    Maximum Error Threshold, 0
    Target Offset, 0
    I/O Synchronization, 0
    Total run-time limit in seconds, 0
    Output file name, stdout
    CSV output file name,
    Error output file name, stderr
    Pass seek randomization, disabled
    File write synchronization, disabled
    Pass synchronization barriers, enabled
    Number of Targets, 1
    Number of I/O Threads, 128
    
    Computer Name, ANALYTICAL, User Name, stcost
    Operating System Info: NT 5.2 Build 3790 Service Pack 2
    Page size in bytes, 4096
    Number of processors on this system, 4
    Megabytes of physical memory, 4095
    Seconds before starting, 0
            Target[0] Q[0], S24GiB
                    Target directory, "./"
                    Process ID, 3160
                    Thread ID, 3160
                    Processor, all/any
                    Read/write ratio, 100.00,  0.00
                    Throttle in MB/sec,   0.00
                    Per-pass time limit in seconds, 0
                    Blocksize in bytes, 512
                    Request size, 128, blocks, 65536, bytes
                    Number of Requests, 1024
                    Start offset, 0
                    Number of MegaBytes, 8192
                    Pass Offset in blocks, 0
                    I/O memory buffer is a normal memory buffer
                    I/O memory buffer alignment in bytes, 4096
                    Data pattern in buffer, '0x00'
                    Data buffer verification is disabled.
                    Direct I/O, enabled
                    Seek pattern, queued_interleaved
                    Seek range, 50000000
                    Preallocation, 0
                    Queue Depth, 128
                    Timestamping, disabled
                    Delete file, disabled
    
                         T  Q       Bytes      Ops    Time      Rate      IOPS   Latency     %CPU  OP_Type    ReqSize
    TARGET   PASS0001    0 128    8589934592 131072 173.154    49.609    756.97    0.0013     2.45   read       65536
    TARGET   PASS0002    0 128    8589934592 131072 171.793    50.002    762.97    0.0013     2.33   read       65536
    TARGET   PASS0003    0 128    8589934592 131072 174.457    49.238    751.31    0.0013     1.97   read       65536
    TARGET   Average     0 128   25769803776 393216 518.649    49.686    758.15    0.0013     2.25   read       65536
             Combined    1 128   25769803776 393216 519.000    49.653    757.64    0.0013     2.22   read       65536
    Ending time for this run, Sun Dec 21 19:38:54 2008
    
    
    H:\xdd\bin>xdd.exe -verbose -op write -targets 1 S24GiB -dio -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 50000000 -queuedepth 128
    
    IOIOIOIOIOIOIOIOIOIOI XDD version 6.5.013007.0001 IOIOIOIOIOIOIOIOIOIOIOI
    xdd - I/O Performance Inc. Copyright 1992-2007
    Starting time for this run, Sun Dec 21 19:45:01 2008
    
    ID for this run, 'No ID Specified'
    Maximum Process Priority, disabled
    Passes, 3
    Pass Delay in seconds, 0
    Maximum Error Threshold, 0
    Target Offset, 0
    I/O Synchronization, 0
    Total run-time limit in seconds, 0
    Output file name, stdout
    CSV output file name,
    Error output file name, stderr
    Pass seek randomization, disabled
    File write synchronization, disabled
    Pass synchronization barriers, enabled
    Number of Targets, 1
    Number of I/O Threads, 128
    
    Computer Name, ANALYTICAL, User Name, stcost
    Operating System Info: NT 5.2 Build 3790 Service Pack 2
    Page size in bytes, 4096
    Number of processors on this system, 4
    Megabytes of physical memory, 4095
    Seconds before starting, 0
            Target[0] Q[0], S24GiB
                    Target directory, "./"
                    Process ID, 2192
                    Thread ID, 2192
                    Processor, all/any
                    Read/write ratio,  0.00, 100.00
                    Throttle in MB/sec,   0.00
                    Per-pass time limit in seconds, 0
                    Blocksize in bytes, 512
                    Request size, 128, blocks, 65536, bytes
                    Number of Requests, 1024
                    Start offset, 0
                    Number of MegaBytes, 8192
                    Pass Offset in blocks, 0
                    I/O memory buffer is a normal memory buffer
                    I/O memory buffer alignment in bytes, 4096
                    Data pattern in buffer, '0x00'
                    Data buffer verification is disabled.
                    Direct I/O, enabled
                    Seek pattern, queued_interleaved
                    Seek range, 50000000
                    Preallocation, 0
                    Queue Depth, 128
                    Timestamping, disabled
                    Delete file, disabled
    
                         T  Q       Bytes      Ops    Time      Rate      IOPS   Latency     %CPU  OP_Type    ReqSize
    TARGET   PASS0001    0 128    8589934592 131072 702.788    12.223    186.50    0.0054     0.63   write       65536
    TARGET   PASS0002    0 128    8589934592 131072 708.464    12.125    185.01    0.0054     0.64   write       65536
    TARGET   PASS0003    0 128    8589934592 131072 712.876    12.050    183.86    0.0054     0.72   write       65536
    TARGET   Average     0 128   25769803776 393216 2124.127    12.132    185.12    0.0054     0.66   write       65536
             Combined    1 128   25769803776 393216 2124.127    12.132    185.12    0.0054     0.66   write       65536
    Ending time for this run, Sun Dec 21 20:20:28 2008
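As a quick sanity check on output like the above, the Rate, IOPS, and ReqSize columns should be mutually consistent: Rate (which xdd appears to report in decimal MB/s) divided by the request size gives the IOPS figure. A small sketch (the figures are copied from the read and write passes above; the check itself is my own, not part of xdd):

```python
# Sanity-check the relationship between the Rate, IOPS, and ReqSize
# columns in xdd output: Rate (decimal MB/s) ~= IOPS * request_size.
runs = [
    # (rate_MBps, iops, reqsize_bytes)
    (49.609, 756.97, 65536),  # read, PASS0001
    (12.223, 186.50, 65536),  # write, PASS0001
]
for rate, iops, reqsize in runs:
    derived_iops = rate * 1e6 / reqsize
    print(f"reported {iops:7.2f} IOPS, derived {derived_iops:7.2f}")
    assert abs(derived_iops - iops) < 1.0  # columns agree to rounding
```

If the derived and reported IOPS disagree by more than rounding error, something in the run (or in how you're reading the table) is off.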
    Last edited by stevecs; 12-22-2008 at 01:30 PM.

    |.Server/Storage System.............|.Gaming/Work System..............................|.Sundry...................|
    |.Supermico X8DTH-6f................|.Asus Z9PE-D8 WS.................................|.HP LP3065 30"LCD Monitor.|
    |.(2) Xeon X5690....................|.2xE5-2643 v2....................................|.Minolta magicolor 7450...|
    |.(192GB) Samsung PC10600 ECC.......|.2xEVGA nVidia GTX670 4GB........................|.Nikon coolscan 9000......|
    |.800W Redundant PSU................|.(8x8GB) Kingston DDR3-1600 ECC..................|.Quantum LTO-4HH..........|
    |.NEC Slimline DVD RW DL............|.Corsair AX1200..................................|..........................|
    |.(..6) LSI 9200-8e HBAs............|.Lite-On iHBS112.................................|.Dell D820 Laptop.........|
    |.(..8) ST9300653SS (300GB) (RAID0).|.PA120.3, Apogee, MCW N&S bridge.................|...2.33Ghz; 8GB Ram;......|
    |.(112) ST2000DL003 (2TB) (RAIDZ2)..|.(1) Areca ARC1880ix-8 512MiB Cache..............|...DVDRW; 128GB SSD.......|
    |.(..2) ST9146803SS (146GB) (RAID-1)|.(8) Intel SSD 520 240GB (RAID6).................|...Ubuntu 12.04 64bit.....|
    |.Ubuntu 12.04 64bit Server.........|.Windows 7 x64 Pro...............................|..........................|

  7. #82
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Just a note that I updated the spreadsheet for RAID availability calcs located earlier in the thread (http://www.xtremesystems.org/forums/...9&postcount=52): fixed some of the typos, made it easier to compare multiple RAID scenarios, and added performance data (not really spot-on, but close to real world without going into modeling the specific M/G/1 queues and requiring a lot more info from the end user). As usual, free to use/abuse.
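For a rough idea of the math involved, here is a minimal sketch of an availability calculation (my own simplified binomial model, not the spreadsheet's actual formulas — it ignores rebuild windows and correlated failures, so treat the numbers as comparative only):

```python
# Rough sketch of RAID availability math: n independent drives, each
# with annual failure rate `afr`. RAID-5 survives 1 failure, RAID-6
# survives 2. Simplified model -- no rebuild window, no correlation.
from math import comb

def p_array_loss(n, afr, tolerated):
    """Probability that more than `tolerated` of n drives fail in a year."""
    return sum(comb(n, k) * afr**k * (1 - afr)**(n - k)
               for k in range(tolerated + 1, n + 1))

raid5 = p_array_loss(8, 0.03, 1)  # 8 drives, 3% AFR, survives 1 failure
raid6 = p_array_loss(8, 0.03, 2)  # same drives, survives 2 failures
print(f"RAID-5 loss probability: {raid5:.4%}")
print(f"RAID-6 loss probability: {raid6:.4%}")
```

The function name and the 3% AFR figure are just for illustration; the point is how quickly the second parity drive drops the loss probability.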


  8. #83
    Registered User
    Join Date
    Aug 2006
    Posts
    74

    Which raid?

    Hi,

    I have ten hard disk slots, so which is the best RAID option?



    Is it RAID 6 with five disks, or RAID 5 with five disks?

    Ugo

  9. #84
    Xtreme Member
    Join Date
    Sep 2008
    Posts
    116
    I was wondering: how do I set up RAID 1 to increase read performance? If that's not possible, what feature should I look for in a RAID controller card?

  10. #85
    Xtreme Member Gilhooley's Avatar
    Join Date
    Nov 2006
    Posts
    164
    Quote Originally Posted by panzerchaos47 View Post
    I was wondering: how do I set up RAID 1 to increase read performance? If that's not possible, what feature should I look for in a RAID controller card?
    A good RAID card will do that, but usually that means a card with its own chip/memory - so it's expensive. Today, considering performance/cost, a better solution might be 2 SSDs in RAID 1 on a motherboard controller. (If you're only looking for RAID 1 functions.)
    Q9650@4000 - Apogee GTX, Gigabyte X48-DS5, 8GB Corsair Dominator XMS2-8500, GTX480 El cheapo Asetek block, Audiophile 192 + Adam-A7, Win7

  11. #86
    Xtreme Member
    Join Date
    Sep 2008
    Posts
    116
    Thanks. I was looking for how to get the extra benefit of RAID 1, like the doubled read performance of RAID 0. I know that a few years ago, when I researched RAID 1, it didn't increase read performance, but recently I've read people saying RAID 1 will read from both drives to double read performance - making it like RAID 0 without the doubled write performance.
    I tested two drives that could each do 120 MB/s on sequential reads, but when I enabled RAID 1 on those two drives it brought them down to 111 MB/s after a couple of test runs. I tried both hardware mode on the 680i and software mode in Windows (RAID disabled in the 680i).

  12. #87
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    Did you test the random performance?
    It could be you get a boost to random read IOPS.

  13. #88
    Xtreme Member
    Join Date
    Sep 2008
    Posts
    116
    Quote Originally Posted by GullLars View Post
    Did you test the random performance?
    It could be you get a boost to random read IOPS.
    It sort of improves that.

    Top is RAID1 with 680i
    Middle without RAID
    Bottom is RAID with Windows 7
    Attached Images
    Last edited by panzerchaos47; 05-13-2010 at 02:49 PM.

  14. #89
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    RAID-1 can provide an increase in IOPS over other RAID types due to the duplication of data; however, the scenarios that would show such an improvement are highly random workloads at elevated queue depths. An example can be seen in an earlier post of mine showing an 8-drive array configured with different RAID levels.
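As a toy illustration of why the queue depth matters (my own simplification, not a real controller model): with only one outstanding request, the second copy sits idle, while a deep queue of random reads can keep both copies busy.

```python
# Toy model of RAID-1 random-read scaling: with `queue_depth` outstanding
# random reads, a mirror can serve requests from either copy, so IOPS
# scale with available parallelism up to the number of copies.
def mirror_read_iops(single_drive_iops, queue_depth, n_copies=2):
    """Random-read IOPS for a mirror, capped by available parallelism."""
    return single_drive_iops * min(queue_depth, n_copies)

print(mirror_read_iops(100, 1))  # QD1: no benefit over a single drive
print(mirror_read_iops(100, 8))  # deep queue: both copies stay busy
```

Real controllers also factor in seek locality and rotational position, so this is only the shape of the effect, not a prediction.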


  15. #90
    Xtreme Guru
    Join Date
    Jun 2010
    Location
    In the Land down -under-
    Posts
    4,439
    Nice guide Serra, maybe add a tutorial for people setting up a RAID 0? It would be very useful and handy.

    Another thing I find funny is AMD/Intel would snipe any of our Moms on a grocery run if it meant good quarterly results, and you are forever whining about what feser did?

  16. #91
    Xtreme CCIE
    Join Date
    Dec 2004
    Location
    Atlanta, GA
    Posts
    3,982
    Quote Originally Posted by Johnny87au View Post
    Nice guide serra, maby add a tutorial for people setting up a raid 0? Would be very useful and handy
    I had considered doing a guide like that, but with how quickly controllers/BIOS options change and how little ability I have to pull review hardware it just isn't in the cards.

    Add-on controllers would be possible to do that with since they change very slowly compared to most hardware, but given their costs it's just impractical if you're not getting anything free. Motherboard implementations, on the other hand, sometimes change BIOS-to-BIOS (usually just in naming, but still...), so even that would be very hard to do properly without free access.


    Maybe if any vendors are watching they could turn this problem around??
    Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.

    Xtreme Network:
    - Cisco 3560X-24P PoE Switch
    - Cisco ASA 5505 Firewall
    - Cisco 4402 Wireless LAN Controller
    - Cisco 3502i Access Point

  17. #92
    Xtreme Guru
    Join Date
    Jun 2010
    Location
    In the Land down -under-
    Posts
    4,439
    Sounds good man!


  18. #93
    Thanks to the OP for sharing; I learned a lot.

  19. #94
    Xtreme Member
    Join Date
    Feb 2009
    Location
    Germany
    Posts
    103
    Awesome guide Serra.
    Though there still is an issue I cannot figure out myself. Can I undo a raid 1 setup without having to reformat the drives?

  20. #95
    Xtreme Member
    Join Date
    Nov 2009
    Location
    Slovenia
    Posts
    178
    I have an OCZ Vertex 2 and 2 x Samsung 1TB F3 in RAID 0, but whenever I run anything on the RAID 0 a blue screen appears (0x0000007F). Please help.

  21. #96
    Registered User
    Join Date
    Oct 2006
    Location
    Kirghudu, Cowjackingstan
    Posts
    462
    Thanks to GullLars for his elaborate posts regarding SSD stripe sizes on Anandtech. Thought I'd add them here.
    1. Intel SSDs can do 90-100% of their sequential bandwidth with 16-32KB blocks @ QD 1, and at higher queue depths they can reach it with 8KB blocks. Hard disks, on the other hand, reach their maximum bandwidth around 64-128KB sequential blocks, and do not benefit noticeably from increasing the queue depth.

    When you RAID-0, files larger than the stripe size get split up into chunks equal in size to the stripe size and distributed among the units in the RAID. Say you have a 128KB file (or want to read a 128KB chunk of a larger file): this will get divided into 8 pieces when the stripe size is 16KB, and with 3 SSDs in the RAID this means 3 chunks for 2 of the SSDs and 2 chunks for the third. When you read this file, you will read 16KB blocks from all 3 SSDs at queue depths 2 and 3. If you check out ATTO, you will see 2x 16KB @ QD 3 + 1x 16KB @ QD 2 sum to higher bandwidth than 1x 128KB @ QD 1.

    The bandwidth when reading or writing files equal to or smaller than the stripe size will not be affected by the RAID. The sequential bandwidth of blocks of 1MB or larger will also be the same, since with any stripe size the data is striped over all the SSDs in blocks large enough (or in enough blocks) for each SSD to reach its maximum bandwidth.

    So to summarize, benefits and drawbacks of using a small stripe size:
    + Higher performance for files/blocks above the stripe size while still relatively small (<1MB)
    - Additional computational overhead from managing more blocks in flight, although this is negligible for RAID-0.
    The added performance on small-to-medium files/blocks from a small stripe size can make a difference for OS/apps, and can be measured in PCMark Vantage.
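The splitting described above can be sketched in a few lines (a simplified round-robin model of RAID-0 striping, starting from drive 0; the function name is mine):

```python
# Sketch of how RAID-0 splits a file into per-drive chunks, matching
# the 128KB-file / 16KB-stripe / 3-SSD example above.
def stripe_chunks(file_bytes, stripe_bytes, n_drives):
    """Return chunks-per-drive for a file striped round-robin from drive 0."""
    n_chunks = -(-file_bytes // stripe_bytes)  # ceiling division
    per_drive = [0] * n_drives
    for chunk in range(n_chunks):
        per_drive[chunk % n_drives] += 1
    return per_drive

print(stripe_chunks(128 * 1024, 16 * 1024, 3))  # -> [3, 3, 2]
```

Two SSDs get 3 chunks and one gets 2, exactly the QD 3 / QD 2 split described above.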

    2. Regarding the "Most SSD's have a native block size of 32KB when erasing......" quote, this is simply false.
    Most SSDs have 4KB pages and 512KB erase blocks. Anyway, as long as you have LBA-to-physical-block abstraction, dynamic wear leveling, and garbage collection, you can forget about erase blocks and think only of pages.
    This is true for Intel's SSDs and most newer SSDs (2009 and newer).

    These SSDs have "pools" of pre-erased blocks which are written to, so you don't have to erase every time you write. The garbage collection is responsible for cleaning dirty or partially dirty erase blocks and combining them into purely valid blocks in new locations, and the old blocks then enter the "clean" pool.

    Most SSDs are capable of writing faster than their garbage collection can clean, and therefore you get a lower "sustained" write speed than the maximum; it will, however, return to maximum once the GC has had some time to replenish the clean pool. Some SSDs will sacrifice write amplification (by using more aggressive GC) to increase sustained sequential write.

    Intel, on the other hand, has focused on maximizing random write performance in a way that also minimizes write amplification, and this means either high temporary but really low sustained write, or, as Intel has done, fairly low sequential write that does not degrade much. (This has to do with write placement, wear leveling, and garbage collection.)

    This technique is what allows the X25-V to have random write equal to sequential write (or close to: 40MB/s random write, 45MB/s sequential write). The X25-M could probably also get a random-to-sequential write ratio close to 1:1, but the controller doesn't have enough computational power to deliver that high a random write using Intel's technique.

    3. Anyway, I thought I'd post it here so everyone could see:
    The numbers he's referring to show the 16KB stripe as superior performance-wise.
    Here's the PCmark vantage HDD scores of 3 x25-V's in RAID-0 by stripe size:
    16KB: 74 164
    32KB: 70 364
    64KB: 63 710
    128KB: 55 045
    For those wondering, 16KB shows 540MB/s read and 131MB/s write in CrystalDiskMark 3.0, while 128KB shows 520MB/s read and 131MB/s write (1000MB length, 5 runs).

    Also, here are the AS SSD total scores by stripe size for 3 x25-V's in RAID-0:
    16KB: 809
    32KB: 797
    64KB: 795
    128KB: 774

    Scaling my PCMark Vantage points by 2/3 (for 2 drives instead of 3), I guess Anand used a 128KB stripe.
    If he'd used a 16KB stripe, the numbers would likely be around 48-49,000.
    This is supported by benchmarking done by the user Anvil, who got 47,980 points in the Vantage HDD test with 2 x25-V's in RAID-0 off ICH10R with a 16KB stripe size (IRST 9.6 driver, write-back cache disabled).

    Sony KDL40 // ASRock P67 Extreme4 1.40 // Core i5 2500K //
    G.Skill Ripjaws 1600 4x2Gb // HD6950 2GB // Intel Gigabit CT PCIe //
    M-Audio Delta 2496 // Crucial-M4 128Gb // Hitachi 2TB // TRUE-120 //
    Antec Quattro 850W // Antec 1200 // Win7 64 bit

  22. #97
    Xtreme Member
    Join Date
    Feb 2009
    Location
    Argentina
    Posts
    386
    Hello everyone!
    I hope to be in the right place to ask.
    I'm about to set up my first RAID configuration, but I have one big doubt.
    The last time the idea of building a RAID came to mind, I thought about RAID 0 for the OS and a variety of software in order to improve performance.
    But because I also like OCing, many people at the time claimed that the RAID setup could be affected and even get broken by an unstable OC.

    Is this still the case with the new P67 motherboards?
    I have an Asus Maximus IV Extreme and, alongside the OC, I want to set up a RAID 0.

    What do you think?
    i7 2600k / G1. Sniper 2 / 8Gb Sniper 1600 / GTX 580 / 3.4Tb / AX1200 *Mod. / v2120 / Rheobus Extreme *Mod.

    HF 14 Livingstone / Thermochill Pa 120.3 / Bitspower Water Tank Z-Multi 250ml / MCP 355 + XSPC Laing DDC Acetal Top / Bitspower Matt Black Fittings Army / NoiseBlocker Blacksilent Fans

  23. #98
    Registered User
    Join Date
    Apr 2008
    Location
    Up State New York
    Posts
    94
    Quote Originally Posted by Osterman View Post
    Hello everyone!
    I hope to be in the right place to ask.
    I'm about to set up my first RAID configuration, but I have one big doubt.
    The last time the idea of building a RAID came to mind, I thought about RAID 0 for the OS and a variety of software in order to improve performance.
    But because I also like OCing, many people at the time claimed that the RAID setup could be affected and even get broken by an unstable OC.

    Is this still the case with the new P67 motherboards?
    I have an Asus Maximus IV Extreme and, alongside the OC, I want to set up a RAID 0.

    What do you think?
    As long as it's for overclocking: if you have important stuff you CAN NOT afford to lose, put it on a separate hard drive. Then if you lose the RAID-0 you won't lose the important info. Not sure why, but above a certain point Asus boards seem to lose the hard drives when overclocking, like on the RIVE board; plus it's not SATA 6Gb/s like they said it was when it was new. I have a broken RAID currently; it still boots, but in the BIOS during boot the Intel controller shows one of the discs as bad.
    RIVE/3930K Water
    16GB Gskill Ripjaw X 2133
    2X Intel CherryVille 520 60GB Raid O
    2X HD 7970 VisionTec
    Corsair AX1200

    Biostar TP45HP, GA-X48-DQ6 , Maximus IV Gene Z
