
Thread: Raid 0 Hardware

  1. #101
    Xtreme Member
    Join Date
    Nov 2002
    Location
    www.overclockers.com.au
    Posts
    258
    I have a question about the old Mtrons (100 MB/s read, 80 MB/s write) and the new ones that are due to be released sometime this month (the 7500 series: 130 MB/s read and 120 MB/s write).

    In a RAID 0 with 4 drives, will there be any advantage in going with the 7500 series? Will the faster read/write make any difference? Is it a case where with the old ones you can get 400-500 in benchmarks, but if I go for the new ones performance will go up to 500+?
    Last edited by MnM; 07-01-2008 at 11:38 PM.

  2. #102
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,568
    Here is how I look at it.

    With the current lineup of controller cards out there, there isn't much you can do to get past the bandwidth cap of 800-850 MB/s. You can, but be ready to spend some money: you basically need fiber controllers for that, unless something new comes out.

    Access times are all pretty much the same, and access time does not scale in a RAID, so it's pretty much a non-issue.

    So pick your speed and price range. Even cheap SSDs will scale up to the cap if you have enough of them. Write speeds also go up as you add SSDs.


    However, with all that said: be careful when you make your purchase! I cannot stress this enough.

    Some of the newer SSDs are not really rated for RAID; they are going after the laptop market. I have heard some of these brands become unstable in RAID arrays. Even if they say they are rated for servers, that does not mean RAID.

    Also:

    Not all motherboards work well with SSDs and RAID. Be careful and do some research first. Plan what you want to do.

    If you go with an Intel motherboard you should be fine with 1 SATA SSD, but when you go to a RAID setup problems start happening, and generally going to a controller card fixes that.

    Do you really need 800-850 MB/s of bandwidth? If not, use 2-4 SSDs and you will be very happy.
    The best way to go is to use several SSDs in a RAID rather than 1 big SSD, as the bandwidth will scale up nicely. But in a laptop you can pretty much only use 1.
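
    For a rough feel of the scaling math, here is a tiny back-of-the-envelope sketch (plain bash; the per-drive and cap figures are just the numbers quoted in this thread, not measurements):
    Code:
    #!/bin/bash
    # Very rough RAID 0 sequential-read estimate: throughput scales with drive
    # count until you hit the controller ceiling. Numbers are assumptions.
    DRIVES=4
    PER_DRIVE_READ=130     # MB/s, e.g. the newer Mtron 7500; the old ones are ~100
    CONTROLLER_CAP=800     # MB/s, the ~800-850 MB/s card ceiling mentioned above

    RAW=$((DRIVES * PER_DRIVE_READ))
    if [ "$RAW" -gt "$CONTROLLER_CAP" ]; then
      echo "Estimated read: ~${CONTROLLER_CAP} MB/s (controller limited)"
    else
      echo "Estimated read: ~${RAW} MB/s (drive limited)"
    fi
    With only 4 drives you are nowhere near the cap either way, so faster drives should show up almost fully in the benchmarks.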

  3. #103
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Quote Originally Posted by Buckeye View Post


    Working on busting the 1GB/s barrier. In fact, going for 3-4GB/s, but I need more SSDs for that, plus some new equipment.
    you're nuts!

    details on how you're going for 3-4GB/s?

  4. #104
    Xtreme Cruncher
    Join Date
    Jun 2006
    Location
    On top of a mountain
    Posts
    4,163
    Well I thought I had seen Mad Bandwidth...but that's pretty much over the top.

    Congrats on living the dream
    20 Logs on the fire for WCG: i7 920@2.8 X3220@3.0 X3220@2.4 E8400@4.05 E6600@2.4

  5. #105
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    He can't... he would need 4-5 RAID cards for that, all running on an x4+ link, and no controller supports spanning across more than two controllers in the same system.
    Quote Originally Posted by NapalmV5 View Post
    you're nuts!

    details on how you're going for 3-4GB/s?
    P5E64_Evo/QX9650, 4x X25-E SSD - gimme speed..
    Quote Originally Posted by MR_SmartAss View Post
    Lately there has been a lot of BS(Dave_Graham where are you?)

  6. #106
    Xtreme Member
    Join Date
    Nov 2002
    Location
    www.overclockers.com.au
    Posts
    258
    Quote Originally Posted by Buckeye View Post
    Here is how I look at it.

    With the current lineup of controller cards out there, there isn't much you can do to get past the bandwidth cap of 800-850 MB/s. You can, but be ready to spend some money: you basically need fiber controllers for that, unless something new comes out.

    Access times are all pretty much the same, and access time does not scale in a RAID, so it's pretty much a non-issue.

    So pick your speed and price range. Even cheap SSDs will scale up to the cap if you have enough of them. Write speeds also go up as you add SSDs.


    However, with all that said: be careful when you make your purchase! I cannot stress this enough.

    Some of the newer SSDs are not really rated for RAID; they are going after the laptop market. I have heard some of these brands become unstable in RAID arrays. Even if they say they are rated for servers, that does not mean RAID.

    Also:

    Not all motherboards work well with SSDs and RAID. Be careful and do some research first. Plan what you want to do.

    If you go with an Intel motherboard you should be fine with 1 SATA SSD, but when you go to a RAID setup problems start happening, and generally going to a controller card fixes that.

    Do you really need 800-850 MB/s of bandwidth? If not, use 2-4 SSDs and you will be very happy.
    The best way to go is to use several SSDs in a RAID rather than 1 big SSD, as the bandwidth will scale up nicely. But in a laptop you can pretty much only use 1.
    Thanks for the reply.
    Well, what I plan to do is get 4 SSDs (Mtron / Mtron 7500 Pro / Memoright GT S series / or the new OCZ Core series) and a RAID card (an Areca, or most probably an Adaptec 5405) and run RAID 0.

    Now, given that setup (and taking into consideration that 4 SSDs will not max out the capacity of my RAID card), is it better to go for the newer SSDs (like the Mtron Pro 7500, 130 read / 120 write), or, because of RAID 0 with 4 SSDs, will those new SSDs with better read/write not make much of a difference?

    I want performance, but if the performance increase is negligible I would rather go for the cheaper SSDs.

  7. #107
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,568
    Quote Originally Posted by NapalmV5 View Post
    you nuts!

    details on how youre going for 3-4GB/s
    I will be moving over to a Fiber setup soon. Bandwidth will be 4GB/s.

    Once I get the details on how, I will post back. I will be making a visit to Areca to look at some different setups.

    It involves a Fiber controller card that will manage the RAID, then a PCIe Fiber card to connect to the RAID setup.

    http://areca.us/products/fibre_to_sata_ll_cable.htm
    http://areca.us/products/fiber_to_sata_ll.htm

    This one is a RAID cage, but I am looking for a tower.

    http://www.netstor.com.tw/_03/03_02.php?NTQ

    I really want all of this to be internal, so that is what I am working on.

  8. #108
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,568
    Quote Originally Posted by just a noob View Post
    http://www.performance-pcs.com/catal...oducts_id=4177 what about that (if you're just needing room)? And would this RAID controller: http://www.newegg.com/Product/Produc...82E16816151031 work well with a pair of Mtron 16 or 32 GB SSDs in RAID zero?
    Thanks for the case link. The problem with the Fiber controller is that it needs a cage with a backplane for the SATA connections. The controller plugs into the backplane.

    Also, in order to get to the bandwidth of the Fiber controller I will need more SSDs. I am at 8 now, so I am looking to go to 12.

  9. #109
    Registered User
    Join Date
    Dec 2003
    Location
    Denmark
    Posts
    32
    Quote Originally Posted by Buckeye View Post
    I will be moving over to a Fiber setup soon. Bandwidth will be 4GB/s.

    Once I get the details on how, I will post back. I will be making a visit to Areca to look at some different setups.

    It involves a Fiber controller card that will manage the RAID, then a PCIe Fiber card to connect to the RAID setup.

    http://areca.us/products/fibre_to_sata_ll_cable.htm
    http://areca.us/products/fiber_to_sata_ll.htm

    This one is a RAID cage, but I am looking for a tower.

    http://www.netstor.com.tw/_03/03_02.php?NTQ

    I really want all of this to be internal, so that is what I am working on.
    You are aware that Fibre Channel bandwidth is 4Gbit not 4Gbyte, right?

  10. #110
    Xtreme Member
    Join Date
    Jun 2008
    Posts
    160
    Quote Originally Posted by spazoid View Post
    You are aware that Fibre Channel bandwidth is 4Gbit not 4Gbyte, right?
    I know the drives themselves are 4 gigabit/s, but I thought you could go up to 10 gigabit with Fibre Channel using disk shelves and other gear that uses the interface. We use a lot of NetApp filers at work, and I think they are all 4 gigabit per shelf as well, but I thought that wasn't the limit of FC. I could be wrong though.

  11. #111
    Registered User
    Join Date
    Dec 2003
    Location
    Denmark
    Posts
    32
    Quote Originally Posted by Sandon View Post
    I could be wrong though.
    So could I

  12. #112
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    Quote Originally Posted by Buckeye View Post
    I will be moving over to a Fiber setup soon. Bandwidth will be 4GB/s.
    As others mentioned, that's Gbit, not Gbyte. Even if you do reach that maximum speed, it's only 500 MB/s - you're limiting yourself by going Fibre Channel for this purpose.
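
    In round numbers (and ignoring encoding and protocol overhead, which cut the usable rate further), the unit conversion looks like this:
    Code:
    # 4 Gbit/s Fibre Channel link, converted to bytes per second
    echo "$(( 4000 / 8 )) MB/s"   # ~500 MB/s per direction, not 4 GB/s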
    P5E64_Evo/QX9650, 4x X25-E SSD - gimme speed..
    Quote Originally Posted by MR_SmartAss View Post
    Lately there has been a lot of BS(Dave_Graham where are you?)

  13. #113
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,568
    ahh my mistake, dang

    Thanks for clearing that up for me.

  14. #114
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    get 4x iodrive @ 3.2GB/s and you shall get that 3-4GB/s

    and when it becomes bootable @ 3.2GB/s oh baby!!

  15. #115
    Xtreme Cruncher
    Join Date
    Jun 2006
    Location
    On top of a mountain
    Posts
    4,163
    This reminds me of the search for the Holy Grail...
    20 Logs on the fire for WCG: i7 920@2.8 X3220@3.0 X3220@2.4 E8400@4.05 E6600@2.4

  16. #116
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    398
    Quote Originally Posted by NapalmV5 View Post
    get 4x iodrive @ 3.2GB/s and you shall get that 3-4GB/s

    and when it becomes bootable @ 3.2GB/s oh baby!!

    How would you create a RAID 0 array across PCIe devices? Software RAID (aka dynamic disks in Windows)?

    That doesn't scale very well... <30-50% in XP... I don't know about Vista.

    I think Linux software RAID scales better.
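
    On the Linux side the software RAID route is roughly this - a hypothetical sketch, with the /dev/fio* device names being assumptions for whatever block devices the PCIe cards expose:
    Code:
    # Stripe two PCIe flash devices together with Linux md (RAID 0)
    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/fioa /dev/fiob
    mkfs.ext3 /dev/md0
    mkdir -p /mnt/fast && mount /dev/md0 /mnt/fast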

  17. #117
    Xtreme Member
    Join Date
    Mar 2006
    Location
    Central CA, USA
    Posts
    117
    Amazing amount of hardware.

    And in Stockton, too! (I'm in Lodi.)

    -bZj
    Intel Q9550@3.4GHz & BoneTrail X38,
    HD4850@700/1100, G.Skill 3x2GB@1333,
    VelociRaptoRAID, Li'Lfish, DTek+Swiftech H2O

    Intel E2180 @ 2.5GHz & Bad Axe 2,
    ATi HD 2600 Pro / 256MB @ 660/1100,
    Super Talent 4 x 1GB @ 832MHz/4-4-3-7,
    WD5000AAKS, CoolerMaster GeminII/RC690

  18. #118
    Xtreme Mentor
    Join Date
    May 2008
    Posts
    2,554
    You are a sick man. I like it

  19. #119
    Registered User
    Join Date
    Jul 2008
    Posts
    48
    My friend works with various hardware companies, so he receives free Intel Extremes, free SSDs, free CPU cases, etc...
    Nvidia gtx 285
    AMD Phenom II X4 965
    Asus M3N-HT deluxe edition motherboard
    150gb raptor hard drive
    500gb hitachi deskstar
    8gb ddr2 800Mhz corsair RAM

  20. #120
    Registered User
    Join Date
    Jul 2008
    Location
    germany
    Posts
    33
    Maybe it is possible to create a (hardware-based) RAID over 2, 3, or 4 Areca 1231ML controllers...
    like an "SLI-RAID". Just email their support; I think the chances are good that this will work.

    Each controller is limited to ~800 megabytes/s, so put 8 Mtrons on each of 4 cards and you will get ~3.2 gigabytes/s.

    In 1Q09 Mtron releases new SSDs with 280 MB/s read | 260 MB/s write, so you will only need 4 drives per controller then. If you can wait that long.
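
    If the firmware itself can't span cards, one fallback (under Linux, at least) is to let each Areca build its own array and then stripe the exported volumes in software - a hypothetical sketch, with the /dev/sd* names being assumptions for whatever each card presents to the OS:
    Code:
    # Each controller exports its own RAID 0 volume set (e.g. /dev/sdb, /dev/sdc)
    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=128 /dev/sdb /dev/sdc
    # I/O is now split across both cards, so in theory their ~800 MB/s ceilings
    # add together instead of one card capping the whole array.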

    .
    Monitors----> Dell 22" + 30" + 22" Tripple Setup

    Case--------> "BigBlack" LianLi Workstation with 18x5.25";
    CPU---------> Intel QX9650@3.0 (no time 4 overclocking yet lol)
    GFX---------> Nvidia SLI GFX280,
    Mainboard---> Asus Striker II Extreme [Bios : 0901]
    RAM---------> 8GB (4x2) DDR3 OCZ3P18004GK, Platinum Edition CL8 8-8-27 2 PC3 14400

    Storage-----> HDD-Raid5 4TB Seagate ES.2 Drives
    ------------> SSD-Raid5 128GB --> 8 x Mtron Pro 2.5" each 16Gig -> 700/500 MB Read/Write
    ------------> SSD installed into Fantec MR-SA1042-1 -> 2 x 5.25" Storage with each 4x2.5" Drivebay
    Controller---> All Drives are controlled from : Areca ARC-1231ML SATA / SAS PCIe RAID Controller Card ,

    Cooling-----> Aquacomputer Watercooling System with Aquaero Control
    Monitoring -> 2x5.25 wide blue LCD Controll display (4x16chars)

    Powered-----> by be quiet! 1000W Dark Power Pro P7,

    OS----------> Windows Vista Ultimate 64

    and to RELAX my

    selfbuild HD-Home Cinema with Buttkicker

  21. #121
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    @Buckeye - Since you seem to also have a Linux OS there, I would be very curious whether you can run xdd against the system for both read & write random IOPS performance. I was very leery of the SSDs, because from the smattering of reports I was getting, write latencies suck in comparison (6-7 ms) as opposed to 0.1/0.2 ms for reads. I was able to get better write IOPS than that with SAS.

    Anyway, if you're able to (http://www.ioperformance.com/products.htm), something like:

    Code:
    #!/bin/bash
    ################################
    # Labels used to name the output files
    CONTROLLER="ARC1680ix"
    RAID="R6"
    DISKS="D24"
    DRIVE="st31000340ns"
    SS="SS064k"
    FS="jfs"
    USERNAME=ftpadmin

    TMP="/var/ftp/tmp"
    # fileop/iozone paths are left over from the fuller benchmark script; only xdd is used below
    FILEOP="/var/ftp/pub/incoming/src/iozone3_283/src/current/fileop"
    IOZONE="/var/ftp/pub/incoming/src/iozone3_283/src/current/iozone"
    XDD="/usr/local/bin/xdd.linux"
    XDDTARGET="/var/ftp/tmp/S0"

    # XDD tests: 3 passes of random reads and writes at increasing queue depths
    for QD in 1 2 4 8 16 32 64 128; do
      sync ; sleep 5
      $XDD -verbose -op read -target "$XDDTARGET" -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 64000000 -queuedepth $QD > $TMP/loki-xdd-$CONTROLLER-$RAID-$DISKS-$DRIVE-$SS-$FS-READ-QD$QD.txt
      sync ; sleep 5
      $XDD -verbose -op write -target "$XDDTARGET" -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 64000000 -queuedepth $QD > $TMP/loki-xdd-$CONTROLLER-$RAID-$DISKS-$DRIVE-$SS-$FS-WRITE-QD$QD.txt
    done
    Where S0 (the file XDDTARGET points at) is as large as you can make it (I used 64GB above) - something big enough to hit all the disks with decent-sized chunks.


    As for your goal of attaining >1GiB/s speeds, welcome to the club. At this point there are two main issues: 1) individual controller performance and 2) memory/processor performance. You can put several controllers in a system, which solves the first; for the second, at this point it's either going to an AMD Opteron system or waiting to see what Intel does with the Nehalem-EPs. I've been banging my head against this for a while, for both disk I/O and network I/O. It's not easy. I'm hoping to get over to SC08 this year to nose around.

    Another item is workload type: if it's random I/O you pretty much won't hit good speeds under any parity RAID (3/4/5/6), as you're limited to the # of IOPS of a single drive. Either use RAID-10 or RAID-100, or use many smaller parity RAIDs and LVM them together.
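
    The "many smaller parity RAIDs glued together with LVM" option looks roughly like this - a sketch with assumed device names, where each /dev/mdX stands for a small RAID 5/6 set that already exists:
    Code:
    # Pool two small parity arrays and stripe a logical volume across them
    pvcreate /dev/md1 /dev/md2
    vgcreate vg_fast /dev/md1 /dev/md2
    # -i 2 stripes across both PVs, -I 128 uses a 128 KiB stripe size
    lvcreate -i 2 -I 128 -L 500G -n lv_fast vg_fast
    mkfs.xfs /dev/vg_fast/lv_fast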

    |.Server/Storage System.............|.Gaming/Work System..............................|.Sundry...... ............|
    |.Supermico X8DTH-6f................|.Asus Z9PE-D8 WS.................................|.HP LP3065 30"LCD Monitor.|
    |.(2) Xeon X5690....................|.2xE5-2643 v2....................................|.Mino lta magicolor 7450..|
    |.(192GB) Samsung PC10600 ECC.......|.2xEVGA nVidia GTX670 4GB........................|.Nikon coolscan 9000......|
    |.800W Redundant PSU................|.(8x8GB) Kingston DDR3-1600 ECC..................|.Quantum LTO-4HH..........|
    |.NEC Slimline DVD RW DL............|.Corsair AX1200..................................|........ .................|
    |.(..6) LSI 9200-8e HBAs............|.Lite-On iHBS112.................................|.Dell D820 Laptop.........|
    |.(..8) ST9300653SS (300GB) (RAID0).|.PA120.3, Apogee, MCW N&S bridge.................|...2.33Ghz; 8GB Ram;......|
    |.(112) ST2000DL003 (2TB) (RAIDZ2)..|.(1) Areca ARC1880ix-8 512MiB Cache..............|...DVDRW; 128GB SSD.......|
    |.(..2) ST9146803SS (146GB) (RAID-1)|.(8) Intel SSD 520 240GB (RAID6).................|...Ubuntu 12.04 64bit.....|
    |.Ubuntu 12.04 64bit Server.........|.Windows 7 x64 Pro...............................|............... ..........|

  22. #122
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    +1 on the write IOPS request. Though I'd prefer some Windows output, or at least IoMeter (file server benchmark, changed to 100% writes).
    P5E64_Evo/QX9650, 4x X25-E SSD - gimme speed..
    Quote Originally Posted by MR_SmartAss View Post
    Lately there has been a lot of BS(Dave_Graham where are you?)

  23. #123
    Xtreme Member
    Join Date
    Jan 2008
    Posts
    114
    Quote Originally Posted by NapalmV5 View Post
    get 4x iodrive @ 3.2GB/s and you shall get that 3-4GB/s

    and when it becomes bootable @ 3.2GB/s oh baby!!
    LOL
    http://www.fusionio.com/CustomDataSh...7-23d666d4732a

    GIGABYTE GA-EX58-UD5 LGA 1366 Intel X58 ATX Intel Motherboard
    Intel Core i7 920 Nehalem 2.66GHz 4 x 256KB L2 Cache 8MB L3 Cache LGA 1366 130W Quad-Core Processor
    CORSAIR XMS3 6GB (3 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model TR3X6G1600C8 G
    CORSAIR CMPSU-1000HX 1000W ATX12V 2.2 / EPS12V 2.91 SLI Ready CrossFire Ready 80 PLUS Certified Modular Active PFC Compatible with Core i7 Power Supply
    VGA GIGABYTE|GV-R587D5-1GD-B R
    CPU COOL XIGM|Dark Knight-S1283V R

  24. #124
    Administrator
    Join Date
    Nov 2007
    Location
    Stockton, CA
    Posts
    3,568
    Quote Originally Posted by stevecs View Post
    @Buckeye - Since you seem to also have a Linux OS there, I would be very curious whether you can run xdd against the system for both read & write random IOPS performance. I was very leery of the SSDs, because from the smattering of reports I was getting, write latencies suck in comparison (6-7 ms) as opposed to 0.1/0.2 ms for reads. I was able to get better write IOPS than that with SAS.

    Anyway, if you're able to (http://www.ioperformance.com/products.htm), something like:


    Where S0 (the file XDDTARGET points at) is as large as you can make it (I used 64GB above) - something big enough to hit all the disks with decent-sized chunks.


    As for your goal of attaining >1GiB/s speeds, welcome to the club. At this point there are two main issues: 1) individual controller performance and 2) memory/processor performance. You can put several controllers in a system, which solves the first; for the second, at this point it's either going to an AMD Opteron system or waiting to see what Intel does with the Nehalem-EPs. I've been banging my head against this for a while, for both disk I/O and network I/O. It's not easy. I'm hoping to get over to SC08 this year to nose around.

    Another item is workload type: if it's random I/O you pretty much won't hit good speeds under any parity RAID (3/4/5/6), as you're limited to the # of IOPS of a single drive. Either use RAID-10 or RAID-100, or use many smaller parity RAIDs and LVM them together.

    Ah, sorry stevecs, my mind has been on other issues and I missed your post. I am sorry, but I do not have a Linux system set up to test this.

    I was hoping that a Fiber setup would be able to push this baby along, but it appears that it will not, if I understand the posts here correctly. That, and the extra cost of going to Fiber: another couple thousand, plus some more SSDs on top of that. I am still going to stop in at Areca and see what they might be able to come up with.

    As far as the ioDrives go, they sound very good, if/when you can use them in Windows and boot from them. But using PCIe slots creates problems when you want to add in SLI or even tri-SLI setups. I guess you would have to make some trade-offs and design your system around them.

  25. #125
    Xtreme Addict
    Join Date
    Jul 2006
    Posts
    1,124
    Sorry, my mistake, I thought you did. Anyway, xdd is also usable under Windows (the script just won't help you; you'll have to type the command manually with the arguments). xdd is nice in that it bypasses all the OS caches and you can customize the workloads.

    As for Fibre or InfiniBand: no, I use both and they are no help; that's just a media layer. FC is at 4Gbit (going to 8Gbit) and InfiniBand is 10Gbit, but with SAS you can get 4x3Gbit (12Gbit). That does NOT mean you can actually push that much data, though - that's a separate issue. You run into controller and host bottlenecks. The best I've seen so far is 1.2-1.6GiB/s (a Solaris AMD Opteron system with 48 drives running ZFS), but that's it (and zero apps, as you're using all the CPU cycles to do the I/O). As for speed (throughput), SSDs are pricey and are still slower than rotational media at this point. For IOPS (where you wouldn't be pushing throughput) they have an advantage with reads, but so far a disadvantage with writes as far as I can see (which is why I was interested in those xdd runs, to get some real #'s).

    As for the ioDrives, they don't look that good if it takes 6 of them to reach 4.2GiB/s for reads & 3.6GiB/s for writes. I would be curious to know what system they used that actually had 6 PCIe x4 slots free for testing. And on IOPS (which they didn't list as read or write, so assuming read here), they may be given a run for their money by multiple cards & SSDs, though I haven't priced such a solution.
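
    Putting those media-layer line rates side by side (raw numbers from above, before encoding and protocol overhead, so real-world throughput is lower still):
    Code:
    echo "FC 4 Gbit:          $(( 4 * 1000 / 8 )) MB/s"      # ~500 MB/s
    echo "InfiniBand 10 Gbit: $(( 10 * 1000 / 8 )) MB/s"     # ~1250 MB/s
    echo "SAS 4x3 Gbit:       $(( 4 * 3 * 1000 / 8 )) MB/s"  # ~1500 MB/s wide port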

    |.Server/Storage System.............|.Gaming/Work System..............................|.Sundry...... ............|
    |.Supermico X8DTH-6f................|.Asus Z9PE-D8 WS.................................|.HP LP3065 30"LCD Monitor.|
    |.(2) Xeon X5690....................|.2xE5-2643 v2....................................|.Mino lta magicolor 7450..|
    |.(192GB) Samsung PC10600 ECC.......|.2xEVGA nVidia GTX670 4GB........................|.Nikon coolscan 9000......|
    |.800W Redundant PSU................|.(8x8GB) Kingston DDR3-1600 ECC..................|.Quantum LTO-4HH..........|
    |.NEC Slimline DVD RW DL............|.Corsair AX1200..................................|........ .................|
    |.(..6) LSI 9200-8e HBAs............|.Lite-On iHBS112.................................|.Dell D820 Laptop.........|
    |.(..8) ST9300653SS (300GB) (RAID0).|.PA120.3, Apogee, MCW N&S bridge.................|...2.33Ghz; 8GB Ram;......|
    |.(112) ST2000DL003 (2TB) (RAIDZ2)..|.(1) Areca ARC1880ix-8 512MiB Cache..............|...DVDRW; 128GB SSD.......|
    |.(..2) ST9146803SS (146GB) (RAID-1)|.(8) Intel SSD 520 240GB (RAID6).................|...Ubuntu 12.04 64bit.....|
    |.Ubuntu 12.04 64bit Server.........|.Windows 7 x64 Pro...............................|............... ..........|

