*Push it to the Limit song plays*
You can do it! :hehe::smoke:
I have a question about the old Mtrons (100 MB/s read, 80 MB/s write) and the new ones that are due out sometime this month (7500 series, 130 MB/s read and 120 MB/s write).
In a RAID 0 with 4 drives, will there be any advantage in going with the 7500 series? Will the faster read/write speeds make any difference? Or is it a case where the old ones get you 400-500 MB/s in benchmarks, and with the new ones that just goes up to 500+?
Here is how I look at it.
With the current lineup of controller cards out there, there isn't much you can do to get past the bandwidth cap of roughly 800-850 MB/s. You can, but be ready to spend some money; you basically need fibre controllers for that, unless something new comes out.
Access times are all pretty much the same and do not scale in a RAID, so that's pretty much a non-issue.
So pick your speed and price range. Even cheap SSDs will scale up to the cap if you have enough of them, and write speeds go up as you add SSDs too.
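To put rough numbers on that, here is a quick back-of-the-envelope sketch. The 100 MB/s per-drive figure and the 850 MB/s cap below are just illustrative assumptions, not measurements:
Code:
#!/bin/bash
# Back-of-the-envelope RAID-0 read scaling: throughput grows with drive count
# until it hits the controller's bandwidth ceiling. All numbers are assumptions.
PER_DRIVE=100   # MB/s per SSD (assumed)
CAP=850         # MB/s controller ceiling (assumed)
for N in 1 2 4 8 12; do
    RAW=$(( N * PER_DRIVE ))
    EST=$(( RAW < CAP ? RAW : CAP ))
    echo "${N} drives: ${RAW} MB/s raw -> ~${EST} MB/s after the cap"
done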
However, with all that said, be careful when you make your purchase! I cannot stress this enough.
Some of the newer SSDs are not really rated for RAID use; they are going after the laptop market. I have heard some of these brands become unstable in RAIDs. Even if they say they are rated for servers, that does not mean RAIDs.
Also
Not all motherboards work well with SSD’s and Raids. Be careful and do some research first. Plan what you want to do.
If you go with an Intel motherboard you should be fine with one SATA SSD, but when you go to a RAID setup problems start happening, and generally going to a controller card fixes that.
Do you really need 800-850 MB/s of bandwidth? If not, use 2-4 SSDs and you will be very happy.
The best way to go is several SSDs in a RAID rather than one big SSD, as bandwidth scales up nicely. In a laptop, though, you can pretty much only use one.
Well I thought I had seen Mad Bandwidth...but that's pretty much over the top.
Congrats on living the dream :up:
Thanks for the reply.
Well, what I plan to do is get 4 SSDs (Mtron / Mtron 7500 Pro / Memoright GT S series / or the new OCZ Core series) and a RAID card (Areca, or most probably an Adaptec 5405) and use RAID 0.
Now, given that setup (and taking into consideration that 4 SSDs will not max out the capacity of my RAID card), is it better to go for the newer SSDs (like the Mtron Pro 7500, 130 read / 120 write), or will those better read/write specs not make much of a difference in a 4-drive RAID 0?
I want performance, but if the performance increase is negligible I'd rather go for the cheaper SSDs.
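Doing the spec-sheet math myself, purely theoretical and assuming RAID 0 scales nearly linearly with a controller ceiling around 850 MB/s (which is only a guess from this thread):
Code:
#!/bin/bash
# Theoretical 4-drive RAID-0 numbers from the spec sheets only.
# The 850 MB/s controller ceiling is an assumption, not a measurement.
CAP=850
OLD_R=$(( 4 * 100 )); OLD_W=$(( 4 * 80 ))    # old Mtron: 100/80 MB/s
NEW_R=$(( 4 * 130 )); NEW_W=$(( 4 * 120 ))   # Mtron 7500: 130/120 MB/s
echo "Old Mtron  x4: ~${OLD_R}/${OLD_W} MB/s read/write (cap ${CAP} MB/s)"
echo "Mtron 7500 x4: ~${NEW_R}/${NEW_W} MB/s read/write (cap ${CAP} MB/s)"
Both land well under the cap, so on paper the 7500s should still show up in a 4-drive array; whether that is worth the price difference is the real question.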
I will be moving over to a Fiber setup soon. Bandwidth will be 4GB/s.
Once I get the details on how, I will post back. I will be making a visit to Areca to look at some different setups.
It involves a Fiber controller card that will manage the Raid, then a PCIe Fiber card to connect to the Raid setup.
http://areca.us/products/fibre_to_sata_ll_cable.htm
http://areca.us/products/fiber_to_sata_ll.htm
This one is a RAID cage, but I am looking for a tower.
http://www.netstor.com.tw/_03/03_02.php?NTQ
I really want all this to be internal so that is what I am working on.
Thanks for the case link. The problem with the Fiber controller is it needs a cage with a backplane for the SATA connections; the controller plugs into the backplane.
Also, in order to get to the bandwidth of the Fiber controller I will need more SSDs. I am at 8 now, so I'm looking to go to 12.
I know the drives themselves are 4 gigabits/sec, but I thought you could go up to 10 gigabits with Fibre Channel using disk shelves and such that use the interface. We use a lot of NetApp filers at work and I think they are all 4 gigabits per shelf as well, but I thought that wasn't the limit of FC. I could be wrong though.
ahh my mistake, dang
Thanks for clearing that up for me.
get 4x iodrive @ 3.2GB/s and you shall get that 3-4GB/s :)
and when it becomes bootable @ 3.2GB/s oh baby!!
This reminds me of the search for the Holy Grail...
Amazing amount of hardware.
And in Stockton, too! :D (I'm in Lodi.)
-bZj
You are a sick man. I like it :up:
My friend works with various hardware companies, so he receives free Intel Extremes, free SSDs, free CPU cases, etc...
Maybe it is possible to create a (hardware-based) RAID over 2, 3, or 4 Areca 1231ML controllers....
like an "SLI-RAID" :D Just email their support; I think the chance is good that this will work.
Each controller is limited to ~800 Megabyte/s, so put 8 Mtrons on each and you will get ~3.2 Gigabyte/s ;)
In Q1 2009 Mtron releases new SSDs with 280 MB/s read | 260 MB/s write, so you will only need 4 drives per controller then. If you can wait that long ;)
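If the cards cannot be teamed in hardware, one way to sketch it (untested on the 1231ML, and the device names below are made up) is to let each controller export its own array and then stripe those arrays together in software with mdadm:
Code:
#!/bin/bash
# Sketch only: stripe the volumes exported by several hardware RAID controllers
# into one software RAID-0 set. /dev/sdb.. /dev/sde are hypothetical device
# names, one volume per controller.
mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=64 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/bigstripe
In theory the per-card ~800 MB/s limits then add up across controllers rather than capping the whole array.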
@buckeye - Since you seem to also have a Linux OS there, I would be very curious if you could run XDD against the system for both read and write random IOPS performance. I was very leery of the SSDs, as from the smattering of reports I was getting, write latencies look poor in comparison (6-7 ms) as opposed to 0.1-0.2 ms for reads. I was able to get better write IOPS than that with SAS.
Anyway, if you're able (http://www.ioperformance.com/products.htm), something like the script below. S0 should be the largest target file you can make (I used 64GB), but big enough to hit all the disks with decent-sized chunks:
Code:
#!/bin/bash
################################
# Test-description labels; these only end up in the output file names.
CONTROLLER="ARC1680ix"
RAID="R6"
DISKS="D24"
DRIVE="st31000340ns"
SS="SS064k"
FS="jfs"
# USERNAME/FILEOP/IOZONE are for other test runs and unused in the xdd loop below.
USERNAME=ftpadmin
TMP="/var/ftp/tmp"
FILEOP="/var/ftp/pub/incoming/src/iozone3_283/src/current/fileop"
IOZONE="/var/ftp/pub/incoming/src/iozone3_283/src/current/iozone"
XDD="/usr/local/bin/xdd.linux"
XDDTARGET="/var/ftp/tmp/S0"

# XDD tests: random read and random write at increasing queue depths.
for QD in 1 2 4 8 16 32 64 128; do
    sync ; sleep 5
    $XDD -verbose -op read -target $XDDTARGET -blocksize 512 -reqsize 128 \
         -mbytes 8192 -passes 3 -seek random -seek range 64000000 -queuedepth $QD \
         > $TMP/loki-xdd-$CONTROLLER-$RAID-$DISKS-$DRIVE-$SS-$FS-READ-QD$QD.txt
    sync ; sleep 5
    $XDD -verbose -op write -target $XDDTARGET -blocksize 512 -reqsize 128 \
         -mbytes 8192 -passes 3 -seek random -seek range 64000000 -queuedepth $QD \
         > $TMP/loki-xdd-$CONTROLLER-$RAID-$DISKS-$DRIVE-$SS-$FS-WRITE-QD$QD.txt
done
As for your goal of attaining >1 GiB/s speeds, welcome to the club. At this point there are two main issues: 1) individual controller performance and 2) memory/process performance. You can put several controllers in a system, which can solve the first; for the second, right now it's either going to an AMD Opteron system or waiting to see what Intel does with the Nehalem-EPs. I've been banging my head against this for a while, both for disk I/O and network I/O. It's not easy. I'm hoping to get over to SC08 this year to nose around.
Another item is workload type: if it's random I/O you pretty much won't hit good speeds under any parity RAID (3/4/5/6), as you're limited to the IOPS of a single drive. Either use RAID 10 or 100, or use many smaller parity RAIDs and LVM them together.
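For the "many smaller parity raids and LVM them together" route, the rough shape is below; device names, sizes, and stripe settings are placeholders, not a tested config:
Code:
#!/bin/bash
# Sketch: stripe a logical volume across two smaller parity arrays so random
# I/O is spread over multiple parity sets. /dev/sdb and /dev/sdc stand in for
# the two arrays exported by the controller(s).
pvcreate /dev/sdb /dev/sdc
vgcreate vg_data /dev/sdb /dev/sdc
# -i 2 = stripe across both PVs, -I 64 = 64 KiB stripe size
lvcreate -i 2 -I 64 -L 500G -n lv_data vg_data
mkfs.xfs /dev/vg_data/lv_data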
+1 on the write IOPS request. Though I'd prefer some Windows output, or at least IoMeter (file server benchmark, changed to 100% writes).
Ah sorry Stevecs, my mind has been on other issues and I missed your post. I am sorry, but I do not have a Linux system set up to test this.
I was hoping that a Fiber setup would be able to push this baby along, but it appears it will not be able to, if I understand the posts here correctly. That, and the extra cost of going to Fiber: another couple thousand, plus some more SSDs on top of that. I am still going to stop in at Areca and see what they might be able to come up with.
As far as the ioDrives go, they sound very good, if/when you can use them in Windows and boot from them. But using PCIe slots creates problems when you want to add SLI or even Tri-SLI setups. I guess you would have to make some trade-offs and design your system around them.