*Push it to the Limit song plays*
You can do it! :hehe::smoke:
I have a question re the old Mtrons (100 MB/s read, 80 MB/s write) and the new ones that are about to be released sometime this month (the 7500 series, 130 MB/s read and 120 MB/s write).
On a RAID 0 with 4 drives, will there be any advantage in going with the 7500 series? Will the faster read/write make any difference? Is it a case where with the old ones you can get 400-500 MB/s in benchmarks, but if I go for the new ones performance will go up to 500+?
Here is how I look at it.
With the current lineup of controller cards out there, there isn't much you can do to get past the bandwidth cap of 800-850 MB/s. You can, but be ready to spend some money; you basically need fibre controllers for that, unless something new comes out.
Access times are all pretty much the same, and access time does not scale in a RAID, so it's pretty much a non-issue.
So pick your speed and price range. Even cheap SSDs will scale up to the cap if you have enough of them. Write speeds also go up as you add SSDs.
However, with all that said, be careful when you make your purchase! I cannot stress this enough.
Some of the newer SSDs are not really rated for RAID, but are going after the laptop market. I have heard some of these brands become unstable in RAID. Even if they say they are rated for servers, that does not mean they are rated for RAID.
Also, not all motherboards work well with SSDs and RAID. Be careful and do some research first; plan what you want to do.
If you go with Intel motherboards you should be fine with one SATA SSD, but when you go to a RAID setup problems start happening, and generally going to a controller card fixes that.
Do you really need 800-850 MB/s of bandwidth? If not, use 2-4 SSDs and you will be very happy.
The best way to go is to use several SSDs in a RAID vs. one big SSD, as bandwidth will scale up nicely. But in a laptop you can pretty much only use one.
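To make the scaling point concrete, here is a minimal back-of-the-envelope sketch (the per-drive speeds and the ~850 MB/s cap are just the figures quoted in this thread, not measurements, and it ignores controller overhead and access patterns):
Code:
#!/bin/bash
# Rough RAID 0 estimate: throughput scales roughly with the number of drives
# until the controller cap is hit. Numbers are only the figures from this thread.
DRIVES=4
READ_PER_DRIVE=130     # MB/s (7500-class drive, read)
WRITE_PER_DRIVE=120    # MB/s (7500-class drive, write)
CAP=850                # MB/s, rough ceiling of current controller cards

est() {                # est <drives> <per-drive MB/s>
    local total=$(( $1 * $2 ))
    (( total > CAP )) && total=$CAP
    echo "$total"
}

echo "Estimated read:  $(est $DRIVES $READ_PER_DRIVE) MB/s"
echo "Estimated write: $(est $DRIVES $WRITE_PER_DRIVE) MB/s"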
Well I thought I had seen Mad Bandwidth...but that's pretty much over the top.
Congrats on living the dream :up:
Thanks for the reply.
Well, what I plan to do is get 4 SSDs (Mtron / Mtron 7500 Pro / Memoright GT S series / or the new OCZ Core series) and a RAID card (Areca, or most probably an Adaptec 5405) and use RAID 0.
Now, given that setup (and taking into consideration that 4 SSDs will not max out the capacity of my RAID card), is it better to go for the newer SSDs (like the Mtron Pro 7500, 130 read / 120 write), or will those newer SSDs with better read/write speeds not make much of a difference because of RAID 0 with only 4 SSDs?
I want performance, but if the performance increase is negligible I would rather go for the cheaper SSDs.
I will be moving over to a Fiber setup soon. Bandwidth will be 4GB/s.
Once I get the details on how, I will post back. I will be making a visit to Areca to look at some different setups.
It involves a Fiber controller card that will manage the Raid, then a PCIe Fiber card to connect to the Raid setup.
http://areca.us/products/fibre_to_sata_ll_cable.htm
http://areca.us/products/fiber_to_sata_ll.htm
This one is a RAID cage, but I am looking for a tower.
http://www.netstor.com.tw/_03/03_02.php?NTQ
I really want all this to be internal so that is what I am working on.
Thanks for the case link. The problem with the Fiber controller is that it needs a cage with a backplane for the SATA connections; the controller plugs into the backplane.
Also, in order to get to the bandwidth of the Fiber controller I will need more SSDs. I am at 8 now, so I'm looking to go to 12.
I know the drives themselves are 4 gigabits/sec, but I thought you could go up to 10 gigabits with Fibre Channel with disk shelves and such that use the interface. We use a lot of NetApp filers at work and I think they are all 4 gigabits per shelf as well, but I thought that wasn't the limit of FC. I could be wrong though.
ahh my mistake, dang
Thanks for clearing that up for me.
get 4x iodrive @ 3.2GB/s and you shall get that 3-4GB/s :)
and when it becomes bootable @ 3.2GB/s oh baby!!
This reminds me of the search for the Holy Grail...
Amazing amount of hardware.
And in Stockton, too! :D (I'm in Lodi.)
-bZj
You are a sick man. I like it :up:
My friend works with various hardware companies, so he receives free Intel Extremes, free SSDs, free CPU cases, etc...
Maybe it is possible to create a (hardware-based) RAID over 2, 3, or 4 Areca 1231ML controllers....
like an "SLI-RAID" :D Just email their support; I think the chance is good that this will work.
Each controller is limited to ~800 megabytes/s, so put 8 Mtrons on each and you will get ~3.2 gigabytes/s ;)
In Q1 '09 Mtron releases new SSDs with 280 MB/s read | 260 MB/s write, so you will only need 4 drives per controller then. If you can wait that long ;)
@buckeye - Since you seem to also have a Linux OS there, I would be very curious if you could run XDD against the system for both read & write random IOPS performance. I was very leery of the SSDs, as the smattering of reports I was getting said that write latencies suck in comparison (6-7 ms) as opposed to 0.1-0.2 ms for reads. I was able to get better write IOPS than that w/ SAS.
Anyway, if you're able (http://www.ioperformance.com/products.htm), something like the script below, where S0 is the largest target file you can make (I used 64GB); it just needs to be big enough to hit all the disks with decent-sized chunks.
Code:
#!/bin/bash
################################
CONTROLLER="ARC1680ix"
RAID="R6"
DISKS="D24"
DRIVE="st31000340ns"
SS="SS064k"
FS="jfs"
USERNAME=ftpadmin
TMP="/var/ftp/tmp"
FILEOP="/var/ftp/pub/incoming/src/iozone3_283/src/current/fileop"
IOZONE="/var/ftp/pub/incoming/src/iozone3_283/src/current/iozone"
XDD="/usr/local/bin/xdd.linux"
XDDTARGET="/var/ftp/tmp/S0"
# XDD tests: random reads and writes at increasing queue depths.
# The target is the pre-created file $XDDTARGET (S0); request size is
# 128 x 512-byte blocks = 64 KiB, with random seeks over the stated range.
for QD in 1 2 4 8 16 32 64 128; do
    sync ; sleep 5   # flush and settle between runs
    $XDD -verbose -op read -target $XDDTARGET -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 64000000 -queuedepth $QD > $TMP/loki-xdd-$CONTROLLER-$RAID-$DISKS-$DRIVE-$SS-$FS-READ-QD$QD.txt
    sync ; sleep 5
    $XDD -verbose -op write -target $XDDTARGET -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 64000000 -queuedepth $QD > $TMP/loki-xdd-$CONTROLLER-$RAID-$DISKS-$DRIVE-$SS-$FS-WRITE-QD$QD.txt
done
As for your goal to attain >1 GiB/s speeds, welcome to the club. At this point there are two main issues: 1) individual controller performance and 2) memory/process performance. You can put several controllers in a system, which can solve the first; for the second, at this point it's either going to an AMD Opteron system or waiting to see what Intel does w/ the Nehalem-EPs. I've been banging my head against this for a while, both for disk I/O and network I/O; it's not easy. I'm hoping to get over to SC08 this year to nose around.
Another item as well is workload type: if it's random I/O you pretty much won't hit good speeds under any parity RAID (3/4/5/6), since for random writes you're limited to the # of IOPS of a single drive. Either use RAID-10 or 100, or use many smaller parity RAIDs and LVM them together.
+1 on the write IOPS request. Though I'd prefer some Windows output, or at least IoMeter (file server benchmark, changed to 100% writes).
Ah sorry Stevecs, my mind has been on other issues and I missed your post. I am sorry, but I do not have a Linux system set up to test this.
I was hoping that a Fibre setup would be able to push this baby along, but it appears that it will not be able to, if I understand the posts here correctly. That, and the extra cost of going to Fibre: another couple thousand, plus adding some more SSDs on top of that. I am still going to stop in at Areca and see what they might be able to come up with.
As far as the ioDrives go, they sound very good, if/when you can use them in Windows and boot from them. But using PCIe slots creates problems when you want to add in SLI or even Tri-SLI setups. I guess you would have to make some trade-offs and design your system around them.
Sorry, my mistake, I thought you did. Anyway, xdd is also usable under Windows (the script just won't help you; you'll have to type the command manually w/ the arguments). xdd is nice in that it bypasses all the OS caches and you can customize workloads.
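For reference, a rough idea of what one manual Windows run could look like; this is only a sketch, the xdd.exe name and the D:\S0 target file are placeholders, and the flags simply mirror the Linux script above:
Code:
xdd.exe -verbose -op read -target D:\S0 -blocksize 512 -reqsize 128 -mbytes 8192 -passes 3 -seek random -seek range 64000000 -queuedepth 16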
As for fibre or InfiniBand: no, I use both, no help; that's just a media layer. FC is at 4 Gbit (going to 8 Gbit) and InfiniBand is 10 Gbit, but w/ SAS you can get 4x 3 Gbit (12 Gbit). That does NOT mean you can actually push that much data, though; that's a separate issue, as you run into controller and host bottlenecks. The best I've seen so far is 1.2-1.6 GiB/s (a Solaris AMD Opteron system w/ 48 drives running ZFS), but that's all it does (zero apps, as you're using all the CPU cycles to do the I/O). As for speed (throughput), SSDs are pricey and are still slower than rotational media at this point. For IOPS (where you wouldn't be pushing throughput) they have an advantage w/ reads but so far a disadvantage w/ writes as far as I can see (which is why I was interested in those xdd runs, to get some real #'s).
As for the I/O drives, they don't look that good if it takes 6 of them to reach 4.2 GiB/s for reads & 3.6 GiB/s for writes. I would be more curious to know what system they used that actually had 6 PCIe x4 slots for testing. And on the IOPS (which they didn't list as read or write, so I'm assuming read here), they may be given a run for their money by multiple cards & SSDs, though I haven't priced such a solution.
Really? Now that I find interesting, as I would never have thought it would be the same number (even DRAM takes longer for a write/store than for a read). I would need to run some benches to actually prove that to myself. ;) If you do actually pick any up, please post if you can get some real-world #'s, if for nothing else than my edification. ;)
The Fusion ioDrive has PCIe x4 (physical). The Supermicro X7DWN+ (http://www.supermicro.com/products/m...400/X7DWN+.cfm) has 4x PCIe x8, and you can add a UIO riser card (http://www.supermicro.com/products/n...risercards.cfm) for 4 more, for a total of 8x PCIe x8 slots. But I don't know how bandwidth-starved the UIO-based PCIe slots would be. Of course you'd have zero slots for your regular graphics cards... so that wouldn't help Buckeye.
I'm interested in how you are able to create an array across RAID adapters. It must be through a variation of software RAID. I know it sucks in Windows, but it supposedly scales better in Linux.
Yes, the Fusion I/O card is x4 physical, but in that link they said it took 6 of the Fusion I/O devices to reach those #'s, which is why I was curious about the type of board. Assuming it scales linearly, that would be 700 MiB/s read and 600 MiB/s write for a single drive, at $14,000; not something that is cost effective considering what you can build w/ SSDs or even traditional HDs (excepting IOPS performance). With the Supermicro riser the slots would just be physical x8, but you'd be limited to x16 of total throughput since it attaches to a single x16 slot, so each slot would be x4 at most (assuming zero overhead), which I guess could do it.
As for RAIDs across adapters, that's simple: you just have to use a volume manager (i.e. Veritas, Linux LVM, et al.) or a filesystem like ZFS (which is both filesystem & volume management in one). Under Linux I set up each set of drives (i.e. 12 SATA drives, for example) as a single RAID-6 volume. I do that once per card, so with say 4 cards I have 48 drives in 4 12-way RAID-6's. That gets presented to the OS as 4 large drives. I take those drives and use them as LVM physical volumes, then carve a logical volume out of those physicals and put a filesystem (JFS in this case) on top of that. This lets me add more cards or volumes to the LVM as physical devices and grow the array (and thus also grow performance). Generally for SATA drives, 12-16 is about where you reach the limit of the RAID controller for max performance, so by limiting the RAIDs to that size and adding more cards it scales better.
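A minimal sketch of that layering on Linux, assuming each card's 12-drive RAID-6 set shows up as one big block device (the /dev/sdb.../dev/sde names, volume group name, and stripe size are placeholders, not from the thread):
Code:
#!/bin/bash
# One physical volume per hardware RAID-6 set exported by each card.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Pool the four PVs into one volume group.
vgcreate bigvg /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Stripe the logical volume across all four PVs so I/O hits every card.
lvcreate -i 4 -I 256 -l 100%FREE -n bigvol bigvg
# JFS on top, as described above.
mkfs.jfs -q /dev/bigvg/bigvol
mount /dev/bigvg/bigvol /data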
But can you put that many RAID controllers in a single system? For one thing due to ROM space, and then the controller BIOS option for multiple controllers...
Yes, you're limited to 128K max for option ROM space. I've only put a max of 4 Arecas in a single box before, but that was without a graphics card (console-based system). I don't know how much space was left, though, after the initial load of the cards.
I got an answer from Areca support about putting two controllers together...
Quote:
> You can install up to 4 controllers in the same machine, but all our
> controllers are pure hardware RAID cards; every controller is independent. It
> means you can't create a raidset across two controllers.
> But you can stripe two volumes together in your system; some customers needing
> high performance have had similar configurations before.
So the question is: striping two volumes in software, will this increase the access time or something... ;) I am scared enough already that the write access time will grow from 0.2 to 3.8 ms when using a 1 or 2 GB cache module instead of the original 256 MB module...
:)
All RAID cards (actually _ANY_ controller) will increase latency; it's physics. Adding in multiple cards mainly increases reliability and streaming performance first, then IOPS later. Most likely the added latency is due to the various buffering that the card is doing (both read & write); you can always turn that off, or on the SAS controllers you can change the aggressiveness factor for reading. I saw the same post about that increase, and it's interesting, but it's not something that /today/ will make a big difference unless you are building up a very large database with heavy workloads (i.e., small transaction sizes). To put it simply, the service time it takes to process a request and get the data back to the calling application is much larger than what you have for, say, table or index lookups in databases. So don't get too fixated on that number; 3 ms or whatever is VERY good for the functions that a desktop or general server will perform. IOPS * request_size = bandwidth, to put it really simply. You have to find a balance for your system; you can't have both. ;)
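To put rough, purely illustrative numbers on that trade-off: 10,000 IOPS at a 4 KiB request size is only about 40 MB/s of bandwidth, while 2,000 IOPS at a 512 KiB request size is about 1,000 MB/s. Same formula, very different balance, which is why you tune for either small-transfer IOPS or large-transfer throughput rather than expecting both.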
Well, I understand.
Quote:
you can't have both.
But also, this is the XTREME Forum, isn't it :D
;)
lol, for a sec there I thought they updated their datasheets..
but it's still 4x ioDrive max @ a single system
then again.. their specs keep changing @ 1x vs multi ioDrive and are just weird.. maybe they do it on purpose, idk, maybe they just can't decide on final performance
from their tech faq,
http://www.fusionio.com/PDFs/ioDrive-Technical-FAQ.pdf
Quote:
Is it possible to RAID multiple ioDrives™?
To the host operating system an ioDrive appears as a normal hard drive, so users are able to leverage any software
RAID solution that uses the host Operating System's native drivers. The Operating System's volume manager
performs the RAID function.
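A minimal sketch of what that host-side software RAID could look like under Linux with mdadm (the /dev/fioa and /dev/fiob device names, the chunk size, and the filesystem choice are assumptions for illustration, not from the FAQ):
Code:
#!/bin/bash
# Stripe (RAID 0) two ioDrive block devices using the OS's own volume management.
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=128 /dev/fioa /dev/fiob
# Put any filesystem on the stripe; ext3 is just an example here.
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/iodrive-stripe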
I had a major crash on my main working rig last week and found myself unprepared backup-wise. Ouch! It didn't turn out that bad though :) I am not sure what actually happened with the rig; it could be hardware or something like that. I am going through all the equipment on a test bench to try and figure that out atm.
But I needed a machine up and running for work, so the new rig, which has been waiting for cooling and has been running in a stock setup, was commissioned as the new work/play rig. It has no OC yet.
This machine is loaded with all my work apps, anti-virus (which was turned off for tests), the whole banana.
The SSD RAID was set to a RAID 5 config for safety; I felt the need to do that after last weekend's computer crash. I did take a hit on bandwidth, but I have extra safety now, and to tell you the truth I can hardly notice any performance hit.
All in all it is running Xtremely well and very fast. Going from a RAID 0 Vista boot time of ~8-10 seconds, it now takes about ~12 seconds in RAID 5 with all apps loaded. So still a very good boot time. Apps like Photoshop, Outlook, PaperPort etc. all load almost instantly. I am very happy with this setup so far; I just need cooling to OC this beast.
First Raid benching
PCMark Vantage scored a 37057 on HDD Test Suite.
http://img244.imageshack.us/img244/681/hdtachtz5.jpg
http://img396.imageshack.us/img396/2521/hdtuneqm3.jpg
Yes, the killer in parity RAIDs is the writes, with 100% random being the worst, as you have only the speed of a single drive in IOPS. With flash (current tech) that's probably going to be worse than a normal drive these days. The further you are away from random writes (and the lower the # of writes you do) the better. However, with an OS on it there is a good number of random write operations (access time updates, directory updates, small files, temp files, et al.). I've been looking for someone to actually do some XDD or other tests to really show what they can do, but no luck so far.
Just adding an update here...
The other day I was working away as always; right before a boardroom call I was all set with spreadsheets and everything else open. I went to call in on my bridge when my computer froze... no biggie, I thought; I mean, it's not like I have never had my computer freeze up before.
I hit the reset button and get to the ARC-1231ML initialization part... ARC-1231ML cannot initialize... followed by that nasty beeping sound that the ARC makes when things go bad :(
I hit F6 to turn off the alarm and then check the Raid... GAH!! 2x 32GB MTRON PROs have FAILED. Not good, I thought.
I took both SSDs and connected them up to a regular SATA connector to find that they could only be seen as a 15MB drive. Nothing I did would change that, from repartitioning to reformatting.
Good thing they have a 5 year warranty as I now have 2 new ones on the way to me.
Rough... I sure hope all SSDs don't fail as easily as that.
Yeah, it's been what, 7 months and 2 failures; not very good.
But on the flip side, I am very happy that I ended up with the MTRON PROs, as it would really bite to have gotten one of the other brands that had only a 3-month warranty and then have to go to the manufacturer to get an RMA done.
DVNation was pretty cool; it just took an email with serial numbers and a phone call, with no major hiccups. He knows who I am and just went "man, that sucks, I will get 2 out to you right away."
I value that kind of service a lot. Jason gets a big :up:
I moved the Raid out to my bench for some runs. Here it is with only 7 SSDs installed. It will be 9 next week, and that should cap out the ARC-1231ML's bandwidth.
As my Core i7 965 has arrived, I am just waiting on a MB and RAM and I will throw this Raid on that setup for some real benches. I am just using what I had handy here for this; more will change later. I still have a QX9770 and QX9650 to put under phase and LN2.
So no OC yet, just a baseline run with PCMark05 to see how the Raid is doing. Not too bad of an HDD score, but I can do much better when the 2 other SSDs come back. I should be hitting 70k or so then.
http://img151.imageshack.us/img151/5969/pcmark05hn6.jpg
So how fast does this baby boot up ?
This video is from a cold boot. I have to figure out how to turn off that dang floppy disk check, so I hit F1 there. You will then see it go to the initialization of the ARC-1231ML, which is a pain as it takes a few seconds; it will pause for 5 seconds when it's ready, so if you need to get into the controller card's BIOS you can hit F6. I hit ESC to bypass this 5-second wait. Then off it goes and you're on the desktop.
There are what seem like some delays there in the boot, but notice that when the Vista logo is shown, what normally comes next is "Windows is starting up". This happens so fast that both sounds play at the same time.
http://www.youtube.com/watch?v=3902-UKRBlQ
56400 @ 7x ?
i get 57634 @ 4x :D
What setup are you using Nap ?
This is on Vista also, XP generally gives better results.
the 4x supertalent masterdrive px
buckeye, when one of the SSDs fails, can you still read all the data from it? I think Intel claimed this is possible using some sort of special program for their SSDs. It would be very nice if all the data were still there (without using the RAID card to recover it).
That is quite a spendy setup there.
get ready buckeye.. 4x @ areca1231ml 2GB
http://www.xtremesystems.org/forums/....php?p=3424824
+ 78634 pcmark05
6x coming up :D
Necro revive for UBER Thread :)
big time props to Buck for being the pioneer :)
i just looked at the 1st post lol its so funny