Really? Now that I find interesting, as I would never have thought it would be the same number (even DRAM takes longer for a write/store than for a read). I would need to run some benchmarks to actually prove that to myself. If you do pick any up, please post if you can get some real-world numbers, if for nothing else than my edification.
|.Server/Storage System.............|.Gaming/Work System..............................|.Sundry...................|
|.Supermicro X8DTH-6f...............|.Asus Z9PE-D8 WS.................................|.HP LP3065 30"LCD Monitor.|
|.(2) Xeon X5690....................|.2xE5-2643 v2....................................|.Minolta magicolor 7450...|
|.(192GB) Samsung PC10600 ECC.......|.2xEVGA nVidia GTX670 4GB........................|.Nikon coolscan 9000......|
|.800W Redundant PSU................|.(8x8GB) Kingston DDR3-1600 ECC..................|.Quantum LTO-4HH..........|
|.NEC Slimline DVD RW DL............|.Corsair AX1200..................................|..........................|
|.(..6) LSI 9200-8e HBAs............|.Lite-On iHBS112.................................|.Dell D820 Laptop.........|
|.(..8) ST9300653SS (300GB) (RAID0).|.PA120.3, Apogee, MCW N&S bridge.................|...2.33GHz; 8GB RAM;......|
|.(112) ST2000DL003 (2TB) (RAIDZ2)..|.(1) Areca ARC1880ix-8 512MiB Cache..............|...DVDRW; 128GB SSD.......|
|.(..2) ST9146803SS (146GB) (RAID-1)|.(8) Intel SSD 520 240GB (RAID6).................|...Ubuntu 12.04 64bit.....|
|.Ubuntu 12.04 64bit Server.........|.Windows 7 x64 Pro...............................|..........................|
The Fusion ioDrive has PCIe x4 (physical). The Supermicro X7DWN+ (http://www.supermicro.com/products/m...400/X7DWN+.cfm) has 4x PCIe x8 slots, and you can add a UIO riser card (http://www.supermicro.com/products/n...risercards.cfm) for 4 more, for a total of 8x PCIe x8 slots. But I don't know how bandwidth-starved the UIO-based PCIe slots would be. Of course, that leaves you zero slots for your regular graphics cards... so that wouldn't help Buckeye.
I'm interested in how you are able to create an array across RAID adapters. It must be through some variation of software RAID. I know it performs poorly in Windows, but it supposedly scales better in Linux.
Yes, the Fusion I/O is x4 physical, but in that link they said it took 6 of the Fusion I/O devices to reach those numbers, which is why I was curious about the type of board. Assuming it scales linearly, that would be 700MiB/s read and 600MiB/s write for a single drive, at $14,000. Not cost effective considering what you can build with SSDs or even traditional hard drives (IOPS performance excepted). With the Supermicro riser, each slot would be physical x8, but you'd be limited to x16 total throughput since the riser attaches to a single x16 slot, so each slot would get x4 at most (assuming zero overhead), which I guess could do it.
As for RAIDs across adapters, that's simple: you just have to use a volume manager (e.g., Veritas, Linux LVM, et al.) or a filesystem like ZFS (which is both filesystem and volume manager in one). Under Linux I set up each set of drives (say, 12 SATA drives) as a single RAID-6 volume. I do one of those per card, so with 4 cards I have 48 drives in 4 twelve-way RAID-6s. Those get presented to the OS as 4 large drives. I take each of those drives and use it as an LVM physical volume, then carve a logical volume out of those physicals and put a filesystem (JFS in this case) on top. This lets me add more cards or volumes to the LVM as physical volumes later, so capacity can grow (and performance grows with it). Generally for SATA drives, 12-16 is about where you hit the RAID controller's performance limit, so by capping each RAID at that size and adding more cards, it scales better.
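A rough sketch of that layering with Linux LVM commands. The device names here are hypothetical stand-ins for whatever each RAID card actually presents (these commands need root and real hardware, so treat this as an outline of the steps above, not a recipe):

```shell
# Assume each card exposes its 12-drive RAID-6 set as one block device,
# e.g. /dev/sdb .. /dev/sde (hypothetical names).

# Register each hardware RAID volume as an LVM physical volume
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Pool them into a single volume group
vgcreate bigvg /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Carve a logical volume from the pool and put JFS on top of it
lvcreate -l 100%FREE -n data bigvg
mkfs.jfs /dev/bigvg/data

# Adding a fifth card later just extends the pool:
#   pvcreate /dev/sdf && vgextend bigvg /dev/sdf
```

The point of the layering is that the controllers handle parity within each 12-drive set, while LVM handles aggregation and growth across controllers.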
Yes, you're limited to 128K max for option ROM space. I've only put a max of 4 Arecas in a single box before, and that was without a graphics card (a console-based system). I don't know how much ROM space was left after the cards loaded, though.
I got an answer from Areca support about putting two controllers together...
> you can install upto 4 controllers in same machine. but all our
> controller is pure hardware raid cards, evey controller is independent. it
> mean you can't create a raidset crosss two controllers.
> but you can stripe two volume together in your system, some customer needs
> high performance have similar configurations before.
So the question is: will striping two volumes in software increase the access time or something? I am already worried that write access time will grow from 0.2ms to 3.8ms when using a 1 or 2GB cache module instead of the original 256MB module...
Monitors----> Dell 22" + 30" + 22" Triple Setup
Case--------> "BigBlack" LianLi Workstation with 18x5.25"
CPU---------> Intel QX9650@3.0 (no time for overclocking yet, lol)
GFX---------> Nvidia SLI GTX280
Mainboard---> Asus Striker II Extreme [BIOS: 0901]
RAM---------> 8GB (4x2GB) DDR3 OCZ3P18004GK Platinum Edition, CL8 8-8-27, PC3-14400
Storage-----> HDD RAID5: 4TB Seagate ES.2 drives
------------> SSD RAID5: 128GB -> 8x Mtron Pro 2.5", 16GB each -> 700/500 MB/s read/write
------------> SSDs installed in Fantec MR-SA1042-1 -> 2x 5.25" enclosures, each with 4x 2.5" drive bays
Controller--> All drives controlled by an Areca ARC-1231ML SATA/SAS PCIe RAID controller card
Cooling-----> Aquacomputer watercooling system with Aquaero control
Monitoring--> 2x 5.25" wide blue LCD control display (4x16 chars)
Powered-----> by be quiet! 1000W Dark Power Pro P7
OS----------> Windows Vista Ultimate 64
and to RELAX: my self-built HD Home Cinema with Buttkicker
Any RAID card (actually _ANY_ controller) will increase latency; that's physics. Adding multiple cards mainly increases reliability and streaming performance first, then IOPS later. Most likely the added latency is due to the buffering the card does (both read and write). You can always turn that off, and on the SAS controllers you can change the read-ahead aggressiveness.

I saw the same post about that increase, and it's interesting, but it's not something that /today/ will make a big difference unless you are building a very large database with heavy workloads (i.e., small transaction sizes). Put simply, the service time to process a request and get the data back to the calling application is much larger than what you see for, say, table or index lookups in databases. So don't get too fixated on that number; 3ms or so is VERY good for the functions a desktop or general server will perform.

To put it really simply: IOPS x request_size = bandwidth. You have to find a balance for your system; you can't have both.
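The IOPS x request_size = bandwidth trade-off can be made concrete with a quick back-of-envelope calculation (the IOPS and request-size figures below are hypothetical, purely for illustration):

```python
# Illustrates IOPS x request size = bandwidth.
# Numbers are made up for illustration, not measured from any real drive.

def bandwidth_mib_s(iops: float, request_size_kib: float) -> float:
    """Aggregate bandwidth in MiB/s for a given IOPS rate and request size."""
    return iops * request_size_kib / 1024.0

# Many small random requests: lots of IOPS, very little bandwidth.
print(bandwidth_mib_s(200, 4))      # 200 IOPS at 4 KiB -> 0.78125 MiB/s

# A few large streaming requests: low IOPS, lots of bandwidth.
print(bandwidth_mib_s(100, 1024))   # 100 IOPS at 1 MiB -> 100.0 MiB/s
```

Same device, wildly different bandwidth, which is why you tune for either small-transaction latency or streaming throughput, not both.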
Well, I understand: you can't have both.
But this is the XTREME forum, isn't it?
From their technical FAQ:
http://www.fusionio.com/PDFs/ioDrive-Technical-FAQ.pdf
Is it possible to RAID multiple ioDrives™?
To the host operating system an ioDrive appears as a normal hard drive, so users are able to leverage any software RAID solution that uses the host operating system's native drivers. The operating system's volume manager performs the RAID function.
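On Linux, the "software RAID on top of what each controller exposes" approach the FAQ describes would typically be done with md. A sketch, with hypothetical device names standing in for the volumes the two controllers present (requires root and real hardware):

```shell
# Assume two controllers each expose one array, e.g. /dev/sdb and /dev/sdc
# (hypothetical names; check lsblk for the real ones).

# Stripe (RAID0) the two controller volumes into one md device
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Inspect the resulting stripe, then put a filesystem on it
cat /proc/mdstat
mkfs.xfs /dev/md0
```

This is the "stripe two volumes together in your system" configuration Areca support mentioned, just expressed with Linux md instead of Windows dynamic disks.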
After a major crash on my main working rig last week, I found myself unprepared backup-wise. Ouch! It didn't turn out that bad, though. I am not sure what actually happened with the rig; it could be hardware or something like that. I am going through all the equipment on a test bench to try and figure that out at the moment.
But I needed a machine up and running for work, so the new rig, which had been waiting for cooling and running at stock settings, was commissioned as the new work/play rig. It has no OC yet.
This machine is loaded with all my work apps, anti-virus (which was turned off for tests), the whole banana.
The SSD RAID was set to a RAID 5 config for safety; I felt the need to do that after last weekend's computer crash. I did take a hit on bandwidth, but I have extra safety now, and to tell you the truth I can hardly notice any performance hit.
All in all it is running Xtremely well and very fast. Going from RAID 0, where Vista boot time was ~8-10 seconds, to RAID 5 with all apps loaded, it now takes about ~12 seconds. Still a very good boot time. Apps like Photoshop, Outlook, PaperPort, etc. all load almost instantly. I am very happy with this setup so far; I just need cooling to OC this beast.
First Raid benching
PCMark Vantage scored a 37057 on HDD Test Suite.
Yes, the killer in parity RAIDs is writes, with 100% random being the worst case, as you get only the IOPS of roughly a single drive. With flash (current tech) that's probably going to be worse than a normal drive these days. The further you are from random writes (and the fewer writes you do), the better. However, with an OS on it there are a good number of random write operations (access-time updates, directory updates, small files, temp files, et al.). I've been looking for someone to actually run XDD or other tests to really show what they can do, but no luck so far.
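The random-write hit can be estimated with the textbook parity write-penalty model (a simplification of the worst case described above; the per-drive IOPS figure is hypothetical):

```python
# Classic small-random-write penalty model for RAID levels.
# A RAID5 small write costs 4 I/Os (read data, read parity,
# write data, write parity); RAID6 costs 6. Drive IOPS is made up.

WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6}

def random_write_iops(n_drives: int, drive_write_iops: float, level: int) -> float:
    """Approximate array-wide small-random-write IOPS."""
    return n_drives * drive_write_iops / WRITE_PENALTY[level]

# 8 drives at a hypothetical 100 random-write IOPS each:
print(random_write_iops(8, 100, 0))  # RAID0 -> 800.0
print(random_write_iops(8, 100, 6))  # RAID6 -> ~133.3
```

So an 8-drive RAID6 delivers only a sixth of the raw aggregate write IOPS on small random writes, which is why the OS workload (lots of small metadata updates) hurts parity arrays so much.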
Just adding an update here...
The other day I was working away as always; right before a boardroom call I am all set with spreadsheets and everything else open. I go to call in on my bridge when my computer freezes... No biggie, I thought; it's not like I have never had my computer freeze up before.
I hit the reset button and get to the ARC-1231ML initialization part... ARC-1231ML cannot initialize... followed by that nasty beeping sound the ARC makes when things go bad.
I hit F6 to turn off the alarm and then check the RAID... GAH!! Two 32GB MTRON PROs have FAILED. Not good, I thought.
I took both SSDs and connected them to a regular SATA connector, only to find that they can each be seen only as a 15MB drive. Nothing I did would change that, from repartitioning to reformatting.
Good thing they have a 5-year warranty, as I now have 2 new ones on the way to me.
Rough... I sure hope all SSDs don't fail as easily as that.
Yeah, it's been what, 7 months and 2 failures? Not very good.
But on the flip side I am very happy that I ended up with the MTRON PROs, as it would really bite to have gotten one of the other brands that had only a 3-month warranty and then have to go to the manufacturer to get an RMA done.
DVNation was pretty cool; it just took an email with serial numbers and a phone call, with no major hiccups. He knows who I am and just said, man, that sucks, I will get 2 out to you right away.
I value that kind of service a lot. Jason gets a big thumbs-up.
I moved the RAID out to my bench for some runs. Here it is with only 7 SSDs installed. It will be 9 next week, and that should max out the ARC-1231ML's bandwidth.
As my Core i7 965 has arrived, I am just waiting on a motherboard and RAM, and then I will throw this RAID on that setup for some real benches. I am just using what I had handy here for this; more will change later. I still have a QX9770 and a QX9650 to put under phase and LN2.
So no OC yet, just a baseline run with PCMark05 to see how the RAID is doing. Not too bad a HDD score, but I can do much better when the 2 other SSDs come back; I should be hitting 70k or so then.
So how fast does this baby boot up?
This video is from a cold boot. I have to figure out how to turn off that dang floppy disk check, so I hit F1 there. You will then see it go to initialization of the ARC-1231ML, which is a pain as it takes a few seconds; it pauses for 5 seconds when ready, so if you need to get into the controller card's BIOS you can hit F6. I hit ESC to bypass the 5-second wait. Then off it goes and you're on the desktop.
There seem to be some delays in the boot, but notice when the Vista logo is shown: what normally comes next is "Windows is starting up". This happens so fast that both sounds play at the same time.
http://www.youtube.com/watch?v=3902-UKRBlQ
56400 @ 7x?
I get 57634 @ 4x.
What setup are you using, Nap?
This is on Vista also; XP generally gives better results.
The 4x SuperTalent MasterDrive PX.
Buckeye, when one of the SSDs fails, can you still read all the data from it? I think Intel claimed this is possible using some sort of special program for their SSDs. It would be very nice if all the data were still there (without using the RAID card to recover it).
That is quite a spendy setup there.
Get ready, Buckeye... 4x on an Areca 1231ML with 2GB:
http://www.xtremesystems.org/forums/....php?p=3424824
+ 78634 in PCMark05.
6x coming up!
Necro revive for the UBER thread.
Big-time props to Buck for being the pioneer!
"Lurking" Since 1977
Jesus Saves, God Backs-Up. "I come to the news section to ban people, not read complaints." -[XC]Gomeler. "Don't believe Squish, his hardware does control him!"