@buckeye - Since you seem to also have a Linux OS on there, I would be very curious if you could run XDD against the system for both read and write random IOPS performance. I was very leery of the SSDs, since from the smattering of reports I was seeing, write latency sucks in comparison (6-7 ms) versus 0.1-0.2 ms for reads. I was able to get better write IOPS than that with SAS.
Anyway, if you're able, grab xdd from http://www.ioperformance.com/products.htm and run something like:
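(Working from memory here, so the exact flag names may be slightly off; check xdd's usage output before running. /dev/sdX below just stands in for whatever your array device is.)

./xdd.linux -op read -targets 1 /dev/sdX -blocksize 512 -reqsize 8 -seek random -seek range S0 -queuedepth 16 -dio -passes 3 -verbose
./xdd.linux -op write -targets 1 /dev/sdX -blocksize 512 -reqsize 8 -seek random -seek range S0 -queuedepth 16 -dio -passes 3 -verbose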
Where S0 is the largest you can make it (I used 64GB above); in any case, something big enough to hit all the disks with decent-sized chunks.
As for your goal of attaining >1 GiB/s speeds, welcome to the club. At this point there are two main issues: 1) individual controller performance and 2) memory/process performance. You can put several controllers in a system, which solves the first; for the second, at this point it means going to an AMD Opteron system or waiting to see what Intel does with the Nehalem-EPs. I've been banging my head against this for a while, both for disk I/O and network I/O. It's not easy. I'm hoping to get over to SC08 this year to nose around.
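A quick way to tell whether you're up against a single controller or the memory/process side is to run one big sequential read per controller in parallel and see if the aggregate scales. Something like the below, with the device names just being placeholders for one disk/array behind each controller:

for dev in /dev/sda /dev/sdb; do
    dd if=$dev of=/dev/null bs=1M count=16384 iflag=direct &
done
wait

If the aggregate with a stream on each controller doesn't come close to double a single stream, the bottleneck is likely somewhere shared (memory, PCIe, CPU) rather than the controllers themselves.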
Another item is workload type: if it's random I/O, you pretty much won't hit good speeds under any parity RAID (3/4/5/6), as you're limited to roughly the IOPS of a single drive. Either use RAID 10 or RAID 100, or use many smaller parity RAIDs and LVM them together.
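For that last option, the rough shape of it is a couple of small RAID 5s with a logical volume striped across them. Sketch below using Linux md for illustration (device names are placeholders; the same idea applies if the small RAID 5 sets live on a hardware controller instead):

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[f-i]
pvcreate /dev/md0 /dev/md1
vgcreate datavg /dev/md0 /dev/md1
lvcreate -i 2 -I 256 -l 100%FREE -n datalv datavg

The -i 2 stripes the LV across both arrays, so the random I/O gets spread over two independent parity groups instead of one big one.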