Depends on what you mean by performance. You _may_ get better streaming speeds out of an 8MiB request than a 64KiB one, but that's dependent on the type of RAID, its stripe size and stripe width, controller optimizations, and cache memory (if using a parity RAID), among other things. However, the differences that you see are really down to implementations, not any underlying issue with the design itself. More than 4 PCIe lanes does have some benefits (mainly for burst traffic, which would help keep the controller's cache filled up).
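To make the stripe-geometry point concrete, here's a rough back-of-envelope sketch of how request size lines up with a parity stripe. The 64KiB stripe size and 4-disk RAID-5 layout are made-up numbers for illustration, not anything from your array:

```python
# Back-of-envelope: how request size lines up with RAID stripe geometry.
# Geometry below is hypothetical, not taken from the poster's setup.
STRIPE_SIZE = 64 * 1024        # per-disk chunk (bytes)
DATA_DISKS  = 3                # e.g. a 4-disk RAID-5: 3 data + 1 parity
FULL_STRIPE = STRIPE_SIZE * DATA_DISKS

for req in (64 * 1024, 8 * 1024 * 1024):
    full, leftover = divmod(req, FULL_STRIPE)
    print(f"{req // 1024:>5} KiB request: {full} full stripe(s), "
          f"{leftover // 1024} KiB partial")
    # Partial-stripe writes on parity RAID force a read-modify-write
    # (read old data + old parity, recompute, write back), which is part
    # of why big requests that cover whole stripes tend to stream faster.
```

With those numbers an 8MiB request covers 42 full stripes with only a small tail, while a 64KiB request never fills a stripe at all, so the big request gives the controller far more room to optimize.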

IRQ sharing is actually important as well (and so is the distribution of interrupts across multiple CPUs/cores), but that's hard to control under Windows. Generally it's not something you'll really see with block device transfers at the block sizes mentioned here, or at these bandwidth rates on a normal CPU, though this is on your lower-power system, right? So there may be something there holding it back. An easy way to find out is to do a RAID-0 stripe and see what the cap is on the same slot/drives/controller. Most times it's a chipset design limit (e.g., ~800MiB/s on the IOP34x series) or the CPU processing power to do the parity calcs, I've found.
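If you want to hunt for that cap yourself, a quick Python sketch like this can time raw sequential reads at a few block sizes. The device path is a placeholder (point it at the RAID-0 test volume), it needs admin/root rights, and note the OS cache will inflate the numbers unless you bypass it (O_DIRECT on Linux, or just a fresh boot):

```python
import time

# Rough sequential-read timer to find where a controller/chipset tops out.
# DEVICE is a placeholder -- use r"\\.\PhysicalDrive1" on Windows or
# "/dev/sdb" on Linux; run elevated, and don't point it at your boot drive.
DEVICE = r"\\.\PhysicalDrive1"
TOTAL  = 1 * 1024 * 1024 * 1024          # read 1 GiB per test pass

for bs in (64 * 1024, 1024 * 1024, 8 * 1024 * 1024):
    with open(DEVICE, "rb", buffering=0) as dev:
        start = time.perf_counter()
        left = TOTAL
        while left > 0:
            chunk = dev.read(min(bs, left))   # raw reads: keep sector-aligned
            if not chunk:
                break
            left -= len(chunk)
        secs = time.perf_counter() - start
    print(f"{bs // 1024:>5} KiB blocks: {(TOTAL - left) / secs / 2**20:.0f} MiB/s")
```

If RAID-0 tops out around the same number as the parity array, it's the slot/chipset; if RAID-0 runs away from it, you're looking at the parity-calc side.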

As for the 4x PCIe lanes on that Gigabyte board, they should NOT be shared with anything on the SB. That chipset should have 2GiB/sec going to the SB in total, and from the diagram ( http://www.intel.com/Assets/Image/di...ck_Diagram.jpg ) the PCIe lanes are on their own (and disregard the marketing #'s of 500MB there; it's 2.0Gbps/channel, full duplex). But yes, if you fully saturate the SB you can oversubscribe the NB/SB connection, though you'd have to have everything plugged in and running at a decent clip to do that.
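The oversubscription math is simple enough to sanity-check yourself. Here's a hedged sketch with guessed per-device figures (swap in whatever's actually hanging off your SB; none of these numbers are measurements from this board):

```python
# Back-of-envelope check on the NB<->SB interconnect. Device figures are
# illustrative guesses, not measurements from this particular board.
LINK_MIB_S = 2 * 1024          # ~2GiB/s NB/SB link, per the chipset diagram

sb_devices = {
    "RAID HBA in the x4 slot": 800,   # e.g. an IOP34x-class cap
    "gigabit NIC":             120,
    "onboard SATA":            250,
    "USB":                      60,
}

total = sum(sb_devices.values())
print(f"worst-case SB demand: {total} MiB/s of a {LINK_MIB_S} MiB/s link")
print("oversubscribed" if total > LINK_MIB_S else "fits with headroom")
```

Even with everything pushed at once, those guessed numbers come in well under the link, which is the point: you'd really have to work at it to choke the NB/SB connection.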