Quote Originally Posted by NapalmV5 View Post
if you guys think x4 is more than enough.. i apologize for my suggestion
No need to apologize, man. Your opinion is as valid as anyone else's.

Quote Originally Posted by stevecs View Post
> 4 pcie lanes does have some benefits (mainly for burst traffic which would help keeping the controller's cache filled up).
Yes, I agree with that, though burst traffic might not happen all that often. Still a valid point.

Quote Originally Posted by stevecs View Post
IRQ sharing is actually important as well (and distributions of the interrupts between multiple cpu's/cores) but that's hard to control under windows, though generally it's not something you'll really end up seeing with block device transfers at the block sizes mentioned here or at the bandwidth rates and with normal CPU systems, though this is on your lower power system right?
IRQ sharing is a major pain to work around in Windows, indeed. Usually you can only influence IRQ assignment by physically moving hardware around or disabling the offending device...

My brother once had lousy throughput on a 10/100 NIC (25~50%) on his P3-1000 router, with 100% CPU usage when transferring anything over the LAN. Many hours later he found out that, somehow, the AGP card and the PCI slot the NIC was installed in were sharing an IRQ. He had to move the NIC to another PCI slot (not a problem, he had six of them available... lol) to get the thing working properly.
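
In case it's useful to anyone reading along: on Windows about the best you can do is eyeball msinfo32 (Hardware Resources -> Conflicts/Sharing), but on a Linux box you can spot shared lines straight from /proc/interrupts. Here's a rough sketch of that (Linux only, it just parses that file, so treat it as a starting point rather than anything definitive):

# List IRQ lines that more than one device is hanging off of, by parsing
# /proc/interrupts (Linux). Device names are comma-separated at the end of
# each numeric IRQ row.
with open("/proc/interrupts") as f:
    f.readline()  # skip the CPU column header
    for line in f:
        label, _, rest = line.partition(":")
        if not label.strip().isdigit():
            continue  # skip NMI/LOC/ERR and other non-numeric rows
        devices = [d for d in rest.split(",") if d.strip()]
        if len(devices) > 1:
            # the first chunk still carries the per-CPU counters and chip name,
            # so keep only its last token as the first device's name
            first = devices[0].split()[-1]
            others = [d.strip() for d in devices[1:]]
            print(f"IRQ {label.strip()} is shared by: {', '.join([first] + others)}")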


Quote Originally Posted by stevecs View Post
As for the 4x pci lanes on that giga board they should NOT be shared with anything on the SB. That chipset should have 2GiB/sec going to the SB in total and from the diagram: http://www.intel.com/Assets/Image/di...ck_Diagram.jpg the PCIe's are on their own (and disregard the marketing #'s of the 500MB there, it's 2.0Gbps/channel full duplex). But yes if you fully saturate the SB you can oversubscribe the NB/SB connection though you'd have to have everything plugged in and running at a decent clip to do that.
Hehe, I was clearly in dire need of sleep when I posted earlier... That's exactly what I meant about the "bandwidth sharing" issue: each PCIe lane gets dedicated bandwidth, but the SB's total uplink to the NB is shared.

Correct me if I'm wrong, but isn't the DMI bandwidth 1GB/s in each direction, as opposed to 2GB/s? Since the PCIe ports are listed at 500MB/s each (which is really 250MB/s in each direction), my guess is the same applies to the DMI interface, which I've always heard is basically a slightly modified x4 PCIe link, btw.
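
Back-of-the-envelope, assuming DMI really is just a repurposed x4 PCIe 1.x link (which is only what I've heard, so grain of salt):

\[ 2.5\ \text{GT/s} \times \tfrac{8}{10}\ (\text{8b/10b}) = 2\ \text{Gbit/s} = 250\ \text{MB/s per lane, per direction} \]
\[ 4 \times 250\ \text{MB/s} = 1\ \text{GB/s per direction} = 2\ \text{GB/s aggregate, full duplex} \]

If that's right, the 2GB/s figure and my 1GB/s per direction would just be the same link counted two different ways (aggregate vs. per direction).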

Any thoughts on this one?

Oh! I just remembered something I wanted to say in my earlier post: poor RAID performance can sometimes be traced to incompatibilities between the controller firmware and the HDDs. It's not too frequent, but you might want to check that too.
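
If you want a quick way to gather the info for that check, here's a rough sketch (assuming smartmontools is installed and the drives show up to the OS as /dev/sd*; behind some RAID controllers you'd need the vendor's CLI or smartctl's -d option instead). It just dumps model and firmware revision so you can compare them against the controller's compatibility list / firmware release notes:

# Dump model + firmware revision of each visible drive via smartctl -i.
import glob
import subprocess

for dev in sorted(glob.glob("/dev/sd?")):
    try:
        out = subprocess.run(["smartctl", "-i", dev],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        print("smartctl not found -- install smartmontools first")
        break
    info = {}
    for line in out.splitlines():
        key, _, value = line.partition(":")
        if key.strip() in ("Device Model", "Firmware Version", "Serial Number"):
            info[key.strip()] = value.strip()
    print(dev, info)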

Cheers.

Miguel