No need to apologize, man. Your opinion is as valid as anyone else's.
Yes, I subscribe to that. Though burst traffic might not happen all that often, it's still a valid point.
IRQ sharing is a major pain to work around in Windows, indeed. Usually you can only manage IRQ assignment by physically moving hardware around or disabling the offending device...
My brother once got lousy throughput (25~50%) from a 10/100 NIC on his P3-1000 router, with 100% CPU usage whenever anything transferred over the LAN. Many hours later he found out that the AGP card and the PCI slot holding the NIC were somehow sharing an IRQ. He had to swap the NIC to another PCI slot (not a problem, he had six of them available... lol) to get the thing working properly.
Hehe, I was clearly in dire need of sleep when I posted earlier... I meant exactly that about the "bandwidth sharing" issue: each PCIe lane has dedicated bandwidth, but the overall SB bandwidth is shared.
Though correct me if I'm wrong, isn't the DMI bandwidth 1 GB/s in each direction, rather than 2 GB/s? The PCIe ports are listed at 500 MB/s each (and we all know that's really 250 MB/s in each direction), so my guess is the same accounting applies to the DMI interface, which, by the way, I've always heard is basically a slightly modified PCIe x4 link.
Any thoughts on this one?
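To make the numbers concrete, here's a back-of-the-envelope check. It assumes DMI behaves like a PCIe 1.x x4 link (2.5 GT/s per lane with 8b/10b encoding), which matches the "slightly modified x4 PCIe" description; treat it as a sketch, not a spec quote:

```python
# Back-of-the-envelope DMI bandwidth, assuming it is a PCIe 1.x x4-style link.
GT_PER_LANE = 2.5e9   # PCIe 1.x signaling rate: 2.5 GT/s per lane
ENCODING = 8 / 10     # 8b/10b line coding: 8 data bits per 10 bits on the wire
LANES = 4             # DMI is commonly described as a x4 link

bytes_per_dir_per_lane = GT_PER_LANE * ENCODING / 8  # bits -> bytes
per_direction = bytes_per_dir_per_lane * LANES       # one direction, whole link
aggregate = per_direction * 2                        # both directions summed

print(f"per lane, per direction: {bytes_per_dir_per_lane / 1e6:.0f} MB/s")
print(f"x4 link, per direction:  {per_direction / 1e9:.0f} GB/s")
print(f"x4 link, aggregate:      {aggregate / 1e9:.0f} GB/s")
```

Under those assumptions you get 250 MB/s per lane per direction, so 1 GB/s each way on a x4 link, and the 2 GB/s figure would be the two directions added together.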
Oh! I just remembered something I wanted to add to my earlier post: poor RAID performance can sometimes be traced to incompatibilities between the controller firmware and the HDDs. Not too frequent, but you might want to check that too.
Cheers.
Miguel



