Yeah, I've run into that problem on every platform/vendor. I've even seen various 'stupid admin' designs where they put, say, a serial card in a 64-bit/133MHz slot but then put the SAN HBA in a 32-bit/33MHz one. The board I've got my eye on now is the Supermicro X8DAH+, since it has two Tylersburg-36D chipsets on it (the only one I've found so far with two). For I/O it's going to be a killer if they don't castrate something else. You don't really need more than x8 PCIe v1 speeds as of yet, since the cards that are available can't push more than that anyway (the IOP34x parts are maxed out).

What you're doing is generally what is done: put multiple cards in a system and stripe across them with LVM (quick sketch at the end of this post).

Here at the datacenter we don't really have any SSDs deployed to any of the clients. The largest bank of drives we have is about 500-600 (not including SANs), but even that large one isn't for a single DB instance and is hooked up via 4Gbit FC, so there are a lot of built-in bottlenecks. It doesn't help matters that they bought a lower-end Sun box; they should have had at least a T5440 for the SPARC line.
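
For reference, here's a minimal LVM striping sketch, assuming two controllers each exporting one block device (the device names, size, and filesystem are made up for illustration):

    pvcreate /dev/sdb /dev/sdc                        # one PV per controller's exported device
    vgcreate fastvg /dev/sdb /dev/sdc                 # pool both PVs into one volume group
    lvcreate -i 2 -I 256 -L 400G -n fastlv fastvg     # -i = stripe count, -I = stripe size in KB
    mkfs.xfs /dev/fastvg/fastlv                       # any filesystem; XFS is just an example

With -i 2, LVM alternates 256KB chunks across the two PVs, so sequential I/O gets spread over both cards instead of bottlenecking on one.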