I'm not sure to what extent the latency will increase with trace distance, but I do know that a bridge chip increases latency quite a bit, so that does not work for core-to-core communication. The biggest problem for MCM on a GPU would be cooling: the GTX 280 is pretty much reaching the maximum power draw that can be cooled in a single spot. Having 2 GPUs separated by a little distance will make cooling quite a bit easier, as the power output per unit area can be lowered this way.
The number of traces that connect the 2 chips will only play a role when it is bigger than the number of traces needed for the memory bus. So let's say we have 2 chips, each with a 256-bit memory bus, and each bus requires around 500 pins (not counting video output, PCIe, CFX sideport, power pins and more). Then that MCM design will only become more efficient when the GPU-to-GPU connection requires more than those 500 pins, which to me seems unlikely. That's because the MCM package would have more than 1000 pins terminating from it, while each of those 2 chips done separately would have more than 500 + the pins needed for the interconnect terminating from it. Latency will probably increase when doing things the non-MCM way, but I don't think that will play too big of a role, as GPUs tend to be pretty good at latency hiding and memory latency is not too big of a problem for them anyway.
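To make the pin-count trade-off concrete, here's a rough back-of-the-envelope sketch using the 500-pin memory bus figure from above; the interconnect width is a made-up number purely for illustration:

```python
# Rough pin-count comparison, MCM vs. two separate packages.
# These numbers are illustrative assumptions, not real chip specs.
mem_bus_pins = 500        # pins per 256-bit memory bus (estimate from the post)
interconnect_pins = 300   # hypothetical width of the GPU-to-GPU link

# MCM: one package must expose both chips' memory buses to the board,
# while the chip-to-chip link stays inside the package.
mcm_package_pins = 2 * mem_bus_pins

# Separate packages: each chip exposes its own memory bus
# plus the interconnect to the other chip.
separate_package_pins = mem_bus_pins + interconnect_pins

print(mcm_package_pins)       # 1000 pins on the single MCM package
print(separate_package_pins)  # 800 pins per separate package
```

The point being: as long as the interconnect needs fewer pins than a full memory bus, each separate package stays smaller and cheaper to route than one giant MCM package.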
All this of course does not apply to the upcoming R700 yet, as R700 will probably not be any miracle, but ATI's future R800 might be more like this.
EDIT: I can't find anything on Cyberlink's website about GPU-accelerated video transcoding, so I'm not sure whether it will support NVIDIA cards. Has anyone else got any idea? I found some more info over here: http://www.hardware.info/nl-NL/artic...HD_4850_Test/6
It's Dutch though, so you may have to grab yourself a translator. They say that AMD cards will get GPU-accelerated video transcoding, and they made it sound like NVIDIA will not get it in Cyberlink's application. Adobe Premiere will also get a plug-in to accelerate certain computations, and of course Havok Physics will get a boost in the future with AMD cards.
So it seems like ATI's cards already have more uses on the GPGPU front than NVIDIA's cards; it's just that NVIDIA is making more noise about their CUDA program.