Originally Posted by Sr7
While I understand your point, I have to say that you're wrong here. I'm not trying to come off as rude, but your response is a bit naive. Here's why...
First: comparing the status quo of oil (mega-corporations) to the state of multi-GPU is a bit of a non sequitur. The two just aren't comparable, because oil lobbies governments like crazy to get special-interest wedges in place. That said, there is *absolutely no reason* why AMD and NVIDIA wouldn't have done this if it made sense and worked properly. It would put one more of their chips in the average enthusiast system, which they could keep from Intel and hold over their heads. At some point you need to stop and consider that maybe it wasn't done because it doesn't make sound technical sense from a driver, latency, or image-quality perspective... things aren't always about "evil corporations".
The analogies you make also fail because in the tech industry, if you think you can just hold onto technology without changing for decades, you're gravely mistaken (in this industry more than any other). Look at what happened with hard disks: SSDs came to the marketplace, and the hard disk manufacturers who were not prepared for or invested in that future tech were caught with their pants down. Now almost all the patents and technology they spent years building are irrelevant. It's a vastly different business when the mechanical element goes out the window and flash memory comes into the picture. The smart guys hedged their bets and are now making SSDs with IP they invested in years ago. The slow guys who thought they could sit on their tech forever got burned, and are now trying to sell off their hard disk drive businesses as the average price of an HDD plummets (there's no profitability left in the market).
Secondly, SFR (split frame rendering, what this Hydra technology uses) used to be the main method multi-GPU setups relied on... until AFR (alternate frame rendering) came along and became the standard. Now all default CrossFire and SLI profiles use AFR. Why? Better scaling. Take any NVIDIA GPU, test the default SLI profile, then compare it to SFR scaling, and you'll see my point (the same goes for ATI, but you can't force SFR on their products through their control panel).
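If it helps, here's a toy sketch of the difference between the two modes. The loop structure and numbers are mine, purely for illustration; this is not real driver code.

#include <cstdio>

int main() {
    const int numGpus = 2;
    const int numFrames = 6;

    // AFR: each GPU owns an entire frame, alternating frame by frame.
    printf("AFR assignment:\n");
    for (int frame = 0; frame < numFrames; ++frame)
        printf("  frame %d -> GPU %d (whole frame)\n", frame, frame % numGpus);

    // SFR: every GPU renders a slice of every frame, so each frame's
    // completion time is gated by whichever slice finishes last.
    printf("SFR assignment:\n");
    for (int frame = 0; frame < numFrames; ++frame)
        for (int gpu = 0; gpu < numGpus; ++gpu)
            printf("  frame %d, slice %d -> GPU %d\n", frame, gpu, gpu);
    return 0;
}

The point of the sketch: under AFR the per-frame work stays whole, while under SFR every frame has to be carved up and reassembled, which is exactly where the scaling loss comes from.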
The way 3D graphics works, you can't just arbitrarily chop up a frame, send the pieces to two different cards, and expect a proper image to come out. What about data that spans both sub-frames? State gets set through DX runtime calls, and you can't just round-robin the draw calls... you'd have to make each GPU fully aware of the rest of the frame as well, meaning you have inherently duplicated work. Even if they're not each computing the whole frame, they're still overlapping, which is always a bad thing here. I have no doubt they've demoed the tech, but I imagine they have some hacks in place to work around the shortcomings. If you think you have driver problems now with SLI or CF, wait till you add more latency and another layer of software to the mix.
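To make the state problem concrete, here's a toy command stream. The Command model here is invented for illustration; it is not the actual DX runtime.

#include <cstdio>
#include <string>
#include <vector>

// A grossly simplified frame: state changes interleaved with draw calls.
struct Command {
    bool isDraw;       // true = draw call, false = state change
    std::string desc;
};

int main() {
    std::vector<Command> stream = {
        {false, "SetShader(A)"}, {false, "SetTexture(T1)"},
        {true,  "Draw(mesh1)"},
        {false, "SetTexture(T2)"},
        {true,  "Draw(mesh2)"},
        {true,  "Draw(mesh3)"},
    };

    const int numGpus = 2;
    int nextGpu = 0;

    for (const Command& cmd : stream) {
        if (cmd.isDraw) {
            // Only one GPU executes each draw, round-robin...
            printf("GPU %d: %s\n", nextGpu, cmd.desc.c_str());
            nextGpu = (nextGpu + 1) % numGpus;
        } else {
            // ...but every GPU must see every state change, otherwise a
            // later draw it receives runs with stale state. This is the
            // duplicated, overlapping work.
            for (int gpu = 0; gpu < numGpus; ++gpu)
                printf("GPU %d: %s (replicated)\n", gpu, cmd.desc.c_str());
        }
    }
    return 0;
}

Notice that the state traffic scales with the number of GPUs even though each draw runs only once; that overhead is there no matter how clever the splitter is.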
Last, as I mentioned before, the reason microstutter exists is the same reason Hydra can't effectively dole out work to arbitrary GPUs: it has no way of knowing how long the different chips will take to process each frame. Framerates in realtime games can vary wildly from frame to frame, so you can't use knowledge from prior frames to time the present of the new frame, because maybe that frame was way faster or slower than the 20 preceding it. Also, knowledge of each GPU's capabilities, let alone its speed at different stages of the graphics pipeline, is impossible to assess effectively, especially for third-party software with no access to the GPU/driver internals. It's hard enough for the chip to know definitively "okay, this chip is this fast relative to that chip, so I need to send 20% of the frame to the slower chip", let alone the even more complex issue of "when I send 20% to the slower chip, how do I predict whether that GPU architecture will be badly bottlenecked on this particular piece of geometry vs. the other GPU architecture?"
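Here's a toy simulation of that prediction problem. All the speeds and workloads are made up; the point is the shape of the result, not the numbers.

#include <algorithm>
#include <cstdio>

int main() {
    // The balancer believes these are the relative GPU speeds (units/ms).
    const double assumedSpeedA = 1.0;
    const double assumedSpeedB = 0.5;
    const double work = 120.0;  // work units per frame

    // GPU B's actual per-frame speed swings around the assumption,
    // e.g. because its architecture bottlenecks on certain geometry.
    const double actualSpeedB[] = {0.5, 0.25, 0.6, 0.2, 0.5, 0.35};

    for (double speedB : actualSpeedB) {
        // Static split based on the assumed speeds.
        double shareA = work * assumedSpeedA / (assumedSpeedA + assumedSpeedB);
        double shareB = work - shareA;
        double timeA = shareA / assumedSpeedA;  // GPU A behaves as assumed
        double timeB = shareB / speedB;         // GPU B often does not
        // The frame completes only when the slower share is done.
        printf("GPU A: %5.1f ms  GPU B: %5.1f ms  frame: %5.1f ms\n",
               timeA, timeB, std::max(timeA, timeB));
    }
    return 0;
}

Run it and the frame times bounce between 80 ms and 200 ms even though the workload never changed: one GPU idles while the other drags the frame out. That uneven pacing is microstutter.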
My final gripe is that Lucid is putting out beautiful marketing slides and getting people's hopes up. They're promising 100% scaling no matter what. The sad truth is this just will not work the way they're selling it. CrossFire and SLI take a lot of flak for not always scaling to expectations. What people don't realize is that you can only scale up to the point where you are CPU bound and have no more graphics work to do; this is often why things don't scale further. Adding more graphics cards after the point of complete CPU-boundedness will only hurt framerates, not help. There are also issues around inter-chip communication. Currently you need SLI and CrossFire profiles to tell the driver, "hey, I know the game developer didn't re-render this resource or clear it to signal that they don't need you to retain any data, but you can safely discard it and no corruption will occur." Lastly, if a game isn't scaling because of CPU/GPU synchronization, you won't see scaling either. Hydra won't fix that... so much of this lies in the game developer's ability to properly handle these cases (or in a profile existing for each game, as they do in NVIDIA and ATI drivers).
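To put a number on the CPU-bound point above, here's a back-of-the-envelope model. The 10 ms / 30 ms figures are invented, and it deliberately assumes perfect GPU scaling with zero multi-GPU overhead, i.e. the best case Hydra could ever hope for.

#include <algorithm>
#include <cstdio>

int main() {
    const double cpuMs = 10.0;  // CPU time to prepare one frame
    const double gpuMs = 30.0;  // single-GPU render time for that frame

    for (int gpus = 1; gpus <= 6; ++gpus) {
        // The frame can never finish faster than the CPU can feed it.
        double frameMs = std::max(cpuMs, gpuMs / gpus);
        printf("%d GPU(s): %5.2f ms/frame (%6.1f fps)\n",
               gpus, frameMs, 1000.0 / frameMs);
    }
    return 0;
}

Even in this idealized case, scaling stops dead at 3 GPUs (100 fps) because the CPU ceiling kicks in. No amount of marketing changes that math.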
Without those profile hints, you either take the conservative route to avoid corruption and assume the game developer meant for the present frame to depend on the previous one (and transfer data between GPUs, hurting scaling, which can't happen with the Hydra system), or you take the aggressive route and say "I don't care if the developer expected me to preserve some data between frames because they default to programming under the assumption of one GPU with an untouched set of memory from the last frame... kill the previous frame's data and risk corruption anyway!"
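Those two choices, as a tiny decision sketch. The policy names and the per-resource question are mine, not Lucid's:

#include <cstdio>

enum class Policy { Conservative, Aggressive };

// Hypothetical per-resource decision at a frame boundary: does the GPU
// rendering the next frame need last frame's copy of this resource?
bool keepPreviousFrameData(Policy policy, bool gameClearedResource) {
    if (gameClearedResource)
        return false;  // the developer signalled the data is disposable
    switch (policy) {
    case Policy::Conservative:
        // Assume the new frame depends on the old data and copy it
        // between GPUs: correct, but the transfer kills scaling.
        return true;
    case Policy::Aggressive:
        // Assume it doesn't, skip the transfer, and risk rendering
        // corruption if the assumption was wrong.
        return false;
    }
    return true;
}

int main() {
    printf("conservative, uncleared: keep=%d (transfer cost)\n",
           keepPreviousFrameData(Policy::Conservative, false));
    printf("aggressive,   uncleared: keep=%d (corruption risk)\n",
           keepPreviousFrameData(Policy::Aggressive, false));
    return 0;
}

Neither branch is good, and without per-game profiles there's no third option.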
The bottom line is that if you have multiple GPUs, they need to be able to talk via the driver and be aware of one another, and that isn't possible with different manufacturers or a non-CF/SLI-configured setup.
There are so many reasons this is a bad idea (though it's great in theory); you just may not realize it yet. It's a bit of a pipe dream in the long run, IMO.