Wow. I'm loving where multi-GPU technology is heading! Not only will this make next-gen cards even better, it can also breathe new life into older GPUs that aren't QUITE obsolete yet (48xx series). GO GO LUCID GO
LoL, I guess I'm not being clear enough.
Without Hydra: Lynnfield Runs 1x16, 1x8+1x8, 1x4+1x4+1x4+1x4
Now, according to the article,
With Hydra: 2x16, 4x8, or of course 1x16 are possible.
Given that Lynnfield is only capable of sending or receiving an x16 link's worth of PCIe data, how does Hydra then extrapolate 32 lanes' worth of data, bandwidth-wise, from 16? That is what I am asking. It seems it is somehow magically breaching the physical limitations of Lynnfield.
As two graphics cards sending max data back via Hydra would be x16 + x16 = x32 of data, how does that fit down the x16 interconnect between Hydra and the PCIe controller in the Lynnfield CPU?
Which implies: isn't putting the top Hydra chip on a Lynnfield platform kind of pointless?
How long before they build this chip onto the mobos? 1? 2 years? :up:
Hydra is wired for 32 lanes, and you're only rendering a single frame across multiple GPUs.
Just because the CPU is limited to 16 lanes doesn't mean the board is too.
With Lynnfield you'll have 16 lanes to the Hydra chip, then 32 lanes to the slots.
The two are independent: the CPU sends the frame to Hydra via the 16-lane bus.
Then Hydra breaks the frame up and sends the parts to the GPUs through its 32 lanes.
Hydra works one frame at a time.
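For what it's worth, the split being described can be sketched in a few lines of Python (a toy model only; the function and object names are made up, not Lucid's actual scheduler):

```python
# Toy model of the per-frame split described above (hypothetical, not Lucid's API).
# The CPU sends one frame's worth of draw work over the single x16 link;
# Hydra then hands the objects out across the GPUs' own links.

def split_frame(objects, num_gpus):
    """Round-robin the frame's objects across GPUs (real Hydra load-balances by cost)."""
    buckets = [[] for _ in range(num_gpus)]
    for i, obj in enumerate(objects):
        buckets[i % num_gpus].append(obj)
    return buckets

frame = ["terrain", "player", "skybox", "particles"]
print(split_frame(frame, 2))
# → [['terrain', 'skybox'], ['player', 'particles']]
```

The real chip balances by object cost rather than round-robin, but the point is the same: the x16 input carries the frame once, while the 32 output lanes carry the pieces out in parallel.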
Then an X58, which runs 32 lanes (x16 + x16), can move more data. This is what I was confused about: the way it was presented made it sound like Hydra can saturate two x16 links with only one x16 link as input, and that didn't make sense.
And this is still what I am asking: 16 lanes of data from the CPU should only equal 16 lanes of output to the GPUs, so why the 32 lanes? Maybe it's my lack of understanding, but frame by frame, isn't that how GPUs work already?
Hydra itself is a mini processor, so it has its own instructions it adds onto the data.
Yeah, but not 16 full lanes of PCIe 2.0 bandwidth worth of overhead. Basically, I just want to understand if the capacity could ever be used fully.
If two graphics cards process a frame, that takes both their full 16 lanes, so a combined 32 lanes of data on the graphics side.
IE: GPUs (32 lanes of data) <--- Hydra chip (16 lanes) <--- (16 lanes) PCIe controller on-die in Lynnfield.
The processor is getting the computational work from the game engine, let's say 32 lanes' worth of data, but since PCIe can only send 16 lanes of data to Hydra, how can Hydra feed the full 32 lanes? To me it would seem that on Lynnfield you would be limited to the same x8 + x8 config regardless of how many potential lanes you have, as you can only ever receive 16 lanes' worth of data at any given time. In essence they seem to be putting an overpowered chip on the Lynnfield platform: it can't use its full capabilities, but you're still paying for the silicon.
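To put rough numbers on the question (back-of-the-envelope only; PCIe 2.0 delivers roughly 500 MB/s per lane per direction after encoding overhead):

```python
# Back-of-the-envelope PCIe 2.0 bandwidth per direction (~500 MB/s per lane
# after 8b/10b encoding). Rough figures for illustration only.
PER_LANE_MBPS = 500

def link_bandwidth(lanes):
    """Approximate one-direction bandwidth of a PCIe 2.0 link, in MB/s."""
    return lanes * PER_LANE_MBPS

cpu_to_hydra = link_bandwidth(16)        # single x16 link into Hydra
hydra_to_gpus = link_bandwidth(16) * 2   # aggregate of two x16 slots

print(cpu_to_hydra, hydra_to_gpus)  # → 8000 16000
```

So the Hydra-to-GPU side has twice the aggregate bandwidth of the CPU-to-Hydra side; whether that headroom can ever be used is exactly the question being asked.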
So what happens if I, e.g., mix a 5870 with a 7800 GT? What graphics API does this setup support? I guess DX9 only? How is the memory of the cards used, e.g. 256 MB on the 7800 GT vs. 1024 MB on the 5870? What types of AA and AF can I use, since both GPUs use completely different algorithms and methods? ...?
I mean, if I want to use two graphics cards that have the same features, in the end I'm forced to buy the exact same GPU anyway?
I don't know, I still don't "trust" this chip ;)
It's good to see some solid info on this, it's been nothing but whispers and rumors for far too long. I'm not skeptical on the tech, it seems like a very elegant solution to the problem by load balancing on a per object basis, however I will remain cautiously optimistic about claims of "near linear scaling". I can see how it would have some advantages over tiling or alternate frame rendering when each GPU isn't required to have the full texture/object detail in memory at all times etc... but the claims are still rather bold.
I guess the only thorny issue I have left is how it will resolve alternate AA methods from one card to the next. Depending on the game, some scenes may appear "odd" or inaccurate due to different rendering methods of certain scenes.
Here's to hoping it is the holy grail. :D
I don't have high hopes...
A 5870 X2 with Hydra instead of the PLX chip would rock. You would be able to run 4x 5870 X2 in Xfire!
I guess sooner...
http://www.businesswire.com/portal/s...76&newsLang=en
If one card finishes its part of the scene faster than the other card, the Hydra chip would adjust the workload until they were each rendering their part of the frame in the same time. I guess it would have to do this continuously, on the fly. And since the workload is split up by objects, I don't think different types of AA would really be a problem; one object may just look smoother than the others.
Most people taking advantage of this tech will most likely not be throwing a 3-4 generation old card in with a new-gen card anyway.
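A toy version of that feedback loop, assuming the split is a single fraction nudged each frame (hypothetical code, not Lucid's actual algorithm):

```python
# Toy sketch of continuous per-frame rebalancing (hypothetical, not Lucid's
# algorithm). If GPU A finishes its share sooner than GPU B, shift work
# toward A until both take roughly the same time per frame.

def rebalance(share_a, time_a, time_b, step=0.05):
    """share_a is GPU A's fraction of the frame's objects, in [0, 1]."""
    if time_a < time_b:
        share_a = min(1.0, share_a + step)   # A went idle first: give it more
    elif time_b < time_a:
        share_a = max(0.0, share_a - step)   # B went idle first: give A less
    return share_a

# Pretend GPU A renders twice as fast as GPU B: its share drifts toward
# the equilibrium share/2 == (1 - share), i.e. about 2/3.
share = 0.5
for _ in range(5):
    share = rebalance(share, time_a=share / 2.0, time_b=(1 - share) / 1.0)
print(round(share, 2))  # → 0.65
```

With mismatched cards like a 5870 plus a 7800 GT, the same loop would simply settle at a very lopsided split.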
Load balancing is why they went with 32 (4x8) lanes on Hydra.
They don't have to fill the lanes, but they have enough bandwidth if one card gets 100% of the frame.
It doesn't break up the frame evenly, even with identical GPUs.
Hydra figures out shader capabilities and splits the load among the cards.
In Razz's case it would probably be 85% 5870 and 15% 7800 GT most of the time.
This article has a couple of good pictures of how it works
http://www.nkstars.net/joomla/index....id=61&Itemid=2
My name is Inigo Montoya, and this is not what "agnostic" means.
Perhaps "vendor independent", or "vendor blind".
There's still a problem with this. If you saw some of the details the press reported on the Hydra chip last year, the object-based rendering was "rebalanced" for every frame, meaning an object could be rendered on one card for one frame and then on the other card for the next frame. The press noted that in the UT3 demo, in real time, the two monitors showing each card's output were "flickering" as each object was rendered at different times by different GPUs.
This would mean that objects would appear to change in apparent AA smoothness very quickly. It may happen so fast that you don't notice it, but it could be a problem or cause visual artifacts of some kind.
It was just a pipe dream anyway. Of course it's not gonna happen. :(