I'm trying to get a sample.
man this won't work mixing cards as.....
nvidifail have blocked Nvidia cards from working if the driver "sees" an ATI card, so only if the Lucid chip can get around that will it do that....
but imo the interesting thing will be the scaling of X-Fire and SLI now... as you will not be able to mix cards (still pissed at Nvidia about that..)
we shall see as we still don't know what version of the chip it's using
Heh, I wouldn't be so quick to celebrate. Wait until you see if it actually works first ;)
Jon Peddie Research did a little paper on multi-GPU/Lucid and they seem to think it's the second coming but even they are just going on Lucid's promises and not anything concrete.
http://www.jonpeddie.com/special/Whi...ortunities.pdf
Yeah, I need to actually see it fully working with a few different graphics card configurations, along with some test results and so forth.
First off guys, Hydra won't run Crossfire or SLI. It will (should) run two or more graphics cards at the same time for higher performance.
SLI and Crossfire work mostly by letting each GPU work on a single frame (alternate frame rendering). It can also be done by drawing half the image on one GPU and the other half on the other GPU. The problem with this is that you need two GPUs with exactly the same performance, and the first GPU needs to wait for the other GPU to finish before they start on the next frame.
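That lock-step problem can be shown with a toy model (the speeds are made-up relative numbers, not real card benchmarks): each GPU gets a fixed half of the frame, and the frame is only done when the slowest card finishes.

```python
# Toy model of split-frame rendering (SFR): every GPU draws an equal,
# fixed share of the frame, then all GPUs must sync before the next frame.

def sfr_frame_time(gpu_speeds, work=1.0):
    """Each GPU gets an equal share; the frame finishes only when the
    slowest GPU does, so mismatched cards waste the fast one's time."""
    share = work / len(gpu_speeds)
    return max(share / speed for speed in gpu_speeds)

# Two identical GPUs: each renders half, frame time halves.
print(sfr_frame_time([1.0, 1.0]))  # 0.5
# Fast GPU paired with one half as fast: the fast card sits idle
# while the slow one finishes its half -- no speedup at all.
print(sfr_frame_time([1.0, 0.5]))  # 1.0
```

That's the whole reason SLI/Crossfire demand matched cards: the split is blind to how fast each GPU actually is.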
Hydra uses some kind of advanced algorithm to analyze each frame and divide the work across all the GPUs in the system, based on how fast each GPU can put out images. Which means that if you pair a GTX 280 with a 9500 GT, the 280 will do most of the work in the frame (like light, water and other shader-intensive parts of the image), while the slower GPU renders the less intensive parts. Both partial images are then joined together to form a complete frame.

With two identical GPUs this would mean the work is divided equally, and scaling will (in theory) be 100%. It also means there are no obstacles preventing you from using an ATI and an Nvidia card in the same system. However, there will be driver issues, since both GPUs need to use their separate drivers and settings.

Another good thing is that if you have two identical cards and you overclock one of them, you will see an overall performance increase without overclocking the other card. The overclocked card simply takes on more of the work, and the frame is completed faster.
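If that description is right, the balancing idea boils down to splitting the frame's work in proportion to each GPU's speed, so every card finishes at the same moment. A quick sketch with hypothetical relative throughputs (not real card numbers):

```python
# Toy model of speed-proportional load balancing: each GPU gets a slice of
# the frame proportional to its throughput, so no GPU sits idle waiting.

def balanced_frame_time(gpu_speeds, work=1.0):
    """Slice the work proportionally to speed; all GPUs then take the
    same time, which is the best a frame-splitting scheme can do."""
    total_speed = sum(gpu_speeds)
    shares = [work * s / total_speed for s in gpu_speeds]
    return max(share / speed for share, speed in zip(shares, gpu_speeds))

# Identical GPUs: 100% theoretical scaling (half the single-GPU frame time).
print(balanced_frame_time([1.0, 1.0]))   # 0.5
# A fast card plus a much slower one still helps instead of hurting:
print(balanced_frame_time([1.0, 0.25]))  # 0.8 (vs 1.0 on the fast card alone)
```

It also captures the overclocking point: bump one card's speed and the math hands it a bigger slice, so the whole frame gets faster.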
I just can't wait to see how it performs in reality :)
pack of rubbish
4870+4650 rendering the same game lol
imagine the 3DMark01 fun :D ... Dragothic with ATI and Nature with nVidia :p
so how does this work with Nvidia's newer Win7 drivers that disable mixed cards??
If someone already has issues with drivers, they'll have twice as many using this chip... I'd avoid mixing ATI and Nvidia cards unless you absolutely have to.
I still find it funny that people believe Lucid can just stick a chip on a motherboard and make Nvidia's and ATI's chips work better than the designers can :rolleyes: Can't wait for the first reviews of this thing. I'll be just as impressed as anybody if it works, but right now it sounds like a pipe dream.
I thought that ELSA was already selling boxes for render farms that used the Hydra 100, or did those get delayed/cancelled?
I asked MSI, and this is only a prototype... not for sale in the first and second wave of P55 boards... but everything can change.
Because from a marketing point of view that's a complete failure. And you fail to realize that ATI and Nvidia exist for the sole purpose of making money. What good could come from ATI and Nvidia investing money and research time to get X-Fire/SLI working with mixed-and-matched configurations?
Option A: you can mix ATI with nvidia, so now you go and purchase the competitor's product. FAIL for both companies.
Option B: you can increase the performance of your current 4890/GTX 285, by throwing in your old 3870/8800 GT, rather than being forced to buy ANOTHER 4890/GTX 285. Once again, FAIL.
That's why you need a third party to come in and develop such a thing. And it seems they have found the best way (distributing DirectX/OpenGL calls rather than distributing frames) to allow cards of different performance levels to scale almost perfectly. Obviously the same technique would work with cards of the same performance level, and should show linear scaling.
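"Distribute the calls, not the frames" can be sketched as a simple greedy scheduler: hand each draw call to whichever GPU would finish it soonest. The call costs and GPU speeds below are invented for illustration; Lucid's real heuristics aren't public.

```python
# Greedy per-draw-call scheduler: assign each call to the GPU that would
# finish it first, given the call's relative cost and the GPU's speed.

def distribute_calls(call_costs, gpu_speeds):
    """Returns (which calls each GPU got, time until the last GPU is done)."""
    finish = [0.0] * len(gpu_speeds)        # time at which each GPU is free
    assignment = [[] for _ in gpu_speeds]
    for i, cost in enumerate(call_costs):
        # min() breaks ties toward the first (fast) GPU here
        g = min(range(len(gpu_speeds)),
                key=lambda g: finish[g] + cost / gpu_speeds[g])
        finish[g] += cost / gpu_speeds[g]
        assignment[g].append(i)
    return assignment, max(finish)

calls = [5, 1, 3, 2, 4, 1]                  # relative cost of each draw call
plan, frame_time = distribute_calls(calls, [1.0, 0.5])  # fast GPU + slow GPU
print(plan, frame_time)  # [[0, 2, 4], [1, 3, 5]] 12.0
```

Running all 16 units of work on the fast card alone would take 16.0, so even the half-speed helper buys a real speedup, which is the whole pitch.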
Whether it works as advertised or not is the question. Will we see terrible micro or macro stutter, will every game engine be supported, and so on.
i would like to see >4 GPUs with this, or would that be too much to handle?
And the wait is finally over.... (my sig for the last 8 months)
87% scaling?
http://i27.tinypic.com/29bdpus.gif
This is nothing more than a PCIe bridge for now, and the mobo may not see the light of day :doh:
And how do you want to tell the GPUs what to render? That would need massive support from ATI/NV. Another possibility would be a layer of software between DX/OGL and the graphics driver that works out what counts as a high workload and what doesn't, splits the work up, distributes it, rejoins the partial frames and then hands the result back to the DX/OGL renderer... that would need quite a bit of calculation power...
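That interception-layer idea, reduced to a sketch: a shim sits between the game's API calls and two real drivers, tags each call as heavy or light, routes it, and "rejoins" the partial results at present time. Every name and the cost threshold here are invented; nothing below reflects Lucid's actual driver internals.

```python
# Hypothetical interception shim between the game and two GPU drivers.
HEAVY_THRESHOLD = 3   # invented cost cutoff for "shader-intensive" calls

class HydraLikeShim:
    def __init__(self):
        self.fast_gpu, self.slow_gpu = [], []   # stand-ins for two drivers

    def draw(self, name, est_cost):
        # shader-intensive work (water, shadows...) goes to the fast card,
        # cheap calls (HUD, UI...) go to the slow one
        target = self.fast_gpu if est_cost >= HEAVY_THRESHOLD else self.slow_gpu
        target.append(name)

    def present(self):
        # rejoin both partial images into one frame (here: just merge lists)
        frame = self.fast_gpu + self.slow_gpu
        self.fast_gpu.clear()
        self.slow_gpu.clear()
        return frame

shim = HydraLikeShim()
for call in [("water", 5), ("hud", 1), ("shadows", 4), ("ui", 1)]:
    shim.draw(*call)
print(shim.present())  # ['water', 'shadows', 'hud', 'ui']
```

The classify/split/rejoin steps are exactly where the extra "calculation power" the post mentions would go, on top of keeping two vendors' drivers in sync.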
Right now that's nothing more than a pipe dream... as already mentioned by some. :p: