On that note, after testing all flagship cards from ATI since the 3xxx series, I would not mind purchasing this card if/when it comes out on the market. Bench it and then resell it :D
I wonder if these dual-GPU cards make any sense for people already sitting on a decent setup. I mean, 6 months after they release we should see 28nm products, no? I was wondering if it's finally time to replace the 295, but it seems it will hold out a while longer since I'm not really playing the latest stuff or going to play BF3 (would love to but don't think I'll have the time). Probably only makes sense for someone building something from the ground up.
There aren't a lot of demanding games out there atm, so there's not much reason to buy a powerful card unless you're using triple monitors or a 30" monitor. And it seems like there won't be as long as developers keep pushing multi-platform titles for the current console generation.
Thanks guys. I have an SG07 and fitting a 700W/800W in it is nearly impossible, although it MIGHT be interesting! I do have an 850W Corsair, hmm......
An 850W Corsair should be enough.
I would not be surprised if we see an ASUS MARS-style edition with 3GB per GPU, higher clocks, a custom cooling solution, and a minimum 1000W PSU requirement.
However, the normal GTX 590 should run on an 800W PSU (a decent quality one). Hopefully it will even run on a 900W Gold PSU (Enermax) ;)
John
Winpower :p:
John
:shocked: Dual GF110!! That's insane... not impossible, but just wild. It's going to be one super hot card. It sure would be nice in my system with a matching EK water block. :rofl:
nVidia could be preparing something rather exciting for the 590 launch.
These are from the release notes for nVidia's BETA Release 270 CUDA 4.0 Candidate driver for developers and select reviewers. It appears that nVidia is making the CUDA architecture more parallel; I wonder how this will affect the world of SLI and DirectX 11 thread lists?
Quote:
* Unified Virtual Addressing
* GPUDirect v2.0 support for Peer-to-Peer Communication
* Share GPUs across multiple threads
* Use all GPUs in the system concurrently from a single host thread
Also nVidia recently posted this as an announcement on their official forums.
Does make you wonder if a rather power-hungry multi-GPU card is coming rather soon, doesn't it?
Quote:
Furmark is an application designed to stress the GPU by maximizing power draw well beyond any real world application or game. In some cases, this could lead to slowdown of the graphics card due to hitting over-temperature or over-current protection mechanisms. These protection mechanisms are designed to ensure the safe operation of the graphics card. Using Furmark or other applications to disable these protection mechanisms can result in permanent damage to the graphics card and void the manufacturer's warranty.
John
As for CUDA, well, it's a development kit, essentially for developers and Quadro/Tesla systems. And I don't see what CUDA has to do with DX11 and SLI performance in games (outside PhysX). This update is not directly aimed at "game development"; it's more likely essential for professional computing.
As for the Furmark thing: this type of protection has been included since the release of the GTX 580 & 570, especially after the "GPU-Z" tricks.
Most likely, it's the thermal limit of a card they can design within a certain set of specs (length, cooler-size, power-system complexity, etc.)... Not all power-related problems are based purely on power-draw :)
Also, there is a limit to the amount of current you can push through soldering-tin and copper :p:
Best Regards :toast:
Very true, but correct me if I am wrong: doesn't nVidia use CUDA for PhysX?
(as in PhysX runs over CUDA)?
I have heard that the PhysX 3.0 SDK (currently they are on 2.8.4) will thread across multiple GPUs, so surely this means it requires a CUDA 4 based driver? :confused:
Or is this entirely independent and have I got confused over how nVidia implements PhysX on top of CUDA? Either way, surely it is a sign of things to come? :shrug: (even if we are just talking about the folding and transcoding/encoding departments).
John
Geforce GTX 590 launched March 22
http://img849.imageshack.us/img849/6...0432064222.png
Quote:
In about two weeks Nvidia will introduce its retaliation to the Radeon 6990. The Geforce GTX 590 boasts 1024 CUDA cores, 3 GB of GDDR5 memory and a soaring 375 W TDP.
I don't think the clock will be in the 6xx MHz range... the card would be too slow (580s are at 772 MHz and 570s at 732 MHz... imagine the result). I believe they base this info on the GPU-Z screenshots we have seen.
What is going on with all the new beefy cards having only 2x 8-pin power?
Yes, PhysX is somewhat CUDA related, although not absolutely mandatory; see the original AGEIA accelerator. :)
And PhysX 3.0 is supposed to thread across multiple CPU cores, not GPUs — is that what you mean? http://physxinfo.com/news/3414/physx...lti-threading/ There's no point in dividing it across multiple GPUs when something like a 9600 is enough to run it, at least in current implementations.
Thank you for clearing that up for me DarthShader :up:
Although I am sure I heard somewhere that PhysX 3.0 would be kinder in multi-GPU situations. At the moment, on the single-PCB GTX 295, all PhysX processing is done on GPU B :(
Rendering is done on both GPUs A+B, so in games which use PhysX, GPU B works a lot harder than GPU A. If the work could be split across multiple GPUs, then PhysX would have less of an impact.
But hey, nothing wrong with having some SSE and multi-threading love :p:
John
So much speculation. After plugging some numbers into Excel, and assuming the leaked specs are reasonably accurate, even at a 600 MHz clock it will be at least 50% faster than a GTX 580 because of the number of cores. The GTX 580 stock clock is 772 MHz. It will also be faster than two stock 560 Tis in SLI. In short, this will be a monster card, but it will be slower than the overclocked 6990. The 590 would need a 650+ MHz clock to beat the 6990. If somehow they managed to get the clocks up to 700 MHz, the GTX 590 would destroy everything and bring about Armageddon.
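The "50% faster" claim above is easy to sanity-check. This is just a back-of-envelope sketch assuming throughput scales linearly with cores × clock (which ignores memory bandwidth, driver overhead, and SLI scaling), using the rumored 1024-core spec and a hypothetical 600 MHz clock:

```python
# Rough throughput estimate: assume performance ~ shader cores x core clock.
# The GTX 590 figures are leaked/speculative, not official specs.
gtx580_cores, gtx580_clock = 512, 772    # GTX 580, clock in MHz
gtx590_cores, gtx590_clock = 1024, 600   # rumored dual GF110, speculative clock

ratio = (gtx590_cores * gtx590_clock) / (gtx580_cores * gtx580_clock)
print(f"Estimated GTX 590 vs GTX 580: {ratio:.2f}x")  # ~1.55x, i.e. ~55% faster
```

So even at a pessimistic 600 MHz, the naive math gives roughly a 55% lead over a single GTX 580, matching the post's "at least 50%" estimate.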
For the GTX 590 to be faster than the 6990, it would have to be faster than 570 SLI. Now, can Nvidia pull off 570 SLI in one card? They can of course use full GF110 chips, but they need to drop the voltage and clocks to a 375W level. And I don't see how they could pull it off, considering a single GTX 570 has a TDP of 220 W.
Personally I think performance-wise Nvidia will admit defeat, but the dual-GPU card could still be a good offer if priced accordingly. It would be a good option for Nvidia Surround. Also, the reference cooler might actually be something usable.
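The 375 W concern above can be put into rough numbers. A simple first-order sketch, assuming power scales linearly with clock at fixed voltage (in reality power goes roughly as V²·f, so dropping voltage too would allow higher clocks than this estimate):

```python
# Back-of-envelope: squeezing two GTX 570s into a 375 W board limit.
# Assumes power ~ clock at fixed voltage (a deliberate simplification).
gtx570_tdp = 220      # W, single GTX 570
gtx570_clock = 732    # MHz, stock
board_limit = 375     # W (2x 8-pin connectors + PCIe slot)

scale = board_limit / (2 * gtx570_tdp)   # fraction of dual-570 power budget
est_clock = gtx570_clock * scale
print(f"Power scale {scale:.2f} -> roughly {est_clock:.0f} MHz per GPU")
```

That lands around 624 MHz per GPU before any voltage reduction, which is why the speculated 6xx MHz clocks keep coming up in this thread.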