On that note, after testing all flagship cards from ATI since the 3xxx series, I would not mind purchasing this card if/when it comes out on the market. Bench it and then resell it.
I wonder if these dual-GPU cards make any sense for people already sitting on a decent setup. I mean, 6 months after they release we should see 28nm products, no? I was wondering if it's finally time to replace the 295, but it seems it will hold out a while longer since I'm not really playing the latest stuff or going to play BF3 (would love to, but I don't think I'll have the time). Prolly only makes sense for someone building something from the ground up.
There aren't a lot of demanding games out there at the moment, so there's not much reason to buy a powerful card unless you're using triple monitors or a 30" monitor. And it seems like there won't be, as long as developers keep pushing multi-platform titles for the current console generation.
"No, you'll warrant no villain's exposition from me."
Thanks guys. I have an SG07, and fitting a 700W/800W PSU in it is nearly impossible, although it MIGHT be interesting! I do have an 850W Corsair, hmm......
You must [not] advance.
Current Rig: i7 4790k @ stock (**** TIM!) , Zotac GTX 1080 WC'd 2214mhz core / 5528mhz Mem, Asus z-97 Deluxe
Heatware
An 850W Corsair should be enough.
I would not be surprised if we see an ASUS MARS-style edition with 3GB per GPU, higher clocks, a custom cooling solution, and a minimum requirement of a 1000W PSU.
However, the normal GTX 590 should run on an 800W PSU (a decent-quality one), and hopefully on a 900W Gold PSU (Enermax).
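A quick sanity check on that PSU sizing, as a rough sketch: it assumes a 375W TDP for the card and an illustrative ~300W for the rest of the system, and keeps total draw under ~80% of the PSU's rating for efficiency headroom. All the non-card wattages are assumptions, not measured numbers.

```python
# Rough PSU headroom check; the 375W card TDP is from the thread,
# the rest-of-system budget and 80% margin are illustrative assumptions.
def psu_headroom(psu_watts, card_tdp, rest_of_system, margin=0.8):
    """Return True if total draw stays under `margin` of the PSU's rating
    (staying below ~80% load keeps the supply in its efficiency sweet spot)."""
    return card_tdp + rest_of_system <= psu_watts * margin

for psu in (800, 850, 1000):
    ok = psu_headroom(psu, card_tdp=375, rest_of_system=300)
    print(f"{psu}W PSU with a 375W card: {'OK' if ok else 'tight'}")
```

On those assumptions an 800W unit comes out marginal, which matches the "decent quality one" caveat above.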
John
Stop looking at the walls, look out the window
Winpower
John
Stop looking at the walls, look out the window
Dual GF110!! That's insane... not impossible, but just wild. It's gonna be one super hot card. It sure would be nice in my system, with an EK water block to match.
Last edited by Lu(ky; 03-06-2011 at 09:41 AM.
CPU: Intel Core i7-4770K 4.8GHz
MOBO: GIGABYTE GA-G1.Sniper M5 MATX 1150
MEMORY: G.SKILL Trident X 8GB 2400MHz 9-11-11-31 1T
GPU: 2 x eVGA GTX 780 SC
SOUND: KRK Rokit 5 Limited Edition White Studio Monitors
SSD: 4 x Samsung 128GB Pro in RAID 0
PSU: SeaSonic Platinum 1000W
COOLING: 2 x Alphacool NexXxoS UT60 Full Copper 420mm 6 x Swiftech Helix 140mm Fans
CASE: Lian Li PC-C32B TECH STATION MOD build log coming soon
MONITOR: ASUS VG278HE Black 27" 149Hz
O.S: Windows 7 Pro x64
nVidia could be preparing something rather exciting for the 590 launch.
These are from the release notes for nVidia's BETA Release 270 CUDA 4.0 Candidate driver for developers and select reviewers. It appears that nVidia is making the CUDA architecture more parallel; I wonder how this will affect the world of SLI and DirectX 11 thread lists?

* Unified Virtual Addressing
* GPUDirect v2.0 support for Peer-to-Peer Communication
* Share GPUs across multiple threads
* Use all GPUs in the system concurrently from a single host thread
Also, nVidia recently posted this as an announcement on their official forums:

"Furmark is an application designed to stress the GPU by maximizing power draw well beyond any real-world application or game. In some cases, this could lead to slowdown of the graphics card due to hitting over-temperature or over-current protection mechanisms. These protection mechanisms are designed to ensure the safe operation of the graphics card. Using Furmark or other applications to disable these protection mechanisms can result in permanent damage to the graphics card and void the manufacturer's warranty."

Does make you wonder if a rather power-hungry multi-GPU card is coming rather soon, doesn't it?
John
Stop looking at the walls, look out the window
As for CUDA, well, it's a development kit essentially for developers and Quadro/Tesla systems. And I don't see what CUDA has to do with DX11 and SLI performance in games (outside PhysX). This update is not directly aimed at game development; it's more than likely essential for professional computing.
For the Furmark thing: this type of protection has been added since the release of the GTX 580 & 570, especially after the GPU-Z tricks.
Last edited by Lanek; 03-06-2011 at 04:57 PM.
CPU: - I7 4930K (EK Supremacy )
GPU: - 2x AMD HD7970 flashed GHZ bios ( EK Acetal Nickel Waterblock H2o)
Motherboard: Asus x79 Deluxe
RAM: G-skill Ares C9 2133mhz 16GB
Main Storage: Samsung 840EVO 500GB / 2x Crucial RealSSD C300 Raid0
Most likely, it's the thermal limit of a card they can design within a certain set of specs (length, cooler size, power-system complexity, etc.)... Not all power-related problems are based purely on power draw.
Also, there is a limit to the amount of current you can push through solder and copper.
Best Regards
Silverstone RAVEN RV02|
Core i5 2500K@4.4GHz, 1,300V|
Corsair A70|ASUS P67 Sabertooth|Creative X-Fi Titanium Fatal1ty|
Corsair Dominator DDR1600 4x4096MB@DDR3-1600@1.65V|Sapphire HD7970 3GB 1075/1475MHz|
Corsair Force F120 120GB SSD SATA-II, WD Caviar Black 2x1TB SATA-II 32MB, Hitachi 320GB SATA-II 16MB|Silverstone DA750 750W PSU|
Very true, but correct me if I am wrong: doesn't nVidia use CUDA for PhysX
(as in, PhysX runs on top of CUDA)?
I have heard that the PhysX 3.0 SDK (currently they are on 2.8.4) will thread across multiple GPUs, so surely this means it requires a CUDA 4-based driver?
Or is this entirely independent and I have got confused over how nVidia implements PhysX relative to CUDA? Either way, surely it is a sign of things to come (even if we are just talking about the folding and transcoding/encoding department).
John
Stop looking at the walls, look out the window
Geforce GTX 590 launches March 22
In about two weeks, Nvidia will introduce its retaliation to the Radeon 6990. The GeForce GTX 590 boasts 1024 CUDA cores, 3 GB of GDDR5 memory, and a soaring 375 W TDP.
I don't think the clock will be in the 6xx range; the card would be too slow (580s are at 772MHz and 570s at 732MHz... imagine the result). I believe they base this info on the GPU-Z screenshots we have seen.
CPU: - I7 4930K (EK Supremacy )
GPU: - 2x AMD HD7970 flashed GHZ bios ( EK Acetal Nickel Waterblock H2o)
Motherboard: Asus x79 Deluxe
RAM: G-skill Ares C9 2133mhz 16GB
Main Storage: Samsung 840EVO 500GB / 2x Crucial RealSSD C300 Raid0
What is going on with all the new beefy cards having only 2x 8-pin power?
2500k @ 4900mhz - Asus Maximus IV Gene Z - Swiftech Apogee LP
GTX 680 @ +170 (1267mhz) / +300 (3305mhz) - EK 680 FC EN/Acteal
Swiftech MCR320 Drive @ 1300rpms - 3x GT 1850s @ 1150rpms
XS Build Log for: My Latest Custom Case
Yes, PhysX is somewhat CUDA-related, although not absolutely mandatory; cf. the original AGEIA accelerator.
And PhysX 3.0 is supposed to thread across multiple CPU cores, not GPUs; is that what you mean? http://physxinfo.com/news/3414/physx...lti-threading/ There's no point in dividing it across multiple GPUs when something like a 9600 is enough to run it, at least in current implementations.
Thank you for clearing that up for me, DarthShader.
Although I am sure I did hear somewhere that PhysX 3.0 would be kinder in multi-GPU situations. At the moment, on the single-PCB GTX 295, all PhysX processing is done on GPU B,
while rendering is done on both GPU A and B. So in games which use PhysX, GPU B works a lot harder than GPU A. If the work could be split across multiple GPUs, then PhysX would have less of an impact.
But hey, nothing wrong with having some SSE and multi-threading love
John
Stop looking at the walls, look out the window
So much speculation. After plugging some numbers into Excel, and assuming the leaked specs are reasonably accurate, even at a 600MHz clock it should be at least 50% faster than a GTX 580 because of the number of cores (the GTX 580's stock clock is 772MHz). It would also be faster than two stock 560 Tis in SLI. In short, this will be a monster card, but it will be slower than an overclocked 6990; the 590 would need a 650MHz+ clock to beat the 6990. If somehow they managed to get the clocks up to 700MHz, the GTX 590 would destroy everything and bring about Armageddon.
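A back-of-envelope version of that Excel estimate, as a sketch: it assumes throughput scales linearly with shader count × clock, which ignores memory bandwidth, scaling inefficiency, and driver overhead, and uses the rumored 1024-shader spec from the thread (not a confirmed figure).

```python
# Naive throughput model: shaders * clock, normalized to a stock GTX 580.
# The 1024-shader dual-GF110 spec is the thread's rumor, not confirmed.
def relative_throughput(shaders, clock_mhz, base_shaders=512, base_clock=772):
    """Return performance relative to a stock GTX 580 (512 shaders @ 772MHz)."""
    return (shaders * clock_mhz) / (base_shaders * base_clock)

for clock in (600, 650, 700):
    ratio = relative_throughput(1024, clock)
    print(f"GTX 590 @ {clock}MHz ≈ {ratio:.2f}x a stock GTX 580")
```

At 600MHz the naive ratio works out to about 1.55x, which is where the "at least 50% faster" figure above comes from.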
For the GTX 590 to be faster than a 6990, it would have to be faster than 570 SLI. Now, can Nvidia pull off a 570 SLI in one card? They can of course use full GF110 chips, but they need to drop the voltage and clocks to a 375W level, and I don't see how they could pull that off considering a single GTX 570 has a TDP of 220 W.
Personally, I think performance-wise Nvidia will admit defeat, but the dual-GPU card could still be a good offer if priced accordingly. It would be a good option for Nvidia Surround. Also, the reference cooler might actually be something usable.
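A rough sketch of that power-budget argument, using the classic CMOS approximation that dynamic power scales roughly as frequency × voltage². The 220W GTX 570 TDP is from the post above; the specific clock and voltage reductions are illustrative assumptions, not leaked figures.

```python
# Classic CMOS approximation: P ∝ f * V^2.
# 220W per GTX 570 is from the thread; the scaling ratios are illustrative.
def scaled_power(base_power_w, f_ratio, v_ratio):
    """Estimate power after scaling clock by f_ratio and voltage by v_ratio."""
    return base_power_w * f_ratio * v_ratio ** 2

two_stock_570s = 2 * 220  # naive dual-GPU draw: 440W, over the 375W spec limit
# e.g. binned chips run ~7% slower and ~5% lower voltage per GPU:
per_gpu = scaled_power(220, f_ratio=0.93, v_ratio=0.95)
print(f"Two stock 570s: {two_stock_570s}W; downclocked pair: {2 * per_gpu:.0f}W")
```

On those assumed ratios the pair lands just under 375W, which is why modest clock/voltage cuts plus binning make the card plausible at all.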
"No, you'll warrant no villain's exposition from me."