Okay, didn't he say a few posts back he knew how fast it is?
Kyle claims: http://hardforum.com/showpost.php?p=...9&postcount=89
Quote:
You will start to see benchmark leaks and such probably in the next couple of days. I think most of those will be at less than 2560 resolution. And let's face it, you don't need a flagship card for resolution that small. Honestly, like always, we are going to have to put it through [H] large resolution gaming paces before we make a call on where it is compared to 7970. I think the 680 will be faster but by how much, we don't know yet, and I have seen NO MSRPs confirmed.
The tension in this thread is really starting to build...
Show me the card!!!
I going to the great country United States of America next month, second best country after Kazakhstan... and I going to buy brand new PeeCees! Very niiice!
Choosing between a 7870 OC and a 680... please bring it out soon!
http://discussions.texasbowhunter.co...9&d=1252697380
This dynamic clocking rumor is very interesting. I hope it's sophisticated. I could imagine the TDP budget being shifted around on the GPU as needed: highly utilized units get clocked higher while waiting/idle units get clocked lower. This would increase average power consumption but stay within the TDP. I hope it can be disabled for older games, though, and that it doesn't kick in when you're CPU-bound or have vsync enabled and the chip isn't running at full capacity.
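If the rumor pans out, the logic could look roughly like this. A minimal Python sketch, assuming a fixed board budget that gets handed out to the busiest units as extra clock; every name and number here is hypothetical, not anything Nvidia has confirmed:

Code:
# Minimal sketch of a per-unit TDP governor (all names and numbers hypothetical).
BOARD_TDP_W = 195.0              # assumed board power budget
BASE_MHZ, MAX_MHZ = 705.0, 950.0

def redistribute(units):
    """units: {name: (utilization 0..1, estimated draw in watts)} -> {name: clock MHz}."""
    draw = sum(util * watts for util, watts in units.values())
    headroom = max(0.0, BOARD_TDP_W - draw)            # unspent budget to hand out
    total_util = sum(util for util, _ in units.values()) or 1.0
    clocks = {}
    for name, (util, _) in units.items():
        # The busiest units get the biggest share of the spare budget as extra clock.
        share = util / total_util
        boost = share * (headroom / BOARD_TDP_W) * (MAX_MHZ - BASE_MHZ)
        clocks[name] = round(min(MAX_MHZ, BASE_MHZ + boost))
    return clocks

# A shader-heavy frame: shaders busy, ROPs and memory controller mostly waiting.
print(redistribute({"shaders": (0.95, 120.0), "rops": (0.30, 35.0), "mem_ctrl": (0.50, 25.0)}))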
It's been the same thing with benchmarking CPUs with turbo mode: run the bench ten times in a row and you get ten different results. Numbers in reviews are more like close approximations; that's why a score difference of 1-3% between competing products is called a draw, not a win.
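To put numbers on it: with ten runs apiece and turbo wobbling the clocks, a gap inside the noise band shouldn't be called a win. A quick sketch, where the 3% threshold and the scores are made up for illustration:

Code:
import statistics

def verdict(scores_a, scores_b, noise_pct=3.0):
    """Declare a winner only when the gap exceeds the assumed run-to-run noise."""
    mean_a, mean_b = statistics.mean(scores_a), statistics.mean(scores_b)
    gap = abs(mean_a - mean_b) / min(mean_a, mean_b) * 100.0
    if gap <= noise_pct:
        return f"draw ({gap:.1f}% gap is within the {noise_pct:.0f}% noise band)"
    return f"{'A' if mean_a > mean_b else 'B'} wins by {gap:.1f}%"

# Ten runs each; turbo makes every run come out a little different.
card_a = [101, 103, 99, 102, 100, 104, 98, 101, 103, 100]
card_b = [100, 102, 101, 99, 103, 100, 102, 98, 101, 102]
print(verdict(card_a, card_b))   # -> draw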
Relative to the 7970, the advantages of the new card are said to be: low voltage, high frequency (breaking the GHz mark), low power consumption, low noise, high DX11 performance, better AA, and support from commercial game developers.
PHK Expreview -> http://bbs.expreview.com/thread-49806-1-1.html
http://oi43.tinypic.com/muy5co.jpg
Quote:
GTX6xx pics: the card, another PCB shot, and those power connectors.:rolleyes:
Source: http://we.pcinlife.com/thread-1849001-1-1.html
http://oi43.tinypic.com/1072fc4.jpg
http://oi39.tinypic.com/24n44g3.jpg
http://oi40.tinypic.com/11l1w14.jpg
Do you realize what sort of transistor budget that would add?
Nvidia didn't even want to go with something simple for Fermi; they went with a software approach to try to keep the cards under control, and we all know how well that worked out...
Not saying it's impossible, but I would be very surprised to see them jump into that all at once rather than gradually implementing the feature over a generation or two.
I can see them adding hardware TDP control, something like AMD's PowerTune, but anything more seems unlikely at this point, especially something relatively complex.
Several sources speak of decoupled shader and core clocks, which is already different from PowerTune. It seems both clock domains are dynamic (705-950 MHz for the core and up to 1411 MHz for the ALUs).
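If that's right, each domain would ramp inside its own range. A tiny sketch using the rumored numbers; the linear ramp and the ALU floor are my assumptions, since the rumor only gives "up to 1411 MHz":

Code:
# Rumored ranges (MHz). The ALU floor isn't in the rumor; 705 is assumed for symmetry.
CORE_MIN, CORE_MAX = 705, 950
ALU_MIN, ALU_MAX = 705, 1411

def domain_clocks(load):
    """load 0..1 -> (core MHz, ALU MHz), each domain scaled independently."""
    core = CORE_MIN + load * (CORE_MAX - CORE_MIN)
    alu = ALU_MIN + load * (ALU_MAX - ALU_MIN)
    return round(core), round(alu)

for load in (0.0, 0.5, 1.0):
    core, alu = domain_clocks(load)
    print(f"load {load:.0%}: core {core} MHz, ALUs {alu} MHz")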
As long as those connectors don't make the card thicker than a standard 2-slot design, I don't care.
If they do, others and I might have trouble fitting it into an M-ITX system.
Why nonsense? Please elaborate and remember that we're talking about 1536 of those bad boys here, not just 512 or so.
Ask yourself:
1) what kind of clocks can Nvidia expect from TSMC 28 nm?
2) what kind of transistor budget do they have?
3) what was the shader clock in every generation of GPUs that had hot clocks?
And finally:
4) is it therefore possible to have 1536 SPs at 1500+ MHz and still fit within thermal and power constraints? (See the back-of-envelope sketch below.)
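For 4), a back-of-envelope answer: dynamic power scales roughly with f*V^2 per switching transistor, ignoring leakage and whatever 28 nm buys back. The baseline frequency and the voltages below are invented purely to compare the schemes, not leaked specs:

Code:
# Back-of-envelope only: dynamic power ~ frequency * voltage^2 per unit.
# All frequencies/voltages here are invented for comparison, not leaked specs.

def rel_power(freq_mhz, volts, ref_freq=772.0, ref_volts=1.0):
    return (freq_mhz / ref_freq) * (volts / ref_volts) ** 2

hot_512   = 512  * rel_power(1544, 1.00)   # GTX 580-style: few SPs, hot-clocked 2x
wide_1536 = 1536 * rel_power(1006, 0.95)   # rumored style: many SPs at ~1 GHz, lower voltage
dream_1536 = 1536 * rel_power(1500, 1.00)  # 1536 SPs at 1500+ MHz, the claim in question

print(f"512 SPs @ 1544 MHz : {hot_512:.0f} (relative ALU power)")
print(f"1536 SPs @ 1006 MHz: {wide_1536:.0f}")
print(f"1536 SPs @ 1500 MHz: {dream_1536:.0f} (~{dream_1536 / hot_512:.1f}x the hot-clocked design)")

Even granting 28 nm a healthy efficiency gain, roughly triple the Fermi-style ALU power is hard to square with the same thermal envelope, which is the point of the question.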
IMHO, what Kyle is saying is what everyone suspects: GK104 won't stand a chance against Tahiti in 2560x1600 gaming, let alone at multi-monitor resolutions.