Wait, don't the PCI-E connectors have a power limit of 300W? That says 350W.
No, each 8-pin connector is rated at ~150 watts.
EDIT: I guess I should say 150+150+75. Technically the standard for two 8-pin PCI-E connectors is around 300 watts, but the PCI-E slot supplies an additional 75 watts. Keep in mind the rating is just a standard, not an absolute; most of us exceed the various standards with overclocking.
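To spell out that budget math, here's a minimal Python sketch (the wattages are the spec ratings above; the dictionary and function names are made up purely for illustration):

Code:
# Spec power ratings per input (guidelines, not hard limits):
# slot = 75 W, 6-pin = 75 W, 8-pin = 150 W.
CONNECTOR_WATTS = {"slot": 75, "6-pin": 75, "8-pin": 150}

def board_power_budget(connectors):
    """Sum the spec-rated wattage of a card's power inputs."""
    return sum(CONNECTOR_WATTS[c] for c in connectors)

# Two 8-pin connectors plus the slot, as discussed above:
print(board_power_budget(["slot", "8-pin", "8-pin"]))  # -> 375

So a 350W board power figure still fits inside the 375W that two 8-pins plus the slot are rated for.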
http://tof.canardpc.com/view/ad50609...6a8e7087f1.jpg
Well, the author knows nothing; it's pure speculation and guesswork...
Maybe when Nvidia says this architecture will have better performance per watt, it doesn't mean consuming less power, but making better use of the power the chip needs to work: power draw on average equal to Fermi's, but with better performance.
*Sorry for my ugly English... I'm practicing the language.
That's basically what happened going from 40nm to 28nm. This has always been part of every launch: just look at all the GPU release slides from the last six years and you'll find an "improved performance per watt" slide somewhere (AMD or Nvidia). That's because with every new generation, AMD, Nvidia, and Intel (same for the ARM chip makers) try to improve performance per watt, and they normally do (Fermi was a real accident in that regard).
Improving performance per watt is part of the job for each new generation (CPU, GPU, monitor).
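For what the metric on those slides actually is, here's a trivial Python illustration (the numbers are invented, not benchmark results):

Code:
def perf_per_watt(score, watts):
    """The slide metric: benchmark score per watt of board power."""
    return score / watts

# Purely hypothetical numbers for two generations at the same TDP:
old_gen = perf_per_watt(score=100, watts=250)  # 0.4
new_gen = perf_per_watt(score=150, watts=250)  # 0.6
print(new_gen / old_gen)  # 1.5x better performance per watt

Note the improvement can come from a higher score, lower power, or both, which is the point being made above about Kepler vs. Fermi.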
Do PCI-E 3.0 slots have the ability to provide more power to the card from the board itself vs. PCI-E 2.0?
I thought they were limited by the board's actual physical properties (i.e., too many watts would simply fry the board), but I could well be wrong.
I've never heard of anything like that related to AIBs, especially in cases where the power is supplied directly from the power supply via the PCI-E power connectors. I could possibly see pushing things a little far with an OC'd video card that's powered strictly from the PCI-E slot through the motherboard.
At any rate, you'd be reading about a lot of meltdowns if that were the case, especially on this forum, where OC'ing and voltage tweaking push power draw much higher than normal, along with the use of power supplies with enough current in reserve to weld with.
^^^ Quote:
Originally Posted by Kyle Bennett
Disappointing, if true.
Just saw that myself... it will be interesting to see how it does with launch drivers, especially. I hope the pricing isn't stratospheric, however. Who knows how accurate this is, though...
With pre-launch drivers that they're still iterating on quickly, it doesn't sound disappointing to me, assuming pricing is decent. Given their smaller die size, I would think they could compete on both price and performance here.
and later on he said he'll keep his 7970 XFire setup ...
Quote:
Originally Posted by Kyle
If I remember correctly, for PCI-E 3.0 to have provided more power, they would have had to increase the traces on the mobo, which would have really added to the complexity and cost. From what I remember, PCI-E 3.0 provides the same amount of power as PCI-E 2.0.
So? What he said basically made the cards sound like they're going to trade blows. His system is still state of the art, so if the GK104 and 7970 really offer similar performance, he has no reason to move. Besides, the big chip isn't here yet. Let's see if he says the same when he gets his hands on it. A few years ago he didn't switch his 5870 CrossFire setup to GTX 480 SLI when it first came out, anyway.
That matches my statement of about 1.5x over the GTX 580.
We all know shaders can end up idle. By allowing the shaders that are being used to draw on the resources of the idle ones, you fix that problem, greatly improving efficiency per shader.
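As a toy model of why that matters (invented numbers and names; this is not how any real scheduler is documented):

Code:
def effective_throughput(units, utilization, share_idle_resources):
    """Toy model: useful work per cycle across a shader array."""
    if share_idle_resources:
        # Busy units borrow the idle units' resources,
        # so the whole array stays productive.
        return float(units)
    # Without sharing, idle units contribute nothing.
    return units * utilization

# Hypothetical array of 512 shaders at 70% utilization:
print(effective_throughput(512, 0.70, False))  # ~358 units' worth of work
print(effective_throughput(512, 0.70, True))   # 512 units' worth of work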
Anyway, of course Nvidia are contemplating calling this the 680... If you could beat the opposing company's high end with what would've been a midrange chip that's smaller, and you know your opponent will take a while to counterattack, wouldn't you do the exact same thing? I'm thinking the 6xx series will be short-lived, and they'll drop the 7xx series (headed by the GK110) when AMD refreshes this fall, where we'll see the 680 slightly tweaked with higher clocks as the GTX 760 Ti.
There is absolutely NO reason to upgrade if you have XF'ed 7970s. Name a game that XF'ed 7970s can't max out at perfect frame rates! The only reason there would be is if a huge title comes out where PhysX makes a huge difference.
Not too disappointing. They said it beats the 7970 in games and game-related benchmarks. So it's better than AMD's best where it counts. Definitely not disappointing to me.
Of course, with this beating the 7970, that means the GK110 may be the next G80.