Well, according to TPU's power consumption numbers for a single 570, it's very realistic for a dual 570 to operate at a 375 W level.
http://www.techpowerup.com/reviews/H...D_6970/27.html
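As a rough back-of-the-envelope check on that claim (the single-card wattage below is an assumed placeholder, not TPU's exact measurement; see the linked review for the real number):

Code:
# Can two GTX 570 chips fit under a 375 W board limit?
# The single-card figure is an assumed placeholder for illustration only.

single_570_gaming_w = 200    # assumed average gaming draw of one GTX 570 (watts)
board_limit_w = 375          # 2x 8-pin connectors + PCIe slot

naive_dual_w = 2 * single_570_gaming_w
gap_w = naive_dual_w - board_limit_w
print(f"Two stock 570s: ~{naive_dual_w} W, "
      f"{gap_w:+} W relative to the {board_limit_w} W limit")
# A small positive gap like this is plausible to close with binned chips,
# a lower core clock and a slight undervolt, which is the argument above.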
2x GF110 with 512 SPs @ 650-670 MHz may be able to beat the HD 6990 at default clocks (830 MHz).
700 MHz seems high but not impossible with handpicked chips and aggressive power control. C'mon NVIDIA, surprise us... :D
I hope ATI and NVIDIA don't get caught up in a perf race... both cards will be monstrous regardless of clocks. Who cares if A is 10% faster than B if both are MORE than fast enough? I just hope they don't release super hot and noisy cards which make SLI and CrossFire look like a better alternative, lol.
Looking back at the past, my guess is ATI will be faster but hotter and noisier.
I don't think you will see PhysX across multiple GPUs being used in games, as it already induces too much latency to be used in real time with just a single GPU. That's why all PhysX effects on the "GPU" are only eye-candy effects and not interactive. Interactive physics stays on the CPU.
You're being optimistic (this is not a troll): 2x 580 @ 650 MHz will never come close to them.
The 580 runs at 772 MHz, and 570 SLI has a 732 MHz base speed.
And we all know how big an impact the core/shader clock has on NVIDIA's ALU throughput.
I don't want to say whether AMD or NVIDIA will be faster (I'm waiting for the tests, and for all I care, lol), just that 650-670 MHz doesn't look like enough compared to a 6950 CFX with 30 MHz more and the full 6970 shader count, or an 880 MHz version with full 6970 CFX core speed and shaders.
This isn't meant to start a "fight over who will win or lose" (for all I care), just a comment on your numbers.
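To illustrate that clock argument with a crude proportional model (performance taken as shaders x core clock, which ignores memory bandwidth, SLI scaling and everything else, so treat it only as a sketch of why 650-670 MHz looks low):

Code:
# Crude relative-throughput model for the NVIDIA configs being discussed.
# Only meant to show how much the lower clock costs, nothing more.

def rel_throughput(shaders, clock_mhz):
    return shaders * clock_mhz

configs = {
    "2x GF110, 512 SP @ 660 MHz (rumoured GTX 590)": 2 * rel_throughput(512, 660),
    "2x GTX 580, 512 SP @ 772 MHz":                  2 * rel_throughput(512, 772),
    "2x GTX 570, 480 SP @ 732 MHz":                  2 * rel_throughput(480, 732),
}

baseline = configs["2x GTX 580, 512 SP @ 772 MHz"]
for name, score in configs.items():
    print(f"{name}: {score / baseline:.2f}x of 580 SLI")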
I don't think so... underclock 2x GTX 580 and you will understand the problem. But yes, you are right, let's wait for the reviews.
^^ 3 DVIs? No HDMI? That's kinda odd, because I would have expected it to have HDMI, but whatever, I prefer DVI.
I think NVIDIA is in a lot of trouble if they really want that performance crown.
They will need two downclocked GTX 570s to compete within the same power consumption as the 6990:
http://www.webpagescreenshot.info/im...01171656pm.png
BUT who knows what lies or tricks NVIDIA is ready to use ;)
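For a sense of how much downclocking buys back in power (a minimal sketch, assuming dynamic power scales roughly with frequency times voltage squared; the wattage and voltage figures are placeholders, not measurements):

Code:
# Rough dynamic-power scaling: P ~ f * V^2.
# Starting figures are assumed placeholders for a GTX 570 in games.

stock_power_w = 200      # assumed per-card gaming draw at stock
stock_clock = 732        # MHz
stock_voltage = 1.00     # normalised

target_clock = 650       # MHz, a possible dual-card clock
target_voltage = 0.95    # normalised, slight undervolt on binned chips

scaled_power = stock_power_w * (target_clock / stock_clock) \
               * (target_voltage / stock_voltage) ** 2
dual_card = 2 * scaled_power
print(f"Per chip: ~{scaled_power:.0f} W, dual card: ~{dual_card:.0f} W "
      f"vs a 375 W board limit")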
I think that will be to their (our) benefit.
If you have to pay for two downclocked chips, hopefully the price reflects their current performance (so like $600 instead of $800+). Then all we have to do is watercool and overclock, get 30% more performance out of it and catch up to overclocked 580 SLI. Most people paying for such cards either don't care about a higher price or know how to overclock.
Remember the 5970: they advertised non-stop the ability to OC it past spec. That's how it's going to be in the future if vendors keep trying to squeeze as much as possible into 300 W, or build the card for more and just ship a 300 W profile. The performance crown back in the day was simply "who's the strongest", but now it seems to be who has the more efficient design at exactly 300 W. They are trying to be smarter about packing in more performance while maintaining PCIe compliance, and reviewers need to be aware of that too, so they can give a better idea of real-world efficiency and performance instead of judging power consumption by a single benchmark that is nowhere near real-life use.
IF the GTX 590 is within 10% of the Radeon 6990, costs £100 less and is a lot quieter and a lot cooler, then nVidia will win this round... in my opinion.
However, I can see the GTX 590 being more expensive, hotter and potentially louder too :(
= A DRAW!
John
lol... what the......
That is Sweclockers.com's test, and those numbers are wattage during a normal Vantage run, which represents the real power draw better than Furmark. Sweclockers 6990 review
How many runs were done?
As many know, Vantage peaks in several different areas, many of which are less than a second long and may not be picked up by a standard power meter.
In addition, CPU usage is a HUGE factor and can increase/decrease the numbers accordingly, and in a non-linear fashion.
Looking at that chart, it seems like the readings for some cards are VERY high while others are low. It could be that the meter is picking up the moments where CPU + GPU peaks converge in some cases and registering moments of non-convergence in others.
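To illustrate why a slow-sampling meter can miss those sub-second peaks (a toy simulation with entirely made-up numbers, only to show the sampling effect):

Code:
# Toy example: a power meter sampling once per second vs a load with
# short sub-second spikes. All numbers are invented.

# Power draw sampled at 10 Hz over 10 seconds: a ~350 W baseline with two
# 0.2-second spikes to 500 W that fall between whole-second marks.
baseline = [350] * 100
for spike_start in (23, 67):             # tenths of a second, off the 1 Hz grid
    for i in range(spike_start, spike_start + 2):
        baseline[i] = 500

true_peak = max(baseline)
true_avg = sum(baseline) / len(baseline)

meter_readings = baseline[::10]          # a 1 Hz meter sees only every 10th sample
meter_peak = max(meter_readings)

print(f"True peak:    {true_peak} W, true average: {true_avg:.0f} W")
print(f"Meter 'peak': {meter_peak} W  <- the short spikes never show up")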
Yeah, I agree, that chart looks really fishy.....
Kristers Bensin, can you please link us that REVIEW?????
http://images.hardwarecanucks.com/im.../HD6990-84.jpg
It's already linked, and here is the Furmark part:
http://www.webpagescreenshot.info/im...01193550pm.png
As you can see, Furmark doesn't show a real-world perspective, either because the card gets downclocked by AMD PowerTune or because they just report the "peak" wattage consumption.
Here is also a review from NordicHardware which shows a 472 W draw for the whole system; these results of course differ depending on the equipment used and the particular GPU and CPU samples. Click here.
http://www.webpagescreenshot.info/im...01194704pm.png
The NordicHardware chart doesn't include other comparative solutions.
Whatever, as I was saying, NVIDIA will have a hard time battling the 6990 within the same power consumption, especially when you look at how close a single GTX 580 is to the 6990. Even 570 SLI is above the 6990 in terms of power consumption.
It will be interesting to see NVIDIA's binned 580 cores competing against AMD's binned 6970 cores.