No TDP can keep an Nvidia boy away from his GeForce, as proven by the GTX 480. Maybe AMD will try the same this time around.
Yeah, who cares about all this TDP nonsense? My car puts out more heat per square inch than my whole computer at full load. I'm really not concerned... only about performance, and everything else my watercooling setup can sort out.
That's why you have a 2900XT in your computer, because you only care about performance? :))
Haha, same goes for 5970 owners. So many people put way too much stock in power handling... does it really make that big of a difference? As long as you have a real PSU you shouldn't have any issues. Yeah, the GTX 480 uses tons and tons of power, but it also has tons and tons of performance, especially when OCed. I have NO issue with a really power-hungry card as long as it's got the performance to match. I'm not talking about performance per watt, because you always have to pay more for the most performance. Take the GTX 480 for example: it would be crazy if it was slower than a 5870 and used more power, but it is faster than a 5870, so the power draw is justified.
I have no issue if Cayman or Antilles uses just as much power as my 480, or more; as long as they are faster I would totally look at upgrading.
Nobody buys a top-end card for its power consumption, so I think we need to stop worrying about it.
What I do hope ATI fixes is their stock cooling and stock fans. I have had many issues with blower fans on ATI cards blowing out bearings after a year of not-so-heavy usage; I have seen it on my 2900, a pair of 4870s and my 5870, and it really makes me hesitant to run the fans up for a long period of time. It would be nice to see a more innovative stock cooler on Cayman: maybe some external heat pipes like the GTX 480? Or a heatsink/shroud combo? A larger, lower-RPM fan? Maybe Batmobile v2? OK, maybe not the last one :p:
Well, to each his own I guess. I did notice, however, that when I switched from the 5870 to the GTX 480 my temps stayed close to the same because the cooler on the GTX 480 is much better. It does get loud, but I game with Bose noise-cancelling headphones so I never hear it. It does pump more heat into the room, but considering it's starting to get cold up here in Canada, that's not really a bad thing...
Cayman will have better performance per watt than GF100; looking at the 6870, they have that down pretty well. But people who think like me are hoping it has close to the same power consumption, meaning it will have monster performance haha
If I need noise-cancelling headphones, the cooling solution is a fail, lol
I can agree with that for certain things, but if it's at all audible at idle, then I wouldn't use it at all.
R300 might have been one of the all-time great @ss-kickings ever delivered, so that's quite a bit of hype... but even being close to that would be amazing.
If this chip is officially the biggest chip ATi have ever released, how does anyone expect a dual-gpu card out of this thing?
I'm not saying it won't be possible, but from a physical side of things how can they pull that off?
Not just from a power standpoint, but a pricing standpoint as well. If this is the biggest core ATi have ever made, that's going to return them to the ultra-high-end price bracket, I would assume, correct? It's pretty clear this thing will probably not take the old Cypress price points, as I doubt AMD wants to make less money per card sold on this than they did with the 5xxx series, since it was CLEARLY more work to create. How do you price a board that's essentially two of those chips?
Again, I'm not hating on it, I'm just curious as to how they'll pull this one off.
I think It's off by a bit. :p:
Maximum Board Power:
As far as I've understood, and as the name suggests, this is the amount of power the board has to be able to handle, a.k.a. whole-card power consumption, including the inefficiency of the VRMs etc.: the TOTAL power draw of the product under its intended workloads. Note that this is mainly for the AIBs, so that they know what the board should be able to handle, and thus it does NOT mean the maximum theoretical power draw of the product.
Thermal Design Power:
This value is intended for the cooler designers: the amount of power the cooling solution needs to be able to dissipate without the cooled chip/circuitry exceeding its maximum junction temperature (the maximum actual temperature INSIDE the die). Here comes the tricky part: if the cooling solution is supposed to cool only the GPU die (no VRM, no VRAM cooling), the TDP number "should" indicate how much power the chip manufacturer believes the chip in question will draw under normal operating conditions. It's just the average power draw of the chip alone under your "everyday average real-world apps" -> games. Sure, the chip can draw lots of power under special circumstances (that's what FurMark tries to do), but those are rare in the common software being run on the chip.
So, this means that, for example, the HD 5870 has an MBP of 188 W, while the Cypress GPU's TDP could be anywhere from roughly 115 W to 150 W; no one seems to know, as AMD does not give out this information. Also, I'm not entirely sure: IF the cooling solution is supposed to keep the VRAM and VRMs cool too, do they add to the TDP value (which is aimed at the cooling solution designers)? Or is it just the CHIP's TDP?
I'm not quite sure how Nvidia defines their TDP: is it the usual way (average power consumption of the chip under average load), or do they include VRAM, VRM and board power in it? If they do, they're REALLY pushing the limits: the GTX 480's TDP (the CARD's, not the GPU's) is 225 W, and the card draws around ~225 W on average in games. From a quick glance this would seem to indicate that the TDP is the actual card power consumption. But noting that there is usually some headroom in the TDP value (to make sure the chip temperature doesn't get too near the max junction temperature, to account for dust build-up, etc.), Nvidia wouldn't have left ANY headroom, which would then indicate that the 225 W TDP is for the GF100 GPU, NOT the whole-card power draw, as MBP is.
In short, look at this (info from TechPowerUp!'s GTX 480 Fermi article):

AMD (HD 5970, Hemlock, 2x Cypress XT @ PRO clocks): 294 W MBP (42 W idle MBP).
Nvidia (GTX 480, GF100): 225 W TDP (no info about idle TDP).

Measured card power draw:
Idle: HD 5970: 39 W | GTX 480: 54 W
Average: HD 5970: 178 W | GTX 480: 223 W
Peak: HD 5970: 211 W | GTX 480: 257 W
Max: HD 5970: 304 W | GTX 480: 320 W
That's why it's impossible to compare TDP and MBP in an accurate manner, and people should understand the difference between the two and NOT mix them up when speculating!
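To make the distinction concrete, here is a rough back-of-the-envelope sketch in Python. Every component figure in it (chip draw, memory and fan power, VRM efficiency) is my own assumption for illustration, not an AMD or Nvidia number; the point is only that chip TDP plus conversion losses and the rest of the board lands in the ballpark of the quoted MBP.

```python
# Rough illustration of the difference between chip TDP and Maximum Board Power (MBP).
# All numbers below are assumptions for illustration only, not official figures.

def estimate_board_power(chip_power_w, mem_power_w, fan_misc_w, vrm_efficiency):
    """Estimate total board power from chip + memory power, VRM losses and fans."""
    delivered = chip_power_w + mem_power_w      # power the components actually consume
    vrm_input = delivered / vrm_efficiency      # what the VRMs pull from the 12 V rails
    return vrm_input + fan_misc_w               # plus fan and other board circuitry

# Hypothetical HD 5870-like case: ~140 W GPU, ~25 W GDDR5, ~5 W fan/misc, ~90% efficient VRMs.
print(round(estimate_board_power(140, 25, 5, 0.90)))  # ~188 W, in the ballpark of the 188 W MBP
```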
If it's an extremely large chip, it would explain the decision to move the naming one notch higher: they'd have a single-chip 69xx card which might blow away the 5970, and a 6950 at GTX 480 level.
Considering the size/efficiency of the 6870, they could make a Cypress-sized chip (~300 mm²) at the same performance level as the GTX 480, AND a dual-GPU card with this chip which is another 50-70% faster,
or a 400+ mm² behemoth at 5970 performance, without a dual-GPU card.
I believe they went with the first option, but we'll see in a month or two...
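Just to sanity-check that first option with some rough numbers: here is a quick perf-per-area extrapolation in Python. All inputs are assumptions on my part (Barts at ~255 mm² and ~90% of an HD 5870, GTX 480 at ~115% of an HD 5870, linear scaling with area, ~85% CrossFire scaling), so treat the output as a ballpark only.

```python
# Back-of-the-envelope perf/area extrapolation. Every input is an assumed rough figure,
# not an official spec; scaling performance linearly with die area is optimistic.

barts_area_mm2  = 255    # assumed HD 6870 (Barts) die size
barts_rel_perf  = 0.90   # assumed performance relative to an HD 5870 (= 1.0)
gtx480_rel_perf = 1.15   # assumed GTX 480 performance relative to an HD 5870

perf_per_mm2 = barts_rel_perf / barts_area_mm2

# Scale a Barts-like design up to a Cypress-sized ~300 mm^2 die.
hypothetical_300mm2_perf = perf_per_mm2 * 300
print(f"~300 mm^2 chip: {hypothetical_300mm2_perf:.2f}x HD 5870")  # ~1.06x, near GTX 480 territory
print(f"GTX 480 target: {gtx480_rel_perf:.2f}x HD 5870")

# A dual-GPU card built from two such chips with ~85% multi-GPU scaling:
dual_perf = hypothetical_300mm2_perf * 2 * 0.85
print(f"Dual-GPU estimate: {dual_perf:.2f}x HD 5870")  # ~1.80x, i.e. ~70% over the single chip
```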
http://www.fudzilla.com/graphics/ite...s-the-new-r300
These are boastful words; I hope Fud isn't BSing about what he heard. Quote:
AMD promotes Cayman as the new R300
It's interesting... some people don't mind the power draw of the GTX 480 because of its performance. I, on the other hand, run my 5850 undervolted at 720 MHz at 1.000 V, a decrease of 16 watts over stock. I also have my Phenom X6 overclocked, but at stock volts; under normal gaming my computer with 1 SSD and 3 HDDs + 5 fans uses 225 watts. My point being, some of us do care about power draw. But my desktop is less efficient at browsing the web compared to my laptop (153 watts vs. about 8.1 +/- 0.3). So in a sense I also sacrifice power draw for performance.
I'm interested in 6970 but I might still replace my 5850 with 6870:rofl:
AMD isn't able to create a dual-GPU card from chips which consume more than Cypress XT @ PRO clocks without doing some wizardry, because the 5970 is already pushing the very limits of the PCI-SIG standard. What they COULD do is reduce the maximum power draw somehow, which would let them increase the average power draw of the card and thus improve real-world performance while staying under the 300 W limit. So... this leaves them with three options:
1: There is no Antilles
2: Antilles is Barts x2, or Cayman PRO doesn't consume significantly more than Cypress.
3: They've done wizardry which allows them to squeeze two more power-hungry chips onto a single PCB while keeping the max board power under 300 W. Underclocked Cayman PRO? AIBs would create the REAL Antilles with OC'd chips and >300 W MBP, effectively circumventing the PCI-SIG 300 W limitation for PCI-E devices (connector math sketched below). ;)
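For reference, the 300 W figure comes from adding up the PCI-SIG limits for the slot and the auxiliary connectors. A minimal sketch of that budget follows; the connector ratings are the standard PCIe values, while the board configurations are just examples.

```python
# PCI-SIG power limits per source (standard PCIe values, in watts).
SLOT_75W, SIX_PIN_75W, EIGHT_PIN_150W = 75, 75, 150

def board_power_limit(*aux_connectors):
    """Maximum board power allowed by the slot plus the given aux connectors."""
    return SLOT_75W + sum(aux_connectors)

# HD 5970-style board: slot + 6-pin + 8-pin.
print(board_power_limit(SIX_PIN_75W, EIGHT_PIN_150W))    # 300 W -> already at the spec ceiling

# A hypothetical AIB "real Antilles" with two 8-pins would nominally allow:
print(board_power_limit(EIGHT_PIN_150W, EIGHT_PIN_150W))  # 375 W, beyond the 300 W PCI-SIG cap
```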
That's what I'm hoping for. I would love it if Antilles comes out with clocks like 600 MHz and low volts, but cooling bigger than the Mars. The stock settings might only be 20-40% faster than Cayman XT, but when running at full speed it will be loud and scary.
Antilles is probably Cayman PROs with lower clocks to ensure a controllable TDP rating. Still, it will be the fastest card around, faster than Cayman and the 5970 (and the most power hungry too :)).
Yeah, you could be right :D. But first we would have to see the mysterious 580 card, and then it would have to beat the old 5970 in performance (the "max power draw king" title is going to the 580 in this match for sure).
I believe there's not much left to push; some cards (for example the 5970 and GTX 480) already exceed 300 W under some loads. Still, it's possible that they optimize the power draw in such a way that the average/typical load increases while the max/peak load stays around ~300 W. That requires some engineering; it's not simple.
Sounds like FurMark throttling to me: first by the BIOS, then by temp sensors. It's tough to know whether temperature exactly tracks power consumption, but it can be reliable enough to make sure the card can draw as much as it wants without damaging itself (though the PSU might be hurt anyway).
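For what it's worth, that idea can be sketched as a simple control loop that uses temperature as a stand-in for power. Everything below (thresholds, step sizes, clock bounds) is hypothetical and only illustrates the concept, not any actual BIOS or driver implementation.

```python
# Conceptual sketch of temperature-proxy throttling. All values are made up for illustration.

THROTTLE_TEMP_C = 95   # assumed trip point near the max junction temperature
RECOVER_TEMP_C  = 85   # assumed temperature at which clocks may ramp back up
CLOCK_STEP_MHZ  = 50

def throttle_step(current_clock_mhz, die_temp_c):
    """Return the next core clock, stepping down when too hot and back up when cool."""
    if die_temp_c >= THROTTLE_TEMP_C:
        # Too close to the limit: reduce clocks, which indirectly reduces power draw.
        return max(current_clock_mhz - CLOCK_STEP_MHZ, 300)
    if die_temp_c <= RECOVER_TEMP_C:
        # Comfortably cool: allow clocks (and power) to climb back toward stock.
        return min(current_clock_mhz + CLOCK_STEP_MHZ, 850)
    return current_clock_mhz  # inside the hysteresis band: hold steady

# Example: a FurMark-style load pushing the die to 97 C at 850 MHz.
print(throttle_step(850, 97))  # -> 800 MHz
```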
The best way I can think of is to actually make smaller Cayman cores just for the job. Before you shoot me, hear me out here: the scaling is so good on Barts that you'd only need a fraction more shaders to make a noticeable gap between Antilles and Cayman.
I imagine making a core with about 3/4 of Cayman's shaders would be enough to make the economics, power consumption and yields workable for AMD. If Cayman really does have 1920 complex shaders, then we're talking about 1440 shaders per core, with the front-end improvements from Barts but each shader able to do more work. At that size they'd weigh in around the same transistor count as Cypress, but with 95% scaling two of them would still beat Cayman by about 40%, as they'd have a 960-shader advantage.
So yeah, it sounds stupid, but if I wanted a 40 nm X2 I'd just make Caymans around the size of a Cypress; anything bigger would be too hard for me to think around.
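The shader math in that post does add up as stated; here it is spelled out, taking the post's own assumptions (1920 shaders for Cayman, 95% dual-GPU scaling) as given.

```python
# Spelling out the shader arithmetic from the post above.
# The 1920-shader Cayman figure and the 95% dual-GPU scaling are the post's assumptions.

cayman_shaders = 1920
cut_down_shaders = int(cayman_shaders * 3 / 4)        # 1440 shaders per core
dual_effective = 2 * cut_down_shaders * 0.95          # 2736 effective shaders at 95% scaling

print(cut_down_shaders)                               # 1440
print(2 * cut_down_shaders - cayman_shaders)          # 960-shader raw advantage
print(f"{dual_effective / cayman_shaders - 1:.0%}")   # ~42%, i.e. the "about 40%" in the post
```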