In this case, paper launch means you can hand them money and they will hand you a shiny new card with a paper receipt.
Quite spot on!!
I can't see a 225W TDP happening; an XT already uses two six-pin power connectors, which alone is indicative of what the potential overall power draw can/will be.
http://img514.imageshack.us/img514/196/6870p.jpg
Any scaling back of a pair of XTs in an attempt to fit them into a 225W TDP would make it a failed replacement for the 5970 for the consumer.
Nice paper launch, except the part about handing them the money. :) Anyway, OBR is always negative about ATI; he is an NVIDIA fanboy, so I doubt he has any inside info from ATI or the AIB partners.
For me it looks like it's going to be an ATI card for the third time in a row; the last NVIDIA card I bought was the 8800 GTX.
After that I updated to the HD 4870 X2 because NVIDIA didn't have anything to compete with it for about 6 months, and even then it was originally the two-card sandwich model.
Then I bought the HD 5970, and now, about 10 months later, NVIDIA still doesn't have any card to compete with it.
Most likely my next update is going to be the AMD/ATI Antilles, since it looks like NVIDIA is not going to have any card to compete with it for a very long time.
TDP, MBP.. What do they stand for again?
I know TDP is thermal design power; is it the maximum power the computer can dissipate?
Maybe someone else can explain it better.
No idea what MBP is....
Thermal design power is what the term usually means. It tells you two things: how much heat the chip makes at maximum load (and thus what sort of cooling is necessary), and also how much power it uses at maximum load. Video cards are about 99.9% inefficient on an energy-conversion level: everything the card takes in, except for a few milliwatts that escape as signals and a few watts that escape as kinetic energy (the fan), leaves as heat.
This whole concept is not so hard to grasp if you think of it this way:
- MBP (maximum board power) = the amount of power in watts that goes IN to the card, limited by what the 6-pin/8-pin/PCIe connectors can supply.
- TDP (thermal design power) = the amount of heat in watts dissipated OUT of the chip.
- Power draw = the total electricity required by the whole card to function. It cannot exceed MBP, since that is the maximum power that can be fed to the board. Also, since energy cannot be created, only converted, TDP in watts cannot be higher than power draw.
I think this is correct anyway - think electricity in vs. heat out and you'll figure it out better.
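A minimal Python sketch of where those connector limits put the ceiling (the per-connector figures are the PCI-SIG spec values: 75 W from the slot, 75 W per 6-pin, 150 W per 8-pin):

PCIE_SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150  # PCI-SIG limits, in watts

def max_board_power(six_pins=0, eight_pins=0):
    # Hard ceiling on what the slot plus connectors can legally feed the board.
    return PCIE_SLOT + six_pins * SIX_PIN + eight_pins * EIGHT_PIN

print(max_board_power(six_pins=2))                # 225 W: two 6-pins, like a 6870
print(max_board_power(six_pins=1, eight_pins=1))  # 300 W: 6-pin + 8-pin, like the 5970

That 300 W figure is the same PCI-SIG limit people cite for the 5970 later in the thread.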
These two posts should cover TDP & MBP:
http://www.xtremesystems.org/forums/...0&postcount=95
http://www.xtremesystems.org/forums/...6&postcount=98
Edit: Argh.. ~All the electricity is turned to heat eventually, 180 W in as electricity = 180 W out as heat.
MBP: the new term that tells you it's not FurMark/no-vsync/power-virus safe.
The 300W limit should be based on whatever AMD's definition of TDP is, since they cut the 5970 down to exactly 300W.
FUD: AMD promotes Cayman as the new R300
Then Antilles is the new dual R300! :eek:
Quote:
We have received some new information about Cayman, claiming that the new GPU will end up big and hot, which is not surprising as it is the biggest GPU that AMD has ever made. We don't have any exact numbers yet, but the general feeling about this quite secretive project is that it might be the new R300.
If you are old enough you will remember the famous Half life 2 vouchers and ATI’s first leader series, which included the now legendary Radeon 9700. This chip dominated against Nvidia's Geforce FX 5800, the infamous NV30, so Cayman has a lot to live up to. This was some eight years ago, in summer of 2002, way before Tweeter, Facetube and Youbook launched.
AMD is either hoping that Nvidia doesn't have a chance with its GF110 aka GTX 580 or that Nvidia could fail again. However, we believe that making the same mistake twice would be something that would really upset Nvidia CEO Jensen Huang.
ATI partners are confident and it looks like they should get enough cards. If all goes well, the Cayman Radeon HD 6970 should launch in the last week of November, provided of course it doesn't get pushed back further.
so Antilles = R300??? :O can't wait to see how epic it will become
So if Antilles is R300 and I have an R600, then I'm 300 better than the to-be-released card? :welcome:
Let's hope Cayman is the new R300! I feel a graphics war coming on in price and performance!
so now Cayman is going to be bigger than a 2900?
that would make Antilles look like a volcano compared to GF100 :S ...
R300 = 9700 Pro .... and seriously, I don't think Cayman will bring what R300 brought to the older generation .... the comparison is a little bit optimistic.
Yeah, who cares about all this TDP nonsense? My car has more TDP in one square inch than my whole computer at full load. I'm really not concerned... only about performance; everything else my watercooling setup can sort out.
That's why you have a 2900XT in your computer, because you only care about performance? :))
haha, same goes for 5970 owners. So many people put way too much stock in power handling... does it really make that big of a difference? As long as you have a real PSU you should not have any issues. Yeah, the GTX 480 uses tons and tons of power, but it also has tons and tons of performance, especially when OCed. I have NO issue with a really power-hungry card as long as it's got performance to match. I'm not talking about performance per watt, because you always have to pay more for the most performance. Take the GTX 480 for example: it would be crazy if it were slower than a 5870 and used more power, but it is faster than a 5870, so the power draw is justified.
I have no issue if Cayman or Antilles uses just as much power as my 480, or more; as long as they are faster, I would totally look at upgrading.
nobody buys a top-end card for its power consumption, so I think we need to stop worrying about it.
what I do hope ATI fixes is their stock cooling and stock fans. I have had many issues with blower fans on ATI cards blowing out bearings after a year of not-so-heavy usage; I have seen it on my 2900, a pair of 4870s and my 5870, and it really makes me hesitant to run the fans up for a long period of time. it would be nice to see a more innovative stock cooler on Cayman, maybe some external heat pipes like the GTX 480? or a heatsink/shroud combo? a larger, lower-RPM fan? maybe Batmobile v2? ok, maybe not the last one :p:
well, to each his own I guess. I did notice, however, that when I switched from the 5870 to the GTX 480 my temps stayed close to the same because the cooler on the GTX 480 is much better. it does get loud, but I game with BOSE noise-cancelling headphones so I never hear it. it does pump more heat into the room, but considering it's starting to get cold up here in Canada, that's not really a bad thing...
Cayman will have better performance per watt than GF100; looking at the 6870, they have that down pretty well. but people who think like me are hoping it has close to the same power consumption, meaning it will have monster performance haha
if I need noise-cancelling headphones, the cooling solution is a fail, lol
I can agree with that for certain things, but if it's at all audible during idle, then I wouldn't use it at all.
R300 might have been one of the all-time great @ss-kickings ever delivered, so that's quite a bit of hype... but even coming close to that would be amazing
If this chip is officially the biggest chip ATi have ever released, how does anyone expect a dual-gpu card out of this thing?
I'm not saying it won't be possible, but from a physical side of things how can they pull that off?
Not just from a power standpoint, but a pricing standpoint as well. If this is the biggest core ATi have ever made, that's going to return them to the ultra-high-end price bracket, I would assume, correct? It's pretty clear this thing will probably not take the old Cypress price points, as I doubt AMD wants to make less money per card sold on this than they did with the 5xxx series, since it was CLEARLY more work to create. How do you price a board that's essentially two of those chips?
Again, I'm not hating on it, I'm just curious as to how they'll pull this one off.
I think it's off by a bit. :p:
Maximum Board Power:
As far as I've understood, and as the name suggests, this is the amount of power the board has to be able to handle, a.k.a. whole-card power consumption, including the inefficiency of the VRMs etc. It is the TOTAL power draw of the product under its intended workloads. Note that this is mainly for the AIBs, so that they know what the board should be able to handle, and thus it does NOT mean the maximum theoretical power draw of a product.
Thermal Design Power:
This value is intended for the cooler designers: the amount of power the cooling solution needs to be able to dissipate without the cooled chip/circuitry exceeding its maximum junction temperature (the maximum actual temperature INSIDE the die). Here comes the tricky part: if the cooling solution is supposed to cool only the GPU die (no VRM, no VRAM cooling), the TDP number "should" indicate what the chip manufacturer believes the chip in question will draw under normal operating conditions. It's just the average power draw of the chip alone under your "everyday average real-world apps" -> games. Sure, the chip can draw lots of power under special circumstances (that's what FurMark tries to do), but those are rare in the common software being run on the chip.
So, this means that, for example, the HD 5870 has an MBP of 188 watts, while the Cypress GPU's TDP could be anywhere from some 115 W to 150 W; no one seems to know, as AMD does not give out this information. Also, I'm not entirely sure: IF the cooling solution is supposed to keep the VRAM and VRMs cool too, do they add to the TDP value (which is aimed at the cooling-solution designers)? Or is it just the CHIP TDP?
I'm not quite sure how Nvidia defines their TDP: is it the usual way (average power consumption of a chip under average load), or do they include VRAM, VRM and board power in it? If they do, they're REALLY pushing the limits: the GTX 480's (CARD, not GPU) TDP is 225 W, and the card draws around ~225 W on average in games. From a quick glance this would seem to indicate that the TDP is the actual card power consumption. But given that there is usually some headroom in the TDP value (to make sure that the chip temperature doesn't get too near the max junction temperature, dust build-up etc.), Nvidia wouldn't have left ~ANY headroom, which would then indicate that the 225 W TDP is for the GF100 GPU, NOT the whole-card power draw as MBP is.
In short, look at this (info from TechPowerUp's GTX 480 Fermi article):
AMD (HD 5970, Hemlock, 2x Cypress XT @ PRO clocks): 294 W MBP (42 W MBP idle).
Nvidia (GTX 480, GF100): 225 W TDP (no info about idle TDP).
Idle: HD 5970 39 W / GTX 480 54 W
Average: HD 5970 178 W / GTX 480 223 W
Peak: HD 5970 211 W / GTX 480 257 W
Max: HD 5970 304 W / GTX 480 320 W
That's why it's impossible to compare TDP and MBP in an accurate manner, and people should understand the difference between the two and NOT mix them up when speculating!
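To put numbers on that, here's a quick Python comparison of each card's quoted spec against TechPowerUp's measured maximums from the list above (no official GPU-only figure for either card is known, so this only uses the numbers already posted):

specs = {"HD 5970": ("MBP", 294), "GTX 480": ("TDP", 225)}
measured_max = {"HD 5970": 304, "GTX 480": 320}   # W, from the list above

for card, (kind, value) in specs.items():
    delta = measured_max[card] - value
    print(f"{card}: {kind} {value} W, measured max {measured_max[card]} W ({delta:+d} W)")

The MBP figure tracks the worst case within ~10 W, while the TDP figure undershoots it by ~95 W, which is exactly why the two numbers can't be compared head to head.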
if it's an extremely large chip, it would explain the decision to move the naming one notch higher; they'd only have a single-chip 69xx card, which might blow away the 5970, and a 6950 @ GTX 480 level
considering the size/efficiency of the 6870, they could make a Cypress-sized chip (300 mm²) at the same performance level as the GTX 480, AND a dual-GPU card with this chip, which is another 50-70% faster
or a 400+ mm² behemoth @ 5970 performance, without a dual-GPU card
i believe they went with the first option, but we'll see in a month or two....
http://www.fudzilla.com/graphics/ite...s-the-new-r300
These are boastful words; I hope Fud isn't bs'ing about what he heard.
Quote:
AMD promotes Cayman as the new R300
It's interesting... some people don't mind the power draw of the GTX 480 because of its performance. I, on the other hand, run my 5850 undervolted at 720 MHz at 1.000 V, a decrease of 16 watts over stock. I also have my Phenom X6 overclocked, but at stock volts; under normal gaming my computer, with 1 SSD and 3 HDDs + 5 fans, uses 225 watts. My point being, some of us do care about power draw. But my desktop is less efficient at browsing the web compared to my laptop (153 watts vs. about 8.1 +/- 0.3). So in a sense I also sacrifice power draw for performance.
I'm interested in the 6970, but I might still replace my 5850 with a 6870 :rofl:
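For anyone wondering how a small undervolt saves that many watts, here's a rough Python sketch using the classic CMOS dynamic-power relation P ~ f * V^2 (the stock voltage and load power below are my assumptions, not measured values):

def scaled_power(p_old, f_old, v_old, f_new, v_new):
    # Dynamic power scales roughly with frequency times voltage squared;
    # this ignores static/leakage power, so treat it as a rough guide only.
    return p_old * (f_new / f_old) * (v_new / v_old) ** 2

# Assumed: ~110 W gaming load for a stock HD 5850 at 725 MHz / 1.088 V.
print(round(scaled_power(110, 725, 1.088, 720, 1.000)))  # ~92 W, an ~18 W drop

That lands in the same ballpark as the 16-watt decrease reported above.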
AMD isn't able to create a dual-GPU card from chips which consume more than Cypress XT @ PRO clocks without doing some wizardry, because the 5970 is already pushing the very limits of the PCI-SIG standard. What they COULD do is reduce the maximum power draw somehow, which would let them increase the average power draw of the card and thus improve real-world performance while staying under the 300 W limit. So.. this leaves them with three options (rough budget math in the sketch after the list):
1: There is no Antilles
2: Antilles is Barts x2, or Cayman PRO doesn't consume significantly more
3: They've done wizardry which allows them to squeeze two more power-hungry chips onto a single PCB while keeping the max board power under 300 W. An underclocked Cayman PRO? AIBs would then create the REAL Antilles with OC'd chips and > 300 W MBP, effectively circumventing the PCI-SIG 300 W standard limitation for PCI-E devices. ;)
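Here's the rough budget math behind options 2 and 3 (a Python sketch only; every overhead figure is an assumption, since AMD publishes none of this):

BOARD_LIMIT = 300       # W, PCI-SIG ceiling for a single PCI-E device
BRIDGE_MISC = 10        # W, assumed: PLX bridge chip, fan, misc board logic
VRM_EFFICIENCY = 0.85   # assumed DC-DC conversion efficiency
MEM_PER_GPU = 20        # W, assumed memory power per GPU

usable = (BOARD_LIMIT - BRIDGE_MISC) * VRM_EFFICIENCY
per_gpu = usable / 2 - MEM_PER_GPU
print(round(per_gpu))   # ~103 W left per GPU core under these assumptions

Under assumptions like these, each Cayman would have to live on roughly a hundred watts, which is why option 3 practically implies heavy underclocking/undervolting at stock.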
that's what I'm hoping for; I would love it if Antilles comes out with clocks like 600 MHz and low volts, but with cooling bigger than the Mars. the stock settings might only be 20-40% faster than Cayman XT, but when running at full speed it will be loud and scary.
Antilles is probably Cayman PROs with lower clocks to ensure a controllable TDP rating. Still, it will be the fastest card around, faster than Cayman and the 5970 (and the most power hungry too :) ).
Yeah, you could be right :D. But first we would have to see the mysterious 580 card, and then it would have to beat the old 5970 in performance (the "max power draw king" title is going to the 580 in this match for sure).
I believe there's not much left to push; some cards (for example the 5970 and GTX 480) already exceed 300 W load in some situations. Still, it's possible that they optimize the power draw in such a way that the average/typical load increases while the max/peak load remains ~300 W. This requires some engineering; it's not simple.
sounds like FurMark throttling to me: first by the BIOS, then by temp sensors. it's tough to know whether temperature exactly tracks power consumption, but it can be reliable enough to let the card draw as much as it wants without damaging itself (though the PSU might be hurt anyway)
The best way I can think of is to actually make smaller Cayman cores just for the job. Before you shoot me, hear me out here: the scaling is so good on Barts that you'd only need a fraction more shaders to make a noticeable impact between Antilles and Cayman.
I imagine making a core with about 3/4 of the shaders of Cayman would be enough to make the economics, consumption and yields workable for AMD. If Cayman really does have 1920 complex shaders, then we're talking about 1440 shaders per core, with the front-end improvements from Barts, but with each shader able to do more work. At that size they'd weigh in at around the same transistor count as Cypress, but with 95% scaling two of them would still beat Cayman by about 40%, as they'd have a 960-shader advantage (quick check below).
So yeah, it sounds stupid, but if I wanted a 40nm X2, I'd just make Caymans around the size of a Cypress; anything bigger would be too hard for me to think around.
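For what it's worth, the arithmetic in that post checks out. A quick Python check (the 1920/1440 shader counts and 95% scaling are the post's assumptions, and it treats performance as linear in shader count):

cayman_shaders = 1920   # the post's assumed Cayman XT shader count
cut_down = 1440         # 3/4 of Cayman
scaling = 0.95          # assumed dual-GPU scaling efficiency

advantage = 2 * cut_down - cayman_shaders              # raw shader advantage
speedup = 2 * cut_down * scaling / cayman_shaders - 1  # vs a single Cayman
print(advantage, f"{speedup:.1%}")                     # 960 42.5%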
- fud title, but if AMD actually is comparing Cayman to R300... then wow.
Quote:
AMD promotes Cayman as the new R300
They expect it to best the 580.
Yeah, I should've checked Fudo's other rumor posts. He contradicts himself here, a day before his R300 headline: http://www.fudzilla.com/graphics/ite...man-is-sampled
You can't call it the R300 based on the linked description. He's just covering his bases.
I'm really curious how AMD will handle the power consumption in Antilles. Both manufacturers are soon going to run into power walls with their new designs unless they are willing to break spec or a new spec comes out that increases the limit.
For Antilles I could imagine a few solutions. First is the obvious: use cut-down and/or underclocked chips to keep it just under the power limit.
Another solution could be to reduce the other components on the board. If they made the GPUs communicate directly instead of over PCI-e, they could ditch the PLX chip. And if they made the GPUs able to share the same RAM, they could use half the number of RAM chips for the same memory size.
Or they could supply it with an external power supply. :ROTF:
If nVidia could pull off a GTX 295 with a TDP under 300 W using dual GT200b chips sized 480 mm^2 each, basically a dual GTX 275 (which has a 215 W TDP of its own), i think AMD is competent enough to match that feat.
The new R300 eh? That means the Cayman based cards are going to be epic.
Also, R300 didn't use that much power, either. ;)
if Cayman is epic, I can't wait for just one Antilles :D
it'd be funny/cool if Antilles only came with a watercooling kit
Antilles facts:
Fact 1 - You do not place antilles on your motherboard. You assemble your motherboard over and around antilles.
Fact 2 - Antilles will be so long that you will not be able to install hard drives; all your HDD bays now exist only to fit Antilles. From then on you may only use external hard drives.
Fact 3 - When playing games with Antilles, all graphics settings in all known games shall have only one mode: "You win" mode. It will be the only card to make all games playable on that setting.
antilles can play crysis 2
That really makes me wonder how they are going to make such a massive card. Cayman chips are supposedly bigger than Cypress and significantly more power hungry. Unless they completely castrate the Cayman chips (considering how much they underclocked the 5970), they are going to need a monster cooler and very likely a long card. If the 5970 was 13 inches, then how long is Antilles going to be? I actually wouldn't mind a 14-16 inch card; it might just barely fit in my Corsair 800D, but if I am going to pay 700+ dollars for it, I want my card to feel substantial. Also, they had better put two 8-pin connectors on it this time; they are going to have to underclock it anyway, and a 6-pin plus an 8-pin don't feel like enough this time around.
Fact 5 - Only the speed of Antilles is faster than the speed of thought.
Fact 6 - Antilles is its own element in the periodic table, denoted by the symbol "Win".
Fact 7 - These aren't facts.
Antilles is the technoviking!
Still 1GB of VRAM... I was expecting at least 2GB. Also, since they said the die is big, I wonder if it's still a 256-bit bus or a 512-bit one.
Fact 8 : Antilles is the only known entity to emit awesomeness greater than that of Chuck Norris.
.... it was so coming.
AMD won't be the first graphics IHV to try to fit two 200+ W TDP chips into a single card with a 300 W TDP limit; nVidia actually did that earlier with the GTX 295 (dual GTX 275, TDP 215 W), and successfully complied with the PCI-E SIG limit of 300 W for a single-slot PCI-E device. ;)
As much as i want AMD to squeeze as much performance as possible out of the Cayman chip, i don't believe the individual TDP of the Cayman XT card will exceed that GTX 275 number. They don't want their own version of the baconator, LOL. :ROTF:
The GTX 295 has the same length as its single-chip brother, the GTX 285. AMD went conservative with the HD 5970, lengthening its dual-chip board so the thermal & noise characteristics would improve (considering the market segment it aimed at, the decision is quite understandable).
I'm sure Hemlock's length was already at the frontier of what consumers would accept, so i don't think Antilles will be any longer. The non-reference boards might grow wider though, if the AIBs try to extract the final clocks out of this monster chip and don't adhere to the PCI-E SIG limit (at least when OCing).
well, to each his own i guess. since you obviously know this, as you apparently designed it... since your avatar is a large CPU cooler....
so tell me, based on what awful design flaws will Antilles not be able to cool itself? I am just asking, simply because I have never met anybody from the GPU design team, and judging by the way you talk you seem to be one.... good on ya :rofl::rofl::rofl:
In Soviet Russia... Antilles buys you!
That's pretty far off from a hunk of metal...
Give me a break guys. It is a video card, not a breakthrough in fusion or the end of global strife. In a year or (hopefully) less it will be surpassed by something even better.
And this is coming from someone who has owned a 4870x2, 5970, and will probably own an Antilles card (unless Nvidia has a compelling answer, which I doubt).
P.S. Chuck Norris was surpassed in brainpower by a vacuum tube; I doubt Antilles will have any trouble in that regard.
Fact 9 : When buying Antilles a servant is provided that ******** and builds a temple to your honor.
EDIT - do we really need comments like that? - STEvil
On that we agree. I'm sure his cranial cavity could evacuate many millions of hollow glass cylinders and still have room to spare.
Fact 10) All the awesomeness blown off the Antilles card can be redirected into a whole new computer to power it. :D
I wish AMD/ATI would release some specs already. We already know it still comes with 1GB, which by the way is not enough; 2GB would be more adequate for today's high-end gaming.
Unless you play in Eyefinity, that's not really the case. Besides, 1GB is for the reference model; 2GB versions are coming for sure. The same thing happened with the 5870.
Quote:
I wish AMD/ATI would release some specs already, we already know it still comes with 1GB, which by the way is not enough. 2GB would be more sufficient for today's high end gaming
More memory doesn't magically make a card faster unless it actually needs that memory.
fact 11: antilles can divide by zero.
fact 12: every time an Antilles is manufactured, Chuck Norris senses a disturbance in the force.
:up:
there are so many tests which found that there might be some settings you can trigger with console switches or texture packs that make it necessary to use more VRAM, but stock games hardly benefit from the extra RAM :rolleyes:
if they improve memory efficiency and compression, they can get the same effect as doubling the amount of memory together with a wider bus :rolleyes:
remember, the 2GB versions of the 5870 proved to be pretty much useless except for e-peen and those who like to mod games like Crysis to use even bigger textures...
Fact 13: When the power goes out, all power is redirected at Antilles because Antilles does not lose power.
i love these fun facts .. they keep this thread civilised :D
Lol some very good facts there guys
and they say AMD didn't release any info on Antilles yet. there are plenty of facts now! and against facts there are no arguments...
Fact 14 - Antilles is so fast its calculations can break the space-time continuum and correctly predict lottery results. It can also predict your last day alive; this only fails on Chuck Norris, since Chuck Norris cannot die.
i think AMD didn't want to go to 2GB; it's not really useful in most cases. If you don't have at least 3x 24" monitors, or play Crysis in ultra on a 30" with 8x AA, there is no real need.
Current games don't even use 512 MB of textures.
And if they want to go to a 384-bit bus in the next gen (they won't go to 512-bit; we won't need that for a very long time), they'll sell 1.5 GB cards, sold alongside the 2GB ones (see the sketch below). So this could become a problem for the "upgrade" path.
1GB, plus a special 2GB Eyefinity6 edition, is a good choice.
The thing i hope is that AMD will sell the special 2GB Eyefinity edition and the normal 1GB edition on the same day, both fully available at retailers. That was a problem with the old 5870 Six edition.
and i would like to see a Cayman PRO 2GB edition too.
That would be the smarter choice.
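A quick Python sketch of why bus width pins down the memory size: each GDDR5 chip has a 32-bit interface, and 1 Gbit (128 MB) chips were the common density at the time, so a 2GB card on a 256-bit bus needs denser chips or clamshell mode.

def vram_gb(bus_width_bits, gbit_per_chip=1):
    chips = bus_width_bits // 32      # one GDDR5 chip per 32 bits of bus
    return chips * gbit_per_chip / 8  # 8 Gbit = 1 GB

for bus in (256, 384, 512):
    print(f"{bus}-bit -> {vram_gb(bus)} GB")  # 1.0, 1.5, 2.0 GB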
Fact 15: Antilles is so good it creates all its own facts. A sort of self-fulfilling FACTory card. :D
The problem I see is that with the new DisplayPorts and outputs, each card will already push 5 or even 6 monitors, if I'm correct, making the Six edition sort of a moot product this time. I think they need reference designs with 2GB and not a $100 premium like the Six edition had. AMD advertises Eyefinity for their cards, but it's exactly those resolutions that require 2GB; they should push 2GB more this time around.