Future games will demand more than 1.2 GB at 1920x1200 with max details, 8x AA, 16x AF. Going with 2 GB is really nice future-proofing for your gaming rig.
Nvidia WINS!
Flawless Victory.
Great new product.
Crossfire 6950 has got to be the bang for the buck of the decade.
The order of dominance AMD has over Nvidia is astounding:
fastest card in the world is the 5970, soon to be the 6990,
so AMD will have the 6990/5970 and then a trailing mid-range 580 comes along.
Utter Pimhole, and you know it.
The 580 is the best single GPU, then we have the 6970 a little above the 570, and the 6950 at the rear.
so
580>6970>570>6950
If you play at 1900x1000, then the 570 looks to be the better card (with the 6950 shaping up for those on a little bit more of a budget). If you play at higher resolutions then the 580 is king, with the 6970 for those on a budget (budget being relative).
Looks like the norm we've had for a couple of years now.
I am not impressed. Nvidia still looks as good as it did yesterday. It's very interesting that AMD lost all the advantage it had in pricing and power efficiency.
One positive thing that comes out of this generation is that it shows both architectures may have legs for the next generation, and a super obvious advantage really doesn't exist at this point for either company. Although I am not sure it will ever happen, it just means both companies have to really throw off the gloves next generation at 28nm and not hold back, because neither company has room: they are closer than ever in performance. I kind of like this situation because it gives room for both companies to make money. AMD might actually need to stop being conservative and build a monster monolithic chip.
Sure, NV's chip is bigger, but it has only 10% fewer transistors, and I don't see it losing the GPU computing crown. So I think they are playing at a pretty even level.
On another note, I hope when the 5970 gets sold out, people don't mistake the 6970 for a dual-chip card and successor to the 5970. AMD's naming scheme is a real mess.
ding ding ding ding that is the CORRECT answer sir!
It's not that these cards suck; it's that AMD's small-chip, lower-power strategy failed this time around. These cards are shader-starved, and things are worse off this round than they were last round. At least the 5870 had a clear performance and power advantage over the 470.
Also, I am not too pleased with the 6950: it's basically 10% faster than Barts in most cases, and that is barely relevant. They could just as well have given the 6870 2 GB and more clock and called it a day.
AMD should be praying there isn't a dual Fermi in the works, because that would destroy their strategy on all levels.
This post is ridiculous:
If this card were on 32nm with all the features Anand hinted at, would people still complain?
The most likely issue is that 32nm was cancelled and AMD wanted to recuperate its R&D cost, so the best thing to do is sell it at 40nm (which they're quite familiar with) and hope for the best until TSMC gets its act together for 28nm. The fact this supposedly all happened in less than a year's time clearly caught AMD by surprise so it's amazing they even managed to get cards out that fast, given that most cards are planned out far earlier
10% faster than Barts? What reviews are you looking at? The vast majority has the 6950 awfully close to, if not right at, the 570. The card that's disappointing is the 6970, which is just 10% faster than the 6950.
Quote:
These cards are shader-starved and things are worse off this round than they were last round. At least the 5870 had a clear performance and power advantage over the 470.
Also I am not too pleased with the 6950; it's basically 10% faster than Barts in most cases and that is barely relevant. They could just as well have given the 6870 2 GB and more clock and called it a day.
Of course, that's just proof that the 69xx's are a forward looking architecture - in older games they don't beat the 5870 by much, but in newer engines they do by a good amount - look at Stalker and Metro.
More ridiculousness. When the:
Quote:
AMD should be praying there isn't a dual Fermi in the works, because that would destroy their strategy on all levels.
GTX 295 > 4870 X2
GTX 285 > 4890
GTX 275 > 4870
GTX 260-216 > 4850
Did AMD's destruction across every single card matter? Of course not, the 4800's brought AMD back in market share and everything.
Now that Cayman is closer, but not quite the performer the 580 is, it's doom and gloom? Seriously?
How's this also for perspective:
Look at where the 5870 performed at release - it lost to the 4870X2 in a lot of things and the GTX 295 as well. Look at where AMD's drivers and game optimizations + game development has gone - the 5870 is clearly ahead, and it's even creeping up on the GTX 480 in performance (at release, the GTX 480 was a good 15-20% faster, now we have situations where the 5870 can close within 10%).
Let me ask this: who's more likely to get a boost over the next year, the 580 based on the 480 or the 69xx based on nothing prior?
And to say nothing of the fact that if 28nm really did get delayed by TSMC to 2012, as some rumors are swirling now, who will be in the better position to deliver another 40nm card? The company with a 530mm^2 GPU flagship or the one with a 389mm^2 GPU flagship?
Sheesh, some people need to seriously calm down and look at perspective here outside of JUST raw performance #'s
Someone please explain?!!
WTF AMAZON 6970=483$
Amazon 6950=372$
Amazon price gouging because they don't have sales tax in states Newegg does
And why do you care so much anyways for someone being outside the US? Just curious, since all your posts are geared the same way
After all the buzz, they could only match a GeForce GTX 570 :down:
"Best Ever"
Quote:
SUNNYVALE, Calif. — Dec. 13, 2010 — AMD (NYSE: AMD) today announced AMD Catalyst™ graphics drivers have been independently verified as providing the most stable and reliable experience in the industry, and introduced an intuitive user interface. The new user interface is available for the full suite of AMD Radeon™ drivers supporting desktop, notebook and chipset graphics products. With support for Windows® 7 and Windows® Vista operating systems, the new AMD Catalyst™ drivers enable industry-leading stability, advanced performance and superior usability and simplicity.
I am tired of hearing about how awesome AMD drivers are, then being asked to wait for the next driver to get a product's "real" performance.
Because I buy everything from Amazon! I have a US address, they get shipped there, and then to here afterward. And there is tax on Amazon, just FYI!!
No, it doesn't, or Maybe in very few cases
Looking at the Anandtech review, the GTX 570 performed better in 4 of the 10 games they tested at 2560x1600.
The HD 6970 performed better in the other six games, but the difference was small in most of them
(for example, the difference in Mass Effect 2 was just 0.2 fps).
It seems that the extra memory the HD 6970 has is almost useless even at an extremely high resolution like 2560x1600.
Nope, only in states they have a physical presence in that also has a sales tax.
Anandtech is just one review. Someone needs to compile the results, because every reviewer's game suite dictates results (hence they're all over the place). Try comparing Techreport with someone else for instance, and it's a headache
I've read through so many posts on this forum, and I find it odd that when people say that it would be the end of Nvidia, or doom and gloom for Nvidia or anything bad about Nvidia it is fine. It seems around here we'll grab at anything that would make our assumptions "seem" valid but will ignore anything that might counter our assumptions or beliefs.
I know that I'll be labelled as a troll for stating my opinion, not because it is my opinion but because I seem to be stepping on toes. As they say: "If the shoe fits, then you should wear it."
All reviews I read put 6970 slightly above 570 and well behind 580.
All you guys whining about how it didn't live up to the hype: AMD has remained virtually silent; the hype is of your own making. Theorising over and over about specs when nothing concrete had been released, trying desperately to get the information you all crave. So you built the card up into some Nvidia-toppling king, which it clearly ISN'T.
Look at the data you have:
5870 = 1600SP's/80TMU's
6970 = 1536SP's/96TMU's
You're expecting miracle performance from this... why? The fact that it's 15 to 20% faster than a 5870, which has an SP advantage, is pretty impressive. Take into account the whole package. It's EVOLUTIONARY, not REVOLUTIONARY.
nVidia have consistently released cards with the words 'WE ARE TEH WINZ0RS! WE ARE FASTER THAN ANYONE!' and then slapped on a price tag to match. So AMD comes along, gives you 95% of the performance for a lower price, and gains market share, as their cards are 'fast enough' (this is an INITIAL release, with release drivers after all). Seriously - what gives with some of you?
I buy cards not based on the brand, but on the bang-for-buck. I considered upgrading to a GTX460 from my HD4850, and then the 6850's came along, stomped on that sandcastle, and here i am again, re-assessing what i really NEED. Do i game at 2560x1600 with ALL the eye candy? No. I have a 1680x1050 monitor that i want to last before replacing - and most games i find don't need more than 4xAA & 8xAF to look 'pretty'. If you want more performance out of these cards - overclock them, perhaps? That is, after all, what this forum specialises in, not hypothetical fanw@nking...
Well, it used to be AMD doom and gloom way back
I'll give you a little timeline of how things went these past 4 years in the GPU world:
May 14, 2007 - R600 is released - "AMD bankrupt by end of the year"
Oct 29, 2007 - 8800 GT released - "OMG AMD is done! 8800GT and G92 is going to be legendary!" (little did we know what kind of legendary :ROTF:)
Nov 19, 2007 - RV670 is released - "Good try AMD, but too little too late - your dual GPU can't even match our top single GPU - see you in bankruptcy"
Jun 17, 2008 - GT200 released - "Can of whoop ass opened. 2x G80 vs. 480 SP RV770 - gg AMD"
Jun 25, 2008 - RV770 released - Can of whoop ass spills on self. "zomg Nvidia is screwed"
Sep 23, 2009 - 5870 released - "Just wait for Fermi..."
Dec 31, 2009 - "Er... Nvidia where you at"
Mar 26, 2010 - GF 100 released - "Nvidia bankrupt by end of the year"
:rofl:
It has been mentioned countless times, but go read a review that uses the 10.11 drivers, which are available... Anandtech used 10.10. At HardOCP, for example, the 6950 NEVER loses to a GTX 570; the 6970 trades some blows with the GTX 580 but is overall slightly slower, and still beats the GTX 570 by a significant margin (e.g. 10%).
While Nvidia's 500 series might be just a bug fix for Fermi, Cayman is simply a letdown in terms of raw performance increase. The new architecture obviously isn't all that efficient yet. It does fix the problems with tessellation and AA performance, though it still doesn't beat Fermi in either area. There might be some promise for the next gen and with future driver updates, but that's just speculation.
In the end this release means Nvidia doesn't have to lower prices at all, but on the other hand they do have to compete with 6970 vs. GTX 570. It's just sad to see 5870 being so close to these cards, there's been hardly any improvement since last year. In my books the whole fermi fiasco and these new slightly updated cards are about on the same level as the GeForce 9000-series update:
http://knowyourmeme.com/system/icons...jpg?1266306464
Graphics power increase relies heavily upon shrinking nodes. Considering these are still at 40nm, I wouldn't say that this year's increases have been very disappointing.
My friend, did you notice that HardOCP was using different settings when they compared the GTX 570/580 to the 6950/6970? Did you?
For example, look here: they are using 8x AA for the GTX 570 but only 4x AA for the HD 6950 in Civilization
http://www.hardocp.com/images/articl...ST7GBp_4_4.gif
That's not a fair comparison. If they were using the same settings, then the GTX 570 would win easily.
Not according to this site.
http://data.fuskbugg.se/dipdip/AMD%2...weClockers.png
Looks like it may be running 'hotter', but using that as a benchmark is struggling to justify your point (almost a strawman argument).
It does look like it runs 'hotter', but it is quieter and has a lower power draw. 'Hotter' is a bit of a sham argument really, as other factors are in effect, and the temp difference seems to be within a few degrees C.
Ultimately the fault for the current situation lies with TSMC, for epically failing at 40nm and skipping 32nm. Nvidia also failed to see that coming, and the result was Fermi half a year late and neutered. If Nvidia had managed to release a full Fermi at the same time as the 5800 series, the current situation would be much, much better for the consumer: performance would be better and prices would be lower.
So better pray TSMC knows what it's doing with 28 nm. :rolleyes:
Look at "apples to apples"
http://www.hardocp.com/images/articl...ST7GBp_4_7.gif
Tried to look at the other games of that review?
Not bad cards, but it looks like Nvidia really snuck one up on ATI and us with the new 500 series. Had they not been released (which I think we can safely say wasn't expected), ATI would have been sitting pretty. As it stands, performance-wise the 500 series is better and runs cooler with less power. WTF, how did the roles reverse in the space of 1 month???
The performance is all over the place.
It performs much better than 5870 in some titles; in some titles it's barely faster. Why? Most likely drivers.
Some of the numbers are quite promising.
But yeah, I agree, quite a raw product so far.
Good cards: they perform well, have nice features (dual BIOS, PowerTune, MLAA), aren't very hot or loud, and have a good price.
But I think almost everybody expected more; it really looks like AMD didn't expect the GTX 580 and GTX 570.
People got too hyped over these cards, and now they look disappointing. But they aren't bad cards in any way; AMD did nice work. :yepp:
Now let's hope newer drivers bring at least a 5~7% improvement and fix some games where these cards perform strangely slowly.
Oh, and the HD 6950 in Crossfire looks like a very good deal.
It would be very stupid to buy a card ASSUMING that it's going to get much better with drivers. For one thing, remember the X2900XT. Second, if you're relying on drivers because you think the wildly varying performance means "lack of driver compatibility," you have to take into account that there have been a lot of architectural changes in Cayman, which will favor some games and dislike others. Much like Fermi: its performance varied a lot from game to game compared to other cards.
see comment Nintendork..
And I was wrong about Anandtech; they either changed the text or I remembered it wrong.
Although I'm bothered that Anandtech shows all the cards and Crossfire setups at 2560x... while for 16xx resolutions they show only the single cards. If you look closely, at 2560 the 6970 sits between the 570 and 580 and the 6950 is consistently around the 570; at 1920 the 570/580 take the advantage; and at 16xx the difference is completely in Nvidia's favour.
HardOCP tested the best image quality settings possible while still playable, and also did apples-to-apples runs, which were always at high resolution and quality settings.
Anandtech showed that only in the first graph; afterwards they lower the settings and quality and show single card vs. single card, making the clearest chart about performance also the bottom one, the one with the lowest quality settings (settings you won't use when you buy a card of >300 euro).
I'm not sure if HardOCP is trustworthy.
I mean, TechPowerUp did use Catalyst 10.11, and the results were quite different: the GTX 570 was a clear winner against the HD 6950 in the TechPowerUp review.
Correction
Anand used 10.11
AMD Catalyst 8.79.6.2RC2 = 10.11 RC2
http://www.anandtech.com/show/4061/a...eon-hd-6950/12
HardOCP is the only website showing such results out of all the websites, and HardOCP has never been the most credible website.
IMHO these new cards are a
http://diamondgirl55.mlblogs.com/DISAPPOINTMENT.jpg
I already corrected my statement; see my previous comment, two posts before yours. And no, Anandtech shows the same thing; again, read my previous comment. The difference is an Nvidia advantage at low resolutions, but if you turn them up (see the first graph, for Anand readers), after some searching in their graphs you will see 6970 > 570 and 6950 ~ 570.
If I could sum up the 6970 in one word it would be "disappointing". And I do not understand the marketing logic of producing what is for the moment the "top" AMD card when it can only perform around the level of Nvidia's number two card, the GTX 570. OK, so it is priced at a similar level to the GTX 570. But this only confirms the reality, that GTX 580 levels of performance, from a single GPU at least, are no longer within AMD's capabilities.
Those games share similarities with the VLIW5/VLIW4 thing in 3DMark: those games actually don't seem to address the total capabilities of the GPU.
For example, Barts vs. Cypress is a very consistent comparison without surprises.
Catalyst 11.1 is supposed to be made from scratch, right?
I just figured 3870 to 4870...big jump...4870 to 5870...big jump...5870 to 6870...wait, huh?
/reboot
5870 to 6970...wait, huh?
So performance's less than what we'd all come to expect from a Radeon. But price doesn't line up either.
HardOCP (Kyle the bar bouncer) has a no-bull:banana::banana::banana::banana: approach.
If he approves, you know it's good stuff.
Not many people play at 2560x1600 which makes the comparison at that resolution useless for most people. But let us calculate the average difference between HD6970 and GTX 570 from anandtech review.
Game / HD 6970 / GTX 570 / Advantage over GTX 570
Crysis / 36.6 fps / 32.6 fps / 12.2%
Battleforge / 48.2 fps / 54.5 fps / -13%
Metro / 25.5 fps / 23 fps / 10%
HAWX / 92 fps / 104 fps / -13%
Civilization / 34.5 fps / 45 fps / -24%
BC2 / 47.8 fps / 45 fps / 6%
STALKER / 39 fps / 34.3 fps / 13%
DIRT2 / 56.8 fps / 64.2 fps / -13%
ME2 / 56.1 fps / 55.9 fps / 0%
Wolf / 79.4 fps / 66 fps / 20%
Total = (12.2 - 13 + 10 - 13 - 24 + 6 + 13 - 13 + 0 + 20)/10 = -0.18%
If my calculations are accurate, that means the HD 6970 is 0.18% slower than the GTX 570 at 2560x1600.
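For anyone who wants to redo this average, here is a quick Python sanity check of the arithmetic above. The fps pairs are the numbers quoted from the Anandtech review in this post. One caveat worth noting: if you use the GTX 570 as the baseline for every single game (instead of mixing baselines), the mean comes out slightly positive rather than negative, which shows how sensitive an arithmetic mean of ratios is to which card you divide by.

```python
# Sanity check of the per-game averaging above. The fps pairs are the
# (HD 6970, GTX 570) numbers quoted from the Anandtech review in this post.
fps = {
    "Crysis":       (36.6, 32.6),
    "Battleforge":  (48.2, 54.5),
    "Metro":        (25.5, 23.0),
    "HAWX":         (92.0, 104.0),
    "Civilization": (34.5, 45.0),
    "BC2":          (47.8, 45.0),
    "STALKER":      (39.0, 34.3),
    "DIRT2":        (56.8, 64.2),
    "ME2":          (56.1, 55.9),
    "Wolf":         (79.4, 66.0),
}

# Advantage of the 6970 over the 570, always using the 570 as the baseline.
advantage = {g: (hd - gtx) / gtx * 100 for g, (hd, gtx) in fps.items()}

for game, pct in advantage.items():
    print(f"{game:<13} {pct:+6.1f}%")

mean = sum(advantage.values()) / len(advantage)
print(f"mean advantage: {mean:+.2f}%")
```

With this consistent baseline the mean lands around +0.6%, i.e. still a dead heat, which matches the post's conclusion that the two cards are essentially tied at this resolution.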
Hardware Canucks did a good one once again.
Good job skymtl.
http://images.hardwarecanucks.com/im...BGreen/DGV.gif
HD 6950 2GB
totally agree with the conclusion btw!!
now on to remove those ccc oc limit :D
Thanks for proving my point!
I didn't have time to go through them.
The higher the settings, the closer the gap, and the closer the 6970 gets to the 580.
Now add in some tests at higher quality settings (higher AA, since only 4x was used) and those results will come closer.
Tests which can go higher because there is still framerate and settings headroom: Battleforge (-13%) and Civilization (-24%); bring those to 8x AA and the gap gets closer (just to name two).
So add those higher quality settings and magically the 6970 becomes faster overall.
2560x might not be used very often, but it is an indication for the future. At 16xx, the most used, those cards can reach playable framerates and you won't notice the difference with all settings enabled (which was also not tested).
edit:
Maybe. As it stands currently you might be right, but I don't think either of them is a clear winner. I believe the 6950 is the best choice of any card at the moment, and the GTX 560 will need to be really close to the 570 if it wants to compete with that product.
Quote:
The 6970 needs to come down in price in my country to fight with the 570.
AMD dropped the ball on this one; the Radeon 6970 can't even beat the GeForce GTX 570 on pure performance, and not even in performance per watt: the Nvidia card actually uses less power!
http://images.hardwarecanucks.com/im.../HD6900-82.jpg
Crossfire > SLI for the 69xx. That and the 6950 are the 2 major wins imo.
Where does this 30% come from? According to the TPU review, the 6970 is 13.6% faster than the 5870 at 1920x1200. Die size increased ~16% at the same time. Epic indeed.
As for power consumption, HWC is afaik the only site that puts the 6970's power consumption over the GTX 570's. IMO it's the test method making the 570 look better. There are several in-game power consumption tests done by other sites that give a better picture.
CrossFire is the only bright spot, the scaling is simply superb now, clearly better than SLI. Those wanting to try eyefinity (like me) should be in for a treat with a couple of these. :yepp:
wtf?
I was going for the 6950 because in most reviews it seemed to have the same power consumption as the 6870; now I see tests where it draws 50W more :confused:
Perhaps look at a more consistent source, Anandtech for example, and you will see that the 570 actually consumes more. So NO, AMD didn't drop the ball this time. Sure, it's not a major improvement over the 58xx series (I also expected a bit more from the increased SP count), but neither is the 5xx series from NV: they just fixed what they screwed up before. AMD has set the design direction for future generations; expect more from the next ones. NV has had the fastest single-GPU card for many generations (comparing generations against each other, not just a small glimpse in time due to design issues), so I don't understand all the fuss about AMD taking the single-card performance crown... that stupid hyping.
http://www.anandtech.com/show/4061/a...eon-hd-6950/24
No, see the Anandtech chart.
This is a power consumption chart, not an fps one (message to some people).
http://images.anandtech.com/graphs/graph4061/34663.png
You wouldn't expect it to consume the same as a 6870, being a faster card (with twice the memory), but PowerTune is your friend; a pretty cool feature.
Is my Corsair VX450 enough for an E8400 @ 4GHz on P35 and a 6950? (It's fine on a 4890 atm.)
The 4890 actually consumes more, so you should be OK, unless you want to run a useless FurMark bench with PowerTune savings off.
Well, GTX570 has also lost 8W from its own review.
http://images.hardwarecanucks.com/im...GTX-570-88.jpg
What's going on?
Yeah, wtf is up with the Hardware Canucks power draw test... Oo
To me it seems AMD did the most they could with what they had. The load power consumption is quite high, albeit alleviated by very low idle power and the ability to manage load consumption with PowerTune. If the chip consumes this much at only 390mm2, perhaps AMD was limited by the architecture in how large they could go, and didn't aim too low like I first assumed.
The cards do seem well balanced overall; nothing is really bad, just underwhelming, and the 2GB buffer does make them look more appealing for some users.
Maybe the same question for duploxxx, but aren't those Anandtech CF power numbers strange?
full system power
1x 6950 --> 292W
2x 6950 --> 472W
difference = 180W
1x 6970 --> 340W
2x 6970 --> 564W
difference = 224W
So are these numbers without PowerTune?
Also, that would imply that the rest of the system only consumes 292 - 180 = 112W,
but that would mean the 5770 consumes 131W (243 - 112).
I know there can be irregularities, like being CPU limited or other things, but those would make the gap bigger instead of smaller (if, for example, the second card is only used 80% due to a limitation somewhere else, the actual card consumption would be even higher).
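The subtraction above can be laid out explicitly. A small sketch, using the full-system wattages quoted from Anandtech in this post, of how the per-card and rest-of-system figures fall out:

```python
# Back-of-envelope check of the CF power numbers discussed above.
# All inputs are full-system wattages as quoted from Anandtech.
single_6950, dual_6950 = 292, 472
single_6970, dual_6970 = 340, 564
single_5770 = 243

# Adding a second card isolates (roughly) one card's load draw.
card_6950 = dual_6950 - single_6950   # -> 180
card_6970 = dual_6970 - single_6970   # -> 224

# Rest-of-system draw implied by the 6950 numbers...
baseline = single_6950 - card_6950    # -> 112

# ...and the 5770 draw that baseline would imply.
card_5770 = single_5770 - baseline    # -> 131

print(card_6950, card_6970, baseline, card_5770)
```

As the post argues, a 131W 5770 and a 180W+ 6950 sitting on the same 112W system baseline looks inconsistent, which is why the numbers raise eyebrows.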
Basically, we had to redo ALL of the tests because the Corsair HX1000 I was using died. It was replaced with the same product. Since overall efficiency changes from one sample to another it was necessary to redo them.
Most of the cards stayed the same. However, one or two changed by a bit (usually less than 2%). You will also see this when we revisit the HD 6850 and GTX 460 in the near future as both cards had their consumption trend downwards a bit as well. :up:
Well written! What is to be expected with fewer shaders and the same node? I like some of the new things, like the power-saving feature :up:
The 69xx is quite good, and even better in Crossfire again.
People were disappointed by the 68xx but missed that they scale well in CF with fewer shaders. I think 6850/6950 CF gives amazing value/perf.
It seems that Nvidia fans care about being No.1 while AMD is the humble, silent player, selling price-effective cards to the masses, while most Nvidia fans can only afford a 460 and spend their time praising the ultimate fastest card :rofl:
The 6990 will come and rock their world like the 5970 does :D
Regarding efficiency (or lack thereof), I just want to make it clear that we do everything possible to exclude outside variables from our power consumption test.
The power supply is plugged in to a Tripp-Lite line conditioner that regulates input voltage to a constant 120V. If this isn't done, input voltage WILL impact the consumption of the system, and let's be honest: no one likely lives in a house with constant line voltage.
In addition, the 3DMark test we use puts a very minimal load on the CPU. This is important since the CPU's fluctuating load patterns in most of the apps out there can really mess up results.
I am also surprised to see quite a few sites still using FurMark, etc to test power consumption. AMD admits to that program being capped by PowerTune.
Found something interesting...
So the 5xxx series worked incorrectly with wide-gamut monitors, and now it's fixed? Or does it only apply to monitors connected via HDMI, while those connected via DVI have no gamut issues with either card?
Quote:
Originally Posted by Anandtech
Most people seem to be dreaming in technicolor here. If you think that AMD released drivers to reviewers that wouldn't feature the full performance, you are sadly mistaken. These hopes and dreams seem to happen with EVERY AMD launch and they NEVER come to pass.
The drivers given to the press (both 10.11 and 10.12 RC2) are "full speed" drivers, make no mistake about that.
Having used both, I can tell you that they are identical in performance other than a small increase in 3DMark 11 scores and a patch that fixes some random crashes I was having in F1 2010.
For all of you harping on the drivers used, I think that is in poor form simply because there won't be one lick of difference between the two packages in terms of overall single GPU in-game performance.
I want to address this right away because placing the blame on TSMC shouldn't be done once you understand the situation.
As I said in my article, AMD had already begun taping out their cards on 32nm when it was realized that the power and space savings were minimal while the pricing was too high. This meant AMD decided to use the existing 40nm process for their entry and mid-level cards. That left Ibiza at 32nm, but since the volume from the lower-end cards would no longer be there, TSMC decided to cancel the 32nm node altogether. Ibiza then needed to be ported to 40nm and eventually became Cayman.
So, it wasn't TSMC's fault but rather a combination of economic decisions by both parties which sunk 32nm.
I tried. :up:
Nope. PowerTune only caps when the inferred TDP reaches a certain point. If a program doesn't allow the core to reach that point, PT won't kick in.
It is entirely possible that the program I use pushes TDP closer to the PT barrier than what Anand and some other sites use.
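To illustrate the behaviour described above, here is a toy model of a PowerTune-style cap. This is purely an illustrative sketch of the idea (estimate TDP, throttle the clock only once the estimate crosses the cap), not AMD's actual algorithm; the 200W and 800MHz figures are just the HD 6950 numbers mentioned in this thread.

```python
# Toy model of a PowerTune-style power cap (an illustrative sketch, NOT
# AMD's real implementation). The core runs at full clock until the
# estimated TDP crosses the cap, then scales the clock just enough to fit.
POWERTUNE_CAP_W = 200.0   # HD 6950 PowerTune maximum, per the posts above
BASE_CLOCK_MHZ = 800.0    # HD 6950 reference core clock

def effective_clock(estimated_tdp_w: float) -> float:
    """Return the core clock after the cap is applied."""
    if estimated_tdp_w <= POWERTUNE_CAP_W:
        return BASE_CLOCK_MHZ   # below the cap: PT never kicks in
    # Over the cap: scale the clock proportionally to stay at the limit.
    return BASE_CLOCK_MHZ * POWERTUNE_CAP_W / estimated_tdp_w

print(effective_clock(140.0))   # typical gaming load: full 800 MHz
print(effective_clock(240.0))   # FurMark-style load: throttled to ~667 MHz
```

This is why a 3DMark-style load that stays under the cap measures the card's real draw, while FurMark gets clamped and tells you more about the cap than about the card.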
Seriously, did no one READ what I wrote? The only reason these cards are disappointing is because you CREATED all the hype, with rampant, pointless speculation, and expected a giant-killer, which is NOT what AMD has been trying to make. They wanted Cayman at 28nm, TSMC let them down, and they had to re-configure for 40nm. What about that DON'T you get? Sometimes I swear you people ask for too much.
Wooww.... Cayman... all the way...
Two Caymans... oh My God....
Two Caymans All the way.... So beautiful....So Powerful.....
wWwooooohhhoo..hoo..hooo.....
O.. my godd... Oh.. my goood..... Whoooooo.......
What does this mean? OOoooohhh.... Ooooohhhhh......
Sooo Powerful..... so beautifullllll..... Like double Rainbowww....
Oooooohhhh..... OOoohhhhhhh....... oooh.. my gooddd.....
Two Caymans.... oooohhh...... ooooooohhh.....
/JK
:D
Performance wise, cayman is a Failure.
AMD is getting HATTONed this round.
Holy :banana::banana::banana::banana:. What were you guys expecting AMD to do while stuck on 40nm? They clearly weren't going to create a 300W monster, as that would go against what they've been doing since the HD3870. The HD6970 looks to me like a test of the architecture while they wait for the 28nm node to mature for mass production. On top of everything, it isn't like the HD6970 is a GTX 480. It is a minor improvement on the HD5870, and it paves the way for a killer 28nm core.
Also, it is slightly faster than the GTX 570, with a little over half a gig more memory, and priced slightly higher in what I feel is the proper performance/price slot. What do you want, AMD to bleed money so you can get your high-end GPUs for $200?
I'm so disappointed in you guys, not AMD. AMD kept their lips sealed, you guys worked yourselves into a frenzy, and then turn on AMD when they don't deliver what you had hyped yourselves into expecting.
Just so we are ready for the HD6990: "omg, it comes with 2560 SPs, 128 TMUs, two men/women (depending on the bundle you purchase) to wait on you hand and foot, and it consumes negative 400 watts!"
I will agree though that this naming scheme is horrible. What a trainwreck that is.
That is what I'm saying. But your results indicate a 150W difference between idle and load for the 6950, which has a power cap of 140W. Add in the idle draw and you are getting close to 200W for the 6950, which is the +20% on PowerTune. So either you set it at +20%, or it is not kicking in when it should be.
Or the CPU is loaded more than you expect. But that won't change the strange difference between the 5850 and 6950. If the 6950 were capped at 140W, that would mean the 5850 uses 90W during 3DMark, which seems a bit optimistic.
Note that most numbers look correct when you compare idle vs. load for the same card. I do however see something unexpected for the 6950 and the 58xx-68xx: the difference is too big while their power consumption should be in the same range.
6950 Crossfire beats 580 SLI at 5760x1080, so I'm not sure why people think Nvidia is faster.
http://www.hardwareheaven.com/review...ty-vs-sli.html
So for me, playing at that resolution, I save half the cost choosing AMD 6950s.
Two AMD cards for the price of one GTX 580, if they were even in stock. Isn't it cost-ineffective to buy two Nvidia cards for more money and worse performance?
Twice the failure, wouldn't you think?
A good review from Xbitlabs:
http://www.xbitlabs.com/articles/vid...70-hd6950.html
Impressive HD6970 vs HD6950 same clock
http://www.xbitlabs.com/images/video...s695aa_big.png
vs GTX570:
http://www.xbitlabs.com/images/video...s470aa_big.png
Power consumption:
http://www.xbitlabs.com/images/video...agr_pw_xbt.png
others power consumption test (HD6970 = GTX570):
http://www.computerbase.de/artikel/g...stungsaufnahme
http://www.hardware.fr/articles/813-...ossfire-x.html
Battlefield: Bad Company 2 @ 1920x1200, a 6 fps advantage over a 5870? You gotta :banana::banana::banana::banana:ing be kidding me.
I am really disappointed.
And that's only playing F1; we already knew the 6970 does very well in F1. Now try playing more games.
According to Anandtech, the 6970 goes head to head with the GTX 480. I think this is pretty impressive; I don't know why some of you folks in here find this card to under-achieve. This card was intended to compete against the 480, and so it does. And remember, in CF mode these cards are very powerful. What did you expect to see from these cards anyway? They are still on 40nm technology; blame the semis for the delay, of course.
// Just woke up... All reviews added from the topic and PMs. Special thanks to the PM guys, makes adding easier :P
Also started adding video reviews for those who hate reading a bazillion pages... like meh xD
Considering the 6950 and 6970 have the same die, the same tessellation, and only minor differences (fewer shader clusters), I don't understand your comment :-)
But a 6950 run at 870MHz with a slightly higher memory clock than a 6970 (so a little less core clock, a bit more memory bandwidth, 10% fewer shaders, etc.) showing a performance difference of only 1-2% yet a difference of 80W on average (between the 6970 and the OCed 6950) makes me wonder what is wrong.
If that is the case, then yes.
Quote:
You have your numbers wrong.
HD 6950 PowerTune Max = 200W
"Typical" Gaming Power (whatever that is...) = 140W
So yeah, I AM getting close to 200W which is the upper limit of power tune and basically justifies the methodology.
Looks like an awesome card. A few fps here and there are meaningless, especially since AMD targeted these price points directly.
Pretty good engineering IMO. Also, PowerTune looks like killer technology. Imagine the implications for mobile (a la Fusion, Mobility 6900) and Antilles. Like Anandtech says, 'this is just the beginning for Cayman.'
They didn't. There were last minute price drops. GTX 570/580 surprised them quite a bit.
Ah. Well, I am not sure what reviews you have been reading, but here is the rundown.
Basically, the move from VLIW5 to VLIW4 did save ALU space on the die, but AMD didn't just use this to shrink the core: they used the "saved" space and expanded upon it by adding additional SIMD engines. The engines, with their associated TMUs, cache, etc., naturally take up more die area than the reduction to VLIW4 freed up.
The graphics engine itself (which contains the fixed-function units like the tessellator) was cloned, so instead of one there are now two. This also adds to the transistor count.
Finally, there are the GPGPU compute changes that necessitated an additional direct memory access engine, along with some other bits.
All of this added to the transistor count and thus increased the overall TDP.
That math doesn't work for the HD5850 vs. HD5870 either, from 1 year ago :)
http://www.xbitlabs.com/images/video...s587aa_big.png
3% for 320-5D vs 288-5D.
And power consumption:
http://www.xbitlabs.com/images/video...power(xbt).png