ccc 10.12a
http://support.amd.com/us/kbarticles...12ahotfix.aspx
support for 5x1 eyefinity and cayman
For Eyefinity plus CFX and tri-CFX (vs SLI), Hardware Heaven has some results (for what it's worth).
It's very good at both. The AA performance hit is smaller than for the GTX 580. DX11 games perform really well (it's much closer to Nvidia's offerings in DX11 games than in DX10 games, for whatever reason). Tessellation performance at "light" and "normal" settings is good compared to Nvidia's cards; it lags behind when tessellation is set to "extreme", but that's only used in benchmarks right now.
I should elaborate: performance and functionality in general weren't a concern for me, as I'm sure XF works in EF like it did on 5K cards. I'm only interested in whether the frame-timing issue that was so bad on 5Ks in EF mode is gone. To put it another way: if a user ran two 5850s in CrossFire to play Bad Company 2 using Eyefinity, their framerate might climb from 30 fps to 70 fps, yet it was jerkier and didn't even feel as smooth as playing with one card. This is an issue that FPS numbers don't tell you about. I need someone who has sat down to play to say whether they've had this on 6K hardware, but their information only counts if they were able to notice it on 5K cards in the first place. Everyone has a different threshold for noticing it, just like with refresh rates on CRTs.
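To make the point concrete, here is a minimal sketch with made-up frame times: two runs can report the same average FPS while one of them alternates short and long frames, which is exactly the jerkiness an FPS counter hides.
Code:
# Two hypothetical runs, both averaging ~70 fps, in milliseconds per frame.
smooth = [14.3] * 10          # even pacing
jerky = [7.0, 21.6] * 5       # AFR-style alternating frame times

def stats(frame_times_ms):
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    worst = max(frame_times_ms)  # perceived smoothness tracks the long frames
    return avg_fps, worst

for name, run in (("smooth", smooth), ("jerky", jerky)):
    fps, worst = stats(run)
    print(f"{name}: {fps:.0f} avg fps, worst frame {worst:.1f} ms")
Both runs print ~70 avg fps, but the second spends half its time on 21.6 ms frames, which is what you feel as microstutter.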
I think AMD did a good job with Cayman and Barts and put them in a position to really hurt Nvidia... I'm talking about price...
Performance-wise, AMD can slap two 6870s together, call it the 6990, and it's game over for the GTX 580 (we know 6870 CrossFire consumes less at idle and under load than the 580), or slap two 6950s together and PowerTune them... or pair up something between the 6870 and 6950 (between 1120 SPs and 1408 SPs)... any of those three options is game over for Nvidia...
That's been AMD's strategy since the 4000 series... the top-of-the-range card is always a dual-GPU... ;)
Nvidia did a good job with the 500 series and caught AMD off guard...but AMD has the solution for it...;)
Either way it's good for us consumers: better prices from both sides :)
What's the per-SP performance efficiency between Barts and Cayman? Has anyone done that math yet?
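Here's the back-of-the-envelope version (SP counts are the published specs; the relative-performance number is a placeholder to swap for real benchmark results, and note that VLIW4 and VLIW5 count their lanes differently):
Code:
# Per-SP efficiency = relative performance / relative SP count.
barts_sp, cayman_sp = 1120, 1536   # 6870 (Barts XT) vs 6970 (Cayman XT)
rel_perf = 1.20                    # hypothetical: 6970 ~20% faster than 6870
sp_ratio = cayman_sp / barts_sp    # ~1.37x the shaders
per_sp_efficiency = rel_perf / sp_ratio
print(f"{sp_ratio:.2f}x SPs for {rel_perf:.2f}x perf "
      f"-> {per_sp_efficiency:.2f}x per-SP efficiency vs Barts")
With those placeholder numbers Cayman would land below 1.0x per SP, but the real answer depends entirely on which benchmark figure you plug in.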
Well, I just installed it for the sake of testing; it's 8.79.6.2.
But 10.12 WHQL is newer on all levels: 3D, 2D, D3D, OpenGL, etc.
Trying to run 3DMark 11 to see the difference, but I keep getting a black screen on the first physics test after it loads :shakes: The graphics tests run fine...
Hmm, what GPU are you using? And check your OpenCL + DX11 drivers, it could come from that. (The physics test uses Bullet (OpenCL) + DirectCompute 5.0 (DX11) for rigid bodies.)
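If you want to sanity-check the OpenCL side from code, a quick sketch using pyopencl (assuming it's installed; any OpenCL info tool will show the same thing):
Code:
# List OpenCL platforms/devices to confirm the driver exposes the GPU.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        print(f"  Device: {device.name}, {device.version}")
If your Radeon doesn't show up here, the physics test failing makes sense and reinstalling the APP/Stream runtime is the first thing to try.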
Yeah, it's not really about AMD supporting CUDA. Why would they? It's their biggest competitor's tech, and it's not like it's a standard.
The problem is AMD not being aggressive enough and lacking the developer support that Nvidia has. I'm pretty sure Nvidia just threw Adobe a lot of money early in the dev cycle of the CS5 suite and made it so GPU acceleration only worked with CUDA-based cards. Yeah, that's kind of wrong if you ask me, but I also think AMD dropped the ball. Did they even attempt to reach out to Adobe?
And yes, ATI cards DO work, just not with Adobe's Mercury Engine, which powers Premiere Pro CS5 (as in, they don't accelerate or enhance it). Photoshop and Illustrator both recognize my ATI Mobility 4000-series card and let me turn on GPU-accelerated rendering. So when I apply or render any major effects or filters, or when I zoom in and out or scroll through a large image, it's all handled on the GPU. It's actually nice and smooth. I just wish it worked in Premiere :shakes:
Well I think I remember a long long time ago in a galaxy far far away Crossfire had poor scaling...
That memory has become an old fiction story.
Scaling has been pretty good for a while (50%+ often), but 6Ks kick it up a notch. It isn't the scaling I'm worried about.
You should already have it installed then; try googling "AMD OpenCL SDK" and you'll land on their official Stream/APP OpenCL site (AMD Developer). Grab the latest version and see what that gives you... Also check whether reinstalling 3DMark changes anything.
But we're drifting a little off topic there. I just hope you get it fixed.
//updated.... Tnx Gaul for the XS review :)
There isn't a big hurry to get the 6990 out since they still hold the performance crown. It doesn't look like there will be any confusion over who owns the top end once it does debut, though. I doubt NV will be able to compete with it in perf/dollar.
As far as hardware goes, Cayman should be a pretty good core for a multi GPU solution thanks to PowerTune (mobile also).
/ I could definitely be wrong!
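As for what PowerTune buys a dual-GPU board, here's a toy model of the idea as publicly described, not AMD's actual algorithm; the budget and clock numbers are illustrative only:
Code:
# Toy PowerTune-style TDP limiter: scale the core clock so the
# estimated board power stays within a fixed budget.
TDP_BUDGET_W = 250.0
BASE_CLOCK_MHZ = 880.0  # 6970 reference clock

def next_clock(estimated_power_w, current_clock_mhz):
    if estimated_power_w <= TDP_BUDGET_W:
        # headroom: ramp back toward the full clock
        return min(BASE_CLOCK_MHZ, current_clock_mhz * 1.02)
    # over budget: cut clocks in proportion to the overshoot
    return current_clock_mhz * (TDP_BUDGET_W / estimated_power_w)

clock = BASE_CLOCK_MHZ
for power in (210, 240, 275, 290, 260, 230):  # per-interval power estimates
    clock = next_clock(power, clock)
    print(f"power {power} W -> clock {clock:.0f} MHz")
The point for Antilles is that two dies can share one board power budget instead of needing worst-case cooling and VRMs.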
[edit]
Here's a pretty important review: http://www.widescreengamingforum.com...eatured_Review
With the 6850 being $155 AR http://hardforum.com/showthread.php?t=1569476
I really expect 6900 prices to come down by at least $50.
@dezmen, thanks for the great thread, bookmarked for future reference :D Bet you're so tired, huh? Stock up on some Monster cans haha..
Are the ASUS Voltage Tweak cards supplied with a way to control voltage, and how wide is their supported voltage range?
Based on the CrossFire reviews in the OP, it looks to me like Cayman CrossFire performance is great.
AMD needs to work on drivers for single cards and they might have a winner there too (based on current prices).
I'm waiting for the HD 6990; the great CrossFire scaling is good news for me.
Quote from the Guru3D Radeon HD 6950 CrossfireX review, "Final Words & Conclusion":
Quote:
I have to admit I was a little surprised by the excellent performance scaling we see on the R6950. Where the single card is a little so-so to position, we can surely recommend going the CrossfireX route. Granted, two cards will set you back a chunk of money at roughly 550 EUR, but the performance gained here is truly amazing.
Today's article will not only show you the CrossfireX performance of these two cards, but will also roughly reveal what AMD's to-be-released product under the codename "Antilles" will bring to the market in terms of performance. Antilles will replace the Radeon HD 5970, a dual-GPU solution merged into one graphics card.
And we have to tell you, where we are a little puzzled about the Radeon HD 6950 all by itself, in CrossfireX this solution seems to kick ass massively.
@heinz69 Can't wait for Antilles, really can't!! Brings back memories of when AMD brought out the 5970 and took the crown; looks like AMD is going to do it again!! I hope this time they get the drivers right, though, because when I was rocking CF 5970s there was so much microstuttering it wasn't funny.. So this time, please get the drivers right for a dual-GPU card.
Seems 10.12a improved 3DMark11 performance for me.
10.12 (8.801) - http://3dmark.com/3dm11/197922
10.12a (8.790.6.2) - http://3dmark.com/3dm11/202116
GPU score: 4938 vs 5137, about a 4% gain.
It looks like Cayman still can't drive more than two of its DVI and HDMI ports at the same time. The card I bought didn't include a mini-DP to full-size DP adapter, either.
ATI Tray Tools seems to cause a BSOD when it loads with Cayman installed. Might want to be careful, guys.
Also, what's the deal with CCC? It looks identical to the CCC I'm used to.
Did you download this?
AMD Catalyst 10.12 Preview for Windows 7
https://a248.e.akamai.net/f/674/9206..._Win7_Dec7.exe
Microstuttering is not a problem with the right system setup and game settings. It's an old story, and it was mainly a problem with older CrossFire and SLI cards.
I had the HD 4870 X2 with no problem, which is the reason I bought the HD 5970, and now I'm waiting for the HD 6990.
I'm surely not an idiot who would punish myself with microstuttering if that were the case.
The microstuttering problem is an old tale mostly repeated by people who never used CrossFire or SLI with the newer card editions, or never had any CrossFire or SLI, period.
I can safely bet that most people using SLI or CrossFire with the right setup have no such problem. I can also safely bet that the minute Nvidia comes out with a dual-GPU card, the problem will go away for some people.
Some people just have a HUGE problem reading anything positive in this AMD Cayman doom-and-gloom review thread. Besides, I wasn't even posting my own experience.
Go read again, here or in the other CrossFire review in the OP.
Can't speak for the 5970, but I had a GeForce 295 (still somewhere in a closet) and my buddies had 4870 X2s; both had microstuttering in some games. Mind you, many people had no idea they had it until it was pointed out to them; ignorance is bliss, as they say.
Look, I agree with you. I am on the "microstutter does exist, but not everyone sees it; it's a mountain out of a molehill" side. That's why I explicitly said:
Quote:
Unless they solve the AFR rendering framerate inconsistency, it's going to be the same microstutter experience for you, most likely.
No long explanations needed for me. I can't speak for others' experiences, but it's not a problem for me and never has been. Still, I believe people when they say they experience it, as long as they know the difference between microstutter and regular stutter.
There have been huge improvements since the HD 3870, and the same goes for Nvidia compared to some of the older dual-GPU sandwich cards.
The AMD dual-GPU cards have the same amount of GDDR5 per GPU as the single card, so how can that be a problem?
The only way to get a top-performance graphics setup is SLI or CrossFire; I prefer to have it on a single card.
Agree with you, heinz, but I actually owned 2x 5970s, both watercooled; microstutter only happened when I crossfired these suckers at a resolution of 5760x1200.. But yeah, too many people rant when they've never even experienced microstutter lol
Microstutter is non-existent with 2 GB of memory.. I haven't noticed anything yet..
This review confirms it:
http://benchmarkextreme.com/Articles...%202GB/P1.html
It also shows 5870s with 2 GB are closer to 69x0 performance at high res.
I'm seeing people return their GTX 580 and then upgrade to HD 6950 CF for $50 more.
I can certainly appreciate your opinion, but I encountered it, and it wasn't on old hardware. I ended up selling my second Radeon 5850 because it didn't help at all. Framerates were up, but so was the jerkiness, effectively making competitive games harder to play instead of easier.
I think the key to making Cayman faster is focusing drivers on the 6970 side of things.
From what I remember about the earlier series, a lot of the big driver gains happened when they stopped designing drivers for the X1950 XT and below. Cayman is different, so driver work that stays focused on utilizing 5-way shaders is going to hold them back. Having two different sets of drivers, for both companies, would I think help both of them at this point.
Gigabyte GTX 570 vs. Sapphire 6970 and 6950 vs. Radeon 5870 - All Overclocked
http://www.hardwareheaven.com/review...roduction.html
HD 6970 CrossFireX Review
http://www.hexus.net/content/item.php?item=28056
http://www.tweaktown.com/articles/37...ire/index.html
Still seems rather underwhelming. Guess I'm sticking with my 5870 and just skipping this generation. Maybe there will be a refresh, though.
Same here; the 5970 will suffice until we see first- or second-gen 28nm. The current process is right at the top of the cost/performance curve, where further improvements get smaller and more expensive in R&D. Bring on 28nm.
Those CFX numbers are interesting... basically, at 2560x1600 the 6970 and 580 are neck and neck when dual GPUs are used, with the 6970 slightly lower in min FPS. Antilles should be a beast, but going 2x 6970 might be worth it budget-wise if you game at that resolution.
It would be stupid if there were no architectural changes, but that's the X factor. If you're asking why performance is nearly the same as the older generation, that's the only logical explanation besides a serious engineering failure, which I doubt here based on the theoretical figures. The same arguments were thrown around when the 5870 was released, and people chuckled that it didn't beat the 4870 X2 or the GTX 295; look at where it is now...
As for the 480, it hasn't gained as much as the 5870 has over its lifetime (granted, the 480 has been around six fewer months), and the 570/580 are based on the 480, so who knows where the 570/580 will head.
The 5870 was based on the 4870, which was based on the 3870, which was based on the 2900 XT. Yet the 5870 was still able to see significant performance increases, meaning the 570 and 580 can too. The 6970 is more or less equal to the GTX 570, and I doubt AMD is really going to "pull away" with drivers.
I think the reason we won't see huge driver increases, as I said earlier, is legacy driver support.
http://www.ngohq.com/news/16670-amd-...eon-cards.html
This occurred in late October of last year. My guess is that by no longer taking pre-2900 XT cards into consideration, they were able to make sacrifices that might be detrimental to the X19xx and earlier series and focus entirely on improvements for R600 and up (basically anything based on a 5-way shader architecture). Some of the biggest driver gains were made during the Radeon 5xxx years, which is surprising considering the drivers should have been mature by then. I think 10.3 was where AMD had a huge jump.
I think we might see 5% at best; a lot of the potential performance is going to be held back by maintaining driver support for anything older than the 69xx. That is, a 5-way shader driver is going to be considerably different from a 4-way shader driver, and supporting both is going to slow down improvements for the new architecture. I suspect this is one of the reasons Fermi's shaders are individually weaker than those in the GTX 28x and below: NV's current drivers target cards based on G80 shader technology and focus on extracting as much performance out of them as possible. It will be difficult to make drivers that make Fermi faster without hurting anything that came before it, I think.
I think if each company focused entirely on making a Fermi-only driver and a 69xx-only driver, we could see huge improvements.
The Fermi arch has been around for 7 months; the GTX 580 and 570 are pretty much rename material for the most part (add a couple of SPs and/or cut off some VRAM, throw a TDP limiter chip on, voilà).
Yeah, right, the 4870 learned to do DX11 all of a sudden, oh wait... The 5870 was a significant redesign of the 4870. Go read Anand's (or any other decent) review before making absurd claims.
Yeah, the 5870 is not as based on the 4870 as the 580 is on the 480, I'll give you that. But the fact remains that it was still VLIW5, and mostly the same as the 4870 except for the tessellator unit and the added SPs. The grouping of stream processors and the processors' capabilities (and the general dependence on the performance of the scheduler, I assume) hadn't changed.
The 5870 by itself is the fourth iteration of the same arch. Obviously it's not going to be ENTIRELY the same. But before saying "it's going to be awesome with drivers" and possibly misleading people, think of the last time AMD introduced a new arch: the 2900 XT. When it was released, everyone was disappointed with the performance, and a lot of people encouraged buyers with "it's going to get better with drivers", whereas it didn't. It got better with iterations.
The 2900 XT was a new arch, as was the GTX 480. Those products both sucked (the 2900 XT sucking more, I might add). The next versions, the HD 3870 and GTX 580, were received 10x better by everyone, and were the products with which they caught up to the competition. There is a similarity.
The GTX 480 got quite a significant performance boost from driver updates. Hopefully AMD will follow suit. :) But no one can really predict the performance increases. The arch is different (which suggests it might need some optimisations), but who knows what code mess there is in the Catalyst drivers; maybe they're nearly perfect already (CFX profiles do need fixing, though).
On a side note, the difference between the 2900 XT and the 69x0 cards is that the latter are already very decent. So one can safely say there will be no similar disaster.
Cypress was terrible in terms of perf/transistor compared to RV770. AMD increased the transistor count by 125% and managed a 60% performance increase. The 4870, on the other hand, was made on the same node as the 3870 (55nm) and increased the transistor count by 43% while increasing performance by around 55%. The only reason the 5870 seemed half decent was the process shrink and Fermi failing. You can also see this from the efficiency increase of the 6870; Cypress had way too many shaders relative to the rest of the GPU.
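Put as a quick calculation (a sketch using the figures above; the transistor counts are the public numbers, while the performance ratios are the estimates just quoted, not fresh measurements):
Code:
# Perf-per-transistor vs the previous chip, using the numbers above.
chips = [
    # (name, transistor_ratio, perf_vs_predecessor)
    ("RV770 vs RV670",   956 / 666,  1.55),
    ("Cypress vs RV770", 2154 / 956, 1.60),
]
for name, tr_ratio, perf_ratio in chips:
    eff = perf_ratio / tr_ratio
    print(f"{name}: {tr_ratio:.2f}x transistors, {perf_ratio:.2f}x perf, "
          f"{eff:.2f}x perf/transistor")
RV770 comes out around 1.08x perf/transistor over RV670, while Cypress drops to roughly 0.71x over RV770, which is the efficiency regression being described.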
According to AMD, they've managed about a 15% improvement in several review games since Cypress launched. That's about as much as you can expect from a year's worth of driver updates. Of course, Cayman being a new arch, it might gain a bit more. Does anyone have numbers on how much Fermi has improved with drivers?
Cayman won't gain much, because it is not shader-limited, so optimizing shaders won't do that much performance-wise. E.g., even if you were able to improve the parallelism by 35%, the actual performance gain could be around 5-10% in most cases, if not even less. And with that number applied to the whole rendered scene, the relative framerate gain would be single-digit for sure. Yay for a 2% performance boost.
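That's essentially Amdahl's law. A minimal sketch of the reasoning, assuming hypothetical shares of shader-limited frame time (the 35% shader gain is from the post above; the fractions are made up):
Code:
# If only a fraction of frame time is shader-limited, speeding up
# shaders gives a much smaller overall gain (Amdahl's law).
def overall_speedup(shader_fraction, shader_speedup):
    # shader_fraction: share of frame time that is shader-limited (0..1)
    # shader_speedup: e.g. 1.35 for a 35% shader throughput gain
    new_time = (1 - shader_fraction) + shader_fraction / shader_speedup
    return 1 / new_time

for frac in (0.2, 0.4, 0.6):
    gain = (overall_speedup(frac, 1.35) - 1) * 100
    print(f"{frac:.0%} shader-limited, 1.35x shaders -> {gain:.1f}% overall")
With 20-40% of frame time shader-limited, a 35% shader improvement only yields roughly 5-12% overall, which matches the single-digit framerate argument.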
6970 CF review by me, here :
http://www.xtremesystems.org/forums/...d.php?t=263746
Yes, we can say the 6970 is a little disappointing, but we also need to look at what we are comparing here.
In Norway (looking at komplett.no webshop):
- the cheapest 580 is 4039 nkr
- the cheapest 6970 is 2995 nkr
So the 580 is about 35% more expensive than the 6970. AMD is better at some things and Nvidia is better at others. But it is not valid to call the 6970 a disappointment because it doesn't totally stomp a card that is 35% more expensive. That is in fact a BIG price difference!
jarle
My opinion is that in the time it will take for the 6900s to mature driver-wise, 28nm will be right around the corner, offering the same perf with half the power and a 40% drop in cost.
Do I want a 6950? Sure do, at $150 and 100 W, please...
It would be interesting to see a Barts chip with the same SP count as the 6900 series; I'd bet it would perform the same. AMD/ATI has a long way to go to optimize their drivers for games, because there doesn't seem to be that much of an increase with the new VLIW4 arch.
Why are you comparing it to the GTX 580?! The 6970 is aimed at the GTX 570, both in performance and in price, as reviews and prices throughout the world have shown.
The sad and ugly truth is that AMD took ~9 months to release a card to go head to head with the GTX 480/GTX 570.
The problem is that the GTX 570 exists. It is slightly cheaper and about the same speed, or a percent or two slower. AMD brings nothing new to the table that forces NV to change its prices. Compared to earlier launches, this chip is a disappointment because AMD loses ground relative to its earlier generation: the cards consume more power than before, and NV has a part with the same speed at the same price.
Barts was a much better launch, as it put NV on the defensive and made them lower the prices of their existing products.
Additionally, for business reasons the 6970 is worse too. They have a product that costs more to make (bigger die, 2x the memory, a more expensive PCB and cooler), and they have to sell it for the same as or less than their earlier generation because the competition is a lot more competitive this time around. Considering how much of a disaster the original GF100 was, this should not have happened. But somehow AMD released a new top-end product that is only 20% faster, which is pretty bad considering they were already slower than the competition. NV did the same, but they reduced power consumption and heat, and they already had the lead in single-chip performance.
If this were a good generation for AMD, they would be able to charge more for Cayman, but they can't.
AMD must be pretty unhappy about having to sell the 6950 for $299 when they are selling Barts for only $60 less, with half the memory (which alone should account for $60), plus a smaller PCB, chip, and cooler.
It is weird how people ignore the existence of the GTX 570 and only compare the 6970's value to the GTX 580 to inflate its value. Don't get me wrong, the 6970 is a good value, but there is already an Nvidia card on the market offering the same kind of value.
The GTX 580 is a bad value compared to the GTX 570 too. As a value proposition the GTX 580 is bad, and pretty much every top product from AMD or NV has been a bad value proposition.
Many of you guys forget that the graphics cards coming out at the end of 2010 were not supposed to be on 40nm, as originally designed by AMD...
Then TSMC said it had problems with the smaller process, and AMD was somehow able to partially redo the current release.
My guess is that they will be making good money on both the 68xx and the 69xx.
So in a way, I guess Barts and Cayman are plan B.
Would that mean, in your eyes, that Nvidia has taken... a long, long time to not release a card to go up against the 5970?
Or do ugly truths only go one way?
As I see it, AMD needed something against the 570; the 6870 does that. Until the 5xx series came along, AMD could happily keep the hounds at bay with the 5870 and the 5970. The 570/580 changed matters.
We're now back in the position we were in with the 48xx series of cards: Nvidia holds the top single GPU, while AMD holds the fastest card and the more efficient (in terms of production) chips.
I'm sure the biggest issue was TSMC and its inability to drop down a node, which AMD, with its smaller dies, can exploit more quickly.
Contrary to popular belief, TSMC didn't have issues with 32nm. It was dropped for economic reasons after AMD decided to move Cozumel and Kauai to 40nm. There were still plans to do Ibiza at 32nm (likely where the 1920 SP rumor came from), but those fell through when TSMC no longer saw a point in pushing a manufacturing process very few companies would pick up.
AMD's lineup would have looked like this on 32nm:
Ibiza
Cozumel
Kauai
Instead we are getting 4 products:
Cayman
Barts
Turks
Caicos
Basically, they are now able to better cover the market with cards using a more mature process while costs are kept to a minimum. Suits me fine. :)
If dual-GPU solutions performed on the same level as single-GPU solutions and didn't suffer from microstuttering and an extreme need for optimized drivers, nobody would buy a high-end single-GPU card. If the performance stability and smoothness were the same, everybody would just buy two 5770s or two GTX 460 SEs and call it a day.
You have it inverted... most people will not buy a high-end card simply because of the price, and they avoid dual-GPU cards and SLI/CFX for the same reason: if they don't want to put $500 into one card, they don't want to buy two at $300 either.
Especially in recent years, mid-range cards have become more and more capable for gaming, even at high resolution. The masses (and I'm talking about gamers) will buy a 5770, a 6850/6870, or a GTX 460 and be done with it.
Most people I know who use CFX or SLI just want more performance than a single high-end GPU can offer...
What does that have to do with what I said..? I wasn't talking about why mainstream gamers buy card X or Y. I was talking about the differences between a high-end single-GPU card and a dual-GPU card, and how comparing the two isn't "fair" because they don't offer the same type of performance; the 5970 suffers from the same issues as all dual-GPU solutions.
What you are doing is invalidating AMD's strategy of pairing efficient chips to match a giant chip from Nvidia. Basically you're saying it is only fair to compare one chip with one chip, no matter how they are designed. Sure, the 5970 has some microstuttering, but most people don't care or don't notice it when purchasing the product. The fairest comparison is between cards in the same price range that fit into one PCIe slot; thus 580 vs 5970 is a fair comparison between similar products. Just because one is big and powerful and the other is a pair of efficient chips doesn't change much of the user experience today; they are merely design choices.
SKYMTL thanks for clearing that up.
Seeing microstuttering badly enough for it to annoy you is like being allergic to seafood: sucks to be you. Dual-GPU scaling is amazing this round for both AMD and Nvidia: 100% scaling in Metro 2033 for AMD, and 97% for Nvidia.
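For reference, "scaling" here is just the extra framerate the second GPU adds over a single card (a trivial sketch; the FPS figures are made up):
Code:
# Scaling % = how much of the second GPU's theoretical doubling you get.
def scaling(fps_single, fps_dual):
    return (fps_dual / fps_single - 1) * 100

print(f"{scaling(40, 80):.0f}% scaling")    # perfect doubling -> 100%
print(f"{scaling(40, 78.8):.0f}% scaling")  # e.g. 97%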
I wish all games scaled like that, but as we all know that isn't the case, especially once we start talking about games that aren't exactly the latest big thing. I still remember Morrowind with MGE chugging along at unacceptable framerates at settings my GTX 280 ran very well. Source ports like Darkplaces and eDuke32 didn't scale at all. These are just a couple of examples.
There's one more thing: volume. People can buy the HD 6900s and actually find them; the GTX 500 series never had a real hard launch, and in Europe many stores have no stock right now. It's December, Christmas season, so to me this looks like a win for the HD 6900.
Nvidia, with their current die size, can't afford the same volume as Cayman.
tajoh111
Then what can we say about the GTX 460, with its huge die size compared to Barts, selling for the same price? Barts GPUs are overpriced, and these days I see some price cuts: both the Sapphire and Gigabyte OC versions sell for the same price as their non-OC ones.
/close thread lol
Same with them too. NV must absolutely hate selling such a big chip at that kind of price. The deals on the GTX 460 are ridiculous nowadays. At least NV had a few solid months of sales at full price, so everything is not a complete bust.
And if the GTX 560 turns out to be GF104 with everything enabled, things might just turn around for them.
Any links about that? I can't Google any, but there are plenty of reports about the TSMC 32nm cancellation. Personally, I believe TSMC cancelled the node because of so many problems with 40nm and GlobalFoundries announcing work on 28nm.
The bottom line is, if GlobalFoundries had gotten a successful 28nm process up against TSMC's 32nm, TSMC could even have lost Nvidia's business. They surely could not afford that.
Here is what the Vice President and General Manager of AMD's GPU Division said, according to bit-tech. I don't think Skynner lied about it; after all, they still need TSMC.
Quote:
Mr. Skynner admitted that the HD 6000 series was originally set to use TSMC's 32nm process, but that AMD had to opt back to 40nm earlier this year after that process was unceremoniously dumped by TSMC in favour of concentrating on 28nm only.
EDIT
If TSMC cancelled 32nm because of AMD, why wouldn't TSMC say so? Or did they? If they did, that sure is big news.
Like Heinz68, I am curious about this. Is this something AMD told you?
In your 6970 review you said AMD had taped out some of the new architecture products before deciding against using 32nm for all of them. So they had some products for this arch taped out before ~Nov'09? That seems like a really long time.
There are lots of people who buy multiple midrange boards and SLI/CF them to match or beat the performance of larger single-chip cards. Companies aren't offering (many) cards with multiple midrange chips because the extra cost of the board components needed for CF/SLI offsets the savings from the smaller chips.
WOW, four GPUs on one card, what a bright new idea. The first GPU would say hi; the problem is the last GPU would not be able to close the door.
Besides, if some people believe there are so many problems with 2 GPUs, four would not make it any better. Most of the time there is very good scaling with 2 GPUs, not so much with a third one, and even less with a fourth, if any.
No one outright lies in this industry but PR is all about selective truth telling...and of course a fair amount of embellishment by certain publications in order to give a certain voice to articles.
TSMC cancelled their 32nm process. Why should anyone need to know more? Even the shareholders usually get a warmed-over version. There are so many stories within stories that the real truth is hardly ever so simple.
I am not saying that AMD's dropping of their lower-end 32nm cards was the end-all for 32nm but rather one of the main contributing factors to TSMC's re-evaluation of their roadmap.
In the past, it has been ATI's cards that have very much been route proving products for TSMC's High Performance lines. We saw this with 40nm, 55nm, etc. The manufacturing relationship between ATI (now AMD) and TSMC allowed for a mutually beneficial roll-out procedure that ended up benefiting clients like NVIDIA as well.
So yeah, there were probably other economic factors behind TSMC shutting down 32nm fabrication before it even produced anything past test wafers. However, losing high-volume parts from a major client likely had a massive impact.
Regardless of what certain outlets state, an initial tape-out usually happens 9-12 months (or even more) before volume production. And yes, I can state that my conversations with AMD covered the points above and then some. Some I can discuss, most I can't.
Just installed a HD6970.
Here are the 3DMark06 results.
http://img692.imageshack.us/img692/8943/29148.jpg
Amazing how many people don't realize it is possible to compare apples to oranges. What matters is the end user's preference, not your own.
I'm not impressed with these. £220 for the cheapest 6950, £280 for the cheapest 6970, with those rubbish reference coolers (high temps, too much noise), versus £155 for the MSI GTX 460 Hawk Talon Attack edition with low temps and noise and great overclocking potential.
The GTX 560 looks like it will have the 6950 beat by a large margin.