Speak for yourself.
You've got it inverted ... most people will not buy a high-end card precisely because of the price, and for the same reason they don't want dual-GPU cards or SLI/CFX. If they don't want to put $500 into one card, they don't want to buy two at $300 each.
Especially over the last few years, mid-range cards have become more and more capable for gaming, even at high resolutions. The masses (and I'm talking about gamers) will buy a 5770, a 6850/6870, or a GTX 460 and be happy with that.
Most people I know who use CFX or SLI just want more performance than a single high-end GPU can offer...
What does that have to do with what I said? I wasn't talking about why mainstream gamers buy X or Y card. I was talking about the differences between a high-end single-GPU card and a dual-GPU card, and how comparing the two is not "fair" because they don't offer the same type of performance; the 5970 suffers from the same issues as all dual-GPU solutions.
What you are doing is invalidating AMD's strategy of pairing two efficient chips to match one giant chip from NVIDIA. Basically you're saying it is only fair to compare one chip with one chip, no matter how they are designed. Sure, the 5970 has some microstuttering, but most people don't care about it, or don't notice it when purchasing the product. The fairest comparison is between products in the same price range that fit into one PCIe slot, and thus the 580 vs. the 5970 is a fair comparison between similar products. Just because one is big and powerful and the other is dual and efficient doesn't change much about the user experience today; they are merely design choices.
SKYMTL thanks for clearing that up.
Being annoyed by microstuttering is like being allergic to seafood: sucks to be you. Dual-GPU scaling is amazing this round for both AMD and NVIDIA: 100% scaling in Metro 2033 for AMD, and 97% scaling for NVIDIA.
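For what it's worth, scaling percentages like these just express how much of the second GPU's theoretical gain actually shows up in framerates. A minimal sketch of the arithmetic (the framerates below are made up for illustration, not taken from any review):

```python
def scaling_efficiency(single_gpu_fps, dual_gpu_fps):
    """Percentage of a second GPU's theoretical gain actually realized.

    100% means the dual-GPU setup is exactly twice as fast as one GPU;
    0% means the second GPU contributed nothing.
    """
    return (dual_gpu_fps - single_gpu_fps) / single_gpu_fps * 100.0

# Hypothetical framerates, for illustration only:
print(scaling_efficiency(30.0, 60.0))  # perfect scaling -> 100.0
print(scaling_efficiency(30.0, 59.1))  # near-perfect    -> ~97.0
```

So "97% scaling" means the second card delivered 97% of a full doubling.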
I wish all games scaled like that, but as we all know that isn't the case, especially once we start talking about games that aren't exactly the latest big thing. I still remember Morrowind with MGE chugging along at unacceptable framerates at settings my GTX 280 ran very well. Source ports like DarkPlaces and EDuke32 didn't scale at all. These are just a couple of examples.
There's one thing:
Volume: people can buy HD 6900 cards and find them easily, while the GTX 500 series never had a real hard launch, and in Europe many stores don't have stock right now. It's December, Christmas season; to me that looks like a win for the HD 6900.
NVIDIA, with their current die size, can't afford the same volume as Cayman.
tajoh111
Then what can we say about the GTX 460, with its huge die size compared to Barts, selling for the same price? Barts cards were overpriced, and these days I'm seeing some price cuts. Both the Sapphire and Gigabyte OC versions sell for the same price as their non-OC ones.
/close thread lol
Same with them too. NVIDIA must hate selling such a big chip at that kind of price. The deals on the GTX 460 are ridiculous nowadays. At least NVIDIA had a few solid months of sales at full price, so everything is not a complete bust.
And if the GTX 560 turns out to be GF104 with everything enabled, things might just turn around for them.
Any links about that? I can't Google any, but there are plenty of reports about the TSMC 32nm cancellation. Personally I believe TSMC cancelled the node because of all the problems with 40nm and because GlobalFoundries announced it was working on 28nm.
The bottom line is, if GlobalFoundries had gotten a successful 28nm process up against TSMC's 32nm, then TSMC could even have lost NVIDIA's business. They surely could not afford that.
Here is what the Vice President and General Manager of AMD's GPU division said, according to bit-tech. I don't think Skynner lied about it; after all, they still need TSMC. Quote:
Mr. Skynner admitted that the HD 6000 series was originally set to use TSMC's 32nm process, but that AMD had to opt back to 40nm earlier this year after that process was unceremoniously dumped by TSMC in favour of concentrating on 28nm only.
EDIT
If TSMC cancelled 32nm because of AMD, why wouldn't TSMC say so? Or did they? If they did, that sure is big news.
Like Heinz68, I am curious about this. Is this something AMD told you?
In your 6970 review you said AMD had taped out some of the new architecture products before deciding against using 32nm for all of them. So they had some products for this arch taped out before ~Nov'09? That seems like a really long time.
There are lots of people who buy multiple mid-range boards and SLI/CF them to match or beat the performance of larger single-chip cards. Companies aren't offering (many) cards with multiple mid-range chips because the extra cost of the board components needed for CF/SLI offsets the savings from the smaller chips.
WOW, four GPUs on one card, what a bright new idea. The first GPU would say hi; the problem is the last GPU wouldn't be able to close the door.
Plus, if some people believe there are so many problems with two GPUs, four would not make it any better. Most of the time there is very good scaling with two GPUs, not so much with a third, and even less with a fourth, if any.
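The diminishing-returns point can be made concrete with a toy Amdahl's-law style model. The 95% parallel fraction below is purely an illustrative assumption, not a measured figure for any real card:

```python
def multi_gpu_speedup(n_gpus, parallel_fraction=0.95):
    """Amdahl's-law style speedup: only the parallelizable share of the
    frame workload benefits from adding GPUs; the serial remainder
    (driver overhead, sync, non-scaling work) does not."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_gpus)

# Each extra GPU adds less than the one before it:
for n in (1, 2, 3, 4):
    print(n, round(multi_gpu_speedup(n), 2))
# -> 1 1.0, 2 1.9, 3 2.73, 4 3.48
```

With these assumed numbers, the second GPU adds ~0.90x, the third ~0.83x, and the fourth only ~0.75x, which matches the pattern described above.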
No one outright lies in this industry but PR is all about selective truth telling...and of course a fair amount of embellishment by certain publications in order to give a certain voice to articles.
TSMC cancelled their 32nm process. Why should anyone need to know more? Even the shareholders usually get a warmed-over version. There are so many stories within stories that the real truth is hardly ever so simple.
I am not saying that AMD's dropping of their lower-end 32nm cards was the end-all for 32nm but rather one of the main contributing factors to TSMC's re-evaluation of their roadmap.
In the past, it has been ATI's cards that have very much been route proving products for TSMC's High Performance lines. We saw this with 40nm, 55nm, etc. The manufacturing relationship between ATI (now AMD) and TSMC allowed for a mutually beneficial roll-out procedure that ended up benefiting clients like NVIDIA as well.
So yeah, there were probably other economic factors behind TSMC's shutting down 32nm fabrication before it even produced anything past test wafers. However, losing high-volume parts from a major client likely had a massive impact.
Regardless of what certain outlets state, an initial tape-out usually happens 9-12 months (or even more) before volume production. And yes, I can state that my conversations with AMD covered the points above and then some. Some I can discuss, most I can't.
Just installed an HD 6970. Here are the 3DMark06 results:
http://img692.imageshack.us/img692/8943/29148.jpg
Amazing how many people don't realize it is possible to compare apples to oranges. What matters is the end user's preference, not your own.
I'm not impressed with these. £220 for the cheapest 6950 and £280 for the cheapest 6970, with those rubbish reference coolers (high temps, too much noise), versus £155 for the MSI GTX 460 Hawk Talon Attack edition with low temps, low noise, and great overclocking potential.
The GTX 560 looks like it will have the 6950 beat by a large margin.