December 22? Wow... AMD was really sneaky with this card... We will not even make 20 pages by launch date!
A few numbers:
Quote:
1. NDA ends on December 22
2. Beats GTX580 by 20% on average, not 3DMark. Beats GTX580 by around 30% in Crysis. Expected goal has been reached. There has been no goal of doubling the performance of previous generation cards as speculated.
3. Two cards will be (paper) launched: PRO + XT
Number 2 fits my earlier expectations and reasoning:
Quote:
It wouldn't surprise me. We have to keep in mind the change in architecture, with hardware schedulers instead of the software one on VLIW. That has to account for a larger die (compared to VLIW) as well as higher power consumption. However, if the upgrade in performance is small, I'd also say they didn't go overboard with die size. They are following the careful strategy of not doing big chips. To keep the die size low, they traded a big increase in performance for the scheduler, not risking doing both this generation.
Take it with a grain of salt — this is only unconfirmed information from a friend of mine working at an AIB distributor in my country, as a distribution manager or something like that. GK104, NVidia's answer to Tahiti, will be here no later than the end of Q2 2012, but he didn't expect it to arrive before late March or early April next year either. He's quite confident talking about it, and his info has been quite accurate AFAIK.
Then, if we're talking about GK110, things get much blurrier, since that will be one big monster chip — we're going to see it when NVidia is confident enough in TSMC's 28 nm process. He wasn't all that sure, but his boss said to expect this monstrosity in early Q4 2012 at the earliest (if everything goes perfectly), and it can easily slip to 2013.
No solid info regarding performance, but he expects GK104 to be trading blows with Tahiti — and he's quite even-handed toward both IHVs.
Of course, perhaps all of that is just silly misinformation — can't be sure of anything. :)
If Nvidia comes that late with their monster chip, AMD will have an answer ready.
...try 2 months tops. Where are people getting this 2-3 quarters stuff from?
It's due IN 1Q 12, NOT end of the year.
Also, the 5870 wasn't really innovative. It was, for all intents and purposes, a 4890 with DX11 tacked on and a higher shader count. That's what makes the 7970 interesting, because it's the first time AMD have REALLY stepped away from the architecture they used in the R600.
GK104 is the midrange part. GK100 is the monster part, and GK110 is likely the refresher part later in the year.
Supposed leaked slides show the GK100 at around double the GTX580 (sometimes higher, sometimes lower). That'd put it at over 60% faster than the 7970. It's supposed to be here in Q1 2012. NVidia's midrange GK104 should be about 75-80% of the GK100 in terms of overall power — that's a scary situation.
AMD's answer to NVidia's high end has generally been a dual-GPU card. Anyone heard anything about a 7990 coming? For the last few generations we heard about said card while we were hearing about the rest of the lineup, but this time all is quiet on that front. Is this chip too big to pull off that set-up?
...
6800Ultra was 2x 5950 Ultra (higher than that in DX9)
8800GTX was OVER 2x 7900GTX (in some cases 2.5x, on the release drivers no less; by the end of its life it was more like 3x)
You've seen it, you just needed a friendly reminder. Personally, I'm not listening to the slides from either company, but the specs match the percentages the slides are claiming. 1024 nvidia shaders (which are also reworked for kepler) with a 512bit memory bus is definitely enough to double the performance of the GTX580.
Either way, let's get back to the topic at hand here: AMD's 7xxx series.
8800 GTX was really something compared with 7900 GTX :
http://tof.canardpc.com/view/e28de2f...decf6d70a6.jpg
http://www.hardware.fr/articles/644-...formances.html
You know you are not allowed to post things from OBR on this forum? Not because it's OBR, but because he's banned, and that's the rule for everyone who has been banned.
What does this slide tell us beyond what we already know, aside from the GCN unit being copied in each CU?
But you are certain that the GTX 680 or GTX 780, whatever it is called, will be over 2x faster than the 580 at launch? And btw, the 6800Ultra & 8800GTX were close to 2x faster at launch, but they didn't quite make it that far. Please find evidence here and here. And please don't link things which show it faster than 2x only under extreme circumstances and ideal scenarios. I'm talking overall here.
With each generation consuming more power and requiring more cooling, you can see why obtaining this yet again will be very hard with 28nm. Not to mention the normal diminishing returns, ie. even if you exactly double the GTX580 in every aspect, you will not get linear 100% improvement.
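A quick way to see why doubling every unit on a GPU doesn't give a linear 100% gain — a toy Amdahl's-law sketch of my own, not anything from the leaks: some slice of frame time (CPU, driver overhead) doesn't scale with the GPU at all.

```python
# Toy model: only the GPU-bound fraction of frame time speeds up;
# the rest (CPU, driver overhead) stays fixed, so the observed
# speedup is always below the raw hardware speedup.
def observed_speedup(gpu_fraction: float, gpu_speedup: float) -> float:
    """Amdahl's law: gpu_fraction of frame time scales, the rest doesn't."""
    return 1.0 / ((1.0 - gpu_fraction) + gpu_fraction / gpu_speedup)

# Even if 90% of frame time is GPU-bound and the GPU is exactly 2x faster:
print(round(observed_speedup(0.90, 2.0), 2))  # -> 1.82, not 2.0
```

So even an exact doubling of the GTX580's hardware would land under 2x in real games, which is the diminishing-returns point above.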
If you believe that then you are just buying into the hype too early...
AFAIR since the HD 4XXX gen at least, the fastest single GPU Radeon has been as fast as Nvidia's second fastest single GPU card, I bet we'll see the same thing with this generation.
http://vr-zone.com/articles/nvidia-g...ew/4216-1.html
Keep in mind, that's comparing it with the 7950 GX2 (the dual-GPU card). Now look at those AA results! Yeah... 2x was child's play once you started cranking up the AA.
Anyway, no, gains aren't linear. Kepler is a new design though — it's not just Fermi with more shaders tacked on, much like how the 7970 isn't just a 6970 with more shaders tacked on either. Also, another thing to remember is that the GTX 580 was memory-starved at the top end of the spectrum (hence why the 6970 was able to overthrow it at ultra-high rez with AA), while the GK100 should fix said problem (meaning, at 2560x1600, double should definitely be doable, which is what the slides claim).
This is why I keep saying I can't wait to see this battle go down. It'll be the first time we've seen a showdown between TWO new architectures since G80 vs R600. Since then, AMD have mainly been tacking things onto their R600 design and trying to clean up its flaws, so I'm actually extremely intrigued to see how this new card will fare against its competition.
Anandtech 8800 GTX Review
Quote:
A single GeForce 8800 GTX is more powerful overall than a 7900 GTX SLI configuration and even NVIDIA's mammoth Quad SLI. Although it's no longer a surprise to see a new generation of GPU outperform the previous generation in SLI, the sheer performance we're able to attain because of G80 is still breathtaking.
:rolleyes:
It's about 2x faster... it's not a whole lot faster than that. Yet the GK100 slide is claiming an average of 2.5x faster than the GTX580 — even with twice the hardware you would not achieve this; go ahead and check your GTX580 SLI numbers and prove me wrong. Besides, even if GK100 is 2x faster than the GTX580, we still need confirmation that it will also come in a dual-chip flavor, because its direct competition will not be the 7970, right?
Anyway, one can believe whatever he wants; we should not go off topic any longer. I think 40% over Cayman will be acceptable to most people provided that:
a) price is competitive
b) they have the option to purchase a 7990 dual chip card in case they need more raw power in the same slot.
2x increases are mostly limited to specific scenarios where newer GPUs excel, like AA, tessellation, etc. On average the increases have been far more modest in the last few generations (after the 8800GTX and 4870). That's why I expect a 50% performance increase at most for the next-gen Radeon/GeForce on average. There might be some cases that produce a 2x increase of course, but that doesn't mean the gen is 2x faster overall. The 5870 more than doubled everything and resulted in a ~70% performance increase on average. Based on the leaked specs, the 7970 is nowhere near double the 6970, so even a 50% increase would be impressive. The same goes for GK104. Of course, architecture brings its own little spice. For AMD it probably results in a disproportionate performance increase in tessellation, perhaps even 2x in some cases, but otherwise perhaps lackluster general performance due to immature drivers. Nvidia has a more mature architecture, so it'll probably see a more even performance increase.
...in about 50% of games, in the other 50% it is almost as fast as the fastest nvidia ;)
GK104 is the mid-range part. If the rumors hold true, it's a 768 shader 384-bit monster of a midrange part that should easily fly in comparison to even the GTX580. GK100 is the high-end, and it's supposedly 1024 shaders and a 512bit bus.
It's going to end up a clash of the titans for sure.
Yes, it would be interesting to see Nvidia and AMD on a level playing field if the rumors about GK104 end up true. GK104, according to estimates, would be 50% larger than GF110. Tahiti is 65% larger than Cayman according to the OBR architecture slide. All in all they should be fairly even, considering GF110 is around 15% faster on average compared to Cayman — especially if AMD improves tessellation performance and Nvidia has a more even improvement.
This 7970 will be a bust if it's not at least 50% faster than the GTX 580.
In my eyes this is probably the most exciting pre-release build up since the G80 and R600 (before we found out about all the delays, when we expected it on time) because it's the first time both teams have brought new architectures to the table since that generation. As such, even though we know the rumored specifications, we have no idea about each teams architectural efficiency per unit in all honesty. We can make guesses based off that fact--but, until the release of each we can only make a slightly educated hypothesis on the subject.
I really wish the CPU battlefront was as competitive as the gpu side of things.
GF114 is 30% faster than GT200b.. i would assume that GK104 should end up the same..
What will DX 11.1 bring?
http://msdn.microsoft.com/en-us/libr...=vs.85%29.aspx
Mostly stuff for developers, no new gimmicks like tessellation for end users.
Why?
All AMD has to do is price their next generation cards accordingly if they can't compete against the top bin NVIDIA offerings. As they have proven time and again: marketing success may lie in the high end but the real sales take place in the $150 - $300 price brackets. Even if NVIDIA's next gen cards blow AMD out of the water, there is still plenty of room for competitive products at lower price points.
QFT.
Seriously, that's the one thing people don't get. AMD haven't really been trying to compete with NVidia's top bin for a while now. Why do people keep expecting them to have the fastest-performing single GPU out there? The ATi glory days of the X1950XTX and prior are done and over with; AMD have found it a better tactic to aim a bit lower with cheaper-to-produce parts, plain and simple.
You make it sound like AMD has been competing with cheap half arsed chips for a long while, it's not really like that at all, it's simply a different strategy of efficiency and most importantly, timely release.
A lot of people prefer single core solutions, but there is a lot of glory to be had by having the fastest single card on the market, whether dual chip or not...
I don't think AMD's strategy is about targeting low-end and mainstream, but rather providing a scale-able product that covers all price and performance brackets. I think this isn't too far off NVIDIAs tactics, but their monolithic approach makes it more difficult.
Good point. It always seems like people forget that. This is just my opinion, the 5870 wasn't a bad card, but it was overpriced for what you got (compared to 4800 series not Fermi). The 4870 was released at like $300 and was almost as fast as the nvidia single high end GTX 280. The 5870 released at like $370+ or something. The DX 11 felt rushed; especially tessellation performance. The DX9/10 was obviously faster, but not as much as I expected given twice the shaders. I remember the 4890 going for like $150 or something a few months after release while the 5870 was selling for over $300 just about a year ago. Those 5870 cards took forever to drop below $300 and most inventory was gone before you could get one for under $200 new. I wasn't happy with the 5800 series pricing at all. Of course, you have to factor the market overall. I'll sure have fond memories of the 4800 series, but the 5800 series just doesn't cut the mustard in my opinion. AMD has been increasing prices with newer releases. We need more competition. I sure would love to get a 7970 for $300 or lower.......
Yeah, GCN will be interesting for sure, and I'm sure the Catalyst driver updates for the 7000 series will bring significant performance upgrades as they mature.
7990 idle power will be <6w muahaha! ;)
Well, if it is a 75% performance increase over the 6970, then it should be about 50% over the GTX 580. I don't expect a Christmas miracle, but I also don't expect just a modest increase in performance over its predecessor, given that bandwidth has increased 50% along with a 30% die shrink while keeping a max TDP of around 250-300W...
Actually, what he was saying is that AMD is concentrating upon what they are good at: making small, versatile chips that can maximize yields on a per wafer basis.
High yields and more cores per wafer can equate a lower pricing structure through higher volume while still retaining profit margins.
Having the highest performance single core graphics card allows for bragging rights and not much else if a company is struggling to cope with low yields.
Personally, I'd love to see AMD compete one on one with NVIDIA's best. BUT....I would MUCH rather see their GPU division stay afloat and that means avoiding huge and expensive monolithic dies.
Btw, are there any numbers on what margin/profit AMD's GPU division is making? I read somewhere they barely make any profit relative to revenue, so their margin is quite low. If true, why is that?
With AMD, I would rather they spend efficiently on their GPU research and development and stick to smaller, efficient chips (not small, but not monolithic). AMD's research money is better spent on making their CPUs better, because there they are freakishly behind. AMD has a massive lead over Intel in the GPU department and needs to catch up badly in the CPU market. Making their CPUs good again will do wonders for AMD's image. I think AMD removing the ATI moniker from their graphics side was a mistake, because the general public has a rather negative opinion of AMD chips. The only way for them to fix this is to make good CPUs.
AMD seems unable to make much money in their graphics division, which I think is why it is not the best place to spend their money. I think in their best quarter in the last few years, AMD's graphics division made 33 million in net profit — and that was with the 5xxx series competing against not very competitive products. I consider this a best-case scenario. A 33 million dollar net profit (especially with the hundreds of millions lost during less competitive years) is not going to bail AMD out.
The only way to make AMD much more profitable in the graphics segment is to make their professional products better. The problem is AMD has been weak at marketing to the professional crowd, and driver support is key in the pro market. The basis of professional cards, and the reason for their high cost, is the driver support (and continued support at that); AMD doesn't have the manpower to keep this up. The only numbers I have heard put AMD's driver team at a third the size of Nvidia's. Their recent problems with games such as Skyrim and Batman attest to this.
With AMD spending more money on low-power APUs and SoCs, something has to give, and I hope AMD takes it from their graphics department rather than their desktop chip side. The rumors of AMD conceding the high-end desktop market are scary.
pcinlife
Quote:
4.3b vs 2.64b
A total of 12 games were measured with a benchmark tool, but 3 could not be run due to a rendering error.
The game names weren't given; the resolution was 1920x1200 with everything maxed.
12%
35%
20%
n / a
n / a
-8%
17%
31% (mapping error)
19%
-5%
33%
n / a
Since it's 4.3B vs 3B, we can estimate the rest ourselves.
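For what it's worth, here's a quick average of the leaked per-game deltas quoted above (my own tally, assuming each figure is a percentage gain and the three n/a entries are the unrunnable titles):

```python
# The leaked per-game deltas from the pcinlife post above, in order.
# None marks the three "n/a" titles; the 31% entry carried a
# "mapping error" note but still reported a number, so it's included.
deltas = [12, 35, 20, None, None, -8, 17, 31, 19, -5, 33, None]

valid = [d for d in deltas if d is not None]
avg = sum(valid) / len(valid)
print(f"{avg:.1f}% average over {len(valid)} games")  # -> 17.1% average over 9 games
```

Which, if the leak is real, lines up roughly with the "beats GTX580 by 20% on average" claim from earlier in the thread.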
http://www.bouweenpc.nl/wp-content/u...ds-575x298.png
bouweenpc.nl
Quote:
Originally Posted by Googlish
Quote:
Power Tune parts:
introduces more granular power control, based entirely on digital monitoring (status registers?), independent of the driver and configuration files.
In the "long idle" state, when the display is off or the PC goes idle, the GPU core drops to roughly 0 watts and the fan turns off — very attractive in a CFX configuration. The 7970 TDP is about 210 watts; the 7950's is ... to be determined.
The 7970 frequency is 925 MHz (2048 cores / 32 CUs); the memory is 384-bit @ 5.5 GT/s.
The 7950 core frequency is to be determined, but the core count is confirmed at 1792 / 28 CUs; the memory bus is still 384-bit, but the rate is reduced to 5 GT/s (not fully settled even within AMD).
Quote:
HD3D parts:
the 7900 series achieves single-GPU multi-display 3D, with multiple independently oriented output streams
launching in February next year: custom resolutions (yeah!), preset configuration management, and cross-screen task technology
I hope custom resolution is something about downsampling
Quote:
Video part:
UVD 3.0 remains unchanged from the previous generation.
The focus here is what's called VCE: the 7900 comes with a hardware video stream encoder capable of better than 1080p60 H.264 encoding, supporting both a full fixed-function hardware encoding mode and a hybrid mode assisted by GPU shaders.
Encoding in the compressed 4:2:0 color space
a choice of several compression quality levels
The previously mentioned QSAD provides the video acceleration behind the Steady Video stabilization technology; the new Steady Video 2.0 also supports interlaced video and adds a comparison mode, among other new features.
pcinlife
i feel like doing monologue
ASUS 7970 696€
http://www.abload.de/img/desktop_2011_12_17_19pp2ny.png
http://www.salland.eu/product/108271...7970-3gd5.html
That's a steep price; let's hope it comes down quickly after launch to the 350-450 region.
Ideally this card should take over the GTX580's price spot and push competing products down, not create a new, higher price tag.
Oh, no...
I've contracted the upgrade itch.
*searches for anti-itch medication*
Haha, I've had the upgrade itch for a minute now. I just haven't seen anything worth-while of replacing my GTX 460 yet. I was going to snag a 6950 2gb but its tessellation performance held my hand from hitting the "continue" button on newegg.
If there's one lesson I've learned in my decades of PC gaming, it's not to buy hardware that has an apparent hardware deficiency. Learned that one during OpenGL's heyday, when ATi barely supported it. Same reason I bought a 9800pro (although I did get a 5900 just to flash and toy with for a little bit before selling it for a profit). If a card has an obvious performance flaw, I won't buy it for fear that a game I end up wanting exposes that weakness. When that happens, it usually loses to the other side's card in the price segment below it, and I end up rather annoyed.
Going off the news on both cards so far, neither camp should have an obvious flaw this time around. One of the big reasons I'm so excited this generation! Notice I haven't had anything but good to say about both camps upcoming products for the first time in years? :D
p.s. The cure for the upgrade itch is simple... Take care of a child.
Bulldozer definitely held off my upgrade plan as well, as I'm probably going to give my present system (q6600 & GTX460) to my woman now that I've spoiled her with the world of PC gaming.
I found out that the kid thing works without planning it, I've taken on the daddy role for my deceased friend's 4 year old. Definitely took a major hit on the wallet in the process, so I only upgrade if it's something truly special and I know it'll last quite some time if I can't swing another big upgrade for awhile.
DilTech,
He is 4 years, too. "Active" little bugger, that one. :)
The final launch price will be around 500 euros, at least the Sapphire model.
Sounds like AMD plan on making every dollar they can muster this round. Makes me wonder what the 28nm yields are like.
It would still be a better deal if you would get a 7950 for 400€ since 580 is around 450€ here. But for next gen the price/performance would be pitiful, if 7970 really is only 20% faster than 580.
AMD usually prices their parts according to their performance, so I think such high prices are a good sign. Plus, because of the architecture change, I believe the driver optimizations will be a lot easier and faster... We shall see in 4 days or so.
Fixed that for you... Yields are better than the early 40nm ramp. Capacity back when the first 28nm cards went into production — that is a different matter, but December sounds good.
Another AMD launch may be pushed up a bit, compared to what rumors previously have been saying.
- I agree with you. People are buying 580s and their offspring (+5%) at much higher prices over the cheaper 6970 for 10-15% better fps, so even at a minimum of +20% over a 580, people will buy it.
- But AMD will and has left a lot of money on the table with their small-chip GPUs; as one person said, it cheapens their high-volume SKUs = lower profit.
That's the problem these days. It won't happen. AMD has been increasing prices since the 5870. Gone are the days of getting something like a 4870 for $300 or 4890 for under $200 on release. I really don't think it's just because of the performance bracket. Nvidia is notorious for insane high end pricing and most justify it for the 15% or so performance increase over AMD. It's comical when idiot fanboys justify based on the cost to manufacture. AMD is almost as bad these days. AMD is now comfortable jacking up the new higher price tag on it's new cards. It seems like the 7970 will be closer to the $500 range and the next competing Nvida product will probably be closer to $600. Absolutely pathetic. What you people do with your money is your own business, but come on. I can afford it, but can't justify it. Regardless, AMD should have released this before the end of the holiday season. Global economic indicators aren't looking pretty for next year. People might not have the cash or credit to buy this stuff anymore....
No reason to get worked up till I see the price at Newegg; just went through this with my 3930K and the pre-launch price gouging on UK sites.
that said i paid 525 first day for a x1950xtx
Yields are excellent right now... but TSMC capacity will maybe let us down (price-wise)... Seriously, Samsung should give some of their capacity to TSMC next time. (And I don't bring up Samsung's name for nothing — it seems some discussions are beginning between those two.)
- a 5970 & 6990 aren't fully clocked 2x 5870s or 2x 6970s the way the 4870x2 was, hence the dual-GPU price is not double that of the single GPUs
- the 5870 is a tad slower than a 4870x2 assuming good scaling, & it launched at like $400, was it? The 4870x2 would still have been hanging around $500-600 at that time
Sounds exactly like the performance brackets, & it kinda makes the 6800 series make perfect sense — it creates a new, cheaper bracket like the one the 4870 & 4850 once had, without being too weak a midrange
(also the VRAM is jacking up the price, along with new features like varying load states, dual BIOSes, OpenCL stuff, tessellation)
AMD has always wanted to jack up the price. They wanted to last generation but could not do it due to competition. No way AMD wanted to give twice the memory and build one of their biggest chips ever, yet charge slightly less than its predecessor. I still believe AMD went for the new, worse naming scheme in an effort to justify higher prices across their lineup. If the GTX 580 and 570 hadn't shown up, I have a feeling AMD would have charged at least 100 dollars more for the 69xx series. The 6970 was meant to compete at $499 with the GTX 480. I think AMD will finally get to do what they wanted because of the lack of competition. Decent business savvy, but AMD will be losing its value proposition compared to earlier generations.
In terms of performance:
79XX >= 580 and costs less — it's a good deal due to better thermals than the 580 and the fact that the 780 is far off. The GK104 might be released first, and it is supposed to be faster than the 580. Thus GK104 might crush the 7970, but only if it's out within a few weeks after the 7970.
You should stop discussing the prices that the webshops in our country (yes, I'm Dutch) put online based on absolutely nothing, as they do every release. How often has Salland been in the news before? Indeed, too often.
Heck, how often did we pay this for a 6970?
http://www.vipeax.nl/prices.jpg
Indeed, we did not.
Judging by how gloomy the Euro's future looks, that price in Euro's may have to go up to keep pace with the dollar.
Saying that, I'll take 2 on launch day for £500 each give or take a few quid, been putting off this upgrade round for a year now... poxy BD... poxy delayed IB...
http://www.abload.de/img/amdzerocore2_dh_fx57mwo59.jpg
http://www.abload.de/img/amdzerocore3_dh_fx57rxotn.jpg
http://translate.google.com/translate?langpair=auto|en&u=http%3A%2F%2Fwww.donanimhaber.com%2Fekran-karti%2Fhaberleri%2FAMDden-Nvidiaya-gonderme-Coklu-GPU-enerji-verimliliginde-acik-ara-ondeyiz.htm
You're too fast buddy ^^
Nice feature regarding XFire idle power consumption :up:
More Eyefinity stuff (totally hot and loving those changes).
http://i43.tinypic.com/2m5near.png
http://i43.tinypic.com/n3olf.png
These are only features, but they look good. Thanks, guys, for uploading them.
Yeah, I've corrected my post — I was seeing it wrong. Maybe I've been staring too long at the slides with the 28 pages... (you know, the ones with 1x1 cm images, lol). It's Sunday and my brain is in wait mode.
How many people are going to run 6 monitors / HD3D + Eyefinity combo? And if performance increase rumours are true then they will also have to buy 4 cards to get acceptable frame rates at such resolutions...
All these features except the power consumption management look like gimmicks to me...
Those Eyefinity improvements are certainly welcome and expected, as is HD3D Eyefinity and HD3D CF support. Personally I gave up on CF and am using standard resolution Eyefinity, so the only improvement looks to be the centered desktop. But those are driver improvements. 7900 has so far offered better idle power consumption and tessellation performance, and perhaps better cooling as well, according to these slides. I'd say that alone makes it a decent launch, even if the performance increase is not what the hype would have you believe. To me it looks like the chip is still fairly modest in size, and the only reason it isn't the normal 300 mm^2 is because of the memory bus. Remember, Cypress increased die size 263->334 mm^2 and had 125% more transistors, while this looks like it will decrease die size and up the transistor count only 65%. So it would be easy to say it will bring only half the performance increase of what 5870 brought, so 30-50% maybe would be my guess.
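A back-of-the-envelope check on those ratios, using only the figures quoted in this thread (my own arithmetic, not from any slide):

```python
# Cypress grew the die 263 -> 334 mm^2 while adding +125% transistors
# (figures as quoted in the post above); Tahiti vs Cayman is 4.31B vs
# 2.64B transistors per the earlier pcinlife leak.
cypress_area_growth = 334 / 263 - 1          # die area growth for Cypress
tahiti_transistor_growth = 4.31 / 2.64 - 1   # the "only 65%" mentioned above

print(f"Cypress die area: +{cypress_area_growth:.0%}")        # +27%
print(f"Tahiti transistors: +{tahiti_transistor_growth:.0%}")  # +63%
```

So the transistor budget really did grow only about half as much as Cypress's did, which is the basis for the "half the 5870's gain, so maybe 30-50%" guess.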
^How cute. More curious about the 7770-7870 specs. Haven't heard much more about them.
It's pretty insane — only 4 days to launch and almost no decent leaks.
Those Eyefinity slides seriously made me need to change my pants.
The fact you can create custom res and have different screen size configs is huge, especially for me as I now can go out and get a couple of 19in panels to go with my 24 and just map the res accordingly.
I can't wait for this card to launch.
The only major issue I may have would be with length. I hope it's shorter than the GTX580 reference board.
lol i see your point. But i mean from the photos you can see three different screen sizes. Maybe a few 19s in portrait or some 22s would scale better. But the idea that you can have three different sized panels displaying eyefinity is great news for me. This way I don't have to shell out for another 2 24" benQ panels just to experience eyefinity.
Originally Posted by Vipeax:
http://i43.tinypic.com/2m5near.png
Bloody hell, I only just recently bought a new 1.5m wide Ikea desk!
....
Now I need something like 3m wide, but I don't have space for that :(
The thing is I currently have a 24" 1920x1200 monitor, plus a 19" 1280x1024. So if I was to buy myself a shiny new 27" upgrade, I could use all three different sizes in Eyefinity now?
@bhavv: I personally would just stick to a single 27" if I were you. That's just my own take, but you can see how the different resolutions of each screen slightly mess with the images. Just look at the yellow triangle that starts in the middle screen and moves to the left.
Or just stick to my 24" until it breaks. I only got a bigger desk so I could use 2 monitors which helps a lot in windows.
The backlight on my 19" died ages ago, so it looks a lot darker than it should, but I found a setting to increase the gamma separately for each monitor.
My Acer G24 runs at 30% brightness and 35% contrast; on the default 50% it's just blindingly bright. Using different monitors together isn't ideal because of how different the brightness/contrast can be.
The yellow triangle is for bezel compensation. You move it to the left or right to get it to line up. No doubt showing off the new "flexible" bezel comp which will allow for it to work properly with different sized monitors. I don't think it's indicative of how the final image will look.
Depends on what your definition of excellent yields are... AMD's 28nm yields are better than early 40nm ramp yields which weren't as bad as everybody thinks.
28nm is going very well atm. As long as TSMC keeps increasing capacity like they promised this will be much more smooth than 40nm.
78x0, aka Pitcairn, should be 24CUs on a 256bit bus. Probably around ~3bil trannies and ~240-250mm2.
77x0, aka Cape Verde, should be ~12CUs on a 128bit bus. Probably around ~1.5b trannies and looks like ~130mm2.
So: Pitcairn with HD 6970-like performance and a >150 W TDP, and Cape Verde with HD 5770 or HD 4890 performance and a >80 W TDP, right?
7870 should be a nice crossfire setup
5x1 landscape setup! :eek: