IF it is 4D, 1920 shaders, and an 860 clock, I'm going to predict about 50% faster than a 5870 in most games. In games with heavy tessellation it should be more, but it isn't clear how much.
Didn't someone say that 4D shaders run at something like 90-98% of the efficiency of the equivalent 5D shaders? So 1920 4D shaders would be equivalent to nearly 2400 SPs.
Of course, the issue is how much of a bottleneck Cypress had; we saw that 2 x Juniper often scaled better than 1 x 5870 despite being equivalent in unit count, so it will be interesting to see whether Cayman uses its GPU much more effectively.
I posed this question in another forum: what about AMD's sweet-spot strategy? They always like to keep margins high with respect to production costs. If that still holds, the TDP would be more like 225W with a smaller die than what is rumored, and they would be happy just to sit within 10% of the GTX 580, or just below it.
I'll quote my post at the beginning of this thread:
This is corroborated in that Chinese/Japanese link with the chart. Basically, the sweet spot is still around, but AMD is now more willing to take chances and enter the high-end GPU realm again, hence why Cayman is coming out later but with more architecture tweaks.
Quote:
I posted this earlier elsewhere:
Hints about what Cayman has coming up next:
HardwareCanucks
[H]ardOCP:
Quote:
November will see the release of the Cayman XT-based HD 6970 and Cayman Pro-based HD 6950 which have all of the features seen in Barts plus enhanced rendering scalability and off-chip buffering for DX11 applications. These will be the spiritual successors to the HD 5870 and HD 5850 and should go head to head with the higher end Fermi cards.
December will see the introduction of Antilles which is meant to be the lynchpin of AMD’s renewed assault on the DX11 market. The HD 6990 will bring untold performance to the table through the use of a pair of Cayman GPU cores and additional features we can’t divulge at this time.
PC Perspective
Quote:
The 6900 series will have a superset of features compared to the 6800 series. This means there will be feature and architecture differences between the 6900 and 6800 series. This allows AMD to take more chances on high-end enthusiast-class GPUs and architect things differently, to really step up the performance that enthusiasts demand. So, just to restate, the new 6800 series will offer the performance of the 5800 series at a lower price, with lower power and a smaller chip.
Hexus
Quote:
Later in the year we will see the release of future architectures that are much more unique in the Cayman and Antilles product lines. We'll have to leave you with that tease for now and touch on both of those items again later.
Looks like AMD might be back to playing the high-end GPU game.
Quote:
These arrive armed with improvements in the two metrics discussed above, soon to be followed by a genuine performance GPU in the form of the Radeon HD 6950 and HD 6970 'Cayman' parts and, a little while later, the dual-GPU Radeon HD 6990, code-named Antilles. Phew!
Higher clocks on Barts help out with pixel fillrate the same way extra ROPs would. So perhaps the optimal number of ROPs is somewhere between 16 and 32, theoretically speaking.
Otherwise I have a hard time believing a mere 2% difference between the setups when the 5830 can't pull ahead of the 5770 by more than 10% with 40% more shaders.
It is also rumoured to have much higher geometry and tessellation performance, so there will be less chance of a bottleneck there. I just hope the VRAM is fast enough: it is still on a 256-bit bus with GDDR5, as the 5870 is, but the chip is twice (or more?) as fast, so this is worrying.
I guarantee you, it is 4D. Very reliable source.
HD6950 has PCI-e 6+6pin connectors, so <=225W, HD6970 has 6+8pin connectors so <=300W.
HD6870 has 6+6pin and is <150W in real power draw. From that I would guess the HD6970 is ~210-230W. Think about what 50% more power gives you in performance compared to Barts.
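The connector arithmetic above can be sketched quickly. These are the PCI Express spec ceilings per power source, not measured draws, and the variable names are just labels:

```python
# PCIe power ceilings per spec: slot 75 W, 6-pin 75 W, 8-pin 150 W.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

# HD 6950 with 6+6pin, HD 6970 with 6+8pin (per the post above).
hd6950_ceiling = SLOT_W + 2 * SIX_PIN_W            # 225 W
hd6970_ceiling = SLOT_W + SIX_PIN_W + EIGHT_PIN_W  # 300 W
print(hd6950_ceiling, hd6970_ceiling)  # 225 300
```

Real draw typically sits well under the ceiling, which is why a 6+6pin board (≤225 W allowed) can land under 150 W in practice.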
There has to be some negatives, there are rarely changes that only result in positives. Increased power consumption is already one.
I think you're assuming everything is shader-limited and that performance will increase linearly with shader count. This has been proven wrong almost entirely this generation.
The increase from a 4890 to a 5870 (same clocks) was between 30-50%.
Or even the GTX 280 to the GTX 480 is around 30-50%.
http://www.tweaktown.com/articles/29...on/index4.html
This was a best-case scenario because AMD got to double everything: texture units, ROPs, shaders.
http://www.anandtech.com/show/2841/17
Drivers might have brought performance up 5%, but they brought up 4890 and 4870 performance as well.
AMD this time is at best going to increase performance by 50%, because there is only a 50% increase in real shaders. And there is a big "but" this time around:
AMD is not going to double the ROPs and texture units; because of that, AMD will start seeing drop-off from that 50%.
Even the wildest optimist on this board besides yourself doesn't believe an 80% increase will happen, because this is the same node.
Much of the purpose of changing to a new architecture is to make those spec gains linear again. If there were only a shader-count difference (1920 shaders on the same technology), there might be only a 5% performance increase between the 5870 and 6970, because the 5850 and 5870 perform the same at the same clocks, which shows the shaders are hitting another bottleneck in the architecture.
Changing to a new architecture is going to help the generation pick up some gains again, but not the perfectly linear 80 percent you're thinking of. Barts XT performs as well as it does because it has the ideal configuration (encounters the fewest bottlenecks) to get maximum performance out of the R600-R800 architecture. AMD is not going to get that ideal configuration off the bat with a new architecture; it took them three and a half years to get there last time.
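The bottleneck argument above is essentially Amdahl's law applied to a GPU. Here is a toy model of it; the 70% shader-limited fraction is an assumed illustrative figure, not AMD data:

```python
def scaling(shader_fraction, shader_speedup):
    """Overall speedup when only the shader-limited fraction of frame
    time scales with the extra shaders; the rest (ROPs, bandwidth,
    triangle setup) stays fixed."""
    return 1.0 / ((1.0 - shader_fraction) + shader_fraction / shader_speedup)

# If 70% of frame time were shader-limited (assumed), 50% more shaders
# would yield only ~30% more performance, not 50%.
print(round(scaling(0.70, 1.5), 2))  # 1.3

# Only a fully shader-limited workload scales linearly.
print(round(scaling(1.0, 1.5), 2))   # 1.5
```

The smaller the shader-limited fraction, the harder the drop-off, which is the "other bottleneck" effect the 5850-vs-5870 comparison hints at.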
BTW, Barts xt is clocked at 900 and not 725mhz.
I invite you to read the Beyond3D report on the Fermi architecture (vs. Cypress) and see where the strengths and weaknesses of Cypress are... then just imagine how much they could gain against Fermi if they worked a bit on those weaknesses (parallelism/geometry, data-flow losses). That gain will not come only from the 4-wide VLIW, which by itself increases double-precision floating-point performance by around 20-25% and single-precision by at least 40%.
I agree to some extent. However, although there is a lot of room for improvement, getting 50% extra performance already shows that improvement. If AMD bumped the specs as-is (1920 shaders) on the same architecture, we would be seeing a tiny performance boost (5-10%), not the 30-45% I am expecting.
The only way you would see an 80 percent jump is with perfect drivers and a game built entirely around the 6970, so that no resources were wasted.
It might even be that they found 4D needs a certain shader count to be more effective than 5D; it depends on the mix of transcendental vs. other calculations. It may well be that from 1920 shaders in 4D they can extract much more performance than from 5D, while at around 1200 SPs it could be the other way round. There are so many variables in it. Also consider that the 4D change will improve DP performance quite a bit: in 5D, only the four thin SPs, not the fat one, were used in DP calculations.
HD 6970 average performance increase over HD 5870:
20% = FAIL !
30% = decent but MEH ! Can be good if priced aggressively, well under US$ 400
40% = VERY GOOD, my expectation, MSRP US$ 449, US$ 479 at most
50% and up = AMAZING, hail the new R300 king !
You mean over the GTX 480? The GTX 580 appears from Amazon pricing to be coming out at $499, and will hopefully be 20-25% faster than the 480 in games, run cool, and overclock well. The GTX 480 is already a good bit faster than the 5870 (especially in SLI/Crossfire two-card setups), so I sure hope the 6970 is more than 40% faster than a 5870; even then it would deserve a $400-or-less price tag from the looks of things. Just my two cents :up: . If it's only 40% faster than a 5870 (or maybe 10% faster than a GTX 480) and launches at $449-479, it would be a fail in my eyes.
A 480 is about 15%-20% better than a 5870. If Cayman XT is 40% better than a 5870 that would put it at about 20-25% better than a 480. If the latest leaks are any indication of real performance, 580 is going to set the bar really low this round at only 15% over a 480.
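The relative-performance arithmetic in that post can be checked directly; all the percentages below are the poster's estimates, not benchmark results:

```python
hd5870 = 1.00
gtx480 = hd5870 * 1.175   # "about 15%-20% better than a 5870" -> midpoint
cayman = hd5870 * 1.40    # hypothetical "40% better than a 5870"
gtx580 = gtx480 * 1.15    # leaked "only 15% over a 480"

# Cayman XT vs GTX 480: ~19%, so the post's "20-25%" is slightly generous.
print(round(cayman / gtx480 - 1, 3))  # 0.191
# GTX 580 vs HD 5870 under those same assumptions:
print(round(gtx580 / hd5870 - 1, 3))  # 0.351
```

Note that percentage leads don't add: a 40% lead over the 5870 shrinks to about 19% over a card that is itself 17.5% ahead of the 5870.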
I would argue therefore that the price tag $449 for 40% would be decent.
First, the GTX 480 only leads the HD 5870 by around 15-20% on average.
Second, I have a reliable source I trust saying the GTX 580 will only be, on average, 15-20% faster than the GTX 480 in real-world gaming (while it will be a BEAST in 3DMark Vantage Extreme).
So perhaps you should tame your expectations for the GTX 580 a bit; it will be a decent refresh, but not a second coming, and never was going to be. If you can do that, then perhaps my expectation won't look too extravagant in your eyes.
With that 40% jump, AMD would have turned around from being 15-20% behind in absolute performance to being a level player in the enthusiast segment, still on the same 40 nm process node.
Regarding pricing: with around the same performance but costing around 10% less, I think that's plenty fair and competitive in the first place, already factoring in nVidia's lead in the (proprietary) features department.
Regarding OCing, we shall see; both sides have their own advantages and disadvantages here. Serious OCers are still a small segment, even among enthusiasts, and OCing beasts like these cards won't be the walk in the park it is with their smaller brothers (GF104- and Barts-based cards).
Can't compare the HD 4890 and HD 5870.
There was some obvious efficiency loss moving to DX11; just look at the HD 5770 vs. 5830 vs. 4890.
There's (logically) only room for improvement in efficiency, as we saw with Barts.
Still, I agree it will be well below a 50% increase in performance if the current specs are true.
Not sure about your figures.
I thought the 480 was 10-15% faster than the 5870, and that the 580 is rumoured to be only 15-20% faster than the 480?
Personally I can see the 6970 being able to achieve 50% greater performance over 5870 just by removing bottlenecks and increasing transistor count.
Maybe BS, but I also recently heard a rumour that the full fat Cayman will actually have 2400 shaders.
Yes, I know, but they got to double the specs of everything, which made up for much of the bottleneck. The 6970 is not going to get that luxury, and the biggest increase will be in the number of real shaders; hence the size of the chip not reaching GTX 480 levels.
If they can improve the performance by 50% that already shows a huge efficiency jump. I am still thinking about 40% because I think they will keep the chip under 400mm2.
I think the huge increase in efficiency can be easily noticed from the following comparison.
Double the TMUs, double the ROPs, double the shaders, plus a tad faster memory = 44% increase.
A 50% increase in shaders plus significantly faster memory = 40% increase in speed.
Even if they only get 40%, considering they are not beefing up the rest of the architecture as much, it will still be an accomplishment given they are on the same node.
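The per-unit efficiency implied by that comparison can be made explicit. The 44% and 40% figures come from the posts above and are estimates, not benchmarks:

```python
def efficiency(perf_gain, unit_gain):
    """Fraction of a theoretical unit-count increase realised as
    actual performance (1.0 would be perfectly linear scaling)."""
    return perf_gain / unit_gain

# Doubling everything (+100% units) bought +44% performance.
print(efficiency(0.44, 1.00))  # 0.44
# Cayman estimate: +50% shaders buying +40% performance.
print(efficiency(0.40, 0.50))  # 0.8
```

Going from 0.44 to 0.8 on this crude measure is the "huge efficiency jump" the post is pointing at, especially since the second case only scales the shaders.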