IF it is 4D, 1920 shaders, and 860 clock, I'm going to predict about 50% faster than a 5870 in most games. In games with heavy tessellation it should be more, but it isn't clear how much.
Didn't someone say that 4D shaders run at something like 90-98% the equivalent in 5D shaders? So 1920 x 4D would be equivalent to nearly 2400 SP's.
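As a back-of-the-envelope check of that claim (treating the 90-98% figure above as a rumoured assumption, not a fact), 1920 SPs arranged in 4-wide groups would do roughly the work of this many 5-wide SPs:

```python
# Rough 4D-vs-5D equivalence check. The 90-98% per-group
# efficiency range is the rumour quoted above, not measured data.
sp_4d = 1920
groups = sp_4d // 4            # 480 VLIW4 groups

for eff in (0.90, 0.98):
    equivalent_5d = groups * 5 * eff
    print(f"{eff:.0%} efficiency -> ~{equivalent_5d:.0f} 5D-equivalent SPs")
```

At the optimistic end that lands close to the "nearly 2400 SPs" figure quoted above.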
Of course, the issue is how much of a bottleneck Cypress had, and we saw that 2 x Juniper often scaled better than 1 x 5870 despite being the equivalent in unit count, so it'll be interesting to see if Cayman uses its GPU much more effectively.
I posed this question in another forum: what about AMD's sweet spot strategy? They always like to keep margins high with respect to production costs. If this is true, TDP would be more like 225W and the die smaller than what is rumored. If that's the case, they are just happy to sit within 10% of GTX 580, or just below it.
I'll quote my post at the beginning of this thread:
This is corroborated in that Chinese/Japanese link with the chart. Basically, the sweet spot is still around, but AMD is now more willing to take chances and enter the high-end GPU realm again, hence why Cayman is coming out later but with more architecture tweaks.
Quote:
I posted this earlier elsewhere:
Hints about what Cayman has coming up next:
HardwareCanucks
[H]ardOCP:
Quote:
November will see the release of the Cayman XT-based HD 6970 and Cayman Pro-based HD 6950 which have all of the features seen in Barts plus enhanced rendering scalability and off-chip buffering for DX11 applications. These will be the spiritual successors to the HD 5870 and HD 5850 and should go head to head with the higher end Fermi cards.
December will see the introduction of Antilles which is meant to be the lynchpin of AMD’s renewed assault on the DX11 market. The HD 6990 will bring untold performance to the table through the use of a pair of Cayman GPU cores and additional features we can’t divulge at this time.
PC Perspective
Quote:
The 6900 series will have a superset of features compared to the 6800 series. This means that there will be features and architecture differences between the 6900 and 6800 series. This allows AMD to take more chances on the high-end enthusiast class GPUs and architect things differently, to really step up performance that enthusiasts demand. So, just to restate, the new 6800 series will offer the performance of the 5800 series, at a lower price, with lower power, and a smaller chip.
Hexus
Quote:
Later in the year we will see the release of future architectures that are much more unique in the Cayman and Antilles product lines. We'll have to leave you with that tease for now and touch again on both of those items later.
Looks like AMD might be back to playing the high-end GPU game.
Quote:
These arrive armed with improvements in the two metrics discussed above, soon to be followed by a genuine performance GPU in the form of the Radeon HD 6950 and HD 6970 'Cayman' parts and, a little while later, the dual-GPU Radeon HD 6990, code-named Antilles. Phew!
Higher clocks on Barts help out with the pixel fillrate the same way as the extra ROPs would. So perhaps the optimal amount of ROPs would be somewhere in between 16-32, theoretically speaking.
Otherwise I have a hard time believing a mere 2% difference between the setups when the 5830 can't pull ahead of 5770 more than 10% with 40% more shaders.
It is also rumoured to have much higher geometry and tessellation performance, so there will be less chance of a bottleneck. I just hope that the VRAM is fast enough; it is still on a 256-bit bus with GDDR5 like the 5870, but the chip is twice as fast (or more?), so this is worrying.
I guarantee you, it is 4D. Very reliable source.
HD6950 has PCI-e 6+6pin connectors, so <=225W, HD6970 has 6+8pin connectors so <=300W.
HD6870 has 6+6pin and is <150 W in real power draw. From that I would guess that HD6970 is ~210-230W. Think about what 50% more power gives you in performance compared to Barts..
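For reference, the arithmetic behind those ceilings comes from the PCI-e power budgets (75 W from the slot, 75 W per 6-pin connector, 150 W per 8-pin connector); a quick sketch:

```python
# PCI-e power budgets per the spec: slot 75 W, 6-pin 75 W, 8-pin 150 W.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150

hd6950_ceiling = SLOT + 2 * SIX_PIN          # 6+6pin board
hd6970_ceiling = SLOT + SIX_PIN + EIGHT_PIN  # 6+8pin board

print(f"6+6pin ceiling: {hd6950_ceiling} W")  # 225 W
print(f"6+8pin ceiling: {hd6970_ceiling} W")  # 300 W
```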
There has to be some negatives, there are rarely changes that only result in positives. Increased power consumption is already one.
I think you're assuming everything is shader limited and there will be a linear increase in performance with shader count. This has been proven wrong almost entirely this generation.
The increase from a 4890 to a 5870 (same clocks) was between 30-50%.
Or even the GTX 280 to 480 is around 30-50%.
http://www.tweaktown.com/articles/29...on/index4.html
This was a best case scenario because AMD got to double everything, texture units, ROPS, Shaders.
http://www.anandtech.com/show/2841/17
Drivers might have brought performance up 5% but it also brought up 4890 and 4870 performance as well.
AMD this time is at best going to increase performance by 50%, because there is only a 50% increase in raw shader count. And there is a big "but" this time around.
AMD is not going to double ROPs and texture units this time around; because of this, AMD will start seeing drop-offs from that 50%.
Even the wildest optimist on this board besides yourself doesn't believe an 80% increase will happen, because this is the same node.
Much of the purpose of changing to a new architecture again is trying to make those spec gains linear again. If there were only a bump to 1920 shaders (using the same technology), there might be only a 5% performance increase between the 5870 and 6970, because the 5850 and 5870 perform the same at the same clocks, which shows the shaders are hitting another bottleneck in the architecture. Changing to a new architecture is going to help the generation pick up some gains again, but not this perfectly linear 80 percent you're thinking of. Barts XT performs as well as it does because it has the ideal configuration (encounters the least amount of bottlenecks) to get maximized performance out of the R600-R800 architecture. AMD is not going to get this ideal architecture right off the bat, as it took AMD three and a half years to get there.
BTW, Barts XT is clocked at 900 MHz, not 725 MHz.
I invite you to read the Beyond3D report on the Fermi architecture (vs Cypress) and see where the strengths and weaknesses of Cypress are... and just imagine, if they have worked a little bit on those weaknesses (parallelism/geometry/data-flow loss), how much the gain could be against Fermi. This will not come only from the 4-wide VLIW (which increases double-precision floating point performance by around 20-25% and single precision by 40%; a cycle will execute at least 20% faster just from this change for double precision, and 40% for single)....
I agree to some extent. However, although there is a lot of room for improvement, getting 50% extra performance is already showing that improvement. If AMD was going to bump the specs as-is (1920 shaders) with the same architecture, we would be seeing a tiny performance boost (5-10%), not the 30-45% I am expecting.
The only way you would see an 80 percent improvement jump is if you had perfect drivers and a game built entirely for the 6970, so no resources were wasted.
It even might be that they found out that for 4D there must be enough shaders to be more effective than 5D. It depends on trig vs other calculations. It very well might be that from 1920 shaders in 4D they can extract much more performance than in 5D, but at ~1200 SPs it could be vice versa. There are so many variables in it. Think also about the fact that the 4D change will improve DP performance quite a bit; in 5D only those 4 thin SPs, not the fat one, were used in DP calc.
HD 6970 average performance increase over HD 5870:
20% = FAIL !
30% = decent but MEH! Can be good if priced aggressively, well under US$ 400
40% = VERY GOOD, my expectation, MSRP 449, US$ 479 at most
50% and up = AMAZING, hail the new R300 king !
You mean over GTX 480? GTX 580 is coming out at $499 it appears by Amazon pricing, and will hopefully be 20-25% faster than the 480 in games, run cool & overclock well. The GTX 480 is already a good bit faster than the 5870 (especially in SLI/Crossfire 2x card mode).... so I sure hope that the 6970 is more than 40% faster than a 5870, and it would then still be deserving of a $400-or-less pricetag from the looks of things. Just my two cents :up: . If it's only 40% faster than a 5870 (or thus maybe 10% faster than a GTX 480), and launched at $449-479, it would be a fail in my eyes.
A 480 is about 15%-20% better than a 5870. If Cayman XT is 40% better than a 5870 that would put it at about 20-25% better than a 480. If the latest leaks are any indication of real performance, 580 is going to set the bar really low this round at only 15% over a 480.
I would argue therefore that the price tag $449 for 40% would be decent.
First, the GTX 480 only leads the HD 5870 by around 15-20% on average.
Second, I have a reliable source that I trust saying that the GTX 580 will only be on average 15-20% faster than the GTX 480 in real world gaming (while it will be a BEAST in 3DM Vantage Xtreme).
So, perhaps you should tame your expectation over GTX 580 a bit, it will be a decent refresh but not a second coming, never was. If you can do that, then perhaps my expectation won't be too extravagant in your eyes.
With that 40% jump, AMD has made a turn around from basically 15-20% behind in absolute performance into a level player in the enthusiast segment, still using the same 40 nm process node.
Regarding pricing, with around the same performance but costing around 10% less, i think that's plenty fair & competitive in the first place, already factoring in nVidia's lead in the (proprietary) features department.
Regarding OCing, we shall see, both sides have their own advantages & disadvantages regarding this. Serious OCer is still a small segment, even among enthusiasts, and OCing a beast like those cards won't be a walk in the park like their smaller brothers (GF104 and Bart based cards).
Can't compare HD4890-HD5870.
There was some obvious efficiency lost moving to DX11.. just look at HD5770 vs 5830 vs 4890.
There's (logically) only room for improvement in efficiency, as we saw with Barts.
Still, I agree it will be well below a 50% increase in performance if current specs are true..
Not sure about your figures.
I thought the 480 is 10-15% faster than 5870, and that the 580 is rumoured to only be 15-20% faster than 480?
Personally I can see the 6970 being able to achieve 50% greater performance over 5870 just by removing bottlenecks and increasing transistor count.
Maybe BS, but I also recently heard a rumour that the full fat Cayman will actually have 2400 shaders.
Yes, I know, but they got to double the specs of everything which makes up for much of the bottlenecks. The 6970 is not going to get that luxury and the biggest increase will be the number of real shaders, hence the size of the chip not reaching gtx 480.
If they can improve the performance by 50% that already shows a huge efficiency jump. I am still thinking about 40% because I think they will keep the chip under 400mm2.
I think the huge increase in efficiency can be easily noticed from the following comparison.
(double TMUs, double ROPs, double shaders, a tad faster memory) = 44% increase.
50% increase in shaders, significantly faster memory = 40% increase in speed.
Even if they get 40%, considering how they are not beefing up the rest of the architecture as much, it will still be an accomplishment considering they are on the same node.
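To put a rough number on that efficiency argument (using the figures from the posts above as assumptions, not measurements), one can compare how much of the theoretical headroom each jump would capture:

```python
# Scaling efficiency = observed speedup / theoretical unit-count speedup.
# The 44% and 40% figures are the ones argued above, not measured data.
def scaling_efficiency(observed_gain, theoretical_gain):
    return observed_gain / theoretical_gain

last_gen = scaling_efficiency(0.44, 1.00)  # doubled units -> +44% observed
this_gen = scaling_efficiency(0.40, 0.50)  # +50% shaders -> +40% hoped for

print(f"4890 -> 5870: {last_gen:.0%} of theoretical")  # 44%
print(f"5870 -> 6970: {this_gen:.0%} of theoretical")  # 80%
```

On those assumptions, a 40% gain from a 50% shader bump really would be a large jump in per-unit efficiency compared to last generation's doubling.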
I think AMD still wants to keep the card around 400mm2 and keep power reasonable. Adding twice as much memory and increasing its speed is already going to add a lot. 50% more shaders is going to add quite a bit as well. And if these shaders are used properly, they are probably going to consume more watts per shader.
^^^
Apparently AMD wants to take a risk with Cayman as Barts can hold the fort, so I'm expecting about 450mm2...
By getting rid of one shader in a group that goes mostly unused and spreading the remaining functionality between the other shaders, they can increase utilization. That could result in better performance/watt by not having some mostly idle units.
I think we will have to wait and see, there is quite a lot we don't know still.
Even Charlie is quiet. By this time he's usually writing entertaining articles.
Whew, didn't mean to rile people up with my response, but usually most reviews I've seen show a 25+% gap between 5870 and 480 @ 2560x1600 (which is the only resolution I really watch nowadays and have for awhile), ESPECIALLY in crossfire/sli modes with two cards it's even wider. I could be wrong, and if I am misremembering the reviews, just carry on to my next paragraph...
My figures may be off a bit, but if Cayman XT is for argument's sake 10% slower than a 580, I would expect it to be more than 10% off of the 580's price overall (more like 15-17% less) since they need a bigger incentive to settle for less (same goes for if Cayman XT is faster than the 580, of course...). The top card always carries a bigger premium in respect to price/performance, and I expect amd/nvidia will capitalize on that whichever way the pendulum swings here.
P.S. I'm in for 2 of whichever is faster... ;). Frankly I'd love for them to come in nearly even, and spark off some nice price cuts.
Don't worry about it. I know me and 2 other people were quick to reply, but that's just because unlike [H] forums (full of sheeple and lemmings: http://hardforum.com/showpost.php?p=...ostcount=6), fud is quickly corrected here. Anyways you will most likely be waiting a bit longer, 1-2 months at least, for GTX 580 due to paper launch. Of course you can always be an early adopter and get in on the premium price in November.
They have reworked Cypress' MC and they are a little bit better as far as power efficiency goes. Even though it is double the memory, the chips are double the density, supposedly, and on a lower node, which should decrease power consumption slightly compared to the HD5870 2GB models, which I believe added a good ~30-40W over the stock HD5870. Obviously having 2GB will increase power consumption, but it wouldn't be as much as last gen's 2GB versions.
I would expect GTX580 and Cayman to at least be trading blows, since Cayman was targeting +10-15% over a fully specced GTX480 w/ 512SPs. So even if they miss the mark a bit or GTX580 gets a bit more performance, there shouldn't be a huge difference in performance. I just hope that they will have Cayman Pro at or slightly above GTX480 levels of performance.
One thing to ponder about, if GTX 580 is gonna be so wonderful and HD 6970 is gonna suck, i just can't help thinking why nVidia would launch it about 2 weeks sooner than its rival. It would be a total PR victory if you can launch a product that tops competitor's just launched product, the HALO effect would be fantastic, sudden comeback FTW. But i digress, perhaps Cayman does suck and nVidia have a pile of GTX 580 in warehouse begging to be released, we'll see.
Interesting point. GTX 580 was kept very secret til less than a month before release, so its also possible they were waiting to release something quickly to stem the tide
Given that it looks pretty much like it's what the GTX 480 should've been months ago, it's possible
I have the opposite feeling. If they are going to launch it, launch it early, even if they have no quantities to sell. That way the hype of the 6970 is derailed because it can no longer claim fastest GPU crown.
If NV had crap loads of cards to sell I could understand your reasoning (as they want to sell a crap load of cards before the 6970 comes out), but it sounds like quantities are horrendous. So NV is not launching this card early to make money, obviously. I think launching such a low quantity product is just to steal your competitor's fanfare.
This release is a pure marketing move. If the 6870 was launched before the gtx 460, the release of the gtx 460 would have been a crazy disappointment.
I think you have to ask yourself: why would NV release a card that's not going to be for sale immediately ahead of its competitor if at the very least it didn't beat it? There is no logical reason to release it later.
I would say because now they have an established product ready for market even if in limited numbers vs the challenges brought about in just getting v1.0 out of the labs and into the market. I mean we got A3 silicon on gf100, everything since has been A1 silicon hasn't it.
Because it wasn't ready at all (the card was a mess of wires until the beginning of the year) and reviews would not have been the most positive (they weren't really positive as it was); early models of the card were hitting 500-600 MHz, and sending such a disastrous card to reviewers would only make AMD look good, as the 5870 would have been faster. The only reason NV would release it early is to get positive reviews; otherwise you would just be hyping your competitors up.
What do you think would happen if NV had done an early GTX 480 release and it performed 10 percent less while consuming way more power? That's the answer to your question.
Even the current edition of the GTX 480, considering what the card is, was released too early and should have been fixed before it came out.
You could say the GTX 480 has a very beta feel to it and was released too early, as quantities were bad and there were some obvious flaws in the chip design (no card should have a hole for cooling).
It can go both ways, like Zerazax suggested & i admitted. :yepp:
Perhaps nVidia does have a clear winner in hand and wants to make a headline first with the new ace. OTOH, if you have a loser card vs the upcoming competitor card, don't you want to give it the best light by releasing it first, so it won't have to deal with its conqueror and can instead walk all over its old nemesis (HD 5870)?
Either way, it doesn't really matter in the end, since the release dates aren't too far off beetween them, as consumer, just wait till the dust clear & prices settle, then pick the one that suit your needs & preferences best. I think we all can agree on that. :)
It would only get the best light if they had cards to sell. By the time the GTX 580 becomes commercially available, if Cayman XT is as good as some of you think, the hype would be gone and for naught.
I agree it is nice to have an idea of performance when they are released so close to each other (NV's is more of a performance preview), so we can wait and see without either company jerking the consumer around and setting us up for disappointment (Fermi GF100).
Hurt who? The release and price of the 4870 was a nightmare for Nvidia. ATI crashed NV party and worse, with quite the quantity of cards. AMD plans for a card really didn't change course. This if anything demonstrates how you can kill a launch with the right product.
If you mean didn't hurt AMD, of course it didn't. AMD plans were just reinforced by the outrageous pricing of the gtx 280. This is a different scenario for NV though as they have no cards to sell, just cards that can be used as marketing tools. Releasing a completely inferior product that can be used only for marketing at this point in time would be simply pointless.
It would be better to spin and not release a card at all (just say a card is coming, like they did with the GTX 480) to hold back sales of Cayman. They can't do this if GF110 is released already and people already know its performance.
Giving performance details when your card is not released yet and your competitor's cards are out in quantity is like showing your cards at the first turn in poker. You're just giving your competitors the advantage, because people will know whether to wait for the cards or not. I think NV has confidence at this point in the performance of GF110 to pull such a move.
If NV's card is 600 dollars and Cayman XT is 500 and faster, NV would have just done AMD a favor. Worse yet, you didn't even sell any cards before the arrival of Cayman XT because you didn't have any. That just sounds like a notoriously dumb marketing move and would just help AMD.
One more thing that has led me to believe NV is not completely screwed anymore is their stock.
http://www.bloggingstocks.com/2010/1...es-downgrades/
It has been upgraded from neutral to buy by analysts and the stock has risen 20% in the last month too.
If NV was completely screwed, I don't think this would be the case.
you mean as in 3x faster than we need? ^^
seriously, whoever NEEDS that performance is gaming at 2560x1600+ or 1920x1080+ 3D, and if you can afford a display for 500-1000$ then youd have a weird sense of value if you consider 550$ for a vga to feed that display too much... :p:
i think the real dealbreaker for both nvidia and ati right now is the lack of killer apps, ie games, plus the high cost of 24"+ displays... if displays were cheaper AND/OR there would be at least 2 kick 4ss games that NEED a highend gpu to be maxed out, highend would actually sell some actual volume and not be a corporate e-peen contest :D
what reviews?
normalized there is a 10% difference on average according to tpu
http://tpucdn.com/reviews/Zotac/GeFo...rfrel_2560.gif
anandtech 2560x1600 max:
0% difference in BFBC2
http://images.anandtech.com/graphs/graph3987/33205.png
~5% difference in Wolfenstein
http://images.anandtech.com/graphs/graph3987/33236.png
~10% difference in crysis warhead
http://images.anandtech.com/graphs/graph3987/33215.png
~10% difference in metro2033
http://images.anandtech.com/graphs/graph3987/33230.png
~15% difference in dirt2
http://images.anandtech.com/graphs/graph3987/33221.png
~20% difference in hawx
http://images.anandtech.com/graphs/graph3987/33224.png
~20% difference in ME2
http://images.anandtech.com/graphs/graph3987/33227.png
~30% difference in civ5 (no driver tweaking yet from ati i think)
http://images.anandtech.com/graphs/graph3987/33212.png
~30% difference in Stalker CoP
http://images.anandtech.com/graphs/graph3987/33233.png
(unplayable on either one, at playable settings the difference is only ~20%)
http://images.anandtech.com/graphs/graph3987/33232.png
~30% difference in battleforge (why is it in dx10 and not 11?)
http://images.anandtech.com/graphs/graph3987/33209.png
that's a 16-17% difference between the 5870 and the 480 on average according to anandtech, which i think chose a very good selection of games...
the 480 has MUCH better min fps than the 5870 and 68xx... but avg fps is in no way 25% faster on average; 25%+ on average, that's just ridiculous man, i wanna see those reviews :D
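A quick sanity check: averaging the per-game deltas listed above (counting Stalker at the ~30% unplayable-settings figure) reproduces that number:

```python
# Per-game 480-vs-5870 deltas (%) as quoted from the Anandtech graphs above.
deltas = {
    "BFBC2": 0, "Wolfenstein": 5, "Crysis Warhead": 10, "Metro 2033": 10,
    "DiRT 2": 15, "HAWX": 20, "Mass Effect 2": 20, "Civ V": 30,
    "Stalker CoP": 30, "BattleForge": 30,
}
avg = sum(deltas.values()) / len(deltas)
print(f"average delta: {avg:.0f}%")  # 17% (16% with Stalker counted at 20%)
```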
n-zone? ^^
been waiting for that answer since April
That depends on how much Nvidia actually knows about what AMD is doing. Seeing how tight-lipped AMD was with Barts and Cayman, my bet is that Nvidia doesn't know much.
But Nvidia DOES know that if Cayman beats the GTX 480, they may very well have no answer besides the 580, which looks mostly like the full GF100 with process improvements.
Basically, for Nvidia, it's either release the card and at least make some money, or not release it at all and make no money. It's pretty obvious which choice is the best.
Not releasing the card at all would only hurt share holders. You've already sunk cost into researching the card, and since this card is likely based on the full GF100, it makes no sense not to release it after you've spent all this time trying to fix GF100 to have the full 512 units.
Quote:
It would be better to spin and not release a card at all (just say a card is coming, like they did with the GTX 480) to hold back sales of Cayman. They can't do this if GF110 is released already and people already know its performance.
Giving performance details when your card is not released yet and your competitor's cards are out in quantity is like showing your cards at the first turn in poker. You're just giving your competitors the advantage, because people will know whether to wait for the cards or not. I think NV has confidence at this point in the performance of GF110 to pull such a move.
And who's to say how many GTX 580's Nvidia actually has? The only paper launch rumors out there are from a few sources. In all likelihood, Nvidia has enough to get some buyers, especially the die hard Nvidia fans, and generate buzz. Plus, being a 5xx generation, even if it's just a rebrand, makes money with OEMs and wins points from less knowledgeable shareholders.
And you aren't giving the competitors any advantage if your competitor is tight-lipped about performance. Now, this may well be a repeat of GT200/RV770 - if GTX 580 performance is underwhelming, AMD may push Cayman out (or at least leaks out) earlier, just like RV770 where they saw that the 4850/4870 more than beat its expectations and we saw reviews authorized for release a week before they were supposed to be.
Again, you're assuming Nvidia has none in stock. There are clearly pictures of people with them for reviews, so the card does exist. And pricing it at 600 dollars then lowering it to 400 once Cayman hits, if Cayman is faster, is nothing that Nvidia hasn't done before.
Quote:
If NV's card is 600 dollars and Cayman XT is 500 and faster, NV would have just done AMD a favor. Worse yet, you didn't even sell any cards before the arrival of Cayman XT because you didn't have any. That just sounds like a notoriously dumb marketing move and would just help AMD.
We ARE talking about the company that rebranded G92 130598130980158 times, the same one who is naming this the 580 when it's looking like it's more of a 485... you don't think they'd do something like this?
Stocks hardly mean anything. Everyone bought stock before the release of the GTX 280 thinking it was going to be great, and it went downhill. How many investors actually know what the 580 is going to be about? Or what AMD's plans are? Few to begin with. Now how many know both?
Quote:
One more thing that has led me to believe NV is not completely screwed anymore is their stock.
http://www.bloggingstocks.com/2010/1...es-downgrades/
It has been upgraded from neutral to buy by analysts and the stock has risen 20% in the last month too.
If NV was completely screwed, I don't think this would be the case.
Edit: One last point
Remember those GTX 580 performance slides from Nvidia PR? They're being benched against the GTX 480, the 6870, and the 5870.
Now imagine if the 6970's already out on the market. What does Nvidia PR do if Cayman is significantly better? Can Nvidia still say "fastest GPU on the market?" Of course not, it would be an utter PR disaster, and ultimately hurt your relations with shareholders. Everyone knows those PR slides (starting at 80%) are complete BS, but they please shareholders.
Frankly, the safe thing to do is release this card early, because if you release it late, you run into Fermi 2.0's scenario - late to the game, and possibly underwhelming. Release it early, and at least you can claim you've got the fastest GPU, even if it lasts only 10 days
AMD is trying to balance DP and tessellation, and keeping the shaders fed is a priority due to the lack of efficiency on Cypress.
AMD is lacking tessellation output and DP power for their professional lineup.
Cayman will be a well-balanced card with shader and geometry output.
Andy Warhol once said that being famous is 15 minutes in the spotlight.
Nvidia's Fermi beats AMD at tessellation, which, for people who don't understand it, is the future of gaming on the PC; thanks to tessellation we can increase eye candy while keeping fps high.
We are talking better than Crysis 1 and 2 graphics here, and better frames per second with tessellation than Crysis 1 and 2.
Now, AMD, or rather Richard Huddy, states that their cards have enough tessellation (damage control) for the image quality/fps. (Not true.)
A game developer thinks: gee, tessellation allows me to build a cheaper set of textures, since less time is needed to create the art assets. But they have to build for the people who can run the game, which isn't the case with AMD's 4800 series, nor their 5800 series, nor their 6800 series.
Maybe they address this in their 6900 series, but then it's a US$400+ game. For that you can buy both an Xbox 360 and a PS3 with motion gaming for the family.
Fermi is an unbalanced card for the moment; the 580GTX shows how it's done, and it is still on 40nm. With 28nm they'll be rocking due to what they have learned from their implementation, and yes, they still build big chips.
Faster doesn't allow small.
It's like guys and their e-penis: if it's big and powerful they feel really good about themselves; if it's small and efficient they can fool themselves until they meet a girl who knows big and powerful when she sees one.
In technology you need money and resources; Intel has those in spades, and so does Nvidia.
AMD is still a low-budget company with horrible marketing.
People don't often buy based on well-informed decisions; they buy from benchmarks or by just walking into the store and buying a computer.
They buy what they know of, and it isn't AMD.
Intel Inside; the way it's meant to be played, with NVidia.
Now, where is AMD in that arena?
Nowhere....
I do agree that in the cpu market, AMD is not doing great against Intel but in the gfx market, AMD (ATi) is competing pretty well against nvidia and there are lots of pre-built systems or laptops with ATi cards.
I should add, I want low idle power usage from my graphics cards. I couldn't care less about load, but some of the Nvidia cards have insane idle usage. If they made a GTX 580 dual-GPU card I would be interested. Sadly, Nvidia releasing a real Fermi card a year after they promised it kind of pisses me off.
6950 is 1536sp @ 890 Mhz apparently.
1gb vram
^^^
Not to mention he's either high or insane...
Sincerely, part of me hopes you're right regarding this, GTX 580 rocks & priced sensibly @US$ 549 at most (i'm quite confident it will, with the leaked info that i've got about it), and HD 6970 is somewhat inferior and costs @US$ 449 at most, fulfilling my pricing expectation & nullifying yours. Therefore, i don't have to part ways with this forum posting wise for the next 6 months. ;) :D
OTOH, the other part of me is willing to take the punishment, as long we get another R300 coming our way. I'm just torn inside. :ROTF:
Forrest@B3D. Not sure if he's messing around or not.
Quote:
Originally Posted by Forrest
4 Rasterizers with 2 tessellator units.
2 Rasterizer with a Tessellation Unit per Ultra-Threaded Dispatch Processor (UTDP) and each UTDP has 12 SIMDs.
What do you think ??
He's most certainly not. Only thing is that he might have the cards factory disabled by AMD with usual pre-launch crippled BIOS but I doubt that's the case so close to launch.
You should remember his post from last year and first pics of Juniper he posted on B3D.
http://forum.beyond3d.com/showthread...65#post1341265
http://forum.beyond3d.com/showpost.p...postcount=3994
The card has 8 plus 6 pin power (which makes 300W) so I think it'll be around 250W, which is consistent with what we have heard until now.
If AMD has managed to keep performance/watt the same as 6800 (5770's perf/watt is around the same as 5870's, for example), we could be looking at a card that's 65% faster than HD6870, which would make it:
- 50-55 percent faster than HD5870.
- 15 percent faster than HD5970.
- 12-14 percent faster than the GTX 580 which according to rumors should be around 15 to 20 percent faster than GTX 480.
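The chain of ratios behind those bullets can be sketched as follows (all figures are the speculation above, not measurements; the 6870-vs-5870 ratio is my assumed fill-in for what the post implies):

```python
# Chained relative-performance estimate, all inputs speculative:
# Cayman XT assumed 65% faster than HD6870; HD6870 assumed to be
# roughly 91-94% of an HD5870.
cayman_vs_6870 = 1.65

for r6870_vs_5870 in (0.91, 0.94):
    cayman_vs_5870 = cayman_vs_6870 * r6870_vs_5870
    print(f"6870 at {r6870_vs_5870:.0%} of 5870 -> Cayman "
          f"{cayman_vs_5870 - 1:.0%} faster than 5870")
```

That range reproduces the "50-55 percent faster than HD5870" bullet from the chained assumptions.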
Also we must consider that a new core configuration will have a variable improvement across applications. Real-world code typically lets the 5th shader idle in Evergreen, but benchmarks and tuned code can utilize the 5th shader more effectively. So we may have a situation where synthetic tests don't show as much of an improvement on Cayman as real-world tests do: a different % performance improvement depending on how effectively code utilized the old architecture.
While I don't think the situation will be that simple, or that your numbers are correct (I don't think AMD will make their own version of the baconator), I do think your post raises another interesting question.
Will AMD improve their performance/watt with this mArch? Stay the same? Or regress? Because it seems nVidia has successfully increased their efficiency on this metric, no matter how small the gain might be perceived. :shrug:
Well, to be fair, nVidia's predecessor product wasn't exactly an efficient build in the first place; it would be much easier going from horrible to acceptable than improving on a quite efficient design like the Cypress boards. :yepp:
* 2.15 billion 40nm transistors
* TeraScale 2 Unified Processing Architecture
o 1600 Stream Processing Units
o 80 Texture Units
o 128 Z/Stencil ROP Units
o 32 Color ROP Units
* GDDR5 interface with 153.6 GB/sec of memory bandwidth
* Engine clock speed: 850 MHz
* Maximum board power: 188 Watt
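As a quick sanity check on the 153.6 GB/sec figure in that spec list, the arithmetic works out exactly from the HD 5870's released memory configuration (256-bit GDDR5 at 1200 MHz):

```python
# Quick check of the 153.6 GB/sec bandwidth figure in the spec list above.
# HD 5870: 256-bit GDDR5 bus at 1200 MHz, i.e. 4.8 Gbps effective per pin.
bus_width_bits = 256
effective_gbps_per_pin = 4.8   # GDDR5 is quad-pumped: 1200 MHz x 4

bandwidth_gb_per_s = bus_width_bits * effective_gbps_per_pin / 8  # bits -> bytes
print(bandwidth_gb_per_s)  # 153.6, matching the list
```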
so better shader utilisation ... with about the same number of shaders should give us a good enough boost :D
... please AMD, give us Antilles already
I do not think such info is worth dedicating a thread to, so here:
http://news.softpedia.com/news/AMD-S...t-164938.shtml
No sense of humor m8?
Cayman has 20% less die space for the same performance as Cypress, with a rearranged shader layout and more tessellation, and if AMD sticks to the small-is-better routine, that doesn't get us a lot more, especially on 40nm. What else is there?
Now, I provided more information there than your silly troll comment.
If AMD did take the gloves off and go with a bigger chip and a higher price range, which they might have to do to pull an all-out card for this generation, then an R300 might be had, but that is unlikely.
They also have to redo the drivers for the new shaders, which allows for improved performance over time. Even if Cayman was planned for 32nm and that work has been underway for over a year, it still takes a lot of time to write drivers well.
Gee, I provided even more information than your silly comment.
Since no one knows what is going on: did AMD send out a limited, gimped 6950?
With less shaders?
Calling it a XT?
How effective is the new shader arrangement and what have they tweaked and changed due to the 40nm situation?
You are, in effect, displaying troll behaviour yourself by providing no information or speculation about AMD's new generation of cards; you display troll behaviour with such statements.
If people don't understand that, they really should check their logic.
http://en.wikipedia.org/wiki/Troll_%28Internet%29
Dave Baumann is getting a new suit, as he is going on stage to present his latest, correction, AMD's latest card and new generation. He looks over the forum at Beyond3D and shakes his head at how off they've been with their speculations. The shadows grow darker, the cloak-and-dagger crew is going back into hiding, time for a new day to shine the light.
Maybe we save the whales one day but not today.
He walks out, the flashes of light, the press, the expected journalists, signed their NDA and now its time for the world to know.
Much more fun than saying "way to troll"... don't you think?
Have fun, life's too short.
If you don't think that, gee... you're missing out on life.
So according to current rumors, the 6950's raw processing power (1536 SPs) is 10% higher than the 5870's; together with the supposedly more efficient shader setup, this thing should be able to beat the 480, and the 6970 should be on par with the 580 (if we consider the gap between the 50 and 70 cards of previous series).
Muropaketti said that AMD sent ~1600sp cutdowns to partners.
It might very well be that different partners got different-sized GPUs, so that any leak
a) could be traced
b) in the case of more than one leak, the leaks would contradict each other.
It would be a fail for AMD to have an all-new architecture and still not beat an old, hot chip like the 580 to take the fastest-single-GPU crown.
Maybe by Xmas we'll see some chips.
-----------
fudzilla
Friday, 05 November 2010 10:23
Cayman might not make it for 22nd Nov launch
Written by Fuad Abazovic
No final bios, cards, partners grumpy
Sources on a small island that doesn’t get along well with red China are informing us that Cayman has little chance to make the scheduled November 22nd launch.
The situation doesn’t look good as the final chip never made it to partners and the final boards are not ready yet. Partners do have early samples, but our sources assured us that there is no final bios and that drivers are not ready.
Just to assure you that this is not Nvidia talking, these guys are all unhappy, as they were supposed to get some good sales numbers over the holiday season, since quite a few people are excited about Cayman.
As of today, November 5th, at least some of the major players have no cards, no final BIOS and no drivers. They are saying that, the way things worked in the past, it would take a small miracle to have full availability at launch. Of course, it's possible that the launch will proceed as expected, but availability is the key issue at this point.
We are sure that many will burn the messenger, but there is nothing we can do about it. We can only hope that we will see some Cayman cards selling in December.
Quite possible; they could send these cut-down chips for board validation, fully equipped ones for production (without a working BIOS), and the final BIOS just in time to flash the completed cards and ship them as fast as possible. Partners may not like that, but it's an explanation for the lack of information.
The other possibility is that AMD has some problems with the cards and can't ship in time...
Assuming the 580 is hot and old might be the biggest mistake AMD can make. Cayman XT needs to be on par with the 580 otherwise nvidia is going to sell them for as much as they want for a pretty long while.
jeez nvidia really has achieved some success in making 5970 and consequently Antilles "not exist".
They know AMD probably won't release the final specifications before the 580 reviews. Like S|A, I don't think these actions are aimed at lowering AMD's sales or getting customers excited; if they wanted that, they'd release after AMD or closer to the 22nd.
If AMD doesn't feel the need to do so, then they must be confident Cayman is at the performance level they need against the 580. Both sides probably have some idea of what the other has, but I suspect nvidia has confirmed some of that info for AMD while AMD hasn't done the same for nvidia.
I don't believe that 1536 SP number.
Why? Because if that is true, Cayman will not have more shader power than Cypress. Charlie Demerjian went on record saying that 4 Cayman shaders deliver roughly 98% of what the Cypress 4+1 combination does.
1536 SP is rather close to 98% of 1600, being 96%. If that is the case, Cayman's performance will probably depend a lot on the uncore parts of the chip, because shader power will stay about the same. I have a hard time coping with that.
Cypress has a VLIW5 SP arch.: 1600/5 = 320 VLIW units and 80 TMUs.
Cayman has a VLIW4 SP arch.: 1536/4 = 384 VLIW units and 96 TMUs.
So the difference is obvious.
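To make the unit arithmetic explicit, here's a quick sketch. Cayman's 1536 figure is still a rumor; only Cypress's 1600 is a released spec.

```python
# Sketch of the unit arithmetic above. Cayman's 1536 figure is a rumor;
# Cypress's 1600 is the released HD 5870 spec.

def vliw_units(stream_processors: int, width: int) -> int:
    """Stream processors are grouped `width` per VLIW unit."""
    assert stream_processors % width == 0
    return stream_processors // width

cypress_units = vliw_units(1600, 5)   # 320 VLIW5 units
cayman_units = vliw_units(1536, 4)    # 384 VLIW4 units (rumored)

increase = cayman_units / cypress_units - 1
print(cypress_units, cayman_units, f"{increase:.0%}")  # 320 384 20%
```

So fewer raw SPs, but 20% more independent VLIW units; whether that translates into more shader power depends on how much the 5th lane was actually idling.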
Again, following the logic of 4 Cayman shaders being roughly equal to Cypress's 4+1, where is the obvious difference in SHADER POWER?
If it weren't for this "little" detail given by Mr. Charlie Demerjian, who is quite fond of ATI/AMD, you would be quite right.
Still, ignoring what he said, 64 more units is only 20% more shader power than Cypress, which is probably not enough to beat the GTX 580, going by the data we have available. I refuse to believe it, and I'm an nVIDIA fan ;)
Cayman has 1920 shaders in a VLIW4 configuration. Those rumours are based on samples AMD has sent out that have been cut down, same as with Barts.
we're in a pretty strange situation atm.
amd went with 2-architectures (vliw5 and vliw4) for their 6000 series so it's pretty hard to derive the performance of cayman from the already released barts.
then there are rumors cayman will have a huge die - if we now consider that barts is smaller than cypress, amd needs to put a lot more stuff into cayman to reach "a huge die".
on the other side of the gpu-river we have nvidia with their gtx580 - "a fixed fermi"? nobody really knows, but i really doubt that nvidia had enough time to really improve fermi that much since its release.
so what does the gtx580 offer to deserve being part of a 500 series? there must be more to it as well.
when the first tangible 6000 series rumors hit the web a lot of people asked why barts/cayman deserves to be a new series. but if you now compare what we know about 6900 and gtx580, i really think that the 6000 series deserves its name more than the gtx580 does.
i really hope all these questions will be answered at the end of november, after both companies presented/released their new high-end. it just sux to stumble in the dark like this :F
I'm kinda doubting that leaked info saying Cayman would have fewer SPs than Cypress. Yes, with the improved mArch & 4D shader array design, it might still achieve better overall gaming performance vs Cypress, but that would mean the chip's SP FLOP throughput doesn't improve if the clock stays around the same. The DP FLOP rate of course improves, but still..... :shrug:
Maybe supply is low and some partners have only gotten 6950 (1536 shader) equivalent cards.
Though a difference of 25% shaders between 6950 and 6970 seems like too much.
If, all in all, Cayman has fewer transistors dedicated to shading, where does the rumored increase in GPU die size (from the 5870) come from, I wonder?
320 vs 384 units means a 20% increase in shader throughput. Considering that the 6870 newly improved performance/watt, there is no way the 6970 will be 250W but perform only 20 to 25 percent better than the 6870, which is 150W. And the card has 8+6 pin power, which would put its TDP definitely above 225W.
I don't buy that Cayman will be 1536sp.
C'mon guys... info on Barts was only accurate 2-3 days before launch. Until then, many were claiming 4D on it as well. AMD spreads disinformation before every launch; I doubt that number going around is true. You'd think people would learn from past experience.
Here is another idea:
Barts XT has a VLIW5 SP arch.: 1120/5 = 224 units and 56 TMUs.
Cayman has a VLIW4 SP arch.: 1536/4 = 384 units and 96 TMUs.
Then Cayman has 71% more units than Barts.
So is it possible that Cayman is 70% faster than Barts?
If so, then Cayman performs similarly to 2x Barts Pro (Barts Pro has a VLIW5 SP arch.: 960/5 = 192 units and 56 TMUs).
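Sketching that comparison out, with the caveat that the Cayman figure is still a rumor (the Barts figures are released specs, and Barts Pro / HD 6850 has 960 SPs, i.e. 192 units):

```python
# Sketch of the Barts-vs-Cayman comparison above. Barts figures are released
# specs; the Cayman figure is still a rumor. Barts Pro (HD 6850) has 960
# stream processors, giving 960/5 = 192 VLIW5 units.

barts_xt_units = 1120 // 5    # 224 VLIW5 units (HD 6870)
barts_pro_units = 960 // 5    # 192 VLIW5 units (HD 6850)
cayman_units = 1536 // 4      # 384 VLIW4 units (rumored)

print(f"{cayman_units / barts_xt_units - 1:.0%}")  # 71% more units than Barts XT
print(cayman_units == 2 * barts_pro_units)         # True: equal to 2x Barts Pro
```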