Xtreme not bad either, Heaven 2.1 very nice!
wow that heaven score is nice.
11.1 should give a better picture of what's happening with these GPUs.
If these scores are true then why would AMD continue producing them? It would be better to just keep the old Cypress a bit longer, save some money on smaller dies, and rework the arch...
F1 2010 with latest patch
1920x1080
8xMSAA and everything on max
Average: 68
Min: 58
Samples: 8554
We're all getting so trolled.
over twice as fast as a 5870 in unigine heaven. not bad.
These scores make no sense at all; I don't understand why it's this low unless that switch has something to do with it... unless I was expecting too much to start with. :(
No way AMD would even release it like this, so something's up. I wonder if Afterburner can read its utilization; that'll give us some perspective, I think.
Ask for game results. 3DMark isn't going to like it having 1536 SPs, hence the lower score compared to Cypress.
http://img149.imageshack.us/img149/8954/00001jn.jpg
STOCK 5970
Neck and neck :)
weak showing after all this secrecy
Basically 5870 with 2x tessellation. :shrug:
Maybe it will be priced aggressively. Wow, what a disappointment lol. Nvidia will easily own this time; I mean, every single card they release in the 5xx series will probably end up being faster than its competitor. The only choice for ATI is to reduce prices, which means lower income...
Weak showing? I'm impressed personally. It seems roughly on par with a 5970 and in actual games like F1 2010 edges past the 580.
Real games please
I already wrote this elsewhere:
6950: 1408 * 800 * 2 = 2.25 TFlops
6970: 1536 * 880 * 2 = 2.70 TFlops
For Cypress:
5850: 1440 * 725 * 2 = 2.08 TFlops
5870: 1600 * 850 * 2 = 2.72 TFlops
So Cayman is actually worse off in synthetic performance due to fewer shaders. HOWEVER, compare it to the 5970 and compare it in games and see where it actually performs, since it utilizes the shaders differently.
F1 2010 is only one game, and the difference is small for all the top cards. In other games the 5870 lags way behind the Fermi series; look at Lost Planet 2, where it's half as fast.
Maybe the 6970 fixes that imbalance; we need more benchmarks.
Seriously, someone contact this guy and ask him to bench HAWX, Lost Planet, and other DX11 games, maybe Crysis too since it's very demanding.
Well, I was all ready to spend $500+ on a new graphics card, but if these results are true I'm simply going to skip this "generation"
Hard to imagine AMD dropping the ball like that though. My guess is that the older drivers are specifically set up to lower performance. Would be a great way of preventing leaks/spreading disinformation...
I think unless those 6970 benches use the 10.12 drivers, they are useless.
http://www.hardwareheaven.com/review...w-f1-2010.html
1920x1080
8xMSAA and everything on max
Average: 68
Min: 58
Samples: 8554
Actually, according to this review the 6970 is faster than the 580, and no, the 5870 doesn't beat the 580 in the in-game bench.
3dMark is shader intensive and 3dMark also utilizes 5-VLIW a lot better than real games since drivers are optimized for it - the 4-VLIW of Cayman probably hurts synthetic shader performance
Need some in game performance
Anyone got a 5870, 480, or 580 for comparison?
http://img140.imageshack.us/img140/4...crysis4mny.jpg
Edit: DVS beat me to it, although I think the benchmark is bugged: 4xAA = 8xAA and vice versa.
Yes it does.
Link: http://www.hardocp.com/article/2010/..._card_review/3
Quote:
The aging ATI Radeon HD 5870 seems to hold its own in this game, and compares right up there with the new GeForce GTX 580. We found that 8X MSAA at 2560x1600 was playable on both the GeForce GTX 580 and the ATI Radeon HD 5870.
I can use my 470 if you want; at 850 core I should be close to a 480, give or take a couple of numbers.
823MHz (I think that's closer to a 480 than anything, but not sure):
http://img253.imageshack.us/img253/8...ysis823mhz.jpg
No modified cfgs or anything; I hate people that use 'em.
Min FPS on the ATI side is much better, 28 vs 20; that's a huge difference.
Different resolution & maps too
http://tpucdn.com/reviews/NVIDIA/GeF..._1920_1200.gif
Well, since nobody wants to bench, here is one from TPU.
It is Crysis (v1.21), not Warhead, unless he ran Warhead and I'm just having a long day and missed that??
You should use HAWX 2 and see the 215% gain the GTX 480 has over a 5870 2GB in this benchmark/game (developed with and by Nvidia) :rofl:
Even a GT430 is faster than the 5870 in this game...
Lost Planet 2 (like that one) has been developed in complete collaboration with Nvidia; the DX11 part was completely (I say completely, and this is not a joke) designed by Nvidia.
Forget about seeing any non-Nvidia card shine in this game.
3DMark 11 with another driver (the one from 07.12):
http://3dmark.com/3dm11/136857
didn't change much.
note today's date in US style: 12/11/10 - mystery? :off:
We need more in-game benches, not Vantage cr_p; whatever, gonna wait until official reviews.
If AMD is releasing something 10% faster than a 5870, that is complete facking fail no matter how you spin it. The only thing that makes those scores acceptable is if they were for the 6950, but it seems not. That raises the question: what is the point of a 6950 if it's barely above a 6870?
Can we still be looking at fake scores even at this point?
wtf happened AMD?
Vantage numbers look unreal. But yeah... a synthetic benchmark is not that important.
I don't know, man, this story is flipping very quickly. First AMD was keeping it secret because performance was supposedly really high; now it's the opposite, and it seems like AMD is hiding a mediocre product at best. And let's face it, if the rumors about 1536 shaders hold true, it's going to be a huge letdown. The claim about the 4D design is that one SIMD cluster gives 90% of the performance in 10% less die space. So although you get 20% more clusters, each performs only 90% as well, so in reality the 1536-shader setup works out to only about 8% more effective shader power than 1600 (1.2 * 0.9 = 1.08).
It's just not enough even with the other changes, and this lets me down big time. Unless they manage 100% more shaders at 28nm, Kepler with 700+ CUDA cores will crush Southern Islands.
Of course there is Antilles, but how many people can afford that, and how much better is it gonna be than the 580? From these scores, at best 15%. :(
Someone said that the guy is getting these low scores because PowerTune (a new feature of the 69xx series) is not working properly with the drivers he is using. It would be interesting to check the GPU load to make sure the core is working at 100%.
http://img530.imageshack.us/img530/7...gehighn6lj.jpg
Hmm, better than a GTX 480, still well under a GTX 580, probably on par with a GTX 570, at least in Vantage.
http://lab501.ro/wp-content/uploads/...e-High-GPU.jpg
Unless he had the wrong driver or something, it looks to be about the same as a GTX 570.
If priced the same or a little lower it should do OK,
but at $450, forget it :down:
Man... I don't think I've ever used my keyboard's F5 key as much as I have today :eek:
Only reviewers, retailers, members of the Illuminati, and God himself know the real performance of the 6900 series :p: :rofl:
Jokes aside, the 570 will still beat that score, I reckon, considering it has a performance bump over the 480 in Vantage.
But really, take a look at this post from OCUK staff:
http://forums.overclockers.co.uk/sho...5&postcount=69
At those prices, something must be horribly wrong with these benchmarks, either that or those prices are horribly wrong. :shrug:
Why do people keep posting 3DMark stuff? I already said the scores will disappoint since the shader count is lower. But gaming performance is a different story.
Something has to be wrong... look at this Compute score. That's no better than two 5770s.
http://www.abload.de/img/6970computemarkxmc9.jpg
Well, Gibbo says the 6970 competes with the GTX 580 and is priced accordingly. Similar goes for the 6950 and the 570. And this guy has paid a lot of $$$$ in order to get healthy stock of those cards (meaning he has the cards, he has the hard numbers, and he knows how each performs).
When two or three synthetic benches show the same thing, the truth isn't far away even in games... The price is the key...
Still, considering the bigger die size and the 2GB of memory, I don't think the prices can be very low...
Although these benches and everything are a nice sneak peek, I won't draw any conclusions until actual release; just hold your horses a little bit longer, guys.
And again it's a bad result when compared to the GTX 570 and GTX 580. That site does love Nvidia, but still...
http://www.bjorn3d.com/Material/revi...eGiant1920.png
With all these disappointing results, here is something to cheer you up, guys: the true Radeon HD 6970 specification slide.
http://img88.imageshack.us/img88/7020/76498662.jpg
And here is refreshed card positioning slide
http://img691.imageshack.us/img691/1634/51742261.png
:D:D:D:D:D:D
Chiphell is the king of Photoshop :ROTF:
Instead of making up more reasons to believe/disbelieve and buying another ticket for the emotional roller coaster, why not just wait a few days instead?
Could someone please suggest to that Ducati750SS dude that it would be a really good time to update his graphic drivers and bench again? :D
I don't know about you, but don't those numbers seem a bit disappointing?
I mean, if this card is 450 dollars, it doesn't seem to really bring anything new to the table over NV, and a 250-dollar 5870 seems like a much better deal.
What happened to ficherodel post with all the benches?
Want to know numbers? Sleep or go travel for 4 days. That's the Cayman way of life.
Dubai seems the place xD
I'm going to say this again: stop looking at synthetics!
First of all:
6950 - 1408 SPs * 800 * 2 = 2.252 TFlops
6970 - 1536 SPs * 880 * 2 = 2.703 TFlops
Now compare this to the other cards:
5850 - 1440 SPs * 725 * 2 = 2.088 TFlops
6870 - 1120 SPs * 900 * 2 = 2.016 TFlops
5870 - 1600 SPs * 850 * 2 = 2.720 TFlops
So in terms of theoretical shader power, Cayman and Cypress are a wash.
Why is this important? Well, look at how 5VLIW is implemented in Cypress and Barts:
http://img714.imageshack.us/img714/5011/37595963.png
For those who don't get the Cayman change to 4-VLIW, it goes like this: R600, RV670, RV770, Cypress, and Barts all use the shader configuration in the image above. Each SIMD has 4 simple shaders and 1 "fat" complex shader, the transcendental one, that can do complex math (such as calculating pi, sine/cosine, etc.). The other four shaders only do simple MADD (multiply-add). Since most shader operations are just MADD, the 5th shader often sits idle, and thus 5-VLIW wasn't as efficient.
The move to 4-VLIW is this: instead of 4 simple + 1 complex, there are 4 moderately complex shaders. Each can still do MADD. However, for transcendentals I'm not sure how AMD is going to handle it: it might have one of the 4 shaders iteratively calculate a value, or it might use all 4 at the same time so it requires fewer clock cycles.
Now why is that important? Because synthetics can utilize the 5th shader, since they're designed to run 'em! So synthetics often got to use the 5800s/6800s at their full shader potential, which, as shown above, is more or less a wash with the 6900s.
However, real gaming supposedly isn't like that: most vector operations are shorter than 4 values, so the 5th shader in Cypress/Barts is under-utilized.
Hence, in synthetics, the fact that Cypress is close to Cayman is a reflection of the fact that shading power in Cayman isn't vastly improved for benchmarks. The real key will be how gaming performance goes, since supposedly in games, going to 4-VLIW SIMD brings it to 98% the same performance as a 5-VLIW SIMD. The leaked gaming benchmarks seem to suggest that gaming performance is much higher than 3dMark is hinting at.
This can all be proven wrong, of course, but it's the best explanation for the gap between the 3DMark/synthetic scores (to say nothing of Nvidia's optimizations in the tessellation benches) and the gaming scores that were leaked.
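To make the utilization argument concrete, here's a toy Python model (my own illustration, not how the real compiler or scheduler works): assume the shader compiler packs independent operations into one bundle per cycle, and any slots beyond the bundle's width sit idle. The workload mix below is hypothetical.

```python
import math

def utilization(bundle_sizes, width):
    """Fraction of shader slots doing useful work, given bundles of
    independent ops issued on a VLIW machine of the given width."""
    cycles = sum(math.ceil(n / width) for n in bundle_sizes)
    return sum(bundle_sizes) / (cycles * width)

# Hypothetical workload: mostly 3- and 4-wide vector math, never 5 independent ops
workload = [3, 4, 4, 3, 4, 2, 4, 3]

print(f"VLIW5: {utilization(workload, 5):.0%}")  # 5th slot idles every cycle
print(f"VLIW4: {utilization(workload, 4):.0%}")  # tighter packing
```

With this made-up mix, the 4-wide machine wastes far fewer slots per cycle, which is the whole point of the Cayman change.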
Also, people forget that the 5800s got HUGE boosts in 3DMark Vantage through drivers. In fact, a few driver releases boosted scores by a good few hundred points. Who's to say AMD won't do the same and push the numbers upward? The Extreme score on the 6970 already puts it above the 570, so we'll see where it goes. And if this slide is accurate, we still have drivers to come:
http://img571.imageshack.us/img571/261/96904816.jpg
But that's why it's pissing me off that the guy won't use his GTX 480 to bench against the 6970 in games, and will only bench useless synthetics.
I don't know. A guy with a picture like this
http://d.imagehost.org/0333/ducati75...01-500x375.jpg
seems to be more reliable than the random screenshots we have been seeing.
Not simply benchmarks, but that Crysis score is a bit disappointing. It scores 10% over a 5870 and more or less matches a GTX 580. Crysis has recently been a strong suit for AMD, so I expected much better.
http://b.imagehost.org/t/0022/6970heavenbench.jpg
Heaven should be better right off the bat, but this still seems well below a GTX 580.
Perhaps he's sponsored by Nvidia. :p:
Well, it confuses me why he refuses to bench against the GTX 480 he has - it would be a simple comparison then
Edit: some Bulgarian reviewers with a 6950 leaked (then took down the quote) saying it reached, in the best case, 30-40% faster than the 6870 @ 1680x1050. Well, the translation might be bad, but if that's what they meant, that would place it at ~570 performance.
And they had a pic of it running too, so no reason to doubt they're testing it.
That ducati guy should have good drivers in the box, shouldn't he?
I mean, he does appear to have a retail sample, which should ship with good drivers in the box.
Regarding Heaven: *every* AMD card scores lower in it. The 5970 with a fast i7:
http://img155.imageshack.us/img155/2096/captureca.png
But where does the 5970 game compared to the 580? That's exactly why synthetics don't represent real-world gaming, and why Heaven is pointless for comparing cards overall.
FWIW this 5970 score is ~5% more than the one provided, which ought to be proof that the 6970 does have vastly improved tessellation compared to Cypress, though probably not up to Nvidia levels.
Regarding Crysis, I'd want to see scores based on the same computer and the same run, plus min FPS. That said, there are two games now (F1 and Crysis) where Cayman is at least around 580 performance, but we need more games to see where the trend is.
As I've stated, Fermi saw some games where it took a ridiculous lead over Cypress, and in some games Cypress was awfully close: a huge performance delta, a wide variance. What'll be important to see is whether Cayman has that same performance delta, or whether it performs much more consistently on average (smaller standard deviation), so that in some games it is close to Cypress and in others it is much higher.
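The performance-delta point can be quantified: collect per-game speedup ratios versus a baseline card and compare their spread. A sketch with made-up numbers (the ratios below are purely illustrative, not leaked results):

```python
# Quantify the "performance delta" idea: per-game FPS ratios vs a baseline
# card, then look at their mean and spread. All numbers here are invented.
from statistics import mean, stdev

# Hypothetical card-vs-5870 speedups across a handful of games
fermi_vs_cypress = [2.0, 1.05, 1.4, 1.02, 1.6]     # huge wins in a few titles
cayman_vs_cypress = [1.35, 1.25, 1.30, 1.20, 1.40]  # steadier gains

for name, ratios in [("Fermi", fermi_vs_cypress), ("Cayman", cayman_vs_cypress)]:
    print(f"{name}: mean {mean(ratios):.2f}x, stdev {stdev(ratios):.2f}")
```

A smaller standard deviation at a similar mean would show Cayman's wins spread evenly across games rather than concentrated in a few outliers.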
The 10.12 drivers are supposedly 1GB in size and have a new Catalyst Control Center interface, which doesn't appear to be what he has. PowerTune and AMD Overdrive are supposedly coming with them too. It wouldn't be the first time the drivers shipped on the CD were out of date.
Everyone is expecting more than slightly-faster-than-HD5870 performance. AMD has had over a year since Cypress! Of course, seeing these kinds of numbers, people are going to look for any possible explanation, because frankly the numbers we're seeing right now don't make sense.
I'm still holding onto some hope that Cayman will be much faster, and I'm ready to lay down 500-600 dollars if that's the case. But if this is all Cayman has to offer, I'm holding off until 28nm.
I wouldn't, unless AMD is sandbagging. I would expect a far greater improvement over the coming months due to the different architecture, though; possibly something similar to the "big bang" drivers that you reviewed for Nvidia.
Wouldn't you say, in your unbiased view?