Yeah...
GTX480 -> GTX580: +32 shaders
5870 -> 6970: -64 shaders :confused:
They might be -somewhat- more efficient, but come on, WTF!
Don’t count the SPs 320 vs 384
Count the shaders or the SIMDs 20 vs 24
And yes, the ROPs (Render Back-Ends) will have 2x the performance of Cypress
If 1536 shaders is true, then...
http://www.youtube.com/watch?v=xHS-fOaO87w
If last rumors are true, Nvidia will sell a lot of GTX570 :rolleyes:
Had someone over on OCUK forums saying 6900 is fantastic, for the price. So looks like AMD are still keeping their strategy. I don't mind too much tbh; i'd rather have the low prices, than the 'new r300' but expensive.
Looks like it's going to be a 6950 for me then, GTX 570 a bit too expensive atm.
And i can easily see how 1920 SP's could get thrown in, as generics_user said.
~20% increase in shader power (minus the small inefficiency in rare cases of high ILP), and "double the functionality" of the ROPs. Doesn't sound that bad. However, a 50% increase in shader power would've been way better. But I guess AMD knows better. :rolleyes:
i hear that 6970 is close to 580 .....
if 1536 sp is true, then a cheap 5870 will be a better choice than a 6970 for boinc users like me.. mw, collatz and dnetc are already a lot faster with the vliw5 sp mArch..
I have to say, I have no idea anymore what to expect from Cayman. On the other hand, it is a clear fact that Cypress was a highly inefficient design; remember, double the shaders but only ~60% more performance compared to the HD4870 (if memory serves, feel free to correct if wrong). The same happened with HD4870 vs HD3870: the performance increase was not directly comparable to the shader increase. Basically that leaves a :banana::banana::banana::banana:load of room to improve. I would say +40% over Cypress if they can find all the bottlenecks and figure out how to remove them. I believe it can then "trade blows" with the GTX580. That would leave room for a Cayman Pro at +10-20% over Cypress. But there are lots of if's in that.
Regarding complaints about die size, power, and shader counts: Remember that this ASIC was never supposed to exist on 40nm. They're just doing the best they can to deal with the target process having become unavailable.
Unless the HD6970 performs quite close to the GTX 580 for that amount of money ($400-450), it is better (performance/price) to get 2x HD6850 and enjoy the performance level of the latter ;)
BTW here is a short teaser
http://www.youtube.com/watch?v=tuYck...layer_embedded
Can we end the "slower than nvidia" speculation please? I have no reason not to trust this guy, and no, I'm not trying to advertise his site.
http://forums.overclockers.co.uk/sho...&postcount=707
Price is about the same as nvidia counterparts :D
5 freaking long days and this would be put to rest.
From that:
Can't wait! 5 freakin days

Quote:
Hi there
All I can say is they are bloody fast.
For the money I am amazed.
I can see this launch being hugely successful for ATI, easily on par with the 58xx series but better, because I literally have enough stock to prevent running out, which will also mean our pricing will be excellent and I should be able to keep the price low, no price hikes.
I hope so. It would be great to see a well done launch (without shortages/price gouging); they are not as common as they should be.

Quote:
I can see this launch being hugely successful for ATI, easily on par with the 58xx series but better, because I literally have enough stock to prevent running out, which will also mean our pricing will be excellent and I should be able to keep the price low, no price hikes.
the US government should learn a trick or two from AMD on how to keep stuff secret..
maybe breaking their nda means a fate worse than death and listening to Justin Bieber ?
I think AMD learned a bit from Nintendo. No one in hell can leak any future Nintendo console data (remember the Revolution specs, or "what kind of controller will it have").
That guy Gibbo wins the troll of the year award if it's the complete opposite of what he's saying.
If the 6970 does have fewer shader units then that's very disappointing. And why are they still using a 256-bit memory bus? I'm not impressed at all.
Any reviews mention you can play 3D BluRays with a 6xxx? (with PDVD, Arcsoft etc.) I'm all set up and running, just waiting for the 6970.
256-bit * 5700MHz GDDR5 = 182.4GB/s
384-bit * 3800MHz slow GDDR5 = 182.4GB/s
With GDDR5 there's no reason to increase bus width unless you have an inefficient memory controller that can't handle high-speed GDDR5, like Nvidia's. A wider bus just makes the chip cost more and take more die space (bigger die) for the same performance.
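The bandwidth equivalence above is easy to verify; here is a minimal sketch (the effective data rates are the ones quoted in the post, not official specs):

```python
def gddr5_bandwidth_gbs(bus_width_bits: int, effective_mhz: float) -> float:
    """Peak memory bandwidth in GB/s.

    effective_mhz is the effective (quad-pumped) GDDR5 data rate as quoted
    above, e.g. 5700 for "5700MHz GDDR5". Bandwidth = bus width in bytes
    times transfers per second.
    """
    return (bus_width_bits / 8) * effective_mhz / 1000  # MB/s -> GB/s

# The two configurations from the post reach the same peak bandwidth:
print(gddr5_bandwidth_gbs(256, 5700))  # -> 182.4
print(gddr5_bandwidth_gbs(384, 3800))  # -> 182.4
```

Which is the poster's point: a 384-bit bus with slow GDDR5 buys nothing over a 256-bit bus with fast GDDR5, while costing more die area and PCB traces.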
I see you are new to AMD's architecture improvements. See Barts (HD6800) for example. Cayman brings even more efficiency than Barts.
PaganII
You already can with any HD6000.
Sadly, there's no way to do an apples-to-apples comparison (the same GPU with a wider bus and slower memory vs. a narrower bus and faster memory).
The closest one, though still inaccurate: a 4890 with a memory underclock vs. an HD5770.
There's also the question of more complex PCBs to handle the extra traces for a bigger bus width. Also more memory chips to put on + QA. Granted, there's that tradeoff against higher speed GDDR5 which is more costly, but that requires actually knowing prices to know which is which. Nvidia did pretty much state the GDDR5 controller was far more complex than they thought it was, part of why Fermi had many delays
Depends on the architecture too. I remember these same arguments rehashed every gen since the 4800 was released - "oh it's 256-bit, it'll be eaten alive" etc.
AMD's high end cards simply haven't been memory bandwidth limited in the same way some of Nvidia's cards have
So the Chinese rumor from about 4 months ago about Cayman having less SPs than Cypress was right all along... Just like the Chinese rumor early on about RV770 having 800SPs... Good times.
As neliz explained, the switch is an advanced overclocking switch for those with high-end air/water or dice/LN2, so they don't have to do a hard mod to get a decent increase in voltage, though some no doubt will still hardmod to get even more.
Saw something earlier today that makes me think someone with the card already has it under LN2, can't wait to hear the numbers.
Supposedly that French vendor cannot get direct supply from AIBs, so he has to go a round-about way of getting inventory, i.e. not having them at launch and high prices. Like I mentioned at the end of Nov, most AIBs have already sent out first shipments to priority retailers and they should have received them last week. Volume is good. Depending on sales, there might be a small hiccup in February due to Chinese New Year (and another reason).
Some seem to think Cayman will have dynamic voltage for the GDDR5, idle/2d/3d. I don't think it will but that would be nice.
Die size is supposedly 75% of GF110, based on GF110 being 530mm2. (Edited due to confirmation)
Someone else mentioned it being less than 20% larger than Cypress, meaning just under 400mm2. As I said before, the most specific number I have heard is 385mm2, which is what I am leaning towards.
TDP is similar to Cypress according to neliz.
Let's say it costs a little less and is 10% under the GTX580 in performance at 150W less TDP.
Now, a 6990 under such a TDP is going to totally rock.
Single-card performance is tough to achieve when the chips get big; small, efficient and fast works, and the CrossFire performance with the 6850 has been phenomenal.
There have been more rumors, and I read Gibbo as describing a super great card for the price/performance ratio, which is what 90%+ of people buy by.
150W lower TDP than the GTX 580!!! Let's be reasonable here and not let rationality go out the window. There's a reason why there's an 8-pin connector and a 6-pin connector on the 6970. The 6870 still consumes 150 watts, and the GTX 580 at most consumes 300 watts and most of the time a bit under 250. I can't imagine this card using 100 to 150 watts when it has an 8-pin connector and a 6-pin connector. The 6950 has to consume more than 150 watts considering the 6870 consumes that.
Forgot to add to the above post, Antilles doesn't use same bin as Pro.
I think AMD definitely aimed too low, given that a 420mm^2 Cayman XT would probably take the crown from the 580 pretty easily. The problem with that line of thinking is that we don't know how much yields/power consumption would change going that direction. After all, the last (and only) 400mm^2 card they tried was the R600, and that was an utter disaster. It's always easier to second guess.
That said, it all comes down to performance, price, and availability again. The 6950 at < $350 might be a killer deal, much like the 5850 was.
FWIW, I saw this from a different perspective as well, which, being an engineer myself, was pretty cool to think about:
3870 was 65% of 9800GTX
4870 was 75% of GTX 285
5870 was 85% of GTX 480
6970 is ?? % of GTX 580
If a < 400mm^2, < 200W card can perform at 90%+ of a GTX 580, that's a pretty impressive engineering feat, and it nearly makes needing a dual GPU more of an e-peen "look what we can do" thing, rather than the necessity it was for the 3800's and 4800's.
Of course this is irrelevant until we get the actual performance and actual #'s reviewed
The 150W statement being absurd aside, we actually don't know if it uses a 6+8 now... some people were saying it's 6+6, and since the card is coming in at under 200W, it's possible.
In fact, if you look at that leaked slide with the switch, it's blurry but it doesn't look like an 8pin
http://h-5.abload.de/img/switch_pdfom24.png
Hmm, it's speculation, but maybe with the voltage unlock the card passes 300W? It clocks like mad at those volts, but AMD couldn't release it like that, so they give the option to keep TDP under 300W?
Can't wait; seems it won't go faster than the 580... Also, maybe the delay was to finalize clocks between 800 and 880MHz? Maybe at 800 it would have been too close to even the 570, and AMD adjusted for that. The 570 and 580 are pretty close anyway; if this is going to be in between them, it's probably gonna cost ~$420?
TDP for 580gtx is 350w.
cayman likely is around 200w.
its 150w difference.
300w difference for 6990, now, that is why amd went value and small mm/W design.
they beat nvidia on efficiency and offer a better price/performance ratio, and now for 3 straight designs they've just hammered nvidia.
amd sells with high profit; if Nvidia didn't have their professional market they'd be gone like 3dfx.
That's not the TDP of the 580; show us where the 580 comes close to 350w during in-game play.
Why make up numbers when you know people on these forums might have a clue?
http://www.techpowerup.com/reviews/N...TX_580/25.html
True, but the 58xx series launch can hardly be compared, since the HD 58xx series did not have any competition for about half a year, and even longer for the mainstream cards. Actually, the AMD flagship card is still the fastest graphics card on the market even now.
I think he was mainly promoting good supply for launch day and competitive prices.
Good supply is good news since some people in this thread were predicting paper launch.
I think if the HD 6970, with (according to rumors) an about 20-25% smaller die, can perform anywhere close to the GTX 580, then the HD 6970 is very competitive.
The rumored 190W TDP on the HD 6970 should also be an AMD advantage when designing the HD 6990; I hope to buy one in January.
Well, that info adds credibility to the whole not-1920sp thing. Now the question remains how aggressively AMD will price the 6970 relative to the 570. If they launch between $400-450, it should be a great card.
Now, roughly eyeballing those comparisons, on average we are looking at a 20% advantage for the 6970 over the 570, which should place it in 580 territory in a few scenarios no doubt. It looks like they have a product which *should* use less power and hopefully make less noise than Nvidia's most comparable product, at a higher performance level for the money.
PS: Those graphs are evil.
Well, I guess this shows that AMD is pitching HD 6970 against GTX 570 and not GTX 580. (If the slides are for realz, of course.)
Nvidia Confidential? :ROTF::rofl: Can't wait till this card is released. I'm sick of all these conflicting rumors.
Ontopic though, I really think AMD has something huge waiting for us other than performance. With the conflicting TDP and stuff, it really makes me think that perhaps AMD has a switch (the rumor has been popping up everywhere) that can toggle between, say, a 190W TDP and a 240W TDP? Maybe the 190W mode is almost as fast as the 580, and the 240W mode is faster? Also, I don't get the shader count either. How come it's 1536 SP? I mean, seriously? Or maybe it can switch between 1536 and 1920 too? That would be sweet.
I don't know what I am talking about. :shrug:
I don't know why people's expectations have changed from unrealistic performance to now unrealistic power usage. The card can just be a competitive card with good performance near, at, or beating a GTX 580, with relatively high power usage (still not as high as the GTX 580), which is completely believable and I think will happen. The 5870 has a peak power usage of 212 watts in Furmark and it's significantly lower in gaming; but if we are going to go by that method, so does the GTX 580. If we test during game sessions, the GTX 580 consumes around 225 watts as W1zzard's tests show, while the 5870 consumes 144 watts. If in gaming tests power consumption goes up to 190 or 200 watts, this is completely believable for the 6970. However, that doesn't look nearly as awe-inspiring when the GTX 580 is at 225. It's better power efficiency than the GTX 580, but not super incredible.
The only way you're going to get close to that 150-watt number, flopper (and you still won't reach it), is if you compare real-world gaming scenarios on the 6970 vs. a GTX 580 running Furmark, which is hardly a fair comparison.
In peak tests like Furmark, there is no way the 6970 is staying under 200 watts. You guys have to remember that going from 1GB to 2GB of GDDR5 added 30+ watts to the power consumption of the 5870.
http://www.tweaktown.com/reviews/324...d/index17.html
The 6970's memory is significantly faster, so it should add even more than 30 watts. Lastly, one of the most telling tales is the size of the chip and the frequency: this chip is larger than the 5870 and clocked higher. Ultimately, if the 6970 has more efficient shader usage, power consumption should be going up considerably.
The reason the 6870 is more efficient size-wise compared to the 5870 while they have similar performance is that the 6870 is actually using its shaders, while a lot of the 5870's shaders just sit idle. Hence when you run Furmark, which uses all the power a graphics card can draw, the power consumption on the 5870 jumps from 144 to 212 watts (a 47% jump) compared to the 6870, which jumps from 127 watts to 163 watts (a 28% jump).
They actually have similar performance per watt in gaming scenarios; Furmark just shows how efficiently those shaders are being used in games when you compare the two. The larger the difference between the two, the less efficiently the shaders are being used.
If the 6970 has high performance, its power usage should go up considerably if it is an efficient architecture, because that performance comes from shaders doing work rather than just idling. The cost of performance is energy, and the 6970 is not all of a sudden going to outperform the 5870 in games by 30 or 40% and consume less power.
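The utilization argument above boils down to comparing the gaming-to-Furmark power jump. A small sketch using the wattages quoted in the post (W1zzard's measurements, as cited there):

```python
def furmark_jump_pct(gaming_w: float, furmark_w: float) -> float:
    """Percentage increase from in-game power draw to Furmark (full-load) draw.

    Per the argument above, a bigger jump suggests more shader capacity
    sitting idle during games.
    """
    return (furmark_w / gaming_w - 1) * 100

# Numbers quoted in the post (watts):
print(round(furmark_jump_pct(144, 212)))  # HD 5870: -> 47
print(round(furmark_jump_pct(127, 163)))  # HD 6870: -> 28
```

The smaller jump on Barts (HD 6870) is what the poster reads as better shader utilization in games.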
I noticed the r300 tag disappeared..
Well supposedly the 6970 consumes < 200 W, and might even be 6pin + 6 pin, in which case it might consume even less than the GTX 570
Interestingly enough, AMD has had a tendency to lowball their own card comparisons in terms of positioning - they had the 6850 against the GTX 460 768MB and the 6870 against the GTX 460 1GB, when real-world comparison of reference models is more the 6850 against the 1GB and the 6870 against the GTX 470. And of course, there was the 4850 against the 8800GT and the 4870 against the 9800GTX :ROTF::rofl:
And yeah, if that performance slide is accurate, eyeballing it, the 6970 is probably nipping at the heels of the 580.
If the 6970 performs worse than the GTX 580 I will be seriously disappointed, and AMD will have just lost a sale.
Because that's where the latest rumors from credible sources are pointing its power consumption - credible people aren't just making it up out of nowhere at this point, with all the leaks. They're posted on B3D, OCUK, Guru3D and some other places now. Whether that's rated vs. real in-game remains to be seen. And 144 vs. 225 is a big deal, because the 5870 is rated at 188W (so the ~200W Furmark peak isn't out of range) and Nvidia's 580 at 240W - but Nvidia rates their GPUs differently than AMD (I don't want to rehash that argument, but it was explained quite well in this thread itself). If the 6970 is RATED at 190W, then its actual power consumption will be quite a bit lower - heck, even if it's rated at 225W in AMD's terms, it's a significant drop.
Yes and no - we don't know what they did in terms of streamlining power. Keep in mind Barts managed to get 90% of the performance of the 5800's with even less power draw - it's hard to believe they didn't carry some of those changes over to Cayman - not to mention the whole power regulation thing they're incorporating. It's not going to be a flat 30W+ more.

Quote:
The only way you're going to get close to that 150-watt number, flopper (and you still won't reach it), is if you compare real-world gaming scenarios on the 6970 vs. a GTX 580 running Furmark, which is hardly a fair comparison.
In peak tests like Furmark, there is no way the 6970 is staying under 200 watts. You guys have to remember that going from 1GB to 2GB of GDDR5 added 30+ watts to the power consumption of the 5870.
http://www.tweaktown.com/reviews/324...d/index17.html
The 6970's memory is significantly faster, so it should add even more than 30 watts. Lastly, one of the most telling tales is the size of the chip and the frequency: this chip is larger than the 5870 and clocked higher. Ultimately, if the 6970 has more efficient shader usage, power consumption should be going up considerably.
Also, the GDDR5 definitely will consume more power going from 1GB to 2GB of faster stuff; then again, better-binned GDDR5 isn't going to scale linearly either.
No one is claiming (well, I guess flopper is) that the 6970 is going to consume less power - it is, however, possibly rated at the same power.

Quote:
The reason the 6870 is more efficient size-wise compared to the 5870 while they have similar performance is that the 6870 is actually using its shaders, while a lot of the 5870's shaders just sit idle. Hence when you run Furmark, which uses all the power a graphics card can draw, the power consumption on the 5870 jumps from 144 to 212 watts (a 47% jump) compared to the 6870, which jumps from 127 watts to 163 watts (a 28% jump).
They actually have similar performance per watt in gaming scenarios; Furmark just shows how efficiently those shaders are being used in games when you compare the two. The larger the difference between the two, the less efficiently the shaders are being used.
If the 6970 has high performance, its power usage should go up considerably if it is an efficient architecture, because that performance comes from shaders doing work rather than just idling. The cost of performance is energy, and the 6970 is not all of a sudden going to outperform the 5870 in games by 30 or 40% and consume less power.
Going with the same numbers you're using: if the 6970 really does get rated at 190W like some are claiming, we might expect a similar Furmark peak (let's go with your 212W), but it's more efficient, so applying Barts' efficiency numbers we get about 165W in game for Cayman. Let's say 170W.
Still significantly lower than the 225W or so the 580 pulls in game.
And that's assuming Barts even fixed utilization of its shaders in the way you're thinking - AFAIK, they're still using the same VLIW5 as Cypress; just the scheduler and the depth of each row were changed. Not to mention we have no clue how the change from 5-wide to 4-wide VLIW changes power, etc.
edit: Taking a look at the sources, they say 190W in game, rated at 250W. Much closer to the 225W or so for the 580 (although we'd have to have w1z do a comparison so we're using the same equipment/methodology to be more accurate), so it remains to be seen what performance level its at - the same guy who said 190W said it more or less matches the 580, just a few % points under on the overall average
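The 165W estimate above just scales a Furmark-style peak by Barts' gaming-to-Furmark ratio. A sketch of that arithmetic (all figures are the rumored/quoted ones from this thread, not measurements):

```python
# HD 6870 (Barts) figures quoted earlier in the thread, in watts:
BARTS_GAMING_W = 127
BARTS_FURMARK_W = 163

def estimate_gaming_w(furmark_peak_w: float) -> float:
    """Scale a full-load (Furmark-style) figure down by Barts' gaming ratio.

    Assumes Cayman keeps Barts' gaming-to-peak behavior, which is exactly
    the assumption being debated in the posts above.
    """
    return furmark_peak_w * (BARTS_GAMING_W / BARTS_FURMARK_W)

# If Cayman XT peaks around 212W like Cypress did in Furmark:
print(round(estimate_gaming_w(212)))  # -> 165
```

Of course, if Cayman's utilization is better than Barts' (or worse), the ratio and the estimate shift accordingly.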
Can't really say AMD is responsible for hyped expectations when they haven't said a peep ;)
From AMD's slides it looks like HD6970 will be closer to GTX 570 than GTX 580.
From "Nvidia confidential" it looks like HD6970 will be on par with GTX 580.
The world must be coming to an end...
Apparently the "Nvidia confidential" slides are from either an internal Nvidia benchmark or one done by one of the AIBs that does Nvidia AND AMD.
As mentioned though, AMD has had a history of placing their cards in their slides against lower competition. The 6850 vs. the 460 768MB, the 6870 vs. the 460 1GB, when reference vs. reference the 6870 is ~the 470, for example. And of course, the 4850 vs. 8800GT and 4870 vs. 9800GTX weren't even close to reality.
Interesting thing I realized is that the 190W might also be from AMD's pdf of slides: they supposedly did the idle/typical/max breakdown as something like <30W/190W/250W. Going off Barts, 190W->250W is similar percentage-wise
That said, I wonder if AMD underrated/overrated power - which is up to the reviewers
The GTX 580 is not close to being a champion of power efficiency, so beating it should be no problem. AMD earns bragging rights only if it significantly beats its own prior generation, since that was actually good with power consumption. E.g. Barts XT uses 127 watts of power; if they increase speed by 50% and power consumption is 190 watts, they have simply matched Barts' efficiency, not surpassed it.
If the GTX 580 consumes 225 watts in gaming scenarios and the 6970 consumes 180 or 190 watts in gaming scenarios, which I can see happening, it will be better efficiency than GF110 for sure, but no better than Barts (Barts is great at this anyway, as is Cypress, so no shame in that).
If AMD's rated TDP for Cayman XT is 190 watts vs. Cypress' 188, I have a feeling AMD has just become more like Nvidia in its rating, because the memory addition alone should make it more than 2 watts greater than Cypress. I think it has to increase more, because increased efficiency means fewer wasted shaders since they are used more often, which translates into higher power consumption, which translates into higher performance. I have a feeling this is what is letting AMD increase the size of the chip by only 20% but get more than 20% more performance.
Basically, what I am saying is there is no way AMD has increased performance over Cypress by 30-40%, added 2GB of GDDR5, and yet has Cayman XT consuming 30-40 watts less than Cypress (which is incredibly efficient in this respect already) while performing 30-40% better. This chip is bigger and clocked higher. I am almost certain that this is impossible.
Zerazex, I believe 190 watts could be the typical usage. This is completely in the realm of believability, but I still think it will be a tiny bit higher.
Right, I'm pretty sure the 190W typical/250W max rating rumored is most likely - keeping Barts efficiency should be a priority anyways since that'd be silly to do that for Barts and not for its big brother. Of course, if they did that, the 2GB factor still must be considered... in which case, it's even more efficient
That depends; at least in W1zzard's case, the 5870 and 6870 consume their respective power amounts in 3DMark. How typical is 3DMark?
Something tells me those charts are fake. The "Nvidia confidential" thing is weird, combined with the slide having the title of an AMD marketing slide. They even have the trademark logo in the Radeon name. It just seems wrong in so many ways, because Nvidia wouldn't make such a slide in the first place, and so why would they need to put an Nvidia confidentiality notice or NDA on it? There really should be no NDA on such a slide unless it was an AMD one.
WTF? No 1920SP??? ATI please make HD 6980 with 1920SP.
To you guys "being disappointed" with AMD's product placement slide: I can't understand you. A month before, there was the Analyst Day with another product placement slide which placed the HD6970 significantly below the HD5970... nobody took notice of it then, but now people don't want to believe this.
If the 28nm process really got pushed back to mid 2012 by TSMC, I can see a ~450mm^2 1920SP version of this architecture on 40nm as a single high end GPU
I guess AMD is fortunate that they still have a lot of room to get bigger
Probably above typical, since synthetics are good at stressing the card, but not at Furmark levels
What if that's where they're priced ;)
Why would anybody buy 6950 if "6950 = 5870"? Clearly 5870 is going to be much cheaper...
Also, interesting die size figures. Cayman has ~370mm^2 die size, GTX580 has ~530mm^2. I wouldn't say that Cayman is a chip with a big die, then.
And 1920sp for 6970 is back, heh...
Fecking rumours!
Sigh.
What good do tighter timings do when the cycle time shrinks anyway? None. Just as there is no replacement for displacement in engines, there is no replacement for raw bandwidth.
Look at the timings from the DDR1 era and compare them to the DDR3 era, and see how performance has kept increasing. The only difference between e.g. 166 MHz DDR1 at CAS 2 and 667 MHz DDR3 at CAS 8 is that the latter gives quadruple the bandwidth. The absolute CAS latency is exactly the same. It has been near the same for a decade; though there has been some progress, it's far from the progress in bandwidth.
Obviously, in situations where there are just small, few-byte memory accesses and the limiting factor is latency, the difference between such DDR1 and DDR3 is slim. But when doing tons of memory operations the difference is tremendous. Given GPU <-> VRAM bandwidth (what, ~180 GB/s these days?), the limiting factor most of the time is bandwidth, not latency.
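The "same absolute latency" claim above checks out if you convert CAS cycles to nanoseconds (using the memory clock figures from the post; CAS is counted in I/O-clock cycles, while the DDR data rate is twice that clock):

```python
def cas_ns(io_clock_mhz: float, cas_cycles: int) -> float:
    """Absolute CAS latency in nanoseconds: cycles divided by clock frequency."""
    return cas_cycles / io_clock_mhz * 1000  # us -> ns scaling via MHz

# The two examples from the post, roughly a decade apart:
print(round(cas_ns(166, 2), 1))  # 166 MHz DDR1, CAS 2  -> ~12.0 ns
print(round(cas_ns(667, 8), 1))  # 667 MHz DDR3, CAS 8  -> ~12.0 ns
```

Same ~12 ns either way; only the bandwidth changed, which is the poster's whole argument.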
1920SP and only "faster than 5870"? I think those could be wrong...
Plus, 2x 6-pin for 225W seems a bit low... 2x 6-pin would already max out at 225W; yes, I know 6-pins aren't actually limited to 75W, but GPU design usually isn't done like that..
Plus, that chart seems to underestimate performance: the HD6870 is closer to the HD5870 than to the HD5850.
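For reference, the 225W ceiling for a 2x 6-pin board comes from adding up the PCIe power budget (75W from the slot, 75W per 6-pin, 150W per 8-pin, per the PCI-SIG electromechanical spec):

```python
# Spec-rated power sources for a PCIe graphics card, in watts:
CONNECTOR_W = {"slot": 75, "6pin": 75, "8pin": 150}

def board_power_limit(*connectors: str) -> int:
    """Maximum spec-compliant board power for a given aux-connector combo."""
    return CONNECTOR_W["slot"] + sum(CONNECTOR_W[c] for c in connectors)

print(board_power_limit("6pin", "6pin"))  # -> 225 (the ceiling discussed above)
print(board_power_limit("6pin", "8pin"))  # -> 300 (what a 6+8 card gets)
```

So a 6+6 configuration would cap the rumored card at 225W on paper, while 6+8 allows the full 300W envelope.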
That's because people for some reason got unreasonably optimistic about this card and generated excuses for why the placement on the slide didn't mean the 5970 would be faster than the 6970.
The hype has gotten out of control for this card and people are realizing the gravity of such high expectations. The higher your expectations, the more likely something is going to let you down. I remember reading "30-40% faster than a GTX 580" and "80% faster than a 5870" during this thread.
I noticed the r300 tag has been removed, because the original poster realized this card is not shaping up into one and such a reappearance is unlikely.
Conspiracy Theory:
When Charlie said that allocation for the HD6950 was changed to the HD6970, what if there wasn't actually a change in the chips? Imagine the HD6970 was going to be 1920 SPs, and instead of a real change from 1536 SPs to 1920 SPs, all that changed was the name.
The chip might have 1920 SPs but launch with only 1536 enabled. This would give AMD space to launch an HD6980 in the future, or even a renamed product for the 7xxx series, while waiting for the 28nm process to mature (quite possibly it will only be available in Q4 2011 or in 2012).
I can't imagine them developing another 40nm chip in the meantime. Maybe they are just playing their cards for the near future?
30% faster than 580, they said ..... charlie doesn't bull:banana::banana::banana::banana:, they said ..... it looks like the 580 will be getting itself a cayman skin jacket for christmas :P
Let me quote this -again-.
Moar rumours inc:
So this time instead of ~370mm^2 it's 389mm^2 (around 2.64B transistors) vs. 520mm^2 (some say 530mm^2; around 3B transistors, some say up to 3.2B). No matter how you look at it, it appears that the 6970 has a much higher transistor density, or some numbers just don't add up...

Quote:
Originally Posted by Fudzilla
Worst case scenario for AMD (389mm^2):
2640 / 389 = 6.79
Best case scenario for AMD (370mm^2):
2640 / 370 = 7.14
Best case scenario, 3.2B for Nvidia:
3200 / 520 = 6.15
Worst case scenario for Nvidia (3B, 530mm^2):
3000 / 530 = 5.66
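The four density figures above, collected in one place (die sizes and transistor counts are the rumored numbers from the post; units are millions of transistors per mm^2):

```python
def density_mtx_per_mm2(transistors_millions: float, die_mm2: float) -> float:
    """Transistor density in millions of transistors per mm^2."""
    return transistors_millions / die_mm2

# Rumored figures quoted in the post:
print(round(density_mtx_per_mm2(2640, 389), 2))  # Cayman, worst case -> 6.79
print(round(density_mtx_per_mm2(2640, 370), 2))  # Cayman, best case  -> 7.14
print(round(density_mtx_per_mm2(3200, 520), 2))  # GF110, best case   -> 6.15
print(round(density_mtx_per_mm2(3000, 530), 2))  # GF110, worst case  -> 5.66
```

Even in AMD's worst case vs. Nvidia's best case, Cayman comes out denser, which is the poster's point about the numbers not adding up.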
Calm down everybody, that info is old (22nd of November, to be exact)
http://img820.imageshack.us/img820/8...9763093102.jpg

Quote:
As the title says: a casual run. Performance is generally closer to the GTX570 than to beating the GTX580.
That should leave some people disappointed; I'm quite disappointed too. Our full review goes up next week.
3DMark 11 Extreme, 1920x1080 4xAA, score is 1640
We were wrong; we just received word that AMD is positioning the 6970 to hit the GTX570, not the GTX580.
Pricing:
HD 6970: ¥3299~3399, positioned against the GTX570
HD 6950 ¥ 2499 ~ 2599
As for the 6950, I really don't know what it's supposed to compete against.
http://itbbs.pconline.com.cn/diy/12257812.html
Hm, looks low... for 6970. This is a picture of 6950, though, from my understanding. Isn't it?
The post goes like this:
http://www.hardwareluxx.de/images/st...3dmark11_3.jpg

Quote:
As for the 6950, I really don't know what it's supposed to compete against.
PIC
http://www.hardwareluxx.de/images/st...h/vantage3.jpg
@ http://www.hardwareluxx.de/
Sounds false, but it would be funny to see AMD releasing a new flagship card that is slower than my OCed GTX 470 (X1703) :D
So its HD6950 vs. GTX480 and HD6970 vs. GTX570, I guess.
That Vantage score is fake.....it's probably 5870 there.....
I get X10120 with my 5870 OCed to 905 core / 1270 mem on an i7 @ 4GHz, HT off... so... I seriously doubt that picture has anything to do with the 69xx series :rolleyes:
Not to mention 1600 shaders - that doesn't add up.