http://www.pctunerup.com/up/results/...43701_0000.jpg
What is that graph supposed to mean?
I honestly don't know what's being implied with such a graph.
Quite simple. If the 5750 is 100% performance, then the rest are a higher percentage than it. Dunno what is so hard to understand. Whether it's the real deal remains to be seen.
So 6870 is 80% faster than 5870 :shocked:
no... lol.
5870 = 216%
6870 = 304.2%
These are all related to the 100% 5750 GPU.
It's about a 40% difference if you compare the 5870 to the 6870, according to this suspicious graph. We shall see in 1-2 months.
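To make the arithmetic explicit, here's a quick sketch using the chart's claimed numbers (which are unverified rumor figures):

```python
# Chart values, normalized to the HD 5750 = 100%
hd5870 = 216.0
hd6870 = 304.2

# Relative speedup of the 6870 over the 5870
speedup = hd6870 / hd5870 - 1
print(f"{speedup:.1%}")  # ~40.8%, i.e. roughly the "about 40%" quoted above
```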
it's a nice guesstimate, but it's just that... a guesstimate...
Of course, a really stretched guesstimate, since it has around 9-10 GPUs which are unreleased.
I really don't see what's "quite simple" about it. Is that a representation of just games, or games and multimedia, etc.? Is it supposed to imply a specific group of DX9 and/or DX10 and/or DX11 titles? It leaves a lot to be desired as to what it's implying, even though it's nothing more than a guesstimate. My initial thought was that perhaps this was an attempt at satire.
In any case, I'm sure some decent leaks will pop up letting us know where the performance of these cards stands sooner rather than later. Also, there's a slight rumor that new, revamped (hopefully newly built) Catalyst drivers may accompany the new release.
Generic performance. General performance. Average performance. Not something specific. Whatever you want to call it.
Just like you would say the GTX 480 is around 15% more powerful than a 5870: that's generic because it doesn't say anything about specific cases, it's an average.
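A rough sketch of how such a "generic performance" average is typically built: take per-game results for two cards and average the ratios. The games and FPS numbers below are made up purely for illustration.

```python
from math import prod

# Hypothetical per-game FPS for two cards (made-up numbers, illustration only)
card_a = {"Game 1": 60, "Game 2": 45, "Game 3": 90}
card_b = {"Game 1": 72, "Game 2": 50, "Game 3": 108}

# Per-game performance ratios of card B relative to card A
ratios = [card_b[g] / card_a[g] for g in card_a]

# The geometric mean is the usual way to collapse ratios into one average index
geomean = prod(ratios) ** (1 / len(ratios))
print(f"Card B is ~{geomean - 1:.0%} faster on average")
```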
as much as i would love 40% perf boost, my gut says we will see 20% in most games, and in tessellation a much higher jump which i think will make a 6870 about 5-10% faster than a 480
think until next console gen arrives games won't need this crazy performance.. we can already run lazy ports from the xbox360 :p:
I don't know man, even if that were the case I'd rather have it this way than the other way around. Having games like when Crysis first came out and 8800s being the only things powerful enough to run it was more frustrating.
Hardware should always be there and the game devs should take advantage of it ideally. I'd love to see 5970/GTX 480 sli like performance on a single GPU, then come 28nm and beyond I hope Nvidia/AMD focus more on power efficiency and all the extras like tessellation and geometry over brute fps.
http://forum.beyond3d.com/showpost.p...postcount=1564
Quote:
Some more info here: http://pclab.pl/news43091.html, from "a little unofficial talk with an AMD worker". You can try to Google Translate it, but I don't think it will work well. It's also written rather vaguely, so I compiled the main points for you:
- improved Cypress architecture, all GPU blocks improved (so shaders too?)
- better performance clock for clock, less power consumption.
- The experience with the 40nm process allows for better organization of the die (and saves space, which is what I personally wondered was possible)
- Better yields allow for more complicated structures
- new UVD 3.0 decoder, with full video playback acceleration for Eyefinity, up to 6 monitors. Overall better decoder.
- no problems with samples of chips and cards, not even with the 6970.
- prototypes might be sent to AMD partners in the coming weeks
- AMD wants HD6970 on the market before Xmas (obviously!)
- No worries from competition, they believe Radeons will reign in the DX11 generation
- Aggressive pricing planned, as long as enough cards can be produced. (buy one asap before prices go up! )
- First low-end, then middle, then HD6970.
- HD6800 planned for the beginning of 2011, could be earlier though.
- no comment on the rumour AMD would be doing a fusion processor for next Xbox.
- lots of optimism in the AMD camp
That's all. FWIW applies here and as many grains of salt you need.
Dirk Meyer said in front of analysts that the WHOLE graphics card lineup will get a refresh THIS YEAR.
So, I do trust him more than a rumor.
doesn't make much sense? :shrug:
Quote:
[...]
- AMD wants HD6970 on the market before Xmas (obviously!)
[...]
- First low-end, then middle, then HD6970.
- HD6800 planned for the beginning of 2011, could be earlier though.
[...]
All they need is Barts performing at 5850 level, priced at 199 US$, in lots-and-lots quantity before Christmas, combined with a Juniper price drop when Barts comes out, to around 109 US$ for the HD 5770 1 GB and 89 US$ for the HD 5750 1 GB, also in good quantity. Everything else north of Barts can drip out until Q1 2011, as long as the availability is there and the bench scores rock! Holiday season sales will be DOMINATED, IMHO.
to tell the truth, amd picked a nice season for introducing the new line. if everything goes right they will again easily take a major piece of christmas sales
It would actually be funny to see Nvidia releasing a 512-core Fermi in November. Well, 1 year late, but still November. :ROTF:
i wonder what the real point of such a graph is.
except creating buzz around vaporware, or for the website publishing those "assessments" lol. they should put "Mrs Yrma has read the palm of an ATI, sorry... AMD employee and saw that SI is going to be 100% faster than a hypothetical GTX 490"
There was an article on Anandtech a while back that described how AMD had changed (and what made the HD4800 such a success). Instead of thinking of new features their card might have, implementing all of them, and launching the GPU whenever it's ready, they scrap the ideas that don't fit in the GPU because of the time needed or die space. One of the main goals is to have a new product ready for the holidays and for when the OEMs plan their new products.
Here's the story. Very good read.
http://www.anandtech.com/show/2679
http://news.ati-forum.de/index.p ... -der-hd6000er-serie
Juniper(HD5700) rebrand as HD6700
Barts=HD6800
Antilles=HD6990
The new gpus will be tweaks of the previous generation, not just a re-name. This is not the 8800GTS 512 transforming into 9800GTX, it's a refresh, nothing more.
AMD said it as well, they will refresh their line-up at the end of 2010.
Sounds ridiculous to shift all the names if they want to rebrand one mainstream card.
idd. i've always condemned nvidia for their rebranding and if amd starts to do the same they'll lose some plusses in my book as well.
rebranding simply sux, as it's just a cheap attempt to make the average joe think it's some new product.
however, renaming the 6870 to 6970, and 6970 to 6990 doesn't make sense, imo. they'll completely break everyone's association with the naming where x870 = single gpu highend, x970 = dual gpu highend etc... bad idea if you ask me.
i don't think this is going to happen. at least i hope so :shrug:
loll no concrete info from ati as of now ... so i still hold faith that they will improve the 5K series design and change some things and not do a simple rebrand
I'd bet my money on a 4+1 -> 4 transition. The "rebranding" would just mean the same amount of SPs on paper, while the actual number of clusters has increased.
Well, a 5-shader array is like a quad-core CPU: it's easier to fill a dual-core than a quad-core or an octo-core.
So a 4-shader array will be fed with data more easily than a 5-shader array, thus improving efficiency per array.
And if you go with 4-shader arrays, you have space for more arrays, keeping the final shader count the same as previous GPUs, but with higher efficiency.
Some people mentioned the possibility of having 2 double-precision-capable shaders + 2 single-precision shaders in such a 4-shader array, so a 2:2 ratio instead of the present 4:1. It would improve computational capabilities for sure, but nothing official from AMD.
Right now the cores consist of 800/5 = 160 SIMD processors (each 4+1 wide). With the 4+1 -> 4 rumour the cores would consist of 800/4 = 200 SIMD processors. That's a 25% increase while the SP count remains the same.
The catch here is that with the 4+1 configuration one SP is more or less unused most of the time; a 4-SP configuration eliminates this and allows denser packing of the SIMD processors. So they can add more SIMD processors and thus bring the SP count to 800 while working within the same die-area limitation.
I am very doubtful about this 4+1 -> 4 theory though; I'd take it as 4+1 -> 3+1. Well, still effectively cutting the number of ALUs per SIMD processor from 5 to 4. But the main idea is that there are hardly ever cases where the instruction-level parallelism is so high that all the ALUs would be busy, so cutting one off isn't a big deal and only gets costly in very rare cases.
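A quick sketch of the arithmetic behind the 4+1 -> 4 speculation, using the counts from the posts above; the average-ILP figure is a made-up assumption just to illustrate the utilization argument.

```python
# Numbers from the post above: an 800-SP part arranged as VLIW5 (4+1) units.
total_sps = 800

vliw5_units = total_sps // 5   # 160 units today
vliw4_units = total_sps // 4   # 200 units under the rumoured 4-wide layout
print(vliw5_units, vliw4_units, f"+{vliw4_units / vliw5_units - 1:.0%} more units")

# The utilization argument: if shader code only yields ~3.5 independent
# operations per clock on average (an assumed figure, for illustration),
# the 5th lane sits idle most of the time, so dropping it costs little.
avg_ilp = 3.5
util_vliw5 = min(avg_ilp, 5) / 5   # ~70% of lanes busy
util_vliw4 = min(avg_ilp, 4) / 4   # ~88% of lanes busy
print(f"VLIW5 lane utilization: {util_vliw5:.0%}, VLIW4: {util_vliw4:.0%}")
```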
Apparently you don't know how ATI shaders work...
http://www.beyond3d.com/content/reviews/52/9
Quote:
The structure of a packed VLIW passed for execution by a discrete block is made up of between 1 and 5 64-bit scalar ALU operations and at most 2 64-bit literal constants for a grand total of 7 64-bit words length. Control flow instructions are passed as separate 64-bit words assigned to the branch hardware. The 4 equal ALUs can handle 1 FP MAD/ADD/MUL/dot product per clock, or 1 INT ADD/AND/CMP/LSH*_INT (but not MUL!) per clock, to list just a few instructions.
For single precision floats, MUL and ADD are ½ ulp IEEE using round-to-nearest-even rounding, and MAD is 1 ULP with the same rounding mode. For double precision, as already mentioned, these ALUs are fused in pairs of two to compute 1 DP MAD per cycle across all 4 of them. Only a limited set of instructions are supported in DP (no transcendentals being the obvious omission), and compliance with IEEE754 is relative: denorms are flushed to 0, only round-to-nearest rounding is supported, and a MAD produces different results from MUL+ADD due to rounding. Integer and float instructions can't be processed in parallel.
The transcendental unit (the Rys unit!) is different from its more silhouette conscious brethren: it's (surprisingly!!!) capable of handling transcendentals (cos, sin, log, exp, rcp et al.) at a rate of 1/cycle, INT MUL, due to a slightly higher internal precision (40-bit versus 32-bit, allowing expression of a 32-bit int in the FP exponent) than the other ALUs, and format conversions, all whilst not being able to process dot products or double precision work (so it's idle when double precision processing is happening).
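To illustrate the packing limit that quote describes, here's a minimal sketch that just encodes the "1-5 scalar ALU ops, at most 2 literal constants, 7 64-bit words total" rule; it's not AMD's actual packing logic, only the constraint as stated.

```python
def bundle_fits(alu_ops: int, literals: int) -> bool:
    """Check a VLIW bundle against the limits quoted above:
    1-5 scalar ALU ops, at most 2 64-bit literal constants,
    and no more than 7 64-bit words in total."""
    if not 1 <= alu_ops <= 5:
        return False
    if literals > 2:
        return False
    return alu_ops + literals <= 7

print(bundle_fits(5, 2))   # True: the maximum-size 7-word bundle
print(bundle_fits(3, 3))   # False: too many literal constants
```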
huh? it's there, just open the link ;)
http://news.ati-forum.de/images/stor...03_09_2010.jpg
http://news.ati-forum.de/index.php/n...mit-rebranding
Well, perhaps it has to do with surprisingly good performance to come, so AMD thought they'd brand them a little higher, or so they can more easily price them higher (say $499 for a Cayman XT)? $499 for an HD 6870 might look like a lot, but for a 6970 it looks better. For knowledgeable people it's just a number, but for people who don't have a clue about graphics cards it could have a point when compared to the HD 5870 and 5970.
Yeah, I think so too. I hope they don't use this branding; it starts to remind me too much of Nvidia tactics. Being at the top and controlling the market, which I think they are doing with the DX11 generation so far, really seems to be a bad thing in the long run: companies only get more and more greedy at the top and start using silly marketing tactics to squeeze even more cash by exploiting people who don't know any better. It happened to Nvidia before; now it could start happening to AMD too in the graphics department, with this possible rebranding and renaming what was meant to be the 5870's successor to a higher number, etc. But we should take all this with a grain of salt as usual. :p
Still, true or not, I hope JF sends a certain message to the gf part of the marketing department. The message being "DON'T :banana::banana::banana::banana: UP WITH A REBRAND". He might not know their intentions, but that does not mean he couldn't pass on feedback from the enthusiast part of us.
can we stop talking about the rebranding until we have 100% proof of it ..... amd can't afford such silly moves since they are barely on top of the gpu market ... but still, amd as a whole isn't on top of anything ..... so they really need to dominate or at least keep pushing for domination ....
I really doubt they are rebranding anything. They're just improving the old design, and the specs are seemingly the same by the numbers, not under the hood. :)
...so worry not. ;)
I think AMD is either a) screwing with everyone leaking bogus info b) planting bogus info internally to find out who leaks it
It could also be that leaked information was misunderstood.
I doubt we will see any nVidia style rebranding.
i'd go with calmatory's post .... it might be as simple as a rework of the current arch, so the numbers would most likely look the same on paper, thus creating the rebranding rumors ....
I don't have anything against tweaking and selling a product but for all the outrage against NVIDIA...
4000 series > 5000 series (doubled 4000 specs + dx11) > 6000 series (more of the same tweaking)
Where is the similar outrage that we saw against NVIDIA?
i could be wrong .. but when nvidia did the renaming they didn't really change anything, right???? from 8800 something into 9600 something ....
Worse, they renamed 8800GT to 9800GT. Exact same card. OEMs changed the packaging, the BIOS, and nothing else (in most cases)
because with a lot of nvidia's renames there was no tweaking.
typically it was the same amount of "cuda" cores and the same amount of memory.. the difference was slightly higher clocks.. which any of us could do ourselves with any overclocking program
no one's upset if nvidia or ati take an arch and tweak it.. that's what happens. but if ati were to take a 5870 (1600sp, 1gb gddr5, 256bit, 850 core) and the 6870 was, say, 1600sp, 1gb gddr5, 256bit and a 900-1000 core.. i'd be upset with it and not waste my time..
however they are "supposedly" tweaking it a bit more than that, making it more efficient and improving tessellation, which seems a bit different than just taking a gpu, clocking it slightly higher and giving it a different name
I mentioned this a couple of pages ago :D
They won't rebrand (rebrand = 8800GT > 9800GT, or 9800GTX+ > GTS250, IMO), but most probably the 67xx will be a tweaked Cypress. That's what I hear.
However, look out for clocks at or above 850 for the 6770, memory will probably be 1250. I feel it is going to be a great overclocker, surely some tweaking has been done to allow for higher clocks (4870 > 4890 ish)
I personally have no issues if I can get a higher clocking Cypress than 5870 for $250. Bring it on
Edit: that chart is completely wrong. There's no way a 5770 will be a tweaked and rebranded 6770; if there is such a "rebrand", it will be what is mentioned above.
http://www.chiphell.com/thread-121067-1-1.html
Quote:
Antilles 6990
Cayman XT 6970
Cayman PRO 6950
Barts XT 6870
Barts PRO 6850
Juniper XT 6770
Juniper LE 6750
Turks 66xx / 65xx
Caicos 63xx
nApoleon posted on 2010-9-4 22:55
Cayman has a 256-bit bus, not 384-bit.
Quote:
Also, after these past few days of investigation, Cayman is 256-bit, not 384-bit. AMD's smoke screens are getting heavier and heavier.
nApoleon posted on 2010-9-4 23:11
JUNIPER what?
No. It can't be! :confused:
XS has the worst fanATIcs on the internet.. :yepp:
Even after a humongous post was made detailing how ATI was cheating in Crysis (and other games) with their filtering method and delivering inferior IQ for greater performance, there was no real outrage.
Bitter.. Bitter green cookie! :)
Well, it's not like nvidia hasn't been caught with its hand in the cookie jar many, many times either..
http://www.extremetech.com/article2/...1103987,00.asp just one example.
ATI hasn't demoed a wooden card either, and the PhysX stuff isn't helping Nvidia, nor is announcing the announcement, etc. Nvidia's leadership looks like they take cues from Steve Jobs sometimes.
And as for the "renaming" thing, the 3xxx, 4xxx, 5xxx and now 6xxx series are NOT the exact same thing; every generation has many differences in features and performance. In the case of Nvidia, they were selling the same chip under 3 different names. Ain't that a little different?
Not bitter, just nonplussed :)
Nvidia hasn't done an IQ affecting cheat in years, as evidenced by your article which goes back all the way to the FX 5900..
ATI on the other hand, have been GETTING AWAY with filtering cheats for quite some time now. Only a few people and some small websites have called them out on it. Major websites like Anandtech however, have effectively turned a blind eye, or become deluded themselves. At first it was sort of excusable since they've always used angle dependent AF until the 5000 series, while Nvidia got rid of angle dependent AF with the G80.
But now that they use angle independent AF like Nvidia, they still resort to these tactics; despite the tremendous raw power of the 5000 series. And to rub salt in the wound, you can't even turn it off.
I guess ATI fans must have lower expectations, because I as an Nvidia fan would never let Nvidia get away with something like this..
God knows what ATI (or AMD now I suppose) is going to do next to catch up with Nvidia..
What's with all this tweaking talk?
They're changing the bus width, the number of SPs, the type of SPs (from 5-wide to 4-wide), the memory frequency, the amount of memory, and adding more tessellation units.
That's practically everything that actually influences performance being changed.
This is NOT a tweak; even if the basic architecture is similar or revised, everything else is different.
Ignore carvidia. OUT WITH ONDORE'S LIES! I'M CARVIDIA VON RONSONBERG OF NVIDIA.
Catch up how? Doesn't ATI have the most powerful card on the market today? The 5970, not even taking into consideration the 5970 4GB and Ares type of things. :)
Quote:
God knows what ATI (or AMD now I suppose) is going to do next to catch up with Nvidia
i think this is what people are talking about.
http://img833.imageshack.us/img833/982/80187435.jpg
and no that is not everything they could possibly change. take a computer architecture course or something of the sort.
Some people claimed to notice the difference. That thread I linked to had a few such people..
Anyway, it's the principles and not the technicalities which are important.
If you drop the cash for a "high end" video card(s), don't you expect to be able to run at higher IQ settings without compromise?
With ATI however, you're not even getting the full FP16 rendering...meaning, you're not getting what you've paid for.
Why the hell would you spend hundreds of dollars on a high end video card, when it can't even run AF properly?
And as I mentioned before, you can't disable it at all.. ATI is lying to its consumers by making it seem as if the card is really doing 16x AF, when in fact, it isn't.
This is 1000 years old but some of it still applies
http://www.extremetech.com/article2/...1154593,00.asp
well, it would help if carfax knew the difference between a texture format and a filtering algorithm. the rumor is that in some cases ATi was using FP11 where the game called for FP16.
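For context on the two formats mentioned: FP16 is the standard half-float (1 sign, 5 exponent, 10 mantissa bits), while the packed R11G11B10 render-target format ("FP11/FP10") drops the sign bit and keeps only 6 or 5 mantissa bits per channel. A small sketch of the precision gap, using those standard bit layouts:

```python
# Standard bit layouts of the formats discussed above.
formats = {
    "FP16 (half float)":          {"sign": 1, "exponent": 5, "mantissa": 10},
    "FP11 (R/G of R11G11B10)":    {"sign": 0, "exponent": 5, "mantissa": 6},
    "FP10 (B of R11G11B10)":      {"sign": 0, "exponent": 5, "mantissa": 5},
}

for name, bits in formats.items():
    # Relative precision is roughly one part in 2^mantissa: fewer bits, coarser steps.
    step = 2 ** -bits["mantissa"]
    print(f"{name}: {sum(bits.values())} bits total, relative step ~{step:.4f}")
```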
Ignorance is bliss seems to be the favorite mantra for fanATIcs? :rolleyes: You don't have to take anything I say.. Look it up for yourself..
Though I'm sure you won't. You'd rather stay in la la land and believe that ATI has the same IQ as Nvidia..
Unless they've redone their drivers, nothing has changed. In the Guru3d article, they used the 10.6 drivers and the IQ was still inferior to Nvidia's.
http://www.guru3d.com/imageview.php?image=25127
Keyword "some."
Carfax, just being objective here, but what does your ATI vs Nvidia thing have to do with the title of this thread?
I mean, when it comes out and there are benchmarks and shots of games using the new cards and new drivers, then maybe there will be some point in comparing games and IQ. But right now it's pointless, isn't it?
Double post dl please
Many of you are talking about "cheats" like it was a bad thing, but I actually consider it a good thing to get extra performance if you can do it without noticing the difference while gaming, as opposed to studying a zoomed-in screenshot. I'd actually go as far as to encourage both Nvidia and AMD to find ways of doing different techniques differently for better performance, as long as there's no noticeable IQ difference without analyzing it more closely. I would always turn off options that give no noticeable IQ improvements in favor of extra performance; fussing over IQ loss that I don't actually notice while gaming is just some BS to me, not worth a thing even if I had bought a $500+ GPU.
You know what, you're 100% right.. The AF fiasco has already been covered, and there's no use in beating a dead horse with a stick..
As far as I know, they've been around for years in one form or another. Like the article that Ajaidev posted shows, ATI has always been a big believer in selective use of AF since the days of the 9700/9800 series. It's evolved way past that now though..
Quote:
@Carfax: Is there any info whether tweaks like that have appeared recently, or have they been around for long?
Anyway, I'm done with that topic..
That's not what I meant. If you can do techniques like AF or AA differently, and the difference in IQ is only visible when zoomed in, and you can find ways to do it with less performance impact and negligible difference in IQ, then I'm all for it. Forcing 16x AF down to trilinear has nothing to do with this; that's forcing AF into trilinear, and trilinear isn't the same technique as AF. However, if you can do AF with another algorithm that gives the same results with a lower performance hit and a difference in IQ only visible when zoomed in, I'm all for it.
http://www.chiphell.com/thread-121384-1-1.html
Cayman XT,2XDVI+2XMini DP+HDMI:shocked:
http://i52.tinypic.com/fnhhz8.jpg
the nvidia one is a bit sharper, but i would like to see the image as it is, not a zoomed-in one. If you don't see the difference when you're not zoomed in, then i don't care.
nvidia shot looks sharper
I find it funny that they use a DX9 game with poor graphics to compare the latest graphics hardware.
Could that pic be with AA on? Maybe they should use another game?
I could do the exact same thing with another game, only this time, you actually notice the difference.
http://www.hardocp.com/images/articl...4NzG_1_1_l.png
If pic doesn't work you have it here:
http://www.hardocp.com/image.html?im...dfMV8xX2wucG5n
Ati looks sharper here.
http://www.chiphell.com/thread-121412-1-1.html
Caicos (HD6300) 64-bit 1GB 4pcs 128Mx16 DDR3 650e/800m 4L, UVD3 + 3D Blu-ray
http://i52.tinypic.com/m6h4x.jpg
http://i51.tinypic.com/xqm1vo.jpg
Very nice up's mindfury :up:
Keep them coming MindFury:clap: