^ Yes I noticed it was a 1920x1200 4xAA HD 6870 vs 1920x1080 no-AA GTX 480 comparison at first, but perhaps it wasn't targeted at me. :p:
I couldn't initially find the right benches, but mindfury pointed to the Hexus review.. :P
Note that the resolution is 1920x1080 for the GTX 480 and 1920x1200 for the HD 6870. However, the difference should be fairly insignificant IMO.
Well, it should also be noted that the HD 6870 test was run with almost a 1 GHz CPU overclock advantage (4.2 GHz i7-930 vs the stock-clocked 3.2 GHz i7-965, turbo enabled, in Hexus' test), but from what I've seen the CPU won't play that much of a role in the extreme tessellation test anyway, so I'd still say around a 20~25% advantage in that test for the HD 6870.
well, the resolution difference is quite important. You can calculate the number of pixels each GPU has to render.
So, equal graphic settings (except resolution).
GTX 480 has 1920 x 1080 = 2073600 pixels = 30.5 fps
HD 6870 has 1920 x 1200 = 2304000 pixels = 36.6 fps
the resolution difference works out to 11.11% more pixels rendered by the 6870.
The 1 GHz helps it a bit, but Unigine is not really CPU hungry, especially since it's built to show what tessellation is all about.
Anyway, the performance difference looks solid.
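The comparison above can be sanity-checked with some quick Python. The fps numbers are the leaked ones quoted in this thread; normalizing by pixels per second assumes the workload is purely fill-rate bound, which is a simplification (geometry/tessellation work does not scale with resolution):

```python
# Back-of-the-envelope: normalize the leaked fps numbers by pixels rendered.
# Assumes fill-rate-bound rendering, which ignores resolution-independent
# geometry work -- a simplification, not a real benchmark methodology.
gtx480_pixels = 1920 * 1080   # 2,073,600
hd6870_pixels = 1920 * 1200   # 2,304,000

gtx480_fps, hd6870_fps = 30.5, 36.6  # leaked scores from the thread

extra_pixels = hd6870_pixels / gtx480_pixels - 1
print(f"HD 6870 renders {extra_pixels:.2%} more pixels per frame")  # ~11.11%

# Pixels pushed per second, as a crude throughput figure
gtx480_rate = gtx480_fps * gtx480_pixels
hd6870_rate = hd6870_fps * hd6870_pixels
print(f"Throughput advantage: {hd6870_rate / gtx480_rate - 1:.1%}")  # ~33.3%
```

So under that (rough) assumption, the 20% fps lead at a higher resolution would actually translate to about a third more raw pixel throughput.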
Mind if I ask the source? Or is that your own setup with the GTX 480 and i7-950? So a 24% average fps and 30% minimum fps advantage, talk about a tessellation performance improvement. :eek: (However, if that HD 6870 bench turns out to be fake and the improvements are way smaller, I wanna stab that person :rofl: and if it's true, then he's succeeded in doing some very valuable early pre-launch marketing ;))
The GTX score comes from a guy on the most famous French forum, Hardware.fr.
Wow. If the 6000 series has that kind of tessellation performance, I will buy one as soon as they come out! Maybe, just maybe Stanford will have their ATI OpenCL client out by then so all my dreams can come true at once.
Be ready to cough up some $$$ guys:D
In a few weeks there'll probably be a lot of 480s and 5970s @ the F/S section :)
as i said in my previous post already, the difference is not only the amount of rendered pixels.
if the unigine engine handles different aspect ratios correctly, it uses a wider fov for the 1920x1080 resolution. so it would have fewer pixels, but a wider fov and therefore more visible objects etc in a single frame.
no idea what has the bigger impact on performance, more pixels or higher fov. so, yeah... :p:
how can displaying pixels not be related to the rendering power of that gpu? the bigger the resolution, the more it has to work + the triangles etc... it has to render on that screen... refresh rate stresses the gpu, so i think that florinocanu has a point... but to what extent? maybe someone could run a test on a fermi and a 5800 so we could see the difference...
Bigger resolution doesn't mean more triangles to be processed, unless it affects the FOV. It does mean more pixels to be processed though, so AA and AF will play a bigger role, since they are applied per-pixel and there is more processing going on. Higher resolution also stresses the TMUs and ROPs more, since they handle texture mapping and pixel handling.
I wonder how it's gonna look on 3x 1920x1050 :D
Uh oh, if this stuff is true, IMHO nVidia gets screwed for the next 3-4 quarters until the 28 nm process node arrives. :shocked:
EDIT: And why do I sense there will be triple or quadruple the FUD regarding ATi's bad drivers, game support, etc, in the next few months, or at least until a new rumour about nVidia's next-gen card surfaces on the web. :rolleyes:
Hopefully they fix Eyefinity flickering with these new cards !
Haha, yeah, I was kind of waiting for "but 20% faster, 30% less power consumption do not matter when ATI can't write drivers worth s$%#".
I just hope these cards are not priced higher than 5xxx cards, rather that they replace them at more or less the same price point. Thinking about it, a second hand 5870 would probably serve me well enough.
At least I am. Resolution (UNLESS IT CHANGES FIELD OF VIEW!) doesn't have ANYTHING to do with tessellation. The units which handle tessellation do not handle per-pixel operations, as far as I know, so there is no extra work for those units. The work remains the same regardless of the resolution.
However, even with tessellation the resolution obviously has an impact on the framerate, because there is a limit on how many operations the ROPs and TMUs can do. With a bigger resolution they need to do more, and they also need a bigger share of the memory bandwidth.
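A rough way to see both halves of that argument at once is to model frame time as a resolution-independent geometry term plus a per-pixel term. All the constants below are invented purely for illustration, not measured from any card:

```python
# Toy frame-time model: t_frame = t_geometry + pixels * t_per_pixel.
# The geometry/tessellation term does not grow with resolution; the
# per-pixel term (shading, ROP/TMU work, bandwidth) does.
# Both constants are made-up numbers for illustration only.
def fps(pixels, t_geometry_ms=8.0, ns_per_pixel=8.0):
    t_frame_ms = t_geometry_ms + pixels * ns_per_pixel * 1e-6
    return 1000.0 / t_frame_ms

fps_1080 = fps(1920 * 1080)  # ~40.7 fps
fps_1200 = fps(1920 * 1200)  # ~37.8 fps
print(f"{fps_1080:.1f} -> {fps_1200:.1f} fps "
      f"({fps_1200 / fps_1080 - 1:+.1%} going from 1080 to 1200 lines)")
```

Note the fps drop (~7% with these made-up constants) is smaller than the 11.1% pixel increase, precisely because the geometry portion of the frame doesn't grow with resolution.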
Of course it will; tessellation has an impact on anti-aliasing and anisotropic filtering, indirect lighting, reflections... It wasn't the case when ATI released their API, because the driver was optimised for it, but since Microsoft chose a different path in DX11, it impacts it a lot. To make it simple: tessellation doesn't kill the framerate, but the addition of the filters on top of the tessellated surface does.
AMD needs to start getting their logos in a few games now.
My thoughts exactly. Now would be a good time to do it, something corresponding to NVIDIA's "TWIMTBP". They've already had quite an upswing with the HD 5xxx series when it comes to cooperating with game devs (well, they were way ahead with DX11 support, so no surprise); with HD 6xxx they could gain trust even more easily.
Well, one thing, saaya.
Tessellation performance is not directly correlated with resolution when doing a Unigine test, but:
Rendering is done per pixel. More pixels = more area to render, which means more shading/displacement/geometry etc...
I can tell you about normal CGI rendering, which shares a lot of similarities.
Displacement takes quite a lot of time to render, and when the resolution is smaller, rendering goes faster. If you keep the same displacement and up the res, you increase the rendering time linearly, because the objects being displaced/rendered occupy a bigger pixel space. Displacement is done in screen space, not for the whole scene; it would be crazy to do it like that.
So, a resolution increase also means lower performance, even in Unigine.
Crysis at 1920x1200 4xAA: the HD 6870 scored 43.5 FPS. The GTX 460 1024 MB SLI scored 40.0 FPS. Very impressive, if the results are true.
http://www.techpowerup.com/reviews/N...460_SLI/7.html
Well... CGI rendering is actually quite different from the graphics cards rendering pipeline.
Take a look at this:
http://i.msdn.microsoft.com/Ff569022...s,VS.85%29.png
From Input Assembler until after the Geometry Shader everything is done resolution independent! (as somebody mentioned assuming same aspect ratio) The Rasterizer and Pixel Shader are then operating on the actual pixel level of the current rendering target.
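The split that the diagram shows can be summarized in a few lines; the table below just encodes which D3D11 stages do work that scales with the render-target resolution and which don't (Stream Output omitted for brevity):

```python
# DX11 pipeline stages and whether their workload scales with the render
# target resolution. The resolution-independent stages include the whole
# tessellation block (Hull Shader / Tessellator / Domain Shader).
PIPELINE = [
    ("Input Assembler", False),
    ("Vertex Shader",   False),
    ("Hull Shader",     False),
    ("Tessellator",     False),
    ("Domain Shader",   False),
    ("Geometry Shader", False),
    ("Rasterizer",      True),
    ("Pixel Shader",    True),
    ("Output Merger",   True),
]
for stage, per_pixel in PIPELINE:
    print(f"{stage:16s} {'per-pixel' if per_pixel else 'resolution-independent'}")
```

Everything up to and including the Geometry Shader works on geometry, so upping the resolution only adds load from the Rasterizer onward.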
It is said that the Unigine results are fake because the fonts are not consistent:
http://img830.imageshack.us/img830/3056/2ytsco8.jpg
Source: http://www.chiphell.com/forum.php?mo...&fromuid=58492
Umm yeah, it originates from here. They got the information from the other thread.
Nope. In CGI, effects are calculated resolution-independently as well, but displaying them takes more time at a higher resolution than at a lower one. Displacement is the best example of this; it is usually done with physical scale in mind (the amount the model is tessellated — 1 polygon every 1 mm, 3 mm, 5 mm etc. — determines how detailed the model will be after the displacement is done). So even if I render at 640x480 or 1024x768, the displacement is done independently on the model.
But doing displacement at 1024x768 usually gives a linear increase in rendering time vs 640x480. That's why a lot of the time I avoid using displacement: it's a resource hog.
This is a bit amazing. The guy has the GPU and doesn't post any screenshot of the die? ...
I think it's a fake too, but a well done one.
And that's exactly the point.
The DX11/OpenGL tessellation we are talking about is spawning millions of new polygons in the middle of the pipeline - independent of the rendering resolution.
Your CGI renderer most likely is implementing displacement mapping in a sub-pixel displacement way without generating tons of micro-polygons like we have in the GPU pipeline - so the renderer is resolution dependent.
I don't know if all the leaked numbers are fake, but the real performance must be very good. Otherwise I don't think that AMD would kill the Ati brand.
I like the theory someone mentioned earlier in the thread. Why change it now? Let's hope it's true and they deliver the rumored performance for the 6 series.
Mmmh, interesting. I have been an Nvidia user for the last 3 years (not a fanatic, despite the pic lolz, I just prefer them so far), but maybe these ATI cards show some real potential and I'll switch over... Thanks for the thread, screw the haters LOLz... Keep the info coming... after all, isn't education one of the purposes of this forum??
im sure it does, but from what ive seen the difference is very very low...
especially when you go from 1920x1080 to 1920x1200, which is what the original point was...
thx for the details :toast:
but im sure you will agree that the difference between 1080 and 1200 is tiny :D
:lol: :toast:
so its really fake then?
good point... but as long as cayman outperforms gf104 they own the highend with a dual cayman card, so... they dont necessarily have to kick 4ss to feel confident about the 6000 series... hmmm
Saaya, if the new chips are not really that much faster than the current line-up, as you've previously suggested, then why do they have to bother with the R&D & creating this whole new family? DX11 strengthening can wait until 28 nm arrives, can't it? We know that in current games, Evergreen is more than good enough on average.
Evergreen, while long in the tooth, is holding the fort just fine, and since Evergreen's fastest chip is actually smaller than nVidia's current best perf/watt/die-size performer, the GF104, then should the supply constraint abate, ATi could easily start a price war to expand market share, if needed.
These new chips IMHO have to be considerably faster while adding efficiency in the process. I know that's quite a feat, should ATi successfully pull it all the way off, but I think they have the capability & room to do so. Then a new-generation tagline would be well deserved & justified.
Regards. :)
is this real or fake? scores seem to be smashing single 480s.. if true then hell yer!
But why would they edit the image when they can change the html?
http://a.imageshack.us/img163/7199/faket.png
If I had real numbers and put them out, I would use a fake-looking screen, as people would then disregard them.
In spite of being real.
I assume the 6870 is the card to get ;)
Per-pixel..? Bro, resolution is a ratio. More is more... 1920 x 1200 is more than 1920 x 1080. Any benchmark will change its fov to render @ said resolution, otherwise it would just stretch scenes & be dysfunctional as a benchmark.
More to tessellate, more work to be done. Aren't you dismissing the obvious?
I said physical scale. So I can say to the model, I want a polygon every mm of the model (which implies working with the model at scale). So... if the model is distant, then I use some low tessellation on it; if the model is close to the camera, then I input a higher value to get more detail.
Still, even then, resolution plays a huge role in the render time, since the actual displacement is done through the shader, via a black-and-white map (exactly how you do it in DX11 as well). So, more pixels = more space to shade/displace.
Remember, tessellation means you divide the polygons of the model into more polygons.
Displacement means you then use a black-and-white map on the tessellated model and displace extra geometry via the shader, which means more render time when you increase the resolution, especially when you use higher AA (you have that in CGI as well; you can use low or high AA, depending on the scene or the desired render time/quality).
Also, shading is usually done at a sub-pixel level, because you use an AA filter for sub-pixel transitions (better, crisper renders), like Catmull-Rom for example, so as you up the resolution and AA levels, the displacement takes longer to do.
I can go on about this forever, but I think you get the picture. There is a clear distinction between tessellation (done via the hull shader/tessellator/domain shader stages in DX11) and displacement, done after that.
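The point about resolution and AA multiplying the shading/displacement work can be sketched as a simple sample-count estimate. This is purely illustrative: real CGI renderers sample adaptively, so treat the linear scaling as an upper bound:

```python
# Rough shading-cost estimate for a screen-space shading/displacement pass:
# samples = pixels * AA samples per pixel. Purely illustrative; adaptive
# sampling in real renderers makes this an upper bound, not an exact cost.
def shading_samples(width, height, aa_samples=1):
    return width * height * aa_samples

base = shading_samples(640, 480)            # 307,200 samples
hires = shading_samples(1024, 768)          # 786,432 samples
hires_aa = shading_samples(1024, 768, 4)    # 3,145,728 samples

print(f"1024x768 vs 640x480: {hires / base:.2f}x the samples")      # 2.56x
print(f"...with 4x AA on top: {hires_aa / base:.2f}x the samples")  # 10.24x
```

Which matches the experience described above: upping the res alone more than doubles the shading work, and stacking AA on top multiplies it again.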
Profit margins. AMD makes quite a bit of money at the current prices, but what if they could charge about a hundred dollars more across the line? They would be making a crapload more, probably a hundred million more, given the number of cards they have already sold.
With the rate these cards are selling out, AMD might be thinking they released the 5xxx series at too low a price, and they cannot outright raise the prices themselves too much (the most they could get away with was the 20-dollar raise they did).
What they could do is release a somewhat modified card with better performance that at least matches the competition and somewhat beats it, and increase the price of the line by 100 dollars (and these cards likely cost the same to produce, considering improved yields). Not taking a huge design risk allows them to develop the card way faster, which is probably the most important thing, considering the time between now and 28 nm is when NV will have nothing to respond with in the high-end market besides the GTX 490.
By keeping this performance in check and not doing something crazy to this generation: it allows them to make a new product for cheap(low R and D), keep current 5xxx customers happy by not feeling that they were duped into buying a slow early 5xxx and it also prevents these owners from flooding the market with used 5xxx cards which might prevent a new sale of a 6xxx series.
According to Steam, close to half the 5xxx cards are 58xx variants. That's 8 million cards, considering 16 million have sold. The 5850 and 5870 have MSRPs of 259 and 379 dollars but have sold, at least at the beginning, for $350 and $450. What if AMD simply refreshed the line with about 20% more performance and released it in better quantities, so that the cards sold at MSRP and they reaped the benefits, rather than the retailers who gouged us?
This is what AMD should have done in the first place and I think they realized this.
This quarter, the AMD graphics division made 40 million dollars net, which is pathetic considering the near-monopoly they had. NV had quarters where they made hundreds of millions of dollars during their prime. Just imagine the difference to that net profit if they charged 100 dollars more: if AMD, for example, charged 100 dollars more and sold 10 million cards, not including the low end, then depending on the deal they have with partners, you're looking at at least 200 million more in net profit, even after R&D and the money shared with partners.
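For what it's worth, the arithmetic behind that estimate checks out. All figures here are the thread's speculation, not real AMD financials, and the 20% net share is just an assumed cut after partner deals and costs:

```python
# Rough version of the margin math above. Every number is speculative:
# card volume and price bump come from the post, the net share is a guess.
cards_sold = 10_000_000
price_bump = 100                       # dollars more per card
gross_extra = cards_sold * price_bump  # $1.0B in extra revenue
amd_share = 0.20                       # assumed net cut after partners/costs
net_extra = gross_extra * amd_share
print(f"${net_extra / 1e6:.0f}M extra net")  # ~$200M, matching the estimate
```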
Fermi has been such a PR disaster with consumers, because of heat and energy concerns, that sales of the 5870 and 5850 don't seem to be slowing down much and they are still selling out.
They could release a card that's just as fast as the GTX 480, add 100 dollars to the current 58xx prices, and it would still sell, if they avoided the energy use and heat output of the GTX 480.
They have a lot of momentum, so people might buy an AMD card just because it is new and they haven't jumped on the DirectX 11 fence this generation.
Considering SI was originally designed for 32 nm, and considering how quickly they got this line-up together, I'm thinking this card, just from a time perspective, has to be way more Cypress than NI, which leads me to believe they might improve performance 20-30 percent, but nothing crazy like the fake tessellation score in Unigine.
Tajoh, ive been hearing that Cayman was originally designed for 28/32 nm as well.
Thinking about it some more, it makes sense. As GPUs take years to go from design to being finalised, I am sure the architecture after Cypress was designed with 28/32 nm in mind. It's not like the transition from 45 to 32 nm was taking too long, so AMD just decided to modify the Cypress core here and there and sell it as Cayman in a few months. Also, the fact that the 67xx cards seem to be modified Cypress cores suggests that Cypress must be cheaper to manufacture than a dumbed-down Cayman core, which might happen due to the fact that Cayman will be bigger in size than Cypress since, again, it was designed for 32 nm.
Power consumption apparently seems to have gone up in comparison to Cypress as well. Hmm.
Also, you do need to realise that 67xx are going to replace the 58xx cards. I doubt AMD would price their midrange 67xx cards at current 58xx prices. Hence from what I feel, 68xx will be priced slightly more than 58xx, and 67xx are going to fill up the $150 - $200 market.
Of course I might be wrong :shrug:
are you familiar with a z-buffer? triangles directly affect rasterization and z-buffer performance. because of the way z-buffer algorithms are optimized (hierarchical, with quadtrees and coarse-grained culling), the tiny triangles that tessellation creates hurt efficiency.
i am saying that the # of triangles and their properties are directly related to rendering speed.
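The efficiency hit from tiny triangles can also be illustrated with quad shading occupancy: GPUs shade pixels in 2x2 quads, so a triangle covering only a few pixels still pays for the partially-filled quads it touches. This is a simplified model with illustrative numbers, and it ignores the hierarchical z-buffer effects mentioned above:

```python
# Why tiny triangles hurt: hardware shades in 2x2 pixel quads, so every
# quad a triangle touches costs 4 shader invocations, even if the triangle
# only covers 1-3 of those pixels. Counts below are illustrative.
def quad_efficiency(covered_pixels, quads_touched):
    return covered_pixels / (quads_touched * 4)

# A big triangle: 10,000 covered pixels, mostly full quads (say 2,600 touched)
print(f"big:  {quad_efficiency(10_000, 2_600):.0%}")   # ~96%
# A tessellated micro-triangle: 2 pixels, but it still touches 2 quads
print(f"tiny: {quad_efficiency(2, 2):.0%}")            # 25%
```

So a scene tessellated into pixel-sized triangles can end up doing several times the shading work of the same scene with larger triangles, independent of the z-buffer cost.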
why do you guys continue to discuss semantics though?
the original point was that one score was 1920x1080 and the other 1920x1200 and that they were not comparable... to which i and others said, the difference in unigine between those two resolutions should be tiny and the scores would be somewhat comparable...
and they are... somebody compared 1080 vs 1200 and the performance is almost the same...
Thx for the lengthy response, appreciated the thought & gesture. :)
Regarding your opinion, it has merit of its own, but i dunno man, i don't think ATi will do a whole family refresh just to keep profit margins by creating a bigger & more power-hungry chip at the top while relegating the old chips downward + renaming them, nVidia-style. Remember the RV790? A new SKU with relatively minimal R&D to match up better with the competitor's better SKU, and it still kept the same family nomenclature. Only time will tell (the truth will be unveiled in the near future, i presume) & speculation is fun, like what Saaya said earlier. :D
About the leaked numbers, i think they're quite possible, but for me personally the Vantage & Crysis scores seem more believable & probable. Why? Because i have a strong feeling about the added efficiency of the rumored new 4D SP array design applied to Cayman with 1920 SPs (480*4). Regarding tessellation, while the smart money says ATi should have improved its current mArch's relatively dismal geometry & tessellation capability, i really don't have a clue how they could achieve that.
I could see, if AMD doesn't update the 5770, them still keeping the 6770 range at the low price it's at now. However, if it performs 25% better, I could see the 6770 being knocked up 50 dollars. If the 6770 performs like a 5830 or a GTX 460, it can totally get away with a price tag of 179.99, with the 6750 at 150. AMD can get away with such pricing because they have the momentum right now, and Nvidia's low-range products are looking mediocre.
If the 5xxx series is off the market, the consumer will be basically forced to buy AMD's line if they want something new at all or buy NVidia's slightly cheaper but worse performing line with higher power and heat.
The latter they don't mind, because it allows them to be re-established as the premium brand, and in addition it allows their competitor to make some money, just not that much.
Making money is the number one priority of a company, and getting something to market ASAP while the competition is vulnerable is the most important thing during that time. Nvidia is going to jump on 28 nm ASAP when it comes out, and AMD is going to need a 28 nm line at that time too. This is likely going to be late Q2 or early Q3 of next year.
I cannot see AMD spending a full R&D budget and more to rush out a product that's going to have only a 9-month life cycle (which would likely be the shortest life cycle ever for a line-up). They really don't need to blitz NV at this point. They are selling like crazy as is, and by keeping performance in check, it gives people more incentive to upgrade when the 7xxx generation happens. Considering that this chip has to be bigger than the current chips to perform better, it's not the place to take risks, as we have seen with TSMC's 40 nm. It's better to be safe so you can have predictable power outcomes.
I could see Nvidia still selling during this period with a 384 shader part at 249, gtx 480 at $399 and a gtx 490 at a price of 499.
Too much performance is just going to lead to gouging, which doesn't really help AMD's profits as a brand, as seen from their last quarterly statement. At this point timing is more important: the longer AMD can get away with big margins, the more money they will make.
that's almost never done, and for good reasons. also, a gf100 based on 28nm could easily be 40% faster with relatively little effort.
R&D is a fixed expense; it doesn't change much. but really, it's not as quick-paced a game as you think. development of gpus takes 3-4 years, so they aren't going to base decisions on how the competition is performing 3 years from now; they are just going to reach the goals of the chip, i.e. performance, area, cost, etc.