It's weird; going by their track record, the Chinese leaks on cards usually tend to over-estimate performance....
If those figures are true, then get used to high prices...
This isn't aimed at anyone, and I hope that no one takes offense.
But if you (a theoretical "you") think 15% to 20% increase over a 5870 with a huge increase in power consumption and heat (possibly cost?) is "good" after 7+++ months of waiting...
Wow... Wow... That's some pretty hardcore devotion to a company. :shocked:
I don't think it's good. However, the overwhelming majority of people are claiming Fermi will be a huge flop and back their points with Charlie articles.
So, relative to the portrait of the current situation drawn by Charlie and his fans, yes, a GTX 480 which adds 20% over a 5870 is good.
The huge increase in power consumption that you mention is about 35 percent. Yeah, it's not in line with its proposed performance increase; but if you take a look at the HD 5970, its TDP is 67% higher than the 5870's with only about a 45% average increase in performance. So, TDP-wise, Fermi makes as much sense as a 5970 does (if the figures are correct, of course).
Not (maybe) just a 15-20% increase..
You get PhysX, you get CUDA, you get Folding@home, you get a DEDICATED driver team, you get a wider range of support in games, there's a WONDERFUL lack of Catalyst Control Centre, you get a card which retains a higher second-hand value...
Need I go on?
Where's this "huge increase in power consumption"? 42A.. my 280 GTX's manual says it requires this.
Heat? I'd imagine it's no worse than my 280 GTX.
Seeing as the next gen is usually twice as fast as the previous, I'll be more than happy with a card which is equal to 280's in SLI..
Obviously, I'm not a fool. If this 480 GTX costs around £450, the 5970 is £499 and the difference between them is huge, then I'd get the 5970.
I'll give you the drivers. That's basically why I got a GTX 260 instead of a 4870. Catalyst isn't as bad as some people make out imo, but yea. Nvidia definitely has more solid drivers.
Cuda, Folding, and (especially) Physx I disagree with though. I don't know one person outside of XS (including myself) who cares even slightly about those things. And granted, some people think they care about Physx, but let's be honest with ourselves, Physx is useless in 99.99% of games.
The rest is debatable/up in the air until we get solid numbers and testing, so I won't argue.
And yea, I know my point about no one caring about CUDA/etc. is debatable too. :p:
They just seem awfully situational to me, most people don't have too much use for those things as far as I know. :shrug:
If someone actually uses cuda, yea, I agree, it's great. But I know I have no use for it.
What's up with the A2? Is this pic even real?
http://www.heise.de/imgs/18/4/8/9/6/...714eca01e.jpeg
Good points. Big assumption here that it will beat it by 20%. In the average case you might be looking at a much smaller difference, if at all.
If nvidia is getting PhysX through to you, it is not getting through to me. Sure, it's something to say you have that others don't, but think about it. ATI could just as easily allow PhysX on their GPUs, but nvidia won't allow it. ATI won't do it now because they want to see it go down; maybe if it picks up later they will license it. It will take a few years for something to emerge as the dominant "physics" API. If there is a lesson we can learn from history, it is that the first to buy raffle tickets aren't always the ones to win the raffle.
CUDA. Ok you have CUDA. I want to play games, but you have CUDA. Games, CUDA, games, CUDA, games, CUDA. How are they related again? Let's just say I hope they have that "15-20%" advantage because they will definitely need it.
I admit, I am not an "average" computer user. I don't see things the way regular users do, and I cannot recall the last time I struggled with CCC. I love all the features in CCC, I love the way it's laid out. I also use nvidia drivers at work; I don't upgrade them as often, but I do use them a bit differently there. At work my primary concern is not gaming: stability, ease of use, multi-display support. I like the features they have for multi-display, but they are not well thought out. Nvidia drivers give more control over display settings (color/contrast/brightness/gamma), but no control for video playback, no deinterlacing options. At least with the release I have now.
I don't use dual GPU so I cannot comment on that, but to be fair I will say no clear advantage to nvidia or ATI in the driver department from my perspective and from my personal usage.
I have no idea where this completely unfounded statement comes from but ok :welcome:
Please do, to me it sounds like you are out of ideas :)
Who are we kidding with power figures? We know the power consumption of the GT200 @ 65nm. How much power reduction do we expect from a simple die shrink of the GT200? Looking at the figures for the 5770 vs the 4890 (40nm vs 55nm), at full load there is a difference of 50 watts. I know they have slight differences in specs (the 5770's memory is a bit faster, more transistors, a smaller bus), but I think that only adds to the case I am trying to make.
Nvidia has simply scaled the GT200 architecture. If they went from 65nm to 55nm, using the info above, that's a less-than-40-watt difference. Now add more RAM, double the number of SPs and more transistors for DX11, and you can easily put the GF100 past the GT200 in power consumption. Using quick calculations, my guess puts it at 60W hotter than the GT200 @ full load. If they went to 40nm it might be half that, around 30W. That leaves it ahead of the 5870 in power consumption.
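For fun, here is that back-of-envelope guess written out. Treat the baseline TDPs as assumptions; they're just the commonly quoted figures, not measurements, and the +60W/+30W deltas are my guesses from above.
Code:
/* Back-of-envelope GF100 load-power guess, following the reasoning above.
   Baseline TDPs are the commonly quoted figures (assumptions, not measurements). */
#include <stdio.h>

int main(void)
{
    const float gtx280_tdp = 236.0f;  /* GT200 @ 65nm, quoted TDP (assumed) */
    const float hd5870_tdp = 188.0f;  /* Cypress @ 40nm, quoted TDP (assumed) */

    /* My guess: doubling the SPs, more RAM and DX11 logic adds ~60W on a
       55nm-class process, roughly half that (~30W) if they went to 40nm. */
    const float gf100_55nm_guess = gtx280_tdp + 60.0f;
    const float gf100_40nm_guess = gtx280_tdp + 30.0f;

    printf("GF100 guess (55nm-style scaling): ~%.0f W\n", gf100_55nm_guess);
    printf("GF100 guess (40nm shrink):        ~%.0f W\n", gf100_40nm_guess);
    printf("HD 5870 for reference:            ~%.0f W\n", hd5870_tdp);
    /* Either way the estimate lands above the 5870's load power. */
    return 0;
}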
Let's hope they improved the idle power consumption as well.
Fair game.
wait so, ATI doesn't have a 'dedicated' driver team???? :confused
To be honest, no idea. The thing is, whether 5870 is more efficient or not, GTX470 has like 200MB more VRAM, so even if it's less efficient it still has some extra headroom.
:lol:
A PC component! :D
Sure they do. He is just making things up.
He renamed it 360/380 to avoid the censorship of the name by Nvidia..
I dunno. If the GTX 480 had come out within 2-3 months of the 5870, I'd say that yeah, it's not a flop. But coming out over 6 months later, and being potentially hotter without major performance increases is definitely disappointing, not "good"
It's funny that people are focusing so much on Cypress that they don't realize those numbers would be a joke even compared to GT200. That's even worse.
http://www.abload.de/img/nv-tesselation-benchma0m5f.jpg
new picture?!
+1 to that... AMD is really not as interested in HD 5000 now as some still believe. Or maybe PR is. But the engineers are looking forward to the next gen. The HD 5870 Eyefinity is almost released; maybe the team which worked on Evergreen will release an HD 5890 in summer (if so, the card is entering the last phase before release now), but HD 5000 is becoming forgotten now. I mean, for them NVIDIA has totally sucked; they are looking forward to their biggest change since R600, so what is NVIDIA with their half-year-late Fermi to them?
As a result, I am afraid AMD will lower prices not because Fermi finally comes to market, but because the current gen is almost halfway through its lifetime. Then some price cut around October/November, and next we have a new line...
That can't be a 470 just by the simple fact that 13000 points can be attained with a 8800GTS 512.
Barely faster than a GTX 260? Nice :)
A dual core at 2.6GHz in 3DMark06 is surely a bottleneck to all that GPU power at 1280x1024 :rolleyes:
I can score higher with my OCed G92 and e8400 4Ghz...
http://www.prnewswire.com/news-relea...-85999182.html
Looks like Just Cause 2 will use CUDA directly for water simulation and post processing. Quote:
Working closely with engineers at NVIDIA, developer Avalanche Studios has incorporated support for NVIDIA CUDA technology which helps to deliver a higher level of visual fidelity within the game's environments. CUDA-enhanced features in Just Cause 2 include incredible in-game effects, with rivers, lakes and oceans beautifully rendered with realistic rising swells, flowing waves and ripples, while advanced photographic Bokeh lens techniques add an additional cinematic quality to the look and feel of the game. Just Cause 2 has also been optimised to take full advantage of NVIDIA 3D Vision technology on compatible hardware, creating an incredibly immersive 3D experience.
http://kotaku.com/5484795/just-cause...oves-on-nvidia
The water looks really good.
@SkItZo
Sorry about that. It was quite amusing when I got that message in Server 2003, though :D
I never even heard of Just Cause 1 :rofl:
Proprietary standards just need to die, quickly.
@the 3DMark-screen: That's hilarious :D I never knew there were ppl that used Dualcores (except in notebooks).
When will that be? After a shrink to 32 nm? That's nothing unheard of in the industry (R600 ;) ).
God I should go to sleep. Five o'clock in the morning in Germany *yawn*
Anybody here notice that Super Mario's head must be made of adamantium to keep bashing those bricks.. at least you'd think he'd use some aspirin instead of all the shrooms *cough* steroids. And what's up with the polka dot sky-high place.. my tingling spidey sense is guessing LSD was involved.
Can somebody here make a Skynet/Fermi poster ۩♪♫☺☻♀♂█▓▒▒░
Oh, then it must certainly be an irrelevant POS if it slipped under your radar.
Yep, if there are open alternatives. Which there aren't. There are millions of CUDA-capable card owners out there who will benefit from these additions and won't care much that it's proprietary. Quote:
Proprietary standards just need to die, quickly.
If it's something advertised by NVIDIA or anything related to them (i.e. CUDA, PhysX, Fermi), it should die. This would normally sound sarcastic, but this is AMDz... I mean, xtremesystems.
The purpose of this thread was to contain the trolling around the forums and to reduce the number of Fermi threads. But now all the Fermi trolling is concentrated into the so-called "trolling party", and in addition, anything related to AMD at all seems to get posted in the news section. From every single 5970 being released (5970 Black Edition, 5970 Sapphire edition, 5970 Ares edition, 5970 haxxors edition), to AMD contests (MSI 58xx, total prize value for three prizes 800 dollars), to someone posting something related to a currency which just happened to be abbreviated AMD, interviews with AMD people (Catalyst Maker x2), blogs, drivers and so on.
It just feels like an AMD forum, which normally isn't bad. But this forum already has an AMD section, and most of this stuff could go in there, or in Tech Talk, or Extreme Competition (which is also for contests). I started feeling sorry for Nvidia a long time ago with all the trolling against them. Sure they rename a lot, but they are not doing anything terribly underhanded like what Intel did against AMD.
Any negative rumor against NV seems to be posted instantly, and it's hard not to feel sorry for them. E.g. things like Charlie articles; somehow even positive news like their recent financial results, or them giving away a game, has been trolled.
NV could cure cancer and it would somehow be trolled.
Anyone notice that nVidia now has "Geforce 300 Series" drivers posted up on their site? (if this has been posted already, please ignore)
NVIDIA has stated numerous times that they are willing to license PhysX to anyone, including AMD/ATI. I'm not sure how "easily" ATI could implement it though.
How are games and CUDA related? PhysX is an API with a HAL that supports CUDA. In addition to C for CUDA, you get C++ and OpenCL support. Devs are slow to adopt these technologies, but it is happening. Everything else being equal*, are you going to go with the card that has support for these extras, or the card that doesn't? I don't see these features as a deal breaker, but they are certainly nice to have.
Batman: AA whetted my appetite for PhysX, but the Supersonic Sled demo is what really sold it to me. It's a feature that can add a ton of realism to games. It frees a developer from having to make a physics engine for their title, and less work for devs = more likely to see it put into a game.
* According to the perf numbers I've seen, things aren't equal. The point of DX11 is a massive improvement to geometric realism through tessellation, and AMD's architecture doesn't hold a candle to NVIDIA's in terms of geometry perf. 2.03x to 6.47x faster than the 5870 in high-geometry workloads (this is more of an abstract figure; there's more to running a game than geometry perf) are the numbers I've seen. I'm eager to see how a big DX11 title performs (sorry ATI, DiRT 2 just wasn't cutting it for me). The beauty of tessellation is that devs can build their geometry for hardware performance that doesn't yet exist and smoothly scale it down to what's out, without a load of extra work (just move the slider). It will be a little while before we see a game that was built around DX11 tessellation, rather than just having it slapped into the game.
Anyone wanna loan me some ATI cards to bench against on the 26th? :P
Amorphous
NVIDIA is the biggest backer of OpenCL, which can certainly be used to run PhysX-like physics processing.
It takes a LONG time to develop and get out a set of open standards. NVIDIA didn't want to wait to get a good set of physics tools out to devs and into games. With the development of CUDA, NVIDIA acquired Ageia and ported Ageia PhysX into CUDA (in only a few days, I'm told).
At the same time, NVIDIA chairs Khronos, the guys that do, most notably, OpenGL, OpenCL and other open-source standards. In short, NVIDIA doesn't care if your physics are PhysX or OpenCL based, either is good for them, they both help sell more graphics cards. The 100M+ NVIDIA GPUs that support CUDA also support OpenCL.
The current GeForce 300 GPUs you'll see around don't use the Fermi architecture that the GF100-based GTX 470 and GTX 480 will. Mid-range solutions that use the Fermi arch are likely to follow a few months after the March 26th launch of the GTX 470 and 480.
Here are more rumors and talk of the mystical Firmy from Fudzilla:
http://www.fudzilla.com/content/view/17922/1/
http://www.fudzilla.com/content/view/17921/1/
To summarize:
- GTX 480: around 300w (HD 5970 level); GTX 470: 220w.
- GTX 480 "should be faster than HD 5870 in some cases". GTX 470 (I hope it's not the GTX 480) will be 20-30% faster than GTX 285.
- Partners still don't have final design yet. They only know the dimensions and cooling design currently.
BTW, "launch" date is still March 26th (3 weeks), I hope.
OK... Fermi is definitely smaller than even the G200-B3 chip (GTX 285 55nm etc.)
The GTX 285 IHS is 1.75in flat, or 4.45cm.
http://www.heise.de/imgs/18/4/8/9/6/...714eca01e.jpeg
http://i465.photobucket.com/albums/r...r/IMG_0563.jpg
Correction: the IHS is smaller, likely due to the smaller memory bus. Quote:
OK... Fermi is definitely smaller than even the G200-B3 chip (GTX 285 55nm etc.)
The GTX 285 IHS is 1.75in flat, or 4.45cm.
Unless you open up the IHS or x-ray it, there's no telling how big the die is.
Amorphous, the biggest problem is not how easy it would be for ATI to implement PhysX, but the fact of supporting and helping to standardize a competitor's proprietary API, be it PhysX, CUDA, or whatever. I suppose you realize that, don't you? Of course NVIDIA would love AMD/ATI embracing PhysX/CUDA (the sw API), because being compatible only with NVIDIA hw is one of the reasons its use is not widely supported by mainstream sw developers (because of the compatibility of the features they would be investing resources into). But I think you understand very well that if AMD are anything other than simply dumb, they have to do everything in their power to keep those proprietary competitor APIs from becoming widespread, and supporting them by making their hw compatible is not the best way to do that.
NVIDIA will always have the option of doing whatever they want with their proprietary APIs and taking competitive advantage from them. Look at the case of EAX and Creative Labs. AMD won't support something like that unless they have absolutely no other choice.
And in the current situation of hw-vendor-specific APIs, I think CUDA/PhysX don't have a real place in the mainstream market (and for PhysX that means anywhere). Of course, the non-mainstream HPC market (scientific research, statistical analysis, data mining, some applications in engineering, and other kinds of data crunching and complex-systems simulation) is a completely different thing, and on that ground I think CUDA (and, generalizing, NVIDIA) is becoming THE big player for the moment (let's see how much of an advantage they can build before Intel is able to compete in that market, and to what extent they can). HW compatibility is important for a mainstream product from the POV of the cost/results of implementing a given feature, and being vendor-specific doesn't help there. Even less so when you consider that there are other, wider solutions (OpenCL, DirectCompute) which in the short/medium term will be used by equivalent middleware (Havok and Bullet, for example, in the case of real-time physics libraries).
That said, I'm not really convinced about the usefulness of GPGPU computing for videogames on PCs. For example, you mentioned B:AA. In that game, the physics effects that use CUDA can be done perfectly well on a current quad-core CPU (maybe on one with fewer cores too) with a multithreaded physics library. Most of them can be faked without much of a difference so as to run on even less hw. In order to take real advantage of the computing power of a GPU you would have to aim for effects orders of magnitude more complex, and that would be too hard a hit on the GPU computation budget reserved for graphics. I've been taking a look at the rendering pipeline in CryEngine 3, for example, and there's simply no place where you could fit any non-graphics workload on the GPU. And that's after discarding a lot of visual effects that simply don't fit either, and taking the rougher but more performant approaches for some others. On the other hand, you hardly have anything intensive for the CPU to do apart from physics and AI computations, for which it is enough (except, I insist, if you aim to do much more complex computations, which would make a huge impact on the GPU budget left for graphics).
Maybe the situation is different on videoconsoles, though, or maybe it will change in the near (or far) future. Maybe I'm simply wrong; this is nothing more than a very light, superficial analysis of mine. Anyway, if GPGPU computing is going to become widespread in videogames, it will probably happen when there's a sw solution available which is not dependent on a given hw vendor (except maybe for titles exclusive to a videoconsole with that hw vendor's hardware, of course).
see
http://www.xtremesystems.org/forums/...postcount=1620
basically the die size has to be smaller than the 55nm G200-B3's
naked 470 and 480
http://i789.photobucket.com/albums/y...razno/th_2.jpg
http://i789.photobucket.com/albums/y...razno/th_1.jpg
http://i789.photobucket.com/albums/y...razno/th_3.jpg
http://i789.photobucket.com/albums/y...razno/th_6.jpg
http://i789.photobucket.com/albums/y...razno/th_4.jpg
http://i789.photobucket.com/albums/y...razno/th_5.jpg
Thank you for quoting yourself without any explanation :)
As said by trinibwoy, the IHS size is the size of the GPU package and does not necessarily reflect the die size.
GT200 needs a big package because of its 512-bit memory bus. GF100 likely can get away with a smaller package with a 25% reduction in bus width.
heise.de (in German) has new information about the GTX 470:
shader@1255 MHz
RAM@1600 MHz
Vantage-X 7511 (compare GTX285/HD5850/HD5870: 6002/6430/8730)
Vantage-P 17156 (compare HD5850/HD5870: 14300/17303)
Unigine benchmark
4xAA [fps]: 29/22/27 (GTX470/HD5850/HD5870)
8xAA [fps]: 20/19/23 (GTX470/HD5850/HD5870)
http://www.heise.de/newsticker/meldu...lt-946411.html
Just to clear things up about CUDA + Just Cause 2.
OpenCL and DX Compute Shaders are accelerated THROUGH the CUDA architecture standards. As such, the effects in Just Cause 2 will likely be accelerated on AMD hardware through their Stream architecture as well, if they use OpenCL or Compute Shaders.
Everyone has to remember that CUDA is an all-encompassing term for the architecture that accelerates all types of APIs on NVIDIA GPUs (OpenCL, PhysX, Folding, Compute Shaders, ray tracing, Adobe Photoshop GPU acceleration, Flash acceleration, etc., etc.). It WAS proprietary but is now acting as a type of umbrella term for everything GPU accelerated, much like Stream on AMD products.
What's with the pictures with white-outs and black-outs? That makes no sense whatsoever. Remember the leaked HD5 series pics? Were any of them whited out? Nope. So why the secrecy? Maybe because this isn't what it will actually be? I just don't get it.
Maybe some manufacturer's insignia and the MB maker don't want to be blamed for leakage...?
Jeeez, resize your pictures lads, it's pissing me off to be scrolling right, left, up and down all the time in order to see anything. Why are there no rules about image size?
You really don't have to go too far in this thread before you can find some posts to delete. It's a bit of a challenge really, given how quickly this thread grows.
I understand that people are frustrated with Fermi's late launch and some of nVidia's other practices. But let's check our frustrations at the door when we come to this thread because it is about Fermi news, info and updates... that's it.
I don't think that's possible. Unless the developer is using PhysX (which needs to be used in conjunction with another API to accurately model hair, water, etc.), there isn't anything out there other than DC and OpenCL right now that can use the GPU for this type of acceleration. If it is a proprietary engine that can do this, we're talking major $$$$$ invested on the part of the developer.
In order to accurately add water to a scene you need tessellation and selective geometry shading on the rendering side, plus DirectCompute for animation and physics if the water is interacting with anything. Unless the developer found a way of using DirectCompute for the animations, I can't see how this would be "exclusive to NVIDIA". Basically, they may use this to show off the GF100's power in DC or something along those lines.
What do you mean? Of course you can use CUDA directly to implement these effects. There are hooks in CUDA for interoperation with OpenGL and DX buffers. Remember, while CUDA is the wider compute architecture it also refers to the "C for CUDA" language on which DC and OpenCL are very much based/similar to.
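For anyone curious what those hooks actually look like, here's a minimal sketch of CUDA/OpenGL buffer interop. This is a generic example of mine, not how Avalanche actually does its water; the GL context and the vertex buffer object are assumed to exist already, and in real code you would register the buffer once at startup rather than every frame.
Code:
// Minimal sketch of CUDA<->OpenGL buffer interop.
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Toy kernel: animate a grid of vertices with a sine wave (a stand-in for "water").
__global__ void waveKernel(float4* pos, unsigned int width, unsigned int height, float t)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float u = x / (float)width;
    float v = y / (float)height;
    float w = sinf(u * 10.0f + t) * cosf(v * 10.0f + t) * 0.1f;
    pos[y * width + x] = make_float4(u, w, v, 1.0f);
}

// 'vbo' is an already-created OpenGL vertex buffer holding width*height float4s.
void updateWaterVBO(unsigned int vbo, unsigned int width, unsigned int height, float t)
{
    cudaGraphicsResource* res = 0;
    cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsMapFlagsWriteDiscard);

    float4* dptr = 0;
    size_t numBytes = 0;
    cudaGraphicsMapResources(1, &res, 0);               // borrow the buffer from GL
    cudaGraphicsResourceGetMappedPointer((void**)&dptr, &numBytes, res);

    dim3 block(16, 16);
    dim3 grid((width + 15) / 16, (height + 15) / 16);
    waveKernel<<<grid, block>>>(dptr, width, height, t);

    cudaGraphicsUnmapResources(1, &res, 0);             // hand it back for rendering
    cudaGraphicsUnregisterResource(res);
}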
I don't think that, for example C/C++ for CUDA are any lower level than OpenCL so... why not? :shrug:
Indeed, what I have read is that programming something with OpenCL might be less straightforward than with C for CUDA because of a lower-level (and harder) setup process.
But don't take my word on any of this, I have no clue about GPGPU programming.
Thank you sir :up: People get caught up in marketing terms like always...
At this point it looks like I may just have to go CrossFire if I want a more cost-effective performance boost (as much as multi-GPU setups are facepalm-inducing affairs). I can see the 480 being nice at high res/IQ due to its VRAM and bandwidth advantage over Cypress (unless you use 8x+ AA at 1920x1200, and more often 2560x1600, 1GB of VRAM should remain adequate at 1920x1200 4x well into the future, e.g. HD6000), but beyond that all these rumors so far are merely :shrug: (WTB facepalm smiley). If the 480 truly creeps up on 300 watts, that is quite strange. Would another shader cluster, some extra RAM and similar (perhaps lower?) clocks really result in 80 more watts of power usage (max) over the 470? For comparison's sake, what was the max board power of the original GTX 260 and 280?
I wonder why the GTX 480 has a two pin fan connector and GTX 470 has a 4 pin one?
No, you're right. The runtime C for CUDA interface is higher level than the OpenCL API, which is more similar to CUDA's driver interface. Nvidia has absolutely zero motivation to use OpenCL in any situation where CUDA would suffice, because that would effectively give AMD a free invite to the party. Although I don't think OpenCL is yet in a stable enough state to ship with a commercial game, so it's a moot point anyway.
Didn't I just say that in plain English? ;)
CUDA allows for interoperability with DC, etc., but it stands to reason that the same interoperability can be created (though not through CUDA) for ATI's Stream. Maybe I just didn't explain what I was saying well.
Allow me to post a slide directly from NVIDIA:
http://images.hardwarecanucks.com/im...0/GF100-27.jpg
I've been programming all my life, and have been looking into this stuff lately.
The CUDA architecture enables developers to leverage the parallel processing power of NVIDIA GPUs. CUDA enables this via standard APIs such as OpenCL and DirectCompute, and high-level programming languages such as C/C++, Fortran, Java, Python, and the Microsoft .NET Framework.
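To make that concrete, here is about the smallest possible C for CUDA example (a generic sketch of mine, nothing to do with any shipping app; the kernel name and sizes are just for illustration):
Code:
// A trivial "C for CUDA" example, just to show what the language layer looks like.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc((void**)&x, n * sizeof(float));
    cudaMalloc((void**)&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // launch a grid of 256-thread blocks
    cudaDeviceSynchronize();

    printf("done: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(x);
    cudaFree(y);
    return 0;
}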
Here's my general feeling about PhysX.
In about 1 year's time, many people will be wondering this:
"Hmm, with 4 cores a bit busy now with this game, I wonder what it would be like if I could use my other 4 Bulldozer cores to do something useful like run physics too. Too bad I had to waste $150 on this graphics card just for that."
Surprisingly clear PCB photos of the
GTX470
GTX480
HD5870 for comparison
comments on razno's Fermi PCB photos:
470:
- 9+2 blue, 3 red capacitors; by contrast the 5870 PCB only has 5 med caps.
- 4 phases for GPU (big ass inductor + 3 FETs each)
- Huge DRAM chip looking inductor + 2 caps and 3 FETs for GDDR5 Vddq
- no visible IC for Vreg on right side of board, 1 med and 3 small DIP IC on left.
- silk screen for 3 extra caps on right, and 2 small IC on left
480:
- 6 phase power? 5870 only had 4!? (same 3 FET for each phase setup)
- 4870's 2 big caps and huge inductor for GDDR5, replaced by tiny caps and inductors??
- 6 blue, and 8 big red caps
- 6 extra (tiny) chips for each phase (with silly putty still on)
- to the right of vent holes PCB looks barren
- 1 med and 2 small IC on left. Guessing big Vreg chip under white square.
- silk screen for 4 small and 1 large IC on left.
- silk screen for an extra 3rd 6-pin power connector !!
- smaller vent holes than the 470! and the 470 had 3 mounting holes around them (fan screws to PCB)??
Interesting observations compared to AMD:
5870 uses 4 phases, 5850 uses 3 phases. Both nVidia and AMD abandoned 1 PCB for all GPU long ago.
Despite 2x the SPs, the 5850 has ~ the same power as the 4850/4890 - 100-120W. Very unlikely for the GTX470.
GTX470 has so many capacitors on right, no room for full heat sink.
Obvious absence of nVidia display chip.. and DVI+DVI+HDMI -> no eyeFinity equivalent?
Thanks for the video!
Since I'm nice :P I took the following screenshots for your digestion:
http://i.imgur.com/9ROAO.jpg
It looks like:
- Charlie's and others' rumor that the GTX480 kicks the HD5870's arse in the Unigine Heaven benchmark is true.
- 3D surround is likely a software response to ATI's eyefinity. They plugged the 3rd monitor into the second GTX 480.
It's nice to finally see something REAL from NVIDIA!
All this proves one theory of mine: Nvidia believes in using tessellation as a distinctive DX11 feature against their rivals. Without heavy tessellation (and tessellation in Heaven is very heavy) there is a chance that the 5870 2GB comes close to the GTX 480's performance.
I think Nvidia will make a huge deal about tessellation, and most TWIMTBP games will include heavy tessellation effects that ATi cards just can't cope with.
Yeah, I strongly believe this too. Nvidia made Fermi to handle tessellation better against ATI.
Just read this off the GameTrailers forums: "It's something like this: ATI cards have a dedicated unit called a tessellator that can only perform tessellation-related functions, and since it's a dedicated unit it has fixed performance. Nvidia has a flexible solution that can allocate performance to where it's needed; if tessellation is demanding it will allocate resources toward it accordingly, if not, more resources will be dedicated to increasing performance for shaders etc."
So basically it's faster because it can allocate resources dynamically. Now, what I would like to see is tessellation being disabled, and then see the performance. It's probably a safe bet to say the performance will be very close, but I don't know. I'm speculating that the 480 is going to be faster than the 5870 anyway in regular games; the question is how much. Could be 20%, could be on par; these are all with common games. Don't know about the 470, could be slower, or it could catch up in some games. Though, what worries me is seeing the 5870 struggle with tessellation enabled. They're going to have to come up with some kind of solution in the future when more games come out this year that use tessellation. The only thing they have right now is the dual-GPU cards that could beat it, but at the same time they are dual GPU and there's the micro-stuttering issue, and the fact that 60fps on a single GPU doesn't equal 60fps on a dual GPU.
Don't know about the roadmap, though; the only thing I can think of is Northern Islands, but that's still a ways away. ATI could really be stuck in a rut here with tessellation until new hardware comes out.
What happens when tessellation only has an ON and OFF option, and the ON option enables heavy tessellation that only Nvidia h/w can handle, while ATi's 5870 has a difficult time with it?
What Nvidia will have is a legit excuse for why games perform better on Nvidia hardware than on ATi's. Just like Nvidia PhysX was the keyword for 2009, tessellation may well be Nvidia's keyword for 2010!!
Lol, guys, look at this screenshot. Nvidia is magically using version 1.1 of Heaven. This could bring an improvement to the FPS numbers in Heaven. We don't know whether they used 1.1 or 1.0 for their HD5870 benchmarks... Also, the HD5870 seems to catch up at higher IQ settings. I would love to see a benchmark with at least 8x AF and 8x AA.
I'd like to see the benchmark with AA and AF maxed out too.
Well I just saw on nvidia driver site they already have 300 series on the list for driver downloads. So it must be getting close.......errrr
Except this is not the 300 series
AvP maybe, haven't played it. DiRT 2 and STALKER? If 1 pixel changes because of tessellation, I don't call it tessellation.
What I call tessellation is the awesome rocky ground in the Unigine benchmark. I don't care whether the left pocket button of an enemy is rendered with tessellation or not.