Sup? :cool:
http://img218.imageshack.us/img218/9058/nvidia.png
Heaven performance numbers and benchmarks for Fermi should not even be looked at as legit numbers to compare with. As I and a few others have stated, there is no dedicated hardware for tessellation in Fermi: Nvidia uses its shaders, the "PolyMorph" engine, to accomplish tessellation. Just like Charlie pointed out, Heaven was a pure tessellation demo... no gaming involved. And if there had been gaming involved, Fermi would have taken a massive hit because it was using its shaders for tessellation. Just as I suspected, Fermi can't do both: have awesome tessellation performance and awesome GAME performance at the same time.
Quote:
On that synthetic benchmark, the numbers were more than twice as fast as the Cypress HD5870, and will likely beat a dual chip Hemlock HD5970. The sources said that this lead was most definitely not reflected in any game or test they ran, it was only in tessellation limited situations where the shaders don't need to be used for 'real work'.
Heaven uses tessellation in a way that games can not, Heaven can utilize far more shaders for tessellation than a normal game can, they have to use them for, well, the game itself. The performance of Heaven on GTX480 was not reflected in any games tested by our sources, DX9, 10, 10.1 or 11.
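To put rough numbers on that argument, here's a toy back-of-the-envelope model (Python, every number in it invented purely for illustration) of a unified-shader GPU splitting a single ALU pool between tessellation work and ordinary game shading:
Code:
# Toy model: on a unified-shader GPU, cycles spent on tessellation
# (hull/domain shading) come out of the same ALU budget that ordinary
# vertex/pixel shading needs. All numbers here are hypothetical.

SHADER_OPS_PER_MS = 100.0  # made-up ALU throughput

def frame_time_ms(shading_ops, tess_ops):
    """Frame time when both workloads share one shader ALU pool."""
    return (shading_ops + tess_ops) / SHADER_OPS_PER_MS

# Heaven-style synthetic: barely any game shading, tons of tessellation.
# Real game: heavy shading load, with the same tessellation on top.
cases = {
    "synthetic, tess on": frame_time_ms(shading_ops=200, tess_ops=1200),
    "game, tess off": frame_time_ms(shading_ops=1400, tess_ops=0),
    "game, tess on": frame_time_ms(shading_ops=1400, tess_ops=1200),
}

for label, t in cases.items():
    print(f"{label:20s} {t:5.1f} ms/frame ({1000.0 / t:5.1f} fps)")
The point: a chip can post huge numbers on a tessellation-only benchmark and still lose a big chunk of its frame rate the moment a real game's shading load competes for the same ALUs.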
If that's true, then Fermi is a failure like R600 and its awesome shader power.
It's good to hear someone else is fed up with the top-end price gouging that goes on :up:
My last bunch of cards (gaming cards at least)?
Ti 4400 -> 9800 Pro 128mb -> 6800 GT -> 8800 GTS 640mb
Notice a pattern? :ROTF:
Screw paying an extra $200 for 5% to 10% more FPS (if that). It's just not worth it; I'd rather have bang/buck while keeping great performance than just being able to wave my e-peen around for the next 6-10 months.
Vantage scores are nice to look at, but I don't think we can read too much into them. No mention of whether PhysX was on or off is one issue. And of course, different architectures perform differently in synthetics.
The rumors do seem to point to the 470 being somewhere between a 5850 and 5870, and the 480 being faster than the 5870. Smells like a repeat of GTX 260/280 vs. 4850/4870, only this time Nvidia is 6 months late.
R600 had the AA on shader issue... if this is true, and Fermi can't do both, it will be very R600-esque indeed!
8800GTS 640? Your epeen is really really small...you should work on that. I plan on selling my left kidney for the 480 and its 30% thingy so I can out e-peen my nerd friends. Btw, where are the games that can actually put these powers to use? It's like we've been paying up the arse for cards that hardly break a sweat running these games...
Same here. Just looking for confirmation. :)
That's about 80% faster than GTX 285 so within the realm of possibility. I'm hoping that Fermi addresses whatever it is that prevents Cypress from achieving its theoretical performance.
You should inform Nvidia!!! They still think the polymorph engine consists of fixed function hardware that communicates back and forth with the shader ALUs :eek:
Quote:
The PolyMorph Engine has five stages: Vertex Fetch, Tessellation, Viewport Transform, Attribute Setup, and Stream Output. Results calculated in each stage are passed to an SM. The SM executes the game’s shader, returning the results to the next stage in the PolyMorph Engine.
That's what I've been saying too.
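For reference, here's a schematic sketch (Python; the stage names are taken from the quote above, everything else is invented for illustration) of the flow the whitepaper describes: fixed-function stages handing results to an SM and back, rather than tessellation running purely on the shaders:
Code:
# Schematic of the PolyMorph flow described in the whitepaper quote:
# each fixed-function stage hands its result to an SM, which runs the
# game's shader and feeds the next stage. Stage names are from the
# quote; the data flow here is a purely illustrative stand-in.

POLYMORPH_STAGES = ["Vertex Fetch", "Tessellation", "Viewport Transform",
                    "Attribute Setup", "Stream Output"]

def sm_execute(shader_name, data):
    # Stand-in for an SM executing the game's shader on stage output.
    return f"{shader_name}({data})"

def polymorph_pass(data):
    for stage in POLYMORPH_STAGES:
        data = f"{stage}[{data}]"               # fixed-function hardware
        data = sm_execute("game_shader", data)  # back and forth with an SM
    return data

print(polymorph_pass("patch"))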
2006: 8800GTX and Vista. 2007: Crysis and 8800GT.
What "revolutionary" games have we had since then?
- Bioshock - cool art, lots of shiny specular. Not too demanding.
- Bioshock 2 - same old... no problem for an old GF8/GF9.
- Batman: AA - (apart from PhysX) the same old shiny. Need a 2nd card for PhysX.
- FC2 - less demanding than Crysis - a 4870 PWNs it.
- MW2 - an 8800GT is OK for max settings.
- FO3 - [s]Nothing graphically spectacular. 9600GT is OK for max settings.[/s] 4870 1GB for 4xAA.
- DMC4 - if colored transparent textures are your thing. Not very demanding; a 9600GT is OK for max settings.
- RE5 - great game, but you don't even need an 8800GT.
- COH - old RTS, could even play it on an IGP.
- Cryostasis - OK, that's the #1 demanding game. But the effects are subtle in game.
- Stalker - the newer DX10/11 versions are extremely demanding... but the effects don't make the game much better at all. #2.
- Avatar - besides all the glitches, poorly textured. Stick to a 9600GT.
- Lost Planet - any DX10 card chews it up.
- Call of Juarez - hard-to-notice DX10 effects bring even a 5870 to its knees. #3.
- HAWX - cool game. Depending on AA level you might need that 1GB 4870.
- Wolfenstein - not even DX10.
- Dead Space - not even DX10, and not very demanding.
- Mass Effect 2? - not even DX10. Yet another low-res-texture Xbox 360 port. Fire up your 8800GT.
- BattleForge - it's DX11, but it's an RTS so you don't need 60fps. Still, that's #4.
- L4D2 - easy 100fps with an 8800GT at max settings.
- AVP - lame low-res textures, yet even a 5870 struggles. That's #5.
- AC2 - supposedly DX9, just like the Xbox 360 version... so yeah, all that $$ you spent on that DX11 card was wasted...
There you go: 20 games since Crysis, and only 5 (most of them DX11) where you'll have trouble getting 40-50fps avg at 19x12 with 4xAA on a 3-year-old GF8/GF9:
Cryostasis
Stalker DX11 - unplayable even on 5870
Call of Juarez (old game)
BattleForge - unplayable on Nvidia and anything below a 58xx
AVP
Conclusion: you either have great action games like FC2, Batman, RE5, HAWX where your trusty old GF8/GF9 powers through, new Xbox 360 ports which don't require anything special, or the few gimmicky DX11 games that run fine once you turn settings down a notch.
Besides 3DMark and Vantage records, what's the point of a $600+ Fermi if there are no games to showcase its power?
Yeah, if you play at 800x600. My GTX 285 can't run FO3 or half those games without some lag. I prefer no lag, and for that you need a new GPU.
Almost all the games above could probably run better if they were programmed better. Consoles do so much with their inferior hardware. Something like an 8800GT and above should be able to run things a lot better than they do at the moment.
A lot of it has to do with Windows still running in the background. Sometimes I miss DOS.
That's why ATI and Eyefinity are so great... there's nothing that really needs that much GPU horsepower... at least until you have 3 screens running on it.
Yeah, I must be doing something wrong, because I don't see this abundance of extra performance that people keep talking about. My GTX 285 gets slapped around @ 2560x1600 in almost every game I play. That's why I'll be in trouble if Fermi only matches Cypress, because that won't be good enough.
Yes, but last I checked Eyefinity is an AMD-only thing.
The question was: what super uber cool mesmerizing game technology does Nvidia have to make people want to go and spend $600?
I know, let me guess, 64xAA!!
Maybe I'm being overly cynical, but I don't think most people spend half a grand just to get 5fps/10% more. Regardless of whatever card you use, it's still the same poor textures, lame blurry water effects and super squeaky shiny floors. :rolleyes:
Hoping Rage and/or Crysis2 change that.
Desensitization to 3D. Same with movies and SFX. 20 years ago audiences were drooling over 15 seconds of CGI in The Abyss. Nowadays, with heavy use of CGI especially in commercials, I don't even notice. It's not cool anymore.
But Nvidia and AMD to the rescue with tessellation, right? Perhaps the recent examples weren't the best, because it looks too subtle compared to properly done bump/parallax mapping. Let's face the music: consoles are killing PC gaming.
IMHO the death of console-only games, and of the Xbox 360, should fix things ;)
I'm sorry but the Xbox 360 will die only after the next generation is released:p:
Deimos, as much as I agree with you, it doesn't change the fact that my computer doesn't run these crappy-looking games at the performance it looks like it should, which is why I'm willing to spend money. Those terrible textures are still better than the solid-color textures they use on medium and low, and they're the only stuff you're going to get for the next couple of years, until the next-gen consoles come out and then, in 5 more years, start to limit PC gaming again.
@ Deimos... Nvidia will have an Eyefinity-like setup (however it needs SLI for it), and I thought you were just talking about expensive video cards in general, not just Nvidia... my fault :)
Oh, definitely. Charlie is a much better source on Nvidia's architecture than Nvidia themselves. Silly me.
Did you stop to ask yourself if what he's saying even makes sense? Do you even understand how tessellation works? Those are rhetorical questions by the way.
Excuse me for believing someone who has had a decent track record on Fermi information. If the same information were written in a more traditional manner and the author's name wasn't Charlie, you'd be all over it. He's been pretty right so far about Fermi, its delays and its problems, regardless of his slant. I don't believe everything he writes like you seem to claim. I just knew since day one that Fermi wasn't coming out in 2009 like so many on this forum claimed, and rather than just saying "it's going to be late because it's bad" he gave some pretty detailed and specific evidence. I wouldn't call it facts, because no one knows for sure. The point is, I don't have a bias like you: I don't think everything he writes is automatically false. If it makes sense, then I'm more inclined to believe it. I'd rather not get caught up in this pointless argument. As for tessellation, the reasons given make sense.
http://www.xbitlabs.com/articles/vid...3_7.html#sect1
Max settings, 19x10 4xAA, 1-year-old drivers:
__________avg / min
9800GTX+   31.7 / 21
GTX260     39.3 / 29  <- definitely playable, most of the game is not as demanding
4870 1GB   50.3 / 38
And you have a much faster GTX285.. which according to this review:
http://www.xbitlabs.com/articles/vid...5_9.html#sect1
got 60fps avg (42 min) @ 2560x1600 4xAA!! What more do you want?
I'm not here to stop you or discourage you from getting a Fermi, or DX11 card.
IMHO, when the almighty 5870 gets 21fps in Stalker: COP (DX11), and 40 @ 16x10, what's the point of the feature if you can't use it? And BattleForge requires an HD58xx for max DX11 settings. And how do these make the games better?
So like I said before: you either get 50-100fps on an old video card even with the newest Xbox 360 ports, or crazy DX11 games with a "Crysis" feature that drops fps in half to the 20s with little (read: nothing) to show for it.
I'm surprised more folks don't agree with me. :shrug:
Yep, that's very true... if I had to choose now between a 250 and a 260, I think I'd get the 250 because it runs cooler, uses less power, is notably cheaper, yet offers 90% of the performance of a 260, and most importantly, it's fast enough in almost any game out there, even at very high res and very good IQ.
Because it affects the prices of the cards I'm interested in? A LOT?
Huh? I don't get what you mean...
Note how the 480 score is always the same percentage faster than the 285 SLI... those numbers are calculated, not measured ;)
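A quick way to sanity-check that, btw (Python; the scores below are placeholders, not the actual numbers from the graph):
Code:
# If the GTX 480 leads GTX 285 SLI by the exact same percentage in every
# benchmark, the numbers were probably derived from one assumed scaling
# factor rather than measured per game. Scores below are placeholders.

gtx285_sli = {"game A": 50.0, "game B": 72.0, "game C": 41.0}
gtx480 = {"game A": 62.5, "game B": 90.0, "game C": 51.25}

ratios = {g: gtx480[g] / gtx285_sli[g] for g in gtx285_sli}
print(ratios)  # every entry is 1.25

# Real measurements vary from game to game; an identical ratio across
# the board is a strong hint the column was computed, not benchmarked.
if len({round(r, 3) for r in ratios.values()}) == 1:
    print("constant scaling -> looks calculated, not measured")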
in one benchmark in that graph... :rolleyes:
geez man, do you ever hear yourself talk? :stick:
I've used the following cards, not sure if I got the order right :lol:
TI4200 (good price perf)
9500np@9700pro (best price perf I've had so far)
x850 XTPE (terrible price perf)
x850XT Xfire (terrible price perf)
6600gt (bad price perf and too slow)
x1600pro (good price perf but too slow)
X1600 pro Xfire (bad price perf)
x850
1950pro (good price perf)
3850 (good price perf, a bit slow)
3850 Xfire (ok price perf)
3870 Xfire (ok price perf)
250 (very good price perf)
250SLI (good price perf)
260 (good price perf)
260SLI (bad price perf, too fast :shrug: )
I think 4850s were a steal a while ago, and might still be; didn't check the prices. Same for the 4830s while they lasted :D
Oh, and the 260s for $180, do you guys remember that? That was an awesome deal as well! Now they are back up to $230+ :rolleyes:
5750s sound good too...
I was hoping for more from the 5830; I heard 1280 shaders, but now it seems it's only 1120... I still think it's going to be the best price/perf card for a couple of months though. It's a cut-down 5850, but looking at the specs I see it as a 5770 on steroids: almost 50% more SPs and full 256-bit GDDR5 for $250... if anybody asked me what card to upgrade to, I'd recommend that one.
I think that's not what he meant: Fermi CAN do both at the same time, it's just that games will need more shader perf for rendering than Heaven does, which means they won't be able to use as much shader power for tessellation.
That's how I understood it, at least...
And it makes sense; essentially it says that Heaven is a very unrealistic scenario... it's not giving people an idea of actual in-game tessellation perf but is more of a worst-case scenario: what IF a game went totally nuts on tessellation? Since I don't expect Nvidia to get close to even 50% market share of DX11 or even DX10.1 cards (which are tessellation enabled), I don't think game devs will go crazy on tessellation...
very nice post! :toast:
very very nice, lots of good info in there :)
And yes, totally agree... why do you think Nvidia pushes 3D so much? ;)
Because you need at least 50% more graphics horsepower for ANY game you play in 3D, if not double the horsepower (stereo renders every frame twice, once per eye) :D
I'm not saying 3D sucks! I love it! If it works right...
But the reason Nvidia pushes it is mostly because it drives hardware requirements, and only secondarily because it looks cool... that's what I believe at least :D
huh?
might wanna run a virus scan on your pc m8 :D
Maaaaybe that's because 99% of us don't play at 2560x1600? :D
2560x1600 isn't even listed in the Steam hardware survey :D
http://store.steampowered.com/hwsurvey/
And the highest res listed, 1920x1200, is used by 5%, and 1920x1080 by 7%...
Well, one or two years after that even, I'd say. So if the next Xbox comes out NOW, the 360 will still be around for at least a year and will STILL drag hardware requirements for games down for at least 6 months :S
Does that sound like he thinks it sucks at tessellation?
Quote:
It is far too math DP FP heavy to be a good graphics chip, but roping shaders into doing tessellation is the one place where there is synergy.
He says that games won't use tessellation as much, and all that tessellation power isn't needed...
No, he's talking about 470 vs. 480 perf...
Bill suggested the 480 might be 40% faster than a 470...
He was only talking about "what if" though...
Anyway, looking at 448 vs. 512 shaders, I highly doubt it'll be a 40% difference between the two, even if they cut down memory bandwidth... they'd have to cut down a LOT of bandwidth to make it 40% slower... and why would they? They need the 470 to be fast. (512/448 = 1.14, so the full part only has ~14% more shaders to begin with.)
I'd say the difference between the 470 and 480 will only be 15%, maybe even less...
At least at the resolutions that matter... and Nvidia might actually use that as an excuse for bad 512-core availability, saying it's only a little faster so the 512-core card isn't needed...