Yeah, this has drifted quite a bit off the rails.. :(
Stop all the ATI/NV fanboy crap, guys, the ban gun is being loaded.
Not a very good comparison and not a very tasteful joke if you ask me...
I don't really understand why the word "cheating" is being used before enough testing has been done.
Changing the exe doesn't change anything, and it's not even clear whether this bug improves performance.
At this point it's a bit early to compare this to the things Nvidia did in the past, in my opinion.
Wait, do we have any real confirmation that this IS increasing FPS? Did anyone do a follow-up? Screenshots are nice, but I see no FPS numbers... or old vs. new driver comparisons.
MM already had a range
http://img682.imageshack.us/img682/1...emanbanner.png
Two things on this: 1) it's a glitch, not a cheat, since it didn't make the game run faster. Why would they bother? I don't think anyone makes a buying decision on, or even benches, Crysis anymore, and if they were going to cheat they would have done it somewhere better (still bad that this is messed up in the game). And 2) who cares about Crysis? It's a poorly coded three-year-old game.
I do realize I'm more of an ATI fan, since I dislike NV, but I've never found Crysis to be relevant to which card maker is better.
10.3 gave me a performance boost and I don't notice any IQ lowering; that's why I decided to take those screenshots. I can take more if that helps sort this mess out.
IMO I don't think it's cheating. ATI have managed to get the drivers to mix bilinear and trilinear texture filtering on the fly for improved performance. In some console games the video might pop in and out of V-sync or AA for better performance, and it's not noticeable unless you really look for it.
Bi/trilinear filtering has a massive performance advantage over anisotropic, so if ATI is dropping back to bi/trilinear there will be a performance increase. I'm still very curious about all this talk of angle-independent anisotropic filtering, since the view angle is what determines the line of anisotropy and therefore the sampling region; bi/trilinear is angle independent because its sampling region is always uniform.
Everywhere you look you see that one screenshot of the round rings, and everyone uses it as proof of how good ATI's filtering implementation is. Unfortunately the major problems with texture filtering, which a lot of the posters above me don't get, occur during motion.
Ever notice how a texture suddenly "pops" from a low-res to a sharper version? That's bilinear filtering. Trilinear smooths this out by interpolating between the mip levels so the transition is gradual, and anisotropic is a special case of trilinear, created to reduce the aliasing of obliquely viewed textured surfaces.
Perhaps ATI figured that no one would notice their "feature" and enabled it to gain extra performance, but just like everyone else in this thread I want some proof:
* compare the 10.3s to the 9.12s and see if this "feature" is present
* compare filtering across games and APIs, might be a DX10 only "feature"
Someone should also post a performance comparison between bilinear, trilinear, 8x anisotropic and 16x anisotropic in this thread, just to quiet down the guys claiming it's not a big deal.
I know for a fact that the graphics programmer has absolutely NO control over how filtering is performed; all he can do is enable or disable it and pick the mode. The actual implementation of the algorithm is vendor specific and lives in the drivers.
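To illustrate that last point, here's a minimal OpenGL-flavoured sketch (assuming a valid context and a bound, mipmapped texture; the FilterMode enum and function name are just for illustration) of roughly everything an application gets to say about filtering. Which samples are actually taken for "anisotropic" is entirely up to the driver and hardware, which is exactly why this thread exists.
[CODE]
#include <GL/gl.h>

// The anisotropy enum comes from GL_EXT_texture_filter_anisotropic;
// define it in case the platform header doesn't ship it.
#ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
#define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
#endif

enum class FilterMode { Bilinear, Trilinear, Anisotropic16x };

// Request a filtering mode for the currently bound 2D texture.
// This is the full extent of the application's control; how the
// samples are actually taken is up to the driver and the hardware.
void applyRequestedFiltering(FilterMode mode) {
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    switch (mode) {
    case FilterMode::Bilinear:       // nearest mip level: visible "pops" between mips
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
        break;
    case FilterMode::Trilinear:      // blend between two mip levels: smooth transitions
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        break;
    case FilterMode::Anisotropic16x: // trilinear plus extra samples along the line of anisotropy
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0f);
        break;
    }
}
[/CODE]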
I think it was on Anandtech where they were talking about the Catalysts applying a negative LOD bias automatically; I think it was in Cat 9.11 and later. That sharpens the textures but sometimes causes shimmering. It's not a performance optimization, because a negative LOD bias actually creates a small performance hit, but it may look like an optimization, and it would also cause shimmering on textures even with 16xAF.
And just for info, as far as I know, AF doesn't work on parallax-mapped surfaces (at least in Crysis), so...
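For anyone wondering what a driver-applied negative LOD bias actually changes: mip selection is roughly log2 of the texel-to-pixel scale factor plus the bias, so a negative bias pushes sampling toward larger, sharper mip levels at the cost of under-filtering (shimmering) in motion. Here's a toy sketch of that math, not any vendor's actual code:
[CODE]
#include <algorithm>
#include <cmath>
#include <cstdio>

// Toy model of mipmap LOD selection: lambda = log2(rho) + bias,
// where rho approximates how many texels map to one screen pixel.
// A negative bias lowers lambda, i.e. picks a larger (sharper) mip,
// which also means more texels per pixel and therefore more aliasing.
float selectMipLevel(float rho, float lodBias, int mipCount) {
    float lambda = std::log2(rho) + lodBias;
    return std::clamp(lambda, 0.0f, static_cast<float>(mipCount - 1));
}

int main() {
    const float rho = 8.0f;  // texture minified 8x in this example
    std::printf("bias  0.0 -> mip %.2f\n", selectMipLevel(rho,  0.0f, 10)); // 3.00
    std::printf("bias -0.5 -> mip %.2f\n", selectMipLevel(rho, -0.5f, 10)); // 2.50 (sharper)
}
[/CODE]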
Pfft. Nvidia cheats by having an automatic built-in advantage coded right into Crysis itself.
I'm waiting for a site like anandtech or tech report to publish an article, I mean, if it's really a big thing they will investigate this too.
The deaf man following the blind man?
100 posts and 0 facts.
But from my experience benchmarking 30+ cards:
1. There is NO standard. Nada. None. WHQL ensures your Start popup menu is rendered correctly... not Crysis. Sharper = better!? Who says the game developer intended maximum sharpness on everything rather than blur?
2. 10 years since Q3. Hundreds of games. Actual performance cheats confirmed (e.g. 3DMark): < 5. Driver rendering bugs people notice: < 100. Actual game rendering bugs: > 99999.
3. You can't just do a per-pixel comparison between nVidia, Intel and AMD. Which one are you assuming is the golden reference? BIAS, hmm? They all use different AF algorithms. Likewise, pixel differences in new drivers don't mean cheating either - what if it was a fix for a graphics glitch you hadn't noticed?
4. For the most part, the 3DMark AF tests and other artificial AF tests are meaningless. They are detected (and not just by exe name), and behavior differs even between games on the same engine.
Let's say a game developer tests their code on their nVidia rig. Rendering looks OK. They ship it. But it was relying on a driver bug. If nVidia fixes it, the rendering no longer looks as intended. Should AMD "break" their driver to emulate the intended rendering? And what do you do when a game patch comes out - do patch version detection in the driver?
Bottom line:
3D rendering is the wild west. You think "hello world" looks different in COBOL, Java and Python!? Video drivers are expected to work perfectly with 10,000+ games running on DX9, DX10, DX11 and OGL across dozens of OS versions, not to mention all the video and GPGPU extensions, and each hardware generation works differently!
If a few pixels are darker than before, use the older driver, and cut them some slack.
================
EDIT: Statements like "this game is optimized for X" are total BS. What does that even mean? That for 3 years nVidia/AMD did nothing to fix rendering or improve performance? Quite insulting. Sorry to burst your bubble, but virtually ALL games, even the Start menu and the mouse cursor, are "optimized".
The 32x AA mode is a big cheat by itself according to the same article :ROTF: Its IQ is clearly lower than ATI's 24x AA, so what does that mean? False advertising? Or a cheat? :p:
One question: why is an optimization being called a "cheat"?
As long as the effect is invisible to the end user and has a positive impact on performance, I personally don't give two hoots if it increases framerates by two percent or two hundred percent. I tried to allude to the same thing when talking about the substitution of FP16 render targets: while it has a positive impact on performance, a person actually PLAYING a game (instead of staring at comparison screens) will likely NEVER see the difference. Me, I see a slight difference in some isolated cases, but that's just because I play some games like DoW 2 a LOT, so I can see the minor differences with ATI's implementation in that game.
The same thing goes (IMO) for anything above 8xMSAA. Other than old Myst-style point-and-move games, 99% of today's apps involve paying attention to a moving picture, not glorified screenshots. So why would someone even stop to care about a few jagged lines on a fence 200 feet away? On the other hand, if higher IQ modes can be enabled through the use of higher end hardware, I'm all for that as well. ;)
Naturally, decreasing overall image quality for higher scores in reviews isn't "ethical" but I booted Crysis on both ATI and NVIDIA hardware over the weekend and couldn't see any difference. Yes, the ATI cards do have an odd issue where some edges shimmer a bit but between the 9.12 drivers and the newest 10.3a, I saw no differences in either performance or overall IQ.
No, because it is clearly documented how it works. I am speechless after reading the responses to this thread. If the roles were reversed here, and it was the GTX480 called out for this issue, there would be a :banana::banana::banana::banana:storm greater than the one coming in 2012. Not only are people dismissing this issue, some people are actually trying to bash Nvidia some more...
It seems AMD's social engineering program is very successful.
There is a standard: it's called the API spec, and drivers are supposed to follow it. Now let's say developer A creates custom mipmaps and writes the rendering engine assuming no LOD bias, as per the SPEC, and the guys at ATI, who obviously know better, introduce a driver-level LOD bias after the game has shipped. Now whose responsibility is it to fix the problem? The developer who followed the API spec, or the driver team that knew better?
3D rendering is not the wild west! The whole graphics pipeline is quite simple: the developer has control over most key stages in the pipeline except for the finer points of triangle setup/traversal, texture filtering, clipping and, to an extent, blending. The driver team just needs to ensure that the driver behaves according to the API spec.
From your post, I'm assuming you don't really know what LOD bias does or where the supposed sharpness comes from.
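To make the custom-mipmap point concrete, here's a rough sketch of the developer-side contract being described: an engine uploads its own hand-authored mip levels and relies on the API's default LOD bias of zero, so a driver silently adding its own bias changes which of those authored levels the player actually sees. The loadAuthoredMipLevel() loader is a placeholder, not from any real engine.
[CODE]
#include <GL/gl.h>

// GL_TEXTURE_LOD_BIAS is core since GL 1.4; define it if the header is older.
#ifndef GL_TEXTURE_LOD_BIAS
#define GL_TEXTURE_LOD_BIAS 0x8501
#endif

// Placeholder: returns the hand-authored pixel data for one mip level.
const void* loadAuthoredMipLevel(int level);

// Upload a complete, artist-authored mip chain for a square RGBA texture.
// The engine assumes the spec's default LOD bias of 0 and even sets it
// explicitly; a driver-level bias applied on top silently overrides that.
void uploadAuthoredMipChain(GLuint tex, int baseSize, int levels) {
    glBindTexture(GL_TEXTURE_2D, tex);
    for (int level = 0; level < levels; ++level) {
        const int size = baseSize >> level;
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, size, size, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, loadAuthoredMipLevel(level));
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, 0.0f); // per spec: no extra sharpening
}
[/CODE]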
The problem with your logic is that the performance boost from their optimization will often mislead consumers into thinking the card is faster than it actually is. When one card is doing 16xAF and the other is doing plain trilinear, it really isn't fair to compare performance levels, and I think that is the gist of the problem. Yeah, the gamer will probably not notice, but then he will also go around spouting nonsense about how awesome his new-fangled card is until he hits a game that isn't "optimized" for his card, and then he'll obviously start complaining about how terrible the game engine is.
If Nvidia disabled their AF in Crysis you could expect a large boost in FPS as well, but then the internet would run red with nerd rage!
AF is a pretty standard thing and they should implement it correctly instead of attempting to fake it. They are trading quality for performance, and I can see how that's perfectly acceptable from their viewpoint, but I still think they should leave that choice to the developer and the end user. If I find that AF kills performance, let me turn it off; don't do it for me and then lie to me about it.
If it's that black and white, everyone should notice it and complain about it. I don't care enough to look into whether it's a bug or intentional, or how to fix it, but if I were playing and I couldn't turn on 16xAF, I would be pissed. I always game at my screen's native res with 16xAF, then I max out textures until the framerate's too low, then I add AA if I feel it's needed or if I have more performance available. AF is one of the few things with a small perf hit for a whole truckload of extra IQ (seriously, when there are lines across the ground and textures magically look different, it's complete BS).
I know it's not that black and white, I just used that as an example. Though AF can be seen as a special case of trilinear, and as such the difference may often be hard for the average user to spot.
Or maybe it's NV's fault that they take a lot of flak? Too much renaming; too arrogant a CEO; too expensive a GT200 at launch; some cheats/bugs too; wooden screws (that was funneh); a late, overhyped and paper-launched Fermi. Hell, even after all that I'm not crossing them off my list, I still consider NV cards when buying.
ATI.. R600 was crap (very late too), R700 was hot with stock cooling but pushed prices down, and R800 was a great launch. :shrug:
I'm guessing others have the same train of thought.
The cheats that Nvidia used changed the image quality and thus the gameplay, but this is hardly a gameplay changer.
Has anyone seen the AF results of GF100? Can you say that the implementation in theory and in practice complement each other? In theory a 5850 is much better than the GTX 480 at AF, but does that really translate into practically superior AF?
http://images.anandtech.com/reviews/...X480/480AF.png
GTX 480
http://images.anandtech.com/reviews/.../5870/5870.png
5870
http://www.xtremesystems.org/forums/...d.php?t=248897
So the AF pattern looks much better on the 5870, and one could argue that Nvidia's implementation should be faster because it's not perfect - that Nvidia is cheating and gaining performance in AF-heavy situations - but in reality the IQ is not really influenced that much at all.