A unit dedicated to perspective correction isn't even worth mentioning in the white papers, much less including in theoretical FLOP numbers. Many architectures, particularly the R300, have a dedicated perspective correction unit running in the background that ATI never bothered to include in the shader diagrams or in the theoretical numbers it gave the press. In older architectures it's essentially free. By suddenly counting the MUL in the G80 and saying it's needed to arrive at the theoretical shader numbers the G80 is capable of, NVIDIA is saying that next gen means age-old routines like perspective correction are no longer "free". But in marketing, 518 GFLOPs looks a hell of a lot better than 345 GFLOPs.
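The gap between those two figures comes entirely from whether that extra MUL is counted. A rough sketch of the arithmetic, assuming the widely reported G80 configuration of 128 scalar stream processors clocked at 1350 MHz (a MAD is conventionally counted as two floating-point ops per cycle, the contested MUL as a third):

```python
# Theoretical G80 shader throughput, assuming 128 scalar
# stream processors running at 1350 MHz (1.35 GHz).
SP_COUNT = 128
SHADER_CLOCK_GHZ = 1.35

# A multiply-add (MAD) counts as 2 floating-point ops per cycle.
mad_only = SP_COUNT * SHADER_CLOCK_GHZ * 2

# Counting the extra MUL adds a third op per cycle.
mad_plus_mul = SP_COUNT * SHADER_CLOCK_GHZ * 3

print(f"MAD only:  {mad_only:.1f} GFLOPs")   # 345.6 GFLOPs
print(f"MAD + MUL: {mad_plus_mul:.1f} GFLOPs")  # 518.4 GFLOPs
```

Rounded down, those are exactly the 345 and 518 GFLOPs figures in question, so the entire marketing delta rests on one MUL per clock that may or may not be generally available to shader code.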
But this move by NVIDIA brings up an interesting take on what their opinion of the R600 may be. It may show a lack of confidence on NVIDIA's part in the efficiency of the R600 compared to the G80. There's a very good chance the R600 will use the same 4-way vector shader structure of its R300 ancestry, which is a tried and true architecture, but one that could prove dismal next to the fully scalar architecture NVIDIA is boasting. ATI will have to use a massive number of these shaders and a high clockspeed to create shader performance on par with NVIDIA's. On paper, the theoretical numbers will actually be slightly higher (assuming the rumored clockspeed of slightly over 800MHz holds true), but in real-world utilization NVIDIA will have the upper hand, which is crucial. Instead, the R600 will have to rely on other aspects to obtain the win over the G80: raw fillrates, both pixel and texel, and memory bandwidth. That will give ATI a much-needed boost in heavily bump-mapped scenarios, HDR and post-processing effects, anisotropic filtering, and anti-aliasing. Add all those together and they make up the bulk of what a graphics card has to do in modern games, and thus the R600 will have the edge.
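Those raw fillrate and bandwidth figures are simple products of unit counts and clocks. A minimal sketch of how they're derived, using the 8800 GTX's published configuration (24 ROPs, 575 MHz core, 384-bit bus, 1800 MHz effective GDDR3) purely as a reference point; the R600's unit counts aren't public, so texel fillrate is shown as a formula only:

```python
def pixel_fillrate_gpixels(rops: int, core_mhz: int) -> float:
    """Pixels written per second = ROP count x core clock."""
    return rops * core_mhz / 1000.0

def texel_fillrate_gtexels(tmus: int, core_mhz: int) -> float:
    """Bilinear texels fetched per second = TMU count x core clock."""
    return tmus * core_mhz / 1000.0

def bandwidth_gb_s(bus_width_bits: int, effective_mem_mhz: int) -> float:
    """Bytes per second = bus width in bytes x effective memory clock."""
    return bus_width_bits / 8 * effective_mem_mhz / 1000.0

# 8800 GTX reference numbers:
print(pixel_fillrate_gpixels(24, 575))  # 13.8 Gpixels/s
print(bandwidth_gb_s(384, 1800))        # 86.4 GB/s
```

These are the targets the R600 would need to beat on wider buses, faster memory, or more ROPs/TMUs to win on fillrate and bandwidth rather than shader throughput.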
At least over the first generation of G80.