I think we are missing the point here, just as most people missed the point of the GTX 280 vs. Radeon 4870 battle. We shouldn't really care too much about the configuration of the GPU cores themselves, as that is beside the point. But more on that later; for now, let's do a quick numerical recap.
The GTX 280 has 933 GFLOPS of overall computing power in single precision.
The Radeon 4870 has 1200 GFLOPS of overall computing power in single precision.
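As a side note, those GFLOPS figures aren't magic numbers; they are just shader count times shader clock times FLOPs issued per clock (counting the GTX 280's MAD+MUL co-issue as 3 FLOPs and the 4870's MAD as 2, which is the usual way these peak figures are quoted). A quick sanity check, assuming the published shader counts and clocks:

```python
# Theoretical single-precision throughput = shaders * shader clock (GHz) * FLOPs per clock.
# Unit counts and clocks are the publicly listed specs; the FLOPs-per-clock counting
# (3 for GT200's MAD+MUL, 2 for RV770's MAD) is the usual peak-rate convention.
gtx280_gflops = 240 * 1.296 * 3   # ~933 GFLOPS
hd4870_gflops = 800 * 0.750 * 2   # 1200 GFLOPS
print(gtx280_gflops, hd4870_gflops)  # 933.12 1200.0
```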
So, going by GFLOPS alone, the Radeon 4870 should be the faster card in pretty much all gaming situations save the ones specifically coded for Nvidia's architecture (and possible driver issues). Yet as we have seen from the dozens of reviews of these cards so far, in the vast majority of games, including synthetic gaming benchmarks, the situation is reversed, with the GTX 280 being the most consistent winner. How is this possible? The more or less obvious answer is that gaming performance is not based solely on raw GFLOPS output.

So if GFLOPS is not the number we should be looking at, then what? Well, I'm sure at least some of my colleagues here would say to look at actual game benchmarks. That is perfectly fine if, and only if, you want to compare GPU power in JUST that specific situation and NOT take those results as OVERALL levels of performance. The reason is simple: there are many different game engines out there. Given that multiplicity, even an average of, say, 10 or 20 of the most popular current games would still not be an accurate representation of the difference in performance between the GPUs. As some of us know from statistics, the average of a set is an artificial number that may not represent the members of that set; there is simply too much individual variation to take a simple average of an arbitrarily chosen group of games and call the result absolute.

Thankfully, there is a way of comparing the output of GPUs which is not arbitrary or subjective: look at another group of numbers, namely the texture and pixel fillrates. Here is how the GTX 280 and Radeon 4870 stack up:
The GTX 280 has a texture fillrate of 48.1 GT/s and a pixel fillrate of 19.2 GP/s.
The Radeon 4870 has a texture fillrate of 30.0 GT/s and a pixel fillrate of 12.0 GP/s.
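For context, these are the standard theoretical fillrates: texture fillrate is just core clock times the number of TMUs, and pixel fillrate is core clock times the number of ROPs. A quick back-of-the-envelope check using the published clocks and unit counts for both cards:

```python
# Theoretical fillrates: texture = core clock (GHz) * TMUs, pixel = core clock (GHz) * ROPs.
# Clocks and unit counts below are the published specs for each card.
def fillrates(core_ghz, tmus, rops):
    return core_ghz * tmus, core_ghz * rops   # (GT/s, GP/s)

print(fillrates(0.602, 80, 32))   # GTX 280      -> (48.16, 19.26)
print(fillrates(0.750, 40, 16))   # Radeon 4870  -> (30.0, 12.0)
```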
As we can see, the GTX 280 has a significant output advantage over the Radeon 4870, and this advantage seems to manifest itself as higher gaming performance in most game engines, as reflected in various benchmarks. It should be noted that how a certain output is produced is beside the point. The number of shaders, TMUs and ROPs, and the speed they operate at, are JUST the means of getting that output: the texture and pixel fillrates. Think of it in terms of internal combustion engines: the number of cylinders, valves, the displacement, RPM, etc. of an engine are just the means of getting the desired results, i.e. torque and horsepower. And to continue on the relevant note of next-generation GPUs, let's try to compare the flagships of the GT300 (probably the GTX 380) and R800 (the now-known Radeon 5870).
The Radeon 5870 has a texture fillrate of 68.0 GT/s and a pixel fillrate of 27.2 GP/s.
The GTX 380 has a texture fillrate of 83.2 GT/s and a pixel fillrate of 31.2 GP/s.
(If the rumored specs from Wikipedia are to be believed:
http://en.wikipedia.org/wiki/Compari...rce_300_Series)
So, it seems the GTX 380 may be faster than the Radeon 5870 (with mature drivers, if not earlier). Going by the rumored specs, the GTX 380 has roughly 22% more texture fillrate and roughly 15% more pixel fillrate. Given that the difference in performance is not too extreme, I suspect the GTX 380 will beat the Radeon 5870 in a majority of games and benchmarks, but not necessarily all.
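For anyone who wants to check the arithmetic, here is the same comparison spelled out, using the fillrate figures quoted above (the GTX 380 numbers are of course still rumored):

```python
# Relative fillrate advantage of the (rumored) GTX 380 over the Radeon 5870,
# using the figures quoted above.
gtx380 = {"texture_gt_s": 83.2, "pixel_gp_s": 31.2}   # rumored specs
hd5870 = {"texture_gt_s": 68.0, "pixel_gp_s": 27.2}   # announced specs

for key in gtx380:
    advantage = (gtx380[key] / hd5870[key] - 1) * 100
    print(f"{key}: +{advantage:.1f}%")   # texture: +22.4%, pixel: +14.7%
```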