Originally Posted by
GoldenTiger
By pure math... losing 32 of 512 shaders would cost about 6.25% of the shading power...
but you'd gain about 17% clock rate on the cores alone... and a good chunk more memory bandwidth.
You'd also be speeding up the ROPs, which adding more cores alone doesn't do...
Additionally, the TMUs would be sped up too, since those are tied to the core clock rather than the shader clock, as far as I'm aware...
He also said the newer drivers added a couple percent of performance...
Also worth noting: this guy claims his engineering-sample card had 512 shaders @ 600 MHz core, 1200 MHz shader, and 2800 MHz (GDDR5) RAM; full retail specs should be 480 shaders @ 700 MHz core, 1400 MHz shader, and 3600-3700 MHz (GDDR5) RAM.
So my guess is that, with those numbers considered, we'd see maybe 16-17% better performance for 480 cores @ 700 MHz vs. 512 cores @ 600 MHz, taking the memory frequency into account as well. Couple that with the extra few percent from the newer drivers in those tests and we might see a graph looking a bit more favorable. Still, if this guy's accurate, it wouldn't be enough to make it what I'd call a clean win for nV here.
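For what it's worth, here's a quick Python sanity check of the arithmetic, using only the specs quoted above. The scaling assumptions are my own simplifications (shader throughput ∝ shader count × shader clock, ROP/TMU throughput ∝ core clock, bandwidth ∝ memory clock at a fixed bus width), so treat it as a back-of-envelope sketch, not a real performance model:

```python
# Spec figures exactly as claimed in the post -- not confirmed numbers.
es = {"shaders": 512, "core": 600, "shader_clk": 1200, "mem": 2800}      # engineering sample
retail = {"shaders": 480, "core": 700, "shader_clk": 1400, "mem": 3600}  # rumored retail part

def pct(new, old):
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Raw shader throughput: scales with shader count * shader clock.
shader_gain = pct(retail["shaders"] * retail["shader_clk"],
                  es["shaders"] * es["shader_clk"])

# ROPs and TMUs: assumed tied to the core clock, so they scale with it directly.
fixed_func_gain = pct(retail["core"], es["core"])

# Memory bandwidth: assumed proportional to memory clock (same bus width).
mem_gain = pct(retail["mem"], es["mem"])

print(f"shader throughput:    {shader_gain:+.1f}%")   # about +9.4%
print(f"ROP/TMU (core clock): {fixed_func_gain:+.1f}%")  # about +16.7%
print(f"memory bandwidth:     {mem_gain:+.1f}%")      # about +28.6%
```

The shader side alone only gains about 9%, so the overall 16-17% guess in the post implicitly leans on the ROP/TMU and bandwidth gains dominating in whatever workload is being measured.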