Quote Originally Posted by LordEC911
I wasn't trying to prove a point, I was simply answering a question.
I can see where he might have been headed by asking that question though.

The 4890 is what, ~10-15% behind a GTX285 on average with both at stock in "normal" games and apps?
Yet in OCCT the 4890 is ~57% faster, going by the 83 FPS vs. 53 FPS.
However this app is programmed, it stresses every part of the chip to the max, or at least quite a bit more than other "normal" apps/games.
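
Quick sanity check on that percentage, using just the FPS numbers quoted above:

Code:
# Measured OCCT GPU-test frame rates quoted in this thread
fps_4890 = 83.0
fps_gtx285 = 53.0

advantage = fps_4890 / fps_gtx285 - 1
print(f"4890 advantage in OCCT: ~{advantage * 100:.0f}%")   # ~57%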

Also, none of the numbers, i.e. FPP (peak floating-point throughput), seem to add up.
4890 @ 850MHz = 1.36 TFLOPS
GTX285 @ 1476MHz (shader) = 1.06 TFLOPS (MADD+MUL), 0.708 TFLOPS (MADD)

1.36 / 1.06 = 1.28x greater (about half the FPS difference)
1.36 / 0.708 = 1.92x greater (amusing since it doesn't mean anything, but it matches largon's power-draw increase)
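
For anyone following along, here's a rough sketch of where those peak numbers come from. The shader counts (800 SPs on the 4890, 240 on the GTX285) and FLOPs-per-clock figures are the usual published specs, not anything measured here:

Code:
# Peak single-precision throughput, in TFLOPS
sp_4890, clk_4890 = 800, 0.850   # stream processors, shader clock (GHz)
sp_285,  clk_285  = 240, 1.476

tflops_4890     = sp_4890 * 2 * clk_4890 / 1000.0   # MADD = 2 FLOPs/clock -> 1.36
tflops_285_full = sp_285  * 3 * clk_285  / 1000.0   # MADD + MUL = 3 FLOPs -> ~1.06
tflops_285_madd = sp_285  * 2 * clk_285  / 1000.0   # MADD only            -> ~0.708

print(tflops_4890 / tflops_285_full)   # ~1.28x
print(tflops_4890 / tflops_285_madd)   # ~1.92x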

Simply using max theoretical FPP is not an accurate way to estimate performance, but in this case it seems to be related. Since this app has been said to use simple shaders to completely load the ALUs, you could come to the conclusion that the MUL is only being used ~45% of the time (rough working below).
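
The working behind that ~45%, with my assumptions spelled out: I'm assuming the 4890 actually sustains its full 1.36 TFLOPS in this test and that FPS scales linearly with sustained FLOPS, neither of which is a given:

Code:
# Back out the implied MUL utilization on the GTX285 from the measured FPS gap.
fps_4890, fps_gtx285 = 83.0, 53.0
tflops_4890     = 800 * 2 * 0.850 / 1000.0   # 1.36, assumed fully sustained
tflops_285_madd = 240 * 2 * 1.476 / 1000.0   # ~0.708, MADD only
tflops_285_full = 240 * 3 * 1.476 / 1000.0   # ~1.06, MADD + MUL

implied_285  = tflops_4890 / (fps_4890 / fps_gtx285)   # ~0.87 TFLOPS sustained
mul_capacity = tflops_285_full - tflops_285_madd       # ~0.35 TFLOPS from the MUL
mul_utilization = (implied_285 - tflops_285_madd) / mul_capacity
print(f"Implied MUL utilization: ~{mul_utilization * 100:.0f}%")   # ~45%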

Basically, the way this app is programmed, it is able to use the 4890's architecture to the max, and it doesn't seem to fully load Nvidia cards, per se.

Edit - Anyone know what the stock voltage for a GTX280 is under load? 1.3-1.4V?

I just ran the GPU test on a pair of GTX 280s in SLI for about 10 minutes (clocks 712/1512/1242, voltage 1.185V). Full screen, 1920x1200 settings. Max current draw was just below 30A on both cards. Max frames per second was 144. Both cards are water cooled. GPU0 topped out at 56C and GPU1 at 53C. Ambient temp was ~23.8C. During gaming, temps usually top out at about 15C above ambient.
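
For reference, converting that current figure to watts depends on which rail the 30A reading refers to, so the arithmetic for both interpretations is shown; treat this as a sketch, not a measurement:

Code:
i_max = 30.0                     # max reported current per card (A)
print(f"{i_max * 1.185:.0f} W")  # if it's core-rail current at 1.185V vGPU: ~36 W
print(f"{i_max * 12.0:.0f} W")   # if it's current on the 12V input rail: ~360 W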