Here is my theory:
And that case can happen in games too. So why should we focus on high resolution benchmarks? When you are at low resolutions and the prefetcher works well, the Core architecture will always give you a bunch more FPS than an AMD, which is why you get a higher average frame rate; the low FPS get no attention at all. You won't see the results of what I will call the "latency hole" of a C2Q.
First you have to know that a GPU-bound situation doesn't exclude a CPU-bound situation.
So at high resolutions the graphics card acts like a frame limiter and more weight falls on the low FPS: where there are latency holes, the frames drop much more than with a K10, and you might get a better average score with the K10 because the higher FPS of a Core 2 Quad are simply cut off.
That's what happened in the overclockersclub review with World in Conflict: they used a graphics card that ran into a GPU-bound situation very early, and it showed that a Phenom performed better in the CPU-bound situations. Using a better graphics card just pushes such a scenario up to higher resolutions.
Again: that's not always the case, and in fact it's a very rare scenario, depending on the game and how it's written. Jack provided us with very good data that has shown this perfectly, and I really appreciate his work.
And Jack, is there a program that limits the frames in software? Maybe you could check out my theory with it. Thanks a lot.
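
To show the arithmetic I mean, here is a small Python sketch with purely made-up FPS samples (not measured data), where a simple cap stands in for the GPU-bound limit:

# Toy illustration of the "frame limiter" theory with invented numbers.
c2q = [90, 95, 100, 20, 95]   # high peaks, one deep "latency hole"
k10 = [70, 72, 75, 55, 73]    # lower peaks, smaller dip

def average(samples):
    return sum(samples) / len(samples)

def gpu_capped(samples, limit):
    # A GPU-bound situation works like a frame limiter: FPS cannot exceed the cap.
    return [min(fps, limit) for fps in samples]

print("uncapped :", average(c2q), "vs", average(k10))
print("capped@60:", average(gpu_capped(c2q, 60)), "vs", average(gpu_capped(k10, 60)))

Uncapped the first set averages 80 against 69, but capped at 60 it averages 52 against 59: once the high frames are cut off, the same dips count for much more, which is exactly the effect I described above.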