Well, it is hard to draw generalized conclusions ... this is one game, with two different scripted scenes. One thing to take away from this data set is that a gaming experience is dictated by a complex set of factors (I am diverging here from a pure CPU/architecture discussion, true).
In any given benchmark, at any given set of settings, the GPU may be the limiter or the CPU may be ... overall, it is the GPU that determines the experience for the most part. Choosing the CPU should still be done carefully, though either CPU in today's gaming environment will handle any game (the only possible exception being Crysis). It is hard to declare either CPU a 'winner' from such a narrow subset of scenes and rendering conditions. The best (but impractical) way would be to measure, across the entire story line of a game, how often each CPU limits the output to below a playable FPS ... this is simply not possible, i.e. you can never guard against the 'worst case condition', since neither the review sites nor any of the built-in benchmarks can account for worst case in actual game play. (The metric itself is trivial if you had the data; see the sketch below.)
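For what it's worth, if you somehow did have frame-time logs for a whole playthrough (e.g. FRAPS frametime captures), the metric I have in mind is easy to compute. A minimal sketch, assuming a hypothetical list of per-frame render times in milliseconds and an arbitrary 30 FPS floor (both are my illustrative choices, not from any review):

```python
# Hypothetical sketch: given per-frame render times (ms) captured across a
# full playthrough, measure how often the system drops below a "playable"
# FPS floor. The 30 FPS floor and the sample log are illustrative assumptions.

def fraction_below_playable(frame_times_ms, playable_fps=30.0):
    """Return the fraction of frames rendered slower than the playable floor."""
    limit_ms = 1000.0 / playable_fps          # e.g. ~33.3 ms per frame at 30 FPS
    slow = sum(1 for t in frame_times_ms if t > limit_ms)
    return slow / len(frame_times_ms)

# Example: a mostly smooth run with two heavy spikes
log = [16.7, 18.2, 17.1, 45.0, 16.9, 52.3, 17.4]
print(f"{fraction_below_playable(log):.1%} of frames below 30 FPS")
```

The hard part is not the arithmetic, it is getting a log that actually covers the worst-case scenes -- which is exactly what canned benchmarks cannot guarantee.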
Back on topic -- I think you are right, ultimately the bandwidth argument will be the appropriate explanation, be it at the HT link (the analogue of the FSB) or at the PCIe point of connection. If we were to generalize, the statement would fall out very similar to what has been said time and again: in CPU-limited workloads Intel has the stronger core; in throughput-limited workloads AMD's interconnect shows its advantage.
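As a back-of-envelope illustration of the ceilings in play (the widths and transfer rates below are the commonly published theoretical maximums for parts of this era, not measurements -- plug in your own numbers):

```python
# Back-of-envelope peak bandwidth for the interconnects being discussed.
# Figures are theoretical maximums; real sustained throughput is lower.

def bus_bw_gbs(width_bytes, transfers_per_sec):
    """Theoretical peak bandwidth in GB/s (decimal)."""
    return width_bytes * transfers_per_sec / 1e9

fsb_1333    = bus_bw_gbs(8, 1333e6)   # 64-bit FSB at 1333 MT/s, shared both ways
ht3_per_dir = bus_bw_gbs(2, 5.2e9)    # 16-bit HT 3.0 link at 5.2 GT/s, per direction
pcie2_x16   = 16 * 0.5                # PCIe 2.0: ~500 MB/s per lane, per direction

print(f"FSB 1333     : {fsb_1333:.1f} GB/s shared")
print(f"HT 3.0 link  : {ht3_per_dir:.1f} GB/s each way "
      f"({2 * ht3_per_dir:.1f} GB/s aggregate)")
print(f"PCIe 2.0 x16 : {pcie2_x16:.1f} GB/s each way")
```

The headline numbers look similar, but the FSB figure is shared among all traffic in both directions, while HT is a dedicated point-to-point link per direction -- which is precisely where the throughput-limited advantage shows up.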
jack