Gosh --
This thread is about debunking your assertion that the FSB is crippling Intel in high resolution gaming. Comparing two processors is a different topic altogether, and it has been widely and sufficiently demonstrated that Intel currently produces the superior part. This does not make the Phenom (Agena or Barcelona) a bad processor; it is a good processor.
In terms of good game play, the criterion is simple: the hardware must produce frame rates whose minimum stays above the refresh rate of the monitor.
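To put that criterion in concrete terms, here is a trivial sketch; the 60 Hz refresh rate and the frame times are made-up numbers purely for illustration:

# Rough sketch: "good game play" test -- is the minimum frame rate above the refresh rate?
# The refresh rate and frame times below are made-up illustration values.
refresh_hz = 60.0
frame_times_ms = [12.1, 13.4, 15.0, 14.2, 18.9]   # hypothetical per-frame render times

min_fps = 1000.0 / max(frame_times_ms)            # the slowest frame sets the minimum fps
print("minimum fps: %.1f" % min_fps)
print("smooth by this criterion:", min_fps >= refresh_hz)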
Your post here http://www.xtremesystems.org/forums/...2&postcount=17 is what is at issue. Nobody, not even I, will argue that the FSB implementation is not old and antiquated; however, for client desktop applications it is not the critical factor you assert it to be.
Intel and AMD take two approaches to feeding the core(s) with instructions and data from system memory -- fast memory access with a smaller cache, or a large cache with slower access to memory. Intel has designed and implemented a caching system that works around the FSB deficiencies relative to AMD's IMC approach.
It works like this ....
When a processor crunches data and instructions, it fetches that information into the queue. Those instructions proceed through a general progression: fetch, decode, reorder, execute, and retire. As the core moves through the fetch buffer and requires the next block, it goes to cache first: if the data is there, that is a cache hit; if it is not, that is a cache miss. The hit rate for Intel's cache is higher than for AMD's, due to size and a few other variables.
The cache miss rate falls as the cache grows: the larger the cache, the lower the miss rate. Each cache miss carries the penalty of going to main memory, in which case the processor must wait X cycles for that data. Intel also has more aggressive prefetchers, which work to populate the cache with the next needed data ahead of time and so avoid misses that would otherwise require a trip over the FSB.
AMD generates more misses than Intel does, but each miss carries a smaller penalty. So which is better: a fast memory connection with a high miss rate, or a slower memory connection with a low miss rate? Overall, each approach accomplishes much the same thing, and given the body of data on the net showing Intel significantly outperforming AMD clock for clock in almost any application, it is sufficient to conclude that Intel's archaic FSB technology is not the problem.
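You can see why the two approaches come out roughly even with a back-of-envelope average memory access time (AMAT = hit time + miss rate x miss penalty). The numbers below are purely hypothetical, not measured values for any real Core 2 or Phenom:

# Back-of-envelope AMAT (average memory access time) comparison.
# AMAT = hit_time + miss_rate * miss_penalty  (all numbers below are hypothetical).
def amat(hit_time_cycles, miss_rate, miss_penalty_cycles):
    return hit_time_cycles + miss_rate * miss_penalty_cycles

# "Big cache, slow memory path" style (FSB-like): low miss rate, high penalty.
big_cache_slow_mem = amat(hit_time_cycles=14, miss_rate=0.02, miss_penalty_cycles=300)

# "Small cache, fast memory path" style (IMC-like): higher miss rate, lower penalty.
small_cache_fast_mem = amat(hit_time_cycles=12, miss_rate=0.04, miss_penalty_cycles=180)

print("big cache / slow memory  : %.1f cycles" % big_cache_slow_mem)    # 14 + 6.0 = 20.0
print("small cache / fast memory: %.1f cycles" % small_cache_fast_mem)  # 12 + 7.2 = 19.2

With those illustrative inputs the two come out within a cycle of each other, which is the whole point: a lower miss rate can buy back a higher miss penalty.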
Where the FSB does become a problem is when the workload and memory footprint exceed the cache's ability to keep the miss rate low... this is the case for high-throughput workloads found in both server and HPC applications. Where BW is the limiter, AMD will win; in CPU-bound workloads, Intel will win. DT is nearly all CPU-bound workloads, BW is hardly a factor, and I have found only a handful of cases where I could point to the BW of the FSB as the culprit.
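A crude way to reason about which regime a workload sits in is a roofline-style check: compare the time spent moving data against the time spent computing on it. The bus and compute figures below are rough hypothetical numbers for illustration only:

# Roofline-style back-of-envelope: is a workload bandwidth-bound or CPU-bound?
# All machine numbers here are rough, hypothetical figures for illustration.
fsb_bw_gb_s = 8.5     # assumed usable front-side-bus bandwidth, GB/s
core_gflops = 20.0    # assumed sustained compute throughput, GFLOP/s

def bound_by(bytes_moved, flops_done):
    t_mem = bytes_moved / (fsb_bw_gb_s * 1e9)   # seconds spent moving data
    t_cpu = flops_done / (core_gflops * 1e9)    # seconds spent computing
    return "bandwidth-bound" if t_mem > t_cpu else "CPU-bound"

# HPC/server-style streaming kernel: touches lots of data, little math per byte.
print("streaming kernel  :", bound_by(bytes_moved=4e9, flops_done=1e9))
# Desktop/game-style hot loop: working set lives in cache, lots of math per byte.
print("cache-resident loop:", bound_by(bytes_moved=5e7, flops_done=2e9))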
If you wish to see this coming from someone other than me, find a library, read and learn.
http://ieeexplore.ieee.org/xpl/freea...number=4536342
Their conclusions are exactly what I just told you above and have been telling you for the past 5 days.
Now, I have shown you data which demonstrates that changing the FSB BW has no impact on the observed performance in high resolution gaming. There is a reason for this: the rate determinant is the ability of the GPU to finish its work -- the GPU is the bottleneck.
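The simplest mental model of that: per-frame time is roughly set by whichever side finishes last, so speeding up the CPU (or the FSB feeding it) does nothing once the GPU is the long pole. The millisecond figures here are invented for illustration:

# Toy model of a GPU-bound frame at high resolution.
# Frame time is roughly limited by whichever side finishes last (numbers are invented).
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms = 22.0   # hypothetical GPU render time per frame at high resolution (~45 fps)

print("baseline CPU/FSB       : %.1f fps" % fps(cpu_ms=9.0, gpu_ms=gpu_ms))
print("20%% faster CPU/FSB     : %.1f fps" % fps(cpu_ms=7.5, gpu_ms=gpu_ms))
print("much faster GPU instead: %.1f fps" % fps(cpu_ms=9.0, gpu_ms=15.0))

The first two lines print the same number; only the third moves, because only the GPU matters once it is the slower side.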
This is due to your lack of understanding of how a PC behaves in a graphically intensive gaming environment. Part of that lack of understanding is not grasping that the video card's GPU is a high-throughput device; as a result, modern 3D cards put high-speed RAM on the card, the data that is needed is loaded into that RAM before the game even starts, and the traffic over the FSB is small during actual game play. The only time it does become a factor is when the amount of textures needed exceeds the memory capacity of the video card, in which case you will get a stutter ... and this will happen on both AMD and Intel systems; neither one, not even AMD, has enough main memory BW to satiate the throughput needs of a GPU. Here is a less academic site that explains this:
http://www.yougamers.com/articles/13...ly_need-page2/
This is also the reason why nVidia and ATI cards are now coming with more and more RAM, as much as 1 GB.
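You can put rough numbers on why the card's own RAM matters: a level's texture footprint either fits in VRAM or it does not, and only the overflow has to come back across the bus mid-game. The texture counts and sizes below are hypothetical:

# Rough texture-footprint check: does a level's texture set fit in video memory?
# Sizes are hypothetical; real games mix compressed formats and mip levels.
def texture_mb(count, width, height, bytes_per_texel=4):
    return count * width * height * bytes_per_texel / (1024.0 ** 2)

level_textures_mb = texture_mb(count=700, width=512, height=512)   # = 700 MB uncompressed
vram_mb = 512.0                                                     # hypothetical card

overflow_mb = max(0.0, level_textures_mb - vram_mb)
print("texture footprint: %.0f MB, VRAM: %.0f MB" % (level_textures_mb, vram_mb))
print("spills over the bus mid-game: %.0f MB" % overflow_mb)        # the source of stutter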
You can test this... download the Doom3 demo from HOCbench.com and run it using their demo file... Doom3 is bad about not precaching its textures. Run this on your uuuubber 4870X2 and Phenom rig... it will stutter and jerk, and it will make no difference what speed you run the processor at or what graphics card you use. Run it through twice; you will always get higher frame rates the second time, because the first pass has put all the textures that weren't already there into video memory.
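If you log the two passes, the signature is easy to spot: the first run's minimum fps craters on the frames where textures are being pulled over the bus, and the second run's does not. The frame-time lists here are made up to show the pattern, not real Doom3 numbers:

# Made-up frame times (ms) showing first-run texture-upload stutter vs. the second run.
run1_ms = [16, 17, 95, 18, 16, 120, 17, 16, 80, 17]   # spikes = textures streamed in over the bus
run2_ms = [16, 17, 17, 18, 16, 17, 17, 16, 17, 17]    # everything already resident in VRAM

def summarize(name, times_ms):
    avg_fps = 1000.0 * len(times_ms) / sum(times_ms)
    min_fps = 1000.0 / max(times_ms)
    print("%s: avg %.0f fps, min %.0f fps" % (name, avg_fps, min_fps))

summarize("first pass ", run1_ms)
summarize("second pass", run2_ms)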
Finally, you had better hope you are wrong --- for AMD's sake. Because if the antiquated FSB is indeed holding back hidden potential in the Intel Core microarchitecture, then what happens when Intel removes this roadblock?
Nehalem will be out by the end of the year. DT will show impressive gains, especially in multithreading, to be sure, but on the server side (where the FSB BW issues truly are a problem) Nehalem will be nothing short of a miracle, with performance gains of 2X or better:
http://www.realworldtech.com/page.cf...208182719&p=10
Jack