No....
Dude this is almost painful to watch.
A CPU crunches the physics, AI, and other non-graphical portions of the game; it then wraps all that information up in a small package and sends it through the DX API, where it is loaded into the command buffer for the GPU. The GPU uses that info, along with the local texture and vertex data in video RAM, to render the frame. All the rendering duties have been moved off the CPU for more than 5 years now.
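To put that flow into rough code, here is a purely illustrative C++ sketch. The function and type names (UpdatePhysics, SubmitDrawCall, Present, etc.) are made-up placeholders, not a real engine or the actual DX API calls:

```cpp
#include <vector>

// Illustrative stand-ins only; a real game would call into the DX API here.
struct Object {};
struct GameState { std::vector<Object> objects; };

void UpdatePhysics(GameState&)     {}  // CPU: simulation, no pixels touched
void UpdateAI(GameState&)          {}  // CPU
void UpdateAnimation(GameState&)   {}  // CPU
void SubmitDrawCall(const Object&) {}  // queues a command into the GPU's command buffer
void Present()                     {}  // GPU renders asynchronously, then the frame flips

void RunFrame(GameState& state)
{
    // CPU work: physics, AI, animation; none of this creates the image.
    UpdatePhysics(state);
    UpdateAI(state);
    UpdateAnimation(state);

    // The CPU packages up the results (transforms, constants, draw commands)
    // and hands the small package to the API, which loads it into the GPU's
    // command buffer.
    for (const Object& obj : state.objects)
        SubmitDrawCall(obj);

    // The GPU then does the rendering, using the texture and vertex data
    // already sitting in video RAM.
    Present();
}

int main()
{
    GameState state;
    state.objects.resize(3);
    for (int frame = 0; frame < 10; ++frame)
        RunFrame(state);  // ten frames of the loop described above
}
```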
In a GPU-limited regime, the GPU must complete its work before the CPU can send the next package of information... conversely, in a CPU-limited regime, if the GPU finishes the frame before the CPU has completed the next one, the GPU must wait until the CPU finishes its work. This does not mean work cannot be done in parallel: say the GPU is rendering frame 12110, the CPU can be working on the next frame, 12111, AT THE SAME TIME. But one will finish its task before the other (it has to happen), in which case, to synchronize the next frame of rendering, one waits on the other. The GPU shades the pixels, determines visibility via the Z-buffer, and applies the anti-aliasing. The CPU calculates the physics, collision boundaries, the AI, the animation of characters, etc., BUT DOES NOT participate in creating the image; this is why the GPU is called the Graphics Processing Unit: it processes the graphics. Changing resolution changes the load on the GPU, not the CPU, which is why with weaker GPUs you can overwhelm the GPU at high resolutions and move into the GPU-limited regime... all the data shows this... it is not difficult to see.
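If it helps, here is a toy model of that handoff in C++. The millisecond numbers are invented purely for illustration; the point is just how the timing works out:

```cpp
#include <algorithm>
#include <cstdio>

// Toy model of the pipeline: the CPU may prepare the next frame while the
// GPU renders the current one, but it can only run one frame ahead, so
// whichever unit is slower sets the observed frame time.
int main()
{
    const double cpu_ms = 12.0;   // hypothetical CPU simulation + submission time per frame
    const double gpu_ms = 8.0;    // hypothetical GPU rendering time per frame

    double cpu_done  = 0.0;       // when the CPU finished preparing its last frame
    double gpu_done  = 0.0;       // when the GPU finished rendering its last frame
    double gpu_prev  = 0.0;       // GPU finish time one frame earlier
    double last_flip = 0.0;

    for (int frame = 0; frame < 10; ++frame) {
        // The CPU may start the next frame once it is free and the command
        // buffer slot from two frames back has been drained (one frame ahead).
        cpu_done = std::max(cpu_done, gpu_prev) + cpu_ms;
        // The GPU starts once its previous frame is done and the CPU has submitted.
        double flip = std::max(cpu_done, gpu_done) + gpu_ms;
        gpu_prev = gpu_done;
        gpu_done = flip;
        std::printf("frame %d finished at %6.1f ms (interval %.1f ms)\n",
                    frame, flip, flip - last_flip);
        last_flip = flip;
    }
    // The interval settles at max(cpu_ms, gpu_ms) = 12 ms, roughly 83 FPS;
    // here the CPU is the slower unit, so the GPU idles about 4 ms every frame.
}
```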
The slower of the two will determine the observed frame rate... period. All the data around the web shows this to be true. It has nothing to do with how well or how poorly a CPU is threaded; it has everything to do with when the CPU finishes its work relative to when the GPU finishes its work. If the CPU is the slower component, it determines the observed frame rate. If the GPU is the slower, it determines the observed frame rate.
Read the whole thread; nVidia (even your link) shows the flow charts for figuring out which one is the rate limiter.
If the CPU is the limiter, then increasing the performance of the CPU will vary the FPS... (which is what the LegionHW data shows). If the GPU is the slowest component, then varying the CPU speed will have no effect on FPS, as shown by the 4870 data above from Firingsquad, in the same game at a lower resolution but with a weaker GPU. This is not rocket science.
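Same idea in numbers. A rough C++ sketch of that test: sweep a hypothetical CPU frame time (as if you were clocking the CPU up and down) against a strong and a weak GPU. The millisecond values are invented for illustration, not taken from any review:

```cpp
#include <algorithm>
#include <cstdio>

// Sweep CPU per-frame time against two hypothetical GPUs and watch where
// the observed FPS actually responds to the CPU.
int main()
{
    const double strong_gpu_ms = 6.0;    // e.g. a very fast card at this resolution
    const double weak_gpu_ms   = 25.0;   // e.g. a weaker card pushed to high resolution

    for (double cpu_ms = 20.0; cpu_ms >= 8.0; cpu_ms -= 4.0) {
        double fps_strong = 1000.0 / std::max(cpu_ms, strong_gpu_ms);
        double fps_weak   = 1000.0 / std::max(cpu_ms, weak_gpu_ms);
        std::printf("cpu %.0f ms -> strong GPU %.0f FPS, weak GPU %.0f FPS\n",
                    cpu_ms, fps_strong, fps_weak);
    }
    // With the strong GPU, FPS rises as the CPU gets faster (CPU limited);
    // with the weak GPU, FPS stays pinned at 40 no matter the CPU (GPU limited).
}
```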
EDIT: Also, it is a one-way street. The CPU is the host controller in the current programming model; I have linked references in this thread that explain this. The CPU receives no data from the GPU; the CPU sends commands and object information (non-world assets, to be exact) to the GPU buffer, which initiates the GPU to do its work. Spend some time researching it... it will educate you.
Remember, I told you shortly after the 4870 X2 launch that the card was so fast that almost all situations at 1920x1200 would be CPU limited, and Phenom would be significantly behind... this is what the LegionHW data shows to be true. Even the fastest Intel processor can still bottleneck this card in many games (Devil May Cry is an exception) at 1920x1200 with full AA; the 4870 is one hellava card.