Thread: Intel Q9450 vs Phenom 9850 - ATI HD3870 X2

  1. #1
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    JumpingJack: So you think that there isn’t any interest among readers in knowing how video cards perform with AMD processors?
    One thing you can be certain of: the next time nVidia or ATI releases a new card, there will be at least 10+ reviews using Intel processors to test it. Great imagination among reviewers, don't you think?

  2. #2
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
    JumpingJack: So you think that there isn’t any interest among readers in knowing how video cards perform with AMD processors?
    One thing you can be certain of: the next time nVidia or ATI releases a new card, there will be at least 10+ reviews using Intel processors to test it. Great imagination among reviewers, don't you think?
    No, there are plenty of people who would like to see it no doubt, me being one of them.

    But the reviewers are reviewing video cards, not processors. If they used a Phenom, at best it would most likely show the 4870 X2s tied with the nVidia 280 and 260, since the games would all bunch up at the limit of the CPU.

    They use an Intel CPU not to advertise Intel CPUs; they use the fastest available platform so that the cards will demonstrate their performance differences without any CPU bottlenecking. Even with Intel CPUs, some bottlenecks show up in the reviews.

    Many sites have to use overclocked Intel quads to make sure the CPU does not rail every card to the same value. Even the nVidia GTX 280 can expose some bottlenecks.

    I have two 4870 X2s on the way, and I will provide you examples of this. At the same resolutions, settings, etc., the Phenom paired with the 4870 X2s will most likely produce lower scores in some (likely most, if not all) of the test runs compared to the Intel CPUs.
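    To make the capping concrete, here is a worked example with invented numbers: if the CPU needs 10 ms to prepare each frame, no card can exceed 1000 / 10 = 100 FPS at any resolution. A card that renders the scene in 5 ms and one that needs 9 ms would both report roughly 100 FPS and look "tied", even though one is nearly twice as fast.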

    It isn't about imagination, it is about doing the right thing and reviewing the cards correctly. It was the same before the C2D launched: all the reviewers used AMD processors to review video cards, not because they liked AMD or wanted to promote AMD, but because the Athlon 64 just B!tch-slapped the Pentium 4 up, down and sideways at gaming... why would you use a P4 to run a high-end game with a high-end video card? It would be pointless... it is just that today the roles are reversed.
    Last edited by JumpingJack; 08-14-2008 at 05:10 PM.
    One hundred years from now It won't matter
    What kind of car I drove What kind of house I lived in
    How much money I had in the bank Nor what my clothes looked like.... But The world may be a little better Because, I was important In the life of a child.
    -- from "Within My Power" by Forest Witcraft

  3. #3
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by JumpingJack View Post
    But the reviewers are reviewing video cards, not processors. If they used a Phenom, at best it would most likely show the 4870 X2s tied with the nVidia 280 and 260, since the games would all bunch up at the limit of the CPU.
    Do you think that AMD isn't able to run an 8800GT? That was a fast card not that long ago.

    If a game is only using one thread then I agree with you: the CPU would bottleneck the system if the video card is very fast. But do you really think that in games that use two or more cores effectively, the CPU is going to be the main bottleneck for advanced graphics?
    And about this idea that the CPU or the GPU bottlenecks the system: when an application runs it performs different types of work; sometimes it uses the CPU and sometimes it sends data to the GPU. It is very hard to balance workloads and use all the hardware in a way that hides bottlenecks. Threads need to be synchronized. If one thread runs very fast but has to wait for another thread, its speed doesn't matter; the slower thread will delay it. If you have bigger threads (ones that do more), it is harder to estimate how fast the work gets done. With one main thread it might be easier; I think Crysis and some other games are built like that. You will see work on the other cores, but one core is going crazy, so this approach won't use the whole CPU (all four cores). New games may use other techniques; they need to in order to take advantage of all the cores.
    How the game performs depends on its design. Maybe one game does all the calculation of how the "image" is displayed using four threads. When all those threads are ready, they may use one thread to send the data to the video card, and while that thread is sending, the data rate needs to be high. In between there may not be much traffic (only memory, and they are probably trying to optimize for cache hits). Games may use buffers etc. to speed things up, but when I look at how hard the CPU is working, it is very rare to see 100% on one core.
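    Here is a minimal C++ sketch of that synchronization point (the per-core timings are invented, and this is not code from any real game): a main thread fans work out to four workers and cannot submit the frame until all of them finish, so the slowest core sets the frame time.

        // Frame time tracks the slowest worker thread, not the average.
        #include <algorithm>
        #include <chrono>
        #include <cstdio>
        #include <thread>
        #include <vector>

        int main() {
            // Pretend per-core workloads for one frame, in milliseconds.
            std::vector<int> work_ms = {4, 5, 6, 12};  // one core "going crazy"

            auto start = std::chrono::steady_clock::now();
            std::vector<std::thread> workers;
            for (int ms : work_ms)
                workers.emplace_back([ms] {
                    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
                });
            for (auto& t : workers) t.join();  // frame submit waits for all of them
            auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
                               std::chrono::steady_clock::now() - start).count();

            // Prints ~12 ms: the three fast threads' speed did not matter.
            std::printf("frame took ~%lld ms; slowest worker needed %d ms\n",
                        static_cast<long long>(elapsed),
                        *std::max_element(work_ms.begin(), work_ms.end()));
        }

    (Built with something like g++ -std=c++11 -pthread; the point is only that the join gates on the 12 ms thread.)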

    EDIT: About Anand and Nehalem. OF COURSE Intel knew that they had the processor; otherwise the journalist would have committed a crime, and some of the people responsible at Intel might have been fired for security that bad.
    Last edited by gosh; 08-14-2008 at 05:41 PM.

  4. #4
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
    Do you think that AMD isn't able to run an 8800GT? That was a fast card not that long ago.

    If a game is only using one thread then I agree with you: the CPU would bottleneck the system if the video card is very fast. But do you really think that in games that use two or more cores effectively, the CPU is going to be the main bottleneck for advanced graphics?
    Wow, we are right back where we started, aren't we... A game using 4 threads effectively can still bottleneck the game. Both the Phenom and the QX9650 do this in Lost Planet.... Lost Planet is a multithreaded game and pushes all 4 cores. All the data I showed you earlier was multithreaded on all 4 cores for both the Phenom and the QX9650. The cave scene demonstrates that it is a bottleneck.

    The Phenom will most likely bottleneck the 4870 X2 (I say most likely because there is no data yet; I will have some in a few days). Even a QX9770 is holding back the X2 in some scenarios, based on data around the web.

    EDIT: Your favorite, GRID, is also CPU limited... badly. I dropped by the store today and grabbed a copy. This is one fascinating game.


    And about this idea that the CPU or the GPU bottlenecks the system: when an application runs it performs different types of work; sometimes it uses the CPU and sometimes it sends data to the GPU. It is very hard to balance workloads and use all the hardware in a way that hides bottlenecks. Threads need to be synchronized. If one thread runs very fast but has to wait for another thread, its speed doesn't matter; the slower thread will delay it. If you have bigger threads (ones that do more), it is harder to estimate how fast the work gets done. With one main thread it might be easier; I think Crysis and some other games are built like that. You will see work on the other cores, but one core is going crazy, so this approach won't use the whole CPU (all four cores). New games may use other techniques; they need to in order to take advantage of all the cores.
    How the game performs depends on its design. Maybe one game does all the calculation of how the "image" is displayed using four threads. When all those threads are ready, they may use one thread to send the data to the video card, and while that thread is sending, the data rate needs to be high. In between there may not be much traffic (only memory, and they are probably trying to optimize for cache hits). Games may use buffers etc. to speed things up, but when I look at how hard the CPU is working, it is very rare to see 100% on one core.

    EDIT: About Anand and Nehalem. OF COURSE Intel knew that they had the processor; otherwise the journalist would have committed a crime, and some of the people responsible at Intel might have been fired for security that bad.
    I am not sure why this is so hard to understand... the GPU is responsible for one of the workloads, and it only needs a small amount of information to complete its task. The CPU is responsible for completely different workloads, and it only needs to send information to the GPU before the next frame is rendered.

    If one waits on the other, then the holdup is a bottleneck at that computational resource. It is easy to see this in the data... if you increase the resolution and the FPS does not change, the CPU is the culprit; conversely, if the FPS changes when you change the resolution, the GPU is the bottleneck.
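    A minimal sketch of that test in C++ (measure_fps() is a hypothetical stand-in for an actual benchmark run, and the numbers are invented for a CPU-bound case):

        #include <cmath>
        #include <cstdio>

        // Hypothetical benchmark hook; a CPU-bound game returns the same
        // FPS no matter how many pixels the card has to push.
        double measure_fps(int width, int height) {
            (void)width; (void)height;
            return 60.0;  // invented: FPS rails at the CPU's limit
        }

        int main() {
            double low  = measure_fps(1024, 768);
            double high = measure_fps(1600, 1200);
            if (std::fabs(low - high) < 2.0)  // FPS barely moved with resolution
                std::printf("CPU-bound: the CPU is the culprit\n");
            else                              // FPS fell as the pixel count rose
                std::printf("GPU-bound: the card is the limit\n");
        }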

    You even pointed this out yourself when you linked TechReport on Lost Planet.

    Computationally, even with 4 threads, Intel runs gaming code faster... significantly faster. There are many good reasons for this. In fact, I am reviewing a paper for a study that will be published at RealWorldTech that goes through the reasons why Intel runs game code 20 to 50% faster than an equivalently clocked Phenom.
    Last edited by JumpingJack; 08-14-2008 at 07:28 PM.
    One hundred years from now It won't matter
    What kind of car I drove What kind of house I lived in
    How much money I had in the bank Nor what my clothes looked like.... But The world may be a little better Because, I was important In the life of a child.
    -- from "Within My Power" by Forest Witcraft

  5. #5
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    I think you should read up on game programming (DirectX, etc.) instead of reading papers about this and that. Then you will understand more.

  6. #6
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
    I think you should read up on game programming (DirectX, etc.) instead of reading papers about this and that. Then you will understand more.
    I have read a lot of papers about this.... threading the processor and producing the computational result is part of the CPU's duty.... it doesn't change the fact that Intel produces a faster result on 4 threads overall.

    However, why don't you cite your references (as I have done).... I am always curious to read up.

    However, if you are going to go into detail on thread locking and blocking, that is moot. I am well read on that, and it has nothing to do with whether the CPU or the GPU is the holdup in the graphical output of 3D gaming.

    Jack
    Last edited by JumpingJack; 08-14-2008 at 07:29 PM.
    One hundred years from now It won't matter
    What kind of car I drove What kind of house I lived in
    How much money I had in the bank Nor what my clothes looked like.... But The world may be a little better Because, I was important In the life of a child.
    -- from "Within My Power" by Forest Witcraft

  7. #7
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
    Do you think that AMD isn't able to run an 8800GT? That was a fast card not that long ago.
    Oops, just saw this.... and yes, when nVidia launched the G80 core, AMD CPUs bottlenecked that card in some cases, and in most cases with the GTX:

    http://www.firingsquad.com/hardware/...ling/page5.asp

    As an example.... now, you seem to have a hard time connecting the observed FPS with what is limiting the FPS, be it the CPU or the GPU.

    Those of you with AMD CPUs who were planning on upgrading to GeForce 8800 will want to look over our performance results on the preceding pages carefully, especially if you planned on upgrading to the GeForce 8800 GTX. In many cases you’ll find that the GeForce 8800 GTX is so powerful that it is CPU-bound with AMD’s flagship FX-62 in games like Quake 4, and Source engine games like Dark Messiah and Half-Life 2 Lost Coast.
    But again, Gosh, it is not hard to understand.

    This is the basics of game processing on a platform today. The GPU has local memory with uber gobs of bandwidth to the GPU (because GPUs are such high-throughput beasts); all the data it needs to render the scene is located locally in the video RAM (textures, vertices, etc.). The only information the GPU needs is the details of where objects are located in 3D space, particles, baddies, etc.

    So a game runs, and the CPU is responsible for calculating the data needed for a frame as described above... the GPU then receives this data and processes the frame to render it; when the GPU finishes, it is ready to receive the next frame of data from the CPU, and so forth and so on...

    The CPU calculates its information on the frame any way it wants: single, double, or quadruple threaded. The GPU does not care -- it simply needs the next frame of data from the CPU.

    Two scenarios. The CPU finishes its calculation before the GPU finishes the prior frame... thus the CPU waits. The other scenario is that the GPU finishes rendering the frame before the CPU is ready to send the information... the GPU waits.

    It is that simple.
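    A toy model of those two scenarios in C++ (the per-frame times are invented): with CPU work and GPU rendering overlapped, the steady-state frame time is set by whichever side is slower, and the faster side idles for the difference.

        #include <algorithm>
        #include <cstdio>

        int main() {
            const double cpu_ms = 10.0;  // CPU time to prepare one frame of data
            const double gpu_ms = 6.0;   // GPU time to render one frame

            double frame_ms = std::max(cpu_ms, gpu_ms);  // slower side wins
            std::printf("frame time %.1f ms -> %.0f FPS\n",
                        frame_ms, 1000.0 / frame_ms);
            if (cpu_ms > gpu_ms)
                std::printf("GPU idles %.1f ms per frame: CPU is the bottleneck\n",
                            cpu_ms - gpu_ms);
            else
                std::printf("CPU idles %.1f ms per frame: GPU is the bottleneck\n",
                            gpu_ms - cpu_ms);
        }

    Raising the resolution raises gpu_ms but not cpu_ms, which is why FPS only responds to resolution when the GPU is the slower side.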

    Before you go on about programming and game programming and stuff.... forget about the code. Just answer a simple question.

    If I run a game at 1024x768 and measure the FPS, then run exactly the same game at 1600x1200 and again measure the FPS, should the FPS go up or down with resolution?

    jack
    Last edited by JumpingJack; 08-14-2008 at 07:57 PM.
    One hundred years from now It won't matter
    What kind of car I drove What kind of house I lived in
    How much money I had in the bank Nor what my clothes looked like.... But The world may be a little better Because, I was important In the life of a child.
    -- from "Within My Power" by Forest Witcraft
