
Thread: Intel Q9450 vs Phenom 9850 - ATI HD3870 X2

    Quote Originally Posted by gosh
    Face it! No matter what I say you are not going to listen. For you this discussion isn't about technology, it's about pride.
    It isn't possible to understand the processor if you aren't doing development and checking for yourself.
    Maybe in a month or two the speed_test will be good enough for non-programmers to use to test and check behavior. You can design your own tests with it (I know the code was too complicated for you to understand that).
    No.... I will listen... I linked data above showing exactly the same behavior in Lost Planet, with the Phenom taking over the FPS lead at high resolutions. I guarantee you, if you show me data that proves your hypothesis I will agree....

    Your problem is that you completely lack the capacity for analytical thought. The second problem is that you're not correct; you extrapolate a conclusion from a single observation... i.e. you have jumped to a conclusion based on a preconceived notion of how you think a CPU handles itself in a graphically intensive environment.

    Showing a bunch of benchmarks where the game is GPU-limited, then claiming it is because the FSB jams up the threads, is not proving your theory. Your speed_test algorithm is going to do nothing but choke the system, produce the result you want, and then you will proclaim greatness. Unfortunately, it does not represent reality.

    For your hypothesis to hold water, you must demonstrate that the workload in a real-world environment is actually creating this situation across the board. That is not the case....

    So let's think about it... again, if I run a game at high resolution and measure the frame rate, then change the FSB bandwidth and run it again, I should get a different frame rate even in the high-resolution, GPU-limited case if the FSB were really the bottleneck....

    So here you go... a multithreaded game, with benchmark scenes in both the GPU-limited and CPU-limited domains....

    Lost Planet: Extreme Condition
    GeForce 8800 GTX
    QX9650 @ 2.4 GHz
    DDR2-800
    1680x1050, 8xAA, 8xFSAA

    FSB = 1600 MHz, BW = 12.8 GB/s


    FSB = 800 MHz, BW = 6.4 GB/s


    I have cut the BW in half, the very FSB bandwidth your little tiny threads on a C2Q are supposedly starving for, with all that latency, and what happens....

    At 12.8 GB/s (1600 MHz FSB)
    Snow = 63.8
    Cave = 85.7

    At 6.4 GB/s (800 MHz FSB)
    Snow = 64.0
    Cave = 80.1

    Heck, the GPU-limited run is even a bit higher in the limited-BW regime... so take a look: 6.4 to 12.8 GB/s is a 100% increase in BW, 2x... yet Snow is the same, and you get maybe 7% in Cave....
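    The bandwidth figures and the FPS deltas above are easy to sanity-check with a few lines of arithmetic (a sketch, assuming the standard 64-bit / 8-byte FSB data bus):

    ```python
    # Theoretical FSB bandwidth: transfer rate (MT/s) times an 8-byte (64-bit) bus.
    def fsb_bandwidth_gb_s(mt_per_s):
        return mt_per_s * 8 / 1000

    # Percentage FPS gain going from the low-BW run to the high-BW run.
    def pct_gain(low_bw_fps, high_bw_fps):
        return (high_bw_fps - low_bw_fps) / low_bw_fps * 100

    print(fsb_bandwidth_gb_s(1600))              # 12.8 GB/s at 1600 MT/s
    print(fsb_bandwidth_gb_s(800))               # 6.4 GB/s at 800 MT/s
    print(round(pct_gain(63.8, 64.0), 1))        # Snow: ~0.3% -- flat, GPU-limited
    print(round(pct_gain(80.1, 85.7), 1))        # Cave: ~7.0% from doubling BW
    ```

    So doubling the bandwidth buys roughly 7% in the CPU-sensitive scene and nothing at all in the GPU-limited one.
    
    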

    EDIT: Shoot, why not even add a run with the CPU pumped up... again, no real difference. So now we have the CPU uber fast, the bus uber slow, but the FPS does not change...

    QX9650@ 3.0 GHz, (FSB=800 MHz) 1680x1050 8xAA 8xFSAA


    QX9650 @ 3.0 GHz (FSB=1600 MHz) 1680x1050 8xAA 8xFSAA



    At 12.8 GB/s (1600 MHz FSB, CPU @ 3.0 GHz)
    Snow = 64.2
    Cave = 90.2

    At 6.4 GB/s (800 MHz FSB, CPU @ 3.0 GHz)
    Snow = 64.2
    Cave = 87.4
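    The 3.0 GHz runs tell the same story; checking the deltas with plain arithmetic on the figures above:

    ```python
    # Percentage FPS change when FSB bandwidth doubles (6.4 -> 12.8 GB/s),
    # using the 3.0 GHz figures quoted above.
    def pct_gain(low_bw_fps, high_bw_fps):
        return (high_bw_fps - low_bw_fps) / low_bw_fps * 100

    print(round(pct_gain(64.2, 64.2), 1))   # Snow: 0.0 -- fully GPU-limited
    print(round(pct_gain(87.4, 90.2), 1))   # Cave: ~3.2 -- 2x the BW buys ~3%
    ```
    
    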


    Explain that. I have done the same for Crysis, the same for HL2, yada yada... the answer is still the same.
    Last edited by JumpingJack; 08-10-2008 at 01:48 PM.
    One hundred years from now it won't matter
    what kind of car I drove, what kind of house I lived in,
    how much money I had in the bank, nor what my clothes looked like...
    but the world may be a little better because I was important in the life of a child.
    -- from "Within My Power" by Forest Witcraft
