
Thread: Intel Q9450 vs Phenom 9850 - ATI HD3870 X2


  1. #1
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by pumbertot View Post
    logic does not work against fanboyism.
    The problem is this.... at high res (> say 1600x1200), depending on the game, the GPU becomes a major factor. The bus mastering involved in pushing textures around becomes a factor. At typical gameplay settings, the C2Q really holds no major advantage over the Phenom ...

    His reason for it is wrong.
    One hundred years from now it won't matter what kind of car I drove, what kind of house I lived in, how much money I had in the bank, nor what my clothes looked like.... But the world may be a little better because I was important in the life of a child.
    -- from "Within My Power" by Forest Witcraft

  2. #2
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by JumpingJack View Post
    His reason for it is wrong.
    Face it! No matter what I say you are not going to listen. This discussion for you isn't about technology, it's about pride.
    It isn't possible to understand the processor if you aren't doing development and checking for yourself.
    Maybe in a month or two the speed_test will be good enough for non-programmers to use to test and check behavior. You can design your own tests with it (I know the code was too complicated for you to understand).

  3. #3
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by gosh View Post
    Face it! No matter what I say you are not going to listen. This discussion for you isn't about technology, it's about pride.
    It isn't possible to understand the processor if you aren't doing development and checking for yourself.
    wtf... the only one who's rambling about pride is you. You always make out that everyone is clueless for not knowing how to program, and that this automatically means they can't understand how a CPU works... so I guess all hardware designers are software engineers.

  4. #4
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by Hornet331 View Post
    wtf... the only one who's rambling about pride is you.
    The only one who is trying to explain facts and how the processor works is me. I haven't seen anyone else here trying to explain anything.
    Maybe you could explain WHY some of you think that the FSB isn't a problem even though there are thousands of links describing this problem. Games are one type of application with the highest demands on hardware. Maybe you could provide some tests showing games at high res comparing AMD and Intel?
    Tests that run games at low res, or games that don't use advanced graphics, are another matter.

    The repeated sentence "games are 100% GPU limited" is just one more proof that the person doesn't understand how the computer works. You can get close to 100% if something is very slow, but you will never reach 100% when two or more parts are working to get the job done.

  5. #5
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by gosh View Post
    The only one who is trying to explain facts and how the processor works is me. I haven't seen anyone else here trying to explain anything.
    Maybe you could explain WHY some of you think that the FSB isn't a problem even though there are thousands of links describing this problem. Games are one type of application with the highest demands on hardware. Maybe you could provide some tests showing games at high res comparing AMD and Intel?
    Tests that run games at low res, or games that don't use advanced graphics, are another matter.

    The repeated sentence "games are 100% GPU limited" is just one more proof that the person doesn't understand how the computer works. You can get close to 100% if something is very slow, but you will never reach 100% when two or more parts are working to get the job done.
    sry, but the only thing you tried to explain is that the C2D sucks at multithreading, not how it works.
    And no one said that a game is 100% GPU limited, check all the recent posts. It was said that at resolutions higher than 1600x1200 the GPU becomes the bottleneck and the CPU only plays a minor role.

    http://www.guru3d.com/article/cpu-sc...e-processors/9 Crysis bottlenecked at 1600x1200
    http://www.guru3d.com/article/cpu-sc...e-processors/8 Quake Wars bottlenecked @ 2560x1600
    http://www.guru3d.com/article/cpu-sc...e-processors/7 FEAR GPU bottlenecked at 1280x960

    btw you're quite active spamming that useless Race Driver GRID list everywhere -> http://forums.guru3d.com/showpost.ph...3&postcount=10
    and everywhere people tell you the same...

    for me that means there are 2 possibilities:
    a) you're a diehard fanboy, right at the same level as sharikou, in fact you remind me a lot of him.
    b) you are paid, keyword AEG
    Last edited by Hornet331; 08-10-2008 at 05:30 AM.

  6. #6
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by Hornet331 View Post

    And that test is done with an older card (very fast when the data is already on the card) for PCI Express 1.0. But OK, could you show one more test apart from that one? Or is this the only proof that you have?

  7. #7
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by gosh View Post
    And that test is done with an older card (very fast when the data is already on the card) for PCI Express 1.0. But OK, could you show one more test apart from that one? Or is this the only proof that you have?
    oh please, as if PCIe 2.0 would make any difference. Also, with newer cards it gets even harder to reach the GPU limit.

    There are more reviews out there, but mostly for older configs, because most review sites don't test the obvious... everyone knows that once you reach the limit of the GPU everything is the same regardless of what CPU is used.

    The only difference with newer graphics cards would be that the limit is pushed further up, so even at 2560x1600 you see differences in scores, without insane settings like 8xAA with 16xAF.

    If you don't believe that, just buy a Q6600 and an X4 9750, mix in some dual-core tests (disable cores on the quads) and test them with a 4870. You wouldn't get different results.

  8. #8
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
    And that test is done with an older card (very fast when the data is already on the card) for PCI Express 1.0. But OK, could you show one more test apart from that one? Or is this the only proof that you have?
    Now, if you are going to argue PCIe 2.0... you have an argument, but you are wrong about what is happening.

  9. #9
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
    The only one who is trying to explain facts and how the processor works is me. I haven't seen anyone else here trying to explain anything.
    Maybe you could explain WHY some of you think that the FSB isn't a problem even though there are thousands of links describing this problem. Games are one type of application with the highest demands on hardware. Maybe you could provide some tests showing games at high res comparing AMD and Intel?
    Tests that run games at low res, or games that don't use advanced graphics, are another matter.

    The repeated sentence "games are 100% GPU limited" is just one more proof that the person doesn't understand how the computer works. You can get close to 100% if something is very slow, but you will never reach 100% when two or more parts are working to get the job done.
    We have been explaining it to you until we are blue in the face. And please link up those thousands of links showing the FSB is a problem. What I read is that the FSB is aging and has lower bandwidth than a point-to-point solution -- a line typically trumpeted by AMD PR (hint: do not trust anyone who is telling you something while trying to sell you something). FSB bandwidth is only a problem when the workload demands more bandwidth than is available; this is the case in many HPC and some server applications, but for desktop/gaming it is a non-issue, and the data shows that. AMD has bandwidth limitations as well that can hold back performance, just look at the latency of a non-local transaction in their NUMA architecture.

    Games work along two workloads divided between two computational resources. The CPU calculates the collision boundaries, AI, physics (which is changing), etc. The GPU is responsible for rendering the scene, plotting vertices, painting textures.

    The GPU does this pixel by pixel, which is why it is called rasterization. Each pixel has its color and intensity calculated based on the information presented to the GPU. The GPU acquires the data, stores it in memory on the card dedicated to the GPU, and paints the texture based on a complex set of 3D-to-2D transformations. This is why GPUs are built the way they are... they do not need to virtualize memory, they do not need to branch predict, they just need to compute a lot of numbers. The sheer volume of data is the reason GPUs design in such high-bandwidth memory interconnects, so high that they put either AMD or Intel to shame. Nope, contrary to your misconception that the threads reach out to the GPU for all their work, the GPU is a standalone processor with (until recently) a single fixed-function purpose in mind... grab data as quickly as possible from video RAM (where it gets most of its info) and render it to the frame buffer where the RAMDACs can then put it on the screen.

    Thus, the load on the GPU is directly proportional to the total number of pixels, and the speed or rate at which the frame buffer is updated is dependent upon the quality and speed of the GPU. However, before the GPU can render that frame, it must have all the information about the scene finished... such as what the camera angle is, where each character is standing, what the models of the bad guys are doing in terms of animation -- this is the duty of the CPU. Conversely, the CPU cannot calculate its next frame of information until the GPU has finished doing its job.

    So you have two computational resources, each working on its own set of data, and each depends on the other to finish before it moves on....

    Thus, if the GPU is waiting on the CPU -- the case in Lost Planet's Cave due to all the models floating around -- then it is CPU limited. The opposite is true for the GPU: if the GPU is at full load crunching as fast as it can and the CPU is waiting on it to finish, then it is GPU limited.
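    If it helps, here is a minimal sketch of that dependency (the timings below are made up for illustration, not measurements):

        # Each frame the CPU prepares the scene (AI, physics, animation), then the
        # GPU renders it. In a pipelined engine the slower stage sets the frame rate.
        def frame_rate(cpu_ms, gpu_ms):
            return 1000.0 / max(cpu_ms, gpu_ms)  # the bottleneck stage wins

        # Hypothetical "Cave"-like scene: heavy CPU work, light GPU work -> CPU limited
        print(frame_rate(cpu_ms=12.0, gpu_ms=6.0))   # ~83 FPS
        # Hypothetical "Snow"-like scene: light CPU work, heavy GPU work -> GPU limited
        print(frame_rate(cpu_ms=4.0, gpu_ms=15.0))   # ~67 FPS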

    This is not terribly hard to understand. At a resolution of 640x480 the GPU must shade 307,200 pixels (there is a reason there is an 'x' when they quote resolutions). However, at 1680x1050 the GPU must shade 1,764,000 pixels... 5.74 times more pixels; multiply this by whatever oversampling you are doing and the computational demands become enormous. Demonstrating this is straightforward: run a game and measure the FPS from very low to very high resolution, but plot it against total pixels rendered. If the GPU is the one and only determinant of the output rate, then the results should drop monotonically as the number of pixels increases.... however, if the GPU is not the determinant of the output result, all other things being equal... there should be no observed change in rate.
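    And the arithmetic itself is trivial (the 4x oversampling factor at the end is just an assumed example, not a measured setting):

        # Pixel counts per frame at a few common resolutions, relative to 640x480
        base = 640 * 480                      # 307,200 pixels
        for w, h in [(640, 480), (1280, 1024), (1680, 1050), (2560, 1600)]:
            pixels = w * h
            print(f"{w}x{h}: {pixels:,} pixels = {pixels / base:.2f}x the 640x480 load")

        # 1680x1050 -> 1,764,000 pixels, 5.74x the base load; with 4x oversampling
        # the shading work is roughly 23x the 640x480 baseline.
        print(1680 * 1050 * 4 / base)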

    I use Lost Planet, it is my favorite for this because it has two scenes that push the envelope on either end. You have probably often read or heard that Snow is GPU bound and Cave is CPU bound.

    Lost Planet puts the latest hardware to good use via DirectX 10 and multiple threads—as many as eight, in fact. Lost Planet's developers have built a benchmarking tool into the game, and it tests two different levels: a snow-covered outdoor area with small numbers of large villains to fight, and another level set inside of a cave with large numbers of small, flying creatures filling the air. The former doesn't appear to be CPU-bound, so we'll be looking at the latter.
    http://techreport.com/articles.x/14756/5

    This is indeed true: run this game from 640x480 up to 1280x1024 and observe the Snow vs Cave behavior. Snow is clearly GPU bound, as it responds monotonically to the load presented to the GPU via the resolution selection. It has NOTHING to do with threading the CPU; the load and onus are on the GPU as resolution changes, all other things being equal. Cave, on the other hand, is clearly CPU bound, and consequently responds better to the strength of the CPU.

    QX9650 @ 2.5 GHz (FSB 1333) for Lost Planet


    Now, if we look at a Phenom 9850 @ 2.5 GHz (2000 MHz NB), again you can see the results are clear -- the difference is that at lower pixel loading (where the GPU is less taxed), it levels off for the Snow condition.... this is where the Phenom's weakness really shows up.



    Now, this is a multithreaded game that scales just fine on both platforms with core count... and when raising the question of which CPU threads game code better, most reviewers rightly look at the CPU-bound cases. Otherwise, using uber-high resolutions, you are extrapolating a statement about the CPU when the observation is actually dictated by the strength of the GPU -- which results in a false conclusion and a bunch of people like you spouting junk throughout the web.

    Explaining threading and how the CPU, memory, cache and their management function is probably beyond your understanding, and would take much longer.

    EDIT: Here are the screen dumps for the runs that generated those plots:
    http://forum.xcpus.com/gallery/v/Jum...QX9650Screens/
    http://forum.xcpus.com/gallery/v/Jum...PhenomScreens/

    And the original article I wrote on this very topic:
    http://www.xcpus.com/GetDoc.aspx?doc=12&page=1

    jack
    Last edited by JumpingJack; 08-10-2008 at 12:52 PM.

  10. #10
    Xtreme Enthusiast
    Join Date
    Apr 2008
    Posts
    912
    Quote Originally Posted by gosh View Post
    The only one who is trying to explain facts and how the processor works is me. I haven't seen anyone else here trying to explain anything.
    Maybe you could explain WHY some of you think that the FSB isn't a problem even though there are thousands of links describing this problem. Games are one type of application with the highest demands on hardware. Maybe you could provide some tests showing games at high res comparing AMD and Intel?
    Tests that run games at low res, or games that don't use advanced graphics, are another matter.

    The repeated sentence "games are 100% GPU limited" is just one more proof that the person doesn't understand how the computer works. You can get close to 100% if something is very slow, but you will never reach 100% when two or more parts are working to get the job done.

  11. #11
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by bowman View Post

  12. #12
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by bowman View Post
    ...
    omg.... that fits perfectly.

  13. #13
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
    Face it! No matter what I say you are not going to listen. This discussion for you isn't about technology, it's about pride.
    It isn't possible to understand the processor if you aren't doing development and checking for yourself.
    Maybe in a month or two the speed_test will be good enough for non-programmers to use to test and check behavior. You can design your own tests with it (I know the code was too complicated for you to understand).
    No.... I will listen... I linked above where I showed exactly the same behavior with Lost Planet, with the Phenom taking over the FPS lead at high resolutions. I guarantee you, if you show me data that proves your hypothesis I will agree....

    Your problem is that you completely lack the capacity for analytical thought. The second problem is that you're not correct; you extrapolate a conclusion from a single observation... i.e. you have jumped to a conclusion based on a preconceived notion of how you think a CPU handles itself in a graphically intensive environment.

    Showing a bunch of benchmarks where the game is GPU limited, then claiming it is because the FSB jams up the threads, does not prove your theory. Your speed_test algorithm is going to do nothing but choke the system up and get the result you want it to get, and then you will proclaim greatness. Unfortunately, it does not represent reality.

    For your hypothesis to hold water, you must demonstrate that the workload in a real-world environment is actually creating that situation across the board. This is not the case....

    So let's think about it... again, if I run a game at high resolution and measure the frame rate, then change the FSB bandwidth and run it again... I should get a different frame rate in the high-resolution case if the FSB really were the limiter....

    So here you go... a multithreaded game, which runs scripts in both the GPU- and CPU-limited domains....

    Lost Planet: Extreme Condition
    GeForce 8800 GTX
    QX9650 @ 2.4 GHz
    DDR2-800
    1680x1050, 8xAA, 8xFSAA

    FSB = 1600 MHz, BW = 12.8 GB/s


    FSB = 800 MHz, BW = 6.4 GB/s


    I have cut the bandwidth, the FSB, in half -- where your tiny little threads on a C2Q supposedly cannot find themselves, having such a hard time getting bandwidth, with all that latency -- and what happens....

    At 12.8 GB/s (1600 MHz FSB)
    Snow = 63.8
    Cave = 85.7

    At 6.4 GB/s (800 MHz FSB)
    Snow = 64.0
    Cave = 80.1

    Heck, the GPU-limited run is even a touch higher in the limited-bandwidth regime... so take a look, 6.4 to 12.8 GB/s is a 100% increase in bandwidth, 2x... yet Snow is the same, and you get maybe 7% in Cave....
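    For reference, the simple arithmetic behind those figures (nothing new here, just the FPS values from above and the standard 8-byte-wide FSB):

        # Peak FSB bandwidth: transfer rate (MT/s) * 8 bytes per transfer
        def fsb_gb_s(mt_s, bus_bytes=8):
            return mt_s * bus_bytes / 1000.0

        def pct_gain(slow_fps, fast_fps):
            return (fast_fps - slow_fps) / slow_fps * 100.0

        print(fsb_gb_s(1600))                    # 12.8 GB/s
        print(fsb_gb_s(800))                     #  6.4 GB/s
        print(round(pct_gain(80.1, 85.7), 1))    # Cave: ~7.0% from doubling FSB bandwidth
        print(round(pct_gain(64.0, 63.8), 1))    # Snow: ~ -0.3%, i.e. within noise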

    EDIT: Shoot, why not even add a run with the CPU pumped up... again, no real difference. So now we have the CPU uber fast and the bus uber slow, but the FPS does not change...

    QX9650 @ 3.0 GHz (FSB = 800 MHz), 1680x1050, 8xAA, 8xFSAA


    QX9650 @ 3.0 GHz (FSB = 1600 MHz), 1680x1050, 8xAA, 8xFSAA



    At 12.8 GB/s (1600 MHz FSB, CPU @ 3.0 GHz)
    Snow = 64.2
    Cave = 90.2

    At 6.4 GB/s (800 MHz FSB, CPU @ 3.0 GHz)
    Snow = 64.2
    Cave = 87.4


    Explain that. I have done the same for Crysis, the same for HL2, yada yada... the answer is still the same.
    Last edited by JumpingJack; 08-10-2008 at 01:48 PM.
