
Thread: Intel Q9450 vs Phenom 9850 - ATI HD3870 X2

  1. #51
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by JumpingJack View Post
    His reason for it is wrong.
Face it! No matter what I say, you are not going to listen. This discussion, for you, isn't about technology; it's about pride.
It isn't possible to understand the processor if you aren't doing development and checking for yourself.
Maybe in a month or two speed_test will be good enough for non-programmers to use to test and check behavior. You can design your own tests with it (I know the code was too complicated for you to understand).

  2. #52
    Xtreme Addict
    Join Date
    Oct 2006
    Location
    England, UK
    Posts
    1,838
    Quote Originally Posted by pumbertot View Post
    logic does not work against fanboyism.
    That pretty much says it all.

Gosh's theory that the C2D and C2Q are only good at single-threaded apps is possibly one of the most undeniably stupid things I have ever heard anyone say. Maybe I should buy a Phenom for all my video encoding, as that's all multithreaded.

  3. #53
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by Scubar View Post
Gosh's theory that the C2D and C2Q are only good at single-threaded apps is possibly one of the most undeniably stupid things I have ever heard anyone say. Maybe I should buy a Phenom for all my video encoding, as that's all multithreaded.
That's not what I said. C2D/C2Q is designed for single-threaded applications; that doesn't mean it can't run multithreaded applications. Depending on how the application has been developed, the performance will shift towards Phenom. Exactly where Phenom takes the lead depends on the application. But in threaded applications where the threads don't talk to each other and don't use much memory, Intel will perform well.
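A minimal sketch of the kind of inter-thread traffic being described, using a generic example rather than gosh's speed_test: two threads that never read each other's data, but whose counters share a cache line, still force that line to bounce between cores (across the FSB on a C2Q when the busy cores sit on different dies), while padded counters avoid the traffic.

```cpp
// Minimal false-sharing illustration (hypothetical example, not gosh's speed_test).
// Build with: g++ -O2 -std=c++11 -pthread false_sharing.cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>

struct Packed { volatile uint64_t a = 0, b = 0; };       // both counters in one cache line
struct Padded {
    alignas(64) volatile uint64_t a = 0;                  // each counter on its own cache line
    alignas(64) volatile uint64_t b = 0;
};

template <typename Counters>
double run(Counters& c, uint64_t iters) {
    auto start = std::chrono::steady_clock::now();
    std::thread t1([&] { for (uint64_t i = 0; i < iters; ++i) c.a = c.a + 1; });
    std::thread t2([&] { for (uint64_t i = 0; i < iters; ++i) c.b = c.b + 1; });
    t1.join();
    t2.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
}

int main() {
    const uint64_t iters = 100000000;  // 100M increments per thread
    Packed packed;
    Padded padded;
    // The two threads never touch each other's counter, yet the packed version is
    // typically much slower: ownership of the shared cache line keeps bouncing
    // between the two cores' caches, which is exactly the coherency traffic that
    // crosses the FSB when the cores sit on different dies.
    std::cout << "shared cache line: " << run(packed, iters) << " s\n";
    std::cout << "padded counters:   " << run(padded, iters) << " s\n";
}
```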
If a game engine is developed for the PC, then it is probably optimized for Intel as well; developers design the game for the most common computer and make some tradeoffs on functionality. The thing is that the PC isn't that important for game engines anymore.
Games that are developed for the PS3 should perform better on AMD than on Intel C2D/C2Q. I only have basic knowledge of the Cell processor, but if a game is optimized for the PS3 it isn't possible to tweak it that much for Intel without redesigning the engine. AMD works better for games that are optimized for the PS3, if what I have read is right. Race Driver: GRID is one game that could be used to test this; more will be out soon, I think, so it should become clearer whether I am right or wrong.
Another game engine that is optimized for the PS3 is id Tech 5:

    We put so much more effort into optimizing on the PS3.

  4. #54
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by gosh View Post
Face it! No matter what I say, you are not going to listen. This discussion, for you, isn't about technology; it's about pride.
It isn't possible to understand the processor if you aren't doing development and checking for yourself.
wtf... the only one who's rambling about pride is you. You always point out that everyone is a retard for not knowing how to program, and that this automatically means they can't understand how a CPU works... so I guess all hardware designers are software engineers.

  5. #55
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by Hornet331 View Post
wtf... the only one who's rambling about pride is you.
The only one who is trying to explain facts and how the processor works is me. I haven't seen anyone else here trying to explain.
Maybe you could explain WHY some of you think the FSB isn't a problem even though there are thousands of links describing this problem. Games are one type of application that places the highest demands on hardware. Maybe you could provide some tests of games at high resolution comparing AMD and Intel?
Tests that run games at low resolution, or games that don't use advanced graphics, are another matter.

The repeated sentence "games are 100% GPU limited" is just more proof that the person doesn't understand how the computer works. You can get close to 100% if something is very slow, but you will never reach 100% when two or more parts are working together to get the job done.

  6. #56
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by gosh View Post
The only one who is trying to explain facts and how the processor works is me. I haven't seen anyone else here trying to explain.
Maybe you could explain WHY some of you think the FSB isn't a problem even though there are thousands of links describing this problem. Games are one type of application that places the highest demands on hardware. Maybe you could provide some tests of games at high resolution comparing AMD and Intel?
Tests that run games at low resolution, or games that don't use advanced graphics, are another matter.

The repeated sentence "games are 100% GPU limited" is just more proof that the person doesn't understand how the computer works. You can get close to 100% if something is very slow, but you will never reach 100% when two or more parts are working together to get the job done.
Sorry, but the only thing you tried to explain is that the C2D sucks at multithreading, not how it works.
And no one said that a game is 100% GPU limited; check all the recent posts. It was said that at resolutions higher than 1600x1200 the GPU becomes the bottleneck and the CPU plays only a minor role.

http://www.guru3d.com/article/cpu-sc...e-processors/9 Crysis bottlenecked at 1600x1200
http://www.guru3d.com/article/cpu-sc...e-processors/8 Quake Wars bottlenecked @ 2560x1600
http://www.guru3d.com/article/cpu-sc...e-processors/7 FEAR GPU bottlenecked at 1280x960

BTW, you're quite active spamming that useless Race Driver: GRID list everywhere -> http://forums.guru3d.com/showpost.ph...3&postcount=10
and everywhere people tell you the same thing...

For me that means there are two possibilities:
a) you're a diehard fanboy, right at the same level as Sharikou; in fact, you remind me a lot of him.
b) you are paid; keyword: AEG.
    Last edited by Hornet331; 08-10-2008 at 05:30 AM.

  7. #57
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by Hornet331 View Post

And that test was done with an older card (very fast once the data is in the card) on PCI Express 1.0. But OK, could you show one more test apart from that one? Or is this the only proof you have?

  8. #58
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by gosh View Post
And that test was done with an older card (very fast once the data is in the card) on PCI Express 1.0. But OK, could you show one more test apart from that one? Or is this the only proof you have?
Oh please, as if PCIe 2.0 would make any difference. Also, with newer cards it gets even harder to reach the GPU limit.

There are more reviews out there, but mostly for older configs, because most review sites don't test the obvious... everyone knows that when you reach the limit of the GPU, everything is the same regardless of what CPU is used.

The only difference with newer graphics cards is that the limit is pushed further up, so even at 2560x1600 you see differences in scores without insane settings like 8xAA with 16xAF.

If you don't believe that, just buy a Q6600 and an X4 9750, mix in some dual-core tests (disable cores on the quads), and test them with a 4870. You won't get different results.

  9. #59
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
Face it! No matter what I say, you are not going to listen. This discussion, for you, isn't about technology; it's about pride.
It isn't possible to understand the processor if you aren't doing development and checking for yourself.
Maybe in a month or two speed_test will be good enough for non-programmers to use to test and check behavior. You can design your own tests with it (I know the code was too complicated for you to understand).
No.... I will listen... I linked above where I showed exactly the same behavior with Lost Planet, with Phenom taking over the FPS lead at high resolutions. I guarantee you, if you show me data that proves your hypothesis, I will agree....

Your problem is that you completely lack the capacity for analytical thought. The second problem is that you're not correct; you extrapolate a conclusion from a single observation... i.e., you have jumped to a conclusion based on a preconceived notion of how you think a CPU handles itself in a graphically intensive environment.

Showing a bunch of benchmarks where the game is GPU limited, then claiming it is because the FSB jams up the threads, does not prove your theory. Your speed_test algorithm is going to do nothing but choke the system up and get the result you want it to get, and then you will proclaim greatness. Unfortunately, it does not represent reality.

For your hypothesis to hold water, you must demonstrate that the workload in a real-world environment actually creates this situation across the board. It does not....

So let's think about it... again: if I run a game at high resolution and measure the frame rate, then change the FSB bandwidth and run it again... I should get a different frame rate, even in the high-resolution (GPU-limited) case, if the FSB were really the limiter....

So here you go... a multithreaded game whose built-in benchmark runs scenes in both the GPU-limited and CPU-limited domains....

Lost Planet: Extreme Condition
8800 GTX
    QX9650 @ 2.4 GHz
    DDR2-800
    1680x1050, 8xAA, 8XFSAA

FSB = 1600 MHz, BW = 12.8 GB/s


FSB = 800 MHz, BW = 6.4 GB/s


I have cut the BW of the FSB in half, the bus where your little threads on a C2Q supposedly have such a hard time getting bandwidth, with all that latency... and what happens?

At 12.8 GB/s (1600 MHz FSB)
    Snow = 63.8
    Cave = 85.7

At 6.4 GB/s (800 MHz FSB)
    Snow = 64.0
    Cave = 80.1

Heck, the GPU-limited run is even a bit higher in the limited-BW regime... so take a look, 6.4 to 12.8 is a 100% increase in BW, 2x... yet Snow is the same, and you get maybe 6% in Cave....
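For reference, those bandwidth figures follow from the transfer rate (1600 and 800 million transfers per second) times the 8-byte width of the FSB data bus, and the deltas above work out to essentially nothing for Snow and roughly the 6-7% quoted for Cave:

\[
BW_{1600} = 1600 \times 10^{6}\,\tfrac{\text{transfers}}{\text{s}} \times 8\,\text{B} = 12.8\,\text{GB/s}, \qquad
BW_{800} = 6.4\,\text{GB/s}
\]
\[
\text{Cave: } \frac{85.7 - 80.1}{80.1} \approx 7\%, \qquad
\text{Snow: } \frac{63.8 - 64.0}{64.0} \approx -0.3\%
\]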

EDIT: Shoot, why not add a run with the CPU pumped up as well ... again, no real difference. So now we have the CPU uber fast and the bus uber slow, but the FPS does not change ...

QX9650 @ 3.0 GHz (FSB = 800 MHz), 1680x1050, 8xAA, 8xFSAA


QX9650 @ 3.0 GHz (FSB = 1600 MHz), 1680x1050, 8xAA, 8xFSAA



At 12.8 GB/s (1600 MHz FSB, CPU @ 3.0 GHz)
    Snow = 64.2
    Cave = 90.2

At 6.4 GB/s (800 MHz FSB, CPU @ 3.0 GHz)
    Snow = 64.2
    Cave = 87.4


    Explain that. I have done the same for Crysis, the same for HL2, yada yada... the answer is still the same.
    Last edited by JumpingJack; 08-10-2008 at 01:48 PM.
    One hundred years from now It won't matter
    What kind of car I drove What kind of house I lived in
How much money I had in the bank Nor what my clothes looked like.... But The world may be a little better Because, I was important In the life of a child.
    -- from "Within My Power" by Forest Witcraft

  10. #60
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
That's not what I said. C2D/C2Q is designed for single-threaded applications; that doesn't mean it can't run multithreaded applications.
No, what you implied is that Phenom is a better multitasker (well, let's correct that terminology): you really meant that Phenom executes multithreaded code better, and is thus better at anything multithreaded.

This is half correct and half wrong. Phenom will, in some cases, scale better with thread count ... this is a given. The problem is that, core for core, Phenom averages 20-40% weaker IPC than an Intel core... Intel loses about 3-10% of potential scaling because two caches need to arbitrate across a parallel external bus (which, I agree with you, is slower than other solutions available). It could scale 10% or even 20% worse to 4 threads and the raw computational performance would still outperform clock for clock. So the real question is: does the disadvantage of cohering the caches across an external bus really matter? Does it cause FPS to dip at high res? The answer, of course, is a resounding NO, and experimental treatments show this conclusively.
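Rough, illustrative numbers for that tradeoff, reading "20-40% weaker IPC" as Phenom delivering 60-80% of an Intel core's per-clock throughput (back-of-the-envelope bounds, not measurements): even in the worst case for Intel, a 1.25x per-core advantage eroded by a 20% scaling penalty at four threads lands at parity, and anything better puts it ahead.

\[
\text{worst case: } \frac{1}{0.8} \times (1 - 0.20) = 1.0, \qquad
\text{best case: } \frac{1}{0.6} \times (1 - 0.10) \approx 1.5
\]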

Which reminds me.... you claimed that at 5 threads or greater the scaling stopped because of poor coding. That was a real laugher; I am glad you posted it.... highly entertaining.

So is it still poor coding when a dual core stops scaling after 2 threads?

    Last edited by JumpingJack; 08-10-2008 at 01:27 PM.

  11. #61
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
And that test was done with an older card (very fast once the data is in the card) on PCI Express 1.0. But OK, could you show one more test apart from that one? Or is this the only proof you have?
Now, if you are going to argue PCIe 2.0 ... you have an argument, but you are wrong about what is happening.

  12. #62
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
lol Jack, stop owning him; he probably won't come back after that.

  13. #63
    Xtreme Addict
    Join Date
    May 2005
    Posts
    1,656
For some reason the JumpingJack posts give me that "I can fly a helicopter perfectly with no experience because I just stayed at a Holiday Inn Express" feeling!
    Work Rig: Asus x58 P6T Deluxe, i7 950 24x166 1.275v, BIX2/GTZ/D5
    3x2048 GSkill pi Black DDR3 1600, Quadro 600
    PCPower & Cooling Silencer 750, CM Stacker 810

    Game Rig: Asus x58 P6T, i7 970 24x160 1.2v HT on, TRUE120
    3x4096 GSkill DDR3 1600, PNY 660ti
    PCPower & Cooling Silencer 750, CM Stacker 830

    AMD Rig: Biostar TA790GX A2+, x4 940 16x200, stock hsf
    2x2gb Patriot DDR2 800, PowerColor 4850
    Corsair VX450

  14. #64
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
The only one who is trying to explain facts and how the processor works is me. I haven't seen anyone else here trying to explain.
Maybe you could explain WHY some of you think the FSB isn't a problem even though there are thousands of links describing this problem. Games are one type of application that places the highest demands on hardware. Maybe you could provide some tests of games at high resolution comparing AMD and Intel?
Tests that run games at low resolution, or games that don't use advanced graphics, are another matter.

The repeated sentence "games are 100% GPU limited" is just more proof that the person doesn't understand how the computer works. You can get close to 100% if something is very slow, but you will never reach 100% when two or more parts are working together to get the job done.
We have been explaining it to you until we are blue in the face. And please link up those thousands of links showing the FSB is a problem. What I read is that the FSB is aging and has lower bandwidth than a point-to-point solution --- a line typically trumpeted by AMD PR (hint: do not trust anyone telling you something when they are trying to sell you something). FSB BW is only a problem when the workload demands more BW than is available; this is the case in many HPC and some server applications, but for desktop/gaming it is a non-issue, and the data shows that. AMD has BW limitations as well that can hold back performance; just look at the latency of a non-local transaction in their NUMA architecture.

Games split their work into two workloads divided between two computational resources. The CPU calculates collision boundaries, AI, physics (which is changing), etc. The GPU is responsible for rendering the scene, plotting vertices, and painting textures.

The GPU does this pixel by pixel; this is why it is called rasterization. Each pixel has its color and intensity calculated based on the information presented to the GPU. The GPU acquires the data, stores it in memory on the card dedicated to the GPU, and paints the textures based on a complex set of 3D-to-2D transformations; this is why GPUs are built the way they are... they do not need to virtualize memory, they do not need to branch predict, they just need to compute a lot of numbers. The sheer volume of data is the reason GPUs are designed with such high-bandwidth memory interconnects, so high that they put either AMD or Intel to shame. Nope, contrary to your misconception that the threads reach out to the GPU for all of its work, the GPU is a standalone processor with (until recently) a single fixed-function purpose in mind... grab data as quickly as possible from video RAM (where it gets most of its info) and render it to the frame buffer, where the RAMDACs can then put it on the screen.

Thus, the load on the GPU is directly proportional to the total number of pixels, and the speed or rate at which the frame buffer is updated depends on the quality and speed of the GPU. However, before the GPU can render a frame, it must have all the information about that scene finished... such as what the camera angle is, where each character is standing, and what the bad guys' models are doing in terms of animation -- this is the duty of the CPU. Conversely, the CPU cannot calculate its next frame of information until the GPU has finished doing its job.

So you have two computational resources, each working on its own set of data, and each depending on the other to finish before it moves on....

Thus, if the GPU is waiting on the CPU -- the case in Lost Planet's Cave scene, with all the models floating around -- then it is CPU limited. The opposite is true for the GPU: if the GPU is at full load, crunching as fast as it can, while the CPU waits on it to finish, then it is GPU limited.
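In the simplest one-frame-in-flight picture described here (an idealized model, not a claim about any particular engine), the frame time is roughly the sum of the two stages, and "CPU limited" vs "GPU limited" just says which term dominates; with the two stages overlapped, it tends toward the larger of the two instead:

\[
t_{\text{frame}} \approx t_{\text{CPU}} + t_{\text{GPU}} \ \ \text{(fully serial)}, \qquad
t_{\text{frame}} \approx \max(t_{\text{CPU}}, t_{\text{GPU}}) \ \ \text{(pipelined)}, \qquad
\text{FPS} = \frac{1}{t_{\text{frame}}}
\]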

This is not terribly hard to understand. At a resolution of 640x480 the GPU must shade 307,200 pixels (there is a reason there is an 'x' when they quote resolutions). At 1680x1050, however, the GPU must shade 1,764,000 pixels .. 5.74 times more pixels; multiply that by whatever oversampling you are doing and the computational demands become enormous. Demonstrating this is straightforward: run a game and measure the FPS from very low to very high resolution, but plot it against total pixels rendered. If the GPU is the one and only determinant of the output rate, then the results should drop monotonically as the number of pixels increases.... However, if the GPU is not the determinant of the output rate, all other things being equal... there should be no observed change in rate.
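A trivial sketch of that pixel-load arithmetic (the resolution list is illustrative; the FPS numbers you would plot against these counts have to come from your own runs):

```cpp
// Pixel-load arithmetic behind the GPU-bound test described above. Measured FPS
// is deliberately not included; the idea is to plot your own benchmark numbers
// against these pixel counts and check for a monotonic drop (GPU bound) versus
// a flat line (CPU bound).
#include <cstdio>

int main() {
    const struct { int w, h; } res[] = {
        {640, 480}, {800, 600}, {1024, 768}, {1280, 1024}, {1680, 1050}, {2560, 1600}
    };
    const double base = 640.0 * 480.0;  // 307,200 pixels
    for (const auto& r : res) {
        const double pixels = static_cast<double>(r.w) * r.h;
        std::printf("%4dx%-4d : %9.0f pixels  (%.2fx the 640x480 load)\n",
                    r.w, r.h, pixels, pixels / base);
    }
    return 0;
}
```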

I use Lost Planet; it is my favorite for this because it has two scenes that push the envelope on either end. You have probably often read or heard that Snow is GPU bound and Cave is CPU bound.

    Lost Planet puts the latest hardware to good use via DirectX 10 and multiple threads—as many as eight, in fact. Lost Planet's developers have built a benchmarking tool into the game, and it tests two different levels: a snow-covered outdoor area with small numbers of large villains to fight, and another level set inside of a cave with large numbers of small, flying creatures filling the air. The former doesn't appear to be CPU-bound, so we'll be looking at the latter.
    http://techreport.com/articles.x/14756/5

This is indeed true: run this game from 640x480 up to 1280x1024 and observe the Snow vs Cave behavior. Snow is clearly GPU bound, as it responds monotonically to the load presented to the GPU via the resolution selection. It has NOTHING to do with threading on the CPU; the load and the onus are on the GPU as resolution changes, all other things being equal. Cave, on the other hand, is clearly CPU bound, and consequently responds better to the strength of the CPU.

QX9650 @ 2.5 GHz (FSB 1333) for Lost Planet


Now, if we look at a Phenom 9850 @ 2.5 GHz (2000 MHz NB), again you can see the results are clear -- the difference is that at lower pixel loading (where the GPU is less taxed), it levels off for the Snow condition .... this is where the Phenom's weakness really shows up.



Now, this is a multithreaded game that scales just fine on both platforms with core count ... and when asking which CPU threads game code better, most reviewers rightly look at the CPU-bound cases; otherwise, using uber-high resolutions, you are extrapolating a statement about the CPU when the observation is actually dictated by the strength of the GPU -- which results in false conclusions and a bunch of people like you spouting junk throughout the web.

Explaining threading and how the CPU, memory, and cache are managed and function together is probably beyond your understanding, and would take much longer.

    EDIT: Here are the screen dumps for the runs that generated those plots:
    http://forum.xcpus.com/gallery/v/Jum...QX9650Screens/
    http://forum.xcpus.com/gallery/v/Jum...PhenomScreens/

    And the original article I wrote on this very topic:
    http://www.xcpus.com/GetDoc.aspx?doc=12&page=1

    jack
    Last edited by JumpingJack; 08-10-2008 at 12:52 PM.

  15. #65
    Xtreme Enthusiast
    Join Date
    Apr 2008
    Posts
    912
    Quote Originally Posted by gosh View Post
The only one who is trying to explain facts and how the processor works is me. I haven't seen anyone else here trying to explain.
Maybe you could explain WHY some of you think the FSB isn't a problem even though there are thousands of links describing this problem. Games are one type of application that places the highest demands on hardware. Maybe you could provide some tests of games at high resolution comparing AMD and Intel?
Tests that run games at low resolution, or games that don't use advanced graphics, are another matter.

The repeated sentence "games are 100% GPU limited" is just more proof that the person doesn't understand how the computer works. You can get close to 100% if something is very slow, but you will never reach 100% when two or more parts are working together to get the job done.

  16. #66
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by bowman View Post

  17. #67
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by bowman View Post
    ...
    omg.... that fits perfectly.

  18. #68
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    California
    Posts
    1,461
Being a bigot doesn't do anyone any good. Back in the early 2000s, I bought into the Northwood and Prescott P4 hype, and I was confused when they stopped shipping 3.33 GHz laptops.
Then I started reading about AMD and the Athlon 64s, and that was the end of Intel fanboyism... I simply decided that I would get whatever performed better for my money, and I got an Athlon 64 X2. Most of the enthusiasts were AMD supporters in 2005 and 2006, so I joined... then, after Phenom never showed any improvement, I decided to stay neutral, which is what I am now.

    I got an old Photoshop picture that accurately conveys your viewpoint, gosh.

    1.7%

  19. #69
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by Hornet331 View Post
Oh please, as if PCIe 2.0 would make any difference. Also, with newer cards it gets even harder to reach the GPU limit.
There may be a valid argument here, I believe -- from what I have been able to gather, Intel's implementation of the PCIe 2.0 spec is not as good as AMD's or nVidia's.

Anandtech made a passing comment that nVidia claimed they would not allow SLI on Intel chipsets because the Intel PCIe implementation was not very good... I am beginning to see reasons in the data to think this is probably true.

It is still hard to tell; it is not easy to isolate PCIe performance on a board.

  20. #70
    Xtreme Guru adamsleath's Avatar
    Join Date
    Nov 2006
    Location
    Brisbane, Australia
    Posts
    3,803
Won't Nehalem put paid to these bandwidth arguments? Intel's IMC should squash any AMD IMC argument anyway.

And throughput to PCIe as well; if that is so, is it possible that AMD may actually have better chipsets than Intel for PCIe 2.0?
    Last edited by adamsleath; 08-10-2008 at 03:38 PM.
    i7 3610QM 1.2-3.2GHz

  21. #71
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by JumpingJack View Post
There may be a valid argument here, I believe -- from what I have been able to gather, Intel's implementation of the PCIe 2.0 spec is not as good as AMD's or nVidia's.

Anandtech made a passing comment that nVidia claimed they would not allow SLI on Intel chipsets because the Intel PCIe implementation was not very good... I am beginning to see reasons in the data to think this is probably true.

It is still hard to tell; it is not easy to isolate PCIe performance on a board.

For multi-GPU solutions maybe, but AFAIK we were talking about single-GPU solutions. And even for multi-GPU solutions there's only a marginal increase.

    http://www.tomshardware.com/reviews/...ss,1761-5.html

Maybe with the new RV770 or GT200 we would see better scaling between PCIe 2.0 and PCIe 1.0, but I doubt it.

    edit:

Also, it depends a lot on the game:

    http://www.tomshardware.com/reviews/...0,1915-10.html

Just look at MS Flight Simulator, with its insane texture sizes. If they fit into the onboard RAM there would be no problem.
    Last edited by Hornet331; 08-10-2008 at 03:55 PM.

  22. #72
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by adamsleath View Post
Won't Nehalem put paid to these bandwidth arguments? Intel's IMC should squash any AMD IMC argument anyway.
    Most likely.

  23. #73
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by Hornet331 View Post
For multi-GPU solutions maybe, but AFAIK we were talking about single-GPU solutions. And even for multi-GPU solutions there's only a marginal increase.

    http://www.tomshardware.com/reviews/...ss,1761-5.html

Maybe with the new RV770 or GT200 we would see better scaling between PCIe 2.0 and PCIe 1.0, but I doubt it.

    edit:

Also, it depends a lot on the game:

    http://www.tomshardware.com/reviews/...0,1915-10.html

Just look at MS Flight Simulator, with its insane texture sizes. If they fit into the onboard RAM there would be no problem.
I won't argue against that. The extent to which throughput matters in these cases, if any, is when peak demand starts to run up against the BW wall ...

I can run the same Lost Planet test above on a Phenom 9850 but set the HyperTransport link to 400 MHz; this should absolutely starve the GPU (per the Gosh Theorem), and I get the same results. Snow does not move, and Cave behaves about like it does on the QX9650.

The real ticket is to FRAPS it. What I find there is that in high-resolution cases, the max FPS peaks when new 'bad guys' arrive in the scene... this is typically associated with fresh textures being pushed into video RAM (and this is where a BW argument makes sense). It is also interesting that while in 2 or 3 segments the Phenom's peak max is higher than the Intel counterpart's, pushing its average frame rate up a bit in the high-res limit, the Phenom also holds the lowest minimum frame rate (which, generally speaking, is the most important number for a smooth gameplay experience). Frankly, it is irrelevant even for the Phenom, because even though it held the lowest minimum FPS, it was still well above the mark for smooth rendering.

The problem, however, is that I cannot move the max (peak) around by changing FSB or HT speeds for either processor. I can, though, move the Intel close to matching the Phenom by bumping up the PCIe frequency 15% .. this is what prompted my statement above. Unfortunately, I am not stable above a 15% overclock on the PCIe bus, so I cannot see exactly what the deficiency is; nonetheless, the bottleneck seems to be in the chipset.... in which case, for high-res gaming I can see the 790FX being the better solution, that is, if 4 or 5 FPS matter.
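A minimal sketch of the kind of min/avg/max summary described, assuming a plain text log with one frame time in milliseconds per line (the file name and format here are hypothetical, not a FRAPS-specific parser):

```cpp
// Hypothetical min/avg/max FPS summary from a per-frame time log (one value in ms per line).
// Usage: ./fps_summary frametimes_ms.txt
#include <algorithm>
#include <fstream>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: fps_summary <frametimes_ms.txt>\n"; return 1; }
    std::ifstream in(argv[1]);
    std::vector<double> ms;
    for (double t; in >> t; ) if (t > 0.0) ms.push_back(t);
    if (ms.empty()) { std::cerr << "no frame times read\n"; return 1; }

    double total = 0.0, worst = ms[0], best = ms[0];
    for (double t : ms) { total += t; worst = std::max(worst, t); best = std::min(best, t); }

    // Min FPS comes from the slowest frame, max FPS from the fastest frame,
    // and average FPS from total frames over total elapsed time.
    std::cout << "frames : " << ms.size() << "\n"
              << "min FPS: " << 1000.0 / worst << "\n"
              << "avg FPS: " << 1000.0 * ms.size() / total << "\n"
              << "max FPS: " << 1000.0 / best << "\n";
    return 0;
}
```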

  24. #74
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
OK, I get your point, but the thing with min FPS is also kind of tricky. There were high-res benches of Crysis where they improved the min FPS by a significant amount while the average FPS stayed nearly the same.

  25. #75
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by Hornet331 View Post
OK, I get your point, but the thing with min FPS is also kind of tricky. There were high-res benches of Crysis where they improved the min FPS by a significant amount while the average FPS stayed nearly the same.
    Yeah, it is not conclusive at all.... just something I was dinking around with last night. It would take a lot more work to figure it out.
