
Thread: 3.2GHz OC'd Core i7 940 vs. 3.16GHz Stock E8500 vs. 3.2GHz Stock QX9770 Complete Review


  1. #1
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by Sr7 View Post
    That's not really true... if Nehalem's micro-arch changes had increased efficiency for things that games need, you could've at least seen *some* gain, but this obviously isn't going to be their focus.
    Nehalem could very well show those improvements computationally, but you won't see any gain if the GPU is capping the output. No amount of CPU power thrown into the equation will change the result in a GPU limited regime.
    One hundred years from now it won't matter
    What kind of car I drove, what kind of house I lived in,
    How much money I had in the bank, nor what my clothes looked like...
    But the world may be a little better because I was important in the life of a child.
    -- from "Within My Power" by Forest Witcraft

  2. #2
    Banned
    Join Date
    Feb 2008
    Posts
    696
    Quote Originally Posted by JumpingJack View Post
    Nehalem could very well show those improvements computationally, but you won't see any gain if the GPU is capping the output. No amount of CPU power thrown into the equation will change the result in a GPU limited regime.
    Of course, that's a given. But the sad part is I'm still on a 12x10 monitor. Even at those lower resolutions like 10x7 or 12x10, where the CPU is much more of a bottleneck, we see that it doesn't seem to be beneficial to upgrade to Nehalem, according to these released benchmarks. This is the first time I can remember that being the case with a new generation of CPUs in quite some time. Usually Intel and AMD specifically recommend that reviewers test at 8x6 or 10x7 to show the gains of their new processors.

    The whole purpose of benchmarking is to see how much you stand to gain in your experience with your computer by buying this product, right? So how much sense does it make to turn settings way down to check your CPU's gain in performance, if you don't see any of that additional performance when actually gaming at your normal/native resolutions and settings?

    It's like buying a car for commuting based on its top speed. You'll never see that speed in everyday use because you don't do that kind of driving.

    Obviously there's an exception if you're running multiple GPUs and therefore putting the bottleneck back on the CPU, but most people don't run those kinds of setups.
    Last edited by Sr7; 10-18-2008 at 10:03 PM.

  3. #3
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by Sr7 View Post
    Of course, that's a given. But the sad part is I'm still on a 12x10 monitor. Even at those lower resolutions like 10x7 or 12x10, where the CPU is much more of a bottleneck, we see that it doesn't seem to be beneficial to upgrade to Nehalem, according to these released benchmarks. This is the first time I can remember that being the case with a new generation of CPUs in quite some time. Usually Intel and AMD specifically recommend that reviewers test at 8x6 or 10x7 to show the gains of their new processors.
    Depends on the game... Gen-1 games, yeah, I agree... but current DX10 games, even at 1280x1024 with high-fidelity settings, are GPU limited.

    WIC at 1280x800, default high settings, QX9650 @ 3.2 GHz:
    Min = 32, Avg = 58, Max = 154

    WIC at 1280x800, default high settings, QX9650 @ 2.66 GHz:
    Min = 28, Avg = 59, Max = 141

    This is on a 4870X2... Lost Planet is doing the same thing (everything on high)... so this review ran GPU limited but still tries to draw conclusions about the CPU (as do most others in this thread)... this is incorrect. EDIT: Note I ran XP/DX9; DX10 will be even more GPU limited...
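
    To put a rough number on it, here is a quick sketch (just some illustrative Python of my own, not from the review; the only inputs are the WIC averages above) comparing how much of the ~20% clock bump actually shows up in the average frame rate:

    Code:
    # Compare the fps gain against the CPU clock gain; near zero means GPU limited.
    def scaling_efficiency(fps_slow, fps_fast, clk_slow_ghz, clk_fast_ghz):
        """Fraction of the CPU clock increase that shows up as extra fps.
        ~1.0 = fully CPU limited, ~0.0 (or negative noise) = GPU limited."""
        fps_gain = fps_fast / fps_slow - 1.0
        clk_gain = clk_fast_ghz / clk_slow_ghz - 1.0
        return fps_gain / clk_gain

    # Average fps from the two WIC runs above (QX9650 at 2.66 GHz vs 3.2 GHz).
    eff = scaling_efficiency(fps_slow=59, fps_fast=58, clk_slow_ghz=2.66, clk_fast_ghz=3.2)
    print("scaling efficiency: %.2f" % eff)   # about -0.08, i.e. no CPU scaling at all

    Anything near 1.0 would mean the game actually cares about the extra clock; this data set is just noise around zero.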

    Does it make a difference? Nope... why? Because we like to play at those settings... however, I personally prefer not to buy a whole new system... the CPU is the lowest common denominator -- it revs every 1-2 years, while GPUs rev every 6-9 months -- so if I want to future-proof, I prefer the fastest CPU and then incrementally upgrade the GPU as needed... that's me... which is why I want to see both the high-quality, high-res results and the low-res, lower-quality results to ascertain the viability of the CPU...

    This review did not do that... whether Nehalem actually improves gaming is still a question mark... I do not expect a huge leap, and I expect to see some games actually underperform... but the oddity of this data set is that all the CPUs compared bunched up to roughly the same numbers... this is GPU limited.
    One hundred years from now it won't matter
    What kind of car I drove, what kind of house I lived in,
    How much money I had in the bank, nor what my clothes looked like...
    But the world may be a little better because I was important in the life of a child.
    -- from "Within My Power" by Forest Witcraft

  4. #4
    Banned
    Join Date
    Feb 2008
    Posts
    696
    Quote Originally Posted by JumpingJack View Post
    Depends on the game... Gen-1 games, yeah, I agree... but current DX10 games, even at 1280x1024 with high-fidelity settings, are GPU limited.

    WIC at 1280x800, default high settings, QX9650 @ 3.2 GHz:
    Min = 32, Avg = 58, Max = 154

    WIC at 1280x800, default high settings, QX9650 @ 2.66 GHz:
    Min = 28, Avg = 59, Max = 141

    This is on a 4870X2... Lost Planet is doing the same thing (everything on high)... so this review ran GPU limited but still tries to draw conclusions about the CPU (as do most others in this thread)... this is incorrect. EDIT: Note I ran XP/DX9; DX10 will be even more GPU limited...

    Does it make a difference? Nope... why? Because we like to play at those settings... however, I personally prefer not to buy a whole new system... the CPU is the lowest common denominator -- it revs every 1-2 years, while GPUs rev every 6-9 months -- so if I want to future-proof, I prefer the fastest CPU and then incrementally upgrade the GPU as needed... that's me... which is why I want to see both the high-quality, high-res results and the low-res, lower-quality results to ascertain the viability of the CPU...

    This review did not do that... whether Nehalem actually improves gaming is still a question mark... I do not expect a huge leap, and I expect to see some games actually underperform... but the oddity of this data set is that all the CPUs compared bunched up to roughly the same numbers... this is GPU limited.
    I see your point, but who is buying a Nehalem system in order to play CPU-limited DX7 and DX8 games that already run at hundreds of fps?

    The thing you *need* more performance for is current-day games, and if they're so GPU bound at 12x10 on average, I'd say it's probably not worth the price to someone who wants gaming performance. Just my opinion, though.

    Do you mean to imply that we don't know the CPU's gaming potential purely because they tested 12x10 with GPU-limited settings? If so, you might have problems in the future, because that trend is only going to continue, and the average resolution is going up, not down.

    Should they have bumped the resolution down to settings that no one plays at to gauge the CPU's "gaming performance"? Sure, if you're benchmarking by running a game at 800x600 you can call it "gaming performance", but it's not real-world gaming performance.

    I guess the fundamental question is.. what is "gaming performance"?

  5. #5
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by Sr7 View Post
    I see your point, but who is buying a Nehalem system in order to play CPU-limited DX7 and DX8 games that already run at hundreds of fps?

    The thing you *need* more performance for is current-day games, and if they're so GPU bound at 12x10 on average, I'd say it's probably not worth the price to someone who wants gaming performance. Just my opinion, though.

    Do you mean to imply that we don't know the CPU's gaming potential purely because they tested 12x10 with GPU-limited settings? If so, you might have problems in the future, because that trend is only going to continue, and the average resolution is going up, not down.

    Should they have bumped the resolution down to settings that no one plays at to gauge the CPU's "gaming performance"? Sure, if you're benchmarking by running a game at 800x600 you can call it "gaming performance", but it's not real-world gaming performance.

    I guess the fundamental question is.. what is "gaming performance"?
    Exactly... to your rhetorical question... meaning, who would buy this purely for gaming? (See my response to Glow above.)

    I own probably 200 games, from Doom to my most recent purchase, Spore... I have played maybe 10 all the way through... but I use them to stress and play with GPU/CPU combos... it's just what I like to do.

    I use my computer mostly for other things, some modeling, NLE editing, etc. So Nehalem is looking pretty darn nice to me.

    EDIT: In terms of gaming 'performance', i.e. what it is... that is in the eye of the beholder, in my opinion. The output we measure to get a performance number is frames per second... which actually isn't even that: your monitor will only display 60 (in some cases 75 or 100) frames per second, and anything over 60 FPS is wasted. What you are actually measuring is the number of times the frame buffer is refreshed by the GPU. Even that is of limited meaning, because the performance we 'measure' is a small slice of the overall game -- typically 30, 60 or 120 seconds of a game that should last several hours... this is not very representative (statistically) of the population anyway.
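
    For what it's worth, this is roughly how those Min/Avg/Max numbers get produced -- a little illustrative Python of my own (the frame times below are made up, not from any real run):

    Code:
    # Boil a short capture of per-frame render times (in milliseconds) down to
    # the Min/Avg/Max fps figures quoted in reviews.
    def fps_summary(frame_times_ms):
        fps = [1000.0 / t for t in frame_times_ms]
        avg_fps = len(frame_times_ms) * 1000.0 / sum(frame_times_ms)  # time-weighted average
        return min(fps), avg_fps, max(fps)

    # A hypothetical 10-frame slice; a real capture would cover 30-120 seconds.
    sample = [16.7, 14.2, 22.0, 18.5, 15.0, 30.1, 17.3, 16.0, 19.8, 21.4]
    print("min/avg/max fps: %.0f / %.0f / %.0f" % fps_summary(sample))

    And that handful of seconds is the whole 'statistic' we then argue about.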

    Jack
    Last edited by JumpingJack; 10-18-2008 at 10:18 PM.
    One hundred years from now it won't matter
    What kind of car I drove, what kind of house I lived in,
    How much money I had in the bank, nor what my clothes looked like...
    But the world may be a little better because I was important in the life of a child.
    -- from "Within My Power" by Forest Witcraft

  6. #6
    Banned
    Join Date
    Feb 2008
    Posts
    696
    Quote Originally Posted by JumpingJack View Post
    Exactly... to your rhetorical question... meaning, who would buy this purely for gaming? (See my response to Glow above.)

    I own probably 200 games, from Doom to my most recent purchase, Spore... I have played maybe 10 all the way through... but I use them to stress and play with GPU/CPU combos... it's just what I like to do.

    I use my computer mostly for other things, some modeling, NLE editing, etc. So Nehalem is looking pretty darn nice to me.

    EDIT: In terms of gaming 'performance', i.e. what it is... that is in the eye of the beholder, in my opinion. The output we measure to get a performance number is frames per second... which actually isn't even that: your monitor will only display 60 (in some cases 75 or 100) frames per second, and anything over 60 FPS is wasted. What you are actually measuring is the number of times the frame buffer is refreshed by the GPU. Even that is of limited meaning, because the performance we 'measure' is a small slice of the overall game -- typically 30, 60 or 120 seconds of a game that should last several hours... this is not very representative (statistically) of the population anyway.

    Jack
    Well, the other part of my question was: "do you see a major benefit in everyday use for anything, over current processors?" I don't think there's a noticeable performance increase at this point, because things are so fast as it is. Granted, certain operations in workstation apps might process some data faster, but I can't think of much else you'd benefit from by having one of these processors when the GPU already does some of those things better. Granted, there aren't many GPGPU consumer apps out there right now, but I'm just speaking theoretically... in terms of "the device best suited for workload x".

    As for your mention of a monitor's inability to display more than 60 frames a second, that's actually wrong.

    It's true that if you have v-sync on you'll only see 60 fps at maximum, exactly (1 frame per vertical blank period).

    But with v-sync off, if you have 120fps on a 60Hz monitor, you see parts of multiple frames.

    By this I mean that the vertical refresh is not instantaneous... it scans out whatever is in the front buffer of the swap chain at the time each pixel is lit on that refresh pass.

    So let's say you have one frame ready to display. Your monitor starts displaying that frame one pixel at a time, working from left to right and moving down row by row, displaying what's in the buffer. If the next frame is done and presented when the refresh is halfway down the monitor, the bottom half of the monitor's pixels will show the contents of this new frame, so you get visual feedback on your position/environment in-game faster, instead of having to wait for the next refresh cycle to see *any* of that new frame.

    Now whether you can turn this faster visual feedback into a meaningful response/reaction in-game is a different story, since it's all happening in a very short period of time.

    If you had 180FPS you'd see roughly the top third of your monitor with data from frame 1, the middle third displaying data from frame 2, and the bottom third with data from frame 3. With Vsync on you would've only seen frame 1 and had to wait until it was done displaying the whole thing before the refresh moved back to the top of the screen, at which point it would've displayed frame 4, and you'd have never seen 2 or 3.

    My point is that your statement about a monitor not displaying more than 60 frames in a second is false.
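
    If it helps, here's a toy model of what I'm describing (a sketch of my own, with made-up frame rates and a 1080-line panel): with v-sync off, each scanline shows whichever game frame was most recently completed at the moment that line is scanned out.

    Code:
    # Which game frames appear within a single 60 Hz refresh when v-sync is off.
    def frames_visible_per_refresh(game_fps, refresh_hz=60, lines=1080):
        refresh_period = 1.0 / refresh_hz
        frame_period = 1.0 / game_fps
        visible = set()
        for line in range(lines):
            t = (line / lines) * refresh_period   # moment this scanline is drawn
            visible.add(int(t // frame_period))   # newest frame finished by then
        return sorted(visible)

    print(frames_visible_per_refresh(60))    # [0]        one whole frame
    print(frames_visible_per_refresh(120))   # [0, 1]     two frames, one tear line
    print(frames_visible_per_refresh(180))   # [0, 1, 2]  roughly thirds, two tears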
    Last edited by Sr7; 10-18-2008 at 10:49 PM.

  7. #7
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by Sr7 View Post
    Well, the other part of my question was: "do you see a major benefit in everyday use for anything, over current processors?" I don't think there's a noticeable performance increase at this point, because things are so fast as it is. Granted, certain operations in workstation apps might process some data faster, but I can't think of much else you'd benefit from by having one of these processors when the GPU already does some of those things better. Granted, there aren't many GPGPU consumer apps out there right now, but I'm just speaking theoretically... in terms of "the device best suited for workload x".

    As for your mention of a monitor's inability to display more than 60 frames a second, that's actually wrong.

    It's true that if you have v-sync on you'll only see 60 fps at maximum, exactly (1 frame per vertical blank period).

    But with v-sync off, if you have 120fps on a 60Hz monitor, you see parts of multiple frames.

    By this I mean that the vertical refresh is not instantaneous... it scans out whatever is in the front buffer of the swap chain at the time each pixel is lit on that refresh pass.

    So let's say you have one frame ready to display. Your monitor starts displaying that frame one pixel at a time, working from left to right and moving down row by row, displaying what's in the buffer. If the next frame is done and presented when the refresh is halfway down the monitor, the bottom half of the monitor's pixels will show the contents of this new frame, so you get visual feedback on your position/environment in-game faster, instead of having to wait for the next refresh cycle to see *any* of that new frame.

    Now whether you can turn this faster visual feedback into a meaningful response/reaction in-game is a different story, since it's all happening in a very short period of time.

    If you had 180FPS you'd see roughly the top third of your monitor with data from frame 1, the middle third displaying data from frame 2, and the bottom third with data from frame 3. With Vsync on you would've only seen frame 1 and had to wait until it was done displaying the whole thing before the refresh moved back to the top of the screen, at which point it would've displayed frame 4, and you'd have never seen 2 or 3.

    My point is that your statement about a monitor not displaying more than 60 frames in a second is false.
    Wow... you are talkative... this is my last one tonight, gotta get to bed.

    In terms of a noticeable difference... again, that is in the eye of the beholder. Ironically, just a few days ago I was using Lightroom on a dual-core rig for some quick touch-ups and got very annoyed... (this was an X6800), which at the time was a darn fast rig, but I was annoyed because it wasn't as snappy or responsive... my quad does it so much faster, and I notice it. Also, when I am transcoding or importing from a different video format into Premiere, or doing simple routine stuff in Pinnacle Studio 11, it is very noticeable, especially when it takes 2x longer to build the DVD... but see, this is me, I prefer it faster... you may not be doing this level of computing.

    In terms of GPGPU, that is really a different topic, a different thread for debate -- but I don't see CUDA, for example, really taking off, for a few reasons I won't elaborate on... my personal opinion is that CUDA/nVidia will be victimized very much like AMD victimized Itanium.

    On your video monitor commentary, I am correct on this; you should do some more research... Google is your friend... video monitors refresh the entire screen at typically 60 Hz, which means each pixel within the field is updated in tandem 60 times a second (progressive), i.e. 60 frames in one second. Some monitors support higher, but the human brain cannot distinguish individual frames beyond about 20 FPS anyway... if you run the game full bore, your 60 Hz refresh will often capture the frame buffer mid-update, since the two are not synced -- this is what creates the lines and tearing in the image... V-sync gives the smoothest, most enjoyable gameplay because each refresh of the monitor coincides with a completed frame buffer, i.e. that is what it means to be synced. We may be saying the same thing, just differently...
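
    To illustrate the capped case in the same toy-model spirit (my own sketch; it assumes the game can always hand over its newest completed frame, i.e. it ignores the lock-to-refresh-divisors effect of plain double buffering): with v-sync, the buffer swap only happens during the vertical blank, so no matter how fast the game renders you never see more than 60 distinct frames in a second on a 60 Hz monitor.

    Code:
    # With v-sync on, the front buffer is swapped only during the vertical blank,
    # so each refresh shows exactly one completed frame.
    def distinct_frames_per_second(game_fps, refresh_hz=60):
        shown = set()
        for r in range(refresh_hz):          # one second's worth of refreshes
            t = r / refresh_hz               # time of this vertical blank
            shown.add(int(t * game_fps))     # newest frame completed by then
        return len(shown)

    for fps in (45, 60, 120, 180):
        print(fps, "->", distinct_frames_per_second(fps))   # 45, 60, 60, 60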
    One hundred years from now it won't matter
    What kind of car I drove, what kind of house I lived in,
    How much money I had in the bank, nor what my clothes looked like...
    But the world may be a little better because I was important in the life of a child.
    -- from "Within My Power" by Forest Witcraft
