
Thread: Difference Between Xeon versus Desktop?

  1. #26
    I am Xtreme zanzabar's Avatar
    Join Date
    Jul 2007
    Location
    SF bay area, CA
    Posts
    15,871
I think Source would be the best test since it's CPU-heavy.
    5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
    samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi

  2. #27
    Registered User
    Join Date
    Oct 2004
    Location
    Victoria
    Posts
    35
Figured I would give this thread a bump and see if anyone else has experience or knowledge, or can officially comment, on this supposed difference between Xeon gaming and desktop gaming.

  3. #28
    Xtreme Enthusiast
    Join Date
    Mar 2008
    Posts
    750
Hmm... one question: supposedly an E8400 should be faster than or on par with a Q9450 in gaming at the same clock speed, right? Unless the game supports quad-core processors. In which case, I'd like to ask: does Crysis support quad-core processors?

  4. #29
    Registered User
    Join Date
    Oct 2004
    Location
    Victoria
    Posts
    35
    Quote Originally Posted by RunawayPrisoner View Post
Hmm... one question: supposedly an E8400 should be faster than or on par with a Q9450 in gaming at the same clock speed, right? Unless the game supports quad-core processors. In which case, I'd like to ask: does Crysis support quad-core processors?
At the same clock speed as an E8400, I would expect the Q9450 to be faster, simply because it has a 12 MB cache versus 6 MB. As for Crysis and quad-core support, I'm not too sure. I've heard rumors that it is quad-friendly, but I can't verify that.

  5. #30
    Xtreme Enthusiast
    Join Date
    Mar 2008
    Posts
    750
    Quote Originally Posted by Nebulus View Post
At the same clock speed as an E8400, I would expect the Q9450 to be faster, simply because it has a 12 MB cache versus 6 MB. As for Crysis and quad-core support, I'm not too sure. I've heard rumors that it is quad-friendly, but I can't verify that.
Well, it's definitely faster (just by a little bit) on my X3350 than on the E8400. Another member and I had a little competition about that earlier, and I ended up using both my E8400 and X3350 on the same setup to test it out. Incidentally, the X3350 was faster by 1 fps, even though the graphics card was clocked a bit higher on the E8400 run at the same CPU clocks. If the rumor proves true, then at the same clock speed we'd see the Q9450 outdo the E8400 significantly in Crysis, maybe by up to 4 fps or so.

  6. #31
    Registered User
    Join Date
    Oct 2004
    Location
    Victoria
    Posts
    35
    Quote Originally Posted by RunawayPrisoner View Post
Well, it's definitely faster (just by a little bit) on my X3350 than on the E8400. Another member and I had a little competition about that earlier, and I ended up using both my E8400 and X3350 on the same setup to test it out. Incidentally, the X3350 was faster by 1 fps, even though the graphics card was clocked a bit higher on the E8400 run at the same CPU clocks. If the rumor proves true, then at the same clock speed we'd see the Q9450 outdo the E8400 significantly in Crysis, maybe by up to 4 fps or so.
Scratch what I said earlier about the cache advantage; it's wrong. If Crysis only supports two cores, then a Q9450 and an E8400 should perform EXACTLY the same at equivalent clock speeds. Why? Because I just did some research, and the 12 MB cache of the Q9450 is not shared among all 4 cores. The quads are just two Core 2 Duo dies on one package, right? So each pair of cores has access to its own 6 MB cache, independent of the other.

So if Crysis only supports dual core, the two cores being utilized by Crysis only have 6 MB of cache to work with... just like the E8400.
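
    Incidentally, you don't have to take the 2 x 6 MB split on faith; Windows can report which logical processors share each cache. Below is a minimal C sketch (just an illustration, not from any post here) using the Win32 GetLogicalProcessorInformation call, available on XP SP3 and later; error handling is trimmed for brevity. On a Q9450 it should print two separate 6144 KB L2 entries, each covering a different pair of cores.

    /* Sketch: list L2 caches and which cores share them, to check
     * the 2 x 6 MB arrangement described above. Win32 only. */
    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        DWORD len = 0;
        /* First call fails with ERROR_INSUFFICIENT_BUFFER and sets len. */
        GetLogicalProcessorInformation(NULL, &len);
        SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
        if (!info || !GetLogicalProcessorInformation(info, &len))
            return 1;

        for (DWORD i = 0; i < len / sizeof(*info); i++) {
            /* Each RelationCache record carries a mask of the cores
             * that share that particular cache. */
            if (info[i].Relationship == RelationCache &&
                info[i].Cache.Level == 2) {
                printf("L2: %lu KB shared by core mask 0x%llx\n",
                       (unsigned long)(info[i].Cache.Size / 1024),
                       (unsigned long long)info[i].ProcessorMask);
            }
        }
        free(info);
        return 0;
    }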

  7. #32
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Dorset, UK
    Posts
    439
    Quote Originally Posted by Nebulus View Post
Scratch what I said earlier about the cache advantage; it's wrong. If Crysis only supports two cores, then a Q9450 and an E8400 should perform EXACTLY the same at equivalent clock speeds. Why? Because I just did some research, and the 12 MB cache of the Q9450 is not shared among all 4 cores. The quads are just two Core 2 Duo dies on one package, right? So each pair of cores has access to its own 6 MB cache, independent of the other.

So if Crysis only supports dual core, the two cores being utilized by Crysis only have 6 MB of cache to work with... just like the E8400.
    Not necessarily...

Think a bit more. If the E8400 has 6 MB of cache total and only two cores, then yes, the maximum cache available to it is 6 MB. But how much cache is available to the quad depends on WHICH two cores the game uses. If it's two "adjacent" cores on the same die, then the performance and cache use will be identical. But if the cores are on different dies, then they can potentially access 12 MB of cache between them, as in your first suggestion, so there should be a cache boost to performance.

IIRC the OS schedules threads to cores based on core load, so if you can set app affinities so that one core is always less used, and an "opposite" core is the preferred one for the game, then you might be able to force usage of opposite cores that way. Or maybe you can set multiple affinities for a single app; I'm not sure, as I don't have a multi-core proc right now.
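
    For anyone wanting to script this rather than clicking through Task Manager, here is a minimal Win32 C sketch that pins an already-running process to cores 0 and 3 (mask 0x9). The PID is a hypothetical placeholder, and treating cores 0 and 3 as opposite-die cores is an assumption you'd verify first (e.g. with the cache listing a few posts up).

    /* Sketch: force a process onto cores 0 and 3 so its two heavy
     * threads land on (assumed) opposite dies of a Core 2 Quad. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        DWORD pid = 1234;                      /* hypothetical game PID */
        DWORD_PTR mask = (1 << 0) | (1 << 3);  /* cores 0 and 3 = 0x9 */

        HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
        if (!h) {
            fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
            return 1;
        }
        if (!SetProcessAffinityMask(h, mask))
            fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                    GetLastError());
        CloseHandle(h);
        return 0;
    }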

  8. #33
    Xtreme Enthusiast
    Join Date
    Mar 2008
    Posts
    750
That's what I thought as well. And yes, the cache on the X3350/Q9450 chips is 2 x 6 MB, not 4 x 3 MB.

But I don't think Crysis performs the same on a Q9450 and an E8400. Why? Because here are some results from the E8400 and X3350 on the same system, with the same setup and the same benchmark:

E8400: [benchmark screenshot missing]

X3350: [benchmark screenshot missing]
Disregarding the higher VGA memory clock on the E8400, you can see a good 1 fps boost on the X3350 side in run 1. Both CPUs were clocked the same (8 x 475) and were even given the same vCore. So why the difference? And if the Q9450 should indeed perform on the same level as the E8400, does that mean the X3350 is a tad faster than the Q9450? We're talking about a 1-in-30 difference here, about 3.33% if you ask me. Not to say anything definitive, but at least for now I am confident that this chip can perform as well as the E8400, at least in Crysis. I also checked this with another member in another thread (as said before), and the results also favored the X3350, but maybe that was because my VGA was faster than his in this game: 9600GT versus HD 3870.

    Quote Originally Posted by IanB View Post
Not necessarily...

Think a bit more. If the E8400 has 6 MB of cache total and only two cores, then yes, the maximum cache available to it is 6 MB. But how much cache is available to the quad depends on WHICH two cores the game uses. If it's two "adjacent" cores on the same die, then the performance and cache use will be identical. But if the cores are on different dies, then they can potentially access 12 MB of cache between them, as in your first suggestion, so there should be a cache boost to performance.

IIRC the OS schedules threads to cores based on core load, so if you can set app affinities so that one core is always less used, and an "opposite" core is the preferred one for the game, then you might be able to force usage of opposite cores that way. Or maybe you can set multiple affinities for a single app; I'm not sure, as I don't have a multi-core proc right now.
Good theory, though. I'll see about trying it out tomorrow... after I've passed Orthos 10K for 8 hours or so; some people demand stress-testing results. And yeah, I guess I have to set Crysis's affinity to core 0 and core 3. I mean... what are the chances of those two being paired on the same die? And if that doesn't work out, I'll try all the other possible pairs (which is 2) before I draw any further conclusions.
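(For reference: Task Manager's Set Affinity dialog does the same thing per run, and on newer versions of Windows the start command can take a mask directly, e.g. start /affinity 9 crysis.exe, where hex 9 = binary 1001 = cores 0 and 3.)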
    Last edited by RunawayPrisoner; 03-28-2008 at 12:54 AM.

  9. #34
    Xtreme Cruncher
    Join Date
    Apr 2006
    Location
    Tampa
    Posts
    1,056
I really don't believe that the Xeon is 25% slower in games; that just doesn't make any sense.
    Workstation :
    Q6600 @ 3.2ghz/Scythe Infinity (Screw mount mod)
    GA-P35-DS3L (F7)
    4GB G.Skill @ 400mhz 4-4-4-12
    BFG Nvidia 7600gt
    WD 250GB SATAII 16MB (OS/Programs)
    Seagate 250GB SATAII 16MB (Storage)
    NEC DVD+/-RW
    Antec NeoHE 550W
    X-Fi Xtreme Music
    Razer Copperhead
    MS Natural Ergonomic 4000 (Best KB ever!)
    Windows 2008 Enterprise Server x64

    Laptop:
    Dell Latitude D610 - Pentium M 750 1.86Ghz / 1GB RAM

  10. #35
    Xtreme Mentor
    Join Date
    May 2007
    Posts
    2,792
The Xeons are the best chips off the wafer in each bin; most would clock better, at lower temps, power, and volts, than their LGA-775 counterparts given the same platform, cooling, and BIOS. They have far better stability and lower degradation over their lifetime too, and yup, they are specifically tuned for tasks native to the server world. Since Intel doesn't publicly document much of this, we can't really know the details or the full effects until the two are rigorously and astutely benchmarked following a fixed methodology, OR a devoted review site carries this out for us. I highly doubt there is a difference between the two like the one shown in post #1 with low-load applications such as desktop ones; it would properly show up in server applications, especially TPC-C, Linpack, and the like. Wherever you have very high core loads with heavy memory usage, the Xeon will show more of its performance optimizations. Maybe they added some latency to the cache algorithms to optimize for aggressive large-array prefetching; that's all I can think has been done.

I've used both sets of Penryns running daily at two of my workplaces, but I haven't benchmarked the two at all.

As for Crysis's CPU dependency, it's a shambles. I've no idea what is wrong with its code, but there really is something wrong with its multi-threaded execution and processor power scaling; it's far too erratic. In benchmarks, for instance, a Q6600 at 3.8 GHz should beat one at 3.2 GHz, but it didn't, and a 2.6 GHz Phenom, which is slower clock-for-clock overall, was equaling the performance of a 450 FSB 3.6 GHz Q6600 in it. That kinda shows how lame the bench is.

And that's with any GPU bottleneck removed, playing at 800x600 with medium GFX settings.
