I think Source would be the best test since it's CPU-heavy.
5930K, R5E, Samsung 8GBx4 D-die, Vega 56, WD Gold 8TB, WD Red 4TB, 2TB RAID1 WD Blue 5400
Samsung 840 EVO 500GB, HP EX 1TB NVMe, CM690II, Swiftech H220, Corsair HX750i
Figured I would give this thread a bump and see if anyone else has experience or knowledge of, or can officially comment on, this supposed difference between Xeon gaming and desktop gaming.
Hmm... one question: supposedly an E8400 should be faster than or on par with a Q9450 in gaming at the same clock speed, right? Unless the game supports quad-core processors. In which case, I'd like to ask: does Crysis support quad-core processors?
Well, Crysis is definitely faster (just by a little bit) on my X3350 than on the E8400. Another member and I had a little competition about that earlier, and I ended up using both my E8400 and X3350 on the same setup to test it out. The X3350 was faster by 1fps, even though on the E8400 run, at the same CPU clocks, the graphics card was clocked a bit higher. If the rumor proves true, then at the same clock speed we'd see the Q9450 outdo the E8400 in Crysis by a much bigger margin, maybe up to 4fps or so.
Scratch what I said earlier about a cache advantage; it's wrong. If Crysis only supports two cores, then a Q9450 and an E8400 should perform EXACTLY the same at equivalent clock speeds. Why? Because I just did some research, and the 12MB cache of the Q9450 is not shared among all 4 cores. The quads are just two "Core 2 Duo" dies on one chip, right? So each pair of cores has access to its own 6MB cache, independent of the other.
So if Crysis only supports dual core, the two cores it uses have only 6MB of cache to work with... just like the E8400.
Not necessarily...
Think about it a bit more. If the E8400 has 6MB of cache total and only two cores, then yes, the maximum cache available to it is 6MB. But how much cache is available to the quad depends on WHICH two cores the game uses. If it's two "adjacent" cores on the same die, then the performance and cache use will be identical. But if the cores are on different dies, then between them they can potentially access 12MB of cache, as in your first suggestion, so there should be a cache boost to performance.
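Side note, if you want to check the pairing on your own chip: Windows will tell you which logical CPUs share each L2 via GetLogicalProcessorInformation (XP SP3 / Vista era and later). A rough sketch, not from any official source, so treat it as a starting point:

```c
/* Rough sketch: print which logical CPUs share each L2 cache,
 * using GetLogicalProcessorInformation (XP SP3+/Vista). */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    DWORD len = 0;
    SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info;
    DWORD i, count;

    /* First call is expected to fail and report the buffer size we need. */
    GetLogicalProcessorInformation(NULL, &len);
    info = (SYSTEM_LOGICAL_PROCESSOR_INFORMATION *)malloc(len);
    if (info == NULL || !GetLogicalProcessorInformation(info, &len))
        return 1;

    count = len / sizeof(*info);
    for (i = 0; i < count; i++) {
        if (info[i].Relationship == RelationCache && info[i].Cache.Level == 2) {
            /* Each set bit in ProcessorMask is a logical CPU sharing this L2. */
            printf("L2: %lu KB, shared by CPU mask 0x%lX\n",
                   (unsigned long)(info[i].Cache.Size / 1024),
                   (unsigned long)info[i].ProcessorMask);
        }
    }
    free(info);
    return 0;
}
```

On a Q9450/X3350 you'd expect two 6144KB entries, and the two masks show you exactly which core pairs sit on the same die.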
IIRC the OS schedules threads to cores based on core load, so if you can set app affinities so that one core is always less used, and an "opposite" core is the preferred one for the game, then you might be able to force usage of opposite cores that way. Or maybe you can set an affinity mask covering multiple cores for a single app; not sure, as I don't have a multi-core proc right now.
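To be clear, "setting affinity" here is the same thing Task Manager's Set Affinity does; programmatically it's just SetProcessAffinityMask. A rough sketch (the PID is a placeholder, you'd look up the game's real one):

```c
/* Rough sketch (placeholder PID): pin a running game to cores 0 and 3,
 * on the guess that they sit on different dies. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD pid = 1234; /* hypothetical: find the game's real PID in Task Manager */
    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);

    if (proc == NULL) {
        printf("OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }
    /* Bit 0 = core 0, bit 3 = core 3, so the mask is 0x9. */
    if (!SetProcessAffinityMask(proc, (1 << 0) | (1 << 3)))
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());

    CloseHandle(proc);
    return 0;
}
```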
That's what I thought as well. And yes, the cache on the X3350/Q9450 chips is 2 x 6MB instead of 4 x 3MB.
But I don't think Crysis performs the same on the Q9450 and E8400. Why? Because here are some results from the E8400 and X3350 on the same system, with the same setup and the same benchmark:
E8400: [benchmark screenshot]
X3350: [benchmark screenshot]
Disregarding the higher VGA memory clock on the E8400 side, you can see a good 1fps boost for the X3350 on run 1. Both CPUs were clocked the same (8 x 475) and were even given the same vCore. So why the difference? And if the Q9450 should indeed perform on the same level as the E8400, does that mean the X3350 is a tad faster than the Q9450? We're talking about a 1-in-30 difference here, about 3.33%. Either way, at least for now I'm confident this chip can perform as well as the E8400, at least in Crysis. I also checked this with another member in another thread (as said before), and the results also favored the X3350, but maybe that was because my VGA was faster than his in this game: 9600GT versus HD 3870.
Good theory, though. I'll see about trying it out tomorrow... after I've passed Orthos 10K for 8 hours or so. Some people demand stress-testing results... And yeah, I guess I have to set Crysis's affinity to core 0 and core 3. I mean... what are the chances of those two being paired up on the same die?
And if it doesn't work out, I'll try all the other possible pairs (there are only 2 left) before I draw any further conclusions.
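For reference (just my own quick sketch, nothing official), these are the three candidate masks if you always keep core 0 in the pair, which is why only two remain after trying 0+3:

```c
/* Rough sketch: print the three 2-core affinity masks that include core 0,
 * i.e. the candidate pairs to test when the die layout is unknown. */
#include <stdio.h>

int main(void)
{
    int other;
    for (other = 1; other <= 3; other++)
        printf("core 0 + core %d -> affinity mask 0x%X\n",
               other, 1 | (1 << other));
    return 0;
}
```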
I really don't believe that the Xeon is 25% slower in games; that just doesn't make any sense.
Workstation:
Q6600 @ 3.2GHz / Scythe Infinity (screw-mount mod)
GA-P35-DS3L (F7)
4GB G.Skill @ 400MHz 4-4-4-12
BFG NVIDIA 7600GT
WD 250GB SATA II 16MB (OS/programs)
Seagate 250GB SATA II 16MB (storage)
NEC DVD+/-RW
Antec NeoHE 550W
X-Fi XtremeMusic
Razer Copperhead
MS Natural Ergonomic 4000 (best KB ever!)
Windows Server 2008 Enterprise x64
Laptop:
Dell Latitude D610 - Pentium M 750 1.86GHz / 1GB RAM
The Xeons are the best chips off the wafer in each bin; most will clock better, at lower temps, power, and volts than their LGA-775 counterparts given the same platform, cooling, and BIOS. They have far better stability and lower degradation over their lifetime too, and yes, they are specifically tuned for tasks native to the server world. Since Intel doesn't say much about this publicly in its documentation, we can't really know how it's done or its full effects until it's rigorously and carefully benchmarked following a fixed methodology, or until a dedicated review site carries that out for us.

I highly doubt there is a difference between the two as large as shown in post #1 with low-load applications such as desktop ones. It would matter, and show up properly, in server applications, especially TPC-C, Linpack, and the like: wherever you have very high core loads with heavy memory usage, the Xeon will show more of its optimizations. Maybe they added some latency in the cache algorithms to optimize for aggressive large-array prefetching; that's all I'm thinking has been done.
I've used both sets of Penryns daily at two of my workplaces, but haven't properly benchmarked the two against each other.
As for Crysis's CPU dependency, it's a shambles. I've no idea what's wrong with its code, but there really is something wrong with its multi-threaded execution and processor power scaling. Far too erratic. Under benchmarking, for instance, a Q6600 at 3.8GHz should beat one at 3.2GHz, but it didn't, and the Phenom at 2.6GHz, which is slower clock-for-clock overall, was equaling the 450FSB 3.6GHz Q6600 in it. Kinda shows how lame the bench is.
And that's with any GPU bottleneck removed, playing at 800x600 and medium GFX settings.