I'm going to be mean here but, WOW, you noticed an improvement going from a single core to a dual core?
/sarcasm
I always knew that my Opteron 165 @ 2.7GHz with 6GBps+ memory bandwidth wasn't that far behind an E6600 in games.
That obviously depends on the game, rez and vidcard. If you're playing a game that isn't CPU limited, at a rez where performance is limited only by your vidcard, you won't see a difference from a faster CPU.
Quote:
Originally Posted by LiQU!D
I play FPS, so there's bugger all difference, but even in some of the RTS's the difference isn't going to be that large.
Quote:
Originally Posted by Fred_Pohl
What's quickly becoming the norm: 1680 x 1050. Maximum in-game settings.
Quote:
Originally Posted by Fred_Pohl
I see your point but it only reinforces what I said before about gaming performance being dependent upon the game, rez and vidcard. If we use SMP Q4 @1600x1200 with a 8800GTX as an example, your Opty @2.7GHz would be getting less than 100fps vs 157fps for a stock E6600.
Quote:
Originally Posted by LiQU!D
Overall there may not be a huge difference between DC and SC CPU systems in many of today's games when using DX9 vidcards but I think we're going to see that gap widen quite a bit in 2007. That being said, until more games are SMP optimized and more work (physics, etc.) is delegated to the second core, there's still some tangible benefit to running other apps and processes on the second core without slowing game play. For example, I no longer have to close or pause F@H when I game and I've even encoded DVDs while gaming.
http://images.anandtech.com/graphs/a...1256/13784.png
According to that image it would be more likely around the 130-140fps area. :confused:
Quote:
Originally Posted by Fred_Pohl
From reading the rest of your post, you've mistaken the Opteron 165 for a single core; it is, in fact, a dual core.
Quote:
Originally Posted by LiQU!D
Yeah, my bad. I knew that the 165 was a DC but forgot that's what you had. My short term memory sux. :(
In this Q4 scenario you would get about 140fps, compared to 150fps for a 2.4GHz C2D, 160fps for a 3GHz C2D, and <100fps for a 3GHz single-core K8.
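For what it's worth, those Q4 gaps can be put in percentage terms with a quick sketch. All fps figures are the ones quoted above, nothing re-benchmarked:

```python
# Relative fps gaps for the Quake 4 SMP example above (figures from this thread).
baseline = 140.0  # dual-core Opteron 165 @ 2.7GHz, approx., per the graph
others = {
    "C2D @ 2.4GHz": 150.0,
    "C2D @ 3.0GHz": 160.0,
    "single-core K8 @ 3.0GHz": 100.0,
}

for name, fps in others.items():
    pct = (fps / baseline - 1.0) * 100.0
    print(f"{name}: {fps:.0f}fps ({pct:+.1f}% vs the 2.7GHz Opteron)")
```

So the stock 2.4GHz C2D is only about 7% ahead here, while dropping to a single core costs nearly 29%.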
Well, I noticed a perceptible difference between my Opteron 165 @ 2700MHz and my C2D E6400 even when not overclocked; the Core 2 system is more responsive in unzipping, loading stuff, etc. And even if I had 130fps in Quake 4 before, it would still dip to lower levels in hard fights (and that was with a 7900GTX or 7950GX2 vidcard), especially on multiplayer levels, where each extra fps is appreciated. I know it dropped below 60fps, mate, even with a tweaked config.
These numbers from websites only time a short period of the game (a few minutes max); the reviewers don't play it for hours a day, and their timedemos are sometimes hilarious. It gives a good indication, but you have to play it for yourself to see what gives.
We are all spoiled with huge fps numbers, and it might seem weird, but each improvement in my rig (gfx card, CPU, RAM) is noticeable to me; I need to adapt to it again to enjoy my multiplayer experience. Also, going from single core (my AMD 4000+ to my Opty) was an improvement in Windows etc., but games didn't benefit as much because they lacked SMP support in those days.
No matter what CPU you buy, you will get good performance for your money, but please stop claiming the AMD is almost as fast as the Intel Core 2. If you are a hardware purist and want the fastest rig out there, Intel is at the moment the way to go, and AMD is this time in the catch-up position. Even if it might seem like money wasted, each extra fps is welcome in any game, and it's worth it if you play online games. Nuff said...
I hope we can all agree (and not solely by virtue of opinion) that, clock-for-clock, C2Ds hold consistent advantages over K8s across various applications (single- or multi-threaded) – while expending less energy.
SuperPi:
However, I also hope that it is understood that SuperPi is not a good indicator of the disparity between the two chipmakers’ architectures. Previously noted on this thread was the need to “run SuperPi” to see how the chips compared. While I believe SuperPi is a good reference stressor tool, it is not necessarily an accurate way to compare overall performance between architectures.
Qualification:
Let’s use the E6300 (1.86 GHz) as an example. On average, it crunches through a 1M Pi digit calculation in 29.9 seconds. An FX-62 (2.8 GHz) finishes the same calculation in approximately 30.0 seconds. To put this further into perspective, even K8s overclocked to 3.3 GHz find it difficult to dip below 25-second 1M calculations, while stock E6600s are known to hit the 21-second mark. Moreover, stock X6800s readily strike 17 seconds or less. Does this mean a stock X6800 is nearly twice as fast as a stock FX-62? That’s a humorous proposition, at best.
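One way to see why the SuperPi gap looks so dramatic is to normalize for clock speed. The sketch below uses only the times quoted in this post; the X6800's 2.93GHz figure is my addition (its stock spec), not a number from this thread:

```python
# Clock-normalized look at the SuperPi 1M times quoted above.
# "G-cycles" = billions of clock cycles spent on the run (GHz * seconds);
# lower means fewer cycles needed for this one benchmark.
chips = {
    "E6300 (1.86GHz)": {"ghz": 1.86, "time_s": 29.9},
    "FX-62 (2.8GHz)":  {"ghz": 2.80, "time_s": 30.0},
    "E6600 (2.4GHz)":  {"ghz": 2.40, "time_s": 21.0},
    "X6800 (2.93GHz)": {"ghz": 2.93, "time_s": 17.0},  # clock is assumed stock spec
}

for name, c in chips.items():
    cycles_bn = c["ghz"] * c["time_s"]
    print(f"{name:16s} {c['time_s']:5.1f}s  ~{cycles_bn:5.1f} G-cycles")

# The X6800 finishes in 17/30ths of the FX-62's time -- a 1.76x gap in this
# one benchmark, which (as argued above) does not carry over to overall
# performance.
print(f"X6800 vs FX-62 SuperPi ratio: {30.0 / 17.0:.2f}x")
```

The exercise makes the post's point: C2D needs roughly 50 G-cycles where the FX-62 needs 84 in SuperPi, a per-cycle gap far larger than anything in the rendering and encoding tests cited below.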
Facts:
In the Cinebench 9.5 multi-threaded CPU benchmark, the FX-62 renders 3D faster than an E6600 and is almost on par with an E6700. As a matter of fact, using WinRAR's highest level of compression, the FX-62 bests the E6700. Additionally, in WMV 9.0 encoding from MPEG, 3D Studio Max 7 rendering, and Maya 7 HD rendering, the FX-62's performance is comparable to (and, in the instance of Maya 7, better than) the E6600's (see bottom link).
Reference: http://www.gamepc.com/labs/view_cont...6AF9EBCFE93C5A
(Note: Not bad for an architecture whose basic design was released in September of 2003, with only revisions in the areas of power consumption, maturing of manufacturing processes, sizes of cache, speed of HT link, memory channeling/bandwidth, and number of cores.)
Investment:
For someone living in the PC dark ages – or without one – or if money is not an issue, building a C2D system may be the wisest choice for value and future-proofing, or even simply for benchmark bragging rights. For someone who picked up an Opteron 165/170 a few months back (for less than $200) and is achieving overclocks of 2.9 to 3.2 GHz, though, purchasing a new board, CPU, and (in some cases) DDR2 memory is not a viable option.
Some might say, “Well, what about gaming?” Simply put, AMD’ers have their GPUs (or multiple GPUs) of choice, CPUs overclocked, and have their frame rates locked into vertical refresh rate synchronicity (as do many gamers on either side of the Intel/AMD aisle who value the absence of tearing, while giving their GPUs a slight heat-producing break). Hence, they cannot necessarily benefit from the significant frame-rate increases that C2Ds oftentimes provide – just like Pentium owners could not necessarily benefit from an upgrade to Athlons in years past.
Conclusion:
Just like the Intel faithful clung to their Pentium 4s, Ds, and EEs while K8 reigned over the gaming and benchmarking domains, you can expect AMD followers to do the same and await K8L – while C2D presently leads the way. If you’re in need of a new PC (and are devoid of an Intel/AMD bias), I see no reason not to build around the C2D architecture – it’s fast, available, overclockable, and well priced. However, if someone is currently at the top of the AMD food chain and cruising at respectable speeds of 2.6 to 3.0 GHz (or faster), I don't see an absolute need to jump on the C2D express lane (although I must account for those who simply have the monetary means – or who can sacrifice enough – to have the latest and greatest). Nonetheless, it’s another exciting era in the world of computing.
P.S.
Try leaving the SuperPi comparisons alone. As discussed, they don’t truly amount to anything substantive.
I never use vsync. Under 100/125 fps (depends on game) = no thanks for multiplayer.
Quote:
Simply put, AMD’ers have their GPUs (or multiple GPUs) of choice, CPUs overclocked, and have their frame rates locked into vertical refresh rate synchronicity (as do many gamers on either side of the Intel/AMD aisle who value the absence of tearing, while giving their GPUs a slight heat-producing break). Hence, they cannot necessarily benefit from the significant frame-rate increases that C2Ds oftentimes provide – just like Pentium owners could not necessarily benefit from an upgrade to Athlons in years past.
Also, you're comparing a $600 FX-62 to a $300 E6600. Then on top of that, a good FX-62 will run about 3-3.2GHz, whereas a good E6600 will run 3.6-3.8GHz.
Great point, many non-vsync'ers (and especially those playing AMD-CPU-bottlenecked games) will benefit supremely from C2Ds.
Quote:
I never use vsync. Under 100/125 fps (depends on game) = no thanks for multiplayer.
Those fps (“100/125”) + HQ CRT = gaming visual bliss, or smooth visuals and playability with an LCD. Thankfully, LCD PC monitors will finally see 100Hz refresh rates later this year - though it's already being showcased by Samsung's LE4073BD, a TV (http://www.behardware.com/articles/6...afterglow.html)
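To put the "100/125 fps" and refresh-rate talk in concrete terms, here's a tiny sketch (my own illustration, not from the thread) of per-frame time budgets and what a vsync cap does:

```python
# Frame-time sketch: what "under 100/125 fps = no thanks" means in milliseconds,
# and how a vsync cap interacts with panel refresh rate. Illustrative only.
def frame_time_ms(fps: float) -> float:
    """Time budget per frame at a given frame rate."""
    return 1000.0 / fps

def vsync_fps_cap(render_fps: float, refresh_hz: float) -> float:
    """With vsync on, displayed fps cannot exceed the panel's refresh rate."""
    return min(render_fps, refresh_hz)

for fps in (60, 100, 125):
    print(f"{fps:3d} fps -> {frame_time_ms(fps):5.2f} ms per frame")

# A 60Hz LCD caps a 125fps game at 60fps with vsync on; a 100Hz panel lets
# the "constant 100" crowd keep their rate with no tearing.
print(vsync_fps_cap(125, 60))
print(vsync_fps_cap(125, 100))
```

Which is why the 100Hz LCDs mentioned above matter: they close most of the gap between a vsync'd LCD and the 100/125Hz CRTs competitive players run.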
The FX-62 was not used to compare value, but rather as a reference to demonstrate how C2D's SuperPi advantages over K8 do not scale similarly in other, real-world scenarios (you can substitute the FX-62 with any of the various sub-$200 dual-core K8s that easily overclock to 2.8 GHz with the right steppings). Also, I mentioned the Opteron 165 and 170, priced at $154 and $189 respectively (for which highly overclockable steppings can be found even more easily, thanks to the maturity of the process), to demonstrate how and why current owners of those CPUs running at 2.6 to 3.0 GHz (or faster) may not need or desire to upgrade to C2D just yet, or at all, from the perspective of value, considering the costs incurred from switching platforms.
Quote:
Also, you're comparing a $600 FX-62 to a $300 E6600.
That’s the absolute reality of it, folks! And interestingly, a great K8 overclock of 3 GHz only places it between a stock E6600 and E6700 in some applications, and below them in many games. Add even mild C2D overclocking and the gap widens – very quickly.
Quote:
Then on top of that, a good FX-62 will run about 3-3.2GHz, whereas a good E6600 will run 3.6-3.8GHz.
That's either costing you a lot in hardware or graphics quality... :slobber:
Quote:
Originally Posted by afireinside
I'm perfectly happy with my Opteron 165 and my v-sync on. :P
I'd choose an E6300 over an FX-62 and beat it :)
I've tested around 10 chips so far (including the E6300) and the worst clocker was 3.4GHz, 24/7 stable on air.
No AMD chip can compete with that at the moment for top results. I just hope they have something better in the future.
Don't get me wrong, AMD did make good chips before - I had many AMD CPUs myself - but not now.
Cost me $500 for an 8800GTS when they came out :p:
Quote:
Originally Posted by LiQU!D
I run 1280x1024 and pull a constant 125 in CoD2 under DX7 (DX9 still sucks in that game) and a constant 100 in CS: Source with DX9, max detail, 16qaa, 16af, all powered by an AMD at 2.9GHz :) In single-player games I just max everything and live with crap fps; no need for an fps advantage in single player. I only upgraded to Conroe again because the 8800s can really eat up CPU power and I want to take full advantage of my card.
If you're still on a DX9 card and don't feel a need for super-fast encoding, I really don't see a need to upgrade to Conroe unless you're coming from something ancient like Socket A or 754.