I've heard this on countless forums, and here: "AMD runs games and Windows smoother."
Is this just an excuse for AMD lagging in benchmarks, fps, etc...???
What's behind these claims?
Good question. I've heard many people say this as well, and I wonder what this claim is all about.
Can anyone elaborate?
it isn't quantifiable, but i'm guessing the IMC plays a role. Users who own both platforms often say the AMD (K8/K10) feels snappier and the C2D feels kinda skippy in comparison, be it at 2.4GHz or at 4GHz.
horsepower vs elegance i guess.
I don't have two systems to compare side by side; I have only owned AMD systems for the last few years. A theoretical advantage for AMD makes some sense (because of the IMC and lower latency), but I can't see this making enough of a difference in real-world usage to be noticeable. With the newer, faster FSBs, plus Phenoms having high latency relative to the A64s (because of the slow L3/NB), AMD doesn't actually have much of an advantage in memory latency anyway.
If you look under memory access latency, E8500 is equal to Phenoms: http://www.techreport.com/articles.x/14424/3
All I can say is that during the first C2D hype I also bought a Conroe E6600, and it sure felt fast in the beginning and in some benches during the test phase. But once all the software installations were done and lots of background apps were running, the system started to slow down a lot; even my wife started to complain about the speed....:( and questioned why I updated the main rig. The old rig was an AM2 X2 5600 btw; everything else (OS, mem, PSU, GPU etc.) stayed the same, only the board+CPU was swapped.
can't say that games are smoother, since i play only older games, but indeed you hear a lot of reviewers talking about this smoothness.
Yes games are smoother on Phenom vs C2D & C2Q:yepp:
I have both an AMD 9850BE and an Intel Q6600 quad with extremely similar hardware. My AMD does in fact run smoother. Slower, but smoother, and I enjoy my AMD much more than my Intel atm. I love quality of picture in gaming, movies etc. over pure speed. For example, I prefer lower FPS with no jaggies to high FPS with more jaggies. You really don't see a noticeably huge difference from 60 FPS to 200+ FPS. 35 FPS and up is playable for me as long as pic quality is spectacular.
I believe that Intel's FSB is outdated (which they are discontinuing with Nehalem etc.); in fact they are moving the NB onto the CPU, if I understand Blauhung correctly. The FSB bottleneck shows up mostly in real-world gaming stuff and not synthetic benchmarks. So yeah, you've got 3.5+ GHz, but unfortunately it's just not as good quality (smooth). AMD can take better advantage of RAM speed with HTT than Intel atm; I just saw an article about that yesterday, I hope I can remember where I found it.
Remember, this is my opinion and in no way trolling or flaming Intel but rather stating why I'm happier with AMD:shocked:. AMD is giving me what I want which is snappiness and smoothness. I will look for a reputable lab to back this up. In the meantime, Tony here on XS forums has also stated that his Phenom runs smoother as well.
EDIT: Here are the articles I quoted with regards to DDR2 memory and phenom
http://www.digit-life.com/articles3/...nom-page1.html
http://www.digit-life.com/articles3/...ma-phenom.html
Great feedback. I didn't expect such a quick response.
Another question...
Did Intel get the cart in front of the horse? Or is the horse just too fast for the software and OS?
IMHO, Intel got the cart in front of the horse. Amazing horsepower, but crappy drivetrain and tires. That's how I would describe it. All the parts must be at max or you get bottlenecks (limiting factors). The FSB, in my opinion, is like a speed governor on an auto. Sure the car can go 200+ mph, but the governor won't let it go past 130 mph.
I've had so many lackluster Intel setups (including my current Intel quad, its good but not spectacular) that my AMDs consistently blew away that I don't get hot and bothered by anything Intel puts out, hype did not equal performance. Intel will 100% give you higher benchmarks and possibly better folding (I have yet to test my AMD on WCG, I do not speak unless I have tested).
I'm not trolling its just that in my opinion AMD has served me better in what I want my computer to do. That may change. That's why its called a Personal Computer, the user experience is personal and will vary depending on what you are into.
I think this might be a better comparison...
Say you have 2 different cars. One has a 7-speed transmission with close ratios; the other is your standard 4-speed. Now imagine pushing the pedal to the floor like in a drag race. AMD is like the 7-speed tranny, very smooth acceleration because of the close ratios. Intel is like the 4-speed: it still goes fast, but it's jerky because of having fewer gears.
Another example that probably makes about as much sense as the last one. Take two runners, a distance runner and a sprinter. AMD is like the distance runner because it keeps going pretty fast but doesn't have bursts. Sure, some stuff is slightly better optimized, kind of like running down a hill, but that's only because it's optimized. Intel is a sprinter: feed them lots of sugar and they go fast, but they still need that recovery time. Sugar is like the massive L2 cache on C2s; recovery time is how long it takes to retrieve more stuff from memory. AMD doesn't need that recovery time because of its IMC.
Putting this into better perspective: Intels bench better because benchmarks are monotonous and repetitive, meaning there's a lot less to go to and from memory. The "jerkiness" in games is because of all the changing information stored in memory. That's why having the higher front-side bus speed makes such a difference: it's that much closer to having the same actual throughput to memory as AMD, but it can't quite get there because it's not an IMC.
Please don't flame me on this, I'm just posting my interpretation of what other people have said. I did have a Core 2 system before; I had the B2 rev of both the E6600 and the E6700. I burnt out the E6700 at 3.82GHz on air, I burnt out the Asus Striker I was using the E6600 with and didn't have the money to replace it, and my T2500 is now in the hands of my older brother. And my laptop has the Pentium-M.
Probably either:
a) The A64/Phenom uses a lot more power and therefore causes the fan to speed up more compared to a C2D/C2Q. And as we know from car culture, a louder car is a faster car.
b) In games, the C2D/C2Q goes up to higher fps when in CPU limited situations and therefore the delta between CPU limited and GPU limited situations is greater on the C2D, which results in a possibly "less smooth" experience, even if the C2D/C2Q has higher maximum and minimum frame rates.
A F1 car is less smooth than a mid-size sedan, but it'll still take less time to complete a lap of a track. Just like a C2D or C2Q clearly finishes tasks faster, or does more work in the same amount of time, compared to its AMD competition.
Quote:
IMHO, Intel got the cart in front of the horse. Amazing horspower, but crappy drivetrain and tires. That' how I would describe it. All the parts must be at amx or you get bottlenecks (limiting factors). FSB in my opinion is like a speed governor on an auto. Sure the car can go 200+mph but the governor won't let it go past 130 mph.
As you can tell I'm not much of an auto buff, so thanks for taking the time to explain it better. I've only recently returned to performance PCs in the last few months; I feel good that at least my thoughts are validated.
That is True.
On the other hand HISTORICALLY Intel has been known for doing just about anything they can to get higher benchmark scores. Since most people will accept that fact based on available past evidence, then it should not be difficult to believe that they might also sacrifice something that is non-quantifiable for better benchmarks. And that they might continue this trend on their current crop of processors.
1. If you have a super silent fan you'd have to base your opinion on what you actually experience. (I can't hear my CPU's fan over the case fans.)
2. What if you have higher max but lower min speeds? (Which happens on many benchmarks.)
My older brother also said that his old S754 Newcastle 3000+ felt way smoother in WinXP and in WinVista than his new E6320. Guess the IMC makes the difference or something.
I think the HT link actually makes everything a lot better. It's basically a HUGE highway that's connected to everything in the whole system.
The IMC surely helps, but not due to the latency tests you see in, for example, Everest, but latencies we can't test (I think). I mean, if you'd run DDR800 5-5-5 (giving crap latency) or DDR1200 4-4-4 (superior latency), I wonder if this would make a whole lot of difference in your 24/7 usage experience. It's more that the IMC is well located architecturally and, as already said, gives untestably good latency.
But the IMC alone isn't cutting it; I think it's more the whole connection with the whole system, the HT link.
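Fwiw, the raw CAS figures compared above can be turned into nanoseconds with a quick back-of-envelope calculation (my own sketch; the DDR800 5-5-5 and DDR1200 4-4-4 numbers are just the ones from the post):

```python
def cas_latency_ns(data_rate_mts, cas_cycles):
    """Convert a CAS timing to nanoseconds.

    DDR transfers data twice per clock, so the memory clock in MHz
    is half the data rate; one clock period is 1000 / MHz ns.
    """
    clock_mhz = data_rate_mts / 2
    return cas_cycles * 1000 / clock_mhz

print(cas_latency_ns(800, 5))   # DDR800 5-5-5  -> 12.5 ns
print(cas_latency_ns(1200, 4))  # DDR1200 4-4-4 -> ~6.7 ns
```

So the "superior latency" setup really is roughly half the CAS wait in absolute time, though whether that few nanoseconds is feelable in 24/7 use is exactly the open question here.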
i have been testing over the weekend. while not very scientific tests, still tests.
rather than overclocking the cpu, instead running the ram at higher speeds along with getting the HT and NB higher and upping the pcie, i am seeing better fps and overall faster boots etc. going this route rather than trying to OC the cpu speed itself. when i OC just the CPU i have to of course run lower ram speeds and lower HT and NB.
so it seems going this other route is matching up the bandwidth pipe better with the whole system.
as far as the thread here, i can say the home ati/amd machine does seem faster and smoother than the work intel quads. not sure why, since the work intels are rated at higher stock speeds and benchmarks everywhere say the intels should be faster.
My thoughts on why people are seeing this is because Phenoms have the ability to SHARE data between cores (L2 is mirrored in shared L3). This is why my AMD dualies suffer lags (crossbar isn't quite what you may think) and my Intel C2D (with its shared L2) does not.
The Windows scheduler does not seem to prefer leaving a running thread on the same core (known as soft affinity) for extended periods of time. If you are running a single-threaded app, you will most likely see the CPU time split to some degree between all of your cores (60-40 for example). When I run single-threaded apps in Ubuntu, the load will remain on the same core for a long period of time (several seconds usually) before something causes the scheduler to switch it and I experience lag. If you want to know the reason for this, look up NUMA (Non-Uniform Memory Access).
When the XP scheduler (not NUMA aware) switches the cpu your thread is running on, all of the data in the first cpu's cache needs to go into the second cpu. This requires it to be written to memory and then read into the cache of the second cpu. If your 2 cpus have a shared cache (Phenom, C2D, or between cores 0&1 or 2&3 of your C2Q) that delay will be greatly reduced.
This is just my best educated guess as to why this happens. Whenever I am playing a game (probably the only time you will notice) and I experience these lags, there is usually a dip accompanied by a rise in Task Manager or System Monitor (depending on my OS at the time). The good news is that some games are compensating for this. STALKER for example forces its main thread on CPU 0 and will peak its utilization while running less demanding threads on your other cpu(s).
If you want to avoid these evil task switches, you can use task manager or taskset to set your (single-threaded only) app's affinity to one core.
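On Linux the taskset trick can also be done from inside a script; here is a minimal sketch using Python's os.sched_setaffinity (Linux-only, so treat it as an illustration; on Windows you'd use Task Manager's "Set Affinity" instead):

```python
import os

# Pin this process (pid 0 = the caller) to core 0 only, so the
# scheduler can never migrate its threads to a different core.
os.sched_setaffinity(0, {0})

print(os.sched_getaffinity(0))  # -> {0}
```

Launching a single-threaded game through a wrapper like this is the programmatic equivalent of `taskset -c 0 ./game` and avoids the cache-flush penalty of the task switches described above.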
behind those claims is a legend: let me explain. This legend is based on a true story: it all started with the "rambus" disaster in the year 2000. Starting in 2000, Intel went through 2 or 3 dark years. In 2000, in order to get the maximum performance out of the first Pentium 4, you had to use the so-called "rambus" memory modules, which started selling for 1200 euros at the time (I was in Germany so that was 2400 DM) and were at the same time almost impossible to find.
At the same time AMD CPUs were getting faster and faster, and cheap as well. Finally Intel replaced their bloody i820 chipset (needed for Rambus) with the i840, running SDRAM memory, and the combination of P4 + SDRAM was terrible. AMD CPUs were selling more and more. In 2002, Intel finally converted their i840 chipset into an i845 that could use DDR RAM (there was also an i850 for Rambus, much cheaper than in the beginning, but still too expensive).
Anyway, the result is that for all these reasons, most gamers bought AMD CPUs (and VIA chipsets, mostly on MSI mobos) for more than three years, during which those AMD things were really faster (except for videos). That's where the story ends.
Then when Intel came around with their first dual-core, in 2004 I think, AMD couldn't follow in either speed or efficiency. Not to mention that, according to something I read recently, AMD hasn't been able to modify their CPU architecture since 2003, while Intel produced one generation of dual-cores after another, and then the quad-cores. But the legend remained :D
Now if you talk about recent comments giving superiority to AMD versus Intel, sorry I'm lost. I haven't read a single review for quite a long time that didn't give Intel as a clear winner in all areas: gaming, video editing etc...see Q6600 success in this forum for instance ;)
Too much crap in this post...
MSI mb for gamers with AMD chip :ROTF:
Except for rambus being crappy, what remains of the post is bull:banana::banana::banana::banana: :down:
I am a system builder for extra coin, so frequently I have AMD and Intel systems with similar internals (vid card, ram, drives, etc) side by side. The only thing I can tell you is I do not see ANY appreciable, or measurable, difference... and those that do probably feel the HP difference when installing new tires on their car.... what I'm getting at is, it's just not there... and if it is, it's not b/c of the CPU! ;) :lol:
In end 2002, NF2 was THE chipset for AMD chips (Thoroughbred or whatever, Barton...) and NOT VIA or whatever :down:
And in 2003 (not 3 years later), "gamers" :rolleyes: switched to the P4C, which was good at overclocking with good performance...
And if I remember correctly, the first true dual core chip was an AMD one (even before the false P4 dual core)
Well, you should stop booze:shrug:
I think in my post I covered the RAM and video card. In some of my systems the ONLY differences were the CPU and mobo. If it were the mobo, many other people would believe this is true, and I do not believe many people think this is true. Like my reference above, I think it's a "butt-dyno" situation. Both are silky smooth for me with the same systems sans CPU and mobo.
Could this Analyzing Efficiency of Shared and Dedicated L2 Cache in Modern Dual-Core Processors be why?
The tested systems are not particularly modern, one being a Socket 939, 2.0 GHz X2 3800+, 1MB total L2, E6 revision AMD system, the other being a X6800, 2.93 GHz, B1 Conroe Intel system.
Regardless of that, when testing with 512KB+512KB and 2MB+2MB working-set sizes respectively, the AMD system wins.
I was going to chip in and say Savantu had his concepts all wrong - you can produce a super complex die on 65nm, sure, but ramping it up in volume is the real argument here.
AMD managed to produce K10 in volume on 65nm.
Intel can produce Tukwila, but it can't do so in volume. So, yeah. After x amount of tries, you'll get success sooner or later.
But I don't want to fuel the flames, so I'll stop now.
IMHO, it is quantifiable. The key to quantifying interactive performance is measuring the latency or standard deviation of FPS in games. Smoothness = lower std dev, because the FPS doesn't vary much. But I don't know if there is a benchmark out there that measures this. :shrug:
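A toy version of that metric, using two made-up FPS traces with the same average, where only the standard deviation tells the "smooth" run from the "jerky" one:

```python
from statistics import mean, stdev

# Two hypothetical per-second FPS logs with the same 60 fps average
steady = [58, 60, 62, 59, 61, 60]   # small swings: feels smooth
spiky  = [95, 30, 88, 25, 90, 32]   # big swings: feels jerky

for name, trace in (("steady", steady), ("spiky", spiky)):
    print(name, mean(trace), round(stdev(trace), 1))
```

Both traces would report an identical "average FPS" in a typical review, which is exactly why an average-only benchmark can't see smoothness.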
It may be measurable by equipment, but is it noticeable to the human eye? Are you certain it's the CPU that is directly affecting those 'stutters' that people claim to have or not have?
Everest is NOT correctly reporting the memory benchmark.
Probably all synthetic memory tests are doing this to K10.
http://abinstein.blogspot.com/
ROFL!
fantastic links guys!
Finally some evidence that most benchmark software out there is either not well written or not up to date with the latest technology.
And that's why we get stuffed with a lot of wrong results in regards to benchmarks.
EDIT: And this holds true for real-world applications as well.
BINGO. You CAN measure it. But you are correct. Current benchmarks do NOT.
As I have said many times in the past... this debate has so many similarities to the "RAID-0" versus "a single fast drive" debate.
Generally I bypass that entire debate by owning a TRUE hardware raid card. But sometimes you still get people tell you that their software RAID works just as well. (And many of them do NOT accept the fact that they HAVE software RAID.)
Typical discussion:
ME: Is the RAID built into the mb?
Fan: Yes.
ME: Is it a server motherboard?
Fan: No.
ME: Dude... you have software RAID. Period.
Fan: NO WAY... it's in the hardware... on my motherboard. It's hardware RAID.
ME: Yep. And I'll bet you believe the slowest Intel QUAD CPU is better than the fastest AMD QUAD CPU because you read it on the same site that told you your motherboard was hardware RAID, eh?
Fan: Of course... look at the benchmarks at the hackandslashdeadbodieslazerpewpew site Intel just devastates AMD in just about all of the single threaded game benchmarks!
(OOOPSSS::: Sorry. I drifted off topic. I'm bored. In a hotel. In Ohio. On business. And I'm allergic to something in the air in this stinking state. I think I'll go drink beer and drown my sorrows.)
BUT ANYWAY: BACK ON TOPIC::: Yes... as the benchmarks mature and get tweaked... we can expect we'll discover that the Phenom isn't the awful chip that many people kept trying to FUD us into believing it was.
[QUOTE=keithlm;2987233]BINGO. You CAN measure it. But you are correct. Current benchmarks do NOT.[/QUOTE]
:rofl::rofl::rofl::rofl:
Enjoying the charms of Ohio eh mate?
well, stutter = a bottleneck somewhere, doesn't it?
maybe there is a memory problem without an IMC.
i can get stutters on amd depending on what's happening with graphics.
...but smoother in what application?
i think if you have everything set up right you will get just as "smooth" with a faster cpu :hehe:
8or16xQ is a sweet ride with 60+ fps minimum
now someone call me an intel fanboy.:rolleyes: cos if they do, they'd be right.
my intel is way better out of my 2 little boxes....but i guess a 3.4GHz am2 would be pretty snappy.
i would need some 'quantitative' stuff to back up the whole smooth thing.... i got two smooth systems as far as i am concerned:)
probably graphics card related anyway.... but perhaps variations at 800x600 res or something like that could be compared between AMD and Intel CPUs... :shrug:
Quote:
IMHO, it is quantifiable. The key to quantify interactive performance is measuring the latency or standard deviation of FPS in games. Smoothness = lower std dev, because FPS doesn't vary much. But i don't know if there is a benchmark out there that measures these.
i suppose i should have read the thread 1st :rolleyes:
Quote:
you're right, it's not the cpu, it's the total package (cpu, ram, mobo, and video) which gives that uber smooth, silky HD quality.
I think the main reason is thread synchronization - shared L3 cache, IMC, and all cores on the same chip.
My background - I am a programmer (server side).
When a complex program (good game, server, ...) is multithreaded it needs to synchronize access to some resources which are exclusive (only single thread can use them at a time). Synchronization is usually implemented by spin locks or other techniques using shared memory.
On a Phenom, the spin-lock gets cached in the shared cache L3 and it stays there - it is accessed very often by many threads and likely all cores.
Occasionally it moves from L3 to L2, L1, L2 and back to L3 when a thread/core spins on it.
On a C2D/C2Q the spin-lock stays mostly in memory and moves around between the caches of the individual cores when 2 cores try to spin on it. So it moves Memory - L2 - L1 - L2 - Memory.
So, when 2 cores try to spin the same lock, it moves around between each core. On Phenom it moves up to L3, on C2D/C2Q it goes up to Memory.
L3 is much faster than memory - lower latency, so contentions finish a lot faster.
In benchmarks, all threads are just simple copies of each other with no synchronization between them, so this is not a factor. But a real application will require synchronization and will experience the penalty described above.
Add to this the lack of an IMC on the C2D/C2Q and you get an even bigger penalty.
Extra explanation:
A spin lock is when a thread keeps trying to acquire the lock until it succeeds - it continuously spins on the lock until it acquires it. (Using atomic instructions like TestAndSet.)
Below is an example of 2 threads spinning on a lock and how the lock (an int - 32/64 bit) moves around.
(L1A is L1 cache of core A, L1B for core B, ...).
Assuming the same latencies for Phenom & C2Q: L1 - 3 cycles, L2 - 15, L3 - 48, Memory - 150
Phenom:
Core A spins: Mem -> L3 -> L2A -> L1A -> Core A -> L1A (total 150 cycles)
Core A spins: L1A -> Core A -> L1A (3 cycles)
Core B spins: L1A -> L2A -> L3 -> L2B -> L1B -> Core B -> L1B (48 cycles)
Core B spins: L1B -> Core B -> L1B (3 cycles)
Core A spins: L1B -> L2B -> L3 -> L2A -> L1A -> Core A -> L1A (48 cycles)
C2Q:
Core A spins: Mem -> L2A -> L1A -> Core A -> L1A (total 150 cycles)
Core A spins: L1A -> Core A -> L1A (3 cycles)
Core B spins: L1A -> L2A -> 2xFSB/Memory -> L2B -> L1B -> Core B -> L1B (150 cycles)
Core B spins: L1B -> Core B -> L1B (3 cycles)
Core A spins: L1B -> L2B -> 2xFSB/Memory -> L2A -> L1A -> Core A -> L1A (150 cycles)
Imagine how much waiting happens when 2 cores spin on the same lock while it has been acquired by a 3rd thread. It cannot be cached for more than 1-2 instructions because both cores want exclusive access to it...
Just count how many clocks are required when the lock transitions from one core to another... Having no shared cache becomes a very big penalty. Add to that the FSB penalty...
Of course, a good program will use as few locks as possible, but they are still required...
A single-threaded program doesn't have this problem - most of the frequently used data will move into the L1 or L2 cache and stay there, giving the C2D higher performance (because of clock frequency).
PS: Hope it makes sense... sorry for being so long
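To tally the walkthrough above: summing the assumed per-step latencies (L1 = 3, L3 = 48, memory = 150 cycles) over the five spins listed gives a rough feel for the gap (just a back-of-envelope count of the poster's numbers, not a real measurement):

```python
# Latencies assumed in the post above, in core clock cycles
L1, L3, MEM = 3, 48, 150

# Five spins from the walkthrough: initial miss to memory, a local
# re-spin, then the lock ping-pongs between the two cores twice.
phenom = MEM + L1 + L3 + L1 + L3    # ping-pong settles in the shared L3
c2q    = MEM + L1 + MEM + L1 + MEM  # ping-pong goes out over FSB/memory

print(phenom, c2q)  # -> 252 456
```

So under these assumed latencies, the same five spins cost nearly twice as many cycles when the contended line has to round-trip through memory instead of a shared L3.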
interesting.
there must be programs that show up the benefits of AMD's IMC and cache.
Looked close.
Who wins?
You do know that the Core 2 Duo has a shared L2; there is no L2A or L2B, just the L2. Of course the Core 2 Quad is 2 C2D dies in the same package linked together by the northbridge. The C2Q does have 2 L2 caches, but Windows will schedule each task to each C2D die, will it not? Which is still useless for a 4-threaded application.
Anyway I think this extract from the Efficient Data Sharing in Intel Core Microarchitecture Based Systems presentation backs you up:
"Frequent modified cache line sharing is bad
• Intentionally – e.g. synchronization
• Mistakenly - False Sharing"
A limitation of MESI?
All we need now is a "fair" benchmark or application which considers all the above mentioned......
Any volunteers to write one ? :D
I'm not sure. I don't think Intel or AMD provide sufficient data for me to answer that but I think AMD has the advantage, here's some info on MESI and MOESI:
Page 468.
Table on MESI: http://download.intel.com/design/pro...als/253668.pdf
Page 3. An incorrectly coloured simple flow diagram for MESI:
http://softwarecommunity.intel.com/i...ry_Traffic.pdf
Page 168. Simple flow diagram for MOESI
http://www.amd.com/us-en/assets/cont...docs/24593.pdf
Page 219. Definition of MOESI states
http://www.amd.com/us-en/assets/cont...docs/40546.pdf
I've spent far too long trying to write this and trying to think of all the variables so let me just say I'm too tired and I'm sorry if it's useless:
C2D:
Both cores read data J (S)
One core modifies J (S-M, other becomes I)
Both cores read J, can't have 2 copies of an M line
J is written to RAM
Both cores can access J in the S state
Dual core K10:
Both cores read data J (S)
One core modifies J (S-M, other becomes I)
Both cores read J, the modified copy of J becomes O and the other core gets an S copy
Both cores can access J without having it written to RAM
In both protocols, E and M are unique states, i.e. if a cache line is in the E or M state it is guaranteed to be the most up-to-date copy of the data. As the O state can transition to M, it can be locked onto and shared without writing to RAM...
C2D:
Core A locks on and, for argument's sake, loads data from RAM into its L1 in a locked E state; it works on it, transitioning it to a locked M state, and it can update it multiple times.
Core A unlocks
In order for the cache line to be shared it is written to RAM then called back in in the S state
Core B locks it...
Core B unlocks
Cache line is written back to RAM and then shared
K10:
Core A locks on and, for argument's sake, loads data from RAM into its L1 in a locked E state; it works on it, transitioning it to a locked M state, and it can update it multiple times.
Core A unlocks
In order for the cache line to be shared it transitions to the O state and is given to core B in the S state
Core B locks it...
Core B unlocks
Cache line is shared without being written to RAM
I hope what I wrote makes sense, but if it doesn't, ask me to clear it up and I should be able to do so easily.
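The key difference between those two walkthroughs can be captured with a toy function (my own simplification, counting only whether a RAM writeback is needed before a modified line can be re-shared):

```python
def share_modified_line(protocol):
    """Core A holds line J in state M; core B then reads it.

    Returns (ram_writebacks, (state_a, state_b)) after the share,
    following the simplified walkthroughs in the post above.
    """
    if protocol == "MESI":
        # No Owned state: the dirty line must be written back to RAM,
        # after which both cores hold clean Shared copies.
        return 1, ("S", "S")
    if protocol == "MOESI":
        # The owner keeps the dirty data in O and forwards an S copy
        # to core B cache-to-cache; RAM is never touched.
        return 0, ("O", "S")
    raise ValueError(protocol)

print(share_modified_line("MESI"))   # -> (1, ('S', 'S'))
print(share_modified_line("MOESI"))  # -> (0, ('O', 'S'))
```

For a lock that ping-pongs between cores thousands of times a second, that one writeback per handoff is exactly the extra memory traffic the earlier spinlock post was counting.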
AMD's the little guy; they can't afford hired gunmen. They can however afford bulldozer-driving, hammer-wielding men. If you mentioned FutureMark because R6xx scores high in 3DMark but not in FPS, that's because it has a massive texture deficit compared to G8x/G9x.
Wow, a lot of writing here. I've been saying to my friends for a while now that my AMD Opteron 939 @ 2.9 felt smoother than my Q6600 at 465x7. It has to be the memory controller, because with AMD I can get better memory performance. I believe it's as simple as: we don't have a bottleneck at the CPU, so why does it matter if the Intel CPU is faster? I'm looking at AMD systems now because I'm just not happy with the Intel system. It's lame in my opinion and I'm sick of riding this Intel bandwagon because it seems like everybody is on it. I want to join the rebel side and say F them benchmarks cuz my everyday computer use does not include benchmarking my CPU. lol.
I just found this
http://img134.imageshack.us/img134/1304/9850mh0.jpg
* QX9775 - 800MHz @ 5-5-5-15 (FB-DIMM)
* QX9770 - 1600MHz @ 7-7-7-20 (DDR3)
* QX6850 - 1333MHz @ 7-7-7-20 (DDR3)
* Q6600 - 1066MHz @ 7-7-7-20 (DDR3)
* E8500 - 1333MHz @ 7-7-7-20 (DDR3)
* E6750 - 1333MHz @ 7-7-7-20 (DDR3)
* Phenom 9850 - 800MHz @ 5-5-5-15 (DDR2)
* Phenom 9600 - 800MHz @ 5-5-5-15 (DDR2)
With my Q6600, the 9850 owns my bandwidth when I'm at around 1165MHz 5-5-5-15
I know for sure that I'm not utilizing my CPU to its fullest. For example, I'm running my Q6600 @ 2.4 and hardly notice a difference in gaming compared to running it at 3.3. I'm pretty confident, though, that a boost in memory bandwidth would be felt in gaming. So I would stand to benefit if I had a 9850 right now. Please correct me if I'm wrong, because I'm seriously thinking of ditching this Intel chip.
:welcome: to AMD land! I like to be off the beaten path... been buying AMD since 1997, before the great performance of the Athlon. I don't get worked up about reviews all that much; I only believe what my eyes tell me when I'm testing systems.
Yeah, I've got the rig in my sig and a q6600 with as near same specs as possible and I see the smoothness as I've posted before. I'm not saying Intel's all that jerky or anything but I do say AMD's smoother. Someone made a reference to feeling a change in HP when you change tires on a car, well no that's not what I'm saying. AMD is SLOWER, but smoother. I would notice a nicer ride and maybe better handling with the new tires but not a change in the engine. There's a reason tires are only rated to a certain speed.
Anyway it's only my opinion and its not gonna break Intel or AMD.
Don't forget that AMD Hypertransport plays a strong role in system I/O latency. PCI-E interconnect using Hypertransport is very efficient.
Much of what users are seeing may not be directly CPU or memory related but rather... I/O based.
http://www.hypertransport.org/docs/w...ore_Design.pdf
http://www.hypertransport.org/docs/w...s_Parallel.pdf
http://www.hypertransport.org/
Get more powerful GPUs. The reason why you're not utilizing the CPU to the fullest is that the C2D is so powerful that even at 2.4GHz, it's enough to be GPU limited.
No, you'll see a noticeable decrease in gaming since the 9850 is a slower processor, especially in most games.
Quote:
I'm pretty confident though that a boost in memory bandwidth would be felt in gaming. So I would stand to benefit if I had a 9850 right now. Please correct me if I'm wrong because I'm seriously thinking of ditching this Intel chip.
The 9850 is like right on par with the Q6600, look at this article.
http://www.guru3d.com/article/cpu-sc...re-processors/
That was a pretty good review. I hear all this talk about the GPU being so powerful, yet some games still get a big boost from the CPU. It's too bad I don't have any of those games installed to see how my CPU performance affects the FPS. I only have Rainbow Six Vegas 2 installed now and I don't notice a whole lot between 2.4 and 3.6. I will look into WIC to see what differences I can dig up. It would also be interesting to look at CPU utilization during gameplay.
I moved to a E7200 from a s939 AMD 3800 X2 @ 2.7 ghz.
Seriously, except for faster OS and game load times .. I feel no difference at all except in benchmarks.
I should've realized that all the benchmarks people use to test processors with are things I'd never use, like those test suites .. and .. decoding .. and .. winrar ...
hmm, weird
I have corel painter running with a crazy amount of cell textures (at least 5 GB of files)
my tv app in the background, firefox3, opera9.5, openoffice 3, my mail, limewire, utorrent,
reaper (with about 300mb of audio takes) and steamchat
all programs are open and actually being used and my pc feels as responsive as if nothing was running,
even though my cpu-usage is at 80% steady and has been for a long time
corel painter alone uses crazy resources :p:
check that pc out :/
so when all evidence & reviews say that the phenom is slower than core, people come up with an unprovable victory for amd.
The funny thing is that when Intel wins a benchmark by 10-20%, people say they would not notice it in the real world, that it is only a couple of seconds or a few fps, yet these same people can pick up on the millisecond advantage the IMC brings. lol, just lol.
Its the same old sh1t, just a different day.
Some of the newer games use only 2 cores (load was 200-230%: 76% + 32% + 44% + 62%).
Try http://www.webtemp.org/ for logs while playing, but don't install speedfan as it recommends.
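For what it's worth, the kind of per-core load log webtemp produces can be approximated by sampling the OS's cumulative CPU counters twice. Here's a rough Linux-flavoured sketch of the same idea (the function name is my own, and it assumes the standard `/proc/stat` field order: user nice system idle iowait irq softirq):

```python
def core_usage(sample_a, sample_b):
    """Compute per-core utilization (%) from two /proc/stat snapshots.

    Each sample is the full text of /proc/stat; the 'cpuN' lines hold
    cumulative jiffies per core.
    """
    def parse(text):
        cores = {}
        for line in text.splitlines():
            fields = line.split()
            # keep per-core lines ("cpu0", "cpu1", ...), skip the aggregate "cpu" line
            if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
                nums = [int(x) for x in fields[1:]]
                idle = nums[3] + nums[4]  # idle + iowait count as not busy
                cores[fields[0]] = (sum(nums), idle)
        return cores

    a, b = parse(sample_a), parse(sample_b)
    usage = {}
    for core in b:
        total = b[core][0] - a[core][0]
        idle = b[core][1] - a[core][1]
        usage[core] = 100.0 * (total - idle) / total if total else 0.0
    return usage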
Core is no longer winning all "evidence & reviews" of the benchmarks. You seem to be stuck in LAST YEAR. The Phenom 9850 is actually beating the heck out of the Intel Q6600 and coming closer to the Q9300-Q9450 in performance. Welcome to reality! And that is at STOCK speeds. (Should I mention that people are hitting 3.0Ghz with older 9500 chips and 3.5Ghz with the new chips? Oh... perhaps I'll hold off... I don't want to completely blow your reality away.)
You are SO living in the past. But you are welcome to your biased viewpoint. Luckily we are not FORCED to buy inferior chips based on an old architecture that is already being replaced just because they come out ahead on a few non-relevant benchmarks.
Actually what you see happening is that people are noticing that their AMD machines are running better and they say: "Hey... how come the benchmarks don't show this.. it's obvious to anyone that pays attention." And then they look and try to explain why one company's chips benchmark better but do not actually perform as well as the other company's chips.
But if it makes you feel better you can go on believing that it is just people that prefer one brand exaggerating and making things up. (If it makes your bruised ego feel better... GO FOR IT.)
This is directly related to the fact that many people have ignorantly claimed that "what is good for servers is not always good for the desktop". I'm going to enjoy watching that myth get busted.
After I went from a Sempron to an X2 @ 2.6 GHz I found the main bottleneck was my hard drive. So I got 6 GB of mem and http://www.superspeed.com/desktop/supercache.php and dedicated about 3GB RAM to disk cache in XP x64 and voila, super responsiveness.
I also noticed with a dual core that sometimes I'd start the Winrar benchmark and forget, and only notice later when Speedfan's temp reading showed high in the taskbar, rather than notice a decrease in responsiveness.
Ah yes, spinlocks. In my world, we call 'em SyncLocks. :) That is totally true. No matter how well you try to design a multithreaded system, if it isn't a benchmark then at the very least the work controller will need to synchronize the work unit tables. It's like I was trying to explain a distributed project I was working on one time: I can process this stuff very, very fast on five machines without any delays and minimal memory usage. However, good luck decoding the data into anything meaningful. Real life data has to be in a specific order to be useful. This can't be "promised" if your threads are just running chaotically without synchronization of the work being done. Synthetic benchmark data does not have this requirement. Because of this, gains in my distributed project weren't close to their theoretical potentials.
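The scaling loss from that kind of synchronization can be ballparked with Amdahl's law; a quick sketch (the 10% serial fraction is an assumed illustrative figure, not a measurement from my project):

```python
def amdahl_speedup(serial_fraction, n_units):
    """Speedup of a workload where serial_fraction of the time
    (e.g. the synchronized work-unit tables) cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

# With 10% of the time spent under the work controller's lock,
# five machines give nowhere near 5x:
print(round(amdahl_speedup(0.10, 5), 2))  # → 3.57
```

That's exactly why real ordered data never hits the numbers a synthetic, synchronization-free benchmark shows.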
---
And I do have to agree--the quality of experience on my Phenom is much better than it was on my C2D despite the benchmark discrepancies. Call it BS if you want to, but until you've actually used both platforms enough to experience the difference first hand, your opinion is meaningless.
---
Games are almost always CPU limited. When is the last time you saw a game that couldn't max a single core no matter how high it was clocked? Or alternatively, when is the last time a game didn't manage to max 50% on a dual core or 25% on a quad? I know of no games that fall short of maximizing at least one thread. The bottom line is that games run as fast as they can, and more often than not the GPU is waiting on the CPU instead of the other way around. Better GPUs manage to do more work without the CPU's involvement, but it would take a major architecture overhaul of an entire system (not just the GPU) to completely eliminate the need for the CPU.
I can both tell and feel a large difference between my E6600 at 2.4GHz and 3.6GHz in all Source engine games. Other games are like that as well, but Source really shows off the difference in clock speed.
It's not beating the heck out of the Q6600, it's barely competitive.
Except where are these applications in which the C2D benchmarks better yet the Phenom performs better? If it does, and it's noticeable, it should be easy enough to measure. Quote:
And then they look and try to explain why one company's chips benchmark better but do not actually perform as well as the other company's chips.
Do you accept the fact that not every experience a user will undergo with a certain setup is reproducible? Because that is what a lot of users are mentioning here.
Sure, Intel wins a lot of benchmarks. But those benchmarks only show an incomplete performance chart. A lot of the experience depends on things like responsiveness et al., but those things are far less perfectly reproducible, and thus rendered useless for benchmarking. A competitor can perform lousily in the typical suite of benchmarks, yet surprise the common user in the gray area of performance, which is what is discussed here.
So when throwing around the typical benchmarks where Intel supposedly wins, you should know that the AMD advantage which users are experiencing isn't covered by that typical test suite of benchmarks, but rather lies in that grey zone of performance where objective measurement is more complex and difficult.
This isn't just an excuse to suck at typical benchmarks though, but if several users confirm this theory, I'd like to believe them. Not everyone speaks out of fanboyism. Although there is a social theory that states that a person likes the item he bought more than the item he didn't buy, just because he bought it. But that argument is null and void since most users here have an Intel and an AMD setup. Total objectiveness can never be achieved, but we can presume we're getting close. Or maybe not?
No I don't. Human perception is so unreliable as to be meaningless.
That sounds a lot like the argument of tube amplifier lovers or expensive speaker cable buyers. Quote:
So when throwing around the typical benchmarks where Intel supposedly wins, you should know that the AMD advantage which users are experiencing isn't covered by that typical test suite of benchmarks, but rather lies in that grey zone of performance where objective measurement is more complex and difficult.
Agreed.
But then again: the suite of benchmarks used to put a number on the performance of a processor is, in (not only) my opinion, not fit to give a complete image of that processor's performance in every aspect it can be used for. Don't you agree with that?
Also true, but my guess is that speed (albeit in relatively more complex situations), as in responsiveness and latencies, actually CAN be measured, as opposed to the "quality" of sound or even a certain signature that is added to the sound by tube amps and/or oxygen-free cables and the lot.
Anyway, I'd like to see an evolution in computer benchmarking that will uncover the less explored corners of CPU performance in everyday use as a workstation. Bench suites like SPECperf are nice for servers, whose tasks are preset and limited and where a quasi-constant workload is assured. But for workstations, where the overall performance depends on a lot more than scores obtained while the computer is doing nothing but being benchmarked, they are less meaningful as an overall image of its potential. It also depends on how the user uses his workstation, which makes such a fitting image very complex to arrive at. I know, maybe it's all not worth the trouble. :(
Also I'm aware I'm just stating the obvious. But to me it seems that some people don't want to hear this story and go all the way on the limited set of today's benchmarks when justifying some brand over the other.
Please define what you mean by "barely competitive". If you mean that it still loses a couple of benchmarks against a Q6600 then you are correct.
Of course when the Phenom was winning about 30% of the benchmarks against the Q6600 it was deemed "an utter disaster". So I guess the Q6600 is a complete and utter loss now.
I'm sorry that people can no longer play the "Even the best can't beat the worst" card anymore... but they need to move on. Why don't you tell us about how the 9850 can't compete against the Q9450. <yawn> Wonder how the 9950 is going to compare against the Q9450 since they will both clock at 2.66Ghz. (Perhaps that's why AMD decided to clock it at 2.66?)
(EDIT: Mistyped 9440 instead of 9450.)
HAH, couldn't come up with your own example? :p
In any case, this is different from those audiophile nuts. Computer experience is all about quantity (of time), not quality like audio. Quantity is measured far more easily than quality. Therefore this so-called fluency that has been used as an argument pro AMD actually can be measured, although indeed not in ways as simple as how we typically benchmark computers/processors. That makes your analogy erroneous.
To be fair, I did mention expensive speaker cables first.
On the other hand, both rely on human perception. The typical human ear usually cannot tell the difference between pretty good hardware and high-end hardware. Similar to the old adage that a new system has to be 50% faster than the old one before a person can perceive it as faster. If a Phenom is better in a way that is noticeable by a person, then it should be measurable. Quote:
In any case, this is different from those audiophile nuts. Computer experience is all about quantity (of time), not quality like audio. Quantity is measured far more easily than quality. Therefore this so-called fluency that has been used as an argument pro AMD actually can be measured, although indeed not in ways as simple as how we typically benchmark computers/processors. That makes your analogy erroneous.
Yes, it does. Encoding and crunching are both faster on my Q6600 as well as rendering, but only with the OC at 3.4 to 3.6. I haven't had any of the newer Intel quads so I can't talk about them.
I'm merely stating I like the feel of my Phenom better. It is very difficult to reproduce exact setups as not all Phenoms clock the same even among the same stepping and there is variation on the Intel side too from chip to chip, stepping to stepping. There may be variation of other components like RAM OCs and GPU OC too. Then you take into account the various skill levels of OCers.
I've said my piece, you can accept what I say or not. Neither AMD nor Intel pays me a dime so I couldn't care less.
Fair enough.
That's right, but my point was that this so called responsiveness, being quantitative in nature, CAN be turned into an objective benchmark, if only people had the means and the ambition to do that. Then we would be able to bench this unexplored range of performance and back up this people's statements with hard numbers, which isn't possible now because of the lack of this kind of benchmarks. I don't know if this view is a realistic one, but I do know that no benchmark is ever completely right in judging a processor's total performance.
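As a toy example of what such a responsiveness benchmark could look like, here's a sketch that measures how late the OS actually wakes a thread that asked to sleep for a few milliseconds. The function name and parameters are made up, but the overshoot it reports under load is exactly the kind of latency people say they can "feel":

```python
import time

def wakeup_jitter(interval_s=0.005, samples=200):
    """Request short sleeps and report how late each wakeup was, in ms.

    On an idle, responsive system the overshoot stays small and steady;
    under heavy background load it grows and spikes, which is one
    objective proxy for perceived sluggishness.
    """
    overshoots = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(interval_s)
        late_ms = (time.perf_counter() - start - interval_s) * 1000.0
        overshoots.append(max(late_ms, 0.0))
    overshoots.sort()
    return {
        "median_ms": overshoots[samples // 2],
        "worst_ms": overshoots[-1],
    }
```

Run it once on an idle box and once while your background apps are crunching, and you'd have a number to compare platforms with instead of a feeling.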
Anyway, with only little experience on dual cores (which are in computers that are not mine) and no experience with quads whatsoever, I have little right to speak out of experience. I merely like to think about it and pose theories that at the time seem (partially) plausible to me and to the stories I read here and there on boards. In all this thinking and being pulled and drawn from one brand to the other, I will someday be finally able to make a decent conclusion as to which brand and matching platform I'd choose above all others. If only it was easier...
When I see a review that has results that are completely contrary to several other reputable sources... I have to discount those results and go with what the majority of the reviews say.
But congrats on picking two of the most biased reviews. No surprise there. You do realize that I can also quote reviews that show exactly the opposite?
This is a battle that can never be won. This is your analysis of the results against my analysis of the results. (And I'm right.)
At the end of the day, it probably won't matter much which brand you pick. People will always state AMD is #1 and people will always state Intel is #1, at the same time. One thing we do know is technology will advance, and who knows, Nehalem may stink and K10.5 may rock. Unlikely, but possible. As for benchmark websites, I always question testing methods; having worked in stats in the sciences I know exactly how you can "legitimately" fudge results, if that makes any sense.
My best advice to you is to take into account what's important in a computer to you as a user, see examples from your friends, then do a cost/benefit analysis in light of your budget, then make your best guess!:up:
It would be nice to be able to see graphs of the following two things for games:
- Latency between control manipulation and its effect on a frame
- Deviation in time between frames in a game
The first would relate to responsiveness.
The second would relate to smoothness. If on an Intel you get 40fps average with a deviation of 10ms or on the AMD you get an average of 35fps with a deviation of 5ms, I wonder what the perceived difference in smoothness would be. I imagine that people would mistake the lower framerate for a higher one just due to increased uniformity of motion. I would equate the deviation in a video's frame timing to noise in a still photograph, functionally. More noise makes people think a picture is of lower detail than a softer/less noisy version even when the reverse is true.
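Both numbers in that second metric are trivial to compute once you have a frame-time log; a small sketch (the frame times below are invented purely to illustrate the point):

```python
import statistics

def smoothness(frame_times_ms):
    """Average fps plus the frame-to-frame deviation that 'avg fps' hides."""
    avg_ms = statistics.mean(frame_times_ms)
    return {
        "avg_fps": 1000.0 / avg_ms,
        "stdev_ms": statistics.stdev(frame_times_ms),
    }

# Identical ~40 fps average, very different feel:
steady  = [25, 25, 25, 25, 25, 25]   # uniform 25 ms frames
stutter = [10, 40, 10, 40, 10, 40]   # alternating spikes

print(smoothness(steady)["stdev_ms"])   # 0.0
print(smoothness(stutter)["stdev_ms"])  # ~16.4
```

Two logs with the exact same average fps can differ wildly in the deviation column, which is precisely why an fps bar chart can miss the "smoothness" people argue about in this thread.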
Why should I bother? It's been done many times before in this forum and we have see that it's not going to change the mind of someone that is in the AMD forums arguing that Intel is better. You are biased and will remain so no matter what is presented to you.
As such I have no need to "convince" you of something you won't accept.
(Okay... let me cue you up... look up fanboy response #515... come on... don't disappoint me. Or if you don't want to look it up in the Intel Fanboy handbook... look in this forum and copy the response given the last time somebody asked for someone else to go look up benchmark results.)
QFT. Who does this guy think he is kidding with blatant lies? Come on, even the AMD fanboys will admit that there are not too many reviews showing the 9850 beating the heck out of the Q6600, let alone the majority of them. I mean, to have a debate about the perceived responsiveness of CPUs is one thing, but to blatantly lie? come on guy:(
Thank you for the personal attack.
There is nothing in my posts that is not correct.
Have fun looking silly with the other guy. You both look rather pathetic.
And guess what... I don't need to help you guys appear that way. You can do it yourselves. Please continue. It's good for many laughs.
Seriously. Neither one of you is really naive enough to believe that I can't post a review that meets those criteria, are you? HINT: If either of you actually answers "yes" then I'm not sure my opinion of you can actually go down any further...
It is simple, you said that "The majority of the reviews show the 9850 beating the heck out of the Q6600"
This is a lie. Be smart all you want about it.
What I said is a correct statement.
But apparently you haven't figured out how to find reviews on the internet. You lose. Perhaps someday when you get older you'll learn how to do your own research and not rely on an outdated printing of "Teh Intel Factbook for Good Little Girls and Boys".
Go back to the Intel forums where nobody will question your quoting of biased reviews.
EDIT: Dang... can't find the ignore list on this forum.
Anyone remember what popular and reputable review site wrote a review about the 9850 where they had accidentally run all of the review tests using a BIOS that had the TLB fix enabled by default?
(And they later did an update with the TLB fix disabled and the results were different... resulting in a very poor showing for the Q6600? Or it might have been the Q9300.) Of course by the time they did the update... the original incorrect results were being thrown all over the internet as an example of how badly the 9850 chip runs... so the "correct" results rarely got noticed.
Man, you are sure owning me with your smart put-downs. Imagine the hurt I will feel when you put half the effort into backing up what you say with evidence.
Jakko, of course the 9850 bests the Q6600 in some benchmarks; to suggest otherwise would be bs. I would go as far as to say the 9850 is shaping up nicely and is now worth considering. And I know this topic is about smoothness, which is fair and could be a good debate, because I don't think people are lying when they say this, it's just their perception. But it branched off into madness when keithlm said "The majority of the reviews show the 9850 beating the heck out of the Q6600". This is not true.
At the end of the day, This is not AMD zone, this is the amd section of xtreme systems for members to discuss AMD. Every post does not have to be positive in nature, just the truth.
Maybe if you are having trouble remembering one review to back up your claims it might be time to stop being a smart ass, consider that you were a bit hasty with your words and admit so, so we can get back to debating the whole smoothness thing, which is worth doing because it might actually hold an element of truth.
In all truthfulness, I can't fail to notice that you aren't eager to give actual names of the sites that were involved in this "fiddle". What gives? Can you give me those site names please, because I'd like to make my own conclusions based on numbers, not on others' conclusions.
http://forums.guru3d.com/showthread.php?t=259848
Plenty of reviews for you to check out and make up your own minds. But then again, isn't this thread about how benchmarks are failing to reflect real world, first hand experience in the first place?
still no reply. did I just win the argument?
You don't win these types of arguments. No one here is going to change their mind if it's already made up. BTW, where did we start debating AMD vs. Intel global performance? I thought we were debating smoothness and personal experiences? Next we're gonna hear about Intel's dirty business dealings etc. Sheesh!
You are right, of course. Any AMD topic always ends up with an AMD vs. Intel comparison. I :rolleyes: at that inevitable pattern.
I'm using an AMD 9500 clocked to 2600 and I'm always first to load in games in Team Fortress 2. These are full 24-player games. Maybe 1 in 10 loads I will be second, but mostly first. Using a Samsung F1 hard disk also.
Oh lad! You know you're not supposed to give them actual benchmarks! (As if they would go and look anyway.)
Come on you know better. You know that the game is supposed to be this:
1. They ask for links.
2. You provide specific links.
3. They either belittle the site and/or choose to ignore it.
OR
3. They may tentatively accept the results and pull out their "Fine I'll just go overclock my Q6600 more and you lose" card.
Yesterday when I wouldn't play the above game they then played the "let's pretend that we really want to see a link" routine.
Anyway... regardless... the game is stupid but it goes on and nobody wins. (And Intel fans will keep coming into the AMD forums to make remarks that are not correct to incite people...)
ANYWAY BACK TO THE TOPIC AT HAND: "SMOOTHNESS".
When I run 2x database servers, 1 web server, 2 Data Integrator servers, an FTP server, a virus scanner, and other miscellaneous corporate software on a C2D machine... it becomes fairly unresponsive; if I try to run Eclipse or the Business Objects Data Integrator or a GUI database client I get long pauses and a very unresponsive system. When I do the same on an AMD machine of about the same speed it is more responsive in a very noticeable manner.
In the past whenever I've tried to mention this... the C2D fans have jumped in and told me that it's just my own perception and since it can't be benchmarked it doesn't matter. (As if to suggest that human perception is not something that can be noticed.) Luckily I'm not the only person that noticed the difference... everyone that has used both machines has wondered about the difference or asked if the C2D machine was broken.
NOTE: The above was with dual core machines... but my decision to upgrade to a Phenom at home is based on that simple fact.
The C2D fans might win a few single threaded game benchmarks... but to me that means almost nothing. Even the Phenom 9150 (?) would be a better choice than the Q6600 if the C2Q has performance on par with the C2D.
Ouch. :(
You know, I just defended these smoothness statements that were being shot at by intel believers. But to call me an Intel fanatic? Really. I just am sceptical about everything, not just benchmarks that declare Intel the winner. You are right though that I could search for my own, but I figured it could take days before I find that specific review you mentioned earlier.
Anyway, indeed back on topic.
edit: What you are telling there is indeed what I want to hear.
When I recently upgraded my dad's A64 3800+ to an X2 5600+, I noticed improvements in the stability of the system's responsiveness. The small hiccups I would encounter on the machine, for example when something was being loaded, were totally gone. It felt like the system was idling all the time, even though it was in fact doing things. Yay!
That was all on an Asus M2N32-SLI Deluxe Wireless Edition with 2x1GB OCZ RAM. I know, why does he need that kind of setup for just surfing and mailing? Even then, when Windows (XP) starts to foul up, he complains about the sluggishness of his system, so I need to format and reinstall Windows every now and then to keep him satisfied.
Anyway, since I had a Duron 800MHz coupled with a Via KT133A mobo, and since my brother has an nForce4-based mobo with a Venice 4000+, and both were not flawless at some things (think PCI latency patch for Via, and some general weirdness for the nForce4 mobo coupled with an X1950Pro), I was determined that from then on I would couple a processor with a chipset of the same brand. That's why a year ago I was looking at the cheap but good 690G/V chipset based mobos to go with a nice Brisbane to upgrade my system (see sig - AXP2400+, 1GB, nForce2, GF6800).
But now the 780 series of AMD chipsets have arrived, and also the B3 Phenoms. The only thing keeping me from going completely AMD is that Intel is better in performance per watt. Since I will be paying my own electric bills beginning in September, I figured a low-consumption but high-performance AND cheap setup is what I need. Intel fulfills the low consumption and the high performance parts of the equation, but fails to deliver lower prices. Then Tom's HW comes out with the 9100 and shows that it has decent potential. I sure hope the potential of that chip (2.4GHz at 1.1V) is present in all Phenoms at this very moment.
Anyone who is with me for choosing an Intel chipset for an Intel CPU and an AMD chipset for an AMD CPU, or is that just a load of BS?
edit2: This urge to choose a complete platform instead of a random combination of hardware started when I first got my Dell Latitude D610 laptop. This is a Centrino platform based laptop, and frankly, I never had such a stable AND responsive system. Ever. Even my nForce2 system - which I thought was great compared to my KT133A - doesn't respond as smoothly and fluently as my Centrino system. I have the EXT-P2P's Discard Time bug on my Abit NF7-S mobo because I use 2 S-ATA discs, and I felt a slight drawback in its ability to deal with loads. Every time something would stress the system it would hiccup. Just the inverse of the effect of an AMD multicore system.