Yup ... the current field of single-card GPUs is obviously well paired with midrange CPUs.
Hmmm..
If I have a Phenom 9500 (which I have in the game computer) and I pair it with an ATI 4850, I might get a minimum of 40 FPS in some game. Are you saying that I would get a lower minimum FPS by changing to an ATI 4870X2?
Or do you agree that if the CPU and GPU are fast enough for the game to look smooth to my eyes, then that is enough? A faster CPU or a faster GPU will not make any difference to my eyes.
Depends. If that minimum frame rate was caused by a lack of CPU processing power then yes, but if the bottleneck is your current 4850 then a 4870X2 won't lower your minimum frame rate. That's because the 4870X2 needs some additional CPU power to coordinate both GPU chips, so if the situation was CPU limited, it could now be even more limited. The question still remains, though: how much of a difference would it make?
I'd say yes, unless that extra power could be utilized to increase some image quality settings like AA and AF of course.
Hmmmmm... no to the first statement; I defer to Helmor's rebuttal above. However, I do not entirely agree with your second rhetorical question. Any time the source FPS is not in sync with the display refresh rate, artifacts will occur and the game will look worse than if you matched the FPS to the refresh rate of the monitor.
You said... this:
This is, ironically, not quite incorrect. Measuring 180 FPS means that your monitor will only display 1/3 of those frames at a refresh rate of 60 Hz... on average.
Quote:
Don't forget to buy a superfast monitor that can display all those fps; you also need superfast eyes. No game exists today that needs a very fast CPU, as far as I know.
However, does this not raise the question ... why, then, do people strive for the highest FPS performance they can get within their budget?
The answer is to ensure that no matter what game you play, and during whatever portion of the game, the FPS remains above the minimum threshold. That minimum threshold is defined differently by different people; my definition is 60 FPS ... simply because I can then run V-sync and the game will be 100% smooth: no tearing, no jittering, 100% of the time.
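The numbers in this exchange can be sanity-checked with some back-of-envelope arithmetic. This is a small illustrative Python sketch using the figures quoted above (60 Hz refresh, 180 FPS benchmark); nothing here is measured, and the variable names are my own:

```python
# Illustrative arithmetic for the refresh-rate discussion above.
refresh_hz = 60                      # typical monitor refresh rate
frame_budget_ms = 1000 / refresh_hz  # ~16.67 ms to produce each frame for V-sync

rendered_fps = 180                   # the "180 FPS" benchmark figure
displayed_fraction = refresh_hz / rendered_fps  # only ~1/3 of rendered frames shown

print(f"frame budget: {frame_budget_ms:.2f} ms")
print(f"fraction of rendered frames displayed: {displayed_fraction:.3f}")
```

In other words, as long as each frame takes under ~16.7 ms to render, a 60 Hz V-synced display never misses a refresh; anything rendered beyond 60 FPS is simply never shown.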
Your statement, and my pointing out the weirdness of that statement, really have nothing to do with any specific CPU or specific GPU ... technically, start by getting the fastest GPU you can, followed by the fastest CPU with the money left over -- however, looking at what you put together as a gaming system, it is pretty weak ... but it is indeed well balanced; the CPU is just about enough for that GPU.
Jack
OT
I have a Phenom 9500 and an ATI 4870X2, and it works very well. If I bought a faster CPU for gaming, the only thing I would notice is that the computer would use more power.
The balance you are talking about is mostly for the Intel C2Q, because that is a very schizophrenic processor. When the data is in the cache it is very fast; when it isn't in the cache, it isn't fast. It has many tricks to hide the latency of the slow FSB; most of the time these tricks work well, but when they don't, it will be slow.
It is better to have a slower CPU than a slower GPU. The GPU is always one frame behind the CPU; if the CPU is very fast, it will be almost two frames ahead of the GPU.
----
Maybe I will need to upgrade the CPU sometime next year.
Not to your eyes maybe, but an experienced FPS gamer will notice the difference; it's a feel you have when a game runs smooth ... I'm not talking about the 24-27 fps your eyes can barely detect...
If my favourite online game goes below 60fps I DO notice it... in the way I move, the way the game reacts to my input etc...
It's funny, but it's like that... can I prove it with benchmark numbers? Nope... as the included test rockets over 180 fps easily, but I can still feel it when it dips on certain maps, with certain numbers of players, etc... weird, but it's like that...
You have the Force ;)
I get a feeling sometimes when I bench that the system doesn't like the settings and is gonna crash. Just get a vibe from it. It develops the longer I have the setup... it's weird.
Actually, the balance I am talking about is that the CPU is fast enough that the GPU can work to its fullest potential. There is nothing schizophrenic about a C2Q; they simply run gaming code faster and support faster GPUs, and this data has been shown to you many times over.
It is not better to have a slower CPU than a slower GPU; you need a CPU fast enough that the GPU can be utilized to its intended potential. Again, for a gaming rig the primary investment should indeed be the GPU, but thought should be given to the CPU so that money and hardware capability are not wasted.
Your Phenom 9600 is not a good match for a 4870 X2, you are wasting GPU cycles on a slower CPU, for example.
http://www.legionhardware.com/document.php?id=770&p=7 , http://www.legionhardware.com/document.php?id=770&p=11
http://www.legionhardware.com/document.php?id=770&p=3
These are good examples: a Phenom 9650 (i.e. 2.3 GHz) is holding back almost 40% of what the 4870 X2 is capable of at 1920x1200 with full AA/AF, max quality.
:) .... get used to it. He has FSB burned into the brain -- you cannot pry it out with any form of data, experiment, observation, or rationalization. Also, graphics subsystems do not go through the FSB when retrieving information from main memory (say, an off-card texture or command buffer), so the FSB argument is irrelevant anyway.
Software
- Microsoft Windows Vista Ultimate SP1 (64-bit)
- Intel System Driver 8.4.0.1016
what is that intel thing for ?
Anyway, you can't really compare those, because their bus speeds were never the same, and neither were the memory speeds, timings, or ratios.
I have plenty of positive things to say ... perhaps you read them :) ...
Shanghai looks strong.
AMD has a great interconnect technology.
AMD currently produces the fastest single video card solution on the market.
I would not get banned for that because those are all true statements.
You get "warned or banned" because you post details that go outside what is true -- some sites would categorize this as trolling. It isn't about saying positive or negative things about a company; there is a point in time when one company produces a better solution than the other. In 2005 AMD had a better CPU than Intel; in 2007 Intel had a better CPU than AMD; who knows, one or two years from now AMD may have a better CPU again. Today, they have the better GPU solution.
But I am not a big HardOCP fan either.
I was anyway
Here is the thread
http://www.hardforum.com/showthread....1362408&page=2
I have been warned here too because I criticized the scaling of the C2Q. One problem could be that I speak from a programmer's perspective; on forums people just see it through the applications that are out there. If you want to scale well on a C2Q, you need to think about the design before you build it.
If I forgot to think about scaling before I started to create an application, there might be some area that is used a lot and needs speed, and maybe I want to thread that area. When you scale only a small part of one application, you need much more synchronization, and maybe you need to guard memory. The performance penalty on the C2Q is too large for this to be worth the effort. On the C2D there are only two cores, so that is a different problem. Moving spinlocks etc. between caches on a C2Q takes time.
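The cost being described is easy to reproduce in any language. Here is a toy Python sketch (the function names and numbers are my own, not from the thread) of why threading a tiny hot spot can backfire: every increment takes a shared lock, so the lock bounces between cores, and synchronization cost can dwarf the actual work.

```python
import threading

N = 100_000  # size of the "tiny hot loop" being parallelized

def single_threaded():
    """The original serial hot spot: no synchronization at all."""
    total = 0
    for i in range(N):
        total += i
    return total

def fine_grained(num_threads=4):
    """The same work split across threads, with one lock acquisition
    per tiny unit of work -- correct, but heavily synchronized."""
    total = 0
    lock = threading.Lock()

    def worker(start):
        nonlocal total
        for i in range(start, N, num_threads):
            with lock:  # lock traffic per increment is the bottleneck
                total += i

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

Both functions compute the same sum, but the threaded version typically runs slower than the serial one because the per-item lock hand-off (the software analogue of the spinlock ping-ponging between caches described above) costs more than the addition it protects.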
Now that the i7 is out, this type of scaling might become much more common in applications; it is much simpler for programmers to do this to gain speed. Rewriting applications is very seldom an option.
I will look through the thread and see if I can provide you some feedback ...
On the scaling question, it is not unfair to criticize C2Q scaling from a single instance to a full load of, say, 4 instances or threads. Everyone is well aware that Kentsfield and Yorkfield are MCMs using the FSB backbone to keep the two L2 cache pools coherent; this is not optimal, granted. And it shows up in the data, in applications where you can observe it. A good example is Cinebench: AMD quads routinely get scaling factors of 3.8-3.9 from single- to 4-threaded measurements, whereas Intel hovers in the 3.5-3.6 range ... obviously, Intel does not scale up quite as well.
However, it is one thing to say Intel does not scale as well; it is quite another to say AMD is better because it scales better. The better scaling afforded to AMD does not overcome the absolute difference, and again the data shows this. So (and this is just an example with numbers, no real meaning): if Intel shows 30% better performance single-threaded, it may show only 15% better performance multithreaded over AMD because it doesn't scale as well, yet it is still 15% better overall.
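The arithmetic behind that point can be laid out explicitly. These numbers are illustrative only (loosely based on the Cinebench scaling factors quoted above, not on any measurement):

```python
# Better scaling does not overcome a large absolute deficit.
amd_single = 100.0                     # arbitrary baseline score
intel_single = amd_single * 1.30       # Intel ~30% faster single-threaded

amd_scaling = 3.9                      # near-ideal 4-thread scaling
intel_scaling = 3.5                    # weaker scaling (FSB-linked MCM)

amd_quad = amd_single * amd_scaling        # 390.0
intel_quad = intel_single * intel_scaling  # 455.0

advantage = intel_quad / amd_quad - 1      # still ahead overall
print(f"Intel 4-thread advantage: {advantage:.1%}")
```

Even with a full half-point of scaling given away per the assumptions above, the larger single-threaded lead leaves the better-scaling part behind in absolute terms.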
This translates into server space especially well ... AMD scales even better than Intel there, because as you add more sockets AMD's design also adds more aggregate bandwidth, whereas Intel's fixed uniform memory architecture increases the demand while the bandwidth stays the same. As a result, AMD competes with Xeon much better than it does in the desktop.
Indeed, one problem is that you say it and then justify it via 'programming' or 'look at the source code' ... these are neither good reasons nor a good basis for supporting your argument. No matter the program or algorithm, nothing changes the inherent empirical observation made by simply noting which platform finishes the job quicker.
EDIT: OK, I see the thread :) ... you made, more or less, an unflattering public accusation about HardOCP. Kyle did not like it at all -- from my perspective, Kyle was right, and here is one of your problems -- and I am being constructively critical here, not trying to bash you -- you are obviously a staunch AMD supporter, and you attempt to condense the current state of the competitive environment into fanboys bashing AMD for no apparent reason. When it goes to that level it quickly adds fuel to the fire, and the discussions soon degenerate into name calling and personal attacks. You would serve yourself (and AMD, for that matter) much better if you argued with the data/facts rather than against them by assuming every site is pro-Intel because of some nefarious conspiracy. For example, it is shown in reviews almost everywhere that Intel C2Qs outperform AMD clock for clock and clock higher... true, performance sells, but this does not mean AMD does not make a good product. AMD markets their current line as good processors with good value (i.e. price), and this is true... there are good arguments for procuring and using AMD processors; Phenom is a darn good processor... it really depends on needs, wants, and other factors... having the fastest is not always the best.
Jack
I am talking about scaling relatively tiny functions within one application. This type of scaling needs much more communication between threads. You don't have time to allocate memory for each thread, etc. (that would make the scaling worse than single threading). It is also possible to use threads to prefetch memory for other threads, so that the data is already in the cache when they want to use it.
I think I saw somewhere that Nehalem had about 5 times the performance of Core 2 in that type of scaling, for one sample. Intel is soon going to release tools for developers to make it easier for them to scale their apps.
Yeah, there is no doubt that Nehalem will scale better. Intel focused their design efforts toward multithreaded performance in this revision.
My opinion, and this is, of course an opinion, is that Intel is in a good rhythm, at this point, as a whole with the industry. What I mean by this is they launched C2D, then followed up with C2Q.... great single threaded performance, and decent scaling to get acceptable multithreaded performance. As the industry now has more multithreaded apps available, now is a good time to be designing more toward multithreaded environments.
This is a recurring theme in my opinion ... again, my opinion. AMD seems to be way ahead of the needs of the industry when they push for some brand new thing. They provided 64-bit extensions ahead of when the industry was really ready for them (on the desktop, that is; servers were begging for it, hence they did really well in server), and with the monolithic quad they delivered better multithreaded performance relative to their single-threaded capability, but most software was still single-threaded, so it did not shine as bright.
Maybe I am out in left field here, but this is basically my observation.
Jack