One hundred years from now
It won't matter
What kind of car I drove, what kind of house I lived in,
How much money I had in the bank, nor what my clothes looked like...
But the world may be a little better
Because I was important in the life of a child.
-- from "Within My Power" by Forest Witcraft
Jack: Yes, I see what you mean. This is just like two threads running in parallel, and all the stuff about FSB, memory and HyperTransport is part of the CPU's management (CPU work).
But that also means that the frame rate isn't a good measure of how smooth the game is. The frames may be a bit old, because how "up to date" a picture is depends on when the CPU did its work for it.
If the CPU decides the frame rate and it is above 30 to 40 FPS, with minimum FPS above 25, then the game will be smooth. But if the game averages 60 FPS and it is the GPU that limits it to that frame rate, the game can in fact feel more unresponsive and delayed than at the slower frame rate decided by the CPU.
gOJDO: When the frame rate depends on the GPU, you aren't measuring the CPU. And since the CPU does all of the computer's work, that includes, as one example, handling the mouse.
The GPU and the CPU don't know about each other, and the GPU in fact runs asynchronously. That means there is no way for the CPU to know exactly when a frame was produced. And if it doesn't know that, it can't calculate the exact time difference it needs to work out how much movement and so on should be applied.
For this discussion, the point is that testing the GPU will mask bottlenecks in the CPU.
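To make that concrete, here is a minimal sketch (my own illustration, not from any actual engine; World, update and submitFrame are just placeholder names) of the usual loop: the CPU measures the time between its own iterations and scales movement by that delta, but it never finds out when the GPU actually put the frame on the screen.

Code:
#include <chrono>

struct World { double x = 0.0; };

void update(World& w, double dtSeconds) {
    const double speed = 5.0;      // units per second (arbitrary)
    w.x += speed * dtSeconds;      // movement scaled by the CPU-side delta only
}

void submitFrame(const World&) {
    // The D3D/OpenGL draw and present calls would go here; the driver queues
    // the work and the GPU finishes it some unknown time later.
}

int main() {
    using clock = std::chrono::steady_clock;
    World world;
    auto prev = clock::now();
    for (int frame = 0; frame < 1000; ++frame) {
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - prev).count();
        prev = now;
        update(world, dt);     // simulation advanced by CPU time stamps...
        submitFrame(world);    // ...but display time is decided by the GPU/driver
    }
    return 0;
}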
w = work
i = idle
Code:
Sample (only one frame is buffered):

GPU |wwwwwwww| wwwwwwww | wwwwwwww |
CPU |wwwiiiii| wwwiiiii | wwwiiiii |

In this situation the picture that will be shown on the screen is (big W): wwWiiiii

Now something happens and the CPU needs to do extra work:

GPU |wwwwwwww| wwwwwwwwiii | wwwwwwww |
CPU |wwwiiiii| wwwwwwwwwww | wwwiiiii |

Here the difference between the two pictures, when the CPU needed to do extra work, is:
wwWiiiii wwwwwwwwwwW = 16 "units"
But the frame time is only 11 "units" (MAX)

Another example (the most extreme), with no difference in GPU frame rate:

GPU |wwwwwwww| wwwwwwww | wwwwwwww |
CPU |wiiiiiii| wwwwwwww | wwwiiiii |

Here the difference between the two pictures, when the CPU needed to do extra work, is:
Wiiiiiii wwwwwwwW = 15 "units"
Frame time is 8 "units"

If you buffer more than one picture this error will increase.
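Here is a small toy simulation of the same idea (my own numbers, loosely based on the diagram above and in the same arbitrary "units", not taken from any real engine): one frame is queued between CPU and GPU, so a frame is displayed when the GPU finishes it, but its content reflects the moment the CPU started preparing it. A CPU spike makes the displayed content older at display time even though the GPU's own work per frame stays at 8 units.

Code:
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Per-frame CPU work in "units"; frame 1 carries the extra CPU spike.
    std::vector<int> cpuWork = {3, 11, 3};
    const int gpuWork = 8;                  // GPU work per frame never changes

    int cpuTime = 0;   // when the CPU can start preparing the next frame
    int gpuFree = 0;   // when the GPU finishes (and displays) its current frame
    for (int i = 0; i < (int)cpuWork.size(); ++i) {
        int contentTime = cpuTime;                // world state this frame shows
        cpuTime += cpuWork[i];                    // CPU finishes preparing it
        int start = std::max(cpuTime, gpuFree);   // GPU picks it up when both are ready
        gpuFree = start + gpuWork;                // frame reaches the screen here
        std::printf("frame %d: content age at display = %d units\n",
                    i, gpuFree - contentTime);
    }
    return 0;
}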
Last edited by gosh; 09-12-2008 at 04:10 AM.
Congratulations!!! 20 pages of BS and you are still failing to make a point.
Just when I think you have posted the greatest crapload of BS ever, you come up with even more marvelous BS. I can't work out how you arrived at your theories, but if you have ever tried to understand anything about how the CPU & GPU work, you have understood it all completely wrong.
Last edited by gOJDO; 09-12-2008 at 10:05 AM.
Actually, the point is that you didn't have a clue what really happens between the CPU and GPU when you started this thread, and you still seem very lost, or very hard-headed. You should be posting a page-long thank-you to JumpingJack for the free education.
Work Rig: Asus x58 P6T Deluxe, i7 950 24x166 1.275v, BIX2/GTZ/D5
3x2048 GSkill pi Black DDR3 1600, Quadro 600
PCPower & Cooling Silencer 750, CM Stacker 810
Game Rig: Asus x58 P6T, i7 970 24x160 1.2v HT on, TRUE120
3x4096 GSkill DDR3 1600, PNY 660ti
PCPower & Cooling Silencer 750, CM Stacker 830
AMD Rig: Biostar TA790GX A2+, x4 940 16x200, stock hsf
2x2gb Patriot DDR2 800, PowerColor 4850
Corsair VX450
Man, this thread is so awesome. Just one recommendation to Jack: stop writing those kinds of posts; it's not worth the effort with some guys. Save your time!
Friends shouldn't let friends use Windows 7 until Microsoft fixes Windows Explorer (link)
Jack is like a mixture of Gandhi and Mother Teresa.
After the first few gems, I have preserved my brain by not reading those walls of text written by gosh.
Yes, I have read about how some of this is solved in hardware. But that isn't a rule a programmer can rely on when developing against a particular driver. How the video card or its driver works isn't something the programmer needs to know, and the different video card vendors can implement the APIs however they like. The programmer just calls the APIs and then measures time spans, for example like the sketch below.
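For example, a rough sketch of what I mean (Render() here is just a made-up stand-in for whatever draw/present call the engine actually uses, not a real API):

Code:
#include <chrono>
#include <cstdio>

void Render() {
    // Whatever the driver does behind this call is a black box to the game.
}

int main() {
    using clock = std::chrono::steady_clock;
    auto t0 = clock::now();
    Render();
    auto t1 = clock::now();
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::printf("Render() took %.3f ms on the CPU side\n", ms);
    // Note: this only measures how long the CPU was busy in the call,
    // not when the GPU actually finished or displayed the frame.
    return 0;
}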
And that isn't really what this thread is about anyway; it's about how the CPU works and why there are so many Intel fanboys out there.
What has been said in this thread about how the video card works is just that if you buy an Intel processor, your computer will use a lot more power because it will produce frame rates that aren't needed (max FPS and average FPS). Also, checking how this works rules out the need for faster processors in current games (the ones that have been tested). Intel runs well when it finds data in the cache (almost 20 times faster than going to memory), and the frame rates in that case are sky high with fast video cards. If the processor really has to work for the game, using threads, it is a different scenario.
The reason I am in this discussion is that I find this strange; it got me curious. Doing other types of development, it is easy to test scenarios where AMD is better than Intel and where Intel is better than AMD. Why are people willing to spend more money on something that isn't noticeable?
Also, most people know how Intel's processors behave: Intel has worked on its strong parts and skipped the weak parts.
Last edited by gosh; 09-12-2008 at 01:27 PM.
Oh my gosh.
Are we riding a merry-go-round?
Once again you're implying that AMD is faster in games, and specifically that it gives you better (higher) minimum framerates (which isn't true).
I for one give up, nobody and nothing can change your mind.
Coding 24/7... Limited forums/PMs time.
-Justice isn't blind, Justice is ashamed.
Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P.), Juan J. Guerrero
Doubtful because:
1) Clock for clock, the Core 2 CPUs are faster than the Phenoms.
2) The operating frequencies of the Core 2 CPUs are way higher than those of AMD's Phenoms.
3) The "scaling" advantage of AMD's quad-cores isn't large enough to cover the clock-for-clock performance gap.
4) Unfortunately, in most games (if not all), at real-life gaming settings (resolutions, game detail and AA/AF), the minimum framerate depends on the graphics card, just like the average and maximum framerates.
Coding 24/7... Limited forums/PMs time.
-Justice isn't blind, Justice is ashamed.
Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P.), Juan J. Guerrero
But I can assure you that no game developer will build a game that requires a processor running at 3.0 GHz or more (too few buyers). What is easy for the programmer to check is how the processor behaves for the game. If raw clock speed is what matters, that will be noticed immediately. If the processor slows the game down, it will probably be because something strange is happening; maybe the cache needs to be refilled, or the latency of something else is high. That is harder for the developer to check.
omfg... this post of yours leaves only two conclusions:
a) you're a total AMD fanboy, or
b) your perception doesn't even reach beyond your own front door and you reject reality and substitute it with your own. (God, I finally got to use that quote.)
So much BS in that bold highlighted part, it isn't even funny any more...
Just in case it slipped your attention:
Phenom consumes more power while delivering less performance than a C2Q.
http://www.computerbase.de/artikel/h...stungsaufnahme
This chart shows full load on both the CPU (Prime95) and the GPU ("Firefly Forest" & "Canyon Flight" from 3DMark06 in an endless loop).
Clock for clock, AMD's Phenom consumes 3% more power than a Kentsfield and 20% more than a Yorkfield... all with the same graphics card.
And the third option: c) He's just trolling us.
I vote for a mixture of a) + b) + c). I can't believe it otherwise!
Friends shouldn't let friends use Windows 7 until Microsoft fixes Windows Explorer (link)
Well, for every AMD fanboy you will find at least 10 Intel fanboys, and they are very sensitive to hearing anything that suggests AMD could be better.
My friend and I just tested computers at idle to see how much power they draw. He has an E6600 and a 7900 GTX; it used 140 watts at idle. I had an Opteron 165 and a 7900 GTX, and it was using 130 watts at idle. We also tested other computers, and the odd thing is that the AMD machines often seem to draw less power at idle than the Intel ones. Now I have one with a 9750 and an ATI HD 3850 that uses 100 watts at idle, and a server with an X2 3600+ that uses 52 watts. These are measured at the wall.
GPUs use much more power, and the CPUs aren't maxed out (quads).
Last edited by gosh; 09-12-2008 at 03:39 PM.
I have no doubt whatsoever he is trolling.
If you are an AMDZone True Believer, as he is, then your warped perceptions can never be altered, and that is why I posted the following to Jack earlier in this thread:
Jack is completely wrong to think Gosh has taken on board anything he has been told; all he has done is zig and zag to keep his trolling efforts alive.
Also, I think he has taken pleasure in getting Jack to do so much legwork for nothing (well, at least as far as Gosh is concerned), which is why he keeps writing nonsense replies, hoping to get Jack to keep wasting hours of his time.
Only when that something is complete and utter bullsh1t.
If it is true, like the K8 being better than the P4 for gaming, you will get no dispute, but when an AMDZone loopie wants to claim that clock for clock the K10 is better than Penryn, then of course the claims will be rubbished for the nonsense they are.
I think everyone has learned quite a bit in this thread. Check what was said at the start and you will find that many of those things were cleared up later in the thread.
The problem with talking about AMD's good points is well known. Most forums have quite a few people who like Intel, and they will immediately jump on any pro-AMD talk. It is even hard to ask about processors; they just can't handle it.
I haven't seen anyone say that the CPU and GPU run asynchronously, which they in fact do.
Last edited by gosh; 09-12-2008 at 04:06 PM.
From the market's perspective:
1) Insignificantly so in games.
2) At a cost.
3) Scaling is the same as for Intel's CPUs; it is the platform's scaling that is higher.
4) True, though I'm not sure about the "unfortunately" part. There should be a reason to buy high-end GPUs.
From a mod's point of view:
I think some of you guys need to edit your posts to comply with the forum's no-flaming, no-name-calling policy.
Apparently these people cannot disagree in a polite manner.
Last edited by Cooper; 09-12-2008 at 04:13 PM.
That view is so flawed it's not even funny...
1) You compare a 2.4 GHz processor to a 1.8 GHz processor.
2) You compare different systems with different PSUs -> different efficiency -> different results (and the differences can be quite large depending on which PSUs are used).
3) I gave you a review that shows both CPU and GPU maxed out, with the same PSU and the same graphics card.
It's time to quote some Willy Wonka:
It's all there, black and white, clear as crystal! You lose! Good day sir!