Enabled, both 4x
Jack, can you test CoHOF with the display fixed at 60Hz? I found out CoHOF always enables V-Sync, and the sweet spot is around 60 FPS (at all max details, 8x AA), so it would be perfect for a comparison. An 8800 GTX and a 4870 would be nice.
Last edited by Boschwanza; 08-22-2008 at 05:57 AM.
Jack have you ever tested the Phenom with increased northbridge (L3) frequency? I've always wondered what the impact of increased NB/L3 frequency is on Phenom but no sites have really tested this. I'd like to see how Phenom @ 2.5GHz core / 2.4GHz NB performs compared to the NB @ 2GHz.
Intel Core i7 920 @ 3.8GHz - Asus P6T Deluxe X58 - 6GB (2GBx3) G. SKILL DDR3-1600 @ 8-8-8-20 - 2 x EVGA GTX 280 1GB SLI - Corsair TX750 PSU - Windows Vista HP 64-bit
I have been reading a bit about game programming (very little) over the last couple of days to see how it is done. What strikes me is how little the performance of a game depends on the processor. If you can run the game at 640x480, the processor is fine. How high a resolution you can go depends on the video card.
There seem to be very few (if any) settings in games that add work for the processor and are resolution dependent. AI, physics, detail and so on will probably add work for it (I haven't read about that), but it should be the same at all resolutions.
Games don't seem to take advantage of higher resolutions to improve the game experience for the player.
If the game uses threads, then a single-core processor will degrade performance, but apart from that it is the video card that decides game performance. Faster processors will of course increase FPS at low resolutions, but you will not drop below the 640x480 minimum FPS at higher resolutions because of the processor. If you go lower than that, it depends on the video card or something else, not the processor.
All this talk about what type of processor you need for gaming seems to be mostly fiction.
wow, just wow....
People were telling you for 15 pages that CPUs mostly don't matter for high-res gaming... and you were the one who refused to listen... people were bombarding you with facts... and now, all of a sudden, after "reading something about game programming", you come to the same conclusion people were telling you from left and right, something that is already a well-known fact...
I'll spare you what this makes you look like...
But why are most gamers using Intel then? And why do they overclock?
The processor itself doesn't matter, but the communication between processor and hardware, and memory speed, do matter when it comes to min FPS.
I admit I thought there was some logic in what you read everywhere about needing faster processors.
When you develop normal programs you try to use the whole screen and calculate how much it is possible to display; I didn't think this would be any different for game programmers. From what I have seen, that type of programming today is very simple.
Writing in this language is challenge enough for me, and I can guess what it makes me look like...
You know, I don't care!
Last edited by gosh; 08-22-2008 at 03:55 PM.
In case you didn't read it correctly: the HL2 tests are wrong.
It's not 300 fps with the AMD and 295 fps with the Intel CPU.
9850 → 123.26 fps
Q9450 → 100.97 fps
Hold on.... I applaud what he has done. He took the initiative to spend some energy learning, rethink what he knows, and put that new knowledge to good use.
@gosh -- good job! This whole discussion has made me learn as well; I think overall it has been a net positive.
Jack
One hundred years from now it won't matter
what kind of car I drove, what kind of house I lived in,
how much money I had in the bank, nor what my clothes looked like... but the world may be a little better because I was important in the life of a child.
-- from "Within My Power" by Forest Witcraft
Yes, but nothing systematic -- and I never recorded the results. I would like to look at that, so I will produce a little data for the thread.
The closest I have come is a fairly decent comparison of a 9600 BE @ 2.3 GHz with an 1800 MHz NB vs a 9850 @ 2.3 GHz with a 2000 MHz NB. I also did some latency measurements using CPUID latency and Everest latency reports... I can try to pull that info up and post it relatively quickly.
Jack
Last edited by JumpingJack; 08-22-2008 at 07:22 PM.
Here you go... I did 1800, 2000, 2200, 2400 (2600 would not boot, though I have had it there before, not sure why at the moment). I ran 3DM06 and Everest memory benchmark and captured the screens. Again, I posted a link to the entire configuration of this system earlier in the thread... if you want me to re-link, let me know. This is done using a M3A32-MVP Deluxe board with DDR2-800 C4 RAM, a 4870 X2 with catalyst 8.8 drivers. 3DM06 was set to default, meaning 1280x1024.
Here are the screen shots.
As one expects, the mem BW, L3 BW and latency track linearly with NB speed. 3DM06 score also tracks somewhat linearly, starting at 13232 for 1800 MHz NB and ending at 13553 for 2400 MHz NB speed.
What is interesting is the CPU score: it peaks at 2200 and then starts to drop.... but I caution you on this... I need to repeat the run many times to ensure that it is statistically valid, and I have not done that yet.
I may spend some time collecting that information and write an article about the data...
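To put a rough number on "somewhat linearly", the two endpoints reported above give a scaling efficiency (the quick calculation below is mine, not part of the original data):

```python
nb = [1800, 2400]        # NB speed in MHz (endpoints reported above)
score = [13232, 13553]   # 3DMark06 total score at those NB speeds

nb_gain = (nb[1] - nb[0]) / nb[0]              # +33.3% NB speed
score_gain = (score[1] - score[0]) / score[0]  # about +2.4% score

print(f"NB +{nb_gain:.1%}, score +{score_gain:.1%}, "
      f"efficiency {score_gain / nb_gain:.2f}")
# NB +33.3%, score +2.4%, efficiency 0.07
```

In other words, a 33% NB overclock bought about 2.4% of total 3DMark score here, consistent with the benchmark being mostly GPU-bound at these settings.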
But why didn't he do that two months ago?
He based all his assumptions on that single forum post comparing Race Driver: GRID numbers. It would never come to my mind to say processor X beats processor Y based on only one sample of data...
As you know, this is not the first forum he posted this in; I think he didn't expect to meet someone like you, with vast knowledge of CPUs/silicon tech and the time and systems to give him real numbers.
To me his first post was a blatant attempt to start a flamefest, and I think even you saw it that way, looking at the first page.
Anyway, with your help this topic turned from a blatant flamefest into a very informative one.
Simple: most gamers also do other things than gaming. I myself run BOINC, and Intel is way faster than AMD in the projects I run.
Also, overclocking raises your min FPS; that's why you see gamers overclock their systems.
Did you know how the processor works in games and how games are developed?
But the facts and information shown here seem to confirm that min FPS (maybe the lowest 1-3% of frames), measured at low resolution (640x480), is the most important test if you want to check how good the processor is for that game.
For single-threaded games the processor is probably not a problem today; you can run ALL single-threaded games on new processors. What matters is the video card.
For threaded games the processor will be more important, but all these average or max FPS numbers are of no use if you want to test how the game behaves.
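The "lowest 1-3%" idea can be made concrete. A minimal sketch (Python, with made-up frame times of the kind a tool like Fraps can log) computing average, minimum, and a "1% low" FPS figure:

```python
# Hypothetical per-frame render times in milliseconds (illustrative data only).
frame_times_ms = [16.7, 16.9, 17.1, 16.5, 40.0, 16.8, 16.6, 35.0, 16.7, 16.9]

fps_per_frame = [1000.0 / t for t in frame_times_ms]

# Average FPS: total frames divided by total elapsed time.
avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)
min_fps = min(fps_per_frame)

# "1% low": average FPS over the slowest 1% of frames (at least one frame).
worst = sorted(frame_times_ms, reverse=True)
n = max(1, len(worst) // 100)
one_percent_low = n / (sum(worst[:n]) / 1000.0)

print(f"avg {avg_fps:.1f}, min {min_fps:.1f}, 1% low {one_percent_low:.1f}")
# avg 47.8, min 25.0, 1% low 25.0
```

With this tiny sample the min and the 1% low coincide; on a real multi-minute log the 1% low smooths out single hiccups that the raw min would exaggerate.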
OT
I work with Visual Studio every day (developing analytical software for information stored in databases, for both servers and clients). Do you know how they test Visual Studio? They test the compile time of ONE application. That is almost useless for testing how good the processor is.
If I am doing a release, there are about 4 or 5 projects that need to be compiled; I don't sit and wait for each to run one at a time. I have some add-ins in the environment that help me write code. I also run VMware, and of course there are a lot of databases. Consider that over 90% of all applications written use databases. A good flow when you write code is very important, and being able to work on other tasks during a long compilation is also good.
When there were only single-core processors, how fast ONE application ran was what mattered. Today, in a multithreaded world, if you work professionally, the performance of ONE application isn't that important anymore. If you don't take advantage of multithreading while using applications that need some time to do their work, you are not working efficiently.
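The several-projects point can be sketched: independent builds launched concurrently instead of one at a time. (Python; the project names and the 0.1 s "compile" are made up -- a real script would invoke msbuild or a similar compiler per project.)

```python
import concurrent.futures
import time

def build(project):
    """Stand-in for compiling one project (a real version would shell out to a compiler)."""
    time.sleep(0.1)  # pretend this is compile work
    return f"{project}: ok"

# Hypothetical project names, standing in for the 4-5 projects of a release.
projects = ["Core", "DataAccess", "WebUI", "Reports"]

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    results = list(pool.map(build, projects))
elapsed = time.perf_counter() - start

print(results)
print(f"{elapsed:.2f}s total")  # roughly 0.1s, not the 0.4s a serial loop would take
```

Because the "builds" here are independent, the total wall time is set by the slowest one, not by the sum -- which is exactly why multi-core matters more for a workflow than for one application.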
Last edited by gosh; 08-23-2008 at 04:15 AM.
I have more than 10 years of experience as a gamer with games and various systems, so yes, I dare say I know how a processor works and what it influences in games. On the second point I don't care: since I only play games, I don't need to know what development method they use, be it classical project management or another style.
You blow min FPS as a factor way out of proportion. Yes, as a gamer I know how important min FPS is, and in fast first-person shooters the FPS decides who is winning and who is losing.
But min FPS is ONE, I repeat, ONE data point. Let's take Jack's FEAR benchmark as an example:
http://www.xtremesystems.org/forums/...&postcount=281
The AMD rig @ 3GHz has a higher min FPS but a lower average FPS; as a gamer I'll take a higher average FPS over one second (or less) of lower min FPS.
Single-threaded games should be better on Intel when it comes to min FPS too; it is a bit strange that AMD won a min @ 3.0 GHz. If they are measuring the min over less than one second, it would be more logical to see a higher min because of higher memory bandwidth or something like that. One second is a very long time for a processor.
I did some reading about how games count FPS (min and max). It seems that every game counts frames in some relation to time in order to know how far to move things. They have a game loop (some games anyway). If the game is single-threaded, this also matters for reading the mouse (how accurate that will be) and the keyboard.
They don't seem to count FPS from the delta between two frames and present that score. That could be a bit risky too, because things can happen on the computer that add extra time between two frames -- maybe textures need to be reloaded, etc. Then min FPS could be less than 10, but that number would not matter for the gameplay, and it would be bad advertising for the game.
It is possible to hook into DirectX and create an application like Fraps. I will try to write an application that does this when I have some spare time.
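The delta-between-frames measurement described above can be sketched without touching DirectX at all. A toy loop (Python, with a sleep standing in for render work -- an illustration of the idea, not how Fraps is implemented):

```python
import time

def simulate_frames(n_frames=5, work_s=0.01):
    """Toy render loop: timestamp each frame and record the frame-to-frame delta."""
    deltas = []
    last = time.perf_counter()
    for _ in range(n_frames):
        time.sleep(work_s)           # stand-in for game update + render work
        now = time.perf_counter()
        deltas.append(now - last)    # delta between two consecutive frames
        last = now
    return deltas

deltas = simulate_frames()
for i, d in enumerate(deltas):
    # instantaneous FPS is simply 1 / frame delta
    print(f"frame {i}: {d * 1000:.1f} ms -> {1.0 / d:.0f} fps")
```

Min FPS in this scheme is just 1 divided by the largest delta, so a single hiccup (texture reload, background task) produces one large delta and drags the reported min down -- exactly the risk described above.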
Very interesting..... the memory performance rises very nicely, a 10ns reduction in latency + a nice increase in bandwidth just by increasing the NB speed 20%.
Can you run any gaming tests comparing 2.0GHz / 2.4GHz NB speeds?
I really wish review sites were as comprehensive as you!
If you look closely you can see for example that:
HDR Canyon flight goes from 85.8 to 88.1 fps
and so on; I guess it should translate the same way to real games.
--------------------------------------------------
AMD Phenom II 1090T @ 4GHz Asus Crosshair IV
HD6970
LSI Megaraid 9260-4i 4xMomentus XT
OCZ Vertex 3@SB850
8 gig Patriot Viper 7-7-7-24 T1
Swiftech Watercooling
Filco Majestouch 2
Zowie EC1
--------------------------------------------------
Did some testing with Crysis at different NB speeds.
In a non-GPU-bound scenario you can see quite a bit of performance gain.
Pictures: NB 2000MHz vs. 2400MHz, CPU at 3100MHz in both cases.
Last edited by xPliziT; 08-23-2008 at 12:23 PM.
And another strange thing.
I overclocked from 3100MHz to 3316MHz and my min FPS jumped from 20 FPS to 85 FPS??? Strange.
Edit: seems to be a Windows issue. I did every test after a fresh Windows boot plus a three-minute wait, then repeated the tests later without rebooting.
Also, enabling the red dot in AOD gave me a min FPS boost of 5-8 FPS, which is a lot.
Last edited by xPliziT; 08-23-2008 at 12:39 PM.
Guys, don't look at that freakin' min FPS. It can be just ONE frame out of an overall run of 1000 frames or more, and by itself it means nothing. If you want to focus on the "overall min FPS" you have to provide a graph recording all frame rates.
Last edited by Boschwanza; 08-23-2008 at 01:43 PM.
I disagree to an extent, but not completely... min FPS means everything as it relates to the quality of game play. Within the limited sampling of a 'game' scene, any time the min falls below a playable threshold relates to how much stutter and pausing you get in a game.
It is also useful when analyzing the architecture: some segments of the script tax the GPU, others the CPU, and others both... how a system behaves at the min is important if you want to understand how it responds as a whole.
Last edited by JumpingJack; 08-23-2008 at 05:35 PM.
Not really strange -- unless it is very reproducible. After running it 4 to 6 times, is each run consistently 85 FPS? If it is always 85 FPS, then that is strange.
The Crysis CPU test is physics- and debris-heavy, which is why the CPU test blows a lot of stuff up... whereas the GPU test is just a flyby (rendering-heavy). Watch the CPU script run several times: while the trace/pattern is the same, the way things fall apart differs each time, and the debris pattern and quantity differ -- so some randomness is not unexpected.
See my response above; this is not completely unexpected.
Run UT3 using a self-sustaining bot match... a useful benching utility is UT3bench (google it)... if you do, ping me for instructions... the damn thing is buggy and won't do what you want unless you do something special first. This will show you some real random variability.
Last edited by JumpingJack; 08-23-2008 at 04:54 PM.