
Thread: Intel Q9450 vs Phenom 9850 - ATI HD3870 X2

  1. #251
    Xtreme Enthusiast
    Join Date
    Mar 2008
    Posts
    750
    Or in all games. Although the card is overkill for my system right now (Crysis runs perfectly fine at 4x AA and 1280 x 1024), I still think there's room for improvement on ATI's part. This thing seems like it could be much faster, and currently the drivers seem to offload some work onto the CPU (all 4 cores utilized while running some dual-core games, especially noticeable in applications that fully stress the graphics card).

    Actually, if you want to see it offloading work onto the CPU, try PCSX2. I've seen up to 50% utilization on the extra two CPU cores while the graphics card is stressed in that software.
    Motherboard: ASUS P5Q
    CPU: Intel Core 2 Quad Q9450 @ 3.20GHz (1.07v vCore! )
    RAM: 2GB Kingston HyperX 800MHz
    GPU: MSI Radeon HD 4870 @ 780/1000 (default)

  2. #252
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by RunawayPrisoner View Post
    Or in all games. Although the card is too overkill for my system right now (Crysis works perfectly fine at 4x AA and 1280 x 1024), I still think there's room for improvement on ATI's part. This thing seems like it can be much faster, and currently, the drivers seem to offload some work on the CPU (all 4 cores utilized while running some dual-core games, especially noticeable in applications that fully stress the graphics card).

    Actually, if you want to see it offloading work on the CPU, try PCSX2. I've seen up to 50% of the extra two cores of the CPU being utilized while the graphics card is stressed in that software.
    Well, the GPU people work differently than the CPU people... each game has a tailored profile, and the x-fire or SLI usually takes more time to tweak out.
    One hundred years from now it won't matter what kind of car I drove, what kind of house I lived in, how much money I had in the bank, nor what my clothes looked like... but the world may be a little better because I was important in the life of a child.
    -- from "Within My Power" by Forest Witcraft

  3. #253
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Well ... I am beginning to get this all figured out. The 4870 X2 is a mixed bag at the moment, and there is a lot of sensitivity to configuration, especially CPU clock speed and core count. I will not do anything systematic just yet, not until I am comfortable that I can reproducibly recreate scores from various reviews (particularly TechPowerUp -- that was one of the best on the net). However, the problems I mentioned above were not because only a single GPU was firing; it was because my processor was clocked too low.

    To make a long story short.... I am interested in comparing processors at a valid base clock speed; my 9850 is stock at 2.5 GHz, so I have been running most of my testing around there. In some games I am seeing huge improvements, in others not so much -- which I thought very odd. However, it is clear that the X2 pushes the current gaming matrix (with a few notable exceptions: Crysis, WiC) into the CPU-limited domain.

    To see what I mean...

    Here is 3DMark06 (default values, 1280x1024, no AA, no AF) for a 2.5 GHz QX9650:


    Here is 3DMark06 for a 3.67 GHz QX9650:


    Here is why I say 'core count' with respect to 3DM06:
    TechReport's 3.6 GHz 8400 got a 3DMark06 score of 17555

    Ok, so the summary:

    2.5 GHz QX9650: 3DMark06 score 14471, SM2 5231, SM3 7316, CPU 4025
    3.67 GHz QX9650: 3DMark06 score 20609, SM2 7655, SM3 10285, CPU 5753

  4. #254
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    California
    Posts
    1,461
    I saw the same behavior with my Q6600 and 9800GX2... it's because the SM2 and SM3 tests are single-threaded, so core clock is very important. Try overclocking the GPU with the QX9650 @ 2.5GHz and you won't see any improvement.
    1.7%

  5. #255
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by Loser777 View Post
    I had the same behavior with my Q6600 and 9800GX2... it's because the SM2 and SM3 tests are singlethreaded, so core clock is very important. Try overclocking the GPU with the Qx9650 @ 2.5GHz and you won't see any improvement.
    You know, I haven't checked... that would make sense. I swear, COH must be the most patched game ever.
    Last edited by JumpingJack; 08-16-2008 at 11:04 PM.

  6. #256
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Question: is there information about the difference in latency when reading and writing data over HyperTransport versus the Front Side Bus?
    HyperTransport is designed as a very fast point-to-point link (if I am right). If this communication is very fast, it could be one explanation for why AMD is even with, or sometimes a bit ahead of, Intel on some tests of single-threaded games at very high detail and high resolution.
    If communication with external hardware were similar on AMD and Intel, then Intel should always win in single-threaded games at the same clock, even if the main bottleneck is the GPU. 6 MB of L2 cache at 15 clocks usable by one core, compared to 512 KB of L2 cache on AMD, makes a huge difference (I think more than 10% in all single-threaded games) when processor performance is compared.
    Last edited by gosh; 08-17-2008 at 07:48 AM.

  7. #257
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
    Question: Is there information about the difference in latency reading and writing data comparing Hypertransport and the Front Side Bus?
    Hypertransport is designed to be a very fast point to point communication (if I am right). If this communications is very fast it could be one explanation why AMD is even or sometimes have a bit better numbers on some tests when they test single threaded games on very high detail and high resolution.
    If communication to external hardware is similar on AMD and Intel then Intel should always win ins single threaded games if they are clocked the same even if the main bottleneck is the GPU. 6 MB L2 cache at 15 clocks that can be used for one core compared to 512 KB L2 cache on AMD does make a huge difference (I think more than 10% in all single threaded games) if processor performance are compared.
    Gosh ....

    You are confused a bit about the communication to and from the GPU on the different platforms.

    Let's take getting a chunk of data from memory to the GPU (which does happen, just not the volume you think it is)....

    AMD's HyperTransport connects the chipset to the CPU, then the CPU to memory. Intel's layout connects the memory to the chipset, then to the GPU. PCIe 2.0 is spec'ed to be its own bus master, as opposed to earlier implementations which used DMA. On Intel's platform, the GPU has direct access to low-level system memory for various data, and the CPU simply writes the command buffer in GPU memory, because it does not need to cross the front side bus to reach the memory data in the first place. The GPU is only one hop away from memory on the Intel platform; it is two hops away on AMD's.

    In terms of the data the GPU gets, it is in fact very little, not enough to saturate the FSB or HT ... the CPU populates the command buffer on the GPU for the GPU to do its work; all the other large data elements are precached in GPU memory (hence the reason GPU card makers keep upping memory: to keep pace with the large textures of today's games).

    http://people.cs.uchicago.edu/~robis.../gpu_paper.pdf
    The GPU is able to make calls to a certain window of the system’s main memory and is responsible for loading the data it will operate on into its own memory. The CPU directs this activity by writing commands to a command buffer on the GPU. On the old fixed-function pipeline, these commands associated matrices with vertex data. Now, CPU commands could very well point to program code that will be fetched and executed on the GPU.
    This is essentially your second misunderstanding: you are thinking that all the data for a game event is stored in system memory. It is not... I provided you a link that showed the video memory usage per game; perhaps you did not realize that it was reporting the video memory on the card and not system memory. All the heavy-duty data needed for rendering a level is first loaded into the GPU's local memory (textures, vertex data, etc.). This is why, when you start a game, it takes several seconds (20, 30, or even a few minutes) to load... it is transferring that data over the low-BW bus (both HT and FSB are low BW compared to the memory BW of a GPU).

    Even nVidia provides you the concept of the partition between main and GPU memory:

    http://http.developer.nvidia.com/GPU...gems_ch28.html

    The point is.... on an Intel platform, the Graphics Memory Controller Hub (GMCH) gives the GPU one-hop access to memory. AMD's arrangement makes it a two-hop access ... if anything, Intel provides lower-latency access from the graphics card to main memory. In either case it is irrelevant, since all the texture and geometry data is loaded into video RAM (with its high-BW interface) before run time.

    This is moot regardless, because the volume of data the GPU needs from main memory is very small: everything the GPU needs is placed in the vertex, texture, mesh, and other buffers in local GPU memory, and the rendering for the scene is done by the GPU via commands written to the command buffer.

    If the bottleneck is the GPU, then AMD and Intel will tie, +/- a few FPS of measurement noise -- single-threaded or multithreaded, it does not matter. Again, Gosh ... moving to parallel computational methods, for gaming code or any code, simply produces the computational result faster than a single thread would. A single-task application will always speed up if you can run segments in parallel rather than in simple sequential execution. The trick, and the challenge, of multithreading in gaming is the interdependency of the segments on one another. This is why you see some speedup, but not a 2x gain, going from one thread to two. This is really nothing more than an example of Amdahl's Law.
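    To put a number on that, here is a minimal sketch of Amdahl's Law (the function name and the 60% fraction are mine, purely illustrative):

```python
# Amdahl's Law: overall speedup when a fraction p of the work can be
# split across n threads and the rest stays sequential.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# If 60% of a game's frame work parallelizes, two threads give only
# about a 1.43x gain -- well short of 2x.
print(round(amdahl_speedup(0.6, 2), 2))  # 1.43
```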

    Intel produces a computational result (clock for clock) faster than AMD, and as such the CPU-dependent code will finish faster, single- or multithreaded; hence Intel will be faster in games.

    You are being fooled and misled by forum posters who run their tests up to the GPU limit; you then draw the incorrect conclusion that it is somehow manifested in single/multithreading. This happens all the time....

    At low resolutions, Intel wins by 20, 30, up to 50% clock for clock in gaming, but at high resolutions they show as tied... this is again due to the GPU bottlenecking the computation flow.

    nVidia states it in their own words:
    http://http.download.nvidia.com/deve...erformance.pdf

    They show a bottleneck flow chart that does exactly what we have been telling you.... to find the GPU bottleneck, vary the resolution: if the FPS varies, it is the GPU or some component within the GPU pipeline; if not, it is the CPU:


    Varying or increasing the graphically important parameters in a game changes the computational workload on the GPU (NOT THE CPU). This is why, when one wants to assess the computational capability of a CPU on gaming code, it is important to observe the CPU as the limiter (i.e. at low resolutions) in order to make a statement about how well the CPU handles the parts of the game that require the CPU (i.e. non-graphical code such as physics, AI, boundary collisions, etc.)
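    The first branch of that flow chart can be sketched as a rule of thumb (the function and the 5% threshold are mine, purely illustrative):

```python
# nVidia's rule of thumb: hold everything constant and vary only the
# resolution. If FPS moves, the limiter is in the GPU pipeline; if it
# barely moves, the CPU is the limiter.
def likely_bottleneck(fps_low_res, fps_high_res, tolerance=0.05):
    change = abs(fps_low_res - fps_high_res) / fps_low_res
    return "GPU" if change > tolerance else "CPU"

print(likely_bottleneck(120.0, 70.0))  # GPU
print(likely_bottleneck(85.0, 84.0))   # CPU
```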

    Let's go back to your latency question.... it is MOOT; even if the FSB latency were 3x longer, it would not make a difference.

    At 200 frames per second, the GPU is busy rendering for ~1/200 of a second, or 0.005 seconds. That is 5 milliseconds, or 5,000 microseconds, or 5,000,000 nanoseconds. A latency of even 200 nanoseconds is a wink compared to the time the GPU spends on its calculation, even at a very high frame rate.
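    The arithmetic, spelled out (the 200 ns figure is the generous assumption from the paragraph above):

```python
# Frame budget at 200 FPS versus an assumed 200 ns bus latency.
frame_time_ns = 1e9 / 200     # 5,000,000 ns per frame
bus_latency_ns = 200.0        # generous assumed bus latency
share = bus_latency_ns / frame_time_ns
print(frame_time_ns, share)   # 5000000.0 4e-05 -- 0.004% of a frame
```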

    The more interesting question to ask is: what architectural feature of the Core uArch allows Intel to perform so much better at executing gaming code than AMD's solution?

    Jack
    Last edited by JumpingJack; 08-17-2008 at 08:57 AM.

  8. #258
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by JumpingJack View Post
    Let's take getting a chunk of data from memory to the GPU (which does happen, just not the volume you think it is)....
    What is the volume?


    Quote Originally Posted by JumpingJack View Post
    Your second misunderstanding it thinking that all the data for a game event is stored in system memory, it is not ..
    What I wrote was that the points in 3D space are calculated by and sent from the CPU. I know that DMA (http://en.wikipedia.org/wiki/Direct_memory_access) is used to load the textures etc. that are used to "paint" the picture. If all this data were sent to the GPU for every picture, it would be so slooooow . I think there is enough data being sent anyway. Just look at 3D drawings that have that grid-like look. And add to that all the commands used to tell it how to "paint" the picture.

    What I asked for was whether there is information about latency comparing HyperTransport and the Front Side Bus?

    EDIT: If the GPU needs to use RAM (access RAM on the motherboard) during gameplay, performance goes down the toilet, as we say
    Last edited by gosh; 08-17-2008 at 09:28 AM.

  9. #259
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
    What is the volume?
    Less than what an 800 MHz FSB can support. You don't need to know the exact number; you can set up an experiment to see if it matters.... i.e. change the FSB speed and see if it changes the results..... I showed you that data: http://www.xtremesystems.org/forums/...6&postcount=59.

    Nonetheless, the thought exercise was not about the size but about how each platform retrieves a chunk of data.

    I also showed you that Intel scales a multicore, multithreaded game along the same curve as an AMD CPU:


    However, if you want to measure it.... you can download Intel's VTune or AMD's CodeAnalyst and monitor the bus-busy counters.

    What I wrote was the points in 3d space is calculated and sent from the CPU. I know that DMA (http://en.wikipedia.org/wiki/Direct_memory_access) is used to load these textures etc that is used to "paint" the picture.
    The vertices are already loaded into GPU memory; changes in that geometry (such as a wall blowing up) are sent by the CPU, but the entire 3D mesh is not recalculated every time... camera position and perspective are, and this is what the GPU uses to render the image. The GPU also stores the Z-buffer, which determines which surfaces are visible in front of one another from the camera's perspective.

    This is by design. You are correct in saying that the FSB is slow... so is HT... in fact, HT is slower than the FSB in one direction (which would be from CPU to GPU). A 2000 MHz HT link moves 2 bytes of data in one direction, or 4000 MB/sec (4.0 GB/sec)... the FSB is half-duplex, giving 1333 x 8 in one direction, or 10.6 GB/sec.... technically speaking, if data needs to get from the CPU to the GPU, Intel provides more peak BW.
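    As a quick check on those peak numbers (link widths and clocks as described in this post):

```python
# One-direction peak bandwidth as described above:
# 2000 MHz HT link x 2 bytes vs. 1333 MHz FSB x 8 bytes.
ht_one_way_gbs = 2000e6 * 2 / 1e9    # 4.0 GB/s
fsb_one_way_gbs = 1333e6 * 8 / 1e9   # ~10.7 GB/s (half-duplex, shared)
print(ht_one_way_gbs, round(fsb_one_way_gbs, 1))  # 4.0 10.7
```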

    Nonetheless, GPU makers and the HW/software have evolved to move all the BW-hungry components into the local memory of the GPU, and to design in ~100 GB/sec of BW from VRAM to GPU for this very reason. All the GPU needs from the CPU is "what do I do next" (a command list, hence a command buffer).

    Now, when the resolution goes high enough that the textures are larger than what can be held in VRAM, then yes... a huge performance hit is taken, because now the GPU must fetch a texture it does not have from system memory across that slow FSB or HT link. I have only seen recent examples of this:
    http://www.anandtech.com/video/showdoc.aspx?i=3372&p=9
    http://www.guru3d.com/article/radeon...w-crossfire/11
    Notice the 512 MB cards dropping like a rock going from 1920x1200 to 2560x1600 ... at 2560x1600 the textures are too large to fit into VRAM... and the reviewers correctly conclude that. This is called texture thrashing. I even provided you a link earlier showing modern games' VRAM usage. The vast majority of games -- in fact, all that I have seen so far -- are able to fit a level's worth of textures into 512 MB. GRID is the first I have seen that exceeds the 512 MB barrier, and it only does so at 2560x1600.
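    A rough back-of-the-envelope shows why the fixed render buffers are not the problem and it is the texture set that overflows (bytes-per-pixel and buffer count here are my assumptions, not measurements from GRID):

```python
# Color front/back buffers plus Z-buffer, each at 4 bytes per pixel.
def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=3):
    return width * height * bytes_per_pixel * buffers / 2**20

# Even at 2560x1600 the buffers take well under 50 MB of a 512 MB card;
# the remaining ~460 MB is what the level's textures must fit into.
print(round(framebuffer_mb(2560, 1600), 1))  # 46.9
```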



    If all this data would be sent to the gpu for every picture than it would be so slooooow . I think that there is enough data that is sent any way. Just look at 3d drawings that have that grid like looks. And add to that all command used to inform how to "paint" the picture
    Did you even read the above post... did you not see the bottlenecking flow chart from nVidia... even nVidia argues for a GPU- or CPU-limited scenario.

    I am not sure what you are talking about in terms of 3D drawings... ray tracing? Intel is faster there too... significantly faster.

    What I asked for was if there is information about latency comparing Hypertransport and the Front Side Bus?
    I have not seen any study or data comparing the latency of just the bus; it has always been a convoluted measure of latency through the bus to something else. I can look for you...
    Last edited by JumpingJack; 08-17-2008 at 09:55 AM.

  10. #260
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    If this communications is very fast it could be one explanation why AMD is even or sometimes have a bit better numbers on some tests when they test single threaded games on very high detail and high resolution.
    Ok... there is not a good explanation for this... even Lost Planet, at higher resolutions, shows AMD supporting higher FPS in GPU-bound scenarios... but if you followed the thread, I can get Intel's high-res GPU-bound FPS to exceed AMD's by increasing the PCIe frequency (i.e. the BW of the PCIe bus).

    My hypothesis is that the AMD implementation of PCIe 2.0 in the chipset is better than Intel's.

    jack

  11. #261
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    4870 X2 update: well guys, it is gonna be a bust for a while, I suspect. Only on a few occasions can I get frame rates that exceed an 8800 GTX under the same test conditions (regardless of CPU used). I am almost certain this is a driver issue, and that the drivers that ship with the card are different from the drivers used in the press reviews we saw. I am getting, in some cases, 1/2 the FPS other reviewers have shown, under similar settings, setup, etc. If AMD does not produce a press-quality driver within the next week, the cards are going back.

    This is the quote from Guru3D: "With the latest press-driver used in this review the X2 finally is starting to show some better performance scaling." ... I cannot be certain that the drivers I am using are correct.


    jack

  12. #262
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by JumpingJack View Post
    I am not sure what you are talking about in terms of 3D drawings... Ratracying? Intel is faster there to... significantly faster.
    And I don't really understand your answer. I asked one simple question and am getting a big answer where you tell me that I don't understand.

    The key to performance on the video card is the same as when they compress video as much as possible: you just redraw what needs to be redrawn, and don't refresh data that hasn't changed. This isn't a problem if there isn't any action in the game; having high FPS then isn't that important. You don't want low performance when there is action, though, and when there is action, much data needs to be processed. When there is action in the game, the need for fast communication is very important. Processors are extremely fast; they mostly sit and wait for data, and moving all that data needs to be fast.

    If you mean that they write to memory on the motherboard first and then copy memory from there to the GPU, that would seem rather stupid. If the call is asynchronous they could get more performance (the processor only needs to wait to copy it to RAM) for that command, but that is very hard to do because you don't know when the command is ready (you need synchronization). You (or the driver) have to check that. If it is synchronous, then you need to wait for two memory transfers for the same data. Also, I don't think that is an improvement compared to preparing buffers on the stack and just sending them to allocated memory on the video card. Stack data is normally in L1 or L2 cache (for both AMD and Intel) if it isn't too big, if I am right.

    HyperTransport 3.0 is 20.8 GB/s. I have read that they can't use all that bandwidth on PC motherboards for AMD, but the same goes for the FSB. You need some insane OC to go over 10 GB/s, and that is only achieved if data is transferred in long "trains".

    Quote Originally Posted by JumpingJack View Post
    I have not seen any study or data comparing the latency of just the bus, it has always been a convoluted measure of latency through the bus to something else. I can look for you...
    That would be very interesting. I looked before, but data on the speed between CPU and GPU was not easy to find. I did find other data on the speed, but it seemed too low to be true for video communication.

  13. #263
    Xtreme Enthusiast
    Join Date
    Mar 2008
    Posts
    750
    Quote Originally Posted by JumpingJack View Post
    4870 X2 update: well guys, it is gonna be a bust for a while I suspect. Only on a few occasions can I get frame rates that exceed a 8800 GTX under the same test conditions (regardless of CPU used). I am most certain that this is a driver issue, and that the drivers that ship with the card is different than the drivers used by the press-reviews that we saw. I am getting in some cases 1/2 the FPS of what other reviewers have shown, under similar settings, setup, etc. If AMD does not produce a press-like quality driver within the next week, the cards are going back.

    This is the quote from Guru3D: "With the latest press-driver used in this review the X2 finally is starting to show some better performance scaling." ... I cannot be certain that the drivers I am using are correct.


    jack
    So... which one (drivers) are you using, again?

  14. #264
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by RunawayPrisoner View Post
    So... which one (drivers) are you using, again?
    The released drivers on the CD ... 8.7 from AMD's website will not install; it says valid HW not found. I put the beta 8.8 drivers on (a scratch-partition 'dirty' build), and got the same results. What it looks like to me is that ATI seeded the review sites with an alpha driver, with the correct profiles for the games they were using. I can match 3DMark06, for example, and some settings in COH, but most others are a bust.

    The reason I am thinking this is that a few review sites are getting the same results I am:

    http://www.tomshardware.com/reviews/...md,1992-4.html (pains me to link this )....

    They likely used the drivers out of the box.... I am just not getting the high-octane results I have seen on the other websites, even with the same supporting HW.

    EDIT: the actual driver version reported by catalyst is 8.52.6-080709a-048489c-ATI
    Last edited by JumpingJack; 08-17-2008 at 02:56 PM.

  15. #265
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
    And I don't really understand your answer. I asked one simple question and am getting big answer where you tell me that I don’t understand.

    The key to performance on the video card is the same as when they compress videos as much as possible. You just redraw what needs to be redrawn. Don’t refresh data that hasn’t been changed. This isn’t a problem if there isn’t any action in the game. Having as high FPS then isn’t that important. You don’t want low performance when there is action though and when there is action then much data needs to be processed. When there is action in the game then the need for fast communication is very important. Processors are extremely fast. They mostly sit and wait for data and moving all that data needs to be fast.

    If you mean that they write to memory on the motherboard first and then copy memory from there to the GPU that would seem rather stupid. If the call is asynchronous they could get more performance (the processor will only need to wait to copy it to ram) for that command but that is very hard to do because you don’t know when the command is ready (you need synchronization). You have to check that (or the driver). If it is synchronous then you need to wait for two memory transfers for the same data. Also I don’t think that is one improvement compared to prepare buffers on the stack and just send it to allocated data on the video card. Stack data is normally in L1 or L2 (for both amd and intel) cache if it isn’t too big if I am right.

    HyperTransport 3.0 is 20.8 GB/s, I have read that they can’t use all that bandwidth on the pc mothterboards for amd but the same goes for the FSB. You need some insane OC to go over 10 GB/s and that is only achieved if data is transferred in long "trains".


    That would be very interesting, I looked some before but finding data on speed between cpu and gpu was not easy to find. I did find other data on the speed but it seemed to low to be true for video comunication.
    asked one simple question and am getting big answer where you tell me that I don’t understand.
    Because you don't understand; even the context of your question is ludicrously silly. First you complain that no one is explaining; now that it is explained, you do not want a long answer you do not understand.

    Obviously, no matter what data I show you, no matter if I link up even the GPU makers themselves, you will not understand. I have explained it in as simple terms as I can... so I am finished.

    Just a bit of advice: do not try to pair a Phenom with a high-end GPU, or you will be disappointed.

    jack
    Last edited by JumpingJack; 08-17-2008 at 04:21 PM.

  16. #266
    Xtreme Enthusiast
    Join Date
    Mar 2008
    Posts
    750
    Quote Originally Posted by JumpingJack View Post
    The released drivers on the CD ... 8.7 from AMD's website will not install, says valid HW not found. I put beta 8.8 drivers on (a scratch partition 'dirty' build), and same results. What is looks like to me is that ATI seeded the review sites with an alpha driver, with the correct profiles for the games they were using. I can match 3DMark06 for example, and some settings on COH, but most others are a bust.

    The reason I am thinking this is that a few review sites are getting the same results I am:

    http://www.tomshardware.com/reviews/...md,1992-4.html (pains me to link this )....

    They likely use drivers out of the box.... I am just not getting the high octane results I have seen on the other websites, even with the same HW supporting.
    So the verdict is you can't make use of the 4870 X2 "yet" to provide more information, right? Well... that would be enough for now. You confirmed one thing I said on the first page: AMD implemented PCI-E 2.0 better than Intel.

  17. #267
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by RunawayPrisoner View Post
    So verdict is you can't make use of the 4870X2 "yet" to provide more information, right? Well... that would be enough for now. You confirmed one thing I said in the first page: AMD implemented better PCI-E 2.0 than Intel.
    I am pretty certain that is the case. nVidia has publicly complained about Intel's PCIe implementation for years -- they use that as the reason they don't release SLI on Intel chipsets.

    I get better performance out of my 8800 GTX than what I am seeing right now with the 4870 X2's, with just a few exceptions.
    Last edited by JumpingJack; 08-17-2008 at 03:03 PM.

  18. #268
    Xtreme Member
    Join Date
    Apr 2006
    Posts
    393
    Jack, don't even bother with gosh. No matter how much data you show him, he won't try to understand because something green blinds him.

  19. #269
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Remember that much of the data that goes through PCIe to the GPU has ALSO been sent through the FSB. During gaming, most of the data sent over PCIe comes from the CPU, and that data always travels through the FSB.

    The question is… is it really PCIe that is bad, or is the FSB the main bottleneck?
    Why has Intel removed the FSB on Nehalem?
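    The FSB-versus-PCIe question above can at least be framed with a back-of-the-envelope peak-bandwidth comparison. This is a rough sketch using theoretical numbers only (a quad-pumped 1333 MT/s FSB and PCIe 2.0's 500 MB/s per lane per direction); real sustained throughput is lower on both buses:

    ```python
    # Theoretical peak-bandwidth comparison: FSB vs PCIe 2.0 x16.
    # Not measured data -- just the spec-sheet numbers for context.

    def fsb_bandwidth_gbs(mt_per_s, bus_bytes=8):
        """Quad-pumped FSB: transfers/sec times 64-bit bus width."""
        return mt_per_s * 1e6 * bus_bytes / 1e9

    def pcie2_bandwidth_gbs(lanes):
        """PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> 0.5 GB/s per lane per direction."""
        return lanes * 0.5

    fsb = fsb_bandwidth_gbs(1333)    # ~10.7 GB/s, shared by all cores and I/O
    pcie = pcie2_bandwidth_gbs(16)   # 8.0 GB/s each direction
    print(f"FSB 1333:     {fsb:.1f} GB/s (shared)")
    print(f"PCIe 2.0 x16: {pcie:.1f} GB/s per direction")
    ```

    On paper the two are in the same ballpark, which is why the question can't be settled from spec sheets alone and needs the kind of clock-scaling tests discussed in this thread.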

  20. #270
    Xtreme Member
    Join Date
    Apr 2006
    Posts
    393
    Quote Originally Posted by gosh View Post
    Remember that much of the data that goes through PCIe to the GPU has ALSO been sent through the FSB. During gaming, most of the data sent over PCIe comes from the CPU, and that data always travels through the FSB.

    The question is… is it really PCIe that is bad, or is the FSB the main bottleneck?
    Why has Intel removed the FSB on Nehalem?
    Obviously, because server workloads would gain from QPI. Jack showed you TONS OF DATA, GET IT THROUGH YOUR HEAD.

  21. #271
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Quote Originally Posted by gosh View Post
    Remember that much of the data that goes through PCIe to the GPU has ALSO been sent through the FSB. During gaming, most of the data sent over PCIe comes from the CPU, and that data always travels through the FSB.

    The question is… is it really PCIe that is bad, or is the FSB the main bottleneck?
    Why has Intel removed the FSB on Nehalem?
    It's the PCIe. I can change the FSB speed all I want... no change. I can bump the PCIe clock 10% and get a 5% improvement right off the bat. This is wasted time and effort; data means nothing to you.

    Intel has moved away from the FSB because, as core counts go up, more bandwidth will be needed; they are starting now, before the 6- and 8-core parts hit. A 1333 MHz FSB is plenty to satisfy desktop needs in nearly all applications, games included. But 4, 6, or 8 cores in server and HPC workloads need that bandwidth.
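    The 10%-clock-bump, 5%-gain observation above can be turned into a rough estimate of how PCIe-bound the workload is. This is a hypothetical two-component model (a fraction f of frame time scales inversely with PCIe clock, the rest is unaffected), not anything measured beyond the two percentages quoted in the post:

    ```python
    # Simple sensitivity model: if a fraction f of frame time scales
    # with PCIe clock, a clock gain g yields normalized frame time
    #   t/t0 = f/(1+g) + (1-f)
    # Setting t0/t equal to the observed speedup and solving for f:

    def bound_fraction(clock_gain, speedup):
        """Fraction of frame time implied to be PCIe-limited."""
        return (1 - 1 / speedup) / (1 - 1 / (1 + clock_gain))

    # Numbers from the post: +10% PCIe clock -> +5% performance.
    f = bound_fraction(clock_gain=0.10, speedup=1.05)
    print(f"Implied PCIe-limited fraction of frame time: {f:.0%}")
    ```

    Under this toy model, the quoted numbers would imply roughly half the frame time is PCIe-limited, which is consistent with the claim that FSB changes make no difference while PCIe changes do.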
    Last edited by JumpingJack; 08-17-2008 at 04:22 PM.

  22. #272
    Xtreme Enthusiast
    Join Date
    May 2008
    Posts
    612
    Quote Originally Posted by Clairvoyant129 View Post
    he won't try to understand because something green blinds him.
    Well, it could be the other way around. I have pointed out Intel's strong areas, but say anything good about AMD compared to Intel and it seems to trigger an allergic reaction; then you get explanations that are very hard to make sense of if you know a bit about the subject, because they don't add up. If you show tests where AMD wins, then there is some error; if Intel wins, then it is fine.

  23. #273
    Xtreme Member
    Join Date
    Apr 2006
    Posts
    393
    Quote Originally Posted by gosh View Post
    Well, it could be the other way around. I have pointed out Intel's strong areas, but say anything good about AMD compared to Intel and it seems to trigger an allergic reaction; then you get explanations that are very hard to make sense of if you know a bit about the subject, because they don't add up. If you show tests where AMD wins, then there is some error; if Intel wins, then it is fine.
    What? This is about you making false claims about the FSB and Jack trying to help you, but obviously, no matter how much he tries, you don't want to listen to him.

  24. #274
    Xtreme Mentor
    Join Date
    Mar 2006
    Posts
    2,978
    Crysis on a 4870 X2. I ran both the CPU and GPU bench at resolutions of 1024x768, 1280x1024, and 1680x1050. Each was tested at 4x AA and again at 16x AA, for a total of six runs per bench.

    Phenom @ 2.5 GHz
    GPU Bench

    CPU Bench


    QX9650 @ 2.5 GHz (clock speed matched for a clock-for-clock comparison)
    GPU Bench

    CPU Bench


    Output is attached.
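    The test matrix described above (3 resolutions x 2 AA levels, for each of the GPU and CPU benches) can be sketched out like this; the labels are just the settings named in the post, not any actual benchmark output:

    ```python
    # Enumerate the run matrix: 3 resolutions x 2 AA levels
    # = 6 runs per bench, 12 runs total per CPU.
    from itertools import product

    benches = ["GPU Bench", "CPU Bench"]
    resolutions = ["1024x768", "1280x1024", "1680x1050"]
    aa_levels = ["4x AA", "16x AA"]

    runs = list(product(benches, resolutions, aa_levels))
    print(len(runs))  # 12 runs per CPU (6 per bench)
    for bench, res, aa in runs:
        print(f"{bench}: {res} @ {aa}")
    ```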

  25. #275
    Xtreme Member
    Join Date
    Apr 2006
    Posts
    393
    Quote Originally Posted by JumpingJack View Post
    Crysis on a 4870 X2. I ran both the CPU and GPU bench at resolutions of 1024x768, 1280x1024, and 1680x1050. Each was tested at 4x AA and again at 16x AA, for a total of six runs per bench.

    Phenom @ 2.5 GHz
    GPU Bench

    CPU Bench


    QX9650 @ 2.5 GHz (clock speed matched for a clock-for-clock comparison)
    GPU Bench

    CPU Bench


    Output is attached.
    Jack, I really have to applaud you for being so persistent... but don't you feel like you're talking to a stone? Just give it up...
