
Thread: Intel Core i7 Review Thread

    Quote Originally Posted by Bellisimo View Post
    Your numbers were done with a SINGLE 8800GTX, judging from those screens...

    btw, 200 MHz vs. 400 MHz performance:

                 200 MHz | 400 MHz | Increase
        Average:  55.5   |  68.1   | 22.7 %
        Snow:     64.0   |  64.1   |  0.1 %
        Cave:     80.1   |  85.7   | 6.99 %

    and for a single 8800GTX that's quite impressive, if you ask me

    What do you mean by sock puppet? This is my only XS account, don't be silly...

    btw, as you can see in my Anand link, 2 GPUs scale about the same on QX9770 and X58 -- not a lot of difference -- but 3 GPUs act really weird.
    The FSB is just my guess; if your guess is more computational power because of Nehalem, fine, then I'll just search for some 5.5 GHz QX9770 screens with 3-way SLI...
    A) 'Average' is a real-time readout; the final numbers are only updated when the scene finishes, so only the Snow and Cave figures have real meaning. 'Average' changes depending on when I press screenshot, so your first number is bogus... download the Lost Planet demo to see what I mean. (Yes, it was an 8800GTX; the 4870X2s were installed later.)

    B) Oddly, Anand's tech explanation counters your argument completely -- did you comprehend what you actually linked? In multi-GPU setups nVidia utilizes the broadcast feature of PCIe, meaning all 3 cards get the command set in one broadcast:

    Broadcast technology allows only one message to be sent by the CPU where it is then received, replicated, and broadcasted to all GPUs, eliminating the need for multiple, near-identical transfers over the FSB.
    Whatever traffic the CPU sends over the bus is the same for one card as for three... this is typical of broadcast networks, where every bus agent receives the same data. Also, if FSB bandwidth is so critical, why doesn't cutting it in half have closer to a 50% impact? You didn't answer that question.
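As a toy model of that broadcast point (the per-frame command-stream size here is an illustrative assumption, not a measurement), the bus-traffic difference looks like:

```python
# Toy model: without broadcast, the CPU pushes a near-identical copy of
# the command stream over the FSB to each GPU; with PCIe broadcast it
# sends one copy and the chipset replicates it to all cards.
def bus_traffic_mb(cmd_stream_mb, num_gpus, broadcast):
    """Bus traffic needed per frame to feed `num_gpus` cards."""
    return cmd_stream_mb if broadcast else cmd_stream_mb * num_gpus

# A hypothetical 50 MB/frame command stream driving 3 GPUs:
print(bus_traffic_mb(50, 3, broadcast=True))   # 50  -- same as one card
print(bus_traffic_mb(50, 3, broadcast=False))  # 150 -- three copies
```

With broadcast the bus load is independent of GPU count, which is why the same traffic feeds one card or three.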

    Texture data is stored on the card, and bus transfers only happen when a required texture is not cached on the card. This is why memory on video cards keeps going up. When a new scene or object needs a texture that is not in memory, you will know -- the game stutters horribly. No bus currently available (FSB, HT, or even QPI) matches the bandwidth between VRAM and the GPU...
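A back-of-the-envelope sketch of why an uncached texture stutters. The bandwidth figures are assumptions for an 8800GTX-era system (roughly 4 GB/s for a PCIe 1.x x16 link, roughly 86 GB/s for the card's GDDR3), not measurements:

```python
def transfer_ms(size_mb, bandwidth_gb_s):
    """Time in ms to move `size_mb` of texture data at the given bandwidth."""
    return size_mb / (bandwidth_gb_s * 1024) * 1000

# A hypothetical 64 MB texture upload that missed the on-card cache:
pcie_ms = transfer_ms(64, 4)      # ~15.6 ms over the bus
vram_ms = transfer_ms(64, 86.4)   # well under 1 ms from VRAM
print(pcie_ms, vram_ms)
```

At 60 fps the frame budget is ~16.7 ms, so a single bus-side texture fetch of that size eats nearly the whole frame -- hence the stutter.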

    Texture bandwidth is consumed any time a texture fetch request goes out to memory. Although modern GPUs have texture caches designed to minimize extraneous memory requests, they obviously still occur and consume a fair amount of memory bandwidth. Modifying texture formats can be trickier than modifying frame-buffer formats as we did when inspecting the ROP; instead, we recommend changing the effective texture size by using a large amount of positive mipmap level-of-detail (LOD) bias. This makes texture fetches access very coarse levels of the mipmap pyramid, which effectively reduces the texture size. If this modification causes performance to improve significantly, you are bound by texture bandwidth.
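The quoted test works because each +1 mip level halves both texture dimensions, cutting the texels fetched by 4x. A quick sketch with an illustrative texture size:

```python
def effective_texels(width, height, lod_bias):
    """Texels addressed after an integer positive mipmap LOD bias.

    Each +1 level of bias halves both dimensions (one step down the
    mipmap pyramid), so texel count drops by 4x per level.
    """
    return (width >> lod_bias) * (height >> lod_bias)

print(effective_texels(2048, 2048, 0))  # 4194304 texels at full size
print(effective_texels(2048, 2048, 4))  # 16384 -- a 256x reduction
```

If the frame rate jumps after such a bias, texture bandwidth was the bottleneck.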

    Texture bandwidth is also a function of GPU memory clock.
    http://http.download.nvidia.com/deve...erformance.pdf
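That relationship is linear: peak VRAM bandwidth is the effective transfer rate times the bus width. A sketch -- the 900 MHz / 384-bit figures are used here as an assumed example matching an 8800GTX-class card:

```python
def mem_bandwidth_gb_s(mem_clock_mhz, bus_width_bits, transfers_per_clock=2):
    """Peak memory bandwidth: effective rate (MT/s) x bus width in bytes.

    GDDR3 is double-data-rate, hence transfers_per_clock=2.
    """
    return mem_clock_mhz * transfers_per_clock * (bus_width_bits / 8) / 1000

# Assumed 8800GTX-like config: 900 MHz GDDR3 on a 384-bit bus
print(mem_bandwidth_gb_s(900, 384))  # 86.4 GB/s
```

Raise the memory clock and texture bandwidth scales with it, independent of the core clock.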

    EDIT: OK, so no puppet -- I wondered because your level of understanding of the concepts is on the order of Gosh's.
    Last edited by JumpingJack; 11-04-2008 at 08:25 AM.
    One hundred years from now it won't matter
    what kind of car I drove, what kind of house I lived in,
    how much money I had in the bank, nor what my clothes looked like... but the world may be a little better because I was important in the life of a child.
    -- from "Within My Power" by Forest Witcraft
