Thread: OCCT 3.1.0 shows HD4870/4890 design flaw - they can't handle the new GPU test !

  1. #1
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    Barack Hussein Obama-Biden's Nation
    Posts
    1,084
    Quote Originally Posted by LordEC911 View Post
    I have shown that those numbers don't make sense. Nvidia cards should still be performing better than they do. With the numbers from a GTX285, they are not using the MUL to its full potential, one of the things that was supposedly "fixed" from G80. It is not performing at the 1.06TFlop (MADD+MUL) level nor at the 708GFlop (MADD) level, but somewhere in between. Basically, the MUL is only being exploited ~45% of the time.

    Sure, it is loading the cards but it is not exploiting all the shaders that the G200 has to offer.

    People are complaining that they cannot fully utilize their RV770 cards, since some have to be downclocked to be stable in OCCT.
    The flipside is that you are also not getting full advantage of all that G200 has to offer with the results of OCCT.
    When using a G200 you are only receiving ~82% of the performance.
    1.06TFlop for MADD+MUL still isn't as good as 1.2TFlop for 4870 cards. Please elaborate.
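    For reference, those headline figures are theoretical peaks, derived from shader count x shader clock x flops issued per clock. A minimal sketch, assuming the commonly quoted specs (HD4870 = 800 ALUs @ 750MHz, GTX285 = 240 SPs @ 1476MHz, a MADD counting as 2 flops and the extra MUL adding 1 more):

```python
# Theoretical peak shader throughput: ALUs x clock (MHz) x flops per clock.
# Assumed specs: HD4870 = 800 ALUs @ 750 MHz, GTX285 = 240 SPs @ 1476 MHz.

def peak_tflops(alus, mhz, flops_per_clock):
    return alus * mhz * flops_per_clock / 1e6

hd4870          = peak_tflops(800, 750, 2)   # MADD only      -> 1.20 TFLOPS
gtx285_madd     = peak_tflops(240, 1476, 2)  # MADD only      -> ~0.708 TFLOPS
gtx285_madd_mul = peak_tflops(240, 1476, 3)  # MADD + MUL     -> ~1.063 TFLOPS
print(hd4870, gtx285_madd, gtx285_madd_mul)
```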

    --two awesome rigs, wildly customized with
    5.1 Sony speakers, Stereo 3D, UV Tourmaline Confexia, Flame Bl00dr4g3 Fatal1ty
    --SONY GDM-FW900 24" widescreen CRT, overclocked to:
    2560x1600 resolution at 68Hz!(from 2304x1440@80Hz)

    Updated List of Video Card GPU Voodoopower Ratings!!!!!

  2. #2
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Bo_Fox View Post
    1.06TFlop for MADD+MUL still isnt as good as 1.2TFLOP for 4870 cards. Please elaborate.
    Here...

    Quote Originally Posted by LordEC911 View Post
    I wasn't trying to prove a point, I was simply answering a question.
    I can see where he might have been headed by asking that question though.

    The 4890 is what, ~10-15% behind a GTX285 with both at stock, on average, in "normal" games and apps?
    Yet with OCCT, the 4890 is ~56% faster, using the 83FPS vs 53FPS numbers.
    However this app is programmed, it stresses every part of the chip to the max, or at least quite a bit more than other "normal" apps/games.

    Also none of the numbers, i.e. FPP, seem to add up.
    4890@850mhz= 1.36Tflops
    GTX285@1476mhz= 1.06Tflops(MADD+MUL), .708Tflops(MADD)

    1.36/1.06= 1.28x greater (1/2 the FPS difference)
    1.36/.708= 1.92x greater (amusing since it doesn't mean anything, but = largon's power draw increase)
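    The ~45% MUL figure below can be back-calculated from these ratios. A rough sketch, assuming OCCT FPS scales linearly with effective shader throughput (an assumption, not something the app guarantees):

```python
# Back out how often the GTX285's "extra" MUL would have to issue to explain
# the measured 53 vs 83 FPS gap, assuming FPS scales with shader throughput.

hd4890_peak     = 800 * 850 * 2 / 1e6    # 1.36 TFLOPS (MADD only)
gtx285_madd     = 240 * 1476 * 2 / 1e6   # ~0.708 TFLOPS
gtx285_madd_mul = 240 * 1476 * 3 / 1e6   # ~1.063 TFLOPS

# Effective GTX285 throughput implied by the OCCT result (53 vs 83 FPS)
effective = hd4890_peak * 53 / 83        # ~0.868 TFLOPS

# Fraction of the extra MUL issue slots actually being used
mul_use = (effective - gtx285_madd) / (gtx285_madd_mul - gtx285_madd)
print(f"effective = {effective:.3f} TFLOPS, MUL used {mul_use:.0%} of the time")
```

    This also matches the "~82% of the performance" figure mentioned earlier: 0.868/1.063 is roughly 0.82.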

    Simply using max theoretical FPP is not an accurate way to estimate performance, but in this case it seems to be related. Since this app has been said to use simple shaders to completely load the ALUs, you could come to the conclusion that the MUL is only being used ~45% of the time.

    Basically, the way this app is programmed, it is able to use the 4890's architecture to the max and seems not to fully load Nvidia cards, per se.

    Edit- Anyone know what the stock voltage for a GTX280 is under load? 1.3-1.4v?
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  3. #3
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    Barack Hussein Obama-Biden's Nation
    Posts
    1,084
    Quote Originally Posted by LordEC911 View Post
    Here...
    Perhaps more of Nvidia's processing power is "reserved" for PhysX calculations. I wouldn't be surprised if many of those transistors are wasted anyway on 32 ROPs, the 512-bit memory interface, and especially PhysX.

    After all, nothing ever pushes closer than 99% of the theoretical maximum. Some of the older cards, like the G92 cards, were so inefficient that they only got within 75-85% of the theoretical maximum in fillrate tests.
    Last edited by Bo_Fox; 05-28-2009 at 01:22 PM.


  4. #4
    Xtreme Legend
    Join Date
    Jan 2003
    Location
    Stuttgart, Germany
    Posts
    929
    Quote Originally Posted by Bo_Fox View Post
    Perhaps more of Nvidia's processing power are "reserved" for PhysX calculations.
    According to Nvidia, nothing is reserved. When an app requests CUDA, the driver will set some shaders aside for that task for the lifetime of that CUDA application.

  5. #5
    Visitor
    Join Date
    May 2008
    Posts
    676
    Quote Originally Posted by LordEC911 View Post
    I wasn't trying to prove a point, I was simply answering a question.
    I can see where he might have been headed by asking that question though.

    The 4890 is what, ~10-15% behind a GTX285 with both at stock, on average, in "normal" games and apps?
    Yet with OCCT, the 4890 is ~56% faster, using the 83FPS vs 53FPS numbers.
    However this app is programmed, it stresses every part of the chip to the max, or at least quite a bit more than other "normal" apps/games.

    Also none of the numbers, i.e. FPP, seem to add up.
    4890@850mhz= 1.36Tflops
    GTX285@1476mhz= 1.06Tflops(MADD+MUL), .708Tflops(MADD)

    1.36/1.06= 1.28x greater (1/2 the FPS difference)
    1.36/.708= 1.92x greater (amusing since it doesn't mean anything, but = largon's power draw increase)

    Simply using max theoretical FPP is not an accurate way to estimate performance, but in this case it seems to be related. Since this app has been said to use simple shaders to completely load the ALUs, you could come to the conclusion that the MUL is only being used ~45% of the time.

    Basically, the way this app is programmed, it is able to use the 4890's architecture to the max and seems not to fully load Nvidia cards, per se.

    Edit- Anyone know what the stock voltage for a GTX280 is under load? 1.3-1.4v?
    I just ran the GPU test on a pair of GTX 280s SLI for about 10 mins (clocks 712/1512/1242 - voltage 1.185). Full screen 1920x1200 settings. Max current draw was just below 30A on both cards. Max frames per second 144. Both cards are water cooled. Temps of GPU0 max 56C and GPU1 53C. Ambient temp was ~23.8C. During gaming temps usually top out at 15C above ambient.

  6. #6
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by cx-ray View Post
    I just ran the GPU test on a pair of GTX 280s SLI for about 10 mins (clocks 712/1512/1242 - voltage 1.185). Full screen 1920x1200 settings. Max current draw was just below 30A on both cards. Max frames per second 144. Both cards are water cooled. Temps of GPU0 max 56C and GPU1 53C. Ambient temp was ~23.8C. During gaming temps usually top out at 15C above ambient.
    Keep in mind different settings should be used in order to fully load Nvidia chips with this test. They were posted earlier in this thread. Also was VSync off?

  7. #7
    Visitor
    Join Date
    May 2008
    Posts
    676
    Quote Originally Posted by zalbard View Post
    Keep in mind different settings should be used in order to fully load Nvidia chips with this test. They were posted earlier in this thread. Also was VSync off?
    Not sure where to find the VSync setting, or even if it matters, as the FPS is already higher than the standard 60Hz of an LCD. So it doesn't look like it's sync-locked.

    New settings: I just tried VSync forced off in the Global Settings tab of my driver (didn't see a difference).

    13:27 min. with Shader Complexity settings 3, FPS ~83, GPU temps 56C and 53C, current draw still about 30A. Ambient temp ~23.4C.

    I tried to search for other settings in this thread. The above was the closest I could find for Nvidia cards. If you know something better let me know and I'll test again.
