
Thread: OCCT 3.1.0 shows HD4870/4890 design flaw - they can't handle the new GPU test!


  1. #1
    Xtreme Member
    Join Date
    Jun 2003
    Location
    Italy
    Posts
    351
    Which of the following seems most likely to you:
    1) The test uses close to 100% of the transistors in the die. In games this doesn't happen because they aren't optimized for a specific GPU (as they are on consoles).

    2) The test doesn't carry any CPU workload, so the GPU doesn't have to wait for geometry data and has no time to idle between operations. In games this doesn't happen because there's always a CPU workload.

    3) Both.
    3570K @ 4.5Ghz | Gigabyte GA-Z77-D3H | 7970 Ghz 1100/6000 | 256GB Samsung 830 SSD (Win 7) | 256GB Samsung 840 Pro SSD (OSX 10.8.3) | 16GB Vengeance 1600 | 24'' Dell U2412M | Corsair Carbide 300R

  2. #2
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by Tuvok-LuR- View Post
    Which of the following seems most likely to you:
    1) The test uses close to 100% of the transistors in the die. In games this doesn't happen because they aren't optimized for a specific GPU (as they are on consoles).

    2) The test doesn't carry any CPU workload, so the GPU doesn't have to wait for geometry data and has no time to idle between operations. In games this doesn't happen because there's always a CPU workload.

    3) Both.
    After seeing the same cause behind crashes for thousands of people in an MMO, I don't believe any of the above.

    And considering that driver profiles for games are filled with idle states, it's pretty clear the GPU can't handle certain games normally either. Hence the trick of renaming .exe files.
    Crunching for Comrades and the Common good of the People.

  3. #3
    Worlds Fastest F5
    Join Date
    Aug 2006
    Location
    Room 101, Ministry of Truth
    Posts
    1,615
    Guys, trying to ignore the thread-crappers and some others engaged in a rather dull seven-page-long pissing contest: all the OP wanted was to highlight a problem he encountered while testing a new feature of his stress-testing software, which he posted here in the hope of getting more confirmation (or not) that the problem exists.

    The obvious conclusion thus far is that some 4870/4890s are power-starved by their insufficient VRMs, and/or there may be some kind of protection mechanism that kicks in above a certain current-draw threshold when they are heavily stressed in this way.

    A useful next step would be to collate the results reported so far and compile a list of exactly which cards, manufacturers and editions are affected....
    Last edited by Biker; 05-20-2009 at 02:26 AM.
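
    The VRM-starvation hypothesis is easy to sanity-check with back-of-the-envelope arithmetic. The wattage, core voltage, and phase count below are illustrative assumptions for the sake of the calculation, not measured values for any specific card:

    ```c
    #include <stdio.h>

    int main(void) {
        /* Hypothetical figures for illustration only, not measurements.
           Assume the GPU core pulls ~150 W under a worst-case shader load,
           delivered by the VRM at a core voltage of ~1.26 V, and assume a
           3-phase core VRM design. */
        const double core_power_w = 150.0;
        const double vcore_v      = 1.26;
        const int    vrm_phases   = 3;

        double total_current_a = core_power_w / vcore_v;   /* I = P / V */
        double per_phase_a     = total_current_a / vrm_phases;

        printf("total core current: %.1f A\n", total_current_a);
        printf("per-phase current:  %.1f A\n", per_phase_a);
        return 0;
    }
    ```

    Roughly 120 A total, or about 40 A per phase under these assumptions, which makes it plausible that an unusually heavy shader load could trip a fixed over-current threshold that normal games never reach.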
    X5670 B1 @175x24=4.2GHz @1.24v LLC on
    Rampage III Extreme Bios 0003
    G.skill Eco @1600 (7-7-7-20 1T) @1.4v
    EVGA GTX 580 1.5GB
    Auzen X-FI Prelude
    Seasonic X-650 PSU
    Intel X25-E SLC RAID 0
    Samsung F3 1TB
    Corsair H70 with dual 1600 rpm fan
    Corsair 800D
    3008WFP A00



  4. #4
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Tuvok-LuR- View Post
    Which of the following seems most likely to you:
    1) The test uses close to 100% of the transistors in the die. In games this doesn't happen because they aren't optimized for a specific GPU (as they are on consoles).

    2) The test doesn't carry any CPU workload, so the GPU doesn't have to wait for geometry data and has no time to idle between operations. In games this doesn't happen because there's always a CPU workload.

    3) Both.
    To me? 2. I'd almost have said "both", as my shader code has been kept simple so it can be optimized easily on all architectures. But since I haven't written specific code for specific GPUs, I won't claim that of my code. It's generic code that runs very well on every GPU out there right now. The very same code runs on every GPU, so I can't say it is optimized for specific GPUs.

    EDIT: mind you, shaders in games use a wider variety of functions than I do, so 1 is unlikely. That's usually why they generate less stress on the card. But what if they found a graphical use for the pattern I came up with? They'd run into the same issue. And there... crash.

    It's hard to switch from work to shader talk
    Last edited by Tetedeiench; 05-20-2009 at 02:01 AM.
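
    The workload shape Tetedeiench describes, simple generic arithmetic that every GPU runs flat-out, can be sketched on the CPU side as a dependent multiply-add chain with no memory traffic. This is a hypothetical illustration of the pattern, not OCCT's actual shader code:

    ```c
    #include <stdio.h>

    /* CPU-side sketch of what a "power virus" style shader does: a long
       dependent chain of multiply-adds with no texture fetches and no
       memory traffic, so the ALUs are never left waiting.  Purely
       illustrative; OCCT's real shader is not public. */
    static float mad_chain(float x, long iters) {
        float a = x;
        for (long i = 0; i < iters; i++)
            a = a * 0.5f + 0.5f;   /* one multiply-add per step; converges to 1.0 */
        return a;
    }

    int main(void) {
        printf("chain result: %f\n", mad_chain(0.001f, 1000000L));
        return 0;
    }
    ```

    A GPU running millions of such chains in parallel keeps every shader unit issuing an op each cycle, which is exactly the sustained current draw games rarely produce.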

  5. #5
    Xtreme Legend
    Join Date
    Jan 2003
    Location
    Stuttgart, Germany
    Posts
    929
    good job on the stress test. don't listen to the naysayers here. expect your app to be castrated just like FurMark in the next Catalyst releases

    Quote Originally Posted by Tuvok-LuR- View Post
    1) The test uses close to 100% of the transistor in the die.
    i doubt it is anywhere near 100% (UVD, tessellator, ROPs, MC, TMUs). don't spill out random claims

    Quote Originally Posted by Tetedeiench View Post
    If it were an overheating crash, you would see artefacts, etc.
    no

    More: the crash wouldn't be THAT immediate: it would take at least a few seconds. Here, it's instantaneous.
    yes. my guess is VRM OCP (over-current protection) as well. NVIDIA delayed a launch because of a similar issue

    Quote Originally Posted by AMD noob moderator
    then increase the fan speed to cool the VDDCs. Obviously underclocking accomplishes a similar goal, as they don't get as hot.
    fail. clearly has no idea what he's talking about
    Last edited by W1zzard; 05-20-2009 at 04:40 AM.
