
Thread: OCCT 3.1.0 shows HD4870/4890 design flaw - they can't handle the new GPU test !

  1. #201
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Solus Corvus View Post
    I hope you don't think I was flaming you with my source code comment. I am just saying that we can't necessarily rule it out either. Even the best programmers make mistakes and it helps to have extra eyes looking at it.

    If it only happens on GDDR5 cards, what about the 4870X2 or 4770? I would test it on my X2 except that I sold it. I'll be sure to test it on the 4890 when I get it - it better pass at 1.45V and 1100MHz core.

    What kind of water blocks were used? Full cover?
    I don't rule it out completely, but at this point the possibility has become very, very unlikely. And truly, I only use very basic shader instructions and DirectX9 functions. Nothing fancy.
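
    To give a concrete idea of what "very basic shader instructions" can look like, here is a minimal, hypothetical sketch. It is not the actual OCCT shader (that code isn't published); it is just an ALU-only ps_3_0 pixel shader of the same general flavour, embedded as a string and compiled standalone with the stock D3DX helper from the DirectX SDK. The entry point name, loop count and constants are invented for illustration.

    Code:
    // Hypothetical sketch only, not the OCCT shader. Compiles an ALU-only
    // DX9 pixel shader to bytecode; no D3D device is needed for this step.
    // Build (DirectX SDK June 2010): cl shader_sketch.cpp d3dx9.lib
    #include <cstdio>
    #include <cstring>
    #include <d3dx9.h>

    // A "nothing fancy" ps_3_0 shader: a long dependent chain of multiplies,
    // adds and frac() calls, no texture fetches, no branches. This is the kind
    // of load that keeps the shader ALUs busy on every clock.
    static const char* kPixelShader =
        "float4 main(float2 uv : TEXCOORD0) : COLOR0            \n"
        "{                                                       \n"
        "    float4 acc = float4(uv, 0.25f, 1.0f);               \n"
        "    for (int i = 0; i < 128; ++i)                       \n"
        "        acc = frac(acc * 1.0001f + acc.yzwx * 0.37f);   \n"
        "    return acc;                                         \n"
        "}                                                       \n";

    int main()
    {
        ID3DXBuffer* code = NULL;
        ID3DXBuffer* errors = NULL;
        // D3DXCompileShader produces shader bytecode from HLSL source.
        HRESULT hr = D3DXCompileShader(kPixelShader, (UINT)strlen(kPixelShader),
                                       NULL, NULL, "main", "ps_3_0", 0,
                                       &code, &errors, NULL);
        if (FAILED(hr)) {
            printf("compile failed: %s\n",
                   errors ? (const char*)errors->GetBufferPointer() : "unknown");
            return 1;
        }
        printf("ps_3_0 bytecode size: %lu bytes\n",
               (unsigned long)code->GetBufferSize());
        code->Release();
        return 0;
    }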

    I don't have any reports on the HD4770 card at the moment. The 4870X2 cards encountered the same problem, but it happened on one card only, and in CrossFire mode, which multiplies the chances of hitting the problem by 3 or 4.

    I don't think it was a full cover. The water cooling didn't improve the VRM cooling, but we increased the VRM cooling by adding fans, without success.

  2. #202
    Xtreme Member
    Join Date
    Jun 2003
    Location
    Italy
    Posts
    351
    Well, the 4770 has GDDR5 and castrated VRMs; it could be interesting to test whether the 40nm process can compensate for this.
    3570K @ 4.5Ghz | Gigabyte GA-Z77-D3H | 7970 Ghz 1100/6000 | 256GB Samsung 830 SSD (Win 7) | 256GB Samsung 840 Pro SSD (OSX 10.8.3) | 16GB Vengeance 1600 | 24'' Dell U2412M | Corsair Carbide 300R

  3. #203
    Xtreme Member
    Join Date
    Mar 2008
    Posts
    170
    Wow, guy posts and asks for help, community shoots him down in flames and calls his stress test a power virus. Well done members!!

    Chalk up another card failure: a Sapphire 4870.
    Quote Originally Posted by ryba View Post
    I don't carre about PCMark - it's for gays with moustache
    A Stranger's thoughts...

  4. #204
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by iddqd View Post
    There are also certain stability tests that can permanently damage some Intel CPUs, and nothing really stresses them that much in real-world use.

    But one could argue that doing everything you possibly could at once is not a meaningful mode of operation.
    That's pure BS.

    There is no way you can damage a VIA/AMD/Intel CPU with any software/power virus.

    Because unlike GPUs, CPU and VRM designs are made to handle anything. It's all about GPU designers going the cheapskate way.

    Quote Originally Posted by Nullack View Post
    How can a power virus be considered valid software instructions?
    Quote Originally Posted by Solus Corvus View Post
    How do we know that the software is giving it valid instructions without looking at the source code?
    If it can run it, it's valid. And if the GPU/card doesn't have any protection against it, it means the GPU/card maker FAILED.
    Last edited by Shintai; 05-20-2009 at 12:42 AM.
    Crunching for Comrades and the Common good of the People.

  5. #205
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    Germany
    Posts
    1,592
    Quote Originally Posted by Tetedeiench View Post
    the watercooling didn't increase the vrm cooling, but we increased the VRM cooling by adding fans, without success.
    That's what I was saying: if the heatsink contact is bad, it doesn't matter how many fans you slap on it, it'll still crash. I'm trying to be constructive here.
    The XS Folding@Home team needs your help! Join us and help fight diseases with your CPU and GPU!!


  6. #206
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by Solus Corvus View Post
    I hope you don't think I was flaming you with my source code comment. I am just saying that we can't necessarily rule it out either. Even the best programmers make mistakes and it helps to have extra eyes looking at it.

    If it only happens on GDDR5 cards, what about the 4870X2 or 4770? I would test it on my X2 except that I sold it. I'll be sure to test it on the 4890 when I get it - it better pass at 1.45V and 1100MHz core.

    What kind of water blocks were used? Full cover?
    This is a common problem with the XSPC ones. I know this for sure, and it's actually one of the reasons, besides just putting the project aside, that I haven't installed mine yet.

    If this were an NV card, no one would be insulting the OP; in fact, a lot of people would cheer the OP on as a hero who threw another spear at the beast known as NVIDIA.
    Last edited by tajoh111; 05-20-2009 at 01:01 AM.

  7. #207
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Two pictures of the VRMs:

    The one without the bug:
    http://mangafranceworld.free.fr/Divers/DSC00550.JPG

    The one with the bug:
    http://mangafranceworld.free.fr/Divers/DSC00551.JPG

    Interesting, isn't it?

  8. #208
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Ch@pS View Post
    Wow, guy posts and asks for help, community shoots him down in flames and calls his stress test a power virus. Well done members!!

    Chalk up another card failure, a sapphire 4870.
    Crying wolf by calling this a design flaw is asking for help?
    By definition it is a power virus, so there is nothing wrong with calling it that.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  9. #209
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by LordEC911 View Post
    Crying wolf by calling this a design flaw is asking for help?
    By definition it is a power virus, so there is nothing wrong with calling it that.
    Well, how do you explain test 7 of my report then? An HD4870 works fine, you switch the card, poof, same symptoms as the whole bunch of cards replaced everywhere.

    I know you do want the cards to be bug-proof. But please, let's be constructive. My goal here is to narrow down the cause.

    The tests we have done already narrow the problem down to being most probably a problem with the cards, and not the software (reread the summary on the previous page to convince yourself). We're trying to get it pinned down now: what is the exact cause of this failure?

  10. #210
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by Tetedeiench View Post
    Two pictures of the VRMs:

    The one without the bug:
    http://mangafranceworld.free.fr/Divers/DSC00550.JPG

    The one with the bug:
    http://mangafranceworld.free.fr/Divers/DSC00551.JPG

    Interesting, isn't it?
    So it's another case of "going cheap". That kinda takes you back to the review samples of the HD4770 vs retail. I wonder how many other cards are like that. It's tempting to wonder about NVIDIA too, since they "block" the view.
    Crunching for Comrades and the Common good of the People.

  11. #211
    Xtreme Member
    Join Date
    Jun 2003
    Location
    Italy
    Posts
    351
    Which of the following seems most likely to you:
    1) The test uses close to 100% of the transistors in the die. In games it doesn't happen because shaders aren't optimized for a specific GPU (as they are on consoles).

    2) The test doesn't carry any CPU workload, so the GPU doesn't have to wait for geometry data and has no idle time between operations. In games it doesn't happen because there's always a CPU workload (see the sketch just after this list).

    3) Both.
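
    Hypothesis 2 is easy to picture in code. The sketch below is hypothetical (it is not the OCCT source): a bare D3D9 loop where vsync is disabled and nothing is recalculated on the CPU between frames, so draw calls are submitted back to back and the GPU never gets an idle gap waiting for geometry. A real stress test would plug a heavy pixel shader into this loop; only the loop structure matters here, and the window size, class name and vertex colours are arbitrary.

    Code:
    // Hypothetical illustration of "no CPU workload" (hypothesis 2), not OCCT code.
    // A D3D9 loop with vsync off and zero per-frame CPU work: the same static
    // full-screen quad is re-issued as fast as Present() returns, so the GPU
    // never sits idle waiting for the CPU to prepare geometry.
    // Build: cl nocpu_sketch.cpp d3d9.lib user32.lib
    #include <windows.h>
    #include <d3d9.h>
    #pragma comment(lib, "d3d9.lib")

    struct Vertex { float x, y, z, rhw; DWORD color; };

    LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l)
    {
        if (m == WM_DESTROY) { PostQuitMessage(0); return 0; }
        return DefWindowProcA(h, m, w, l);
    }

    int main()
    {
        // Plain window to render into (size is arbitrary for the sketch).
        WNDCLASSA wc = {};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = GetModuleHandleA(NULL);
        wc.lpszClassName = "gpu_feed_sketch";
        RegisterClassA(&wc);
        HWND hwnd = CreateWindowA("gpu_feed_sketch", "no-idle loop sketch",
                                  WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                                  100, 100, 800, 600, NULL, NULL, wc.hInstance, NULL);

        // Device with vsync disabled: Present() returns as soon as the frame is queued.
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed             = TRUE;
        pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;
        pp.BackBufferFormat     = D3DFMT_UNKNOWN;
        pp.hDeviceWindow        = hwnd;
        pp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;   // no vsync wait
        IDirect3DDevice9* dev = NULL;
        if (FAILED(d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                                     D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &dev)))
            return 1;

        // One static, pre-transformed full-screen quad, built once. Nothing in it
        // ever changes, so the CPU has no per-frame geometry work at all.
        Vertex quad[4] = {
            {   0.0f,   0.0f, 0.0f, 1.0f, 0xFF400000 },
            { 800.0f,   0.0f, 0.0f, 1.0f, 0xFF004000 },
            {   0.0f, 600.0f, 0.0f, 1.0f, 0xFF000040 },
            { 800.0f, 600.0f, 0.0f, 1.0f, 0xFF404040 },
        };
        dev->SetFVF(D3DFVF_XYZRHW | D3DFVF_DIFFUSE);

        // Render loop: re-issue the same draw call back to back, forever.
        for (;;) {
            MSG msg;
            while (PeekMessageA(&msg, NULL, 0, 0, PM_REMOVE)) {
                if (msg.message == WM_QUIT) return 0;
                TranslateMessage(&msg);
                DispatchMessageA(&msg);
            }
            dev->BeginScene();
            dev->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(Vertex));
            dev->EndScene();
            dev->Present(NULL, NULL, NULL, NULL);
        }
    }

    With vsync on, Present() would block until the next monitor refresh and give the card a small breather every frame; removing that wait, and all CPU-side work, is what lets a synthetic test load the GPU harder than a game that is partly CPU-bound.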
    3570K @ 4.5Ghz | Gigabyte GA-Z77-D3H | 7970 Ghz 1100/6000 | 256GB Samsung 830 SSD (Win 7) | 256GB Samsung 840 Pro SSD (OSX 10.8.3) | 16GB Vengeance 1600 | 24'' Dell U2412M | Corsair Carbide 300R

  12. #212
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    Quote Originally Posted by Tuvok-LuR- View Post
    Which of the following seems most likely to you:
    1) The test uses close to 100% of the transistors in the die. In games it doesn't happen because shaders aren't optimized for a specific GPU (as they are on consoles).

    2) The test doesn't carry any CPU workload, so the GPU doesn't have to wait for geometry data and has no idle time between operations. In games it doesn't happen because there's always a CPU workload.

    3) Both.
    After seeing the same cause affect thousands of people in an MMO, I don't believe in any of the above.

    And considering driver profiles for games are filled with idle states, it's pretty clear that the GPU can't handle certain games normally either. Hence the renaming of .exe files.
    Crunching for Comrades and the Common good of the People.

  13. #213
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Tuvok-LuR- View Post
    Which of the following seems most likely to you:
    1) The test uses close to 100% of the transistors in the die. In games it doesn't happen because shaders aren't optimized for a specific GPU (as they are on consoles).

    2) The test doesn't carry any CPU workload, so the GPU doesn't have to wait for geometry data and has no idle time between operations. In games it doesn't happen because there's always a CPU workload.

    3) Both.
    To me? 2. I'd almost have said "both", as my shader code has been kept simple so it can easily be optimized for all architectures. But as I haven't written specific code for specific GPUs, I won't say that of my code. It's generic code that functions very well on all GPUs out there right now; the very same code runs on every GPU, so I can't say it is optimized for specific GPUs.

    EDIT: mind you, shaders in games use a wider variety of functions than I do, so 1 is unlikely. That's usually why they generate less stress on the card. But what if they found a graphical use for the kind of shader I came up with? They'd run into the same issue. And there... crash.
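
    To make that contrast concrete, here is a hedged, side-by-side sketch. Neither shader below is real OCCT or game code; they only illustrate the difference in instruction mix. The first body is back-to-back arithmetic with every instruction depending on the previous one; the second mixes in the texture fetches a typical game pixel shader does, which leaves the ALUs waiting on memory for part of the time. The sampler names, light vector and loop count are invented for the example, and the same D3DXCompileShader call from the earlier sketch is reused.

    Code:
    // Two hypothetical ps_3_0 pixel shader bodies, compiled side by side.
    // Neither is real OCCT or game code; they only illustrate the difference
    // in instruction mix (pure ALU vs. ALU interleaved with texture fetches).
    // Build (DirectX SDK June 2010): cl mix_sketch.cpp d3dx9.lib
    #include <cstdio>
    #include <cstring>
    #include <d3dx9.h>

    // Stress-test style: arithmetic only, every instruction depends on the last.
    static const char* kAluOnly =
        "float4 main(float2 uv : TEXCOORD0) : COLOR0 {            \n"
        "    float4 a = float4(uv, 0.5f, 1.0f);                   \n"
        "    for (int i = 0; i < 100; ++i) a = frac(a * 1.001f + a.yzwx);\n"
        "    return a;                                            \n"
        "}                                                        \n";

    // Game style: texture fetches interleaved with the math. While a fetch is
    // outstanding, the ALUs have nothing to chew on for that pixel.
    static const char* kGameLike =
        "sampler2D diffuseMap;                                    \n"
        "sampler2D normalMap;                                     \n"
        "float4 main(float2 uv : TEXCOORD0) : COLOR0 {            \n"
        "    float4 albedo = tex2D(diffuseMap, uv);               \n"
        "    float3 n = tex2D(normalMap, uv).xyz * 2.0f - 1.0f;   \n"
        "    float  light = saturate(dot(n, normalize(float3(0.3f, 0.8f, 0.5f))));\n"
        "    return albedo * light;                               \n"
        "}                                                        \n";

    static void compileAndReport(const char* name, const char* src)
    {
        ID3DXBuffer* code = NULL;
        ID3DXBuffer* errors = NULL;
        HRESULT hr = D3DXCompileShader(src, (UINT)strlen(src), NULL, NULL,
                                       "main", "ps_3_0", 0, &code, &errors, NULL);
        if (FAILED(hr)) {
            printf("%s: compile failed: %s\n", name,
                   errors ? (const char*)errors->GetBufferPointer() : "unknown");
            return;
        }
        printf("%s: %lu bytes of ps_3_0 bytecode\n", name,
               (unsigned long)code->GetBufferSize());
        code->Release();
    }

    int main()
    {
        compileAndReport("ALU-only (stress-test style)", kAluOnly);
        compileAndReport("texture-heavy (game style)",   kGameLike);
        return 0;
    }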

    It's hard to switch from work to Shader talking
    Last edited by Tetedeiench; 05-20-2009 at 02:01 AM.

  14. #214
    Xtreme Guru
    Join Date
    Jan 2005
    Location
    Tre, Suomi Finland
    Posts
    3,858
    My reference HD4890 crashes at stock 850MHz/975MHz with the recommended settings (1680x1050, fullscreen, shader complexity=3).
    It's stable at the stock core speed of 850MHz if the memory is also at 850MHz.

    *continues testing*
    You were not supposed to see this.

  15. #215
    Worlds Fastest F5
    Join Date
    Aug 2006
    Location
    Room 101, Ministry of Truth
    Posts
    1,615
    Guys, trying to ignore the thread crappers and some others engaged in a rather dull seven-page-long pissing contest: all the OP wanted was to highlight a problem he had encountered whilst testing a new feature of his stress-testing software, which he posted here in the hope of getting more confirmation (or not) that the problem exists.

    The obvious conclusion thus far is that some 4870/4890s are power-starved by their insufficient VRMs, and/or there might be some kind of mechanism that kicks in at a certain high current-draw threshold when they are highly stressed in this way.

    A useful step forward at this point would be to collate the results that have been reported thus far and compile a list of which precise cards and manufacturers + editions are affected....
    Last edited by Biker; 05-20-2009 at 02:26 AM.
    X5670 B1 @175x24=4.2GHz @1.24v LLC on
    Rampage III Extreme Bios 0003
    G.skill Eco @1600 (7-7-7-20 1T) @1.4v
    EVGA GTX 580 1.5GB
    Auzen X-FI Prelude
    Seasonic X-650 PSU
    Intel X25-E SLC RAID 0
    Samsung F3 1TB
    Corsair H70 with dual 1600 rpm fan
    Corsair 800D
    3008WFP A00



  16. #216
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by largon View Post
    My reference HD4890 crashes at stock 850MHz/975MHz with the recommended settings (1680x1050, fullscreen, shader complexity=3).
    It's stable at the stock core speed of 850MHz if the memory is also at 850MHz.

    *continues testing*
    Thanks for your test. If you keep it at 850/850 and push the VGPU up... does it crash?

  17. #217
    Xtreme Guru
    Join Date
    Jan 2005
    Location
    Tre, Suomi Finland
    Posts
    3,858
    Stock 1.3125v doesn't crash at 850/850. Bump to 1.35v made it crash.

    .
    .
    .

    Has anyone tried this with nV cards?
    I'm interested in what FPS GTX200-series cards manage at the recommended settings and 1680x1050. I suspect HD4800s push a lot higher frame rates than GTX200-series cards, similar to FurMark where HD4800s dominate...
    Last edited by largon; 05-20-2009 at 02:29 AM.
    You were not supposed to see this.

  18. #218
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by largon View Post
    Stock 1.3125v doesn't crash at 850/850. Bump to 1.35v made it crash.

    .
    .
    .

    Has anyone tried this with nV cards?
    I'm interested in what FPS GTX200-series cards manage at the recommended settings and 1680x1050. I suspect HD4800s push a lot higher frame rates than GTX200-series cards, similar to FurMark where HD4800s dominate...
    This is the very same behaviour as what we noticed on the French board.

    I am developing on a GTX285 at home, so I can give you that info. But with the time difference (I'm living in France), it will be in about 10 hours. I'll do that at shader complexity 3, same resolution as you (I do have a 22" LCD).

    Many thanks for your confirmation though. That's starting to rule out an isolated problem.

    That's the kind of help I was waiting for.

  19. #219
    Visitor
    Join Date
    May 2008
    Posts
    676
    Just reporting that the XFX 4850 I tested shuts down too in both OCCT and Furmark (no idea why but I can consistently reproduce it). It functions without trouble however with Freestone Video Card Stability Test and with games.

  20. #220
    Xtreme Addict
    Join Date
    Apr 2008
    Location
    Lansing, MI / London / Stinkaypore
    Posts
    1,788
    Quote Originally Posted by cx-ray View Post
    Just reporting that the XFX 4850 I tested shuts down too in both OCCT and Furmark (no idea why but I can consistently reproduce it). It functions without trouble however with Freestone Video Card Stability Test and with games.


    Hmm, this one's expected. The XFX boards are really, really cheapo.
    Quote Originally Posted by radaja View Post
    so are they launching BD soon or a comic book?

  21. #221
    Xtreme Mentor
    Join Date
    Nov 2006
    Location
    Spain, EU
    Posts
    2,949
    My 4870 (reference design, Sapphire) craps out even at 750/800. 750/700 is fine. Will test more later, including voltage and GPU clocks. Testing with 1680x1050@120Hz.

    Quote Originally Posted by zerazax View Post
    Oh really? He proved two cards? How many other members on this board just posted they encountered no such problem?

    All sorts of cards will have varying tolerances, and unless the OP has more than a SMALL SAMPLE SIZE, his "conclusion" is irrelevant

    Unless you've got a hundred cards that can all REPRODUCE this same issue, you're playing with small sample sizes
    Have you read the thread? Your comment suggests not. Read it again.

    People with cards burned by the load of FurMark were also a "small sample size". Not even a hundred cards. It was sooo small that even ATI itself changed the drivers because of it. Go figure.
    Last edited by STaRGaZeR; 05-20-2009 at 02:50 AM.
    Friends shouldn't let friends use Windows 7 until Microsoft fixes Windows Explorer (link)


    Quote Originally Posted by PerryR, on John Fruehe (JF-AMD) View Post
    Pretty much. Plus, he's here voluntarily.

  22. #222
    Xtreme Guru
    Join Date
    Jan 2005
    Location
    Tre, Suomi Finland
    Posts
    3,858
    Hehe,
    OCCT GPU test draws 360W from the wall whereas Furmark stays at 310W...


    edit:
    Crysis Warhead doesn't go above 270W wall draw.
    Last edited by largon; 05-20-2009 at 03:52 AM.
    You were not supposed to see this.

  23. #223
    Worlds Fastest F5
    Join Date
    Aug 2006
    Location
    Room 101, Ministry of Truth
    Posts
    1,615
    Quote Originally Posted by largon View Post
    Hehe,
    OCCT GPU test draws 360W from the wall whereas Furmark stays at 310W...
    OCCT GPU test = not good for the environment
    X5670 B1 @175x24=4.2GHz @1.24v LLC on
    Rampage III Extreme Bios 0003
    G.skill Eco @1600 (7-7-7-20 1T) @1.4v
    EVGA GTX 580 1.5GB
    Auzen X-FI Prelude
    Seasonic X-650 PSU
    Intel X25-E SLC RAID 0
    Samsung F3 1TB
    Corsair H70 with dual 1600 rpm fan
    Corsair 800D
    3008WFP A00



  24. #224
    Xtreme Member
    Join Date
    Jun 2005
    Location
    Bulgaria, Varna
    Posts
    447
    I had an HD4870 1GB from GB (reference design) which couldn't manage to pass FurMark even at stock speeds and moderate temps, and this was in benchmark mode, not the burn-in test!

  25. #225
    Xtreme Mentor
    Join Date
    Apr 2003
    Location
    Ankara Turkey
    Posts
    2,631
    nice work Tetedeiench


    When i'm being paid i always do my job through.
