Page 2 of 30 FirstFirst 1234512 ... LastLast
Results 26 to 50 of 730

Thread: OCCT 3.1.0 shows HD4870/4890 design flaw - they can't handle the new GPU test !

  1. #26
    Xtreme Addict
    Join Date
    Nov 2003
    Location
    NYC
    Posts
    1,592
    I get crashing on a non-overclocked HD4890 (with an older game) until I underclock pretty heavily :X There could be some meat to this

  2. #27
    Xtreme Addict
    Join Date
    Apr 2008
    Location
    Texas
    Posts
    1,663
    With motherboards like mine, you can increase the power going to each individual PCIe slot. Stock mobo settings are 75w. I have mine at 150w as recommended by other Asus M3A79 and M4A79 users with HD4870/4890s. We use this for heavy overclocking. It allows higher core and memory clocks as well. Most guys have it bumped to 200w.

    Here's a link to what I'm talking about: http://www.xtremesystems.org/forums/...201154&page=55

    EDIT: Purpose being, some motherboards don't supply an ample amount of power to the slot with stock BIOS settings.
    Last edited by Mechromancer; 05-19-2009 at 12:20 PM.
    Core i7 2600K@4.6Ghz| 16GB G.Skill@2133Mhz 9-11-10-28-38 1.65v| ASUS P8Z77-V PRO | Corsair 750i PSU | ASUS GTX 980 OC | Xonar DSX | Samsung 840 Pro 128GB |A bunch of HDDs and terabytes | Oculus Rift w/ touch | ASUS 24" 144Hz G-sync monitor


  3. #28
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Okay...

    So, you're buying a card, you cannot use it to 100% of its own power, and that's not an issue? I fail to see the point here, guys.

    GPU:3D is just like any other 3D application. It uses functions provided by DirectX, nothing else. If the problem arose only with overclocked cards, well, fine. But the problem is arising with cards running at [b]stock[/b] speed.

    Imagine the problem arising with CPUs. You buy the latest Intel CPU. Cool. You want to encode your movie, and BAM, crash. You're like, what? Yes, you have to SLOW DOWN the encoding for the process to complete. Otherwise, your CPU fails.

    The problem is not temperature-related, again. It is related to a limit of some sort that we reach, a hardware limit. We opened the case, added fans, and whoops, that did nothing.

    If one day a company decides to use very, very optimised shaders for their game, that might crash the card. Why is that? Because the card is using too much power.

    What I can say right now with my card, until proven wrong, is this:
    I have proven that HD4870 and HD4890 cards following the reference design cannot meet their own specification. Again, one day this limit will be reached by another app. If I could do it, another one will be able to do it.

  4. #29
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    ^
    No, you fail to see that these tests don't reflect reality. Even Crysis doesn't load the card to that point; AFAIR it was ~60 A peak, and all other games load it far less.

  5. #30
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Quote Originally Posted by Tetedeiench View Post
    Okay...

    So, you're buying a card, you cannot use it to 100% of its own power, and that's not an issue? I fail to see the point here, guys.

    GPU:3D is just like any other 3D application. It uses functions provided by DirectX, nothing else. If the problem arose only with overclocked cards, well, fine. But the problem is arising with cards running at [b]stock[/b] speed.

    Imagine the problem arising with CPUs. You buy the latest Intel CPU. Cool. You want to encode your movie, and BAM, crash. You're like, what? Yes, you have to SLOW DOWN the encoding for the process to complete. Otherwise, your CPU fails.

    The problem is not temperature-related, again. It is related to a limit of some sort that we reach, a hardware limit. We opened the case, added fans, and whoops, that did nothing.

    If one day a company decides to use very, very optimised shaders for their game, that might crash the card. Why is that? Because the card is using too much power.

    What I can say right now with my card, until proven wrong, is this:
    I have proven that HD4870 and HD4890 cards following the reference design cannot meet their own specification. Again, one day this limit will be reached by another app. If I could do it, another one will be able to do it.
    The reason why people are not agreeing with you is:
    1. Your test methods are open to question.
    2. You have not verified any games that show this problem.
    among other specific concerns...

  6. #31
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Eastcoasthandle View Post
    The reason why people are not agreeing with you is:
    1. Your test methods are open to question.
    2. You have not verified any games that show this problem.
    among other specific concerns...

    Well, the problem is:
    1. I don't have the means to use another testing method. I'm a single developer. The only ATI video card I have access to is a Mobility HD2600. My main computer is equipped with a GTX285, and I only have one computer. Imagine me as a guy just like yourself, spending his time developing OCCT.

    I tried to get more professional testing done. So far, no good. People are just plain ignoring my emails. I thought that having more than 1,500,000 downloads would be enough to get at least some listening, or a professional test, when you uncover something in your own field (which is stability testing). Seems not. I have enough proof to warrant at least a "duh, let's check, he may be right". I mean, we had this happen on a PC with a ToughPower 1500W, so I doubt the PSU was at fault.

    2. No game currently exists that triggers this issue. That doesn't mean such a game will never exist. And again, if one app can trigger this problem, any other app can.

    If you buy a car that boasts it can reach 280 mph on a track, I doubt you'll be happy when you learn it can only reach 200 mph on said track, even if you're only allowed 100 mph in real life... That's what we're talking about.

    I am actually longing for professional testing proving me right or wrong. I'm almost sure I'm right at the moment. The problem is, I've gone as far as I can with my limited testing means.

    You want to help me? Prove me wrong. But I'm sure your HD4870/4890 is going to black-screen with my test at the moment, if you do have one... no?
    Last edited by Tetedeiench; 05-19-2009 at 12:10 PM.

  7. #32
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    This is an old issue, and it also applies to nVidia. Changing/renaming game exe files means the driver profiles don't apply and the card stays in idle states. Not long ago, with EvE Online for example, a lot of nVidia cards and some AMD cards overheated when in station, simply due to missing idle states after some changed shader code.

    You can thank nVidia/AMD for poor designs and driver hacks. It's the same reason the drivers are around 100MB these days instead of 20.
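The exe-name-keyed profile mechanism described above can be illustrated with a toy sketch. Everything below is hypothetical (profile names, settings, and structure are not the actual Catalyst/ForceWare profile format): the point is only that the driver looks up behaviour by executable name, so a renamed exe falls back to defaults and any throttling or idle logic in its profile never applies.

```python
# Toy model of exe-name-based driver profiles (all names/settings hypothetical).
PROFILES = {
    "furmark.exe": {"power_throttle": True,  "force_idle_clocks": False},
    "eve.exe":     {"power_throttle": False, "force_idle_clocks": True},
}
DEFAULT = {"power_throttle": False, "force_idle_clocks": False}

def lookup_profile(exe_name: str) -> dict:
    """Return the driver's per-app settings for an exe name, or defaults."""
    return PROFILES.get(exe_name.lower(), DEFAULT)

print(lookup_profile("FurMark.exe"))   # known stress app: profile applies
print(lookup_profile("furmark2.exe"))  # renamed copy: falls back to defaults
```

A rename therefore changes behaviour not because the workload changed, but because the name lookup missed.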
    Crunching for Comrades and the Common good of the People.

  8. #33
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Shintai View Post
    This is an old issue, and it also applies to nVidia. Changing/renaming game exe files means the driver profiles don't apply and the card stays in idle states. Not long ago, with EvE Online for example, a lot of nVidia cards and some AMD cards overheated when in station, simply due to missing idle states after some changed shader code.

    You can thank nVidia/AMD for poor designs and driver hacks. It's the same reason the drivers are around 100MB these days instead of 20.
    What we're encountering here is not an overheating problem, again; it's a failure in the power supply stage of the card. The temps were handled fine. Watercooled cards were tested, and they crashed all the same (while running much cooler).

    The very same card, reaching the very same temps (even higher temps), but with different VRMs, runs the very same tests fine.

    So I don't think the problem is the same.

  9. #34
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    Germany
    Posts
    1,592
    I don't understand why people are questioning that there might be a problem with the VRM implementation on said cards. Would any of you say the same when you buy a shiny new i7 or 955BE, run IBT on your favourite board to check if it's stable, and whoops! You make it crash at stock because the processor draws more than the VRM can handle? What kind of argument is it to accept that exact thing when it comes to GPUs? That's bollocks.

    To me it rather sounds like someone "economized" a bit too much here at ATI. One more phase and nothing happens anymore. Wow, that's just cheap.
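To put a rough number on the "one more phase" argument: here is a back-of-envelope split of the ~87 A peak quoted later in this thread across a hypothetical 3-phase versus 4-phase core VRM. The actual phase counts and per-phase ratings for these cards are not confirmed here, and an even current split is an idealization.

```python
# Back-of-envelope per-phase current. The 87 A figure is the peak quoted by
# Tetedeiench in this thread; phase counts here are assumptions for illustration.

def per_phase_current(total_amps: float, phases: int) -> float:
    """Ideal even current split across VRM phases."""
    return total_amps / phases

peak = 87.0  # A, claimed peak draw of the OCCT GPU test
for phases in (3, 4):
    print(f"{phases} phases: {per_phase_current(peak, phases):.1f} A per phase")
```

Under these assumptions, each extra phase knocks several amps off every other phase, which is why a cheaper phase count leaves so little headroom.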
    The XS Folding@Home team needs your help! Join us and help fight diseases with your CPU and GPU!!


  10. #35
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by p2501 View Post
    I don't understand why people are questioning that there might be a problem with the VRM implementation on said cards. Would any of you say the same when you buy a shiny new i7 or 955BE, run IBT on your favourite board to check if it's stable, and whoops! You make it crash at stock because the processor draws more than the VRM can handle? What kind of argument is it to accept that exact thing when it comes to GPUs? That's bollocks.

    To me it rather sounds like someone "economized" a bit too much here at ATI. One more phase and nothing happens anymore. Wow, that's just cheap.

    There. I think exactly the same.

  11. #36
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    @Tetedeiench, run Furmark and you will see the same results. It is the only other benchmark that shows a similar problem.

  12. #37
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Manicdan View Post
    @Tetedeiench, run Furmark and you will see the same results. It is the only other benchmark that shows a similar problem.
    It seems that Furmark doesn't reach the 82A limit: it reaches about 78A on the VRM (extreme burning mode, fullscreen, exe renamed), a figure picked up from one of my beta-testers' comparative tests.

    My test reaches (I've found the value) about 87A on the very same card.

    Or at least, I'm reaching it instantaneously.
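Lining up the figures from this post makes the claim easy to check at a glance. All numbers below are the thread's claims (the 82 A limit and the two readings), not official AMD specifications.

```python
# Compare the currents quoted in this post against the claimed 82 A VRM limit.
LIMIT_A = 82.0  # claimed over-current threshold on the reference core VRM
readings = {
    "Furmark (extreme, renamed exe)": 78.0,  # beta-tester's measurement
    "OCCT GPU:3D": 87.0,                     # Tetedeiench's measurement
}

for app, amps in readings.items():
    over = amps - LIMIT_A
    status = "exceeds limit" if over > 0 else "within limit"
    print(f"{app}: {amps:.0f} A -> {status} ({over:+.0f} A)")
```

So, taking the numbers at face value, Furmark sits about 4 A under the claimed limit while the OCCT test overshoots it by about 5 A.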
    Last edited by Tetedeiench; 05-19-2009 at 12:27 PM.

  13. #38
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Tetedeiench - would you make a list of the good/bad ones so I/we can stay away from the bad ones?

    Thanks

  14. #39
    Engineering The Xtreme
    Join Date
    Feb 2007
    Location
    MA, USA
    Posts
    7,217
    Still, I have voltmodded both 4870s and a 4870X2 and clocked them up, and never, playing any games or running any benches, have they crashed due to VRM overload... I mean, sure, this odd test with a furry donut can do it, but in reality, when will this "furry donut" game be released? My guess is not before the end of this year, and most likely not before the end of next year...

    (I will test with my XFX 4890 black-PCB reference cards with 6+8-pin power on Friday)

  15. #40
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by Tetedeiench View Post
    It seems that Furmark doesn't reach the 82A limit: it reaches about 78A on the VRM (extreme burning mode, fullscreen, exe renamed), a figure picked up from one of my beta-testers' comparative tests.

    My test reaches (I've found the value) about 87A on the very same card.

    Or at least, I'm reaching it instantaneously.
    Did you rename the exe to something else?

  16. #41
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by SNiiPE_DoGG View Post
    Still, I have voltmodded both 4870s and a 4870X2 and clocked them up, and never, playing any games or running any benches, have they crashed due to VRM overload... I mean, sure, this odd test with a furry donut can do it, but in reality, when will this "furry donut" game be released? My guess is not before the end of this year, and most likely not before the end of next year...

    (I will test with my XFX 4890 black-PCB reference cards with 6+8-pin power on Friday)
    I'm interested in this test.

    But you know, I wonder when they'll release Prime95, or OCCT, or IBT, or any of those CPU stability "games" they're all talking about.

    I hear in the background: "Come get some!"

  17. #42
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by SNiiPE_DoGG View Post
    Still, I have voltmodded both 4870s and a 4870X2 and clocked them up, and never, playing any games or running any benches, have they crashed due to VRM overload... I mean, sure, this odd test with a furry donut can do it, but in reality, when will this "furry donut" game be released? My guess is not before the end of this year, and most likely not before the end of next year...

    (I will test with my XFX 4890 black-PCB reference cards with 6+8-pin power on Friday)
    You will never find a game that utilizes the shaders that much, since most of the time there's also a lot more geometry in games.

    Furmark is an unrealistic scenario for games...
    Last edited by Hornet331; 05-19-2009 at 12:50 PM.

  18. #43
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Manicdan View Post
    did you rename the exe to something else?
    For the Furmark test? Oh yes, to avoid any limitations.

    My test runs under crysis.exe to avoid any limitations as well (that's to prevent what happened to Furmark). The very same behaviour occurs under OCCTGPU.exe or any other name you give it.

  19. #44
    Xtreme Enthusiast
    Join Date
    Jul 2007
    Location
    Kuwait
    Posts
    616
    That's what we do here: we benchmark/test stuff. So please, guys, if you have an HD4870 or HD4890, test what the man is saying, screenshot your results, and post them. (I would test, but all I have now is a GTX 260 SP216 and an 8800GTX.)
    And if you are a guru electrical engineer, please explain this if you can.

  20. #45
    Xtreme Guru
    Join Date
    Aug 2005
    Location
    Burbank, CA
    Posts
    3,766
    I can see all the ATI fanboys' faces now if this ends up being true.

  21. #46
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by C.Ron7aldo View Post
    That's what we do here: we benchmark/test stuff. So please, guys, if you have an HD4870 or HD4890, test what the man is saying, screenshot your results, and post them. (I would test, but all I have now is a GTX 260 SP216 and an 8800GTX.)
    And if you are a guru electrical engineer, please explain this if you can.

  22. #47
    Engineering The Xtreme
    Join Date
    Feb 2007
    Location
    MA, USA
    Posts
    7,217
    Quote Originally Posted by Tetedeiench View Post
    I'm interested in this test

    But you know, i wonder when they'll release prime95, or OCCT, or IBT, or any of all those CPU stability games they're all talking about
    Exactly why I am a big fan of a standard 2-hour OCCT run for my CPU stress testing; I don't do IBT or Linpack, ever.

    It will be interesting to see if the 8-pin makes a difference on the crash.

    Quote Originally Posted by Hornet331 View Post
    You will never find a game that utilizes the shaders that much, since most of the time there's also a lot more geometry in games.

    Furmark is an unrealistic scenario for games...
    This is my thought as well. I don't know the technical specifics, but I don't see any furry donuts to fight in the sequels to the games I play - although you never know what kind of BS Crytek could cook up.

  23. #48
    Xtreme 3D Team
    Join Date
    Jan 2009
    Location
    Ohio
    Posts
    8,499
    Is the 4850 not affected? I really don't see the point in this. As much as I love OCCT, the point of stability testing is not to throw a 200% load at the card. It should (in my opinion) be about a 90% GPU load, monitored for a good 2 hours.

    I run Furmark on my 4850 to check my overclock sometimes, in Xtreme Burning Mode. If it passes that, that's enough. I run OCCT for my CPU because it catches errors faster.

    My 4850 hits 90C in Furmark; I'm not going to be stupid enough to run it up around 110-120C. That's just pointless. If you run it that hot or with that much load, it will eventually fail.

  24. #49
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    lol, well, I just tried this on a stock reference 4870...

    Well... it started the test, then just showed a black screen with my mouse...

    I pressed Esc after about a minute of nothing happening,
    and then everything was fine; it just went back to the normal OCCT screen...

    Nothing was wrong... the test just didn't load up; it didn't even go into 3D clocks...
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  25. #50
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by Tetedeiench View Post
    For the furmark test ? Oh yes, to avoid any limitations.

    My test is running under crysis.exe to avoid any limitations as well (that's to prevent what happened to furmark). The very same behaviour occurs under OCCTGPU.exe or any other name you give to it.
    Well, congrats, you have found the most stressful application for these cards, and AMD should use your program to stress test their next cards before giving us something half-built.

    The next question, and the most important one: which card can run this benchmark faster, the 4890 or the 275?


