Page 3 of 30 FirstFirst 12345613 ... LastLast
Results 51 to 75 of 730

Thread: OCCT 3.1.0 shows HD4870/4890 design flaw - they can't handle the new GPU test!

  1. #51
    Xtreme Mentor
    Join Date
    Apr 2005
    Posts
    2,550
    Quote Originally Posted by Tetedeiench View Post
    If one day a company decides to use very, very optimised shaders for their game, that might crash the card. Why is that? Because the card is drawing too much power.
    Could you define "very, very optimised shaders"?

    I'd say that games like Crysis, or Far Cry 2, or Dawn of War II, or the new Riddick or STALKER... do use very COMPLEX shaders... and I haven't heard anyone complain that those games burned their Radeons.

    So you're saying that your "simple alpha blending" is meaner than STALKER's shaders... interesting!
    Adobe is working on Flash Player support for 64-bit platforms as part of our ongoing commitment to the cross-platform compatibility of Flash Player. We expect to provide native support for 64-bit platforms in an upcoming release of Flash Player following the release of Flash Player 10.1.

  2. #52
    Xtreme Addict
    Join Date
    Dec 2007
    Posts
    1,030
    Kudos for making an application even more stressful than FurMark.

    I think people here are sticking to the ATI side because they don't care about having a low-quality card, as long as it handles games nicely and is cheap.

    Cheap... you get what you pay for, so people don't complain...
    Are we there yet?

  3. #53
    Xtreme Enthusiast
    Join Date
    Jul 2007
    Location
    Kuwait
    Posts
    616
    Quote Originally Posted by Jamesrt2004 View Post
    lol, well I just tried this on a stock reference 4870...

    Well... it started the test, then just a black screen with my mouse...

    I pressed Esc after about a minute of nothing happening,
    and then everything was fine; it just went back to the normal OCCT screen...

    Nothing was wrong... the test just didn't load up. It didn't even go into 3D clocks...
    Do you have Vista or XP? If you have Vista, sometimes Vista restarts the card for you.

  4. #54
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Jamesrt2004 View Post
    lol, well I just tried this on a stock reference 4870...

    Well... it started the test, then just a black screen with my mouse...

    I pressed Esc after about a minute of nothing happening,
    and then everything was fine; it just went back to the normal OCCT screen...

    Nothing was wrong... the test just didn't load up. It didn't even go into 3D clocks...
    That's something new.

    Try the very same test with shader complexity 0 (this lowers the load on the GPU by a lot). Does it launch?

    Try to monitor using RivaTuner: do you see the very same drop in frequencies as we recorded?

    Pressing <esc> kills the app; I've always included that escape point. Maybe you can recover from that.

  5. #55
    Xtreme Addict
    Join Date
    Dec 2007
    Posts
    1,030
    Quote Originally Posted by Nedjo View Post
    Could you define "very, very optimised shaders"?

    I'd say that games like Crysis, or Far Cry 2, or Dawn of War II, or the new Riddick or STALKER... do use very COMPLEX shaders... and I haven't heard anyone complain that those games burned their Radeons.

    So you're saying that your "simple alpha blending" is meaner than STALKER's shaders... interesting!
    I don't know much about programming, but I think you don't know a single thing...

    Having complex code may cause less stress because some units will be idling, waiting for other units' results, while with simple, scalable code you get much more GPU load, because data moves much faster between units and the calculations are much simpler... I think...
    Are we there yet?

  6. #56
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Chances are it's a driver failure and Windows will just reset the drivers. I think there's an option in the CCC to use soft restarts.

  7. #57
    Xtreme Addict
    Join Date
    Dec 2007
    Posts
    1,030
    Quote Originally Posted by Manicdan View Post
    Chances are it's a driver failure and Windows will just reset the drivers. I think there's an option in the CCC to use soft restarts.
    It can't be a driver failure, because the test runs fine at lower-than-stock clocks...

    The chance that it really is a design flaw is very high; you might want to start considering it a fact.
    Are we there yet?

  8. #58
    Registered User
    Join Date
    Oct 2006
    Location
    Belgium
    Posts
    19
    I have an HD4890 and I cannot run the test in full screen at native resolution with shader complexity 3 and no error check.
    I must underclock the GPU to 700 MHz or underclock the RAM. Otherwise I get a black screen within a second and must hard-reset the computer (the card protecting itself, because the current goes above 82-83 A).

    If this happened when you ran a non-overclocked CPU with IBT, would you think it was normal?

    I think a card's design should handle a 100% load even if no game is doing it (yet).

  9. #59
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Nedjo View Post
    could you define "very very optimised shaders"?

    I'd say that games like Crysis, or Far Cry 2, or Dawn Of War II, or the new Riddick or STALKER... do use very COMPLEX shaders... and I didn't heard that someone complained that those games burned their Radeons

    So you're saying that yours "simple Alpha blending" is meaner than STALKER's shaders... interesting!
    Yes. Alpha blending + simple shaders is meaner than STALKER's shaders.

    Oh, STALKER's shaders are probably much more complex than mine. Indeed, I will never question that point. Mathematically, they are far, far beyond mine.

    I am just pragmatic. I am combining functions that load the GPU, period. One day, a shader will have a practical reason to combine those functions. And that will load the GPU. And... well. Boom.

    Who knows? I'd rather be sure, when I buy hardware, that it can run everything that can be thrown at it...

  10. #60
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Quote Originally Posted by Tetedeiench View Post
    Well, the problem is:
    1- I don't have the means to use another testing method. I'm a single developer. The only ATI video card I have access to is an HD2600 Mobility. My main computer is equipped with a GTX285, and I only have one computer. Imagine me as a guy just like yourself, spending his time developing OCCT.

    I tried to get more professional testing done. So far, no good. People are just plain ignoring my emails. I thought that having more than 1,500,000 downloads would be enough to get at least some listening, or a professional test when you uncover something in your own field (which is stability testing). Seems like not. I have enough proof to get at least a "duh, let's check, he may be right". I mean, we had this happen on a PC with a ToughPower 1500W; I doubt the PSU was at fault.

    2- No game currently exists which triggers this issue. That doesn't mean such a game will never exist. And again, if one app can trigger this problem, any other app can.

    If you buy a car that boasts it can reach 280 mph on a track, I doubt you'll be happy when you learn it can only reach 200 mph on said track, even if you're only allowed 100 mph in real life... that's what we're talking about.

    I am actually longing for professional testing proving me right or wrong. I'm almost sure I'm right at the moment. The problem is, I've gone as far as I can with my limited testing means.

    You want to help me? Prove me wrong. But I'm sure your HD4870/4890 is going black-screen with my test at the moment, if you do have one... no?
    I think we have a disconnect here. People don't use burn-in programs to see how their card stacks up against the program itself, but as a gauge of how their video card will perform in games. This is why you are not able to openly receive constructive criticism (based on how you've responded so far).

    Since this is your new test method, people will naturally pay close attention to its legitimacy based on what (and how) you decide to deliver the information.

    And your car analogy is not correct. There are many cars whose speedometers show a "top speed" they aren't necessarily able to achieve.

    So in a nutshell, it comes down to how these results affect the end user. So far, you haven't fully convinced me, at least, of how we are negatively impacted in the games we play.
    Last edited by Eastcoasthandle; 05-19-2009 at 01:05 PM.

  11. #61
    Xtreme Mentor
    Join Date
    May 2008
    Posts
    2,554
    Wasn't there a thread here in the News section that proved that almost all high-end cards exceeded their TDP, and in some cases the PCI-E spec, while running FurMark?

    Edit: here it is: link
    Last edited by BababooeyHTJ; 05-19-2009 at 01:12 PM.

  12. #62
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Luka_Aveiro View Post
    I don't know much about programming, but I think you don't know a single thing...

    Having complex code may cause less stress because some units will be idling, waiting for other units' results, while with simple, scalable code you get much more GPU load, because data moves much faster between units and the calculations are much simpler... I think...

    Exactly. Said in much better English than mine. And that's where you can tell that English is not my native language.

    I built the current algorithm by isolating the functions that load the GPU, building a mathematically sound function out of them, and making it scalable (the "shader complexity" parameter).

    That's what was done. Sounds easy; it was not.

    And yes, indeed, the mere fact that lowering the hardware frequencies makes the test run almost rules out any software problem. And believe me, I did A LOT of debugging there.
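    As a rough illustration of the idea described above (this is not OCCT's actual shader code; the kernel, constants, and function names below are invented for the sketch), a workload can expose a single "complexity" knob that repeats a cheap arithmetic step per element, so one parameter scales the load:

```python
# Hypothetical sketch of a scalable stress kernel: the "complexity"
# parameter repeats a cheap arithmetic step per element. Complexity 0
# skips the loop entirely, like the light "baby mode" in the thread.

def stress_kernel(x, complexity):
    """Chain `complexity` rounds of simple math on one value."""
    for _ in range(complexity):
        x = (x * 1.0000001 + 0.5) % 1000.0  # bounded, so it can loop forever
    return x

def run_pass(n=1024, complexity=3):
    """Apply the same kernel independently to every simulated 'pixel'."""
    return [stress_kernel(float(i), complexity) for i in range(n)]

frame = run_pass()
print(len(frame))  # 1024 results, one per simulated pixel
```

    The point of the structure is that every element's work is independent and identical, which is what keeps all execution units busy at once.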

  13. #63
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Eastcoasthandle View Post
    I think we have a disconnect here. People don't use burn-in programs to see how their card stacks up against the program itself, but as a gauge of how their video card will perform in games. This is why you are not able to openly receive constructive criticism (based on how you've responded so far).

    Since this is your new test method, people will naturally pay close attention to its legitimacy based on what (and how) you decide to deliver the information.

    And your car analogy is not correct. There are many cars whose speedometers show a "top speed" they aren't necessarily able to achieve.
    If you want my test to be more "realistic", you can run the "baby mode" version of it: launch it with shader complexity 0. It will be castrated, but closer to what you want.

    And you'll get the error detection mode, which works very well.

    But truly, I fail to see the point.

  14. #64
    Xtreme Mentor
    Join Date
    Apr 2005
    Posts
    2,550
    Quote Originally Posted by Luka_Aveiro View Post
    I don't know much about programming, but I think you don't know a single thing...
    So you're now the spokesperson for Tetedeiench. Good for you!
    Having complex code may cause less stress because some units will be idling, waiting for other units' results, while with simple, scalable code you get much more GPU load, because data moves much faster between units and the calculations are much simpler... I think...
    After reading this gibberish I concur with you: you really don't know much about programming!!!

    Pfff... more complex code imposes less stress on the GPU... yeah, right!
    Adobe is working on Flash Player support for 64-bit platforms as part of our ongoing commitment to the cross-platform compatibility of Flash Player. We expect to provide native support for 64-bit platforms in an upcoming release of Flash Player following the release of Flash Player 10.1.

  15. #65
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Quote Originally Posted by Tetedeiench View Post
    If you want my test to be more "realistic", you can run the "baby mode" version of it: launch it with shader complexity 0. It will be castrated, but closer to what you want.

    And you'll get the error detection mode, which works very well.

    But truly, I fail to see the point.
    Again, another disconnect here. How does your program (pass or fail) tell the end user that there is something wrong with their video card (for example) in the games they prefer playing? Remember, this is your new test method.

  16. #66
    Xtreme Addict
    Join Date
    Dec 2007
    Posts
    1,030
    Quote Originally Posted by Nedjo View Post
    So you're now the spokesperson for Tetedeiench. Good for you!


    After reading this gibberish I concur with you: you really don't know much about programming!!!

    Pfff... more complex code imposes less stress on the GPU... yeah, right!

    Are we there yet?

  17. #67
    Xtreme Addict
    Join Date
    Jul 2008
    Location
    SF, CA
    Posts
    1,294
    I don't have time to read all this, so I apologize if someone has already mentioned this. Also, I haven't started school yet, so excuse me if I've got my facts wrong.

    This IS a serious and very likely (though certainly unacceptable) design flaw. Look at any inductor spec sheet and you will see a few numbers that describe the way the inductor behaves. As load is applied and DC bias rises, the inductor reaches a current threshold that causes its temperature to rise by about 40°C (the heating current, Iheat), which usually coincides with, or is not far from, the inductor's rated current, Irated. Within spec on the inductance-vs-DC-bias (current) curve, inductance remains relatively constant. Past Iheat, inductance begins to fall rapidly until current hits the third important number, the saturation current Isat, at which point inductance has dropped 20% (I forget whether the behaviour is asymptotic). Higher PCB and ambient temperatures will cause this to occur at lower current. Therefore, there is a threshold current at which the buck converter will no longer be able to meet power demand; that's rather intuitive.

    Essentially, I would bet that if Volterra datasheets weren't so darn clandestine, we would see either that their inductors' Isat is around 82 A, or that Isat occurs at 82 A at a temperature probably reached in these blast furnaces. As a caveat, I have never owned a 4870/90, but I have squinted at their PWM circuitry a lot.
    gotta go sorry.
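    The saturation behaviour described in the post above can be sketched numerically. This is a toy model, not Volterra data: the curve shape, the 220 nH nominal inductance, and the roll-off exponent are invented placeholders, chosen only so that inductance has dropped 20% at 82 A, matching the thread's numbers.

```python
def inductance_nH(i_dc, l0=220.0, i_sat=82.0, k=8.0):
    """Toy inductance-vs-DC-bias curve: nearly flat at low current,
    rolling off so that L has dropped exactly 20% at i_sat."""
    return l0 / (1.0 + 0.25 * (i_dc / i_sat) ** k)

print(inductance_nH(40.0))  # still ~220 nH, well within spec
print(inductance_nH(82.0))  # 176.0 nH: the 20% drop at saturation
```

    Past the knee the curve collapses quickly, which is why a hard current wall (rather than gradual degradation) is plausible.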

  18. #68
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by Tetedeiench View Post
    I built the current algorithm by isolating the functions that load the GPU, building a mathematically sound function out of them, and making it scalable (the "shader complexity" parameter).
    And here's your failure: no game can utilize all 160 5D shader units to their fullest potential. Your simple shader code can, but it neither reflects games nor the conditions the R700 was designed for.
    Last edited by Hornet331; 05-19-2009 at 01:13 PM.

  19. #69
    Xtreme Enthusiast
    Join Date
    Dec 2008
    Posts
    560
    Quote Originally Posted by Tetedeiench View Post
    .
    Thanks.

    I knew there was a reason why my 4870X2 could not get another MHz of stable overclock, even with 1.35 V pumped into the cores.

    This just shows what I suspected: it had to do with the power being provided.

    Thanks for all your research and hard work. I'm sure it will be much appreciated in the future, and by nvidia fanboys around the world.

    I wonder if this has something to do with why F@H hasn't been optimized for ATI cards yet.

  20. #70
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Eastcoasthandle View Post
    Again, another disconnect here. How does your program (pass or fail) tell the end user that there is something wrong with their video card (for example) in the games they prefer playing? Remember, this is your new test method.
    You've got two modes:
    • Burn mode: just calculate the donut and display it. This mode is used to achieve the highest stress possible.
    • Error check mode: the donut doesn't move. I calculate it once, and that image is treated as the "reference". Every subsequent image is then checked against this reference; if a pixel is a different color, the GPU made a calculation error (it should have produced the very same image), and I report it.


    Obviously, the error check mode is less effective: checking an image against a reference is never as effective, burning-wise.

    This approach is used in ATITool, which has been around for a few years now.

    The error I'm reporting now is a complete crash in Burn mode, which is different from a pixel of a different color.
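    The error-check idea described above can be sketched like this. It is a hedged illustration, not OCCT's code: `render_frame` is a stand-in for the real GPU render, and the `glitch_at` parameter is invented here just to show a mismatch being counted.

```python
import numpy as np

def render_frame(h=4, w=4, glitch_at=None):
    """Placeholder deterministic 'render'; optionally corrupt one
    pixel to simulate a GPU calculation error."""
    frame = np.fromfunction(lambda y, x: (13 * y + 7 * x) % 256, (h, w))
    if glitch_at is not None:
        frame = frame.copy()
        frame[glitch_at] += 1
    return frame

# Computed once, like the static reference donut in error check mode.
reference = render_frame()

def count_errors(frame):
    """Pixels differing from the reference count as calculation errors."""
    return int(np.count_nonzero(frame != reference))

print(count_errors(render_frame()))                  # 0: identical image
print(count_errors(render_frame(glitch_at=(2, 1))))  # 1: mismatch reported
```

    Because the scene is static and the math deterministic, any pixel difference can only come from a hardware miscalculation, which is what makes the comparison a valid error detector.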

  21. #71
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Hornet331 View Post
    And here's your failure: no game can utilize all 160 5D shader units to their fullest potential. Your simple shader code can, but it neither reflects games nor the conditions the R700 was designed for.
    Well, why build it that way, then, and boast about the capability?

    I wonder.

  22. #72
    Xtreme Enthusiast
    Join Date
    Dec 2008
    Posts
    560
    Hmm, I wonder if this is the real reason SidePort hasn't been enabled and graphics drivers haven't been bringing more performance:
    it would crash the cards.

    Oh well, at least ATI can test their next-gen cards with this tool first now.

  23. #73
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Quote Originally Posted by Tetedeiench View Post
    You've got two modes:
    • Burn mode: just calculate the donut and display it. This mode is used to achieve the highest stress possible.
    • Error check mode: the donut doesn't move. I calculate it once, and that image is treated as the "reference". Every subsequent image is then checked against this reference; if a pixel is a different color, the GPU made a calculation error (it should have produced the very same image), and I report it.


    Obviously, the error check mode is less effective: checking an image against a reference is never as effective, burning-wise.

    This approach is used in ATITool, which has been around for a few years now.

    The error I'm reporting now is a complete crash in Burn mode, which is different from a pixel of a different color.
    So your answer is that the burn-in mode has no tangible benefit for those using the video card to play games. Gotcha.
    Last edited by Eastcoasthandle; 05-19-2009 at 01:30 PM.

  24. #74
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by Tetedeiench View Post
    Well, why build it that way, then, and boast about the capability?

    I wonder.
    Because when you use it the way it's meant to be used, there's no problem...?

    Your code produces a load behavior on the shader cores that will never exist in the wild (not in video encoding, not in GPGPU apps, and not in games).

  25. #75
    I am Xtreme
    Join Date
    Jul 2004
    Location
    Little Rock
    Posts
    7,204
    At about 320 seconds and a temperature of 119°C it shuts the computer down.

    Lately I have been having lock-up problems and thought it was the motherboard. Far Cry, Call of Duty, and Stalker: Clear Sky will lock up with a sound loop, and I have to hard-reboot. I thought it was the sound card, but it locked up with the motherboard sound and with NO sound, LOL! I think I'll pull the 3870 from my other rig for the game tests!
    Quote Originally Posted by Movieman
    With the two approaches to "how" to design a processor WE are the lucky ones as we get to choose what is important to us as individuals.
    For that we should thank BOTH (AMD and Intel) companies!


    Posted by duploxxx
    I am sure JF is relaxed and smiling these days with their intended launch schedule. SNB Xeon servers on the other hand....
    Posted by gallag
    there you go bringing Intel into an AMD thread again lol, if that was someone dropping a dig at AMD you would be crying like a girl.
    qft!

