
Thread: OCCT 3.1.0 shows HD4870/4890 design flaw - they can't handle the new GPU test!


  1. #1
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Eastcoasthandle View Post
    The reason why people are not agreeing with you is because:
    1. Your test methods are called into question
    2. You have not verified any games as showing this problem
    among other specific concerns...

    Well, the problem is:
    1- I don't have the means to use another testing method. I'm a single developer. The only ATI video card I have access to is an HD2600 Mobility. My main computer is equipped with a GTX285. And I only have one computer. Imagine me as a guy just like yourself, spending his time developing OCCT.

    I tried to get more professional testing done. So far, no good. People are just plain ignoring my emails. I thought that having more than 1,500,000 downloads would be enough to get, at least, some listening, or a professional test when you uncover something in your own field (which is, stability testing). Seems like not. I have enough proof to get, at least, a "duh, let's check, he may be right". I mean, we had this thing happen on a PC with a ToughPower 1500W; I doubt the PSU was at fault.

    2- No game currently exists that raises this issue. That doesn't mean a game with this issue will never exist. And again, if one app can trigger this problem, any other app can.

    If you buy a car that boasts it can reach 280 mph on a track, I doubt you'll be happy when you learn it can only reach 200 mph on that same track, even if you're only allowed 100 mph in real life... that's what we're talking about.

    I am actually longing for professional testing to prove me right or wrong. I'm almost sure I'm right at the moment. The problem is, I've gone as far as I can with my limited testing means.

    You want to help me? Prove me wrong! But I'm sure your HD4870/4890 is going to black-screen with my test at the moment, if you do have one... no?
    Last edited by Tetedeiench; 05-19-2009 at 12:10 PM.

  2. #2
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Quote Originally Posted by Tetedeiench View Post
    Well, the problem is:
    1- I don't have the means to use another testing method. I'm a single developer. The only ATI video card I have access to is an HD2600 Mobility. My main computer is equipped with a GTX285. And I only have one computer. Imagine me as a guy just like yourself, spending his time developing OCCT.

    I tried to get more professional testing done. So far, no good. People are just plain ignoring my emails. I thought that having more than 1,500,000 downloads would be enough to get, at least, some listening, or a professional test when you uncover something in your own field (which is, stability testing). Seems like not. I have enough proof to get, at least, a "duh, let's check, he may be right". I mean, we had this thing happen on a PC with a ToughPower 1500W; I doubt the PSU was at fault.

    2- No game currently exists that raises this issue. That doesn't mean a game with this issue will never exist. And again, if one app can trigger this problem, any other app can.

    If you buy a car that boasts it can reach 280 mph on a track, I doubt you'll be happy when you learn it can only reach 200 mph on that same track, even if you're only allowed 100 mph in real life... that's what we're talking about.

    I am actually longing for professional testing to prove me right or wrong. I'm almost sure I'm right at the moment. The problem is, I've gone as far as I can with my limited testing means.

    You want to help me? Prove me wrong! But I'm sure your HD4870/4890 is going to black-screen with my test at the moment, if you do have one... no?
    I think we have a disconnect here. People don't use burn-in programs to see how their hardware stacks up against the program itself, but as a gauge of how their video card will perform in games. This is why you are not able to openly receive constructive criticism (based on how you've responded so far).

    Since this is your new test method, people will naturally pay close attention to its legitimacy, based on what (and how) you decide to deliver the information.

    And your car analogy is not correct. There are many cars that show a "top speed" on the speedometer but aren't necessarily able to achieve it.

    So, in a nutshell, it comes down to the question of how these results affect the end user. So far, you haven't fully convinced me (at least) that we are negatively impacted in the games we play.
    Last edited by Eastcoasthandle; 05-19-2009 at 01:05 PM.

  3. #3
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Eastcoasthandle View Post
    I think we have a disconnect here. People don't use burn-in programs to see how their hardware stacks up against the program itself, but as a gauge of how their video card will perform in games. This is why you are not able to openly receive constructive criticism (based on how you've responded so far).

    Since this is your new test method, people will naturally pay close attention to its legitimacy, based on what (and how) you decide to deliver the information.

    And your car analogy is not correct. There are many cars that show a "top speed" on the speedometer but aren't necessarily able to achieve it.
    If you want my test to be more "realistic", you can run the "baby mode" version of it: a more realistic, cut-down version. Launch it with shader complexity 0. It will be cut down, but closer to what you want.

    And you'll get the error-detection mode, which is functioning very well.

    But truly, I fail to see the point.

  4. #4
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Quote Originally Posted by Tetedeiench View Post
    If you want my test to be more "realistic", you can run the "baby mode" version of it: a more realistic, cut-down version. Launch it with shader complexity 0. It will be cut down, but closer to what you want.

    And you'll get the error-detection mode, which is functioning very well.

    But truly, I fail to see the point.
    Again, another disconnect here. How does your program (pass or fail) tell end users that there is something wrong (for example) with their video card in the games they prefer playing? Remember, this is your new test method here.

  5. #5
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Eastcoasthandle View Post
    Again, another disconnect here. How does your program (pass or fail) tell end users that there is something wrong (for example) with their video card in the games they prefer playing? Remember, this is your new test method here.
    You've got two modes:
    • Burn mode: just calculate the donut and display it. This mode is used to achieve the highest stress possible.
    • Error-check mode: the donut doesn't move. I calculate it once, and it is considered the "reference" image. Every other image calculated is then checked against this reference image. If a pixel is a different color, the GPU made a calculation error (it should have produced the very same image). I report this error.

    Obviously, the error-check mode is less effective: checking an image against a reference is never as effective, burning-wise.

    This approach is also used in ATITool, which has been around for a few years now.

    The error I'm reporting now is a complete crash in burn mode, which is different from a pixel of a different color.
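The error-check mode described above can be sketched roughly as follows. This is a hypothetical illustration, not OCCT's actual code: `render_frame` stands in for whatever renders the donut, and frames are treated as flat lists of RGB tuples.

```python
# Hypothetical sketch of the error-check mode described above: render one
# reference frame, then flag any later frame whose pixels differ from it.
# render_frame and the flat-list frame format are illustrative assumptions,
# not OCCT's actual API.

def count_errors(reference, frame):
    """Count pixels that differ between two frames (lists of RGB tuples)."""
    return sum(1 for ref_px, px in zip(reference, frame) if ref_px != px)

def error_check_loop(render_frame, iterations):
    """Render the same static scene repeatedly. A healthy GPU should
    reproduce the reference image exactly, so every differing pixel
    counts as a calculation error."""
    reference = render_frame()  # the first frame becomes the reference
    errors = 0
    for _ in range(iterations):
        errors += count_errors(reference, render_frame())
    return errors
```

Burn mode, by contrast, skips the comparison entirely and just renders as fast as possible, which is why it stresses the card harder than the error check does.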

  6. #6
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Quote Originally Posted by Tetedeiench View Post
    You've got two modes:
    • Burn mode: just calculate the donut and display it. This mode is used to achieve the highest stress possible.
    • Error-check mode: the donut doesn't move. I calculate it once, and it is considered the "reference" image. Every other image calculated is then checked against this reference image. If a pixel is a different color, the GPU made a calculation error (it should have produced the very same image). I report this error.

    Obviously, the error-check mode is less effective: checking an image against a reference is never as effective, burning-wise.

    This approach is also used in ATITool, which has been around for a few years now.

    The error I'm reporting now is a complete crash in burn mode, which is different from a pixel of a different color.
    So your answer is that the burn-in mode has no tangible benefit for those using the video card to play games. Gotcha.
    Last edited by Eastcoasthandle; 05-19-2009 at 01:30 PM.

  7. #7
    Xtreme Enthusiast
    Join Date
    Dec 2008
    Posts
    560
    Quote Originally Posted by Tetedeiench View Post
    .
    Thanks

    I knew there was a reason why my 4870X2 could not get a single MHz more of stable overclock, even with 1.35V pumping into the cores.

    This just confirms what I suspected: it had to do with the power being provided.

    Thanks for all your research and hard work. I'm sure it will be much appreciated in the future, and by nvidia fanboys around the world.

    I wonder if this has something to do with why F@H hasn't been optimized for ATI cards yet.

  8. #8
    Xtreme Addict
    Join Date
    Aug 2005
    Location
    Germany
    Posts
    2,247
    Quote Originally Posted by Eastcoasthandle View Post
    I think we have a disconnect here. People don't use burn-in programs to see how their hardware stacks up against the program itself, but as a gauge of how their video card will perform in games. This is why you are not able to openly receive constructive criticism (based on how you've responded so far).

    Since this is your new test method, people will naturally pay close attention to its legitimacy, based on what (and how) you decide to deliver the information.

    And your car analogy is not correct. There are many cars that show a "top speed" on the speedometer but aren't necessarily able to achieve it.

    So, in a nutshell, it comes down to the question of how these results affect the end user. So far, you haven't fully convinced me (at least) that we are negatively impacted in the games we play.
    But isn't testing CPUs with Linpack or Prime95 the same? You're saying the load produced by OCCT is "unrealistic", but isn't every benchmark that puts heavy load on a specific type of hardware "unrealistic"?

    You can overclock your CPU and run Windows and play games without any problems, yet after running Prime95 for a few minutes the test aborts because of calculation errors. There are a lot of people who consider an overclock that's not Prime-stable a crap overclock. It didn't pass the testing method, period.
    It's the same with OCCT's GPU benchmark. It puts heavy load on the GPU in the same "unrealistic" way that, e.g., Linpack does for CPUs.

    ...and we're talking about GPUs at stock clocks, compared to overclocked CPUs.

    Maybe other applications for scientific calculations, e.g. Folding@home (especially in the future, with OpenCL and such), will get more and more optimized and reach these limits as well. Who knows?

    *edit* Just tested it with my 4850 without any problems, so I can confirm that presumably only the 4870/90 are affected. However, the GPU gets really hot. Even with the fan speed at 75%, it reached 80°C within a minute.
    Last edited by RaZz!; 05-19-2009 at 01:45 PM.
    1. Asus P5Q-E / Intel Core 2 Quad Q9550 @~3612 MHz (8,5x425) / 2x2GB OCZ Platinum XTC (PC2-8000U, CL5) / EVGA GeForce GTX 570 / Crucial M4 128GB, WD Caviar Blue 640GB, WD Caviar SE16 320GB, WD Caviar SE 160GB / be quiet! Dark Power Pro P7 550W / Thermaltake Tsunami VA3000BWA / LG L227WT / Teufel Concept E Magnum 5.1 // SysProfile


    2. Asus A8N-SLI / AMD Athlon 64 4000+ @~2640 MHz (12x220) / 1024 MB Corsair CMX TwinX 3200C2, 2.5-3-3-6 1T / Club3D GeForce 7800GT @463/1120 MHz / Crucial M4 64GB, Hitachi Deskstar 40GB / be quiet! Blackline P5 470W

  9. #9
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Quote Originally Posted by RaZz! View Post
    But isn't testing CPUs with Linpack or Prime95 the same? You're saying the load produced by OCCT is "unrealistic", but isn't every benchmark that puts heavy load on a specific type of hardware "unrealistic"?

    You can overclock your CPU and run Windows and play games without any problems, yet after running Prime95 for a few minutes the test aborts because of calculation errors. There are a lot of people who consider an overclock that's not Prime-stable a crap overclock. It didn't pass the testing method, period.
    It's the same with OCCT's GPU benchmark. It puts heavy load on the GPU in the same "unrealistic" way that, e.g., Linpack does for CPUs.

    ...and we're talking about GPUs at stock clocks, compared to overclocked CPUs.

    Maybe other applications for scientific calculations, e.g. Folding@home (especially in the future, with OpenCL and such), will get more and more optimized and reach these limits as well. Who knows?
    No, because we know that many have used burn-in programs, encountered zero problems, only to discover stability problems in the games they play. Let's not forget that overclocking is a venture many enthusiasts undertake to determine how well they can play the games and (non-burn-in) programs they use most. It's not a self-serving need to determine whether this (or that) burn-in program works for them alone, and then base the decision that something is wrong (or not) with the hardware on that program alone.

  10. #10
    Xtreme Enthusiast
    Join Date
    Dec 2008
    Posts
    560
    Don't games stress more than just the CPU, though?
    If your test only stresses the CPU and not the northbridge or RAM, your games could easily have issues, even though your CPU test couldn't find them after an overclock.

    Games don't make good stability tests either, though.
    Just the fact that FurMark was being throttled by drivers, to prevent something from being noticed, shows there was something to hide; and now what was being hidden has been found.

  11. #11
    I am Xtreme
    Join Date
    Jul 2005
    Posts
    4,811
    Quote Originally Posted by Greg83 View Post
    Don't games stress more than just the CPU, though?
    If your test only stresses the CPU and not the northbridge or RAM, your games could easily have issues, even though your CPU test couldn't find them after an overclock.

    Games don't make good stability tests either, though.
    Just the fact that FurMark was being throttled by drivers, to prevent something from being noticed, shows there was something to hide; and now what was being hidden has been found.
    Greg83, you are on the right track. Hardware-specific burn-in applications can't, and more often don't, tell the whole story of how stable your PC really is. That's why we have seen people who use them still have game-related problems, even though their burn-in program tells them otherwise.

    However, I do disagree that games don't make for good stability tests. Granted, it's not the best way to determine stability, nor is it convenient having to reboot your PC. But if you really want to know whether your overclock can handle a game or application, you will eventually have to test it using that specific non-burn-in program or game.

    So the question becomes more simplified, IMO. If the hardware fails the burn-in test yet works fine in the games and applications you play, what tangible effect did using that program have? IMO, none...

  12. #12
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by Eastcoasthandle View Post
    Greg83, you are on the right track. Hardware-specific burn-in applications can't, and more often don't, tell the whole story of how stable your PC really is. That's why we have seen people who use them still have game-related problems, even though their burn-in program tells them otherwise.

    However, I do disagree that games don't make for good stability tests. Granted, it's not the best way to determine stability, nor is it convenient having to reboot your PC. But if you really want to know whether your overclock can handle a game or application, you will eventually have to test it using that specific non-burn-in program or game.

    So the question becomes more simplified, IMO. If the hardware fails the burn-in test yet works fine in the games and applications you play, what tangible effect did using that program have? IMO, none...
    Yet, if it takes 4 hours for the game to crash, and the stability test will tell you in 4 minutes, which one will you choose?

    I mean, I'd rather be stable in the stability test and be sure all my games will run fine, than test my favorite game for an hour and only know that my game lasted that long without crashing.

    Stability tests are great for telling you quickly how stable your system is. And I'd rather learn that quickly, instead of learning it in front of Sephiroth after a 2-hour-long battle... aaaaaaaaaaand when the final cinematic launches... cccccccrashhh.

    Yes, I was level 99 when I faced Sephiroth. That's a silly example.

  13. #13
    Xtreme Addict
    Join Date
    Aug 2005
    Location
    Germany
    Posts
    2,247
    Quote Originally Posted by Eastcoasthandle View Post
    No, because we know that many have used burn-in programs, encountered zero problems, only to discover stability problems in the games they play. Let's not forget that overclocking is a venture many enthusiasts undertake to determine how well they can play the games and (non-burn-in) programs they use most. It's not a self-serving need to determine whether this (or that) burn-in program works for them alone, and then base the decision that something is wrong (or not) with the hardware on that program alone.
    I really don't get why people here disregard Tetedeiench's efforts so much. It's not like he wanted to produce that GPU failure; he stumbled upon it while creating a GPU stability test.
    Instead of investigating the issue, everyone claims to know how unrealistic and far from "real-world" situations the benchmark is.

    He did indeed discover a flaw in the design of the 4870/90 that leads to such behavior.

    However, these are just my 2 cents.
    Last edited by RaZz!; 05-19-2009 at 02:07 PM.
    1. Asus P5Q-E / Intel Core 2 Quad Q9550 @~3612 MHz (8,5x425) / 2x2GB OCZ Platinum XTC (PC2-8000U, CL5) / EVGA GeForce GTX 570 / Crucial M4 128GB, WD Caviar Blue 640GB, WD Caviar SE16 320GB, WD Caviar SE 160GB / be quiet! Dark Power Pro P7 550W / Thermaltake Tsunami VA3000BWA / LG L227WT / Teufel Concept E Magnum 5.1 // SysProfile


    2. Asus A8N-SLI / AMD Athlon 64 4000+ @~2640 MHz (12x220) / 1024 MB Corsair CMX TwinX 3200C2, 2.5-3-3-6 1T / Club3D GeForce 7800GT @463/1120 MHz / Crucial M4 64GB, Hitachi Deskstar 40GB / be quiet! Blackline P5 470W

  14. #14
    Xtreme Member
    Join Date
    Jun 2003
    Location
    Italy
    Posts
    351
    Quote Originally Posted by Tetedeiench View Post
    I tried to get more professional testing done. So far, no good. People are just plain ignoring my emails. I thought that having more than 1,500,000 downloads would be enough to get, at least, some listening, or a professional test when you uncover something in your own field (which is, stability testing). Seems like not. I have enough proof to get, at least, a "duh, let's check, he may be right". I mean, we had this thing happen on a PC with a ToughPower 1500W; I doubt the PSU was at fault.
    What about asking in a game developers forum like Gamedev?
    3570K @ 4.5Ghz | Gigabyte GA-Z77-D3H | 7970 Ghz 1100/6000 | 256GB Samsung 830 SSD (Win 7) | 256GB Samsung 840 Pro SSD (OSX 10.8.3) | 16GB Vengeance 1600 | 24'' Dell U2412M | Corsair Carbide 300R
