
Thread: OCCT 3.1.0 shows HD4870/4890 design flaw - they can't handle the new GPU test!

  1. #576
    Xtreme Guru
    Join Date
    Jan 2005
    Location
    Tre, Suomi Finland
    Posts
    3,858
    I wish people would learn how to use Imageshack...
    You were not supposed to see this.

  2. #577
    Registered User
    Join Date
    Sep 2008
    Posts
    45
    Quote Originally Posted by truehighroller View Post
Hmmm... this could explain my unexplainable crashes... That means I'm probably fine with my power supply, and this whole time it was actually this VRM bug thing. I bought this second power supply booster thing because I thought I was pushing my power supply too much, causing these random crashes. I have nice aftermarket sinks on my cards and don't know how hot they (the VRMs) are getting nowadays, but I can check real quick.

The aftermarket heatsinks are probably the culprit; many of those aftermarket designs provide poor cooling of the VRM.

  3. #578
    Xtreme Member
    Join Date
    Oct 2006
    Posts
    247
    "Oh no, my computers unstable, LOOK some guy on the interwebs found a flaw(which btw only becomes apparent under *unusually high* load), that MUST be why my card is breaking, even though the problem that the guy uncovered has nothing to do with my situation!"

    truehighroller and bluedevil, I highly doubt its this "flaw" causing your issue.

Core 2 Duo (Conroe) was based on the Intel Core Duo (Yonah), which was based on the Pentium M (Banias), which was based on the Pentium III (Coppermine).

    Core 2 Duo is a Pentium III on meth.

  4. #579
    Registered User
    Join Date
    May 2005
    Posts
    31
I got a PowerColor 4890 and added an Accelero S1 Rev. 2. I noticed that the VRM is pretty hard to cool down. I think my PowerColor has a shutdown temperature limit on the VRM: if the VRM goes over 90°C, into the 95-110°C range, my system restarts. As you can see, my VRM current is 32.66A. It runs FurMark fine at 900/1000 and can run indefinitely. I can overclock it to 980/1150 and run games if I can keep the temps cool, but if I run benchmarks and the GPU is at 100% all the time, the amount of heat produced exceeds what the fan can remove. I don't think I have reached the 82/83A limit; the enormous amount of heat produced by the VRM seems to hit the limit for the reference design, which I think is poorly done. The non-reference PowerColor design seems better, and they say it's 10+°C cooler.

    http://img24.imageshack.us/img24/1852/4890oc2.jpg

  5. #580
    Xtreme Addict
    Join Date
    Feb 2008
    Posts
    1,118
So anybody care to comment on this one for me? What should I do? Is there anything I can do?
    _____________________________________________



    Rig = GA-P67A-UD3P Rev 1.0 - 2600K @ 5.2~GHz 1.5v~, 1.489~v Under Load - Swiftech Water Cooling - 2 X 4GB Corsair DDR3 2000Mhz @ 1868MHz~ 9,10,9,27 @ 1.65v~ - Asus 6970 @ 950MHz / 1450MHz - 3x Western Digital RE3 320Gb 16Mb Cache SataII Drives in Raid0 - Corsair HX 850w Power Supply - Antec 1200 Case - 3DMark 11 Score = P6234 - 3DVantage Score = P26237

  6. #581
    Xtreme Addict
    Join Date
    Feb 2008
    Posts
    1,118
    Quote Originally Posted by cal_guy View Post
The aftermarket heatsinks are probably the culprit; many of those aftermarket designs provide poor cooling of the VRM.

Want me to post pics of my cooling setup for them? I will.


    Here is a pic of my cooling setup.

    http://img15.imageshack.us/img15/6901/1022096.jpg

    Another PIC.

    http://img43.imageshack.us/img43/9351/1022097x.jpg

Tell me what you think. I actually used a post on this site to figure out what to put heatsinks on. If I need to, and can, I will order better cooling for the VRMs.
    Last edited by truehighroller; 05-29-2009 at 10:15 AM.
    _____________________________________________



    Rig = GA-P67A-UD3P Rev 1.0 - 2600K @ 5.2~GHz 1.5v~, 1.489~v Under Load - Swiftech Water Cooling - 2 X 4GB Corsair DDR3 2000Mhz @ 1868MHz~ 9,10,9,27 @ 1.65v~ - Asus 6970 @ 950MHz / 1450MHz - 3x Western Digital RE3 320Gb 16Mb Cache SataII Drives in Raid0 - Corsair HX 850w Power Supply - Antec 1200 Case - 3DMark 11 Score = P6234 - 3DVantage Score = P26237

  7. #582
    Xtreme Mentor
    Join Date
    May 2008
    Posts
    2,554
    Quote Originally Posted by truehighroller View Post
Want me to post pics of my cooling setup for them? I will.


    Here is a pic of my cooling setup.

    http://img15.imageshack.us/img15/6901/1022096.jpg

    Another PIC.

    http://img43.imageshack.us/img43/9351/1022097x.jpg

Tell me what you think. I actually used a post on this site to figure out what to put heatsinks on. If I need to, and can, I will order better cooling for the VRMs.
I run an HR-03 on my GTX 280, which uses the same VRMs, and I picked up some Enzotech MOS-C1s; they perform pretty well with a 120mm fan mounted on the drive bays across from the card. A little worse than the stock cooler at 100% fan speed, and a lot better than the Thermalright VRM plate. You may want to pick up a pack.

  8. #583
    Xtreme Enthusiast
    Join Date
    Apr 2006
    Location
    Brasil
    Posts
    534
I remember I couldn't use an HR-03 on the 4870 because there are so many tiny hot chips on the card that you need about a dozen extra (custom) heatsinks.

  9. #584
    Xtreme Addict
    Join Date
    Feb 2008
    Posts
    1,118
    Quote Originally Posted by BababooeyHTJ View Post
I run an HR-03 on my GTX 280, which uses the same VRMs, and I picked up some Enzotech MOS-C1s; they perform pretty well with a 120mm fan mounted on the drive bays across from the card. A little worse than the stock cooler at 100% fan speed, and a lot better than the Thermalright VRM plate. You may want to pick up a pack.
I can get some, but will they work? Also, I'm guessing I'll have to cut them down to fit under my heatsinks, right?
    _____________________________________________



    Rig = GA-P67A-UD3P Rev 1.0 - 2600K @ 5.2~GHz 1.5v~, 1.489~v Under Load - Swiftech Water Cooling - 2 X 4GB Corsair DDR3 2000Mhz @ 1868MHz~ 9,10,9,27 @ 1.65v~ - Asus 6970 @ 950MHz / 1450MHz - 3x Western Digital RE3 320Gb 16Mb Cache SataII Drives in Raid0 - Corsair HX 850w Power Supply - Antec 1200 Case - 3DMark 11 Score = P6234 - 3DVantage Score = P26237

  10. #585
    Xtreme Mentor
    Join Date
    May 2008
    Posts
    2,554
    Quote Originally Posted by truehighroller View Post
I can get some, but will they work? Also, I'm guessing I'll have to cut them down to fit under my heatsinks, right?
These VRMs run hot; I wouldn't cut them down. I would carefully bend the pins down before mounting them. They should work fine as long as they see some airflow. I'd imagine even a cut-down one would work better than what you're using now, but I would do anything possible to avoid cutting them down.

  11. #586
    Xtreme Member
    Join Date
    Jul 2005
    Posts
    294
What I don't understand is that people here at Xtreme Systems are blaming the programmer because he wrote a program that runs the GPUs at their maximum capacity and has found what appears (at least for now) to be a design flaw. These same people would be having fits if their CPU did the same thing when stress-tested at stock speeds.
    ASUS Rampage II
    Core i7 920 D0 @ 4.2GHz
    Corsair TR3X6G1600C8 3x2GB DDR3 1600MHz
    Radeon 5870
    Raid 0 2x1TB WD on SATA 3/4
    Seagate 160GB on SATA 1
    LG Bluray ROM/DVD RW
    Corsair HX1000W

  12. #587
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by ElAguila View Post
What I don't understand is that people here at Xtreme Systems are blaming the programmer because he wrote a program that runs the GPUs at their maximum capacity and has found what appears (at least for now) to be a design flaw. These same people would be having fits if their CPU did the same thing when stress-tested at stock speeds.
    It does happen to CPUs at stock speeds... with certain applications and code.
    Last edited by LordEC911; 05-29-2009 at 02:51 PM.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  13. #588
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    Barack Hussein Obama-Biden's Nation
    Posts
    1,084
    Quote Originally Posted by LordEC911 View Post
    It does happen to CPUs at stock speeds... with certain applications and code.
What applications/code?

I'm not talking about the TLB bug or a 1.13GHz Pentium III or anything like that. I'm talking about a CPU with a good reputation, like a C2D E6600 with a stock cooler.

    --two awesome rigs, wildly customized with
    5.1 Sony speakers, Stereo 3D, UV Tourmaline Confexia, Flame Bl00dr4g3 Fatal1ty
    --SONY GDM-FW900 24" widescreen CRT, overclocked to:
    2560x1600 resolution at 68Hz!(from 2304x1440@80Hz)

    Updated List of Video Card GPU Voodoopower Ratings!!!!!

  14. #589
    Muslim Overclocker
    Join Date
    May 2005
    Location
    Canada
    Posts
    2,786
That 82A theory is wrong.

I just ran this test on my reference 4870; it went above 83A (84A exactly) and was happy being there. That was at shader complexity 0 and 1600x1200.

Turned it up to shader complexity 1: crashes immediately. Shader complexity 3: immediately blows up.

Anyways, this doesn't make much sense (the way it crashes).

    My watercooling experience

    Water
    Scythe Gentle Typhoons 120mm 1850RPM
    Thermochill PA120.3 Radiator
    Enzotech Sapphire Rev.A CPU Block
    Laing DDC 3.2
    XSPC Dual Pump Reservoir
    Primochill Pro LRT Red 1/2"
    Bitspower fittings + water temp sensor

    Rig
    E8400 | 4GB HyperX PC8500 | Corsair HX620W | ATI HD4870 512MB


    I see what I see, and you see what you see. I can't make you see what I see, but I can tell you what I see is not what you see. Truth is, we see what we want to see, and what we want to see is what those around us see. And what we don't see is... well, conspiracies.



  15. #590
    Xtreme Guru
    Join Date
    Jan 2005
    Location
    Tre, Suomi Finland
    Posts
    3,858
    Quote Originally Posted by truehighroller View Post
    That tiny ramsink'ish thing you have on the VRMs is terribly inadequate for Volterra chips.
    You were not supposed to see this.

  16. #591
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    Quote Originally Posted by ElAguila View Post
What I don't understand is that people here at Xtreme Systems are blaming the programmer because he wrote a program that runs the GPUs at their maximum capacity and has found what appears (at least for now) to be a design flaw. These same people would be having fits if their CPU did the same thing when stress-tested at stock speeds.
It's just odd seeing the ATI cards running nearly double the FPS of the competing Nvidia cards in this test... that's why there are so many arguments.
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  17. #592
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
    Quote Originally Posted by ahmad View Post
That 82A theory is wrong.

I just ran this test on my reference 4870; it went above 83A (84A exactly) and was happy being there. That was at shader complexity 0 and 1600x1200.

Turned it up to shader complexity 1: crashes immediately. Shader complexity 3: immediately blows up.

Anyways, this doesn't make much sense (the way it crashes).
Well, I've got news for you: this has now been confirmed by AMD themselves, according to hardware.fr, a reference site in France, as you can tell by the name. Here's the article:
http://www.hardware.fr/news/10235/ra...-probleme.html

They reproduced the problem, and here is AMD's official position, which is interesting (let me translate the interesting parts for you, beyond the "we confirm the issue"; you can check it with Google Translate if you want). (Oh, and TDP = Thermal Design Power = http://en.wikipedia.org/wiki/Thermal_design_power )




    ...

    "This test (GPU:3D from OCCT) loads the Radeon with a charge so well balanced between texturing units, ROPs, computing units and memory that they all function to a very high percentage. The power supply stage wasn't designed to handle such a charge. It's thus shutting down, in security mode, to protect the card, which requires a reboot of the computer.

    AMD indicated us that this is a deliberate choice to protect the power supply stage components from overheating or overintensity (? don't know if this term is correct), both being obviously linked. If AMD GPUs can protect themselves in case the TDP is being surpassed, by reducing their frequency, they are incapable of communicating with the power supply stage controler, and cannot act if this one is surpassed by the load, its sole resort being of shutting down the card.

    It has been a few years since the VRM have been switching from analogical to numerical, providing a full monitoring. RivaTuner provides plugin that can give you access to those values. AMD was the first one to use such VRM on the X1950Pro, and it seems unconceivable that AMD didn't think it would be useful to use those values. AMD indicates that this problem will be fixed in later GPUs.

    For AMD, it is normal to design a Power supply stage that is not in function of the maximum power consumption of the GPU, because there's a huge difference between the theorical maximum and the maximum that is usually seen in practical applications. All the recent chips can draw more current than their TDP and have mecanism that can maintain them between these limits. However, what is not normal, is to have conceived a a power supply stage that does not correspond to the TDP. With the HD4870, 4870x2 and 4890, AMD fixed a limit to the TDP that is not possible to attain with the reference PCB. Imagine, for instance, a GPU that is capable of drawing 100A before going into protection mode and a power supply stage that is going into protection mode at 90A.

    To justify themselves, AMD insist on the fact that that no practical case will charge the GPU as much. You have to want it to be able to do it, and use a very specific code. Nothing says that it is not possible to have such code on a GeForce (BTW, I'm working on it, seeing if i cannot optimize it for GeForce. it'll probably be a different algorithm). Note that the problem is highly unlikely to appear in GPGPU computing, as in this mode, most GPU units are Idling, thus drawing less current.

    Those justifications from AMD changes nothing to the design error they made. Even if the problem appearing is very unlikely, nothing tell it won't happen again. And this is a door open to viruses and such malicious softwares."

    ...

(They say they did not reproduce the problem on a 4-phase card or on a 3-phase non-reference design card, which used different components.)
"Nonetheless, the fact that we did not reproduce it on those cards may be due to two things: either the VRMs are more powerful, or they are unprotected. While nothing indicates that the VRMs are going over their limits, in case of doubt we advise you not to abuse this test on your Radeon cards.

AMD told us it is not planning any modification to solve the problem on existing products. However, AMD may revise the limits on the GPU so that it goes into protection mode before the power supply stage does. AMD may also use, in software, the values reported by the VRM monitoring to reduce the GPU frequencies automatically and avoid the crash. AMD may also propose a new reference design, especially for the HD4890, but nothing is certain at the moment. AMD insists on the fact that the problem does not occur in any existing application (other than OCCT, of course). A justification which doesn't sit right with us, because it moves the Radeons from the "reliable" category to the "almost reliable" category.

Should you avoid those Radeons? We won't go that far, even if it would be logical for somebody who wants to avoid any potential problem to set aside the reference boards from AMD, or even, in doubt, buy a GeForce.

Needless to say, overclocked cards will encounter this problem, or different ones, in OCCT. In that case, neither AMD nor the manufacturer is responsible if the card shows no problems at stock values."
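
To make that threshold inversion concrete, here's a little sketch of the logic. This is purely my own illustration, nothing official from AMD, and the 100A and 90A figures are just the article's hypothetical example:

Code:
# Sketch of the protection ordering described above (illustrative only;
# the 100 A / 90 A figures are the article's hypothetical example).
GPU_THROTTLE_LIMIT = 100.0  # GPU reduces its clocks above this draw (A)
VRM_SHUTDOWN_LIMIT = 90.0   # VRM controller cuts power above this draw (A)

def card_response(current_a: float) -> str:
    # The VRM controller acts on its own; the GPU never hears about it.
    if current_a > VRM_SHUTDOWN_LIMIT:
        return "VRM protection trips -> card shuts off, PC needs a reboot"
    if current_a > GPU_THROTTLE_LIMIT:
        return "GPU throttles its clocks"  # unreachable: the VRM limit is lower!
    return "runs normally"

# A gaming-style load, ahmad's 84 A reading a few posts up, an OCCT-style load:
for amps in (72.0, 84.0, 95.0):
    print(f"{amps:.1f} A: {card_response(amps)}")

With the limits ordered that way around, the GPU's own throttling can never kick in before the VRM kills the card. That's the whole flaw in three lines.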





    As for the "don't abuse from the test", i'd say that by default, the shader complexity 0 is selected, and that value will never provoke the bug Radeons are encountering. That's the safety measure i took. Don't ditch my test

    Sorry for the quality of my translation. I'm doing my best... it's not that easy you know And my english is far from perfect, as you may have already seen.

What's important to note (IMHO):
• AMD acknowledges the problem under very specific app code
• The problem will be fixed in later cards by linking the power supply stage monitoring info to the GPU (a sketch of that idea follows at the end of this post). I still find it weird to design a GPU that cannot function at 100% of its capabilities... truly, this leaves me... abashed. However, they did not say they're planning to castrate OCCT. If that's true, I'm happy they're not doing it "app by app". Their approach is the right one IMHO. The best approach would be a card that can handle everything at full speed, which is what, marketing-wise, everybody boasts. I wonder why 3D cards function differently from every other piece of hardware in the PC field...
• The tester did reproduce it, and did a great job of isolating the problem
• AMD is planning a lot of modifications to get around the problem. It is important to them. Maybe even a new reference board! Wow. So much for a problem that does not exist... sorry, I'm savoring the moment a bit after the flame war we had here, even recently. Please, just let me get away with that one line; it's the only one I will write about it.


That's all about it. Sorry for the poor quality of the translation; perhaps Google Translate will do a better job of it, so I'll let you read whichever you like better. Trust me, I tried to do my best, and to stay as close to the original text as possible.
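
And since I brought it up in the notes above: the software-side fix the article mentions (using the digital VRM's monitoring values to back the clocks off before the hard limit trips) would look roughly like this. Again, purely my own sketch, not AMD's driver code; the telemetry is simulated here, and the limits are the same hypothetical figures as before:

Code:
import itertools
import time

VRM_SHUTDOWN_LIMIT = 90.0            # hypothetical hard limit, as above
SOFT_LIMIT = VRM_SHUTDOWN_LIMIT - 5  # back off with a safety margin

# Simulated telemetry; in reality this would be the digital VRM's current
# reading (the kind of value RivaTuner's monitoring plugin exposes).
simulated_amps = itertools.cycle([70.0, 80.0, 86.0, 89.0, 84.0])

def driver_watchdog(samples=5, stock_mhz=750, throttled_mhz=500):
    # Poll the VRM and reduce the GPU clock *before* the hard limit trips,
    # instead of letting the VRM cut power and hang the machine.
    for _ in range(samples):
        amps = next(simulated_amps)
        clock = throttled_mhz if amps > SOFT_LIMIT else stock_mhz
        print(f"VRM at {amps:.1f} A -> GPU clock {clock} MHz")
        time.sleep(0.1)

driver_watchdog()

That's all a "software" fix can really be: poll, compare, down-clock. The proper fix is doing the same thing in hardware, which is what they say later GPUs will get.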
    Last edited by Tetedeiench; 06-01-2009 at 02:03 AM. Reason: The new reference board is not for sure ;) editing the post.

  18. #593
    Moderator
    Join Date
    Mar 2006
    Posts
    8,556
    Quote Originally Posted by LordEC911 View Post
    It does happen to CPUs at stock speeds... with certain applications and code.
    Which ones? Link?

  19. #594
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
Quote Originally Posted by Tetedeiench View Post
Well, I've got news for you: this has now been confirmed by AMD themselves, according to hardware.fr... [snip: full post, with the translation, quoted in #592 above]
So it's like a lot of people said... it's not really a design flaw at all, it's just a safety-measure sort of thing..

Although personally I'd prefer to have the order the other way around from what AMD did (so the GPU goes into protection at 80A or whatever, then the v-regs at 90, as an example). That would be the easiest way to get around it, and very, very easy to implement.

Saying that, they DID say the only way you're going to manage to do this is if you actually TRY to make it happen...

It does make sense, in a way, that you're not going to expect someone to max the card in every way it can be maxed, etc...

But hey, congrats on your find. I wonder if/when you finish the code for Nvidia cards whether they have it the same way as ATI, or the opposite...

TBH I'm GUESSING it will be the same, as they've been using the same Volterra VRMs lately as the ATI cards in question.



End of the line, it seems. There's nothing to worry about anyway, as you have to actually go out of your way to make this happen..
And if you do worry, everything will be fixed in the next cards by ATI.


sooooo


carry on as normal, people
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  20. #595
    Moderator
    Join Date
    Mar 2006
    Posts
    8,556
Quote Originally Posted by Jamesrt2004 View Post
So it's like a lot of people said... it's not really a design flaw at all, it's just a safety-measure sort of thing.. [snip]
Ya, a safety measure..... Forgive the car analogy, but it's like a safety measure that causes a Fiat engine to cut out at 60mph... in the body of a Ferrari.

  21. #596
    Xtreme Member
    Join Date
    Dec 2006
    Posts
    213
Quote Originally Posted by Jamesrt2004 View Post
So it's like a lot of people said... it's not really a design flaw at all, it's just a safety-measure sort of thing.. [snip]
It is a design flaw. Why?
• Because it triggers before the TDP is reached
• Because there's no link between the GPU and those measurements
• And AMD acknowledges it implicitly by taking steps, going as far as considering a new reference design board


Please read it all before answering. It IS a design flaw. You should never be able to trigger that safety measure, especially at stock speeds.

  22. #597

    ...

    Quote Originally Posted by Tetedeiench View Post
It is a design flaw. Why?
Please read it all before answering. It IS a design flaw. You should never be able to trigger that safety measure, especially at stock speeds.
    Unless you use specially prepared code that has nothing to do with real applications or games. Why do you fail to acknowledge that every time?

Yeah, we know "it's your baby", but c'mon!

  23. #598
    Moderator
    Join Date
    Mar 2006
    Posts
    8,556
    Quote Originally Posted by Shadov View Post
    Unless you use specially prepared code that has nothing to do with real applications or games. Why do you fail to acknowledge that every time?

Yeah, we know "it's your baby", but c'mon!
Because we all know that you could use a very similar piece of code for a nice bit of graphics in a lot of scenarios. That's why.

  24. #599
    Xtreme Addict
    Join Date
    Feb 2008
    Posts
    1,118
I still think this is bull myself. I want it fixed. I will call their asses, because I seriously think that I am having some sort of issue with at least one of my video cards. I think my other one might be fine. I have noticed that one card gets way hotter than the other one, and yes, I know the one at the top of the two will naturally get hotter.

I don't know. I will do some further troubleshooting, but IMO, with all the fans and heatsinks I have, I shouldn't be having issues like this. Which heatsink is the best one to use all around for the VRMs on these things? I still have the original heatsinks; if someone here can tell me how to use the plate to cool these things down better, I would be very grateful.
    Last edited by truehighroller; 05-30-2009 at 07:36 AM.
    _____________________________________________



    Rig = GA-P67A-UD3P Rev 1.0 - 2600K @ 5.2~GHz 1.5v~, 1.489~v Under Load - Swiftech Water Cooling - 2 X 4GB Corsair DDR3 2000Mhz @ 1868MHz~ 9,10,9,27 @ 1.65v~ - Asus 6970 @ 950MHz / 1450MHz - 3x Western Digital RE3 320Gb 16Mb Cache SataII Drives in Raid0 - Corsair HX 850w Power Supply - Antec 1200 Case - 3DMark 11 Score = P6234 - 3DVantage Score = P26237

  25. #600
    Registered User
    Join Date
    Jan 2009
    Posts
    15
Gainward 4870 512MB GDDR5, reference model.
Crysis patch 1.2, 1600x1200, no AA/AF (Catalyst settings on default). Played about 5 min on the carrier level, fighting the aliens on the deck, but the temps get higher after 30 min to an hour of continuous play.
I didn't use the RivaTuner OSD in-game, because that limits the voltage regulator current reading to about 60A, giving a false value; the RivaTuner monitor running in the background does not, and shows the true value, which as you can see in the screenshot is 72.48A.
My best bet is that the VRMs get hot (I have seen them at over 100°C), can't handle the load, become less effective, and the card crashes under the 83A limit; or it reaches the 83A limit in-game. I can't be certain which, because the crashes seem to appear after about 30 min to an hour of gameplay and are too random to monitor.
I am not using dual screens.
BTW, my card seems to squeal, which it shouldn't, because ATI uses digital VRMs (not sure about the term).
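
Since the crashes are too random to catch live, one thing that helps is letting the monitor log to a file during play and scanning the log afterwards for the peak values. A rough sketch of the idea in Python (the CSV layout and the "vrm_amps"/"vrm_temp" column names are my assumptions; rename them to match whatever your monitoring tool actually exports):

Code:
import csv

# Scan a hardware-monitoring log for the peak VRM readings, e.g. from the
# minutes leading up to a crash. Column names are assumed; adjust to match
# your monitoring tool's actual export format.
def peak_vrm_readings(path):
    max_amps = max_temp = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            max_amps = max(max_amps, float(row["vrm_amps"]))
            max_temp = max(max_temp, float(row["vrm_temp"]))
    return max_amps, max_temp

amps, temp = peak_vrm_readings("vrm_log.csv")
print(f"peak VRM current: {amps:.2f} A, peak VRM temp: {temp:.1f} C")

If the peak current in the log sits well under 83A right before a crash, that points at VRM temperature rather than the current limit.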
    Last edited by bluedevil; 05-30-2009 at 06:38 AM.
