I wish people would learn how to use Imageshack...
You were not supposed to see this.
"Oh no, my computers unstable, LOOK some guy on the interwebs found a flaw(which btw only becomes apparent under *unusually high* load), that MUST be why my card is breaking, even though the problem that the guy uncovered has nothing to do with my situation!"
truehighroller and bluedevil, I highly doubt it's this "flaw" causing your issue.
Core 2 Duo(Conroe) was based on the Intel Core Duo(Yonah) which was based on the Pentium M(Banias) which was based on the Pentium III(Coppermine).
Core 2 Duo is a Pentium III on meth.
I have a PowerColor 4890 and added an Accelero S1 Rev. 2. I noticed that the VRM is pretty hard to cool down. I think my PowerColor has a VRM shutdown temperature: if the VRM goes over 90°C, somewhere in the 95-110°C range my system restarts. As you can see, my VRM current is 32.66 A. It runs FurMark fine at 900/1000 and can run indefinitely. I can overclock it to 980/1150 and run games if I can keep the temps down, but if I run benchmarks where the GPU is at 100% all the time, the amount of heat produced exceeds what the fan can take off. I don't think I have reached the 82/83 A limit; the enormous amount of heat produced by the VRM seems to hit the limit for the reference design, which I think is poorly done. The non-reference PowerColor design seems better; they say it's 10+°C cooler.
http://img24.imageshack.us/img24/1852/4890oc2.jpg
So, anybody care to comment on this one for me? What should I do? Is there anything I can do?
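The restart behaviour described above (system resets once the VRM passes its trip temperature) can be sketched as a minimal toy model. The threshold and temperatures are assumptions taken from the post, not the card's real firmware values:

```python
# Toy sketch of a VRM thermal trip: once the temperature crosses the
# (assumed) protection threshold, the card cuts power and the system
# restarts. All numbers are illustrative, from the post above.

VRM_TRIP_C = 90  # assumed trip threshold; the post saw restarts at 95-110 C

def vrm_protection(temp_c, trip_c=VRM_TRIP_C):
    """Return True if the hypothetical VRM protection would fire."""
    return temp_c >= trip_c

# Example temperatures while benchmarking at 980/1150
for t in (78, 85, 92):
    state = "SHUTDOWN" if vrm_protection(t) else "ok"
    print(f"VRM at {t} C -> {state}")
```

This is only a way to state the observed behaviour precisely; the real protection lives in the VRM controller, not in software.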
_____________________________________________
Rig = GA-P67A-UD3P Rev 1.0 - 2600K @ 5.2~GHz 1.5v~, 1.489~v Under Load - Swiftech Water Cooling - 2 X 4GB Corsair DDR3 2000Mhz @ 1868MHz~ 9,10,9,27 @ 1.65v~ - Asus 6970 @ 950MHz / 1450MHz - 3x Western Digital RE3 320Gb 16Mb Cache SataII Drives in Raid0 - Corsair HX 850w Power Supply - Antec 1200 Case - 3DMark 11 Score = P6234 - 3DVantage Score = P26237
Want me to post pics of my cooling setup for you? I will.
Here is a pic of my cooling setup.
http://img15.imageshack.us/img15/6901/1022096.jpg
Another PIC.
http://img43.imageshack.us/img43/9351/1022097x.jpg
Tell me what you think. I actually used a post on this site to figure out what to put heatsinks on. If I need to, and can, I will order better cooling for the VRMs.
Last edited by truehighroller; 05-29-2009 at 10:15 AM.
I run an HR-03 on my GTX 280, which uses the same VRMs, and I picked up some Enzotech MOS-C1s; they perform pretty well with a 120mm fan mounted on the drive bays across from the card. A little worse than the stock cooler at 100% fan speed, and a lot better than the Thermalright VRM plate. You may want to pick up a pack.
I remember I couldn't use an HR-03 on the 4870 because there are so many tiny hot chips on the card that you need about a dozen extra (custom) heatsinks.
These VRMs run hot; I wouldn't cut them down. I would carefully bend the pins down before mounting them. They should work fine as long as they get some airflow. I imagine even a cut-down one would work better than what you are using now, but I would do everything possible to avoid cutting them down.
What I don't understand is that people here at Xtreme Systems are blaming the programmer because he wrote a program that runs GPUs at their maximum capacity and has found what appears (at least for now) to be a design flaw. These same people would be having fits if their CPU did the same thing when stress-tested at stock speeds.
ASUS Rampage II
Core i7 920 D0 @ 4.2GHz
Corsair TR3X6G1600C8 3x2GB DDR3 1600MHz
Radeon 5870
Raid 0 2x1TB WD on SATA 3/4
Seagate 160GB on SATA 1
LG Bluray ROM/DVD RW
Corsair HX1000W
Last edited by LordEC911; 05-29-2009 at 02:51 PM.
--two awesome rigs, wildly customized with
5.1 Sony speakers, Stereo 3D, UV Tourmaline Confexia, Flame Bl00dr4g3 Fatal1ty
--SONY GDM-FW900 24" widescreen CRT, overclocked to:
2560x1600 resolution at 68Hz! (from 2304x1440@80Hz)
Updated List of Video Card GPU Voodoopower Ratings!!!!!
That 82A theory is wrong.
I just ran this test on my reference 4870, went just above 83 (84 A exactly) and it was happy sitting there. That was at shader complexity 0 and 1600x1200.
Turned it up to shader complexity 1: crashes immediately. Shader complexity 3: blows up immediately.
Anyway, how it crashes doesn't make much sense.
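One way to picture why shader complexity matters so much is a toy model where each unit type contributes to the total current draw, and only a load that keeps every unit busy at once pushes the total past the VRM limit. Every unit name and amp figure below is a made-up assumption for illustration, not a measurement:

```python
# Toy model: total current rises with how evenly the load covers the
# GPU's unit types; the card only trips when the combined draw passes
# the VRM limit (~83 A for the reference design discussed here).
# All per-unit figures are illustrative assumptions.

VRM_LIMIT_A = 83.0

# assumed current contribution of each unit type at full load (amps)
UNIT_DRAW = {"texture": 25.0, "rop": 20.0, "alu": 30.0, "memory": 15.0}

def total_draw(utilisation):
    """utilisation: dict of unit -> 0.0..1.0 load fraction."""
    return sum(UNIT_DRAW[u] * utilisation.get(u, 0.0) for u in UNIT_DRAW)

# lower shader complexity: ALUs/memory not fully loaded -> under the limit
low = total_draw({"texture": 1.0, "rop": 1.0, "alu": 0.8, "memory": 0.9})
# higher complexity: everything near 100% -> crosses the limit
high = total_draw({"texture": 1.0, "rop": 1.0, "alu": 1.0, "memory": 1.0})

print(f"low load:  {low:.1f} A, trips={low > VRM_LIMIT_A}")
print(f"high load: {high:.1f} A, trips={high > VRM_LIMIT_A}")
```

The point of the sketch is just that a small change in how balanced the load is can move the total from just under the limit to well over it, which matches the "complexity 0 fine, complexity 1 instant crash" observation.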
My watercooling experience
Water
Scythe Gentle Typhoons 120mm 1850RPM
Thermochill PA120.3 Radiator
Enzotech Sapphire Rev.A CPU Block
Laing DDC 3.2
XSPC Dual Pump Reservoir
Primochill Pro LRT Red 1/2"
Bitspower fittings + water temp sensor
Rig
E8400 | 4GB HyperX PC8500 | Corsair HX620W | ATI HD4870 512MB
I see what I see, and you see what you see. I can't make you see what I see, but I can tell you what I see is not what you see. Truth is, we see what we want to see, and what we want to see is what those around us see. And what we don't see is... well, conspiracies.
Well, I've got news for you: this has now been confirmed by AMD themselves, according to hardware.fr, a reference in France as you can tell by the name. Here's the article:
http://www.hardware.fr/news/10235/ra...-probleme.html
They reproduced the problem, and here is AMD's official position, which is interesting. Let me translate the interesting parts for you, beyond the "we confirm the issue"; you can check it with Google Translate if you want. (And TDP = Thermal Design Power = http://en.wikipedia.org/wiki/Thermal_design_power )
...
"This test (GPU:3D from OCCT) loads the Radeon with a charge so well balanced between texturing units, ROPs, computing units and memory that they all function to a very high percentage. The power supply stage wasn't designed to handle such a charge. It's thus shutting down, in security mode, to protect the card, which requires a reboot of the computer.
AMD indicated us that this is a deliberate choice to protect the power supply stage components from overheating or overintensity (? don't know if this term is correct), both being obviously linked. If AMD GPUs can protect themselves in case the TDP is being surpassed, by reducing their frequency, they are incapable of communicating with the power supply stage controler, and cannot act if this one is surpassed by the load, its sole resort being of shutting down the card.
It has been a few years since the VRM have been switching from analogical to numerical, providing a full monitoring. RivaTuner provides plugin that can give you access to those values. AMD was the first one to use such VRM on the X1950Pro, and it seems unconceivable that AMD didn't think it would be useful to use those values. AMD indicates that this problem will be fixed in later GPUs.
For AMD, it is normal to design a Power supply stage that is not in function of the maximum power consumption of the GPU, because there's a huge difference between the theorical maximum and the maximum that is usually seen in practical applications. All the recent chips can draw more current than their TDP and have mecanism that can maintain them between these limits. However, what is not normal, is to have conceived a a power supply stage that does not correspond to the TDP. With the HD4870, 4870x2 and 4890, AMD fixed a limit to the TDP that is not possible to attain with the reference PCB. Imagine, for instance, a GPU that is capable of drawing 100A before going into protection mode and a power supply stage that is going into protection mode at 90A.
To justify themselves, AMD insist on the fact that that no practical case will charge the GPU as much. You have to want it to be able to do it, and use a very specific code. Nothing says that it is not possible to have such code on a GeForce (BTW, I'm working on it, seeing if i cannot optimize it for GeForce. it'll probably be a different algorithm). Note that the problem is highly unlikely to appear in GPGPU computing, as in this mode, most GPU units are Idling, thus drawing less current.
Those justifications from AMD changes nothing to the design error they made. Even if the problem appearing is very unlikely, nothing tell it won't happen again. And this is a door open to viruses and such malicious softwares."
...
(They say they did not reproduce the problem on a 4-phase card or on a 3-phase non-reference design card, which were using different components.)
"Nonetheless, the fact that we did not reproduce it on those cards might be due to two things: either the VRMs are more powerful, or they are unprotected. While nothing indicates the VRMs are going over their limits, in doubt, we advise you not to overdo this test on your Radeon cards.
AMD told us it is not planning any modification to solve the problem on existing products. However, AMD may revise the limits on the GPU so that it goes into protection mode before the power supply stage does. AMD may also use, in software, the values reported by the VRM monitoring to reduce the GPU frequencies automatically and avoid the crash. AMD may also propose a new reference design, especially for the HD 4890, but nothing is certain at the moment. AMD insists that the problem does not occur in any existing application (other than OCCT, of course). A justification which doesn't sound right to us, because the Radeons move from the "reliable" category to the "almost reliable" category.
Should you avoid those Radeons? We won't go that far, even if it would be logical for somebody who wants to avoid any potential problem to set aside the reference boards from AMD, or even, in doubt, buy a GeForce.
Needless to say, overclocked cards will encounter this problem, or different ones, in OCCT. In that case, neither AMD nor the manufacturer is responsible if the card shows no problems at stock values."
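The 100 A / 90 A illustration in the quote above can be stated as a tiny sketch. The numbers are the article's hypothetical example, not real specs for any card:

```python
# Sketch of the design mismatch in the quoted example: the GPU's own
# throttle would fire at 100 A, but the VRM stage shuts the card down
# at 90 A, so the VRM protection always wins and the machine
# hard-crashes instead of the GPU throttling. Illustrative values only.

GPU_THROTTLE_A = 100.0  # GPU reduces clocks above this (assumed)
VRM_TRIP_A = 90.0       # VRM stage shuts the card down above this (assumed)

def first_protection(current_a):
    """Return which protection fires first as the current ramps up."""
    if current_a >= min(GPU_THROTTLE_A, VRM_TRIP_A):
        return "VRM shutdown" if VRM_TRIP_A <= GPU_THROTTLE_A else "GPU throttle"
    return "none"

print(first_protection(85))   # under both limits: nothing fires
print(first_protection(95))   # VRM trips before the GPU ever throttles
```

Swapping the limits so the GPU throttle sits below the VRM trip would make `first_protection` answer "GPU throttle" instead, which is essentially the fix the article says AMD is considering for later GPUs.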
As for "don't overdo the test": by default, shader complexity 0 is selected, and that value will never trigger the bug the Radeons are encountering. That's the safety measure I took. Don't ditch my test.
Sorry for the quality of my translation. I'm doing my best... it's not that easy, you know. And my English is far from perfect, as you may have already seen.
What's important to note (IMHO) :
- AMD acknowledges the problem on very specific app code
- The problem will be fixed in later cards by linking the power supply stage monitoring info to the GPU. I still find it weird to design a GPU that cannot function at 100% of its capabilities... truly, this leaves me... abashed. However, they did not say they're planning to castrate OCCT. If that's true, I'm happy they're not doing it "app by app". Their approach is the right one IMHO. The best approach would be a card that can handle everything at full speed, which is what, marketing-wise, everybody boasts. I wonder why 3D cards function differently from every other piece of hardware in the PC field...
- The tester did reproduce it, and did a great job of isolating the problem
- AMD is planning a lot of modifications to get around the problem. It is important to them. Maybe even a new reference board! Wow. So much for a problem that does not exist... sorry, I'm savoring the moment a bit after the flame war we had here, even recently
Please, just let me pass this line
It's the only one i will write about it.
That's about all of it. Sorry for the poor quality of the translation; perhaps Google Translate will do a better job, I'll let you read whichever you like better. Trust me, I tried to do my best and to stay as close to the original text as possible.
Last edited by Tetedeiench; 06-01-2009 at 02:03 AM. Reason: The new reference board is not for sure ;) editing the post.
So it's like a lot of people said... it's not really a design flaw at all, it's just a safety measure sort of thing.
Although personally I'd prefer to have the order the other way around from what AMD did (so the GPU trips at 80 A or whatever, then the v-regs at 90, as an example). That would be the easiest way to get around it and very, very easy to implement.
That said, they DID say the only way you're going to manage to do this is if you actually TRY to make it happen...
It does make sense in a way; you're not going to expect someone to max the card in every way it can be maxed.
But hey, congrats on your find. I wonder if/when you finish the code for Nvidia cards whether they have it the same way as ATI, or the opposite...
TBH I'm GUESSING it will be the same, as they've lately used the same Volterra VRMs as on the ATI cards in question.
End of the line, it seems. There's nothing to worry about anyway, as you have to actually go out of your way to make this happen...
And if you do worry, everything will be fixed in ATI's next cards.
sooooo
carry on as normal, people!
"Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
//James
It is a design flaw. Why?
- Because it triggers before the TDP
- Because there's no link between the GPU and those measurements
- And AMD acknowledges this implicitly by taking steps, going as far as considering a new reference design board
Please, read it all before answering. It IS a design flaw. You should never be able to trigger that safety measure, especially at stock speed.
I still think this is bull, myself. I want it fixed. I will call them, because I seriously think that I am having some sort of issue, at least with my one video card. I think my other one might be fine. I have noticed that the one card gets way hotter than the other, and yes, I know the one at the top of the two will naturally get hotter.
I don't know, I will do some further troubleshooting, but IMO, with all the fans and heatsinks I have, I shouldn't be having issues like this. Which heatsink is the best one to use all around for the VRMs on these things? I still have the original heatsinks; if someone here can tell me how to use the plate to cool these things down better, I would be very grateful.
Last edited by truehighroller; 05-30-2009 at 07:36 AM.
_____________________________________________
Rig = GA-P67A-UD3P Rev 1.0 - 2600K @ 5.2~GHz 1.5v~, 1.489~v Under Load - Swiftech Water Cooling - 2 X 4GB Corsair DDR3 2000Mhz @ 1868MHz~ 9,10,9,27 @ 1.65v~ - Asus 6970 @ 950MHz / 1450MHz - 3x Western Digital RE3 320Gb 16Mb Cache SataII Drives in Raid0 - Corsair HX 850w Power Supply - Antec 1200 Case - 3DMark 11 Score = P6234 - 3DVantage Score = P26237
Gainward 4870 512MB GDDR5 reference model
Crysis patch 1.2, 1600x1200, no AA/AF (Catalyst settings on default). Played about 5 min on the carrier level fighting the aliens on deck, but the temps get higher after 30 min to 1 hour of continuous play.
I didn't use the RivaTuner OSD in-game because that limits the voltage regulator current reading to about 60 A, thus giving a false value, but the RivaTuner monitor running in the background does not, and shows the true value, which you can see in the screenshot is 72.48 A.
My best guess is that the VRMs get hot (I have seen them at over 100°C), can't handle the load, become less effective, and the card crashes under the 83 A limit; or it reaches the 83 A limit in-game. I can't be certain, because the crashes seem to appear after about 30 mins to 1 hour of gameplay and are too random to monitor.
I am not using dual screens.
BTW, my card seems to squeal, which it shouldn't, because ATI uses digital VRMs (not sure about the term).
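One way around "too random to monitor" is to log each reading to disk and flush immediately, so the last values before a hard reset survive the crash. In this sketch, `read_vrm()` is a placeholder for however your monitoring tool exposes the numbers (e.g. RivaTuner's own logging); it is not a real API:

```python
# Append each VRM reading to a log file and flush right away, so the
# tail of the file shows the last values before a hard crash/reset.
# read_vrm() is a hypothetical stand-in for a real monitoring source.

import time

def read_vrm():
    """Placeholder: return (amps, temp_c) from your monitoring tool."""
    return 72.48, 101.0  # example values from the post above

def log_vrm(path="vrm_log.csv", interval_s=1.0, samples=3):
    with open(path, "a") as f:
        for _ in range(samples):
            amps, temp = read_vrm()
            f.write(f"{time.time():.0f},{amps:.2f},{temp:.1f}\n")
            f.flush()  # push the line out immediately, before any crash
            time.sleep(interval_s)

log_vrm(samples=1)  # after a crash, check the tail of vrm_log.csv
```

RivaTuner's background logging does the same job; the point is simply that flushing per sample is what lets the reading at crash time survive a hard reboot.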
Last edited by bluedevil; 05-30-2009 at 06:38 AM.