I get crashing on a non-overclocked HD4890 (with an older game) until I underclock pretty heavily :X There could be some meat to this.
With motherboards like mine, you can increase the power going to each individual PCIe slot. The stock mobo setting is 75W; I have mine at 150W, as recommended by other Asus M3A79 and M4A79 users with HD4870/4890s. We use this for heavy overclocking; it allows higher core and memory clocks as well. Most guys have it bumped to 200W.
Here's a link to what I'm talking about: http://www.xtremesystems.org/forums/...201154&page=55
EDIT: The purpose being, some motherboards don't supply an ample amount of power to the slot at stock BIOS settings.
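For rough context, these are the spec allowances involved (PCI-SIG numbers only, assuming the reference 2x6-pin layout these cards use):

75W (slot) + 75W (6-pin) + 75W (6-pin) = 225W total board power at spec

so a 150W or 200W slot setting raises the share the slot is allowed to carry, well beyond what the spec guarantees.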
Okay...
So you're buying a card, you cannot use it to 100% of its own capability, and that's not an issue? I fail to see the point here, guys.
GPU:3D is just like any other 3D application. It uses functions provided by DirectX, nothing else. If the problem arose only with overclocked cards, well, fine. But the problem is arising with cards running at stock speed.
Imagine the same problem with CPUs. You buy the latest Intel CPU. Cool. You want to encode your movie, and BAM, crash. You're like, what? Yes, you have to SLOW DOWN the encoding for the process to complete. Otherwise, your CPU fails.
Again, the problem is not temperature-related. It is related to a limit of some sort that we reach, a hardware limit. We opened the case, added fans, and whoops, that did nothing.
If one day a company decides to use very, very optimised shaders for their game, that might crash the card. Why? Because the card is drawing too much power.
What I can say right now, with my card, is this (until proven wrong):
I have shown that HD4870 and HD4890 cards following the reference design cannot sustain their own specification. One day, this limit will be reached again by another app. If I could do it, another one will be able to do it.
Well, the problem is:
1- I don't have the means for another testing method. I'm a single developer; the only ATI video card I have access to is an HD2600 Mobility. My main computer is equipped with a GTX285, and I only have one computer. Imagine me as a guy just like yourself, spending his time developing OCCT.
I tried to get more professional testing done. So far, no good. People are just plain ignoring my emails. I thought that having more than 1,500,000 downloads would be enough to get at least some listening, or a professional test when you uncover something in your own field (which is stability testing). Seems not. I have enough proof to get at least a "huh, let's check, he may be right". I mean, we had this happen on a PC with a ToughPower 1500W. I doubt the PSU was the issue.
2- No game currently exists which raises this issue. That doesn't mean one never will. And again, if one app could trigger this problem, any other app can.
If you buy a car that boasts it can reach 280 mph on a track, I doubt you'll be happy to learn it can only reach 200 mph on said track, even if you're only allowed 100 mph in real life... that's what we're talking about.
I am actually longing for professional testing proving me right or wrong. I'm almost sure I'm right at the moment. Problem is, I've gone as far as I could with my limited testing means.
You want to help me? Prove me wrong. But I'm sure your HD4870/4890 is going to black screen with my test right now, if you do have one... no?
This is an old issue, and it also applies to nVidia. Change/rename a game's exe file so the driver profiles don't apply, and the idle/clock states aren't handled properly. Not long ago, as an example, a lot of nVidia cards and some AMD cards overheated in EVE Online while docked in a station, simply due to missing idle states after some changed shader code.
You can thank nVidia/AMD for poor designs and driver hacks. Same reason the drivers are like 100MB these days instead of 20.
Again, what we're encountering here is not an overheating problem; it's a failure in the power supply stage of the card. The temps were handled fine. Watercooled cards were tested, and they crashed all the same (while running much cooler).
The very same card, reaching the very same temps (even higher), but with different VRMs, runs the very same tests fine.
So I don't think the problem is the same.
I don't understand why people are questioning that there might be a problem with the VRM implementation on said cards. Would any of you say the same if you bought a shiny new i7 or 955BE, ran IBT on your favourite board to check it's stable, and whoops! It crashes at stock because the processor draws more current than the VRM can handle. What kind of argument is it to say exactly that when it comes to GPUs? That's bollocks.
To me it rather sounds like someone "economized" a bit too much here at ATI. One more phase and nothing would happen anymore. Wow, that's just cheap.
@Tetedeiench, run FurMark and you will see the same results. It is the only other benchmark with a similar problem.
It seems FurMark doesn't reach the 82A limit: it hits about 78A on the VRM (Xtreme Burning mode, fullscreen, exe renamed), a figure picked up from one of my beta-testers' comparative tests.
My test reaches about 87A (I've found the value) on the very same card.
Or at least, I'm reaching it almost instantly!
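To put those currents in ballpark wattage terms (back-of-the-envelope only, assuming a core voltage of roughly 1.3V on a stock HD4890; the exact VDDC varies per card and per load):

78A x 1.3V ≈ 101W (FurMark, below the limit)
82A x 1.3V ≈ 107W (the apparent protection threshold)
87A x 1.3V ≈ 113W (my test)

The trip point itself is the current, though; the wattage is only there to give a feel for what the core VRM is being asked to deliver.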
Tetedeiench - would you make a list of the good/bad cards so I/we can stay away from the bad ones?
Thanks.
Still, I have voltmodded both 4870s and a 4870X2 and clocked them up, and never, playing any games or running any benches, have they crashed due to VRM overload... I mean, sure, this odd test with a furry donut can do it, but realistically, when will this "furry donut" game be released? My guess is not before the end of this year, and most likely not before the end of next year...
(I will test with my XFX 4890 black-PCB reference cards with 6+8-pin power on Friday.)
That's what we do here: we benchmark/test stuff. So please, guys, if you have an HD4870 or HD4890, test what the man is saying, screenshot your test, and post it. (I would test, but all I have now is a GTX 260 SP216 and an 8800GTX.)
And if you are a guru electrical engineer, please explain this if you can.
I can see all the ATI fanboys' faces now if this ends up being true.
Exactly why I am a big fan of a standard 2-hour OCCT run for my CPU stress testing. I don't do IBT or Linpack, ever.
It will be interesting to see if the 8-pin makes a difference on the crash.
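For reference, the connector allowances at spec (spec numbers only, not measured draw):

6-pin + 6-pin + slot: 75 + 75 + 75 = 225W
8-pin + 6-pin + slot: 150 + 75 + 75 = 300W

Although if the trip really is the card's own core VRM hitting its current limit, the extra connector headroom wouldn't necessarily move it.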
This is my thought as well. I don't know the technical specifics, but I don't see any furry donuts to fight in the sequels to the games I play, although you never know what kind of BS Crytek could cook up.
Is the 4850 not affected? I really don't see the point in this. As much as I love OCCT, the point of stability testing is not to throw a 200% load at the card. It should (in my opinion) be about a 90% GPU load, monitored for a good 2 hours.
I sometimes run FurMark on my 4850 in Xtreme Burning mode to check my overclock. If it passes that, that's enough. I run OCCT for my CPU because it catches errors faster.
My 4850 hits 90°C in FurMark; I'm not going to be stupid enough to run it up around 110-120°C. That's just kind of pointless. If you run it that hot or under that much load, it will eventually fail.
lol, well, I just tried this... on a stock reference 4870...
Well... it started the test, then there was just a black screen with my mouse cursor...
I pressed Esc after about a minute of nothing happening,
then everything was fine; it just went back to the normal OCCT screen...
Nothing was wrong... the test just didn't load up. It didn't even go into 3D clocks...
"Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
//James
Well, congrats, you have found the most stressful application for these cards, and AMD should use your program to stress test their next cards before giving us something half-built.
The next, and most important, question is: which card can run this benchmark faster, the 4890 or the 275?