I hate to quote myself, but it seems a few people in the last 10 pages chose to ignore this gem :)
So I'm giving it again in the hope that they read and understand what is being said.
Has anyone else tested on Catalyst 9.2?
....
I'm not sure whether my card powering off right away upon launching this test is supposed to convince me of anything.
Well, your code definitely triggered OCP on multiple cards, including mine. Maybe you should ask ATI (if you haven't already) how much current they rate their cards for and how much current they expect the RV770 core to draw under maximum-load conditions.
Btw, I'm using Catalyst 9.4 WHQL and running Windows XP Pro x86 SP3.
Since I'm one of the ones experiencing zero issues, I figured I'd post this for comparison. Testing length for this was 2 min; I can go longer if anyone is interested. My earlier test is here.
http://gpuz.techpowerup.com/09/05/21/503.png
(Please note I have the max values displayed for VDDC temps and amps.)
They have their logo on my website (www.ocbase.com).
Here is the list:
* Nvidia sent me cards as I needed them (a 9800 GTX so that I could get my hands on CUDA, and a GTX 285 for debugging purposes on GPU:3D).
* Intel sent me a Core i7 EE 1066 :)
* Gigabyte sent a motherboard (EX58-UD4P) for the Core i7.
* Materiel.net (French website, reseller) is helping me get known in those companies. They're really doing a great job. Basically, whenever they get in touch with a sales contact, they mention me ;)
* LDLC.com (French website, reseller) is lending me test hardware (used cards) should the need arise. I'm using this sparingly, as it comes at a cost for them, and I want to limit it to cases where testing is really needed.
Believe me, I tried to contact AMD, without success :(
Did you run the test in fullscreen mode?
Your VDDC max stayed below 82 A; that's why you're not experiencing the issue.
The 82 A limit seems to hold true so far...
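For a rough sense of scale, here's a quick back-of-the-envelope sketch. The 1.26 V load VDDC is an assumption on my part (a typical stock value for a reference HD 4870; check GPU-Z on your own card), not something I've measured:
[code]
#include <cstdio>

int main()
{
    // Assumed stock load VDDC for a reference HD 4870 -- an assumption,
    // not a measured value; actual voltage varies per card and BIOS.
    const double vddc_volts = 1.26;
    const double trip_amps  = 82.0; // trip point reported in this thread

    // P = V * I: power through the core VRM phases alone, before memory,
    // fan, and conversion losses are added on top.
    std::printf("Core power near the trip point: ~%.0f W\n",
                vddc_volts * trip_amps);
    return 0;
}
[/code]
That's on the order of 100 W flowing continuously through the core VRM alone, which gives an idea of why the protection would have a ceiling around there.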
PLEASE NOTE THAT IN THE STABLE VERSION, SHADER COMPLEXITY 0 IS SELECTED AS DEFAULT IN OCCT. YOU HAVE TO SELECT COMPLEXITY 3 MANUALLY, AS OPPOSED TO THE RC1, WHICH AUTO-SELECTED "OPTIMAL"
they've been quite a bit hotter for longer :P
Yes, it is fullscreen, and yes, complexity 3 is selected.
After playing around, I am able to recreate the black screen issue. By going into the driver settings and taking the anisotropic filtering off of forced 16x and onto "use application setting", it will produce a black screen/freeze every time for me. Changing any other setting to "use application setting" causes no issues.
I'm certainly no expert, but this would seem to imply that it is an application issue, would it not?
No, it's just because forcing anisotropic filtering lowers the GPU load :) When you unforce it, the load goes through the roof, and there you have the bug. That's why the test ran fine before: you were forcing the load to stay under 82 A with this setting.
Great finding, that's something new I hadn't thought of! :clap:
Glad to be of assistance.
Just for clarification (and forgive me for being slow): are you saying that the act of forcing 16x reduces load compared to the application's own 16x? Or are you saying the application runs the filtering at a higher rate than can be forced?
Hmm... my HD 4870 X2 at stock instantly resulted in the black screen (tested using the settings given in the first post), and the fan sped up to 100%. I used 1920x1200, fullscreen, shader complexity @ 3.
EXACT same symptoms as when I cranked up the GPU voltage with RivaTuner and ran FurMark. I used 1920x1200 fullscreen, 0xAA for FurMark.
Visiontek 4870 X2, reference PCB and cooler.
My app is running at anisotropic 0. Anisotropic filtering is a process that reduces the load on the GPU during the test, so I disable it.
Your setting forced it on during the test, and as it is set in the drivers, I cannot have any control over it, unfortunately. That's why the GPU load went down :(
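To illustrate the idea (OCCT's source isn't public, so this is only a minimal D3D9-style sketch, and the helper name is hypothetical): the application asks for plain linear filtering with no anisotropy, and a driver-level "force 16x AF" setting simply overrides whatever the app requested.
[code]
#include <d3d9.h>

// Hypothetical helper, for illustration only: request plain linear
// filtering with no anisotropy, so texture sampling stays cheap and the
// shader ALUs remain the bottleneck (i.e., maximum current draw).
void RequestNoAnisotropy(IDirect3DDevice9* device)
{
    device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
    device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
    device->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, 1); // 1 = AF off
}

// A "force 16x AF" setting in the driver control panel overrides these
// sampler states entirely, adding texture work that stalls the ALUs and
// keeps the VDDC current below the trip point.
[/code]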
THAT is important. FurMark triggers the same problems when you crank up the voltage.
I knew FurMark is a tad less violent than my test on the GPU, but upping the voltage makes up for it, and it triggers the same problem.
My code is not at fault here ;)
And I doubt anybody will want to put Jegx's reputation down here: he is so much more knowledgeable about 3D programming than I am... I even think it's his job. Just visit his website, www.ozone3d.net, and you'll see. Read the news.
But is there a solution for it? :D Like... removing the overcurrent protection?
I see; I had always assumed the filtering added load and not the other way around (see, I said I was slow).
By forcing the anisotropic filtering down to 2x, the program will still run. FPS increases by roughly 10 and amps go up to exactly 82, so this is in line with the thinking that beyond 82 amps is the killer.
:hello: friend! (Congrats on IntelBurnTest.)
I have no idea. I'd guess the protection is in the BIOS, but that's just a guess.
I just got a message: I had a French professional reviewer look at the problem. He did electrical measurements and such, and came to the conclusion that there is indeed something within the card. I'll translate what he did as soon as it's done; I don't have enough info at the moment.
Can someone with a Kill A Watt meter please measure the power consumption of an HD 4870 / HD 4890 before OCP is triggered? I'm curious how much more intense this test is versus driver-crippled FurMark, which already draws 37% more power than the most intensive games.
Just tried it, running clocks down to 720 core / 650 mem, and it gets to ~70 amps,
so I 100% believe that it's OCP kicking in (which isn't new); it just seems a bit odd that a program is doing it.
But then I don't really mind; it's not a bad thing!
This program tbh DOES give the card a LOT more stress than normal games etc. ever will,
so it's nothing to worry about =)
I still don't get why this program crashing cards is a problem. If applying AF stops the card from crashing, then you're set: just enable AF in your games and all woes are gone.
I'm not calling this thing a power virus or anything like that, but it is clear to me that this program loads the card in a way no game ever would, even if it had the best graphics in the world, because games aren't rendering a mostly static image with little geometry and no AF.
:nono: :argue: or :cord: Enough of this emotional bickering!
What I was trying to do here is clear things up a little bit in all of this mess that you're stirring up. For example, when the OP said that enabling Vsync does not make a difference to power consumption, I said that's not what I have been experiencing.
I had a HIS 4850 1GB card that died on me after exactly 30 days of use. The display became permanently corrupted after it decoded hi-def video for a couple of hours on multiple screens, and I am wondering if that has anything to do with the cheap VRMs... Ever since I had an X1900XTX from the day it came out, I've noticed that VRM temperatures on ATI cards seem much higher than those on Nvidia cards. I am leaning towards buying either a 4870 1GB or a 4890, and am keenly following this thread to find out which brand/make is the best quality.