Quote Originally Posted by pirogue View Post
While doing a little GPU overclocking, I discovered the hardware monitor in Afterburner and did an "EKG" (ECG?). Anyway, both the peaks and troughs on the GPU usage last around a minute. GPU utilization peaks around 84%, and the wattage (measured by the meter on my UPS) goes from 83 W to 135 W during the peaks. The temp ranges from 42 C to 49 C. The fan ranges from 20% to 23%.

[Attached image: HCC-GPU-EKG.PNG - full view: http://www.wcgdaws.com/HCC-GPU-EKG.PNG]
Oh... that raises some points - first, what does this usage graph look like if you run two WUs on a card? Can we smooth out that idle time with another WU?
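If you want to try two WUs per card, BOINC supports that via an app_config.xml in the project's directory - a minimal sketch (the app name below is just a placeholder; check your client_state.xml for the project's real app name):

```xml
<app_config>
  <app>
    <!-- "hcc1" is a made-up placeholder; substitute the actual app name -->
    <name>hcc1</name>
    <gpu_versions>
      <!-- 0.5 means each task claims half a GPU, so two run at once -->
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving it, re-read config files (or restart the client) and watch whether the usage graph's troughs fill in.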

Second - those fluctuations are worse for the card than simply running hot. The largest cause of solid-state electronics failure (at least at two of the companies I've worked for) was mechanical stress from continuous temperature fluctuations: repeated expansion and contraction fatigues solder joints and bond wires. It's actually much better to run the card steadily hot than to let the temperature swing wildly. Now, these 2-6 C fluctuations don't matter - I'm just throwing that out there for anyone who sees more rapid temperature changes (say, if you're cooling with water, maybe? idk...).
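If you want a rough feel for how much your card is cycling, here's a quick Python sketch (just a crude counter I made up, not any standard fatigue metric) that scans a logged temperature series and counts swings bigger than a chosen threshold:

```python
def cycle_stats(temps, threshold=3.0):
    """Count temperature swings of at least `threshold` degrees C and
    report the largest peak-to-trough delta seen. A crude proxy for
    thermal-cycling stress, not a real fatigue model."""
    swings = 0
    largest = 0.0
    last_extreme = temps[0]  # temperature at the last counted swing
    for t in temps[1:]:
        delta = abs(t - last_extreme)
        largest = max(largest, delta)
        if delta >= threshold:
            swings += 1
            last_extreme = t
    return swings, largest

# Samples resembling the 42-49 C pattern described above
# (in practice, export these from Afterburner's monitor log):
samples = [42, 44, 47, 49, 47, 44, 42, 43, 46, 49, 46, 43, 42]
print(cycle_stats(samples))
```

Feed it a longer log and a smaller threshold to see how often the card is really flexing.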

Third - fans are designed to run at a relatively constant speed for long, long periods without failure, but repeated acceleration and deceleration wears them out - so it would be better to fix the fan at, say, 25% in your case and just let it run. Just be careful that a fixed speed doesn't cause larger temperature swings!
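In Afterburner that's just unticking the auto-fan checkbox and pinning the slider. On Linux, if you've enabled the "Coolbits" option in xorg.conf, something along these lines should do it (an untested sketch - attribute names and fan indices vary by driver version):

```shell
# Take manual control of the fan on GPU 0 (requires Coolbits enabled)
nvidia-settings -a "[gpu:0]/GPUFanControlState=1"
# Pin the first fan at a fixed 25%
nvidia-settings -a "[fan:0]/GPUTargetFanSpeed=25"
```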

I'm PROBABLY nitpicking about nothing - but I want to raise the issue before anything fails because of it. Crunching already gets a bad rap for hardware failures that weren't actually caused by the crunching. I figure that saying something beforehand might hold more water IF something does go wrong.