HAHA just above your post
http://www.xtremesystems.org/forums/...ghlight=4870x2
Here's an example of what I was talking about earlier.
Really when did you test it?
Did you read the thread?
Why didn't you post your results before?
This thread is obviously turning into a crapshoot, thanks to the way the OP went about it.
Wtf?
Learn to read. He had a 4800 running that didn't crash... Congrats.
Who cares about this OCCT synthetic stress test? We have no reports of massive HD4xxx failures, nor did any users who OC and overvolt their cards report such behavior, even in the most stressful tests like Crysis, Far Cry 2, etc.
On the other hand, if anyone wants to test the quality of the board design and VRM circuitry, let him run this test and see if it bricks his card. He can claim later that his Radeon has a flaw and RMA it for an NV card :p:
Scientific method, anyone? Did we all skip grade school? I mean, if his idea is bunk, at least disprove it using tests and evidence based on what he has provided. Glad to see a thread with a simple request can still dissolve into a flame war.
I will agree with this, but only for the simple fact that people, and I mean overclockers, base their entire system stability on tests that are designed to stress their hardware in ways that you would NEVER, let me repeat that, N E V E R, stress your PC under normal daily use.

Quote:
Who cares about this OCCT synthetic stress test? We have no reports of massive HD4xxx failures

Even under the biggest gaming load you can't stress the CPU/MEM/PSU/GPU the way OCCT does. It's just not going to happen, and for people to keep believing that OCCT/Futuremark/Prime/etc. are the golden rule for system stability is absolutely f'ing childish.
If your system is stable doing what you normally do, then it's stable... period.
4 samples hardly prove or disprove someone's theory, especially with a benchmark program. We'd need numbers on how many cards were sold, and then narrow it down to the affected models, to get an effective sample size (even then it would be a flawed study, because everyone's rig is different; there are too many variables for a well-controlled study). It really comes down to how the cards were manufactured, and if it is a hardware issue, why are people raising such a fuss? Yeah, it sucks; return the card and buy a different one, or buy Nvidia for your next purchase. Of course I doubt we get any of that, but that's what we'd need. For now it's just what is available on this board. Would the Fire cards be affected by this? I have access to 2 at work, but I don't know what a comparable model would be to the 4870/90.
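To put a number on how little 4 samples tell you, here's a quick sketch (Python; scipy assumed available, and the failure counts are invented purely for illustration) of the exact confidence interval on a failure rate at that sample size:

```python
# Clopper-Pearson (exact) confidence interval for a binomial failure rate.
# Illustrates why n=4 proves almost nothing: the interval is enormous.
from scipy.stats import beta

def failure_rate_ci(failures, n, conf=0.95):
    """Exact two-sided confidence interval for a proportion."""
    alpha = 1 - conf
    lower = 0.0 if failures == 0 else beta.ppf(alpha / 2, failures, n - failures + 1)
    upper = 1.0 if failures == n else beta.ppf(1 - alpha / 2, failures + 1, n - failures)
    return lower, upper

# Made-up example: 2 of 4 cards black-screen under the test.
print(failure_rate_ci(2, 4))      # ~(0.07, 0.93): true rate could be 7% or 93%
# The same observed 50% rate over 400 cards narrows things considerably.
print(failure_rate_ci(200, 400))  # ~(0.45, 0.55)
```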
Exactly. The OP had 2 samples, yet he decided to post this thread under this thread title.
Also trinibwoy-
http://en.wikipedia.org/wiki/Thermal_Design_Power
The simple fact is that all real applications sit well inside the TDP and design specs of the hardware they run on, as evidenced by the fact that nobody has seen these issues in games (or even when overclocking their boards with games). Apps like FurMark and this one, however, are not real-world tests.

Quote:
The TDP is typically not the most power the chip could ever draw, such as by a power virus, but rather the maximum power that it would draw when running real applications. This ensures the computer will be able to handle essentially all applications without exceeding its thermal envelope, or requiring a cooling system for the maximum theoretical power, which would cost more and achieve no benefit.
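To see what blowing past TDP means at the VRM level, here's a hedged back-of-envelope sketch (Python; the wattages and core voltage are assumptions for illustration, not ATI specs) using nothing more than I = P / V:

```python
# Rough current the core VRM phases must deliver: I = P / V.
# ALL numbers are assumed for illustration; none are actual ATI specs.
def core_current(core_power_w, vddc_v):
    """Output current (amps) the VRM must supply to the GPU core rail."""
    return core_power_w / vddc_v

VDDC = 1.26  # assumed core voltage under load

print(core_current(110, VDDC))  # ~87 A  -- assumed heavy-game core load
print(core_current(150, VDDC))  # ~119 A -- assumed power-virus core load
```

A synthetic load that pushes core power well past what any game draws also pushes the VRMs past the current they normally have to handle, which is the whole point of the TDP quote above.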
OP would get similar results by striking a match and screaming fire in a crowded theater.
Really?
So gathering the actual results and revealing the glaring bias of the OP is "stubborn and biased interventions?"
Thanks for your insight.:rofl:
So, as was mentioned before, if this is a huge RV770 design flaw, why are the 4850/4830 unaffected with their even "cheaper" VRMs?
Aren't there two versions of the 4870 floating around with some variations in the VRM circuitry?
My 4870 runs at 95-98% load 24/7 with no issues running F@H. For kicks I'll download this and run it, just to see. My 4870 is a 1st-gen Visiontek right from the first release last year.
OMG, he posted some results that he had found with his program and was hoping (this being XS and all) that some people who own 4870/4890s would maybe give this a shot to confirm or disprove his results, and the fanboys went crazy.
At least show some respect to the guy who developed a free and useful program that we have all used at some point.
OK, I downloaded it, set the settings, and ran it. Nice fuzzy red donut thingy waving around on my screen. I got bored of it after a few minutes and shut it off. How long is it supposed to run before it blacks out?
My 4870 is overclocked to 790 core too, by the way (the highest stupid CCC will allow me).
Didn't kill my Asus DK card.
Answer 1 : http://en.wikipedia.org/wiki/Optimism_bias :up::up::up:
Answer 2 : http://en.wikipedia.org/wiki/Somebody_Else%27s_Problem :shrug:
Answer 3 : http://en.wikipedia.org/wiki/Rosy_retrospection :up:
Answer 4 : http://en.wikipedia.org/wiki/Positivity_effect :up:
Answer 5 : http://en.wikipedia.org/wiki/Bandwagon_effect :up::up::up::up::up::up::up::up::up::up::up::up:
Answer 6 : http://en.wikipedia.org/wiki/Commitment_bias :up::up::up::up::up:
and so forth..
--
Please note: The preceding list of cognitive biases may not necessarily apply 100%.
Someone disprove the OP's assertions or kindly STFU.
XmX
... :down: AMD, is it possible that you've let me down again? :down: ...
Is it a flaw if it performs to the level that ATI intended? If they had prevented this in their drivers by not loading the card so heavily, would it still be a flaw?
I think what has been found is fascinating and all, but it's nothing more than a way to stress the board harder than it was intended to be stressed. IMHO, not a flaw.
I was pretty sure I was going to pass. I already fixed my problem with March's DirectX distribution and Vista, and I couldn't get any stress test to cause a crash.
http://i41.tinypic.com/fnfmmt.png
No black screen here. GeCube 4870 and Sapphire 4870 1GB in Crossfire. I didn't like how high the temps went in such a short time, though!! That is an intense test.
To address the real-world application of it... any game that put this kind of stress on hardware would be out of business pretty quick, considering most of us here have decent hardware versus the masses... I don't think many Dells or E-machines would last, haha.
As for not being able to use the "full" potential of the card, it seems ATI designed the card for the 99.9% of people who will never hit this wall. They could have made the card impervious to this by adding more PWM/VRM phases, but at a cost. The cost-benefit analysis isn't surprising: this condition will not surface in the wild, and for the few cards where it does, it's cheaper to RMA them than to redesign the board for such a strenuous, rare occurrence. That said, my cards worked fine; I just don't want to fry them for no reason :P
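To make that cost-benefit point concrete, here's a back-of-envelope sketch (Python; every figure is invented just to show the shape of the arithmetic, none are actual ATI numbers):

```python
# RMA-vs-redesign trade-off. ALL figures are made up for illustration.
units_sold        = 1_000_000  # assumed cards shipped
hit_rate          = 0.001      # assumed: 0.1% of users ever trigger this
rma_cost_per_card = 50         # assumed: $ per handled RMA
extra_bom_cost    = 2          # assumed: $ per card for beefier VRMs

rma_total      = units_sold * hit_rate * rma_cost_per_card  # $50,000
redesign_total = units_sold * extra_bom_cost                # $2,000,000

print(f"RMA the rare failures:   ${rma_total:,.0f}")
print(f"Beef up every card sold: ${redesign_total:,.0f}")
```

Under those (made-up) assumptions, eating the occasional RMA is far cheaper than hardening every board, which is presumably the bet ATI made.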
I don't think any optimization of a game would make use of all 800 shader cores, the ROPs, the TMUs, etc., all at the same time. In other words, an unrealistic load.