What are those recent games?
Crysis Warhead at 1680x1050 on my HD4890 draws 42A on average and 47.40A at peaks. But then again, such a comparison between Nvidia and ATI hardware is totally pointless.
The point is that it's a VRM design flaw / cut corner in ATI reference cards ONLY, as has already been said.
So any talk along the lines of "games don't crash / my Nvidia card pulls 10000 amps and doesn't crash, WTF" is irrelevant at this point in the thread. Maybe it wasn't irrelevant when he started it, but if you read the thread, it's irrelevant now.
If the card is incapable of limiting itself without a crash under high load, or of simply dealing with high load, AT STOCK CLOCKS AND VOLTAGES, it's a design flaw. And the reason I'm calling it a design flaw is that OTHER manufacturers who built NON-REFERENCE cards spent the money for their customers on power circuitry that DOES work in all situations.
Who cares if your "new test" causes issues with certain cards? Last I checked, it was the performance and stability of GAMES that's important, not "tests".
Agreed, it's only a test, but what if a game needs 82A from the VRMs? There will be two choices:
- the end user will stare at a black screen and curse ATI
- the game developer will phone ATI, and ATI will tell them to dial back something in the graphics, because they don't need end users cursing them.
Anyway, ATI never ceases to amaze me.
PS. How come the rest of ATI's models are fine? http://i43.tinypic.com/2ui96bn.gif
Because they don't need more than 82 amps on a 3-phase budget VRM, and the OCP is set higher.
Facepalm + Sigh....
No, it wouldn't be the same issue as the 3DMark (or any program, for that matter) "optimizations". As long as the output is not touched or changed, all they are doing is artificially limiting your program to prevent damage.
If ATI went through their drivers and modified the way it OUTPUTS, then yes, that's an issue, but limiting the performance without changing the output at all? Sure, having to limit performance in the name of stability is disappointing, but ultimately it's a moot point.
Wrong, there is a third choice: ATI implements a fix in the drivers and limits the performance slightly. Here's my issue with the whole thing: as long as the OUTPUT remains the same, performance-limited or not, who gives a damn.
It's like the TLB bug: the issue affected 0.001% of people, and the fix caused an overall performance drop of around 25%. Luckily with GPUs you can fix the way the GPU/card performs in that specific (0.001%) application instead of having to castrate the WHOLE thing.
Sure, it's a disappointment that the performance of that specific (0.001%) application is lowered, but how often do you think it will be detrimental to your enjoyment? Probably never. Again, the number of games/programs which *might* be affected by this is almost none (0.001%).
Again, luckily any "bug fixes" in the drivers would only affect that one specific program and not cause a huge drop across the board. As long as the output is not changed or degraded, who cares.
If the game you love to play is somehow affected by this, and the loss caused by the fix doesn't fit your *performance* requirements, then go buy a card that does. Last I checked, ATI never guaranteed that the performance of all games and programs will meet a certain level, and neither does Nvidia for that matter.
I'm having problems with my 4850 in Far Cry 2, Brothers in Arms: Hell's Highway and Call of Duty; it's stock and has never been overclocked. I upgraded my system thinking it was anything but the video card. So if it were only this test, I wouldn't give a flying flickering #uck, but that's not the case. This test crashed on me even at lower settings. Maybe my card is bad.
I pulled the 3870 from my wife's rig. It ran the test and all three of the games perfectly. I have five computers in my house, they all have ATI video cards, and I'm pretty anti-Nvidia for the record: X800, X1800XT, 3650, 3870 and 4850.
When playing @ 2560 x 1600 these games pull higher amps for me:
Crysis
Crysis Warhead
Mass Effect
Empire: Total War
GTAIV
1680 x 1050 = 1764000 pixels
2560 x 1600 = 4096000 pixels (more than double)
More than double the pixels seems to mean higher current draw (at least on my current hardware)....
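If anyone wants to check the arithmetic, here is a quick back-of-envelope sketch in Python (pure pixel counting, nothing card-specific assumed):

```python
# Compare the number of pixels shaded per frame at the two resolutions
res_low  = 1680 * 1050    # 1,764,000 pixels
res_high = 2560 * 1600    # 4,096,000 pixels

print(res_high / res_low)  # ~2.32, i.e. well over double the pixels per frame
```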
The highest thus far is Empire: Total War, which draws a *lot* of current when displaying the campaign map at 1600p...
That misrepresents my post.
I have read this thread diligently from start to finish and I am soooooo far from a fanboy the concept is ridiculous.
All I was attempting to contribute was the fact that at very high resolutions the current draw through ALL brands of graphics card VRMs is increased.
I was just trying to find some games that when played at this resolution (2560 x 1600) approach 80 amps in current draw...
I just wish I had an ATI card to test with atm because when gaming with my Nvidia card at 2560 x 1600 the current levels are almost in the zone where OCCT helps some reference 4870 / 4890's to fall over.
Now I know that Nvidia and ATI cards will have different current draws in the same situation due to different architectures and configurations etc., so I would be very interested if someone could test with ATI hardware @ 2560 x 1600 just to see if the current draw gets over 80 amps at this resolution when gaming...
Anyone out there with ATI hardware and a 30" screen? :)
Just tried Warhead at 2560x1600 enthusiast settings and played a bit of the ice level while running GPU-Z in the background.
The max was 60.3 A
http://img140.imageshack.us/img140/7...ead2560.th.png
I would only add my opinion: I don't play OCCT; neither Furmark nor these stupid tests mean anything to me. If it crashed after 8 hours of playing Crysis, then it would mean something.
It's the same stupidity as Linpack etc.: it's a load that can't be created in any normal way, so it's meaningless.
Here is one piece of the answer, on the very same computer, with a GTX 280:
OCCT vrmA usage : 84.68 A
http://www2.noelshack.com/uploads/occtGTX280033111.jpg
Furmark vrmA usage (the donut isn't visible, as the user had a beautiful black box showing instead): 79.84A
http://www2.noelshack.com/uploads/furmarkGTX050607.jpg
I'm currently asking the user what Furmark configuration he used, and especially whether he could retake the screenshot with it included in the pic (so that I don't get accusations like "you faked it, blah blah", even though it would be easy to fake, sigh). But no, I'm not gentle on Nvidia cards. I asked him to use the same resolution as OCCT to be able to compare them.
I haven't dropped that FPS matter yet, mind you.
No. The user told me that my test was biased because it was too gentle on Nvidia chips. As if it were castrated on purpose, as if I put in a limiter whenever I found the "Nvidia" string somewhere because I worked for them. As if I were paid by Nvidia to kill AMD cards, something like that.
He didn't say it that "clearly", but that's what transpired from his posts.
I'm just saying that my test is more stressful on ATI cards than Furmark,
and that my test is more stressful on Nvidia cards than Furmark too.
So he can't accuse me of being an Nvidia spy, or anything like that. Unless Furmark is also paid by Nvidia, of course.
Now, the readings being what they are, they are definitely NOT the same.
Especially since those were taken in windowed mode, not fullscreen. That in itself makes a huge difference.
Well, clearly the GTX 295 should perform about the same as my two 4890s (which do not crash @ 900 MHz core)... but it doesn't at all; it's off by nearly 40% performance-wise.
So what's the problem? You're not putting the same performance load on the Nvidia cards.
Is it possible that the Nvidia card simply cannot keep up with these kinds of tasks? I thought ATI cards also killed Nvidia's in Furmark?
The load just proved that my test isn't gentle on Nvidia cards.
And yes, I'm starting to think the Nvidia cards cannot keep up with those kinds of tasks, but they are still giving all they've got, judging by their power consumption.
It seems the configuration he used for Furmark is as follows: 1680x1050, extreme burning mode, no AA. I don't know if it was fullscreen or not.
OCCT GPU was 1024x768, windowed mode. The OCCT configuration can be seen on the screen. I asked him to redo it at the same resolution as OCCT for a proper comparison.
A GTX 295 should perform on par with two 4890s, but it doesn't. The only thing that could be responsible for this is the code in the test (and no, I'm not implying it is a deliberate attempt by the author). It could be that the test really isn't stressing Nvidia cards as much as ATI's.
It is not the code, it is the architecture that is not as effective as ATI's on those particular instructions.
I did contact Nvidia about that matter. Let's give it some time. Aside from the FPS value, the vrmA reading shows that the cards are put under heavy stress, which is what we want.
I remind you that OCCT is *NOT* a benchmark. For that, use 3Dmark ;)
Then you must have a hard time understanding this graph:
http://www.xbitlabs.com/images/video...275oc/ra3u.png
Take a look at the 2560x1600 test. See how horrible Nvidia cards are. This Red Alert 3 benchmark is confirmed by several other hardware review sites in addition to Xbitlabs.
Do you think that the game is "really not stressing Nv cards as much as ATi's"?
It is a known fact that when it comes to FP16 texture filtering throughput, a 4870 has over 20% higher fill rate than a GTX 280, as shown in a 3DMark Vantage test. Do the numbers 1.2 TFLOPS for a 4870 and 933 GFLOPS for a GTX 280 mean anything to you? Perhaps this OCCT benchmark is pushing somewhere close to 1.2 TFLOPS?
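For reference, those headline numbers fall straight out of the shader counts and clocks. A rough sketch, assuming the stock 750 MHz core clock for the HD 4870 and the stock 1296 MHz shader domain for the GTX 280:

```python
def peak_gflops(alus, clock_mhz, flops_per_clock):
    # theoretical peak = ALU count * shader clock * FLOPs issued per ALU per clock
    return alus * clock_mhz * 1e6 * flops_per_clock / 1e9

print(peak_gflops(800, 750, 2))    # HD 4870: 800 SPs @ 750 MHz, MADD (2 FLOPs) -> ~1200 GFLOPS
print(peak_gflops(240, 1296, 3))   # GTX 280: 240 SPs @ 1296 MHz, MADD+MUL (3 FLOPs) -> ~933 GFLOPS
```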
I checked my cards and found one that had a Vitec 9853 regulator on it.
It shut down in fullscreen (1680x1050) and at 1024x768,
but I was able to run 20 OCCT GPU instances at once.
Now that's a stress test :rofl:
http://i231.photobucket.com/albums/e...tresstest2.jpg
I have shown that those numbers don't make sense. Nvidia cards should still be performing better than they do. Going by the numbers from a GTX 285, they are not using the MUL to its full potential, one of the things that was supposedly "fixed" from G80. It is not performing at the 1.06 TFLOP (MADD+MUL) level nor at the 708 GFLOP (MADD-only) level, but somewhere in between. Basically the MUL is only being exploited ~45% of the time.
Sure, it is loading the cards, but it is not exploiting all the shaders that the G200 has to offer.
People are complaining that they cannot fully utilize their RV770 cards, since some have to be downclocked to be stable in OCCT.
The flip side is that you are also not getting full advantage of all that the G200 has to offer with the results of OCCT.
When using a G200 you are only getting ~82% of the theoretical performance.
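To spell out the arithmetic behind those percentages, here is a sketch assuming the stock GTX 285 shader clock of 1476 MHz and the ~45% MUL co-issue estimate from the post above:

```python
shaders, clock_hz = 240, 1476e6            # GTX 285: 240 SPs at the stock 1476 MHz shader clock (assumed)

madd_only = shaders * clock_hz * 2 / 1e9   # ~708 GFLOPS if only the MADD issues every clock
madd_mul  = shaders * clock_hz * 3 / 1e9   # ~1063 GFLOPS if MADD + MUL issue every clock

mul_usage = 0.45                           # MUL co-issued ~45% of the time (the post's estimate)
effective = shaders * clock_hz * (2 + mul_usage) / 1e9   # ~868 GFLOPS in between the two

print(effective / madd_mul)                # ~0.82 -> the "~82% of peak" figure
```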