Agreed.
It's been mentioned many times in the past that the ATI cards pack more punch on paper. It appears this program is coded so well that it actually proves that very point.
"QFT?" Even though it has been shown in some cases that reference cards don't crash with OCCT?
I believe that has been pointed out and stated a few times in the thread but both of those have to be repeated since people either choose to ignore those statements or don't read the entire thread...
Also, that specific post was buried by others' trivial posts, so I don't find it surprising that people missed it; plus it was never properly addressed or disputed.
There are quite a few flaws/problems with the theory that people have brought up in this thread and haven't been addressed. Just because some want to dismiss them doesn't mean they are forgotten...
That wasn't directed at just you. I should have made that clear in the post, sorry.
I haven't seen one post here from anyone with a reference card with a 3-phase VRM that didn't crash who verified that CCC was using default settings and not forcing any post-processing.
People crying foul because, much like with Furmark, Nvidia cards aren't seeing the same framerates even though the VRMs are definitely getting a workout is getting old and isn't really relevant at this point. Unless there is news as to why that is, I just don't see why it matters in this thread. Is this one point supposed to invalidate the test? Although, you are the only one making a point while mentioning this.
Although I do agree that no other app will draw this kind of power from your VRMs. I don't even feel comfortable running this test on much lower settings than this on my GTX 280, since the VRMs hit 100°C in seconds; no game comes anywhere near that. I also wouldn't be surprised if Nvidia was using some driver tricks since, like you pointed out, the difference in performance should be lower, and in most power consumption tests that I have seen the GTX 280 draws more than a 4870 as well. My overclocked GTX 280 hits a little lower than 82A (but I may have had multisampling enabled in the control panel) at 1920x1080 fullscreen. Lastly, we don't even know where this 82A figure even comes from; that number doesn't make sense, since it can't be each VRM and it's not all of them.
My bad as well since I thought it was.:p:
The couple of times I went through the thread I thought there were a couple, but I don't think they have a CCC screenshot though.
No, more like we still don't know for sure if it really is the VRMs at fault, though it seems likely. The point is that certain people were "crying" because RV770 owners could not use the full potential of their cards, a sad argument. With how many times this was brought up and by looking at the performance results, I noticed that even with this "simple" code you also cannot fully utilize G200.
Again, not wanting to keep repeating what has been posted previously but I need to fully explain myself.:shrug:
which non-reference 4890 cards pass OCCT?
thanks
my reference XFX XXX edition (with 8+6-pin power) passes @ 900MHz no problem.
thanks sniipedogg
if it's not too much to ask... would someone make a list of the reference/non-reference cards that pass or don't pass?
Perhaps more of Nvidia's processing power is "reserved" for PhysX calculations. I wouldn't be surprised if many of those transistors are wasted anyway on 32 ROPs, the 512-bit memory interface, and especially PhysX.
After all, nothing ever pushes closer than 99% of the theoretical maximum. Some of the older cards, like the G92 cards, were so inefficient that they only got within 75-85% of the theoretical maximum in fillrate tests.
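To show what "percent of theoretical maximum" means here, a minimal sketch, assuming the commonly quoted G92 specs (an 8800 GT with 16 ROPs at a 600 MHz core clock); the measured figure below is a made-up example, not a real benchmark result:

```python
# Illustrative only: how fillrate utilization is computed.
# G92 (8800 GT) specs assumed: 16 ROPs, 600 MHz core clock.
def theoretical_fillrate_gpix(rops, core_clock_mhz):
    """Peak pixel fillrate in Gpixels/s = ROPs * core clock."""
    return rops * core_clock_mhz / 1000.0

def utilization_pct(measured_gpix, rops, core_clock_mhz):
    """Measured fillrate as a percentage of the theoretical peak."""
    return 100.0 * measured_gpix / theoretical_fillrate_gpix(rops, core_clock_mhz)

peak = theoretical_fillrate_gpix(16, 600)   # 9.6 Gpix/s theoretical
pct = utilization_pct(7.8, 16, 600)         # a hypothetical 7.8 Gpix/s result
print(peak, pct)                            # lands in the 75-85% band mentioned above
```

A result of 7.8 Gpix/s against a 9.6 Gpix/s peak works out to roughly 81%, i.e. inside the 75-85% range the post describes.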
Would this qualify as a glaring example of how Nvidia uses TWIMTBP and its game developer "support" to optimize games for its lower-powered cards? Maybe it's not, but it kind of seems that way.
The 4870 reference cards crash in intensive games like Crysis, Crysis Warhead, Unreal 3, and GTA 4.
It depends on the shader complexity used in the scene, the resolution, the AA/AF applied, and the core and memory clocks.
Heat can make the problem even worse, because you don't need to reach 83A: as the VRMs get hotter, the limit goes down. I made my card crash in the OCCT GPU test in a 600x400 window at shader complexity 0 when it spiked repeatedly to about 78A.
I don't know if there is such a thing as VRM aging, but that could make things even worse.
The Crysis default GPU benchmark reaches about 60A at 1600x1200 (default clocks), and more shader-intensive scenes like the battle on the boat get the VRMs even hotter; combined with the even higher VRM current load, they end up crashing the card. I have experienced this a few times.
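For a sense of scale, a rough arithmetic sketch of what an ~83A VRM limit implies; the 1.263V core voltage and the 3-phase layout are assumptions (the stock 4870 figures commonly cited in this thread), not measurements:

```python
# Back-of-the-envelope only: what ~83 A of VRM output current means.
# Assumptions: ~1.263 V core voltage, 3-phase reference VRM design.
def vrm_power_w(current_a, vcore_v=1.263):
    """Power delivered to the core: P = I * V."""
    return current_a * vcore_v

def per_phase_a(current_a, phases=3):
    """Current through each phase if the load splits evenly."""
    return current_a / phases

print(vrm_power_w(83))   # ~105 W through the VRM at the crash limit
print(per_phase_a(83))   # ~27.7 A per phase on a 3-phase design
```

That is roughly 105W of core power split across only three phases, which fits the observation above that heat on the VRMs pulls the effective limit below 83A.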
One thing that is a given is that 4870/4890 cards have way higher TFLOPS numbers. If there is one program that pushes it as close as possible to the theoretical maximum, this must be it.
Many others report it heating up their Nvidia cards higher than ever before, and also pushing their GTX 280 cards to 80A, just shy of the 82A ceiling.
That is all we really know for now, until more findings can be made. The rest are just theories for now.
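The "way higher TFLOPS" claim can be checked with the usual published shader counts and clocks (treat the exact figures as assumptions; the GTX 280 number also depends on whether the extra MUL is counted):

```python
# Sketch of the theoretical shader throughput comparison.
# Assumed specs: HD 4870 = 800 SPs @ 750 MHz (MAD = 2 FLOPs/clock);
# GTX 280 = 240 SPs @ 1296 MHz (MAD + MUL = 3 FLOPs/clock).
def gflops(shaders, flops_per_clock, shader_clock_mhz):
    return shaders * flops_per_clock * shader_clock_mhz / 1000.0

hd4870 = gflops(800, 2, 750)    # 1200 GFLOPS theoretical
gtx280 = gflops(240, 3, 1296)   # ~933 GFLOPS theoretical
print(hd4870, gtx280)
```

So on paper the 4870 is ahead by roughly a third, which is why a program that approaches the theoretical maximum would load the RV770's VRMs so much harder than anything else does.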
i want 40 A QQ
http://img34.imageshack.us/img34/3255/amps.th.jpg
Like I said in my previous post (which you should read), there are certain conditions that have to be met for the card to crash in Crysis.
It depends on the shader complexity used in the scene (the carrier fights at the end of the game, for example), because more shader complexity = more VRM current; on high resolution with low AA/AF applied; and on the core and memory clocks used (overclocking makes things a lot worse). Most important is heat, because after hours of gameplay the VRMs get very hot, making them inefficient, and then the current doesn't need to pass the 83A limit to crash anymore; it can happen at lower than 83A.
I showed you my card at 58.94A VRM current in the GPU benchmark found in the Crysis folder, using default clocks and 1600x1200 resolution.
The scenes rendered in the benchmark are not the most intense ones found in the game, so the current didn't get higher. Also notice the temps are not that high, which means the VRMs are working fine, so of course it didn't crash!
Also, I am running only one monitor.
Hmmm... this could explain my unexplainable crashes... That means I'm probably fine with my power supply, and this whole time it was actually this VRM bug thing. I bought this second power supply booster thing because I thought I was pushing my power supply too much, causing these random crashes. I have nice aftermarket sinks on my cards and don't know how hot they (the VRMs) are getting nowadays, but I can check real quick.
Edit:
Here are some pics, first one is me showing where I kicked up the clocks up a little for my test.
http://img40.imageshack.us/img40/184...ingitup.th.jpg
Second is showing the voltage that I bumped them up to.
http://img43.imageshack.us/img43/387...oltages.th.jpg
Third is a pic of the VRM temps of my GPU0, which apparently was the only one that Furmark used?? They both get used in everything else I play or benchmark with.
http://img34.imageshack.us/img34/1852/bigheat.th.jpg
Fourth shows the temp of the one GPU the test hit up.
http://img43.imageshack.us/img43/6590/someheat.th.jpg
The point is, though, that I get these random crashes or black screens and a reboot is needed while playing GTA 4, as mentioned on some other site where I saw this story. I have also experienced it while playing COD 4 and playing it hard, like blasting everyone for a long time.
Edit again: I ran the OCCT GPU test and it just locked up and rebooted my PC. That is too funny; I know that the CPU is stable because I IntelBurnTested it not too long ago.