Not a good review so ...
Your point was that ATI better do something fast to save face. My point was why are you being such a hypocrite by saying that when a) it's not Nov 22 yet (hence no delay) and b) nvidia took 6 months to do what you just described. Be a little fairer to yourself.
Considering the 68xx series is not meant to be compared to nvidia's new flagship, that is an ignorant statement indeed.
Quote:
I don't have any cards, but I can promise you that those "things" that AMD has chosen to call a "new generation" will look really bad tomorrow.
As for your "new generation" quotation nonsense, I think it's pretty clear that new generations do not need a process shrink, seeing as intel, amd/ati, and nvidia have done this before. Also, I hope you have the fairness to criticize nvidia when it renamed parts over 3 generations.
Does SLI scaling suck or is it just me ?
http://www.techpowerup.com/reviews/N...80_SLI/24.html
tpu
Quote:
In order to stay within the 300 W power limit, NVIDIA has added a power draw limitation system to their card. When either Furmark or OCCT are detected running by the driver, three sensors measure the inrush current and voltage on all 12 V lines (PCI-E slot, 6-pin, 8-pin) to calculate power. As soon as the power draw exceeds a predefined limit, the card will automatically clock down and restore clocks as soon as the overcurrent situation has gone away. NVIDIA emphasizes this is to avoid damage to cards or motherboards from these stress testing applications and claims that in normal games and applications such an overload will not happen. At this time the limiter is only engaged when the driver detects Furmark / OCCT, it is not enabled during normal gaming. NVIDIA also explained that this is just a work in progress with more changes to come. From my own testing I can confirm that the limiter only engaged in Furmark and OCCT and not in other games I tested. I am still concerned that with heavy overclocking, especially on water and LN2 the limiter might engage, and reduce clocks which results in reduced performance. Real-time clock monitoring does not show the changed clocks, so besides the loss in performance it could be difficult to detect that state without additional testing equipment or software support.
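The quoted behavior boils down to a simple control rule: compute power from per-rail voltage and current, throttle while over the limit, restore clocks once it clears. A minimal Python sketch, with assumed function names and an assumed throttle clock (only the 300 W limit and the 772 MHz base clock come from the review/specs; the real limiter lives in NVIDIA's driver, not in software like this):

```python
POWER_LIMIT_W = 300.0  # the predefined limit mentioned in the review

def total_power(rail_samples):
    """rail_samples: (voltage_V, current_A) pairs for the three 12 V
    inputs TPU mentions (PCI-E slot, 6-pin, 8-pin)."""
    return sum(v * i for v, i in rail_samples)

def next_clock(rail_samples, stress_app_detected,
               base_mhz=772, throttled_mhz=405):
    # Per the review, the limiter only engages when the driver detects
    # Furmark/OCCT; 405 MHz is a hypothetical throttled clock.
    if stress_app_detected and total_power(rail_samples) > POWER_LIMIT_W:
        return throttled_mhz  # clock down while over the limit
    return base_mhz           # normal clocks, restored once it clears
```

This also shows why TPU's monitoring worry matters: if the tool only reads the requested clock, the drop to the throttled clock is invisible.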
It indeed makes a big difference. Check the 5970 in the GTX580 review compared to the 6870 one.
http://tpucdn.com/reviews/NVIDIA/GeF..._1920_1200.gif
http://tpucdn.com/reviews/ATI/Radeon..._1920_1200.gif
Weirdly, the GTX 480, or rather all the cards, lost FPS too?
WTF TPU, why wouldn't you review this card's current main competitor with new drivers??
Especially when you conducted an EARLIER review (HD6870) WITH new drivers?
Especially when there was so much sh@# about you using older drivers in your GTX 480 review months ago... Talk about not learning a lesson.
I hope this is something about these being the "preliminary" review and it'll be fixed in the real review, but that's not a very realistic thing to hope for...
Let's not trust an early pre-NDA review. A lot of stuff seems not right about this one. I must congratulate nvidia on making a quiet and powerful card though. The noise reduction seems pretty substantial.
ps- the difference in metro scores may be attributed to system setup.
Same, I'd like to see how the 480 and the 580 compare when the core and memory clocks match.
This is very interesting. AMD did a similar thing, but I can't remember if they did it in hardware or in drivers? On the 58xx cards, I mean. I find driver throttling a little spooky, TBH. What happens if there is a driver bug, or suppose a game or application triggers the throttling? Down goes your performance. It might be a non-issue, I don't know, but it's interesting just the same.
Quote:
At this time the limiter is only engaged when the driver detects Furmark / OCCT, it is not enabled during normal gaming. NVIDIA also explained that this is just a work in progress with more changes to come.
Or maybe the settings weren't equal... there are more settings than just resolution and AA
still, TPU should seriously redo the review with new drivers, this is just wrong
Maybe the GTX580's in-built killswitch kills more than just the power during Furmark:
http://tpucdn.com/reviews/NVIDIA/GeF..._1920_1200.gif
:ROTF:
So the average performance improvement over the GTX 480 is only 13% (1920*1200), and that's without even factoring out the advantage of much newer drivers. :down:
This means that Nvidia is still behind in efficiency compared to AMD products that were released a year ago! Kinda disappointing. :(
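For what it's worth, a figure like that 13% is just the per-game FPS ratios averaged. A quick sketch with made-up FPS numbers (not TPU's actual data) to show the arithmetic:

```python
# Hypothetical per-game FPS, only to illustrate how an average improvement
# percentage is computed; these are not TPU's results.
gtx480 = {"Metro 2033": 30.0, "Crysis": 40.0, "BFBC2": 60.0}
gtx580 = {"Metro 2033": 34.5, "Crysis": 45.0, "BFBC2": 67.0}

ratios = [gtx580[g] / gtx480[g] for g in gtx480]
avg_improvement = sum(ratios) / len(ratios) - 1.0
print(f"average improvement: {avg_improvement:.1%}")  # 13.1% with these numbers
```

Which is also why the driver versions matter so much: every per-game ratio in that average shifts if one card gets newer drivers.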
I believe it was hardware-coded, with temperature-monitored VRMs.
From Anandtech last year:
Quote:
For Cypress, AMD has implemented a hardware solution to the VRM problem, by dedicating a very small portion of Cypress’s die to a monitoring chip. In this case the job of the monitor is to continually monitor the VRMs for dangerous conditions. Should the VRMs end up in a critical state, the monitor will immediately throttle back the card by one PowerPlay level. The card will continue operating at this level until the VRMs are back to safe levels, at which point the monitor will allow the card to go back to the requested performance level. In the case of a stressful program, this can continue to go back and forth as the VRMs permit.
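The Cypress scheme quoted above reduces to: while the VRMs are critical, step down one PowerPlay level; once they are safe again, return to the requested level. A sketch with assumed clock-table values (the real logic sits in a small on-die monitor, not in software):

```python
# Sketch of the Cypress VRM monitor's throttling rule, per the Anandtech
# quote. PowerPlay levels are indices into a clock table; the MHz values
# here are illustrative, not AMD's actual ones.
POWERPLAY_MHZ = [157, 400, 850]  # low -> high performance levels

def next_level(current, requested, vrms_critical):
    if vrms_critical:
        return max(0, current - 1)  # throttle back one PowerPlay level
    return requested                # safe again: back to requested level
```

The back-and-forth Anandtech mentions falls out naturally: under a sustained stress load the level oscillates as the VRMs heat up, cool off, and heat up again.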
Agreed, running benchmarks where cards can't show their power at the fullest (from both camps) is no good. I was under the impression that people who can manage a site such as TPU would have enough common sense to give both cards an equal run at their fullest, so that readers can make educated decisions; otherwise there is very little point in performing the exercise.
If you want to discuss nVidia vs AMD from a football-match point of view, I'm not interested in any of that, both of them can bleed, I don't care.
Let's keep the discussion to relevant stuff, and about current cards. Just look at those numbers in the first review and tell me, how do those "things" that AMD has chosen to call a "new generation" look compared to the GTX 580? More will come tomorrow too.
Don't you think AMD should do something about this and get Cayman out? And it better arrive in time, and it better deliver what they promise, because otherwise AMD will be in big trouble.
lol @ tpu, 10.7 seriously? Anyway, the 5970 is still the fastest one, which is sad for nvidia if you consider that Antilles will arrive within 2 months.
Well, pretty much the only conclusion you can draw from that, performance-wise, is that it is around 15% faster than a 480. Better to wait for other reviews with proper drivers to see how it really compares to other cards. Then again, it doesn't really even matter how it compares to the 5870 or 5970; both are about to be EOL.
Now that the obligatory criticism has been handed out, I have to applaud Nvidia for managing to get something out this quickly. Overall the GTX 580 seems like a decent upgrade over the 480. Though I'd still wait to see what Cayman will offer. :)
I really haven't made up my mind yet whether the GTX 580 is a real or fake next generation. Let me look at more in-depth reviews, and I'll tell you tomorrow, OK?
But I can tell you today that those "things" that AMD has chosen to call a "new generation" look worse than the "old generation", for sure.