Your point was that ATI had better do something fast to save face. My point was: why are you being such a hypocrite by saying that when a) it's not Nov 22 yet (hence no delay) and b) nvidia took 6 months to do what you just described? Be a little fairer yourself.
Considering the 68xx series is not meant to be compared to nvidia's new flagship, that is an ignorant statement indeed. I don't have any cards, but I can promise you that those "things" that AMD has chosen to call a "new generation" will look really bad tomorrow.
As for your "new generation" quotation nonsense, I think it's pretty clear that new generations do not need a process shrink, seeing as intel, amd/ati, and nvidia have done this before. Also, I hope you have the fairness to criticize nvidia when it renamed parts over 3 generations.
E7200 @ 3.4 ; 7870 GHz 2 GB
Intel's atom is a terrible chip.
Does SLI scaling suck or is it just me ?
http://www.techpowerup.com/reviews/N...80_SLI/24.html
Quote from TPU: In order to stay within the 300 W power limit, NVIDIA has added a power draw limitation system to their card. When either Furmark or OCCT are detected running by the driver, three sensors measure the inrush current and voltage on all 12 V lines (PCI-E slot, 6-pin, 8-pin) to calculate power. As soon as the power draw exceeds a predefined limit, the card will automatically clock down and restore clocks as soon as the overcurrent situation has gone away. NVIDIA emphasizes this is to avoid damage to cards or motherboards from these stress testing applications and claims that in normal games and applications such an overload will not happen. At this time the limiter is only engaged when the driver detects Furmark / OCCT; it is not enabled during normal gaming. NVIDIA also explained that this is just a work in progress with more changes to come. From my own testing I can confirm that the limiter only engaged in Furmark and OCCT and not in other games I tested. I am still concerned that with heavy overclocking, especially on water and LN2, the limiter might engage and reduce clocks, which results in reduced performance. Real-time clock monitoring does not show the changed clocks, so besides the loss in performance it could be difficult to detect that state without additional testing equipment or software support.
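For what it's worth, the limiter TPU describes boils down to a very simple control loop: sum V * I across the three monitored 12 V rails, clock down while the total exceeds the limit, restore clocks once it drops back. Here's a rough sketch of that logic; the rail names, the clock values, and the sensor interface are all made up for illustration, the real thing lives in NVIDIA's driver/firmware:

```python
# Hypothetical sketch of the power-draw limiter described in the TPU quote.
# Rail names, clock values, and read_sensor() are assumptions for illustration.

RAILS = ("pcie_slot", "6pin", "8pin")  # the three monitored 12 V inputs
POWER_LIMIT_W = 300.0                  # the stated 300 W limit
NORMAL_CLOCK_MHZ = 772                 # GTX 580 reference core clock
THROTTLED_CLOCK_MHZ = 405              # assumed throttled clock

def total_power(read_sensor):
    """Sum V * I across all monitored 12 V rails.

    read_sensor(rail) -> (volts, amps) stands in for the three
    on-board inrush current/voltage sensors."""
    return sum(v * i for v, i in (read_sensor(r) for r in RAILS))

def limiter_step(read_sensor):
    """One polling iteration: clock down while over the limit,
    restore clocks as soon as the overcurrent condition clears."""
    if total_power(read_sensor) > POWER_LIMIT_W:
        return THROTTLED_CLOCK_MHZ
    return NORMAL_CLOCK_MHZ
```

This also illustrates why TPU's monitoring concern is real: the decision happens per polling tick inside the driver, so an external clock readout can easily miss the throttled state.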
Man from Atlantis(B3D, DH, S|A, 3DC, OCN), MfA(G3D, CH), kaktus1907(XS,TPU,AT) and zennino
SIS 6326 > Ti 4200 > 9800XT > 9800GT > GTX 460
Celeron 366 > Celeron 1700 > Athlon XP 2500+ > E6300 > Q9650
Alice Madness Returns | Assassin's Creed: Brotherhood | Assassin's Creed: Revelations | Batman Arkham City | Battlefield 3 | Bulletstorm | Call of Duty: Modern Warfare 3 | Crysis 2 | Darkness II | Darksiders | Dead Island | Dead Space | Dead Space 2 | Deus Ex: Human Revolution | Dragon Age Origins | Dragon Age 2 | F.3.A.R. | F1 2011 | Half Life 2 | Hard Reset | Kane & Lynch 2 | L.A. Noire | LEGO: Pirates of the Caribbean | LEGO: Star Wars III: The Clone Wars | LOTR: War in the North | Mass Effect | Mass Effect 2 | Mass Effect 3 | Mini Ninjas | NFS Hot Pursuit | RAGE | Renegade Ops | Skyrim | The Witcher 2 | Tomb Raider: Underworld | Transformers: WFC | Trine 2
It indeed makes a big difference. Check the 5970 in the GTX580 review compared to the 6870 one.
Weirdly, the GTX 480 (in fact, all cards) lost FPS too?
WTF TPU, why wouldn't you review this card's current main competitor with new drivers??
Especially when you conducted an EARLIER review (HD6870) WITH new drivers?
Especially when there was so much sh@# about you using older drivers in your GTX 480 review months ago... Talk about not learning a lesson.
I hope this is something about these being the "preliminary" review and it'll be fixed in the real review, but that's not a very realistic thing to hope for...
Last edited by Lanek; 11-08-2010 at 03:11 PM.
CPU: - I7 4930K (EK Supremacy )
GPU: - 2x AMD HD7970 flashed GHZ bios ( EK Acetal Nickel Waterblock H2o)
Motherboard: Asus x79 Deluxe
RAM: G-skill Ares C9 2133MHz 16GB
Main Storage: Samsung 840EVO 500GB / 2x Crucial RealSSD C300 Raid0
Let's not trust an early pre-NDA review. Seems to me a lot of stuff is not right about this one. I must congratulate nvidia on making a quiet and powerful card though. The noise reduction seems pretty substantial.
PS: the difference in Metro scores may be attributed to system setup.
Gigabyte Z77X-UD5H
G-Skill Ripjaws X 16GB - 2133MHz
Thermalright Ultra-120 eXtreme
i7 2600k @ 4.4GHz
Sapphire 7970 OC 1.2GHz
Mushkin Chronos Deluxe 128GB
Same, I'd like to see how the 480 and the 580 compare when the core and memory clocks match.
This is very interesting. AMD did a similar thing, but I can't remember if they did it in hardware or in drivers (on the 58xx cards, I mean). I find driver throttling a little spooky, TBH: what happens if there is a driver bug, or suppose a game or application triggers the throttling? Down goes your performance. It might be a non-issue, I don't know, but it's interesting just the same.
Quote: "At this time the limiter is only engaged when the driver detects Furmark / OCCT; it is not enabled during normal gaming. NVIDIA also explained that this is just a work in progress with more changes to come."
Or maybe the settings weren't equal... there are more settings than just resolution and AA
still, TPU should seriously redo the review with new drivers, this is just wrong
So the average performance improvement over the GTX 480 is only 13% (1920x1200), and that's without even factoring out the much newer drivers.
This means that Nvidia is still behind in efficiency compared to AMD products which were released a year ago! Kinda disappointing.
I believe it was hardware-coded: temperature-monitored VRMs.
From Anandtech last year:
For Cypress, AMD has implemented a hardware solution to the VRM problem, by dedicating a very small portion of Cypress’s die to a monitoring chip. In this case the job of the monitor is to continually monitor the VRMs for dangerous conditions. Should the VRMs end up in a critical state, the monitor will immediately throttle back the card by one PowerPlay level. The card will continue operating at this level until the VRMs are back to safe levels, at which point the monitor will allow the card to go back to the requested performance level. In the case of a stressful program, this can continue to go back and forth as the VRMs permit.
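So unlike the driver-based approach, the behavior Anandtech describes is pure hardware hysteresis: step down one PowerPlay level while the VRMs are critical, step back up toward the requested level once they recover. A minimal sketch of that tick-by-tick behavior, where the clock values and the critical threshold are invented for illustration and only the step-down/step-up logic comes from the quote:

```python
# Rough sketch of the Cypress VRM monitor behavior from the Anandtech quote.
# PowerPlay clock values and the critical threshold are assumptions;
# only the one-level-down / one-level-up behavior comes from the text.

POWERPLAY_LEVELS = [157, 400, 850]  # idle / intermediate / full clocks (MHz), assumed
VRM_CRITICAL_C = 120.0              # assumed critical VRM temperature

def monitor_step(level_idx, requested_idx, vrm_temp_c):
    """One monitoring tick: drop one PowerPlay level while the VRMs are
    in a critical state, otherwise recover one level per tick back
    toward the performance level the driver requested."""
    if vrm_temp_c >= VRM_CRITICAL_C:
        return max(0, level_idx - 1)          # throttle back one level
    return min(requested_idx, level_idx + 1)  # recover toward requested level
```

Under a sustained stress load this naturally produces the back-and-forth oscillation between levels that the quote mentions, without the driver ever being involved.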
Agreed, performing benchmarks where cards do not show their power at the fullest (from both camps) is no good. I was under the impression that the people who can manage a site such as TPU would have enough common sense to give both cards an equal run at their fullest, so that readers can make educated decisions; otherwise there is very little point in performing the exercise.
If you want to discuss nVidia vs AMD from a football-match point of view, I'm not interested in any of that; both of them can bleed, I don't care.
Let's keep the discussion to relevant stuff, and about current cards. Just look at those numbers in the first review, and tell me: how do those "things" that AMD has chosen to call a "new generation" look compared to the GTX 580? More will come tomorrow too.
Don't you think AMD should do something about this and get Cayman out? And it had better arrive in time, and it had better deliver what they promise, because otherwise AMD will be in big trouble.
► ASUS P8P67 Deluxe (BIOS 1305)
► 2600K @4.5GHz 1.27v , 1 hour Prime
► Silver Arrow , push/pull
► 2x2GB Crucial 1066MHz CL7 ECC @1600MHz CL9 1.51v
► GTX560 GB OC @910/2400 0.987v
► Crucial C300 v006 64GB OS-disk + F3 1TB + 400MB RAMDisk
► CM Storm Scout + Corsair HX 1000W
+
► EVGA SR-2 , A50
► 2 x Xeon X5650 @3.86GHz(203x19) 1.20v
► Megahalem + Silver Arrow , push/pull
► 3x2GB Corsair XMS3 1600 CL7 + 3x4GB G.SKILL Trident 1600 CL7 = 18GB @1624 7-8-7-20 1.65v
► XFX GTX 295 @650/1200/1402
► Crucial C300 v006 64GB OS-disk + F3 1TB + 2GB RAMDisk
► SilverStone Fortress FT01 + Corsair AX 1200W
Well, pretty much the only conclusion you can draw from that, performance-wise, is that it is around 15% faster than a 480. Better to wait for other reviews with proper drivers to see how it really compares to other cards. Then again, it doesn't really even matter how it compares to the 5870 or 5970; both are about to be EOL.
Now that the obligatory criticism has been handed out, I have to applaud that Nvidia did manage to get something out this quickly. Overall the GTX 580 seems like a decent upgrade over the 480, though I'd still wait to see what Cayman will offer.
"No, you'll warrant no villain's exposition from me."
I really haven't made up my mind yet whether the GTX 580 is a real or fake next generation. Let me look at more in-depth reviews, and I'll tell you tomorrow, OK?
But I can tell you today that those "things" that AMD has chosen to call a "new generation" look worse than the "old generation", for sure.