It's weird; going by their track record, Chinese leaks about cards usually tend to over-estimate performance...
If those figures are true, then get used to high prices...
This isn't aimed at anyone, and I hope that no one takes offense.
But if you (a theoretical "you") think a 15% to 20% increase over a 5870, with a huge increase in power consumption and heat (and possibly cost?), is "good" after 7+ months of waiting...
Wow... wow. That's some pretty hardcore devotion to a company.
I don't think it's good. However, the overwhelming majority of people are claiming Fermi will be a huge flop and back their points with Charlie articles.
So, relative to the portrait of the current situation drawn by Charlie and his fans: yes, a GTX 480 that adds 20% over a 5870 is good.
The huge increase in power consumption that you mention is about 35 percent. Yeah, that's not in line with the proposed performance increase; but take a look at the HD 5970: its TDP is 67% higher than the 5870's, for only a 45% average increase in performance. So, TDP-wise, Fermi makes as much sense as a 5970 does (if the figures are correct, of course).
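To make that comparison concrete, here's a minimal perf-per-watt sketch in Python, using only the ratios claimed above (all of them rumours or rough claims, none verified):

```python
# Perf-per-watt sanity check using the figures from the post above.
# All ratios are the post's claims/rumours, not confirmed specs.
tdp_ratio_5970  = 1.67   # HD 5970 TDP vs. HD 5870 ("67% higher")
perf_ratio_5970 = 1.45   # HD 5970 average performance vs. HD 5870 ("45% more")

tdp_ratio_480  = 1.35    # rumoured GTX 480 TDP vs. HD 5870 ("about 35 percent")
perf_ratio_480 = 1.20    # rumoured GTX 480 performance vs. HD 5870 ("20%")

print(f"HD 5970 perf/watt vs. 5870: {perf_ratio_5970 / tdp_ratio_5970:.2f}x")  # ~0.87x
print(f"GTX 480 perf/watt vs. 5870: {perf_ratio_480 / tdp_ratio_480:.2f}x")    # ~0.89x
```

Both cards land at roughly 0.87-0.89x the 5870's performance per watt, which is the point: by that metric, the rumoured GTX 480 is no more absurd than the 5970.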
It's not (maybe) just a 15-20% increase...
You get PhysX, you get CUDA, you get Folding@home, you get a DEDICATED driver team, you get wider support in games, there's a WONDERFUL lack of Catalyst Control Centre, and you get a card that retains a higher second-hand value...
Need I go on?
Where's this "huge increase in power consumption"? 42A... my GTX 280's manual says it requires that much.
Heat? I'd imagine it's no worse than my GTX 280.
Seeing as the next gen is usually twice as fast as the previous one, I'll be more than happy with a card that's equal to GTX 280s in SLI.
Obviously, I'm not a fool. If this GTX 480 costs around £450, the 5970 is £499, and the difference between them is huge, then I'd get the 5970.
I'll give you the drivers. That's basically why I got a GTX 260 instead of a 4870. Catalyst isn't as bad as some people make out, IMO, but yeah, Nvidia definitely has more solid drivers.
CUDA, Folding, and (especially) PhysX I disagree with, though. I don't know one person outside of XS (myself included) who cares even slightly about those things. And granted, some people think they care about PhysX, but let's be honest with ourselves: PhysX is useless in 99.99% of games.
The rest is debatable/up in the air until we get solid numbers and testing, so I won't argue.
And yeah, I know my point about no one caring about CUDA etc. is debatable too.
They just seem awfully situational to me; most people don't have much use for those things as far as I know.
If someone actually uses CUDA, then yeah, I agree, it's great. But I know I have no use for it.
What's up with the A2? Is this pic even real?
Good points. It's a big assumption that it will beat the 5870 by 20%; in the average case you might be looking at a much smaller difference, if any.
If Nvidia is getting PhysX through to you, it is not getting through to me. Sure, it's something to say you have that others don't, but think about it: ATI could just as easily run PhysX on their GPUs, but Nvidia won't allow it. ATI won't pursue it now because they want to see it die; maybe if it picks up later they will license it. It will take a few years for something to emerge as the dominant physics API, and if there is a lesson we can learn from history, it's that the first to buy raffle tickets aren't always the ones who win the raffle.
CUDA. OK, you have CUDA. I want to play games, but you have CUDA. Games, CUDA, games, CUDA, games, CUDA. How are they related again? Let's just say I hope they have that "15-20%" advantage, because they will definitely need it.
I admit I am not an "average" computer user. I don't see things the way regular users do, and I cannot recall the last time I struggled with CCC. I love all the features in CCC, and I love the way it's laid out. I also use Nvidia drivers at work; I don't upgrade them as often, and I use them a bit differently there. At work my primary concern is not gaming but stability, ease of use, and multi-display support. I like the features they have for multi-display, but they are not well thought out. Nvidia drivers give more control over display settings (color/contrast/brightness/gamma), but no control over video playback and no deinterlacing options, at least in the release I have now.
I don't use dual GPU, so I cannot comment on that. But to be fair, from my perspective and my personal usage, I see no clear advantage to Nvidia or ATI in the driver department.
I have no idea where this completely unfounded statement comes from, but OK.
Please do; to me it sounds like you are out of ideas.
Who are we kidding with power figures? We know the power consumption of GT200 at 65nm. How much power reduction do we expect from a simple die shrink of GT200? Looking at the figures for the 5770 vs. the 4890 (40nm vs. 55nm), there is a difference of 50 watts at full load. I know their specs differ slightly (the 5770's memory is a bit faster, it has more transistors and a smaller bus), but I think that only strengthens the case I am trying to make.
Nvidia has simply scaled the GT200 architecture. If they went from 65nm to 55nm, then using the info above, that's a less-than-40-watt difference. Now add more RAM, double the number of SPs, and more transistors for DX11, and you can easily put GF100 past GT200 in power consumption. A quick calculation puts my guess at 60W hotter than GT200 at full load; if they went to 40nm, it might be half that, 30W. Either way, that leaves it ahead of the 5870 in power consumption.
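As a rough sketch of that estimate, here's the same arithmetic in Python; the GT200 and HD 5870 load-power baselines are my own assumptions plugged into the post's guessed deltas:

```python
# Back-of-envelope GF100 load-power guess following the reasoning above.
# Every input here is a rough assumption, not a measured figure.
gt200_load_w  = 180.0   # assumed GTX 280 (65nm) full-load draw, watts
hd5870_load_w = 188.0   # assumed HD 5870 full-load board power, watts

extra_55nm = 60.0             # the post's guess: ~60W over GT200 at 55nm
extra_40nm = extra_55nm / 2   # "might be half that" at 40nm

for label, extra in [("55nm", extra_55nm), ("40nm", extra_40nm)]:
    guess = gt200_load_w + extra
    print(f"GF100 @ {label}: ~{guess:.0f} W "
          f"({guess - hd5870_load_w:+.0f} W vs. assumed HD 5870)")
```

Under these assumptions, even the optimistic 40nm case lands above the 5870's load draw, which is the post's conclusion.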
Let's hope they improved the idle power consumption as well.
Fair game.
Wait, so ATI doesn't have a 'dedicated' driver team? :confused:
To be honest, no idea. The thing is, whether the 5870 is more efficient or not, the GTX 470 has about 256MB more VRAM (1280MB vs. 1024MB, if the rumoured specs hold), so even if it's less efficient it still has some extra headroom.
A PC component!
Sure they do. He is just making things up.
He renamed it 360/380 to avoid the name being censored by Nvidia...
I dunno. If the GTX 480 had come out within 2-3 months of the 5870, I'd say that yeah, it's not a flop. But coming out over 6 months later, and potentially running hotter without major performance increases, is definitely disappointing, not "good".
It's funny that people are focusing so much on Cypress that they don't realize those numbers would be a joke even compared to GT200. That's even worse.
http://www.abload.de/img/nv-tesselation-benchma0m5f.jpg
new picture?!
+1 to that... AMD is really not as interested in the HD 5000 series now as some still believe. Or maybe PR is. But the engineers are looking forward to the next gen. The HD 5870 Eyefinity edition is almost released, and maybe the team that worked on Evergreen will release an HD 5890 in the summer (if so, that card is entering its last phase before release now), but the HD 5000 series is being forgotten. I mean, from their point of view NVIDIA has totally sucked; they are looking forward to their biggest change since R600, so what is NVIDIA, with its half-year-late Fermi, to them?
As a result, I am afraid AMD will lower prices not because Fermi finally comes to market, but because the current gen is almost halfway through its lifetime. Then another price cut around October/November, and next we have a new line...
That can't be a 470, for the simple fact that 13000 points can be attained with an 8800 GTS 512.
Barely faster than a GTX 260? Nice!
A dual core at 2.6GHz in 3DMark06 is surely a bottleneck for all that GPU power at 1280x1024.
I can score higher with my OC'd G92 and an E8400 at 4GHz...
Are we there yet?