Good points. Big assumption here that it will beat it by 20%. In the average case you might be looking at a much smaller difference, if any difference at all.
If Nvidia is getting PhysX through to you, it is not getting through to me. Sure, it's something to say you have that others don't, but think about it. ATI could just as easily run PhysX on their GPUs, but Nvidia won't allow it. ATI won't pursue it now because they want to see it fail; maybe if it picks up later they will license it. It will take a few years for something to emerge as the dominant physics API. If there is a lesson we can learn from history, it is that the first to buy raffle tickets aren't always the ones who win the raffle.
CUDA. Ok you have CUDA. I want to play games, but you have CUDA. Games, CUDA, games, CUDA, games, CUDA. How are they related again? Let's just say I hope they have that "15-20%" advantage because they will definitely need it.
I admit, I am not an "average" computer user. I don't see things the way regular users do, and I cannot recall the last time I struggled with CCC. I love all the features in CCC, and I love the way it's laid out. I also use Nvidia drivers at work; I don't upgrade them as often, but I do use them a bit differently there. At work my primary concern is not gaming but stability, ease of use, and multi-display support. I like the features they have for multi-display, but they are not well thought out. The Nvidia drivers give more control over display settings (color/contrast/brightness/gamma), but no control for video playback and no deinterlacing options, at least in the release I have now.
I don't use dual GPU, so I cannot comment on that, but to be fair I will say there is no clear advantage to Nvidia or ATI in the driver department, from my perspective and my personal usage.
I have no idea where this completely unfounded statement comes from, but ok.
Please do; to me it sounds like you are out of ideas.
Who are we kidding with power figures? We know the power consumption of the GT200 @ 65nm. How much power reduction do we expect from a simple die shrink of the GT200? Looking at the figures for the 5770 vs the 4890 (40nm vs 55nm), at full load there is a difference of 50 watts. I know they have slight differences in specs: the 5770's memory is a bit faster, and it has more transistors and a smaller bus, but I think that only adds to the case I am trying to make.
Nvidia has simply scaled the GT200 architecture. If they went from 65nm to 55nm, then using the info above that's a difference of less than 40 watts. Now add more RAM, double the number of SPs, and more transistors for DX11, and you can easily put the GF100 past the GT200 in power consumption. Using quick calculations, my guess puts it at 60W hotter than the GT200 @ full load. If they went to 40nm, it might be half that number, around 30W. That still leaves it ahead of the 5870 in power consumption.
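To make the arithmetic explicit, here is a minimal sketch of the estimate. The 100W scaling cost and the 70W saving at 40nm are hypothetical numbers I'm plugging in so the deltas come out at my 60W/30W guesses; none of these are spec figures.

```python
# Rough sketch of the back-of-envelope math above.
# All inputs are my own guesses, not measured figures.

scaling_cost_w = 100      # hypothetical extra full-load draw from more RAM, 2x SPs, DX11 logic
shrink_saving_w = {       # guessed full-load savings from the die shrink (cf. 5770 vs 4890)
    "55nm": 40,
    "40nm": 70,
}

for node, saving_w in shrink_saving_w.items():
    delta_vs_gt200_w = scaling_cost_w - saving_w
    print(f"GF100 @ {node}: roughly {delta_vs_gt200_w:+d} W vs GT200 at full load")
```

That prints roughly +60W at 55nm and +30W at 40nm, which is where my guess comes from.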
Let's hope they improved the idle power consumption as well.
Fair game.