Quote Originally Posted by DarthBeavis View Post
I am sorry but I disagree. When I was shown the demo I did not accept the proposal at face value (I am a former software engineer). The first thing I did was to check the CPU utilization and monitor it during the demo . . . and it verified the claims being made. There is no reason applications cannot be coded to take advantage . . . of course that is up to the application developers. I do understand your skepticism though.
Unfortunately, there are. First, there's the question of the hardware itself. I assume the demo you're talking about was running on CUDA, so the real problem isn't the CPU but whether the GPU can run the physics on top of the graphical load. Demos tend to be graphically simpler than games, and even when they aren't, you're always cutting into your graphics budget to make room for physics. And graphics are what show up in screenshots...

Then there's the question of how much of the target audience can actually run that code. Probably only people with a high-end graphics card (a much smaller slice of gamers than you might think). Then, if it's done in CUDA, cut that in half (no ATi compatibility). That leaves numbers too low to make investing resources in it profitable. Honestly, if you were a developer, would you rather spend your resources on something so few people will ever see, or on something more widely usable?
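
To make that cost concrete, here's a minimal sketch of the kind of branch a developer ends up shipping when an effect is CUDA-only: a device check plus a CPU fallback, which is the path most of the target audience will actually run. This is just my illustration; the two physics functions are hypothetical stand-ins, not anything from PhysX or a real engine.

Code:
// Sketch only: why a CUDA-only effect still forces a CPU fallback.
// run_gpu_physics()/run_cpu_physics() are hypothetical placeholders.
#include <cuda_runtime.h>
#include <cstdio>

static bool cuda_device_available() {
    int count = 0;
    // Fails or reports zero devices on any machine without an NVIDIA
    // GPU and driver -- i.e. on every ATi-only system in the audience.
    return cudaGetDeviceCount(&count) == cudaSuccess && count > 0;
}

static void run_gpu_physics() { std::printf("CUDA physics path\n"); }
static void run_cpu_physics() { std::printf("CPU fallback path\n"); }

int main() {
    // Whatever the GPU path adds, the CPU path still has to exist and
    // be tested, because it's what most buyers will actually see.
    if (cuda_device_available())
        run_gpu_physics();
    else
        run_cpu_physics();
    return 0;
}

So the GPU effects end up being extra work on top of a simulation that has to look acceptable without them anyway.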

Someone could argue that some graphics settings aren't usable without high-end hardware either. Well, graphics are special in one respect: games are sold with screenshots, and the graphics are the screenshots. So even if not everyone can take advantage of the top settings, the developers get value out of that effort with everyone...

Notice that even though PhysX is the most widely used physics library, only a handful of games have any kind of CUDA-accelerated effects (basically Batman: Arkham Asylum and Mirror's Edge, if we don't count the laughable falling leaves in Sacred 2 or the extra out-of-main-game levels in UT3). There's a reason for that.

Maybe you're right that this is the future of videogames, maybe not (I'm somewhat skeptical that it's a good idea to move workload from the CPU to the GPU in a field of software that has been GPU-bottlenecked for years now, but I don't think it's impossible). In any case, I don't think it will happen any time soon (not within GF100's lifetime anyway, well, assuming it finally ships in 2010 or so...). And when it does, it will be through some widely supported GPGPU standard (be it OpenCL, DirectCompute, or whatever else), and I think most of those are still more immature than CUDA right now.
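
For what it's worth, here's the sort of thing a vendor-neutral standard buys you. This is a short OpenCL host-side sketch of my own (not from any shipping game): it just enumerates whatever platforms and devices the installed drivers expose, NVIDIA, ATi/AMD or even CPU runtimes, instead of assuming one vendor's stack the way a CUDA path does.

Code:
// Sketch only: OpenCL device enumeration as an example of the
// vendor-neutral approach; compiles as C++ against any OpenCL SDK.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(0, NULL, &num_platforms) != CL_SUCCESS ||
        num_platforms == 0) {
        std::printf("No OpenCL platforms installed\n");
        return 1;
    }

    cl_platform_id platforms[16];
    if (num_platforms > 16) num_platforms = 16;
    clGetPlatformIDs(num_platforms, platforms, NULL);

    for (cl_uint i = 0; i < num_platforms; ++i) {
        char name[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);

        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL,
                       0, NULL, &num_devices);
        std::printf("Platform '%s': %u device(s)\n", name, num_devices);
    }
    return 0;
}

Whether it ends up being OpenCL, DirectCompute or something else, that's the level of hardware coverage GPU physics would need before developers treat it as more than a checkbox feature.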