Quote Originally Posted by SKYMTL View Post
What's wrong with CUDA? People in the professional / HPC world are still "shouting it from the rooftops," as you put it. Currently, there isn't a better, more adaptable GPU compute language on the market. OpenCL certainly has the chance to make it big, but as an open standard it still lacks a directed effort to address its many inefficiencies.
Because every time we talked about gaming perf / dollar, gaming perf / watt, or any other metric in *gaming*, we got the same refrain of "CUDA and PhysX, performance regardless of wattage, blah blah." Now NVIDIA is pursuing the same philosophy ATI did, and the arguments have (not) surprisingly shifted their focus to the exact same metrics ATI's VLIW4/5 architectures were lauded for from the beginning.