I would like to counter this with one point: PhysX is optimized for the GPU, while other physics engines are optimized for the CPU.
Bear with me while I go through this.
In many situations, I would rather my CPU take a performance hit than my GPU when gaming. Like it or not, enabling PhysX lowers a GPU's rendering performance.
Let me give you an example.
Say I am playing a game at 1920 resolution with 4x AA enabled on a GTX 260 216, and I am getting framerates of around 60. Great. Now I enable PhysX on my GPU, which eats into valuable rendering performance for something a CPU can do. The next thing I know, I am settling for lower image quality settings just to justify having a few rocks bouncing around realistically. To me and most other gamers, that is an unacceptable trade-off.
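To put rough numbers on that trade-off, here is a back-of-the-envelope sketch. The 5 ms of GPU time for PhysX is an assumed figure for illustration, not a measurement from any particular game:

```python
# Back-of-the-envelope frame-budget math (the 5 ms PhysX cost is a
# made-up figure for illustration, not a measured number).

def fps_after_physics(base_fps: float, physics_ms: float) -> float:
    """Estimate framerate after adding GPU physics work to each frame."""
    base_frame_ms = 1000.0 / base_fps          # 60 fps -> ~16.7 ms per frame
    return 1000.0 / (base_frame_ms + physics_ms)

# 60 fps rendering budget plus a hypothetical 5 ms of GPU PhysX per frame:
print(round(fps_after_physics(60.0, 5.0), 1))  # -> 46.2 fps
```

Even a few milliseconds of GPU physics per frame turns 60 fps into the mid-40s, which is exactly the kind of hit that forces you to dial back IQ settings.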
The fact of the matter is that when gaming at enthusiast / "gamer" resolutions, the GPU's RENDERING performance is paramount, and it can't afford to spend cycles processing physics as well. What use is physics if your framerate goes down the crapper? There are very few "free" GPU cycles when playing a graphically demanding game.
HOWEVER, when playing at higher resolutions and AA settings, we all know that the CPU takes a back seat when it comes to overall gaming performance, which leaves it with spare cycles to burn on physics. That is precisely why the MAJORITY of game developers have gone with CPU-bound physics engines such as Havok, which can be just as efficient as PhysX, if not more so. Don't believe me? Take a look at the laundry list of games out there that use Havok: Fallout 3, Company of Heroes, Dawn of War II, Resident Evil 5, World in Conflict, etc. Need upcoming titles?
Voila.
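The reason those CPU-bound engines cost so little at GPU-limited settings is simple to sketch: run the physics step on its own thread at a fixed timestep, and the render loop never waits on it. This is a minimal illustration of the idea, not any real engine's API; step_physics, physics_loop, and the millisecond figures are all placeholders:

```python
# Minimal sketch of CPU-side physics running alongside a GPU-limited
# render loop. On an otherwise idle core, the physics tick never steals
# time from the GPU's rendering work.

import threading
import time

TIMESTEP = 1.0 / 60.0  # fixed 60 Hz physics tick
running = True

def step_physics():
    """Placeholder for rigid-body integration, collision resolution, etc."""
    time.sleep(0.002)  # pretend the step costs ~2 ms of CPU time

def physics_loop():
    while running:
        start = time.perf_counter()
        step_physics()
        # Sleep off the remainder of the tick; the render loop below
        # never blocks on this thread.
        time.sleep(max(0.0, TIMESTEP - (time.perf_counter() - start)))

worker = threading.Thread(target=physics_loop, daemon=True)
worker.start()

# Main thread stands in for the GPU-limited render loop.
for frame in range(10):
    time.sleep(0.016)  # pretend each frame takes ~16 ms to render
running = False
```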
I don't know about you, but I find that some of those titles had seriously good physics effects, and no one ever saw their modern dual- or quad-core CPU become a bottleneck while processing physics.