Quote:
Originally Posted by
SexyMF
I meant that it's a shame PhysX was not primarily developed for the CPU. I say this knowing it was only brought to the GPU by Nvidia, which essentially crushed what might well have been a successful development path for PhysX on the CPU.
That's a bit silly if you ask me.. How many software physics APIs are out there?
And now how many hardware accelerated physics APIs are there out there?
PhysX is the ONLY one which uses hardware acceleration. All the others use the CPU only..
So saying it's a "shame" that PhysX was not primarily developed for the CPU seems asinine in that regard.
Quote:
We have spare CPU cores now for running PhysX. Available core count significantly outpaces usage (in a gaming context). We don't really have spare GPU capacity for PhysX; you need every bit of GPU power to render the latest games.
Another fallacy. GPUs are so powerful these days that they often have resources sitting idle doing jack. Few games are so GPU-bound as to require all of a GPU's resources, since most games are multiplatform titles or console ports.
It's quite possible to run Batman: AA on a single GTX 285 with PhysX maxed out at 1200p if you're willing to sacrifice AA, for instance, or if you play at a lower resolution.
Quote:
Originally Posted by
Lanek
Come on, are Nvidia's developers stupid and unable to change this?...
As if they weren't capable of recompiling it with multithreading and x86 SSE4.x instructions... oh please, what BS.. they just don't want to..
PhysX was originally intended to be used with hardware acceleration, and after it was purchased by Nvidia and ported over to CUDA, it REMAINED that way.
Havok utilizes multithreading and SSEn, yet it still doesn't hold a candle to GPU accelerated PhysX in terms of capability.
Quote:
SSE4 would make PhysX on the CPU barely 4x faster than it is now. Add to this that PhysX could use the unused share of CPU resources for its work... Look at how multithreaded games run and you quickly see that our 4-6 core CPUs are overkill for those games. You would get 100% of the GPU's power for rendering while making full use of the CPU... surely this would be more efficient than running PhysX on a single GPU.
Sheesh, have you not read anything in this thread at all? The whole premise of the thread is that the enormous gains supposedly offered by SSE-optimized PhysX are extremely difficult to realize..
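One quick way to see why a blanket "4x from SSE" claim rarely holds up is Amdahl's law: only the vectorizable fraction of the physics workload gets the SIMD speedup, and the rest runs at the old speed. A minimal sketch, where the 60% vectorizable fraction is purely an illustrative assumption, not a measured PhysX profile:

```python
def amdahl_speedup(vector_fraction: float, simd_factor: float) -> float:
    """Overall speedup when only `vector_fraction` of the work
    is accelerated by `simd_factor` (Amdahl's law)."""
    return 1.0 / ((1.0 - vector_fraction) + vector_fraction / simd_factor)

# Even if 4-wide SSE gave an ideal 4x on the vectorizable kernels,
# a workload that is only 60% vectorizable speeds up far less.
ideal = amdahl_speedup(1.0, 4.0)  # whole workload vectorizes
real = amdahl_speedup(0.6, 4.0)   # only 60% vectorizes

print(f"fully vectorizable: {ideal:.2f}x")  # 4.00x
print(f"60% vectorizable:   {real:.2f}x")   # 1.82x
```

So even under generous assumptions the whole-application gain lands well under 2x, which matches the thread's point that the theoretical 4x is not what you see in practice.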
Quote:
Then there's the multithreading part. The real problem is that the PhysX driver forces everything onto the first core. It's not just a question of not being multithreaded and letting Windows decide which core gets the work; it forcibly pins the load on the first core, causing a major performance drop because that core gets overloaded and normal operations can't run properly.
PhysX already supports multithreading, but it's up to the developer to make use of it.
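To illustrate what "up to the developer" means in practice: the engine exposes the simulation step, but the game code decides how to partition the work across threads. A generic sketch of that idea (not the PhysX API; `integrate_slice` and `step_world` are made-up names for illustration), splitting a simple Euler integration step over a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def integrate_slice(positions, velocities, lo, hi, dt):
    """Advance one contiguous slice of bodies by one Euler step."""
    for i in range(lo, hi):
        positions[i] = positions[i] + velocities[i] * dt

def step_world(positions, velocities, dt, workers=4):
    """The *developer* chooses the partitioning: here, one
    contiguous slice of the body arrays per worker thread."""
    n = len(positions)
    chunk = (n + workers - 1) // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for lo in range(0, n, chunk):
            pool.submit(integrate_slice, positions, velocities,
                        lo, min(lo + chunk, n), dt)
    # leaving the `with` block waits for all slices to finish

# toy world: 8 bodies on a line, all moving at 1 unit/s
pos = [float(i) for i in range(8)]
vel = [1.0] * 8
step_world(pos, vel, dt=0.5)
print(pos)  # every body advanced by 0.5
```

In CPython the GIL limits the speedup of pure-Python arithmetic like this; a real engine does the same partitioning with native worker threads, but the division of labor between engine and developer is the point.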
Quote:
It would take three developers four days to recompile it using SSE instructions.. They bought Ageia in 2008, right?.... that's two years to do it.. Especially since they already did it for the console version, so why not for the PC one?
Are you a programmer with experience in vectorization?
Several people have tested the effects of SSEn optimization on Bullet, a physics API which uses SSEn by default.....and the performance increase from using SSEn was nowhere near 4x.