So let's see... Seems that the Bullet package comes with benchmarks, which should run on Linux... Hmm.. ;)
Lovely, with cmake things get easy. Already compiling with default parameters. :)
Is there any reason for anyone to choose PhysX for CPU physics anyway?
That has nothing to do with the API and everything to do with how the developers chose to expose those settings. Or is it that you expect the PhysX API to automatically generate menu settings and detail levels too? I somehow get the impression people don't actually understand how much developers are responsible for the use of a given API. Are all DirectX games the same, with the same menus and detail options?
Again, you're seeking reasons to criticize PhysX that have absolutely nothing to do with the API. It's really hard to have a useful discussion of the real issues in doing so.
Batman, Metro 2033 and Mirror's Edge run quite well with GPU PhysX enabled. You should try them sometime.
I would love to see the contract that's signed when a developer decides to use PhysX.
It's fortunate for us then that the performance hit of PhysX or any other IQ option is irrelevant. The only thing that matters is whether you have playable performance in the end with PhysX enabled. For those games we do, even on $200 cards. Do you care that AA in Starcraft 2 halves your framerate if you end up with playable fps in the end? Nope....
After testing some demos (ConstraintDemo and BspDemo) and the benchmark, and recompiling them with a few g++ flags (-msse, -msse2, -msse3, -mfpmath=sse, -mtune=core2, -ffast-math, -ftree-vectorize), I couldn't see ANY impact on FPS, or on the on-screen profiling info (which reports how much time was spent doing the actual physics steps per frame, and the percentage of total frame time spent doing that). No difference at all in any of the benchmarks, as if no changes had been made to the compiled binary. I verified that to be false, as the binary size fluctuated with every recompile under different flags.
Oh, and I tried forcing x87 on, which unsurprisingly had no impact either.
I'm not really seeing any detailed rebuttal to the Real World Tech article within the blog. The RWT piece at least had some analysis to show the use of x87 instructions.
The rebuttal alludes to other bottlenecks which limit the effectiveness of using SSE over x87, but doesn't start down the path of identifying those bottlenecks or what is involved in overcoming them. Or did I miss that?
32-bit Arch Linux box with a single-core Celeron M @ 1.6 GHz (Conroe-L, a crippled mobile C2D).
I'd bet that most of the physics iterations are actually bound by something other than vectorizable math, hence no gains.
...then again, the demos are very simple, with a few dozen objects (at most) just showing some rigid body collisions and friction.
The main difference between HL2 and Mafia II physics is that while Mafia's physics are little more than eye candy, the physics of HL2 are actually a vital part of the game. Something you can play with. Mafia II could exist (and in fact exists) without PhysX; however, HL2 would be a different game without its physics.
HL2 was a good start, but the most fun I'm having with "physics" in a game right now is in Vindictus (aka Mabinogi: Heroes).
It's using the Source engine, but they put the physics effects to good use. :)
Already been over this several times.
The reason for this is because PhysX is proprietary technology, so it wouldn't make sense for developers to make it "game affecting" where ATI users would end up having a totally different gaming experience.
By limiting it to eye candy physics, developers are able to make good use of PhysX for Nvidia users, without being unfair towards ATI users since technically, the gameplay itself hasn't changed.
I prefer this way personally speaking. Both Mafia 2 and Batman AA are the best examples of hardware PhysX, and while gameplay wasn't affected by the implementation of PhysX in both titles, the overall gaming experience was definitely improved.
I can't even imagine playing Mafia 2 or Batman AA without PhysX..
1) PhysX is proprietary technology, yes, and it was under Ageia as well, but its purpose was to be more than just eye candy, since it could be used in conjunction with any GPU.
2) It is limited to eye candy because NV wants to be unfair to everyone else by disabling the feature when other makers' GPUs are in the system. Developers therefore have no choice but to use it for eye candy only; otherwise they would lose out on sales, as the game could only run on NV discrete GPUs. It's nothing about being fair: if developers could make more by being unfair, they would.
3) To each their own.
Considering Ageia had 0% of the market it couldn't be used for squat.
Its original purpose was to make money for Ageia, hopefully through acquisition (worked out well for them). Just like Nvidia is using it to make money for themselves. What other purpose do you have in mind? You really think Ageia expected millions of people to buy a PhysX card and for game developers to make it a requirement? Don't make me laugh :)
There's nothing stopping PhysX from being used for both. Some items could be "static" items which are in the game regardless of acceleration method (CPU or GPU), and others could be added when a secondary method is present. Also, PhysX would be a lot more popular if it were manufacturer-agnostic, in the sense of being allowed to work with other vendors' hardware present in the system, and if it caused fewer problems with some games (GPU-accelerated PhysX causes speedups in GRiD, for example; even having CPU acceleration selected results in the occasional speedup, or sometimes a slowdown).
The original purpose was to create hardware-accelerated physics. If they had a goal of making money, then that's great for them, but your point is still irrelevant and not serving your intended purpose of muddying up the topic.
Now we have another question: does PhysX respond to x87/SSE differences like Bullet does? What other optimizations are or are not present at compile time (Calmatory's list: -msse, -msse2, -msse3, -mfpmath=sse, -mtune=core2, -ffast-math, -ftree-vectorize)?