CPU PhysX is WAY slower than GPU PhysX.
No, at a low level I don't understand the pipeline. But at a higher level, I do have a basic understanding of how a computer works. Let me break it down for you:
1. To use an Nvidia card as a standalone PhysX card, you DO NOT use an SLI bridge to connect that card to other Nvidia cards.
2. No SLI bridge means no direct GPU-GPU communications.
3. Instead, the standalone PhysX processor communicates with the CPU: the CPU offloads PhysX tasks to the standalone card, the card does the calculations and passes the result back to the CPU, and the CPU then merges that result with the rest of the code it's working on and passes the combined result on to the GPU (the one dedicated to graphics) to render. A rough sketch of that round trip follows below.
4. If there were "intricate connections" between the PPU and GPU then you would have a massive and unneeded increase in latency (unless it went through the SLI bridge, which it doesn't), and ATI cards just plain wouldn't work -- ever.
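To make point 3 concrete, here is a minimal C++ sketch of that CPU-mediated round trip. Every function in it is a hypothetical stand-in (there is no real `uploadToPhysxCard` call in any SDK); it only illustrates that the data bounces through the CPU on every frame rather than going card-to-card.

```cpp
// Hypothetical sketch of the round trip described above -- not a real PhysX API.
// Every function name here is a stand-in for "some driver call over PCIe".
#include <cstddef>
#include <vector>

struct RigidBodyState { float pos[3]; float vel[3]; };

// Stand-in: upload current body states to the dedicated PhysX card over PCIe.
void uploadToPhysxCard(const std::vector<RigidBodyState>& bodies) { (void)bodies; }

// Stand-in: block until the card has finished its step, then read the results back.
std::vector<RigidBodyState> readBackFromPhysxCard(std::size_t count) {
    return std::vector<RigidBodyState>(count);
}

// Stand-in: hand the merged frame data to the graphics GPU for rendering.
void submitFrameToRenderGpu(const std::vector<RigidBodyState>& bodies) { (void)bodies; }

int main() {
    std::vector<RigidBodyState> bodies(1024);

    for (int frame = 0; frame < 3; ++frame) {
        // 1. The CPU offloads the physics work to the standalone card...
        uploadToPhysxCard(bodies);

        // 2. ...the card simulates and the results come back over PCIe to the CPU...
        bodies = readBackFromPhysxCard(bodies.size());

        // 3. ...the CPU merges them with game logic and only then feeds the render GPU.
        // There is no direct PPU->GPU path in this setup; everything goes through the CPU.
        submitFrameToRenderGpu(bodies);
    }
}
```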
Fermi is an architecture, not a card. Think Nehalem vs i7 920.
"Use it as they see fit" is nonsense. A product only needs to work as the manufacturer intends it to.
Why would you want to run PhysX on a CPU? That's like saying I want to do HPC tasks on a netbook.
hmmm....
http://www.xtremesystems.org/forums/...6&postcount=12
Not that one would necessarily want to, but PhysX used to run on the CPU almost exclusively. Ageia developed a hardware add-in card for PhysX acceleration, but their API also supported non-add-in hardware (CPU-only, in software), and it has been used in many game engines. In fact, the PhysX API will still run on the CPU in the absence of an nVidia GPU; in that situation you really have no choice but to do so.
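As a rough illustration of that fallback behaviour, here is a hedged sketch of backend selection. None of these names come from the actual PhysX SDK; `hasCompatibleGpu` and friends are made up purely to show the idea that the same simulation runs in software when no capable GPU (or add-in PPU) is present.

```cpp
// Hypothetical backend selection, roughly the behaviour described above.
// These names do not come from the real PhysX SDK; they only illustrate the fallback.
#include <cstdio>

enum class PhysicsBackend { GpuAccelerated, CpuSoftware };

// Stand-in: a real engine would query the driver / SDK for a capable device here.
bool hasCompatibleGpu() { return false; }

PhysicsBackend pickBackend() {
    // If no capable GPU or add-in card is present, the API simply runs the same
    // simulation in software on the CPU -- the game still works, just slower.
    return hasCompatibleGpu() ? PhysicsBackend::GpuAccelerated
                              : PhysicsBackend::CpuSoftware;
}

int main() {
    const PhysicsBackend backend = pickBackend();
    std::printf("physics backend: %s\n",
                backend == PhysicsBackend::GpuAccelerated ? "GPU" : "CPU (software)");
}
```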
Don't quote me on this, but Havok and "physics" have been around since 2000.
I don't understand this obsession with physics that HAS TO run on the GPU.
Remember HL2 gravity gun, various see-saw and floating barrel puzzles?
What about Crysis, destructible buildings, and the way vehicles tumble when punched.
And many games have had waving cloth for years.
Recall that GPU physics was already around on the X1800XT. Also remember those physics PCIe cards for Ghost Recon 2.
Long, long ago, in a galaxy far, far away, games like Unreal Tournament supported D3D, OpenGL, Glide, Metal, etc. Lots of games had speedups with 3DNow!, MMX or SSE. But nVidia PhysX doesn't just make things faster; in games like Mirror's Edge it adds more glass shards when glass breaks, plus waving banners and cloth. Well, you could certainly do that on the CPU very easily (however slowly; just make it an option). I imagine some consumers feel shafted that they get an incomplete game experience because of the video card choice they made years ago. If programmers could add support for multiple standards ten years ago, why not today?
PhysX is dead. Nvidia is supporting OpenCL. Apple, Intel, MS and AMD are also supporting OpenCL. Move along, move along...
OpenCL is a programming/extension platform, but does not include the libraries and such needed to put physics on the GPU. Physx, in this sense, is the full package. Havok is the full package. These could potentially be integrated into OpenCL just fine with sufficient effort, but are for now limited to GPU and CPU, respectively. The game companies will use whatever platform integrates best with their engine and requires the least effort. Neither is going anywhere, they'll both be supported and used for the next few years, to what extent only time will tell.
EDIT: The reason GPU physics is being promoted is speed. In theory, one could do more with physics than would be possible under normal circumstances on the CPU. Having PhysX available for Nvidia graphics is no different conceptually from ATI having Eyefinity. Nvidia cards won't be able to split the picture across 3+ monitors like ATI will. It does put ATI at a disadvantage for certain games, but that is the way things go. Each company will be trying to promote its respective features and make them more useful as time goes on. Whether or not game devs will seek some sort of balance in these instances really comes down to how much they stand to gain from doing so.
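To illustrate the point above about OpenCL being a platform rather than a physics package: the sketch below (assuming an OpenCL SDK with `CL/cl.h` installed) only finds a device to run kernels on. Everything physics-related -- broadphase, solvers, integration -- would still have to be written by you or supplied by a library such as PhysX, Havok or Bullet.

```cpp
// Minimal OpenCL host-side sketch: enumerate a device. This is essentially all OpenCL
// gives you out of the box -- a way to find devices and run kernels on them. The actual
// physics is still your problem, which is why PhysX/Havok/Bullet sit on top of it.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platform = nullptr;
    cl_uint numPlatforms = 0;
    if (clGetPlatformIDs(1, &platform, &numPlatforms) != CL_SUCCESS || numPlatforms == 0) {
        std::printf("no OpenCL platform found\n");
        return 1;
    }

    cl_device_id device = nullptr;
    cl_uint numDevices = 0;
    // Ask for a GPU first; fall back to the CPU device if none is available.
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, &numDevices) != CL_SUCCESS) {
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, &numDevices);
    }
    if (numDevices == 0) {
        std::printf("no OpenCL device found\n");
        return 1;
    }

    char name[256] = {0};
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
    std::printf("would run physics kernels on: %s\n", name);
    // From here you would compile your own .cl kernels and write your own solver --
    // OpenCL itself ships no physics library.
    return 0;
}
```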
No mate, I doubt that is a CUDA simulation. If it's really used as a promo video by Nvidia, it's a lie. That's FumeFX, mate; I use it at work, and it has exactly the same look and feel as a FumeFX simulation.
Here is an example so that you can see how similar the teapot scene in your link looks to a FumeFX simulation:
http://www.youtube.com/watch?v=4CuJQZ78YQc
And I also doubt Nvidia would show scenes from movies claiming they did it on their GPU, because... then a lot of people would realize it's just a bunch of BS.
For example, in that supposed demo you can see scenes from Transformers 2 and Star Trek. Both movies had their VFX done by ILM (Industrial Light and Magic), which uses a pipeline centered around Maya/RenderMan and CPU processing. Those are Maya simulations, Maya fluids, which are calculated on CPUs; that is definitely not GPU simulation.
I work in this field, and that is not an Nvidia presentation. If it actually is, it's a big bunch of lies and BS, like the fake Fermi card.
Their pipeline is based on Maya/RenderMan. If he made such a statement, show me a link, and not just a statement. If he said such a thing, did he say which software they used? What software did they use to generate the particles, etc.? If he just said it without any details, it's just marketing. Lots of studios make such statements to get free workstations, software, special treatment and support from companies, etc.
OpenCL is really a very good standard. I tried the OpenCL SDK from AMD and was surprised how good the SDK was. I tried to offload some work from the CPU to the GPU but it crashed when I tried :shakes: Seems my skills are not good enough.
Not really static; the thing did move, but it was more predictable and unrealistic.
About GPU physics, my personal opinion right now is that I can't see the point of it. That said, it's not a completely settled opinion.
For starters, I'm in love with the GPGPU idea as a way to give 3D rendering hardware a use beyond gaming and 3D modelling. It's a shame that so much computing potential (and an architecture that can be better suited than the CPU to certain problems) is being wasted. GPGPU avoids that silly situation.
But: since games are a graphically bottlenecked kind of software (and they have been until now), moving computing load from the CPU onto the GPU seems a rather unnatural move...
Yes, that architecture suits certain physics tasks better than the CPU's, and if you use it for that you can do more complex calculations than on a CPU. But then what about graphics? If you don't want to severely limit the graphics side of the game, you end up restricted to a few light effects instead of the really complex effects you could achieve by devoting most of the GPU's power to them. And that small computing load could run on the free, otherwise wasted CPU while the GPU computes graphics, so in the end you may actually improve performance by running physics on the CPU. Even if not, the difference wouldn't be as huge and black-and-white as hyped, I think.
Take the already stale example of Batman AA. It has very good physics (so some people say) that make GPGPU physics computing worth it. Except that if you remove the single-thread constraint and run the physics multithreaded on a multi-core CPU, the game runs fantastically with the same physics on a CPU (so some other people say). And that's with a library made to sell GPU acceleration (I'm sure similar, even if not exactly identical, visual effects could be achieved using different algorithms better suited to a CPU, with a huge performance gain).
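As a toy illustration of the "remove the single-thread constraint" idea (nothing to do with Batman AA's actual code), here is a minimal C++ sketch that splits a trivial integration step across all available cores with std::thread:

```cpp
// Toy sketch: split a simple integration step across all available cores.
// It only illustrates that basic physics work parallelises naturally on a multi-core CPU.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Body { float pos[3]; float vel[3]; };

void integrateRange(std::vector<Body>& bodies, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        bodies[i].vel[1] -= 9.81f * dt;                // gravity
        for (int a = 0; a < 3; ++a)
            bodies[i].pos[a] += bodies[i].vel[a] * dt; // explicit Euler step
    }
}

void stepMultithreaded(std::vector<Body>& bodies, float dt) {
    const unsigned threadsAvail = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (bodies.size() + threadsAvail - 1) / threadsAvail;

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < threadsAvail; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(integrateRange, std::ref(bodies), begin, end, dt);
    }
    for (auto& w : workers) w.join();
}

int main() {
    std::vector<Body> bodies(100000);          // plenty of debris for a quad core
    for (int frame = 0; frame < 60; ++frame)   // one second of simulation at 60 Hz
        stepMultithreaded(bodies, 1.0f / 60.0f);
}
```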
Now that open, non-vendor-limited GPU-accelerated physics is coming (OpenCL GPU-accelerated Havok, Bullet Physics and Pixelux DMM), the usual limitations of CUDA PhysX are going to disappear, and we should see much more GPU-accelerated physics in games if it really is a win-win situation. But to be honest, I think PhysX is more of a commercial move by NVIDIA to sell CUDA (and therefore to sell NVIDIA cards), and the Open Physics Initiative is more of a commercial move by AMD to neutralize the competitor's PhysX marketing value.
I'm a little skeptical about GPGPU use in the traditionally graphics-bottlenecked videogame arena, as you can see. It just doesn't look logical to me... maybe in some specific cases, yes, but not as the norm.
Do you really think you need to run a soft body on a GPU to avoid relying on pre-scripted animation? On a single-core P4 at 3 GHz (yeah, Prescott) I can run a Bullet Physics soft-body demo with no problems at a high framerate, including one with several dozen sheets of soft material falling to the ground, getting entangled and so on.
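For reference, a CPU-only Bullet soft-body setup really is just a few lines. The sketch below is written from memory of the Bullet 2.x API (btSoftRigidDynamicsWorld, btSoftBodyHelpers::CreatePatch), so treat the exact calls as an approximation of the SDK rather than copy-paste-ready code:

```cpp
// Minimal CPU-only Bullet soft-body sketch, from memory of the Bullet 2.x API.
#include <btBulletDynamicsCommon.h>
#include <BulletSoftBody/btSoftRigidDynamicsWorld.h>
#include <BulletSoftBody/btSoftBodyRigidBodyCollisionConfiguration.h>
#include <BulletSoftBody/btSoftBodyHelpers.h>

int main() {
    // Standard Bullet plumbing, soft-body flavoured.
    btSoftBodyRigidBodyCollisionConfiguration config;
    btCollisionDispatcher dispatcher(&config);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btSoftRigidDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
    world.setGravity(btVector3(0, -10, 0));
    world.getWorldInfo().m_gravity = btVector3(0, -10, 0);

    // A 16x16 cloth patch, pinned at two corners so it hangs and swings.
    btSoftBody* cloth = btSoftBodyHelpers::CreatePatch(
        world.getWorldInfo(),
        btVector3(-1, 1, -1), btVector3(1, 1, -1),
        btVector3(-1, 1, 1), btVector3(1, 1, 1),
        16, 16, /*fixeds=*/1 + 2, /*gendiags=*/true);
    world.addSoftBody(cloth);

    // Step a second of simulation at 60 Hz, entirely on the CPU.
    for (int i = 0; i < 60; ++i)
        world.stepSimulation(1.0f / 60.0f, 4);

    world.removeSoftBody(cloth);
    delete cloth;
    return 0;
}
```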
I just wonder when NV plans on releasing the GT300 card and how they plan on launching it. ATI seems prepared, with a few more cards coming out soon, and I can't wait to see what the X2 cards will be like. I know NV cards will be fast, but it looks like it could be a tough start of the season for NV.
farinoco, go play the batman demo yourself, you can dl it from the nvidia server at high speed.
the physics effects in it are NOT great at all... not at all...
there are 1-2 dozen newspapers flying off the ground if you fight in some scenes, and it's not realistic at all...
there is volumetric fog, woooooa, haven't seen that before :P
and then there are some other effects that you really don't even notice unless you play the same level several times with PhysX on and off... :rolleyes:
Mirror's Edge was still the best PhysX implementation so far, and even there it didn't really matter or change the gameplay...
I wish they'd focus more on getting physics done right instead of fighting over WHAT processor to process it on :rolleyes:
LOL... :rofl:
CPU physics is more powerful...! Where have you been? All that Nvidia's PhysX offers is ancillary "fluff" (i.e. glass breaking, tiles breaking, paper shuffling on the floor, etc.); it's all superficial to what's actually going on in the game. Eye candy! :down:
That^^ puffery is not the type of physics we are asking for and demanding in games. Batman's/Mirror's Edge's overdone, superficial PhysX is not what we are discussing in this thread. Carmack, DICE, etc. have all been using real physical environments on the CPU for YEARS...! UNO? Actual physical objects. Like a piece of fuselage being torn off a fighter by AA and landing on the road in front of you as you run it over in the jeep, only to have it kick up and kill the other guy in the jeep behind you....!
We've had these real deformable objects in games for years. Developers just haven't had the power, or made full use of multi-threading, to use physics heavily yet, so that everything within a scene is basically its own object (Bulldozer?). Just look at Battlefield 1943... massive use of CPU physics! Or (again) THIS video.
Nvidia can't touch that!
The reason nVidia is marketing flowing capes, ancillary paper, broken tiles and such, is because they know it would take quad-SLI to have real physics.
The Intel Core i7 920 is only $240 folks... less than a GTX285. Think on it!
PhysX is no different from Havok, except Nvidia bought it and started to support it minimally on their own video cards so you didn't need a separate physics card... back when dual-core CPUs were just rumors. Now almost all of us have 8-threaded rigs...
You would need tri-SLI to equal what the i7 can do. (i.e. the Velocity physics engine video)
This is so embarrassing:
http://www.fudzilla.com/content/view/15813/1/
"Judging from we've learned in the last few days we are talking about a handful of boards if not even less than that."
YIELDS ARE FINE!!! NOTHING TO SEE HERE...:rofl:
Edit: I want to note that I'm not laughing about Nvidia being late (as this is not good for us customers), but about Fudzilla swallowing every PR bit they get from Nvidia and in the end admitting that they were completely wrong.
They have a demo showing very nice water movement; just because one game only uses a few things doesn't mean you can't do more.
Not very uncommon. Go play the last level of HL2: ragdoll at its funnest.
Quote:
That^^ puffery is not the type of physics we are asking for and demanding in games. Batman's/Mirror's Edge's overdone, superficial PhysX is not what we are discussing in this thread. Carmack, DICE, etc. have all been using real physical environments on the CPU for YEARS...! UNO? Actual physical objects. Like a piece of fuselage being torn off a fighter by AA and landing on the road in front of you as you run it over in the jeep, only to have it kick up and kill the other guy in the jeep behind you....!
That video used all 8 threads of the processor, but expect the average person to be on dual cores or quads, and expect the game to already take up 50-80% of the CPU (depending on core count, how CPU-limited the game is, and what settings they play at). What's left is a CPU 2-4x weaker than an i7 with 2/3 of its power already used up before physics: roughly a third of the cycles free on a chip roughly a third as fast works out to about a tenth, so the average person can do only about 1/10th of what was shown while actually playing a game.
Quote:
We've had these real deformable objects in games for years. Developers just haven't had the power, or made full use of multi-threading, to use physics heavily yet, so that everything within a scene is basically its own object (Bulldozer?). Just look at Battlefield 1943... massive use of CPU physics! Or (again) THIS video.
Nvidia can't touch that!
The reason nVidia is marketing flowing capes, ancillary paper, broken tiles and such, is because they know it would take quad-SLI to have real physics.
The Intel Core i7 920 is only $240 folks... less than a GTX285. Think on it!
So much misinformation. Not all of us have 8-threaded rigs; the i7 was 1% of Intel's CPU sales last year. And how many of us have a 2P AMD rig? Like 3 or 4 of us? And where do you see a real comparison between CPU and GPU physics where some unbiased party did the review? Please show us where this tri-SLI statement comes from.
Quote:
PhysX is no different from Havok, except Nvidia bought it and started to support it minimally on their own video cards so you didn't need a separate physics card... back when dual-core CPUs were just rumors. Now almost all of us have 8-threaded rigs...
You would need tri-SLI to equal what the i7 can do. (i.e. the Velocity physics engine video)
So far most of your posts are full of claims with no sources, never backed up. Go take a look at what real GPU demos can do.
On other points: they need to start making physics scale properly. At what point do I care about how cool a flag waves versus having 60 fps locked? A good physics engine should know how to load-balance properly so we can decide its importance. We've been GPU limited and CPU limited, and now we're going to see physics limiting framerates, and the only solution is to drop the quality and replay the map (unacceptable in my opinion).
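As a toy example of what "scaling properly" could mean, the hypothetical sketch below measures the physics step time and scales only the cosmetic workload to stay inside a per-frame budget; all of the names in it are made up:

```cpp
// Toy sketch of physics that scales itself: measure how long the step takes and scale
// the count of purely cosmetic objects (debris, cloth, papers) to stay inside a
// per-frame budget, instead of forcing the player to drop quality and replay the map.
#include <algorithm>
#include <chrono>

struct PhysicsWorld {
    int cosmeticObjects = 20000;           // debris / cloth / papers -- eye candy only
    void step(float /*dt*/) { /* ... simulate ... */ }
};

// Shrink or grow the cosmetic workload so the measured step time tracks the budget.
void balancePhysicsLoad(PhysicsWorld& world, float budgetMs) {
    using clock = std::chrono::steady_clock;

    const auto t0 = clock::now();
    world.step(1.0f / 60.0f);
    const float tookMs =
        std::chrono::duration<float, std::milli>(clock::now() - t0).count();

    if (tookMs > budgetMs) {
        // Over budget: drop some of the fluff before it drops the framerate.
        world.cosmeticObjects = std::max(0, static_cast<int>(world.cosmeticObjects * 0.9f));
    } else if (tookMs < 0.5f * budgetMs) {
        // Plenty of headroom: let the effects ramp back up.
        world.cosmeticObjects = static_cast<int>(world.cosmeticObjects * 1.05f) + 1;
    }
}

int main() {
    PhysicsWorld world;
    // Give physics ~4 ms of a 16.6 ms frame; gameplay-critical objects are never culled.
    for (int frame = 0; frame < 600; ++frame)
        balancePhysicsLoad(world, 4.0f);
}
```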
PhysX will become a success the moment it is required and used for a gameplay-changing implementation; as long as it remains merely good for "fluff" (i.e. visual effects with no gameplay effect), the chances of "PhysX" being a real pro for NVIDIA hardware are very low.