Consoles brought into a software physics discussion. Cheap, underperforming hardware from... 2006? Priceless.
Ageia failed where Nvidia succeeded because there was no way everyone was going out to buy a PhysX card. Nvidia already had market access to millions of people hence the immediate proliferation of GPU PhysX after it was ported to CUDA. It had nothing to do with what hardware the libraries were coded for. Your analytical skills need a little tune up.
Hehe, and we're back to the same old lame comparisons to animations and computationally light effects. I can't believe that after all this time huffing and puffing on this topic you still don't understand the basics of the horsepower required to compute certain effects in real time.
Quote:
Now, today (four years later) with powerful CPUs there is no reason for physical environments to be processed off-chip. That is clearly seen in the Ghostbusters/Infernal Engine demo & in Battlefield: Bad Company 2.
And now the next step is for you to rant on about how the real-time computations are stupid/lame etc because the fake animations are just as good. I've seen this movie before and it's getting real old. Just give it up until you actually have a grasp of the concepts we're discussing.
By the way, thought I should clarify something. I have no interest in showing you or convincing you of anything. Actually, your irrational hatred for PhysX simply amuses me. I really can't understand why someone would be enraged by other people liking something but it's extremely entertaining nonetheless. So you can hold on to your seething hatred and have fun with that.
I prefer to base my arguments on logic. Granted most of these PhysX discussions devolve into subjective nonsense but there are a few illogical points that are raised over and over and those are the ones that make sense to respond to. If you have a logical point to make beyond "I hate PhysX and everyone who doesn't is obviously a paid Nvidia shill" then we can talk.
Do you believe there is going to be increasing demand for PhysX in, say, 1-3 years? How about game physics and hardware-accelerated physics in general?
Frankly put, the only hope for PhysX to become THE standard would be for it to be integrated and hardware-accelerated on next-gen consoles. It is very unlikely to become a de facto standard on PC when only one manufacturer supports it.
I must admit, I admire Trinibwoy's stamina for dealing with Xoulz.. :D
Out of all the anti-PhysX mob, Xoulz is by far the most "insane in the membrane." :rofl:
I'm just sad that GPU physics is, so far, failing to live up to its potential. With this kind of power we could have highly interactive/destructible environments and extensive GAMEPLAY physics. But instead we have games that fail to surpass the gameplay physics of games more than five years old.
As long as PhysX doesn't run on other brands' hardware it will remain relegated to particle effects and random superfluous objects flying around on top of a boring, mostly static environment. I don't hate PhysX, but it certainly isn't living up to its potential and is hence much less interesting to me. I like eye candy, but not if it's like a tasty condiment on top of a stale, tasteless cracker.
If Nvidia really wants to do something for PC gaming they could open up PhysX to other GPUs. Then developers could make mainstream games with GPU accelerated gameplay physics. But I doubt they will so it's up to someone else to do it.
This^^
Understand, I only post to thwart the lunatic-fringe types who can't jump off their fanboi bandwagon long enough to admit the reality of the situation. Other libraries are a much better choice for developing new games, because PhysX is proprietary and outdated.
Depends on what you mean by demand and from whom. Developers seem to be content with what Havok and PhysX offer. In-house physics engines aren't pushing the envelope either.
As is evident in the many PhysX discussions on the board, people seem to be more than happy with the status quo of CPU-based physics engines. The only reason it's generating so much discussion is the Nvidia divide. Otherwise I doubt there would have been many complaints about the glacial pace of physics progress.
He's not so bad really. But he's probably like that with anything he can't stand to see other people enjoy. I bet he hates Apple products and its users too. Or even chocolate ice-cream...who knows.
That's one of my fundamental problems with all the anti-PhysX stuff. There's absolutely nothing stopping any developer from picking up a C++ compiler and coding a physics library. Xoulz is right about one thing - the reason Nvidia bought Ageia and ported PhysX was to market it as a unique feature of its products. So what? I don't know of any mandate for a software developer to make its software open and free for all platforms, especially when in doing so it's enabling a competitor who apparently can just sit idly by doing nothing to contribute or provide an alternative.
How do you propose they do that and using what API? You have proprietary physics algorithms (Novodex) implemented using a proprietary API (CUDA). Which part of that should they offer to their competitors and under what sort of agreement? PhysX isn't like DirectX where DirectX is just a way for applications to communicate with the underlying hardware using a very limited set of interfaces. It's a full blown library with concrete implementations that represent intellectual property, just like Havok. Apart from a glorious show of generosity, what steps should Nvidia take to get it running on AMD hardware? Should they also code AMD's DirectX drivers for them too?
Quote:
If Nvidia really wants to do something for PC gaming they could open up PhysX to other GPUs.
Nothing is stopping a developer from programming their own physics. Some have. But most aren't going to take the time and effort it takes to write all that from scratch if there is a library that already does it for them. Even if that choice restricts them in some significant way.
I never said there was a mandate. It's just what I believe would be in my best interest as a consumer. Of course they don't have to do something nice for the gaming community like this. But if not it will remain a much less interesting and useful library as a result, IMO.
Tell me, wouldn't you prefer to have mainstream games using GPU-accelerated PhysX for full-blown gameplay interactivity rather than just effects?
It's not the algorithms that prevent use on other hardware, it's the API (CUDA). I suggest that they rewrite the API interface part of the library to use OpenCL or DX11CS.
Quote:
How do you propose they do that and using what API? You have proprietary physics algorithms (Novodex) implemented using a proprietary API (CUDA). Which part of that should they offer to their competitors and under what sort of agreement? PhysX isn't like DirectX where DirectX is just a way for applications to communicate with the underlying hardware using a very limited set of interfaces. It's a full blown library with concrete implementations that represent intellectual property, just like Havok.
Nope. Apart from writing the library in an open API, I don't suggest that they do anything for ATI. ATI's OpenCL/DX11CS support is ATI's responsibility.
Quote:
Apart from a glorious show of generosity, what steps should Nvidia take to get it running on AMD hardware? Should they also code AMD's DirectX drivers for them too?
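To make the backend-swap idea above concrete, here's a rough sketch (all names invented for illustration; real PhysX internals are not public and certainly differ): the physics algorithms sit above a thin compute interface, so only the layer that launches GPU kernels would need a CUDA vs. OpenCL implementation. The "kernels" here just run on the CPU to keep the sketch self-contained.

```python
# Hypothetical sketch of a swappable compute backend. The engine code
# never touches the GPU API directly; only the backend does.

class ComputeBackend:
    """Interface the physics algorithms are written against."""
    name = "abstract"

    def integrate(self, positions, velocities, dt):
        raise NotImplementedError


class CudaBackend(ComputeBackend):
    # Stands in for the existing CUDA path; a real version would
    # launch a CUDA kernel instead of looping on the CPU.
    name = "cuda"

    def integrate(self, positions, velocities, dt):
        return [p + v * dt for p, v in zip(positions, velocities)]


class OpenCLBackend(ComputeBackend):
    # Stands in for a hypothetical OpenCL/DX11CS port: same math,
    # different kernel-launch plumbing underneath.
    name = "opencl"

    def integrate(self, positions, velocities, dt):
        return [p + v * dt for p, v in zip(positions, velocities)]


class PhysicsEngine:
    """The proprietary algorithms live here, API-agnostic."""

    def __init__(self, backend):
        self.backend = backend

    def step(self, positions, velocities, dt):
        return self.backend.integrate(positions, velocities, dt)


engine = PhysicsEngine(OpenCLBackend())
print(engine.step([0.0, 1.0], [2.0, 2.0], 0.5))
```

The point of the sketch: Nvidia could keep the algorithmic IP closed while porting only the dispatch layer to an open API, leaving each vendor's OpenCL/DX11CS driver quality as that vendor's problem.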
but for some reason PhysX won't run in hardware mode in OpenCL, as someone keeps claiming... it just won't. It won't, it won't, it won't...
And look at the comments there & you will see a lot of the same sentiments about NV's stalling & excuses with PhysX, & the lack of trust that an OpenCL version wouldn't still be hampered in some way on other vendors' hardware.
If OpenCL Havok gets off the ground & is shown to work equally well on both vendors' hardware, & PhysX does not, then you know there are going to be a lot of questions asked.
EDIT: Published on 27th March 2009 by Ben Hardwidge
They are taking their time & it looks to me as if they will only port it due to competition from Havok.
Maybe because the article says that they are only considering it, and that they are in no rush, because there's basically no competition.
And that is the point: if they are confident that they have the superior product & that's why they are keeping it for themselves, then they have nothing to fear.
It's really an admission of what they know: the API that supports the most hardware is the one that's going to be used the most. They care less about being used the most by gamers than about the marketing edge of having something to sell cards with. Once a competing acceleration API comes out that works on both vendors' hardware & works just as well, that marketing edge is over, & so is their control over accelerated physics. For NV it's all about selling cards & not about moving the whole game scene forward like they claim, or they would have ported it already.
2009 newsflash: GPU physical environments... are dead.
Note:
http://www.viddler.com/explore/HardOCP/videos/36/ <---- (On a discontinued Intel 4-core..)
http://www.youtube.com/watch?v=H9boF-JZKcU
I was using a hack to allow PhysX processing with an Nvidia card while using my 5870 as the primary. PhysX is a dead technology, so I ended up pulling the Nvidia card and haven't looked back since.
They can't, at least not easily on a discrete card: there are way too many cycles of latency when a GPU has to talk back to a CPU. However, the on-board Sandy Bridge GPU could be used the way you're thinking: both the CPU and GPU write directly to system memory, and on Sandy Bridge there is a direct bus linking the GPU to the CPU.
BUT it doesn't have all that much grunt, and both AMD and Intel would have to agree on similar rules before anyone would dedicate time to such a solution. Maybe in a couple of years we'll see GPU-powered gameplay physics, but Nvidia wouldn't be in a position to use it; AMD and Intel would freeze them out.
Well, that's business for you. As has been said many times, Nvidia is in this to make money... not to be a champion for gamers' rights and whatnot.
Heck, are there even any games in development that are planning to use OpenCL yet?
Batman: Arkham City is due out next year, and if it uses PhysX like its predecessor (which will most likely happen), then it will be a huge win for PhysX. Batman: Arkham Asylum was a huge success, and if you haven't seen the screenshots, the successor looks like it will be even more awesome than the first game, since Rocksteady has over a year to devote to polishing; the game is supposedly already finished for the most part.
Even if you use it as a selling point, you're still moving things forward, because you're the only one with a physics engine that is accelerated by a GPU.
And like I said before, coming out with an engine that is as capable as PhysX isn't enough. It's going to take some time to build the proper tools, and the developers will also need some time to play with it.
They could release it today and it would still be a year or two, at least, before we see any AAA game with physics that alters gameplay.
1500 boxes and 200 Ragdolls "maxing out" our 4 core / 8 thread Intel i7 processor.
This just proves that you have absolutely no idea what we're talking about here. Or did I miss something, and everyone now has an 8-core/16-thread CPU so they can dedicate all that power just to physics?
This whole time you've been saying a quad-core CPU is capable of doing anything PhysX can, but all of a sudden we need one of the best CPUs on the market just for that, without any AI involved?
I take it you didn't actually play the game, otherwise you would know better.
Sure, the environment is more interactive than we're used to seeing, but there are still obvious patterns in the destruction of certain items. It is still very limited and predefined, and it's not being calculated in real time. Same as Havok.
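For a sense of why raw object counts hit a CPU so hard, here's a toy Python sketch (not a benchmark of PhysX, Havok, or any real engine): a brute-force broadphase tests every pair of bodies, so the work grows quadratically with the object count, before any narrow-phase contact math or solver iterations even run.

```python
# Toy illustration: how many candidate pair tests a naive
# brute-force broadphase performs for n rigid bodies.

def naive_pair_count(n):
    """Every body checked against every other body: n choose 2."""
    return n * (n - 1) // 2

for n in (200, 1500):
    print(f"{n} bodies -> {naive_pair_count(n)} pair tests per step")
```

At 1500 boxes that's already over a million pair tests per simulation step, which is why real engines spend so much effort on spatial partitioning, and why "a quad core can do anything" and "1500 boxes max out an i7" aren't actually contradictory claims.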