http://www.prnewswire.com/news-relea...-85999182.html
Looks like Just Cause 2 will use CUDA directly for water simulation and post-processing. From the press release: "Working closely with engineers at NVIDIA, developer Avalanche Studios has incorporated support for NVIDIA CUDA technology, which helps to deliver a higher level of visual fidelity within the game's environments. CUDA-enhanced features in Just Cause 2 include incredible in-game effects, with rivers, lakes and oceans beautifully rendered with realistic rising swells, flowing waves and ripples, while advanced photographic Bokeh lens techniques add an additional cinematic quality to the look and feel of the game. Just Cause 2 has also been optimised to take full advantage of NVIDIA 3D Vision technology on compatible hardware, creating an incredibly immersive 3D experience."
http://kotaku.com/5484795/just-cause...oves-on-nvidia
The water looks really good.
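Neither article shows how the water is actually implemented, but the usual GPU approach to this kind of thing is a height-field simulation that stays entirely on the device. Purely as an illustrative sketch (the grid size, damping factor and wave-equation update below are my own assumptions, not anything from Avalanche's code), a minimal CUDA version could look like this:

```
// Hypothetical sketch of a height-field water ripple pass in CUDA.
// Nothing here comes from Just Cause 2; grid size, damping and the
// discrete wave-equation update are illustrative assumptions only.
#include <cuda_runtime.h>
#include <cstdio>

#define W 512
#define H 512

__global__ void rippleStep(const float* prev, const float* curr,
                           float* next, float damping)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= W - 1 || y >= H - 1) return;

    int i = y * W + x;
    // Discrete wave equation: average the four neighbours,
    // subtract the height from two steps ago, then damp.
    float sum = curr[i - 1] + curr[i + 1] + curr[i - W] + curr[i + W];
    next[i] = (0.5f * sum - prev[i]) * damping;
}

int main()
{
    size_t bytes = W * H * sizeof(float);
    float *prev, *curr, *next;
    cudaMalloc(&prev, bytes);
    cudaMalloc(&curr, bytes);
    cudaMalloc(&next, bytes);
    cudaMemset(prev, 0, bytes);
    cudaMemset(curr, 0, bytes);
    cudaMemset(next, 0, bytes);
    // A real simulation would also inject disturbances here
    // (boat wakes, explosions, rain) by writing into curr.

    dim3 block(16, 16);
    dim3 grid(W / block.x, H / block.y);
    for (int step = 0; step < 100; ++step) {
        rippleStep<<<grid, block>>>(prev, curr, next, 0.99f);
        // Rotate the three buffers for the next timestep.
        float* tmp = prev; prev = curr; curr = next; next = tmp;
    }
    cudaDeviceSynchronize();
    printf("ran 100 ripple steps on a %dx%d grid\n", W, H);

    cudaFree(prev); cudaFree(curr); cudaFree(next);
    return 0;
}
```

Each grid cell is one thread, so a whole water patch is a single cheap kernel launch per frame, and the height field never has to leave the GPU before the renderer uses it to displace the water mesh.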
Are we there yet?
@SkItZo
Sorry about that. It was quite amusing when I got that message in Server 2003, though!
Main Rig: Phenom II X6 1055T 95W @3562 (285x12.5) MHz, Corsair XMS2 DDR2 (2x2GB), Gigabyte HD7970 OC (1000 MHz) 3GB, ASUS M3A78-EM,
Corsair F60 60 GB SSD + various HDDs, Corsair HX650 (3.3V/20A, 5V/20A, 12V/54A), Antec P180 Mini
Notebook: HP ProBook 6465b w/ A6-3410MX and 8GB DDR3 1600
I never even heard of Just Cause 1!
Proprietary standards just need to die, quickly.
@the 3DMark screen: That's hilarious! I never knew there were ppl that used dual cores (except in notebooks).
When will that be? After a shrink to 32 nm? That's nothing unheard of in the industry (R600).
God I should go to sleep. Five o'clock in the morning in Germany *yawn*
Notice any grammar or spelling mistakes? Feel free to correct me! Thanks
Anybody here notice that Super Mario's head must be made of adamantium to keep bashing those bricks... at least you'd think he'd use some aspirin instead of all the shrooms *cough* steroids. And what's up with the polka-dot sky-high place... my tingling spidey sense is guessing LSD was involved.
Can somebody here make a Skynet/Fermi poster ۩♪♫☺☻♀♂█▓▒▒░
24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
1 GB OCZ Gold (='.'=) 240 2-2-2-5
Giga-byte NF3 (")_(") K8NSC-939
XFX 6800 16/6 NV5 @420/936, 1.33V
Oh, then it must certainly be an irrelevant POS if it slipped under your radar.
"Proprietary standards just need to die, quickly."
Yep, if there are open alternatives. Which there aren't. There are millions of CUDA-capable card owners out there who will benefit from these additions and won't care much that it's proprietary.
If it's something advertised for NVIDIA or anything related to them (i.e. CUDA, PhysX, Fermi), it should die. This would normally sound sarcastic, but this is AMDz... I mean XtremeSystems.
The purpose of this thread was to contain the trolling around the forums and to reduce the number of Fermi threads. But now all the Fermi trolling is concentrated into the so-called "trolling party", and on top of that, anything related to AMD at all seems to get posted in the news section. From every single 5970 release (5970 Black Edition, 5970 Sapphire edition, 5970 Ares edition, 5970 haxxors edition), to AMD contests (MSI 58xx, 800 dollars total prize value across three prizes), to someone posting about a currency that just happened to be abbreviated AMD, to interviews with AMD people (CatalystMaker x2), blogs, drivers, etc.
It just feels like an AMD forum, which normally isn't bad. But this forum already has an AMD section, and most of this stuff could go in there, or in Tech Talk, or Extreme Competition (which is also for contests). I started feeling sorry for NVIDIA a long time ago with all the trolling against them. Sure, they rename a lot, but they aren't doing anything terribly underhanded like what Intel did against AMD.
Any negative rumor against NV seems to get posted instantly, and it's hard not to feel sorry for them, e.g. things like Charlie's articles. Somehow even positive news, like their recent financial results or them giving away a game, gets trolled.
NV could cure cancer and it would somehow be trolled.
Core i7 920@ 4.66ghz(H2O)
6gb OCZ platinum
4870x2 + 4890 in Trifire
2*640 WD Blacks
750GB Seagate.
Anyone notice that nVidia now has "Geforce 300 Series" drivers posted up on their site? (if this has been posted already, please ignore)
"If the representatives of the people betray their constituents, there is then no resource left but in the exertion of that original right of self-defense which is paramount to all positive forms of government"
-- Alexander Hamilton
NVIDIA has stated numerous times that they are willing to license PhysX to anyone, including AMD/ATI. I'm not sure how "easily" ATI could implement it though.
How are games and CUDA related? PhysX is an API with a HAL that supports CUDA. In addition to C for CUDA, you get C++ and OpenCL support. Devs are slow to adopt these technologies, but it is happening. Everything else being equal*, are you going to go with the card that has support for these extras, or the card that doesn't? I don't see these features as a deal breaker, but they are certainly nice to have.
Batman: AA whetted my tongue for PhysX, but the Supersonic Sled demo is what really sold it to me. It's a feature that can add a ton of realism to games, and it frees up a developer from having to make a physics engine for their title; less work for devs = more likely to see it put into a game (rough sketch of the per-particle idea below the footnote).
* According to the perf numbers I've seen, things aren't equal. The point of DX11 is a massive improvement in geometric realism through tessellation, and AMD's architecture doesn't hold a candle to NVIDIA's in terms of geometry performance: 2.03x to 6.47x faster than the 5870 in high-geometry workloads are the numbers I've seen (though that's somewhat abstract; there's more than just geometry perf to getting a game running). I'm eager to see how a big DX11 title performs (sorry ATI, DiRT 2 just wasn't cutting it for me). The beauty of tessellation is that devs can build their geometry for hardware performance that doesn't yet exist and smoothly scale it down to what's out now, without a load of extra work (just move the slider). It will be a little while before we see a game built around DX11 tessellation, rather than just having it slapped into the game.
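Going back to the PhysX paragraph above (this is the sketch I mentioned): the reason this kind of effect maps so well to the GPU is that debris/particle physics is embarrassingly parallel, basically one thread per particle. This is NOT the PhysX API, just a toy CUDA illustration; the struct layout, timestep and particle count are invented for the example:

```
// Toy sketch of GPU particle physics: one thread integrates one particle.
// Not the PhysX API; gravity, restitution and counts are made-up values.
#include <cuda_runtime.h>
#include <cstdio>

struct Particle { float3 pos; float3 vel; };

__global__ void integrate(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Simple gravity + Euler integration with a ground-plane bounce.
    p[i].vel.y -= 9.81f * dt;
    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
    if (p[i].pos.y < 0.0f) {
        p[i].pos.y = 0.0f;
        p[i].vel.y = -p[i].vel.y * 0.5f;  // lose half the energy on impact
    }
}

int main()
{
    const int n = 1 << 16;  // ~65k debris particles
    Particle* d_p;
    cudaMalloc(&d_p, n * sizeof(Particle));
    cudaMemset(d_p, 0, n * sizeof(Particle));  // all at origin, at rest

    // One 1/60 s step; a game would call this every frame.
    integrate<<<(n + 255) / 256, 256>>>(d_p, n, 1.0f / 60.0f);
    cudaDeviceSynchronize();
    printf("integrated %d particles\n", n);

    cudaFree(d_p);
    return 0;
}
```

Tens of thousands of these per frame is trivial for a GPU but a real burden for a CPU that's also running game logic, which is the whole pitch behind hardware-accelerated PhysX.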
Anyone wanna loan me some ATI cards to bench against on the 26th? :P
Amorphous
NVIDIA Forums Administrator
NVIDIA is the biggest backer of OpenCL, which can certainly be used to run PhysX-like physics processing.
It takes a LONG time to develop and get out a set of open standards. NVIDIA didn't want to wait to get a good set of physics tools out to devs and into games. With the development of CUDA, NVIDIA acquired Ageia and ported Ageia PhysX into CUDA (in only a few days, I'm told).
At the same time, NVIDIA chairs Khronos, the guys that do, most notably, OpenGL, OpenCL and other open standards. In short, NVIDIA doesn't care whether your physics are PhysX- or OpenCL-based; either is good for them, as both help sell more graphics cards. The 100M+ NVIDIA GPUs that support CUDA also support OpenCL.
The current GeForce 300 GPUs you'll see around don't use the Fermi architecture that the GF100-based GTX 470 and GTX 480 will. Mid-range solutions that use the Fermi arch are likely to follow a few months after the March 26th launch of the GTX 470 and 480.
NVIDIA Forums Administrator
Here are more rumors and talk of the mystical Firmy from Fudzilla:
http://www.fudzilla.com/content/view/17922/1/
http://www.fudzilla.com/content/view/17921/1/
To sum up:
- GTX 480: around 300 W (HD 5970 level); GTX 470: 220 W.
- GTX 480 "should be faster than HD 5870 in some cases". GTX 470 (I hope it's not the GTX 480) will be 20-30% faster than GTX 285.
- Partners still don't have final design yet. They only know the dimensions and cooling design currently.
BTW, "launch" date is still March 26th (3 weeks), I hope.
Ok... Fermi is definitely smaller than even the G200-B3 chip (GTX 285, 55 nm, etc.).
GTX 285 IHS is 1.75in flat or 4.45cm.
--lapped Q9650 #L828A446 @ 4.608, 1.45V bios, 1.425V load.
-- NH-D14 2x Delta AFB1212SHE push/pull and 110 cfm fan -- Coollaboratory Liquid PRO
-- Gigabyte EP45-UD3P ( F10 ) - G.Skill 4x2Gb 9600 PI @ 1221 5-5-5-15, PL8, 2.1V
- GTX 480 ( 875/1750/928)
- HAF 932 - Antec TPQ 1200 -- Crucial C300 128GB boot --
Primary Monitor - Samsung T260
"Ok... Fermi is definitely smaller than even the G200-B3 chip (GTX 285 55nm etc). GTX 285 IHS is 1.75 in flat, or 4.45 cm."
Correction: the IHS is smaller, likely due to the smaller memory bus.
Unless you open up the IHS or x-ray it, there's no telling how big the die is.
Amorphous, the biggest problem is not how easy it would be for ATI to implement PhysX, but the fact of supporting and helping to standardize a competitor's proprietary API, be it PhysX, CUDA, or whatever. I suppose you realize that, don't you? Of course NVIDIA would love AMD/ATI to embrace PhysX/CUDA (the software API), because being compatible only with NVIDIA hardware is one of the reasons its use is not widely supported by mainstream software developers (who have to weigh compatibility against the resources a feature costs them). But I think you understand very well that if AMD is anything other than simply dumb, they have to do everything in their power to keep those competitor's proprietary APIs from becoming widespread, and making their hardware compatible with them is not the best way to do that.
NVIDIA will always have the option to do what they want with their proprietary APIs and to take competitive advantage from them. Look at the case of EAX and Creative Labs. AMD won't support something like that unless they have absolutely no other choice.
And given the current situation of hardware-vendor-specific APIs, I don't think CUDA/PhysX have a real place in the mainstream market (and for PhysX, that means anywhere). Of course, the non-mainstream HPC market (scientific research, statistical analysis, data mining, some engineering applications, and other kinds of data crunching and complex-system simulation) is a completely different matter, and on that ground I think CUDA (and, generalizing, NVIDIA) is becoming THE big player for the moment (let's see how much of an advantage they can build before Intel is able to compete in that market, and to what degree). Hardware compatibility matters for a mainstream product from the point of view of the cost versus results of implementing a given feature, and being vendor-specific doesn't help there, even less so when you consider that there are broader solutions (OpenCL, DirectCompute) which in the short/medium term will be used by equivalent middleware (Havok and Bullet, for example, in the case of real-time physics libraries).
That said, I'm not really convinced about the usefulness of GPGPU computing for video games on PCs. For example, you mentioned B:AA. In that game, the physics effects that use CUDA could perfectly well be done on a current quad-core CPU (maybe even one with fewer cores) with a multithreaded physics library, and most of them could be faked without much visible difference on even less hardware. To take real advantage of the computing power of a GPU you would have to aim for effects orders of magnitude more complex, and that would be too hard a hit on the GPU computation budget reserved for graphics. I've been looking at the rendering pipeline in CryEngine 3, for example, and there's simply no place where you could fit any non-graphics workload on the GPU, and that's after discarding a lot of visual effects that don't fit either, and taking the rougher but more performant approaches for some others. On the other hand, you hardly have anything intensive for the CPU to do apart from physics and AI, which it handles fine (unless, I insist, you aim for much more complex computations, which would make a huge dent in the GPU budget left for graphics).
Maybe the situation is different on consoles, though, or maybe it will change in the near (or far) future. Maybe I'm simply wrong; this is nothing more than a very light, superficial analysis of mine. Anyway, if GPGPU computing is going to become widespread in video games, it will probably happen when a software solution is available that doesn't depend on a given hardware vendor (except maybe for titles exclusive to a console that uses that vendor's hardware, of course).
see
http://www.xtremesystems.org/forums/...postcount=1620
Basically, the die size has to be smaller than the 55nm G200-B3's.
--lapped Q9650 #L828A446 @ 4.608, 1.45V bios, 1.425V load.
-- NH-D14 2x Delta AFB1212SHE push/pull and 110 cfm fan -- Coollaboratory Liquid PRO
-- Gigabyte EP45-UD3P ( F10 ) - G.Skill 4x2Gb 9600 PI @ 1221 5-5-5-15, PL8, 2.1V
- GTX 480 ( 875/1750/928)
- HAF 932 - Antec TPQ 1200 -- Crucial C300 128GB boot --
Primary Monitor - Samsung T260
Thank you for quoting yourself without any explanation
As said by trinibwoy, the IHS size is the size of the GPU package and does not necessarily reflect the die size.
GT200 needs a big package because of its 512-bit memory bus. GF100 likely can get away with a smaller package with a 25% reduction in bus width.
heise.de (in German) has new information about the GTX 470:
shader@1255 MHz
RAM@1600 MHz
Vantage-X 7511 (compare GTX285/HD5850/HD5870: 6002/6430/8730)
Vantage-P 17156 (compare HD5850/HD5870: 14300/17303)
Unigine benchmark
4xAA [fps]: 29/22/27 (GTX470/HD5850/HD5870)
8xAA [fps]: 20/19/23 (GTX470/HD5850/HD5870)
http://www.heise.de/newsticker/meldu...lt-946411.html
Just to clear things up about CUDA + Just Cause 2.
OpenCL and DX Compute Shaders are accelerated THROUGH the CUDA architecture. As such, the effects in Just Cause 2 will likely be accelerated on AMD hardware through their Stream architecture as well if they use OpenCL or Compute Shaders.
Everyone has to remember that CUDA is an all-encompassing term for the architecture that accelerates all types of APIs on NVIDIA GPUs (OpenCL, PhysX, folding, Compute Shaders, ray tracing, Adobe Photoshop GPU acceleration, Flash acceleration, etc.). It WAS proprietary, but it is now acting as a kind of umbrella term for everything GPU-accelerated, much like Stream on AMD products.
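To make the umbrella point concrete: the same per-pixel work looks nearly identical whether it's written as a CUDA kernel or an OpenCL kernel, and on NVIDIA hardware both end up running on the same compute cores. A trivial brightness pass in CUDA (generic example, not JC2's actual code; the resolution and gain value are made up):

```
// Trivial per-pixel brightness pass as a CUDA kernel. The OpenCL C version
// is almost line-for-line the same (__global__ becomes __kernel, the index
// comes from get_global_id(0)), which is the sense in which both APIs ride
// on the same GPU compute hardware.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void brighten(float* pixels, int n, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pixels[i] = fminf(pixels[i] * gain, 1.0f);  // scale and clamp to [0,1]
}

int main()
{
    const int n = 1920 * 1080;  // one greyscale 1080p frame
    float* d_pixels;
    cudaMalloc(&d_pixels, n * sizeof(float));
    cudaMemset(d_pixels, 0, n * sizeof(float));

    brighten<<<(n + 255) / 256, 256>>>(d_pixels, n, 1.2f);
    cudaDeviceSynchronize();
    printf("processed %d pixels\n", n);

    cudaFree(d_pixels);
    return 0;
}
```

Which is why, if Avalanche wrote these effects against OpenCL or Compute Shaders rather than C for CUDA directly, AMD support would come more or less for free.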
What's with the pictures with the white-outs and black-outs? That makes no sense whatsoever. Remember the leaked HD5-series pics? Were any of them whited out? Nope. So why the secrecy? Maybe because this isn't what it will actually be? I just don't get it.