Maybe some manufacturer's insignia and the MB maker don't want to be blamed for leakage...?
Jeez, resize your pictures, lads; it's pissing me off to be scrolling right, left, up and down all the time just to see anything. Why are there no rules about image size?
You really don't have to go too far in this thread before you can find some posts to delete. It's a bit of a challenge really, given how quickly this thread grows.
I understand that people are frustrated with Fermi's late launch and some of nVidia's other practices. But let's check our frustrations at the door when we come to this thread because it is about Fermi news, info and updates... that's it.
I don't think that's possible. Unless the developer is using PhysX (which needs to be used in conjunction with another API to accurately model hair, water, etc.), there isn't anything out there other than DC and OpenCL right now that can use the GPU for this type of acceleration. If it is a proprietary engine that can do this, we're talking some major $$$$$ invested on the part of the developer.
In order to accurately add water to a scene you need tessellation and selective geometry shading on the rendering side, plus DirectCompute for animation and physics if the water is interacting with anything. Unless the developer found a way around using DirectCompute for the animations, I can't see how this would be "exclusive to NVIDIA". Basically, they may use this to show off the GF100's power in DC or something along those lines.
What do you mean? Of course you can use CUDA directly to implement these effects. There are hooks in CUDA for interoperation with OpenGL and DX buffers. Remember, while CUDA is the wider compute architecture, it also refers to the "C for CUDA" language, which DC and OpenCL are very much based on/similar to.
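For the curious, here is a minimal sketch of what those interop hooks look like with the CUDA runtime API. To be clear, this is only an illustration under assumptions: the GL vertex buffer and the animateWater kernel are made-up names, not from any actual engine.

Code:

// Minimal sketch of CUDA/OpenGL buffer interop (CUDA runtime API).
// Assumes an existing GL vertex buffer object `vbo` holding float4 positions;
// `animateWater` is a hypothetical kernel, not from any shipping title.
#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

__global__ void animateWater(float4 *verts, int n, float t)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        verts[i].y = 0.1f * sinf(verts[i].x * 4.0f + t);  // toy wave displacement
}

void stepWater(GLuint vbo, int n, float t)
{
    cudaGraphicsResource *res;
    // Register the GL buffer (in real code you keep this handle around).
    cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsMapFlagsNone);

    // Map it so the kernel can write straight into the vertices GL will draw.
    cudaGraphicsMapResources(1, &res, 0);
    float4 *d_verts;
    size_t bytes;
    cudaGraphicsResourceGetMappedPointer((void **)&d_verts, &bytes, res);

    animateWater<<<(n + 255) / 256, 256>>>(d_verts, n, t);

    // Unmap before GL touches the buffer again.
    cudaGraphicsUnmapResources(1, &res, 0);
    cudaGraphicsUnregisterResource(res);
}

The Direct3D side works the same way through the corresponding cudaGraphicsD3D*RegisterResource calls, which is what I meant by "DX buffers" above.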
I don't think that, for example, C/C++ for CUDA are any lower level than OpenCL, so... why not? :shrug:
Indeed, from what I have read, programming something in OpenCL might be less straightforward than in C for CUDA because of its lower-level (and harder) setup process.
But don't take my word on any of this, I have no clue about GPGPU programming.
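To give a rough idea of what "more straightforward" means in practice, here is a sketch of a complete C for CUDA host program using the runtime API; the kernel and sizes are purely illustrative. Compare it with the OpenCL version a few posts further down.

Code:

// Sketch: host-side setup with the CUDA runtime API is basically allocate,
// copy, launch, copy back. Kernel name and sizes are illustrative only.
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n, float k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= k;
}

int main(void)
{
    const int n = 1 << 20;
    float *h = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc((void **)&d, n * sizeof(float));               // no context/queue boilerplate
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, n, 2.0f);               // kernel compiled inline by nvcc
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(d);
    free(h);
    return 0;
}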
Thank you sir :up: People get caught up in marketing terms like always...
At this point it looks like I may just have to go CrossFire if I want a more cost-effective performance boost (as much as multi-GPU setups are facepalm-inducing affairs). I can see the 480 being nice at high res/IQ due to its VRAM and bandwidth advantage over Cypress (and unless you use 8x+ AA at 1920x1200, and more often 2560x1600, 1GB of VRAM should remain adequate at 1920x1200 4x well into the future, e.g. HD 6000), but beyond that all these rumors so far are merely :shrug: (WTB facepalm smiley).

If the 480 truly creeps up on 300 watts, that is quite strange. Would another shader cluster, some extra RAM and similar (perhaps lower?) clocks really result in 80 more watts of power usage (max) over the 470? For comparison's sake, what was the max board power of the original GTX 260 and 280?
I wonder why the GTX 480 has a two-pin fan connector while the GTX 470 has a four-pin one?
No, you're right. The runtime C for CUDA interface is higher level than the OpenCL API, which is more similar to CUDA's driver interface. Nvidia has absolutely zero motivation to use OpenCL in any situation where CUDA would suffice, because that would effectively give AMD a free invite to the party. Although I don't think OpenCL is yet in a stable enough state to ship with a commercial game, so it's a moot point anyway.
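To illustrate that point, here is roughly the same "scale an array" job written against the OpenCL API (error checking omitted, kernel source purely illustrative). All the platform/context/queue/program plumbing that the CUDA runtime hides is explicit here, much like it is in CUDA's driver interface.

Code:

// Sketch: the same job as OpenCL host code. Note the explicit
// platform/device/context/queue/program/kernel setup. Error checking omitted.
#include <CL/cl.h>
#include <stdlib.h>

static const char *src =
    "__kernel void scale(__global float *x, int n, float k) {"
    "    int i = get_global_id(0);"
    "    if (i < n) x[i] *= k;"
    "}";

int main(void)
{
    const int n = 1 << 20;
    float *h = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    cl_platform_id plat;   clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;      clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);           // kernel compiled at runtime
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    cl_mem d = clCreateBuffer(ctx, CL_MEM_COPY_HOST_PTR, n * sizeof(float), h, NULL);
    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof(cl_mem), &d);
    clSetKernelArg(k, 1, sizeof(int), &n);
    clSetKernelArg(k, 2, sizeof(float), &factor);

    size_t global = n;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, d, CL_TRUE, 0, n * sizeof(float), h, 0, NULL, NULL);

    clReleaseMemObject(d); clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    free(h);
    return 0;
}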
Didn't I just say that in plain English? ;)
CUDA allows for interoperability with DC, etc., but it stands to reason that the same interoperability can be created (though not through CUDA) for ATI's Stream. Maybe I just didn't explain what I was saying well.
Allow me to post a slide directly from NVIDIA:
http://images.hardwarecanucks.com/im...0/GF100-27.jpg
I've been programming all my life, and I have been looking into this stuff lately.
The CUDA architecture enables developers to leverage the parallel processing power of NVIDIA GPUs. CUDA enables this via standard APIs such as OpenCL and DirectCompute, and high-level programming languages such as C/C++, Fortran, Java, Python, and the Microsoft .NET Framework.
Here's my general feeling about PhysX:
In about 1 year time, many people will be wondering this:
"Humm, with 4 cores a bit busy now with this game, I wonder how it would be like if I could use my other 4 Bulldozer cores to do something useful like run physics, too bad I had to waste 150$ on this graphics card just for that"