Maybe some manufacturer's insignia and the MB maker don't want to be blamed for leakage...?
Jeez, resize your pictures, lads. It's pissing me off to be scrolling right, left, up and down all the time just to see anything. Why are there no rules about image size?
► ASUS P8P67 Deluxe (BIOS 1305)
► 2600K @4.5GHz 1.27v , 1 hour Prime
► Silver Arrow , push/pull
► 2x2GB Crucial 1066MHz CL7 ECC @1600MHz CL9 1.51v
► GTX560 GB OC @910/2400 0.987v
► Crucial C300 v006 64GB OS-disk + F3 1TB + 400MB RAMDisk
► CM Storm Scout + Corsair HX 1000W
+
► EVGA SR-2 , A50
► 2 x Xeon X5650 @3.86GHz(203x19) 1.20v
► Megahalem + Silver Arrow , push/pull
► 3x2GB Corsair XMS3 1600 CL7 + 3x4GB G.SKILL Trident 1600 CL7 = 18GB @1624 7-8-7-20 1.65v
► XFX GTX 295 @650/1200/1402
► Crucial C300 v006 64GB OS-disk + F3 1TB + 2GB RAMDisk
► SilverStone Fortress FT01 + Corsair AX 1200W
You really don't have to go too far in this thread before you can find some posts to delete. It's a bit of a challenge really, given how quickly this thread grows.
I understand that people are frustrated with Fermi's late launch and some of nVidia's other practices. But let's check our frustrations at the door when we come to this thread because it is about Fermi news, info and updates... that's it.
Dual CCIE (Route\Switch and Security) at your disposal. Have a Cisco-related or other network question? My PM box is always open.
Xtreme Network:
- Cisco 3560X-24P PoE Switch
- Cisco ASA 5505 Firewall
- Cisco 4402 Wireless LAN Controller
- Cisco 3502i Access Point
I don't think that's possible. Unless the developer is using PhysX (which needs to be used in conjunction with another API to accurately model hair, water, etc.), there isn't anything out there other than DC and OpenCL right now that can use the GPU for this type of acceleration. If it is a proprietary engine that can do this, we're talking some major $$$$$ invested on the part of the developer.
In order to accurately add water to a scene you need tessellation and selective geometry shading on the rendering side, plus DirectCompute for animation and physics if the water is interacting with anything. Unless the developer found a way of using DirectCompute for the animations, I can't see how this would be "exclusive to NVIDIA". Basically, they may use this to show off the GF100's power in DC or something along those lines.
What do you mean? Of course you can use CUDA directly to implement these effects. There are hooks in CUDA for interoperation with OpenGL and DX buffers. Remember, while CUDA is the wider compute architecture, it also refers to the "C for CUDA" language on which DC and OpenCL are very much based.
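Roughly, the runtime interop path looks like this: register the GL buffer with CUDA, map it, get a device pointer, and hand that straight to a kernel. Just a sketch from memory, assuming an existing GL context and vertex buffer; the kernel and function names are invented and error checking is omitted:

```cpp
// Minimal sketch (not production code): animating an OpenGL VBO from a CUDA kernel,
// e.g. for water or hair effects. Assumes a current GL context and a filled VBO.
#include <GL/gl.h>              // GLuint
#include <cuda_gl_interop.h>    // cudaGraphicsGLRegisterBuffer & friends

__global__ void ripple(float4* verts, int n, float t)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        verts[i].y = 0.1f * sinf(verts[i].x * 10.0f + t);  // displace vertex height
}

void animate(GLuint vbo, int nVerts, float t)
{
    cudaGraphicsResource* res = 0;
    cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsRegisterFlagsNone); // register GL buffer with CUDA
    cudaGraphicsMapResources(1, &res, 0);                                   // hand it to CUDA for this frame

    float4* dptr = 0;
    size_t bytes = 0;
    cudaGraphicsResourceGetMappedPointer((void**)&dptr, &bytes, res);       // get a device pointer

    ripple<<<(nVerts + 255) / 256, 256>>>(dptr, nVerts, t);                 // animate in place

    cudaGraphicsUnmapResources(1, &res, 0);                                 // give it back to GL for rendering
    cudaGraphicsUnregisterResource(res);
}
```

In a real renderer you would register the buffer once at load time and only map/unmap it per frame; re-registering on every call here just keeps the sketch self-contained.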
I don't think that, for example C/C++ for CUDA are any lower level than OpenCL so... why not?
Indeed, what I have read about it is that programming something with OpenCL might be less straightforward than with C for CUDA because of a lower-level (and harder) setup process.
But don't take my word on any of this, I have no clue about GPGPU programming.
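To give a feel for what "more straightforward" means in practice, here is a hypothetical, self-contained "C for CUDA" runtime sketch (names invented, error checking omitted): a complete program that runs a kernel is basically allocate, copy, launch, copy back, with none of the explicit platform/context/program-compilation steps OpenCL requires up front.

```cpp
// Hypothetical sketch: doubling an array with the CUDA runtime ("C for CUDA") API.
// The runtime handles device/context setup behind the scenes.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float* x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main()
{
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    float* dev = 0;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, n);   // kernel launch is a single line

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[10] = %f\n", host[10]);       // expect 20.0
    return 0;
}
```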
Thank you, sir.
People get caught up in marketing terms, like always...
At this point it looks like I may just have to go CrossFire if I want a more cost-effective performance boost (as much as multi-GPU setups are facepalm-inducing affairs). I can see the 480 being nice at high res / IQ due to its vram and bandwidth advantage over Cypress (and unless you use 8x+ AA at 1920x1200, and more often 2560x1600, 1GB of vram should remain adequate at 1920x1200 4x well into the future, e.g. HD6000), but beyond that all these rumors so far are merely... (WTB facepalm smiley)

If the 480 truly creeps up on 300 watts, that is quite strange. Would another shader cluster, some extra RAM and similar (perhaps lower?) clocks really result in 80 more watts of power usage (max) over the 470? For comparison's sake, what was the max board power of the original GTX 260 and 280?
Feedanator 7.0
CASE:R5|PSU:850G2|CPU:i7 6850K|MB:x99 Ultra|RAM:8x4 2666|GPU:980TI|SSD:BPX256/Evo500|SOUND:2i4/HS8
LCD:XB271HU|OS:Win10|INPUT:G900/K70 |HS/F:H115i
I wonder why the GTX 480 has a two pin fan connector and GTX 470 has a 4 pin one?
Coming Soon
No, you're right. The runtime C for CUDA interface is higher level than the OpenCL API, which is more similar to CUDA's driver interface. Nvidia has absolutely zero motivation to use OpenCL in any situation where CUDA would suffice, because that would effectively give AMD a free invite to the party. Although I don't think OpenCL is yet in a stable enough state to ship with a commercial game, so it's a moot point anyway.
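To put some code behind the driver-interface comparison: below is roughly the same launch as in the earlier runtime sketch, done through CUDA's driver API, which is about the level OpenCL sits at. You initialise the driver, create a context, load a pre-compiled module, look the kernel up by name and marshal its arguments by hand. Hypothetical sketch again; the scale.ptx module is assumed to exist (built with extern "C" so the entry name is unmangled), and error checking is omitted.

```cpp
// Hypothetical sketch: the same array-doubling launch via the CUDA driver API,
// showing the explicit setup steps that OpenCL also requires.
#include <cuda.h>
#include <cstdio>

int main()
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    cuInit(0);                                    // explicit driver initialisation
    CUdevice dev;   cuDeviceGet(&dev, 0);         // pick a device
    CUcontext ctx;  cuCtxCreate(&ctx, 0, dev);    // create a context by hand

    CUmodule mod;   cuModuleLoad(&mod, "scale.ptx");         // load a compiled kernel module (assumed to exist)
    CUfunction fn;  cuModuleGetFunction(&fn, mod, "scale");  // look the kernel up by name

    CUdeviceptr dptr;
    cuMemAlloc(&dptr, bytes);
    cuMemcpyHtoD(dptr, host, bytes);

    int count = n;
    void* args[] = { &dptr, &count };             // marshal kernel arguments manually
    cuLaunchKernel(fn, (n + 255) / 256, 1, 1,     // grid dimensions
                       256, 1, 1,                 // block dimensions
                       0, 0, args, 0);            // shared mem, stream, params, extra

    cuMemcpyDtoH(host, dptr, bytes);
    cuMemFree(dptr);
    cuCtxDestroy(ctx);

    printf("host[10] = %f\n", host[10]);
    return 0;
}
```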
Didn't I just say that in plain English?
CUDA allows for interoperability with DC, etc., but it stands to reason that the same interoperability can be created (though not through CUDA) for ATI's Stream. Maybe I just didn't explain what I was saying well.
Allow me to post a slide directly from NVIDIA:
[NVIDIA slide]
I've been programming all my life, and have been looking into this stuff lately.
The CUDA architecture enables developers to leverage the parallel processing power of NVIDIA GPUs. CUDA enables this via standard APIs such as OpenCL and DirectCompute, and high-level programming languages such as C/C++, Fortran, Java, Python, and the Microsoft .NET Framework.
Here's my general feeling about PhysX:
In about a year's time, many people will be wondering this:
"Hmm, with 4 cores a bit busy now with this game, I wonder what it would be like if I could use my other 4 Bulldozer cores to do something useful like run physics. Too bad I had to waste $150 on this graphics card just for that."
Gigabyte Z77X-UD5H
G-Skill Ripjaws X 16Gb - 2133Mhz
Thermalright Ultra-120 eXtreme
i7 2600k @ 4.4Ghz
Sapphire 7970 OC 1.2Ghz
Mushkin Chronos Deluxe 128Gb