
Thread: The official GT300/Fermi Thread


  1. #1
    Xtreme Enthusiast
    Join Date: Feb 2005
    Posts: 970
    Well, why stop at GT300? lol. Estimates put GT300 3-6 months away. I'll suggest that in another 3-6 months after that, there will be something well worth your time waiting for. So really, you might as well wait another 6-12 months, unless what you really want to say is "I'd rather own an NV card". If so, just grow some balls and say it.

    Last edited by flippin_waffles; 10-01-2009 at 09:14 AM.

  2. #2
    Xtreme Addict
    Join Date: Oct 2004
    Posts: 1,838
    Wait a minute, if NVIDIA can do something complex like tessellation via CUDA, what the hell is gonna stop GT300 from supporting every future API via CUDA?
    Last edited by grimREEFER; 10-01-2009 at 09:19 AM.
    DFI P965-S/core 2 quad q6600@3.2ghz/4gb gskill ddr2 @ 800mhz cas 4/xfx gtx 260/ silverstone op650/thermaltake xaser 3 case/razer lachesis

  3. #3
    Xtreme Enthusiast
    Join Date: Oct 2006
    Location: Quebec, Canada
    Posts: 589
    Quote Originally Posted by grimREEFER View Post
    what the hell is gonna stop GT300 from supporting every future API via CUDA?
    The fact that shaders change considerably each time a new API comes out: there are new requirements for precision and calculation capabilities, new compression algorithms, bigger textures... Lots of stuff changes, and it wouldn't be efficient to keep the hardware as it is; the performance hit of having programmable shaders do the work of specialized units would probably be pretty high.

    Then again, it could be possible; I'm not a specialist in any of this, so I could be wrong.
    i7 2600K @ 4.6GHz/Maximus IV Extreme
    2x 4GB Corsair Vengeance 1866
    HD5870 1GB PCS+/OCZ Vertex 120GB +
    WD Caviar Black 1TB
    Corsair HX850/HAF 932/Acer GD235HZ
    Auzentech X-Fi Forte/Sennheiser PC-350 + Corsair SP2500

  4. #4
    Xtreme Member
    Join Date: Jul 2009
    Location: Madrid (Spain)
    Posts: 352
    Quote Originally Posted by grimREEFER View Post
    Wait a minute, if NVIDIA can do something complex like tessellation via CUDA, what the hell is gonna stop GT300 from supporting every future API via CUDA?
    Not via CUDA but via shaders. I haven't taken an in-depth look at DX11, but as far as I have seen, I think there are 2 new types of shaders that let you program the tessellation, the Hull and Domain Shaders (in addition to the previous Vertex, Geometry and Pixel Shaders). That's not something specific to NVIDIA, but the way DX11 is defined.

    Anyway, via GPGPU (be it CUDA, OpenCL, ATI Stream or whatever you want) you can effectively program an entire rendering process from scratch, using whatever approach you want, and modeling the rendering pipeline to your convenience.
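    Just to illustrate (a minimal sketch of my own, with made-up names like pointPipeline, not anything NVIDIA ships): in CUDA every stage is just code you write, so a toy "pipeline" where each thread plays vertex shader, rasterizer and ROP for one point could look like this:

    Code:
    // Toy "rendering pipeline from scratch" in CUDA (hypothetical sketch).
    // Each thread acts as vertex shader, rasterizer and ROP for one point.
    #include <cuda_runtime.h>

    struct Vec3 { float x, y, z; };

    __global__ void pointPipeline(const Vec3* verts, int n,
                                  unsigned int* framebuffer,
                                  int width, int height)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        Vec3 v = verts[i];

        // "Vertex shader": trivial perspective divide (assumes z > 0).
        float sx = v.x / v.z;
        float sy = v.y / v.z;

        // "Rasterizer": map [-1,1] clip space to pixel coordinates.
        int px = (int)((sx * 0.5f + 0.5f) * width);
        int py = (int)((sy * 0.5f + 0.5f) * height);
        if (px < 0 || px >= width || py < 0 || py >= height) return;

        // "ROP": write an opaque white pixel (no depth test, no blending).
        framebuffer[py * width + px] = 0xFFFFFFFFu;
    }

    int main()
    {
        const int n = 3, width = 64, height = 64;
        Vec3 hostVerts[n] = { {  0.0f, 0.0f,  2.0f },
                              {  0.5f, 0.5f,  2.0f },
                              { -0.5f, 0.25f, 4.0f } };

        Vec3* dVerts; unsigned int* dFb;
        cudaMalloc((void**)&dVerts, sizeof(hostVerts));
        cudaMalloc((void**)&dFb, width * height * sizeof(unsigned int));
        cudaMemcpy(dVerts, hostVerts, sizeof(hostVerts), cudaMemcpyHostToDevice);
        cudaMemset(dFb, 0, width * height * sizeof(unsigned int));

        pointPipeline<<<(n + 255) / 256, 256>>>(dVerts, n, dFb, width, height);
        cudaDeviceSynchronize();

        cudaFree(dVerts); cudaFree(dFb);
        return 0;
    }
    Triangles, texturing, depth testing and so on would just be more kernels like that.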

    The downside? You would be programming everything to run on general compute processors. The reason we still use Direct3D/OpenGL with their mostly fixed pipeline even today is that the hardware implements part of the tasks with units specific to them (there are units to map the 2D textures onto the vertices of the 3D meshes, to apply filters, to project the 3D data onto a 2D bitmap defined by the frustum of the camera, and so on). All this work is (logically) done much faster by hardware whose specific mission is to do it (TMUs and ROPs, basically).
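    To make the TMU point concrete (again just a hedged sketch of my own, not code from this thread, using the classic CUDA texture-reference API): CUDA itself exposes the texture units, so the same bilinear fetch can either go through the TMU as a single fetch or be emulated by hand on the general-purpose ALUs. Note the TMU blends with low-precision fixed-point weights, so the two results only match approximately:

    Code:
    // Hardware (TMU) vs. software bilinear filtering in CUDA (sketch).
    #include <cuda_runtime.h>
    #include <cstdio>

    texture<float, 2, cudaReadModeElementType> tex;  // bound below in main()

    __global__ void hwFilter(float u, float v, float* out)
    {
        // One fetch: the TMU does the address math and the bilinear
        // blend in fixed-function hardware.
        *out = tex2D(tex, u, v);
    }

    __global__ void swFilter(const float* img, int w, int h,
                             float u, float v, float* out)
    {
        // The same filter emulated on the general ALUs: four loads
        // plus a pile of arithmetic per sample.
        float x = u * w - 0.5f, y = v * h - 0.5f;
        int x0 = (int)floorf(x), y0 = (int)floorf(y);
        float fx = x - x0, fy = y - y0;
        int x1 = min(x0 + 1, w - 1), y1 = min(y0 + 1, h - 1);
        x0 = max(x0, 0); y0 = max(y0, 0);
        float t00 = img[y0 * w + x0], t10 = img[y0 * w + x1];
        float t01 = img[y1 * w + x0], t11 = img[y1 * w + x1];
        *out = (t00 * (1 - fx) + t10 * fx) * (1 - fy)
             + (t01 * (1 - fx) + t11 * fx) * fy;
    }

    int main()
    {
        const int w = 4, h = 4;
        float hostImg[w * h];
        for (int i = 0; i < w * h; ++i) hostImg[i] = (float)i;

        // Put the image in a cudaArray and bind the texture reference.
        cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
        cudaArray* arr;
        cudaMallocArray(&arr, &desc, w, h);
        cudaMemcpyToArray(arr, 0, 0, hostImg, sizeof(hostImg),
                          cudaMemcpyHostToDevice);
        tex.filterMode = cudaFilterModeLinear;  // let the TMU filter
        tex.normalized = true;
        cudaBindTextureToArray(tex, arr);

        float *dImg, *dOut;
        cudaMalloc((void**)&dImg, sizeof(hostImg));
        cudaMalloc((void**)&dOut, 2 * sizeof(float));
        cudaMemcpy(dImg, hostImg, sizeof(hostImg), cudaMemcpyHostToDevice);

        hwFilter<<<1, 1>>>(0.37f, 0.61f, dOut);
        swFilter<<<1, 1>>>(dImg, w, h, 0.37f, 0.61f, dOut + 1);

        float out[2];
        cudaMemcpy(out, dOut, sizeof(out), cudaMemcpyDeviceToHost);
        printf("hw: %f  sw: %f\n", out[0], out[1]);

        cudaUnbindTexture(tex);
        cudaFreeArray(arr);
        cudaFree(dImg); cudaFree(dOut);
        return 0;
    }
    And that gap is per texture fetch; a shader issues millions of them per frame, which is why nobody emulates the TMUs unless they have to.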

    But yeah, I think the future of 3D graphics will be in completely programmable pipelines, and the specific hardware units for particular tasks will disappear. When there's enough power to allow it, of course. That's the general direction with computers: the more power, the more we tend toward flexibility.
