
Thread: The GT300/Fermi Thread - Part 2!


  #11
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Quote Originally Posted by Chickenfeed
    Couldn't agree more. Never in history has a launch card using a new API been able to master it first generation. I doubt we will see strong DX11 cards until at least the 3rd generation, perhaps a 2nd-gen refresh best case. That said, what they did with Cypress was a smart business move nonetheless. I'm sure Nvidia will push DX11 now that they can call it relevant, but if the past repeats itself, I am not buying it (their marketing). Hell, I had an 8800GTX launch week for a good 14 months and never felt DX10 was relevant until much later...
    Mr. Obvious. Of course future cards are faster with better features - why else would folks upgrade and buy them?

    And I think you underestimate the ENORMOUS effort in architecture, design, floorplanning, validation, etc. needed to get even chips as similar as the 9800GT and 9600GT made. FYI, the FX 5800 and 7900GTX were both DX9, yet they are very, very different.

    Quote Originally Posted by Chumbucket843
    It's getting ridiculous how many people think GF100 does not have fixed-function tessellation. If they were that incompetent, maybe they should hire people off of tech forums to architect their GPUs. Most people probably don't know the difference between fixed-function logic and programmable logic anyway.

    What they did was fairly simple: GF100 basically sets up the scene in parallel, compared to the serial setup of other GPUs. It works well for all of the small triangles tessellation creates.
    Does it really matter? A means to an end? The R600 has neither a 2D core nor correctly working AA hardware, yet you can surf the web and play games with AA.

    Quote Originally Posted by mapel110 View Post
    Obviously fake

    Quote Originally Posted by Designer
    lol, my memory bandwidth is better

    GHz -> GiB factor = 1000^3 / 1024^3 = 0.9313
    512-bit DDR = 512 * 2 / 8 = 128 bytes per clock (384-bit = 96, 320-bit = 80)
    For GDDR5, double the clock rate shown, because GDDR5 fetches twice as many bits at a time.

    (If you don't apply the GHz -> GiB conversion factor, you get the same number as GPU-Z.)
    So, for your GTX295: (512/4) x 1.512 GHz = 128 x 1.512 x 0.9313 = 180.2 GiB/s
    The GTX480 from the screenshot would be: (384/4) x 1.8 x 0.9313 = 160.9 GiB/s
    For reference, the 5870: (256/4) x 2.4 x 0.9313 = 143.0 GiB/s

    Almighty GTX480 only a smidgen ahead... pff, marketing PR won't stand for that.
    Using the same clocks as the 5870, the 480 would be 50% higher: 96 x 2.4 x 0.9313 = 214.6 GiB/s (or 230 GB/s in marketing speak - very close to the 5970's 256 GB/s!!)
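    The arithmetic above boils down to one small formula, sketched here as a script. The clock rates and bus widths are the example figures from this post (with GDDR5 clocks already doubled), not official specs:

```python
# Memory bandwidth estimate, following the formula in the post above.
# GHz -> GiB/s factor: 1000^3 / 1024^3, rounded as in the post.
GHZ_TO_GIB = 0.9313

def bandwidth_gib_s(bus_bits, effective_clock_ghz):
    """Peak bandwidth in GiB/s for a DDR-type bus.

    bus_bits / 4 = bytes transferred per clock: bus_bits / 8 bytes,
    times 2 transfers per clock (DDR). For GDDR5, pass the clock
    already doubled, since it fetches twice as many bits at a time.
    """
    bytes_per_clock = bus_bits / 4
    return bytes_per_clock * effective_clock_ghz * GHZ_TO_GIB

# Example figures from the post:
print(round(bandwidth_gib_s(512, 1.512), 1))  # GTX295 -> 180.2
print(round(bandwidth_gib_s(384, 1.8), 1))    # GTX480 -> 160.9
print(round(bandwidth_gib_s(256, 2.4), 1))    # HD5870 -> 143.0
```

    Dropping the GHZ_TO_GIB factor gives the raw GB/s figure that GPU-Z (and marketing) quote instead.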
    Last edited by ***Deimos***; 03-06-2010 at 01:35 PM.

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V
