Page 2 of 2 (Results 26 to 34 of 34)

Thread: D8E aka G92 is a dual-PCB card [FUD]!!

  1. #26
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,363
    Quote Originally Posted by LordEC911 View Post
    How does that make any sense whatsoever?

    A near-1TFlop card doesn't mean it will DO 1TFlop; if they leave it like the G80, it will only be doing some ~600GFlops.
    It makes perfect sense. A single G80 is roughly 520 GFLOPS on paper and 430 in real-world applications. That means it would need roughly twice the performance to reach that mark, since NVIDIA claimed it would be *over* 1 TFLOP in performance.

    If you go by the logic that die space roughly equates to shading units, which roughly translates into raw performance, the G92 would need to be twice as big as the previous G80s in order to achieve this goal.

    That would mean a chip with twice as many shading units would be the exact same size as current G80s if it were produced on a 45nm process. But they will not make it on 45nm, since not even TSMC is capable of that right now. So at 65nm (which will most likely be the process of choice), the "G92" chip would be roughly 20% larger than current G80s on the 90nm process.

    #1 G80s are at the absolute limit for heatsink weight, so the new card *CANNOT* have a higher TDP, or they risk losing their PCI-E certification.

    #2 G80 yields are not good, which is why there are many 96sp 8800GTSs and no cut-down card with the full shading units enabled. In addition, the sheer size of the die makes the core extremely expensive to produce even on a very mature 90nm process. An unproven 65nm, 55nm, or even 45nm process would have an even higher defect rate, not to mention the potential to be very leaky, which would certainly revoke their PCI-E license for that card *IF* they could get one to work properly from the get-go.

    Given this information, the G92 *cannot* have a larger die than current G80s, and it cannot have a higher TDP either. That means the G92 as a single-die, single-card part is *not viable* if it is to go beyond 1 TFLOP.

    However, if the G92 were in fact a reduced G80 with 512-bit or even 256-bit RAM, two of those cards in SLI would easily reach 1 TFLOP and would be much easier to produce, since the die would be significantly smaller. Even if the defect rate were high, the sheer number of cores would offset it, much like Winchester did for AMD.

    If I were to guess, from a business standpoint, what would be the most profitable per unit of performance for NVIDIA, it would be a 96sp or 128sp G80-like core with a 256-bit memory controller. A card with those specs on a 65nm process would provide a tremendous amount of profit at a $250 price range, and $400-550 as a dual-PCB version.

    Also, the manufacturing cost would be substantially cheaper, since their mid-range and high end would use the same GPU and potentially the same PCB, which is exactly what they did when they released the G71.
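The SLI claim above is easy to sanity-check. Here is a rough back-of-the-envelope sketch in Python, using the on-paper throughput figures quoted in this thread (128 SPs at 1.35 GHz, a MADD plus a co-issued MUL per cycle); these are assumed thread numbers, not official NVIDIA specs:

```python
# Back-of-the-envelope peak throughput for the dual-card scenario.
# All figures are assumptions taken from this thread, not official specs.

def peak_gflops(sp_count, clock_ghz, flops_per_cycle):
    """Peak single-precision GFLOPS: SPs x shader clock (GHz) x flops issued per cycle."""
    return sp_count * clock_ghz * flops_per_cycle

# One G80-class chip: 128 SPs at 1.35 GHz, MADD (2 flops) + co-issued MUL (1 flop)
single = peak_gflops(128, 1.35, 3)
dual = 2 * single  # two chips on one dual-PCB card

print(round(single, 1))  # 518.4 GFLOPS on paper per chip
print(round(dual, 1))    # 1036.8, past the 1 TFLOP mark
```

So two cut-down chips only need to retain roughly the full G80 shader count between them to clear 1 TFLOP on paper.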
    Last edited by Sentential; 09-25-2007 at 03:44 PM.
    NZXT Tempest | Corsair 1000W
    Creative X-FI Titanium Fatal1ty Pro
    Intel i7 2500K Corsair H100
    PNY GTX 470 SLi (700 / 1400 / 1731 / 950mv)
    Asus P8Z68-V Pro
    Kingston HyperX PC3-10700 (4x4096MB)(9-9-9-28 @ 1600mhz @ 1.5v)

    Heatware: 13-0-0

  2. #27
    Xtreme Mentor
    Join Date
    Apr 2007
    Location
    Idaho
    Posts
    3,200
    FUD, so I don't pay much attention to it.


    NVIDIA is becoming notorious for stirring up FUD lately, so I don't think we'll be given any actual proof of what their first-gen PCIe 2.0 DX10 card will be until shortly before launch.
    "To exist in this vast universe for a speck of time is the great gift of life. Our tiny sliver of time is our gift of life. It is our only life. The universe will go on, indifferent to our brief existence, but while we are here we touch not just part of that vastness, but also the lives around us. Life is the gift each of us has been given. Each life is our own and no one else's. It is precious beyond all counting. It is the greatest value we have. Cherish it for what it truly is."

  3. #28
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Sent-

    I was stating that it didn't make any sense to believe Fud over BenchZowner.
    Either way, your logic is flawed.

    Nvidia stated NEAR 1TFlop performance, not over...
    Also there are numerous other ways to "double" performance without doubling the amount of shaders.

    You seemed to have missed the earlier G90/G92 threads...
    BTW- the G80 does ~330GFlops, not 430.
    Last edited by LordEC911; 09-25-2007 at 03:49 PM.

  4. #29
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,363
    Quote Originally Posted by LordEC911 View Post
    You seemed to have missed the earlier G90/G92 threads...
    BTW- the G80 does ~330GFlops, not 430.
    I couldn't find the original PDF, so I hope this quote from the NVIDIA forums suffices:

    G80 has 128 fp32 ALUs at 1350MHz with MADD.

    So it should crunch up to 256 × 1.35 GFLOPS (346 GFLOPS) if one feeds it right.

    But AFAIR the "GF 8800 GPU Technical Brief" document talks about 520 GFLOPS.
    Is there some branch-unit or texture-unit ALU added and summed up?

    Are these units usable by CUDA C code, or only available when one does
    texture array access with some interpolation?

    Sorry for all these questions about tech-spec details. But first I have to convince
    some people that CUDA/G80 is worth the time and effort to port some stuff to.
    Until now this has been regarded as "new toy stuff" by some people.

    Greetings
    Knax
    346 GFLOPS max computational MUL
    ~430 GFLOPS real-world performance (I can't remember where I heard this; it was on the forums someplace)
    520 GFLOPS w/ ADD


    EDIT: listed here too

    http://www.3dcenter.de/artikel/2006/12-28_a.php
    Last edited by Sentential; 09-25-2007 at 04:01 PM.

  5. #30
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Sentential View Post
    346 GFLOPS max computational MUL
    ~430 GFLOPS real-world performance
    520 GFLOPS w/ ADD
    Where is this 430 GFLOPS real-world performance coming from?
    Is it somehow "magically" able to do more work than it is capable of doing?

  6. #31
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,363
    Quote Originally Posted by LordEC911 View Post
    Where is this 430GFlop real world performance?
    It is somehow "magically" able to do more work then it is capable of doing?
    Read my previous post; on paper the G80 is a 520 GFLOPS card. The 346 GFLOPS figure covers only one part of the GPU's overall abilities.

    EDIT: I found the CUDA pdf

    http://www.cs.ucsb.edu/~gilbert/cs24...%20CUDA.ppt

    The nVidia G80 GPU: 128 streaming floating-point processors @ 1.5 GHz; 1.5 GB shared RAM with 86 GB/s bandwidth; 500 GFLOPS on one chip (single precision)
    Last edited by Sentential; 09-25-2007 at 04:06 PM.

  7. #32
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Sentential View Post
    Read my previous post; on paper the G80 is a 520 GFLOPS card. The 346 GFLOPS figure covers only one part of the GPU's overall abilities.

    EDIT: I found the CUDA pdf

    http://www.cs.ucsb.edu/~gilbert/cs24...G80%20CUDA.ppt
    Ok... you are confusing yourself.

    Stock GTX: ~330 GFLOPS real-world performance, ~460 GFLOPS theoretical performance.
    Stock Ultra: ~370 GFLOPS real-world performance, ~500 GFLOPS theoretical performance.

  8. #33
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,363
    Quote Originally Posted by LordEC911 View Post
    Ok... you are confusing yourself.

    Stock GTX: ~330 GFLOPS real-world performance, ~460 GFLOPS theoretical performance.
    Stock Ultra: ~370 GFLOPS real-world performance, ~500 GFLOPS theoretical performance.
    No, I am not confusing myself; read what I posted. In order not to further derail this thread, the figures below are the last I will post on this topic:

    G80 Unified Shader:
    173 GFlop/s ADD
    346 GFlop/s MUL
    (518 GFlop/s ADD + MUL)
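    Those three figures fall straight out of the specs quoted earlier in the thread (128 SPs at 1.35 GHz); a minimal sketch, assuming 1 flop per SP per cycle for a plain ADD or MUL, 2 for a MADD, and 3 with the co-issued MUL:

```python
# Reproducing the quoted G80 shader-throughput breakdown.
# SP count and shader clock are the figures quoted in this thread (assumed, not official).
SP_COUNT = 128
CLOCK_GHZ = 1.35

add_only = SP_COUNT * CLOCK_GHZ      # 1 flop/SP/cycle -> ~173 GFLOPS
madd = add_only * 2                  # MADD issues 2 flops -> ~346 GFLOPS
madd_plus_mul = add_only * 3         # MADD + co-issued MUL -> ~518 GFLOPS

print(round(add_only), round(madd), round(madd_plus_mul))  # 173 346 518
```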

    ___________________________

    I agree with the rest of the posters: if Kinc claims there is a GX2 card in the works, I believe what he says.

  9. #34
    Xtreme Addict
    Join Date
    Jul 2006
    Location
    Vancouver, BC
    Posts
    2,061
    Earlier this year, an NVIDIA engineer was quoted as saying the G80 was designed with 160 shaders, but only 128 were enabled at 90nm (for a variety of reasons: yields, heat, power, etc.). The other shaders would not be enabled until a die shrink. So it's possible the G92 will have 160 shaders per chip, making an x2 variant fairly potent.

    Also, if history repeats itself, the next high-end single-card part will come one quarter after the dual-card product.

    As discussed at length elsewhere here, this is just more evidence that NVIDIA is going to milk the profit and margins of the current-gen GPU as much as possible before launching a next-gen part.
