Thread: Nvidia unveils the GeForce GTX 780 Ti

  #11
    Xtreme Enthusiast
    Join Date: Dec 2010
    Posts: 594
    Quote Originally Posted by tajoh111
    If we examine the specs of the GK110-based Tesla K20X, its clock was only 732 MHz, yet it had a rated TDP of 235 W. Pretty low clocks, but the GK110-based Titan still goes to 876 MHz (even higher in practice).

    Look at the GK180-based K6000 and you have a card that is clocked at 900 MHz, has more shaders, and has a TDP 10 W lower than the K20X. That's a massive improvement. Before you say they aren't comparable because Tesla is different from the workstation Quadro line, let's look at the Fermi generation.

    The Tesla M2070 and Quadro 6000 are both based on the same GF100, and their clocks are identical at 574 MHz; on top of that, they share the same 225 W TDP. Same clocks, same TDP rating.

    So now let's look at GK110 to GK180 again for the professional class of cards:

    732 MHz --> 900 MHz
    2688 shaders --> 2880 shaders
    235 W TDP --> 225 W TDP

    Now let's fill in the blanks for Titan to GTX 780 Ti:
    876 MHz --> ????
    2688 shaders --> 2880 shaders
    250 W TDP --> ???

    So if any of GK180's improvements carry over, even just a little bit (and they used GK180 instead of GK110), then there is room for monster-clocked cards, just as much as the GTX 780 if not more.

    The 900 MHz clock and 225 W TDP on a fully enabled Big Kepler are very impressive for a Quadro.
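
    Before I get to my questions, a quick sanity check on those numbers. This is only a naive sketch (it treats throughput as shaders x clock and uses TDP as a stand-in for real power draw, both simplifications), but it shows the size of the claimed jump:

    Code:
    # Rough perf/W comparison from the quoted specs.
    # Naive assumptions: throughput ~ shaders * clock; TDP stands in
    # for actual power draw.
    cards = {
        "Tesla K20X (GK110)":   {"shaders": 2688, "mhz": 732, "tdp_w": 235},
        "Quadro K6000 (GK180)": {"shaders": 2880, "mhz": 900, "tdp_w": 225},
    }
    for name, c in cards.items():
        throughput = c["shaders"] * c["mhz"] / 1000.0  # arbitrary units
        print(f"{name}: {throughput:.0f} units, "
              f"{throughput / c['tdp_w']:.2f} units/W")
    # K20X: ~8.4 units/W, K6000: ~11.5 units/W
    # -> ~38% higher perf/W for the K6000 on this naive model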
    One important question:
    Does Quadro run at the full DP rate or not? AFAIK, DP is more power-intensive, hence the lower clocks on the Teslas and on Titan once you enable full-rate DP. It was always my understanding that Quadro is a workstation card (geometry, graphics, etc.) and Tesla is for DP compute. That's the whole point of Maximus: put one Quadro and one Tesla together, each handling its specialized task.

    Second question:
    Will a potential 15-SMX GeForce get GK180 or GK110, and if there really is improved energy efficiency, is it due to the design or to better binning/process improvements over time?

    Third question:
    TDP != power consumption. Or rather: is TDP as rated for Quadro/Tesla comparable to TDP as rated for GeForce?

    Under sustained gaming load, Titan and the GTX 780 often clock near the base clock due to the low temperature target:
    http://ht4u.net/reviews/2013/nvidia_...st/index10.php
    http://ht4u.net/reviews/2013/nvidia_...iew/index9.php

    http://www.hardware.fr/articles/887-...ost-tests.html
    http://www.hardware.fr/articles/894-...ost-tests.html

    https://www.computerbase.de/artikel/...-gtx-titan/19/
    https://www.computerbase.de/artikel/...80-im-test/11/

    Now with those lower clocks, power efficiency is already much better. The Titan draws about 206 W on average and the 780 about 189 W (card only, measured directly, i.e. without power-supply conversion losses):
    http://www.3dcenter.org/artikel/eine...rafikkarten-st
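
    If you want to see this clock/power behavior on your own card, here is a minimal logging sketch (assuming a driver whose nvidia-smi exposes these query fields; power.draw needs a board with power sensors, which Titan-class cards have):

    Code:
    import subprocess, time

    # Poll nvidia-smi once per second: SM clock (MHz), power draw (W),
    # GPU temperature (C). Standard nvidia-smi query fields.
    # Run a game/benchmark alongside this; Ctrl+C to stop.
    while True:
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=clocks.sm,power.draw,temperature.gpu",
             "--format=csv,noheader,nounits"])
        print(out.decode().strip())  # e.g. "876, 205.43, 79"
        time.sleep(1)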

    I simply don't see that much potential for improving energy efficiency between comparable operating points (either no boost vs. no boost or full boost vs. full boost). Voltage is key here. The difference between no boost and full boost is a whopping 0.16 V! With my Titan, I measure a 50-70 W difference between base clock @ 1.0 V and 1006 MHz @ 1.162 V (whole system), and this is in line with the measurements of other reviews that performed this kind of investigation.
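
    That measured delta also fits the textbook dynamic-power model P ~ f * V^2. A rough sketch with assumed numbers (Titan's 837 MHz base clock, and only ~100-120 W of the board's draw being core dynamic power that actually scales with voltage; the rest is memory, static leakage, fan):

    Code:
    # Dynamic power scales roughly as P ~ f * V^2.
    # Assumed (not measured here): 837 MHz base clock; only ~100-120 W
    # of board power is voltage-scaled core dynamic power.
    f_base, v_base   = 837.0, 1.000   # base operating point (MHz, V)
    f_boost, v_boost = 1006.0, 1.162  # my measured boost point (MHz, V)

    scale = (f_boost / f_base) * (v_boost / v_base) ** 2
    print(f"scaling factor: {scale:.2f}")  # ~1.62

    for dyn_w in (100.0, 120.0):
        print(f"{dyn_w:.0f} W dynamic -> +{dyn_w * (scale - 1):.0f} W at boost")
    # -> about +62 W and +75 W, the same ballpark as the measured 50-70 W

    So the simple model predicts roughly the measured difference once you account for the fact that only part of the board's power scales with core voltage.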
    Last edited by boxleitnerb; 10-22-2013 at 09:36 PM.
