Page 103 of 143
Results 2,551 to 2,575 of 3,567

Thread: Kepler Nvidia GeForce GTX 780

  1. #2551
    Xtreme Member
    Join Date
    Mar 2007
    Posts
    317
    Quote Originally Posted by snoid_zero View Post
    Galaxy GeForce GTX 680 4GB / Hall of Fame Edition

    6+8pin power connectors and TDP of 300W.
    A question, not very relevant:
    Shouldn't it be the case that two 6-pin power connectors would be enough for 300W? From PCI-Express 2.0 onwards we get 150W from the PCI-E slot (and a further 150W from the power connectors, amounting to a grand total of 300W).

    Of course, that would make the card incompatible with PCI-E 1.1 boards, but that shouldn't be much of a problem, given that the vast majority of people (all?) who would buy such a card already own a post-2007 motherboard (which has a PCI-Express 2.0 slot on board).

  2. #2552
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Stevethegreat View Post
    A question, not very relevant:
    Shouldn't it be the case that two 6-pin power connectors would be enough for 300W? From PCI-Express 2.0 onwards we get 150W from the PCI-E slot (and a further 150W from the power connectors, amounting to a grand total of 300W).

    Of course, that would make the card incompatible with PCI-E 1.1 boards, but that shouldn't be much of a problem, given that the vast majority of people (all?) who would buy such a card already own a post-2007 motherboard (which has a PCI-Express 2.0 slot on board).
    It has to be implemented by the mobo manufacturer, and other than a small handful of boards, the vast majority don't allow more than 75W through the PCI-E slot.
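    For reference, the budget arithmetic the two posts are using can be sketched like this. It's a rough illustration using the spec limits (75W from the slot, which is what most boards actually deliver, 75W per 6-pin, 150W per 8-pin); the helper name is mine, not anything from the thread:

    ```python
    # Rough sketch of the PCIe power-budget arithmetic discussed above.
    # Spec limits: 75 W from the slot (what most boards actually deliver),
    # 75 W per 6-pin connector, 150 W per 8-pin connector.

    SLOT_W = 75
    PIN6_W = 75
    PIN8_W = 150

    def board_budget(n_6pin=0, n_8pin=0, slot_w=SLOT_W):
        """Total power budget in watts for a card with the given connectors."""
        return slot_w + n_6pin * PIN6_W + n_8pin * PIN8_W

    print(board_budget(n_6pin=1, n_8pin=1))  # 6+8-pin card: 300
    print(board_budget(n_6pin=2))            # dual 6-pin card: 225
    ```

    So with the slot stuck at 75W in practice, 6+8-pin is the only standard way to reach a 300W TDP.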
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  3. #2553
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Kepler was a good time for Nvidia to go to a 1:3 GPU:shader clock ratio; it's more than capable.

    What's really the hold-up on GK110? Low memory bus clock again?

  4. #2554
    Xtreme Mentor
    Join Date
    Feb 2008
    Posts
    2,671
    Either they don't need GK110/100 to compete against the 7970, or they want to skip the 'GTX 480' this time and only release the 'GTX 580' once it's done, ready, and needed.

  5. #2555
    Xtreme Addict
    Join Date
    Jun 2005
    Location
    Madison, WI
    Posts
    1,004
    Quote Originally Posted by bhavv View Post
    Either they don't need GK110/100 to compete against the 7970, or they want to skip the 'GTX 480' this time and only release the 'GTX 580' once it's done, ready, and needed.
    Exactly this, all of this.
    \Project\ Triple Surround Fury
    Case:
    Mountain Mods Ascension (modded)
    CPU: i7 920 @ 4GHz + EK Supreme HF (plate #1)
    GPU: GTX 670 3-Way SLI + XSPC Razor GTX670 water blocks
    Mobo: ASUS Rampage III Extreme + EK FB R3E water block
    RAM: 3x 2GB Mushkin Enhanced Ridgeback DDR3 @ 6-8-6-24 1T
    SSD: Crucial M4 256GB, 0309 firmware
    PSU: 2x Corsair HX1000s on separate circuits
    LCD: 3x ASUS VW266H 26" Nvidia Surround @ 6030 x 1200
    OS: Windows 7 64-bit Home Premium
    Games: AoE II: HD, BF4, MKKE, MW2 via FourDeltaOne (Domination all day!)

  6. #2556
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    Quote Originally Posted by UrbanSmooth View Post
    Exactly this, all of this.
    Sitting on your product only reduces its usable, marketable lifespan. If you have a complete, marketable, and competitive product then it should be on the market. Otherwise it's suicide.

    All along the watchtower the watchmen watch the eternal return.

  7. #2557
    Xtreme Member
    Join Date
    Sep 2010
    Posts
    139
    Quote Originally Posted by bhavv View Post
    Either they don't need GK110/100 to compete against the 7970, or they want to skip the 'GTX 480' this time and only release the 'GTX 580' once it's done, ready, and needed.
    They really really need it for gpgpu though.

  8. #2558
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Quote Originally Posted by bhavv View Post
    Either they don't need GK110/100 to compete against the 7970, or they want to skip the 'GTX 480' this time and only release the 'GTX 580' once it's done, ready, and needed.
    Quote Originally Posted by STEvil View Post
    Sitting on your product only reduces its usable, marketable lifespan. If you have a complete, marketable, and competitive product then it should be on the market. Otherwise it's suicide.
    I agree with both... but when this thing is released, its memory bus had better be capable of 2000MHz / 384GB/s.
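    For what it's worth, those two numbers are consistent with each other. A sketch of the standard GDDR5 peak-bandwidth arithmetic (quad data rate; the function name is mine):

    ```python
    # Sketch of the standard GDDR5 peak-bandwidth arithmetic (quad data rate:
    # 4 transfers per pin per base-clock cycle).

    def gddr5_bandwidth_gbs(base_clock_mhz, bus_width_bits):
        """Peak bandwidth in GB/s for a GDDR5 bus."""
        transfers_per_s = base_clock_mhz * 1e6 * 4         # effective transfer rate
        return transfers_per_s * bus_width_bits / 8 / 1e9  # bits -> bytes -> GB

    print(gddr5_bandwidth_gbs(2000, 384))  # 384.0
    print(gddr5_bandwidth_gbs(1500, 512))  # 384.0: a wider bus at a lower clock
    ```

    A 2000MHz base clock on a 384-bit bus gives exactly 384GB/s; a hypothetical 512-bit bus would reach the same figure at only 1500MHz.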

  9. #2559
    Xtreme Member
    Join Date
    Dec 2008
    Location
    India
    Posts
    394
    They only just released the 680; surely they have a three-month window before they need to release something even faster, depending on market conditions and how profitable the chip will be. Maybe right now they are concentrating on making the most of the 680 and refining GK110 so the next chip is also very profitable, with good power and temperature characteristics.

  10. #2560
    Xtreme Addict
    Join Date
    Jun 2005
    Location
    Madison, WI
    Posts
    1,004
    Quote Originally Posted by STEvil View Post
    Sitting on your product only reduces its usable, marketable lifespan. If you have a complete, marketable, and competitive product then it should be on the market. Otherwise it's suicide.
    I understand that, Stevie.

    However, being able to "sit on" their actual high-end GPU, Nvidia actually has time to tweak it some more and possibly get a "GTX 580-like" release out of it, as bhavv said a few posts back.
    \Project\ Triple Surround Fury
    Case:
    Mountain Mods Ascension (modded)
    CPU: i7 920 @ 4GHz + EK Supreme HF (plate #1)
    GPU: GTX 670 3-Way SLI + XSPC Razor GTX670 water blocks
    Mobo: ASUS Rampage III Extreme + EK FB R3E water block
    RAM: 3x 2GB Mushkin Enhanced Ridgeback DDR3 @ 6-8-6-24 1T
    SSD: Crucial M4 256GB, 0309 firmware
    PSU: 2x Corsair HX1000s on separate circuits
    LCD: 3x ASUS VW266H 26" Nvidia Surround @ 6030 x 1200
    OS: Windows 7 64-bit Home Premium
    Games: AoE II: HD, BF4, MKKE, MW2 via FourDeltaOne (Domination all day!)

  11. #2561
    Xtreme Member
    Join Date
    Aug 2009
    Location
    Belgium
    Posts
    163
    Quote Originally Posted by UrbanSmooth View Post
    I understand that, Stevie.

    However, being able to "sit on" their actual high-end GPU, Nvidia actually has time to tweak it some more and possibly get a "GTX 580-like" release out of it, as bhavv said a few posts back.

    I only hope they won't be making a whole series of cards with a lot of steps in between the GTX 680 and GTX 780.
    Asus Z87 Deluxe, 4770K,Noctua NH-D14, Crucial 16 GB DDR3-1600, Geforce Titan, ASUS DRW-24B3ST, Crucial M500 960GB, Crucial M4 256GB, 3 X Seagate 4TB, Lamptron FC5 V2 Fancontroller, Noctua Casefans, Antec P183 Black, Asus Essence STX, Corsair AX860i, Corsair SP2500 speakers, Logitech Illuminated Keyboard, Win7 Home Pro 64 bit + Win 8.1 Home 64 bit Dual boot, ASUS VG278H

  12. #2562
    Xtreme Enthusiast
    Join Date
    Nov 2007
    Posts
    872
    Quote Originally Posted by STEvil View Post
    Sitting on your product only reduces its usable, marketable lifespan. If you have a complete, marketable, and competitive product then it should be on the market. Otherwise it's suicide.
    Not necessarily.

    There are scenarios where it would make perfect sense.

    Selling 300mm² chips for $500 is more profitable than selling 500mm² chips for $500: more chips per wafer.

    Yields of 500mm² chips at the current state of the 28nm process might make the chips impossible to sell at a price where projected demand will make them profitable to produce.

    If the imagined 500mm² chip's successor is on schedule, it is a good gamble to sell only the more profitable 300mm² chips. If AMD's next gen is beaten by the imagined held-back 500mm² chip, that can be released and the successor held while its successor is worked on. Or if AMD's product beats the imagined 500mm² chip, the successor can be released. Either way, NVIDIA wins and got to sell the 300mm² chip at high-end prices.

    Along the same lines, if the imagined 500mm² chip wins next gen, they got to sell two chips for $500 or more instead of one, and push back the R&D cycle.

    Could be the market doesn't want 500mm² chips any longer. Since ATI introduced their "smaller, less power" business model, their fans have been all over teh intarwebz yelling about how "smaller, less power" is the "way to go". Maybe that, coupled with good Cypress/Barts sales and market research, has convinced NVIDIA to change focus.

    Last but not least, if the product line is all based on smaller chips that beat your competitor, more profit.

    I'd say I can come up with a lot of reasons a "done" product should not be released.

    And of course it could be that all along GK104 was designed to be this gen's "high-end chip"; we'll never know. Fortunately it gives us a level of performance and features that are worth buying.
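    To put rough numbers on the chips-per-wafer point, here is a sketch using a common first-order dies-per-wafer approximation on a 300mm wafer. The 300/500mm² areas are the post's round numbers, not real GK10x die sizes, and the estimate ignores yield entirely:

    ```python
    import math

    # First-order dies-per-wafer estimate: wafer area over die area,
    # minus an edge-loss term for partial dies around the rim.
    # The 300/500 mm^2 areas are this post's round numbers, not real
    # GK10x die sizes.

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        r = wafer_diameter_mm / 2
        return int(math.pi * r * r / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    print(dies_per_wafer(300), dies_per_wafer(500))  # 197 111
    ```

    Roughly 1.8x as many candidate dies per wafer for the smaller chip, before yield differences (which also favor the smaller die) are even counted.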
    Last edited by Rollo; 03-30-2012 at 04:18 AM.
    Intel 990x/Corsair H80 /Asus Rampage III
    Coolermaster HAF932 case
    Patriot 3 X 2GB
    EVGA GTX Titan SC
    Dell 3008

  13. #2563
    Xtreme Enthusiast
    Join Date
    Dec 2010
    Posts
    594
    GK110 tapeout was March 2012, so I've heard. That would make it impossible to sell any chips before August or so anyway because it takes time to get them ready and work out any kinks. And then they will first go to all Tesla products and sell for a ton of money. Nvidia has contracts it needs to fulfill. I'm sure it will come to the desktop at the end of 2012 or very early 2013, but right now it is not ready.
    Last edited by boxleitnerb; 03-30-2012 at 04:56 AM.

  14. #2564
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    kinks??? they better kink up that mem bus beatch to nothing less than 2ghz

  15. #2565
    Xtreme Enthusiast
    Join Date
    Dec 2010
    Posts
    594
    Memory bus width is measured in bits
    I'm pretty sure GK110 will have a 512bit bus like GT200(b).

  16. #2566
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    People seem to forget that high end GPGPU processing really isn't necessary on low margin gaming cards.

    NVIDIA is a savvy company which makes a killing off of their highly capable Tesla and Quadro cards. If I were them I would continue down the GPGPU "lite" path for gaming-oriented products and only release the larger-die, more expensive to produce but GPGPU heavy cores into the professional ranges for the time being.

    Meanwhile, development can continue towards refining that same high end part in case AMD somehow (but not likely) manages to release a card within the next 12 months that effectively beats the GTX 680's successor (GK114?).

  17. #2567
    Xtreme Member
    Join Date
    Jul 2007
    Location
    Inside a floppy drive
    Posts
    366
    Quote Originally Posted by SKYMTL View Post
    People seem to forget that high end GPGPU processing really isn't necessary on low margin gaming cards.

    NVIDIA is a savvy company which makes a killing off of their highly capable Tesla and Quadro cards. If I were them I would continue down the GPGPU "lite" path for gaming-oriented products and only release the larger-die, more expensive to produce but GPGPU heavy cores into the professional ranges for the time being.

    Meanwhile, development can continue towards refining that same high end part in case AMD somehow (but not likely) manages to release a card within the next 12 months that effectively beats the GTX 680's successor (GK114?).
    You mean GPU? Or have info about 7990 being definitely canceled?

  18. #2568
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by Dami3n View Post
    You mean GPU? Or have info about 7990 being definitely canceled?
    Sorry, IMO dual GPU cards will never properly compete with single core products. Too much hinges on (usually) buggy drivers, game profiles, etc to really make them viable alternatives for every situation.

    In addition, those ultra high end cards are never widely released anyways. Take the HD 6990 and GTX 590 for example: a few were released and about a month or two after launch, stock pretty much dried up.

  19. #2569
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Rollo View Post
    Not necessarily.

    There are scenarios where it would make perfect sense.

    Selling 300mm² chips for $500 is more profitable than selling 500mm² chips for $500: more chips per wafer.

    Yields of 500mm² chips at the current state of the 28nm process might make the chips impossible to sell at a price where projected demand will make them profitable to produce.

    If the imagined 500mm² chip's successor is on schedule, it is a good gamble to sell only the more profitable 300mm² chips. If AMD's next gen is beaten by the imagined held-back 500mm² chip, that can be released and the successor held while its successor is worked on. Or if AMD's product beats the imagined 500mm² chip, the successor can be released. Either way, NVIDIA wins and got to sell the 300mm² chip at high-end prices.

    Along the same lines, if the imagined 500mm² chip wins next gen, they got to sell two chips for $500 or more instead of one, and push back the R&D cycle.

    Could be the market doesn't want 500mm² chips any longer. Since ATI introduced their "smaller, less power" business model, their fans have been all over teh intarwebz yelling about how "smaller, less power" is the "way to go". Maybe that, coupled with good Cypress/Barts sales and market research, has convinced NVIDIA to change focus.

    Last but not least, if the product line is all based on smaller chips that beat your competitor, more profit.

    I'd say I can come up with a lot of reasons a "done" product should not be released.

    And of course it could be that all along GK104 was designed to be this gen's "high-end chip"; we'll never know. Fortunately it gives us a level of performance and features that are worth buying.
    As was already said: you do not hold back an ASIC, effectively pushing back future products, because "it is too good." That is how you get humiliated.
    The same thing with R&D. You simply do not do that.

    As far as your whole "smaller chip" argument goes, there is definitely something larger coming, and it brings me back to the comment I made to you recently: Nvidia isn't going to turn its back on GPGPU after using so many resources to secure the market.

    Quote Originally Posted by boxleitnerb View Post
    GK110 tapeout was March 2012, so I've heard. That would make it impossible to sell any chips before August or so anyway because it takes time to get them ready and work out any kinks. And then they will first go to all Tesla products and sell for a ton of money. Nvidia has contracts it needs to fulfill. I'm sure it will come to the desktop at the end of 2012 or very early 2013, but right now it is not ready.
    They already filled some of them with Fermi...

    Quote Originally Posted by SKYMTL View Post
    People seem to forget that high end GPGPU processing really isn't necessary on low margin gaming cards.

    NVIDIA is a savvy company which makes a killing off of their highly capable Tesla and Quadro cards. If I were them I would continue down the GPGPU "lite" path for gaming-oriented products and only release the larger-die, more expensive to produce but GPGPU heavy cores into the professional ranges for the time being.

    Meanwhile, development can continue towards refining that same high end part in case AMD somehow (but not likely) manages to release a card within the next 12 months that effectively beats the GTX 680's successor (GK114?).
    No, it's not necessary in gaming cards, but making two different architectures can be tough. Rather than just scaling down and seeing how your design/architecture works with the process, you get to play with a bunch of unknowns.

    It certainly is nice to have a gaming-oriented GPU out there because it is so efficient, but on the flip side it isn't so efficient in terms of time to market for an entire lineup or ease of design/manufacturing.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  20. #2570
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by LordEC911 View Post
    No, it's not necessary in gaming cards, but making two different architectures can be tough. Rather than just scaling down and seeing how your design/architecture works with the process, you get to play with a bunch of unknowns.

    It certainly is nice to have a gaming-oriented GPU out there because it is so efficient, but on the flip side it isn't so efficient in terms of time to market for an entire lineup or ease of design/manufacturing.
    It isn't a whole new architecture though.

    Fermi is a great example.

    GF100 was very much geared towards compute with a large amount of cache and a relatively efficient compute call order.

    GF104 was the scaled down version and while it still retained a good amount of compute abilities, some aspects were curtailed and replaced with additional in-game rendering efficiencies.

    So while the architecture didn't "change" per se, NVIDIA was able to implement enough differentiations from one core to another so that GF104's primary focus was gaming (and to a lesser extent OpenGL performance for the Quadro line) rather than high range GPGPU capabilities.

  21. #2571
    Xtreme Mentor
    Join Date
    May 2008
    Posts
    2,554
    Quote Originally Posted by SKYMTL View Post
    People seem to forget that high end GPGPU processing really isn't necessary on low margin gaming cards.

    NVIDIA is a savvy company which makes a killing off of their highly capable Tesla and Quadro cards. If I were them I would continue down the GPGPU "lite" path for gaming-oriented products and only release the larger-die, more expensive to produce but GPGPU heavy cores into the professional ranges for the time being.

    Meanwhile, development can continue towards refining that same high end part in case AMD somehow (but not likely) manages to release a card within the next 12 months that effectively beats the GTX 680's successor (GK114?).
    Then they better stop marketing Physx since it causes a massive performance hit on GTX680.

  22. #2572
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by BababooeyHTJ View Post
    Then they better stop marketing Physx since it causes a massive performance hit on GTX680.
    Ummmm... of course. Again, this doesn't have much to do with overall core-per-core GPGPU performance, since PhysX stresses the core in different ways than OpenCL (caching and general-use algorithms are different), but rather two items:

    - A lack of runtime optimizations for Kepler at this time

    - The PhysX GPU runtime dictates that individual SMX blocks be dedicated to processing. In Fermi (GF110), which had 512 cores spread over 16 engines, this meant PhysX would redistribute at least 1/16 of the card's processing power to physics calculations. GK104, meanwhile, has only 8 engines, which means a whole 1/8 (roughly twice GF110's share) of the architecture needs to be used for PhysX. So naturally the performance hit will be absolutely massive in comparison to Fermi....
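    The fractions in that second point, in one line (engine counts per the post; the helper name is mine):

    ```python
    from fractions import Fraction

    # Minimum share of shader hardware tied up when the PhysX runtime
    # dedicates one full engine, per the engine counts in the post above.

    def physx_min_share(num_engines):
        return Fraction(1, num_engines)

    print(physx_min_share(16))  # GF110 (Fermi, 16 engines): 1/16
    print(physx_min_share(8))   # GK104 (GTX 680, 8 SMX): 1/8, double the hit
    ```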

  24. #2574
    Registered User
    Join Date
    Mar 2007
    Posts
    68
    Anyone know a page for the US stores to find the cheapest GTX 680s, or who has them in stock atm?

  25. #2575
    Xtreme Addict
    Join Date
    Jun 2005
    Location
    Madison, WI
    Posts
    1,004
    Quote Originally Posted by Zipelgas View Post
    Anyone know a page for the US stores to find the cheapest GTX 680s, or who has them in stock atm?
    http://www.nowinstock.net/computers/...nvidia/gtx680/

    (Thanks to Slizzo over at EOCF.)
    \Project\ Triple Surround Fury
    Case:
    Mountain Mods Ascension (modded)
    CPU: i7 920 @ 4GHz + EK Supreme HF (plate #1)
    GPU: GTX 670 3-Way SLI + XSPC Razor GTX670 water blocks
    Mobo: ASUS Rampage III Extreme + EK FB R3E water block
    RAM: 3x 2GB Mushkin Enhanced Ridgeback DDR3 @ 6-8-6-24 1T
    SSD: Crucial M4 256GB, 0309 firmware
    PSU: 2x Corsair HX1000s on separate circuits
    LCD: 3x ASUS VW266H 26" Nvidia Surround @ 6030 x 1200
    OS: Windows 7 64-bit Home Premium
    Games: AoE II: HD, BF4, MKKE, MW2 via FourDeltaOne (Domination all day!)

