Quote Originally Posted by Rollo View Post
Not necessarily.

There are scenarios where it would make perfect sense.

Selling 300 mm² chips for $500 is more profitable than selling 500 mm² chips for $500: you get more dies per wafer.
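As a rough sketch of the per-wafer arithmetic (the die areas and the standard 300 mm wafer diameter below are illustrative assumptions, not figures from the post):

Code:
import math

WAFER_DIAMETER_MM = 300.0  # standard 300 mm wafer (diameter, not die area)

def gross_dies_per_wafer(die_area_mm2):
    # First-order estimate: wafer area / die area, minus a common
    # correction for partial dies lost around the wafer edge.
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

print(gross_dies_per_wafer(300.0))  # ~197 die candidates (GK104-class area)
print(gross_dies_per_wafer(500.0))  # ~111 die candidates (hypothetical big die)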

Yields of 500 mm² chips at the current state of the 28nm process might make them impossible to sell at a price where projected demand would make them profitable to produce.
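A simple Poisson defect model makes the yield point concrete; the defect density here is an assumed figure for an immature 28nm process, purely for illustration:

Code:
import math

def poisson_yield(die_area_mm2, defects_per_mm2):
    # Fraction of dies that catch zero defects under a Poisson defect model.
    return math.exp(-die_area_mm2 * defects_per_mm2)

d0 = 0.5 / 100.0  # assumed 0.5 defects/cm^2, converted to defects/mm^2
print("300 mm^2 die yield: %.0f%%" % (100 * poisson_yield(300.0, d0)))  # ~22%
print("500 mm^2 die yield: %.0f%%" % (100 * poisson_yield(500.0, d0)))  # ~8%

Combined with the dies-per-wafer sketch above, that works out to roughly 197 × 0.22 ≈ 44 sellable 300 mm² dies per wafer versus 111 × 0.08 ≈ 9 sellable 500 mm² dies, under these assumed numbers. That is exactly the kind of gap that can make a big chip unsellable at a workable price.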

If the imagined 500 mm² chip's successor is on schedule, it is a good gamble to sell only the more profitable 300 mm² chips. If AMD's next gen would be beaten by the held-back 500 mm² chip, that chip can be released and its successor held while the chip after that is worked on. Or, if AMD's product beats the imagined 500 mm² chip, the successor can be released instead. Either way, NVIDIA wins and gets to sell the 300 mm² chip at high-end prices.

Along the same lines, if the imagined 500 mm² chip wins next gen, they get to sell two chips for $500 or more instead of one, and can push back the R&D cycle.

Could be the market doesn't want 500 mm² chips any longer. Since ATi introduced their "smaller, less power" business model, their fans have been all over teh intarebz yelling about how "smaller, less power" is the "way to go". Maybe that, coupled with good Cypress/Barts sales and market research, has convinced NVIDIA to change focus.

Last but not least, if the product line is all based on smaller chips that beat your competitor, that means more profit.

I'd say I can come up with a lot of reasons a "done" product should not be released.

And of course it could be that the GK104 was designed all along to be this gen's "high end chip"; we'll never know. Fortunately it delivers a level of performance and features worth buying.
As was already said: you do not hold back an ASIC, effectively pushing back future products, because "it is too good." That is how you get humiliated.
The same thing goes for R&D. You simply do not do that.

As for your whole "smaller chip" argument, there is definitely something larger coming, which brings me back to the comment I made to you recently: NVIDIA isn't going to turn its back on GPGPU after spending so many resources to secure that market.

Quote Originally Posted by boxleitnerb View Post
GK110's tapeout was March 2012, so I've heard. That would make it impossible to sell any chips before August or so anyway, because it takes time to get them ready and work out the kinks. And then they will all go to Tesla products first and sell for a ton of money; NVIDIA has contracts it needs to fulfill. I'm sure it will come to the desktop at the end of 2012 or very early 2013, but right now it is not ready.
They already fulfilled some of those contracts with Fermi...

Quote Originally Posted by SKYMTL View Post
People seem to forget that high-end GPGPU processing really isn't necessary on low-margin gaming cards.

NVIDIA is a savvy company that makes a killing off its highly capable Tesla and Quadro cards. If I were them, I would continue down the GPGPU "lite" path for gaming-oriented products and, for the time being, release the larger-die, more-expensive-to-produce but GPGPU-heavy cores only into the professional ranges.

Meanwhile, development can continue on refining that same high-end part in case AMD somehow (though not likely) manages to release a card within the next 12 months that effectively beats the GTX 680's successor (GK114?).
No, it's not necessary in gaming cards, but maintaining two different architectures can be tough. Rather than just scaling one design down and seeing how your architecture works with the process, you get to play with a bunch of unknowns.

It certainly is nice to have a gaming-oriented GPU out there because it is so efficient, but on the flip side that approach isn't so efficient in terms of time to market for an entire lineup, or ease of design and manufacturing.