Quote Originally Posted by Sr7
No offense, but I find it funny that someone on a forum with an excel spreadsheet for approximating yields thinks they know about yields as though NVIDIA didn't know exactly what they would be dealing with before making their chips. Sorry, but when millions of dollars are at stake, you know these things ahead of time, and you calculate around that. Sure it can fluctuate a bit below what you might expect, but not much.

People making a thread like this seem to think that these guys make wafers and just pray they'll get good yields. When people say good or bad yields, this isn't a major swing. It's a matter of a percent or two. So it's not the difference between 80 or 20 dies like some seem to believe.
It may surprise you, but the laws of physics tend to apply all across this world, and neither Santa Clara nor Taiwan is exempt. Just as 1+1=2 holds everywhere on this planet, so do the formulas for yield calculation based on wafer size, die size and defect density.
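For reference, here is a minimal sketch of the kind of first-order math involved: the usual dies-per-wafer approximation plus a Poisson yield model, Y = exp(-D*A). The die size and defect density plugged in below are illustrative guesses, not NVIDIA or TSMC figures.

[CODE]
# Minimal sketch of first-order yield math; the numbers are illustrative
# assumptions, not NVIDIA/TSMC figures.
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Common approximation: wafer area / die area, minus an edge-loss term."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-D * A), with A converted to cm^2."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

if __name__ == "__main__":
    die_area = 576.0   # roughly GT200-class die in mm^2 (approximate)
    d0 = 0.4           # assumed defect density in defects/cm^2 -- a guess
    gross = gross_dies_per_wafer(300.0, die_area)
    good = gross * poisson_yield(die_area, d0)
    print(f"gross dies: {gross}, estimated fully good dies: {good:.0f}")
[/CODE]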

We're not living in the stone age, and NVIDIA didn't drop in from some Star Trek timeframe.

The analysis is rudimentary, but it gives a ballpark estimate of where their yields are. What we do not know is the defect density of TSMC's 65nm process, so we compared it against the perceived leaders in process technology, Intel and AMD.
We also don't know how much redundancy GT200 has, but everything suggests most defective parts can be salvaged and sold as low/mid-range parts.
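To show what salvage does to the picture, here is a rough sketch that treats the die as a set of equal, independent clusters and counts a part as sellable if no more than a couple of clusters are bad. The cluster count, die size and defect density are assumptions for illustration, not GT200 specifics.

[CODE]
# Rough salvage model: a die split into independent clusters; a part with a few
# bad clusters can still ship as a cut-down SKU. All numbers are assumptions.
import math
from math import comb

def sellable_yield(die_area_mm2: float, defects_per_cm2: float,
                   clusters: int, max_bad: int) -> float:
    """Each of `clusters` equal regions is good with probability exp(-D*A_region);
    the die counts as sellable if at most `max_bad` regions fail."""
    p_good = math.exp(-defects_per_cm2 * (die_area_mm2 / clusters) / 100.0)
    return sum(comb(clusters, k) * (1 - p_good)**k * p_good**(clusters - k)
               for k in range(max_bad + 1))

print(sellable_yield(576.0, 0.4, clusters=10, max_bad=0))  # perfect dies only
print(sellable_yield(576.0, 0.4, clusters=10, max_bad=2))  # up to 2 bad clusters
[/CODE]

Under those assumptions, allowing two disabled clusters lifts sellable yield from roughly 10% to well over 50%, which is why harvesting matters so much on a die this big.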

NVIDIA took a risk with such a large chip; their architecture could also be at fault, since reviews suggest it is inefficient compared to R700.

Quote Originally Posted by Sr7
No no no you misunderstood. I'm saying that difference between good and bad yields is a very narrow margin. I'm saying it's not like NVIDIA was expecting 80 and got 20.
I don't think anyone said that NVIDIA expected 80% yields. In fact, nobody mentioned what NVIDIA expected. Even Intel doesn't get 80% with Penryn, which is 107mm^2. As for your "narrow margin", that's BS.
There's plenty of empirical evidence suggesting otherwise. You can target 40% and get 25%. That's a huge difference.
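To put a number on it: under the same Poisson model, going from a 40% target to a 25% actual yield on a die of roughly GT200's size takes only a modest rise in defect density. The ~576mm^2 die size is approximate and the scenario is purely illustrative.

[CODE]
# Back-of-the-envelope: what defect-density shift turns a 40% target into 25%
# on a ~576 mm^2 die? (Poisson model, illustrative numbers only.)
import math

die_area_cm2 = 5.76
d_target = -math.log(0.40) / die_area_cm2   # ~0.16 defects/cm^2
d_actual = -math.log(0.25) / die_area_cm2   # ~0.24 defects/cm^2
print(f"target D0: {d_target:.2f}, actual D0: {d_actual:.2f} defects/cm^2")
[/CODE]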

When design choices are made for a chip, most of the time the performance of the process it is meant for isn't known yet. The process invariably delivers less performance than expected, for a very simple reason: complexity skyrockets the smaller you go. Even so, further iterations are expected to bring process performance up to the planned levels.

AMD expected K10 to achieve 2.2-2.8GHz at 95W on 65nm. We all know how that turned out. Are you implying that AMD engineers were idiots and NVIDIA is full of neo-Einsteins who know everything ahead of time? All the simulations in the world can't replace the cruel reality of tape-out. And the real pain starts when you try to mass-manufacture the product.

Intel found out the hard way with Prescott, AMD with K10, NVIDIA with the FX5800 and now again with GT200; it's a never-ending story.

We simply estimated their yields from the available data. The results are poor, at least compared to ATI, but it was a calculated risk on NVIDIA's part. Whether their gambit will pay off remains to be seen. Analysts, however, quickly jumped on this, and for good reason: performance per die size is poor for the GTX280, which could make it a flop.

So, what's your point after all?