To your surprise, I must inform you that the laws of physics tend to apply all across this world, and Santa Clara and Taiwan aren't exempt. Just as 1+1=2 holds everywhere on this planet, so do the formulas for yield calculation based on wafer size, die size and defect density.
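As a rough illustration of what that calculation looks like, here's a minimal sketch using the simple Poisson yield model; the defect density D0 is an assumption for the sake of the example, since TSMC doesn't publish its real figure:

```python
import math

def die_yield_poisson(die_area_mm2, defects_per_cm2):
    """Poisson yield model: Y = exp(-A * D0), with die area A in cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

D0 = 0.4  # assumed defect density in defects/cm^2; TSMC's actual number isn't public

print(die_yield_poisson(576, D0))  # GT200-class die (~576 mm^2): ~10%
print(die_yield_poisson(107, D0))  # Penryn-class die (107 mm^2): ~65%
```

The exact model (Poisson, Murphy, etc.) and the real defect density change the absolute numbers, but not the basic point that yield falls off hard as die area grows.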
We're not in the Stone Age, and NVIDIA hasn't jumped here from the Star Trek era.
The analysis is rudimentary, but it gives a ballpark estimate of where their yields are. What we don't know is the defect density of TSMC's 65nm process, so we compared it against the perceived leaders in process technology, Intel and AMD.
We also don't know how much redundancy GT200 has, but everything suggests most defective parts can be salvaged and sold as low/mid-range.
NVIDIA took a risk with such a large chip; it could be that their architecture is at fault, since from the reviews it looks inefficient compared to R700.
I don't think anyone said that NVIDIA expected 80% yields. In fact, nobody mentioned what NVIDIA expected. Even Intel doesn't get 80% with Penryn, which is 107 mm^2. As for your "narrow margin", that's BS.
There's plenty of empirical evidence that suggests otherwise. You can target 40% and get 25%. That's a huge gap.
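To put numbers on that: assuming the same Poisson model as above and a GT200-class die of roughly 576 mm^2, the defect-density miss needed to turn a 40% target into a 25% result is only about 1.5x, the kind of miss that's entirely plausible on an immature process:

```python
import math

DIE_AREA_CM2 = 5.76  # GT200-class die, ~576 mm^2 (approximate)

def d0_for_yield(y, area_cm2):
    """Invert the Poisson model: D0 = -ln(Y) / A."""
    return -math.log(y) / area_cm2

target_d0 = d0_for_yield(0.40, DIE_AREA_CM2)  # ~0.16 defects/cm^2
actual_d0 = d0_for_yield(0.25, DIE_AREA_CM2)  # ~0.24 defects/cm^2
print(actual_d0 / target_d0)  # ~1.5: a ~50% defect-density miss turns 40% into 25%
```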
When design choices are made for a chip, most of the time the performance of the process it's meant for isn't known. The process invariably delivers less performance than expected, for a very simple reason: complexity skyrockets the smaller you go. Even so, further iterations are expected to bring process performance up to the planned levels.
AMD expected K10 to achieve 2.2-2.8 GHz at 95 W on 65nm. We all know how that turned out. Are you implying that AMD's engineers were idiots and NVIDIA is full of neo-Einsteins who know everything ahead of time? All the simulations in the world can't replace the cruel reality of tape-out. And the real pain starts when you try to mass-manufacture the product.
Intel found out the hard way with Prescott, AMD with K10, NVIDIA with the FX 5800 and now again with GT200; it's a never-ending story.
We simply calculated their yields from the available data. The results are poor, at least compared to ATI, but it was a calculated risk on NVIDIA's part. Whether their gambit will pay off remains to be seen. Analysts, however, quickly jumped on this, and for good reason: performance per die size is poor for the GTX 280, which could make it a flop.
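For a sense of why analysts care so much about performance per die size, here's the same back-of-the-envelope math extended to good dies per 300 mm wafer. The die areas are approximate published figures, the defect density is an assumption, and treating both chips at the same D0 is a simplification since the ATI part is on a different node:

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common dies-per-wafer approximation, accounting for edge loss."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def good_dies_per_wafer(die_area_mm2, defects_per_cm2):
    yield_ = math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)  # Poisson model
    return gross_dies_per_wafer(die_area_mm2) * yield_

D0 = 0.4  # assumed defect density, defects/cm^2
print(good_dies_per_wafer(576, D0))  # GT200-class (~576 mm^2): roughly 9 good dies
print(good_dies_per_wafer(256, D0))  # RV770-class (~256 mm^2): roughly 84 good dies
```

Whatever the real defect density is, the large die gets hit twice: fewer candidate dies fit on the wafer, and a smaller fraction of them come out clean.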
So, what's your point after all?




