afaik the number of "holes" (defects) on a wafer is roughly constant whether you cut it into many small chips or a few big ones... if a small chip that fits 400 times per wafer has a yield of 90%, that probably means around 40+ defects per wafer (even if a chip gets hit by a defect it may still work with some redundant logic disabled, or as a cut-down version etc)
if you go for a bigger chip that only fits around 100 times on the wafer, like gt300, those same 40 defects per wafer mean a yield of 60%+, the "+" because the bigger the chip, the higher the chance that two defects land on the same chip.
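fwiw the standard back-of-envelope model for this is poisson-distributed defects (yield = exp(-defects per die)); a quick sketch using the example numbers from above, nothing official:

```python
import math

def die_yield(defects_per_wafer, dies_per_wafer):
    """Expected fraction of defect-free dies, assuming defects land
    uniformly at random on the wafer (simple Poisson yield model)."""
    defects_per_die = defects_per_wafer / dies_per_wafer
    return math.exp(-defects_per_die)

# small die: 400 per wafer, 40 defects -> ~90% yield
print(round(die_yield(40, 400), 2))  # 0.9
# big die: 100 per wafer, same 40 defects -> ~67% yield
print(round(die_yield(40, 100), 2))  # 0.67
```

note the big die comes out at ~67% rather than a flat 60%, which is exactly the "defects clustering on the same chip" effect.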
rv870 is 333mm^2 and is rumored to have started out with yields of ~60%, it seems.
im just guessing here, but at 333mm^2 they should get around 175 chips per wafer; at 60% yield that's 105 functional chips, which implies 70+ defects per wafer. gt300 should be around 550mm^2, which means around 100 chips per wafer max, and with those same 70+ defects, only 30+ fully functional chips.
another factor is that the bigger your chip, the more wafer space is wasted at the edges, but that's not a huge difference...
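the per-wafer counts above line up with the usual dies-per-wafer approximation (gross area divided by die area, minus an edge-loss term); a sketch assuming a 300mm wafer, which isn't stated explicitly in the post:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough die count: wafer area / die area, minus a standard
    correction for partial dies lost at the wafer edge."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius * radius / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

print(dies_per_wafer(333))  # 175 (rv870-sized die)
print(dies_per_wafer(550))  # 100 (gt300-sized die)
```

the edge-loss term is also why the bigger die loses proportionally more: ~28 candidate sites at 550mm^2 vs ~37 at 333mm^2, but spread over far fewer total dies.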
wafer costs are around US$3000-5000, so 30 good chips per wafer = $100-166 per gpu in pure die costs
for rv870 it should be around 100 good chips per wafer, so $30-50 per chip...
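the cost arithmetic is just wafer price over good dies; a sketch with the same example figures (wafer price and good-die counts are rough guesses, not real data):

```python
def cost_per_good_die(wafer_cost_usd, good_dies_per_wafer):
    """Pure die cost: wafer price spread over the functional dies only."""
    return wafer_cost_usd / good_dies_per_wafer

# gt300-ish: ~30 good dies per wafer -> roughly $100-167 per die
print(cost_per_good_die(3000, 30), cost_per_good_die(5000, 30))
# rv870-ish: ~100 good dies per wafer -> roughly $30-50 per die
print(cost_per_good_die(3000, 100), cost_per_good_die(5000, 100))
```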
these numbers are just examples, they aren't accurate...
but you can see how, for a roughly 50% transistor increase of gt300 over rv870, the die costs more or less triple...