Quote Originally Posted by ***Deimos***
Can somebody please explain this to me? I'm DYING here.
90nm G80. 128 shaders
65nm G92. 128 shaders
55nm G92b. 128 shaders
65nm GT200. 240 shaders - 576mm²!
40nm GT240. 96 shaders
nVidia has barely changed the shader count at all... AMD went from 64 to 160 to 320, and each has 5 execution units. AMD doesn't even need a die shrink to get 2.5x the shaders (RV670 -> RV770) and still keep the die small.

Does this mean AMD's shader architecture is more compact and efficient,
or
does nVidia's design have technical limitations and overhead?
I don't know that it's necessarily Nvidia's design that has limitations; it's more that ATI's was designed from the start to be super-scalable.

They said that R600 would be the foundation for 3 generations of video cards, and while a lot of people called R600 a failed architecture, R700 definitely vindicated the design. R600's failures were more likely due to the fab process and leakage, which killed any chance of higher-clocked cores or the rumored original spec of 480 SPs (rather than the 320 it shipped with), than to the architecture itself.

That being said, it is true that G92 and GT200 are both heavily based on G80 (G92 is basically just a shrink), and Nvidia did hit a wall earlier in scaling its design.
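
For what it's worth, here's a rough back-of-envelope comparison of how the two vendors count "shaders". This is just a sketch: the die sizes are approximate figures from public reviews, and the point is only to show that AMD counts every lane of a 5-wide VLIW cluster while Nvidia counts standalone scalar units, so the raw numbers aren't directly comparable.

```python
# Back-of-envelope "shader" comparison. Die sizes are approximate public
# figures, listed purely to illustrate the counting difference:
# AMD counts every lane of its 5-wide VLIW cluster as a stream processor,
# while each Nvidia "shader" is a standalone scalar unit.

chips = {
    # name: (process_nm, marketed_ALU_count, lanes_per_cluster, approx_die_mm2)
    "G80   (NV)":  (90, 128, 1, 484),
    "G92   (NV)":  (65, 128, 1, 324),
    "GT200 (NV)":  (65, 240, 1, 576),
    "RV670 (AMD)": (55, 320, 5, 192),  # 64 VLIW5 clusters x 5 lanes
    "RV770 (AMD)": (55, 800, 5, 256),  # 160 VLIW5 clusters x 5 lanes
}

print(f"{'chip':<12} {'nm':>3} {'ALUs':>5} {'clusters':>8} {'mm^2':>5} {'ALU/mm^2':>9}")
for name, (nm, alus, lanes, die) in chips.items():
    clusters = alus // lanes  # independent scheduling units (what the quote counts for AMD)
    print(f"{name:<12} {nm:>3} {alus:>5} {clusters:>8} {die:>5} {alus / die:>9.2f}")
```

Raw ALU-per-mm² counts favor AMD heavily, but each VLIW lane is simpler than a full scalar SP and relies on the compiler packing 5 operations per clock, which is part of why the gap in "shader" counts doesn't translate 1:1 into real-world performance.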