Originally Posted by ***Deimos***
For your consideration:
325MHz GF4
475MHz FX
400MHz 6800
500MHz 6600GT
450MHz G70 (550MHz for the 512MB version)
560MHz 7600GT
650MHz G71
Power consumption crept up slowly (all standard, non-OC models; the rough per-generation increases are worked out in the sketch after the list):
59W FX 5900U
72W 6800U
80W 7800GTX
84W 7900GTX
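To put numbers on "slowly", here is a minimal sketch (just back-of-envelope arithmetic on the figures quoted above, nothing more):

```python
# Board power figures quoted above (standard, non-OC models), in watts.
boards = [("FX 5900U", 59), ("6800U", 72), ("7800GTX", 80), ("7900GTX", 84)]

# Percentage increase from each generation to the next.
for (prev_name, prev_w), (name, w) in zip(boards, boards[1:]):
    print(f"{prev_name} -> {name}: +{(w - prev_w) / prev_w:.0%}")

# Output: +22%, +11%, +5% -- each step smaller than the last.
```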
First and foremost, no manufacturer expecting a profit is going to push technology to the breaking point. It has to be producible in mass quantities, and with enough clock-speed margin that not just those in Finland but also those in Singapore can enjoy the hardware.
I will wager my whole Star Wars collection that not only will we not see 1.5GHz or 750M transistors in 2006, but that it won't happen in 2007 either. Clock speed increases slowly and incrementally; there has never been such a drastic jump, even when a new, smaller lithography node came along. Since G80 will be 80/90nm, it doesn't have the advantage of a smaller node (i.e. 65nm), making any such outlandish clock-speed claims pure fiction. Does it perhaps "scale" to that speed... well, does a 386 scale to 800MHz? There is no point in such impractical speculation.
People often cite 200, 250, even 300W for next-generation video cards. Do you folks realize how much heat that is? At only about 90W, the 7900GTX already needs a dense-fin heatpipe heatsink. To dissipate 300W you would need a heatsink roughly three times as large, probably taking up 3-4 slots. Much more likely, the quoted number is the maximum power envelope for a whole SLI/Crossfire system. And even then, a 125-150W single-slot solution is quite risky.
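As a rough illustration of why a 300W single card is so hard to cool (a sketch with an assumed temperature budget, not a measurement of any real cooler): for a fixed allowed temperature rise, the thermal resistance the cooler must achieve scales inversely with power.

```python
# Back-of-envelope cooler sizing. The 60 C allowed rise (e.g. a ~85 C GPU
# limit in a ~25 C case) is an assumption for illustration only.
delta_t = 60.0  # allowed temperature rise, in C

for watts in (90, 150, 300):
    r_required = delta_t / watts  # C/W the heatsink + fan must achieve
    print(f"{watts:>3}W -> cooler must reach <= {r_required:.2f} C/W")

# ~0.67 C/W at 90W vs ~0.20 C/W at 300W: roughly 3x the cooling
# capability, hence a heatsink on the order of 3x the size/airflow.
```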
Transistor counts: 25M GF2, 57M GF3, 63M GF4, 125M FX, 222M 6800, 300M G70. When a new DirectX generation is released, there is a correspondingly big jump in transistor count; when DirectX refreshes are released (GF4, G70), the count only increases moderately.
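For example, the growth ratios implied by that list (my own quick arithmetic on the rounded figures above) match the pattern:

```python
# Transistor counts quoted above, in millions.
chips = [("GF2", 25), ("GF3", 57), ("GF4", 63),
         ("FX", 125), ("6800", 222), ("G70", 300)]

# Ratio of each chip's count to its predecessor's.
for (prev_name, prev_m), (name, m) in zip(chips, chips[1:]):
    print(f"{prev_name} -> {name}: x{m / prev_m:.1f}")

# New DirectX generations roughly double the count (GF3 x2.3, FX x2.0,
# 6800 x1.8); refreshes grow it far less (GF4 x1.1, G70 x1.4).
```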
Unified shaders are very easy to make. After all, in a sense that's what CPUs did before GF3/GF4 came on the market. GPUs calculate orders of magnitude faster than a CPU because they exploit parallelism and specialization: a custom circuit dedicated to one specific function is faster than a general-purpose one. The unified-shader idea goes against all of this. I believe the point is that a unified shader, being totally general purpose, cannot be made as fast as separate, specialized units.
In conclusion, G80, like everything in life, will be limited by physics and compromises. Given 80/90nm manufacturing, it will probably land in the 500MHz-800MHz range. If the number of processors (shaders) has indeed been increased so dramatically, then to stay within reasonable power limits the voltage has probably been lowered, along with the clock speed. And while a 384-bit memory interface is unlikely but feasible, yields and economics make a two-chip solution impractical.