Quote Originally Posted by ***Deimos*** View Post
Yeah, but even though the shader clock domain is higher, clocks haven't really improved much across G80, G92, G92b, G200, G200b...

nVidia's crazy fantastic "PROGRESS" in clockspeeds

90nm - avg = 562
8800 ULTRA 612/1500
8800 GTX 575/1350
8800 GTS 500/1200

65/55nm - avg = 662
8800GT/9800GT 600/1500
8800GTS 650/1625
9800GTX+/GTS250 738/1836

65/55nm - avg = 615
GTX280 602/1296
GTX260 55nm 576/1350
GTX275 633/1404
GTX285 648/1476

40nm - avg = 617 (i.e., die shrink = slower?)
GT210 675/1450
GT220 625/1360
GT240 550/1360
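For anyone checking my math, the per-node averages above are just the mean of the listed core clocks (give or take rounding). Quick sketch:

```python
# Mean core clock (MHz) per process node, using the core clocks listed above.
clocks = {
    "90nm":          [612, 575, 500],       # 8800 Ultra / GTX / GTS
    "65/55nm G92":   [600, 650, 738],       # 8800GT, 8800GTS, 9800GTX+
    "65/55nm GT200": [602, 576, 633, 648],  # GTX280 / 260 / 275 / 285
    "40nm":          [675, 625, 550],       # GT210 / 220 / 240
}
for node, mhz in clocks.items():
    print(f"{node}: avg = {sum(mhz) / len(mhz):.1f} MHz")
```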

Meanwhile, nVidia has yet to beat 740 MHz, which ATI/AMD did with the 2900XT, 2600XT, 3870, 3870X2, 4890, 5870, etc.
ATI/AMD clocks aren't improving much either. 850 MHz is a tiny improvement over 750 MHz, but at least it's not slower.
More importantly, note how fast clocks decline on nVidia's 40nm GPUs as complexity goes up: over a 100 MHz drop for the most complex 40nm part so far, and it's only a cut-down G92. So if a tweaked G92 at 40nm can only clock to 550 MHz, will Fermi, which is 5+ times more complex, really reach 650+ MHz? (Clock derived from nVidia's FLOPS numbers mentioned at the supercomputing event.)
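For the curious, the clock-from-FLOPS back-calculation is just peak GFLOPS = shaders x flops-per-clock x shader clock, rearranged. A sketch: the 512 shaders and 2 SP flops/clock (FMA) match Fermi's announced specs, but the GFLOPS figure plugged in below is purely illustrative, not nVidia's actual quoted number:

```python
# Back out the shader clock implied by a quoted peak-FLOPS figure.
# peak_gflops = shaders * flops_per_clock * clock_ghz
def implied_shader_clock_mhz(peak_gflops, shaders, flops_per_clock):
    return peak_gflops / (shaders * flops_per_clock) * 1000  # GHz -> MHz

# Fermi: 512 shaders, 2 single-precision flops/clock (FMA).
# 1331.2 GFLOPS is an ASSUMED example figure, chosen to give a round answer.
print(round(implied_shader_clock_mhz(1331.2, 512, 2), 1))  # -> 1300.0 (MHz)
```

From there you map shader clock back to core clock: on the GT200 parts above the shader domain runs a bit over 2x core (e.g. 602/1296), which is how a shader-clock figure turns into the ~650 MHz core claim.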