I'd say 649$
Sub 1 teraflop
Meh..
If the Ultra can get 12 fps in Crysis at 1920x1200, Very High, 8x AA & AF, on a Q9650 @ 4.0GHz,
what will the GTX 280 get?
23fps
You can't scale flops exactly. And 933 GFLOPS IS the number - it's calculated as 3 FLOPs per clock (2 from the MADD + 1 from the MUL) x the 1296MHz shader clock x 240 SPs ≈ 933 GFLOPS.
But on the other hand, apparently the G80's MUL wasn't working 75% of the time, so it was closer to 2 x 1512 x 128 ≈ 387 GFLOPS rather than the theoretical 581 (there's a quick sketch of this arithmetic after this post).
Also, that's why you can't compare the numbers directly across different architectures, even within derivatives (such as G80 Ultra vs. G92 GTX, where the G92 GTX clearly has some memory bottlenecks: it only just keeps up with G80 Ultra performance at higher resolutions, and even falls behind when AA and other settings are turned on at high res).
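For anyone who wants to sanity-check those figures, here's a minimal sketch of that FLOPS arithmetic, assuming the rumored clocks quoted in this thread (1296MHz shader clock for GT200, 1512MHz for the G80 Ultra):

```python
# Theoretical shader throughput: FLOPs per clock per SP * shader clock (MHz) * SP count
def shader_gflops(flops_per_clock, shader_clock_mhz, sp_count):
    return flops_per_clock * shader_clock_mhz * sp_count / 1000.0

# GT200 (rumored): MADD (2 FLOPs) + MUL (1 FLOP) = 3 FLOPs/clock per SP
print(shader_gflops(3, 1296, 240))  # ~933 GFLOPS

# G80 Ultra: same 3 FLOPs/clock on paper...
print(shader_gflops(3, 1512, 128))  # ~581 GFLOPS theoretical
# ...but with the MUL mostly unavailable, you effectively get 2 FLOPs/clock
print(shader_gflops(2, 1512, 128))  # ~387 GFLOPS in practice
```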
http://www.fudzilla.com/index.php?op...=7523&Itemid=1
If Fud is right, we're looking at 600MHz for the core, which seems pretty high considering how large the die is; it's definitely going to run very hot. It also means what The Inquirer said can't happen, because 1296MHz for the shaders just isn't possible now (unless we're looking at something like a 2.x shader domain multiplier, lol), and it never could have been the core clock. Plus, I think it's safe to say GT200 won't be spanked by the R700.
Yep, as LordEC911 said, CJ had already stated those numbers before Fud ever got hold of them, and more than a few sources besides The Inq have quoted them.
And that's how the math for counting FLOPS has been done for some time. Lower clocks don't mean anything bad - they're just saving us from excessive heat and power draw, and the fact that these shaders might do the full 3 operations per clock, with 240 of them, means you'll still get tons more shader performance.
Same price the 8800 GTX launched at, IIRC, and pretty much every other previous high-end piece of hardware.
And with 50% more efficient shader units, do they mean a 50% improvement in performance per transistor, or a 50% higher transistor count, i.e. more raw power? If the latter is true, wouldn't the math for the FLOP count have to be adjusted, putting it well beyond 1 TFLOP?
And how much performance will the "missing MUL" component be able to bring?
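To put rough numbers on those two readings, a back-of-the-envelope sketch (using the rumored 933 GFLOPS figure from earlier in the thread as the baseline):

```python
baseline_gflops = 933  # rumored GT200 theoretical peak quoted earlier in the thread

# Reading 1: 50% better performance per transistor -- the theoretical peak stays
# the same; you just get closer to it in real workloads.
per_transistor_reading = baseline_gflops

# Reading 2: 50% more raw shader throughput -- the theoretical peak itself scales up.
raw_power_reading = baseline_gflops * 1.5  # ~1400 GFLOPS, well beyond 1 TFLOP

print(per_transistor_reading, raw_power_reading)
```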
This post could probably tell you something: http://forum.beyond3d.com/showpost.p...postcount=1548
Damn, power consumption just keeps increasing and increasing. :(
80 ROPs :shocked:
BTW, what do you guys mean the MUL wasn't working 75 percent of the time? Do you mean the shaders can do that additional third operation now?
Yep, essentially the shaders can now do all 3 operations rather than the 2 previously.
Also, it's not 80 ROPs, it's 32 ROPs.
Like the G80 architecture, it's 4 ROPs per 64 bits of memory bus, hence a 512-bit bus -> 32 ROPs and a 448-bit bus -> 28 ROPs (quick sketch of that arithmetic below).
And it looks like the ratio is 24 SPs per 8 TMUs as well, which isn't quite the 2:1 of G92 but is better than the 4:1 on G80 (though that was 32 TA / 64 TF).
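Quick sketch of that ROP arithmetic, assuming the G80-style layout of one 4-ROP partition per 64-bit memory controller described above (the bus widths are the rumored GTX 280/260 configs):

```python
# G80-style layout: one ROP partition (4 ROPs) per 64-bit memory controller
def rop_count(bus_width_bits, rops_per_partition=4, partition_bits=64):
    return (bus_width_bits // partition_bits) * rops_per_partition

print(rop_count(512))  # 512-bit bus -> 32 ROPs (rumored GTX 280)
print(rop_count(448))  # 448-bit bus -> 28 ROPs (rumored GTX 260)
```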
What's the story with the GTX 260 shader clock? Some sources say 1240, others say 999.
Isn't it kind of weird that there are virtually no reviews with definitive numbers for a graphics card that is scheduled for a July release?