I play it by common sense and some well-educated guessing.
If the shader count is true, then even without the hot clocks there's just no way this thing is only going to perform at 580 levels. It may only have a 256-bit memory bus, but the RAM speeds are apparently a GHz higher than what we saw on the GTX 580 (although it would need a memory clock of around 6 GHz to actually match the 580's bandwidth)...
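To put numbers on that bandwidth comparison: peak GDDR bandwidth is just bus width (in bytes) times effective data rate. A quick sketch, using the known 580 spec and the rumored figures from this thread (the ~5 GHz effective rate for the new card is my assumption based on "a GHz higher"):

```python
# Back-of-envelope GDDR bandwidth check.
# bandwidth (GB/s) = (bus width in bits / 8) * effective clock in GHz

def bandwidth_gbs(bus_width_bits: int, effective_clock_ghz: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return bus_width_bits / 8 * effective_clock_ghz

gtx580  = bandwidth_gbs(384, 4.0)  # GTX 580: 384-bit @ ~4 GHz effective
rumored = bandwidth_gbs(256, 5.0)  # rumored card: 256-bit @ ~5 GHz (assumed)
parity  = bandwidth_gbs(256, 6.0)  # what a 256-bit bus needs to match the 580

print(f"GTX 580:        {gtx580:.0f} GB/s")   # 192 GB/s
print(f"Rumored card:   {rumored:.0f} GB/s")  # 160 GB/s
print(f"256-bit @ 6GHz: {parity:.0f} GB/s")   # 192 GB/s
```

So yes, at 6 GHz effective a 256-bit bus lands right on the 580's ~192 GB/s, which is where the "it would need 6 GHz" remark comes from; at 5 GHz it's about a 17% bandwidth deficit.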
Now, I know what people are thinking... "with that big of a memory bandwidth handicap there's no way it can beat the 7970, and it will only be around 580 performance"... I'll happily remind people that memory bandwidth is only part of the story. The 8800GTX had a 384-bit memory bus, while the 8800GTS 640MB had a 320-bit bus; we all remember that. Meanwhile, the refreshes of the 8800GTS and 8800GTX were both 256-bit with slightly higher memory clocks (NOT enough to close that gap in bandwidth). So how were they able to come out ahead in most cases (the 8800GTX only beat the 9800GTX when the 9800GTX hit its memory barrier) with lower memory bandwidth? Efficiency. One thing that was heavily tweaked in the G92 was how efficiently it used its bandwidth. And mind you, the 9800GTX only had a 60-ish MHz core clock increase while losing more than a few parts, and it STILL beat the 8800GTX for the most part.
Fast forward to now and you're talking about a very different situation: 3x the shaders, 33% more ROPs (and who knows what else), against a modest memory bandwidth reduction. You'd have to be crazy to expect this card to only tie the 580. I'd say ~50% faster than the 580 sounds about right. That's not a crazy-talk assumption; that's going by NVIDIA's own history and what we know so far.
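A crude sanity check on that ~50% figure: compare raw shader throughput (shader count x shader clock). The 580's 512 shaders run at a 1544 MHz hot clock; the ~1 GHz base clock I'm plugging in for the rumored card is purely my assumption, since this thread only rumors the shader count, not the clock. This also ignores efficiency, ROPs, and the bandwidth deficit entirely:

```python
# Rough relative shader throughput: shaders * clock (MHz), in arbitrary units.
# GTX 580: 512 shaders @ 1544 MHz hot clock (known spec).
# Rumored card: 1536 shaders @ ~1000 MHz, no hot clock (ASSUMED clock).
gtx580_throughput  = 512 * 1544
rumored_throughput = 1536 * 1000

ratio = rumored_throughput / gtx580_throughput
print(f"Raw shader throughput ratio: {ratio:.2f}x")  # ~1.94x
```

Raw math says nearly 2x the shader grunt, so even after the bandwidth handicap and real-world scaling losses eat into that, ~50% faster is a conservative reading rather than a wild one.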
Anyone expecting 100% across the board, though? THAT is a pipe dream if I've ever heard one.



