Quote Originally Posted by SKYMTL View Post
In an upcoming article, yes. That will be coupled with some additional cooling tests since right now there is no way to accurately test the heat output of the 40nm core against any other card due to the oddball offset of the heatsink mount. After some preliminary tests with some modded heatsinks, I am certain the results will shock many people who are defending the move to 40nm.
The only party that should be defending 40nm at this point is ATI, because they can fit more dies onto each wafer while they test and tweak the new process.
No "new" process has ever been immediately beneficial for power consumption and overclocking headroom, as far as I know.
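To put a number on the "more dies per wafer" argument, here is a rough sketch using the standard dies-per-wafer approximation. The die sizes are hypothetical, and the (40/55)^2 area scaling is the ideal case; real shrinks do somewhat worse.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough die count per wafer (ignores defects and yield).

    Common approximation:
      dies ~ pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
    where the second term discounts partial dies at the wafer edge.
    """
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Hypothetical 256 mm^2 die at 55nm vs an ideal shrink to 40nm,
# which scales area by (40/55)^2, roughly 0.53.
print(dies_per_wafer(256))                 # 55nm-class die -> 234
print(dies_per_wafer(256 * (40/55)**2))    # ideal 40nm shrink -> 464
```

Roughly double the candidate dies per wafer, which is exactly why a foundry customer wants early volume on a new node even before it clocks well.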

Quote Originally Posted by SKYMTL View Post
Great for them but the 8800 GT / 9800 GT has been their target for the LAST TWO YEARS. Even the HD 4830 didn't win convincingly against the 9800 GT.
I think you got things backwards.
AMD doesn't target GPUs, it targets price points. Which chip is in which graphics card, and how old the architecture is, are completely irrelevant to the end user. What matters is what performance and features you get for how much money, period.
And the HD4830 is quite successful at its price point. Where I live, the HD4830 is priced at the level of a 9600GT.


Quote Originally Posted by hurleybird View Post
That's exactly what the dude is saying. GDDR5 will be slower than GDDR3 at the same *effective* clockspeed. (...)
He did say bandwidth, not clockspeed. If he "meant" clockspeed, all is well.
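The distinction matters because peak bandwidth is just effective transfer rate times bus width; at the same effective clock and bus width, GDDR5 and GDDR3 have identical bandwidth, and any performance gap would come from latency or timings. A quick sketch with illustrative numbers (the clocks and bus widths below are hypothetical, not taken from any specific card):

```python
def bandwidth_gbs(effective_clock_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s: transfers per second times bytes per transfer."""
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

# Same *effective* clock on different bus widths, e.g. GDDR3 on a
# 256-bit bus vs GDDR5 on a 128-bit bus (illustrative numbers):
print(bandwidth_gbs(1800, 256))  # -> 57.6 GB/s
print(bandwidth_gbs(1800, 128))  # -> 28.8 GB/s
```

So a narrow-bus GDDR5 card can only match a wide-bus GDDR3 card by running a proportionally higher effective clock, which is the whole point of comparing bandwidth rather than clockspeed.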