Quote Originally Posted by FghtinIrshNvrDi View Post
I thought the unified arch fixed that. I'm not much of a microarchitecture specialist, so correct me if I'm wrong.

Ryan
Well, I'm no expert either, but what I understood is that the unified shader architecture took the pixel pipelines and vertex shaders and merged them into one one-size-fits-all processor, while the ROPs and TMUs are still on their own. That seems to be the 2900 XT's biggest weakness: it has fewer of them than the 8800 GTX and GTS.
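
Rough numbers make the point. Here's a back-of-the-envelope fill-rate sketch (fill rate ≈ units × core clock); the unit counts and clocks below are the commonly quoted launch specs, so treat them as approximate:

[code]
# Back-of-the-envelope fill-rate comparison: units * core clock.
# Unit counts and clock speeds are the commonly quoted launch specs
# for these cards; treat them as approximate.
cards = {
    # name: (ROPs, TMUs, core clock in MHz)
    "HD 2900 XT": (16, 16, 742),
    "8800 GTS":   (20, 24, 500),
    "8800 GTX":   (24, 32, 575),
}

for name, (rops, tmus, mhz) in cards.items():
    pixel_fill = rops * mhz / 1000.0  # Gpixels/s
    texel_fill = tmus * mhz / 1000.0  # Gtexels/s
    print(f"{name}: ~{pixel_fill:.1f} Gpixel/s, ~{texel_fill:.1f} Gtexel/s")
[/code]

Run that and the 2900 XT comes out behind both GeForces on texture fill rate and behind the GTX on pixel fill rate, even with its higher core clock.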

Quote Originally Posted by Truckchase! View Post
eDram is nothing more than RAM. You can't get "free" aniso from fast RAM, as it's a filtering operation. Some could argue that you could get "free" AA, but with today's AA ops as complex as they are, that's overly simplistic. Also take into account that while a common target for eDram used to be the frame buffer, today's popular resolutions have grown too high to make it cost effective. Take for example 1680x1050x32bpp w/ NO AA, standard front and back buffer... the front buffer alone is 56,448,000 bits uncompressed (roughly 6.7 MB).

eDram is nothing but a waste of transistors at any decent resolution, and therefore this looks entirely fake. The only thing making it closer to realistic now is that transistor counts in the last gen have already gotten ridiculous, but I still don't think anyone would waste die space on eDram for PC resolutions.

P.S. Both ATI and Nvidia designs have had L1 and L2 cache for quite some time now.
I see your point, and you seem to be right. I didn't realize you would need that much eDRAM to make it work on higher-res screens. Surely eDRAM could be used to speed something up, but I guess it's far too big to fit on a die and still offer good benefits compared to spending those transistors on more stream processors, ROPs and TMUs.
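
To put rough numbers on it, here's a quick sketch of how big just the color buffers get at common desktop resolutions (assuming 32bpp and ignoring Z/stencil, compression and driver overhead, so these are ballpark figures only):

[code]
# Rough uncompressed color-buffer sizes at common resolutions.
# Assumes 32bpp (4 bytes/pixel); ignores Z/stencil, framebuffer
# compression and driver overhead, so these are ballpark figures.
def color_buffers_mb(width, height, bytes_per_pixel=4, aa_samples=1):
    """Front buffer plus a back buffer holding aa_samples per pixel."""
    front = width * height * bytes_per_pixel
    back = front * aa_samples
    return (front + back) / (1024 * 1024)

for width, height in [(1280, 1024), (1680, 1050), (1920, 1200), (2560, 1600)]:
    plain = color_buffers_mb(width, height)
    msaa4 = color_buffers_mb(width, height, aa_samples=4)
    print(f"{width}x{height}: ~{plain:.0f} MB no AA, "
          f"~{msaa4:.0f} MB with a 4x MSAA back buffer")
[/code]

At 1680x1050 that's already about 13 MB without AA and about 34 MB with a 4x MSAA back buffer, well past the 10 MB of eDRAM the Xbox 360 gets away with at TV resolutions.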

On another note, it says "next gen unified shader". Could that mean combining the ROPs and TMUs into the stream processors themselves?