Quote Originally Posted by [XC] hipno650:
Yes, but eDRAM is more efficient than adding stream processors in terms of AA performance. They may also have "too many" stream processors, so that other parts like the memory, ROPs, or TMUs can't keep up and the extras are wasted. Or they could use it to store something the game hits heavily (certain effects or textures, e.g. the ground texture in an RTS game), or use it to deliver free AF up to 8x or maybe 16x, since from what I know that uses less memory than AA. I think eDRAM will be the next big step in architecture. Look at CPUs and think of how slow they would be without cache.
eDRAM is nothing more than RAM. You can't get "free" aniso from fast RAM, because anisotropic filtering is a filtering operation, not a storage problem. Some could argue that you could get "free" AA, but with today's AA ops as complex as they are, that's overly simplistic. Also take into account that while the frame buffer used to be a common target for eDRAM, today's popular resolutions have grown too high to make it cost effective. Take for example 1680x1050 at 32bpp with NO AA and a standard front and back buffer: the front buffer alone is 56,448,000 bits uncompressed, about 7MB, before you even count the back buffer, the depth buffer, or any AA samples.
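
For anyone who wants to check that math, here's a quick back-of-the-envelope sketch in Python (purely illustrative: it uses the resolution and bit depth from the example above and ignores the alignment, tiling, and compression real hardware adds):

[code]
# Raw, uncompressed size of one render buffer, in bytes.
def buffer_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

w, h, bpp = 1680, 1050, 32
front = buffer_bytes(w, h, bpp)   # color front buffer
back  = buffer_bytes(w, h, bpp)   # color back buffer
depth = buffer_bytes(w, h, 32)    # 24-bit Z + 8-bit stencil, packed

print(front)                  # 7056000 bytes, ~6.7MB
print(front + back + depth)   # 21168000 bytes, ~20MB with no AA
[/code]

For reference, the Xbox 360's eDRAM daughter die is 10MB, so even a no-AA set of buffers at this resolution already overflows the largest eDRAM pool shipped in a consumer GPU.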

eDRAM is nothing but a waste of transistors at any decent-sized resolution, so this looks entirely fake. The only caveat is that the idea is closer to realistic now that transistor counts in the last generation have already gotten ridiculous, but I still don't think anyone would spend die area on eDRAM at PC resolutions.
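
To put a number on "PC resolutions", here is the same sketch extended to multisampling (again illustrative: 1920x1200 and 4x MSAA are just assumed figures for the example, and no color/Z compression is applied, which is the worst case on-die storage would have to cover):

[code]
# Multisampled surfaces store one sample per pixel per MSAA level.
def msaa_buffer_bytes(width, height, bits_per_pixel, samples):
    return width * height * (bits_per_pixel // 8) * samples

w, h, samples = 1920, 1200, 4
color = msaa_buffer_bytes(w, h, 32, samples)   # MSAA color surface
depth = msaa_buffer_bytes(w, h, 32, samples)   # MSAA depth/stencil

print((color + depth) // 2**20)   # ~70MB for the AA surfaces alone
[/code]

Tens of megabytes of on-die storage is an enormous transistor budget just to hold render targets, which is exactly why the frame buffer stopped being a realistic eDRAM target once PC resolutions outgrew it.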

P.S. Both ATI and Nvidia designs have had L1 and L2 caches for quite some time now.