This statement more than any other makes it completely clear that you make things up when you don't know the answer.
The GPU is just as complex to design, if not more so. It does *not* take 1-1.5 years for a new architecture. It takes 3-4 years. Developing G80 also cost $1 billion, not $500 million.
Additionally, 70% or so of the CPU die area is cache, not fundamental logic. Contrast that with the GPU's relatively high proportion of logic vs. cache.
If you want to talk copy-paste logic, look at the replicated cores on a multi-core CPU die.
Since it's so cheap, so fast, and so easy to design a GPU (according to you), why hasn't Intel just built a directly competitive traditional GPU ASAP and put ATI and NV out of business? Why are they taking the Larrabee/x86 route and risking a lot on something that doesn't have an ecosystem to support it? Do you think it will be easy to thread applications 16-64 ways?
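That last point is easy to quantify with Amdahl's law: even a modest serial fraction caps what 16-64 threads can buy you. A quick sketch (the 10% serial fraction is an illustrative assumption, not a measured figure for any real workload):

```python
# Amdahl's law: overall speedup is limited by the fraction of work
# that cannot be parallelized.
def amdahl_speedup(serial_fraction, threads):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / threads)

# Assume 10% of the work is inherently serial (illustrative only).
for n in (16, 64):
    print(f"{n} threads -> {amdahl_speedup(0.10, n):.2f}x speedup")
```

With just 10% serial work, 16 threads gives only ~6.4x and 64 threads only ~8.8x, so quadrupling the thread count barely helps. That's why threading ordinary applications that wide is hard.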
They're making an awfully big bet on an unproven technology. Even if it works well, winning over developers to move to a GPU which has no installed base is a whole other struggle on top of that.