You do know that you can't read TDPs like that, right? TDPs come in fixed bins: if one chip actually draws 66W and another draws 63W, the first gets labeled 100W and the other 65W.
As I read it, the GPU-intensive load maxes out the TDP more than the CPU-intensive load does.
And you are drawing bad conclusions from it. It's reasonable to believe that the silicon tweaks and transistor types needed to make a working GPU aren't optimal for a CPU: GPUs are generally built for low frequencies but higher density in the most power-hungry circuits, and the design decisions made to fit a working GPU may well hurt a CPU. So Llano is not a good example of CPU performance on 32nm.

If a Thuban on 45nm isn't too far behind Bulldozer, then what would happen with a Thuban made on 32nm? It would be almost half the size of Bulldozer, be capable of higher frequencies than the original Thuban, and still have room for optimizations. And if your theory is correct and 32nm really is botched and worse than 45nm, then maybe AMD should stick with 45nm for a while.
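As a rough sanity check on the size claim (a back-of-envelope estimate only, assuming ideal area scaling, which real shrinks never fully deliver): a straight 45nm-to-32nm shrink scales area by about (32/45)^2, roughly 0.51, so Thuban's roughly 346 mm2 die would come out somewhere around 175 mm2, a bit over half of Bulldozer's roughly 315 mm2.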