Quote Originally Posted by Sr7
I see your point, but that was long before CPUs hit the thermal wall, when each subsequent CPU generation really was night and day from its predecessor. I feel like once you hit the thermal wall you must rely on micro-architecture improvements, and even integrating the IMC gets you a one-time speed-up. Where do you go from there? It seems like all you have is more cores to throw at the problem, leaving developers to fend for themselves WRT utilizing them via threading and parallelized workloads.

I don't doubt they WILL get faster; I'm just thinking maybe only trivially faster for current-day workloads, and not faster by as much as the P2->P3->P4->Core 2 transitions were.

If you can't do encoding/decoding/transcoding nearly as fast as another technology *today*, then what relevant workload are you left with to show off your new processor's performance benefits in the average system? Opening browsers in 2 ms instead of 10 ms? My point is you end up with gains where the % gain is technically huge, but where the absolute gains are below a perceptible threshold, in applications people don't really care about or notice.
But the reality is that the current CPUs do show increases over previous generations and also run cooler.
I see that myself with the Harpertowns vs the previous Clovertowns.
The Clovers (on good air) max out around 3150 MHz, while the Harpers max out close to 4000 MHz and, with identical cooling, run 15°C cooler.
They also produce close to 40% more work in a given timeframe.
Cooler, and more work done in the same time.
That is the advantage of the newer CPUs; then add in the lower power draw.
My Clovers at 100% load at 3150 MHz draw 420 W; the Harpers at 3758 MHz draw 320 W at 100% load.
There are your "absolute gains" in real numbers.
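If anyone wants to sanity-check the perf-per-watt angle, here is a rough back-of-the-envelope sketch in Python using the numbers above (the ~40% more work figure and the wattages are from my runs quoted here; treat the exact ratio as approximate):

# Back-of-the-envelope perf-per-watt comparison using the figures above.
# "Work" is in arbitrary units of work per unit time; only the ratios matter.
clover_power_w = 420      # Clovertown rig, 100% load at 3150 MHz
harper_power_w = 320      # Harpertown rig, 100% load at 3758 MHz
harper_work_ratio = 1.40  # Harpers do ~40% more work in the same timeframe

clover_ppw = 1.0 / clover_power_w          # work per watt, Clovertown baseline
harper_ppw = harper_work_ratio / harper_power_w

print(f"Harpertown vs Clovertown perf/watt: {harper_ppw / clover_ppw:.2f}x")
# -> about 1.84x, i.e. roughly 84% more work per watt on top of the raw throughput gain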