
Originally Posted by Sr7
I see your point, but that was long before CPUs had hit the thermal wall, when each subsequent CPU generation really was night and day from its predecessor. I feel like once you hit the thermal wall, you must rely on micro-arch improvements, and even integrating the IMC gets you a one-time speed-up. Where do you go from there? It seems like all you have is more cores to throw at the problem, leaving developers to fend for themselves WRT utilizing them via threading and parallelized workloads.
I don't doubt they WILL get faster; I'm just thinking maybe only trivially faster for current-day workloads, and not faster by as much as the P2->P3->P4->Core 2 transitions were.
If you can't do encoding/decoding/transcoding nearly as fast as another technology *today*, then what relevant workload are you left with to show off your new processor's performance benefits in the average system? Opening browsers in 2 ms instead of 10 ms? My point is you end up with gains where the percentage gain is technically huge, but the absolute gain is below a perceptible threshold, in applications people don't really care about or notice.
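To make the quoted point about threading concrete, here is a minimal, hypothetical C++ sketch of what "throwing more cores at the problem" actually asks of developers: the work has to be carved up and joined by hand. The array-summing workload and all the names in it are illustrative only, not anything from this thread.

[CODE]
// Sketch: splitting an embarrassingly parallel workload (summing a large
// array) across however many hardware threads the CPU reports.
// Build with something like: g++ -O2 -pthread parallel_sum.cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = std::size_t{1} << 26;   // ~67M elements of dummy data
    std::vector<std::uint32_t> data(n, 1);

    // Extra cores only help if the developer explicitly divides the work.
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::uint64_t> partial(workers, 0);
    std::vector<std::thread> pool;

    const std::size_t chunk = n / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w + 1 == workers) ? n : begin + chunk;
        // Each thread sums its own slice into its own slot (no sharing, no locks).
        pool.emplace_back([&, w, begin, end] {
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end,
                                         std::uint64_t{0});
        });
    }
    for (auto& t : pool) t.join();

    std::uint64_t total = std::accumulate(partial.begin(), partial.end(),
                                          std::uint64_t{0});
    std::cout << "sum = " << total << " using " << workers << " threads\n";
    return 0;
}
[/CODE]

Note this only pays off because the workload splits cleanly; for the serial, latency-bound tasks the post is talking about, extra cores sit idle and the user sees little to no difference.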