OK, let me try to walk through the history of power.
Intel has throttling, we do not. If their chip gets too hot, it throttles back. So, to some degree, power and performance are linked. We will have that feature in the future.
In the old days, Intel used a derate of max power to determine TDP. That was ~80% (so, theoretically, a 100W max power part would have a TDP of 80W). This is exactly how it was described by Intel engineers when I worked for an Intel-based OEM (so don't flame me on that description.)
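To make that derate arithmetic concrete, here is a rough sketch in Python; the 0.80 factor and the 100W figure are just the illustrative numbers from above, not an official formula:

```python
# Old-style TDP as a fixed fraction ("derate") of worst-case thermal-virus power.
# The 0.80 factor and 100W max power are the illustrative numbers from this post.

def tdp_from_max_power(max_power_watts: float, derate: float = 0.80) -> float:
    """TDP estimated as derate * worst-case (thermal virus) power."""
    return max_power_watts * derate

print(tdp_from_max_power(100.0))  # 100W max power * 0.80 -> 80W TDP
```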
When we brought our first parts out, they had a TDP of 95W. However, the problem was that even in the worst-case scenario you'd never see more than 50W of power (heavily FP-influenced HPC.) The way to measure max power is to run something called a "thermal virus" that basically fires up all transistors at the same time, stressing the chip in a way no real workload could.
The problem we faced was that we were rated at 95W and customers were provisioning their racks expecting that level of power, while actual consumption was generally in the low to mid 40s (about half). So their racks were running inefficiently, with more headroom and less density - that is bad because it eats up floor space.
So, after working with customers for a long time, the overwhelming demand was "tell us what the REAL power is, not the 'design to' power."
ACP was born.
We take several server benchmarks and run them at 100% utilization and measure power at the CPU, that becomes the ACP.
So a 115W TDP nets a 75W ACP. And the typical customer is seeing power consumption in the low 50s. So ACP is still conservative, but closer to reality.
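To sketch what that measurement looks like (the benchmark names, wattages, and the simple average below are my own illustration, not the exact published methodology):

```python
# Illustrative sketch of deriving an ACP-style number: run a set of server
# benchmarks at 100% utilization, measure power at the CPU during each run,
# and roll the results into one rating. Names and wattages are hypothetical.

cpu_watts_at_full_load = {
    "benchmark_a": 74.0,
    "benchmark_b": 76.5,
    "benchmark_c": 73.8,
}

acp = sum(cpu_watts_at_full_load.values()) / len(cpu_watts_at_full_load)
print(f"ACP rating: ~{acp:.0f}W")  # compare against the 115W TDP on the same part
```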
The real problem was that Intel had roughly 110W TDP parts in that timeframe and they were getting beaten badly by AMD in the power area. So, when you have a TDP of 110W and your competitor is at 95W, what do you do? You change the derate from ~80% to ~65%. Voila, lower TDPs.
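To see why that works, run the arithmetic backwards with my rough numbers: a ~110W TDP at an ~80% derate implies a max (thermal virus) power of roughly 110 / 0.80 ≈ 137W. Relabel that same ~137W part with a ~65% derate and it suddenly carries a TDP of about 137 × 0.65 ≈ 89W, without anything about the silicon changing.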
The reality is, at the wall, until Nehalem, Intel was soundly behind AMD in power. In today's world, in the "at the wall" benchmarking that we have seen, the standard power parts are about equal at idle, and Intel's are higher at full load.
But the problem for them is that the power ramp is huge. If we go from idle to ~3% utilization, we jump a watt or two. They jump 10% (they have spent more time optimizing idle power, and that can cause the big ramp-up.)
So why don't we just use the same measurement as Intel? Two reasons:
1. The architectures are different, and that is not a fair comparison, it favors them.
2. They have a history of changing the definition of things to suit their needs, so if we went to match exactly what they were doing, they could just change their measurement to suit their needs.
I always tell customers to ignore TDP and ACP. What really matters is TOTAL PLATFORM POWER at the wall. I believe that we have the advantage here.
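If you want to check that for yourself, a metered PDU or a simple watt meter on the server's power cord is all it takes; a minimal sketch, where read_wall_watts() is a hypothetical stand-in for whatever interface your meter exposes:

```python
import time

def read_wall_watts() -> float:
    # Hypothetical stand-in: replace with a call into your watt meter
    # or metered PDU's interface.
    return 0.0

# Sample total platform power at the wall while the workload runs,
# then report the average. Interval and duration are arbitrary choices.
samples = []
for _ in range(300):        # e.g. 5 minutes at one sample per second
    samples.append(read_wall_watts())
    time.sleep(1)

print(f"average wall power: {sum(samples) / len(samples):.1f}W")
```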
I just came out of the lab last week and saw that the new Magny-Cours is pulling basically the same power as a six-core Istanbul. Twice the cores, a massive performance increase, and roughly the same power. At some utilization levels it is actually lower.
I can't speak to client power, I am a server guy.