ummm... no. the concept of pipelining is to increase throughput at the cost of latency.
the P4 is an extremely good example of how hyperpipelining exacerbates existing problems such as branch mispredictions and cache misses.
simplistic logic would assume that having more instructions in flight per clock means higher ipc, and thus that deeper pipelining increases ipc. it doesn't, because there are dependencies between instructions.
http://en.wikipedia.org/wiki/Data_dependency
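a trivial sketch of what that means (the values are made up): each statement needs the result of the one before it, so no amount of pipeline depth or extra execution units lets the core overlap them.

#include <stdio.h>

int main(void)
{
    int x = 7, y = 3;

    /* a read-after-write dependency chain: every instruction needs the
     * result of the one before it, so the hardware cannot overlap them
     * no matter how deep the pipeline is. */
    int a = x + 1;   /* depends on x */
    int b = a * 3;   /* depends on a */
    int c = b - y;   /* depends on b */
    int d = c << 2;  /* depends on c */

    printf("%d\n", d);
    return 0;
}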
if you mispredict a branch you have to flush all n stages of the pipeline, and the deeper the pipeline, the more branches are in flight at once, so the odds of getting through them all without a flush fall off exponentially with depth. the upshot is that branch-predictor accuracy has to improve disproportionately just to buy a linear increase in clock speed. that's not ideal and a waste of xtor budget.
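here's a toy back-of-the-envelope model of that trade-off. the per-branch miss rates, the one-branch-in-five ratio, and the depths are illustrative numbers, not measurements:

#include <stdio.h>
#include <math.h>

/* toy model of the mispredict tax as pipelines get deeper.
 * assumptions (made up for illustration):
 *   - roughly one instruction in five is a branch
 *   - a mispredict flushes the whole pipeline, costing ~depth cycles
 *   - p is the per-branch misprediction rate the predictor achieves
 */
int main(void)
{
    const double branch_freq  = 0.20;             /* branches per instruction */
    const int    depths[]     = { 10, 20, 31 };   /* shallow -> deep pipelines */
    const double miss_rates[] = { 0.05, 0.02, 0.01 };

    for (int d = 0; d < 3; d++) {
        printf("depth %2d stages:\n", depths[d]);
        for (int m = 0; m < 3; m++) {
            double p          = miss_rates[m];
            double in_flight  = depths[d] * branch_freq;     /* branches in flight */
            double clean      = pow(1.0 - p, in_flight);     /* chance no flush is pending */
            double cycles_tax = branch_freq * p * depths[d]; /* avg flush cycles per instr */
            printf("  miss %4.1f%%: P(no flush pending) = %.2f, flush cycles/instr = %.2f\n",
                   p * 100.0, clean, cycles_tax);
        }
    }
    return 0;
}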
if you miss a cache line, every instruction that depends on that load has to wait for the data. OoO will only hide so much latency, and the reorder buffer still retires in program order (FIFO), so a long-latency load at the head blocks everything behind it.
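a minimal sketch of the difference, using a made-up node type: in the pointer-chasing loop every load's address comes from the previous load, so a miss stalls the whole chain, while in the array loop the loads are independent and OoO can overlap the miss latency.

#include <stdio.h>
#include <stddef.h>

struct node { struct node *next; long value; };

/* dependent loads: each iteration's address is produced by the previous
 * load, so a cache miss on n->next serializes the whole walk. */
long sum_list(const struct node *n)
{
    long sum = 0;
    while (n) {
        sum += n->value;
        n = n->next;   /* serial dependence: this load produces the next address */
    }
    return sum;
}

/* independent loads: a[0], a[1], ... are known up front, so the core can
 * issue several loads at once and overlap their miss latency. */
long sum_array(const long *a, size_t len)
{
    long sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += a[i];   /* only the running add is dependent; the loads are not */
    return sum;
}

int main(void)
{
    struct node c = { NULL, 3 }, b = { &c, 2 }, a = { &b, 1 };
    long arr[] = { 1, 2, 3 };
    printf("list %ld, array %ld\n", sum_list(&a), sum_array(arr, 3));
    return 0;
}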
we have reached a point of diminishing returns for pipelining. at every level it makes things more complex, from architecture to circuit to layout.
it's obvious that amd knows this, but saying intel did it wrong and amd did it right/better is a foolish way to look at it. a lot of decisions are based on what the design team is good at; they are going to do things differently.
frequency depends on the nature of the pipeline stages; the clock can only run as fast as the slowest stage allows. the instruction stream being executed doesn't change to suit the hardware, the dependencies are what they are.