Originally Posted by mstp2009
What a load of CRAP. There is so much wrong with these statements that I don't know where to begin.
First, the thread scheduler of the OS takes care of this; you NEVER have to code your application to say which "core" it should run on.
Good thread schedulers fill up "real" (physical) cores before they "double up" onto an HT sibling. That's just the way of things, and has been for a very long time: since Windows Server 2003 and Linux 2.6.x at the very least.
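To see that placement really is the kernel's job, here is a minimal Linux-only sketch (it uses `os.sched_getaffinity`, which is not available on Windows or macOS): by default a process's affinity mask spans every logical CPU the system will let it use, so the scheduler is free to spread threads across physical cores first without the application saying anything.

```python
# Sketch: by default the OS scheduler decides thread placement. The process
# affinity mask normally covers all the logical CPUs we are allowed to use,
# leaving the kernel free to fill physical cores before SMT siblings.
# Linux-only: os.sched_getaffinity does not exist on Windows/macOS.
import os

def affinity_summary() -> tuple[int, list[int]]:
    """Return (logical CPU count, CPUs this process may be scheduled on)."""
    allowed = os.sched_getaffinity(0)   # 0 = the calling process
    return os.cpu_count(), sorted(allowed)

if __name__ == "__main__":
    total, allowed = affinity_summary()
    print("logical CPUs:", total)
    print("scheduler may place our threads on:", allowed)
```

Note that `allowed` can be a strict subset of all CPUs inside containers or cgroups; the point is that the application never had to pick a core itself.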
Second, a "virtual" core never "waits" on the real core, or vice versa, to complete its computations. TWO THREADS can be pushed down the same "real + virtual" core at the same time.
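On Linux you can see exactly which logical CPUs are SMT ("HT") siblings sharing one physical core: sysfs exposes `thread_siblings_list` per CPU. The sysfs path below is standard Linux; the `parse_siblings` helper is my own sketch for its two common formats (`"0,4"` or `"0-1"`).

```python
# Sketch: find the logical CPUs that share physical core 0 with CPU 0.
# Two runnable threads scheduled on sibling CPUs issue instructions through
# that core's execution units at the same time - neither "waits" for the other.

def parse_siblings(text: str) -> list[int]:
    """Parse a thread_siblings_list string like '0,4' or '0-1' into CPU ids."""
    cpus: list[int] = []
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

if __name__ == "__main__":
    path = "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list"
    try:
        with open(path) as f:
            siblings = parse_siblings(f.read())
        if len(siblings) > 1:
            print(f"CPU 0 shares a physical core with CPU(s) {siblings[1:]}")
        else:
            print("CPU 0 has no SMT sibling (HT off or not present)")
    except OSError:
        print("CPU topology not exposed on this system")
```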
XBitLabs has the best diagram I have seen for this.
Is the solution proposed in Bulldozer (BD) better? Likely. But does Intel's solution improve overall IPC and resource utilization? Absolutely.
I run servers with both AMD Istanbuls and Intel Nehalems in a cloud environment, and I push the limits of how many threads can run concurrently on very high-end hardware. It's how I make money: how many VM servers can we pack onto a physical server without degrading performance?
I love the AMDs because they are CHEAP, low power, and do the job WELL, but I'll be frank with you right here and now: even fully loaded on all "real" and "virtual" cores, the Nehalem cloud servers run circles around the Istanbul ones we have deployed, in terms of the number of customers that can be crammed onto a system without "overloading" the CPU resources. So while the Intel system costs more up front, on other metrics it costs me less: LESS power consumed per customer, MORE customers per 2U of rack space (lower datacenter costs). So in the end, it is about a wash from my perspective.
JF-AMD - You have a particular agenda to push, but please stop spreading FUD about the competition when it is clear you don't have the technical background to do so. Not trying to be rude, but you yourself have said you are a marketing guy, not an engineer.