stop watching :banana::banana::banana::banana: :cord:
read to the end of the thread... this has been resolved, and I'm no longer asking about the chart.
Some see a vase, some see two faces... :D
Quote:
Let me remind you, his avatar is a red sun, and the yellow thingy was a building, an Asian-style building.
http://www.sapdesignguild.org/resour...ages/faces.gif
I don't think BD has any relation whatsoever with Trips. The people who designed BD did so in 2005-2007 and most have left AMD by now.
You're basically describing Netburst. Those were the premises on which Netburst was based. It is interesting to see that they are more valid today than in the past. It very well could be that we will see a Netburst-revisited uarch for the upcoming process nodes. There's a limit to the Pentium Pro offspring anyway.
Quote:
One possible way to evade this is to go to a cycle time where instruction throughput (per unit time) peaks. Most of today's IPC is achieved by increasing complexity, thus increasing die area (and ever-increasing static leakage) and limiting cycle time (or even decreasing it without new processes), which means nearly stagnating clock frequencies. What if the design gets simplified (lowering IPC) but allows for 20% or more area savings and an increased clock frequency (without having to increase voltage, since less complex circuitry does the job faster)? Well, this is just one idea.
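As a back-of-the-envelope illustration of that tradeoff (all numbers below are made up for the sketch, not from any real core): sustained throughput is roughly IPC × frequency, so a simplified core that gives up 20% IPC but clocks more than 25% higher still comes out ahead.

```python
# Made-up numbers illustrating throughput = IPC * frequency (GHz).
complex_ipc, complex_ghz = 2.0, 3.0   # wide, complex core
simple_ipc, simple_ghz = 1.6, 4.0     # -20% IPC, but clocked ~33% higher

complex_tp = complex_ipc * complex_ghz  # giga-instructions per second
simple_tp = simple_ipc * simple_ghz

print(simple_tp, complex_tp)  # 6.4 6.0 -> the simpler core wins here
```

Of course the real question is whether the simpler design actually reaches that clock inside the same power budget, which is exactly the Netburst lesson mentioned above.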
The lead uarch for Nehalem said that had they used Netburst as a starting point for Nehalem instead of Core, the performance would have been higher, but with worse energy efficiency. Interesting remark anyway (from a Stanford lecture on Nehalem).
True. However, the word not to forget is latency: whenever you need to cross clock domains or arbitrate streams, you have latency.
Quote:
Having different clock domains in a module would allow choosing closer-to-optimal cycle times for the different units, also driven by the type of work to be done. E.g. decoding could have a longer pipeline if some trace caches and good branch prediction are present. A shared FPU would also be a candidate for a longer pipeline, while integer cores could benefit from a short schedule-execute loop and be clocked lower, but be 4-wide. There are so many possible techniques...
When you have so many cores, it becomes a bit useless to think as small as powering individual units down; it's way better to power cores down, as done in Nehalem. I mean, with 8 modules, what's the benefit of shutting down 3 integer clusters? Why not turn off 2 modules?
Quote:
What can be different are the granularity and the units themselves.
Granularity: power down/up, clock differently the different subunits, as power budget permits and queue fill level requires
Units: scale them to what fits best and optimize power usage this way, e.g. switch off some L2 cache ways or execution units,
or activate them as needed (by tracking the upcoming instructions and powering up/clocking the appropriate subunits).
Regarding caches, you always need to maintain coherency (so the L3 isn't a candidate), and if you power down the L2, you might as well power down the cores attached to it.
lol, now?! AMD has been in a very good position for the last 2 years! It began with the HD 4000 and Phenom II launch in January 2009 (or maybe even 2008, with the first Phenom B3 revision and the solid mainstream HD 3000 series). And now the HD 5000 dominates, and Thuban is great (great performance, OC potential, and the power consumption of 6 cores at 45 nm is beautiful).
Could we stop using the f word? (fanboy) Savantu is only trying to prove Dresdenboy wrong, and that's something that should be normal in any proper discussion. I don't see anything fanboy about that; he's just trying to be realistic in his own way, even though you might consider it to be pessimistic.
How come your ONLY contribution to this thread, and many others I must say, is to always go after perceived anti-AMD contributors? There's some great info in this thread; you can focus on that, or you can choose to focus on people's sigs. Stay out if you have nothing better to contribute.
And most of those people don't actually contribute anything either, they just want to drive the thread into worthlessness. And they do it over and over. Why aren't you asking them the same questions? I'm all up for debate and discussion, but that isn't what happens in these threads once certain people jump in. These threads get derailed long before I make any posts.
The kernel and the whole operating system. Linux and any other *nix systems welcome more cores. Apps? Highly threaded video encoding (mencoder, transcode), databases, and rendering, for example.
Instead of ditching the CPU which has more cores, people should ditch the OS or apps which can't fully utilize the *expensive* CPU they just bought. What a waste!
HT is the perfect mask to fool the OS/app/user into thinking they're doing high-level computing. Give real cores to the workload and see them crunch it!
Upgrade the computing instead of the computer. But wait! Ask Intel to upgrade HT to x3: 4 cores give 12 threads. Now that's computing higher than ever.
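On the "give real cores to the workload" point, here is a minimal Python sketch (the worker name and the 1,000,000-element job are arbitrary choices for the example): it spreads a CPU-bound sum over one process per core, so each core gets real work instead of threads time-slicing on a couple of cores.

```python
# Sketch: spread a CPU-bound job over real cores, one process per core.
# Unlike threads under CPython's GIL, processes can run truly in parallel.
from multiprocessing import Pool
import os

def partial_sum(rng):
    """CPU-bound worker: sum one strided slice of the problem."""
    return sum(rng)

if __name__ == "__main__":
    n = os.cpu_count() or 1
    # Strided ranges partition 0..999999 exactly across the n workers.
    chunks = [range(i, 1_000_000, n) for i in range(n)]
    with Pool(n) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(range(1_000_000)))  # same answer, every core busy
```

The same pattern is why the highly threaded apps mentioned above (encoders, databases, renderers) scale with core count: the work partitions cleanly.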
On linux, try
Quote:
make all -j100
and split the workload into 100 threads shared by 2 cores, but it won't make any headway with either speed or time taken to compile. Might be better off with just
Quote:
make all -j4
or
Quote:
make all -j2
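To pick between those `-j` values without guessing, you can match the job count to the machine (a small sketch; `os.cpu_count()` is Python's portable cousin of the `nproc` command, and the printed command is just a suggestion, not run here):

```python
# Suggest a make parallelism level that matches the core count,
# rather than oversubscribing 2 cores with 100 jobs.
import os

jobs = os.cpu_count() or 1  # fall back to 1 if the count is unknown
print(f"make all -j{jobs}")
```

On a dual core this prints `make all -j2`, which is exactly the point above: beyond the core count, extra jobs mostly add scheduling and memory pressure, not speed.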
Clairvoyant129: good question :). But the future is not only about single-threaded operations. Someone who wants a PC for internet, movies, etc. doesn't need a quad core; they'll pay for some dual core. Someone who wants more, for example gaming (ArmA, GTA, Battlefield), working in 3D, working with graphics programs, or making and editing videos, needs more than a dual core (a triple core at minimum, or a dual core with HT). And enthusiasts too.
But you are right, most applications are optimized for single or dual cores (http://www.tomshardware.com/reviews/...ling,2652.html , where we can see a big difference going from single core up to dual core), so I see a dual core as the minimum for a newly built PC today. If most operations are still single-threaded today, modern CPUs have turbo for those applications. But... if we compare, for example, an E8400 with today's CPUs core to core, we don't see a big difference (one E8400 core is very similar to an X2 550 BE core or some Core i7 core without turbo).
The good news is more and more multithreaded operations for consumers, and on the CPU side maybe project Fusion.
HT fools the user, huh? And what is that assumption based on? Do you have any evidence that shows HT doesn't improve performance? From my understanding, and in the reviews I have seen, HT has a positive impact on performance. So if it does add performance, why not use it? :shrug:
I'd say the marketing of Intel fools the user.
The commercials talk about the turbo feature of the CPU as being better than the internet, WiFi, and some other things.
http://www.youtube.com/watch?v=RSqMTWrlF-8