I missed that...
I don't care about the FUD, I care that it's time to upgrade my 965BE and I can't wait for BD's release :D I'm almost tempted to go for server hardware to satisfy myself :p:
That was a joke.
Well, it really didn't look like one to me. Use smileys next time :)
Just a dirty question now: when is BD on 22nm coming? Any guesses?
Well, beginning or end of it? Because that would mean only one year with BD on 32nm.
beginning of 2013
I know, but maybe this strategy is going to change. The desktop steppings may contain some errata which the server steppings may not. The desktop segment is more tolerant of some errata. Just think back to Nehalem's launch: the C0 (first desktop) stepping came out months earlier than the D0 (first server) stepping because C0 contained errata that aren't allowed in a server CPU (like a TLB bug). So if there is an early stepping that's suitable for desktop, then maybe my speculation could happen.
Why not? The 65nm K10 lived 1 year and 2 months.
AMD should use a 1206-pin socket and align pin 1 on the AM3 CPU with pin 1 on the BD socket; that way you have 265 spare pins for BD. :up:
Socket AM34, anyone? ;)
Use some of 'em so Zambezi works in 2P :D
After thinking about nn_step's comment I think he is mostly right.
I can't say what the IPC of code is like on average, but my own code doesn't often do a lot of linear arithmetic. I can usually break the computational work into multiple threads and/or send it to the FPU instead. The bulk of my code, though, tends to be branchy. With the new branch fusion, branch prediction, and prediction-steered data prefetch, this type of code could see a significant improvement on BD compared to K10.
But despite the gains or losses in IPC for specific types of code, it seems to me that increasing IPC for average code is going to hit diminishing returns in the coming years. First, because there is only so much ILP left to extract from code; second, because Amdahl's law applies just as much to instruction-level parallelism as it does to thread-level parallelism. Also, any new program doing a major amount of computation will usually be multi-threaded. Increasingly, as programmers learn to write threaded programs better and take care of higher-level parallelism themselves, the payoff from getting more IPC out of each individual thread may be rapidly approaching a wall.
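To put a rough number on the Amdahl's-law point: even if a wider core could run the independent part of an instruction stream arbitrarily fast, the serial, dependent part still caps the speedup of a single thread. A minimal Python sketch, with made-up fractions purely for illustration:

def amdahl_speedup(parallel_fraction, speedup_of_parallel_part):
    # Amdahl's law: overall speedup when only part of the work is sped up.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / speedup_of_parallel_part)

# Hypothetical numbers: if 60% of a thread's instructions can be overlapped (ILP)
# and the core could execute that portion arbitrarily fast, the ceiling is still:
print(amdahl_speedup(0.60, 1e9))  # ~2.5x, no matter how wide the core gets
print(amdahl_speedup(0.60, 2))    # ~1.43x with a more realistic 2x on that portion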
We also seem to be forgetting the case of program level parallelism. When I run a program and it's not taking up all my cores or not utilizing them fully, I run more programs. That's exactly why I have run MP machines since the Pentium Pro days and why I have a multicore processor now. Multitasking has been around for a long time. But because it's hard to quantify as a benchmark people tend to compare performance only in the single instance case.
I believe the answer will depend on what you are using your high end desktop for. Not everyone uses a high end machine for the same purpose and which architecture and core count is right for you depends on which applications you use the most.
It may be the case that for office users or gamers a high-clocked 2-module BD or 2-core SB is the sweet spot, while if you do lots of encoding, crunching, or media creation, a 4-module BD or 8-core SB might be the better fit. Of course we will have to wait for benchmarks to know for sure.
I think you could instead buy an octo-core MC and a dual-socket mobo... that will be somewhat future-proof... as after BD comes you should be able to run it with a simple BIOS update... which is not something you could say about Intel platforms (both server and desktop) or AMD's desktop offerings at this point in time...
Edit: Makes for an incredible value proposition too... what with the cheapest MC coming in at approx. $290 and a dual-socket board for about $450.
FYR...
Mobo... http://www.newegg.com/Product/Produc...82E16813131643
CPU... http://www.newegg.com/Product/Produc...82E16819105266
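Rough tally on that value claim, using the approximate prices quoted above (a back-of-envelope sketch that ignores RAM, PSU, and the rest of the build):

# Hypothetical shopping-list math based on the approximate prices quoted above.
cpu_price = 290     # cheapest octo-core Magny-Cours, approx.
board_price = 450   # dual-socket board, approx.
print(2 * cpu_price + board_price)  # ~$1030 for 16 cores, before RAM/PSU/etc.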
The big problem with AMD's dual-socket server boards is that they are not overclockable! :mad: :(
If an MC dual board ever came out as an overclockable workstation board, then I would buy two or three of them! :cool: :up:
If MC or BD CPUs came out unlocked (like Intel's) and ran at around 3.0 GHz, then I would buy four to six CPUs! :cool: :up:
If AMD released BD as a Phenom FX CPU for G34 or whatever socket comes out, then I would buy four to six of those CPUs! :cool: :up:
Who really needs to OC a dual-socket board??? Seriously... besides for e-peen value.
I mean the processing power :)
< This guy. When I was running a dual Istanbul setup between last July and earlier this year, I really wanted to overclock it. I did, to a limited degree, on my first (nVidia) motherboard.
Why? Because I do some of everything. When I'm compiling FPGA programming files during development, that's unfortunately a single-threaded process that takes a long time. At 2.2 GHz, you're looking at a minute and a half to two minutes every time you want to test a configuration. It's not like Visual Studio where you hit F5 to check your changes and the thing instantly starts up. When I'm relaxing by playing games, that's mildly threaded. Most games don't do their best with more cores and lower clock frequencies, though a few like BioShock do fine. For the most part though, my gaming experience really tanked when I went from a Phenom II X4 940 @ 3.6 GHz to the dual Istanbul @ 2.2-2.5 GHz. And finally I do some video work which tends to take advantage of more cores quite nicely... sometimes.
It's never fast enough, though. A Thuban overclocked to 4 GHz, which isn't terribly difficult, can turn in throughput very close to what I used to get out of my dual Istanbul station. That was the point at which I decided to switch. If I had been able to overclock that platform, the question of changing or not wouldn't have been so easy to answer.
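For what it's worth, here is a crude way to see why the overclocked Thuban comes out so close (a naive cores-times-clock estimate, assuming similar per-clock performance since both are K10-derived, and ignoring memory and scaling effects):

# Naive aggregate-throughput estimate: cores * clock (GHz). Real scaling is messier.
dual_istanbul = 2 * 6 * 2.2   # 12 K10 cores at 2.2 GHz -> 26.4 "GHz-cores"
thuban_oc     = 1 * 6 * 4.0   #  6 K10 cores at 4.0 GHz -> 24.0 "GHz-cores"
print(dual_istanbul, thuban_oc)

# For the single-threaded FPGA compile, assuming it scales mostly with clock:
# a ~100 s build at 2.2 GHz would drop to roughly 100 * 2.2 / 4.0 ~= 55 s at 4 GHz.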