Immediately?
And the first Pentium 4 as well...
No surprise there... I really want to see how the Opterons do... if there's a similar picture it will be devastating for AMD...
Quite shocking to look at those graphs, after all these years of waiting for AMD to be competitive...
lol, by THOSE metrics Westmere is 50% more efficient than Bulldozer, and Sandy Bridge is 50% more efficient than Westmere.
Sandy Bridge PAYS YOU to buy it over Bulldozer, unless electricity is free.
Is it possible for ATIC, which owns GF, to devalue AMD by holding back their products with a crappy process, only to then buy AMD at a bargain price? I can see ATIC buying AMD at some point in the future, then GF launching an amazing revision of their current process. Where's my tin foil hat? lol
This has got to be the worst possible way to show efficiency.
First, you don't go with the biggest, most power-hungry motherboard if you want an efficient system.
Second, you don't simply add the times together. Some benchmarks took a few minutes, others took a fraction of a second. That's a horrible weighting factor, and right there it shows they are about as dumb as you can get (see the sketch below).
Third, you show efficiency per app, since there are multiple ways to do things, and if you really care about power consumption, you pick the app and method that does it best for you.
And last, you might turn off Turbo so that you lose a little bit of performance but gain a huge return in power savings.
We all knew that these things are power hungry when pushed hard; this is just trying to use shock value to get more hits rather than a proper test.
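On the weighting point, here's a minimal sketch (with entirely made-up runtimes and wattages) of the difference between summing raw times, which lets the longest benchmark drown out the rest, and a geometric mean of per-app efficiency, which weights every workload equally:

Code:
# Hypothetical numbers only -- illustrates the weighting problem, not real data.
from math import prod

# (runtime_seconds, average_watts) per benchmark
apps = {
    "video_encode":  (600.0, 140.0),  # long, heavy workload dominates any sum
    "file_compress": ( 45.0, 110.0),
    "photo_filter":  (  0.8,  95.0),  # sub-second run, invisible in a sum
}

# Naive metric: total time and total energy, dominated by video_encode
total_time   = sum(t for t, _ in apps.values())
total_energy = sum(t * w for t, w in apps.values())
print(f"summed: {total_time:.1f} s, {total_energy / 1000:.1f} kJ")

# Per-app efficiency (runs per joule), combined with a geometric mean so
# each application carries equal weight regardless of its runtime
eff = [1.0 / (t * w) for t, w in apps.values()]
print(f"geometric mean efficiency: {prod(eff) ** (1 / len(eff)):.2e}")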
BTW, for a quick reference, I was able to increase efficiency on Thuban by over 30% simply by dropping the voltage from 1.3V to ~1.1V at stock clocks, and that's with a chip already built around efficiency.
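For what it's worth, that Thuban claim passes a back-of-the-envelope check under the classic dynamic-power model P ~ C * V^2 * f, with frequency held at stock:

Code:
# Rough check of the undervolting claim; assumes core power scales with V^2
# at constant clocks, which ignores static leakage and platform power.
v_stock, v_undervolt = 1.30, 1.10

power_ratio = (v_undervolt / v_stock) ** 2   # ~0.72: core power drops ~28%
efficiency_gain = 1.0 / power_ratio - 1.0    # perf/watt gain at equal speed

print(f"core dynamic power: {power_ratio:.0%} of stock")
print(f"core efficiency gain: +{efficiency_gain:.0%}")
# Prints roughly +40% for the core alone; measured at the wall the gain is
# smaller because the rest of the platform doesn't scale with Vcore, so a
# 30%+ system-level improvement is plausible.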
I would expect the largest gain in the Opterons to come from the much more aggressive clock speeds. The current gen tops out at 2.2GHz for the 12-core Magny-Cours and 2.8GHz for the 6-core Lisbon. So considering the 8-core parts top out at 3.5GHz (3GHz without Turbo) and the 16-core parts top out at 2.8GHz (2.3GHz without Turbo), AMD can probably expect some decent gains in integer computation. The weird thing is that the FP seems much weaker than expected. I can't remember which review it was (I think it was for the 2700K actually), but the FX-8150 had no problems keeping up in integer computation and failed in floating point, which is odd since FP has always been AMD's strong suit since the Athlon days.
Though the power draw of BD is certainly disappointing, THG was a little sneaky by disabling the Intel on-chip HD graphics and using an HD6850.
Using the integrated graphics would have saved 20-30W right there.
That was probably done for fairness. When you can't match specs exactly, since they are different platforms, you still try to make them as similar as possible. The likely purpose of the test was to compare system efficiency in (non-gaming) workloads. Since Zambezi doesn't have integrated graphics, they had no choice but to use a video card; Intel could run off the integrated graphics if the desired test called for it.
What would be an unfair test would be to have the AMD processor use a dedicated card while the Intel used its integrated graphics. Since none of these benchmarks are gaming ones, Intel would have a clear advantage, as using integrated graphics would probably save more power than using the 6850.
That contradicts many other test results, like:
http://www.rage3d.com/reviews/cpu/am.../index.php?p=7
http://techreport.com/articles.x/21865
Quote:
These results couldn't be much more definitive. In every case but one, distributing the threads one per module, and thus avoiding sharing, produces roughly 10-20% higher performance than packing the threads together on two modules. (And that one case, the FDom function in picCOLOR, shows little difference between the three affinity options.) At least for this handful of workloads, the benefits of avoiding resource sharing between two cores on a module are pretty tangible. Even though the packed config enables a higher Turbo Core frequency of 4.2GHz, the shared config is faster.
Why didn't they test any games? Bulldozer needs appropriate software to get it ticking!! Dunno if Windows 8 will cut it out of the box...
This has been known for some time. Two threads on the same module yield only a 160% speedup over a single thread, because with one thread you don't share resources.
^This
AMD said for a long time that you trade off 20% performance to be able to add a second integer core at 35% of the size.
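Those two figures line up, as a quick sanity check shows (assuming the ~160% two-thread module scaling quoted above is accurate):

Code:
# Sanity check on the module-sharing numbers; the 1.6x figure is the
# scaling quoted in this thread, not a measurement of mine.
single_thread = 1.00   # throughput of one thread with a module to itself
module_pair   = 1.60   # two threads sharing one module

per_thread = module_pair / 2
print(f"per-thread throughput when sharing: {per_thread:.0%}")  # 80%
print(f"per-thread penalty: {1 - per_thread:.0%}")              # 20%
# The 20% per-thread hit matches AMD's stated trade-off for the second
# integer core, and the 10-20% gain techreport measured when spreading
# threads one per module.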
Maybe even Win 9; 1-5% is not sufficient, sadly enough... Let's hope GlobalFoundries can produce better products soon so we can crank these CPUs up to where they belong... Seems they made some errors in the charts, putting some of the worst scores in bold.
Seconded; frankly, I'm not impressed with those numbers at all. If anything, those increases probably come from the Windows 8 kernel itself being better, not from better task scheduling. We already know from people physically disabling integer cores that IPC goes up by as much as 20% with proper scheduling.
What about the Set Affinity feature in the Task Manager? Task Manager -> Processes -> right-click a process -> Set Affinity -> set the cores the process runs on.
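For anyone who'd rather script it than click through Task Manager, here's a sketch of setting the same affinity programmatically, pinning the current process to one logical CPU per module. It assumes the common enumeration where logical CPUs (0,1), (2,3), (4,5), (6,7) are the paired cores of modules 0-3 (check your own topology first), and the Windows branch assumes psutil is installed:

Code:
# Pin the current process to one logical CPU per Bulldozer module.
import os

one_per_module = {0, 2, 4, 6}  # first core of each of the four modules

if hasattr(os, "sched_setaffinity"):         # Linux
    os.sched_setaffinity(0, one_per_module)  # pid 0 = current process
    print("affinity:", sorted(os.sched_getaffinity(0)))
else:                                        # Windows: psutil does the same
    import psutil
    psutil.Process().cpu_affinity(sorted(one_per_module))
    print("affinity:", psutil.Process().cpu_affinity())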
Guys, guys, c'mon, it's over... no more "how about this" or "if this, if that"... It's ALL OVER. BD flopped, end of STORY.
lol, chill out bro. That's obvious; I'm just wondering about the potential performance gains from Windows 8. I have a 2500K myself (very glad I caved over the summer and stopped waiting for BD).
Nah brah, it's just that some are still under the spell of the BD. Somehow or somewhere there's a secret patch that will unlock the real BD... get it?
Fair enough analogy. More reason why I don't respect doctors: they should all be required to try the actual med before prescribing you one, so in this case they'd know if it's bitter or sweet... On this end we all know how the BD tastes. Most agree it wouldn't have hurt the PR team to really taste-test the BD and let us know... and they did... they said it tasted, OMFG, "They're Gr-r-reat"... "til the PR is gone".
Maybe Socket 2011 will hit retail before FX AM3+ in Thailand!!!
Actually, one BD core performs somewhere between Phenom I and Phenom II, with occasional dips below and spikes above these. On average it's like an Agena core. BUT it has a much longer pipeline, and God knows what else isn't working as well as it should inside the core. This is taken from PCGamesHardware's website:
http://img845.imageshack.us/img845/3572/95136680.jpg
For gaming results with a GTX 480, click here. Not bad at all for the 8150 at stock. It is definitely a better gaming chip than the 1100T, especially in the minimum-fps department.
I personally am interested in what the hell they did with all those transistors. As I understand it, the whole point of CMT was to create a very flexible design that was light on resources to allow for massive parallel expansion. Seems to me that they created just the opposite instead. As a biomed with a minor in EE, I just find the circuitry intriguing.
...Where are the Interlagos benches and products already? I mean, I know Cray got the first batch, but... shouldn't they have made more by now?
I think the "major players" will be getting all the Interlagos shipments for a while. Yields are too low right now to support sending server chips out to retail for people to buy, especially with basically the same thing already available for consumers.
Source
Quote:
Originally Posted by Phoronix
In some cases the improvements are huge. Should be interesting to see how Bulldozer will turn out in the long run
Any truth in this?
http://quinetiam.com/?p=119
Quote:
After our own reader-base flagged the matter in comments last week, we began to investigate a rumour that there was a registry patch in the works that could offer a 40 per cent performance increase for AMD’s Bulldozer CPUs.
The issues brought up by the article are real, but I'm not sure a simple registry patch will be enough to fix them.
Windows 8 employs a new scheduler to better optimize for Bulldozer, and from the developer preview, we can only see a few percent boost in performance at most.
It'd be great if this was true, but I highly doubt it will happen, otherwise AMD would have worked with Microsoft to get the patch out in time for the release of BD.
The increases are truly impressive, but not across the board for the AMD compiler. Furthermore, it may be feasible to recompile your applications when running a server dedicated to a specific task, thus optimizing the application for the hardware it will be running on and getting great performance. But you can't achieve the same thing when you're running Windows with a bunch of closed-source, unoptimized games and applications. So as great as Bulldozer's potential performance may be, it still isn't utilized to the fullest due to the lack of OS and compiler support for current apps.
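To make the recompile point concrete, here's a sketch of a Bulldozer-tuned rebuild with GCC: -march=bdver1 is GCC's Bulldozer target (GCC 4.6 and later), which lets the compiler use the XOP/FMA4 code paths. The source file name here is a made-up placeholder:

Code:
# Hypothetical rebuild of a hot source file, generic vs. Bulldozer-tuned.
import subprocess

builds = [
    # Generic x86-64 build: runs anywhere, no Bulldozer-specific codegen
    ["gcc", "-O2", "kernel.c", "-o", "kernel_generic"],
    # Tuned build: schedules for the module layout, can emit XOP/FMA4
    ["gcc", "-O3", "-march=bdver1", "kernel.c", "-o", "kernel_bdver1"],
]

for cmd in builds:
    subprocess.run(cmd, check=True)  # raise if a compile fails

On a dedicated server you control the toolchain, so this is practical; with closed-source Windows binaries you get whatever the vendor's compiler emitted.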
I still remember JF-AMD's last post, almost a month ago just before this crap was released:
We are used to this guy's arrogance. He knew about BD's performance, and he just played with the fanbois for months. In the end, you get what you deserve.
Funny thing is that people are still looking for magical fixes and other bullcrap. Just let it go guys, BD is a complete failure and you won't change that.
It's official. All those 2012 doomsday theorists are AMD fans.
And in the end this isn't really interesting for the people here anyway... the most common compiler for the Windows platform is MSVC. It's nice that you can gain something, but if you can't get it in 95% of the apps out there, what is it worth?
Intel, on the other hand, still manages a ~5-10% average IPC increase each generation over the previous one in the same apps. So if you run the same apps you actually gain something, and you don't have to hope that a dev decides to recompile their program, with the right compiler at that.
I'm going to buy one just for CPU-Z benching :)
AMD FX-8150 Multi-GPU Gameplay Performance Review
November 03
http://www.hardocp.com/article/2011/...mance_review/1
No surprise there; once you remove the GPU bottleneck (tri-SLI), BD only sees dust from SB.
Another SLI benchmark set:
http://www.madshrimps.be/articles/ar...#axzz1ceA1SgT8
The BF3 result is especially interesting since the game was chosen by AMD to present BD as an "enthusiast CPU".
We all know the design is broken. It's literally a matter of what Redbull78 said: "What were they thinking?" They must have known for at least a year that the design was broken. Having said that, I own an 8120 and it's not such a bad CPU.
Give the CPU some time to mature and we'll see how far it can stretch its legs. :)
BD with different compilers in Linux:
http://openbenchmarking.org/embed.ph...ha=830ac33&p=1
http://openbenchmarking.org/embed.ph...ha=6fe0918&p=1
Link : AMD Bulldozer With GCC, Open64, LLVM/Clang Compilers
Published on November 02, 2011
Now compare to Sandy Bridge
http://openbenchmarking.org/embed.ph...ha=830ac33&p=1
http://openbenchmarking.org/embed.ph...ha=6fe0918&p=1
Link : AMD FX-8150 Bulldozer On Ubuntu Linux
Published on October 24, 2011
Also See : Open64 Compiler Tuning On AMD Bulldozer FX-8150
Yeah, if gaming is your top priority, Bulldozer with its very weak cores is the worst buy for the price. And it's further exacerbated by the fact that most games today are compiled with ICC.
The clock speeds are crap in that review; 4.6GHz on an FX-8100 would be a complete waste, and one would rather disable some cores or go with a 6100 and overclock it higher. It wouldn't be enough to beat a 2500K, but it would close the gap considerably in some titles.
Seeing a lot of talk about BD being bugged and not running a lot of games, even BSODing on some titles.
Looks like AMD is cutting 10% of its workforce, according to the Associated Press.
AMD to cut 10 pct of workforce as PC weak
Maybe they can put that extra $10 million to good use?
double post, and I only hit the button once...
Olivion: but this is multi-GPU, not single-GPU. This was "clear"...
Is the TDP limit still on, and is it thus downclocking?
BD is a crap CPU with a broken architecture; that's the only reason.
Plain and simple! :down:
Steve Jobs proved that if you push people to their limits, you'll see greatness. I don't think AMD's problem is money; it's leadership. They have so much instability on their board, and the CEO seems to be a rental position these days. I think all they need is a guy who's not afraid to get in their employees' faces from time to time and tell them they aren't cutting it.
Granted, I'm way oversimplifying things, as Jobs also had a tremendous ability to read people's emotions and be charismatic. That said, before he went off the deep end with price gouging and such, he basically changed the entire computing industry with a team of 6 people. If AMD can find a guy like that, they'll be competing with Intel again in less than a decade (yeah, that seems like a while, but let's be realistic: this architecture sucks, and it'll take at least 1 or 2 more to get back on the IPC playing field).
I see a lot of talk about Windows 8 helping BD due to better thread management; it seems to be the preferred excuse ATM. I'm just wondering if that will also benefit Intel CPUs. Probably not the 2500 with no HT, but won't better thread management also help HT-enabled CPUs like the 2600?
Unlike in Bulldozer's case, Windows 7 already handles the i7's SMT cores quite well.
How do we know that? Maybe Win8 will give both AMD and Intel a boost?
I tested 4/4 and it isn't faster clock for clock, at least not in the games I tested it with... And to be honest, the end user shouldn't be bothered with this fine-tuning at all... The only thing it allows you to do is clock higher and drop power consumption... But it will need to be at 5GHz and beyond to give the raw power of the Intels a run for their money... and for that to be usable 24/7, GlobalFoundries has to make up for their crappy production process... as they are responsible for the final blow to this new architecture.
The weak multi-GPU game performance can easily be spotted in 3DMark06 and Vantage, as the GPU tests score way too low. Even lower than a Thuban at 4GHz (FX-8150 at 4.6). Anyone ever wonder why we saw so many Heaven benchmarks? Because it's 99% (if not more) GPU-limited; if you lose even 100 points on the score there, something is terribly wrong...
The solution to all this: harsh price cuts and better GF production to allow us to run these things over 5GHz... All the software recompiling and such is encouraging, but most of the image damage has been done... and later this month another thing is coming...
Also, most reviewers are testing with ES CPUs that don't seem to clock as high as the retail chips; mine would run the full test suite at 4.9GHz, though Prime stability topped out at 4.6-4.7GHz...
Leeghoofd......"and later this month another thing is coming...", care to elaborate?
I guess Leeghoofd was talking about the SB-E launch coming in 10 days, on the 14th :shrug:
Two other reviews:
http://www.xtremehardware.it/eng-rev...-201111146032/
http://www.xtremehardware.it/eng-rev...-201110126039/
Maybe, if you want to add them.
AMD confirms: Bulldozer FX only 1.2 instead of 2 billion transistors
A huge-ass die on GF's 32nm process with only 1.2 billion transistors? Llano has 1.45 billion transistors in 228 mm^2.
BS.
Why do so many people blame GF for Bulldozer's failures? Even if Bulldozer were at 5GHz+, it wouldn't touch 2600K territory. No company can produce Bulldozer at those frequencies. And I doubt any company can make a huge chip with 2 (or 1.2?) billion transistors energy-efficient at frequencies around 4GHz; compare with SB-E.
Lol, wat?
Marketing doesn't even know how many transistors their own product has?
WTH is going on at AMD? :shocked:
The SOI process technology is rather counterproductive for dense logic structures - certainly not as good as a node-matched bulk process.
The official number has been corrected now; it's 1.2B and the die size is 315mm^2. Bulldozer is a server-oriented chip, and a lot of the design was made with that target in mind; that's why the die is somewhat larger. Also, as Hornet said, Intel has always had higher cache density than the Kx generations.
Where is that official correction?