See you on Wednesday.
If those results are real and the tested unit is a final one with the latest BIOS, motherboard and drivers... then the results are pretty disappointing.
BD runs very hot, it's much slower than a 2600K, and its single-core performance is very poor (20s in SuperPi? omg... worse than a Core 2!) :(:(:(:(
In fact, I have doubts even comparing it with a Phenom II X6... because it will perform more or less the same, but it will be much more expensive.
This thread has an insane number of views in less than 2 days...
So this thread will be deleted come Wednesday ;)
Well, that's just great. I was cleaning my comp and had to reapply thermal paste, and I dropped my 1090T; now all the pins are bent... s.o.b.!
Jeez, good thing I didn't become a surgeon like my parents wanted.
I'd rather have bent pins on the bottom of a CPU than bent pins in a CPU socket. Of course, I wouldn't want either to happen, but the former is much easier to recover from than the latter.
Took me some effort to find it ... then I got smart and looked in my history.. and there it was:
http://www.gossamer-threads.com/list...readed#1409305
There was more somewhere else, I'll post it when/if I find it...
--The loon
That erratum should be identical on all Bulldozer-based CPUs. It is a per-module issue where instructions get pushed out incorrectly, so the core needs to re-fetch them from the L2 (if present there [the L2 is mostly inclusive, so it SHOULD be]) with an ~18-cycle penalty for each occurrence.
In heavy single-threaded loads, with the other core intermittently active, you'll see the heavy thread penalized excessively as the instruction fetches cause improper invalidations of cache lines... up to about 10% or so under normal usage (my estimate, based on the architecture details and the patch's code).
It gets REALLY interesting when the entire module is loaded. Now you have BOTH cores causing fetches that cross-invalidate repeatedly, and the L1 is essentially bypassed as both cores keep going back to the L2 for their instructions.
I expected this issue the very first instant I saw the shared L1I cache setup... but I'm a long-time coder in heavily multi-threaded environments, so these types of issues stand out to me.
I even wrote a program to test my theory (yeah... I'm a geek), so my numbers aren't just wild guesses, though they are case-specific.
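(For the curious, here is a minimal sketch of that kind of contention probe, assuming Linux/gcc and that logical CPUs 0 and 1 are the two cores of one module; the pairing is system-specific, so check lstopo or /proc/cpuinfo first. A tiny loop like this mostly exposes shared front-end contention rather than the L1I re-fetch issue specifically; a real test would need a much larger instruction footprint. This is not the poster's actual program.)
Code:
/* Time a hot loop on core 0, alone and then with core 1 busy.
 * Build: gcc -O2 -pthread probe.c -o probe */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static volatile int stop = 0;

static void pin(int cpu) {                 /* pin calling thread to one CPU */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static double hot_loop(void) {             /* the measured workload */
    struct timespec t0, t1;
    volatile uint64_t x = 1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t i = 0; i < 200000000ULL; i++) x = x * 3 + 1;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

static void *sibling(void *arg) {          /* background load on the paired core */
    pin(1);                                /* ASSUMED module sibling of CPU 0 */
    volatile uint64_t y = 1;
    while (!stop) y = y * 5 + 7;
    return NULL;
}

int main(void) {
    pin(0);
    printf("alone:        %.3f s\n", hot_loop());
    pthread_t t;
    pthread_create(&t, NULL, sibling, NULL);
    printf("with sibling: %.3f s\n", hot_loop());
    stop = 1;
    pthread_join(t, NULL);
    return 0;
}
The gap between the two timings is the per-module sharing penalty for this particular workload.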
Or I'm just completely missing something... (we'll know in just a couple of days).
--The loon
Thanks for the link, I never saw the reply :-)
That said, however, I'd be willing to bet that my numbers have merit in some loads (likely the "microbenchmarks" mentioned). If the contention is as it was described, the performance effect should be greater, unless it happens in rarer circumstances than I had anticipated (which is certainly possible, even likely, given 64KB worth of cache lines...).
(I'd have been even more conservative if I'd known exactly how widely my comment would be parroted!)
--The loon
Yes, my numbers are accurate... except they only apply about 20% of the time... or less... the problem apparently irons itself out under sustained load, which is not what I was anticipating when I ran my simulations (yes, simulations, based on scant data, but still... I'm a dork :clap: ).
It was something I had entirely dismissed as likely, given the nature of the patch: changing the memory mapping, going semi-static (a security risk) and much more was being done in the patch as I saw it. I'm not sure what the final patch looks like; I just found out about all this parroting of my post on other forums (google it... it's freaking CRAZY!) and came back here to do damage control...
--The loon
Hi guys,
I've been on the road the last couple of days, so I didn't have much time to read or reply in this crazy thread. Just wanted to say that we do not build the hardware, we just test it. We have done it countless times before (Clarkdale, Sandy Bridge, GTX480) and we will most likely do it again. Our interest is to keep our readers well informed, so we try as much as possible to be very accurate. Until now, I really do not remember being wrong, even though in other threads some people also said that those were bogus tests and so on. Of course, I also do not remember any of those guys saying "you were right" after the final reviews came out, but that is a different story :)
We have been playing with hardware and testing hardware a long time now, and we will be doing this for a long while, so it would really not be in our best interest to put out wrong tests or anything like this. After all, there is one more day until all the reviews are out, so anybody can compare all the results we got with all the other results from the web and see if we were right or not. I personally am looking forward to that :)
With this preview, like all the others, we tried to double-check every little thing: to get the latest BIOS, the latest silicon revision, the latest software updates and so on. Also, we could not make a very big preview with many game resolutions, many applications and so on, because we were very, very busy with MOA, so we tried to choose the best scenarios to put the accent on the CPU, not on the VGA or anything else. Even so, for a preview, I would say we got enough results, and I am sure that the reviews coming tomorrow will have more and more results to show exactly how Bulldozer works.
In the end, remember that competition is the basis of progress and evolution, and it is very important for all of us to see good products on the market, so our job as enthusiasts and press is just to show things how they are, in order to help the companies improve their products. It does not matter if it is Intel's Prescott, Nvidia's GTX590 or AMD's Bulldozer: when something should have been better, it is our job to say so, so that future products will be better. As a hardware enthusiast I care most about performance and good products, not about labels and marketing, and as hardware press, I care about correctly informing our readers, not about "shocking" stories that would not be true.
I hope this sheds some light on all the things discussed so thoroughly in this thread, and also on our position and intentions.
Thanks for that Monstru. :)
Wait for tomorrow.
If these results are true, we are at the mercy of Intel’s prices for years to come :down:
Quote: "How did you get the chip?"
You can realise that we have sources for CPUs and VGAs, like most people who want to get hardware in advance. You can also realise that it is useless to ask; nobody will ever divulge such sources :)
Quote: "Also, why HAWX and not Crysis or Metro 2033?"
For heavy GPU benchmarks we used Unigine, which scales with the CPU much less than Crysis and Metro do (i.e. not at all, if there is actually nothing wrong with the CPU). HawX and RE5 are perfect for showing the difference between two CPUs at low res, instead of just testing mainly the GPU / PCI-E controller. It is a theoretical game performance test, not a real-life one, obviously...
In the end, it does not matter why we chose this instead of that (we would have tested twice as many games and apps, more resolutions and so on if we had had the time, but that would be a review, not a preview...); you will definitely have tons of reviews with tons of games tested at tons of resolutions tomorrow. We just previewed a small batch of games and tests in order to get a basic idea of how this CPU performs. To understand the full picture, read the reviews tomorrow; a lot of guys are working very hard to show you the new CPU in as many situations as you would like to see...
Hey, don't insult Prescott, I have a WR on Prescott... or better, Presshott.
Has anyone heard that the Windows kernel needs to be updated because it is heavily impacting performance, or something like that? I am going to try Windows 8.
There was a slide about Win 8 performance; it was 10% in the best case. Most likely it's due to offloading lightly threaded applications onto a single module -> turning off the other modules -> increasing Turbo Core clock speeds. Win 7 wouldn't know how to do that with BD modules unless it gets patched. But that's not a BD-only benefit; SB can benefit from the same scheduling improvement.
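(For illustration, this is roughly what such module-aware scheduling amounts to, and what you can already force by hand on Win7: pack a lightly threaded app onto one module so the others can be parked and Turbo Core can climb. A sketch only; the mask 0x3, meaning logical CPUs 0 and 1 form one module, is an assumption that varies by system.)
Code:
/* Pin the current process to one assumed module.
 * Build: gcc pin.c -o pin.exe (MinGW) or cl pin.c */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* ASSUMED: logical CPUs 0 and 1 are the two cores of module 0. */
    DWORD_PTR moduleMask = 0x3;
    if (!SetProcessAffinityMask(GetCurrentProcess(), moduleMask)) {
        printf("SetProcessAffinityMask failed: %lu\n",
               (unsigned long)GetLastError());
        return 1;
    }
    printf("Pinned to mask 0x%lx; idle modules can now be parked.\n",
           (unsigned long)moduleMask);
    /* ...run the lightly threaded workload here... */
    return 0;
}
The same effect without code: "start /affinity 3 app.exe" from a Win7 command prompt.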
Just one question, as a regular reader:
WHAT IS THE POINT IN SHOWING NON-REAL-WORLD PERFORMANCE?
I don't buy some magic numbers from some stupid synthetic tests; I want real-world numbers in real-world tests (application tests at widespread settings). Almost all hardware sites are completely useless because they don't put their focus on real-world performance and user experience, and yours is one of them. You show a completely wrong picture of what users can expect from the reviewed hardware by doing such completely useless junk tests.
Because those synthetic tests are easy to compare to other processors.
And putting this CPU in "real world tests" because it loses pretty badly in benchmarks won't magically make the new FX into some monster CPU. Synthetic tests indicate performance in different situations using different parts of the CPU, so if it runs really slowly in all the benchmarks (hmm, ever wonder why they are called benchmarks?), it is going to run slowly in other applications as well. You will only find one or two situations in very specific environments that go against this, but that isn't the real world either, now is it?
Here the benefit for AMD will not only be improvements in performance, but also in efficiency. Right now I'd assume the OS is loading up cores randomly across modules, resulting in poor idle power consumption compared to the best-case scenario (the theoretical maximum). Anyone with BD should ideally also test idle power consumption on both Win7 and Win8, to give us a clearer picture. However, from what I understand, HT wouldn't see much improvement, if any at all, as in Intel's case there are no dedicated execution resources, merely virtual extensions to improve execution efficiency.
Well, not really. What we all would like to know as paying customers is whether spending $300-400 on a GPU is better than spending $500-600 on upgrading a CPU-mobo combo. Not everyone upgrades everything every time a new piece of hardware comes out. What happens when you show a low-res test is that it shows a scenario entirely useless for day-to-day usage.
Check out [H] tomorrow with their performance graphs. I doubt anyone needs more than an unlocked X2 550 for most games, and Bulldozer won't change that, but it can prove a worthy upgrade for those who have compatible mobos, well, when the time comes. I will skip it for my gaming rig for this reason and wait to see what they offer next year. Of course, if it rips apart my 920 then I would get it for my main rig, but it doesn't seem like that will happen.
There's definitely an improvement in gaming when you move to a quad, but little after that. I think if you have a 920, you're set till next year. IB is only going to bring 10-odd percent on the CPU side. However, if you want to save on power consumption, Trinity seems potent, or for more power, the next FX replacement chip. Haswell is also a good bet, but that is 2013, when AMD is also supposed to come up with Steamroller. Oh, and all this is only valid if you don't do rendering and the like. If you render, then consider BD in Opteron form; they'll be cheap and you could have something like 16 cores running on a G34 mobo for under $1000.
I converted my X6 into an X3 so I can push the MHz as high as possible. It can handle the same clocks running as an X6, but that can also burn up the MB, since it likes to throttle at my voltages. And I do notice a few games that still want more CPU power than 3x 4.2GHz, and my GPU is a 4850, so clearly that should be the bottleneck, but it isn't.
640K ought to be enough for anybody
- Bill Gates
There is no reason for any individual to have a computer in his home.
- Ken Olsen
I can name at LEAST five current games that perform like junk unless a high speed 4 core (or preferably higher) processor is used:
- Shogun 2
- Dirt 3 / F1 2011 with more than 5 cars racing
- Battlefield 3
- Civilization V
There are plenty more out there and the trend for better AI processing is more cores + higher speed = better.
Sky, you just gave me a great idea: a game where, if you overclock your CPU, you get access to the super-smart AI hardcore mode, lol
You have an overclocked quad core like the one he was referring to. His point, however, is that if you are looking for a future-proof system, you can't really say something is "good enough"; games and programs are always evolving and using more processing power than the previous versions, so the idea that any particular CPU is "good enough" doesn't really hold much water.
Bill Gates never said that. Maybe you misunderstood me to mean a dual-core 550; by unlocked I meant a 550 with 4 working cores. I could have said 960 or whatever; the 550 is just cheaper.
I said for most games. Granted, I haven't played any of those except Civ 5, which didn't impress me much; they very well might need more processing power, but I wonder if those games really fall into the unplayable category on a Deneb. Even if you managed to pick 5 games out of the ones that came out this year, I think that still wouldn't invalidate what I wrote.
Anyway, I will check tomorrow; hopefully I will find some benches with Deneb, Thuban and BD OC'd to 4GHz and witness all those unplayable games on Deneb that run well on the others. If I run into one that I really want to play and the Deneb can't handle, my i7 920 will have to suffice until next year's offerings are out, as so far nothing seems worth my money, and I really doubt BD or Sandy-E will change that.
What I mean is really obvious, especially to us here; I never imagined someone would interpret what I wrote as "it will be good forever".
I didn't write anything about future-proofing, just that right now most games can be played really well on a cheap Deneb. Actually, your system is more future-proof if you upgrade only when it really reaches its limits, so sticking with the Deneb until next year grants me a more future-proof system when I upgrade. This one hasn't reached any limits I am aware of; that's why I said it will be replaced next year, but I see no reason for that right now.
Let's not forget the big daddy of 'em all: World of Warcraft:
http://media.bestofmicro.com/1/M/269...%20Scaling.png
You forgot TF2... it profits the most from 2 cores, but it needs insanely high performance from those...
Okay, I have been following this forum for ages; sometimes I completely lose track of what is going on, and I'm not all that good with the specifics of CPUs.
However, I would like to ask one thing:
I am currently building a Gaming / Rendering / Modeling / Level design PC for uni, build log = Liquid cooling/Worklogs/ Define R3 Hydr0 :)
I have been holding out for Bulldozer, but reading this forum (and others) has led me to reconsider my CPU choice.
This is my current PC spec:
Mainboard: MSI 990FXA-GD65
RAM: Crucial Ballistix 8GB 1600MHz (upgrading to 16GB when rendering starts next year)
GPU: Radeon HD 6870
SSD: OCZ Sata II 64GB
HDD: WD Scorpio Blue 750GB
PSU: OCZ 80+ Modular 750w
All watercooled, please see build log for more info :)
What CPU would you recommend?
Phenom II x6 (1100T or 1090T)
FX4xxx Series (black edition)
FX6xxx Series (black edition)
*FX8150 (this one I would like, but I think I'd rather buy something cheaper and wait for Piledriver)*
A little help please :)
P.S. I've been reading "The Ultimate History of Video Games" by Steven Kent, and Bill Gates did say that ^^
Also, Steve Jobs (RIP) worked for Atari and was an evil little bugger, but a legend nonetheless :D
I recommend that book to anyone; it's a right laugh.
Now we're splitting hairs over what Bill Gates said or not? Come on. For your information, there is no proof he DIDN'T say it....
With that being said, the progression of this thread is pathetic.
Lab501 posts results they claim are indicative of Bulldozer performance and the responses go....
1) That SUCKS
2) Oh wait, those can't be right!
3) Where did the processor come from? Was it paid for by Intel?
4) There must be something wrong with the benchmarks since they aren't properly loading the CPU!
5) Wait a sec, there is a BIOS setting that will somehow improve performance
6) Forget about the BIOS setting....there's a PATCH!
7) No no! The current way of benchmarking processors just can't show the true power of Bulldozer
8) You're holding it wrong
And prepare for this all to get rehashed in the thread holding launch day reviews in under 12 hours... At least at that point we'll see whether any of those hold water or if they're all just meaningless excuses to justify the poor results of a highly anticipated architecture. :)
One hint: at least TWO of the above statements are correct. ;)
Double post -_-
Also, so as not to fill this thread up with CPU choices, I have added a poll to my build log :)
Please come take a look: Define R3 Hydr0
hx.
That game is held back A LOT by AMD chips, so I'd hope there is a major improvement here with BD, especially since a lot of people who play it are on PCs with crap GPUs and midrange CPUs. It's a perfect game to use Llano or Trinity on.
Also, WoW uses only 2 threads, with the third being very lightly touched.
This site sure is double-posting a lot these days...
You are wrong about BF3.
An i3 2100 is just enough, and only a few % behind a 2600K. BF3 uses HT very well on the i3, with usage around 70-80% per thread, 35-50% on a 2600K. But in general you are right. There are a few good games that need 4 cores, or at least 4 LGA1156-class cores, to run like they're supposed to. SB is the superior gaming CPU, and BD will have a hard task against SB in gaming for sure, even at high resolutions with AA and stuff.
Lab501, thank you for the preview. I like it very much, and the choice of benchmarks is very good for my taste. I don't see much of an overclocking haven in BD except very high clocks, but I don't care, because a 4.5GHz 2600K is still faster in many things than an FX at 7 or 8GHz, and at 5.6GHz it will probably destroy it everywhere...
I don't understand how a Zambezi 8C (or 8T, if you like) scores 26 GFLOPS at 4GHz when a Deneb X4 @ 4GHz scores 45-47 GFLOPS. Just crazy. Is there a throttling mechanism employed? Because one 256-bit FPU cannot be slower than one K10 FPU; it makes zero sense.
I won't put much stock in the LinX run; the other guy also had problems running it. To me it seems like a problem with the compiler... dunno what LinX is compiled with, but if it is compiled with ICC, the worst case is a fallback to SSE2...
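(If the compiler-fallback theory is right, the raw numbers are at least consistent with it. A back-of-envelope check; the FLOPs-per-clock figures are assumptions about the two designs: a K10 core with a 128-bit FADD plus a 128-bit FMUL peaks at 4 DP FLOPs per clock, and a BD module running non-FMA SSE code through its two 128-bit FMAC pipes also peaks at 4 DP FLOPs per clock.)
Code:
/* Back-of-envelope peak-GFLOPS check for the numbers above.
 * Build: gcc -O2 peak.c -o peak */
#include <stdio.h>

/* peak GFLOPS = execution units x DP FLOPs per clock x GHz */
static double peak(int units, double flops_per_clk, double ghz)
{
    return units * flops_per_clk * ghz;
}

int main(void)
{
    double deneb   = peak(4, 4.0, 4.0); /* 4 K10 cores        -> 64 GFLOPS peak */
    double zambezi = peak(4, 4.0, 4.0); /* 4 BD modules, SSE  -> 64 GFLOPS peak */
    printf("Deneb X4: peak %.0f, measured ~46 -> %.0f%% efficiency\n",
           deneb, 46.0 / deneb * 100.0);     /* ~72%: normal for LinX */
    printf("Zambezi:  peak %.0f, measured ~26 -> %.0f%% efficiency\n",
           zambezi, 26.0 / zambezi * 100.0); /* ~41%: smells like a bad code path */
    return 0;
}
Same theoretical peak for both chips at 4GHz, so 26 vs 45-47 GFLOPS points at the code path (or scheduling/cache contention) rather than the FPU itself; FMA4-aware code could in principle double the BD figure.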
One guy at the coolaler forum measured 318W of power draw when running C11.5 on a 4.5GHz Zambezi....
My gripe is the way most websites benchmark CPUs. Hardware Canucks is one of the worst offenders.
Do you guys really feel that you did a good Sandy Bridge review?
I have seen you make so many comments about timedemos and canned benchmarks on this very forum, while the site you review for does the exact same thing in its Sandy Bridge review.
Do you really feel that a GTX 280, about two and a half years old at the time, was a good choice to show off what a brand-new $300 CPU does for a gaming rig?
Windows Vista Service Pack 1? You have got to be kidding me.
An old-ass 320GB HDD?
Is this what I should expect from your Bulldozer review? If so, it will be just as useless to me.
I find the responses in this thread borderline hilarious; the number of people with buyer's remorse for waiting for Bulldozer is incredible. Just let it go and buy an i7; I got one last night after reading this thread. The jig is up; to be frank, I wouldn't be shocked at all if this is in large part why Chew* is taking a break; he probably knew there'd be a ton of people groveling at AMD's altar, and that they'd spin and say anything to get a piece of the action when AMD does better a chip or two down the road.
Also, semiaccurate had an interesting article that sums up what I'm seeing here:
To me, that looks like exactly what is happening. AMD wanted to try something novel and got bitten in the ass by it. Unfortunately for them, they don't have the capital to be behaving this way. It wouldn't surprise me at all if they are bought out or bankrupt by the end of this coming year.
This isn't true at all. For one, they make a lot of money elsewhere: servers, GPUs, mobile markets, etc. Also, like someone else mentioned, the average user/customer hasn't a clue what actually performs, so the average person walks into a store and talks to a Best Buy person who also knows nothing... they compare specs: an Intel 2600K quad core at 3.4GHz (3.8GHz with Turbo) OR an AMD FX with 8 cores at 3.6GHz (Turbo to 4.2GHz)... and AMD is cheaper as well... The customer goes home with their new 8-core machine and probably brags about it to friends, who could be potential customers...
Who is going out and buying a desktop at Best Buy? Those are the people Llano is made for, and for that market it does the job very well. If I were in the market for a new notebook, I would be going with a Llano CPU. Bulldozer might just be a great workstation and server CPU. The niche PC gaming and high-end desktop market isn't going to make or break AMD; don't kid yourself.
I have definitely seen people buy "high-end" desktops at BB... or CompUSA, etc. There are more clueless people out there than you might think. The average person may know someone who knows about computers, but they themselves know nothing and don't care; they just want something that works, and if a salesman sells them something, they will eat it up even if they don't really need it. I know a few people that used to work at CompUSA, before it turned into "TigerDirect", who would laugh about what people came in and wasted money on. I'm not saying this will save AMD, because like you said they are not dependent on the desktop market, but I've seen cases like this before and I'm sure it will happen again.
@SKYMTL
Do you still use a GTX 280 when testing CPUs?
The problem is AMD seems to be half-assing a lot of things lately. Llano isn't strong enough to run games, at least not enough to justify using it over a discrete add-in card, nor does it save enough power to justify purchasing it at the extreme other end of things. AMD has condemned itself to mediocrity, and they don't appear to be making much headway toward being a leader in any category other than "good enough".
Intel killer? At this point i will be happy if its a Phenom II killer.
I guess now we know why talk of Piledriver started before this was even out.
For some reason I see this being a short-lived product, and Piledriver coming out by mid-year.
I think this CPU is just a CPU used to optimize GloFo's process for Piledriver.
Looks like we have some more stuff in the wild: here's an 8120 http://3dmark.com/3dm11/1979782
Source: http://www.overclock.net/amd-general...fx-8120-a.html
I think those craving hope are really showing desperation, in the stages SKYMTL has shown, just in kinder words. There have been a lot of telltale signs in the past, e.g. the falling stock over the last month, the departing executives, and Intel delaying Ivy Bridge till the middle of next year. If Intel were scared of Bulldozer, they wouldn't have delayed Ivy Bridge so much. They know they can milk Sandy, and it's probably why the 2700K has a higher price than the 2600K.
I am not certain at this point, but with the evidence from these benchmarks, Bulldozer looks to be a failure of epic proportions. Considering the time in development, this was AMD's last hope on the CPU front.
This is certainly terrible for the consumer, as development and aggressive pricing will slow down without competition. Ever since Hector Ruiz took over AMD, the company has been moving in a worse and worse direction. I think if AMD wanted to be a successful CPU company, buying ATI was the worst mistake it could make. It simply didn't have the resources to develop both, and it has spread itself thin. Trying to get into the mobile game on top of this is going to spread resources even thinner.
Fusion is the only thing that can save AMD; it's a gamble that has shown moderate success. The problem is whether it will pay off in the future, with the CPU portion so crippled.
Bulldozer is optimized for multi-threading and the server market, and the server market is where AMD's money comes from; the 16-core 8-module and 12-core 6-module server parts will be successful.
I will be waiting for Piledriver to show the true power of the Bulldozer arch.
Yeah, of course, keep telling yourself that! I'm a huge AMD fan (never had an Intel machine, NEVER), but denying something that has been proved, with claims that have no basis, is just dumb. This time they let me down. Sorry.
Quote: "The jig is up; to be frank, I wouldn't be shocked at all if this is in large part why Chew* is taking a break; he probably knew there'd be a ton of people groveling at AMD's altar, and that they'd spin and say anything to get a piece of the action when AMD does better a chip or two down the road."
Really?
http://www.xtremesystems.org/forums/...happened-right
Quote: "It wouldn't surprise me at all if they are bought out or bankrupt by the end of this coming year."
Yeeeeaaaahhhh.........
(First off, I want to apologize in advance if the following comments cause anyone to get upset.)
Ouch. If these early benchmarks are accurate, this is going to make Fermi's launch look good. Seriously though... there were a lot of signs, clear as day, pointing to Bulldozer being a flop: the price was way too low for the hyped performance claims, combined with the delays, the repeated yield problems, and the deafening silence from AMD and cheerleaders like Charlie over at "rarely-accurate". Then top it off with the arrogance of naming it Bulldozer, implying it's going to 'bury the competition'... and you have a perfect storm.
As expected, many of you are having a hard time dealing with being let down, and as SKYMTL mentioned... the 5 stages of grieving are clearly illustrated in this thread.
This is the one and only post I'll make poking fun at the fanboys, even though it would be fun to flood these threads with Bulldozer jokes and photochopped images like many here did a couple of years ago. Oh, the irony.
Well, I do acknowledge that Zambezi is a bit of a failure compared to what we expected, but this arch has a lot left to tweak before we can say the final word on whether it will be a lot faster in the form of Piledriver; there aren't even final BIOSes released for Zambezi yet.
Piledriver will indeed show the true power of the Bulldozer arch; it could be merely equal to Zambezi or faster, it could even surprise us, but we won't know until Piledriver gets released. The server segment is really where AMD gets most of its income at the moment.
I know, I know. I was a bit too optimistic about performance back then. What is true is that I will still be buying Zambezi CPUs, if only for the box: I want one for candy, one for lollipops and one for screws.
So it looks like Terrance was right: the more we post about it, the more the IPC decreases :p:
It still has the best multi-threaded performance, but it's crappy at single-threaded performance.
Oh yeah, I'm a bit too optimistic at times :D
How many more hours till the N.D.A. is D.E.A.D.?
1:33 to go?
Amidoinitrite?
http://img.photobucket.com/albums/v1...l/trololol.jpg
So much for the Bulldozer thread from a few weeks back, when movieman hyped up the FX like it was way faster than what Intel is offering. I had a feeling it was smoke and mirrors.
Meanwhile at Dresden...
http://i47.tinypic.com/2vnhzra.gif
The FX-8120 is not beating the i5 2500 even in many heavily MT apps, and the game performance comparisons are not even funny.
http://pctuning.tyden.cz/hardware/pr...u-1-2?start=14
Ouch...
-PB
Considering AMD has only 5.5% of the server CPU market, I doubt that segment is what's keeping them afloat.
Oh... and there's this:
Intel’s server processor profit margins were 50% in 2009, versus 10% for AMD.
So Bulldozer starts to show its potential in WinRAR and Photoshop benches... which is exactly what one should expect with 8 *integer* cores. Parallel integer workloads (think: SERVER) are where this thing will shine. (And probably specialized HPC apps designed for FMA4.)
That's probably why they released it as "FX", unlocked, for consumers: it's mainly fun for overclockers. Low IPC and only 4 FP cores isn't great for Pi and Cinebench :)
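(For reference, the FMA4 mentioned above looks like this in practice: one fused multiply-add instruction per 256-bit vector. A sketch assuming GCC; build with gcc -O2 -mavx -mfma4, and note it only runs on FMA4-capable, i.e. Bulldozer-family, hardware.)
Code:
#include <x86intrin.h>
#include <stdio.h>

int main(void)
{
    __m256d a = _mm256_set1_pd(2.0);     /* four doubles, all 2.0 */
    __m256d b = _mm256_set1_pd(3.0);
    __m256d c = _mm256_set1_pd(1.0);
    __m256d r = _mm256_macc_pd(a, b, c); /* FMA4: r = a*b + c in one instruction */

    double out[4];
    _mm256_storeu_pd(out, r);
    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]); /* 7.0 each */
    return 0;
}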
Welp, this thing is possibly one of the worst releases this year, if the current reviews are any indication.
At least we know why JF-AMD has been absent! LOL!
What a surprise... our results were 100% accurate (...again), the BIOS version was good, the silicon revision was good, and the numbers everyone else got were exactly in line with ours... Or, as VR-Zone said...
Quote:
Clock for clock in multithreading-heavy scenarios the FX-8150 is typically behind the quad core i7-2600K by ~20%, and in single-threaded workloads it gets totally annihilated. Overclocking was also nothing to shout about, as it needed a significant voltage bump to even reach a (Prime-stable) 4.7GHz on noisy air. Bluntly, the performance we are seeing from the 8-core FX-8150 today is only marginally better than last year's high end quad core Intel i7 "Nehalem" and AMD's own Phenom II "Thuban" 6-core offerings, and definitely not the "i7-990X killer" hyped over the last few months. No disrespect intended to the record holders, but disabling cores to achieve high frequencies is just going to confuse unsuspecting buyers, and split-second CPU-Z screenshots are not a measure of true stability.
You're a legend to us Monstru, breaking rules :D