If they really had the CPU and it worked properly, they would have done a much better test. For example, they would need to throw in more CPUs for comparison.
They might have some chips in hand, but it doesn't make much sense. Since these guys never signed an NDA, they wouldn't have been briefed through official channels.
Yes, we already knew that from the slides that leaked about two weeks ago.
And a chip being announced should mean it's widely available. I still suspect availability will be mid-October; some packages are confirmed, but still not enough, unless they show us tons of packages or AMD decides to do a paper launch.
If there isn't enough volume to make a shipment, why would benchmarks be out this early?
Nope. If you want hits you have to be first. They did a quick test rather than a thorough one in order to be the first. There's nothing wrong with that.
For thorough reviews, wait until the NDA is lifted. Some guys probably already have Bulldozer and are preparing thorough reviews with comparisons to other processors.
All? I haven't seen any trustworthy benchmark. What I have seen are some very strange benchmarks against just one other CPU, with apps that differ from those in normal reviews, some with strange settings.
Match that with a company that needs to be very secretive because the competitor has so much more money.
Monstru: is this CPU representative of what is going to be in stores next week, or, because you got hold of an *unofficial* CPU, did you get an early, bugged/crippled stepping?
From OBR maybe :p:
The title of the article is AMD FX-8150 Bulldozer Preview. Is that not clear enough?
That's why there are only a moderate number of tests, and only 2 CPUs anyway.
The larger review will be posted on the 12th, like the others.
JohnJohn, the CPUs are already at retailers if they are to be sold on the 12th, so it is more than likely a retail chip.
And they didn't say that they don't have the CPU from AMD; Monstru only said that they don't have an NDA agreement with AMD.
+1 quote of the day. :up:
Because all the other benchmarks from all the other leaks also sucked? How much smoke must there be before people believe there is fire?
Final BIOS ?
I am still a bit skeptical; this looks too bad to be true. Is it called denial? ^^;;;
After all these delays I did not expect much, but this might not even show a clear lead over their last gen, not to mention what Intel has to offer.
Thanks for enlightening us, Monstru. I really appreciate you giving us something to read until the 12th.
(not a fan, have both intel and amd)
they are out of the gaming desktop area.
But it is easy to see that the results aren't logical. Just check the HAWX performance (and it's strange to use a game like HAWX in a CPU test in the first place...). If they have a Bulldozer, then prefetching is off. With only 16 KB of L1 cache, it needs prefetching into L2 to get good numbers.
EDIT: I think Nehalem had some early benchmarks with the same problem; it showed bad performance because of that.
Yesterday I had a chance to play with an FX-8150, and I could not find what I expected from FX. The max Cinebench speed was 4.55 GHz with a medium-performance air cooler, and the score I got from Cinebench was lower than a same-clocked AMD X6 processor.
Do a lot of people game on a $300 cpu at 1280×1024?
If Lab501 is right then this is fake:
http://img.ozeros.com/noticias/2011/...150_slide4.jpg
http://www.ozeros.com/2011/10/nuevos...e-el-i7-2600k/
I don't think AMD presents fake numbers to customers. They may select apps that run well on the product being presented, but they can't fake it.
They are right, guys. The 12th will be a sad day to remember, I'm afraid...
Regarding benchmark scores being lower than Thuban's:
Could this come from current benchmarking programs not properly utilizing the floating-point units, only issuing a single 128-bit instruction per clock since there's only 1 FP scheduler per module (feeding two FMA execution units)?
In this case it would be a comparison between 4 Zambezi cores vs 6 Thuban cores.
Kuriimaho: yes, but who cares about 1280x1024 in practice? It's the same as running SuperPi as the main CPU benchmark for quad cores :)... Most gamers play at 1680x1050 or higher resolutions, so if I have a problem with games at higher resolution, that's the real problem. For example, some strategy games and such are very demanding even with good CPUs at gaming resolutions.
Even at high resolution you will see the same thing in CPU-dependent places.
For example, all these tests are CPU-dependent at 1920x1200:
http://nvision.pl/Intel-Core-i7-2600...details-5.html
You must know how to distinguish a GPU test from a CPU test.
AMD shows a GPU test.
Yes, as Maxforces shows, these tests are fine by me... I hope we will see some gaming tests in the reviews.
The number of synthetic and completely pointless benchmarks on review sites is getting a little annoying. If I'm buying an 8-core CPU I want to know how it performs in a variety of 3D rendering programs, media encoding, compiling on Linux, and programs I actually use. Instead, review sites will benchmark 20 console game ports, SuperPi, and Cinema 4D, which isn't a widely used renderer.
The fact that Lab501 didn't sign an NDA only makes things worse.
How did they get the chip? It was either provided by AMD (highly improbable) or by somebody who was under an NDA.
If this is the case, what they did then is like buying stolen goods and we should not support these unlawful practices.
I would like to hear a clarification about this.
On the other hand, HAWX? Resident Evil 5?
Why not Crysis, Metro 2033, etc?
Too many wrong things in and around this preview.
I really can't wait to see the reaction of the main AMD BS producers (Hans, JF-AMD and other big names) when Bulldozer is finally launched. Time and reality separate fanbois and BS from the guys that used their brains since the first slides appeared and the architecture was revealed to the public. It's been funny to read all that BS for such a long time (years!), but the fun is over. Time for reality.
Welcome back Netburst.
Two and a half days, guys. Just make sure you are seated when Bulldozer arrives, because it will blow you out of your chair =))
Unless this is a B0 or B1 chip, then... ladies and gentlemen, we have a dud!
It's clearly not, since we have screenshots of CPU-Z 1.58.7 (latest and greatest) and it says B2 ;) And there is no ES marking in that screenshot, which means it's an actual retail sample.
I'd say yes. A lot of software compiled with older compilers doesn't even use Phenom II's advanced instruction sets. If it doesn't see Intel as the CPU, it defaults to x87 or takes a very inefficient path for FPU and INT instructions. With Zambezi being such a radical design, I imagine AMD will be more dependent on optimized software than ever. Unfortunately this will likely never happen, as it never happened for the K10.
Then pick a cpu limited game like Fallout 3 where increasing the resolution actually increases the dependence on the cpu.
There are plenty of games that are cpu limited at 1920x1080 with a modern video card especially when you also show minimum framerates.
Those low resolution benchmarks are about as useful at showing real world performance as 3d mark.
:up:
If all this is true, then shrinking and optimizing Phenom II into a Phenom III would make more sense. Or maybe Phenom FX :)
If I were AMD, I would cancel the launch, urgently die shrink Thuban, add 2 more cores and call that their new 8 core FX chip!
Lol, just kidding. What a disappointment. I think more heads will roll at AMD for this fiasco; this is truly worse than Phenom I. Clearly a horrible choice to ignore IPC (at this point it almost seems like AMD HAD to be different from Intel and try to innovate rather than copy), and AMD should really go out into the market and try to hire engineers from newly successful companies like ARM. But that's not easy, of course.
One tip for AMD regarding Hyper-Threading, high IPC, the enthusiast segment and everything good Intel has done: don't be afraid to copy a technology... If something is worth copying, then it is worth something!
http://forums.anandtech.com/showpost...&postcount=233
Another person who used BIOS 9905 (the same as Lab501), but Turbo didn't seem to work.
There are too many weird things about this.
Why is there no NDA signed? Where did they get the CPU then? Why is it slower than Thuban? Why did MM say they had a winner if it is slower than Thuban?
Honestly, does anyone think that AMD or Intel would release a CPU that performs worse than the previous generation in overall performance? Would AMD dare to bring back the FX brand if they knew the CPU sucked and that they'd get burnt?
That's the whole thing: there really weren't any good gaming tests in the Sandy Bridge reviews either. Anand and Hardware Canucks used a single, almost three-year-old GTX 280 for most of their reviews. Anand took games like Fallout 3 and ran them at medium settings; as we all know, increasing settings like draw distance only increases the CPU dependence with that engine. Then they used Far Cry 2 in DX9 with medium settings and Warhead at mainstream settings. Crysis is one of those games where, in a custom run, I saw a nice difference that I could show and repeat with Fraps when moving from Lynnfield to Sandy.
Typical CPU gaming comparisons on the sites that have gotten way too comfortable are just awful. Anandtech is one of the worst offenders, and it looks like Lab501 ripped a page out of Anandtech's CPU review book.
disappointed :(:(
This is what I expected, unfortunately. After all this time, now I will buy an Intel 2600K.
8 cores vs 4 cores = the 4 cores win... this is ridiculous :shakes:
QFT
Even the complete dud called K10 Barcelona had higher performance than its predecessor - and a new CPU on a SMALLER node with a bigger die should perform worse than the 1.5-year-old Thuban?
This makes no sense at all; they could have cancelled it several times by now and used the shrunk Llano Stars core instead :rolleyes:
New BIOS with AGESA 1.1.0.0 from Gigabyte!
http://www.gigabyte.com/products/pro...?pid=3880#bios
Attachment 121004
9905 is not a fully tested version. It's just an optional update. 0083 is recommended for non-OC benchmarking.
By the way there is an updated CPUID package here.
This means that there is an increase in performance? :p::D
Quote: New BIOS with AGESA 1.1.0.0 from Gigabyte!
What AGESA version does the 9905 ASUS BIOS use?
Don't believe in miracles like a 50-100% boost just from flashing a new BIOS.
Testing games at such low resolutions is a complete fail, and I'm not listening to the BS that it puts the emphasis on the CPU. Benchmarks are supposed to be indicative of real-life performance, and nobody uses fast graphics cards and CPUs at low resolution. Furthermore, by not benching at enthusiast-level resolutions you run the risk of failing to fully stress the CPUs in that role, and you will give a false impression of how much better or worse a certain CPU is at gaming. There's no excuse for it.
So you think the way to test CPU performance in games is by limiting the overall performance with the GPU? Like AMD did on their slides (HD 6870, 1080p)?
I think there is value in both methods. No one is hiding anything; clearly the FX is more than good enough to play these games, as a Phenom II X4 or an i3 2100 is.
But you need to consider that they probably had limited time for testing, and that a dual-GPU configuration might show the same performance difference at higher resolution, or a new GPU next year, and so on...
Ideally we would have more test scenarios, varying the resolution, and perhaps more well-known CPU-intensive games. But anyway, I think by now everybody knows that gaming is mostly GPU-limited if you have a good enough CPU.
Still, it is a valid test, showing what the CPUs can achieve under these conditions.
With today's high-end video cards, people are more often CPU-limited than they think.
People say it's supposed to be a cpu benchmark but there are plenty of those in reviews. It's supposed to be a cpu benchmark in *gaming*. That means you don't negate the graphics card.
It is quite likely that SB will score 50% higher fps than Bulldozer in a game like Dirt 3 at low res, but lose by 5-10% at 1080p. What is the better gaming cpu?
By your logic you would test Crysis at 1080p and conclude that the Phenom X4 970 is only marginally slower than a Sandy Bridge i7, when in reality, in other games where the CPU matters, the Phenom is significantly slower.
That's why you bench at a lower resolution: to eliminate the GPU dependency and measure the CPU's potential.
Attachment 121006
Looks like the retail pricing turns out to be the best avg benchmark for cpu performance.
Would have been nice to see some 2nd pass x264 results.
I was going to build a BD rig but I'll probably wait and see what PD brings to the table at this point.
Both Intel and AMD are going to need to deliver something more compelling, I'm just not feeling the excitement to upgrade.
Finally a legit preview of Bulldozer! Can't say the same for Windows 7.
:P
Can't wait, 3 more days to see more!
:lsfight:
http://lab501.ro/wp-content/uploads/...0/spi_5066.jpg
If AMD are calling people tomorrow, they are probably trying to ensure this low-res gaming skullduggery isn't happening, which imo is their right, as they are simply ensuring that the gaming benchmarks are representative of the CPUs while gaming. Also, if BD is strong in minimum framerates then AMD would want that message to get across, which is also well within their rights.
Any site using low-res gaming benchmarks has an axe to grind with AMD, or is an Intel shill. Why do you think the unnamed blogger benchmarked Unigine at low res to "prove" his point? I've never seen anyone else do that, like ever. But even at that, BD was massively better in minimum framerates.
http://3.bp.blogspot.com/-pyUWXUhTaB...ld_unigine.jpg
You can bench at high res, but you have to find a CPU-dependent point. It is not as easy as you think.
Compare with that:
http://img718.imageshack.us/img718/2159/36073117.png
@down
Maximum in-game settings, ultra/high, with a GTX 580.
http://img717.imageshack.us/img717/8346/farg.png
Your pictures show the use of the built-in benchmark, which is a GPU test.
Have you even read any recent reviews?
Most bigger review sites state in their test methods why they use lower resolutions for games, often also provide benches at full resolution, and mention that this is not a real-life scenario.
And here you have a perfect example of why testing at real-world resolutions doesn't tell you anything about the CPU at all... you only test the GPU:
Attachment 121007, Attachment 121008, Attachment 121009
Granted, this test is older (March 2010, from when the X6 was released, tested with an HD 5870). Looking at the "real world results" you would think all CPUs are pretty much even, but what happens when you plug in dual 580s instead of a 5870... suddenly you are no longer GPU-limited and your "even" CPU gets smacked hard.
Both tests have validity; it's just that recently people dismiss low-res tests for whatever reason...
Judging by the combined average, the Intel system will have a higher minimum on average. Without a Fraps FPS history graph, those minimum and maximum figures mean nothing; all it takes is one split-second dip and the benchmarking program will register it as the minimum, while for the rest of the benchmark Sandy will have higher fps than Bulldozer.
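To put numbers on that: a single worst frame drags the reported minimum down even if the rest of the run is smooth, while a percentile-style "1% low" barely moves. A rough sketch of the idea in Python, assuming a plain text log with one frame time in milliseconds per line (the file name and format are hypothetical):
Code:
# Rough sketch: why a single "min fps" number is noisy, assuming a plain text
# log with one frame time in milliseconds per line (Fraps-style dump).
def load_frame_times(path):
    """Return a list of per-frame times in milliseconds."""
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def fps_stats(frame_times_ms):
    """Compute average fps, absolute minimum fps, and an approximate '1% low' fps."""
    fps = sorted(1000.0 / t for t in frame_times_ms)       # per-frame fps, ascending
    avg = len(frame_times_ms) * 1000.0 / sum(frame_times_ms)
    absolute_min = fps[0]                                   # one bad frame sets this
    one_percent_low = fps[max(1, len(fps) // 100) - 1]      # far less sensitive to a single dip
    return avg, absolute_min, one_percent_low

if __name__ == "__main__":
    avg, mn, low1 = fps_stats(load_frame_times("frametimes.txt"))
    print(f"avg {avg:.1f} fps, absolute min {mn:.1f} fps, 1% low {low1:.1f} fps")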
What exactly is the point in benchmarking at such low resolutions? RE2 and HAWX are 50% faster on SB than BD in these tests. Is that representative of the other results? Is that representative of a gaming benchmark?
Or is it simply representative of a useless benchmark of a game at low resolution while giving no indication of how the chips perform during actual gaming?
Do you think that might be because nobody games at 800x600 on enthusiast-level hardware?
Quote: Both tests have validity; it's just that recently people dismiss low-res tests for whatever reason...
It's not so much to directly assess real gaming performance as it is to showcase and isolate the CPU's ability to pump out fps, rather than showing how well the video card performs.
Turning up the resolution shifts the bottleneck to the GPU; if you want to bench the CPU you need to isolate it from GPU limitations.
Most review sites do a range of resolutions and settings these days, and when the NDA is over you won't have anything to complain about.
While we are at it, ComputerBase has a nice article on this:
http://www.computerbase.de/artikel/g...frameverlaeufe
If you jack up the resolution and AA/AF, it's pretty much irrelevant what CPU you use. It all depends on the GPU... on an HD 6850 even a Phenom II X2 produces nearly the same framerates as a 2600K @ 4.5 GHz...
So why not benchmark it to see if that's true or not?
If SB is 50% faster than BD, then why is it not 50% faster in anything else (except SuperPi :p)? Why only very low-resolution gaming? It is obvious to me that low-res gaming is not indicative of actual CPU performance. It's an aberration.
Again, Anandtech's cpu benches are :banana::banana::banana::banana:ty.
I HAVE NEVER PLAYED CRYSIS on MEDIUM SETTINGS like they use in that :banana::banana::banana::banana:ty review. THEY ARE ALSO USING A NOW ANCIENT GTX280 in that review. To top it all off they also don't even mention minimum framerates.
ANANDTECH'S CPU BENCHMARKS ARE USELESS.
They might as well just post a 3d mark bench.
It is indicative of the CPU's ability to generate game and frame data for the GPU. Whether you use a high or low resolution, the CPU still generates the same game and frame data, but at high resolution the CPU is held back by the GPU's speed at rendering the frames, which means at higher resolutions all you see is more of a GPU limitation than a CPU limitation.
In the end, the CPU that can pump out the highest fps, period, raw CPU speed unhindered, has more processing power, but that doesn't mean it translates into performance for other workloads.
Here is what I saw in actual gameplay with the settings I actually use, not some :banana::banana::banana::banana:ty canned bench like what Hornet just posted, as if that's supposed to mean something to me.
This is what I would expect from a "professional review". I should have left out the maximum framerate though.
This is an i5 750 at 4 GHz versus an i5 2500K at 4.7 GHz.
http://i308.photobucket.com/albums/k...0vsi52500k.png
Testing prefetching on the CPU:
Most games have one heavy render thread; only BF3 and a few others use multithreaded rendering (a DX11 feature). Most games use half or less of the total CPU power. For most games the data is rather static: maps and so on are loaded from disk and not modified during gameplay.
The main work is calculating the vertex data sent to the GPU. Long streams of data are calculated and sent to the GPU in chunks, with commands telling the GPU how to work with them.
The amount of work the GPU needs to do for each frame is much smaller at low resolutions. It can be faster than the CPU, and then it starts waiting for the CPU to process all that data.
Prefetching increases performance a lot when working with long streams of data: the CPU guesses where the next data will be and fetches it while the current data is being processed.
The i5/i7 prefetches data into both the L1 and L2 caches; Phenom only uses the L1, with limited prefetching functionality.
This is the reason why the i5/i7 can produce much higher fps at low resolutions. It's about prefetching and a fast cache.
If the game uses more threads and the maps are dynamic and complex, the data isn't nearly as predictable and prefetching isn't as important any more.
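The exact cache details above are one interpretation, but the general effect is easy to demonstrate on any machine: walking through memory in order is something a hardware prefetcher can stream ahead of the core, while a randomised access pattern defeats it. A rough sketch, assuming NumPy is installed (the array size and the size of the gap will vary by machine):
Code:
# Rough illustration of prefetch-friendly vs prefetch-hostile access.
# Summing the same elements sequentially vs in shuffled order touches identical
# data, but only the sequential walk lets the prefetcher stream memory ahead.
import time
import numpy as np

N = 20_000_000                      # ~160 MB of float64, far larger than any cache
data = np.random.rand(N)
seq_idx = np.arange(N)              # sequential access pattern
rnd_idx = np.random.permutation(N)  # same indices, random order

def timed_gather(idx):
    t0 = time.perf_counter()
    s = data[idx].sum()             # gather then reduce
    return time.perf_counter() - t0, s

t_seq, _ = timed_gather(seq_idx)
t_rnd, _ = timed_gather(rnd_idx)
print(f"sequential: {t_seq:.2f}s  shuffled: {t_rnd:.2f}s  ratio: {t_rnd / t_seq:.1f}x")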
Yes, because the custom benchmarks from ComputerBase (which even have frame graphs) are so canned... :rolleyes:
So how do you explain those occasions where the *much* faster CPU at low res has the tables turned and loses by a couple of frames at higher resolution? Nehalem was notorious for this for a long time, losing at high resolution to the 940 BE. If it's always faster, why was it losing so often at the highest res? Maybe a different kind of bottleneck starts to show, one that moves the fps in favour of the AMD chip at actual gaming (i.e. GPU-bound) resolutions?
If SB is 50% faster at low res all the time, it stands to reason it should be at least 1% faster at high res all the time, right? We'll see.
Since these games probably can't take full advantage of 8 threads, once you start considering the advantages of Sandy Bridge it makes some sense to see a big difference.
1280x1024 is not that low compared to 640x480, even more so with details on high.
This is indicative of the performance in a less GPU-constrained situation.
I know it's perhaps irrelevant if in a real game (higher res, or a slower card) the GPU limits far more than the CPU, but comparing CPUs when the GPU is the biggest limiting factor isn't ideal either, which is what AMD did with their slides.
So again, think about it this way: they are using a single GTX 580. Now let's say you increase the resolution to 1080p and add another GTX 580 (SLI, or say a 2012 high-end card); I think the situation will look mostly the same. But now use a GTS 450 instead of the GTX 580 and keep the settings: probably both CPUs will score a lot less than 100 fps and achieve about the same performance in these games (basically the same effect as increasing the resolution without adding a faster card).
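The argument boils down to a toy model: the frame rate you see is roughly the lower of the CPU-limited rate and the GPU-limited rate, so a CPU gap is only visible when the GPU ceiling sits above both chips. A small sketch with made-up numbers, purely to show the shape of it:
Code:
# Toy bottleneck model: observed frame rate is roughly the slower of "how fast
# the CPU can prepare frames" and "how fast the GPU can render them".
# All fps figures below are invented purely for illustration.
def observed_fps(cpu_fps, gpu_fps):
    return min(cpu_fps, gpu_fps)

cpu_a, cpu_b = 180, 120          # hypothetical CPU-limited frame rates for two CPUs

for label, gpu_fps in [("low res / fast GPU", 300),
                       ("1080p / single GPU", 90),
                       ("1080p / SLI or next-gen GPU", 250)]:
    a = observed_fps(cpu_a, gpu_fps)
    b = observed_fps(cpu_b, gpu_fps)
    print(f"{label:28s} CPU A: {a:3d} fps  CPU B: {b:3d} fps")
# The 50% CPU gap shows whenever the GPU ceiling is above both CPUs,
# and disappears as soon as the GPU becomes the limit.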
Raising game settings can have an impact on CPU performance.
Keep things like AA/AF/tessellation off, leave the rest on high, and use a high resolution with really strong GPUs; that can show CPU bottlenecks since the GPU should be relatively relaxed.
By the way, I find it absolutely fine to run games at the settings people actually play at, because it gives a quick yes/no on whether the CPU will be the limit or not.
And running things at the lowest settings may give very artificial results because of how far it is from what people actually do. If people want to worry about FUTURE games being CPU-limited, then they should just buy a chip with more cores than are used today.
Max/min figures on their own mean bollocks.
http://www.computerbase.de/artikel/g...frameverlaeufe
Check the last graph for "The Witcher 2". Though Intel has the biggest drop there, it's for a single frame, and it is consistently higher than the X6... does this make the X6 a better overall CPU?
Min/max fps are far too influenced by other things. Is there a high-priority call from the system that pushes all other threads to the back for a few milliseconds? Etc., etc.
http://www.youtube.com/watch?v=umDr0mPuyQc
Pls not this again... :eek:
Increasing the resolution, or switching to a slower GPU, cuts off those high-fps sections where the CPU doesn't need to work hard: areas where there isn't any fighting, just running around with no enemies. The GPU then evens out the prefetching advantage because it caps how fast frames can be produced in the easy parts.
The hard parts for the CPU, the parts that are CPU-dependent, will start to show.
Games aren't linear; the workload differs a lot depending on what's going on.
I think BF3 will have maps with up to 64 players. That will probably be a lot of work for the CPU.
I disagree. Take an Intel and an AMD system, then load up a game with indoor and outdoor areas and go inside; I bet the Intel system will score much higher indoors, and much higher still if you look up at the ceiling. Intel CPUs do the easier stuff better, and this leads to higher average fps (and much higher maximums), but they do the harder stuff no better or worse. That's why SB's fps craters in Unigine at the difficult parts. The thing is, the Intel CPUs are scoring much higher in the parts where the AMD CPU is already scoring very high fps; it's practically meaningless whether it's 150 fps or 250 fps.
You'd better not check this thread then...
http://www.hardwareluxx.de/community...en-720931.html
The frame graphs are pretty much all the same; the only difference is the position on the y-axis and some anomalies where you have dips of a single frame.
If the point is to show the fps difference in CPU-bound games, why not do that? Why not show games like StarCraft or BattleForge, but at actual gaming settings? Surely that would be infinitely preferable to showing games at low resolution?
A lot of people are wondering why we are seeing results slower than last-gen Phenom II.
BD is a new architecture that is completely different from Phenom II. With so much change, there was never a guarantee of it being faster. AMD was more focused on finding a way to win the "core" race, and in the process lost a lot of single-threaded performance. It's not really much of a surprise when you think about it.
What they did do, though, was lay a foundation to build upon. With BDII they should be able to get the single-threaded performance back up, and they can start adding more modules for better multi-threaded performance.
The 8150 vs 2600K isn't 8 cores vs 4 cores; it's 4 modules vs 4 cores with HT.
Because most reviewers have a fixed set of benchmarks, if they add a new game they have to retest all the previous hardware with it. Using an available benchmark and making an artificial setting is easier, especially when you already have the data.
That's also the reason why Anandtech is still stuck on a half-decade-old benchmark (SYSmark 07). People want to compare results, and Anand has a pretty huge library of tested CPUs... if they replace one benchmark they need to redo it for 100+ CPUs.. :p:
It's a damn preview, goddammit, not a review. The full review will come on the 12th like the others... You people do not deserve to get info ahead of schedule because it seems you are not able to process it properly.
I told them twice, too, that it's a "preview".
Quote: It's a damn preview, goddammit, not a review. The full review will come on the 12th like the others... You people do not deserve to get info ahead of schedule because it seems you are not able to process it properly.
The title of the article reads "AMD FX-8150 Bulldozer Preview".
Hard to see or understand, it seems.
Mad Pistol, yes, but with a different NB.
Anyway, for AMD it's clear now that they should have designed a new socket from scratch.
Maybe it would fit and work better with the BD architecture.
Socket AM3+, which is basically AM2, is too many years old, even compared with LGA 775.
Yes, but I disproved your conclusion. AMD has done the same thing going from Deneb/Thuban to Bulldozer (Zambezi). The reason for the + appears to be improved power circuitry (more current available) as opposed to vanilla AM3. Also, the 9x0 motherboards have SLI support as well as native BD support.
Read the specs and look at the die shots of BD. Bulldozer is a complete departure from Deneb/Thuban.
Most of the loss in MT is from the loss in ST. The 4-module design should roughly equal 7 cores (assuming the MT app scales 100%). With each core of each module not being as fast as a Deneb/Thuban core, there is a shared performance drop across all the modules.
What they basically did is revert back to a quad core and give it a form of hardware-level HT that scales better, but they lost ST performance by doing so.
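For what it's worth, the "4 modules ≈ 7 cores" figure can be sanity-checked with simple arithmetic: AMD's often-quoted claim was that the second core in a module adds roughly 80% of a full core, and any per-core deficit against Thuban then eats into that. A quick sketch; the per-core ratio used here is a made-up placeholder, not a measured number:
Code:
# Back-of-the-envelope version of the "4 modules ~ 7 cores" estimate.
# The ~80% figure is AMD's often-quoted scaling claim for the second core in a
# module; the per-core speed ratio vs Thuban is a hypothetical placeholder.
def effective_cores(modules, second_core_scaling=0.8):
    """Throughput of a CMT design expressed in 'full core' equivalents."""
    return modules * (1 + second_core_scaling)

zambezi_equiv = effective_cores(4)                      # ~7.2 "cores"
per_core_vs_thuban = 0.85                               # hypothetical ST ratio vs a Thuban core
relative_mt = zambezi_equiv * per_core_vs_thuban / 6    # vs a 6-core Thuban at 100% scaling

print(f"Zambezi ~ {zambezi_equiv:.1f} Thuban-class cores before the ST penalty")
print(f"With a {per_core_vs_thuban:.0%} per-core ratio, MT throughput ~ {relative_mt:.2f}x a Thuban X6")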
The problem is, they would have been much more competitive with Sandy Bridge if they had shrunk the X6 (I'm not sure why guys keep saying "add two cores!"), added 4-6 MB of cache between L2 and L3, and bumped stock speeds up to 3.6-4.0 GHz single-thread. I'm fairly certain that is doable within a 95-125 W TDP on 32nm, and they should have been able to figure out some way to gain clock speed... Intel has had no problems with it, and AMD did it well in the transition from 65nm to 45nm.
X6s that could OC to 4.6-4.7 GHz on water at 32nm, with a ~3% IPC boost from the extra cache, or that adopted Llano's Stars core plus L3, would have gained ~5-10% IPC over the current gen and would have been a lot better than this.