The only botched result is Direct2D hw acceleration in IE9. The game is Direct3D and probably ran as it should. We have a user here who will run it on his i3 laptop for comparison soon.
What is interesting here is Zacate appears to be 1.5x faster in that, whereas in the other browser/city of heroes/amazon video it's more often at least 2x faster. The 520M has 50% higher clocks, so that might make even more sense. (note these are totally different benchmarks being discussed - there is nothing to link the city of heroes/amazon bookshelf demo to this AMD spacewhatever one).
ok i got home and tested it myself
i have 4850 xfire, and those never went up a hair during any test
my 1055T however saw about one core being near maxed out, so 4ghz of a modern cpu can be used. i did not see any real multi-core scaling at all.
even at full 1080p i still get 1900-2000 on the regular, and 600ish (660 tops) on the hallucinogenic version. shrinking the screen DID NOT reduce cpu usage, or affect my score much at all. i also noticed that sometimes task manager doesn't even report the same cpu usage (in one run i get 30% of a single core being used, the next 100%), but with the same scores. something's really just weird.
looks like a mobile chip with no turbo will not enjoy that benchmark much
anyone have a task manager cpu usage screenshot for a low speed laptop quad or anything?
more testing
amazon uses about 5% of my cpu, none of my gpu and im 60fps locked. no idea how stressful that site really is.
i think ill have to downclock my cpu to 800mhz and do this all over again and see what happens
Definitely a limitation of some sort, here are the scores i got on my h/w and FireFox 4 Beta 5
And the bookshelf demo is pretty much at a solid 60fps.
e6850, 8800 ultra and 8gb of ram - all stock
http://imgur.com/h032H.png
The results could all be the same because the 2D acceleration of most cards is roughly on par. Unless Direct2D uses shaders instead of the traditional 2D engines in cards (no idea), there wouldn't be a lot of differentiation between cards. Low end cards and high end cards are all the same.
As for CPU utilization, I saw someone say it pegged a core, but that isn't what I'm seeing at 4GHz. I tied IE9 to core 0 for this test. It just hits very mild utilization. GPU utilization never exceeds 22%, however that is calculated. It may not be accurate for what we're trying to measure since this is Direct2D.
Test 1 & 2
http://pcrpg.org/pics/misc/IE9-T1-CPU.png http://pcrpg.org/pics/misc/IE9-T2-CPU.png
I'm starting to wonder at these 1900+ intel scores in the Psychedelic test btw. Is there any way to change the name of it to see if the scores stay the same or plummet? :D
Disappointing, I just looked at the AMD video.
You can clearly see that on the AMD, FRAPS is in the top-left corner of the window - meaning Direct2D - but on the Intel demo FRAPS shows up in the game, yet not in the psychedelic demo, meaning it wasn't GPU accelerated.
Shame on AMD.
I'd be interested to know exactly how this proves "shame" on AMD. Couldn't it be something else, e.g. the intel system not knowing when it should turbo in 2D etc?
Have you run this psychedelic test without gpu acceleration btw? It's a lot worse than what this intel system is showing.
I can test it on my desktop i3 when I get home. I can test it both on stock and OC'ed.
something mighty strange about all this.
AMD hasn't "disabled" gpu acceleration on the intel system - try running the psychedelic browser test in your normal browser and you'll see that the performance is in single digits when there is no gpu acceleration.
http://ie.microsoft.com/testdrive/
There you go, see how Psychedelic Browsing runs on your current non-accelerated browser. Single digits?
Just look at this page. GPU vs CPU.
http://www.itwriting.com/blog/3003-f...d-enabled.html
jimbo75, have you seen the testing done by users here in this thread?
People with highend graphics cards and IGPs get very close numbers when using D2D acceleration.
ok i was lucky enough to just drop cpu speed using windows power profiles
so at 960mhz with no turboing, i still get 1700+ scores, and cpu was never above 40% for more than one core.
i really have no idea how anyone can get a bad score on these benchmarks
i think microsoft rigged the test to showcase ie9 in a better light. and amd marketing is just as sleazy as intel's... it's a sad day.
don't flame me lol!
3 pages later, still no conclusion: is the Intel result from AMD's test valid? If not, can we conclude the current GMA HD is as good if not better in 2D than the Ontario GPU?
Their test is definitely not valid :down: I don't know what they did, but I can see they have a different version of IE9 (which, of course, is no excuse for them).
I still want to test CoH to validate their 3D test, but downloading is painfully slow so it will take another day.
BTW, tried to reduce GPU clock to 200MHz:
Since people don't get any results that make any sense, I think it's safe to say that this bench doesn't show anything. It can only be used to show that there is a difference between software and hardware acceleration.
They should just run Left 4 Dead, which uses multiple threads and is moderately-to-highly gpu intensive.
I couldn't care less about web browsing apps.
Benches here = borked. I retract my earlier statement regarding a win for AMD.
The ie 9 ones are definitely just weird, the psychedelic one especially because it doesn't seem to matter what gpu you have or what the screen resolution is.
I think we can pretty safely assume that Zacate is twice as fast as an i5's graphics though (unless you believe 80 sp's are going to be slower on Zacate), so there isn't any reason why it wouldn't be performing twice as fast in benchmarks that actually show up gpu prowess.
need some quake 3 benches and 3d01. lol :)
Apparently AMD are using a different version of the IE9 preview than is available to the public (presumably the developer one). That pretty much renders all the browser tests invalid.
- Fry on S|A
Quote:
Couple of weird things i noticed about the Kitguru comparison video, they are running a different version of the Platform Preview than is available to the public. I just downloaded the latest one from M$ which is 1.9.7916.6000, the version they are using is 1.9.7930.16394.
:rofl::rofl::rofl::rofl::rofl: oh thx, there's no better way to start the day than with a good laugh. :rolleyes:
fyi, ontario is a max 18W TDP product so it goes against i3 330UM, i5 520UM etc with a 500mhz gpu, i don't think we need more info :D
some examples according to anandtech: an i5 540M with a 766MHz GPU is on par with the AMD 790GX chipset, which is an ATI 3300 with 40SP @ 700MHz.
Ontario has 80SP at an unknown clock, but the Mobility Radeon HD 5430 has 80SP @ 550MHz with a TDP lower than 8W, so you can start guessing (which you like to do all the time); that should provide about 50% more performance than the 790GX. So keep dreaming about your on-par performance. It will be double, and even SB will have a hard time countering that in the 18W parts mid next year.
you can argue as much as you want over the benchmarks provided by AMD and Intel, but thinking that GPU performance is on par with Clarkdale makes you an intel fanboy daydreamer.
omg....we have turned this thread into a news article?...they link straight back to us...lol specifically kl0012. gj!
http://hothardware.com/News/AMD-Zaca...hmark-Hijinks/
Yup I noticed that last night.
Btw looking at the kitguru video http://www.youtube.com/watch?v=pw14MgRHYJE (around 4 mins), you can see the window sizes actually go against the zacate machine. It's not much but the zacate system ran the browser tests at 1276x652 = 831952 pixels, and the i5 ran them at 1257x654 = 822078 pixels. If AMD was deliberately trying to hack the benchmark, they started off pretty badly. :clap:
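For anyone who wants to double-check, the window-area arithmetic above is easy to verify (the resolutions are read off the video, so treat them as approximate):

```python
# Verify the window areas quoted from the kitguru video.
zacate_px = 1276 * 652            # Zacate browser window
i5_px = 1257 * 654                # i5 browser window
diff_pct = (zacate_px - i5_px) / i5_px * 100

print(zacate_px, i5_px, round(diff_pct, 2))   # 831952 822078 1.2
```

So the Zacate system was pushing roughly 1.2% more pixels than the i5, a tiny but real handicap against it.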
there's too much rickrollin' going on lately on these forums. :rofl:
Need to see some game benches from APU. 18w is not much.
http://www.anandtech.com/show/3933/a...ormance-update
Anand update.
In this anand test, is the Core i5 running 2.4ghz with HT for the cpu part and Zacate 1.6ghz per core with no HT?
If that's true, then the gaming disparity is HUGE, because i still believe bobcat cores are not that powerful, Athlon II levels AT MAX IPC-wise.
so it was an i5 520M with a 766MHz GPU and zacate is about 50% faster in gpu performance; go find out how much slower the CULV with its 500MHz GPU is... ;)
even with SB for the 18W parts mid next year and its possible GPU performance, it will still be slower, not to mention the possible improvements to Ontario in that time frame.
I think Zacate may offer Turbo Core functionality, but on a more limited scale (think 2.2GHz for single-threaded usage).
edit: Like i suggested some pages ago, it was an intel GMA driver issue (the laptop manufacturer's fault, since on this model the official intel drivers won't install automatically). Anandtech confirmed it, thanks to the poster who provided the link for the update.
Quote:
Originally Posted by AT
Quote:
The updated driver brought the IE9 performance tests to parity with Zacate. In fact, it looks like the IE9 benchmark doesn’t scale too far with GPU performance (apparently discrete cards don’t score much higher than what we’ve seen here).
Summary:
Quote:
At this point we had an issue. The IE9 benchmarks AMD was showing off weren’t an accurate comparison of the two architectures. While valid for the only driver revision supported on this particular Core i5 notebook, the scores weren’t valid for a Zacate vs. Core i5 architecture comparison. AMD wanted to make sure there was no confusion about the GPU performance potential of Zacate so it allowed us to install whatever we wanted on both systems to validate the GPU performance we had seen.
Take a moment to realize exactly what just happened here. In an effort to convince us (and you) that it had nothing to hide and didn’t deliberately attempt to stack the deck, AMD gave us full access to the Zacate platform to do whatever we wanted. AMD wanted us to be completely comfortable with the Zacate comparison.
Zacate is 46% faster. Note the disparity of CPU clock speeds between the Zacate and the i5: 1.6GHz vs 2.4GHz (2.933GHz turbo!).
Quote:
Batman: Zacate 16.5 fps, i5 11.3 fps
Quote:
City of Heroes: up to 2x advantage for Zacate, 55% on average
AT says that Zacate is in no way optimized at this point in time (drivers, clocks, BIOS); there's still room for improvement. Batman ran for the first time on the Zacate system... The new driver didn't change the gaming performance of the i5; Zacate was still a lot faster (up to 2x in City of Heroes, like before).
Quote:
N-Body Simulation: ~2.6x advantage for Zacate
So there is no "rigging" of tests; it was just the laptop with the i5 that suffered from a GFX driver issue, which is not uncommon.
Think you missed this -> http://www.xtremesystems.org/forums/...&postcount=119
even with a 200MHz clock it scores ~1700 points on the D2D test.
Also, suddenly we saw an increase in CoH from sub-10fps to ~25fps on the intel platform...
Anyway, we all know that intel's current IGPs suck at gaming and an 80SP ati gpu will run circles around any currently available IGP. But for the most important part for laptops, 2D/HD content acceleration, both platforms offer the same results.
If you want to play games on a laptop get a discrete card.
zacate is just amazing!!! I think the CPU tests are under NDA, that's why anand didn't publish anything on them.
Damn... It's amazing work AMD has done.
well, it is targeted at low end/low power laptops and netbooks. These do not have the option of an additional power consumer simply to do something useful in 3D. And we are still comparing to a 35W i5 cpu, not the 18W with much lower performance... let alone the Atom series, which is even slower than that.
Well obviously you live in a reality where everyone can afford a high end gaming laptop with decent battery life.
Quote:
If you want to play games on a laptop get a discrete card.
But for the majority of people that's not true. If entry level notebooks on zacate are cheapish, smallish and with decent battery life, as it looks like they may be, and without Intel's constant game driver issues, then it looks like a great mainstream product.
I often work as a kind of support, and every other client is shocked that he mostly can't game on his new intel notebook at all. Sandy Bridge may well change that, but until now Intel IGPs suck ass.
So the CPU part is 32nm and the GPU part is 40nm? Or is the whole package 40nm?
Whole package(monolithic) is 40nm bulk made by TSMC.
The Argument that you can't play games on that level of IGP is quite flawed.
You can play Quake III on an atom netbook... but apart from nostalgia reasons most wouldn't want to.
However, there are plenty of current games that would play well on Zacate or even Ontario levels of performance; that's the difference. It's perfect for those cheap indie specials on Steam for example... which often have less than cutting edge graphics, but fresh and fun gameplay.
until someone programs an application using OpenCL and Zacate blasts past most i-models in computing power :P
I'm looking forward to the day when i don't need to buy a dedicated gpu; won't be long at the rate AMD is going. Games are already progressing a lot slower than GPU speeds, so hopefully APUs will catch up in the next few years. Results of this 'netbook' chip are pretty amazing for 18W, can't wait to see the better models.
"Take a moment to realize exactly what just happened here. In an effort to convince us (and you) that it had nothing to hide and didn’t deliberately attempt to stack the deck, AMD gave us full access to the Zacate platform to do whatever we wanted. AMD wanted us to be completely comfortable with the Zacate comparison."
I bet even after this you are still going to see "Waaah AMD lied!!!" posts for the next few months. 50% increase at this stage in actual games at that platform level is massive.
i hope the future of turbo is able to do much more with fusion type chips.
think for a sec how a desktop has a 125W cpu and a 100W gpu. if you put them both into one package, you now have a 225W TDP, and a heatsink able to handle both. so when your gpu is not being used and you need better cpu performance, i think it would be incredible if the chip knew the load on both parts and adjusted turbo for both as needed. with some games only being dual core optimized, the chip should be able to shut off 2/4 cores, and turbo the cpu or gpu depending on which has the higher load.
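A toy sketch of that idea (this is not AMD's actual turbo algorithm; every name and wattage here is made up for illustration): split one package power budget between CPU and GPU in proportion to their loads.

```python
# Hypothetical shared-TDP turbo allocator for a fused CPU+GPU package.
# All numbers are illustrative, not any real product's limits.

PACKAGE_TDP_W = 225.0            # e.g. a 125W CPU + 100W GPU merged
CPU_MIN_W, CPU_MAX_W = 30.0, 180.0
GPU_MIN_W, GPU_MAX_W = 20.0, 160.0

def allocate_budget(cpu_load, gpu_load):
    """Split the package budget in proportion to load.

    cpu_load/gpu_load are utilisations in [0, 1]. Each side keeps a
    floor so idle units stay alive, and is clamped to its own ceiling.
    """
    if cpu_load + gpu_load == 0:
        return CPU_MIN_W, GPU_MIN_W
    spare = PACKAGE_TDP_W - CPU_MIN_W - GPU_MIN_W
    share = cpu_load / (cpu_load + gpu_load)
    cpu_w = min(CPU_MIN_W + spare * share, CPU_MAX_W)
    gpu_w = min(GPU_MIN_W + spare * (1 - share), GPU_MAX_W)
    return cpu_w, gpu_w

# A CPU-bound, dual-core-optimised game: most of the budget flows to the CPU.
cpu_w, gpu_w = allocate_budget(cpu_load=0.9, gpu_load=0.3)
```

With those inputs the CPU side ends up with roughly 161W of the 225W budget, which is the behaviour the post describes: idle silicon donates its thermal headroom to whichever side is actually loaded.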
the future of hybrid chips is really going to get interesting if we keep moving into that direction and get enough competition.
http://images.anandtech.com/graphs/s...3731/24423.png
If AT had tested batman with Low Quality that would have been interesting and directly comparable to SNB preview results.
EDIT: Tried the N-Body Simulation on my friends desktop with a GTX 480 i am getting around 600-660 GFLOPS after which it crashes so i guess 23 GFLOPS is not bad for such a tiny little thing.
I just want to add that I think it's reasonable to believe that you can save some amount of power when integrating parts. One chip instead of three different chips probably uses less power, and less power regulation is needed.
An interesting feature would be if the CPU throttled down to match the performance of the GPU. So if the GPU is capable of 60FPS and the CPU is capable of 130FPS, it could throttle down to save power without a drop in framerate. Some of that power could be used to boost the GPU. A frame limiter at, say, 60FPS controlling the turbo and throttle would save even more.
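A minimal model of that throttling idea, assuming frame throughput scales linearly with CPU clock (real workloads only approximate this, so take it as a sketch):

```python
# If the CPU can produce more FPS than the target (the GPU's 60 FPS cap
# here), scale its clock down so it just meets the cap instead of
# burning power on frames that will never be displayed.

def throttled_cpu_clock(base_clock_mhz, cpu_fps_capable, fps_target):
    """Lowest clock (MHz) that still sustains fps_target, assuming
    FPS scales linearly with clock."""
    if cpu_fps_capable <= fps_target:
        return base_clock_mhz        # CPU is the bottleneck: no headroom
    return base_clock_mhz * fps_target / cpu_fps_capable

# The post's example: CPU good for 130 FPS, GPU capped at 60 FPS.
clock = throttled_cpu_clock(base_clock_mhz=2600, cpu_fps_capable=130, fps_target=60)
print(clock)   # 1200.0 MHz: over half the clock (and its power) freed up
```

In practice the relationship between clock, FPS and power is nonlinear, but the direction of the saving is what matters here.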
hopefully it can be smart too. in WoW you're very cpu limited, so if it knew to OC one core, leave the second stock, turn off the rest, and maybe adjust the gpu until both one thread and the gpu are at the same usage (like 100% load on both), that would really make things easy. best framerate per watt ratio ever.
Nope, they weren't using the correct driver...
http://www.anandtech.com/show/3933/a...ormance-update
don't forget mobile parts will have lower clocks and CULV will have an even lower base clock/turbo.... and there are GT1 and GT2 versions.
according to anandtech's latest update, this was the full blown GT2 with lower clocks and it was more or less on par (some wins, some losses) with the 5450, so 80SP @ 650MHz
Yes, AMD wasn't using Intel's newest driver but they allowed Anandtech to manually install the newest driver and another game to test with.
Anandtech:
For a chip going into sub-$500 notebooks/netbooks, this little guy isn't too bad. It's not like people will be running Folding@Home and doing BigADV WUs on one of these things. CPU power isn't quite as important as the ability to accelerate media and do some casual gaming.
Quote:
It’s very rare for any public company to make an on the spot decision to let us benchmark and publish test data of an unreleased part without having ever seen it before. The first time the AMDers in the suite saw Zacate running Batman was when we installed it. To be honest, it was probably the most open and flexible I’ve ever seen AMD be. I knew if the IE9 numbers changed that it would call the City of Heroes numbers into question. By allowing us to rerun everything as well as add an additional title (one that we’ve used more recently) AMD handled the situation perfectly.
I'm really excited overall to see integrated graphics going this way. Considering how nice the AMD IGPs have been compared to what came before, and now seeing how much what is coming next outclasses the lustrous products of today, I think computing in general is about to become a lot nicer. It even affects us, since we're the people Joe Random-with-a-computer-that-has-integrated-graphics calls upon to fix their computer when they run "Free Tacos, Registry Cleaner, & Happy Fun Cursors.exe" from totallylegitvirusfreedownloads.com which they in turn found by clicking on a banner ad at turboclown:banana::banana::banana::banana:.org. They'll at least have a responsive UI.
That's great how AT was able to resolve the driver issue and compare another game. Can't wait for the new platform to release.
There goes all doubts. This is great! Still, I'd rather see cpu performance (we know how the gpu's going to perform anyway). If all goes well... well... I want to host a server on one.
fusion is a winner....
That is an insanely powerful chip for its specs IMO. It looks like fusion is going to be in my next laptop for university a few years down the road unless intel beats them with SB or whatever's after SB. To be completely honest, in my experience, at university, there are three uses for laptops, notetaking/reading/research (easily adequate), light gaming (adequate), and for some specific people, heavy CPU usage programs such as rendering which is the only thing I'm not too sure on for fusion. If the program supports using the GPU to assist the CPU for stuff like that then AMD just got me to buy something other than their graphics cards.
Once you get rid of the anemic performance of Atoms (which the Athlon II / Turion II Neo shows the path past), the next things are core frequency and the IGP.
The current Turion II X2 Neo K625/K665 1.5/1.7GHz @ 15W should be a great choice for a home server (with 64-bit and virtualization support). What Ontario/Zacate will bring is lower consumption and higher frequencies at the top of the line.
Say Zacate with 1.8Ghz-2Ghz dual core.
Yep. AMD has done a first class job with this one; netbooks are going to be very different in a few months. It's better than just casual gaming: I can casual game on an Atom netbook (Plants vs Zombies, Luxor, etc), but here we can get to some gen -1 type action games.
Mobile SB parts are GT2 only and have the same turbo clocks as the desktop models. We know nothing about LV. If they follow the same trend as the previous i5s, we get the base clock of the normal mobiles as the turbo mode for the LV models.
Which likely will be 650MHz. It will be slower than what anand has shown... or not, depending on how much they can extract from drivers. :p:
well at least amd redeemed themselves by letting anand play with the test.
^^And Ontario still reigns superior in games, even at this early stage of clocks, BIOS, drivers etc. When it launches on the market, I bet we will see even better performance and lower power.
Have tried the Batman demo on my i3-530 with default GPU clock (733MHz).
The FPS varies between 12 and 22 with an average of ~14-15 FPS.
CoH is still downloading.
Here is the same position as in Anand's test. So his test seems valid; however, i don't think the i5 gpu was running at 766MHz.
The Zacate in the preview is running the bobcat cores at 1.6GHz (per Fudo). And obviously, clock for clock, they are slower than an i5/i3.
A nice idea would be comparing this Zacate to a Turion II Neo 1.5GHz with an HD4225. Both bottlenecked by cpu frequency. And Zacate with just single channel memory.
The difference is 1FPS between yours and AT's test which can be left to error margin or other factors.
http://images.anandtech.com/reviews/...te/i5-8897.jpg
happy to see it was a fluke or else i would lose respect for AMD. Impressive performance for the size and wattage :up:. In 8 years we won't need gpus for gaming, it'll be all APUs. My last name is Nostradamus btw.
Bullet Physics demo here:
http://pcper.com/article.php?aid=1003
Quote:
The demo itself wasn't particularly exciting but the devil is in the details. Zacate was running a DX11-only, Shader Model 5.0 cloth simulation that will be part of the upcoming Bullet Physics, the open-source physics engine for gaming and 3D rendering. The Intel Core i5-520M based notebook on display obviously couldn't even open the demo, and all early indications are that Sandy Bridge will not be a DX11-ready GPU either, giving AMD's APUs an advantage in a specific area through 2011.
You guys are giving AMD way too much slack. It should have been obvious that something was amiss with the initial benchmark used, and AMD was obviously trying to pass off their performance as higher than it actually is. I don't think AMD is incompetent enough that they couldn't get simple driver things resolved on a test platform for a comparison.
They just happened to let a driver issue slip by that happened to inflate their performance?
People are being paid money when they set up these things. It's an honest mistake when it's Joe Schmoe, but when a professional company does it? They shouldn't be making mistakes like that.
Some of you guys piss on review websites that don't use updated drivers in reviews. This is even worse than that, because this info (driver dates) isn't readily available, whereas in most reviews they tell you exactly what drivers they used.
Basically anandtech caught them cheating and they had to back off their words and give anand the freedom to do what he wanted.
If they didn't, Anand would have published that AMD was cheating on his website and that would have been bad press, even with AMD users.
Btw:
http://images.anandtech.com/graphs/5...3624/21599.png
Means that Zacate gpu perf is something like 790GX-890GX.
They should have known that from the beginning from the performance, and corrected it, as Anand did. They just happened to have convenient driver issues and didn't see it until Anand did? What's the point of running a broken platform?
For the guys at AMD setting up this test, it's like sending a kid to town with a knife. They should have caught the error, or in this case, they let the error slide to inflate numbers.
They should have seen the problem earlier and fixed it. These guys are not idiots. If they couldn't get the problem fixed beforehand, they should have used a different platform or laptop.
If Nvidia did this, you would be burning them at the stake and not blaming driver issues.
And it was a publicly available part. This just highlights the uphill battle intel is going to have with driver issues. The 20 years ATi (and now AMD) has spent developing a high performance, compatible, feature-rich, up-to-date, industry standard software stack shouldn't be taken lightly. Intel's graphics drivers are notoriously woeful.
As can be seen, game performance was minimally impacted (practically the same CoH numbers with the new driver). And it's actually intel's fault, since it's their platform, their drivers, their responsibility. Users shouldn't have to do manual tricks and acrobatics just to get the latest graphics drivers running on their laptops.
Hopefully not, because the HD4290 runs 40 shaders at 700MHz whereas Zacate is supposed to have an 80-shader part at 500MHz. If 80 shaders @ 500MHz is only equal to 40 shaders @ 700MHz, it points to a lack of optimization, driver issues, among other things.
The 5450 is supposed to have a 650MHz core clock and 104 GFLOPS of theoretical performance. Now that the N-Body simulation on Zacate gave 23 GFLOPS, can anyone pls test a 5450's N-Body bench and report the GFLOPS they get? i don't think it will be close to the 104 mark, but most likely in the lower double digits.
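For what it's worth, the 104 GFLOPS figure is just the standard peak-throughput formula: each stream processor can issue one multiply-add (2 FLOPs) per clock.

```python
# Theoretical single-precision peak: SPs * 2 FLOPs * clock (MHz) / 1000.
def peak_gflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz / 1000.0

hd5450 = peak_gflops(80, 650)   # 104.0 - matches the quoted 5450 figure
zacate = peak_gflops(80, 500)   # 80.0 - at the rumoured 500MHz Zacate clock
```

Against either peak, the 23 GFLOPS measured in the N-body demo is well under theoretical throughput, which is normal for a real kernel, so a 5450 landing in the low double digits would not be surprising.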
A 10.1inch display IPS with aluminum chassis...dirrty
I wonder if Catalyst will be available on Zacate also, or just Llano. I doubt AMD would provide overclockability for Zacate, but it would be pretty cool.
Do you really think they knew about it? Isn't it obvious that a 10x difference will make people want to verify it themselves? Why would AMD do that to themselves knowing they would get caught? I would understand if the difference was 20-30% across the board, but it was an isolated bench with a massive difference due to drivers.
It's an honest mistake and the reason was plausible. Even if Nv did this, it would be understandable if the reason given was the same.
Regarding your comment on nvidia: when they used a fake card with wooden screws (:ROTF:) they didn't let journalists check it in person to confirm it was real.
AMD let the guys at AT reinstall the drivers and re-do the tests, to show how open they are. Plenty good for me.
There is no more ROP in ATi Radeon HD; it's now called the RBE.
You're comparing old data, acquired with a different platform/drivers and most importantly a DIFFERENT CPU.
So this comparison is invalid, unless the numbers you posted are with some 1.6GHz dual core on single channel memory ;-) (which i highly doubt).
Not to mention, it's not the same walkthrough as in anand's preview; different parts of the game can have substantially different performance numbers. We have to wait for some comparable benchmarks. All we know for now is that zacate (1.6GHz dual core, 80SP) is 40% faster than a 2.4GHz Clarkdale with intel's IGP, in that particular instance of a Batman run.
Like others have pointed, you are comparing apples to oranges. Let me guess, they used some uber i7 running @3GHz for this bench?
At lower resolutions, the CPU becomes an important factor, and the fact that the i5 was running at 2.4GHz (50% faster than Zacate) and was still 40% slower speaks volumes.
At lower resolution it is more cpu bound because normally the graphics cards are faster. But as the screenshot indicates, there is a performance drop for each card. This means that the game at that resolution is gpu bound and not cpu bound (cpu bound would mean the framerates were pretty much even). If we had the speed of the i5 in that benchmark with those settings, we could see how representative it is.
There's no reason to differentiate between cpu and gpu when looking at zacate, because the cpu part is not replaceable. What is the point in a fast gpu if the cpu can't handle it?
Quote:
Originally Posted by SimBy
40% perf advantage over i5-520M is not that great speaking about gpu which is supposed to be in range of HD5450.
AMD just made it so netbooks will have more gaming power than MOST laptops ever get. Why? Because so many people buy laptops with super low end graphics, and this is higher than any IGP to date.
why get so mad? why try to make it seem weaker than it really is? it's 80SPs, and i hope it can OC with CCC like any other amd gpu can. 80SPs > 40, that's a fact
No one is getting mad about it, but if we can speak about Intel's "unbalanced" cpu/igp configs, why can't we speak about "unbalanced" AMD configs? AMD did not demonstrate the perf of the cpu part of zacate, which raises some concern. Reading this thread, i get the impression that some ppl think the main purpose of small and thin notebooks is to play 3D games.
testing of a mobile platform is much less about max perf than about total perf at a fixed energy cost
since that system is not a retail product, there's no way to get an accurate total power consumption number, and then there's no way to know what kind of battery life you get at the 1.x GHz they tested. amd is pushing for a lot of the heavy stuff to be done by the gpu for a reason. what kind of tasks would you be running that are better with 3GHz vs 2? you would, however, notice if applications are choppy because there simply isn't enough power to run them.
The idea that the gpu would be in the range of the 5450 is hokum.
5450 specs:
Radeon 5450 512MB DDR3
Stream Processors: 80
Graphics Core Clock Speed: 650MHz
Texture Units: 8
Texture Fill-rate: 5.2 Gigatexels/sec
ROPs: 4
Pixel Fill-rate: 2.6 Gpixels/sec
Memory Clock Speed: 800MHz
Memory Interface: 64-bit
Memory Bandwidth: 12.8GB/sec
Typical Board Power: 19.2W
Let's assume the rumour is true that the gpu in zacate runs @ 500MHz; that is 150MHz or ~23% less.
Memory bandwidth: 12.8GB/s. What is the bandwidth on Zacate? (and it's shared) + the latency.
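The fill-rate and bandwidth numbers in the spec list can be re-derived from the unit counts and clocks (fill rate = units × clock; DDR3 moves two transfers per memory clock):

```python
# Re-derive the Radeon 5450 figures from its listed specs.
core_mhz, mem_mhz = 650, 800
texture_units, rops, bus_bits = 8, 4, 64

texel_rate = texture_units * core_mhz / 1000.0     # 5.2 Gigatexels/sec
pixel_rate = rops * core_mhz / 1000.0              # 2.6 Gpixels/sec
bandwidth = mem_mhz * 2 * bus_bits / 8 / 1000.0    # 12.8 GB/sec (DDR: 2 transfers/clock)
```

Zacate, by contrast, shares a single-channel DDR3 controller with the CPU, so whatever its raw bandwidth is, the GPU only gets a contended slice of it, which is exactly the poster's point.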
Zacate was made to be as low power as possible while still having decent graphics. Like the previous poster said, it gives 40% higher gaming performance with 50% of the energy consumption (at cpu level) compared to current notebooks without a dedicated card. Just like SB is a revelation in on-die gpu performance for desktops and high-end notebooks, Zacate is that for the low power market.
edit: I believe Zacate is geared towards gpu enhancements for applications (flash, HD, ...) while keeping power consumption as low as possible. That was the primary design goal of the APU: having another type of calculation unit that can handle certain tasks at a much higher speed. Considering their gpu supports DX11, OpenCL etc, developers have the opportunity to optimize their applications for these things (whether that will happen is a whole different story).