:D Because you didn't post for several days, I was already thinking you were another corpse in the Ganges :D
Fair enough. So, can you say whether BD will work in current AM3 boards? :p:
Wasn't it mentioned already that BD would? As long as VRM is good enough and there is a BIOS update supplied...
Not on current boards; I don't believe it was ever stated that it would.
We're talking about a completely new architecture. It's similar to how Core 2 and the P4 both used LGA 775, but only P965 and newer chipsets actually supported Conroe. Should BD fit in AM3, we'll probably need a new chipset, not just a new BIOS.
I believe it is AM3r2 that will support the BDs.
Yes, but you're talking about Intel, who just want more money and will design a new socket every time to make you spend more so you can use the latest hardware. It was stated that BD will use the AM3 socket, but whether it will work on our current motherboards is another story we haven't heard yet.
What I would like to see is a socket like the ones on server boards and Intel's boards. At least that way we'd no longer have to worry about bending the blasted pins. Anyone else agree?
At least BD will work in RD890 mobos.
http://img.photobucket.com/albums/v4...8e6961f4-1.jpg
Here is the latest official AMD roadmap:
http://prohardver.hu/dl/upc/2009-11/13063_deskroad.png
Is it enough?:)
Bulldozer (Orochi) is AM3 compatible. This has been official since the last Analyst Day. AM3 compatible means AM3 compatible, no hidden meanings etc. The one thing people will need is BIOS support.
Why isn't it a good thing? Because you won't see sky-high SiSoft Sandra memory benchmark numbers? On the desktop, 3 or more channels are useless even for an 8-core chip like Orochi; 2 channels of DDR3 @ 1600MHz (the JEDEC standard) is perfect. Why do you need higher HT link speed? HyperTransport 3 is MORE than enough for any desktop chip; you won't see any benefit from a higher HT link speed, since HT speed is decoupled from IMC speed on any modern (post-K8/K9) AMD design. I say compatibility with the AM3 socket is a great thing: it keeps the cost of validation and testing down (CPU-level only, since the chipset is already validated), and partners love this (it keeps their R&D costs down). Not to mention it's great for end users too ;).
For 8+ cores AMD will probably go MCM even on the desktop, so quad channel is a given there, but for 2011 Orochi will do just fine.
JF already said quad channel for BD :/ ??? lol. The IMC could use a speed update to support higher memory clocks, since Family 10h is limited to around 1800MHz.
I haven't seen an i7 running 6-6-6-18 at 1800MHz yet, though, lol. Sure, they run 2250 and some 2400MHz, but that's only 25% more MHz.
Look at uncore on Phenoms: a Phenom at 3.2GHz with a 2500/2600MHz uncore performs on average like a 3400MHz Phenom with default uncore. I think Bulldozer will be compatible with the AMD 800-series chipsets and primarily aimed at AM3+.
This is not exactly true.
http://ixbtlabs.com/articles3/cpu/ar...2009-7-p1.html
Up to an 18% gain in some desktop apps from the third memory channel. For more cores and higher frequency the gain may be much larger. Now if you consider real multitasking (running many apps in parallel), the difference can be even higher. This is why the 6-core Opteron doesn't show perfect scalability in SPECint/SPECfp.
The current 16GB/s aggregate HT bandwidth is equal to the bandwidth of one PCIe x16 slot. Considering upcoming SATA3, USB3 and PCIe 3.0 (and all this with a CF config), this is probably going to be a bottleneck.
Quote:
Why do you need higher HT link speed ??
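For reference, here is the arithmetic behind those two numbers, as a quick sketch: the 16-bit-wide, 4000MT/s HT link matches the figure mentioned in this thread, and the PCIe 2.0 values (5GT/s per lane, 8b/10b encoding) are the standard spec parameters; the function names are mine.

```python
# Back-of-the-envelope HyperTransport vs. PCIe 2.0 bandwidth, as discussed above.
# Assumes a 16-bit-wide HT link at 4000 MT/s (2 GHz, double-pumped) and a
# PCIe 2.0 x16 slot (5 GT/s per lane with 8b/10b encoding overhead).

def ht_aggregate_gb_s(width_bits=16, mt_per_s=4000e6):
    """Aggregate (both directions) HT bandwidth in GB/s."""
    per_direction = width_bits / 8 * mt_per_s  # bytes/s one way
    return 2 * per_direction / 1e9

def pcie2_x16_aggregate_gb_s(lanes=16):
    """Aggregate PCIe 2.0 x16 bandwidth in GB/s after 8b/10b overhead."""
    per_lane = 5e9 * 8 / 10 / 8  # bytes/s per lane, one way
    return 2 * lanes * per_lane / 1e9

print(ht_aggregate_gb_s())         # 16.0 -> the 16 GB/s aggregate quoted above
print(pcie2_x16_aggregate_gb_s())  # 16.0 -> one x16 slot matches the whole link
```

So one fully loaded PCIe 2.0 x16 slot can indeed saturate a 4000MT/s HT link, which is the bottleneck argument being made here.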
Up to 18%, you say? You can get 20% just by using JEDEC dual-channel DDR3-1600 memory instead of DDR3-1333. There goes your 3rd-channel advantage, with no additional routing at the PCB level and zero cost in CPU die area... BTW, the 6-core Opteron is using registered DDR2-800 memory, not the regular DDR3-1600 that BD will use.
The HT link won't be the bottleneck I assure you....
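The dual-channel-vs-triple-channel arithmetic above is easy to check; a quick sketch (peak DDR3 bandwidth is just channels x transfer rate x 8 bytes per 64-bit transfer; the function name is mine):

```python
# Theoretical peak DDR3 bandwidth: channels * MT/s * 8 bytes per transfer.
def ddr3_peak_gb_s(channels, mt_per_s):
    return channels * mt_per_s * 8 / 1000.0  # MT/s -> GB/s

dual_1333   = ddr3_peak_gb_s(2, 1333)  # ~21.3 GB/s
dual_1600   = ddr3_peak_gb_s(2, 1600)  # 25.6 GB/s
triple_1333 = ddr3_peak_gb_s(3, 1333)  # ~32.0 GB/s

# The ~20% figure above: DDR3-1600 over DDR3-1333 at the same channel count.
print(round((dual_1600 / dual_1333 - 1) * 100, 1))  # 20.0 (percent)
```

A third channel at 1333 still has a higher theoretical peak than two channels at 1600, of course; the argument in the thread is that the ~20% from the faster DIMMs comes free of any board-routing or die-area cost, while desktop workloads rarely saturate even two channels.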
lol, just when you think you've heard it all: now giving end users a good upgrade path is a bad thing :lol: maybe every single CPU should have its own socket :rofl:
also, informal, you must have a lot of patience :)
Yeh, right...
Yeh, very convincing arguments...
Quote:
The HT link won't be the bottleneck I assure you....
I wonder if any of those who defend the "upgrade path" even use some old AM2 board with the latest Phenom II. What I see is impatience waiting for the upcoming 890FX chipset to replace their "outdated" 790FX just because of the presence of SATA3. :welcome:
Quote:
Originally Posted by crazydiamond
Yeah, it's called math: 1600/1333. And it's called economics (3-channel routing on a board is a pain compared to 2-channel, and a 3-channel IMC costs more than a 2-channel one; pure logic). We are talking about the client space here. In the server/workstation space AMD will offer 2/4 channels and 2/4x the core count.
In the client space HT 3.0 is overkill; show me the opposite in any review. The IMC is decoupled from this clock, so it makes no difference to clock it any higher. The platform is not I/O bound in any case or any review.
Quote:
Yeh, very convincing arguments...
AM3 will not be an "old" platform; all current AM3 boards support anything an end user may need. The latest 890 ones support SATA 6Gbps and the latest USB standard (it's not a must-have as you want to portray it, though; I'm fine with my AM2r2 board, it OCs great and performs just as well).
Quote:
I wonder if anyone of those who defend "upgrade path" even uses some old AM2 board with latest phenom2. What I see is the impatience in waiting for upcoming 890FX chipset to exchange their "outdated" 790FX just because of the presence SATA3.
Friend of mine said he got a beta bios. I have no idea where the bugger got it, but since I gave him the board he's in heaven.
It's only a Phenom 9600 in it, though.
I'm currently hunting to see if there's a way to get a Phenom II or Athlon II in there.
On a side note, my M2N-SLI Deluxe also has a Phenom II 920 in it.
I don't get why some people seem to think that just because it's not a native AM2+ or AM3 board, it's useless.
These old boys have some life left in them
Well, I'm using a Foxconn A79A-S, for over a year now, with a Phenom II. In a few months it's getting upgraded with a Phenom X6 for a year more of usage. That's bad?
Quote:
I wonder if anyone of those who defend "upgrade path" even uses some old AM2 board with latest phenom2. What I see is the impatience in waiting for upcoming 890FX chipset to exchange their "outdated" 790FX just because of the presence SATA3.
And most of my friends' machines are running AM2 or AM2+ mobos with current CPUs. Even if the BIOS officially doesn't support a CPU, most of the time it works just fine.
My next mobo will be AM3r2 with a Bulldozer, but that's more than a year away for me, so yeah. Platform longevity is important to a lot of people.
ya...you're right
AMD shouldn't let current AM3 users have any chance to upgrade from the Phenom II/Athlon II they are using; just like Intel did to their LGA1156 users, lolz...
For me, as an end/desktop user, I don't benefit at all from increased HT speed/bandwidth. So why not let BD be fully compatible with current AM3 boards? It would help sales when BD is launched and keep existing users happy. A win-win situation, why not?
Initially, Bulldozer targets the high-performance desktop segment, where compatibility is much less important than performance. Computer enthusiasts usually tend to buy the latest and fastest hardware. The other 95% of users will never open the cover of the system box. Mainstream users will still get a new socket with the upcoming integrated processor.
And how do you calculate the influence of upcoming features (SATA3, USB3, PCIe 3.0) on system performance with the current 4000MT/s HT link?
Quote:
In client space HT3.0 is an overkill,show me the opposite in any review. IMC is decoupled from this clock so it makes no difference to clock it any higher. Platform is not I/O bound in any case or any review.
Good for you. But please don't decide for all users what they will need 1.5 years from now.
Quote:
AM3 will not be an "old" platform,all current AM3 boards support anything end user may need.The latest 890 ones support sata6 and latest usb standard(it's not a must-have as you want to portray it though,I'm fine with my AM2r2 board,OCs great and performs just as well).
What impresses me most is the fact that people who used to blame Intel for lack of innovation and for using an FSB on the desktop now suggest AMD do almost the same thing just for ghostly "compatibility", while a good new motherboard already costs less than $100. "Users don't need more memory channels...", "users don't need faster HT..." reminds me of something.
:up: Right! Bulldozer will come in 2011 for enthusiasts, with lower CPUs like Llano and Thuban for the normal segment. Yes, I'm a bit of an enthusiast: I need Bulldozer, I need Thuban in the next 2 months, etc.
Sorry to the 90% of you guys, but AM2/AM2+ is a dead platform going forward. The history of Socket AM2 is like Socket 939's: a very similar socket, DDR1 only, and how many years ago was that? :-/
Right, you really don't understand what you read. They compare a Bulldozer module to a current K10/K10.5-class core, which has 128-bit FP SSE and 2x 64-bit-port INT SSE, and claim it is as much as 1.8x faster than a current single core. So that raw math, and roughly 90% of K10 single-core performance per "core", is basically OK.
They "sacrifice performance" because in most cases a third pipeline/port generates more heat than benefit. That's a fairly well-known fact since P6/K7 times, when there were studies on how adding more than three ALUs+FPUs inside a core would impact performance and heat output, and the only opportunity they saw was to go to multicore designs, which are now widespread and common. Now they light up cores because they have steered the software world towards efficient threaded software designs, and there's nothing too bad about it, except we no longer see the enormous serial processing speed bumps we did up until Y2K.
You should read the article again.
They compare a module to a comparable standalone core on the same architecture. They say there's a 10% performance hit for the module design compared to a true dual core.
Everywhere they state that HT gives a ~25% performance boost while a module gives an ~80% performance boost.
So, a Bulldozer module is 1.8x as fast as a single core.
Read this sentence again:
And read this quote from informal, where he quotes Chuck Moore at AMD.
Quote:
As it turns out, this sharing of components across the cores impacts performance. Fruehe says that a pair of Bulldozer "cores" will yield about 1.8 times the performance of what a single, whole core would have."
Quote:
Bulldozer module can achieve ~80% speedup when running 2 threads (versus ~25% from hyperthreading)
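Turning the quoted figures into numbers makes the argument easier to follow. This is a sketch: the 1.8x and 1.25x multipliers are the thread's own quoted values, and the function names are mine.

```python
# Throughput relative to one full core, using the figures quoted above:
# a Bulldozer module running 2 threads ~ 1.8x, an SMT core running
# 2 threads ~ 1.25x, and an ideal true dual core would be 2.0x.

def module_throughput(threads):
    # CMT: the second thread adds ~80% (shared frontend/FPU), per the quote.
    return 1.8 if threads >= 2 else 1.0

def smt_throughput(threads):
    # SMT: the second hardware thread adds ~25%, the hyperthreading figure.
    return 1.25 if threads >= 2 else 1.0

# The "10% hit versus a true dual core" claim falls straight out of this:
print(module_throughput(2) / 2.0)  # 0.9, i.e. 10% below two independent cores
print(smt_throughput(2) / 2.0)     # 0.625, what SMT delivers per "core"
```

In other words, the 1.8x and the "10% performance hit" are the same claim stated two ways, which is what the posts above are arguing about.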
On K10.5, the 6MB L3 cache consumes almost the same die area as four cores with their private 512kB L2 caches.
Bulldozer cores will be lighter (maybe a smaller L1 cache per "core", totalling the same 64+64kB per dual-core module as in the K7-K10.5 releases). And L2 will be the same, or 1024kB per module, so they could easily squeeze in up to 6 such modules and again double the L3 cache to 12MB (originally they mentioned 16MB of L3 was projected for the Bulldozer CPU, AFAIR) and still stay within the same dimensions as previous CPU generations: K10@65nm / K10.5@45nm, ~250mm2. And above all that there's HKMG, which supposedly should deliver a big MHz jump; they even managed to squeeze 4x 3.4GHz into a 90W TDP on the current 45nm process.
Yep, two separate cores are always better than two threads inside one core, considering the power/performance ratio, better utilization, and easier optimization for a simpler core than for a proprietary HyperThreading derivative, which evolved from HTT(1) in the P4-HT to HTT(2) in Nehalem, and probably to some variation of HTT(3) in Sandy Bridge. So previous optimizations usually don't work and you need to recompile your work yet again and optimize for HTT, on top of SSE/AVX native code optimizations. But in the end SMT should serve Intel as well as CMT serves AMD; CMT just has the brighter future power-wise (according to AMD's bragging).
SSE5 is part of the CPU "module", and until the GPU part of the CPU gets inside the "module" it wouldn't serve as a GPU optimization, and that will probably never happen. The integrated GPU (which is not part of Bulldozer, btw) will communicate over PCIe (I had hoped for an HT/HTX bus) and that way will only serve for better integration and better HTPC (low-end server designs?). The only performance boost "SSE5" could bring would be some packing that shrinks the bandwidth needed for PCIe communication, or something like that, but that would benefit any device connected to that PCIe (3.0?) bus (e.g. a discrete GPU card), and I don't think they even considered that kind of tweak when they designed SSE5.
Quote:
Originally Posted by madcho
And what about the 6-core revisions of the K10(.5) CPUs: couldn't every core use the on-die L3 cache, and isn't it still 48-bit wide as in the quad-core (Deneb, PII X4)?
I think L3 sharing is pretty easy to extend to more than 4 cores once the TLB works properly in the first place (the famous pre-B3 K10 revisions).
Excellent, two hits without a miss :up: I hope more will come; it's refreshing to see someone on the forums who knows the real story behind all the HT mess and mixups :clap:
So they compare scaling within their own architecture? How then do two 1.8x-fast modules scale when working together: ~3.6x or less? :confused: If that marketing-BS comparison is true, it might be far worse than I thought.
Comparing scaling inside one module, which is lighter than anything released since the K7 saw the light of day... It's very self-explanatory :D ... to AMD PR minds, maybe. For others it's just confusing junk.
But that has become standard practice in the last 6 years or so, where a next-gen architecture's performance (GPUs, I have in mind) is almost never compared to the previous one except for the first model that comes out (and usually just the top model). Cheaper iterations of the new generation (usually more gamer-oriented) are not even compared to the previous ones, or across a wider range on the same setup. Well, there are some sites that still do reviews as they should, without subjective spin, but they have become the exception rather than the rule among the overcrowded quasi-reviewer sites, which serve as PR bulletin boards (no, not forums) and incentivized blogging.
You also confuse me in the second sentence, where you contradict yourself with yet another "THEY" and a 10% performance hit versus a true dual core. So they compare BD "core" to BD "module" scaling, and then again compare it to true dual-core scaling and claim 10% performance degradation?
The HT approach is bad for programmers. A well-crafted piece of software that aligns instructions to avoid dependency stalls, is cache-friendly, and minimizes branches gets no real benefit from HT. OTOH, a programmer who "just makes it work", without aligning instructions, without cache-friendliness, and with lots of branches, will see a speedup from HT. However, due to all the stalls and cache misses, the HT speedup will not compensate enough, so the program ends up slower overall.
There are exceptions to this though.. :)
With such comments, you should refrain from discussing about HT ( SMT ) in the future. Basically, you're talking BS.
Indeed. The head of AMD server marketing is the reference when it comes to the "real matter behind HT". :rolleyes:
Quote:
Excellent two hits w/o miss :up: hope more of it will come, it's refreshing to see someone on forums that knows the real matter behind all HT mess mixups :clap:
please no more circular HT/SMT arguments.
Nope. HKMG will not improve clock speed; it is just another necessity for energy scaling at 45nm and beyond.
In fact I would argue that HKMG is what is causing trouble for GloFo's 32nm SOI. The rumors about that node are not sounding very good.
OFC you never know what you'll mess up in your next generation (both of you guys, Intel included) :ROTF: But that kind of failure could be more of a catastrophe for AMD, and probably wouldn't allow an easy recovery in the next gen. BTW, you delayed BD for two years ....
(* seeing yet another CPU's premature birth, an architecture falling short of expectations: Bulldozer @45nm, after the catastrophic Barcelona/K10@65nm launch, as the final nail in the AMD coffin?
* waiting for Sandy Bridge finalized architecture?
* waiting for finalized AVX specs?
* FMA3 to FMA4 (improvement?) changes that will better accommodate the coding scheme AMD uses?)
.... so it would be utterly unexpected if you came out with a failure in a core that had two extra years of final development to clean out all the ghosts.
Anyway, you finally provided DDR3 support on your server platforms, and also got to test some of the HT 3.1 features that are ready (for how long?) and will be part of BD; it's already there on the HY-D1 chips and proven to work.
Is that the only K10 revision that accommodates HT rev. 3.1 and the HTA bus? Why are only 2x 800MHz links possible when HTA is enabled?
Anyway, I also found that turbo feature in the rev. E0 core unnecessary, but it's a nice marketing gimmick to counter Intel's "Turbo Boost"... far better than those silly PR ratings used to counter the P4's silly frequency pushing :D. And it's certainly a good thing for people that "need these kinds of gimmicks"... say, some 30+ year olds that had a 486DX as their first computer... those also featured that kind of turbo gimmick, so it's really Intel's late-80s revival mode :ROTF:
Yep, I know it's a necessity for further lowering leakage on already low-leakage SOI. And any leakage reduction should further improve clock speeds within the same TDP.
That would be a very bad thing. There's certainly a difference between implementing the same thing on bulk Si and implementing it on SOI, but GloFo claims a 40% improvement from HKMG on 32nm over the non-HKMG 45nm process, and everybody said that's stunning... but in fact it's not so stunning when you see how the 45nm HKMG process helped Intel reach almost 50% higher frequency (overclocking) than 65nm with their Core 2 tick, and probably at less than equal TDP (85%-90% of the TDP those OCed 65nm parts had).
Quote:
in fact i would argue that HKMG is what is causing trouble for glofo 32nm SOI. the rumors about that node are not sounding very good.
They might not be smart, but they're not the destructive BS you like to write.
It's certainly ten orders of magnitude better than your comment :clap: Of course, you could try to explain to us how HT on the P4 really worked, because it seems you have a different theory.
Quote:
Indeed. The head of AMD server marketing is the reference when it comes to the "real matter behind HT". :rolleyes:
Article from X-Bit, they quote John: AMD Bulldozer Microprocessors May Not Bring Dramatic Performance Boosts.
The whole "50% more performance for 33% more cores" thing: that's excluding clock speeds, right? So if the 32nm process is a little leaky, then to fit 16 cores in 105W it might clock 100-200MHz lower than MC, and that shows how much of an IPC jump there is. (But if that figure is at the same clocks, please ignore me.)
It's not important what it is IPC-wise; it's important what it is product-wise. It is claimed to be 50% faster, so it should do the task 50% faster than MC, be that with 8, 16 or, heck, even 128 cores. It's 50% regardless.
What if it were 50% faster with 66% more cores (assuming the die size stayed the same): could the product be considered worse than the current plan (33% more cores)? If so, why?
Who honestly cares that much about single thread performance these days? Most of the real number crunching apps are already multithreaded, and more so when these chips get to the hands of us mere mortals. ...and increasingly more when BD has been in market for a while.
We're only comparing one thing, when BD will be everywhere. Knowing a little more helps speculate on what it will be like with other apps, since desktop will be limited to 8 cores but up to 140W. If the 16-core server chip is going to run at 1.1V and the 8-core desktop at 1.3V, who knows what kind of clock scaling we'll see. I'm honestly not that concerned with servers, but that's all we have to talk about right now.
SOI only lowers subthreshold leakage. Gate leakage increases rapidly with thinner gate oxides, and thin gate oxides are critical for high performance. The problem with leakage is that it is on a per-transistor basis, so if you double the transistors you double the leakage current. On a new process you can either run transistors 10-40% faster or double the number of transistors.
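The per-transistor point above can be written down with the standard CMOS power model. This is a rough sketch; every numeric value here is an illustrative placeholder, not real process data.

```python
# Standard CMOS power model: dynamic power ~ activity * N * C * V^2 * f,
# leakage power ~ N * I_leak * V. Both terms are linear in transistor
# count N, which is the point above: double the transistors, double the
# leakage (and the dynamic power, at the same clock and voltage).

def chip_power_watts(n_transistors, f_hz, v,
                     c_per_tr=1e-16,        # effective switched capacitance per transistor (placeholder)
                     activity=0.1,          # fraction of transistors switching per cycle (placeholder)
                     i_leak_per_tr=10e-9):  # leakage current per transistor (placeholder)
    dynamic = activity * n_transistors * c_per_tr * v ** 2 * f_hz
    leakage = n_transistors * i_leak_per_tr * v
    return dynamic + leakage

p_1x = chip_power_watts(1e9, 3e9, 1.2)
p_2x = chip_power_watts(2e9, 3e9, 1.2)  # double the transistor budget
print(p_2x / p_1x)  # 2.0: total power scales with count, exactly as stated above
```

This is why a shrink's budget goes either to frequency or to transistor count, as the post says: both knobs spend the same power headroom.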
I don't have a different theory. The people who actually implemented HT ( SMT ) have one which is different from your nonsense. But what would they know ?
Enjoy the reading ( you can start from post 52 ) :
http://www.xtremesystems.org/forums/...ght=smt&page=3
Hans, the next time I am in the Netherlands I owe you a beer. Thanks for that one!
I have the same viewpoint for GPUs. Unfortunately, Nvidia / Intel's favourite argument against 'more cores' is to press for core to core comparisons, which are logically invalid as you don't buy cores, you buy a product.
I don't see anything wrong if a CPU with more cores delivers 50% more performance consistently over varied workloads. So long as it is priced competitively vis a vis its performance, I'm completely fine with that.
Exactly. (But I disagree that they're actually a good reviewer site when you hand them a real product.) And this is a reiterated news post (and A. Shilov is the only news guy that doesn't care about all that too much :D)
That X-bit labs article, "AMD Bulldozer Microprocessors May Not Bring Dramatic Performance Boosts", is a joke. Anyone reading a forum could have written it more correctly and with some level of interest ... e.g. "the first Bulldozer micro-architecture desktop/workstation chip code-named Zambezi (which belongs to the Orochi family, according to the firm) will feature eight x86 processing engines with a multithreading technology, two 128-bit FMAC floating point units, shared L2 cache, shared L3 cache as well as integrated memory controller. AMD also states that the new CPU will feature “extensive new power management innovations”."
So he refers to the Nov 2009 slides and still hasn't figured out that the FMACs are two per module, not "two per core (x8)", or whether the shared L2 cache will be shared (a) between eight cores or (b) between a core and what, exactly. And as usual X-bit labs has pumped out that news at least a 3rd or 4th time since the original piece on their site covered the matter (if anyone wishes to count), delivering nothing new or explanatory. So it seems X-bit labs doesn't like to talk about unreleased products either.
The difference is that Westmere is really fast, with almost ridiculous IPC. MC is fast too, but its per-core IPC really needs some improvement for desktop users.
It doesn't matter much if Zambezi is quicker than Phenom II if the performance gap between AMD and Intel continues to grow.
Which CPUs are in practice better than the X6 1090T? Hmm... the i7 965 is similar, and only the i7 975 and i7 980X (970) are better. First, I read maybe every review in the world (it's my hobby), so sorry, but your arguments are irrelevant to me here.
SB will not be faster than Gulftown, except in memory benchmarks :rofl:. The next high end of SB will come in Q3 2011; mainstream SB will attack the segment from around the i5 750 to the i7 950.
In practice?
In practice it's the Phenom II X6 that needs to be faster. In practice IPC is more important than core count, since desktop programs don't scale that well. I'd say that for the average user the Core i5 750 would be a better choice. Besides, you can overclock it 50% with ease, sometimes a good bit higher than that; you can't do that with a Phenom II.
http://www.xbitlabs.com/articles/cpu...t_6.html#sect0
http://www.anandtech.com/show/3674/a...55t-reviewed/9
The 1090T is a good choice ahead of Bulldozer: you don't need to buy a new motherboard next.
So it's cheaper and as fast, even faster. It's in line with the i7 965 already, and can clock like hell too.
touche... i guess they must be on that intel kool-aid
Is SuperPi your day-to-day task that you enjoy with the GF and the family???
I bet you enjoy your collection of Blu-ray movies more, right??? Rip them and encode them... this will require more cores... which is where AMD is pretty close to Intel in $/performance..... but yeah, I get your point... single-thread performance counts more than core count.
BORIS: my X6 1090T is stable at 4300MHz with an XG 1283 air cooler. I think not all i5 750s can do this ;-) (yes, Denebs are worse than Thubans for OC, about 3.9-4.1GHz on air).
I don't use Blu-ray; optical media is so 1995. ;)
But gaming is probably the most performance demanding activity that people generally do with their home computers. So I think games is an important factor.
Personally, I think it's a good idea to buy a processor that has good gaming performance, and then only switch graphics cards the next couple of years, and then change processor again.
That is perfectly in line with a simple shrink to a new process, given the scaling limitations.
BD however is AMD's first new core since the K7, and they had ample opportunity to start with a clean-sheet design. 12.5% without knowing the frequency (it could very well be higher than MC's) is nothing to write home about.
No, not all of them, but not all 1090Ts either. Most i5 750s I've worked with have done 4GHz with ease; since I built those systems for others I haven't really pushed them. And since an i5 at 2.66GHz can beat a 3.2GHz X6 in games and many other apps, I bet a 4GHz+ i5 can beat a 4GHz+ X6. And do you want us to bring an i7 into the picture?
The thing is, if Zambezi operates at 3GHz-3.6GHz it would be nice if it could compete with i7s in the same range; the current X6 Phenom II at 3.2GHz sits in the area of an i5 at 2.66GHz. An X4 needs 1GHz more to even get close to an i5.
So, that's why people want more than 50% more performance from 33% more cores. 12% per core won't put Bulldozer in i7 range, let alone against future Intel products.
Barcelona to Shanghai, at the same clock, brought a 10-15% increase in server workloads; even the desktop saw a good 6-8% jump (and up to 20% in games, which shows how cache-sensitive those apps are).
I think we need to give AMD some credit and to wait for actual results instead of trying to extrapolate single thread performance from an average figure AMD gave this early in the game. This thing has so many improvements on so many levels.
Are you for real? 1GHz? In which parallel world are you living... an i5 (no SMT, with Turbo) @ 2.66GHz can't compete with a 3.2GHz Thuban in real-world applications.
Real world? Cinebench isn't something people work with every day, right?
That LostCircuits review is hardly representative of everyday use by ordinary people. But in the programs ordinary people use, it was quite close. In video encoding they won one program each, in Photoshop they were close, and then there were only two games. Personally I don't work with Cinebench or DIEP chess. Do you?
Check this review:
http://www.xbitlabs.com/articles/cpu...t_6.html#sect0
There you have much more user-oriented benches. It's pretty tied in applications; the i5 wins in games. I ignore the two synthetic categories here.
And check this one:
http://www.anandtech.com/show/3674/a...55t-reviewed/6
The X6 is a notch better at video encoding, it's tied in archiving, the i5 is a clear winner in games and a tiny bit better at Photoshop. I ignore 3D rendering, since I don't know anyone who has done that for years.
Read a bunch of reviews, and please post some here if you like. But as long as you don't play Cinebench all day long or some synthetic bench, the i5 750 and Phenom X6 are pretty equal, yet the i5 wins hands down in games, which might be the most important category for most desktop users.
To reinforce your point :
http://www.anandtech.com/bench/Product/109?vs=146
People also need to take into account that the Intel compiler, which a lot of today's benchmarks use, isn't kind to AMD chips. If it can run code on AMD chips with older, slower instruction sets, it will do so by default. I can't see that changing any time soon, so AMD will have to put up with it for now.
At least the FTC seems to be trying to fix this issue.
Quote:
Originally Posted by FTC settlement
Back again to the same old stuff: ICC 8.0 did a check for the vendor ID; newer versions (currently ICC 10) have that check removed and check feature flags instead (basically, whatever the CPU supports, the compiler will throw at it). However, Intel claims no responsibility for code quality and bugs.
They say the check in 8.0 was introduced simply because AMD did not give them the detailed errata list for their CPUs (obviously AMD refrains from sending samples to Intel for validation).
It would be like AMD sampling now BD to Intel so future updates to Intel's compiler can support BD features. :rolleyes:
You read maybe 2 reviews; I read 40-50 reviews of a single product ;-) (a little bit of a difference)
So, what is this, then:
http://www.lostcircuits.com/cpu/amd_...ainconcept.gif
What's wrong now, eh? Thuban is the winner (and the Intel fanboys are crying :D)
http://techreport.com/r.x/phenom-ii-x6/x264-1.gif
a comparison of an OCed 4200MHz i7 870 (2000MHz RAM, 3600MHz uncore) vs an X6 1090T at 4200MHz (1800MHz RAM, 2900MHz uncore)
http://lab501.ro/wp-content/uploads/2010/04/26.png
I read lots of reviews; the thing is, I read more than one single bench!
Please check through the reviews again: ignore Cinebench, SYSmark and other synthetic tests, and also skip programs that most people don't really use. You have games, archiving, Photoshop, video encoding and programs like that. Now compare results.
EDIT:
Check savantus' link a couple of posts up.
The LostCircuits review is kind of a joke... half the time the 980X or the 1090T don't run at their fullest because the apps don't utilize them...
So, this benchmark says that an Athlon II X4 620 is faster than a Core i7 980X at video encoding
OK...OK..........
The one from TechReport is pass 1. Pass 2 is much more real-world, isn't it?
The last one is OK; it shows how powerful a Thuban is at video encoding, and it is pass 2.
I modified Savantus' link: removed programs I've never heard anyone use, then removed duplicates (20 benches of one single Intel- or AMD-favorable program can skew the result a bit), then removed synthetics like SYSmark. And all without regard to whether the bench I removed helped prove or disprove my point; for example, I removed a heap of SYSmark results that favored the i5.
http://www.anandtech.com/bench/Produ...58.59.60.61.62
@Boris
And what happens if you put the 940 instead of Thuban there? Yeah, the difference is minuscule, proving the i5 and i7 give very similar performance (9% difference on average, wow, and the difference in clock speed is in that range; who would have thought? :p:). What I am trying to say is that it makes no difference what the product is called (i"5") when it performs close to a much more expensive product (940) while costing less. SMT, in the test selection you made, brings around a 10-15% improvement when you account for the clock difference between the 940 and the 750. And if Thuban is up there with the 750 (it's actually faster than the 750, but whatever), it's up there with much more expensive products like the higher-end i7s, too.
Why are you crying, guys... Once again, with every new CPU I read most reviews from around the world (usually from 15 to 50 reviews per product). I don't need lessons on the performance of modern CPUs. But at the moment I don't have time for the right graphs, that's all. It was only an example.
If I were a normal user, I wouldn't need expensive CPUs, only dual cores (or dual with HTT) and triple cores. Maybe some low-priced quad. But we are mostly not typical users!
LostCircuits is not a joke; the problem with 12 threads (6+6) is threads jumping from one core to the next. Not many applications use more than 4 cores at "100%" load. And second, games are not ideal for more than quad cores; look at some comparisons of the i7 with HT on and off.
Lol, quad cores and up are ideal for video encoding, 3D graphics, CAD etc. And Intel, with the Gulftown launch, played big brother with Cinema Studio and the new Cinebench R11.5 (yes, this simulation is very good, better than in R10).
Oh, so now the X6 can't compare (you mean compete?) with i7s? Right. :rolleyes:
LostCircuits is not a joke; it's probably the best HW website out there. The guy who runs it is the VP of OCZ tech. And 3D rendering is one major part of performance when it comes to modern desktop chips, just like A/V encoding. Gaming comes last, since you can game just as well with a lower-end quad like the Q6600 or Phenom 9850/810.
Boris: little kid :(... informal has much better experience and arguments than you. Why are you so "pro-Intel"? Look at it more realistically. I'm in love with AMD, but I know Intel has great CPUs too, for example the i5 750 (performance, price, OC) and the i7 800 series (some of them, like the i3, i5 6xx, and i7 950 and higher, are too expensive).
yeah, a problem with 12 threads
is there a problem with 8 threads too? Because the X4 620 is faster than an i7 920 in this benchmark...
The i7 870 is faster than the i7 965 in this benchmark; problems with triple channel?
I don't know about the rest, but THIS benchmark from LostCircuits is a joke. Or do you really think an X4 620 is faster than an i7 920 at video encoding?
Thuban IS really good for video encoding, and even better for 3D rendering. But this benchmark makes no sense.
There is no such number as 12.5%. That was the result of an incorrect direct calculation... You forgot scaling and frequencies, so you don't actually know the single-core IPC. Also, desktop BD will have fewer cores, and that will result in different scaling numbers.
Also, you put the 16-core BD in the same line as Phenom. That is wrong again.
Interlagos is not supposed to be clocked high; rather, it has more cores at lower frequency, which fits server market performance needs best.
Example: the top Phenom is 3.4 GHz, while the top server MC is 2.3 GHz. Compare Intel's top desktop and server chips yourself...
When I upgraded from a Phenom 9850 to a Phenom II 940, a few games suddenly became a lot more playable. For instance GTA IV.
Even if modern games are multithreaded, it doesn't change the fact that performance/thread is the most important thing.
And IF Bulldozer's 50% comes from 33% more cores, I feel it won't beat the i5 at games, since performance/thread isn't that far above Phenom II.
Watch this, i5 vs Phenom II, same amount of cores, around the same clockspeed. i5 wins it all by a huge margin.
http://www.anandtech.com/bench/Product/109?vs=85
So, why do I compare at the same clock speed? Because Intel and AMD currently make processors at around the same clock speeds, but Intel seems to have even more headroom for high frequencies than AMD.
Thus, AMD needs major improvements over current generation in singlethreaded performance to match intel here.
So, 12% more performance per thread won't be enough even to beat current-generation CPUs from Intel. And when it comes to multithreaded performance, AMD will possibly be able to add more cores than Intel, around 2 more per die perhaps. But that won't compete with HT combined with much stronger cores.
An 8-core Bulldozer vs a 6-core i7 with HT? I can't see Bulldozer winning that one if it already loses singlethreaded performance by such a margin.
Comparing a server chip built for near-perfect thread scaling against gaming, where a dual core still offers the best fps/dollar, is a fail, I think.
Also, if the "50% faster with 33% more cores" is at the same clock speeds, please someone say so, because so far I don't think it has been mentioned, and it will greatly affect the IPC calculations people like so much.
Server Bulldozer is to desktop bulldozer what server MC is to Phenom II.
And the reason why I say a ~12% single-core boost is that server loads are very parallel. AMD gave us numbers for server loads. So, when utilizing every core, a Bulldozer is 12.5% quicker than an MC per core. BUT that is still very optimistic, since the whole point of Bulldozer is that it's meant to scale well. So if Bulldozer scales with core count better than MC does, it will have even less of an advantage in singlethreaded applications.
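For what it's worth, the back-of-the-envelope math behind that 12.5% figure (under the big assumptions of perfect core scaling and equal clocks, which is exactly what the posters above are arguing about) can be sketched as:

```python
# Per-core gain implied by "50% more throughput from 33% more cores".
# Assumptions (not confirmed by AMD): perfect scaling, equal clock speeds.
mc_cores = 12          # Magny-Cours (server MC)
bd_cores = 16          # server Bulldozer (Interlagos)
total_speedup = 1.50   # the quoted "50% faster" figure

core_ratio = bd_cores / mc_cores            # 16/12 = 1.333... (33% more cores)
per_core_gain = total_speedup / core_ratio  # throughput per core vs. MC
print(f"per-core gain: {per_core_gain:.3f}")  # 1.125, i.e. +12.5% per core
```

If Bulldozer's modules scale worse than 100% (shared FPU, shared front end), the real per-core gain would have to be higher than 12.5% to still hit 50% total, which is the "hoping for the opposite" scenario described below.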
So I'm hoping for the opposite, to prove me wrong: that Bulldozer scales badly because of modules instead of full cores and the shared FPU. It would be fantastic if that 50% total output came from extreme singlethreaded performance that just happens to scale badly. That would mean over a 50% performance increase in singlethreaded apps, but "only" 50% in multithreaded. ;) However, that's a bit unlikely.
I want to make some things very clear here.
Everything is based on the 50% performance increase over MC. I hope that number is extremely conservative, that it applies only to the initial Bulldozers, and that clock speeds will ramp up fast.
Personally, I believe Bulldozer will be much more capable than these numbers suggest. 4 ALUs, the loads/stores per clock, the shared FPU, 32nm, and a totally new core probably capable of much better performance/watt all suggest that Bulldozer will be very, very good.
So I'm hoping I'm wrong here. Since I am already planning to throw my AM2+ system away and buy an AM3+ system next year. ;)