Got it! He said he was comparing it to the 6400+! AMD would lose money selling only in the channel, as those small amounts aren't even worth packing IMHO!
I agree. I don't know how they're even managing to sell their current processors at such low prices... but I guess that's why they're doing the spin-off and all that. I guess that's the only way they could go. Let's see what happens :) But I think I'm gonna get myself one of these :D
I see your point and maybe, who knows.
I still think they'll sell well at anywhere near that listed price, though I'd think around $89 would be better, or maybe that's where it'll settle later. I might upgrade the wife's 3500 and old A8N 16 with one as well. I work on computers on the side and need the practice :up:
I'm pretty sure this is an X2 6500, not 6500+... it's a model number, not the PR rating used on the old K8 X2s.
Anyhow, the X2 6400+ is irrelevant since it's EOL; the fastest X2 currently available is the 6000+. It's priced at around $90 and is faster yet cheaper than the X2 6500. Yes, I know the 6500 will be faster than the K8 X2s at max overclock, but when has a chip been priced based on its overclocking ability? From an enthusiast's/overclocker's perspective this is by far the fastest dual-core CPU from AMD, but it's still way behind any 45nm C2D in that regard.
This is a much harder question to answer ... the quick answer is a bit of both.
A somewhat simplified analogy is to think of the CPU as a progression from simple to complex: transistor level, circuit level, then architectural level, in that order, much like cell, organ, organism. At the transistor level, performance is centered on the process and the design of the physical transistor, and within the device there are a multitude of different transistors used for different desired electrical properties... for example, the transistors making the bits in SRAM are different from the transistors making the logic circuits. SRAM transistors are much smaller and also require a higher minimum voltage to operate (by this I mean a higher minimum limit, say 1.1 V as opposed to 1.0 V, to work without error; just an example).
Today's transistors operate with a switching time of 1.0-1.5 picoseconds; to simplify, let's assume 1.0 picoseconds (if you don't believe me, read IBM's process tech papers, for example, and they will quote a td, or gate delay time, of about 1.2 ps for their PMOS transistor on 65 nm). This is the time required for the transistor to charge the channel and turn on, so it is the limit on how fast it can switch. Pad that number out to 5 ps for the sake of engineering error and you still have a transistor today that can switch every 5x10^-12 seconds, or in frequency (1/td) = 200,000,000,000 Hz, i.e. 200 GHz ... so why don't we have 200 GHz processors???
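To put numbers on that arithmetic, here's a quick Python back-of-envelope; the delays are the illustrative figures from above, nothing measured:

```python
# Ideal switching frequency of a single transistor: f = 1 / td.
# Delays are the illustrative figures from the post, not measured data.

gate_delay_ps = 1.2      # IBM's quoted gate delay on 65 nm PMOS
padded_delay_ps = 5.0    # padded for engineering error, as above

def max_freq_ghz(delay_ps: float) -> float:
    return 1.0 / (delay_ps * 1e-12) / 1e9

print(max_freq_ghz(gate_delay_ps))    # ~833 GHz for one lone transistor
print(max_freq_ghz(padded_delay_ps))  # 200 GHz even with heavy padding
```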
The answer lies in understanding that transistors are strung together to make a circuit, and the total gate delay of the circuit determines its speed. That is, the total capacitance (which sets the delay of a switching circuit) adds up as you chain gates together. Circuits are built into logical blocks and stages (i.e. pipeline stages -- more on this later) that ultimately also add up to limit the total achievable clock of the circuit. Circuit designers measure this total delay in a metric called FO4, or fan-out of four (a simplified unit based on an inverter driving four copies of itself). The total FO4 delay is a design limitation and is how designers target their work to hit a given clock (read more in IBM's Power6 papers here: http://www.research.ibm.com/journal/rd/516/tocpdf.html -- they limited, for example, critical FPU paths to 13 FO4s).
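To make the FO4 budgeting concrete, here is a rough sketch; the 13 FO4 stage budget is the Power6 figure from the paper above, while the per-FO4 delay is my own assumed number for a 65 nm high-performance process:

```python
# Rough cycle-time model: clock period = stage budget (in FO4) x FO4 delay.
FO4_DELAY_PS = 15.0  # assumed delay of one FO4 inverter stage at 65 nm
STAGE_FO4 = 13       # Power6's critical-path budget per pipeline stage

period_ps = STAGE_FO4 * FO4_DELAY_PS  # 195 ps per cycle
freq_ghz = 1e3 / period_ps            # 1000 ps per ns -> ~5.1 GHz
print(round(freq_ghz, 1), "GHz")      # in the ballpark of Power6's clocks
```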
In the end, two things in general affect the clockability of a processor -- the strength, or switching speed, of the individual transistor (process driven) and the depth of the circuit (design driven). Generally speaking, the more complex (i.e. the higher the transistor count of) the functional block doing the work, the higher the FO4 delay, and the slower the ultimate clock speed on a given process.
This is why long-pipeline CPUs are said to be designed to clock higher -- here is how it works. An OoOe superscalar processor does 5 general things when running code -- fetch, decode, reorder, execute, retire (unreorder). The first three are complex and are what is sometimes referred to as the pipeline. The work done in the fetch, decode, and reorder phases can be broken down into stages, but the total work through the three is always the same. Thus I could break it down into 10 stages (say 3 for fetch, 3 for decode, 4 for reorder, as an example) and get there, but designers have broken it down even further, into say 30 stages, to do the same amount of work.
The complexity and transistors spent per stage in a 10-stage design are much higher, the FO4 delay is much longer, and clockability is not as good as in a 30-stage design, where each stage has fewer transistors, lower FO4 delays, and hence higher clockability. This is mother nature's cruel little joke ... to extract high IPC and better per-clock efficiency one must add transistors to the equation, but adding transistors also makes clocking the device harder.
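Here is a toy model of that trade-off (every number is an assumption for illustration; the per-stage latch overhead is the reason you can't just pipeline forever):

```python
# Toy pipelining model: total logic work is fixed; splitting it into more
# stages shortens each stage but adds latch/skew overhead to every one.
TOTAL_LOGIC_FO4 = 180    # assumed fixed work through fetch/decode/reorder
LATCH_OVERHEAD_FO4 = 3   # assumed per-stage flop + clock-skew cost
FO4_DELAY_PS = 15.0      # assumed FO4 delay on the process

def clock_ghz(stages: int) -> float:
    stage_fo4 = TOTAL_LOGIC_FO4 / stages + LATCH_OVERHEAD_FO4
    return 1e3 / (stage_fo4 * FO4_DELAY_PS)

for n in (10, 20, 30):
    print(n, "stages ->", round(clock_ghz(n), 2), "GHz")
# 10 -> ~3.17 GHz, 30 -> ~7.41 GHz: the deeper pipe clocks higher, but each
# extra stage buys less because the latch overhead never goes away.
```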
It gets even more complex than this ... beyond my understanding (I am always trying to learn more), but in a nutshell this is how both process and design play in.
AMD has created an elegant native, monolithic quad-core CPU -- but elegant does not make it technologically superior. AMD has great technology going into the processor, but it also has some baggage holding it back... Hector called it the most complex x86 CPU to date, and he is absolutely right, and high complexity is harder to clock -- for the reasons I gave above, and if you read the IBM paper you will understand why better. The inability to clock higher on 65 nm is a combination of a slower overall transistor in the 65 nm process and a tremendously complex design.
Intel is advantaged over AMD on both fronts, which is what makes the Intel products so potent in the battle of the big two... Intel wins on IPC, and they win on process, which drives the clock equation (to a large extent). Intel also uses 3 simple decoders and 1 complex; AMD uses 3 complex. Frankly, I am impressed that Conroe could clock as high as it did considering Intel more than halved the number of pipeline stages. The one area where AMD still holds the advantage is aggregate BW, but that advantage doesn't show up until 2-socket high-BW server loads, and pretty much most 4P setups (except it appears Dunnington is changing the game in that area as well).
Jack
On junction leakage this is true; SOI will always have lower junction leakage, but that is third on the rung of potential leakage paths ... the 3 dominant paths are sub-threshold leakage (source to drain in the off state, sometimes denoted Ioff), gate leakage (which we all know is addressed with high-k), and junction leakage.
AMD's problem at 65 nm is gate leakage, and this can be seen in the rapid take-off in dynamic power as the clock speed goes up even 200 MHz. Gate leakage is not solved by SOI and shows up mainly when the transistor is in the on state. AMD can show great idle power, but the dynamic power is just horrendous at this point. They will be fighting this again with 45 nm to some extent, I suspect.
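A minimal sketch of why the dynamic power takes off, assuming (as is typical) that higher clocks also demand higher voltage, so P_dyn ~ C*V^2*f grows much faster than frequency alone. The voltages and the 65 W normalization are made-up illustrative numbers, not AMD data:

```python
# Dynamic power ~ C * V^2 * f; higher clocks usually demand higher voltage,
# so power grows super-linearly with frequency. Numbers are illustrative.
BASE_F_GHZ, BASE_V = 2.6, 1.25
SCALE = 65.0 / (BASE_V**2 * BASE_F_GHZ)  # normalize base point to 65 W

def dyn_power_w(f_ghz: float, v: float) -> float:
    return SCALE * v * v * f_ghz

for f, v in [(2.6, 1.25), (2.8, 1.30), (3.0, 1.40)]:
    print(f, "GHz @", v, "V ->", round(dyn_power_w(f, v), 1), "W")
# ~15% more clock costs ~45% more dynamic power once the extra voltage is
# counted -- and gate leakage rises with voltage on top of this.
```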
Jack
That's like saying one can't compare a GTX 280 to a 4870X2 because 'everyone knows the 4870X2 is faster'... yeah, how ridiculous does that sound? :rolleyes:
It's on the market, so it's fair game and open to comparison... if you don't like 'em, don't read 'em, no harm done.
Umm, Intel is AMD's direct competitor in the CPU market; nVidia is AMD's direct competitor in the GPU market...
But whatever, let's mix and match comparisons to your liking. Maybe we should inform all the major review sites not to review Deneb against Nehalem, as that would be 'unfair' to poor little AMD? :up:
Since the results that were linked via hardspell are gone (and the original review is gone too), here are the direct links to the pictures.
Note, while looking at the graphs, that the Athlon 6500 runs at 2.3 GHz by default.
Synthetic crap :) :
Sysmark2007:
http://img.inpai.com.cn/article/2008...09dac672d2.jpg
Excel:
http://img.inpai.com.cn/article/2008...d67c10dd54.jpg
Cinebench10:
http://img.inpai.com.cn/article/2008...6afec339f4.jpg
1080i MPEG to AVI:
http://img.inpai.com.cn/article/2008...d0fb6279d0.jpg
3DVantage:
http://img.inpai.com.cn/article/2008...d81dd508ee.jpg
Quake4:
http://img.inpai.com.cn/article/2008...63107743df.jpg
HL2:
http://img.inpai.com.cn/article/2008...9503a0a30c.jpg
Crysis:
http://img.inpai.com.cn/article/2008...e114a0a76f.jpg
LostPlanetDX10:
http://img.inpai.com.cn/article/2008...4e4202e1f5.jpg
WiC:
http://img.inpai.com.cn/article/2008...81c8942019.jpg
All-around pretty strong performance, even at a lowly 2.3 GHz core / 1.8 GHz NB speed.
Overclocked to about 3-3.5 GHz with the NB at 2400 MHz, the X2 6500 is a killer of the AMD X2 Brisbane processors :)...:welcome:
What is strong about it? All I see is something roughly comparable to E5200 performance, which sells for $84 at Newegg, while it will certainly consume more power and almost certainly not overclock as well as the Intel processor.
Anything more than $75 for it is a rip-off.
Also, higher NB speeds make a difference worth noting. The same goes for Intel's FSB, but that shows up mainly in more demanding 3D loads that need the extra bandwidth.
Now we need 6MB of L2. Is there going to be a dual-core Deneb spin-off?
That's because they use thicker gates: 1.5 nm vs 1.2 nm for Intel. What is strange is that in the PDFs describing the 65 nm process from 2005-2006 they mentioned 1.2 nm, IIRC. I guess tests showed they needed thicker gates to control leakage. However, thicker gates = slower transistors => low clocks. A side effect is very good power consumption at low clocks, which literally takes off a few hundred MHz higher. (I'd say their power shmoo curve takes a nasty upward turn over 2.5 GHz.)
Since they did nothing serious to address this at 45 nm, these problems will haunt them.
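For intuition on why a mere 0.3 nm matters: direct gate tunneling falls off roughly exponentially with oxide thickness. The decay constant below (about one decade of leakage per 0.2 nm of SiO2) is a rough textbook figure I'm assuming for illustration:

```python
# Gate tunneling leakage drops ~exponentially with oxide thickness.
DECADES_PER_NM = 5.0  # assumed: roughly one decade per 0.2 nm of SiO2

def relative_leakage(t_ox_nm: float, t_ref_nm: float = 1.2) -> float:
    """Gate leakage relative to a 1.2 nm reference oxide."""
    return 10.0 ** (-DECADES_PER_NM * (t_ox_nm - t_ref_nm))

print(relative_leakage(1.5))  # ~0.03: a 1.5 nm gate leaks ~30x less,
                              # at the cost of a weaker, slower transistor
```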