me too, cause i could get one hahaQuote:
Originally Posted by frankR
agree, it's a pure fake
just remixed metro.cl's SS
but i saw your reaction -)
You have Hector's "it's not about core anymore" direction for AMD to thank for their current situation. The disconcerting part is he's still pushing his mantra and AMD's roadmap reflects it. Intel's cores are projected to be faster for the foreseeable future, but in Hector's opinion it doesn't matter. Time will tell whether the market agrees with him.Quote:
Originally Posted by Turtle 1
Well I think so far Hector has done a good job. If, however, K8L doesn't match C2D in performance, I believe that placing AMD's future on Fusion is questionable. But that's a long way off.
It is impossible for K8L not to equal C2D. Only if they screwed up in a big way, which doesn't look very likely with the conservative approach they've taken.
Just curious, how is it impossible not to equal C2Ds?Quote:
Originally Posted by savantu
AMD officially showed quadcore opteron perf. ratingQuote:
Originally Posted by savantu
Opteron is (you can measure yourself) 14 percent faster than Woodcrest (imo both at the same frequency)
http://img118.imageshack.us/img118/9...uadperfxr5.gif
K8L is teh greatest evar. Didn't you get the memo?Quote:
Originally Posted by Shintai
But mac tells the truth. :PQuote:
Originally Posted by brentpresley
One also has to wonder how the 2.8GHz Opterons can be just as fast as the 3GHz C2D Xeons. Kinda a bit rigged benchmarks, if benchmarks were even included.
So, a 2.67GHz C2Q smacks a 3GHz FX-74 with regular DDR2 around. Yet a 2.8GHz Opteron with ECC memory is just as fast as the Xeon 5160, which has a 25% faster FSB, dual FSB, quad-channel memory and a 10% higher clock. FUNNY!
Reminds me of a quote from the nVidia marketing executive.
"In marketing, we cannot speak the truth. We have to make the truth more understandable."
maybe in certain server applicationsQuote:
Originally Posted by Shintai
nevertheless the Xeon 5355 at 2.66GHz is 13-14 percent slower than the 2.5GHz K8L Opteron (2.5GHz is the max clock for the K8L Opteron, see the table above) - AMD said
Yeah sure, and Apple's new HW is 2-5x faster than their own PPC, which again was 2-3x faster than their x86..ohh wait.Quote:
Originally Posted by MAS
Its the same benches/estimates that are already wrong. Nothing but fancy PR.
I think that K8L has to do more than equal C2D; it has to beat it, as Wolfdale/Yorkfield will be a bit faster than Conroe. I will be conservative and say 5% clock for clock, and in apps that use SSE4 I would say 15% clock for clock.Quote:
Originally Posted by savantu
Then you have to consider the clock rate: Wolfdale @ 4GHz with 6MB shared cache, and Yorkfield @ 3.7GHz with 12MB shared cache between all four cores, as that's what has been reported. I believe Scali2 laid this out for you guys already and he was quite good at it.
Well I won't argue that point with you. I hope you are correct as it will benefit all of us if true.Quote:
Originally Posted by PallMall
But would you mind laying out for us just exactly how AMD will make up its present clock-for-clock deficit? You must remember and use logic. Dothan at the same clock was equal to or better than AMD64. Now C2D has all the same improvements that K8L has, plus it is 4-issue with more pipes and wider pipes.
I just don't see where AMD's IMC can make up this deficit. Now use logic and not your heart in your presentation of the facts.
Right on target ;)Quote:
Originally Posted by PallMall
Ya, if you say so. I really thought maybe you could point out the architectural advantages.
What in the specs is better than the wider-pipe, 4-issue C2D core?
Lower clock rates only tell me that AMD can't scale their CPU as high.
What's the launch date got to do with core architecture?
OK, you're right. I was just hoping you could tell me something concrete.
I just found NEMO!!
http://www.scan.co.uk/Products/Produ...oductID=445618
But on the specification page they say 90nm:slapass:Quote:
This is the New 65nm Energy Efficient Version
Do more in less time with true multi-tasking
Increase your performance by up to 80% with the AMD Athlon™ 64 X2 Dual-Core processor.
Work or play with multiple programs without any stalling or waiting. Dual-core technology is like having two processors, and two working together is better and faster than one working alone.
Which one is the truth?? They are open tomorrow so I will try to find out.
Anyone willing to try this baby??
"This is the New 65nm Energy Efficient Version" must be "This is the New 65W Energy Efficient Version", though it's not so new
there are no retail Brisbanes today, only OEMs
Did you notice that it is an ADO4200CUBOX? Retail 65nm A64 X2s are ADOxxxxDDBOX.
Yes I noticed, but every other EE CPU on their web site is described as 90nm Windsor core. This is the only one with '65nm' in the overview! We know that this data is entered into the system by people without much knowledge, so the best and simplest approach is to copy as much as possible from an older product overview.Quote:
Originally Posted by zir_blazer
Besides, this product was added recently. I was looking one week ago on Froogle with the same keywords without any result.
I have a shadow of hope ;) .
Never mind!!
AMD announced only 4000+/4400+/4800+/5000+ 65nm versions.
Not to burst your bubble.Quote:
Originally Posted by PallMall
We looked at the K8L specs; it contains only a small handful of the Yonah->Conroe improvements. Those gave 0-20%, and the major improvements were speculative cache, 4-issue width, from 1 to 3 SSE ports, double-cycle to single-cycle SSE, from 2 to 3 ALUs, bigger buffers, macro fusion and so forth. Especially the SSE is funny; it's one of the big hopes, since K8L will get twice the potential SSE! However, Conroe got 6 times the SSE potential that Yonah had. And Yonah performed around the same as K8.
Look at the lower clocks. Yes, did you also notice the regular K8 quad-core's lower clocks? And since AMD seems to be limited to 65W at 2.6GHz on 65nm, it's a no-brainer that it's a heat limit.
Look at the launch date; that makes no sense. It's like saying Presler/Smithfield etc. would beat X2 just because they were launched later. They were miles away from that.
Now get back to reality!
"Show me" a retail or OEM or ES version of Brisbane, and I'll give you a cookie.... What...... You can't.... No cookie for you!!!!! :slapass:Quote:
Originally Posted by MAS
ask Dell for Brisbane - it has it
Dell doesn't need a cookie ))
Tried with my account manager. He says...noneQuote:
Originally Posted by MAS
Also a bit weird if nobody got them here, since the ES samples are what OEMs get as well. And if they are available down at the factory floor...so maybe. I doubt any OEM got anything. More like a good old paper launch without any products: play the OEM card and hope nobody checks you.
Quote:
Originally Posted by Shintai
Thanks fella, I couldn't respond to his last post as I had to pull back and take time to think. Another reply from me on his last post could have spelled doom for me. But you laid it out perfectly, thanks.
I hope K8L has more coming than what we have been told, as it would be good if it beats C2D. But as it stands right now with the available info, I just don't see how it can possibly happen.:fact:
As you can see from the post below this one, he responded to myself but not to your post. I seem to be between a rock and a hard place.
Whereas we're using information that is available, I am just being baited.
I am an AMD fanboy. I have had AMD CPUs since the first 1GHz Athlon, and I have had only one Intel CPU in the last 10 years, but I don't think AMD has anything special up its sleeve, I'm afraid. I think it's gonna be C2D for the next couple of years. I would love to be proven wrong but I just don't think it is gonna happen. Sorry.
K8L is going to be a beast, but nobody is sure how it will stack up against Core 2 Duo. You can guess, but it won't be very accurate. There will be a very nice improvement in performance. I just don't like the idea of how Intel will have a 3.73GHz 45nm quad core around the same time K8L is released. I too have been using AMD processors forever; I've had one Intel processor, ever, a P3 750...my first computer had a K6-2 350.Quote:
Originally Posted by vcas5
I'm definitely hoping AMD is actually paying attention this time. Quad FX has to either be a mistake, or AMD knows that 2 K8L quad cores will be crazy powerful. It would have to be worth using up that 500+ watts. AMD is crazy about power consumption...wtf is up with Quad FX? There's something they aren't telling us, like always.
Don't post fake stuff in the news forum (to be edited later), full stop, unless you would rather be elsewhere. Members of XS read the various sections to get the latest info, not to be sidetracked with false information that someone thinks is clever to post.Quote:
Originally Posted by MAS
I would think that was blatantly obvious, but obviously it needs spelling out yet again.
END OF MESSAGE
Andy
Good food :D !Quote:
Originally Posted by brentpresley
Your speculation is very interesting. Do you hear some voices telling you about the future somehow ;) ??
But the reality is that split power planes are not for AM2/S1207. On the other hand, who cares?? I'm always changing mobo with a new CPU.
We need to get our hands on a 65nm K8 somehow to have some idea about the SiGe SOI process AMD will utilize with K8L. Then we can discuss power usage; it should be better than the bulk 65nm Intel is using.
PS. Brent, do any of your friends @Dell have Brisbane?? Any possibility of a short report?? I will owe you :toast: if you find out something.
Stop spreading lies MAS....:slapass:Quote:
Originally Posted by MAS
Come up with some real news that can be confirmed instead of being a FUD-monger.....:fact:
;)
lol, my first computer was a VIC-20; thanks for calling me young though :p:Quote:
Originally Posted by brentpresley
actually he is partially correct. AMD sent a shipment of 65nm procs to Dell but they are not the low-end kind (yet).Quote:
Originally Posted by mzs_biteme
Link to where the shipment was sent to Dell? Or do you have inside info? So I wonder how HP feels about this, seeing as they are the number one supplier of PCs as of last quarter. Or is AMD ignoring a company that has been doing business with AMD a long time?
umm, HP, Dell, Sun, IBM and all the rest of AMD's Top 50 list are getting 65nm chips this weekQuote:
Originally Posted by Turtle 1
How about review sites, when are they going to see these things? So we can all see that 3.6GHz O/C we've been hearing about for the last 4 months.
couldn't tell you, since I am not God. But soon is the most logical guess since production is moving to full swingQuote:
Originally Posted by Turtle 1
I've read somewhere that Cool 'n Quiet in K8L quad-core can adjust FID individually per core, but FID is changed only when all cores are idle.
The stupid thing about this whole thing is you can't say what is wrong or right about what the 65nm AMDs are like. And how is that? Because we have not seen one part yet to compare. >__> Your logic about what issues you think it has is utter BS. Can you understand nobody has the bloody part to say how it is or is not? Hello, use your brains. We won't know its so-called limit until somebody F'in benches it, OCs it, shows temps and watts, some real values, not CRAP talk about 65nm AMDs or K8L, because it's just speculation until we have real parts to work with. Whatever anybody says here is just garbage. We know NOTHING, PERIOD!!!
Get over it, folks!
No, we know what AMD has said its plans are; we know what kind of performance boost the declared improvements give, from AMD's own projections to results from Intel's implementations. Those knowns don't add up to enough to overcome Intel. What is unknown is whether AMD will add other stuff to close the gap, and what's known about that scenario seems unlikely. That's the state of things, so we wait and hope AMD pulls their ass out of the fire (but Hector's "it's not about core anymore" attitude doesn't reassure me that they'll even try).
Serge84 I couldn't agree with you more. First time in 4 months that you have put it this way. Up till now you have been talking about 4x4 goodness.Quote:
Originally Posted by Serge84
How X2 @ 65nm will clock real high.
How K8L will destroy C2D. As you are fully aware, as you have used it many times, we have die shots of what AMD is going to have with K8L and what AMD has stated they will do with K8L.
It's very refreshing that you now admit it's unknown. What we do know from all the talk of the last few months is this:
The 4x4 that was supposed to destroy Kentsfield failed miserably clock for clock.
The release of 65nm cores was a paper release, and by all accounts they don't clock real high, or AMD would have sent them to review sites. I think everyone would be thrilled by 65nm cores that O/C to 3.6GHz. But I think everyone pretty much understands now that's not going to happen anytime soon.
As far as K8L, from what we know and have seen in the die shots, it's not going to be enough to overcome C2D clock for clock. It certainly won't stand up to a much higher clocked Yorkfield.
But at any rate it is refreshing to hear you say that all is unknown. Except that we know 4x4 didn't live up to your hype.
What we don't know is how high 65nm X2 will clock. But I think most of us now believe that it too will not live up to your hype, or AMD surely would have sent out review samples.
As for K8L, I hope it does live up to your hype. It would be good for everyone. I for one am definitely willing to wait and see on this one. But Serge84, it really isn't looking good. I like your present stance that it's unknown much better than the stance of the last four months that it will change the tech world.
4x4 is only going to get better when K8L comes out, but by how much, who knows. It still gives good performance, but there's no denying that Kent is faster, especially when OC'ed. And Intel's quad in 2P is the best 2P platform right now.Quote:
Originally Posted by Turtle 1
Well, you're right, people change. I'd go with whatever performs the best. Sometimes I go either way now, but I'm actually liking how Wolfdale looks. Conroe is the better OCer now. If AMD doesn't change I might get a Wolfdale at 4GHz. If not, I'd get a K8L. Money isn't much of an object anymore, only what will perform best. I made my upgrade already and won't do it again until 2007. Besides going to a GF8800, that is, and getting more RAM.
Seems like Wolfdale would be the perfect CPU for your motto and everything. lol
Well, on the Wolfdale thing, it too is an unknown. It has been rumored to run at 4GHz. If it does, great. But we will have to wait and see on that one.
Just because that's what Intel is aiming for doesn't mean it will happen. We both know Intel has failed to meet their objectives before. It can happen again.
Speaking purely from a hobby/enthusiast point of view:
1. Dual/quad core is nearly absolutely useless for games. Heck, it doesn't do anything for SuperPI or 3DMark 99-05 either.
2. All this C2D vs K8L is making me nauseous. You could have put together a sweet E6400 rig in July 2006. K8L... will C2D even stay around in stores that long for a comparison?
3. CPU... I don't know. It all seems so 2001. Truthfully, more of a 20th-century kind of thing. You know, like back when running functions in a spreadsheet actually took seconds, you left the system running overnight to encode something, fired up your old trusty 56K in search of optimized codecs to play those super-demanding DivX movies, you went out of your way to get an MPEG2 hardware decoder card just to play DVDs, or those aggravating waits to play Shockwave games.
And in all the 4-5 years that WinXP has been around, the best thing they could come up with as a killer app WAS Dragon NaturallySpeaking. Where has all that CPU horsepower gone in the last couple of years?
Microsoft and company just keep making more and more enormous programs with gigantic memory footprints, flashy special effects everywhere marketed to "improve" productivity, and a neverending stream of intelligent "auto-do" (read: aggravating) features. Pardon me, but at the crossroads when a word processor, an application which by its very definition is meant to do the most mundane trivial task, becomes so demanding it requires you to purchase a new system, we're all in deep deep doo-doo.
FYI: Word95 ran perfectly well on an 8MB 386DX33 hotrod. Fit quite well on a 50MB hard drive too. As for games, I hope the PC market lives on to see another decade - for years everybody has known the key, above all else, is getting newer, faster video cards. Sure, a $999 processor upgrade can make a game run as much as 20% faster... but a $200 video card upgrade can easily boost frame rates 200% or more.
I hope I got my points across...
Quote:
Originally Posted by ***Deimos***
Exactly why I am sticking with my Opteron 165 rig now. It does everything great, no need to upgrade at this point.
Agreed, Word does not open that much faster on a 3.5GHz C2D ;-). Sure it's faster in games, but I could save a grand, run this rig and only replace my vid card to get much higher FPS. Although gaming with an 8800GTX was massively faster on a C2D @ 1024 and 1280 res, where the game was CPU bound. I think Tom's did the test maybe?
**Edit**
linkage: http://www.tomshardware.com/2006/11/...e_fastest_cpu/
Almost 100% improvement over an FX-60 at times. Fairly impressive, I must say! Let's pray for 65nm this week.... Everyone's heads down please.
report:
AM2 65nm is too hot to OC :s
so, bad clockers
:confused:
how can it be too hot?
Prescott ring a bell? D0 stepping P4s were off the chain with power leakage. Maybe we are looking at that with these initial 65nm cores. Maybe AMD hit their breaking point @ 65nm like Intel did with NetBurst technology on 90nm. Hopefully we get some full posts this week.
And where's the logic in saying it's hotter when: 1. There are no parts to give temp results to compare. 2. It's 65nm at the same volts as the 90nm and 65W. So does that make sense, a smaller process on SOI-3 with the same watts and volts as the 90nm ones, only on a smaller die? It should run cooler at the same speed since it's K8. Where does the enormous heat come from? It doesn't have the size to generate more heat unless more volts and watts are used. But it only runs at 1.20V to 1.30V, same as 90nm, and would run cooler than the 89W version; on a smaller die it would be expected to run much cooler. It would have to be a total arch change for it to run hotter. Are we missing something here? There is no way to know unless you can show us your magical part that nobody else has but you, to prove your claim, because it's just BS right now without PROOF of course! :rolleyes:Quote:
Originally Posted by metro.cl
I'll believe it when I see it.
If 65nm is too hot to overclock, what is the point of it?
With all due respect to metro.cl, his word could mean anything; it doesn't mean he is right.
Like I said, wait and see.
damn that is rather unexpected.Quote:
Originally Posted by metro.cl
Should hopefully be fixed shortly
umm, because Dell computers should never be overclocked anyway. Wait a bit and I am sure they will come out with some wonderfully overclockable chipsQuote:
Originally Posted by brentpresley
One thing that needs to be addressed too is that C2D has 4MB cache on die; these X2s have 1MB. That is a LOT less mm² of die space with which the X2s have to dissipate heat. Anyone know the die size of X2s on 65nm? This might not be a big difference, but when you throw some voltage and clocks together into that smaller die, it could be making them appear hotter than they really are.
Quote:
Originally Posted by metro.cl
Thanks fella. I suspected this might be the case with SOI @ 65nm; see past threads. It's too bad. Maybe AMD can fix this, though Intel couldn't fix it @ 90nm.
Quote:
Originally Posted by Serge84
Relax Serge. It is widely known that the smaller the process on SOI, the more leakage. Relax, it's not the end of the world. You can wait until the release and we see reviews before you call metro out. He seems to have provided good info in the past.
where is the proof?Quote:
Originally Posted by metro.cl
65nm X2s are 65W CPUs (and later even 35W!!!) like C2D
it cannot be hotter - maybe early ES were
though OC-ability can be restricted by its IMC
It's speculation based on a 0.6-0.7 scaling factor. It will be closer to 0.7 due to the fact that this shrink uses the old transistor design, which can't be fully scaled to 65nm.Quote:
Originally Posted by brentpresley
Metro is telling the truth :( . A smaller die @ the same voltage has greater thermal density.
In a bigger die, even if you're using more power, it's transferred to the heatsink over a bigger area; that is why monsters like Itanium or POWER5 can consume up to 200W and still be cooled efficiently.
Now some math:
89W --> 182mm² (Windsor)
'x'W --> 125mm² (where x is the TDP needed to maintain the same thermal density)
x = 89*125/182 = 61W
Basically, thermal density is higher on the 65nm than on the 90nm process. It is a small difference, so I conclude that a 65nm CPU + Zalman 9500 should go up to 3GHz, but over this value it will be very hard.
This is a very simplistic approach to a complex problem. Other factors are how well the new process is implemented, how many 'long lines' were shortened in this process, how this affected electron movement, etc.
AMD certainly will improve over time; for now we can't even buy those chips :slapass: .
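For anyone who wants to play with the numbers, here is that thermal-density math as a quick Python sketch. It uses the same figures quoted above (182mm² Windsor @ 89W TDP, ~125mm² Brisbane), and it keeps the same simplification the post already makes: treating TDP as a stand-in for real power draw.

```python
# Thermal-density comparison between 90nm Windsor and the 65nm shrink,
# using the figures quoted above (TDP as a rough proxy for power draw).

def thermal_density(watts, area_mm2):
    """Power per unit die area, in W/mm^2."""
    return watts / area_mm2

windsor_tdp, windsor_area = 89.0, 182.0   # 90nm Windsor: 89W TDP, 182 mm^2
brisbane_area = 125.0                     # 65nm Brisbane die area (quoted above)

# TDP the 65nm part would need to match Windsor's thermal density:
matched_tdp = windsor_tdp * brisbane_area / windsor_area
print(f"TDP for equal density: {matched_tdp:.0f}W")   # ~61W, as in the post

# At its actual 65W rating, the 65nm die ends up slightly denser thermally:
print(f"65nm density: {thermal_density(65.0, brisbane_area):.3f} W/mm^2")
print(f"90nm density: {thermal_density(windsor_tdp, windsor_area):.3f} W/mm^2")
```

Same conclusion as above: 65W on the smaller die is a bit over the ~61W that would keep thermal density equal, so the shrink runs slightly "hotter" per mm² even at a lower TDP.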
It's annoying to NOT KNOW in real terms how the newer AMD chips will perform (K8L, 65nm chips) or how they overclock, especially when we knew last March/April that the C2D would wipe the floor (4 months before it became available). And then you put your hopes on a flawed concept like the yet-unreleased 4x4, and the reviews confirm something you hoped wouldn't happen: a very power-hungry platform that delivers not-so-good performance, directly competing against a very efficient quad core from Intel.
Since AMD wants us to wait, let's wait. Maybe there's a conspiracy going on that we aren't aware of...
From a cost standpoint you would think AMD would be all about getting 65nm out the door even if they run a little hot. They guarantee a chip to run at X speed, not 1GHz over it. You can't tell me 65nm is SO broken they cannot make 2GHz-2.4GHz cores (the majority of official sales) now to cut their costs in half. I thought they had production-worthy wafers coming out of Fab36 since October?
Too hot to be true?Quote:
Originally Posted by metro.cl
I have to disagree with dual and quad core being useless; I am in the Supreme Commander open beta. The difference between single-core and dual-core gaming in this beta is like night and day. You don't want to be playing the next generation of games without at least dual core, trust me on that :eek: . Not to mention the Half-Life 2 Source engine will soon support multiple cores.Quote:
Originally Posted by ***Deimos***
The formula is right, but the watts are not. AMD TDP works like different power consumption categories that usually share a single heatsink model between them, but it says nothing about real power consumption. That is what we need to make a more accurate comparison between them, because on the last shrink, power consumption and temperature got lowered substantially.Quote:
Originally Posted by Lightman
Besides, voltage plays a role in this too. Maybe 65nm K8s can run at a much lower voltage than the nominal value for achieving a given frequency than a comparable 90nm K8 needs.
That is right.Quote:
Originally Posted by zir_blazer
My point was that power density on the new 65nm process, at least in the beginning, is higher than on the latest 90nm. That's why I used TDP instead of actual power consumption numbers. I'm hoping that you can run a 65nm K8 at 1.0V at least up to 2.4GHz, but we need silicon in hand to test it :dammit: .
If AMD could produce quality on par with what a healthy die shrink should be producing, we would be seeing chips by now. I do think that part of the delays and speculation is due to the fact that they did not convert their existing fab to 65nm but built an entirely new facility to do it. Anyone think that Fab36 as a site could be causing the delays, versus purely a problem with the shrink?
Is some IBM silicon currently using SiGe in the wafers? I don't follow their mfg processes.
All these Apps were written before multiple cores existed...New applications will do a much better job of utilising multiple cores...Quote:
Originally Posted by ***Deimos***
Yeah, IBM used strained silicon on SOI, but it won't go on the 45nm process; too much leakage.
The IBM-led, collaborative effort will deploy strained silicon for the 45-nm node, but not silicon-on-insulator (SOI) technology. "Advanced strain engineering techniques are the cornerstone for performance," according to a spokesman for IBM. "SOI is a separate effort and is available through IBM, but is not part of the IBM-Chartered-Infineon-Samsung common platform."
lol seems that you have way to much faith in my contactsQuote:
Originally Posted by brentpresley
Prescott is not hotter than Northwood due to the manufacturing process, but because it has twice as many transistors.
Northwood = 55M
Prescott = 125M
http://www.aceshardware.com/read.jsp?id=60000315
The initial AMD 90nm-process Winchester core is less power hungry than the later Venice, but the Dual Stress Liner process (in Rev E Venice) helps it work at higher clock frequencies.
http://www.xbitlabs.com/articles/cpu...-venice_5.html
It's unclear if AMD is already using all 3rd-gen SOI features in Rev G Brisbane, but even working at almost the same vcore, Brisbane's power consumption is about 30% lower than Rev F Windsor's (65W vs 89W at 2.6GHz).
Wow, so there is intelligence on this planet... perhaps there's hope for womankind after all (there never was any hope for man to begin with ;)Quote:
Originally Posted by brentpresley
Read my last post. We can make no assumptions based solely on the TDP, because that is not what the processor truly consumes. Besides, I don't recall that any Rev F K8 processor-only power consumption benchmarks have been done to make an accurate comparison (the DDR2 memory controller should consume more).Quote:
Originally Posted by doompc
Unfortunately, the shrink is lame, roughly 68% the size of 90nm, so even with the same yield %, they would only get 1.47 times as many parts. But even worse, AMD is only claiming to get at least as many good 65nm parts from a wafer as they were getting at 90nm, according to the Inquirer, which said AMD defined "mature yields" to mean just that.Quote:
Originally Posted by brentpresley
Not surprising that these overclock poorly, given the launch bins compared to 90nm.
OMG (*smack self in face and tries to regain composure*)Quote:
Originally Posted by Serge84
I'm not sure what exactly we're debating here, but I'll assume you're questioning how the same processor at 65nm can be hotter than at 90nm.
1. Typically, in a perfect world, linearly scaled-down transistors would run faster and cooler. However, when gate oxides shrank down to around the number of atoms you can count on your toes, we ran into trouble.. big BIG trouble.
2. Have you heard of static leakage current? When just recently Intel had such huge problems with Prescotts, why would you think AMD would be immune? Static power is growing exponentially with each die shrink!
3. The smaller the die size, the greater the power/heat concentration. We're approaching power densities like those in a nuclear reactor. Those bygone days of dinky aluminum heatsinks are long over. And perhaps soon we might reach the limits of copper's heat conductivity.
4. This is AMD's first attempt at 65nm. Of course it will take some time to fine-tune the technology, especially if new materials and techniques are being used. Intel had wisely already gone through the gruesome trials nearly a year earlier, before Core 2 Duo.
Wrong starting data.Quote:
Originally Posted by brentpresley
90nm Rev F is 183mm^2 for the 512k L2 x 2 part (The 1MB x 2 is 230mm^2)
http://www.anandtech.com/cpuchipsets...oc.aspx?i=2795
http://www.extremetech.com/article2/...1966029,00.asp
etc.
Unfortunately, the shrink sucked (a 0.68 or so factor overall), and 65nm Brisbane is ~125mm^2.
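Those die areas make the shrink easy to check yourself. A quick Python sketch of the back-of-envelope math (the 183mm² and ~125mm² figures are the ones quoted above; the ideal-scaling comparison and the per-wafer multiplier are the same rough reasoning as the earlier 1.47x estimate, ignoring edge losses and defect clustering):

```python
# Back-of-envelope die-shrink math from the figures quoted above:
# 90nm Rev F (512KB x2) = 183 mm^2, 65nm Brisbane ~= 125 mm^2.

rev_f_area = 183.0      # mm^2, 90nm Windsor 512KB x2
brisbane_area = 125.0   # mm^2, 65nm Brisbane (approximate)

shrink_factor = brisbane_area / rev_f_area
print(f"actual area shrink factor: {shrink_factor:.2f}")   # ~0.68

# An ideal full-node shrink would scale area by (65/90)^2:
ideal = (65.0 / 90.0) ** 2
print(f"ideal 90->65nm area factor: {ideal:.2f}")          # ~0.52

# Candidate dies per wafer scale roughly with 1/area, so at the
# same yield percentage the shrink only buys you:
print(f"parts multiplier per wafer: {rev_f_area / brisbane_area:.2f}x")
```

So 0.68 versus an ideal ~0.52 is why the shrink gets called "lame" above: you get roughly 1.46-1.47x the parts per wafer instead of the ~1.9x a full scaling would give.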
SOI-2 was perfected at 90nm on AM2 and got rid of a lot of the leakage, but hit a 3GHz wall. SOI-1 for the first 939s was pretty good and didn't leak nearly as badly as Prescotts, and was capable of 4GHz speeds in the FXs. I thought SOI-3 at 65nm would have no leakage at all because it's such a crazy complex and advanced process compared to Intel's CMOS Cu/germanium-only process. SOI with DSL is much harder and takes longer to make, and creates more reliable, quality dies that can take more and last longer. As I understand it, then, AMD made a paper launch with something that's not ready. If it was ready, it would be better than SOI-2... But they are having trouble creating the SOI gates at 65nm then, huh, so guess not. As anybody should expect for the first time, then. Forgot about that Prescott for a while.Quote:
Originally Posted by ***Deimos***
Can you tell me what Intel has done differently in Core 2's fab processes since a year ago?
Soon we will have to use different materials, like diamond, to go any faster.
Serge, Intel went to strained silicon on the Prescotts. Since it was their first time using strained silicon, they had a learning process to go through. Plus Prescotts had 2x as many transistors.
Intel finally got the right mix of germanium in their strained silicon and added more. Intel really isn't going to tell all their process secrets.
Now with the AMD process of strained silicon on SOI, the germanium is removed after it's stressed. SOI, Serge, has a pretty bad leakage problem the smaller the process becomes. I tried to explain this to you a few months back but you wouldn't listen.
I don't think you will see SOI on AMD's 45nm process. But who knows.
This may help AMD in the long run, as now they will have to get this figured out.
Because Intel used stressed silicon on the 90nm process, they are ahead of everyone in this area. IBM is going to strained silicon on the 45nm process, no SOI, along with the fab-club partners Samsung, Infineon and Chartered. AMD is not in the fab club, so it's hard telling if IBM will be able to help them in this area, as they have 3 other partners to consider. What I find interesting between IBM and Intel is that with high-k metal gates, IBM says it will be useless on the 45nm process, so they won't use high-k until the 32nm process. Intel says different: they are going high-k on the 45nm process and 3D gates on the 32nm process. Either Intel is way ahead or IBM knows something Intel doesn't. It's all going to be very interesting, that's for sure.
another true SS )
http://www.overclockers.ru/images/ne.../12/pop_02.gif
low voltage EE model
http://www.overclockers.ru/images/ne.../12/pop_01.jpg
the right proportion of pixie dust to fairy spiceQuote:
Originally Posted by Serge84
;()
If that is true, the argument that it won't overclock at all due to running too hot makes no sense. The power density on the 90nm and 65nm versions should be nearly identical. If the CPU really runs that hot, I see two much more probable reasons:Quote:
Originally Posted by terrace215
1. The ES in question is not final silicon.
2. The heatspreader is not flat or not attached correctly.
MAS: Who has that 4400+ EE? Can you get any overclocking info?
http://img5.pcpop.com/ArticleImages/.../000382787.jpg
http://img5.pcpop.com/ArticleImages/.../000382788.jpg
http://img5.pcpop.com/ArticleImages/.../000382789.jpg
http://img5.pcpop.com/ArticleImages/.../000382790.jpg
http://img5.pcpop.com/ArticleImages/.../000382791.jpg
http://img5.pcpop.com/ArticleImages/.../000382792.jpg
http://img5.pcpop.com/ArticleImages/.../000382793.jpg
stock freq.
I don't quit know what your trying to show here. Mas. A 2.3 ghz amd 65nm cpu. Up against a 1.86ghz C2D . If you think that impressive . OK . But don't you think an apples to apples compare would be more suitable.
If you're looking at the cost of the CPU, I guess that's alright, but this is XS and most here are more interested in performance per GHz than in the cost of a CPU that may or may not O/C very well. AMD price $214, Intel E6400 $217, and you can buy an E6300 even cheaper. Power usage is the same, so why the comparison with an E6300? Both are 65W parts. What I find most interesting about that short review is that there's no power consumption test. That's what we're all waiting for, along with an O/C test.
If you knew Chinese, you would read that the Brisbane 4400+ based system consumes less power than the C2D E6300 system at idle, and more power under burn
( http://www.pcpop.com/doc/0/168/168366.shtml )
Can you read ENGLISH? <<< a little
the Brisbane mentioned was not OCed -> so I can't tell you its OC-ing limit, or its OC temperatures either
Hey guys, let's not get this thing closed down. MAS, the fact that AMD uses less power than C2D could be a big deal. But 20 watts costs a home user about 40 cents a month, and for office use in a large company it's almost nonexistent, as their rates are lower than ours. Productivity, however, can easily overcome 20 watts of power usage at idle.
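For what it's worth, the "20 watts is about 40 cents a month" figure above checks out under reasonable assumptions. The 8 hours/day of idle time and the $0.08/kWh rate here are my own guesses, not numbers from the thread; plug in your local rate:

```python
# Rough monthly cost of an extra power draw at idle.
# Assumed (not from the thread): 8 idle hours/day, $0.08/kWh.

def monthly_cost(extra_watts, hours_per_day=8.0, days=30, usd_per_kwh=0.08):
    """USD cost of drawing `extra_watts` extra for a month of idle time."""
    kwh = extra_watts * hours_per_day * days / 1000.0
    return kwh * usd_per_kwh

# The ~20W idle difference being discussed:
print(f"${monthly_cost(20):.2f}/month")  # about $0.38, i.e. ~40 cents
```

Run it 24/7 instead and the same 20W comes to roughly $1.15 a month, so still pocket change either way.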
An AMD processor will never consume more than its TDP; it may consume a lot less than its TDP, but that's not the point.Quote:
Originally Posted by zir_blazer
X2 5000+ is at the edge of 89w TDP at 90nm and 65w at 65nm, so I think it's good for this kind of comparison.
A X2 consumes less power when idle than a Core2 because Cool 'n Quiet is way better than SpeedStep.
A normal (not Energy Efficient) idle 90nm X2 consumes about 15W; an idle Core2 consumes about 26W:
http://www.xbitlabs.com/articles/cpu...ficient_6.html
BTW, any news on "half" memory dividers?
This doesn't help us, because under burn or load the Intel CPU has the advantage since it's running at fewer MHz. This is just a really bad comparison.Quote:
Originally Posted by MAS
Brent, go to the Inquirer and check out the short story on Intel, LOL.
MAS, I think your review is equal to this one as far as what we're discussing here. This is the kind of review we're looking for: temps and O/C ability.
I don't know the price of this cpu but it won't be much.
http://topic.expreview.com/2006-12-1...1666d1617.html
Yes, that gives it an advantage, but the K8 has the (potentially) big advantage of having far fewer transistors. The 2W lower power consumption of the C2D E6300 in the pcpop test should also be taken with a fistful of salt. Not being able to use the same motherboard makes the measurements pretty much useless for a processor power consumption analysis; an inefficient motherboard could account for a 10-15W difference.Quote:
Originally Posted by Turtle 1
That is the deal: we do not know if a 65nm K8 running at 2.6 GHz is around 65W at full load or considerably less, as there are no processor-only power consumption benchmarks to do a fair comparison.Quote:
Originally Posted by doompc
Why are they not posting all the numbers? Maybe someone needs to get AMD to give them to us ;-)
Surely someone overclocks them from within!
I'm actually going towards Intel on this one. The C2Ds are better, maybe until K8L. But if there were a mobo out there that was mATX and NF-M2-like in features and OCability options, then count me in for an upgrade, 'cause I've got the CPU; all I need is a great mobo for my case. Why in the hell doesn't anybody make a bloody high-end Intel mATX? I really think that's the only thing that's holding me back. I want an mATX that can do 500FSB, then I'll definitely switch.
Yeah hell froze over for another AMD fan.
*Now with the AMD process of strained silicon on SOI, the germanium is removed after it's stressed.*Quote:
Originally Posted by Turtle 1
And Intel leaves it in, right? Wouldn't it be better to leave it in to keep the effects of the material forever? As far as I understand it, germanium is being used for the first time by AMD at 65nm. Germanium gives 40% better performance in transistors when used. It would help in OCing because of this material, and part of the reason Intel can go so high in OCing is also because their pipes are longer than AMD's. But removing a fab process after it's applied doesn't sound right, because it would make applying the process in the first place almost useless. Almost like a cheap way of saving cash so you don't have to buy more of the stuff.
Sorry, I just had to:DQuote:
Originally Posted by Serge84
---
http://www.dailytech.com/article.aspx?newsid=5328
Quote:
Internal roadmaps showed the Brisbane 65nm desktop processors would launch on December 5th, 2006. Yet almost every contact I had spoken to in the supply chain -- from the engineers to the distributors -- unanimously claimed there was virtually no chance the company would ship a 65nm product this year.
Authenticating...Quote:
It would help in OCing because of this material and part of the reason Intel can go so high in OCing also because there pipes are longer then AMD's
Verified! - genuine noob.
Why is it every time I come to xtremesystems it seems like a kindergarten playhouse, with jumbo colorful stuffed animals and those toys where you have to fit the square into the square hole, the circle into the... etc.
Look, it's really simple... there are no "pipes". They are not being made longer. The terms have been horribly misused by internet folks. Processors have buffers where they keep intermediate results. They partition the work on each instruction into steps, or stages. "Pipeline", "pipe" or any other similar term is just a METAPHOR. It's like saying that a Ford Mustang has a lot of "muscle".
Uh huh.. sure, whatever. Where was the almighty performance per GHz when, throughout 2001-2006, people abandoned their P3s and jumped on the P4 bandwagon?Quote:
Originally Posted by Turtle 1
XS has never been about anything per something. Nobody cares if you can get a little extra IPC, little less noise, performance/W, performance/$ or whatever. XS is mainly about overclocking.. by any creative means imaginable. And over the years, the fountain of creativity here has shown no signs of abating.
I'm talking about transistor performance with e-SiGe added, not the performance of the CPU; let's just get that clear.Quote:
Originally Posted by brentpresley
Congrats Serge, you have become an enthusiast. I wish I could make that step, but news came out today that makes it impossible for me.Quote:
Originally Posted by Serge84
As for your question: yes, Intel leaves Ge in its strained silicon. IBM/AMD remove it. I believe the main reason for this is that dual-stressed strained SiGe on SOI would interfere with the transistors and gates operating efficiently. Not sure.
When you have a pipelined processor, each instruction passes through a number of stages. In each stage you do a few different things, and, depending on the processor design, each stage takes one or more oscillator cycles. For example, the P4 has 31 stages, which is long; the K8 has far fewer, and the Core2 somewhat more than the K8. Now, that will tell us how well the K8, Core2 and K8L will overclock, depending on the stages of the pipeline.Quote:
Originally Posted by ***Deimos***
5-stage pipelines are easiest conceptually, and from them you tend to extrapolate our current pipelines (we use more than 5 stages in modern processors). The five stages are Fetch (get the next instruction), Decode (figure out what it is), Execute (do math), Memory (do memory access), and Write-back (finish things up and write results back). Now when I say "memory", it's really more like cache, not memory. But the processors of today use 30+ stages.
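The five stages described above can be sketched as a toy model. With no stalls, the first instruction takes the full pipeline depth to drain and every later one retires a cycle after the previous, which is where "one instruction per clock" comes from; it also shows the trade-off being argued here, since a deeper pipeline does less work per stage and can therefore be clocked higher (the P4 approach). This is a conceptual sketch, not a model of any real CPU:

```python
# Toy model of the classic 5-stage pipeline discussed above.
STAGES = ["Fetch", "Decode", "Execute", "Memory", "Write-back"]

def pipeline_cycles(num_stages, num_instructions):
    """Cycles to retire N instructions on an ideal (stall-free) pipeline:
    num_stages cycles for the first instruction, then one per cycle."""
    if num_instructions == 0:
        return 0
    return num_stages + (num_instructions - 1)

print(pipeline_cycles(len(STAGES), 10))  # 14 cycles for 10 instructions
print(pipeline_cycles(31, 10))           # 40 cycles on a P4-depth pipeline
```

Note the deep pipeline only loses on the fill-up cost (and, in reality, on branch mispredicts that force a refill); per-cycle throughput is the same, which is why the P4 bet on clock speed to make up the difference.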
How wide the core is will also determine the cycles per clock in all-out performance: how many 64-bit or 128-bit data chunks it can calculate per cycle. P4s are very slow per clock; a K8 performs like a P4 with a 1000-1500MHz+ clock speed advantage, despite running at a much lower clock speed. And Core2 is 20% faster than K8 because of its cycles per clock and stages. So what am I again?
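The "clock speed advantage" claim above is just the performance ≈ IPC × clock intuition. The IPC figures below are made-up illustrative numbers (not measurements) chosen only to show how a lower-clocked, wider core can match a higher-clocked, deeper one:

```python
# Performance ~= IPC x clock. IPC values here are ASSUMED for
# illustration, not benchmarked figures for any real CPU.

def relative_perf(ipc, ghz):
    """Relative throughput: instructions per clock times clock rate."""
    return ipc * ghz

ipc_p4, ipc_k8 = 1.0, 1.5                 # assumed relative IPC
k8_perf = relative_perf(ipc_k8, 2.4)      # a K8 at 2.4 GHz
p4_equiv_ghz = k8_perf / ipc_p4           # clock a P4 would need to match

print(p4_equiv_ghz)  # 3.6 -> in this model, a 2.4 GHz K8 ~ a 3.6 GHz P4
```

With these assumed numbers the 2.4 GHz K8 lands 1200 MHz "ahead" of the P4, in the same ballpark as the 1000-1500MHz+ figure quoted above.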