Now we have a real photo of a K8L chip, produced in August:
http://img90.imageshack.us/img90/2965/k8lps7.jpg
Compare it to the K8L schematic:
http://img246.imageshack.us/img246/4097/amd2im5.jpg
ahhhh :banana::banana::banana::banana:.... :-D
2.5 months to go till the return of the king.
Opteron Deerhound will be announced between April and June 2007; a little later (but maybe simultaneously), Athlon FX and Athlon 64 X4
Whoa !
X4 sounds sweet :D
the first glimpse of my next upgrade.... :slobber:
I wish you a Merry Christmas (XMAS), I wish you a Merry Christmas and a happy new year! :comp10: ;)
Mine too, most likely :D
Quote:
Originally Posted by SparkyJJO
I wanna benches, benches, benches, benches :D
...and overclocking :D
Before Deerhound and its derivatives, you'll be able to try the 65nm K8 Brisbane - typical OC of 3.5-3.6GHz on an $80 motherboard :)
How many have you tried already to come to that conclusion?
Quote:
Originally Posted by MAS
LowRun, do not worry, only one time bugaga
Sorry, I couldn't make sense of this one.
Quote:
Originally Posted by MAS
I guess he said that he tried one chip, I'm not sure though...
If only it were real...
Quote:
i guess he said that he tried one chip
Unfortunately K8 Brisbane will be available only in December
So... you tested it in your dreams? :confused:
Quote:
Originally Posted by MAS
I think I just soiled my pants :). I wonder how many pennies I will have to save from now till the day K8L comes out at a decent price. On a serious note, I hope AMD improves on the manufacturing process. I would hate to have 1 out of 4 cores go bad; nothing is more disheartening than a bad core :). AMD should market the K8L as the return of the king, similar to LOTR: Return of the King :). Geeks all over the world will go out and buy one :).
Speaking of that: since K8L will have independent control over each of the cores, I wonder if we will see an X3... an X3 would be 3 working cores, with one core bad.
~Mike
What makes you think that a core will go bad? (I mean after validation)
Quote:
Originally Posted by dng29
This is not the Cell. :p:
I think he means having one core that clocks worse than the others :D
Quote:
Originally Posted by Piotrsama
I hope it will not be too expensive
I assume we'll have the usual product line going from $400 to $1000 or something.
Quote:
Originally Posted by galimim
Well, if this is more than a Kentsfield Q6600 I'm staying with Intel... which is potentially $500-$600 USD
Personally, if I am still on AMD when 65nm comes along, I will be happy seeing 3.2GHz.
Quote:
Originally Posted by MAS
My 130nm 2800+ would do 2.75GHz.
My 90nm AM2 3000+, if I beg and plead for months, might do 3GHz.
The WR for 130nm A64 is 4GHz.
The WR for 90nm A64 is around 4.2GHz.
I really don't think that, unless AMD really gets their poop in a group, we are going to see 3.5-3.6GHz outside of the occasional really great chip.
Wonder if XIPs will be able to show their results? They weren't allowed to with AM2; who knows with K8L. I just hope so.
AMD usually hides its new chips until the announcement date.
One more pic:
http://www.overclockers.ru/images/ne...17/quad_01.jpg
Kentsfield will have a 280 mm^2 die surface; K8L only 150 mm^2.
That was an old estimate. A more recent estimate from the same person is 283 mm^2.
Quote:
Originally Posted by MAS
http://chip-architect.com/news/Quad_vs_Dual3.jpg
So Kentsfield and Deerhound have equal die area?
Maybe sooner than we think? Remember how long ago the K8L was supposed to have taped out? Well, according to this, it takes 12 weeks from start to finish for an AMD wafer... I think my math must be failing.
http://www.theinquirer.net/default.aspx?article=34781
How AMD bakes its 65 nano Barcelona cakes
Basically, AMD can automate the testing and incorporate the feedback on the fly. When you decide to do 10 per cent more on step 37 of 91, three weeks into a 12 week process, getting accurate and timely feedback is essential. What APM does is allow AMD to pick a set of wafers and apply the special sauce, and track them at every point from then on while cataloging all the metrology details. If it works, and has good yields, it can be made part of the permanent mix very quickly.
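The wafer-split feedback loop the quoted article describes can be sketched as a toy model. This is purely illustrative: the yield numbers, the `yield_of` function, and the promotion rule are invented for the sketch, not AMD's actual APM system.

```python
import random

# Toy model of split-lot experimentation (the article's APM idea):
# apply an experimental tweak to some wafers, track their yields,
# and promote the tweak to the permanent mix only if the
# experimental group measurably outperforms the control group.
# All numbers here are made up for illustration.
random.seed(1)

def yield_of(tweaked: bool) -> float:
    base = 0.80                                # baseline wafer yield (invented)
    bonus = 0.05 if tweaked else 0.0           # assumed effect of the tweak
    return base + bonus + random.uniform(-0.02, 0.02)  # metrology noise

control = [yield_of(False) for _ in range(25)]
experiment = [yield_of(True) for _ in range(25)]

avg = lambda xs: sum(xs) / len(xs)
if avg(experiment) > avg(control):
    print("promote tweak to the permanent mix")   # -> this branch here
else:
    print("discard tweak")
```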
Either my math is getting bad, or my memory is.
They have to do more testing and build volume for the launch; I doubt we'll see 'em earlier than 2Q 2007. Die size seems a little large but doable for AMD; hopefully they can get it down to something a bit more reasonable to improve their margins.
It's only 150mm^2, which is a lot smaller than the dual cores they make now. AMD should get very good yields. You probably looked at the picture and forgot the quad core is 65nm, which makes it really only half the size it looks in the picture compared to the dual core.
Quote:
Originally Posted by mesyn191
If these pictures are of a 300 mm wafer (and they appear to be), then it's closer to the 283 mm^2 accord99 mentions.
Quote:
Originally Posted by Khenglish
These are pictures of their quad cores. 283mm^2 makes sense
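The "only half the size it looks" reasoning above is just linear-shrink arithmetic. A minimal sketch (idealized: it assumes a perfect optical shrink, which real processes never quite achieve):

```python
# Idealized die-area scaling across a process shrink.
# A linear shrink from 90 nm to 65 nm scales each dimension by 65/90,
# so area scales by (65/90)^2 ~= 0.52 -- roughly half.
def shrunk_area(area_mm2: float, old_nm: float, new_nm: float) -> float:
    return area_mm2 * (new_nm / old_nm) ** 2

# A hypothetical 283 mm^2 design at 90 nm would ideally shrink to:
print(round(shrunk_area(283, 90, 65), 1))  # -> 147.6
```

Which is roughly where the ~150 mm^2 figure in this thread lands, so both numbers being tossed around may simply differ by one process node.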
I'm really confused by all these AMD core revisions...
Nobody has tested one yet??
I really would like to see a processor that knocks out the Conroe!! :P
Wouldn't matter, since each core has a separate PLL.
Quote:
Originally Posted by ozzimark
thus you can overclock each core to its max
I was under the impression that they all had the same PLL, but could adjust multiplier and voltage independently...
Quote:
Originally Posted by nn_step
either way, the rough effect is the same :D
Earliest you could expect some leaked benches would be Dec. this year or Jan. 2007; I'd lean towards the former rather than the latter. Despite all the naysayers who said they couldn't pull it off, it appears that AMD is following their schedule and will be on time.
Quote:
Originally Posted by engenheiro_ce
http://www.hkepc.com/bbs/attachments...1GfRYfDBpe.jpg
They say next-gen AMD CPUs, aka K8L, will be 40% faster than today's K8, and thus 20% faster than C2D (Yorkfield is just a Kentsfield shrink with SSE4).
At stock speeds.
Quote:
Originally Posted by MAS
What about overclocked? We all know conroe overclocks like crazy, and unless these new AMD chips can do the same, I don't see them catching conroe anytime soon :(
True that!
Quote:
Originally Posted by mr_mordred2095
Well, for bragging rights, that is. Fortunately most server farms run at stock frequencies.
Obi
In multithreaded apps (rendering, mostly) it will spank every Conroe, even if you OC the latter.
Though the HKEPC list has some mistakes.
Rev G Quad Core and Rev H Quad Cores are different.
Quote:
Originally Posted by MAS
Rev G Quad Core is merely 2 65nm dual core dies in one proc, while Rev H will be a native quad core, i.e. designed from the inside out as a quad core model.
The only match Intel will have for Rev H is Yorkfield. Conroe, Penryn, and Kentsfield will all be architecturally behind the K8L aka Rev H. The advantage of Penryn in 2008 is the ability to have an octa-core Core 2 model with the same die size as Smithfield (ironic)... though who knows how practical that'll be, given Yorkfield will have been out for about 8 months by then.
PLEASE PLAN CAREFULLY FOR UPGRADES WITH THIS INFO... Lots of suckers are gonna be going around shouting "The K8L are here, The K8L are here" when the first quad cores are announced.
Increasing that confusion will be the fact that AMD will introduce some Rev H products with two cores disabled, hence a dual core Rev H.
Perkam
Perkam, there is NO Rev G QC! MCM isn't possible with K8, since two cores can't share one memory controller...
Quote:
Originally Posted by perkam
Rev H is the only QC design, and the rivals for Kentsfield will be 4x4 systems (at least while there is no K8L in those configurations)
What about coldbug? :D
Quote:
Originally Posted by informal
Something I've been shouting about since March... there ARE non-native quad core designs in AMD's test labs... they just haven't taken a firm decision to launch one so close to the K8L launch, as Kentsfield will be unarguably faster, which could cause a long-term market stigma of ALL AMD quad cores being slower than ALL Intel quad cores. That's not to say the market is stupid... just that the assumption makes it easier to make decisions :p:
Quote:
Originally Posted by Dailytech
Perkam
So you do know how high AMD can clock Rev G??
Quote:
Originally Posted by brentpresley
If you don't know, how can you tell AMD can't challenge C2D?
I'm speaking of definite out-of-the-box comparisons, like a 3.4GHz Brisbane vs. a 2.93GHz C2D. We saw that there are 3GHz Windsors coming this Nov., so Brisbanes that will be Rev G2 or G3 (hypothetical numbers) and will incorporate a new and improved transistor mix on 65nm could probably easily hit 3.4GHz (and btw, don't bring those roadmaps showing Windsor-based FXes - 64 or whatever - in Q3 '07 into this discussion, since those are highly inaccurate)
Perkam, as I read that, it says that they (DailyTech) seem to think he suggested it, not that he actually said: "There are non-native QC on the roadmap in '07". DailyTech has a vivid imagination :p:
Quote:
Originally Posted by perkam
Read that again; maybe you will see it too :)
AMD and Intel have both confirmed that their next-gen native QC designs are launching Q3 '07...
Quote:
Originally Posted by brentpresley
All this talk about AMD "will be behind" is crap... it'll be on par within the year... Conroe allowed Intel's crown to remain for 1 year, while AMD's lasted 3.
Perkam
The performance in games is much closer than you would think, Brent.
Quote:
Originally Posted by brentpresley
Perkam
Well, we all know that AMD ES usually suck when compared to shipping CPUs. This has been the regular case in the past.
Quote:
Originally Posted by brentpresley
So, I said that if we compare out-of-the-box products that will come in Q1 '07, and IF AMD can ramp up clocks on Brisbanes, 3.4GHz would be enough to match C2D in 90% of the cases.
Remember one thing: usual everyday people don't know or care about OCing. They read some PC magazine or whatever, and if they see the chips are on par and the prices are on par too, the majority of folks will look at the TOTAL cost of the platform and decide what to buy
this should be interesting :D
Very true... we'll have supercomputer FPU performance in 2008... on a $500 PC.
Quote:
Originally Posted by brentpresley
Definitely is interesting :)
Perkam
Well said; it's a beautiful time for us end users and PC enthusiasts. :p:
Quote:
Originally Posted by brentpresley
1. Intel has had the lead over AMD in x86 CPUs for ages.
Quote:
Originally Posted by perkam
2. Netburst was a tragic mistake that Intel used for the past 6 years.
3. Intel's product development cycle is much shorter now with their new strategy. There's only a slim chance for AMD to keep up.
4. The best part of AMD's business will now be its GPU, aside from its digital TV and cellphone coprocessors.
What do we know today?
1. Server K8l Deerhound Opteron will be announced in Q2 2007
2. FX socket F+ (with HT 3.0 support) for 4x4 (in fact it's the same Opteron, remarked) and X4 am2+ (also with HT 3.0 support) - in Q3 2007
3. Dell will buy a great quantity of AMD procs :)
4. In several months AMD will show us a system with K8L and prove its high performance. Maybe this will be an octa-core 4x4 :)
5. 65nm AMD procs can operate at clocks no lower than C2D B2, B3
6. There is no quad-core K8 in AMD's plans at all; the quad-core is only K8L
brentpresley, look at the overclocking abilities of the 90nm Dothan and K8 - the frequency limit (at 20C) is just the same
Why must the OC limit for 65nm short-stage Intel and AMD CPUs differ? (sorry for bad English :) )
I wouldn't bet the farm on that ;)
Quote:
Originally Posted by vitaminc
http://www.xtremesystems.org/forums/...d.php?t=118091
So it's speculation, not a fact.
Quote:
Originally Posted by MAS
The first few batches of 65nm AM2 CPUs probably won't clock so well. It's a new process node for AMD, after all.
GPGPU has nothing to do with the CPU. Sure, GPUs (ATI > nVidia) have :banana::banana::banana::banana:loads of FLOPs, but any GPU will just choke hard when running x86-style instructions in x86 CPU thermal envelopes.
Quote:
Originally Posted by LowRun
Hopefully we could use a 200W PSU for a CrossFire or SLi high end gaming rig someday.
True; however, the first few batches are about one month of production. After that you have to deal with AMD's rapidly evolving process: real-time process changes combined with a group of highly skilled and motivated individuals working as a team. At the same time, the first 65nm chips from AMD (provided they are K8) would clock comparably to Yonah (Conroe's father), and shortly after they'll start to clock faster and faster.
Quote:
Originally Posted by vitaminc
We could debate AMD's APM vs. Intel's Copy Exact strategy all day long, but the fact is that defect density will only go down with time.
Quote:
Originally Posted by nn_step
You can never state the overclockability of a chip as a fact before you have them in hand with actual data. Everything now is speculation under NDA smoke screens.
Sometimes I think that AMD does this stuff on purpose. Just look at the MHz the 4x4 CPUs are going to run at; AMD has no other CPU running that fast.
Quote:
AMD's chips made EARLY on in a process shrink (90nm, 65nm, etc.) don't overclock very well. They continue to improve the process and release faster and faster chips. It just takes some time, but they eventually get VERY good with working with a particular process level.
I read somewhere that AMD has, as you would say it, an experimental lab at Fab 30 and Fab 36.
Quote:
(they have the luxury of having an ENTIRE EXPERIMENTAL FAB - D1D in Oregon),
Actually, ALL AMD fabs include the ability to experiment and improve process performance and yields.
Quote:
Originally Posted by The Ghost
They don't. They jointly develop with IBM. They do have a "test" line for future processes (not a whole dedicated fab like Intel).
Quote:
Originally Posted by The Ghost
The improved process performance/yields are a result of APM (software fab control and other magic). It has nothing to do with their process development for 45nm or 32nm.
Quote:
Originally Posted by nn_step
Looks like you didn't even read the article :rolleyes:
Quote:
Originally Posted by vitaminc
Skimmed through it. Nothing interesting that I haven't heard of, except it does not address a lot of important concerns such as power envelope, process integration, etc.
Quote:
Originally Posted by LowRun
So will the performance in WordPad. Just goes to show that the games out now do not utilize CPUs well. Though I think they will in the future, what with physics simulation and all.
Quote:
Originally Posted by perkam
Also, Perkam, I totally forgot about this: did you ever think that S. Meyer of AMD might be talking about 4x4 as the non-native quad core solution? I did, and it goes well in line with what H. Richards said in the Digitimes interviews when asked about that.
So I am still 99.9% sure there will never be an MCM K8-based non-native QC. As I said previously, how is it possible for both chips to share one memory controller (be it on the first or second core) and still use the same socket? OTOH, I can see that two sockets can make a non-native, almost MCM-like solution, and that is what AMD publicly stated as their plan to combat Kentsfield (against which it may fare well, since there could be some serious cache thrashing - during heavy multitasking - in Kentsfield's shared cache structure, plus FSB bottlenecking, even though it is not a full 8MB of shared L2 but 2x4MB).
Slightly OT: as for DailyTech's credibility lately, look at this one: http://images.dailytech.com/nimage/2...07_roadmap.png
Tell me they are serious, since they have been wrong with their "roadmaps" (custom made? :p: ) before, when they spoke about "roadmaps" showing AMD only getting to the X2 5600+ in Q3 '07...
Must be your way of saying you've only read the title, otherwise you wouldn't say something like:
Quote:
Originally Posted by vitaminc
Even without reading the article, I can't see why one would want to run x86-style instructions on a GP-GPU that sits on a die with 4 x86 cores at its side.
Quote:
GP GPU has nothing to do with CPU. Sure GPUs (ATI > nVidia) has loads of FLOPs, but all of the GPU will just choke so hard when running x86 style instructions in x86 CPU thermal envelopes.
They won't put it there for you to run x86 instructions :slap:
According to HKEPC (a Hong Kong based site), Yorkfield will be Intel's 45nm single-die quad core with the Conroe core in 2H07.
Quote:
Originally Posted by informal
The FSB problem still persists, but it's not as bad as people made it out to be.
Well, if you think about it, AMD has always been focusing on architecture improvements since the K6 days, so the clocking of the X2/X4 is not that big of a deal IMO. They could always do it Intel P4 style and clock it really high, but that's not "green" computing. :p
Quote:
Originally Posted by informal
So they only taught you reading but not thinking in school?
Quote:
Originally Posted by LowRun
"x86-style" instructions meaning code that's not highly parallel like graphics.
1. You don't think that all of the current HPC software will be recompiled away from the current x86/SPARC/PPC code and translated into DX9/DX10-based GPU code, do you?
2. Running a couple of X1950XTX 512M cards with 200W power dissipation each under load just all of a sudden makes a whole lot of sense, especially with AMD's campaign of green computing.
Brent is right about 65nm
Even AMD says the same: they silently add new tech bits to their sauce. Early on in a shrink, the process is almost exactly the same and basically results in only a power and die-size reduction. Further out, when they begin adding the new tech, is when the clocks start to ramp consistently. (I did have an early Winchester that clocked to 2.77GHz, which was great for the time; there will always be a few exceptions.)
I was referring to the November release of Kentsfield and AMD's response to that (in my opinion, it's not an MCM K8-like hybrid - gotta love that term :p:).
Quote:
Originally Posted by vitaminc
As for Yorkfield, I think it will be somewhat better than Kentsfield, BUT if it has no IPC improvements, I don't know how well it will fare against K8L, especially in the HPC segment (in all honesty, I believe the server variant of K8L will be better, and by a wider margin).
I was speaking of Brisbane's arrival dates in that DailyTech "roadmap". It said they will arrive in Q1 '07, but other sources (HKEPC, for instance) are saying December. As for the clock speeds on that slide, they are in line with what we know by now; my only objection was that DailyTech hasn't been up to the mark, at least where AMD roadmaps are in question.
Quote:
Originally Posted by vitaminc
http://www.aceshardware.com/forums/r...7742&forumid=1
Charlie is one of the most informed people in the IT industry (by my reckoning, at least), and I trust him on this. When he says it "has been set for a long time", it seems to me that Brisbanes are past their initial problems and are set to go into retail in December (this year :) ).
Quote:
They are flat out wrong. 65nm is set, has been set for a long time, and knowing the exact date, I can say they are wrong. Trust them at your own peril.
-Charlie
1. Obviously, if an app is to take advantage of a GP-GPU, it has to be coded specifically for it :rolleyes:
Quote:
Originally Posted by vitaminc
2. I don't even see what kind of point you're trying to make there.
1. Do you recollect the RISC vs. CISC debate, and the road to domination by the x86 instruction set?
Quote:
Originally Posted by LowRun
2. GPUs have a lot of FLOPs, but that doesn't mean they are a cost-effective solution. A single X1950XTX consumes as much as 3 Socket F Opterons (68W HE-version TDP) or as much as 4 C2Q Kentsfields (50W ULV-version TDP). HVAC is a huge concern with GPUs, and AMD's success with Opteron is mostly due to its high power efficiency.
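The TDP arithmetic behind that comparison works out, taking the figures as quoted in the post (real-world power draw varies, and the 200W GPU number is the load figure claimed earlier in the thread):

```python
# TDP arithmetic behind the comparison above. All figures are as
# quoted in the thread, not measured values.
gpu_load_w = 200         # X1950XTX under load (claimed earlier in the thread)
opteron_he_tdp_w = 68    # Socket F Opteron HE version
kentsfield_ulv_tdp_w = 50  # quoted "ULV version" figure

print(round(gpu_load_w / opteron_he_tdp_w))    # -> 3
print(round(gpu_load_w / kentsfield_ulv_tdp_w))  # -> 4
```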
Might be 3x the power, but it's WAAAYYY more than 3x the FP throughput ;)
Quote:
Originally Posted by vitaminc
1. Don't know if it's my English reaching its limits, but I don't get what you mean there. I'm just referring to what Stanford did with Folding@home, for example: they coded a version specifically to take advantage of the GPU.
Quote:
Originally Posted by vitaminc
2. You keep bringing that X1950XTX into the debate; did I say AMD was going to integrate one onto their CPUs? Nope. In fact no one did. I don't know how they would do it, but I'm confident they would do very well, and as stated in the article, that would put them in a very good position.
On top of DirectX 9 and an ATI-specific driver.
Quote:
Originally Posted by LowRun
What does CPU/GPU integration have to do with the power envelope? Having the capability to integrate the GPU/NB/MC into the CPU is one thing; doing the integration within a tight power envelope is another. Who cares where each individual component resides, as long as the power requirement is met and system-level performance is competitive? I have faith in both AMD and Intel on the integration process, but satisfying the power envelope requirement is another matter. Besides, Intel is the expert in IGPs, with approximately 50% market share.
Quote:
Originally Posted by LowRun
All integrated graphics from nVidia and ATI use previous-generation technologies, so I will not get my hopes up for a massive integrated 350W TDP K8L + X1950.
Well, no one says they'd use an X1950-type core. Seems to me that any kind of core that specializes in graphics/physics/FP operations would give a tremendous boost. Perhaps the X16xx series? Even the X13xx series? There's more than one option.
Ryan
So much info in this post... but as for me, I'll wait for it... :D
That makes my point even more.
Quote:
Originally Posted by vitaminc
Once again, you're the only one thinking X1950 when it comes to AMD's GPU integration into their CPUs; I don't know how you got there. I believe they will do fine regarding the power envelope, with a solution specifically designed for the task. As for Intel being the expert in IGPs, having 50% of the market share doesn't make them experts IMO; that is due to many reasons that have nothing to do with performance or efficiency. In fact, their IGPs are utterly crap, and they are lagging behind badly in that field, which could put them in a bad position when AMD brings their GPU onto the CPU.
Quote:
What does CPU/GPU integration has to do with power envelope? Having the capability in integrating GPU/NB/MC into GPU is one thing; doing the integration within a tight power envelope is another. Who care about where each individual component reside as long as it met power requirement and system level performace is competitive. I have faith in both AMD and Intel in the integration process, but to satisfy the power envelop requirement is another. Besides, Intel is the expert in IGP with approximately 50% marketshare.
All integrated graphics from nVidia and ATI have are using previous generation technologies, so I will not live my hope up for a massive integrated 350W TDP K8L + X1950.
With all due respect, I think vitaminc is confusing the recently released F@H client for the X1900 cards with utilizing a GPU-type processor as a coprocessor for the CPU.
I think it's perfectly clear that AMD will use these types of coprocessors, in fact many different types - anything that will go on the HT 3.0 bus. It would be a very logical first step for them to start with ATI's GPU tech.
Of course there are technical difficulties that I can't even comprehend, but that shouldn't stop the engineers from finding solutions to tap into this extra power. Else we would still be using 286 CPUs without a separate math coprocessor. (I believe this was added in the 386 DX; correct me if I'm wrong.)
LowRun, you are dead wrong on this. GPU on CPU is a wet dream for AMD at the moment, for 3 reasons:
1) the power envelope for integrated graphics is simply way too high.
2) graphics engines evolve faster than Moore's law and require frequent updates.
3) a good one-year lag in manufacturing technologies.
I am sure all the IT departments and HPC institutions that you know just love applying new graphics drivers (CPU drivers, if you believe the GPU-on-CPU story) every couple of weeks, a new DX (or whatever API) every year, and patching their HPC code as frequently as the combination of the two.
A tailor-made x86 threaded application on x86 processors (Opteron or C2D) should be able to chuck out nearly as many FLOPs as a GPU, in the same power envelope. All those x86-64 and SSE1-4 extensions are implemented for this.
Vodka,
The math coprocessors for pre-Pentium Intel processors are the x87 series: 8087 for the 8086, 80287 for the 80286, 80387 for the 80386, 80487 for the 80486. Their main purpose is to add FLOPs to the system. No one cares whether the coprocessor resides on the FSB, PCI Express x16, HT 1.0/3.0, or CSI, as long as it's profitable for the coprocessor company to design and manufacture and the power/performance envelope is reasonable.
Guys let us stick to the thread title and not wander off to 2008 or 2009 with this GPU on CPU stuff.
The topic is called "AMD K8L ES coming in December" :rolleyes:
As far as we all know it has no GPU onboard so we shouldn't go into this stuff ,at least not in this thread.Start a new one and battle in there :)
Quote:
Originally Posted by informal
I will second that :clap:
This will happen with 45nm CPUs in 2008, for notebooks.
Quote:
LowRun, you are dead wrong on this. GPU on CPU is a wet dream for AMD at the moment, for 3 reasons:
1) the power envelope for integrated graphics is simply way too high.
2) graphics engines evolve faster than Moore's law and require frequent updates.
3) a good one-year lag in manufacturing technologies.
Quote:
A tailor-made x86 threaded application on x86 processors (Opteron or C2D) should be able to chuck out nearly as many FLOPs as a GPU, in the same power envelope. All those x86-64 and SSE1-4 extensions are implemented for this.
This is not true, or software writers would have done it already. AMD has many partners that are developing co-processors to do this; ATI is one of those partners.
By the way, Intel has said something about putting a GPU on a CPU.
http://www.gpgpu.org/
1. I believe AMD has not yet revealed any roadmaps for its notebook processors beyond 2007. Correct me if I am wrong, though.
Quote:
Originally Posted by The Ghost
2. AMD's notebook processors use a different design than its desktop/server processors, as disclosed at the June 2006 Tech Analyst Day.
3. Most software isn't very capable of using multi-core CPUs yet, and this has been a concern for both Intel and AMD.
4. As I said before, co-processors will be better, but they require very specific software support.
vitaminc, I appreciate your effort to bring the info about CPU-GPU merging and notebook usage of those designs, but please read: http://www.xtremesystems.org/forums/...3&postcount=95
Quote:
Originally Posted by vitaminc
Cheers
Phil Hester said so in his last interview with ZDNet.
Quote:
1. I believe AMD has not yet revealed any roadmaps for its notebook processors beyond 2007. Correct me if I am wrong though.
http://insight.zdnet.co.uk/hardware/...83795-2,00.htm
Are there changes that you're planning to make to the core for the mobile space?
One of the areas we need to work on as a company is the mobile space. And that's where the biggest win comes, from being able to integrate the graphics.
Integration in the microprocessor itself or integration in the chipset?
Integration of the CPU and the GPU. Assuming the transaction closes on time, we would target a merged design in the 45nanometre time frame.
Which is 2008?
Yeah. Another thing happening in the graphics space is that there's more and more programmability. It used to be that it was just polygon rendering. That's what graphics was, but now developers are doing so much programming.
The next generation of gaming is really making things more dynamic. It's not making the surface look realistic, but making it behave realistically. We've crossed the point where the GPU can do real programs of a significant size.
It may seem like 2008 is a long way away, but that's a major design cycle. ATI also has very good business, in the handset and set-top box DTV area.
Co-processors work with the software that is out there already; ATI could make one of those co-processors.
Quote:
4. As I said before, co-processors will be better, but they require very specific software support.
Meh, the whole "ages" comment seems overblown unless you're referring to ages ago. They've been trading the lead since the K7 intro, with things working out more in favor of AMD as far as performance goes...
Quote:
Originally Posted by vitaminc
Yea, the only good thing about it was that it gave them the lead again over the then-aging Athlon XPs 'til the A64 came out; it still sold like hotcakes though. :/
Quote:
Originally Posted by vitaminc
Isn't this assuming perfect execution on the design and process side? Haven't they already had some minor delays for the intro of their 45nm process? You cut Intel this slack but won't do the same for AMD? What gives?
Quote:
Originally Posted by vitaminc
FWIW I'd say they're both going to be trading the lead back and forth a whole lot for the next 4-6 yr. or so, assuming that neither makes any major screw-ups.
This seems unreasonable (it completely disregards any and all of AMD's future CPUs) and way too speculative (we have no idea how AMD's on-die/package GPU strategy will pan out, or what they plan on doing exactly for high-end GPU products...). You're also completely disregarding any attempt at them working on chipsets too...
Quote:
Originally Posted by vitaminc
WTH happened, vitaminc? I know you've got an AMD system spec in your sig and all, but are you getting paid by Intel to say this stuff, or are you just feeling pessimistic, or what?
Ghost,
GPU on CPU in general if you quote it out of context, but GPU on mobile CPU if you read the paragraph above and below. I would still argue vision and concrete roadmaps are two different things.
Mesyn,
If you look at the history of both AMD and Intel, Intel has had the lead for a significant portion of the time in processor designs and has always been the leader in process/manufacturing technologies (unless you want to argue about APM vs. Copy Exact).
Not sure where you got the word on Intel's 45nm delays. They are finishing up 1 fab and building 2 extra ones to Copy Exact, as per IDF. Care to elaborate?
ATI's digital TV and cellphone multimedia coprocessors are the market leaders in their respective markets. And those 2 markets are growing helluva fast. Desktop PCs are in decline as people switch to notebooks, thus AMD's attempt to get its own platform to battle Centrino.
I believe that big-screen LCD/plasma TVs will grow faster than PCs for the next few years, and people will always change cellphones faster than they change computers, thus my speculation that AMD's crown jewel will be in those 2 markets (if they don't mess up). :p
Unless you believe that PC growth will be faster than big-screen LCD/plasma TV, or that people will change computers faster than they change cellphones.
I could diss Intel all day long (like its bureaucracy, FSBrenza, IGP), but that's not related to Altair. :p
Roadmaps and visions are the same thing; there is no such thing as a concrete roadmap. We have seen things on AMD roadmaps that never came to be, and we have seen things on Intel's roadmaps that never came to be either.
Quote:
GPU on CPU in general if you quote it out of context, but GPU on mobile CPU if you read the paragraph above and below. I would still argue vision and concrete roadmaps are two different things.
Also, if you have seen some roadmaps and have listened to people from AMD, you would have noticed that they are also going to use mobile CPUs in low-end desktops. So yes, we can end up with desktops with a GPU integrated into the CPU; companies like Dell would jump all over something like this, since they are not worried about high-end graphics.
They are different things. Vision is what the executives/marketing people envisioning their future products, such as 10GHz CPU hot wet dream by Intel.Quote:
Originally Posted by The Ghost
Roadmaps are what companies release to their customers as a product release schedule promise, so their customers can better manage inventories and software vendors can anticipate the hardware changes.
Mobile on desktop has nothing to do with GPU on CPU. Apple/Shuttle and various other vendors already have mobile-on-desktop PCs out on the market, and those PCs tend to use IGP instead of discrete graphics.
There are more engineering concerns in implementing GPU on CPU for mobile. TDP, die size, pin count, and fan-out are the most obvious ones.
I don't think it's the GPU market either... it's the software companies. They need to come out with software that will utilize an X1950XTX or 7950GX2... From the looks of it, Nvidia's new 8900GTX will be about 2x faster than the 7950GX2... I highly doubt any game will stress that.Quote:
Originally Posted by brentpresley
~Mike
At the same time, Microsoft will need better optimized DX9/10 compilers, Nvidia and ATI need better drivers, and game software companies need to multithread certain loads on the CPU.Quote:
Originally Posted by arisythila
Do you remember the AMD Mustang? Wasn't it on a roadmap??Quote:
Roadmaps are what companies release to their customers as a product release schedule promise, so their customers can better manage inventories and software vendors can anticipate the hardware changes.
AMD's Mustang processor was supposed to support up to 4MB of on-die L2 cache, with tweaks to the Athlon Thunderbird core to allow for the additional L2 cache on chip, possibly with more pipeline stages to enable higher clock speeds.
http://www.geek.com/procspec/amd/k7mustang.htm
Now, do I really need to go down the list of Intel CPUs that were on the roadmap and were canceled??
Roadmaps are company visions, and neither is set in stone.
I know the difference between wet dreams and roadmaps.
Low-end desktops are going to use mobile CPUs, so yes, it has something to do with GPU on CPU. That is a fact; even Intel is trying to do the same thingQuote:
Mobile on desktop has nothing to do with GPU on CPU. Apple/Shuttle and various other vendors already have mobile-on-desktop PCs out on the market, and those PCs tend to use IGP instead of discrete graphics.
I already provided the link where Phil Hester said it would happen; there have been others before him who have said the same thing.
I was referring to performance, but Intel usually has the process lead.Quote:
Originally Posted by vitaminc
From this article: http://www.eetimes.com/news/semi/rss...leID=192501516Quote:
Originally Posted by vitaminc
Perhaps something else has changed that they don't know about, because Intel roadmaps show QC 45nm chips available by Q3 2007, though they quote Intel as their source for their info...
What? While growing fast, aren't these markets niche as all hell? Hasn't it already been shown that most people don't really play games and stuff on their cell phones, and that it's just a gimmick? You can't run a 300mm 65nm (or for that matter 45nm) fab off of profits from this market alone, or even keep one busy.Quote:
Originally Posted by vitaminc
They're still a significant chunk of business though, and they aren't going away any time soon either; you don't see Intel cancelling its desktop chips, do you? AMD's effort to get a competitive platform as an alternative to Centrino is an effort to improve profit margins, not a make-or-break issue.Quote:
Originally Posted by vitaminc
While cellphones are ubiquitous and have a relatively high turnover rate, you can't justify the costs of running a high-end fab with production for them, and I don't see digital TV sales growing massively either. In fact, I think you're gonna see a decline in general sales/profits across the board for the next few years; the US is heading for a major recession, though I don't think you'll see signs of it 'til about Q2/3 2007.Quote:
Originally Posted by vitaminc
Well, neither is a lot of the other stuff you're talking about; you seem to be more interested in how AMD will do financially than in CPUs in general.Quote:
Originally Posted by vitaminc