It's good you included that, because IMHO, it's been the opposite for a while now.
It is definitely his opinion. IMO, Intel has had the arch lead since Conroe, but I'm a process guy, so what do I know.
I think it's too hard to tell. They are close, and there is no solid measurement of an architecture, unlike process (to some degree).
Frankly, who cares if AMD's current arch beats Intel's old arch? i7 exists; ignoring it doesn't make for a superior argument.
I don't have much objection to everything else he said; it was a decent post. But if you are comparing AMD's top arch against Intel's top arch, I think Intel's is superior.
Larrabee: a never-ending story and a major screw-up.
In its current form it is a definite failure, the question is why and whether it can be fixed.
The whole idea of doing graphics in SW isn't anything new; in fact, that was how things were done for most of the time computers have existed. But compute power was limited and graphics are very compute intensive, so you needed dedicated hardware to accelerate the common cases.
The problem is/was that the features people wanted to implement always exceeded what could be done in HW; basically, HW lacks the flexibility of the SW approach.
Nowadays, we're about to solve the compute problem, so why not do graphics in SW again? Basically, instead of buying a new card that supports a new DX version, why not just update the graphics driver and be done with it?
Tim Sweeney and other gurus in the industry have long sought a return to SW rendering. Intel sensed which way things were moving and in 2005 started the Larrabee project, involving some of the top brass of the graphics industry.
They simply played to their strengths:
- x86 cores for ease of programming and flexibility
- cache-coherent structure (cache design)
- large die size and core count (manufacturing)
What they lacked was the SW infrastructure. They couldn't reuse their existing graphics know-how, since the concepts and implementations are worlds apart.
Larrabee as we know it works well from a compute point of view; the hardware is in reasonable shape and is currently being shipped to developers and some customers.
Michael Abrash built something like 3-4 different rendering engines for Larrabee, and apparently they couldn't get the SW to an acceptable level to release it as a GPU as well. So for the moment we ended up with an accelerator card.
The second attempt, Larrabee 2, was cancelled, and the project decided to skip 32nm. From the looks of it, the project is far from dead, and the next major push is Larrabee 3: basically they are aiming for 22nm in H1 2012. I wouldn't be surprised to hear that the visual computing group is now larger than the ATI or NVIDIA teams.
Touched a nerve? Maybe in the future you should stop associating people with particular sites.
The discussion was about the project as a whole. Incarnation number 2 might very well be going well and hitting its new targets.
Exactly.
This is like saying a Ford borrows a lot from a Mercedes by having 4 wheels and using petrol.
AMD didn't even know what to name what was to become K10 by the time Nehalem was already in the implementation phase, with its uarch having been frozen for some time.
As late as mid-2006, AMD was still doing uarch work on K10. That they launched it in 2007 is incredible. The rushed nature of the project, however, led to the Barcelona disaster.
:rofl::rofl::rofl::rofl:
As if you, and some of the other blue fanboys in here, would know anything about when the uarch was in R&D at either Intel or AMD. Even worse, you have no NDA; you have to rely on what gets posted on forums. So cut the crap with all your posts; you are a fantasy-speculation brewer without a clue.
This whole thread is full of garbage. It's taking a long time to get this thread closed; is MM on holiday? I remember he asked some people to stay out of certain fanboy discussions...
Eh... /thread.