Intel Larrabee Roadmap: 48 cores in 2010
http://pc.watch.impress.co.jp/docs/2...gai364_03l.gif
http://pc.watch.impress.co.jp/docs/2...gai364_05l.gif
http://pc.watch.impress.co.jp/docs/2...gai364_06l.gif
http://pc.watch.impress.co.jp/docs/2...gai364_08l.gif
http://pc.watch.impress.co.jp/docs/2...gai364_07l.gif
http://pc.watch.impress.co.jp/docs/2...gai364_04l.gif
http://pc.watch.impress.co.jp/docs/2.../kaigai364.htm
The CPU architecture team is in charge of development
Larrabee, long seen as a discrete GPU, is really not a GPU at all. It is a many-core CPU specialized for stream computing, which processes large amounts of data in parallel. Gelsinger puts it this way:
"Larrabee is the high-speed steel kale parallel machine. We load very many cores, our メニイコア (Many-core) become the first product "
Larrabee sticks to the x86 instruction set architecture
Larrabee's most important feature is that it is a highly parallel processor built on an extended IA (x86) instruction set architecture. In this it differs greatly from GPUs and other stream processors, each of which has its own instruction set architecture.
"The core of Larrabee is IA instruction set interchangeable. This thinks that it is very important feature. However, floating point order is expanded to instruction set. It is the instruction set expansion which is specialized because of high parallel workload. In addition, cash coherency is taken in the form which extends over the core (joint ownership) it has cash. This (キャッシƒ…コヒーレンシ) when of プログラマビリティ is thought, it is very important. In addition, special use unit and I/O are loaded.
"Larrabee is by no means a GPGPU (general-purpose GPU); it is not in the traditional graphics pipeline space. It is a general-purpose processor, in other words, a processor aimed at uses where IA programmability matters. At the same time, through its instruction set extensions, it can handle specialized workloads." (Gelsinger)
Unlike a GPU, Larrabee is not a product that reworks the graphics pipeline for general-purpose use; it takes the broader approach from the start. Because of that, it maintains backward compatibility with the IA (x86) instruction set architecture. It is a processor that starts from the IA core of a general-purpose CPU and extends the microarchitecture toward floating-point stream computing.
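Intel has not published Larrabee's vector extensions, so as an illustration only, the existing SSE extension to x86 already shows the pattern Gelsinger describes: an ordinary IA core whose instruction set is extended with packed floating-point instructions for data-parallel loops. A minimal sketch in C (the function names are ours, not Intel's):

```c
/* Hypothetical example: the same x86 program, with the hot loop recast as
   packed floating-point instructions via the SSE extension (xmmintrin.h).
   Larrabee's actual vector ISA is not public; SSE only stands in for it. */
#include <xmmintrin.h>

/* plain scalar x86: one multiply-add per iteration */
void saxpy_scalar(float *y, const float *x, float a, int n)
{
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}

/* SSE-extended x86: four multiply-adds per iteration
   (assumes n divisible by 4 and 16-byte-aligned pointers) */
void saxpy_sse(float *y, const float *x, float a, int n)
{
    __m128 va = _mm_set1_ps(a);              /* broadcast a into 4 lanes */
    for (int i = 0; i < n; i += 4) {
        __m128 vx = _mm_load_ps(x + i);      /* load 4 floats            */
        __m128 vy = _mm_load_ps(y + i);
        vy = _mm_add_ps(vy, _mm_mul_ps(va, vx));
        _mm_store_ps(y + i, vy);             /* store 4 results          */
    }
}
```

The point of the analogy: both functions run on the same core with the same memory model and tools; only the inner loop changes. Larrabee's extension would presumably be much wider, but the programming model Gelsinger describes is of this kind.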
In fact, a discrete GPU plan from Intel's graphics core team also exists inside Intel. It differs completely from Larrabee in architecture and implementation; you could call it a discrete edition of Intel graphics. Since a discrete GPU is easy to derive from the graphics-integrated chipsets of the CSI generation, this is a natural progression. Intel has been pursuing this project for some time, but since no concrete product roadmap has been heard of, there is also a possibility that it will come to nothing.
A parallel processor with a different approach from the GPU
Larrabee's performance in graphics processing is unknown. Since graphics processing keeps shifting toward the execution efficiency of shader programs, there is a good chance a Larrabee-type architecture will turn out to be an advantage. However, the graphics pipeline also contains plenty of work where fully fixed-function units are more effective, such as rasterization, and work where semi-fixed units are effective, such as filtering and raster operations. If all of that is handled by general-purpose processors, the waste in performance per watt grows.
Because of that, Larrabee's efficiency in graphics will depend on how much GPU-style hardware it carries. It may well include dedicated hardware for small units whose circuit scale is modest.
What is clear is that the present Larrabee is not focused on graphics; rather, it is an architecture oriented toward non-graphics workloads.
==================================================
http://www.tgdaily.com/content/view/32447/113/
Intel aims to take the pain out of programming future multi-core processors
Santa Clara (CA) – The switch from single-threaded to multi-threaded applications to take advantage of the capabilities of multi-core processors is taking much longer than initially expected. Now we see concepts of much more advanced multi-cores such as heterogeneous processors surfacing – which may force developers to rethink how to program applications again. Intel, however, says that programming these new processors will require a “minimal” learning curve.
As promising as future microprocessors with perhaps dozens of cores sound, there appears to be a huge challenge for developers to actually take advantage of the capabilities of these CPUs. Both AMD and Intel believe that we will be using highly integrated processors, combining traditional CPUs with graphics processors, general-purpose graphics processors and other types of accelerators that may open up a whole new world of performance for the PC on your desk.
AMD recently told us that it will take several years for programmers to exploit those new features. While Fusion - a processor that combines a regular CPU and a graphics core - is expected to launch late in 2009 or early in 2010, users aren't likely to see functionality that differs from a processor with an attached integrated graphics chipset. AMD believes that it will take about two years, or until 2011, before the acceleration features of a general-purpose GPU are exploited by software developers.
http://www.tgdaily.com/images/storie...mc_program.jpg
Intel told us today that the company will be taking an approach that will make it relatively easy for developers to take advantage of this next generation of processors. The company aims to “hide” the complexity of a heterogeneous processor and provide an IA-like look and feel to the environment. Accelerators that are integrated within the chip are treated as processor functional units that can be addressed with ISA extensions and a runtime library. Intel compares this approach with the way multimedia extensions (MMX) were integrated into Intel’s instruction set back in 1996.
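For reference, the MMX model the article points back to surfaces to the programmer as compiler intrinsics over packed registers rather than as a new programming model. A minimal sketch in C (our example, not Intel's): four 16-bit additions issued as one instruction.

```c
/* Illustrative MMX intrinsics example (mmintrin.h); compile with MMX
   support enabled, e.g. gcc -mmmx. */
#include <mmintrin.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    __m64 a = _mm_setr_pi16(1, 2, 3, 4);     /* pack four 16-bit values  */
    __m64 b = _mm_setr_pi16(10, 20, 30, 40);
    __m64 sum = _mm_add_pi16(a, b);          /* one SIMD add, four lanes */

    short out[4];
    memcpy(out, &sum, sizeof out);
    _mm_empty();                             /* leave MMX state (EMMS)   */

    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```

The accelerator shows up as new instructions plus a small runtime, while everything else about the program stays ordinary x86; that is the experience Intel says it wants to reproduce for heterogeneous cores.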
As a result, Intel hopes that developers will be able to understand these new processors quickly and develop applications almost immediately. “It is a very small learning curve,” a representative told us today. “We are talking about weeks, rather than years.”
Nvidia, which is also intensifying its efforts in the massively parallel computing space, is pursuing a similar idea with its CUDA architecture, which allows developers to process certain applications - or portions of them - on a graphics card: instead of requiring a whole new programming model, CUDA can be used through a C++-based model and a few extensions that help programmers access the horsepower of an 8-series Geforce GPU.
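As a hedged sketch of what those "few extensions" look like in practice (the kernel name and launch sizes below are our own illustration, not from the article): the __global__ qualifier marks a function that runs on the GPU, and the <<<...>>> syntax launches it across thousands of threads.

```cuda
// Minimal CUDA sketch: scale every element of an array on the GPU.
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;
    float *d = 0;
    cudaMalloc((void **)&d, n * sizeof(float));   // allocate on the GPU
    /* a real program would cudaMemcpy input data in here */
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // grid size, block size
    cudaDeviceSynchronize();                      // wait for the kernel
    cudaFree(d);
    return 0;
}
```

Everything outside the kernel and the launch line is ordinary C/C++, which is why the small-learning-curve argument made above applies to CUDA as well.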
AMD & NVIDIA Talks About Intel Larrabee
http://pc.watch.impress.co.jp/docs/2.../kaigai365.htm
AMD Phil Hester
Phil Hester of AMD (Senior Vice President & Chief Technology Officer), on the other hand, points out that there is doubt about the decision to put the x86 instruction set on Larrabee.
Whether "Intel does what kind of selection in the design of Larrabee, it is not made still clear. But, story can do with the point, method of designing the processor which is directed to the application of data parallel type from the argument, Intel opposite AMD separated, most effectively.
"Looking across the PC industry as a whole, x86 processor vendors have kept adding instructions aimed at jobs such as graphics and media. GPU vendors, on the other hand, have recently made the GPU more modular and added features that give it a certain kind of general-purpose capability; you could rephrase that as trying to bring the GPU closer to a general-purpose processor. The two sides are approaching each other from opposite positions.
"Because of that, there is a view that this is a war between the CPU and the GPU. But frankly, it is an artificial war. Until now, CPU and GPU developers belonged to different companies, so neither side could bring the two technologies together and make them cooperate. As a result, the CPU and the GPU each evolved on their own. That was the situation until we (AMD and ATI) came together.
"In those circumstances, a CPU company will try to tie everything to the CPU, even in developing a data-parallel processor. It thinks in terms of CPU compatibility and of development starting from the CPU. That is presumably the choice Intel made with Larrabee. And I think that way of thinking is fundamentally mistaken, because it cannot produce a design suited to the structure of data-parallel applications.
"If you ask whether an integrated design of the two (an x86 processor and a data-parallel processor) is possible, of course it is. But when you think about performance, it is not a good choice. An x86 instruction set processor is good for general-purpose work, but it does not make a machine suited to parallel data execution. We would never design it that way.
"In our case, though, the CPU and GPU companies have merged, and their designers now work side by side. Because of that, we can integrate processor cores whose designs are each optimal for their respective applications. We will put the two (an x86 CPU and a data-parallel processor) on one chip, but we will not put the CPU's instruction set on the GPU itself. We will integrate the GPU's instruction set into the CPU's instruction space as user-level instructions of the CPU. We are not building a data-parallel machine that carries the CPU instruction set."
NVIDIA David B. Kirk
So what about NVIDIA, which is not a CPU vendor? NVIDIA is evolving its GPU architecture into a form suited to general-purpose data-parallel computing. David B. Kirk of NVIDIA (Chief Scientist) has been talking about this direction for GPU development since the days of the GeForce FX (NV3x) architecture. In that territory, NVIDIA's GPUs face Intel's Larrabee and AMD's Fusion and GPUs head on. NVIDIA is no longer merely a "graphics chip" vendor, and because of that, Larrabee becomes its biggest rival.
And of the three, NVIDIA alone takes the stance of leaving the x86 instruction set aside. It says the x86 instruction set will probably never be brought to the GPU, even in the future. This has become the decisive difference between NVIDIA and the two CPU vendors.
Kirk speaks about Larrabee and Fusion as follows:
If "it does from their present positions, as for expansion of x86 instruction set you think that it is logical step. Because the CPU instruction set which is their strengths is utilized. But, because we do not have the x86 core, (laughing), that does not become logical step for us. But this never probably does not become with our limpness. Because any strength, the reverse side is connected to limpness.
"x86 compatibility is very powerful and will probably be a strength. But it is also a constraint, because it restricts the choices they can make in their designs. When you are dragged along by legacy, it becomes difficult to make a processor design that is truly optimal for parallel streams.
"We, in contrast, can start from a completely blank sheet of paper. Because of that, we are free to make whatever choices we judge desirable. We can design a pure parallel stream computing environment without being dragged down by legacy.
"Their choice of making their processor x86-compatible (for stream computing) will probably bring unnecessary complexity. The history of past processors has proven that compatibility-driven designs like that are difficult; something always gets sacrificed. I believe that our choice, aiming for a pure design, will bring better results in the long term."
NVIDIA's view, then, is that x86 compatibility is useful but brings complexity to the processor. On this point Kirk and Hester agree. By building from scratch an instruction set and microarchitecture optimized for data-parallel computing, NVIDIA intends to make the processor with the best efficiency. Entrust conventional general-purpose processing to the CPU and pursue the optimal design for the GPU: that is NVIDIA's road.
NVIDIA has already begun expanding the GPU's functions along this course. Ahead lie the addition of 64-bit floating-point arithmetic and the achievement of 1 TeraFLOPS of single-precision floating-point performance on a single chip.