
Thread: Intel Larrabee Roadmap 48 cores in 2010

  1. #26
    Xtreme Member
    Join Date
    Jun 2005
    Location
    A..T..L
    Posts
    415
    You could, in theory, parallelize a workload that supports parallel computing to a huge degree. We made a cluster out of about 25 Intel Celeron 433s, each with around 256-512MB of RAM, plus a central node.

    Through the central node I had a background display of how many GHz and how much RAM I had available, along with virtual memory [HD] capacity. Running the class's video and 3D renderings through that was cake; it took maybe a couple of minutes at most for nearly any project we threw at it. We used a cluster version of Knoppix.

    From experience, with the right software architecture, that looks highly probable.

    Ps: WE NEED A F@H CLUSTER CLIENT!
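
    A cluster client like that is basically a central node farming out independent work units to workers. Below is a minimal sketch of that master/worker pattern, written with MPI purely for illustration; the unit count, the process_unit placeholder, and the message layout are assumptions, not part of any real F@H or Knoppix cluster setup.

    ```cpp
    // Minimal master/worker farm: a central node hands out independent work
    // units (render frames, folding work units) and collects results.
    // Illustrative sketch only; build with an MPI toolchain, e.g. `mpic++ farm.cpp`.
    #include <mpi.h>
    #include <cstdio>

    // Placeholder for the real per-unit computation (a render job, a folding WU).
    static int process_unit(int unit) { return unit * unit; }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int total_units = 100;  // assumed to be >= the number of worker ranks
        const int STOP = -1;          // sentinel telling a worker to shut down

        if (rank == 0) {              // central node: dispatch units, collect results
            int next = 0, received = 0;
            // Prime every worker with one unit, then keep feeding whoever finishes.
            for (int w = 1; w < size && next < total_units; ++w, ++next)
                MPI_Send(&next, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
            while (received < next) {
                int result;
                MPI_Status st;
                MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &st);
                ++received;
                int msg = (next < total_units) ? next++ : STOP;
                MPI_Send(&msg, 1, MPI_INT, st.MPI_SOURCE, 0, MPI_COMM_WORLD);
            }
            std::printf("central node collected %d results\n", received);
        } else {                      // worker node: loop until told to stop
            for (;;) {
                int unit;
                MPI_Recv(&unit, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                if (unit == STOP) break;
                int result = process_unit(unit);
                MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }
    ```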

    -nome
    AMD X2 3800+
    DFi LANPARTY UT NF590 SLI-M2R/G
    2 x 1Gb Crucial PC8500 [Anniversary Heatspreaders ]
    Custom Watercooling on the way
    Thermalright XP-90 right now
    27" 1080p HDTV for monitor
    Quote Originally Posted by The Inq
    We expect the results to go officially live prior to Barcelona launch in September. µ

  2. #27
    Xtreme Addict
    Join Date
    Apr 2005
    Location
    Wales, UK
    Posts
    1,195
    With that many cores it would not be a case of the apps in use today running faster, but rather of being able to create apps with more functionality.

    Just taking games as an example: instead of running all the AI on one core, you could have a core dedicated to the AI of an individual character, with many, many characters in play. You could run several physics threads on several cores, with another to manage thread interaction. You could have several cores running ray-tracing algorithms to calculate lighting, which could then be passed on to the GPU or rendered directly. You could use several cores to process audio streams, decompress and decode textures, video textures, etc.

    There will not be a case of there being 'enough' processing power for a long time yet; you can always think of more things to do, even if it becomes increasingly difficult to speed up the execution time of an individual thread.
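
    As a rough illustration of that "one subsystem or agent per core" idea, here is a minimal C++11 sketch that gives the physics step and each AI character its own thread. The subsystem names, the thread counts, and the 16 ms tick are placeholders, not taken from any real engine.

    ```cpp
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    std::atomic<bool> running{true};  // shared shutdown flag

    // Hypothetical per-character AI: each agent runs independently of the others.
    void ai_agent(int id) {
        (void)id;  // the real AI would act on this character's state
        while (running.load()) {
            // ... evaluate behaviour tree / pathfinding for this character ...
            std::this_thread::sleep_for(std::chrono::milliseconds(16));
        }
    }

    // Hypothetical physics worker: steps the simulation on its own core.
    void physics_world() {
        while (running.load()) {
            // ... integrate rigid bodies, resolve collisions ...
            std::this_thread::sleep_for(std::chrono::milliseconds(16));
        }
    }

    int main() {
        std::printf("hardware threads available: %u\n",
                    std::thread::hardware_concurrency());

        std::vector<std::thread> workers;
        workers.emplace_back(physics_world);
        for (int id = 0; id < 8; ++id)       // one thread per AI character
            workers.emplace_back(ai_agent, id);

        std::this_thread::sleep_for(std::chrono::seconds(1));  // stand-in for the game running
        running.store(false);                // signal every thread to wind down
        for (auto& t : workers) t.join();
        return 0;
    }
    ```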

  3. #28
    Registered User
    Join Date
    Sep 2006
    Posts
    69
    Quote Originally Posted by onewingedangel View Post
    With that many cores it would not be a case of the apps in use today running faster, but rather of being able to create apps with more functionality.

    Just taking games as an example: instead of running all the AI on one core, you could have a core dedicated to the AI of an individual character, with many, many characters in play. You could run several physics threads on several cores, with another to manage thread interaction. You could have several cores running ray-tracing algorithms to calculate lighting, which could then be passed on to the GPU or rendered directly. You could use several cores to process audio streams, decompress and decode textures, video textures, etc.

    There will not be a case of there being 'enough' processing power for a long time yet; you can always think of more things to do, even if it becomes increasingly difficult to speed up the execution time of an individual thread.
    Exactly. Rather than one nanny thread, as in single-threaded applications, there will be many threads which all operate independently.
    Software will change, but software has to sell. Until you know the market has the required spec to run your software, you have to target the masses, or you're going to go under.
    ASROCK Dual 775-VSTA
    E6400 @ 2.4ghz
    1.5gig ram
    some other stuff...


  4. #29
    Xtreme Enthusiast
    Join Date
    Apr 2005
    Posts
    894
    It is very rational; after all, that's what we are.

    A lotta stupid cells.
    Gaming: SaberThooth X79,3930k,Asus6970DCII_Xfire,32gb,120OCZV3MaxIOPS, ThermaTake Chaser MK1
    HTPC:AMD630,ATI5750,4gb,3TB,ThermalTake DH103
    Server: E4500,4GB,5TB
    Netbook: Dell Vostro 1440

  5. #30
    Registered User
    Join Date
    Jun 2006
    Posts
    61
    Updated first post

    http://www.tgdaily.com/content/view/32465/113/

    Intel research points to increased power efficiency in future I/O architectures

    Kyoto (Japan) – Intel has unveiled research results of new I/O technology that improves the power efficiency of existing architectures such as PCI Express by a factor of 7.

    As we are heading deeper into the multi-core space, hardware manufacturers are likely to be facing a capacity and power bottleneck at the I/O transceiver. Processors with potentially dozens of cores are calling for significantly faster and more power efficient on-chip and off-chip interfaces to take advantage of the additional processing power.

    Intel said that it has developed a new technology which consumes only about 14% of the power of PCI Express 2.0. At a maximum data rate of 5 Gb/s, PCI Express 2.0 consumes about 20 mWatts per Gb/s of bandwidth. Intel claims that its new and not yet named technology can achieve 5 Gb/s at about 2.7 mWatts per Gb/s.

    So far, the power efficiency of the technology is scaling almost linearly and achieved 3.6 mWatts per Gb/s at 10 Gb/s and 5.0 mWatts per Gb/s at 15 Gb/s – which, to our knowledge, is the best power efficiency result for I/O receivers achieved so far. Infineon claimed the crown in this category in February 2006, when it disclosed details about a 9.6 Gb/s transceiver running at 10.4 mWatts per Gb/s.

    At least in theory, Intel’s interface could bump today’s available bandwidth by a factor of 3 while consuming 75% less power per bit transferred. Randy Mooney, a Fellow and director of I/O research at Intel, however, pointed out that the technology is still at a research stage.

    He confirmed that we could see the technology surfacing within a few years in Intel products, but declined to comment on specific products. Instead, Mooney explained that the new I/O findings will be a key technology in enabling “a large number” of cores (which means more than 10 cores, according to company representatives). Example applications could include point-to-point links in microprocessors, which will replace the front side bus in today’s Intel architecture.
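
    A quick back-of-the-envelope check of the numbers quoted above; the mW-per-Gb/s figures come from the article, and the arithmetic is just multiplication, so treat it as a sanity check rather than anything official.

    ```cpp
    #include <cstdio>

    int main() {
        // Efficiency figures from the article (milliwatts per Gb/s of bandwidth).
        const double pcie2_eff   = 20.0;  // PCI Express 2.0 at 5 Gb/s
        const double intel_eff5  = 2.7;   // Intel research link at 5 Gb/s
        const double intel_eff15 = 5.0;   // Intel research link at 15 Gb/s

        // Same 5 Gb/s data rate: 2.7 / 20 = 13.5%, the article's "about 14%".
        std::printf("power vs PCIe 2.0 at 5 Gb/s: %.1f%%\n",
                    100.0 * intel_eff5 / pcie2_eff);

        // Triple the data rate (15 Gb/s): per-bit power drops from 20 to 5 mW/Gb/s,
        // i.e. 75% less energy per bit moved.
        std::printf("per-Gb/s power saving at 15 Gb/s: %.0f%%\n",
                    100.0 * (1.0 - intel_eff15 / pcie2_eff));

        // Absolute link power, for reference: efficiency * data rate.
        std::printf("PCIe 2.0 @  5 Gb/s: %.0f mW\n", pcie2_eff   *  5.0);  // ~100 mW
        std::printf("Intel    @ 15 Gb/s: %.0f mW\n", intel_eff15 * 15.0);  // ~75 mW
        return 0;
    }
    ```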

  6. #31
    Xtreme Cruncher
    Join Date
    Aug 2006
    Location
    Denmark
    Posts
    7,747
    The I/O work is very nice. We all know chipsets with massive lane counts aren't friendly on that side. It's a small thing, but everything helps. Let's hope nVidia picks it up quickly.
    Crunching for Comrades and the Common good of the People.

  7. #32
    Registered User
    Join Date
    Jun 2006
    Posts
    61

    AMD & NVIDIA Talk About Intel Larrabee

    http://pc.watch.impress.co.jp/docs/2.../kaigai365.htm

    AMD Phil Hester

    Phil Hester, AMD's Senior Vice President & Chief Technology Officer (CTO), questions the decision to bring the x86 instruction set to Larrabee.

    "Exactly what choices Intel has made in the design of Larrabee is still not clear. But what we can talk about is how AMD and Intel part ways on the most effective way to design a processor aimed at data-parallel applications.

    If you look across the PC industry as a whole, the x86 processor vendors have kept extending the instruction set toward workloads such as graphics and media. The GPU vendors, on the other hand, have recently been adding a degree of general-purpose capability to their GPUs – you could say they have been trying to move closer to a general-purpose processor. The two sides are approaching each other from opposite starting points.

    Because of that, some people see this as a war between the CPU and the GPU. But frankly, it is an artificial war. Until now, the CPU and GPU developers belonged to different companies and could not bring their technologies together or make them work in concert, so CPUs and GPUs each evolved on their own. That was the situation until we (AMD and ATI) came together.

    In that situation, a CPU company will try to tie everything to the CPU, even when developing a data-parallel processor. It thinks in terms of CPU compatibility and of developing outward from the CPU. That is presumably the choice Intel made (with Larrabee). And I think that is fundamentally the wrong way of thinking, because we do not believe it can lead to a design that truly fits the structure of data-parallel applications.

    If you ask whether an integrated design of the two (an x86 processor and a data-parallel processor) is possible, of course it is. But when you think about performance, it is not a good choice. A processor built on the x86 instruction set is good for general-purpose work, but it does not make a machine well suited to data-parallel execution. We would never do the design that way.

    In our case, though, the CPU and GPU companies have merged and their designers now work together. Because of that, we can integrate processor cores that are each optimal for their own kind of application. We will put the two (an x86 CPU and a data-parallel processor) on one chip, but we will not put the CPU's instruction set on the GPU itself. Instead, the GPU's instruction set is integrated into the CPU's instruction space as user-level instructions of the CPU. We are not building a data-parallel machine that carries the CPU instruction set."


    NVIDIA David B. Kirk

    So what about NVIDIA, which is not a CPU vendor? NVIDIA has been evolving its GPU architecture toward general-purpose data-parallel computing. David B. Kirk, NVIDIA's Chief Scientist, has been talking about taking the GPU in that direction since the GeForce FX (NV3x) architecture. In that territory, NVIDIA's GPUs face Intel's Larrabee and AMD's FUSION head on. NVIDIA is no longer just a "graphics chip" vendor, and that makes Larrabee its biggest rival.

    Of the three, NVIDIA alone takes the stance of leaving the x86 instruction set out. Kirk says the x86 instruction set will probably never be brought to its GPUs, and that has become the decisive difference between NVIDIA and the two CPU vendors.

    Kirk had this to say about Larrabee and FUSION:

    "From their current positions, extending the x86 instruction set is a logical step, because it leverages the CPU instruction set that is their strength. But since we do not have an x86 core (laughs), that is not a logical step for us. I do not think that will turn out to be a weakness for us, though, because the flip side of any strength is a weakness.

    x86 compatibility is very powerful, and it will be a strength for them. But it is also a restriction, because it limits the choices available in their designs. Once you are dragged along by legacy, it becomes hard to make a processor design that is truly optimal for parallel streams.

    We, by contrast, can start from a completely blank sheet of paper. Because of that, we are free to make whatever choices we think are best, and we can design a pure parallel stream computing environment without being dragged down by legacy.

    Their choice to make the processor (for stream computing) x86-compatible will probably bring unnecessary complexity. The history of past processors has shown that compatibility-driven designs like that are difficult; something always has to be sacrificed. I believe that in the long term our choice to aim for a pure design will prove to be the better one."

    NVIDIA's view, then, is that x86 compatibility is useful but adds complexity to the processor, and on that point Kirk agrees with Hester. By building an instruction set and a microarchitecture optimized from scratch for data-parallel computing, NVIDIA aims to make the most efficient processor it can. Leave conventional general-purpose processing to the CPU, and pursue the optimal design for the GPU: that is NVIDIA's road.

    NVIDIA has already begun expanding the GPU's capabilities along this course. Ahead lie the addition of 64-bit floating-point arithmetic and single-precision floating-point performance of 1 TeraFLOPS on a single chip.

  8. #33
    Xtreme Enthusiast
    Join Date
    Nov 2005
    Posts
    844
    I've said it so many times, but I really wish we had another competitor in x86, regardless of what Nvidia says, and yes, I want Nvidia to join the x86 boat if Intel allows it.
    -Cpu:Opteron 170 LCBQE 0722RPBW(2.87ghz @ 1.300v)
    (retired)Opteron 146 (939) CAB2E 0540
    -Heatsink: Thermalright XP-90
    -Fan:120mm Yate Loon 1650 RPM @ 12V, 70.5 CFM, 33dB
    -Motherboard: DFI Lanparty nF4 UT Ultra-D
    -Ram: Mushkin High Performance blue, 2gigs(2X1gig kit) PC3200 991434
    -Hard drive: Seagate 400GB Barracuda SATA HD 7200.10(AS noisey model)
    -Video card: evga 6800GS @520/1170
    -Case: P180
    -PSU:Enermax 535Watt EG565P-VE FMA (24P)
