
Thread: Larrabee: A fiasco, or the future?


  1. #1
    Xtremely High Voltage Sparky's Avatar
    Join Date
    Mar 2006
    Location
    Ohio, USA
    Posts
    16,040
I don't know the tech specs of it, but it still strikes me as rather inefficient to take a bunch of x86 CPUs and have them do graphics. I mean, that's why the graphics card was created: it was designed specifically for graphics and did a much better job than a CPU.
    The Cardboard Master
    Crunch with us, the XS WCG team
    Intel Core i7 2600k @ 4.5GHz, 16GB DDR3-1600, Radeon 7950 @ 1000/1250, Win 10 Pro x64

  2. #2
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by SparkyJJO View Post
I don't know the tech specs of it, but it still strikes me as rather inefficient to take a bunch of x86 CPUs and have them do graphics. I mean, that's why the graphics card was created: it was designed specifically for graphics and did a much better job than a CPU.
Sound cards were created for the same purpose, and now every mobo has a chip with the sound codec on board and the CPU takes over the decoding. It's just a matter of computing power.

If your CPU were fast enough, it wouldn't matter that x86 is inefficient. The problem right now is that it isn't fast enough, and that's why specialised hardware is faster (and always will be).

Also, Intel is eager to bring x86 to the embedded market, so they aren't focusing only on the high-performance market but also on the mass market (set-top boxes, VDRs, etc.).

  3. #3
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by Vozer View Post
    I think we don't need to worry about LRB's raw power.
    Intel's Senior Graphics Software Architect Manager...
Please, what do you expect him to say?
"Oh well, now that you ask... LRB is actually quite a weak design and doesn't perform well at all"?

    and possibly leading a small team of engineers through any of the above.
What, what, WHAT?? OK... now I'm surprised... Intel doesn't seem to take LRB half as seriously as I thought they would...

    Quote Originally Posted by Chumbucket843 View Post
    fail post.

If Intel says Larrabee will compete with Radeon/GeForce, that means it will range from low power all the way up to high end. Intel designs all of their architectures to be scalable, from Atom to Nehalem-EX, so I don't see why they wouldn't do this for their graphics lineup.
It's about perf per transistor... Intel won't be able to match that of a GPU for rasterization, so they need more transistors, which means a bigger chip... and considering how people crack jokes about GT300 being huge and a failure because TSMC's 40nm process will probably never have good yields with such a massive chip... well, think about a chip that is 20-50% bigger to reach the same performance, at a higher TDP... and now you might see why Intel might not be able to capture the high-end market...

    Quote Originally Posted by Unoid View Post
    Even if it does only perform at 275 level, I'd still buy one just to be a GPGPU. Games and software will definitely support it for anything.
Hahah, wow... Intel must be praying that there are enough naive customers like you, who buy a product because they are convinced it must be good at something even if it sucks at what it's actually targeted at.

Think of PhysX... Ageia taught everybody a lesson in how naive many customers are nowadays and how much money they will spend on a product that doesn't even have any real use.

Same as that Killer NIC card that supposedly speeds up your games, but you can't measure it, LOL.

    Quote Originally Posted by Helloworld_98 View Post
They tested it at 1 GHz, but rumours say it'll release at 2 GHz.

And IIRC it's 48 cores, not 64.
I think they will wait for 32nm, at least for the high-end version, and it'll be 64 cores or even more... and there are no measured performance numbers so far; it's all educated guesses from Intel at this point, AFAIK...


    Quote Originally Posted by SparkyJJO View Post
I don't know the tech specs of it, but it still strikes me as rather inefficient to take a bunch of x86 CPUs and have them do graphics. I mean, that's why the graphics card was created: it was designed specifically for graphics and did a much better job than a CPU.
Heh yeah... and not only that... I think Intel is really taking this too light-heartedly and too arrogantly...

They want LRB to be:
cost competitive
TDP competitive
rasterization competitive
x86 competitive

Anything else?
Seriously... how arrogant do you have to be to think you can not only create a product that beats fixed-function logic on its home territory, but also delivers outstanding general-purpose performance, and all this on the same or a worse manufacturing process, within the same TDP envelope, and at the same price?

It's like Boeing announcing they will launch a new plane that can transport more people than an A380, fly faster than a Concorde, and do all that at the same price and fuel consumption as an average A330 passenger jet.

Oh, and as if that weren't enough, they have a, in their own words, SMALL design team working on this... most of them have never worked with each other before and each comes from a different background, probably resulting in different views on things, and conflicts...

I don't think Intel's top guys realize what important strategic value LRB has for the future of their company... a small team...

    Quote Originally Posted by Hornet331 View Post
Sound cards were created for the same purpose, and now every mobo has a chip with the sound codec on board and the CPU takes over the decoding. It's just a matter of computing power.

If your CPU were fast enough, it wouldn't matter that x86 is inefficient. The problem right now is that it isn't fast enough, and that's why specialised hardware is faster (and always will be).

Also, Intel is eager to bring x86 to the embedded market, so they aren't focusing only on the high-performance market but also on the mass market (set-top boxes, VDRs, etc.).
OK, let's look at sound cards... while they heavily use the CPU to get stuff done, is there any mainboard, at all, that uses the CPU ONLY?
Is there any mainboard that does audio completely in software on the CPU?

That's the same reason why many people doubt doing graphics entirely on x86 cores; it makes no sense...
    Last edited by saaya; 09-20-2009 at 07:57 PM.

  4. #4
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by saaya View Post
OK, let's look at sound cards... while they heavily use the CPU to get stuff done, is there any mainboard, at all, that uses the CPU ONLY?
Is there any mainboard that does audio completely in software on the CPU?
Every mobo with an audio codec...?
The "audio codec" in hardware on the mobo is nothing more than an A/D and D/A converter.

The real decoding is done in software, i.e. on the CPU.

  5. #5
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by Hornet331 View Post
Every mobo with an audio codec...?
The "audio codec" in hardware on the mobo is nothing more than an A/D and D/A converter.

The real decoding is done in software, i.e. on the CPU.
You're right, bad example...

  6. #6
    Xtreme Member
    Join Date
    Sep 2008
    Posts
    228
    Quote Originally Posted by saaya View Post
    Intel's Senior Graphics Software Architect Manager...
    please, what do you expect him to say?
    I know he's an Intel man and all, but, you know, I'd rather believe the project insiders than rumours from other sources.

    Quote Originally Posted by Tom Forsyth
    The SuperSecretProject is of course Larrabee, and while it's been amusing seeing people on the intertubes discuss how sucky we'll be at conventional rendering, I'm happy to report that this is not even remotely accurate. Also inaccurate is the perception that the "big boys are finally here" - they've been here all along, just keeping quiet and taking care of business.

  7. #7
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by SparkyJJO View Post
    I don't know the tech specs of it but it still strikes me as a rather inefficient thing to take a bunch of x86 CPUs and have them do graphics. I mean, that's why the graphics card was created, because it was designed specifically for graphics and did a much better job than a CPU
Here is a better idea of what Larrabee is and is not:
NVIDIA has 240 SPs, AMD has 160 SPs, and Intel has 32 cores.

  8. #8
    Xtreme Enthusiast
    Join Date
    Dec 2007
    Posts
    816
    Quote Originally Posted by Chumbucket843 View Post
Here is a better idea of what Larrabee is and is not:
NVIDIA has 240 SPs, AMD has 160 SPs, and Intel has 32 cores.
You are on the right track... in the end, when all the DirectX 11 stuff is adopted, it will turn into a race for instructions per clock... because you will start having "special" processing for each pixel, and when doing so you'll have to start reusing all the tricks of the CPU: load units that don't have issues with alignment, fast branching, SIMD, and all the usual gadgets.

By 2015 to 2020 you'll be back to CPU cores; that is my prediction.

(It is OK to disagree; you don't have to beat me up verbally :-P)
    DrWho, The last of the time lords, setting up the Clock.

  9. #9
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by Chumbucket843 View Post
Here is a better idea of what Larrabee is and is not:
NVIDIA has 240 SPs, AMD has 160 SPs, and Intel has 32 cores.
That picture is very misleading...
Here, this is much better:

[diagrams: Larrabee, GT200, RV770]

Somebody correct me if I'm wrong, but this is how I understood it from the SIGGRAPH papers:
LRB = 32+ "16-way" processors, 2 flops/clock
RV770 = 10 "16-way" processors, 10 flops/clock
GT200 = 30 "8-way" processors, 3 flops/clock
RV870 = 20 "16-way" processors, 20 flops/clock
GT300 = 60(?) "8-way"(?) processors, 6 flops/clock

EDIT: corrected the info
http://graphics.stanford.edu/~kayvon...g/diagrams.pdf
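Those per-lane figures can be turned into rough peak-throughput estimates by multiplying cores x SIMD lanes x flops per lane per clock x clock speed. A minimal sketch; the clock speeds are my own assumptions, not from the thread (roughly 2 GHz for LRB as rumoured above, 750 MHz for RV770, and the 1476 MHz GT200 shader clock):

```python
def peak_gflops(cores, simd_width, flops_per_clock, clock_ghz):
    """Theoretical peak = cores x SIMD lanes x flops per lane per clock x clock (GHz)."""
    return cores * simd_width * flops_per_clock * clock_ghz

# Assumed clocks: ~2 GHz LRB (rumoured), 0.75 GHz RV770, 1.476 GHz GT200 shader clock.
print(peak_gflops(32, 16, 2, 2.0))    # LRB:   2048.0 GFLOPS
print(peak_gflops(10, 16, 10, 0.75))  # RV770: 1200.0 GFLOPS
print(peak_gflops(30, 8, 3, 1.476))   # GT200: ~1062.7 GFLOPS
```

Under those assumed clocks the RV770 and GT200 numbers land close to their advertised ~1.2 TFLOPS and ~1 TFLOPS peaks, which suggests the per-lane figures in the list are in the right ballpark.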
    Last edited by saaya; 09-25-2009 at 08:41 AM.

  10. #10
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,366
    Quote Originally Posted by saaya View Post
LRB = 32+ 4-way processors
RV770 = 16 blocks of 10 "5-way" processors
GT200 = 8 blocks of 3 8-way processors = 24 "8-way" processors
RV870 = 32 blocks of 10 "5-way" processors
GT300 = 16(?) blocks of 3(?) 8-way(?) processors = 48 "8-way" processors

So if you just look at processors, AFAIK LRB is 32+ (probably 48), GT300 is probably 48 too, and RV870 is a whopping 320. LRB and GT300 have fast and very beefy processors, while RV870 has rather simple processors, clocked at only half of LRB and GT300, but makes up for it with their sheer number.
You're mixing up different things. "4-way" for LRB means 4 threads per core. It has a 512-bit vector FPU, so each core can execute 16 32-bit ops per cycle; considering its FMAC capability, that's 32 FLOPs/cycle in total.
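That arithmetic can be written out explicitly; the 512-bit register width, 32-bit floats, and FMAC counting as two flops all come from the post above:

```python
# Per-core throughput of Larrabee as described in the correction above.
vector_bits = 512                    # width of the vector FPU
float_bits = 32                      # single-precision operand size
lanes = vector_bits // float_bits    # 16 lanes per core
flops_per_lane = 2                   # FMAC = one multiply plus one add
flops_per_cycle = lanes * flops_per_lane
print(flops_per_cycle)  # 32 FLOPs per core per cycle
```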

  11. #11
    Xtreme Enthusiast
    Join Date
    Dec 2007
    Posts
    816
    Quote Originally Posted by saaya View Post
That picture is very misleading...
Here, this is much better:

[diagrams: Larrabee, GT200, RV770]

Somebody correct me if I'm wrong, but this is how I understood it from the SIGGRAPH papers:
LRB = 32+ "16-way" processors
RV770 = 16 blocks of 10 "5-way" processors = 160 "5-way" processors
GT200 = 8 blocks of 3 8-way processors = 24 "8-way" processors
RV870 = 32 blocks of 10 "5-way" processors = 320 "5-way" processors
GT300 = 16(?) blocks of 3(?) 8-way(?) processors = 48 "8-way" processors

So if you just look at processors, AFAIK LRB is 32+ (probably 40-48), GT300 is probably 40-48 too, and RV870 is a whopping 320. LRB and GT300 have fast and very beefy processors, while RV870 has rather simple processors, clocked at only half of LRB and GT300, but makes up for it with their sheer number.
There are many more parameters to consider, like the speed of branching and the speed of aligned and unaligned loads... you are back to the instructions-per-clock race... hehehehe...
DX11 and OpenCL will open the door to this again; IPC is the key... actually IPC per watt.

Check it out here: http://www.realworldtech.com/page.cf...WT090909050230
You'll figure out that Silverthorne (Atom) is as efficient as the RV770, and that is without the 512-bit execution units of LRB and without its texture sampler...


Engineers understood where this is going. I wish you guys would put your fanboy hate in your pocket and look at the technology itself.
Being able to compensate for the overhead of x86 was the challenge; this graph shows that it has been done.

Now the next challenge is to make many of those x86 cores work together...
Then it comes down to who is able to make large dies, and we all know who has the best fabs on the planet.

May the Core be with you!

Francois
    Last edited by Drwho?; 09-25-2009 at 08:46 AM.
    DrWho, The last of the time lords, setting up the Clock.
