Don't judge too fast, it is an early prototype, with early software ...
Give us time ... we are taking on people with 14 or more years of experience, and Paris and Rome were not built in a day ... We are working hard ... wait and see :)
Francois
You are aiming for a fast moving target. You do not have the luxury of time. Same deal on the ARM front, and your results there haven't been impressive either. The iPhone exterminated all those MIDs you guys were showing off last year before that Next Big Thing even got out of the cradle.
I have to ask about this quote from the first article in the OP's post
Is this for real? According to the Sept 1st S&P stock report, Intel makes 97% of its money from CPUs and chipsets (Digital Enterprise Group = 55% and Mobility Group = 42%). I guess both include their integrated graphics though. The report doesn't go into that level of detail.
Quote:
There is no doubt in our minds that Intel is going to deliver Larrabee, as it is the future of the company.
Even so, it's hard for me to see that their future would depend on mid- to high-end graphics cards.
So if Larrabee does fall flat, does anyone really think the future of the company is in peril? I'm not arguing, it just seems to give way too much weight to this market segment.
I would really LOVE a high-end GPU from Intel, they have good support and even better products. If this performed close to the 5870, I would use it in a heartbeat. Intel is one of the best companies. AMD doesn't take care of its partners in the channel; Intel is great at supporting even the lower-tier dealers. AMD has a bad rep. I've been a reseller for a long time, and I know.
A question... If that was a realtime demonstration using Larrabee... why didn't they show a PCB or the card? Just open the box and show the card...
I think that demonstration was simply Daniel Pohl's old ray-traced Quake Wars running on a 32nm Xeon... and no LRB card was inside the box. Think about it: that demo runs at roughly 20 FPS... more or less the same as the old QW-RT demonstrations. There are no new effects, no antialiasing, etc... it's the same demo Intel has been showing for 3 years now.
Time is something you don't have.... Like many others have said, the competition is a fast-moving target. They aren't standing still, and the hardware that albatross will have to compete with, in the 2 or 3 years it takes to get at least acceptable performance out of it, will be many more times faster than it is currently. And also like many others have said, this time you'll have to try to compete within the law. If there is any bribing or backdoor dealing going on, then NO SOUP FOR YOU! :p:
yeah but nobody claimed to build rome in 2 years... intel DID claim they'd build lrb in 2 years... it's intel that set themselves this insane timeline, nobody else... it was stupid imo, give yourself more time than you think you need and surprise everybody by getting your work done early... that's the good old star trek scotty philosophy that works great :D
actually intel usually does exactly this with cpus, conservative forecast and then outperform the expectations you set... with lrb i think everybody had strong doubts from the start that it would be available in time and perform that well...
longterm, it might not kill intel or rob them of their market leader position... but the latter is very likely if you ask me, seeing as cpus and gpus are on a crash course...
there are two major shifts going on, from pc to consoles, and from pcs/laptops to smartphones.
intel may have conquered the pc market, but the pc market is shrinking, it's no secret...
it's kinda like intel being the market leader in old-school rollerskates, while everybody is getting inline skates nowadays...
how many game consoles use an intel cpu? 0...
how many game consoles are even x86 using intel technology? 0... ps3, xbox360, wii, psp, ds... all powerpc, mips or arm...
how many smartphones use an intel cpu? 0...
how many smartphones are even x86 using intel technology? 0... iphone, blackberry, palm, nokia, sony ericsson, motorola, sharp, htc, lg, samsung anycall, etc... all arm
the pc will not die out, but it's losing its importance...
that's why intel is pushing atom and lrb... atom for smartphones and netbooks, and lrb for consoles and pc gaming.
while atom was a commercial success for intel, it is far from a successful product in customer perception...
atom is too hot for phones and pdas and mids, and too weak for netbooks...
the only reason it was a success is because it's cheap...
lrb won't have that advantage, and i'm starting to think it'll have the same problems as atom, it won't really do great in either segment it's targeted at...
but we will see...
yepp, i was wondering the same thing...
it's definitely very suspicious...
maybe, but lrb has definitely needed quite some patchwork and doesn't look that healthy...
how much is intel making with 1 atom and how much is apple making with 1 iphone? ;)
margin on atom is higher than on iphone... you mean percentage-wise... well that's hard to tell since i'm sure the fab cost and r&d cost are not really calculated as a fee per processor, same for apple, but it's definitely much lower for apple since they outsource manufacturing...
and a high percentage margin is nice, but in the end what you want is to make a lot of money, so you want a high us$ margin per product... and there the iphone definitely beats atom... and most iphone users LOVE their product and LOVE apple for it... while most atom users aren't exactly in love with it and don't exactly worship intel for it either... :D
fanboy? or who or what?
don't get what you mean...
With all due respect, I don't know if you have ever tried to work on a puzzle that has more than 1 billion parts ... this is what GPUs are ... so, it is always easy from behind your screen to write those kinds of statements; nevertheless, I am sure you have never even completed a Lego with 100 000 parts ;-)
so, send us the picture of a 100 000-part Lego before you are "that" aggressive. :clap:
so, put your ego back in your pocket, and show some respect for the people who are actually trying to build one of the most complex machines ever built. :yepp:
The point of the demo was to show that it is alive. I did not want the performance to be visible, and neither did the Big Boss, so that was perfect to allow no comparison yet. This is like a young kid that just stood up, don't ask it to run the fastest 100 meters yet. Carl Lewis crawled before he walked, and he walked before he smoked you in the 100 meters.
Thanks for understanding that this kind of achievement doesn't get done overnight.
Francois
PS: I only used your tone to make you feel how we feel when you write this kind of thing. :shrug:
Dude, don't climb all up on the pity train now. There are many, many companies that produce many, many products that get criticized just as much as and more than this albatross. It's embarrassing to watch you play the victim role! :p: Seriously, it's simple, you've over-promised and under-delivered! :D No offense....
So basically you're saying it's a bit :banana::banana::banana::banana:e right now but in the future it will be a monster at 100m.
I'm sure it will be a fantastic co-processor, I'm just worried it will be a let down when it comes to gaming. If it was an add-on card for say havok+raytracing lite, then it would possibly be a good addition.
Perhaps with all that power it could be a physics, ray lite and sound card all in one?
Hey, you want my money. Why should I be sympathetic to your problems? Your SSDs and processors are well executed, but I doubt you will get any 'pity buys' for Larrabee, unless of course it happens to be good; then they will be legitimate, earned buys. But I doubt anyone is waiting with bated breath.
Carl Lewis also took illegal stimulants, and should per the rules have been banned from competing on at least one occasion (perhaps he had a better marketing/legal team than even intel:D), so your analogy could have been more wisely chosen. Perhaps something involving cars would work better? Oh, wait...
here is a better idea of what larrabee is/is not.
nvidia has 240 SPs, AMD 160 (5-wide) SPs, and intel 32 cores.
http://images.anandtech.com/reviews/...comparison.jpg
you are on the right track ... in the end, when all the DirectX 11 stuff is adopted, it will end up as a race for instructions per clock ... because you will start having "special" processing for each pixel, and when doing so, you'll have to start reusing all the tricks of the CPU ... load units that don't have issues with alignment, fast branching ... SIMD and all the usual gadgets.
by 2015 to 2020, you'll be back to CPU cores, that is my prediction.
(It is OK to disagree, you don't have to beat me up verbally :-P)
I wouldn't talk that way if you (your company) weren't so arrogant over the last few years with Larrabee. Your marketing was more along the lines of: "We've never made a GPU, but what's the problem? We'll make one now, revolutionize the whole market and crush everything else without a problem. It's just a GPU."
And now it's suddenly not that easy. I respect the people working on it, but I don't respect the attitude of your company.
PS: Never made such a big Lego, but if I did, you still couldn't see it. My Lego parts are much smaller than yours ;)
I agree with you, some people talk their projects up in big words, but it is legitimate, just think about it:
This is their dream come true, they are like me and you, geeks who enjoy technology. If they did not talk their project up, they would not be putting so much energy into it. They are very enthusiastic about it, they and I believe in it, we have spent a lot of energy on it, and we will not stop before we get the best out of it. This is Intel at its best, this is what we are good at ... take a problem, analyze it, and bang your head on the wall as long as it is not the best ... Conroe was born like this, Pentium was born like this, Atom was born like this.
They forgot that before running, you have to be walking ... we just showed crawling ... we showed prototypes, and they were running ... now they are putting even more energy into making it awesome, they are motivated people, the best kind, young and smart.
Give them a chance, and don't take their enthusiasm as an issue, but as a strength ... they will surprise you, and when the tuning comes, I have my big surprises coming too ... so, just relax and wait ...
we just wanted to show that the baby is born ... that's it.
we are enthusiastic about it, not arrogant. Makes sense?
Francois
I am pretty anxious to see what it can do myself. I can understand the many frustrations from both sides of the fence. People have heard about it for so long with nothing to hold on to, and Intel is very enthusiastic about their work but cannot yet run with the big boys because they are still crawling. Did I get that right? Well anyway, I understand what you mean and I am hoping that sometime in the near future we have a little more to hold on to and talk about. :up:
Just look at our track record for the last 4 years ... and you'll get a taste of the sauce we are preparing.
Francois
(Sorry, can't say more)
Francois, would you make some comments on this, please?
What does that mean exactly? The LRB prototype shown in the demo isn't the most powerful version Intel currently has, or it will get better in the future?
Quote:
Originally Posted by AnandTech
that picture is very misleading...
here, this is much better:
Larrabee
http://img41.imageshack.us/img41/5830/lrb.png
GT200
http://img15.imageshack.us/img15/2857/gt200.png
RV770
http://img39.imageshack.us/img39/8308/rv790.png
somebody correct me if i'm wrong, but this is how i understood it from the siggraph papers (rough peak-flops math in the sketch below):
LRB = 32+ "16-way" processors, 2 flops/clock
RV770 = 10 "16-way" processors, 10 flops/clock
GT200 = 30 "8-way" processors, 3 flops/clock
RV870 = 20 "16-way" processors, 10 flops/clock
GT300 = 60(?) "8-way"(?) processors, 6 flops/clock
EDIT: corrected the info
http://graphics.stanford.edu/~kayvon...g/diagrams.pdf
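For anyone who wants to sanity-check those per-core figures, here is a minimal Python sketch of the usual peak-throughput formula (units × SIMD width × flops per lane per clock × clock). The clocks are assumptions on my part: shipping shader/core clocks for the GPUs and a purely hypothetical 2.0 GHz guess for Larrabee, which Intel never confirmed.
```python
# Rough single-precision peak-GFLOPS math for the per-core figures above.
# All clocks are assumed reference values; the 2.0 GHz Larrabee clock is a guess.

def peak_gflops(units, simd_width, flops_per_lane_per_clock, clock_ghz):
    """Theoretical peak = units * SIMD width * flops per lane per clock * clock (GHz)."""
    return units * simd_width * flops_per_lane_per_clock * clock_ghz

chips = {
    # name: (units, SIMD width, flops/lane/clock, assumed clock in GHz)
    "LRB (32 cores, hypothetical 2.0 GHz)": (32, 16, 2, 2.0),
    "RV770 (HD 4870, 750 MHz)":             (10, 16, 10, 0.75),
    "GT200 (GTX 285, 1476 MHz shader)":     (30, 8, 3, 1.476),
    "RV870 (HD 5870, 850 MHz)":             (20, 16, 10, 0.85),
}

for name, args in chips.items():
    print(f"{name}: {peak_gflops(*args):.0f} GFLOPS")

# Prints roughly: LRB 2048, RV770 1200, GT200 1063, RV870 2720 GFLOPS
```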
There are many more parameters to consider, like the speed of branching, the speed of loading aligned and unaligned data ... you are back to the instructions-per-clock race ... hehehehe ... :yepp:
DX11 and OpenCL will open the door to this again, IPC is the key ... actually IPC per watt.
check it out here: http://www.realworldtech.com/page.cf...WT090909050230
you'll figure out that Silverthorne (Atom) is as efficient as the RV770, and that is without the 512-bit execution units of Lrb, and without its texture sampler...
http://www.realworldtech.com/include...ficiency-1.png
Engineers understand where this is going; I wish you guys would put your fanboy hate in your pocket and look at the technology itself.
Being able to compensate for the overhead of x86 was the challenge, and this graph shows that it is done.
now, the next challenge is to make many of those x86 cores work together ...
Then it comes down to who is able to make the large dies, and we all know who has the best fabs on the planet. :)
May the Core be with you!
Francois
ha! i found the same mistake and edited my post at 12:30 while you posted at 12:31 ^^ hehehe
thanks for the heads-up tho! :toast:
i made a bunch of mistakes 'cause i only looked at the pics and didn't read the actual presentation of the guy who made them :D
i think i got it right now...
so basically:
if intel wants to beat rv870 in GFLOPS they will need
32 cores at more than 2.70 GHz (:eek: )
or
40 cores at more than 2.10 GHz
or
48 cores at more than 1.77 GHz
2.7 GHz is out of the question i think... especially within a 250 W TDP envelope...
so intel needs at least 48 cores to beat them in GFLOPS (see the quick math sketch below), something they seem to be putting a lot of effort into...
gt300 will be out by the time they launch though, so they will probably need 48 cores or even more to beat THAT... probably more, because nvidia is also focusing a lot on GFLOPS and will probably improve that in gt300 over gt200, so it'll probably have more flops per cycle...
i really don't think intel can beat ati and nvidia in GFLOPS... not in 45nm... i'm pretty sure they need 32nm not only for power but also for die size...
but overall... i don't get what the hype about GFLOPS is?
that's not a very good benchmark to compare performance...
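A quick way to check those core-count/clock numbers: the sketch below assumes RV870 peaks at about 2720 GFLOPS (20 SIMDs × 16 units × 10 flops/clock × 0.85 GHz) and that one Larrabee core does 16 lanes × 2 flops per clock. Both are assumptions drawn from the public SIGGRAPH material, not anything Intel has confirmed.
```python
# Back-of-the-envelope check of the "how many cores at what clock" numbers above.
# Target and per-core throughput are assumptions, not official figures.

TARGET_GFLOPS = 20 * 16 * 10 * 0.85      # assumed RV870 (HD 5870) peak: 2720 GFLOPS
FLOPS_PER_CORE_PER_CLOCK = 16 * 2        # assumed Larrabee core: 16-wide SIMD, mul+add per lane

for cores in (32, 40, 48, 64):
    clock_ghz = TARGET_GFLOPS / (cores * FLOPS_PER_CORE_PER_CLOCK)
    print(f"{cores} cores need > {clock_ghz:.2f} GHz to match ~{TARGET_GFLOPS:.0f} GFLOPS")

# Prints roughly: 32 cores -> 2.66 GHz, 40 -> 2.13 GHz, 48 -> 1.77 GHz, 64 -> 1.33 GHz
```
Of course this is peak-throughput arithmetic only; as the rest of the thread points out, branching, memory behavior and drivers decide real game performance.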
francois, but why is it comparing die size to DP flops?
that makes no sense... shouldn't it be transistors, or die size per manufacturing node, or something like that?
interesting graph though... thanks :toast:
I wish it were that simple ... :) because we can make those dies ;-)
you consider the efficiency of NV, ATI and Lrb cores to be equal ... that is a mistake. The branching of a Lrb core is sooo much faster than what a stream engine can do, and loading unaligned data is infinitely faster than on a stream engine, because only Lrb can do it at all ... so, all the data in a 20th-century GPU is aligned, increasing its memory footprint by adding "zero" bubbles into the memory blocks ... (a small illustration of that padding effect follows this post)
The problem is fortunately not as simple as your simplistic maths.
I am not even talking about the interconnect speed; you have probably understood by now that Intel has a "very good" memory controller ... fast interconnect (look at the Xeon scaling on SPECfp_rate).
we need to tune the thing; at compute it is already really nice, now the challenge is to make it a good GPU ... :)
We have a lot of new talented people, let's see ...
For competitive reasons, I can't give away a lot of information, I am very sorry for this, but definitely, your maths does not work.
Francois
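On the alignment point above, here is a tiny, purely illustrative sketch of how forced alignment pads a layout and inflates the memory footprint. It uses CPU-side struct packing as an analogy only, and says nothing about how any of these GPUs actually arrange their memory.
```python
# Analogy for "zero bubbles": an aligned layout of a 1-byte field followed by an
# 8-byte field carries padding that a packed layout does not.
import struct

padded = struct.calcsize("@cq")  # native alignment: char + padding + int64 (typically 16 bytes)
packed = struct.calcsize("=cq")  # no alignment padding: 1 + 8 = 9 bytes

print(f"aligned layout : {padded} bytes")
print(f"packed layout  : {packed} bytes")
print(f"padding        : {padded - packed} bytes per record")
```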
based on that chart, Atom may be as efficient as the 870 in flops/watt, but takes 6-7x the chip size to do so? that turns into cost, and perf/$ will get hit hard.
OK now I'm struggling to wrap my head around all this :wth:
Sounds like a lot of marketing speak to me. :P: And no, just because Intel has executed well over the last 3 years doesn't earn them the right to the benefit of the doubt AFAIC. There's no such thing. And they haven't executed to their roadmap perfectly for that matter either. Tick-tock has slipped some, and LRB has had huge delays. If you want to go down that road though, let's see how this thing competes against Fusion, as it will probably surface within a reasonably close timeframe.
intel can do a BS theoretical measurement too. it is very unlikely for an rv770 to be under full load with the way they measure it.
it is an odd chart, but i think silverthorne's tdp is BS. it would have been nicer to see SP too, because the SP:DP ratios are very different for each architecture.
Quote:
francois, but why is it comparing die size to DP flops?
that makes no sense... shouldn't it be transistors, or die size per manufacturing node, or something like that?
interesting graph though... thanks :toast:
and those pics aren't misleading. the nvidia SP is simplified more than the others though.
I don't follow the personalities involved like a lot of you do, but doesn't it seem that every time one company owns the other one like an old mule, the guy on top gets *ocky and forgets that the game isn't over yet? AMD did it when they were kicking Intel's ass and to a casual observer such as moi, it seems like now it's Intel's turn.
I'm sure neither of them thought that they were being over confident, but then, that was probably the biggest part of the problem.
Just my somewhat irrelevant observation. :shrug:
I think that what Mr Francois is saying is that it is way above what we can even speculate. :eek: I for one am glad that he even posts in here, and people shouldn't bash so hard. He is gracing us with his presence.... :up:
the data is public, you can redo the math for Atom :).
I checked what David K did, and it matches my expectations.
prove it wrong if you think it is not right; it is OK to disagree, but if you say it is wrong, you have to prove it.
(this is 3rd party data, not intel data ...)
Francois
who's bashing dr who?
It is not bashing, I am trying to explain how to have an engineering discussion.
the rules are simple:
1) You cannot say it is wrong if you don't have a demonstration that it is wrong.
2) A demonstration of a concept needs to be backed up by experiments or by the use of experimental results.
3) It is OK to have an opinion, but it cannot be used as proof.
4) Carnaut mathematics apply to arguments, except in the case of reciprocity ... :rofl:
5) Admitting that sometimes a more qualified or better positioned person can override your arguments is a smart move... (when a CPU architect explains things to you, it is pretty hard to know better than him ..)
It is always funny when somebody tries to teach me the performance of an Intel product. I am lucky enough to be the visible part; there is an army of people making sure I am accurate, from real hackers to PhD-style guys.
Those guys are hardcore!
Give credit to the new Intel: since Conroe, we use our very secret advanced maths, and it looks like we do a good job at figuring out performance ... why do you think we would not be among the top guys at doing this?
Thanks for understanding that we are not beginners, we have amazing projection systems, we are adapting them to GPGPU, and it is going to be awesome when the whole cathedral is finished ... As I keep asking, let's have a real engineering discussion, that is what is interesting.
Get the direction I would like this discussion to take?
Francois
Larrabee is a great concept, good for devs & great for intel. However, I don't see it debuting with good performance.
I'm having the most trouble figuring out how LRB will be competitive price-wise. I believe that Intel will get the performance they are aiming for, but how will it be able to compete on $$$/perf with NV and ATI?
Dr. Who, I'm sure you realize why there are so many nay-sayers...
You see, the graphics war, to many of us, is far more interesting than the cpu war has ever been. The cpu war is pretty cut and dried for most of our needs: we either go with one company for budget or the other for performance, and if we're going to game on the system we generally don't have to worry much about which brand of cpu we buy right now, which is why Intel wants into the gpu war in the first place. With GPUs, on the other hand, things change.
One card can absolutely MURDER in every game, then a new game comes out and the tide completely changes, or a driver comes out and everything completely changes. As such, when it comes to gpus everyone is ALWAYS on a "put up or shut up" kick. Reason being? Every time in the graphics market that we've heard huge things about a design for an extended period before release, it's failed. Remember the R600? Remember the FX 5800? Yeah...
Now you see, we've been hearing this and that about LRB for quite a while... I seem to recall hearing it was coming soon when I bought my old 8800GTX. Here we are, 3 years later, and the only thing we've seen is a limited ray-trace demo of poor quality that ran pretty slow, where the camera never moved a notch. You really can't blame people for being skeptical at this point, because frankly this isn't the same situation as with other companies. People are saying NVIDIA has nothing because the 5870 launched less than a week ago, and Intel has had us waiting for years...
Yes, Intel has proven themselves everywhere but one place... the gpu market, which is exactly where they're trying to go. Intel doesn't exactly have anything at all to show for themselves when it comes to said market, as their IGPs are royally under-powered and actually cited by major game developers as part of the reason for the collapse of the PC gaming industry. Not a good start, especially for the enthusiast market. Just notice that these guys want numbers, and they want them yesterday; most don't understand that revealing performance early is SUICIDE, as your competition then knows exactly what they're preparing against. Like I pointed out earlier, they're grilling NVIDIA (and NVIDIA has had a VERY solid track record as of late when it comes to their new gpu architectures) just because ATI launched early and caught them off guard. Don't think they're going to give Intel any special treatment just because of Conroe... they'll instead remind you how long it took for Intel to get to Conroe.
TL;DR version - Don't announce a GPU over 3 years in advance, delay it, and then show a demo that does nothing for anyone watching, runs slow on top of that, and never moves the camera. You are liable to get grilled by the enthusiast community for doing so.
fyi, intel told charlie that the lrb part in the demo was still Ax silicon, not the fixed Bx silicon that should come out soon.
and they told him that the performance of this sample was less than 10% of what they are hoping for with the final retail parts.
that would mean over 110 fps at 720p and 60 fps at 1080p... sounds great, but who knows if what they are "hoping" for is realistically possible...
and even if it is, those fps numbers sound great, but that's for this custom demo... who knows what the fps will drop to once we are talking about complex geometry and textures...
intel really shouldn't have shown the lrb live demo...
you don't demo a product that is so crippled it's only running at 8% of its predicted performance...
I've said it before and I'll say it again: LRB has no chance of getting a foothold in the PC GPU market unless Intel can produce a card that, around when it's released, plays games as well as ATI's or NVIDIA's.
It doesn't matter if LRB can produce an image at 100 FPS using raytracing, as no game maker would make a game for it.
The only way I can see LRB forcing its way into the market is for it to get into the console market, either in Intel's own console or with Intel paying for LRB to be in someone else's.
10 years from now, I've no doubt LRB-type GPUs will be around. In fact, I suspect more players could have entered the market by then (ARM for starters). Until then, it's great technology, but it suffers from the chicken-and-egg problem: no one will create a game until there are LRBs in the wild, yet no one will buy one until there are games it will run on.
Even 480p ray traced would look much better than 1080p on current ATI/NV hardware. The question really is: can Intel deliver a 60 FPS in-game experience at ANY decent resolution with LRB?
If it can, I'm in.
my opinion is that lrb will be a total fiasco at the start. actually i believe this as a fact (my fact of course). there is no way the first lrb generations will have success.
but everyone's success is not the same all the time. intel has big power in the market, and intel can easily dominate the onboard market with lrb by using that power. for the performance parts, intel has to move a lot of stones in the market to put lrb in the performance league. this will be a long adventure for intel. my opinion is that at some point they will give up on this.
It's simple...
Think Atom chip with modularity... etc. Add Larrabee on-die with 6 Atoms and what do you get? Motherboards may have two processors soon, the second being a co-processor such as Larrabee. AMD is headed this way also.
Dr Who just isn't allowed to tie it all together for you, as if this is some secret! With the advent of the Hydra engine (Intel-backed), you can easily have your co-processor not integrated, but on a card. Anyone notice the priority of the Hydra chip?
It's now the north bridge.. :up:
add it all up over the next 2 years...
Makes much more sense. :up:
I'll take that with a mountain of salt, seeing as i can't remember the last time any numbers that charlie spewed were even close to accurate... i think his site should be renamed almostneveraccurate.com
what i can see happening is that the cards will be marketed as coprocessors and industrial cards; I've got a feeling they'll pull their mainstream focus. i'm sure two or three lrb cards would make an epic render farm, especially for firms like pixar that still have a software-based rendering engine.
yeah, that's really a good point you brought up there... yes, raytraced doesn't necessarily look better; the demo they have used so far looks like the far cry island after it has been bombed with agent orange or nuked :D i count 7 trees on the whole island, no other plants or bushes or objects of any kind...
and the previous raytracing demos from intel weren't exactly that impressive either... what intel needs to make raytracing popular is at least one killer application that uses it, and uses it heavily in a way that really changes the gameplay, not bolted-on implementations like all the physx nonsense we are seeing lately...
epic games seems to be very interested in lrb, they are really interested in having more flexibility to program their engine... but they have an engine cycle of 3 years or more, and this is more than just a new engine, this is really taking things to another level in regards to flexibility... so i expect we won't see a raytracing-supporting engine from them or anybody else until 2011, maybe even later...
Come on, Saaya, we all know this demo is no good. :welcome:
I meant raytraced crysis@480p vs current crysis@ whatever with _ALL_ effects, rays, textures exactly the same :).
While adding some textures and/or a little more vivid color could make those demos more interesting for an end user, I guess that was not Intel's aim. It was more a technology demo for potential developers. Intel is not yet at the stage of creating TV advertisements for LRB.
XGI Volari all over again I predict. Nuff said.
Larrabee: a future fiasco.
i have been thinking about larrabee lately. i don't think larrabee 1 is going to be impressive. software scheduling is a lot better at parallelism, so i think their strategy is to get practice making gpus for the future. you can't be very competitive with your first gpu even with amazingly smart people. even Huang recognized larrabee (in raytracing/gpgpu) as a threat, and he generally looks down upon his rival companies. pixar has already signed up for some, and f@h already has plans for larrabee. a general trend in the graphics pipeline is less fixed function. well, texture filtering maybe not. my philosophy on larrabee is "you gotta learn to crawl before you walk"
Not sure why this thread was moved. Anyway....
http://news.cnet.com/8301-13924_3-10409715-64.html
Fiasco.
Quote:
"Larrabee silicon and software development are behind where we hoped to be at this point in the project," Intel spokesman Nick Knupffer said Friday. "As a result, our first Larrabee product will not be launched as a standalone discrete graphics product," he said.
Lame. :(
Gotta blame Fermi I guess.
AMD and nVidia rallied today on news of Larrabee delay.
Quote:
SAN JOSE, Calif. — Graphics processor makers Advanced Micro Devices and Nvidia Corp. may have gotten long- as well as short-term reprieves from Intel Corp.'s decision not to release the first version of its Larrabee chip.
Intel said late last week it would not release its first implementation of Larrabee, a multi-core x86 processor geared for consumer graphics and technical computing. The chip targeted the high-end graphics markets of AMD and Nvidia beyond the reach of Intel's existing low-performance integrated graphics cores.
Now Intel plans to release the first chip as part of a software developer's kit to seed the market for next-generation Larrabee chips. Intel will announce in 2010 its plans for the development platform as well as future Larrabee chips, said a company spokesman.
Stock prices for AMD and Nvidia shot up seven and 14 percent respectively on the news, according to reports from Reuters and others. "If you are at Nvidia or AMD you can breathe a sigh of relief," said Nathan Brookwood, principal of Insight64 (Saratoga, Calif.).
Longer term AMD and Nvidia may gain a broader strategic advantage.
Part of Intel's rationale for running graphics on an array of x86 cores was that it would make programming easier than developing for the even larger arrays of proprietary graphics cores used by AMD and Nvidia. However, that advantage appears to be narrowing.
Microsoft's Windows 7 includes a DirectCompute applications programming interface to run big parallel jobs on traditional graphics processors. Meanwhile, the OpenCL API developed by the Khronos group and backed by a broad range of companies including Apple Inc. is also gaining traction.
"Over the next two years while Intel is re-architecting Larrabee, AMD and Nvidia will ship hundreds of millions of GPUs capable of parallel processing with OpenCL and DirectCompute," said Brookwood. "More and more we'll see application developers use those tools, and they won't even know whether the work is being done on an x86 or GPU core," he added.
This isn't big enough to be a "Fiasco", LOL! I'd expect that from flippinwaffles etc., but what?
http://news.softpedia.com/newsImage/...Q2-2008-2.png/
http://news.softpedia.com/news/Low-D...08-92971.shtml
With some guy talking about AMD and nVidia shipping millions of GPUs, I'm wondering WTF he is talking about, cell phones? Surely he's not talking about the discrete video cards that LRB was meant for.
Oh, and Intel made about 53% of the 9-to-10-billion-dollar graphics market.
Quote:
Originally Posted by Cnet
http://news.cnet.com/8301-13924_3-10...ol;mlt_related
By not fighting and trying to be civil, they screwed up. You can't be civil to greedy @$$holes:ROTF: