If anyone feels like doing a lil' reading:
Intel court filing Vs. Nvidia:
http://www.itexaminer.com/images/Int...0Complaint.pdf
Well, I've just read through it, and everything useful for determining anything about this court battle has been redacted. :\ (i.e. the actual CLA and the contested portions)
I think you shouldn't look at everything from Intel's perspective only...
IMO both Intel and Nvidia are trying to misuse their market dominance to push their will onto people, but in this particular case Intel is wrong.
But why should Nvidia pay Intel? For what? What technology does Nvidia get from Intel? Essentially they are licensing their SLI driver, a piece of software...
Does any software vendor pay Intel to run on their CPU/chipset?
I don't see why Nvidia should be paying Intel for being allowed to run their driver on an Intel-based platform. If Nvidia really HAD to pay Intel for this, then why doesn't Intel just lock SLI out of their chipsets?
First of all, Intel would get angry letters (and letter bombs) from customers all around the world, then they'd get sued for anticompetitive behaviour and misusing their chipset monopoly, and then they'd be abandoned in the high end, since Nvidia SLI would be AMD-only and shift platform performance towards AMD in a very notable fashion ;)
Laptops
Entry-level IGP
High-end desktop
But most of all it's a long-term strategic thing... the platform wars have begun, and AMD, Intel and Nvidia all want to sell the whole PC platform, basically an entire PC made entirely by Intel, Nvidia or AMD. That's their future plan...
Nvidia will probably have to team up with VIA, while AMD and Intel have all they need. Nvidia knows that VIA isn't healthy and will limit them, so of course Jensen is upset that he can't use Intel CPUs and license their bus to build an Nvidia platform with an Intel CPU...
That's an overstatement... but if you do a bit of research you might stumble across several cancelled 790i boards that nobody ever talked about:
ASUS ROG Pinot Noir 790i - prototype was shown at CeBIT last year
ASUS 790i reworked - never saw a prototype of this
DFI 790i - never saw a prototype
Gigabyte 790i - never saw a prototype of this either
I know there is a prevailing sentiment here that IP and patents are wrong. That leads folks to think license fees are repugnant and/or evil. The business world wouldn't be worth a flip if that were the case. There are thousands of R&D companies making our lives better every hour. These folks can't perform their magic for free :rolleyes:
I'm NOT just on Intel's side here. I argued for AMD and thought they were rightly awarded their case vs. Intel in the whole x86 fight from the early 90s, and I was for Intergraph long before they won vs. Intel. I said AMD would have to pay them too.
My view is that everyone should either:
A. Pay each other for their respective tech/IP.
Or
B. Agree to mutually use each other's IP free of charge, as AMD and Intel currently do.
Not
C. Everyone pays nVidia for their IP while nVidia gets everyone else's IP for free.
Sorry Saaya, but you and nVidia are wrong here. So of course I agree with Intel on this, simply because they made the agreement with nVidia and nVidia found it hard to break their old habits. nVidia is aggressively greedy and stingy, a bad combo :rofl: If nVidia doesn't pull the crap they pulled, I'm about 85% certain we're not talking about this right now.
Totally agree! Actually, companies should be FORCED to license their IP to others, and the fees should be controlled by an industry body to make sure innovation isn't slowed down and nobody can bully anybody with their market share or money. As soon as IP is not available to everybody, innovation slows down and political, mafia-like relationships emerge, which in the end screw over the end user...
We came up with the idea to use 775 mounting holes on 1366; ASUS copied it. We hold a patent on it, but... what's the point? Innovation drives the market, and suing each other over using (or not using) each other's IP is ridiculous and only drags you into dirty mudfights that hurt everybody, especially the customers, because in the end it's THEM who pay for all this legal BS the big companies are pulling. Shamino came up with an external panel to access the clockgen and voltages while the board was running, without using the board's resources... again ASUS copied it, and Foxconn could sue them, but again, what's the point?
I just don't see how Intel is being treated badly here, since at the moment it's perfectly normal to use the PC infrastructure without having to pay any licenses, and it's not sneaky or greedy of Nvidia to do the same. IF Nvidia had to pay Intel to have SLI on X58, THAT would be Intel abusing their market position, forcing Nvidia to pay or else get kicked out - which is exactly what they are now trying to do with the chipset business. THAT'S wrong. Intel is the one screwing Nvidia and the market here by not wanting anybody else building chipsets! Just let Nvidia build their chipsets, license it to them, and make extra money on it - what's the big deal?
I totally agree that there should actually be a constant flow of licensing between all parties that share a platform, but right now that's not the case, and as such Nvidia is doing what everybody else is doing, and it's not wrong...
My 2 cents...
Hmmm, I might have this wrong, as I only read the complaint PDF this morning.
I think this court hearing is more about NVIDIA making wild, false claims to OEMs, vendors and even customers, claims which allegedly do not line up with the chipset licensing agreement they have with Intel. Intel wants NVIDIA to make a public statement correcting the false claims, and NVIDIA refuses to do this because they believe they are in the right, even after being given the chance and not responding to Intel's formal letters. Intel claims these false statements are hurting them financially now and will hurt even more in the future. The rest of the pieces of the puzzle fall into place from there.
Nvidia does not make a habit of sharing its technologies as the standard, and Intel should behave the same way, or else it's Intel's loss.
NV can stick a small GPU + VRAM on the mobo alongside an Intel chipset. I'm not really sure what NV has to offer there beyond that, except maintaining their own chipset-manufacturing e-peen.
Suppose you want i7 CPU power but don't game - how about an X58 mobo with a 9400 (with 128 or 256 MB of DDR3 sideport memory) connected to the chipset over a PCI-e 2.0 x4 link (a single PCI-e 2.0 lane may be enough)?
A board like that with a working Hybrid Power implementation - one that powers up the GTX 295 in the PCI-e x16 slot when you open a game and switches back to the 9400 when you quit to the desktop - would be a sure sell. But NV doesn't need to design the entire chipset to do that.
The Atom platform may be different - if Intel will only sell Atom together with a chipset that has an IGP, then Nvidia won't be able to easily add their own GPU, since the end customer would have to pay for two GPUs, and that'll kill the profit margin. Even if NV did want to add a non-chipset integrated GPU to the Atom platform, it would need its own memory controller, which would be acceptable on a 15 W mobile CPU but may be a TDP killer on a 1-2 W Atom. Atom is the kind of platform where it really makes sense for the GPU to borrow the CPU's memory controller; for all other platforms it seems pretty optional to me.
Interesting post about the Foxconn IP btw, thanks.
Each company should, and does, have the choice to charge or not for the tech they invested in.
I think you got it mixed up. I said: "If Intel has to pay nVidia for the Cookie and NF200, when clearly neither is needed, then nVidia should have to pay Intel as well." What's going on between your company and ASUS is a prime example, and it's NOTHING like what's going on between Intel and nVidia.
I disagree 100% about what creates innovation. Innovation starts before the first motherboard is shipped, in this case. Everybody, with the exception of nVidia and many times VIA, pays fees and acts in accordance with established business practices. Getting someone to follow the LAW and the rules is not bullying, BTW. If I invest money to come up with a better mousetrap, damned right I want to be paid for it. Then someone else spends money to try and outdo me, and the beat goes on :up: Folks not paying for my or your ideas is what slows down innovation - just the opposite of what you're saying. Why? Because those creating the innovation will go broke if they don't get paid.
So you think Intel shouldn't have paid Intergraph? Guys, you can't have it both ways. We ALL KNOW SLI doesn't need a Cookie or NF200, and nVidia shouldn't be charging anyone for something we don't need just to run two of their cards at one time. It is the essence of sleaze and greed, and we're talking about Intel being wrong?
I'll remind folks with idiotic talk of Intel buying off courts and so on: Intel lost to AMD, Intergraph (twice) and others :rolleyes:
NO to Intel monopolizing the i7 Platform!
What do you think SLI is? A monopoly! Nvidia just wants to be paid three times for it. They wanted to base the entire existence of their chipsets on it. Only they wanted to charge people 3x over: chipset, card #1, card #2, and eventually card #3. They negotiated a cost to build chipsets for the Intel platform and were welcomed for the innovation, but they did not (yet) have the skill to get them durable and bug-free, especially when pushed beyond their limits. But they could have gained that skill over time.
To offer some history on this, folks will have to go back and research what Intel charged Nvidia to build for the 478 and 775 platforms. I'd bet it was a fairly low amount, but I stand to be corrected if that's not true. Along comes SLI and, in effect, it's like the days when Intel's math processor was a separate add-on chip. You want more performance? You sell someone a second GPU, which gives that to them. And you make more profit. Everyone thinks the profit is all in the high-end chips, but it's not. It's in the midrange. So having customers buy 2x or 3x midrange cards made Nvidia MORE money than if they had sold one high-end card.
On high-profit-margin cards (Quadros), SLI has worked everywhere for a very long time. There is no issue with chipsets or whatnot. So the whole "driver" cost argument is kind of moot. The driver is being developed, and its cost is paid for. Which market gets to "have" it is the only question.
All Intel asked for is a retraction of comments, and a public admission that Nvidia was not suitably licensed. Why would Nvidia withhold a license on i7? For two reasons, and I'd bet both are equally weighty. One is obviously the SLI fighting that went on. SLI should not be used as an Nvidia tool for selling chipsets, because no excessive or restrictive measure was put on Nvidia regarding which CPUs they could develop chipsets for (775, 478). Intel didn't restrict which "market" (server, workstation, mainstream) Nvidia wanted to target, and didn't charge them extra for any given market target. In fact, if we go way back, Intel could simply have denied Nvidia (like Via) the right to make chipsets at all, and then there wouldn't be any SLI (oh wait, yes there would be, but it would work on Intel chipsets - go figure). Intel simply asked that Nvidia treat Intel the same way that Nvidia had been treated: "Here is your license. Now we'll compete on features and abilities in the marketplace."
Nvidia made two errors. They withheld a feature that propped up their chipset business, which I'll get more into below, and they made some very lousy chipsets which harmed the Intel end user experience, something I suspect Intel has been fuming about for years (775 socket era).
A lot of folks have said that SLI is Nvidia's baby and Intel should pay for it. Do we pay the principal founders for SATA, USB, PCI-e, PCI? Yes you do, but you do it via a specific route. The industry generally gets together to hammer out the standard, and then folks make money selling devices that use it. And depending on how well each company can sell parts that use the feature, or optimize the feature's speed, latency and other aspects, a given company will make more or less money based on the quality of its parts and how well it puts the standard to use in them. These "understood" practices break down from time to time. And Intel doesn't play fair all the time. These moments often result in legal proceedings to work out the details, which is often a kinder wording for "deciding who gets what portion of the profits".
In this case, Nvidia has worked very hard to keep SLI for itself, except on the Quadro series. Moreover, as we saw with Skulltrail, they even decide "when" to enable SLI on even "SLI enabled" chipsets. Intel indicated that there should either be a reasonable fee for enabling SLI on other chipsets (emphasis on reasonable), or Nvidia would feel the same sting by getting charged for all the other industry infrastructure that they were making free use of, in making profits with SLI. Now that time has come. The sockets are changing, SATA 3 and USB 3 are coming, and everyone is running around making sure they line up their agreements and fees (where needed) to be ready for the wave of new products. But not Nvidia. During the 775 era Nvidia degraded the user experience, and didn't play fair regarding what to charge for and what to make proprietary. You don't see a wealth of companies standing up for Nvidia, do you? Nvidia was urged, many times, over the past several years, to reconsider its position. It has the ability to get better at building and eventually make some very good chipsets, with some features not seen before on chipsets. But given how they feel they wish to play, they are going to get cut off.
This is how the next few years will play out:
- GPUs will become powerful enough that they will become sideline or "included on the main silicon" items. Contrary to what Nvidia proclaims, it's actually their silicon that is going the way of the sound card, not the other way around.
- Nvidia's chipset business will go the way of Via's. Cut off (due to their own misbehavior), they will not be allowed on any i7 or future platform. They will continue to make chipsets for AMD, and this is where they will have to flourish or wither and die.
- The next generation of GPUs (G300, ATI 8xx series) will be more than capable of delivering enough power on a single card. At that point, even if quad-resolution displays come about, the power will be there to drive them.
- Crossfire works, but becomes relegated to benching. SLI is the same.
- AMD/ATI meanwhile will join Intel in eventually offering their "GPU" silicon as an on-die feature. Remember the days when math chips were separate? The same thing happens with GPU capability. AMD and Intel get to play, and Nvidia plays in other pastures.
- The first step in seeing this come about is a direct PCI-e link to the GPU right from the CPU silicon (Core i5 series).
- The second step is that Nvidia will not be making Core i7 or Core i5 chipsets, probably ever, though there might be room to negotiate.
- Nvidia meanwhile will branch to other areas and abilities, much like Creative.
Supercomputers started off as single-core solutions, but now they have thousands of cores, often mixtures of CPUs from different manufacturers - even Nvidia GPUs are turning up in them.
The single-socket desktop became parallel decades later - hyperthreaded single cores, duals, hyperthreaded duals, quads, hyperthreaded quads, and soon we'll have hyperthreaded hex-cores.
So I think multi-GPU will stay, and expand. A multi-GPU setup is like a multi-processor server.
Going again by the principle that history repeats: in the past, if you wanted multiple CPU cores you needed to buy a multi-socket motherboard; now you can get multi-core CPUs. In the past, if you wanted multiple GPUs you needed a multi-PCI-e motherboard, but in the future multi-GPU single cards will become more popular - ATI have already made multi-GPU their high-end card policy. Quad SLI/Xfire on a single PCB? It's coming. Want to know an upcoming gaming technology that'll get everybody buying multiple cards? Stereoscopic vision. We'll have each card working on a different picture instead of a single picture being split between two cards, so scaling will be perfect. In cases where two GPUs already scale well when working on a single picture, stereoscopic 3D will make quad-GPU setups interesting instead of a micro-stutter-fest.
But Nvidia have taken on AMD and Intel instead of just ATI by giving their GPUs a CPU feature - programmability. And it wasn't a half-assed effort: sub-$100 NV cards beat an overclocked Core 2 Quad at folding. NV GPU owners have a seriously powerful piece of silicon that sits unused when they're not gaming, but in theory it could accelerate any desktop app, and make the CPU obsolete in many.
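To make the "programmability" point concrete, here's a minimal, hypothetical sketch of CUDA code - just the textbook SAXPY example, nothing to do with NV's actual folding client - where thousands of GPU threads each crunch one element of a big array in parallel, the kind of work a quad-core CPU has to grind through a handful of elements at a time:
Code:
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical data-parallel kernel: y = a*x + y, one GPU thread per element.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                  // ~1 million elements
    const size_t bytes = n * sizeof(float);

    // Host-side data.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device-side copies.
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every element.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f\n", hy[0]);         // expect 5.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
The point isn't this particular kernel - it's that the same silicon that draws frames can run arbitrary arithmetic like this, compiled with nvcc, on any CUDA-capable card.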
I'm not sure how this will pan out for Nvidia; they may not have enough money to invest to get CUDA into the market. They need to lower the cost of entry for software companies, i.e. they need to sponsor CUDA development for the really big companies, and that isn't cheap. Intel is going to rain on their parade pretty hard with Larrabee, which will already be compatible with x86, so today's programmers will find it easier to work with. Can you imagine how much NV would like to have the world's biggest software company dedicating most of its time to programming CUDA? Now you see NV's problem - Microsoft dedicates most of its time to x86. Larrabee may well accelerate Microsoft Windows, and any program that runs on Windows, out of the box. If you're an occasional gamer, or not into graphically stressful first-person shooters, do you buy a $200 NV card, or a $200 Intel card with the gfx power of a $100 NV card plus the ability to speed up any multi-threaded Windows app, and multitasking in general?
In the longer run I agree with you that GPUs and CPUs will become the same thing. A GPU with CUDA is a great maths processor, and like the maths processor it will end up on-die. There's so much waste in having the GPU and CPU separate - two memory controllers, two sets of memory, two programming languages...
If I were Jen-Hsun Huang, there's one thing I would do today - I'd get a few hundred motherboards specially made, and find elaborate and entertaining ways of giving them away:
A GTX 285 core (maybe two, so it'd be the most powerful single-PCB card in existence) where the CPU should be. GDDR5 where the RAM should be. A slot that resembles a PCI-e x16 slot, into which you can fit one of two small add-in cards: one has an LGA1366 socket + 3 DDR3 slots on it, the other an AM3 socket + 2 or 4 DDR3 slots. The GTX 285 should have a TRU120E on it, and the CPU should have a thin GPU-style cooler that cools the CPU, chipsets and VRM.
An essential marketing stunt :yepp:
Less as in the percentage of products with CF or SLI installed on them vs. chipsets. Simply put, there are way the hell more chipsets than CF or SLI setups. nVidia sells two cards for folks to USE on Intel boards; paying extra for Cookies and/or NF200 is BOGUS IMHO. Those extra costs aren't consumer-friendly either :rolleyes:
If CF and SLI demand a fee, tax or charge, and Intel pays it, then nVidia should pay for Intel's tech too =P (repeated too many times). Neither company deserves any kind of special treatment. AMD and Intel avoided this by following that same old IP-type agreement that's been around since World War 2. Why are you guys making this simple case of nVidia's greed into something more than it is?
Multi-GPU is nothing like multi-CPU. SLI/CF is more like two clustered servers, if you like. You can't draw parallels between CPUs and GPUs the way you do. They are worlds apart.
Personally I think multi-GPU is only here for a limited time, until we do it all in the CPU. Sooner or later both multi- and single-GPU setups will fade away.
A misunderstanding - I don't mean on a micro-architectural level.
I'm referring to the evolution of the platform as it becomes more parallel...
Desktop/server: single socket & single core, then multi-socket and single core, then multi-socket and multi-core.
SLI/CrossFire: single AGP and single GPU, then multi-PCI-e and single GPU, then multi-PCI-e and multi-GPU.
...identical, in a sense.
Edit: forgot the Voodoo - multi-PCI and single GPU...
This is all kind of ironic, really :D nVidia killed the legend that is 3dfx (look around - that hardware was so advanced for its time that, among those still devoted to 3dfx, those GPUs can run the likes of Doom 3, and probably other games I don't know about).
What goes around comes around. Karma is not without a sense of humor.
Limited time, as in at least the next 10+ years.
There's too much heat and voltage involved in pushing a monitor for gaming.
A 4870 X2 generates a lot of heat, and put on a CPU chip it will run way too hot.
This of course assumes performance of the same caliber.
Will the new Westmere even compete against an ATI 9700 Pro????
Guess we will find out.
Could a TRU120E x2 with double the fin depth and 12 heatpipes instead of 6 handle a Phenom II MCMed together with two 4870s?
I'd say... easily.
Donnie, Microsoft doesn't NEED a license key for Windows, and no application NEEDS a license key or serial number either. Does that mean they should pay Intel licensing fees for being able to use their platform?
Anemone, how the h3ll is SLI a monopoly?
Could you please explain this in some more detail?
That's taking the trend of the past few years and flipping it around 180 degrees... could you please explain why the trend should suddenly change?
Quote:
The next generation of GPUs (G300, ATI 8xx series) will be more than capable of delivering enough power on a single card. At that point, even if quad-resolution displays come about, the power will be there to drive them.
What's driving PC upgrades and innovation? It's almost entirely gaming, plus some internet and multimedia. Now what's better for this: a slow CPU and fast GPU, or a fast CPU and slow GPU? Then think again about which chip is going to be integrated into what... ;)
Jensen is right about that part, and why do you think Intel is going for GPUs? Just for fun? They know it's more important than CPUs, and it'll be even more important in the future, as GPU processing demand is growing way faster than CPU processing demand.
Probably, but why would you do that?
You want to cool several chips as well as you can, so you can clock them high and max them out performance-wise. So what uber-genius would come up with the idea of putting it all on a 40x40 mm package? :P
That's what I've been saying for a long time: the whole point of Fusion doesn't add up... it makes sense for the mainstream and IGP, but people always thought integrating GPUs would mean better performance... well, no...
Both CPUs and GPUs are TDP limited - that's the whole point of overclocking: removing the TDP limit with better cooling and then being able to clock higher, or bump voltages to clock higher still.
So we have two parts that are TDP limited; how is putting them next to each other, or merging them, going to improve performance? :D
What benefits do you get from putting them on the same package or the same piece of silicon?
More bandwidth...
Was there a notable boost from PCIe 1.1 to 2.0?
Nope... so isn't it obvious that the bandwidth between CPU and GPU isn't a limiting factor?
The CPU's memory bandwidth is a joke compared to a GPU's, so again, that's limiting, not speeding things up! Unless you give them separate memory interfaces, which means loads of pins and defeats the purpose of putting them on the same package. Or you let the CPU use the GPU memory, but while it has massive bandwidth, the latency is terrible, and the CPU would suffer rather than benefit from that. So AGAIN, it doesn't make sense...
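Some rough numbers to illustrate the gap (ballpark figures quoted from memory, so treat them as approximate): PCIe 1.1 x16 is about 4 GB/s per direction and PCIe 2.0 x16 about 8 GB/s, a Core i7 with triple-channel DDR3-1066 tops out around 25 GB/s, while a single HD 4870's 256-bit GDDR5 does roughly 115 GB/s. So the link between CPU and GPU isn't the bottleneck, and making a big GPU share the CPU's memory system would starve it.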
The only good point is saving money if you put two mainstream or entry-level parts on one package rather than two, or on the same silicon die. And to everybody who thinks this is a revolution: AMD Geode GX CPUs have had integrated GPUs for a decade, and they are far from the only CPUs with integrated graphics... it's just uncommon outside the CE segment, that's all...
Currently, X58 boards are getting the SLI cookie for about $5 per board sold.
This means Intel, to compete vs. ASUS, Gigabyte, EVGA, etc., has to pay Nvidia $5 to get the SLI cookie for their boards too.
And Intel then has to get their board approved by nV :rofl::ROTF: for the cookie to be added to the BIOS.
So for Intel to then charge Nvidia back $7.50 per board sold, to approve this new SLI cookie going into every X58 board that uses it, would only be fair IMHO :up:
And during all this, AMD and Intel co-operate just fine, and CrossFire is on X58 boards without lawsuits.