http://www.nvidia.com/object/product...x-480m-us.html
352 CUDA cores
Core: 425 MHz
Shader: 850 MHz
256-bit, 2400 MHz GDDR5
the world's most power draining notebook GPU
Wanted: Lots of jokes about how hot it is and how much power it uses, maybe throw in "can it play crysis?" while you're at it, or a pic of a burning laptop. We just can't get enough of it.
Really.
I think for a notebook that card is insane. Xtreme alright!
Last time I saw info, this "GTX 480M" was supposed to be a slightly modified GTS 250 core...
EDIT:
Correction, it was the GTX 280M. My bad.
Those specs don't make sense.
A 352-shader part should have 44 TMUs for 37.4 GT/s, not 18.7 GT/s (half that). It should also be 1.2 TFLOPS, not 900 GFLOPS, for 850/1700 clocks.
352 shaders at 425 MHz. The GTX 480 is 480 at 700 MHz, and it's around 1.35 TFLOPS. I actually don't know if it's really 900 GFLOPS, it should be lower than that.
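For what it's worth, the conflicting figures above mostly come down to counting conventions. A rough back-of-the-envelope sketch in Python, assuming the ALUs run at the listed 850 MHz shader clock, GF100-style 4 TMUs per 32-core SM, and nothing beyond the rumored specs:
Code:
# Rough spec math for the rumored GTX 480M. Assumptions: CUDA cores run at
# the 850 MHz shader clock; 4 TMUs per 32-core SM, none disabled.
cores        = 352
core_clock   = 425e6   # Hz, graphics clock (TMUs, ROPs)
shader_clock = 850e6   # Hz, ALU clock

# Fermi convention: 2 FLOPs per CUDA core per clock (one FMA)
print(cores * 2 * shader_clock / 1e9, "GFLOPS, FMA counting")       # ~598
# Older GT200-style convention: 3 FLOPs per core per clock (MAD + MUL)
print(cores * 3 * shader_clock / 1e9, "GFLOPS, MAD+MUL counting")   # ~898
# Texture fill rate
tmus = (cores // 32) * 4                                            # 44 TMUs
print(tmus * core_clock / 1e9, "GT/s texel rate")                   # 18.7
With those assumptions, 18.7 GT/s is exactly 44 TMUs at the 425 MHz core clock, and the ~900 GFLOPS figure looks like the old 3-FLOPs-per-core counting rather than a different shader count.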
Does the outlet need to be 220V @ 15 amps for this portable fermunce? If anyone claims it's a LAPtop, they must have a really good flame suit.
I wonder what type of voltage this chip is running?
Under 0.82V.
No the motive behind a laptop featuring this GPU is much more sinister than you think. The thought is that anyone stupid enough to buy such a laptop should not be able to reproduce, which is why such a laptop will put out so much heat that anyone who dares to put it on their lap will be sterilized:devil:.
Hey kids why don't you QUOTE your genius insider about the bad performance this time?
It is really embarrassing for this forum. People jump on and start bashing everything that comes from nVidia and Intel, before the product hits the market, and before any real benchmarks or results.
Get a clue before you jump on judging an unreleased product.
But is it faster than the 5870M, and when can we expect a flagship laptop with this part, such as the ASUS G73?
If it's faster, I don't care how short my battery life is or how hot it gets!
Afaik, the G73 was built with some wiggle room on the GPU, something like 50 W... so it should be able to handle an NV GTX 400M series GPU.
I can't wait, I hope it out performs the ATI offering!
The G73 has a great chassis (no fans underneath, like a Mac!), it would be awesome if we had a choice of GPUs.
Am I the only one that noticed that this fits the specifications for the GTX 465?
Hmm, if the way they cut the units is the same...
0.7x core clock
~0.75x memory clock
So you get 0.8x (max) the perf of a GTX 465, which itself is like a GTX 285/275 or so...
GTX260 performance or so?
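Putting rough numbers on that comparison (the desktop GTX 465 reference clocks below are assumptions from memory, not from the spec page above):
Code:
# Clock ratios vs. a desktop GTX 465 (same 352-core layout). Assumed GTX 465
# reference clocks: 607 MHz core, ~3206 MT/s effective GDDR5.
gtx465_core, gtx465_mem   = 607, 3206
gtx480m_core, gtx480m_mem = 425, 2400

print("core ratio  :", round(gtx480m_core / gtx465_core, 2))  # ~0.70
print("memory ratio:", round(gtx480m_mem / gtx465_mem, 2))    # ~0.75
# Performance rarely scales linearly with clocks, so ~0.7-0.8x of a
# GTX 465 is a ceiling, not a prediction.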
It will definitely be the fastest mobile GPU, thanks in part to the anemic 128-bit memory interface of Juniper (you can't really clock RAM fast in notebooks). But the gap should be smaller than expected: about 15-20% more performance? For double the TDP...
I'm still seeing HP's Envy 15 (if it didn't have so many problems) or 17 as the perfect notebook blend for the majority of gamers (who really game): definitely enough graphics performance, without the heft or ugliness associated with a gaming notebook. These are just gonna sell way more than any Alienware/Clevo, so nVidia really needs a 20-40W lineup on Fermi soon, and hopefully GF104 is (part of) the answer.
But really, congrats to nVidia on making a 530mm2 chip fit with 100W cooling on a design, and actually selling it! ATI would have a hard time doing that for several reasons.
German notebook e-tailer mySN already offers notebooks with the 480M for preorder.
It's about time nVidia streamlined their mobile offerings; they used the same architecture over and over again. I think a castrated Fermi with a lower power draw could be a pretty good mobile GPU.
I'm all for faster mobile gpu's, but Nvidia is gonna have their work cut out for them keeping the heat acceptable.
I'd like to have non-deformed sperm for at least a while longer! :D
LOL, yeah I remember that genius with insider info. He was insisting that the GTX 258 would be the fastest nVidia GPU, then was talking about a broken and unmanufacturable product. That genius doesn't dare to write in this forum any more, but many kids are still repeating his ideas and attitude.
There is no such thing as a hot product, there are only bad cooling solutions in the world of OCers.
Who knows, maybe they need time to fix the drivers, BIOS, yields, heat, power usage, or other stuff. But, and it is a big butt :slapass:, maybe nVidia doesn't feel the need for 512, at least as long as the GTX 480 beats the 5870.
In my opinion, nVidia needs a double GPU to compete with 5970, not a 512 single GPU.
Why should they drop the 512 now? To compete with what, do you think?
^^ you got something constructive to say, or just trying to be funny in the middle of a serious discussion? Grow up.
There is no serious discussion, merely some light speculation.
What is said and happens in this thread will in no way affect anything of value.
Maturity is a three stage process.
1. Being too open and accepting situations that contradict your feelings of valour.
2. Becoming entirely unreasonable and controlling.
3. A mix of 1 and 2.
i.e., grow up yourself and stop being stubborn; merely calling someone's bluff in a comical manner in a thread is an entirely viable response.
That does not show the flexibility a new chip would; this shows that it can be castrated. Cutting those clusters out does not help power consumption, so it's clocked lower and will suck up extra power for no reason.
There is no chance of this beating the 5870M. The clock speed is too low, and that low clock speed will make its texture processing way too slow, close to a mobile G92, but with more geometry, like what an original GTX 260 would have. The 5870M is close to the 5770, and since the desktop 465 is slower than the desktop 5850 and the desktop 5770 isn't too far behind, with this massive clock-down I would expect it to bottleneck itself on texture fill rate.
I will agree that it's an accomplishment to get it sub-100W, but I would expect that for the GPU, not the platform.
We really need that new Fermi die to bring prices down; it should even match the stupidly giant die in performance, with normal power and normal cooling.
This is a new chip based on the Fermi architecture. I don't get what you mean by a new chip? Maybe you want a new architecture?
By your logic all mobile GPUs should be called "castrated", but you are wrong, all mobile GPUs are new chips based on the respective desktop models.
Beating down on someone you shall never meet, who will never have any relation to your power of choice in your physical life with vigour and distaste.
You're all very mature people. ;)
As much as all your 'talking' matters (which is nought considering none of you have responsibility for nvidia) why would a light-hearted and whimsical post emotionally influence you in a NEGATIVE manner?
Unless you have emotional boundary issues and can't stand the actions of others without trying to assert control.
IMO that is some pretty poor reasoning. Takes up too much space? Seriously? Are you the one designing and building the laptop? No, they are prebuilt with all space considerations already taken into account... What do you expect, it will prevent you from installing an extra memory stick or two? Maybe there won't be room for that TEC? :rolleyes:
Well dad, how about the quote in your signature? That sure is a mature and unbiased reflection of yourself.
It is a mobile GPU, and all mobile GPUs are a form of cut-down version of their respective desktop GPUs. But you shouldn't mix it up with a full-blown desktop GPU and call it "castrated". It is a new chip based on the Fermi architecture. It is very simple, and shouldn't need any explanation, actually.
HD5870M gives you the horsepower of a desktop HD57xx in 50 W
Really, don't expect a GTX 465 with core/shader clocks reduced from 607/1215 MHz to 425/850 MHz to be very fast and power efficient :yepp:
Is it DX11 compatible..?
It doesn't need to be hugely better, a little better would do!
Of course it is based on the Fermi architecture, what else would you expect? But what would qualify as a new chip in your book? Do you want a new architecture, a GF15000, to call it a new chip? This is not a desktop chip, it is a new mobile chip.
Why so? A desktop HD5770 has a TDP of 108W and is about equal to a GTX260 - therefore I don't really see any problems for ATI to match this. I do however doubt they'll do it. The 100W mobile GPU market is minuscule - and then you're probably better off with 2*60W mobility Hd5870 anyway.
^^ The performance and TDP remains to be determined by hands on tests and reviews, but for now nVidia is claiming "Expect the fastest performance and visually-stunning graphics".
Of course it all starts with some claims, but it is much better to stick to the manufacturer's claims than to the genius who claimed inside info and was spreading lies about an unreleased product being slow, broken, etc.
nVidia's claims are the best source we have for now, but we have to wait for hands-on tests and reviews to determine anything for sure about performance, power usage and heat.
On the GDDR, the specs say 256-bit bus, 1200 MHz and 76.8 GB/s. That means: [(clock) * (bus width) * (data rate) / 8] = total GB/s.
So with some math skills, 38.4 GB/s * (data rate) = 76.8 GB/s. That clearly leaves a data rate of 2, making it something that operates as DDR and does 1200 MHz, so that would be GDDR3, since GDDR2 would be slower and GDDR5 is QDR.
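For reference, that same formula gives 76.8 GB/s under both readings of the spec sheet (a quick Python sketch; the clock/data-rate pairings are the two possible interpretations, not confirmed figures):
Code:
# Bandwidth formula from the post above: clock * bus_width * data_rate / 8.
def bandwidth_gbs(clock_mhz, bus_bits, data_rate):
    return clock_mhz * 1e6 * bus_bits * data_rate / 8 / 1e9

print(bandwidth_gbs(1200, 256, 2))  # 76.8 GB/s - 1200 MHz DDR (GDDR3 reading)
print(bandwidth_gbs(600, 256, 4))   # 76.8 GB/s - 600 MHz QDR GDDR5, "2400 MHz effective"
So 76.8 GB/s alone can't distinguish GDDR3 at 1200 MHz from GDDR5 quoted at its command clock; it depends on which clock the spec sheet is actually listing.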
There is also no spec on die size, just stats, so I'm assuming that it's like the 465, making it a castrated GF100 and not a mini GF100, since the shader speed is 850 MHz (aka processor speed, which also makes sense for the texture fill rate) and NV rates TDP for the entire card; it would not make sense for a smaller part to be rated at 100W with that clock and the 352 shader count. That demonstrates no flexibility, and they need to hurry up with a real part that is useful for more than benching. With the limited 225W wall on the desktop they need a smaller die that clocks higher, as that would bring yields and performance up and cost down, and the laptop needs an even smaller die than that, also with higher clocks, which would give it more P-states for battery life and more performance.
They need a double GPU to beat the 5970, not a better single GPU, because GTX 480 beats 5870 with good enough margin already, in my opinion.
As I've said already, who knows why, but you seem to know it all. If so, tell me: nVidia should release a 512 to compete with what?
Sam, there is no question or discussion about this, if Nvidia COULD release a 512 sps (better yields/lower TDP), they would. The fact that they cannot tells a lot about the current state of the GF100 chip.
Any other discussion is pointless.
No constructive and polite discussion is pointless, unless one doesn't want to, or can't, catch the point.
As said, who knows why, but reality and logic say that they don't/didn't need a 512 to beat the 5870. Do you mean nVidia has to release a 512 to compete with its own GTX 480, or what?
What we are talking about is that this chip will not be a new one; as such, it will be a castrated GF100. Not all the other mobile chips are castrated, as it doesn't make any kind of sense to use a ~500mm2 die to make mobile chips (it's a waste). My bet is that this is temporary, until they have GF104 ready for action.
Nvidia has sunk a lot of money and effort into developing a 512 chip. Not delivering it is a sign of bad planning/execution/design. Believe me, if they could, they would.
Following your logic Nvidia shouldn't have released the G92B 8800GTS back in 2007 (it's not like ATI was a threat at the time) - but they could, so they did.
If they designed a chip that can't stay in its power envelope, then they should have made it smaller and clocked it where the bigger one should have been. Or they should at least put one out for benching only, as that's all that the GF100 is good for right now other than crunching. I'm not saying that it isn't fun to play games with benching hardware or clocks, but it's not viable for mass marketing or non-custom computers.
Nobody puts a full-blown new desktop GPU in a laptop. But that doesn't mean they are "castrated"; they are mobile chips that are cut down to save power and heat. They are new chips that are supposed to be like that.
Show me a full-blown new desktop GPU that went right into a mobile GPU, then you are right. Otherwise you don't know what you are talking about.
You are assuming too much. As said, who knows, but maybe they were working on 512 because they thought they would need that much to beat the 5870 at the time. The reality shows that 480 was (and still is) enough for that. Why should they beat the 5870 by 30% when 10% gives you the lead? But who knows, maybe they got in trouble too, but that's not important now. What matters now is that the GTX 480 beats the 5870 with a good enough margin.
sh*t happens. there are some things that are out of nvidia's control.
They could, but it would not have very good clock speeds. I'm not arguing over what GF100 could have been. My point is that they are doing the most with what they have by disabling an SM and bumping clocks. It could be a better choice than having 512 SPs.
I hear you, "castrated" was probably the wrong wording. I just meant a trimmed-down Fermi fitting the mobile needs.
However, even though people make fun of Fermi's heat and power draw, these are really the interesting things about the 480M, and I'm curious whether nVidia is able to reduce the power draw and heat of the mobile parts that much or not.
I personally think the GTX 480 is performing exactly as expected, with a good enough margin above the competing GPU (the 5870). It is exactly where any of my sane estimates put it.
But yes, I agree the power usage and price should drop. The power usage may get better over time with maturing BIOSes, drivers and such, and we have already seen some improvements in a relatively short time. Hopefully it can get even better, and it should. The price will get better when ATi can get its act together and put up a good fight in the single-GPU war.
GTX 480 spec: Memory Clock (MHz) 1848
http://www.nvidia.com/object/product...tx_480_us.html
So the GTX 480M mem clock = 1200 MHz x 2
That's good, because your insane estimates put it at 50-60%.
http://www.xtremesystems.org/forums/...postcount=1526
http://www.xtremesystems.org/forums/...postcount=1528
http://www.xtremesystems.org/forums/...postcount=1530
Not to pick on you though - those old fermi threads are full of great tidbits. Ahh, the memories, lol.
I guess that you are right, so NV just can't give out specs, then. But why would they use 600 MHz GDDR5? I did not know it came that slow or could be undervolted.
They should be using either the real frequency or the effective frequency; they should not be allowed to use a magical frequency that they make up. I see their vendors using only the effective clock, though.
Let's just look at the part: it's a full GF100 die, the entire die gets powered, so on a laptop you are still getting the full desktop part, just clocked to 425 MHz. What does that leave for a laptop? I don't know how low the 480 can clock, but I'm guessing it won't work with Optimus, so you'd have a card with a 100W GPU running at something like 30W+ all the time, as I don't see it clocking below 150 MHz. That would make a laptop with something like 30 minutes on battery when idling, making it not much of a laptop. What they need is a smaller die with the same shader count, or maybe less, but with much higher clocks, since the architecture is supposed to scale in size and scale with higher clocks. Why is it taking so long to get something out?
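To put rough numbers on that worry (every figure below is an assumption for illustration, not a measurement for any real notebook):
Code:
# Idle battery-life sketch. battery_wh and the idle-draw scenarios are
# made-up illustrative values.
battery_wh = 75                       # a typical large gaming-notebook pack
for system_idle_w in (25, 55, 105):   # low idle, GPU stuck ~30W, GPU near 3D clocks
    minutes = battery_wh / system_idle_w * 60
    print(f"{system_idle_w:>3} W idle -> {minutes:.0f} min on battery")
The estimate swings from roughly three hours down to well under an hour depending on how low the GPU can actually idle, which is exactly why the P-state question matters.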
Here's your proof and specs: http://www.xbitlabs.com/articles/vid...e-gtx-465.html
My point is they could never make one in the first place, at least not enough to sell even in PR/halo quantities, hence the "unmanufacturable" claim. They don't need to release one NOW to compete, they should have released it when the 400 series launched, but they couldn't. :doh:
I've heard they were working on 512, but I don't know why they ended up with 480. Maybe they got in trouble with power usage, heat, yields, etc., or maybe it was all those lies from that genius with claims about insider info, or all those great Charlie stories. But maybe not, who knows? Are you claiming to know?
But the reality says they didn't (and still don't) need 512 to beat the 5870. Maybe they discovered that at the end and that's why they settled on 480. The GTX 480 has managed to perform with a good enough margin above the competing GPU (5870), and that's all that matters in this business; the rest is history. I hope this logic can blow away the little that was left of that argument, if any.
The Fermi design was set in stone before all those troubles (yield problems, leakage, thermal performance, TSMC, etc.) arose. Nvidia had a good idea how it would turn out, but it's impossible to be 100% sure until you have actual silicon in your hands. And they couldn't go back to the drawing board once they realised they were in trouble; that takes years. There's a thing called time to market.
Fermi was supposed to have 512 working CUDA cores; it's not like Nvidia said, "Uhm, OK, 480 cores is just fine". All those problems combined resulted in this "catastrophic" failure (like an airplane crash) and Nvidia had to make a lot of adjustments to release the GF100 as we know it now.
Hence why it was almost six months late.
As I've said, this or other problems may be the case too, and yes, they took too long to pull it off. Some of those assumptions may be true, but nobody knows the whole truth behind it, especially when you look at all the propaganda with funny pictures, funny comments, the genius insider, the great Charlie, and other BS that made the GPU business look like monkey business. But anyway, that is history now; what matters now is the current reality. We should focus on the PPP (Performance, Power usage, Price) of their current products to judge their efforts, not on what they couldn't/shouldn't have done in the past.
Right now, the performance and power usage/heat of GTX 480M is interesting. It will show how scalable and flexible this new architecture really is.
Too bad humans don't have the power to change the past, only the future.
The Fermi design was a bit too unrealistic, and nVidia didn't do enough trial and error, or, to put it another way, didn't put enough effort into being realistic about what the chip maker could do.
Not to be a fanboy, but it's unlike what ATI did: when they designed the 4870 around GDDR5, they were realistic about what the maker could deliver.
Maybe a little bit off on some details, but you guys know what I mean.
If you have something constructive to say about this GPU discussion, say it; otherwise mind your own business. It is not up to you to tell others how much to write.
When the argument dries up, the personal attacks and funny comments start. This GPU monkey-business propaganda has been going on with exactly this kind of off-topic BS.
@Sam_oslo: I'm gonna try to explain to you why you are wrong.
a) Full-blown desktop GF100. As of now, we only get 480SP, and not the full product, 512SP. So, think about it: do you think it makes any sense to make a design with 512SP and then only market a castrated version? In my book it's not, because had you intended to make a 480SP part in the first place, the chip would be smaller and thus cheaper. Instead, they fabricate 512SP chips but have to disable a part of them in order to make them feasible. So, what does this tell us? And well, don't forget that GF100 is horrible power consumption / noise and heat wise. How is it possible that GF100 barely competes with the GTX 295 yet it's not better in any of the aforementioned aspects? Something is wrong, yet not many people see it. Why? That is something I do not understand either.
Let's look at the competition: just judging from the performance numbers we can tell that something in the ATI architecture is not working at 100% (or the architecture is too old), because the 5870 barely keeps up with the 4870X2. BUT, and this one is enormous, they made huuuuuuuuuge improvements in heat, noise and power consumption, as the 5870 barely takes 200W. Now, ask yourself: how is it possible that NVIDIA needs a 500mm2 chip to compete with a 350mm2 one and, not only that, needs a tooooooooon of extra energy? In my book something is definitely not right because, in the previous gen, GT200 was incredibly good power consumption / heat / noise wise compared to ATI (not in vain I have two GT200 products: GTX260 and GTX295). So, in the past, you paid more, but you also got more: less consumption, less noise, a huuuge overclock headroom, etc. Now, tell me: where did all of that go? There is nothing left of the previous gen. Yes, they do keep the most powerful single GPU... but that is not enough, especially when you are asking the same money as a GTX295 cost some time ago without being notably better in any aspect...
b) Different product, different chip. When you create and fabricate a chip, you have your target in order to optimize production. If you need a product that has 1/5 the power of Cypress, it's nonsense to use a castrated one as you are wasting huuuuuuuuuuge money. Instead, you design another chip, in order to optimize production and make everything more efficient. It's not about the power consumption... it's about the cost. Why would you use a 500mm2 castrated chip to power a laptop if you are gonna clock it at 1/2 the desktop clocks and use 3/5 of its original SPs? That is a total waste of a chip that could be used for plenty of other stuff. If you look at the previous NVIDIA mobile line-up, NVIDIA only offered G92-based (and other low-end) products (even those which are labeled as GT2XXM), because it was clear that using an uber-castrated 500mm2 chip was total nonsense.
You could say that, if many chips are not 100% functional, it's a good idea to use the defective ones for other purposes: you are right. The problem is that, with time, yield rates improve (or tend to do so) and you would be getting more high-end chips and fewer cut-down ones, which would turn out to be a huge problem as you want more mainstream than enthusiast products. That is why they only make this a limited line of products, because it's not a good idea: they did it with the 8800GS and now ATI is doing it with the 5830.
Now, let's look at the whole thing: if they are using GF100 for mobile parts it means that yields are not just horrible, but 100% crap and, probably, it will take them quite some time to produce any other chip in good numbers. So, all in all, it's a bad-bad situation for NVIDIA, and I definitely hope it gets better as, otherwise, we are screwed. Good competition means good prices; we didn't have 120€ GTX260s a year ago because NVIDIA stopped being so greedy :rofl::rofl::rofl: but because ATI was providing insane competition.
PS: not that ATI is not greedy now... Evergreen has done nothing but raise its prices since launch, especially in Europe (but that is due to other factors, including the rising dollar).
GF100 is a brand-new architecture, so there were bound to be problems. This happens whenever you make a new architecture, just because it's new. GT200 was still part of an old and matured architecture, that is why it did so well, and GF100 has the potential to do the same. It just needs to mature.
Yes, this is a castrated chip, but think about it: if the clusters they disabled were faulty, then wouldn't it be a way to make money off something you would normally throw away?
Quote:
b) Different product, different chip. When you create and fabricate a chip, you have your target in order to optimize production. If you need a product that has 1/5 the power of Cypress, its nonsense to use a castrated one as you are wasting huuuuuuuuuuge money. Instead, you design another chip, in order to optimize production and make everything more efficient. Its not about the power consumption...its about the cost. Why would you use a 500mm2 castrated chip to power a laptop if you are gonna clock it at 1/2 the desktop clocks and use 3/5 of its original SP? that is a total waste of a chip that could be used for plenty of other stuff. If you look at previous NVIDIA mobile line-up, NVIDIA only offered G92 (and other low end ones) based products (even those who are labeled as GT2XXM), because it was clear that using a 500mm2 chip uber castrated was total nonsense.
When it comes to power usage and price, I've already said this:
If I understand you right, you are arguing that the current architecture of the GTX 480M means nVidia is having problems with yields. You also bring in the "castrated" chips and manufacturing costs on top of this, trying to say that manufacturing costs and yields are the reason for high prices and other bad stuff. But you are wrong, because the competition (ATi's ability to counter Fermi) has the biggest effect on prices, as anybody with a couple of days of marketing classes would tell you.
I don't know if your assumptions about yields are right or wrong, but even so, why should I, as a consumer, care about it? Why should I care about 480 out of 512 at all?
I don't care how much it costs to manufacture, or what kind of problems nVidia may have had or have, or what they could or should make. All I as a consumer care about is the PPP (Performance, Power usage, Price) of the current products at hand.
The performance is the most important and comes first. There is no doubt that the GTX 480 is performing well enough above the competing GPU (5870).
The price is more up to ATi's ability to get its act together and compete, and in that case Fermi prices will fall fast, for sure.
The high power usage is still a problem, and it may have something to do with the GPGPU extras, but it may get better over time with maturing BIOSes, drivers, and such. Hopefully it will get better, and it should, anyway.
It is very clear that Fermi is not a very successful architecture for now. The reason is obvious: when the 5870 was on sale, Mr. Huang didn't even have a real Fermi card to show at the conference (which is where the wood-screw jokes came from). In most cases, a delay means a failure. Pretty much the same situation as the Intel Pentium 4.
Well, you should.
1- Lower manufacturing costs means lower prices for the end-user.
2- A full-blown Fermi would be a better performer.
3- A better performing product for a lower price is good in itself and also would put a lot of pressure on AMD, thus...
Well, you get the idea. ;)
Going back in time and talking about problems, even if true, doesn't change the realities of the day.
Look at the GTX 480 as it is today, and compare its PPP with the competing GPU (5870). That makes it much easier to judge. Why go back in time to dismiss something that is right in front of your eyes?
1- Competition (ATi's ability to counter Fermi) will decide the price, not manufacturing costs. Just like the 980X costs $1000 because AMD is still stuck on 45nm, not because it costs more to manufacture. Many other examples prove this point.
2- Of course, who would say no to 512? But nVidia didn't (and still doesn't) need it to perform well enough above the competing GPU (5870). nVidia needs/needed a double-GPU to compete with the 5970, not a better single GPU.
3- The GTX 480 is a better performing product, and a superior GPU costs more, just like the 5870 costs more than the 5850. The price will go down when ATi can put up a good fight and compete with Fermi's performance.
They don't have a product with 512SP, but that's not a serious problem. The extra ALUs are useful for redundancy, so you can turn some off for different SKUs. ATi takes a fine-grained redundancy approach and it ends up with the 5830, an overpriced underperformer. It helps in the high end, but the lowest bin of the chip is terrible, and that's where the money is, so they are equal tradeoffs.
Different products should not require a different chip; that's a total waste of money. Just the masks will cost you several million dollars. It's best to have a one-size-fits-all design to target all performance segments. Non-recurring costs are very high, while silicon is as cheap as dirt.
Quote:
b) Different product, different chip. When you create and fabricate a chip, you have your target in order to optimize production. If you need a product that has 1/5 the power of Cypress, its nonsense to use a castrated one as you are wasting huuuuuuuuuuge money. Instead, you design another chip, in order to optimize production and make everything more efficient. Its not about the power consumption...its about the cost. Why would you use a 500mm2 castrated chip to power a laptop if you are gonna clock it at 1/2 the desktop clocks and use 3/5 of its original SP? that is a total waste of a chip that could be used for plenty of other stuff. If you look at previous NVIDIA mobile line-up, NVIDIA only offered G92 (and other low end ones) based products (even those who are labeled as GT2XXM), because it was clear that using a 500mm2 chip uber castrated was total nonsense.
btw, you might want to look up what the word castrate means. you are using it in the wrong context. fermi definitely has balls, in the form of a R/W cache.:p:
The exact opposite is happening from what you described. Just because Nvidia's yields are poor doesn't mean ATi's are good; they are still under-binning. 5870s still come in two voltages because they can't keep supply high enough with current manufacturing capabilities, and the 5970 is still $700. Neither the 4870X2 nor the 295 had that premium, even at launch.
Quote:
You could say that, if many chips are not 100% functional, its a good idea to use the defective ones for other purpouses: you are right. The problem is that, with time, yield rates improve (or tend to do so) and you would be getting more high-end chips and lesser cut-down ones, which would show to be a huge problem as you want more mainstream than enthusiast products. That is why they only make this as a limited line of products, because its not a good idea: they did it with 8800GS and now ATI is doing it with 5830.
Now, lets look at the hole stuff: if they are using GF100 for mobile parts it means that yields are not horrible, but 100% crap and, probably, it will take them quite some time to produce in good numbers any other chip. So, all in all, its a bad-bad situation for NVIDIA, and I deffinitely hope it gets better as, otherwise, we are screwed. Good competition means good prices, we didnt have 120€ GTX260 a year ago because NVIDIA stopped being so greedy :rofl::rofl::rofl: but because ATI was doing an insane competition.
PS: not that ATI is not greedy now...Evergreen has everything but raised its prices since launch, specially in Europe (but that is due to other factors, including the raising dollar).
The 5830 has horrible value: it's about $240 and the 5850 is 30% faster for $60 more. It's a disappointment compared to the 4830.
The 480SP Fermi actually costs Nvidia more than the full-blown 512SP Fermi would have. They had to spend money to develop the existing version once the original went wrong. Your analogy with AMD and Intel CPUs isn't valid because their "equivalent" CPUs don't compete in the same segment anymore; AMD focuses on the price/performance-conscious consumer market.
Sorry to be brutally honest with you but your other arguments don't really make sense. :shrug: