We wouldn't have any yield problems... the chip would never make it to the Fab since the simulations would keep telling us it won't work.
I just quickly googled, and over the past few years NV has been the biggest, or among the biggest, of TSMC's customers.
If I were in TSMC's position, I would be trying to keep NV on my good side no matter what. ATI will eventually move to GloFo, but NV doesn't have to, and you don't want to lose two of your biggest customers, that's for sure.
Or maybe I'm completely wrong, it's just fun to speculate sometimes :D
OK, how much more: $0.01, $10, $100, or maybe $200,000? I mean, we can assume costs are whatever we want to imagine them to be; unless somebody slipped the invoices out and leaked the real cost, it's all assumed.
I can "think" it's binned lower and is cheaper, same logic being used...
Until somebody posts what VRAM is used, what basis of comparison is there without at least a spec sheet?
We can both assume the other is wrong but that doesn't make either of us right.
I don't know how many times the BOM cost argument has gone on about Nvidia losing money on GPUs, only to see solid financial statements to the contrary.
Maybe not common sense. I can't tell you how much the BOM is, and I have common sense.
TDP is 30% more and the heatsink is not larger than the 5870's. They hire companies for good thermal solutions, which helps when you have a power-hungry card.
VRAM isn't a huge portion of costs on high-end cards unless you use top-bin ICs.
Maybe you could show me some information on the cost from a research firm; I'm not paying for that. Even if GF100 isn't profitable, it's not a big deal. High end and ultra high end are more about reputation than profitability.
Nope, they pay more for the newer wafers.
The difference is, Nvidia uses the node once it is mature and has high volume. AMD/ATi has lower volume but hops onto the node as early as possible and helps develop it. I'm sure each company gets discounts based on their involvement.
A few thousand more.
Too bad financials aren't broken down per SKU...
BOM for a single product != profitability for the entire company.
So you don't think the GTX480 heatsink costs more than the 5870's?
I have posted the info from a research firm before, I will see if I can find it again.
OK, based on what? "A few": is that two thousand or four thousand more?
I mean, it's really just silly to throw out ambiguous numbers that anybody here can come up with, with no real substance behind them; at least post something solid to back it up.
I mean, ATI and Nvidia are going to pay a few thousand more for their electricity this month, lawn maintenance is going to be a few hundred more, and TSMC is going to be billing them a few thousand more this month as well, just believe me...
old-school die shrink (250nm->180nm)
- same equipment, same fab, same lasers, same 200mm wafers
- most transistor parameters same or linear scale. Same materials.
- no need for special techniques, no significant deep-submicron effects
die shrink (ie 45nm->32nm)
- new, never-before-used techniques: liquid immersion, double patterning, soon EUV and high-index lenses
- huge changes to materials: different insulation between metal layers, different via plugs, copper for interconnects, metal instead of polysilicon for the gate, strained silicon to improve carrier mobility
- drastic changes to transistor parameters. Because of sub-micron effects, instead of about a dozen, now hundreds of parameters to precisely control.
- previously negligible effects like gate dielectric leakage have HUGE effects on power/heat.
A new, smaller process means smaller dies, so more per wafer. At first yields are poor, so it definitely costs more. But once yields mature, the higher cost per wafer is recovered and then some (rough numbers in the sketch below).
If they could, silicon companies would avoid die shrinks. Nobody likes risks, delays or higher costs. But it's a necessity.
You can make an Athlon core on 250nm. But you can't do that with much larger designs like the 5870 or Nehalem; the dies would be enormous. Likewise with power.
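To put some (completely made-up) numbers on that, here's a minimal sketch of how wafer cost, die area and yield roll up into cost per good die. The wafer prices, die areas and yield figures below are illustrative assumptions, not TSMC numbers, and the dies-per-wafer formula is just the usual textbook approximation:

```python
# Rough cost-per-good-die sketch. All prices, areas and yields are made up
# for illustration -- they are NOT real TSMC or NVIDIA figures.
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Textbook approximation: wafer area / die area, minus an edge-loss term
    for partial dies around the rim."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def cost_per_good_die(wafer_cost, wafer_diameter_mm, die_area_mm2, yield_rate):
    good_dies = dies_per_wafer(wafer_diameter_mm, die_area_mm2) * yield_rate
    return wafer_cost / good_dies

# Hypothetical 300 mm wafers: mature old node vs. early and mature new node.
print(cost_per_good_die(4000, 300, 340, 0.85))  # old node, mature yield  -> ~$28 per die
print(cost_per_good_die(5000, 300, 200, 0.40))  # shrink, early yield     -> ~$41 per die
print(cost_per_good_die(5000, 300, 200, 0.80))  # shrink, mature yield    -> ~$20 per die
```

Same design, pricier wafer, but once yields mature the shrink comes out cheaper per die; while yields are still poor it's the other way around.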
=================
EDIT: just cores
P3 Slot1 250nm "Katmai" - 9 million transistors, 5 layer
Athlon SlotA 250nm - 22 million transistors, 6 layer
contrast/compare:
Nehalem 45nm 730 million (probably ~300 million in the cores, so ~75 million per core)
G200 55nm 1400 million
Fermi 40nm 3200 million
future: slowdown... limits on growth in core count, die size, power output... the clock race is certainly long over.
IMHO: after 2010, I don't think AMD and nVidia will be doubling transistor counts anymore. Will be interesting what they come up with to sell chips... maybe AMD stalling with more L2 cache and/or a faster bus?
Thank you for proving my point...
BTW, since TSMC has to purchase new equipment that costs more than 30 million apiece, who do you think pays up in the end? AMD, Nvidia, etc., because TSMC isn't there to please anyone except their shareholders. So no, the wafer won't cost less, so the chip will cost more. But it's up to AMD, Nvidia, etc. to have a not-too-big die plus an easy-to-work-with arch to get the highest yield possible and recoup that investment.
Jeez, that's new to me; everything I have ever read before says moving to a smaller process = cheaper... maybe I am thinking of smaller shrinks like 65nm to 55nm (GPU) and 65nm to 45nm (CPU)? 55nm to 40nm doesn't bring any of the really big problems with shrinkage IIRC; doesn't that only happen at 32nm and lower?
And even if the cost per wafer is higher, they get more dies per wafer. IF it does cost them more per wafer, I am sure the extra chips would even it all out.
I feel like I am missing something or have been horribly misinformed...
It's only cheaper per core if the DIE-size decreases and as more features are integrated (=> more transistors) to increase performance this does not always happen.
This is my point exactly; it seems like people are saying it would be cheaper to make Fermi (well, just the die, obvious heat issues aside) on, say, 55nm. Yes, Fermi is going to cost a fair bit to make, but does 40nm REALLY make it MORE expensive? Would a 40nm 500mm2 chip really be cheaper than a 55nm 687.5mm2 chip (assuming it scales linearly, i.e. 500*[55/40])?
Obviously they wouldn't really make it a 55nm chip, but hypothetically speaking, is it "cheaper" to make it on 55nm?
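For what it's worth, the 687.5mm² figure assumes die area scales linearly with feature size. If density scaled with the square of the feature size (the ideal case), the same design at 55nm would be closer to 945mm², well past what could practically be manufactured; real shrinks usually land somewhere in between, since I/O, analog and pad areas don't shrink as well. A tiny sketch of both assumptions (the 500mm² starting point is just the hypothetical from the post above; nobody outside NVIDIA knows the real GF100 die size yet):

```python
# Back-scaling a hypothetical ~500 mm^2 40 nm die to 55 nm under two assumptions.
area_40nm = 500.0                        # assumed 40 nm die area in mm^2
linear    = area_40nm * (55 / 40)        # poster's assumption: ~687.5 mm^2
quadratic = area_40nm * (55 / 40) ** 2   # ideal density scaling: ~945 mm^2
print(f"linear scaling:    {linear:.1f} mm^2")
print(f"quadratic scaling: {quadratic:.1f} mm^2")
```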
Such a big chip is not possible for TSMC. The max size to manufacture is about 580mm².
40nm WAS necessary for GF100.
whoa already 33 pages :O
still no real info, so what's the point
Is there yet any concrete evidence regarding "Fermi" GF100 die size?
I see a lot of opinions that it is going to be somewhere between G200 and G200 B3 but I have not seen any conclusive justified statements to back that up.
General math isn't the issue; it's more or less your insistence that each wafer definitively costs "a few thousand dollars" more, and the question of how that range was derived, or whether it was contrived.
I'm not saying you don't know, but if you don't have anything to substantiate the claim, what are we supposed to think? I mean, I can say the wafers are more expensive but it averages out to only $1 extra per die, I'm right, trust me.
Who knows what kind of arrangement Nvidia have with TSMC and in what way wafer costs and yield issues are handled between the two. For all we know TSMC could have guaranteed price for a fixed number of operational dies regardless of the actual yield per wafer.
A huge cost is R&D. You worry how much the wafers and heatsinks cost, but the real point to consider is R&D. The Fermi architecture, just like the G80 architecture, was a long time in development and expensive to produce. But consider a few points:
- The architecture is being made into three separate high-end products, the cheapest of which is the GeForce part. Quadro parts are often priced at twice (or more) the price of GeForce parts, and Tesla parts even higher.
- The architecture is extremely modular. Scaling up for the next generation after GTX 480 is almost as simple as CTRL+V. Scaling down is just as easy.
- Longevity. Fermi has it. It's an extremely advanced architecture with a ton of features built around the best part of DX11, tessellation.
I haven't seen anything that's made me believe that NVIDIA will be selling GPUs at a loss. Consider that even with the HD5800 outperforming NVIDIA parts, NVIDIA is gaining more of the market, selling a ton of GPUs and making a lot of money. Even IF they were selling GF100 at a loss, they aren't in AMD's financial shoes.
Besides, why should we care if they make or lose money on a product so long as the price and performance are competitive?
Amorphous
you had me until this...
nVidia has only gained market share in the low-end and OEM markets, laptop makers are dropping them due to issues over the last 2 years, and OEMs are tired of renaming parts for new models. nVidia will be fine if the first round of Fermi doesn't do well, but they better hope they get some money from somewhere. No business can afford to sell all its parts at a loss for very long.
No difference, I still think it's a contrived figure...
http://jonpeddie.com/press-releases/...er-also-beats/Quote:
Intel was the leader in Q4'09, elevated by Atom sales for netbooks, as well as strong growth in the desktop segment. AMD gained in the notebook integrated segment, but lost some market share in discrete in both the desktop and notebook segments due to constraints in 40nm supply. Nvidia picked up a little share overall. Nvidia's increases came primarily in desktop discretes, while slipping in desktop and notebook integrated.
But they have less market share than last year.
Can we then analogize scaling down as using the DEL key? :D
Scaling down for the mid and low range would be a good idea. They'd get a much higher yield and they could fill those market segments with nice Fermi arch chips instead of evolved G80 arch chips.
Well, I for one also consider the market and social impact of my purchase. I'd like strong competition in the GPU market. What happens now in the early days of gpu computing could affect how the market looks for decade. But we all will have our own reasons for buying a product.Quote:
Besides, why should we care if they make or lose money on a product so long as the price and performance are competitive?
You would have to be a subscriber to get the full article.
That would DEL the whole die; what you need to use is the backspace key :)
I hope it's scalable, otherwise we will have Gx3xx in the low-mid end based on GT200 and G92 cores.
My reason for buying is funding the gpu makers till holodecks come :D and play some games in the mean time.
5970, 5870, 5870 2GB, 5850 vs. 480, 470 final scores:
http://bbs.pczilla.net/attachments/m...ecd61398ab.png
Total scores for what though?
I are can be like troll?
No but seriously, why even bother posting something like that? Give me 10 minutes in Excel and I can make you a more believable but completely fabricated graph.
To all the ATI fanboys in this thread: Fermi might not be insanely faster or cheaper than the 5xxx series, but at least it can run two screens :p: (so do I get my troll points yet?)
mao5 = P2MM, an infamous Chinese ATI fanboy...
Sometimes he may have a real inside source, sometimes he just posts some bull:banana::banana::banana::banana:...
He specifically wants two monitors, three just don't cut it xD
You forgot your “Radeon 7" bull:banana::banana::banana::banana:....
http://forum.beyond3d.com/showpost.p...postcount=1761
6-9 months tops, as for most cards, I guess.
The cost difference can't be significant. Must be something else. Heat, perhaps...
No reason for them not to release it once they can.
It shouldn't make a significant difference in heat dissipation.
It's probably built that way to allow a bigger radiator while keeping an acceptable width.
750MHz is too high for a reference card.
And 512sp version is supposedly coming later...
I'm sure they will make some profit. On both Geforce and (of course) Tesla cards. The price they are selling them for is bound to be higher than the manufacturing cost.
It's the only high end card with disabled parts I can think of.
There would be low availability in any case.
And we can't complain, Tesla is their primary market... At least they'll make some serious $$$! :yepp:
LOL, nice source! :rofl:
I doubt they sold many... But they sold a lot of 4870x2s, I think.
What's up with all the hate? :shrug:
Unlikely IMO... Or at least right now. But there should be definitely something similar coming out a bit later.
:rofl:
Because Tesla cards cost a lot more than Geforce cards. And Tesla is the primary reason for creating Fermi arch (GPGPU, etc). So they are just following their plan.
Looks about right, I guess.
This is typical for high end cards these days, though. Gotta blame TSMC, I suppose...
Good definition! :up:
I bet they are not using the 10.3 drivers, which are supposed to be MUCH faster in Dirt 2.
Interesting, nonetheless.
Great! Going to be an excellent comparison then! Looking forward to it! :up:
It doesn't use much tessellation at all.
Mostly for crowds of people and some minor effects...
250W.
1.5-2x gaming performance of GTX285? 5870 is already faster than that on average...
Doubt that, just Tesla cards I think.
Yeah, stylish and sturdy. I like it, too.
Doubt that, since it needs SLI for 3 monitors...
Oh, there is real info.
Which should hopefully be covered by Tesla sales.
Good point, though.
Fail graph.
I was trolling with that comment, jeez relax.
But honestly, the flicker issue is still present with the 10.2 drivers and the new 10.3 drivers. I've got 4 friends with 5870s; the Asus card doesn't flicker for some reason, but the MSI and Club3D cards both have flickering on the second screen when connected to my dual 1920x1200 monitors. I really wanted to get a 5870, but seeing as I work on my machine and have dual screens specifically for that reason, the 5870 is not a card I can risk buying.
Keeping the memory frequency at the 3D clock will fix the problem. The Asus card doesn't flicker because its BIOS keeps the memory frequency at 1200MHz in multi-monitor mode.
http://forum.beyond3d.com/showpost.p...9&postcount=49
http://www.hardwareluxx.de/community...9-post364.htmlQuote:
3DMark Vantage Extreme
HD5970: 12339
GTX480: 9688
HD5870: 8912
GTX470: 7527
HD5850: 6848
8.7% over 5870? I don't see anyone with a 5870 buying a 480...
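For anyone wondering where those percentages come from, they're just ratios of the scores quoted above; a quick sketch that reproduces them (nothing new here, only the numbers from the hardwareluxx post):

```python
# Relative performance from the quoted 3DMark Vantage Extreme scores.
scores = {"HD5970": 12339, "GTX480": 9688, "HD5870": 8912,
          "GTX470": 7527, "HD5850": 6848}

def lead(a, b):
    """How much faster card a is than card b, in percent."""
    return 100.0 * (scores[a] / scores[b] - 1.0)

print(f"GTX480 over HD5870: {lead('GTX480', 'HD5870'):.1f}%")  # ~8.7%
print(f"HD5970 over GTX480: {lead('HD5970', 'GTX480'):.1f}%")  # ~27.4%
print(f"GTX470 over HD5850: {lead('GTX470', 'HD5850'):.1f}%")  # ~9.9%
```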
This seems to explain why Nvidia would be so tight about releasing benchmarks other than the heavy tessellation stuff.
Just checked my old scores: 2xGTX 280 SLI would get me 9200-9600 in 3DMark Vantage Extreme. So this card will not be as fast as 2xGTX 285s, but seems to be about as fast as 2xGTX 280s. If this were 6 months ago it would be something special, but the 5870 has been around, and the 5970; this isn't going to excite too many people.
Especially 3DMark. :p: The HD 5970 is also only 27.4% ahead of the GTX 480 here... yeah right, like it would be this close to the HD5970 in gaming benchmarks (where the 5970 has working CF profiles); that would have been nice. :D
At least, looking at these scores, there's hope for a 20% or better performance advantage (480 vs 5870) in games, though.
True, but it is a decent indicator of what we will likely see with in-game benchmarks; each game will differ, obviously.
Well sigh.... I never got the GTX 285s or 295, or HD5870 because I specifically waited for this release on my "aging" GTX 280 SLI setup. Now one of them is gone and I'm turning down graphics and having to run DX9 in BC2.
My hope I guess is that GTX 480 SLI will have 80-90% performance increase at 2560x1600 in DX11 for BFBC2 and hopefully driver improvements and SLI scaling doesn't make me regret the purchases.
Whatever, they are already 6 months late :ROTF:
http://www.czechgamer.com/pics/clank...Czechgamer.jpg
Yeah, but later releases of the Nvidia drivers will also offer nice performance boosts. If you really want to be fair, you should compare the GTX480 with the 5870 using the launch ATI drivers, but that will never happen. So just remember: if the difference is 8~10% now with 10.3 drivers and NV beta drivers, given 4 months the gap will probably be more like 15%~20% with mature NV drivers.
The 10.3 drivers offer a substantial performance increase over the launch 9.12s, and the same can be expected over in NV land.
I see where you are coming from, but that's not entirely true in my opinion. I'd rather see what performance I can get right now than what I _might_ get later on. Also, due to all the delays, the driver team should have had quite some time to optimize the drivers, so we can't really be sure how much more performance they can squeeze out of these cards compared to the drivers they will launch with. I'd love it to happen, though. Competition is good for the market! :)
jeez... why don't you post the nvidia PR slides here right away? you seem to be repeating what nvidia PR tells everybody... :P
and scaling down is easy? gf100 taped out when? and gf104 is coming when? if it's that easy, wouldn't you expect gf104 to come out just months if not weeks after gf100? look at ati, they released rv870 and rv840 at almost the same time; in fact the rv840 team managed to slightly pull ahead of the rv870 team in the R&D process! THAT'S what i'd call easy scaling...
fair enough...
uhmmm and you base this on what exactly?
exactly, amd has financial backing from abu dhabi and they have all the ip they need for the next decade, x86, gpu, chipsets...
hey guys have you seen this?
http://www.tweaktown.com/articles/31...is/index3.html
10.3 gives ati cards almost a 10% boost in unigine and hawx... impressive...
and a 20% boost on minfps for the 5870 in fc2 as well...
if you ask me, this is ati's fermi welcome party hehe :D
i'm sure they had those tweaks before, but didn't integrate them into the past few driver releases so they'd have something for the fermi launch...
and the timing is perfect, they released them just in time for fermi reviews...
Fermi yields still under 50 percent
Can't believe it, after TSMC's announcement of 40 nm production 1+ year ago...Quote:
According to Digitimes, Nvidia's Fermi is still plagued by poor yields, but the company is still expected to have enough cards at launch. TSMC's yields are apparently still under 50 percent and it is unclear whether they can or will improve anytime soon.
However, as we said earlier, Nvidia opted to reduce the number of cores from 512 to 480 on the flagship GTX 480, while the GTX 470 will have even fewer cores, 448 to be exact.
Should TSMC manage to improve the yields, Nvidia could probably launch a faster version with more shaders, putting more pressure on ATI. However, ATI has shipped a few million DirectX 11 cards since October, so this is a small consolation for Nvidia.
Yep, 10.3 looks specially made to combo with the 5870 2GB and deliver a sort of painful blow to Nvidia's part. If 10.3 is official by the time Fermi gets out and reviewers use that as the reference 5870 driver, the GTX 4x0 will not seem that sweet.
In any case, when ATi gets the new 5870 2GBs out, reviews are bound to use 10.3 (if final) and also the GTX 4x0 :yepp:
The fact is that Abu Dhabi is interested in AMD, and in a situation where AMD fails, they will either buy a majority (which may not be allowed by the government) or alternatively take some partial ownership.
EDIT: BTW, this is someone who bailed out Dubai for $10bn.
Whining about no dual monitor option, whining about a so-called flicker problem with the 5800 series, whining about the fact that it's unfair to compare a newly launched Fermi with the latest ATI driver (oh, you think nVidia would do a fair bench)... get a grip, man.
You complain about all the ATI fanboys and the ranting/bashing going on in these threads. You're just the same, clearly an nVidia fanboy, trying to find any stupid reason to bash an ATI product or use an excuse not to buy an ATI product. You don't like ATI and you're an nVidia fanboy; that's why you won't buy a 5800 series card. Don't use lame excuses like there being a flicker problem.
It's absolutely shocking the way some of you go on, so delusional.
It's a bloody graphics card company; it's hardly a Manchester United vs Liverpool rivalry, is it? I cannot understand why you lot get so worked up.
CPU i7 965
MB RAMPAGE II EXTREME
RAM 3G DDR3
http://www.mobile01.com/topicdetail....&t=1475786&p=1
That reminds me too much of the GTX 295. Anyway, here is the 5870's score with 10.3 (I don't know about the settings; I guess they are the same, but I could be wrong):
http://images.tweaktown.com/content/3/1/3194_20.png
I'm not whining, just pointing out a flaw in his logic: saying that he hopes they used the 10.3 drivers, as that will show that Fermi isn't that much faster (which may very well be the case). I was just pointing out that Nvidia's drivers also have to mature, or do you disagree? So basically drivers don't affect performance in your opinion and driver versions don't really matter?
I don't have a preference for either camp and will buy whatever is the best value for money and will give me the least problems. Prices in South Africa are insane (around $650 for a 5870) and I was prepared to buy a 5870 at that price till I saw reports of this "imaginary" flicker problem. I obviously must have been delusional that day and imagined those hundreds of posts confirming it, and obviously in my delusional state missed the driver changelog entry confirming it was fixed (I've been checking each new driver changelog for that to happen, because then I can pretty much go out and buy a 5870 - but I sure as hell am not paying $650 for a card that flickers and then trying to explain why I want a refund to the idiot tech guys at the suppliers here). But that's not the issue here, is it. My initial comment was just a joke (you know, something that's meant to be funny).
Ironically enough, my posts haven't been aggressive or overbearing; I wasn't even whining, just pointing out a valid fact, which obviously got your panties in a bunch. So maybe instead of getting all upset, give me a valid counterpoint as to why the driver maturing won't improve performance over time.
Just to clear things up in case you missed my point like you did in my last post: at launch they SHOULD use the latest driver versions from each camp for the reviews and compare performance on those, but it is naive to think that performance won't improve as the drivers mature; a follow-up review in a few months with a more mature driver from both sides might yield different results. So the argument that a driver is going to bridge the gap between the two products is, *at this point*, incorrect, as ATI have had nearly 6 months to improve their driver.
new gpus are always benched with release drivers against older cards that have matured drivers. that's not unfair at all. comparisons like that can only be done at a point in time. what you can expect from these comparisons are results that are only valid for this specific point in time as well. nobody can predict how the performance will evolve in the future. sometimes matured drivers increase performance by a significant amount, but then there are times where matured drivers don't deliver better performance at all.
but that doesn't really matter, as you buy the graphics card at this point in time as well, so the comparison is valid. when you want to buy such a card, you care about its current performance most, and not what the performance will look like in 5 months (where the performance of the competing product might be better as well...).
so, yeah, it's pretty hard to say :)
+1, I fully agree. As I said in my follow-up post, I can't help it if people misunderstood my point: obviously you will test with the latest drivers, but you can't expect that test to be valid 2-3 months down the line, case in point the 10.3 Catalysts. The gap between the 285 and the 5870 has grown a lot larger since the 5870 launch just due to the driver updates.
Not only that, but they have had this card for 3 respins, so to say that they don't have a "mature" driver is somewhat disingenuous; their "dedicated" driver team has had ample time to write something acceptable for the card given the sheer number of delays.
Really, the drivers don't concern me; my issue with Fermi is as follows:
#1 It's the first time I can think of that a "harvested" part is the best they can do; NV30 was not like this, R600 was not like this, R520 was not like this.
#2 Power efficiency: 250W vs 185W-ish for the ATi equivalent.
#3 Lack of taped-out parts; sure, GF108 supposedly just did, but what about 104 and 106? Is this going to be a repeat of the GT200?
Even though the benchmark numbers are slowly trickling in, I'm not fully convinced of buying a card only to be overshadowed by the end of the year with a 512 part.
I don't know, maybe the reviews will give a bit more clarity in how fast it is with 480 cores, and how quick a 512 part would be. Future proofing as much as possible is important for me, since I don't have the funds anymore to update my rig every 6 months. My G80 really has blown me away in how long it has been with me. The GFX card I've owned the longest. I'm actually not going to sell this one to buy my next card. It's too good. :D
But there you go, there is always something else right around the corner really. You can wait forever. :p:
The way I heard it, Nvidia disabled the shaders: they originally had 512 SPs working, but with high TDP and lower frequency, so they went with fewer shaders, which made the card fast and also less power hungry.
In any case a 512 part will come; the arch supports 512, after all...
First post updated
short term, no, long term, most likely yes...
hmmm, under 50%... well that's not a surprise, is it?
if it were ABOVE 50%, that would be news... but under 50%... that doesn't really say anything, 5% is under 50% as well :D
but if it's somewhere in the 40% range, then that's great news for nvidia!
and that would explain the low prices for the gf100 cards too!
Graphical view to easily spot the differences
http://www.framebuffer.com.br/sites/...0-44-41%5D.png
If we take the score posted a few posts ago (X9322), the graph changes a bit
http://www.framebuffer.com.br/sites/...0-51-09%5D.png
If the 512 part makes it out in the next 3 months, it would be totally worth waiting for. That's only if the current 480 benches end up being impressive, though; we should know in a few days.
Ok lets do an informal survey.
WHO HERE ACTUALLY HAS AROUND $1000 TO SPEND ON VIDEO CARDS TODAY?
Does it really matter if the almighty GTX480 is 8% or 18% faster?
Look at the GTX470 - it's gonna get eaten alive by HD5850 rebate promotions - a price war that would drag nVidia into bankruptcy.
hey, nice of ati to treat its customers like fools: wait 6 months for a driver just to show up nv lol
not that the 5xxx cards were fully usable for the first two months anyway, so really only a 4 month head start.
6 months for cf profiles? lol yeah, ati can do no wrong :rolleyes:
i hope ati rushes out a 5890 or w/e for cheaper to further show that its early adopters are fools? lol j/k
it's not like ati needed to rush out the cards with crap driver support (5970 is still bad) for any dx11 games just to capitalise on the dx11 payoff with all the games they said would come out shortly.. i think 1 or 2 dx11 games came out in six months, not to mention it seems nv's dx11 cards will run those apps better, but we'll see
ATI were not sitting on the driver, if they were it wouldn't be as buggy as it obviously is. They probably released it early so it could make it into the fermi reviews, as the reviewers need it about a week before they start writing so they can do all the benchmarks needed.
For reference: i7 @ 4GHz no HT, 5870 at stock, Catalyst 10.3a, Extreme preset.
http://img686.imageshack.us/img686/6...501200c.th.jpg
I'll assume the 965 is at stock (hard to say since PhysX skewed the CPU score).
You do realize that the GeForce product line does not constitute the majority of NVIDIA's business model or potential profits, right? They could easily promote the GTX 480 and GTX 470 as loss-leaders and still come out FAR ahead on profits from the Fermi architecture.
Why? Because proper marketing of the GeForce series will, in a roundabout way, promote the Fermi-based Quadro and Tesla products, which are both the real money-makers. They also both occupy areas within markets where ATI has next to no presence. Add their ION and Tegra lineups and you'll begin to see there's absolutely no way that selling GeForce cards at a narrow profit margin will substantially hurt NVIDIA's bottom line.
Just checking this thread to see where nVidia is at these days. I'm thinking about purchasing another card... I gotta just say that I'm very glad I purchased the 5870 when I did. Performance has been great across 3 displays and I still use the 4th for Vent and my CPU monitoring (CPU-Z, RealTemp, etc). So instead of switching back to nVIDIA I think I'll just go buy another 5870 in a few months.
what $1000 are you guys talking about? GTX 480 will cost $500, not 1000.
Probably talking about TWO 480, not just one :p:
It would have mattered to me a while back... I'd rather have a stable and fully compatible video card than one that doesn't give me all the candy in games. But things have changed since I tried this 5970 and Catalyst 10.3a... It's so much better than what it used to be. And I am going to add a GT240 for PhysX (as I really enjoy PhysX when it's present), so all my "problems" are solved and right now I don't feel the need for nVidia cards.
Many people will prefer otherwise, die-hard fans in particular :rolleyes:
As for bankruptcy... ATI would fall first; nVidia has many more sources for the €uros and Dollar$.
But even with 512 SPs (or CUDA Cores as they call them now) the difference in performance won't be THAT high, and I bet the price will :mad:
cheers, nice graphs :toast:
i think nvidia cut the 480 down to 480sps because 512 wouldn't have made a big difference... they still wouldn't get close to the 5970... so they went for 480sps, which is still enough to beat the 5870 but means they can harvest a lot more 480s and... make more money per wafer...
would be cool if nvidia allowed people to unlock their 480s to MAYBE get the full 512sps... at least for the time being when there isn't a 512sp card for sale... but i don't think they will do it :(
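on the harvesting point, here's a toy model of why fusing off one SM helps so much. it assumes gf100's 16 SMs of 32 cores each (512 total, 480 with one SM disabled), that defects hit SMs independently, and a made-up 10% chance of a killer defect in any given SM; real yield math is far messier, so treat it as a sketch only:

```python
# toy model: chance a gf100 die is sellable if we require all 16 SMs working
# (512 sp) vs. only 15 of 16 (480 sp). the 10% per-SM defect rate is made up.
from math import comb

def sellable_fraction(min_good_sm, n_sm=16, p_sm_defect=0.10):
    """Probability that at least `min_good_sm` of the n_sm SMs are defect-free."""
    p_good = 1.0 - p_sm_defect
    return sum(comb(n_sm, k) * p_good**k * p_sm_defect**(n_sm - k)
               for k in range(min_good_sm, n_sm + 1))

print(f"full 512 sp dies only:  {sellable_fraction(16):.1%}")  # ~18.5%
print(f"allow one SM fused off: {sellable_fraction(15):.1%}")  # ~51.5%
```

under those made-up numbers, allowing one dead SM roughly triples the number of sellable dies per wafer, which is exactly the "harvest a lot more 480s" argument.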
pff... at best it would result in an 8% perf boost... most likely less... what's up with all you guys looking for a reason to wait? nvidia is 6 months late, the cards are STILL not for sale, h3ll they are still not even LAUNCHED at this moment, and you are already looking for reasons to wait even more? 0_o
hurt, yes... cause them losses for a quarter, two or even three... maybe... drive them into bankruptcy, definitely not... nvidia stumbled with gf100, but they are still standing... the next 6-12 months will be very interesting... if ati can land a heavy punch on nvidia's chin in this time frame, they have a very good chance of seeing nvidia kiss the floor.
but look at nvidia's market share in gaming graphics... ati is at 30% and nvidia at 68%, and i think in the laptop and professional segments it's similar. ati needs more than 6 or even 12 months of dominance to hurt nvidia or even knock them out... even if fermi's successor/shrink fails it won't be enough to drive them into bankruptcy...
that was just speculation... i think it's likely, and yes, it's kinda lame... but then again, a 10% boost doesn't make a resolution or setting playable... 10% is not enough to go from unplayable to playable, it's just enough to go from "not that nice" to "good"... and i don't think it's just ati doing this...