Well, I should've said "of those who will buy an NVIDIA card", stupid me. :D
Anyway, you get the point.
Oh you mean when a competitor did it out of necessity to find a home for their products because they couldn't compete on the high end?
Of course if they can't compete at the high end, they want to go cheaper, which means they have to make their product cheaper in every way possible, including taking risks on the process.
Do you really think NVIDIA can have the fastest GPUs on the planet, but can't go to 55nm?
It's not even a matter of NVIDIA having the process, because they just put in an order to TSMC the same way AMD does. Yields on relatively new processes (new for the fab in question) can be a :banana::banana::banana::banana::banana: and it's a huge risk.
What if I told you that for your new house you're building from the ground up, there's a new foundation technology which only 1 house has been built on before and it's only been out for a year, so the kinks are not necessarily all worked out? Want to invest your $600,000 on top of it?
Again, that all boils down to incompetence, no matter how you look at it.
And besides, 55nm is now just as mature as 65nm was when G92 came out.
http://largon.wippiespace.com/smilies/blankstare.gif
Err... what?
55nm is the issue here. 55nm works for ATi, and assuming the bunnies at TSMC are not ATi fans, the root of the problem lies with nV.
Anyway, I think it's a problem.
In fact, the only way consumers can see it is as a problem.
Now, why is it not a problem for ATi - or rather, why wasn't it a problem 6 months ago?
Yes, 55nm really works for ATI. 576mm2 is too large, especially when RV770 is less than half that size.
RV770 = between 830 and 1300 million transistors = 256mm2
GT200 = 1000 million transistors = 576mm2
Looking at it like this, they really should have used the 55nm die shrink instead of 65nm. GT206 will be 55nm.
R600 had 30% more transistors than G80, yet G80 owned R600. ATI's RV770 has to be really efficient to keep up with nVidia.
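To put those numbers side by side, here's a quick back-of-the-envelope transistor-density calculation using the figures quoted above. Note the RV770 transistor count is still just rumour (somewhere between 830M and 1300M), so both endpoints are shown:

```python
# Rough transistor-density comparison from the die sizes and
# (rumoured) transistor counts quoted in this thread.

def density(transistors_m, area_mm2):
    """Millions of transistors per mm^2."""
    return transistors_m / area_mm2

gt200 = density(1000, 576)        # GT200 on 65nm
rv770_low = density(830, 256)     # RV770 on 55nm, low-end rumour
rv770_high = density(1300, 256)   # RV770 on 55nm, high-end rumour

print(f"GT200: {gt200:.2f} M transistors/mm^2")
print(f"RV770: {rv770_low:.2f} to {rv770_high:.2f} M transistors/mm^2")
```

Even at the low rumour, RV770 packs nearly twice the transistors per mm2, which is roughly what you'd expect from the smaller process plus a denser design.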
It looks to me like nVidia might have a killer card, although I'm not sure about the heat output. I hate stupid heatspreaders on GFX cards. :slapass:
I do wonder how good their margins will be if yields aren't great. I mean, that is a huge, huge die along with the 16 GDDR3 chips and the very densely packed PCB (compare it to the G80 GTX and you'll see what I mean).
Actually, what amazes me more is how tight-lipped ATI has been about RV770 (with no die shots of any kind yet) compared to GT200, which has had everything but the kitchen sink leaked out.
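The yield worry above can be illustrated with the classic Poisson defect-yield model, where yield falls off exponentially with die area. The defect density used here is an assumed illustrative value, not a real TSMC figure:

```python
# Simple Poisson yield model: yield = exp(-D0 * A), where D0 is the
# defect density and A the die area. D0 below is purely illustrative.
import math

def poisson_yield(area_mm2, defects_per_cm2):
    area_cm2 = area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * area_cm2)

D0 = 0.4  # assumed defects per cm^2, for illustration only
print(f"GT200 (576mm^2): {poisson_yield(576, D0):.0%} good dies")
print(f"RV770 (256mm^2): {poisson_yield(256, D0):.0%} good dies")
```

With the same defect density, the 576mm2 die yields only a fraction of what the 256mm2 die does, which is exactly why margins on a huge chip are so sensitive to the process being mature.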
Yes, one by one :p:
Hmm, I forgot to edit that post like I did with my post in the other thread. I've seen some rumours of close to 1 billion but also close to 1300 million :shrug:
Sorry about that.
All the info on the net is based on rumours. Maybe 1 billion is not that accurate, but my post gives you an idea of how small RV770 is and how big GT200 is. If you ask me, a big difference.
It's a purely business decision. nVidia designed the G200 for 65nm, they would have to re-design & test it for 55nm before even attempting production. G200 on 65nm was in production a while ago now & there is a 55nm design in the works but it's coming later, once all the potential issues are addressed & fixed/bypassed.
It is not incompetence but a safer, cheaper decision that could be the difference between making money or losing money.
Have any other graphics cards had memory chips on the back of the card? How are they going to cool those? Some kind of sandwich cooler?
NVIDIA never starts its high-end on a new process. ATi has been a process ahead for a while because of a gamble it took with R520 that cost it dearly in both market share and stock value, and that is arguably the reason ATi ended up being bought out by AMD.
So when you think about it, having a high-end part be your first test run of a new process size is the real incompetence. Personally, I'd rather have a larger chip than one with half the yields. Call me crazy, but I'd like to actually be able to buy this thing rather than have it be nowhere to be found and price-gouged up into the $800+ range (remember the 7800 GTX 512MB?). :up:
Chip design takes into account the fabrication process. It is not as simple as saying, hey let's put in an order for 55nm chips, and that's that. A chip is designed specifically for the process used to make it.
Nvidia no doubt would love to make their highest end stuff @55nm and below, and you can bet they would have if it was feasible. ATI does look to have a leg up on Nvidia when it comes to their working relationship with TSMC, but that has not hurt Nvidia at all because they have a better/faster design arch. Now if ATI had the performance lead AND was also doing everything on a smaller process, Nvidia would be in trouble.
Also, Nvidia has been able to field reasonably power efficient parts even on an "inferior" process. The smallest nm process is not everything.
I agree with you: gambling isn't my cup of tea and it shouldn't be NVIDIA's either. For the competitor with the bigger budget and market share, it's more logical to go the safe route, which may or may not turn out better. The competitor with a much smaller budget and market share has to take more risks to compete on a good level against the bigger opponent. If ATI doesn't take any risks they'll always be one step behind NVIDIA, and that's a certain way for the company to keep shrinking and find it harder and harder to compete.
Remember the R600 delays? That's one example of a risk that didn't pay off. The R600 architecture underperformed a bit compared to the 8800 series, and that was the beginning of the "performance card" strategy: ATI figured it wouldn't be able to compete at the high end with this architecture without at least one or a few NVIDIA cards beating them. I don't think the 4xxx series will be any different either, but after that series I'm expecting more drastic changes, and perhaps we'll see another attempt at the performance crown. Still, I'm a little unsure whether ATI will even try that anymore or simply continue the current trend. Props to AMD/ATI for competing this well against much bigger opponents, though.
Do you know how much time & money it takes to redesign a GPU for a new process? It's a lot cheaper to go with tried & tested but take a cut in profits than risk a huge loss.
As DilTech rightly said, ATI gambled on a new architecture on a new process and it cost them. If nVidia made a similar mistake they could become another cog in Intel/Samsung/[insert semiconductor company here] machine, like ATI became a cog in AMD's machine.
They do have 55nm on the way, but timing is everything. RV770 will be better than G92 and nVidia felt the need to be on top; the 65nm G200 will ensure that, & then a 55nm/45nm shrink might keep them ahead of whatever AMD comes up with next.
I was personally talking about whether NVIDIA will introduce them at 65nm and then release identical chips on the 55nm process, maintaining both versions similarly under the same model numbers etc. Marketing them under a different model number might give some slight marketing advantages (like the 9800 series compared to the 8800), although enthusiasts who know NVIDIA's marketing strategy wouldn't appreciate it. Focusing on one process would be a bit simpler and cheaper, though of course the die-shrink process itself usually isn't that expensive.
TSMC 55nm is a half-node/optical shrink so minimal retrofitting should be required when coming from a native 65nm design.
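Since 55nm is an optical shrink of 65nm, the expected area saving is easy to estimate: linear dimensions scale by 55/65 and die area by the square of that. A quick sketch, using the 576mm2 GT200 figure from this thread as the example:

```python
# Ideal-case area scaling for the TSMC 65nm -> 55nm optical shrink.
# Real shrinks rarely hit the ideal number (I/O pads and analog
# blocks don't scale), so treat this as an upper bound on savings.

shrink = 55 / 65                  # linear scaling factor
area_factor = shrink ** 2         # area scales with the square
gt200_65 = 576                    # mm^2, GT200 on 65nm per the thread
gt200_55 = gt200_65 * area_factor # ideal-case 55nm die area

print(f"Area factor: {area_factor:.1%}")
print(f"Ideal 55nm GT200: {gt200_55:.0f} mm^2")
```

So even a perfect shrink only gets the die down to roughly 70% of its original size, still a very large chip.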
:shrug:
Funny thing, I think Tom's Hardware has one ^^
http://media.bestofmicro.com/K/2/104...aylink_005.jpg
http://www.tomshardware.co.uk/hdmi-d...ews-28242.html
I wonder what kind of heat that card is gonna put out... maybe I could save money on my home heating?
It's so beautiful. Go Nvidia!