
Thread: Nvidia has serious yield problems with the GT200.

  1. #76
    Xtreme X.I.P.
    Join Date
    Aug 2004
    Location
    Chile
    Posts
    4,151
    Quote Originally Posted by Calmatory View Post
    Face it, GT200 is a flop when compared to RV770.

Why compare their yields? Even if AMD had an 80% defect rate and Nvidia had "only" 60%, that wouldn't mean that Nvidia's yields are anywhere near good. They just plain suck.

AMD did a great job with the RV770 (especially compared to the GT200), and you will thank them for lower prices from Nvidia; now it is Nvidia's turn to do the same.
Who said AMD did badly and NVIDIA did well? Please stop your inner fanboy and talk like an adult.

I only said that people always say Intel and NVIDIA have crappy yields, and we almost never hear that from the same people about AMD (and let me tell you, with Phenom they did).

    Quote Originally Posted by cegras View Post
    Actually, a former ATI employee told me some time ago that early yields on the R600 were somewhere around 20, 30%. Yes, those are actual numbers .. LE GASP!

    Ouch?

    Also, yield does not necessarily translate into volume. It just means they're making TSMC work harder.
Don't really buy it, actually; those yields are way too low.

  2. #77
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
Yield is the percentage of dice that are good out of all the dice on a wafer. Nothing more, nothing less. Poor yields mean that you get few good dice out of the total dice on each wafer. Justify it however else you want, but that's exactly what it means. You can look at the RV770 wafer I showed you to get an idea of what a GT200 wafer would look like...
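As a quick illustration of that ratio (numbers made up purely for the example):

```python
# Yield is just good dice divided by the gross (total) dice on a wafer.
# Example numbers only, not measured data.
good_dice = 30
total_dice = 93

print(f"yield = {100.0 * good_dice / total_dice:.1f}%")  # -> 32.3%
```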

  3. #78
    Xtreme Addict
    Join Date
    May 2008
    Posts
    1,192
Unless it is a particle issue, it can also translate into an end product that overclocks poorly for us.
    Quote Originally Posted by alacheesu View Post
    If you were consistently able to put two pieces of lego together when you were a kid, you should have no trouble replacing the pump top.

  4. #79
    Xtreme X.I.P.
    Join Date
    Aug 2004
    Location
    Chile
    Posts
    4,151
    Quote Originally Posted by zerazax View Post
Yield is the percentage of dice that are good out of all the dice on a wafer. Nothing more, nothing less. Poor yields mean that you get few good dice out of the total dice on each wafer. Justify it however else you want, but that's exactly what it means. You can look at the RV770 wafer I showed you to get an idea of what a GT200 wafer would look like...
I don't need to imagine, as I have already seen GT200 wafers. It only means each GT200 is more expensive to make and that NVIDIA gets fewer cores out. Alone it means nothing; it all depends on the income you can get and how many you can sell.

That is why I said to think of it as a GPGPU processor and not as a GPU only. For every Tesla NVIDIA sells, it makes a lot more money out of a single GT200 and also eats into AMD's or Intel's CPU market share.

For NVIDIA, GPGPU is a win-win situation (more money and more sales), but for AMD and Intel it is not: each GPU sold for GPGPU means fewer CPU sales.

  5. #80
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
Well, other than the current high price, what makes the GT2xx a worse product in your opinions?

I mean, the way I see it: it "seems" more power efficient, and a better product package is built around that chip than the complete product package of the 4870.
The 4850 is a killer product, on the other hand, but the 4870 feels like a bit of a hack job, though priced reasonably.
IMO the only negative for the GT2xx is the current price.
    Quote Originally Posted by LexDiamonds View Post
    Anti-Virus software is for n00bs.

  6. #81
    Registered User
    Join Date
    Jan 2004
    Posts
    36
The GT200 is an excellent chip, clearly the most powerful single-chip solution available. Very similar to the G80 in ambition. The problem is that the G80 had no competition; it wiped the floor with everything available at the time. But now that balls-to-the-wall, brute-strength power is running into not only the new 4800 series, but also Nvidia's own 9800 GTX+/2. It's difficult to make the jump to the $650 card when close performance can be had for less money.

The practice of pushing performance to the limit with large dies, lower yields and higher prices might be over. It is very expensive for Nvidia and ATI to play this game. Smaller and simpler chips that compete in the midrange and can be doubled up for the smaller high-end market are the most economical approach (the G92 comes to mind, which was a good solution for them despite the critics). Nvidia has on its hands a product similar to the R600; it's no surprise both sported a 512-bit bus that really hasn't shown itself to matter much aside from the added cost and complexity.

Nvidia is probably going to release a GT200-based card that sports a 256-bit bus and a smaller die with less of everything, and it will compete. It is, however, an awkward time for them because of the pressure put on them by Intel and the new Nehalem bus. With ATI's CrossFire taking center stage, given its value and performance gains, Intel will just say it doesn't need SLI as much as it did. Nvidia will be at a huge disadvantage in the negotiations, hurting them in the long run. That is a far bigger deal for them than yields.
    Last edited by Mindfield; 06-26-2008 at 03:50 PM.

  7. #82
    Xtreme X.I.P.
    Join Date
    Aug 2004
    Location
    Chile
    Posts
    4,151
    Quote Originally Posted by XS Janus View Post
Well, other than the current high price, what makes the GT2xx a worse product in your opinions?

I mean, the way I see it: it "seems" more power efficient, and a better product package is built around that chip than the complete product package of the 4870.
The 4850 is a killer product, on the other hand, but the 4870 feels like a bit of a hack job, though priced reasonably.
IMO the only negative for the GT2xx is the current price.
If you ask me, for gaming that is a yes (I'm still not sure how much PhysX can actually help later on).

If I were buying a card for 1920x1200 I'd get the HD 4870 (even though its TDP seems higher than what AMD says); if I wanted more than 1920x1200, I'd just use CrossFire. Now the really interesting part is the HD 4850 in CrossFire: you could have it for USD 300 and play everything at 1920x1200.

  8. #83
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    You don't have to believe me, and of course it's within your right not to, but I believe what he tells me. I have a very good reason to : )
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

  9. #84
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by natty View Post
    What makes you think the successor will be out in Q3? Or is that just a way to put down the company?



I think he was being optimistic, because fabbing these new chips would undoubtedly run into scheduling conflicts.

Do you really see September as being pessimistic (a put-down)? I think you're grossly neglecting nvidia's bad business decisions. It will take a while before they move their entire product line to 55nm.



    -Xoulz

  10. #85
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,730
    Quote Originally Posted by Wiggy McShades View Post
So just because IBM has low yields on Cell, that means TSMC has low yields on the GT200? They are two totally different chips being produced by two totally different fabs. I'm sure there are bad yield rates, but you're just throwing guesses around. You have no idea what the yield rates are.
Au contraire, mon ami. Don't be so quick to assume what I know...

GT200 is 24mm x 24mm, a 576 mm^2 chip.

You can put 93 dies on a 300mm wafer.

How do you calculate yield? You need to know the defect density and distribution.

There are several distribution models: the Poisson model, the Murphy model, the Exponential model, and the Seeds model.

Poisson is the most pessimistic, with defects spread randomly. Murphy assumes triangular or rectangular defect densities and sits in the middle. Exponential assumes clustered defects and is the most optimistic. Seeds is best suited to small chips, which is not the case here.

With that in mind, how about defect densities? Intel claims world-class yields at 0.22-0.25 dd/cm^2.
AMD claimed "lower than 0.5", so it must be in the 0.4-0.5 dd/cm^2 range.

Is TSMC better than Intel or AMD? I'd say hell no, but that's my opinion.

So let's see the results:

0.25 dd/cm^2:
Exponential: 38 good dies, 41% yield
Murphy: 26 good dies, 28.1% yield
Poisson: 22 good dies, 23.7% yield
Average ((best+worst)/2): 30 good dies, 32.3% yield

0.5 dd/cm^2:
Exponential: 23 good dies, 25.8% yield
Murphy: 9 good dies, 10.7% yield
Poisson: 5 good dies, 5.6% yield
Average: 14 good dies, 15% yield


I would say that TSMC is as good as AMD at best, i.e. 0.5 dd. In that case they get around 10-14 fully working dies per wafer, a 12-15% yield.

Does that tell us how many working chips NVIDIA salvages? No. The chip has a lot of redundancy, since it is built from hundreds of very simple parallel cores.

What it tells us is that NVIDIA gets 10-14 GTX 280 (or whatever it's called) premium chips per wafer. The rest are lesser models with fewer shader/vertex/etc. units.

Out of those 10-14 GTX 280s, some might fail to run at the required frequency or within the envisioned TDP. So I'd venture to say that they get fewer than 10 full-fledged chips per wafer.
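For anyone who wants to reproduce these figures without the spreadsheet, here is a rough Python sketch of the four models named above. The gross-die formula is a common textbook approximation (it lands within a die or two of the 93 quoted), and the helper names are mine, not from the icknowledge calculator, so the exact formulas in that spreadsheet may differ slightly:

```python
import math

def gross_dice(die_w_mm, die_h_mm, wafer_d_mm=300):
    """Approximate candidate (gross) dice per wafer:
    wafer area / die area, minus an edge-loss term."""
    die_area = die_w_mm * die_h_mm
    radius = wafer_d_mm / 2.0
    return int(math.pi * radius ** 2 / die_area
               - math.pi * wafer_d_mm / math.sqrt(2 * die_area))

def yields(die_w_mm, die_h_mm, dd_per_cm2):
    """Die yield under the four defect-distribution models mentioned above.
    ad = die area in cm^2 * defect density in defects/cm^2."""
    ad = (die_w_mm * die_h_mm / 100.0) * dd_per_cm2
    return {
        "Poisson":     math.exp(-ad),                    # random defects, pessimistic
        "Murphy":      ((1 - math.exp(-ad)) / ad) ** 2,  # middle of the road
        "Exponential": 1.0 / (1.0 + ad),                 # clustered defects, optimistic
        "Seeds":       math.exp(-math.sqrt(ad)),         # best suited to small dice
    }

if __name__ == "__main__":
    gross = gross_dice(24, 24)  # GT200: 24mm x 24mm on a 300mm wafer
    for dd in (0.25, 0.5):
        print(f"\nGT200, {gross} gross dice, {dd} defects/cm^2")
        for model, y in yields(24, 24, dd).items():
            print(f"  {model:<11} {100 * y:5.1f}% yield  -> ~{int(gross * y)} good dice")
```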
    Quote Originally Posted by Heinz Guderian View Post
    There are no desperate situations, there are only desperate people.

  11. #86
    Engineering The Xtreme
    Join Date
    Feb 2007
    Location
    MA, USA
    Posts
    7,217
Bravo, that was very comprehensive; thanks for sharing. This thread in general is a very good read, as you all are very knowledgeable on the topic.

15% yields is what I heard a while back, so I guess it wasn't too far from the truth, eh?

  12. #87
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    Any way you can calculate RV770? Bravo btw, very nice job. I took some wafer classes back in the day but never on defect distribution

  13. #88
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,730
    Quote Originally Posted by zerazax View Post
    Any way you can calculate RV770? Bravo btw, very nice job. I took some wafer classes back in the day but never on defect distribution
    http://www.icknowledge.com/misc_tech...calculator.xls

It is an Excel model; insert chip dimensions and wafer size, and calculate for 0.25 and 0.5 dd.

    It's easy.

Gross die: 237

0.25 dd/cm^2:
Exponential: 144 good dies, 61% yield
Murphy: 129 good dies, 54.6% yield
Poisson: 124 good dies, 52.7% yield
Average ((best+worst)/2): 134 good dies, 56.5% yield

0.5 dd/cm^2:
Exponential: 103 good dies, 44% yield
Murphy: 75 good dies, 31.8% yield
Poisson: 65 good dies, 27.8% yield
Average: 84 good dies, 35.44% yield

Again, I consider 0.5 dd more representative. Based on that, ATI gets 7-8x the number of high-end chips per wafer vs. NVIDIA.
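A quick usage example of the hypothetical gross_dice()/yields() helpers sketched after post #85 (they are not part of the icknowledge spreadsheet), with the RV770 assumed to be roughly 16mm x 16mm as in the posts here:

```python
# Reusing the gross_dice() and yields() sketch from post #85 above.
gross = gross_dice(16, 16)  # ~234 with this approximation, vs. the 237 quoted
for dd in (0.25, 0.5):
    print(f"\nRV770, {gross} gross dice, {dd} defects/cm^2")
    for model, y in yields(16, 16, dd).items():
        print(f"  {model:<11} {100 * y:5.1f}% yield  -> ~{int(gross * y)} good dice")
```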
    Last edited by savantu; 06-26-2008 at 11:53 PM.
    Quote Originally Posted by Heinz Guderian View Post
    There are no desperate situations, there are only desperate people.

  14. #89
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
Nice, so for an RV770 at 16mm x 16mm...

    237 Gross Die per 300mm wafer

    0.25dd/cm^2
    Exponential - 144 die, 61%
    Murphy - 129 die, 54.6%
    Poisson - 124 die, 52.7%
    Seeds - 106 die, 44.9%
    Average: 125 die, 53%

    0.5 dd/cm^2
    Exponential - 103 die, 43.9%
    Murphy - 75 die, 31.8%
    Poisson - 65 die, 27.8%
    Seeds - 76 die, 32.3%
    Average: 84 die, 35.9%

So even in the worst case (ATI at 0.5 dd/cm^2, Nvidia at 0.25 dd/cm^2), on average ATI will get 84 dice at 35.9% yield vs. Nvidia's 30 dice at 32.3%.

From that you can tell that ATI easily, and I mean easily, has a > 2:1 ratio of RV770s per GT200. Not to mention that the GT200 PCB will cost more to make, with its 512-bit bus, memory on the back, voltage regulation, etc.

    No wonder Nvidia is scrambling for 55nm with the GT200... probably a good time to adopt GDDR5 as well

  15. #90
    Xtreme Mentor
    Join Date
    Aug 2006
    Location
    HD0
    Posts
    2,646
    Quote Originally Posted by Aberration View Post
Unless it is a particle issue, it can also translate into an end product that overclocks poorly for us.
You mean like the GT200, which clocks around 10% lower than its predecessor?

  16. #91
    Banned
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    707
Most everyone is making the assumption that Nvidia pays for poor yields, but do they? We don't know the exact deal that NV has with TSMC. Does TSMC guarantee a certain yield? Or do they charge for every die regardless of whether it is good or not? Or something in between?

    Without knowing the exact details, we can't make proper conclusions on how much poor yields are costing Nvidia.

  17. #92
    Xtreme Member
    Join Date
    Jun 2007
    Location
    Berlin
    Posts
    190
In my opinion, nvidia is far away from trouble.

Huge, hot chip? Well, as if we really care.
Most love it for what it is - the fastest single-core GPU, one that lets you enjoy both high resolutions and high IQ in games at the same time (as well as the benefits of a single-card solution, such as higher minimum fps in some cases and the absence of microstuttering in others).

The demand seems to be really high for the GTX 280 now, and it is likely to stay this way even now that the new Radeons have shown up. Granted, they are not milking customers as much as they could, but switching to 55nm should change the situation quite a bit in the not-too-distant future.

    Thx nvidia for the gt200.


P.S. And don't forget: your e-penis is definitely pleased with a 1.4-billion-transistor beast too.
    CPU: Q9450 @3.6GHz (lapped) Cooling: Scythe Zipang (lapped)
    RAM: Mushkin (996580) 2x2gb XP2-6400 @5-5-5-15, DDR2-1080 Mobo: Asus Rampage Formula (Bios 0410), tRD=7
    GPU: EVGA GTX 280 @712/1512/2700 PSU: Enermax Modu82+ 625 Optical Drive: LG Electronics GGC-H20L
    HDD:
    1x 160GB Intel X-25M G2, 1x VelociRaptor, 2x Samsung SpinPoint F1 640GB in 2x CM STB-3T4-E3-GP 4-in-3 cages
    Sound: Focusrite Saffire PRO 24 DSP / Grace Design M903 Speakers: M-Audio BX8a Deluxe, Shure SE535, Ultrasone Pro 900 w/ custom cable & dual entry mod, HD 800
    Case: Cooler Master Stacker 832 w/ 7x S-Flex SFF21F fans on 2x Zalman ZM-MFC1 Plus controllers
    Monitor: NEC MultiSync 24WMGX3 OS: Windows 7 Ultimate x64 SP1

  18. #93
    Banned
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    707
    Quote Originally Posted by motopen1s View Post
In my opinion, nvidia is far away from trouble.

    Huge, hot chip? Well, as if we really care.
Actually, people do care, and it also increases cost. The hotter it runs, the more expensive the cooling device has to be, and the more expensive the supporting components need to be (power supply, etc.).

The demand seems to be really high for the GTX 280 now, and it is likely to stay this way even now that the new Radeons have shown up. Granted, they are not milking customers as much as they could, but switching to 55nm should change the situation quite a bit in the not-too-distant future.
I don't know how you know the demand is high; we won't know exactly what the demand is until at least another quarter from now, when sales numbers come in. And you say NV is not milking customers as much as they could? What? So $650 is not high enough?

The Tech Report has a poll up asking people what the best new graphics core is. The results are very ugly for Nvidia. NV has reason to be very worried.

    http://techreport.com/discussions.x/15012


P.S. And don't forget: your e-penis is definitely pleased with a 1.4-billion-transistor beast too.
    RIIIIIIIIIIIGHT

  19. #94
    Registered User
    Join Date
    Sep 2006
    Posts
    68
    xbit labs

    http://www.xbitlabs.com/news/video/d...ics_Cards.html

The reasons why partners of Nvidia decided to join the so-called ATI camp are not widely known, but certain sources said that many add-in-card partners as well as distributors are displeased with Nvidia's frequent new chip launches, which decrease the pricing of existing graphics cards and minimize profits. Another reason for the disappointment of certain partners may be Nvidia's Unilateral Minimum Advertised Price (UMAP) policy, which restricts resellers from selling Nvidia-based products below a certain price level.

  20. #95
    Xtreme Member
    Join Date
    Jun 2007
    Location
    Berlin
    Posts
    190
    Quote Originally Posted by eleeter View Post
    Actually people do care, and it also increases cost...
Well, all I mean is that if you get a Mustang, you know you will spend quite a bit more money on fuel. I definitely agree, though - lower power bills are very welcome. As for cooling - it seems like a well-ventilated case is enough for this card, even overclocked. I will have to see for myself how quiet/cool/etc. it is, though.

    Quote Originally Posted by eleeter View Post
    And you say NV is not milking customers as much as they could? What? So $650 is not high enough?
It is on the high side. But, talking of their profits in general (milking people, not just one person), they would definitely sell more cards if they were cheaper. As you say, however, we will see the whole picture only when the numbers from nvidia show up.

    Quote Originally Posted by eleeter View Post
    The Tech Reports has a poll up, asking people what is the best new graphics core. The results are very ugly for Nvidia. NV has reason to be very worried.
It is the best when it comes to price/performance, IMHO. But it is not the fastest. Having the fastest is important, and that is what will definitely save nvidia's ass from much trouble, in my opinion. They should still show positive numbers for this quarter, and then... I can only see success ahead, given the 55nm die shrink that will show up.

Most fellows with 30" monitors will go for a couple of GTX 280s, I think; those who want the single fastest card and have enough spare organs to sell should also go for it.

    Quote Originally Posted by eleeter View Post
    RIIIIIIIIIIIGHT
    CPU: Q9450 @3.6GHz (lapped) Cooling: Scythe Zipang (lapped)
    RAM: Mushkin (996580) 2x2gb XP2-6400 @5-5-5-15, DDR2-1080 Mobo: Asus Rampage Formula (Bios 0410), tRD=7
    GPU: EVGA GTX 280 @712/1512/2700 PSU: Enermax Modu82+ 625 Optical Drive: LG Electronics GGC-H20L
    HDD:
    1x 160GB Intel X-25M G2, 1x VelociRaptor, 2x Samsung SpinPoint F1 640GB in 2x CM STB-3T4-E3-GP 4-in-3 cages
    Sound: Focusrite Saffire PRO 24 DSP / Grace Design M903 Speakers: M-Audio BX8a Deluxe, Shure SE535, Ultrasone Pro 900 w/ custom cable & dual entry mod, HD 800
    Case: Cooler Master Stacker 832 w/ 7x S-Flex SFF21F fans on 2x Zalman ZM-MFC1 Plus controllers
    Monitor: NEC MultiSync 24WMGX3 OS: Windows 7 Ultimate x64 SP1

  21. #96
    Banned
    Join Date
    Feb 2008
    Posts
    696
    Quote Originally Posted by savantu View Post
    http://www.icknowledge.com/misc_tech...calculator.xls

It is an Excel model; insert chip dimensions and wafer size, and calculate for 0.25 and 0.5 dd.

It's easy.

Gross die: 237

0.25 dd/cm^2:
Exponential: 144 good dies, 61% yield
Murphy: 129 good dies, 54.6% yield
Poisson: 124 good dies, 52.7% yield
Average ((best+worst)/2): 134 good dies, 56.5% yield

0.5 dd/cm^2:
Exponential: 103 good dies, 44% yield
Murphy: 75 good dies, 31.8% yield
Poisson: 65 good dies, 27.8% yield
Average: 84 good dies, 35.44% yield

Again, I consider 0.5 dd more representative. Based on that, ATI gets 7-8x the number of high-end chips per wafer vs. NVIDIA.
No offense, but I find it funny that someone on a forum with an Excel spreadsheet for approximating yields thinks they know about yields, as though NVIDIA didn't know exactly what they would be dealing with before making their chips. Sorry, but when millions of dollars are at stake, you know these things ahead of time, and you calculate around that. Sure, it can fluctuate a bit below what you might expect, but not much.

People making a thread like this seem to think that these guys make wafers and just pray they'll get good yields. When people say good or bad yields, it isn't a major swing; it's a matter of a percent or two. So it's not the difference between 80 and 20 dies like some seem to believe.

  22. #97
    Xtreme Enthusiast
    Join Date
    May 2007
    Posts
    510
    Quote Originally Posted by [XC] Lead Head View Post
Probably much cheaper than that. I remember reading that most CPUs only take a few dollars each to make, including the actual package it sits on and the heat spreader.
Did you fail to remember that running your own fab versus paying someone else to do it is probably quite a big price difference?
    Gaming Rig
    Intel E6300
    Intel Mobo
    2 Gig OCZ 800 (800 5,5,5,15)
    Saphire Ati 4870
    22inch LG Flatron W2230S

  23. #98
    Banned
    Join Date
    Feb 2008
    Posts
    696
    Quote Originally Posted by cozwin View Post
Did you fail to remember that running your own fab versus paying someone else to do it is probably quite a big price difference?
Not entirely true. It costs money to keep fabs running, and there's a lot of overhead in making sure your fabs are all running at capacity. You need to keep them full and producing 100% of the right products, at all the right times, on the right processes, in order to keep them cost effective. Otherwise, they're just a liability.

Sure, you save some, but there's a huge risk involved.

  24. #99
    Xtreme Member
    Join Date
    Apr 2006
    Posts
    290
    Quote Originally Posted by Sr7 View Post
No offense, but I find it funny that someone on a forum with an Excel spreadsheet for approximating yields thinks they know about yields, as though NVIDIA didn't know exactly what they would be dealing with before making their chips. Sorry, but when millions of dollars are at stake, you know these things ahead of time, and you calculate around that. Sure, it can fluctuate a bit below what you might expect, but not much.

People making a thread like this seem to think that these guys make wafers and just pray they'll get good yields. When people say good or bad yields, it isn't a major swing; it's a matter of a percent or two. So it's not the difference between 80 and 20 dies like some seem to believe.
    You really think the difference between yields on a 260mm^2 chip and a 576mm^2 chip is 1-2%?

    There is a reason why Intel, with the best fabs in the world, built Kentsfield & Yorkfield out of dual-die MCM. Yields on 2x 143mm^2 chips are significantly better than a single 286mm^2 chip even at Intel's fab, and I am sure they get much better yields than a foundry like TSMC. TSMC doesn't have a hope of getting good yields on a 576mm^2 chip; that is Itanium size, and even Intel only manufactures chips that large for CPUs that sell for $1,000s and do not need good yields to be profitable.
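As a rough back-of-the-envelope check on that MCM point, here is a sketch using only the Poisson model from earlier in the thread, an assumed 0.5 defects/cm^2, and a generic edge-loss approximation (nothing here is Intel data): even before considering defect clustering, two small dice harvest roughly twice as many quad-cores per wafer as one big die.

```python
import math

# Poisson yield = exp(-area * defect_density); die sizes as quoted in the post.
D = 0.5          # defects per cm^2 (assumed)
WAFER_D = 300.0  # wafer diameter in mm

def good_dice(area_mm2):
    radius = WAFER_D / 2.0
    gross = (math.pi * radius ** 2 / area_mm2
             - math.pi * WAFER_D / math.sqrt(2 * area_mm2))
    return gross * math.exp(-(area_mm2 / 100.0) * D)

mcm_quads = good_dice(143) / 2   # two good 143mm^2 dice per MCM quad-core
mono_quads = good_dice(286)      # one monolithic 286mm^2 quad-core die

print(f"MCM quads per wafer (2 x 143mm^2):    ~{mcm_quads:.0f}")   # ~107
print(f"Monolithic quads per wafer (286mm^2): ~{mono_quads:.0f}")  # ~50
```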

nVidia is not dumb; the problem is they thought they could sell GT200 for more than they are now able to. If nVidia sells the GTX 280 for $649 and the GTX 260 for $449, like they originally planned, they can still make a good profit off the cards. But the problem is AMD showed up with RV770, which is much more competitive with GT200 than nVidia thought. GT200 clocks are lower than expected as well; this is very clear when you see that the GTX 280 misses the 1 TFLOP mark. If nVidia was able to get good yields at a higher shader clock, there is no doubt they would want their flagship card to hit 1 TFLOP.

    Now nVidia is faced with selling the GTX 260 for $300 or so and the GTX 280 for $500 or less if they want to have good sales. nVidia expected to be able to sell GT200 for much more money than AMD would sell RV770; thus they could make up for the horrible yields and much higher manufacturing cost. Now they are unable to do that.

  25. #100
    Banned
    Join Date
    Feb 2008
    Posts
    696
    Quote Originally Posted by Extelleron View Post
    You really think the difference between yields on a 260mm^2 chip and a 576mm^2 chip is 1-2%?

    There is a reason why Intel, with the best fabs in the world, built Kentsfield & Yorkfield out of dual-die MCM. Yields on 2x 143mm^2 chips are significantly better than a single 286mm^2 chip even at Intel's fab, and I am sure they get much better yields than a foundry like TSMC. TSMC doesn't have a hope of getting good yields on a 576mm^2 chip; that is Itanium size, and even Intel only manufactures chips that large for CPUs that sell for $1,000s and do not need good yields to be profitable.

nVidia is not dumb; the problem is they thought they could sell GT200 for more than they are now able to. If nVidia sells the GTX 280 for $649 and the GTX 260 for $449, like they originally planned, they can still make a good profit off the cards. But the problem is AMD showed up with RV770, which is much more competitive with GT200 than nVidia thought. GT200 clocks are lower than expected as well; this is very clear when you see that the GTX 280 misses the 1 TFLOP mark. If nVidia was able to get good yields at a higher shader clock, there is no doubt they would want their flagship card to hit 1 TFLOP.

    Now nVidia is faced with selling the GTX 260 for $300 or so and the GTX 280 for $500 or less if they want to have good sales. nVidia expected to be able to sell GT200 for much more money than AMD would sell RV770; thus they could make up for the horrible yields and much higher manufacturing cost. Now they are unable to do that.
No, no, no, you misunderstood. I'm saying that the difference between good and bad yields is a very narrow margin. I'm saying it's not like NVIDIA was expecting 80 and got 20.
