
Thread: 55nm GT200 (GT200-400) on the way?


  1. #1
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    407
    You AMD fanboys are the ones who don't listen to reason. AMD right or wrong! I checked that link and there was no mention of how much Nvidia or AMD are paying for their TSMC wafers. It was also a "known fact" that the R600 was going to blow away the competition. Until it was released. Then all the AMD fanboys were proven wrong. I have yet to see any facts here. If they are so known, then why is it so difficult to cite your sources? And "well grounded predictions"? That is totally idiotic. There's nothing "well grounded" about your wild guesses. Nothing. I think you are morons for making the assumptions that you do. You may not lack "common sense" but you certainly lack critical reasoning skills.

  2. #2
    Xtreme Addict
    Join Date
    Aug 2007
    Location
    Istantinople
    Posts
    1,574
    Quote Originally Posted by gojirasan View Post
    You AMD fanboys are the ones who don't listen to reason. AMD right or wrong! I checked that link and there was no mention of how much Nvidia or AMD are paying for their TSMC wafers. It was also a "known fact" that the R600 was going to blow away the competition. Until it was released. Then all the AMD fanboys were proven wrong. I have yet to see any facts here. If they are so known, then why is it so difficult to cite your sources? And "well grounded predictions"? That is totally idiotic. There's nothing "well grounded" about your wild guesses. Nothing. I think you are morons for making the assumptions that you do. You may not lack "common sense" but you certainly lack critical reasoning skills.
    I completely fail to see your point that "the manufacturing costs of the 4870 and GTX 280 should be very close to each other."

    You don't need exact numbers to prove this wrong. Here is a very nice way of doing so:

    1. The cost of a graphics card is more or less dependent on its GPU cost. GT200 is 576mm². RV770 is 275mm². Even without factoring in yields (or manufacturing defects), which will obviously be lower on the GT200, a 576mm² die is more than two times as expensive as a 275mm² chip.

    2. The other major cost of a graphics card (that would differ between these two cards) is memory and the memory bus. A 512-bit-bus PCB is more expensive than a 256-bit one, and a GDDR5 chip is more expensive than a GDDR3 one. However, the 4870 has only 512MB of GDDR5, whereas the GTX 280 has 1024MB of GDDR3.

    So there you have it! The only thing in an RV770 that costs more (per unit) than in a GT200 is its GDDR5 RAM. For your pointless assumptions to be correct, a GDDR5 chip would have to be so much more expensive than a GDDR3 chip that it offsets two times the RAM size, two times the memory bus, and two times the GPU cost. Is a GDDR5 chip really around 2,000% (that's two thousand percent) more expensive than a GDDR3 chip? I would think it should be around 20%. Are you serious?
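
    Just to put rough numbers on that, here is a minimal back-of-the-envelope sketch. The die areas, memory sizes, and bus widths are the ones quoted in this thread; the per-mm² and per-MB prices are made-up placeholder assumptions, not real BOM figures, so only the ratio at the end means anything.

    Code:
    # Back-of-the-envelope card cost comparison (illustrative only).
    # Die areas and memory configs are from this thread; the dollar
    # figures are placeholder assumptions, not real BOM data.

    def die_cost(area_mm2, cost_per_mm2=0.15):
        # Assume cost scales linearly with area; this ignores yield,
        # which would hurt the larger die even more.
        return area_mm2 * cost_per_mm2

    def memory_cost(size_mb, cost_per_mb):
        return size_mb * cost_per_mb

    # Placeholder per-MB prices: GDDR5 assumed ~20% dearer than GDDR3.
    gddr3_per_mb, gddr5_per_mb = 0.02, 0.024

    gtx280 = die_cost(576) + memory_cost(1024, gddr3_per_mb)  # 512-bit bus, 1 GB GDDR3
    hd4870 = die_cost(275) + memory_cost(512, gddr5_per_mb)   # 256-bit bus, 512 MB GDDR5

    print(f"GTX 280 rough cost: ${gtx280:.2f}")
    print(f"HD 4870 rough cost: ${hd4870:.2f}")
    print(f"ratio: {gtx280 / hd4870:.2f}x")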
    Has anyone really been far even as decided to use even go want to do look more like?
    INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"

  3. #3
    Xtreme Addict
    Join Date
    Nov 2007
    Location
    Illinois
    Posts
    2,095
    Quote Originally Posted by gojirasan View Post
    You AMD fanboys are the ones who don't listen to reason. AMD right or wrong! I checked that link and there was no mention of how much Nvidia or AMD are paying for their TSMC wafers. It was also a "known fact" that the R600 was going to blow away the competition. Until it was released. Then all the AMD fanboys were proven wrong. I have yet to see any facts here. If they are so known, then why is it so difficult to cite your sources? And "well grounded predictions"? That is totally idiotic. There's nothing "well grounded" about your wild guesses. Nothing. I think you are morons for making the assumptions that you do. You may not lack "common sense" but you certainly lack critical reasoning skills.
    I couldn't care less about the R600; I barely knew anything about the computer hardware world until I started seriously browsing, maybe around Sept 2007.

    Now you're sounding more and more like someone with a pocket degree. It's almost pathetic to see you rambling on like this, pointing fingers without even raising a comprehensible rebuttal.

    Quote Originally Posted by savantu
    Au contraire, mon ami. Don't jump so fast to conclusions about what I know...

    GT200 is 24mm × 24mm, for a 576mm² chip.

    You can put 93 dies on a 300mm wafer.

    How do you calculate yield? You need to know the defect density and distribution.

    There are several distribution models: the Poisson model, the Murphy model, the exponential model, and the Seeds model.

    Poisson is the most pessimistic, with defects spread randomly. Murphy has triangular or rectangular defect densities and sits in the middle.
    The exponential model assumes clustered defects and is the most optimistic one. Seeds is best suited to small chips, which is not the case here.

    With that in mind, how about defect densities? Intel claims world-class yields at 0.22-0.25 defects/cm².
    AMD claimed "lower than 0.5", so it must be in the 0.4-0.5 defects/cm² range.

    Is TSMC better than Intel or AMD? I'd say hell no, but that's my opinion.

    So let's see the results:

    At 0.25 defects/cm²:
    Exponential: 38 good dies, 41% yield
    Murphy: 26 good dies, 28.1% yield
    Poisson: 22 good dies, 23.7% yield
    Average ((best+worst)/2): 30 good dies, 32.3% yield

    At 0.5 defects/cm²:
    Exponential: 23 good dies, 25.8% yield
    Murphy: 9 good dies, 10.7% yield
    Poisson: 5 good dies, 5.6% yield
    Average: 14 good dies, 15% yield


    I would say that TSMC is as good as AMD at best, i.e. 0.5 defects/cm². In that case they get around 10-14 fully working dies per wafer, a 12-15% yield.

    Does that tell us how many working chips NVIDIA salvages? No. The chip has a lot of redundancy, since it is built from hundreds of very simple parallel cores.

    What it does tell us is that NVIDIA gets 10-14 GTX 280 (or whatever it's called) premium chips per wafer. The rest become lesser models with fewer shader/vertex/etc. units.

    Out of those 10-14 GTX 280s, some might fail to run at the required frequency or within the envisioned TDP. So I'd venture to say that they get fewer than 10 full-fledged chips per wafer.
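
    For what it's worth, here is a minimal sketch of the three yield formulas those figures appear to come from (the textbook Poisson, Murphy, and exponential/clustered-defect models), plugged with the die area, defect densities, and 93 gross dies per wafer quoted above. It reproduces savantu's numbers; the formulas themselves are standard ones, not something stated in the thread.

    Code:
    import math

    # Classic defect-limited yield models.
    # A = die area in cm^2, D = defect density in defects/cm^2.
    def poisson(A, D):        # random defects; most pessimistic of the three here
        return math.exp(-A * D)

    def murphy(A, D):         # triangular/rectangular defect distribution; middle ground
        return ((1 - math.exp(-A * D)) / (A * D)) ** 2

    def exponential(A, D):    # clustered defects; most optimistic here
        return 1 / (1 + A * D)

    die_area = 2.4 * 2.4      # 24 mm x 24 mm GT200 die, in cm^2
    gross_dies = 93           # die candidates per 300 mm wafer (from the quote)

    for D in (0.25, 0.5):
        print(f"defect density {D} /cm^2:")
        for name, model in (("exponential", exponential),
                            ("murphy", murphy),
                            ("poisson", poisson)):
            y = model(die_area, D)
            print(f"  {name:12s} yield {y:6.1%} -> ~{int(gross_dies * y)} good dies")
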
    E7200 @ 3.4 ; 7870 GHz 2 GB
    Intel's atom is a terrible chip.

  4. #4
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by gojirasan View Post
    You AMD fanboys are the ones who don't listen to reason. AMD right or wrong! I checked that link and there was no mention of how much Nvidia or AMD are paying for their TSMC wafers. It was also a "known fact" that the R600 was going to blow away the competition. Until it was released. Then all the AMD fanboys were proven wrong. I have yet to see any facts here. If they are so known, then why is it so difficult to cite your sources? And "well grounded predictions"? That is totally idiotic. There's nothing "well grounded" about your wild guesses. Nothing. I think you are morons for making the assumptions that you do. You may not lack "common sense" but you certainly lack critical reasoning skills.
    Still trolling...?

    We are enthusiasts; it doesn't matter whose chip it is, we are having a logical discussion. It seems you're blindfolded because your inherent love for nVidia doesn't allow you to acknowledge FACTS.

    My link DID give you facts. As a matter of fact, there is a photo of the GTX wafer. Secondly, you now know there are 94 cores per 300mm wafer, something you feigned ignorance of before....

    So, I'm going to play connect-the-dots with you.

    Here:
    GameSpot
    theINQUIRER
    PureOC

    Probably more than you'll ever want to know:
    SemaTech


    So... as you can see, it is widely accepted that a 300mm wafer costs about $5,000. If you refuse to accept this widely known fact, then you're just being stubborn and/or an utter "fanboi".

    But, just to prove your ignorance: even if we use a fictitiously low cost of $3,500 per wafer and an industry-mind-blowing 50% yield for a 1.4 billion transistor chip, you're still looking at about $75 per GT200 core. (In reality the yield is estimated at around 20%, or about $277 per core.)
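
    A quick sketch of that arithmetic, using the wafer prices and yields from this post and the ~94 die candidates per wafer mentioned above (rounding the good-die count down at 20% yield is my assumption to match the $277 figure):

    Code:
    import math

    def cost_per_good_die(wafer_cost, gross_dies, yield_rate):
        # Spread the whole wafer cost over the dies that actually work.
        good_dies = math.floor(gross_dies * yield_rate)
        return wafer_cost / good_dies

    # Deliberately low wafer price and a generous 50% yield:
    print(cost_per_good_die(3500, 94, 0.50))   # ~74.5  -> "about $75 per core"
    # Widely quoted $5,000 wafer at the ~20% yield estimated above:
    print(cost_per_good_die(5000, 94, 0.20))   # ~277.8 -> "about $277 per core"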

    How do you make a complete video card, with heat sink, fan, and materials, for the remaining $25?
    Quote Originally Posted by gojirasan
    2) both the HD4870 and the GTX280 cost under $100 to manufacture.

    So (again) remedial mathematics suggests you have no clue about the business end of microprocessors. We are indeed concerned for Nvidia... and also taking a stab at them while they're obviously down... simply because of their monopolistic stance. This will be my first ATI card in a long time, and I am quite sure that nearly every gamer and enthusiast is right there with me.

    By the time Nvidia actually goes 55nm with its GT200 series, ATI will probably be getting even better yields than it is now and will be able to drop the price of its HD 4000 series even more.

    Which we are all happy about... except die-hard nVidia fans!




    Last edited by Xoulz; 07-02-2008 at 04:32 AM.

  5. #5
    Banned
    Join Date
    Jul 2008
    Posts
    9
    Quote Originally Posted by Xoulz View Post
    Still trolling...?

    So, I'm going to play connect-the-dots with you.

    .
    *yawn* Overuse that phrase much? You sure like connect-the-dots...

  6. #6
    Xtreme Addict
    Join Date
    Aug 2007
    Location
    Toon
    Posts
    1,570
    Quote Originally Posted by gojirasan View Post
    You AMD fanboys are the ones who don't listen to reason. AMD right or wrong! I checked that link and there was no mention of how much Nvidia or AMD are paying for their TSMC wafers. It was also a "known fact" that the R600 was going to blow away the competition. Until it was released. Then all the AMD fanboys were proven wrong. I have yet to see any facts here. If they are so known, then why is it so difficult to cite your sources? And "well grounded predictions"? That is totally idiotic. There's nothing "well grounded" about your wild guesses. Nothing. I think you are morons for making the assumptions that you do. You may not lack "common sense" but you certainly lack critical reasoning skills.
    I see the 2900 as the beta for the 4870; the 2900 did not live up to expectations, perhaps because it was released a year early on out-of-date technology!

    Factors influencing die cost:
    Die size
    Bulk wafer cost
    Fabrication process
    Yield

    Die area, GT200:RV770 = 576:256 = 9:4 = 2.25
    Cost (1GB DDR3 ASUS cards), GTX 280:4850 = £339.56:£183.59 = 1.85

    This is a difficult comparison, since the 4850 1GB is the most expensive model and the only 1GB variant on Scan, while the GTX 280 used here is their cheapest model. However, moving to more expensive GTX 280s brings the cost ratio in line with the die area ratio.

    So yes, the cost of the competing TSMC GPUs correlates at roughly 1:1 with die area, after allowing for any reduction in wafer cost from using the 65nm rather than the 55nm process. My guess is that any such saving is offset by poor yield due to the large die.
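
    The ratio check itself is one line of arithmetic; a tiny sketch with the Scan prices and die sizes quoted above:

    Code:
    # Die area ratio vs. retail price ratio, figures from this post.
    area_ratio  = 576 / 256          # GT200 vs RV770 die area        = 2.25
    price_ratio = 339.56 / 183.59    # GTX 280 vs 4850 1GB, Scan GBP ~= 1.85
    print(f"area ratio {area_ratio:.2f}, price ratio {price_ratio:.2f}")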

    Not being on the latest technology node (65nm) is part of why the 2900 (80nm) did not live up to expectations.
    Intel i7 920 C0 @ 3.67GHz
    ASUS 6T Deluxe
    Powercolor 7970 @ 1050/1475
    12GB GSkill Ripjaws
    Antec 850W TruePower Quattro
    50" Full HD PDP
    Red Cosmos 1000
