Page 24 of 109
Results 576 to 600 of 2723

Thread: The GT300/Fermi Thread - Part 2!

  1. #576
    Xtreme Enthusiast
    Join Date
    Feb 2005
    Posts
    970
    haha yeah, well a lot of people participated in it a lot more than I did! like i said, 3 possibilities, I just went for the first one! heh

  2. #577
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Posts
    591
    That was the funniest thing I had come across in the news section.

    Lol. Lol. Lol. Lol for the next little while.

  3. #578
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by Piotrsama View Post
    Didn't Sony buy only the design of the RSX to reduce costs? Why would they buy Nvidia's GPUs then??
    yes, consoles are full of cheap parts. they are priced very aggressively too. it would cost way too much to have a g80-based console, and its really sony's fault that their console didnt have a good gpu, because they came to nvidia way too late to design a good chip for a next gen console. back during g70 days nvidia had the perf per mm2 crown, which made them attractive for console hardware. this is very important when you sell 30 million units.

  4. #579
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    What the heck was that all about? You know, that dude (for once) sounded pretty serious.

    Was he serious? Mentally ill? Just an elaborate prank? What was that?

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  5. #580
    Xtreme Member
    Join Date
    Mar 2008
    Location
    Canada
    Posts
    356
    This thread is seriously non-informative, and this is already part 2 of the thread. OMFG.

  6. #581
    Xtreme Addict
    Join Date
    Aug 2008
    Location
    Hollywierd, CA
    Posts
    1,284
    Quote Originally Posted by annihilat0r View Post

    almost as good as the GTX295 WTF Edition!

    I am an artist (EDM producer/DJ), pls check out mah stuff.

  7. #582
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    Alberta, Canada
    Posts
    1,264
    Despite his lack of factual proof, his posts are certainly convincing. The inclusion of a picture with a 295, however, was a sure way to bomb any credibility he may have had... I'm guessing he is no more than a talented / bored wordsmith. Funny nonetheless. However, I have a feeling he may actually have some things right... we shall see when the crap storm that is Fermi finally dissipates.
    Feedanator 7.0
    CASE:R5|PSU:850G2|CPU:i7 6850K|MB:x99 Ultra|RAM:8x4 2666|GPU:980TI|SSD:BPX256/Evo500|SOUND:2i4/HS8
    LCD:XB271HU|OS:Win10|INPUT:G900/K70 |HS/F:H115i

  8. #583
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by -Sweeper_ View Post
    The first slides confirmed U$499 for Hemlock.
    This one showed up at HD5970 launch, weeks later, and by the time Ati already knew there would be no Fermi or any competition in 2009.
    The slides all came from the same slide deck, they just weren't posted publicly, leaked, all at the same time.
    Also, there were at least two different Hemlock designs with a third being a clockspeed difference.

    Back to semi-ontopic... I'm slightly disappointed that so many fell for that guy. He mentioned a hyper-transport bridge in one of his posts...
    Last edited by LordEC911; 02-16-2010 at 11:50 PM.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  9. #584
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by trinibwoy View Post
    Their quarterly R&D expense has been hovering around 200m for the last 2 years. It's not a sign of anything.
    i thought it kept climbing over the years... thats what they showed just a few weeks ago
    and it makes sense, im sure rnd has more than tripled since 2000

    Quote Originally Posted by trinibwoy View Post
    I was referring to the bolded part above where SA claimed Nvidia should throw money at R&D as if that would magically solve all their manufacturing/yield problems.
    well they didnt say it would solve mfc problems, it would help to get a new cut down and/or reworked fermi out asap... i dont know about that, but it makes sense... there are a lot of things you can do to cut ttm that have nothing to do with mfc, but they cost money...

  10. #585
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    http://www.semiaccurate.com/2010/02/...and-unfixable/

    the good: A3 came back from the fab at the end of january
    the bad: yields suck, top bin is only 448cores and 600mhz
    the ugly: shader clocks are only 1200mhz

    fab wafer yields are still in single digit percentages.
    the problems that caused these low yields are likely unfixable without a complete re-layout. Lets look at these problems one at a time...

  11. #586
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    My goodness, that is actually way worse than I imagined it would be. I hope nVidia has some serious reserves, because they are going to need them if this article is anywhere near true.

    At $5,000 per wafer, 10 good dies per wafer, with good being a very relative term, that puts cost at around $500 per chip, over ten times ATI's cost. The BoM cost for a GTX480 is more than the retail price of an ATI HD5890, a card that will slap it silly in the benchmarks. At these prices, even the workstation and compute cards start to have their margins squeezed.
    $500 per chip?
    Last edited by Tim; 02-17-2010 at 09:28 AM.

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v
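The arithmetic in the quoted article is simple enough to sanity-check. A minimal sketch, assuming the article's own claimed $5,000 wafer price and ~10 good dies per wafer (neither figure is confirmed):

```python
# Sanity check of the article's cost claim. Both inputs are the
# article's own (unconfirmed) figures, not known numbers.
wafer_cost_usd = 5000       # claimed TSMC 40nm wafer price
good_dies_per_wafer = 10    # claimed yield outcome

cost_per_chip = wafer_cost_usd / good_dies_per_wafer
print(cost_per_chip)  # 500.0
```

So the "$500 per chip" line is just wafer price divided by good dies; the whole claim stands or falls on that yield number.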

  12. #587
    Xtreme Addict
    Join Date
    Aug 2007
    Location
    Istantinople
    Posts
    1,574
    A very reliable source told me the top GF100's clocks were about GTX 285. So the 600mhz figure from Charlie is, I believe, wrong.
    Has anyone really been far even as decided to use even go want to do look more like?
    INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"

  13. #588
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    die size is 550mm2? thats almost g200 65nm big then, ouch...
    i thought it was 500mm2 or about g200b 55nm in size...

    the ES cards suck 280W... single gpu... wow... :o

    The BoM cost for a GTX480 is more than the retail price of an ATI HD5890, a card that will slap it silly in the benchmarks
    mhhhh 480 is rumored to sell for 400-500$, so the bom cost is probably 400$? so 5890 will sell for 400$ and 5870 drops to 300$?
    Last edited by saaya; 02-17-2010 at 09:43 AM.
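The die-size guesses above can be turned into a rough candidate-die count with the standard dies-per-wafer approximation. A sketch assuming a 300mm wafer, the rumored ~550mm² GF100 die, and ATI's ~334mm² Cypress for comparison (all die areas here are rumors/estimates):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard approximation: gross wafer area over die area,
    # minus an edge-loss correction term.
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(550))  # 100 candidate dies, before yield
print(dies_per_wafer(334))  # 175
```

At ~100 candidates per wafer, the article's "10 good dies" would put wafer yield around 10%, roughly in line with the "single digit" claim.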

  14. #589
    Xtreme Member
    Join Date
    Mar 2008
    Location
    Pennsylvania
    Posts
    166
    Quote Originally Posted by annihilat0r View Post
    A very reliable source told me the top GF100's clocks were about GTX 285. So the 600mhz figure from Charlie is, I believe, wrong.
    The 600MHz figure from Charlie is about the same as the GTX 285's 648MHz.

    (adv) approximately, about, close to, just about, some, roughly, more or less, around, or so ((of quantities) imprecise but fairly close to correct)

  15. #590
    Xtreme Addict
    Join Date
    Aug 2007
    Location
    Istantinople
    Posts
    1,574
    Quote Originally Posted by thebluemeanie1 View Post
    The 600MHz figure from Charlie is about the same as the GTX 285s 648MHz.
    Actually I was told that it's higher than GTX 285.

    Plus, Charlie is talking about a 600mhz which only top binned chips can do. So I don't believe him on that.

    I don't believe the "$500 per chip" thing either. It can't be THAT bad.
    Has anyone really been far even as decided to use even go want to do look more like?
    INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"

  16. #591
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by annihilat0r View Post
    Actually I was told that it's higher than GTX 285.

    Plus, Charlie is talking about a 600mhz which only top binned chips can do. So I don't believe him on that.

    I don't believe the "$500 per chip" thing either. It can't be THAT bad.
    maybe thats what they were hoping for... maybe thats what one or two cards can run... or maybe thats what all cards COULD run if they cool them really well...

    either way, the clocks dont really matter, 10% higher or lower clocks... thats not gonna make a huge difference... yields are a serious problem... if they are really still that low, then thats bad...
    he said GF104 still didnt tape out... that sucks :/
    i hope nvidia isnt waiting for 28nm to get GF104 out!

  17. #592
    Registered User
    Join Date
    Jul 2009
    Posts
    66
    The interesting point was that no GF100 derivatives have taped out yet. So another year before we see Fermi mainstream parts?

  18. #593
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by thatdude90210 View Post
    The interesting point was that no GF100 derivatives have taped out yet. So another year before we see Fermi mainstream parts?
    yes, that was the most interesting part for me too...

    if they follow their old strategy of shrinking and cutting down (G80->G92) then yes, a year almost... if they follow their recent strategy of just shrinking (gt200->gt200b) then it will also be about a year... cause shrinking means 28nm, and thats not going to happen before Q4... if not even Q1 2011...

    the only way they can get GF104 out soon is if its still on 40nm... but since they havent taped it out yet... even that wont be too soon

    they were very optimistic with GF100 and it taped out in july and they wanted to sell it in november... thats 4 months... and that was optimistic... if they tape out tomorrow that would mean GF104 arrives in july... maybe a little sooner... bleh :/

    since there are signs of delays and yield issues at 28nm at tsmc ALREADY and we are just in Q1 of 2010 while its supposed to kick off at the end of the year, it would be really stupid from nvidia to wait for 28nm... so im pretty sure they will do GF104 in 40nm and will try to have it out in the middle of this year...
    Last edited by saaya; 02-17-2010 at 10:43 AM.
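The schedule reasoning above is just date arithmetic: take GF100's roughly 4-month tape-out-to-target gap and apply it to a hypothetical GF104 tape-out around the posting date (mid-February 2010 is an assumption for illustration):

```python
from datetime import date

def add_months(d, months):
    # Shift a date forward by whole months, keeping the day-of-month.
    m = d.month - 1 + months
    return date(d.year + m // 12, m % 12 + 1, d.day)

gf100_tapeout = date(2009, 7, 1)   # taped out in July 2009
gf100_target = date(2009, 11, 1)   # optimistic November sales target
gap_months = ((gf100_target.year - gf100_tapeout.year) * 12
              + gf100_target.month - gf100_tapeout.month)

print(gap_months)                                  # 4
print(add_months(date(2010, 2, 18), gap_months))   # 2010-06-18
```

Even on GF100's optimistic 4-month schedule, a tape-out "tomorrow" lands GF104 in mid-June at the earliest, which matches the "july... maybe a little sooner" estimate.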

  19. #594
    Xtreme Member
    Join Date
    Mar 2008
    Posts
    131
    Nvidia's Fermi GTX480 is broken and unfixable
    Hot, slow, late and unmanufacturable

    http://www.semiaccurate.com/2010/02/...and-unfixable/

  20. #595
    Registered User
    Join Date
    Jun 2005
    Location
    Germany
    Posts
    38
    Quote Originally Posted by Milos-stancene View Post
    Nvidia's Fermi GTX480 is broken and unfixable
    Hot, slow, late and unmanufacturable

    http://www.semiaccurate.com/2010/02/...and-unfixable/
    Reply from nvidia.
    http://twitter.com/RS_Borsti
    Oh Charlie... that just another hilarious post

    http://twitter.com/Igor_Stanek
    Oh Charlie... that just another hilarious post me: I think with this post Charlie totally destroyed his credibility

    I want to see how he is going to explain his article in March.... looks like biggest lie and mistake of his life

  21. #596
    Banned
    Join Date
    May 2006
    Location
    Brazil
    Posts
    580
    Quote Originally Posted by mapel110 View Post
    Reply from nvidia.
    http://twitter.com/RS_Borsti
    Oh Charlie... that just another hilarious post

    http://twitter.com/Igor_Stanek
    Oh Charlie... that just another hilarious post me: I think with this post Charlie totally destroyed his credibility

    I want to see how he is going to explain his article in March.... looks like biggest lie and mistake of his life
    I hope he's totally wrong, for the sake of nVidia.

  22. #597
    Registered User
    Join Date
    Apr 2008
    Posts
    49
    It really seems to me that Fermi is just too big; it has so much extra architecture added in there for CUDA-based workloads that it is bigger than it needs to be to work as a gaming card.

    I remember when the GTX 280 came out, my first reaction was: it's so big, where can they go from here? If it gets any bigger it's just not going to work. That reaction was based on really nothing, save that my first GTX 280 ran much hotter than I expected and required a second loop to keep my CPU at the temps I wanted.

    If in fact Fermi is just too big to make then where does Nvidia go from here?

    Do they
    A, Rework a smaller version for late 2010, maybe throw out some of the CUDA stuff that was not needed for the gaming market, and then double up like the HD 5970 for the top card?
    Problem: Nvidia will be fighting ATI in its own backyard; dual-GPU cards are kinda ATI's thing, and from my own experience with the 5970, ATI has it down. The 5970 is also sitting at the 300W PCIe wall, and though you can break it with over-clocking, OEMs don't want to break it for legal reasons. Unless Fermi has better performance per watt, a dual-Fermi card will be slower un-overclocked and thus slower at the OEM level.

    B, Start shrinking down Fermi to 28nm and not release anything till 2011?
    Problem: ATI will have something new out by then, maybe the 6XXX cards or by that point 7XXX cards, leaving Nvidia one to two generations behind.

    C, Find what chips work, throw them on boards with insane cooling just to beat the 5870 by around 10 to 20%, and use the performance crown to sell re-branded G92-based cards to the masses?
    Problem: People may catch on and not buy re-branded stuff, and with Win 7 selling so well and OEMs wanting to give everyone DirectX 11, re-branded G92 cards won't cut it in the OEM market. Nvidia's market share could crash, and the money and time that would have been set aside to help shrink Fermi down to 28nm will have been used to make a broken card almost work.

    I have been holding off getting a 5870 until the MSI Lightning version is out, but a part of me was holding off to see how Fermi would do, and I feel I am not alone in that. Now, however, I feel no reason to wait; the 4870X2 I use most days is no longer new and shiny, and there are a few games where it chokes up a bit, mostly due to CF issues. There is simply no longer a reason to wait: Fermi is not going to be better than a 5870 from the looks of it, and if it is, it will be too hot and take way too much power to make the small increase worth it.

    I hate saying it, but Nvidia has failed with Fermi; even if it comes out and it works, sorta, it is just too late to be called a success, no matter what. It sucks, I know, but it's about time we and Nvidia admit that Fermi was a bit too much to try to build on 40nm, and Nvidia's inability to admit and realize this soon enough may have hurt them more than any of us could have expected or predicted.
    MSI 790FX-GD70
    AMD PhIIX4 955 BE
    Coolermaster V10!!
    Mushkin DDR3
    WD Vel Raptor
    Diamond ATI HD 4870X2
    Cosmos Stacker 830

  23. #598
    Xtreme Addict
    Join Date
    Jan 2008
    Posts
    1,176
    Quote Originally Posted by mapel110 View Post
    Reply from nvidia.
    http://twitter.com/RS_Borsti
    Oh Charlie... that just another hilarious post

    http://twitter.com/Igor_Stanek
    Oh Charlie... that just another hilarious post me: I think with this post Charlie totally destroyed his credibility

    I want to see how he is going to explain his article in March.... looks like biggest lie and mistake of his life
    I think Charlie, as much as I love to discredit the bunny boiler, is telling a half truth at least.

    Just go by the vibes, there's nothing being shown to strengthen the launch of Fermi with less than a month to go.
    By this time, ATI had a boat and over a hundred cards for the visitors to play with.

    Nvidia *might* have a booth with one behind a curtain. Probably with a number 7 written on the chip with marker pen

  24. #599
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Posts
    591
    Quote Originally Posted by Tim View Post
    My goodness, that is actually way worse then I imagined it would be. I hope nVidia has got some serious reserves, because they are going to need it if this article is anywhere near true.



    $500 per chip?
    7800GTX 512 memories coming now.

    If this is true, this will leave a scar. It would also teach them a lesson in being humble towards the opposition. Monolithic GPUs are the way of failure. They are complex, expensive, and inefficient per mm².

    A good idea is a good idea. Big corporations cannot survive if they ignore good ideas endorsed by rivals. Look at Microsoft, Google, Apple, and every other successful Fortune 500 company. Even ATI! The example from ATI is the ring-bus memory controller. It was originally a great idea: plug-and-play GPUs/mem chips with loads of bandwidth. As time went on, they came to realize that it was expensive in terms of space used, and that it had to be optimized per GPU to get maximum performance, negating the benefits first assumed, which made it unnecessary. How many different kinds of memory chips are you going to use with any GPU lineup?

    How much time and money did they spend on it? It didn't matter. They scrapped it and went for the classic approach. The engineers learned many valuable lessons, and it is paying off now in their almost fully modular GPU/mem design.

    The big green giant needs a slap in the face. While the little red rabbit isn't as big or powerful, it has won a few battles by outwitting the giant.

  25. #600
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    1,870
    Quote Originally Posted by CadESin View Post
    Rework a smaller version for late 2010, maybe throw out some of the CUDA stuff that was not needed for the gaming market
    With Compute Shader in DX11 most of the changes done for CUDA apply to games as well. What exactly is this stuff not needed for the gaming market that you're referring to? The biggest expense in Fermi is definitely the geometry engine and that's completely gaming related.

