
Thread: The official GT300/Fermi Thread

  1. #276
    Xtreme Guru
    Join Date
    Jan 2005
    Location
    Tre, Suomi Finland
    Posts
    3,858
    Are we talking about metal or silicon respins? Dunno why nV has only one letter and one number in the spin code, while ATi lists both silicon and metal spins. Anyway, if ATi's A0 were the first revision then R600 would have gotten an unrealistic total of four respins, as early samples were A11 (1st rev silicon, 1st rev metal) while retail chips were A13. More likely, ATi's first rev is A11, meaning R600 had two metal respins (A11 -> A12 -> A13).

    And, I haven't seen any nV, nor ATi, chips marked A0...
    Last edited by largon; 10-02-2009 at 12:53 PM.
    You were not supposed to see this.

  2. #277
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by Chumbucket843 View Post
    explain to me how 512 shaders is not over double 240 shaders. the bandwidth increased by 50% too. the theoretical numbers are not that impressive but you completely missed a lot of factors and posted wrong information. nvidia also said 1.5ghz is a conservative estimate for clockspeed.
    512 shaders is over double 240 (x2.13 to be exact). But 48 ROPs is not over double 32 (x1.5 to be exact). And 230 GB/s is not over double 141 (x1.63 to be exact). So overall, it's not over double the specs of the previous generation. I don't think it's so hard to get what I've said there, and I don't see where I've said anything about the CPs not being doubled (I think I mentioned +113%). I would also like to know what all those factors are that I've missed and what wrong information I've posted, based on what we know at the moment.

    And regarding clock speed, I would take that as referring to the shader clock. I wouldn't expect clocks much higher than the GTX 285's, if at all.
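    For anyone who wants to check those ratios, a quick sketch of the arithmetic (the GF100 numbers are the rumoured figures from this thread, not confirmed specs):
    Code:
    gt200 = {"shaders": 240, "rops": 32, "bandwidth_gbs": 141}   # GT200 figures as quoted above
    gf100 = {"shaders": 512, "rops": 48, "bandwidth_gbs": 230}   # rumoured GF100 figures

    for spec in gt200:
        print(f"{spec}: x{gf100[spec] / gt200[spec]:.2f}")
    # shaders: x2.13, rops: x1.50, bandwidth_gbs: x1.63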

  3. #278
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by Helloworld_98 View Post
    I wouldn't make conclusions yet, we haven't seen any GPGPU results for larrabee, or pricing.

    however even if larrabee is slightly less powerful, I could still see businesses opting for it due to lower power usage and it will probably be cheaper
    i believe intel said LRB was about 1 TFLOP at double precision, and GF100 i think is 3 TFLOPS single and half that in double. but this is all from memory and i could be wrong.
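    As a rough sanity check on those from-memory numbers, this is how the theoretical peak is usually worked out; the 512 cores and 1.5 GHz shader clock are just the figures floating around this thread, not confirmed specs:
    Code:
    cores = 512                # rumoured shader count (unconfirmed)
    shader_clock_ghz = 1.5     # shader clock quoted earlier in the thread (unconfirmed)
    flops_per_clock = 2        # one fused multiply-add counts as 2 FLOPs

    sp_tflops = cores * flops_per_clock * shader_clock_ghz / 1000
    dp_tflops = sp_tflops / 2  # Fermi's stated double-precision rate is half the single-precision rate

    print(f"single precision: ~{sp_tflops:.2f} TFLOPS")   # ~1.54 TFLOPS
    print(f"double precision: ~{dp_tflops:.2f} TFLOPS")   # ~0.77 TFLOPS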

  4. #279
    Xtreme Mentor
    Join Date
    Jul 2008
    Location
    Shimla , India
    Posts
    2,631
    Quote Originally Posted by AVB View Post
    pic for ya




    http://rs648.rapidshare.com/files/28...Key_Visual.jpg ( res. 6316 x 3240)

    Fermi 1.4-1.6 x of GTX 295.

    ( 1.6-1.8x of GTX 285 is not too much)
    Rapidshare is not letting me download; it says: "This file is neither allocated to a Premium Account, or a Collector's Account, and can therefore only be downloaded 10 times."

    Quote Originally Posted by K404 View Post
    This card got torn apart on BTUK yesterday. Its been so crudely done (the PCB that is,) its insulting.
    Any links? Can't find them; I did a Google search and nothing came up...
    Coming Soon

  5. #280
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by Chumbucket843 View Post
    nvidia also said 1.5ghz is a conservative estimate for clockspeed.
    Do you honestly believe in a 1.5GHz GT300 at stock settings?
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  6. #281
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    People were also talking about G200 being similar to G92 in clocks, and look what happened with the first gen...

    Fact is, when I hear Nvidia's own engineers claim that the design is delayed because it's incredibly hard, I'm not holding my breath for incredible clocks, especially since they've had their own struggles moving to 40nm.

  7. #282
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by zalbard View Post
    Do you honestly believe in a 1.5GHz GT300 at stock settings?
    For the shaders, not the core. Obviously the core won't be 1.5GHz.
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.

  8. #283
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by Farinorco View Post
    512 shaders is over double 240 (x2.13 to be exact). But 48 ROPs is not over double 32 (x1.5 to be exact). And 230 GB/s is not over double 141 (x1.63 to be exact). So overall, it's not over double the specs of the previous generation. I don't think it's so hard to get what I've said there, and I don't see where I've said anything about the CPs not being doubled (I think I mentioned +113%). I would also like to know what all those factors are that I've missed and what wrong information I've posted, based on what we know at the moment.

    And regarding clock speed, I would take that as referring to the shader clock. I wouldn't expect clocks much higher than the GTX 285's, if at all.
    games are bound by shaders in the majority of cases. you can see that clearly in the 5870: they are running games at ridiculously high resolutions on a single card and still it's bandwidth that really bottlenecks pixel fillrate. the factors you missed were the new memory hierarchy, better scheduling logic, predication, and instruction set improvements.

    i would trust nvidia more than i trust you for the clockspeed.

  9. #284
    Xtreme Addict
    Join Date
    Aug 2005
    Location
    Germany
    Posts
    2,247
    Quote Originally Posted by Nightcover View Post
    --> http://www.nvidia.com/object/gpu_tec...onference.html

    watch the opening keynote with Jen-Hsun Huang. He says it runs on Fermi. A lot better quality than youtube too.

    And it's really interesting.
    yep, especially the physx part is very interesting and impressive. it starts at about 1/4 or 1/5 of the way into the video (no timestamps or anything are shown :/).
    1. Asus P5Q-E / Intel Core 2 Quad Q9550 @~3612 MHz (8,5x425) / 2x2GB OCZ Platinum XTC (PC2-8000U, CL5) / EVGA GeForce GTX 570 / Crucial M4 128GB, WD Caviar Blue 640GB, WD Caviar SE16 320GB, WD Caviar SE 160GB / be quiet! Dark Power Pro P7 550W / Thermaltake Tsunami VA3000BWA / LG L227WT / Teufel Concept E Magnum 5.1 // SysProfile


    2. Asus A8N-SLI / AMD Athlon 64 4000+ @~2640 MHz (12x220) / 1024 MB Corsair CMX TwinX 3200C2, 2.5-3-3-6 1T / Club3D GeForce 7800GT @463/1120 MHz / Crucial M4 64GB, Hitachi Deskstar 40GB / be quiet! Blackline P5 470W

  10. #285
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    Here ya go guys...
    http://www.hardocp.com/news/2009/10/...es_eyefinity63

    I still don't care about multi-monitor for gaming until they make multi panel monitors into one frame, but good for those who do care.
    Quote Originally Posted by Leon2ky
    "dammit kyle what's with the 30 second sex lately?" "Sorry sweetie, I overclocked my nuts and they haven't been stable since"
    Quote Originally Posted by trinibwoy View Post
    I don't think his backside has internet access.
    Quote Originally Posted by n00b 0f l337 View Post
    Hey I just met you
    And this is crazy
    But I'm on bath salts
    And your face looks tasty

  11. #286
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by DilTech View Post
    Here ya go guys...
    http://www.hardocp.com/news/2009/10/...es_eyefinity63

    I still don't care about multi-monitor for gaming until they make multi panel monitors into one frame, but good for those who do care.
    i'd expect both companies to have been able to do this for a while, they just never cared to develop drivers for it. if an x1800 can do 1920x1200, then i doubt they were hardware limited.

  12. #287
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by Chumbucket843 View Post
    games are bound by shaders in the majority of cases. you can see that clearly in the 5870. they are running games at ridiculously high resolutions on a single card and still its bandwidth that really bottlenecks pixel fillrates. the factors you missed were new memory hierarchy, better scheduling logic, predication, and instruction set improvements.
    I didn't miss those factors. They simply don't play any part in anything I've said. And where they do, I have mentioned and considered them. Please take the "trouble" of reading my posts and trying to understand them before quoting me, so as not to put words in my mouth.

    And I don't see how the HD5870 can be used to show that games are shader bottlenecked, since shader processing power was improved in the same proportion as texture processing power, raster operation throughput, and so on.

    There are more things involved in the 3D rendering process apart from shaders and memory bandwidth.

    i would trust nvidia more than i trust you for the clockspeed.
    Yeah, no doubt. But I think you have misunderstood them if you got the idea that they are talking about a 1500MHz clock for the GPU core.

  13. #288
    Xtreme Addict
    Join Date
    Apr 2004
    Posts
    1,640
    Quote Originally Posted by Farinorco View Post
    512 shaders is over double 240 (x2.13 to be exact). But 48 ROPs is not over double 32 (x1.5 to be exact). And 230 GB/s is not over double 141 (x1.63 to be exact). So overall, it's not over double the specs of the previous generation. I don't think it's so hard to get what I've said there, and I don't see where I've said anything about the CPs not being doubled (I think I mentioned +113%). I would also like to know what all those factors are that I've missed and what wrong information I've posted, based on what we know at the moment.

    And regarding clock speed, I would take that as referring to the shader clock. I wouldn't expect clocks much higher than the GTX 285's, if at all.
    Um, you don't have to double EVERYTHING to get doubled performance. More than anything this depends on the particular application you're running, and where the bottlenecks lie within it.

    If you look at a past example where performance WAS doubled, like the 8800GTX, let's compare that to the previous gen flagship, the 7900GTX. The 8800GTX had almost exactly twice the GFLOPs of the 7900GTX, even taking into account the nearly useless MUL op. The 8800GTX had 69% more memory bandwidth, and get this, only 33% more pixel fillrate, and 18% more bilinear texture fillrate.

    The GF100 is more of an improvement in raw specs over the GTX 285 than the 8800GTX was over the 7900GTX. So doubling performance is more than possible.
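    For anyone wanting to reproduce those percentages, a sketch using the commonly cited reference specs of the two cards (core clock in MHz, effective memory clock in MHz, bus width in bits; treat them as approximate):
    Code:
    gtx7900 = {"rops": 16, "tmus": 24, "core_mhz": 650, "mem_mhz_eff": 1600, "bus_bits": 256}
    gtx8800 = {"rops": 24, "tmus": 32, "core_mhz": 575, "mem_mhz_eff": 1800, "bus_bits": 384}

    def bandwidth_gbs(c):   # GB/s = effective memory clock * bus width / 8 bits per byte
        return c["mem_mhz_eff"] * c["bus_bits"] / 8 / 1000

    def pixel_fill(c):      # Gpixels/s = ROPs * core clock
        return c["rops"] * c["core_mhz"] / 1000

    def texel_fill(c):      # Gtexels/s (bilinear) = TMUs * core clock
        return c["tmus"] * c["core_mhz"] / 1000

    for name, fn in [("bandwidth", bandwidth_gbs), ("pixel fill", pixel_fill), ("texture fill", texel_fill)]:
        print(f"{name}: +{fn(gtx8800) / fn(gtx7900) - 1:.0%}")
    # bandwidth: +69%, pixel fill: +33%, texture fill: +18%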
    DFI LANParty DK 790FX-B
    Phenom II X4 955 BE (1003GPMW) @ 3.8GHz (19x200) w/1.36v
    -cooling: Scythe Mugen 2 + AC MX-2
    XFX ATI Radeon HD 5870 1024MB
    8GB PC2-6400 G.Skill @ 800MHz (1:2) 5-5-5-15 w/1.8v
    Seagate 1TB 7200.11 Barracuda
    Corsair HX620W


    Support PC gaming. Don't pirate games.

  14. #289
    Xtreme Mentor
    Join Date
    Oct 2005
    Posts
    2,788
    Quote Originally Posted by Cybercat View Post
    Um, you don't have to double EVERYTHING to get doubled performance. More than anything this depends on the particular application you're running, and where the bottlenecks lie within it.
    Not always; for example, the shaders are more efficient, being MIMD/FMA.

    Also you have to keep in mind, while the ROPs and TMUs were doubled on RV870, ask yourself, doubled to what? 32/80 respectively. GT200 already has 32/80.
    Asus Rampage II Gene | Core i7 920 | 6*2GB Mushkin 998729 | BFG GTX280 OCX | Auzentech X-Fi Forte | Corsair VX550
    —Life is too short to be bound by the moral, ethical and legal constraints imposed on us by modern day society.

  15. #290
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    Quote Originally Posted by largon View Post
    Are we talking about metal or silicon respins? Dunno why nV has only one letter and one number in the spin code, while ATi lists both silicon and metal spins. Anyway, if ATi's A0 were the first revision then R600 would have gotten an unrealistic total of four respins, as early samples were A11 (1st rev silicon, 1st rev metal) while retail chips were A13. More likely, ATi's first rev is A11, meaning R600 had two metal respins (A11 -> A12 -> A13).

    And, I haven't seen any nV, nor ATi, chips marked A0...
    Of course you haven't; A0 is usually in-house only. The only chip I can think of that NVidia released as an A0 is the NV15. Usually it takes a few revisions before they can release.

    Also, the R600 DID take several respins before it could release if you remember. You're talking about a card that was 6 months+ late.

    Now again, all that could have changed since then, but I've never heard or read anything to tell me that. I'd ask the reps, but that's likely information they aren't willing to let out.
    Last edited by DilTech; 10-02-2009 at 01:52 PM.
    Quote Originally Posted by Leon2ky
    "dammit kyle what's with the 30 second sex lately?" "Sorry sweetie, I overclocked my nuts and they haven't been stable since"
    Quote Originally Posted by trinibwoy View Post
    I don't think his backside has internet access.
    Quote Originally Posted by n00b 0f l337 View Post
    Hey I just met you
    And this is crazy
    But I'm on bath salts
    And your face looks tasty

  16. #291
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by Cybercat View Post
    Um, you don't have to double EVERYTHING to get doubled performance. More than anything this depends on the particular application you're running, and where the bottlenecks lie within it.

    If you look at a past example where performance WAS doubled, like the 8800GTX, let's compare that to the previous gen flagship, the 7900GTX. The 8800GTX had almost exactly twice the GFLOPs of the 7900GTX, even taking into account the nearly useless MUL op. The 8800GTX had 69% more memory bandwidth, and get this, only 33% more pixel fillrate, and 18% more bilinear texture fillrate.

    The GF100 is more of an improvement in raw specs over the GTX 285 than the 8800GTX was over the 7900GTX. So doubling performance is more than possible.
    Where in the post you're quoting do I say that you have to double everything to double performance? I'm answering a specific question.

    And you can't compare G80 with the previous generation, as it's a completely different architecture. Starting with the unified shader processors (instead of units that could only process vertex or pixel shaders), and the same goes for the TMUs and ROPs.

    Again, I've never said that doubling is not possible (why is everybody putting those words in my mouth? You're at least the 3rd person to say that, and I'm starting to get tired of repeating it). You can read it yourself in my post quoted by Chumbucket843 (which, I should add, is taken from a conversation including more posts before and after).

    I have only said that there is not a single piece of evidence which guarantees that GT300 is going to be more than twice the performance of GT200.

    But oh, well. If all of you are getting hurt by hearing it, I'll correct myself and let's finish with this: "GT300 is necessarily going to be at least 2x the performance of the GTX 285, and probably more". Happy now?

    EDIT: I have edited the previous paragraphs to give a much more accurate response.
    Last edited by Farinorco; 10-02-2009 at 02:02 PM.

  17. #292
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Quote Originally Posted by DilTech View Post
    Of course you haven't, A0 is usually in-house only. Only chip I can think of that released as an A0 from NVidia is the NV15. Usually it takes a few revisions before they can release.

    Also, the R600 DID take several respins before it could release if you remember. You're talking about a card that was 6 months+ late.
    the R600 was bigger and hotter than my epeen, the biggest thing ati ever made, and it was on the wrong process, i think.

  18. #293
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by Helloworld_98 View Post
    I wouldn't make conclusions yet, we haven't seen any GPGPU results for larrabee, or pricing.

    however even if larrabee is slightly less powerful, I could still see businesses opting for it due to lower power usage and it will probably be cheaper
    Wasn't it reported to be a 300W, 600mm² thing like a year ago? Yeah, rumours based on guesstimates based on rumours, I know.

  19. #294
    Xtreme Addict
    Join Date
    Apr 2004
    Posts
    1,640
    Quote Originally Posted by Farinorco View Post
    And you can't compare G80 with previous generation, as it's a completely different architecture. Starting by the unified shader processors (instead of units that only could calculate vertex or pixel shaders), with a completely different architecture, and the same for TMUs and ROPs.
    Of course it was a completely different architecture, as is the GF100. It may not be to the same extent as the G80 enjoyed, but it makes up for that by increasing raw specs more than the G80 did.

    It really doesn't bother me when people say the GF100 won't perform as well as such-and-such or whatever, because no one knows, and everyone's entitled to their opinion. The main thing that bothers me is how much importance you place on ROPs, TMUs and bandwidth, when those are insignificant factors in games that are GPU-limited. Granted, there aren't that many of those anymore, thanks to consoles and the perceived threat of piracy.
    DFI LANParty DK 790FX-B
    Phenom II X4 955 BE (1003GPMW) @ 3.8GHz (19x200) w/1.36v
    -cooling: Scythe Mugen 2 + AC MX-2
    XFX ATI Radeon HD 5870 1024MB
    8GB PC2-6400 G.Skill @ 800MHz (1:2) 5-5-5-15 w/1.8v
    Seagate 1TB 7200.11 Barracuda
    Corsair HX620W


    Support PC gaming. Don't pirate games.

  20. #295
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by Farinorco View Post
    I didn't miss that factors. They simply don't take any part in anything that I've said. And when it take it, I have mentioned them and considered them. Take the "trouble" of reading my posts and trying to understand them before quoting me, please, to not put things in my mouth.

    And I don't know how to use HD5870 to know how games are shader bottlenecked since the proportion in which they have improved shader processing power it's the same than texture processing power, rasterizing operations processing power, and so.

    There are more things involved in the 3D rendering process apart from shaders and memory bandwidth.



    Yeah, no doubt. But I think you have misunderstood them when you have the idea that they are talking about a clock of 1500MHz for the GPU core.
    my reference to the 5870 was to show that the rops are where they should be. too many and you're just wasting die space. they are running games at 7680x3200. the rops were added to help texture filtering quality, which won't double performance on either gpu. if you don't believe me, look at the ratio of shaders to rops over the past 5 years.

    this is the statement i was referring to:
    Consider that the HD5870 is exactly double the HD4890 (+100% everything at the same clocks) except bandwidth (approx. +30%) and it's far from double the real-world performance (that's one of the most recent proofs that doubling everything doesn't mean doubling real-world performance), and NVIDIA is not even doubling processing units.

    i responded to this part of your statement about shader clocks, and you somehow got the idea i was talking about the core?
    And regarding clock speed, I would take that as referring to the shader clock. I wouldn't expect clocks much higher than the GTX 285's, if at all.

  21. #296
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by Cybercat View Post
    It really doesn't bother me when people say the GF100 won't perform as well as such-and-such or whatever, because no one knows, and everyone's entitled to their opinion. The main thing that bothers me is how much importance you place in ROPs, TMUs and bandwidth, when those are insignificant factors in games that are GPU-limited. Granted, there aren't that many of those anymore, thanks to consoles and the perceived threat of piracy.
    That's exactly what I was trying to say.

    I have never said "GF100 won't perform as well as such-and-such".

    That's exactly what I'm talking about.

    Somebody said "GF100 is going to perform at least twice as well" and I asked him "Why? What's the reason you think that? What info that we have now leads you to take that for granted?".

    And then some of you started quoting me, putting words in my mouth.

    But oh, you know what? The fault is all mine:

    I should have been replying "I know. I didn't say otherwise. Read it again" from the very beginning.

  22. #297
    Xtreme Member
    Join Date
    Jul 2009
    Location
    Madrid (Spain)
    Posts
    352
    Quote Originally Posted by Chumbucket843 View Post
    my reference to the 5870 was to show that rops are where they should be. too much and youre just wasting die space. they are running games at 7680x3200. the rop's were added to help texture filtering quality which wont double the performance in either gpu. if you dont believe me look at the ratio of shaders to rops over the past 5 years.

    this is the statement i was referring to:



    i responded to this part of your statement about shader clocks and you somehow got the idea i was talking about core?
    Processing units. CUDA cores are processing units. Texture units are processing units. Raster Operation Processors are processing units. So no, they are not "doubling processing units".

    And regarding the clocks, obviously. If you understand it as the shader clock, I don't know how it's an argument for saying that they have doubled processing power.

    EDIT: And for my part, the discussion about what I did or didn't say is over. My (at present favourable, and I think not unrealistic) opinions about GT300 are pretty clear in posts on previous pages (some of them quoted on this one), even when some people are absolutely determined to misunderstand them.
    Last edited by Farinorco; 10-02-2009 at 02:26 PM.

  23. #298
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by Calmatory View Post
    Wasn't it reported to be 300W 600mm² thing like a year ago? Yeah, rumours based on guesstimates based on rumours, I know.
    this is from anandtech. they will hit a power wall (if it clocks high enough) before they have a huge die.
    At 143 mm^2, Intel could fit 10 Larrabee-like cores so let's double that. Now we're at 286mm^2 (still smaller than GT200 and about the size of AMD's RV770) and 20-cores. Double that once more and we've got 40-cores and have a 572mm^2 die, virtually the same size as NVIDIA's GT200 but on a 65nm process.

    The move to 45nm could scale as well as 50%, but chances are we'll see something closer to 60 - 70% of the die size simply by moving to 45nm (which is the node that Larrabee will be built on). Our 40-core Larrabee is now at ~370mm^2 on 45nm.
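    Walking through AnandTech's arithmetic (all figures are theirs; this just redoes the scaling):
    Code:
    area_10_cores_65nm = 143.0                       # mm^2 for 10 Larrabee-like cores at 65nm
    area_40_cores_65nm = area_10_cores_65nm * 4      # doubled twice: 40 cores, ~572 mm^2

    for scale in (0.60, 0.70):                       # 45nm die assumed at 60-70% of the 65nm size
        print(f"40 cores at 45nm, {scale:.0%} scaling: ~{area_40_cores_65nm * scale:.0f} mm^2")
    # ~343 and ~400 mm^2, which brackets the ~370 mm^2 estimate above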

  24. #299
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by Chumbucket843 View Post
    the whitepapers said there will be future versions with less double precision for gaming. that probably won't happen this gen though. no one is expecting 3x performance in games. 2x faster could be possible.
    Please quote or tell me what page that is on. I have read through the whitepaper 3 times and haven't seen ANY mention of that.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  25. #300
    Xtreme Cruncher
    Join Date
    May 2009
    Location
    Bloomfield
    Posts
    1,968
    Quote Originally Posted by LordEC911 View Post
    Please quote or tell me what page that is on. I have read through the whitepaper 3 times and haven't seen ANY mention of that.
    there are 4 papers on nvidia's website.
    pg 19 of "fermi looking beyond graphics"

    Fermi’s only vulnerability may be its attempt to combine world-class graphics performance with general-purpose compute performance in one chip. With three billion transistors, a Fermi GPU will be more than twice as complex as NVIDIA’s existing GT200 chips. At some point, if not now, features intended to boost compute performance may compromise the chip’s competitive position as an affordable graphics processor. At that juncture, the architectures may have to diverge — especially if the professional market grows larger than the consumer market.
