
Thread: Nvidia Fermi GF100 working desktop card spotted on Facebook

  1. #301
    Xtreme Enthusiast
    Join Date
    Mar 2009
    Location
    Toronto ON
    Posts
    566
    LegitReview GT 240 Batman Preview Nov 17, 2009

    Quote Originally Posted by Teemax View Post
    Anyone noticed this totally totally legit comparison in that article?

    Who needs Fermi, GTX 275 is already k.i.n.g.! (the real way it's meant to BS)
    LegitReview HD 5870 Batman review Sep 22, 2009

    Looks like a big check can change a review big time. We can only guess where the check came from.
    Last edited by Heinz68; 11-26-2009 at 08:44 PM.
    Core i7-4930K LGA 2011 Six-Core - Cooler Master Seidon 120XL Push-Pull Liquid Water
    ASUS Rampage IV Black Edition LGA2011 - G.SKILL Trident X Series 32GB (4 x 8GB) DDR3 1866
    Sapphire R9 290X 4GB TRI-X OC in CrossFire - ATI TV Wonder 650 PCIe
    Intel X25-M 160GB G2 SSD - WD Black 2TB 7200 RPM 64MB Cache SATA 6
    Corsair HX1000W PSU - Pioneer Blu-ray Burner 6X BD-R
    Westinghouse LVM-37w3, 37-inch 1080p - Windows 7 64-bit Pro
    Sennheiser RS 180 - Cooler Master Cosmos S Case

  2. #302
    Xtreme Addict
    Join Date
    Nov 2005
    Posts
    1,084
    Quote Originally Posted by Shintai View Post
    And AMD is only a CPU manufacturer due to stolen technology and making clones.

  3. #303
    Xtreme Member
    Join Date
    Sep 2009
    Location
    @Rockwell Business Center
    Posts
    129
    Quote Originally Posted by Heinz68 View Post
    LegitReview GT 240 Batman Preview Nov 17, 2009

    LegitReview HD 5870 Batman review Sep 22, 2009

    Looks like a big check can change a review big time. We can only guess where the check came from.
    Wow, and they call themselves Legit Reviews.
    Newbie Cruncher

  4. #304
    Xtreme Enthusiast
    Join Date
    Dec 2008
    Posts
    752
    It's true though. It DOES run 57% faster. It's just misleading because they're forcing uncommon settings onto the 5870.

  5. #305
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Can somebody please explain this to me? I'm DYING here.
    90nm G80: 128 shaders
    65nm G92: 128 shaders
    55nm G92b: 128 shaders
    65nm G200: 240 shaders - 576mm2!
    40nm GT240: 96 shaders
    nVidia has barely changed the shader count at all... AMD went from 64 to 160 to 320 shaders, each with 5 execution units. AMD didn't even need a die shrink to add 2.5x more shaders (RV670 -> RV770) while keeping the die small.

    Does this mean AMD's shader architecture is more compact and efficient,
    or
    does nVidia's design have technical limitations and overhead?

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  6. #306
    Xtreme Addict
    Join Date
    May 2005
    Posts
    1,656
    Not sure if this was posted here yet or not:

    NVIDIA Fun fact of the week: The GF100 board pictured in last week's fun fact is 10.5 inches long -- the same length as GeForce GTX 200 Series graphics cards!
    Work Rig: Asus x58 P6T Deluxe, i7 950 24x166 1.275v, BIX2/GTZ/D5
    3x2048 GSkill pi Black DDR3 1600, Quadro 600
    PCPower & Cooling Silencer 750, CM Stacker 810

    Game Rig: Asus x58 P6T, i7 970 24x160 1.2v HT on, TRUE120
    3x4096 GSkill DDR3 1600, PNY 660ti
    PCPower & Cooling Silencer 750, CM Stacker 830

    AMD Rig: Biostar TA790GX A2+, x4 940 16x200, stock hsf
    2x2gb Patriot DDR2 800, PowerColor 4850
    Corsair VX450

  7. #307
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Quote Originally Posted by highoctane View Post
    Not sure if this was posted here yet or not:
    Source?

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  8. #308
    Xtreme Addict
    Join Date
    May 2005
    Posts
    1,656
    Quote Originally Posted by Tim View Post
    Source?
    Nvidia's Facebook page.
    Work Rig: Asus x58 P6T Deluxe, i7 950 24x166 1.275v, BIX2/GTZ/D5
    3x2048 GSkill pi Black DDR3 1600, Quadro 600
    PCPower & Cooling Silencer 750, CM Stacker 810

    Game Rig: Asus x58 P6T, i7 970 24x160 1.2v HT on, TRUE120
    3x4096 GSkill DDR3 1600, PNY 660ti
    PCPower & Cooling Silencer 750, CM Stacker 830

    AMD Rig: Biostar TA790GX A2+, x4 940 16x200, stock hsf
    2x2gb Patriot DDR2 800, PowerColor 4850
    Corsair VX450

  9. #309
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Quote Originally Posted by randomizer View Post
    How any reviewer could put that on their website without a disclaimer* is beyond me.

    *Above graph was made courtesy of the BS brigade.
    The result they obtained is completely legitimate.
    Quite simply, if you put Physics = high, ANY Radeon will plummet in fps like a stone.

    How? Why? Ask TWIMTBP. You'd think a 3.3GHz Core i7, executing 8 threads in parallel, could perhaps be enough to figure out where to render a few dozen swirling pieces of paper. After all, what's a few pieces of paper compared to cranking out 3000fps per core in Unreal Tournament?

    The simple and logical conclusion is that some *special* BS is being done to make the GeForces look good.

    Another successful example of parking a dump truck full of money at the developer's HQ.

    Check the CPU utilization screenshots... they pretty much explain everything:
    http://www.tomshardware.com/reviews/...m,2465-10.html
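
    To put some back-of-the-envelope numbers on that: a minimal sketch of why a physics step pinned to one core caps fps no matter how fast the GPU is. All the per-frame timings below are hypothetical, purely for illustration.

    Code:
    # Hypothetical per-frame costs in milliseconds, for illustration only.
    render_ms  = 10.0   # GPU render time per frame (assumed)
    physics_ms = 25.0   # physics cost per frame on ONE core (assumed)
    cores      = 8      # threads on a 3.3GHz Core i7

    # Assuming physics and rendering overlap, the slower of the two
    # sets the frame time. Physics locked to one thread dominates:
    fps_single = 1000.0 / max(render_ms, physics_ms)

    # The same work spread across all cores hides behind rendering:
    fps_threaded = 1000.0 / max(render_ms, physics_ms / cores)

    print(f"single-threaded physics: {fps_single:.0f} fps")   # ~40 fps
    print(f"threaded physics:        {fps_threaded:.0f} fps") # ~100 fps
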
    Last edited by ***Deimos***; 11-27-2009 at 04:20 PM.

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  10. #310
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by ***Deimos*** View Post
    Can somebody please explain this to me? I'm DYING here.
    90nm G80: 128 shaders
    65nm G92: 128 shaders
    55nm G92b: 128 shaders
    65nm G200: 240 shaders - 576mm2!
    40nm GT240: 96 shaders
    nVidia has barely changed the shader count at all... AMD went from 64 to 160 to 320 shaders, each with 5 execution units. AMD didn't even need a die shrink to add 2.5x more shaders (RV670 -> RV770) while keeping the die small.

    Does this mean AMD's shader architecture is more compact and efficient,
    or
    does nVidia's design have technical limitations and overhead?
    You forgot the 55nm G200b, 240 shaders at 484mm2, and the supposed G212 w/ 384 shaders on 40nm.
    AMD's shaders are more compact and efficient in terms of power consumption and die area.
    Nvidia's shaders are also clocked 2x+ higher than the core clock, which helps even out the playing field in terms of shader count and performance.
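
    As a rough worked example of that point (a sketch only; the 3 flops/clock for G200's MAD+MUL dual issue and 2 flops/clock for RV770's MAD are the usual theoretical-peak assumptions):

    Code:
    # Theoretical single-precision peaks: Nvidia's ~2x shader clock
    # partly offsets its much lower shader count.
    def gflops(shaders, shader_mhz, flops_per_clock):
        return shaders * shader_mhz * flops_per_clock / 1000.0

    print(f"GTX 285 (240 SPs @ 1476MHz, 3 flops/clk): {gflops(240, 1476, 3):.0f} GFLOPS")
    print(f"HD 4890 (800 SPs @  850MHz, 2 flops/clk): {gflops(800,  850, 2):.0f} GFLOPS")
    # prints ~1063 GFLOPS vs ~1360 GFLOPS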


    Quote Originally Posted by highoctane View Post
    Not sure if this was posted here yet or not:
    Hmmm.. what happened to the 9-9.5in board that some people here were so adamant was true?
    Last edited by LordEC911; 11-27-2009 at 04:26 PM.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  11. #311
    Xtreme Addict
    Join Date
    Sep 2008
    Location
    Downunder
    Posts
    1,313
    Quote Originally Posted by ***Deimos*** View Post
    The result they obtained is completely legitimate.
    Quite simply, if you put Physics = high, ANY Radeon will plummet in fps like a stone.
    Nice how they picked out Batman for its own unique graph without any proper analysis of the results though, isn't it?

    I think the issue is more the paragraphs of text surrounding the graph than the graph itself. In particular, the text right before the Batman graph looks like something I'd read in an NVIDIA press release, not on a 3rd-party website.

  12. #312
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
    Quote Originally Posted by ***Deimos*** View Post
    Can somebody please explain this to me? I'm DYING here.
    90nm G80: 128 shaders
    65nm G92: 128 shaders
    55nm G92b: 128 shaders
    65nm G200: 240 shaders - 576mm2!
    40nm GT240: 96 shaders
    nVidia has barely changed the shader count at all... AMD went from 64 to 160 to 320 shaders, each with 5 execution units. AMD didn't even need a die shrink to add 2.5x more shaders (RV670 -> RV770) while keeping the die small.

    Does this mean AMD's shader architecture is more compact and efficient,
    or
    does nVidia's design have technical limitations and overhead?
    I don't know if it's necessarily Nvidia's design that has limitations; it's more that ATI's was designed from the start to be super-scalable.

    They said that R600 would be the foundation for 3 generations of video cards, and while a lot of people said that R600 was a failed architecture, R700 definitely vindicated the design. R600's failures were more likely due to the fab process and leakage, which killed any chance of higher-clocked cores or the rumored original spec of 480 SPs rather than 320.

    That being said, it is true that G92 and GT200 have both been heavily based on G80 (G92 was basically just a shrink), and Nvidia did hit a wall earlier on the scaling of its design.

  13. #313
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Quote Originally Posted by LordEC911 View Post
    You forgot the 55nm G200b, 240 shaders at 484mm2, and the supposed G212 w/ 384 shaders on 40nm.
    AMD's shaders are more compact and efficient in terms of power consumption and die area.
    Nvidia's shaders are also clocked 2x+ higher than the core clock, which helps even out the playing field in terms of shader count and performance.

    Hmmm.. what happened to the 9-9.5in board that some people here were so adamant was true?
    Yeah, but even though the shader clock domain is higher, clocks haven't really improved much: G80, G92, G92b, G200, G200b...

    nVidia's crazy fantastic "PROGRESS" in clockspeeds

    90nm - avg = 562
    8800 ULTRA 612/1500
    8800 GTX 575/1350
    8800 GTS 500/1200

    65/55nm - avg = 662
    8800GT/9800GT 600/1500
    8800GTS 650/1625
    9800GTX+/GTS250 738/1836

    65/55nm - avg = 615
    GTX280 602/1296
    GTX260 55nm 576/1350
    GTX275 633/1404
    GTX285 648/1476

    40nm - avg = 617 (i.e. die shrink = slower?)
    GT210 675/1450
    GT220 625/1360
    GT240 550/1360

    Although nVidia has yet to beat 740MHz, which ATI/AMD did with the 2900XT, 2600XT, 3870, 3870x2, 4890, 5870, etc.
    ATI/AMD clocks aren't improving much either. 850 is a tiny improvement over 750, but at least it's not slower.
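
    A quick sketch to re-check those per-node averages from the clocks listed above:

    Code:
    # Recompute the average core clock per process node from the list above.
    clocks = {
        "90nm":         [612, 575, 500],
        "65/55nm G92":  [600, 650, 738],
        "65/55nm G200": [602, 576, 633, 648],
        "40nm":         [675, 625, 550],
    }
    for node, mhz in clocks.items():
        print(f"{node}: avg = {sum(mhz) / len(mhz):.1f} MHz")
    # 562.3 / 662.7 / 614.8 / 616.7 -- matches the quoted figures within rounding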

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  14. #314
    Xtreme Addict
    Join Date
    Nov 2007
    Posts
    1,195
    Quote Originally Posted by Chaserjzx100 View Post
    Wow, and they call themselves Legit Reviews.
    The previous one was Nvidia's PR; they failed to mention that in the article, but if you look in the forums you'll see the discussion going on and the admin confirming that it was Nvidia's own slides. The second one is Legit's own review.

  15. #315
    Xtreme Enthusiast
    Join Date
    Mar 2009
    Location
    Toronto ON
    Posts
    566
    Quote Originally Posted by ***Deimos*** View Post
    The result they obtained is completely legitimate.
    Quite simply, if you put Physics = high, ANY Radeon will plummet in fps like a stone.

    How? Why? Ask TWIMTBP. You'd think a 3.3GHz Core i7, executing 8 threads in parallel, could perhaps be enough to figure out where to render a few dozen swirling pieces of paper. After all, what's a few pieces of paper compared to cranking out 3000fps per core in Unreal Tournament?

    The simple and logical conclusion is that some *special* BS is being done to make the GeForces look good.

    Another successful example of parking a dump truck full of money at the developer's HQ.

    Check the CPU utilization screenshots... they pretty much explain everything:
    http://www.tomshardware.com/reviews/...m,2465-10.html
    TWIMTBP, is that a misprint? Did you mean TWIMTBG? (see #3)
    Core i7-4930K LGA 2011 Six-Core - Cooler Master Seidon 120XL Push-Pull Liquid Water
    ASUS Rampage IV Black Edition LGA2011 - G.SKILL Trident X Series 32GB (4 x 8GB) DDR3 1866
    Sapphire R9 290X 4GB TRI-X OC in CrossFire - ATI TV Wonder 650 PCIe
    Intel X25-M 160GB G2 SSD - WD Black 2TB 7200 RPM 64MB Cache SATA 6
    Corsair HX1000W PSU - Pioneer Blu-ray Burner 6X BD-R
    Westinghouse LVM-37w3, 37-inch 1080p - Windows 7 64-bit Pro
    Sennheiser RS 180 - Cooler Master Cosmos S Case

  16. #316
    Xtreme Member
    Join Date
    Sep 2008
    Posts
    115
    Quote Originally Posted by Heinz68 View Post
    TWIMTBP, is that a misprint? Did you mean TWIMTBG? (see #3)
    haha, thanks, much easier to explain the meaning of the word now.

  17. #317
    Xtreme Addict
    Join Date
    Jan 2008
    Location
    Vancouver,British Columbia, Canada
    Posts
    1,178
    Quote Originally Posted by v_rr View Post
    http://www.youtube.com/watch?v=7qKcJF4fOPs


    World Community Grid's mission is to create the world's largest public computing grid to tackle projects that benefit humanity.
    Our success depends upon individuals collectively contributing their unused computer time to change the world for the better.

  18. #318
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by Heinz68 View Post
    LegitReview GT 240 Batman Preview Nov 17, 2009
    Quote Originally Posted by Teemax View Post
    Anyone noticed this totally totally legit comparison in that article?

    Who needs Fermi, GTX 275 is already k.i.n.g.! (the real way it's meant to BS)


    LegitReview HD 5870 Batman review Sep 22, 2009

    Looks like a big check can change a review big time. We can only guess where the check came from.
    The 275 > 5870 chart is with PhysX enabled. No doubt the 5870 destroys the 275 with PhysX off.

  19. #319
    Xtreme Addict
    Join Date
    Sep 2008
    Location
    Downunder
    Posts
    1,313
    Quote Originally Posted by eric66 View Post
    The previous one was Nvidia's PR; they failed to mention that in the article, but if you look in the forums you'll see the discussion going on and the admin confirming that it was Nvidia's own slides. The second one is Legit's own review.
    I had a feeling that might have been the case, especially since I've never seen any review put "X% increase with Y card" on top of their graphs before. But since I couldn't find it anywhere else after a few minutes of Googling I decided not to mention it.

  20. #320
    Xtreme Mentor
    Join Date
    Apr 2005
    Posts
    2,550
    Quote Originally Posted by Vapor View Post
    The 275 > 5870 chart is with PhysX enabled. No doubt the 5870 destroys the 275 with PhysX off.
    The first graph looks to be from NVIDIA PR material! No one else puts a full explanation like "Boost Performance by 57% with a dedicated GeForce GT 240" there.

    Journalists usually put the name of the benchmark in that place (like on the second graph)!

    It's a shame that "Legit" doesn't mention anywhere the fact that the article is PR propaganda from NV.
    Adobe is working on Flash Player support for 64-bit platforms as part of our ongoing commitment to the cross-platform compatibility of Flash Player. We expect to provide native support for 64-bit platforms in an upcoming release of Flash Player following the release of Flash Player 10.1.

  21. #321
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by ***Deimos*** View Post
    Yeah, but even though the shader clock domain is higher, clocks haven't really improved much: G80, G92, G92b, G200, G200b...

    nVidia's crazy fantastic "PROGRESS" in clockspeeds

    90nm - avg = 562
    8800 ULTRA 612/1500
    8800 GTX 575/1350
    8800 GTS 500/1200

    65/55nm - avg = 662
    8800GT/9800GT 600/1500
    8800GTS 650/1625
    9800GTX+/GTS250 738/1836

    65/55nm - avg = 615
    GTX280 602/1296
    GTX260 55nm 576/1350
    GTX275 633/1404
    GTX285 648/1476

    40nm - avg = 617 (i.e. die shrink = slower?)
    GT210 675/1450
    GT220 625/1360
    GT240 550/1360

    Although nVidia has yet to beat 740MHz, which ATI/AMD did with the 2900XT, 2600XT, 3870, 3870x2, 4890, 5870, etc.
    ATI/AMD clocks aren't improving much either. 850 is a tiny improvement over 750, but at least it's not slower.
    More importantly, note how fast clocks decline on nvidia's 40nm GPUs with added complexity! Over a 100MHz drop for the most complex 40nm part so far, and it's only a cut-down G92... so a tweaked G92 on 40nm can only clock to 550MHz, but Fermi, which is 5+ times more complex, will reach 650+MHz? (clock derived from nvidia's flops numbers mentioned at the Supercomputing event)
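
    For reference, one way that derivation could go. This is a sketch only; it assumes the ~630 GFLOPS peak double-precision figure quoted for Fermi-based Tesla around SC09, 512 cores, DP at half the SP FMA rate, and a shader domain at ~2x the core clock.

    Code:
    # Implied Fermi clocks from a quoted FLOPS figure (all assumptions above).
    dp_gflops = 630.0                      # quoted peak DP GFLOPS (assumed)
    cores = 512
    dp_flops_per_clock = cores * 2 * 0.5   # FMA = 2 flops, DP at half SP rate

    shader_mhz = dp_gflops * 1000.0 / dp_flops_per_clock
    core_mhz = shader_mhz / 2.0            # shader domain ~2x core clock

    print(f"implied shader clock: ~{shader_mhz:.0f} MHz")  # ~1230 MHz
    print(f"implied core clock:   ~{core_mhz:.0f} MHz")    # ~615 MHz

    That lands right in the 600-650MHz range being debated here.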

  22. #322
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Quote Originally Posted by zerazax View Post
    I don't know if it's necessarily Nvidia's design that has limitations; it's more that ATI's was designed from the start to be super-scalable.

    They said that R600 would be the foundation for 3 generations of video cards, and while a lot of people said that R600 was a failed architecture, R700 definitely vindicated the design. R600's failures were more likely due to the fab process and leakage, which killed any chance of higher-clocked cores or the rumored original spec of 480 SPs rather than 320.

    That being said, it is true that G92 and GT200 have both been heavily based on G80 (G92 was basically just a shrink), and Nvidia did hit a wall earlier on the scaling of its design.
    Don't think that's the problem. The X2900XT was 80nm and something like 740MHz. The mid-range X2600XT was a blazing 800MHz... a milestone nVidia has yet to conquer 3 years later.

    AMD moved to 55nm early on, at the same time as nVidia launched the 8800GT on 65nm. Likewise, AMD was first to 40nm. So you'd think AMD would be the one with yield problems, right?

    Let's compare the last 3 generations of product launches.
    X2900XT/PRO - full chip, 600-740 clocks
    HD3850/HD3870 - full chip, 668-825 clocks
    HD4850/HD4870 - full chip, 625-850 clocks
    HD5xxx... the only one where they're not selling all full chips at launch.

    nVidia? Crippled chips galore - 8800GTS, 8800GT/9800GT, GTX260... even in their mid-range, where you'd think yields wouldn't be an issue.

    Clearly, nVidia's designs are either more susceptible to defects, or tight schedules are pressuring them to cut corners... literally.

    Quote Originally Posted by saaya View Post
    More importantly, note how fast clocks decline on nvidia's 40nm GPUs with added complexity! Over a 100MHz drop for the most complex 40nm part so far, and it's only a cut-down G92... so a tweaked G92 on 40nm can only clock to 550MHz, but Fermi, which is 5+ times more complex, will reach 650+MHz? (clock derived from nvidia's flops numbers mentioned at the Supercomputing event)
    Die shrink -> lower clocks??... This happened with each and every one of AMD's Athlon 64s. Recall that the top-of-the-line FX chips were 130nm. A couple of years later the 90nm parts were offered at the highest clockspeeds, while 65nm was for the mid/low end.

    And then of course Phenom 1, but that was more of a power-budget issue.

    And power budget is clearly no problem for a 32SP 40nm nVidia chip... so how does AMD hit 850MHz with 2B transistors while nVidia only manages 600MHz, with shaders running barely as high as the original 90nm GTX?
    1. They do it on purpose, so Fermi looks good compared to the GT220/GT240.
    2. A Fermi delay will be announced and they'll ship a 40nm G92 in the meantime.
    3. The design isn't scalable. Could be a problem if even a crippled, cut-down Fermi can only run at (extrapolating)... 400MHz.
    4. Simply inexperienced and incompetent engineers. Can't be the "process", since the 4770 was getting great clocks early on, before things were ironed out.
    5. Management.

    Hate to repeat it so many times, but nVidia's Fermi is way late, their DX10.1 cards are a joke, the huge-die GT200s are sinking profits, and they don't even have the Intel/AMD chipset business to fall back on. If they don't get a PERFECT Fermi out, with a whole lineup down to the bottom, it could be not just NV30 time, but more like 3dfx time.

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  23. #323
    Xtreme Addict
    Join Date
    Jan 2008
    Posts
    1,176
    Quote Originally Posted by ***Deimos*** View Post
    Quite simply, if you put Physx = high, ANY Radeon will plummet in fps like a stone.
    Important correction.

    Nearly all games have physics calculations now, and in lots of games turning up the configurable physics won't hurt an ATI card.

    Physics =/= PhysX

    One is a model of the laws of motion; the other is marketing hype and false bottlenecking.

  24. #324
    Xtreme Enthusiast
    Join Date
    Dec 2008
    Posts
    752
    Quote Originally Posted by Jowy Atreides View Post
    Important correction.

    Nearly all games have physics calculations now, and in lots of games turning up the configurable physics won't hurt an ATI card.

    Physics =/= PhysX

    One is a model of the laws of motion; the other is marketing hype and false bottlenecking.
    Except his post clearly says PHYSX, not physics.

    Unless he's retarded, he wrote PhysX on purpose. In which case, if you set PhysX to high, any Radeon will do poorly.

  25. #325
    Xtreme Member
    Join Date
    Aug 2009
    Posts
    244
    Lots of PhysX games only use the CPU to do physics; only 14 games have GPU PhysX.

    So a Radeon will do well in most PhysX games.
    Last edited by mindfury; 11-28-2009 at 06:28 PM.

