Page 75 of 109
Results 1,851 to 1,875 of 2,723

Thread: The GT300/Fermi Thread - Part 2!

  1. #1851
    Xtreme Member
    Join Date
    Feb 2006
    Location
    La La Land.
    Posts
    250
    Quote Originally Posted by kgtiger View Post
    I should point out that I really know nothing about this subject.
    Just what I have picked up from reading two-thirds of the first Fermi thread and all of this one.

    ATI was first to market with DX11 cards by adapting their old architecture to DX11, while Fermi's new architecture seems to do really well at tessellation.

    Will ATI now need to build a new architecture from the ground up to handle tessellation more effectively?
    Or will ATI be able to hold off building a new architecture from scratch for another generation or two and still get the same tessellation performance that nVidia is getting now?
    Or can ATI keep the architecture they have and just keep re-working it?

    Another thing I noticed: most games are playable at the frame rates they get now, yet in the past the best cards have always been judged and rated by how many FPS they could do.
    Would it now be fair to say the way cards are rated might need to change?
    If most games don't need more FPS to improve the smoothness of the gameplay, and tessellation only improves the detail of the actual graphics,
    then maybe we should look at other ways to judge the performance of the new DX11 cards and take the quality of the graphics into account as well, not just the FPS?

    With that in mind, from the outside looking in, nVidia does seem to understand this point somewhat, and as a result has moved its focus more toward the quality of the graphics than just the FPS aspect of gaming.
    After all, is that not what tessellation is all about, improving the graphics quality?
    Like the dragon demo with tessellation on and the big difference without it, or the water effect in Just Cause 2.
    With my limited experience in gaming, water has always looked bad to me, but that Just Cause 2 demo looked really good.

    One other thing: is tessellation the main point of going from DX10 to DX11?

    Any insight into these questions would be much appreciated.
    ATI does what the market dictates.
    They bring out the required hardware when there is a real need for it, or when one is anticipated in the near future.
    Look what happened to all the hype around double-precision floating point. Nvidia made a big deal out of it, but to gamers it made little difference, and it still makes little difference today. Yet now even ATI has a card with good DP performance.
    They took a similar approach with tessellation. Right now they have an architecture they were able to release on time, cheaply, and with the right balance for how they view the current and near-future market.
    Looking at their track record over the last 1-2 years, they will have a new architecture ready when there is a need for it.

    Primary Rig
    Intel Xeon W3520 @4200Mhz 24x7, 1.200v load (3845A935)
    Gigabyte X58A-UD7
    Patriot Viper II DDR3 2000 CL8
    Tagan BZ1300
    DeepCool Gamer Storm with 2x120mm DeepCool fans.
    MSI GTX 470 Twin Frozr II
    Zotac GTX 470 AMP edition.
    GPU collection : http://www.xtremesystems.org/forums/...5&postcount=64




    Rig2
    Phenom II x4 965
    MSI 790GX-GD65
    2GBx2 Corsair DDR3 1333
    Tagan tg500-u37
    Arctic Cooling Freezer 7 Pro Rev.2
    XFX 9600GT


  2. #1852
    Banned
    Join Date
    Jun 2008
    Location
    Mi
    Posts
    1,063
    Quote Originally Posted by kgtiger View Post
    I should point out that I really know nothing about this subject.
    *snip*
    Any insight into these questions would be much appreciated.
    ?
    ATi's architecture was more advanced. Their use of DX10.1 and prior use of tessellation meant they had little trouble fitting DX11 into their already advanced arch.

    Nvidia required a whole new approach, thus it is taking much longer.

  3. #1853
    Xtreme Member
    Join Date
    Jan 2009
    Posts
    261
    If tessellation is the only thing that Fermi excels at, then the architecture is a failure.
    A balanced architecture is what we need, not something that requires everyone else to change their development model.


    You can spot the NVIDIA fanboys (in disguise or not) pretty easily: just a moment ago they were screaming "DX11 doesn't matter"!
    Now the new mantra is "tessellation is THE WAY".

    Make up your minds, guys

  4. #1854
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    Cairo
    Posts
    2,366
    Quote Originally Posted by Xoulz View Post
    ?
    ATi's architecture was more advanced. Their use of DX10.1 and prior use of tessellation meant they had little trouble fitting DX11 into their already advanced arch.

    Nvidia required a whole new approach, thus it is taking much longer.
    Really!? More advanced because it had DX10.1? Last time I checked, Nvidia had no problem adding 10.1 support to low-end parts using the very same old architecture that you are calling less advanced for not supporting 10.1. And you are making it sound like Fermi was made for tessellation...
    Intel Core I7 920 @ 3.8GHZ 1.28V (Core Contact Freezer)
    Asus X58 P6T
    6GB OCZ Gold DDR3-1600MHZ 8-8-8-24
    XFX HD5870
    WD 1TB Black HD
    Corsair 850TX
    Cooler Master HAF 922

  5. #1855
    Xtreme Member
    Join Date
    Nov 2006
    Location
    CroLand
    Posts
    379
    GTX 480 Unigine and 3D Vision Surround Demo :
    http://www.youtube.com/watch?v=vpdPSZB8A8E
    Phenom II x6 1055T | ASRock 880G Ex.3 | 560Ti FrozrII 1GB| Corsair Vengeance 1600 2x4GB | Win7 64 | M4 128GB

    VR Box - i5 6600 | MSI Mortar | Gigabyte G1 GTX 1060 | Viper 16GB DDR4 2400 | 256 SSD | Oculus Rift CV1 + Touch

  6. #1856
    Xtreme Member
    Join Date
    Oct 2007
    Location
    Sydney, Australia
    Posts
    466
    Quote Originally Posted by n!Cola View Post
    GTX 480 Unigine and 3D Vision Surround Demo :
    http://www.youtube.com/watch?v=vpdPSZB8A8E
    Finally getting some info.

  7. #1857
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Czech Republic, 50°4'52.22"N, 14°23'30.45"E
    Posts
    474
    Quote Originally Posted by kemo View Post
    Really!? More advanced because it had DX10.1? Last time I checked, Nvidia had no problem adding 10.1 support to low-end parts using the very same old architecture that you are calling less advanced for not supporting 10.1. And you are making it sound like Fermi was made for tessellation...
    The DX10.1 which NVIDIA kept bashing as useless? They finally implemented it after all these years, conveniently timed since there will be no Fermi to buy for the next half a year (meaning at the time the cards were released).

    As for a new arch, AMD is working on a really new architecture right now. It's supposed to come out at the end of this year or the beginning of 2011. I see only a tick-tock schedule from AMD, no 6-month delays. So who actually has problems rolling out more advanced technologies?

  8. #1858
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Posts
    591
    Quote Originally Posted by kemo View Post
    Really!? More advanced because it had DX10.1? Last time I checked, Nvidia had no problem adding 10.1 support to low-end parts using the very same old architecture that you are calling less advanced for not supporting 10.1. And you are making it sound like Fermi was made for tessellation...
    Actually, what he is saying is correct.

    When multipurpose DX10 shaders were being discussed in whitepapers, ATI already had this tech in their cards. Even as far back as the X1900, the GPU was designed with DX10 in mind. All it needed was a few enhancements and it was fully DX10-capable when DX10 launched.

    With the HD 2900, ATI already had the DX10.1 standards implemented at the hardware level. Meaning, again, the hardware was capable and ready before the API was released. Nvidia was too busy calling it useless.

    Guess what happened with DX11? The exact same thing. ATI's hardware team has been one step ahead of the game since the first DX10 whitepapers were surfacing. Nvidia, on the other hand, was too busy milking their architecture. The engineers were given clear instructions: more, more, more. More of the same. Who was smarter? Hard to say, but Nvidia was the one making the money.

    Nvidia got DX10.1 working, but look how long it took. When Microsoft said DX10.1 was here, ATI said here are the cards. When DX11 rolled around, ATI said here are the cards.

    ATI's long-term planning has hopefully paid off. They invested time in making the current generation support upcoming standards from the ground up. It has cost them dearly, because research costs were higher and beefing up performance would have exceeded the budget.

    Referring to Nvidia as a giant is truly the best way to describe them: slow in moving forward, strong and powerful, but not very intelligent.

  9. #1859
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by SKYMTL View Post
    Current ATI cards get castrated when tessellation is enabled.
    Quite a strong statement, considering that the very top offering from Nvidia isn't even 2 times faster than a cheaper 5870 (not to mention the 5970, which is on par with or better than the GTX 480 in Heaven, and the much slower GTX 470) in the over-the-top tessellation environment of Heaven. Some people even speculate that the GTX 480 scores much closer to the 5870 at high IQ settings...
    You are not being totally rational here.
    Last edited by zalbard; 03-05-2010 at 06:16 AM.
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  10. #1860
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Czech Republic, 50°4'52.22"N, 14°23'30.45"E
    Posts
    474
    Have we seen any real game performance with tessellation, or are we talking only about some tech demo/benchmark shown off by NVIDIA, with settings chosen to hurt the competition, of course? This is actually NVIDIA PR, you know? NVIDIA's PR is what brought the company to where it is now.

  11. #1861
    One-Eyed Killing Machine
    Join Date
    Sep 2006
    Location
    Inside a pot
    Posts
    6,340
    Quote Originally Posted by Behemot View Post
    Have we seen any real game performance with tessellation, or are we talking only about some tech demo/benchmark shown off by NVIDIA, with settings chosen to hurt the competition, of course? This is actually NVIDIA PR, you know? NVIDIA's PR is what brought the company to where it is now.
    Guess we should also thank their PR dept for the G80 (8800GTX).
    Bloody bastard$ fooled us and convinced us that it was the best and a great product.
    Coding 24/7... Limited forums/PMs time.

    -Justice isn't blind, Justice is ashamed.

    Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P. ), Juan J. Guerrero

  12. #1862
    Xtreme Member
    Join Date
    Jun 2009
    Location
    Portugal
    Posts
    227
    Quote Originally Posted by n!Cola View Post
    GTX 480 Unigine and 3D Vision Surround Demo :
    http://www.youtube.com/watch?v=vpdPSZB8A8E
    AF 1x | AA 0x


    Fail?


    Fractal Arc Midi
    Asus P8Z77V-PRO
    Intel Core I7 3770K
    Corsair Hydro H80
    8GB G.Skill Ripjaw-X 2133Mhz
    eVGA GTX670 SC 2GB
    Corsair AX-850W Gold

  13. #1863
    Xtreme Member
    Join Date
    Nov 2008
    Location
    Finland
    Posts
    100
    Quote Originally Posted by Macadamia View Post
    Hah, nVidia using Heaven 1.1 (30% more performance in culling, aka the dragon parts of the benchmark) while ATI uses Heaven 1.0!


    It's like the Nehalem and Cinebench R10 "R11" dll swap all over again!
    Nvidia's real demo run vs Nvidia's PR slide (hint: check 60-120 sec):

  14. #1864
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Czech Republic, 50°4'52.22"N, 14°23'30.45"E
    Posts
    474
    Quote Originally Posted by BenchZowner View Post
    Bloody bastard$ fooled us and convinced us that it was the best and a great product
    Sadly, this is true of most of the cards that came after the G80. I'm not knocking the G80 itself, but what came next... all the bashing, PhysX (pure PR, but how many less-informed people did they convince to buy?), Assassin's Creed (you'd better not have forgotten that one), and all their hardware is still the old G80, just slightly tweaked and renamed. The renaming is probably the worst thing they did. I once met an NVIDIA PR guy during Invex, and that formed my real picture of the company. What servility!

  15. #1865
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by kgtiger View Post
    Will ATI now need to build a new architecture from the ground up to handle tessellation more effectively?
    Or will ATI be able to hold off building a new architecture from scratch for another generation or two and still get the same tessellation performance that nVidia is getting now?
    Or can ATI keep the architecture they have and just keep re-working it?
    For some reason the GTX 480 can be compared to the 5970; I'm not stating it should be in all cases. In those cases the tessellation advantage that the GTX 480 holds over the 5870 should be drastically reduced, if there is one at all.

    As was already stated, AMD/ATi has most likely implemented the needed changes in their future architecture, which we should see before the end of 1H '11. The main change needed is in the geometry: they need to be able to handle at least 2 tris/clock.

    As for the dual GF100, just like the G200 it will need a shrink, unless Nvidia and TSMC have some magic pixie dust to use on the Bx silicon.
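    To put the 2 tris/clock figure in perspective, here is a back-of-the-envelope sketch of how the setup rate caps per-frame geometry. The clock speed and frame-rate target are illustrative assumptions, not vendor specs:

    ```python
    # Back-of-the-envelope triangle-setup throughput; the numbers below are
    # illustrative assumptions, not vendor specifications.
    core_clock_hz = 850e6   # hypothetical GPU core clock (850 MHz)
    tris_per_clock = 1      # classic fixed-function setup rate: 1 tri/clock
    setup_rate = core_clock_hz * tris_per_clock  # triangles per second

    target_fps = 60
    budget = setup_rate / target_fps  # setup-limited triangles per frame
    print(f"1 tri/clock budget:  {budget / 1e6:.1f} M tris/frame")
    print(f"2 tris/clock budget: {2 * budget / 1e6:.1f} M tris/frame")
    ```

    Doubling the setup rate doubles the per-frame triangle budget, which is why parallel geometry engines matter once tessellation starts multiplying triangle counts.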
    Last edited by LordEC911; 03-05-2010 at 01:59 AM.
    Originally Posted by motown_steve
    Every genocide that was committed during the 20th century has been preceded by the disarmament of the target population. Once the government outlaws your guns your life becomes a luxury afforded to you by the state. You become a tool to benefit the state. Should you cease to benefit the state or even worse become an annoyance or even a hindrance to the state then your life becomes more trouble than it is worth.

    Once the government outlaws your guns your life is forfeit. You're already dead, it's just a question of when they are going to get around to you.

  16. #1866
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    Alberta, Canada
    Posts
    1,264
    No one buys a high-end GPU to not run AA. Sure, it's a benchmark, but the fact that they chose to put out this info is rather bad; it doesn't show confidence in their product. The way I see it, if there wasn't anything to hide, they'd have done a real-world comparison.

    I have a feeling Nvidia is going to rely on their PR/marketing strength with PhysX, 3D Vision and their developer relations to play up these cards. In other words, they may not have anything to show other than their own IPs, in which case... I suppose if you enjoy those things then all the power to you, but the average user just won't. (3D Vision requires an SLI setup in most cases to be playable, and I doubt that has changed with GF100, making it an expensive thing, not to mention the glasses and a 120Hz LCD; i.e. not remotely mainstream, an extreme luxury.) If PhysX were more interesting and didn't have such a performance hit, maybe people would be more interested (e.g. Nvidia pushing throwing in a 2nd GPU for PhysX... and for good reason it seems; it could just be a conspiracy to deplete excess G92 stock... hmmm).

    I just hope things don't turn out as gloomy as they appear to be so far. For innovation and competition's sake, they'd better not suck.
    Feedanator 7.0
    CASE:R5|PSU:850G2|CPU:i7 6850K|MB:x99 Ultra|RAM:8x4 2666|GPU:980TI|SSD:BPX256/Evo500|SOUND:2i4/HS8
    LCD:XB271HU|OS:Win10|INPUT:G900/K70 |HS/F:H115i

  17. #1867
    Xtreme Addict
    Join Date
    Oct 2004
    Posts
    1,356
    Quote Originally Posted by Chickenfeed View Post
    No one buys a high-end GPU to not run AA. *snip* For innovation and competition's sake, they'd better not suck.
    I feel much the same way man.

    Don't get me wrong, I want Nvidia to have a killer card (for the same reasons you mention in your last sentence). Been waiting for one for some time now. But every day it just seems less and less likely.

  18. #1868
    Xtreme Member
    Join Date
    Jan 2009
    Posts
    261
    Quote Originally Posted by zalbard View Post
    Quite a strong statement, considering that the very top offering from Nvidia isn't even 2 times faster than a cheaper 5870 (not to mention the much slower GTX 470) in the over-the-top tessellation environment of Heaven. Some people even speculate that the GTX 480 scores much closer to the 5870 at high IQ settings...
    You are not being totally rational here.
    Agreed, it's quite an aggressive statement coming from a respected reviewer.

    Unless it's backed with facts, forgive me if I take the statement with a LOT of salt. Somehow SKYMTL has turned into an NVIDIA ambassador on this forum. Weird.

  19. #1869
    One-Eyed Killing Machine
    Join Date
    Sep 2006
    Location
    Inside a pot
    Posts
    6,340
    Quote Originally Posted by Teemax View Post
    Agreed, it's quite an aggressive statement coming from a respected reviewer.

    Unless it's backed with facts, forgive me if I take the statement with a LOT of salt. Somehow SKYMTL has turned into an NVIDIA ambassador on this forum. Weird.
    I heard nVIDIA gave him $20k.

    And how long have you been here, to know SKYMTL well?

  20. #1870
    Banned
    Join Date
    Jan 2003
    Location
    EU
    Posts
    318
    From my simple understanding of tessellation, it seems it could be "tweakable" just like AA or aniso settings are: you could have 2x more polys, 4x more polys, etc. It would be a smart move for ATI to implement such an override in their drivers. I wonder if that's possible.
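    To see why such an override would bite, consider the rough scaling of generated triangles with tessellation factor. A minimal sketch, using the simplification that a uniformly tessellated triangle patch at factor N yields about N^2 sub-triangles (the real D3D11 partitioning pattern differs at the patch edges), with a made-up base mesh size:

    ```python
    # Rough scaling of triangle count with tessellation factor; illustrative
    # only, since real tessellators use adaptive, per-edge factors.
    def approx_tris(base_tris: int, tess_factor: int) -> int:
        """Approximate post-tessellation triangle count for a mesh."""
        return base_tris * tess_factor ** 2

    base = 10_000  # hypothetical base mesh triangle count
    for factor in (1, 2, 4, 8):
        print(f"factor {factor}: ~{approx_tris(base, factor):,} triangles")
    ```

    Under that approximation, a driver clamp from 8x down to 4x would cut the generated geometry by roughly 4x, much like forcing a lower AA level in the control panel.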

  21. #1871
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Quote Originally Posted by Teemax View Post
    Agreed, it's quite an aggressive statement coming from a respected reviewer.

    Unless it's backed with facts, forgive me if I take the statement with a LOT of salt. Somehow SKYMTL has turned into an NVIDIA ambassador on this forum. Weird.
    He doesn't bash nVidia like half the users here, and he uses the round thing that sits on his shoulders. That's the difference.

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  22. #1872
    Xtreme Member
    Join Date
    Jan 2009
    Posts
    261
    Quote Originally Posted by BenchZowner View Post
    And you've been here for how long to know SKYMTL well ?
    I haven't been registered on this forum for long, but I'm a regular reader of HardwareCanucks. I like the depth of their investigations and the often unbiased, logical arguments behind their conclusions.

    However, SKYMTL's tone sounds a bit odd for a supposedly neutral reviewer. Claiming that Evergreen has "castrated tessellation performance" is a HUGE claim; he should at least back it up with some evidence.

  23. #1873
    Xtreme Mentor
    Join Date
    Jan 2009
    Location
    Oslo - Norway
    Posts
    2,879
    Tessellation is one of the few and most important new features introduced by DX11. There is so much new here, and it needs an open mind to investigate and understand it without getting offensive toward others' ideas and opinions.

    Apparently, nVidia is using a different implementation of tessellation, and it can affect both the picture detail/quality and the performance. It is important to learn the real differences through constructive discussion.

    ASUS P8P67 Deluxe (BIOS 1305)
    2600K @4.5GHz 1.27v , 1 hour Prime
    Silver Arrow , push/pull
    2x2GB Crucial 1066MHz CL7 ECC @1600MHz CL9 1.51v
    GTX560 GB OC @910/2400 0.987v
    Crucial C300 v006 64GB OS-disk + F3 1TB + 400MB RAMDisk
    CM Storm Scout + Corsair HX 1000W
    +
    EVGA SR-2 , A50
    2 x Xeon X5650 @3.86GHz(203x19) 1.20v
    Megahalem + Silver Arrow , push/pull
    3x2GB Corsair XMS3 1600 CL7 + 3x4GB G.SKILL Trident 1600 CL7 = 18GB @1624 7-8-7-20 1.65v
    XFX GTX 295 @650/1200/1402
    Crucial C300 v006 64GB OS-disk + F3 1TB + 2GB RAMDisk
    SilverStone Fortress FT01 + Corsair AX 1200W

  24. #1874
    Xtreme Enthusiast
    Join Date
    Jun 2005
    Posts
    960
    Quote Originally Posted by Migi06 View Post
    Nvidia real demo run vs Nvidia PR slide (Hint. Check 60-120 sec):
    Good stuff there.

    Quote Originally Posted by LordEC911 View Post
    As for the dual GF100, just like the G200 it will need a shrink, unless Nvidia and TSMC have some magic pixie dust to use on the Bx silicon.
    Or maybe they can release some limited Mars edition (or something) where they exceed the PCI-E power limit and draw 500W+ at stock for the dual-GPU part.

  25. #1875
    Xtreme Enthusiast
    Join Date
    Jun 2006
    Location
    Space
    Posts
    769
    Quote Originally Posted by BenchZowner View Post
    I heard nVIDIA gave him 20k$.

    And you've been here for how long to know SKYMTL well ?
    He went right down in a lot of people's estimations when he did that very biased review of Nvidia's 'big bang' drivers.

    Everyone could see the reviewers were told which apps they had to run, as 90% of the websites ran the same 4 apps, yet he was on here arguing it was all fair and above board.

    That's why a lot of people lost respect for him and the Canucks website that day.
    Last edited by Motiv; 03-05-2010 at 03:31 AM.
