Thread: The Fermi Thread - Part 3

  1. #1
    Xtreme Addict
    Join Date
    Aug 2007
    Location
    Istantinople
    Posts
    1,574
    Quote Originally Posted by Behemot View Post
    Oh yes, keep repeating. It is closer to RV770, I did not say it isn't, but it is definitely far away from R600. If you say so, you mean the gaming side of the GPU is the same, and in that case it is just as fair to say GF100 is the same as G80, too. In both architectures, AMD's and NVIDIA's, there is a whole ton of GPGPU changes compared to the old R600/G80. But who cares?! Do you "play" video encoding? Do you "play" Photoshop or what? I'd say 99% play games.

    So if the cards are 20% better on average, it will stay that way. No driver miracles are possible. I only hope people will finally stop with this "drivers will change everything" BS.
    RV770 was released 2 years ago, and by all means RV870 is pretty much a shrunk + upscaled RV770. So do you think this is the same situation as GF100 vs GT200b?

    Drivers won't change everything, but I'm pretty sure the GTX 480 will see more driver-based performance improvements in the future than the HD 5870.
    Has anyone really been far even as decided to use even go want to do look more like?
    INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"

  2. #2
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Czech Republic, 50°4'52.22"N, 14°23'30.45"E
    Posts
    474
    Quote Originally Posted by annihilat0r View Post
    RV770 was released 2 years ago, and by all means RV870 is pretty much a shrunk + upscaled RV770. So do you think this is the same situation as GF100 vs GT200b?

    Drivers won't change everything, but I'm pretty sure the GTX 480 will see more driver-based performance improvements in the future than the HD 5870.
    No, it is not, and that's what I tried to point out. If you say GF100 is different from GT200, you just cannot say RV870 is the same as R600, because they differ architecturally too. And if you say R600=RV870, then by the same logic you have to say G80=GF100.

    By the way, in terms of shader units this is true: both firms' improvements were mainly for GPGPU. The gaming-related changes are only the addition of new instructions for DX 10.1 and DX 11, more shader units, more memory and some other stuff (like improved compression algorithms, better anti-aliasing, etc.).

    We'll see, but I personally doubt there will be any drastic performance speed-ups. The Catalyst 10.2/10.3 impact on speed is speculative, too. Those releases were much more about bug fixes, and that's what I expect from ForceWare as well.
    Quote Originally Posted by zalbard View Post
    I think we should start a new "Fermi part <InsertNumberHere>" thread each time it's delayed in this fashion!
    Quote Originally Posted by Movieman View Post
    Heck, I think we should start a whole new forum dedicated to hardware delays.

  3. #3
    Xtreme Addict
    Join Date
    Aug 2007
    Location
    Istantinople
    Posts
    1,574
    Quote Originally Posted by Behemot View Post
    No, it is not, and that's what I tried to point out. If you say GF100 is different from GT200, you just cannot say RV870 is the same as R600, because they differ architecturally too. And if you say R600=RV870, then by the same logic you have to say G80=GF100.

    By the way, in terms of shader units this is true: both firms' improvements were mainly for GPGPU. The gaming-related changes are only the addition of new instructions for DX 10.1 and DX 11, more shader units, more memory and some other stuff (like improved compression algorithms, better anti-aliasing, etc.).

    We'll see, but I personally doubt there will be any drastic performance speed-ups. The Catalyst 10.2/10.3 impact on speed is speculative, too. Those releases were much more about bug fixes, and that's what I expect from ForceWare as well.
    But then, if the gaming-related enhancements and differences in GF100 are just added shader units and the like, how do you account for the huge performance differences between games? In one game it tramples the 5870, in another it falls behind despite having 2x the transistors.

    Some guys at B3D have pointed out that this is related to how geometry-intensive a game is. Apparently FC2 is such a game, whereas Crysis is just shader/texturing intensive, which accounts for the big performance difference.

    And if this is true, it should mean that GF100 is architecturally very different.
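    A toy model makes that argument concrete (a rough sketch only; all throughput and workload numbers below are made up for illustration, not measured from any card): if each frame is limited by whichever of its geometry or shading workload finishes last, a chip with much stronger geometry throughput can win the geometry-heavy game and still lose the shader-heavy one.
    Code:
    # Toy model of geometry-bound vs shader-bound games. All numbers are
    # hypothetical; they only illustrate how the winner can flip per game.

    def frame_time(work, geom_rate, shade_rate):
        # Frame time (ms) when the slower of the two stages sets the pace.
        return max(work["geometry"] / geom_rate, work["shading"] / shade_rate)

    games = {
        "geometry-heavy (FC2-like)":  {"geometry": 8.0, "shading": 3.0},
        "shader-heavy (Crysis-like)": {"geometry": 2.0, "shading": 9.0},
    }
    # Made-up relative throughputs: chip A strong in geometry, chip B in shading.
    chips = {"chip A": (4.0, 1.0), "chip B": (1.0, 1.2)}

    for game, work in games.items():
        times = {name: frame_time(work, g, s) for name, (g, s) in chips.items()}
        winner = min(times, key=times.get)
        print(game + ": " + ", ".join(f"{n} {t:.1f} ms" for n, t in times.items())
              + f" -> {winner} is faster")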
    Has anyone really been far even as decided to use even go want to do look more like?
    INTEL Core i7 920 // ASUS P6T Deluxe V2 // OCZ 3G1600 6GB // POWERCOLOR HD5970 // Cooler Master HAF 932 // Thermalright Ultra 120 Extreme // SAMSUNG T260 26"

  4. #4
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Czech Republic, 50°4'52.22"N, 14°23'30.45"E
    Posts
    474
    Quote Originally Posted by annihilat0r View Post
    But then, if the gaming-related enhancements and differences in GF100 are just added shader units and the like, how do you account for the huge performance differences between games? In one game it tramples the 5870, in another it falls behind despite having 2x the transistors.

    Some guys at B3D have pointed out that this is related to how geometry-intensive a game is. Apparently FC2 is such a game, whereas Crysis is just shader/texturing intensive, which accounts for the big performance difference.

    And if this is true, it should mean that GF100 is architecturally very different.
    This is related to how the game is written. Most performance increases could be achieved by optimizing the game, not by optimizing the drivers. Driver optimization is an overrated, and in fact wrong, approach, and that is why we mostly see only a few percent of improvement from it.

    I give Unreal Engine as an example every time: as Epic presented at GDC 2008, they focus on optimizing the engine/games before release, because it is cheaper. For example, they have several machines running the game 24/7 and logging it; when there is a performance drop somewhere, they track down the reason. We all know BioShock looks pretty good and runs smoothly on almost anything DX10-capable (if you don't count the GF 8400/HD 3450 as DX10-capable). They are probably the last developer doing this; others usually roll a game out with critical bugs and don't care. But it pays off, since UE is the most common engine.
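    Just as a sketch of the kind of automated run-and-log testing described above (the file name, log format and 30 FPS threshold are my own assumptions, not anything Epic described): a small script can scan a frame-time log and flag the spots worth investigating.
    Code:
    # Scan a frame-time log (one frame time in milliseconds per line, a
    # hypothetical format) and flag frames slower than a target frame rate.
    TARGET_FPS = 30
    LIMIT_MS = 1000.0 / TARGET_FPS

    with open("frametimes.log") as log:  # assumed log file name
        for frame, line in enumerate(log, start=1):
            frame_ms = float(line.strip())
            if frame_ms > LIMIT_MS:
                print(f"frame {frame}: {frame_ms:.1f} ms "
                      f"({1000.0 / frame_ms:.1f} FPS) - below {TARGET_FPS} FPS target")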

    And of course this also comes down to how the engine works. For example, old games often used post-processing rather than advanced shader features, so they do not perform much better on an HD 3870 than on a Radeon X800. F.E.A.R. is an example I remember, since I still play it when I have time (once a week or so); I am used to playing at a 20 FPS average, but this game sometimes falls under 20 FPS even with an HD 3870 @ 915 MHz and a Windsor at 2.8 GHz.

    You should also not forget how AMD's stream processors work: if the SPs are fed optimally, up to 5 instructions can be processed per clock. If not, you get a performance drop. It makes a huge difference whether you feed 3 or 5 instructions on average, which is why the Radeons win in some places and lose in others.
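    A minimal sketch of that point (the packing counts below are hypothetical): each 5-wide stream processor cluster only reaches peak throughput when the compiler finds 5 independent instructions to issue together, so the average number of filled slots directly sets the fraction of peak ALU rate you actually get.
    Code:
    # Effect of VLIW5 slot utilization on achieved ALU throughput.
    # 'packed' = average independent instructions co-issued per 5-slot bundle.
    SLOTS_PER_BUNDLE = 5

    def effective_throughput(packed, slots=SLOTS_PER_BUNDLE):
        # Fraction of peak ALU rate actually achieved.
        return packed / slots

    for packed in (5, 4, 3, 2):
        print(f"{packed}/{SLOTS_PER_BUNDLE} slots filled -> "
              f"{effective_throughput(packed):.0%} of peak ALU rate")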
    Quote Originally Posted by zalbard View Post
    I think we should start a new "Fermi part <InsertNumberHere>" thread each time it's delayed in this fashion!
    Quote Originally Posted by Movieman View Post
    Heck, I think we should start a whole new forum dedicated to hardware delays.
