
Thread: Official HD 2900 Discussion Thread


  1. #1
    The Blue Dolphin
    Join Date
    Nov 2004
    Location
    The Netherlands
    Posts
    2,816
    Quote Originally Posted by Tim View Post
    Which doesn't mean anything at all. Afaik they can't be compared directly to nVidia's, but at least it should show some benefit in DX9, which it clearly does not.
    Why should it? In DX9 a static vertex/pixel configuration is chosen by the driver. I think you can imagine that a lot of tweaking per game / graphics setting can be achieved. I for one am convinced that just changing the name of a game's executable to 3DMark06.exe (for example) changes the card's performance in that particular game.

    I'm expecting the 64*5 shader config of the R600 to be more powerful than the simpler 128*1 config of the G80, but I have no facts to back that up. It's just that ATI probably knows more about unified architecture design now than Nvidia did many months ago before releasing the G80. If 64*5 weren't faster, ATI would have chosen the simpler 128*1 design.
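    The 64*5 vs 128*1 comparison can be put into rough numbers. This is a back-of-envelope sketch only: the clock figures are assumed round numbers, not confirmed specs, and real throughput depends heavily on the instruction mix and how well a 5-wide unit's slots get filled.

```python
# Back-of-envelope peak shader throughput. Clocks are assumed round
# numbers for illustration, not official specifications.

def peak_ops_per_second(units, ops_per_unit, clock_hz):
    """Theoretical peak scalar shader operations per second."""
    return units * ops_per_unit * clock_hz

r600_peak = peak_ops_per_second(64, 5, 742e6)    # 64 units x 5-wide, ~742 MHz core (assumed)
g80_peak  = peak_ops_per_second(128, 1, 1350e6)  # 128 scalar units, ~1.35 GHz shader clock (assumed)

print(f"R600 peak: {r600_peak / 1e9:.1f} GOPS")  # 237.4 GOPS
print(f"G80 peak:  {g80_peak / 1e9:.1f} GOPS")   # 172.8 GOPS

# A 5-wide unit only hits peak if all 5 slots hold useful work every
# cycle; at an average of, say, 3.5 useful slots the lead shrinks:
effective = peak_ops_per_second(64, 3.5, 742e6)
print(f"R600 at 3.5/5 slots: {effective / 1e9:.1f} GOPS")  # 166.2 GOPS
```

    So on paper the wider design wins, but whether it wins in practice hinges on how full those 5-wide units stay, which is exactly the "no facts to back it up" part.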
    Blue Dolphin Reviews & Guides

    Blue Reviews:
    Gigabyte G-Power PRO CPU cooler
    Vantec Nexstar 3.5" external HDD enclosure
    Gigabyte Poseidon 310 case


    Blue Guides:
    Fixing a GFX BIOS checksum yourself


    98% of the internet population has a Myspace. If you're part of the 2% that isn't an emo bastard, copy and paste this into your sig.

  2. #2
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Quote Originally Posted by alexio View Post
    Why should it? In DX9 a static vertex/pixel configuration is chosen by the driver. I think you can imagine that a lot of tweaking per game / graphics setting can be achieved. I for one am convinced that just changing the name of a game's executable to 3DMark06.exe (for example) changes the card's performance in that particular game.

    I'm expecting the 64*5 shader config of the R600 to be more powerful than the simpler 128*1 config of the G80, but I have no facts to back that up. It's just that ATI probably knows more about unified architecture design now than Nvidia did many months ago before releasing the G80. If 64*5 weren't faster, ATI would have chosen the simpler 128*1 design.
    Hmm, although I'm not really that technical, shouldn't we see some degree of that stream processing power even with a fixed config? And I wouldn't say ATI knows more about unified architectures than nVidia. The thing they had is the Xenos GPU in the Xbox 360, so yes, this is their second gen, but the R600 was already under development when that happened. So I wouldn't say too loudly that they have more knowledge. Whatever nVidia did, they did it well. If all those stream processors are unified in DX10 instead of in fixed ratios, that should boost performance even more, shouldn't it?

    Quote Originally Posted by Miwo View Post
    What's important to me is the present. DX10 benches mean nothing when the games aren't even out. By the time Crysis and other DX10 games roll out, the R650 on 65nm will be out, which will be cheaper to manufacture and have lower power consumption... so I don't exactly see the point in jumping the gun and spending on an R600 that allegedly has awesome DX10 performance. DX9-wise, it's not much better at all than the X1900 XTX. That's a disappointment.
    Well I wouldn't say that. People have been waiting very long, so I don't think they want to wait for the R650; I'm not going to, that's for sure. Anyway, the 2900 XT is much, much better than the X1900 XTX in a lot of benches. Sure, in some the difference is small, but the GTS, for instance, seriously beats the X1900 XTX in just about every game.

    And I think Crysis is nearer than everyone thinks. I put my money on late June for a playable demo; the R650 will be September-ish, which is another 4.5(!) months from now.
    Last edited by Tim; 05-12-2007 at 10:25 AM.

    Intel Core i7-3770K
    ASUS P8Z77-I DELUXE
    EVGA GTX 970 SC
    Corsair 16GB (2x8GB) Vengeance LP 1600
    Corsair H80
    120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
    Corsair RM650
    Cooler Master Elite 120 Advanced
    OC: 5Ghz | +0.185 offset : 1.352v

  3. #3
    The Blue Dolphin
    Join Date
    Nov 2004
    Location
    The Netherlands
    Posts
    2,816
    Quote Originally Posted by Tim View Post
    Hmm, although I'm not really that technical, shouldn't we see some degree of that stream processing power even with a fixed config?
    Okay, a little explanation then.

    For this example I'll use the NV40 (6800-series architecture). That chip had 16 pixel pipelines and 6 vertex pipelines. In some games a 12/10 config might have been faster, but the ratio was fixed, so the driver could not change it. To use an easy number, let's say the R600 has 64 pipelines that can perform vertex operations as well as pixel operations. DX9 can only use a static configuration, so each of these 64 pipelines is told by the driver to act as either a vertex pipe or a pixel pipe.

    A possible config is 32/32 pixel/vertex. However, some games may benefit from more pixel processing power and not need as much vertex processing power. The driver can change the ratio to 48/16, yielding much better performance in that particular game.

    To make this change the driver has to "know" what setting to use for each individual game (and maybe even each graphics setting) before it will use anything other than the standard fixed ratio, which I expect it falls back to when it doesn't recognize a game. This way R600 performance can be tweaked per game or per setting. It takes a lot of testing to tune the R600 for all the DX9 games currently out. Nvidia had the time to do this really well with the G80; ATI had a lot less time.
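    The per-game profile mechanism described above can be sketched as a lookup table keyed on the executable name, which is also why renaming an executable could change performance. The game names and ratios below are invented for illustration; this is a toy model, not how any actual driver is written.

```python
# Toy sketch of driver-side per-game vertex/pixel profiles.
# Executable names and splits are hypothetical examples.

TOTAL_PIPES = 64
DEFAULT_SPLIT = (32, 32)  # (pixel, vertex) fallback for unrecognized games

GAME_PROFILES = {
    "oblivion.exe": (48, 16),   # hypothetical pixel-heavy title
    "3dmark06.exe": (40, 24),   # hypothetical tuned benchmark profile
}

def pick_split(executable_name):
    """Return the (pixel, vertex) pipe split the driver would apply."""
    pixel, vertex = GAME_PROFILES.get(executable_name.lower(), DEFAULT_SPLIT)
    assert pixel + vertex == TOTAL_PIPES  # every pipe gets assigned one role
    return pixel, vertex

print(pick_split("Oblivion.exe"))  # (48, 16) - recognized, tuned split
print(pick_split("unknown.exe"))   # (32, 32) - falls back to the default
```

    Building that table is exactly the per-game testing effort the paragraph above describes, which is where the time difference between Nvidia and ATI matters.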

    And I wouldn't say ATI knows more about unified architectures than nVidia. The thing they had is the Xenos GPU in the Xbox 360, so yes, this is their second gen, but the R600 was already under development when that happened. So I wouldn't say too loudly that they have more knowledge.
    Maybe so, but your guess is as good as mine. We have no idea when the R600 and G80 designs were finalized, and those facts are critical for a good judgement. My guess is based on ATI having had more time to change the R600's design while working in parallel with game devs on DX10 titles.
    Whatever nVidia did, they did it well. If all those stream processors are unified in DX10 instead of in fixed ratios, that should boost performance even more, shouldn't it?
    Yes, the exact same game with the same graphical settings should run at higher framerates in DX10. However, the difference may be very small thanks to DX9 vertex/pixel ratio tweaking.
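    The reason a unified pool should win, at least a little, against even a well-tuned fixed split is simple to illustrate: with a fixed ratio, whichever side runs out of work leaves pipes idle, while a unified pool keeps every pipe busy. The workload numbers below are arbitrary, chosen only to make the idle-pipe effect visible.

```python
# Toy comparison of fixed vs unified pipe allocation for one "tick"
# of work. Workload figures are arbitrary illustration values.

def fixed_throughput(pixel_pipes, vertex_pipes, pixel_work, vertex_work):
    """Work completed per tick: each side capped by its own pipe count."""
    return min(pixel_pipes, pixel_work) + min(vertex_pipes, vertex_work)

def unified_throughput(total_pipes, pixel_work, vertex_work):
    """Unified pool: any pipe can take either kind of work."""
    return min(total_pipes, pixel_work + vertex_work)

# A pixel-heavy frame: 55 units of pixel work, 9 of vertex work.
print(fixed_throughput(32, 32, 55, 9))  # 41 - 23 vertex pipes sit idle
print(fixed_throughput(48, 16, 55, 9))  # 57 - a tuned split recovers most of it
print(unified_throughput(64, 55, 9))    # 64 - every pipe busy
```

    Note how close the tuned 48/16 split gets to the unified result, which is the point above: good per-game DX9 tweaking can shrink the DX10 advantage to almost nothing.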
    Last edited by alexio; 05-12-2007 at 10:56 AM.

  4. #4
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Thank you very much for taking the time to explain it. I did understand that more or less, but... am I right in saying that whatever makes the card assign the vertex/pixel shaders on the fly in a DX10 game is the most crucial bit? Is it called a scheduler? If that sucks, then it could very well mean that the R600 could beat the G80, right? Or vice versa.

    Thanks again, I try to read up on stuff like that, but most explanations still go way over my head unfortunately. I'm more of a gamer.
    Last edited by Tim; 05-12-2007 at 10:59 AM.


  5. #5
    The Blue Dolphin
    Join Date
    Nov 2004
    Location
    The Netherlands
    Posts
    2,816
    Quote Originally Posted by Tim View Post
    Thank you very much for taking the time to explain it. I did understand that more or less, but... am I right in saying that whatever makes the card assign the vertex/pixel shaders on the fly in a DX10 game is the most crucial bit? Is it called a scheduler? If that sucks, then it could very well mean that the R600 could beat the G80, right? Or vice versa.
    Yes, the scheduler is important. In the G80 it runs at half the shader clock, so there are latencies involved here. The scheduler assigns a thread to a streaming processor, and of course it's more efficient to have the scheduler run at the same clock as the stream processors. I'm not sure how big the difference is though; if the scheduler can assign two threads to one processor, the penalty is rather small. I don't know enough about the G80 architecture to tell you about the latencies involved.

    I have no information at all regarding the R600's scheduler, so I can't tell you how it compares to the one in the G80. The architectures are so complex and so different from each other that it's hard to pick the better one from specs alone. Only benches will tell the truth, after both companies have had time to optimize for DX10 (and ATI for DX9).
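    The "two threads per processor" point is about latency hiding, and a toy model makes it concrete: if the scheduler can park a second thread on a stream processor, a stall in one thread is covered by running the other. The cycle counts here are invented, purely to show the shape of the effect.

```python
# Toy model of latency hiding via multiple resident threads per
# stream processor (SP). Cycle counts are invented for illustration.

def busy_cycles(threads_per_sp, compute_cycles, stall_cycles):
    """Fraction of cycles an SP does useful work over one stall period."""
    total = compute_cycles + stall_cycles
    # With n resident threads, up to n * compute_cycles of useful work
    # can overlap the same window; utilization caps at 100%.
    return min(1.0, threads_per_sp * compute_cycles / total)

# Each thread computes for 4 cycles, then stalls for 4 (e.g. waiting on memory).
print(f"1 thread/SP:  {busy_cycles(1, 4, 4):.0%}")  # 50% - stalls fully exposed
print(f"2 threads/SP: {busy_cycles(2, 4, 4):.0%}")  # 100% - stalls hidden
```

    This is why a slower scheduler isn't necessarily fatal: as long as it keeps enough threads resident, the latency it adds gets hidden the same way memory latency does.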

  6. #6
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,692
    Quote Originally Posted by alexio View Post
    Yes, the scheduler is important. In the G80 it runs at half the shader clock, so there are latencies involved here. The scheduler assigns a thread to a streaming processor, and of course it's more efficient to have the scheduler run at the same clock as the stream processors. I'm not sure how big the difference is though; if the scheduler can assign two threads to one processor, the penalty is rather small. I don't know enough about the G80 architecture to tell you about the latencies involved.

    I have no information at all regarding the R600's scheduler, so I can't tell you how it compares to the one in the G80. The architectures are so complex and so different from each other that it's hard to pick the better one from specs alone. Only benches will tell the truth, after both companies have had time to optimize for DX10 (and ATI for DX9).
    Dankjewel for the uitleg. (Thanks for the explanation)

    Much appreciated...I guess we'll just have to wait and see.


  7. #7
    The Blue Dolphin
    Join Date
    Nov 2004
    Location
    The Netherlands
    Posts
    2,816
    Quote Originally Posted by Tim View Post
    Dankjewel for the uitleg. (Thanks for the explanation)

    Much appreciated...I guess we'll just have to wait and see.
    Graag gedaan = no problem

  8. #8
    Registered User
    Join Date
    Feb 2006
    Posts
    98
    Quote Originally Posted by alexio View Post
    Yes, the scheduler is important. In the G80 it runs at half the shader clock, so there are latencies involved here. The scheduler assigns a thread to a streaming processor, and of course it's more efficient to have the scheduler run at the same clock as the stream processors. I'm not sure how big the difference is though; if the scheduler can assign two threads to one processor, the penalty is rather small. I don't know enough about the G80 architecture to tell you about the latencies involved.

    I have no information at all regarding the R600's scheduler, so I can't tell you how it compares to the one in the G80. The architectures are so complex and so different from each other that it's hard to pick the better one from specs alone. Only benches will tell the truth, after both companies have had time to optimize for DX10 (and ATI for DX9).
    From http://www.elitebastards.com/cms/ind...1&limitstart=2 :
    We've already mentioned the scheduler present alongside every cluster of sixteen Stream Processors. As well as this, there is also a global scheduler present in G80, which oversees the graphics core as a whole.

  9. #9
    Xtreme Addict
    Join Date
    Nov 2005
    Location
    UK
    Posts
    1,074
    Last edited by fornowagain; 05-12-2007 at 03:44 PM.

    i7| EX58-EXTREME | SSD M225 | Radbox | 5870CF + 9600GT
