Page 59 of 61 FirstFirst ... 949565758596061 LastLast
Results 1,451 to 1,475 of 1518

Thread: Official HD 2900 Discussion Thread

  1. #1451
    Xtreme Member
    Join Date
    Sep 2006
    Posts
    304
So does this mean the AA issue can be fixed by drivers?

  2. #1452
    Xtreme Enthusiast
    Join Date
    Dec 2006
    Location
    France
    Posts
    741
I think maybe yes, since it's partly done in software.
    AMD Phenom II X2 550@Phenom II X4 B50
    MSI 890GXM-G65
    Corsair CMX4GX3M2A1600C9 2x2GB
    Sapphire HD 6950 2GB

  3. #1453
    Registered User
    Join Date
    Feb 2006
    Posts
    98
    Quote Originally Posted by XeRo View Post
So does this mean the AA issue can be fixed by drivers?
What issue? The poor performance in current games compared to the G80? He never said it could be fixed (or that it couldn't be fixed); instead he said: forget about old games, let's focus on AA for future games. They asked him the wrong question tbh.

I am fairly sure they will optimize their shader-based AA solution further and try to program their programmable MSAA to do the resolve in the back-end, to see if it is faster with older games.

He seemed to imply that MSAA resolve in the back-end can only be done linearly, whilst HDR requires a non-linear resolve to work correctly. If that is true, then either NVIDIA doesn't do MSAA resolve in the back-end, found a way around it, or has different image quality with HDR AA (the lighting will be different). This would be very difficult to verify, since you need the exact same settings for both videocards you are testing and a good dose of HDR light.

    Basically I am quite confused
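The linear vs. non-linear resolve point above can be sketched numerically. This is a toy illustration (my own, not actual R600/G80 behavior, and 2.2 is just a stand-in gamma): averaging MSAA samples in linear space and averaging them after a non-linear (gamma) encode give different pixel values, which is exactly why the resolve order matters for HDR.

```python
# Toy sketch: box-resolving four MSAA samples in linear space vs. in
# gamma (non-linear) space produces different edge pixels.

def to_gamma(linear, g=2.2):
    # Encode a linear intensity with a simple power-law gamma curve.
    return linear ** (1.0 / g)

# Four coverage samples of one pixel: a bright edge over a dark background.
samples_linear = [1.0, 1.0, 0.01, 0.01]

# Resolve in linear space (what HDR needs): average first, encode after.
resolve_linear = to_gamma(sum(samples_linear) / len(samples_linear))

# Resolve in gamma space (a resolve unaware of the encoding): encode each
# sample first, then average the encoded values.
resolve_gamma = sum(to_gamma(s) for s in samples_linear) / len(samples_linear)

print(round(resolve_linear, 3))  # the linear-space resolve is brighter...
print(round(resolve_gamma, 3))   # ...than the gamma-space resolve
```

The two results differ by a clearly visible margin, so two cards resolving in different spaces really would show different lighting on HDR edges, as speculated above.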

    Quote Originally Posted by cadaveca View Post
    NWN2, details maxed, outdoor town, 1680x1050, 16FPS avg
    Your native TFT resolution is 1680x1050 I guess. I see the drivers haven't improved for that game. I was hoping for more speed, but okay. NWN2 was tested on http://vr-zone.com/?i=4946&s=13
    Last edited by Noobie; 05-17-2007 at 06:33 AM.

  4. #1454
    Xtreme Addict
    Join Date
    Nov 2006
    Posts
    1,402
If I understood right, ATI improved their new shader-based AA and doesn't care about classic AA. Am I right? (English is not my first language ^^)

If I understood well, then if ATI forces the driver to do shader AA, AA will perform well?

These drivers could kill the GTX

  5. #1455
    Xtreme Mentor
    Join Date
    Jul 2004
    Location
    Ontario
    Posts
    2,780
I find it really funny how it is mentioned that the card was optimized for shader-based AA. As far as I understood, this was a desperate attempt from ATI to include AA on the card without having to do yet another silicon respin due to a problem with the original AA resolve method. They did not want another delay, so they opted for this instead. I still feel this was a real bad move and one of the main reasons why the card suffers performance-wise when AA and AF are enabled. Until I see a driver update that corrects this, I am going to have to feel this way, unfortunately.

If shader-based AA really is a thing for future games, then great, but what about the 10,000 other games on the market that don't agree with that method performance-wise? I would hate to spend $400+ to play the maybe 2-3 games that will be out this year that may take advantage of this new AA method. To me, it is all about playing the older games the best they have ever been played, while still being able to play future games decently too.

I am very anxiously awaiting the end of next week, when new officially released drivers are supposed to hit with what are supposed to be major performance increases, and hopefully not at the expense of IQ, which is another nasty rumor I keep hearing. My trigger finger is staying off the buy button till I get some more solid answers.
    Silverstone Temjin TJ-09BW w/ Silverstone DA750
    Asus P8P67
    2600K w/ Thermalright Venomous X Black w/ Sanyo Denki San Ace 109R1212H1011
    8GB G.Skill DDR-1600 7-8-7-24
    Gigabyte GTX 460 1G
    Modded Creative X-Fi Fatal1ty w/ Klipsch Promedia 2.1
    1 X 120GB OCZ Vertex
    1 X 300GB WD Velociraptor HLFS
    1 X Hitachi 7K1000 1TB
    Pioneer DVR-216L DVD-RW
    Windows 7 Ultimate 64


    Quote Originally Posted by alexio View Post
    From the hip and aim at the kitchen if she doesn't approve your purchases. She'll know better next time.

  6. #1456
    I am Xtreme
    Join Date
    Mar 2005
    Location
    Edmonton, Alberta
    Posts
    4,594
    Quote Originally Posted by Noobie View Post
    Your native TFT resolution is 1680x1050 I guess. I see the drivers haven't improved for that game. I was hoping for more speed, but okay. NWN2 was tested on http://vr-zone.com/?i=4946&s=13
CPU limits are definitely imposed on this game too. Still just running a stock Q6600... this weekend I'll have time to pull the rig apart (it's installed in the case, etc. already) and start some real benching.

  7. #1457
    Registered User
    Join Date
    Feb 2006
    Posts
    98
    Quote Originally Posted by cadaveca View Post
CPU limits are definitely imposed on this game too. Still just running a stock Q6600... this weekend I'll have time to pull the rig apart (it's installed in the case, etc. already) and start some real benching.
Mmm, now that you mention CPU limiting: it's almost like developers are spending more time programming in a threaded model than optimizing. The latter is still far more advantageous performance-wise.

I find it really funny how it is mentioned that the card was optimized for shader-based AA. As far as I understood, this was a desperate attempt from ATI to include AA on the card without having to do yet another silicon respin due to a problem with the original AA resolve method. They did not want another delay, so they opted for this instead. I still feel this was a real bad move and one of the main reasons why the card suffers performance-wise when AA and AF are enabled. Until I see a driver update that corrects this, I am going to have to feel this way, unfortunately. If shader-based AA really is a thing for future games, then great, but what about the 10,000 other games on the market that don't agree with that method performance-wise? I would hate to spend $400+ to play the maybe 2-3 games that will be out this year that may take advantage of this new AA method. To me, it is all about playing the older games the best they have ever been played, while still being able to play future games decently too. I am very anxiously awaiting the end of next week, when new officially released drivers are supposed to hit with what are supposed to be major performance increases, and hopefully not at the expense of IQ, which is another nasty rumor I keep hearing. My trigger finger is staying off the buy button till I get some more solid answers.
From recent news I heard the problem is both exaggerated and underrated. The AF problem is exaggerated; the difference between G80/G84 and the 2900 is supposedly very hard to find in games. The AA problem is far worse in motion, BUT! it only occurs when the wide & narrow tent filters are used; there is no problem when you're using pure MSAA (2x, 4x, 8x).

  8. #1458
    Xtreme Addict
    Join Date
    Sep 2006
    Location
    Stamford, UK
    Posts
    1,336
    Quote Originally Posted by Noobie View Post

From recent news I heard the problem is both exaggerated and underrated. The AF problem is exaggerated; the difference between G80/G84 and the 2900 is supposedly very hard to find in games. The AA problem is far worse in motion, BUT! it only occurs when the wide & narrow tent filters are used; there is no problem when you're using pure MSAA (2x, 4x, 8x).
Isn't that because the graphics card doesn't render MSAA...? I heard that although it is set to be on, it isn't actually on
    FX8350 @ 4.0Ghz | 32GB @ DDR3-1200 4-4-4-12 | Asus 990FXA @ 1400Mhz | AMD HD5870 Eyefinity | XFX750W | 6 x 128GB Sandisk Extreme RAID0 @ Aerca 1882ix with 4GB DRAM
    eXceed TJ07 worklog/build

  9. #1459
    Registered User
    Join Date
    May 2005
    Posts
    3,691
    Quote Originally Posted by madcho View Post
If I understood right, ATI improved their new shader-based AA and doesn't care about classic AA. Am I right? (English is not my first language ^^)

If I understood well, then if ATI forces the driver to do shader AA, AA will perform well?

These drivers could kill the GTX
    You understood wrong...

    Problem with Shader AA is that you're taking shader power to do something that could be done by dedicated hardware.

So now ATi (I refuse to say AMD had anything to do with this part) is going to attempt to offload physics onto the shaders AND AA onto the shaders? That's just not going to work.

Also, ATi doesn't have to force their driver; it already does it. That's why performance is so low with AA.
    Quote Originally Posted by Leon2ky
    "dammit kyle what's with the 30 second sex lately?" "Sorry sweetie, I overclocked my nuts and they haven't been stable since"
    Quote Originally Posted by trinibwoy View Post
    I don't think his backside has internet access.
    Quote Originally Posted by n00b 0f l337 View Post
    Hey I just met you
    And this is crazy
    But I'm on bath salts
    And your face looks tasty

  10. #1460
    Registered User
    Join Date
    Feb 2006
    Posts
    98
    Quote Originally Posted by eXceededgoku View Post
Isn't that because the graphics card doesn't render MSAA...? I heard that although it is set to be on, it isn't actually on
    I've never heard this, but that doesn't mean it isn't so. Got a source we can read?

  11. #1461
    I am Xtreme
    Join Date
    Mar 2005
    Location
    Edmonton, Alberta
    Posts
    4,594
AA works here, although there ARE some niggling issues, as it does not seem to be applied to all textures in some applications.

  12. #1462
    Registered User
    Join Date
    Jul 2006
    Posts
    13
Will the 65nm process permit a higher shader clock? If AMD raises the shader clock, performance will scale much more than on the GeForce, right?
Maybe AMD will come up with a 1GHz core and 1GHz for the 320 shaders on the R650. One of the problems with the R600 is the lower shader clock (half the clock of NVIDIA's 128 shaders).

I don't know much about shaders...

  13. #1463
    Xtreme Addict
    Join Date
    Sep 2006
    Location
    Stamford, UK
    Posts
    1,336
    Quote Originally Posted by cadaveca View Post
AA works here, although there ARE some niggling issues, as it does not seem to be applied to all textures in some applications.
That is what I am referring to. I read it in one of the many articles, so it's not really a fair comparison to NVIDIA's...
    FX8350 @ 4.0Ghz | 32GB @ DDR3-1200 4-4-4-12 | Asus 990FXA @ 1400Mhz | AMD HD5870 Eyefinity | XFX750W | 6 x 128GB Sandisk Extreme RAID0 @ Aerca 1882ix with 4GB DRAM
    eXceed TJ07 worklog/build

  14. #1464
    Xtreme Enthusiast
    Join Date
    Dec 2006
    Location
    France
    Posts
    741
    AMD Phenom II X2 550@Phenom II X4 B50
    MSI 890GXM-G65
    Corsair CMX4GX3M2A1600C9 2x2GB
    Sapphire HD 6950 2GB

  15. #1465
    Xtreme News
    Join Date
    Dec 2005
    Location
    California
    Posts
    1,594


    8800 GTS score *all stock* 3dMark 06 (CPU: Intel E6600)




    2900 XT score *all stock* 3dMark 06 (CPU: Intel E6400)




2900 XT FEAR benchmark, AA x4, max resolution, max everything, Anisotropic x8, no soft shadows.
Check out these scores! What do you guys think? It's from a member of OCN
    Last edited by sladesurfer; 05-17-2007 at 12:26 PM.

  16. #1466
    Registered User
    Join Date
    Feb 2006
    Posts
    98
    Good link, but no answers:
    So it's possible the ALU is broken, or some other logic in there is borked. Or they simply just want to use CFAA as the default resolve path regardless, even if the hardware does work, and I'm just completely wrong.
Basically no one is able to figure out why ATI is using shaders to perform the AA, whilst it is clearly not the way to go performance-wise.

Hypothetically it could be a major oversight on ATI's part: believing the shaders were so powerful they could handle it; after all, shader AA has a huge flexibility advantage and is DX10.1 spec. But in practice it was too slow, and they had to redo the hardware. They redid the hardware, but the software side wasn't implemented in the drivers yet (I actually wonder how much has been implemented properly in the drivers; can't be much, seeing how bad it was at launch). Instead ATI chooses to show off the software shaders, with their flexible options (2x, 4x, 6x, 8x, 12x, 16x) in various modes, and proclaim it's the fuuutuuuree!
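The flexibility argument above can be made concrete with a toy sketch. A fixed-function box resolve can only average a pixel's own samples, while a programmable (shader) resolve can weight in samples from neighboring pixels, which is roughly what the CFAA "tent" modes do. The weights below are made up for illustration, not ATI's actual filter kernels.

```python
# Toy sketch: fixed-function box resolve vs. a programmable tent-style
# resolve that also taps a neighboring pixel's samples.

def box_resolve(own_samples):
    # Fixed-function style: plain average of the pixel's own samples only.
    return sum(own_samples) / len(own_samples)

def tent_resolve(own_samples, neighbor_samples, neighbor_weight=0.5):
    # Shader style: neighboring samples contribute with a smaller weight,
    # something a hardwired box filter cannot express.
    own_w = [1.0] * len(own_samples)
    nbr_w = [neighbor_weight] * len(neighbor_samples)
    weighted = sum(s * w for s, w in
                   zip(own_samples + neighbor_samples, own_w + nbr_w))
    return weighted / (sum(own_w) + sum(nbr_w))

own = [0.8, 0.8, 0.2, 0.2]  # pixel straddling an edge
nbr = [0.2, 0.2, 0.2, 0.2]  # darker neighboring pixel's samples

print(round(box_resolve(own), 3))        # own samples only
print(round(tent_resolve(own, nbr), 3))  # pulled toward the dark neighbor
```

The tent result is darker than the box result because the neighbor leaks in, which is also why the tent modes can look blurry, and why they behave differently in motion than plain MSAA.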

  17. #1467
    Xtreme Addict
    Join Date
    Nov 2006
    Posts
    1,402
    32x AF pleaaaase :p

  18. #1468
    I am Xtreme
    Join Date
    Mar 2005
    Location
    Edmonton, Alberta
    Posts
    4,594
    Quote Originally Posted by Noobie View Post

Basically no one is able to figure out why ATI is using shaders to perform the AA, whilst it is clearly not the way to go performance-wise.
    Well, at least now ya know the source of "free 4xAA" for DX9 from DX10 cards.

Shaders that would normally be idle during rendering can be used to apply this AA method; however, the "thread arbiter" must be programmed for such, and in most cases this must be done on a per-app basis until 3Dc kicks in (three consecutive loadings of the same map will "optimize" the workflow as the driver "recompiles" the workload to what is needed).

This makes a lot of sense when applied to DX9 rendering formats, as it's far easier for the driver programmer to add in the AA for free and make use of the VLIW format of the GPU; however, this can come at a penalty and cause rendering errors if it's not done right. The other option is for the arbiter to pack many instructions at once, and this can cause timing issues far more easily than applying AA to a texture, if a subsequent texture/pixel needs info from data already "in flight" through the GPU.

    Problems of a programmable architecture that will only get better over time, IMHO.
    Last edited by cadaveca; 05-17-2007 at 10:06 AM.

  19. #1469
    Xtreme Addict
    Join Date
    Sep 2006
    Location
    Stamford, UK
    Posts
    1,336
    Quote Originally Posted by cadaveca View Post
    Well, at least now ya know the source of "free 4xAA" for DX9 from DX10 cards.

Shaders that would normally be idle during rendering can be used to apply this AA method; however, the "thread arbiter" must be programmed for such, and in most cases this must be done on a per-app basis until 3Dc kicks in (three consecutive loadings of the same map will "optimize" the workflow as the driver "recompiles" the workload to what is needed).

This makes a lot of sense when applied to DX9 rendering formats, as it's far easier for the driver programmer to add in the AA for free and make use of the VLIW format of the GPU; however, this can come at a penalty and cause rendering errors if it's not done right. The other option is for the arbiter to pack many instructions at once, and this can cause timing issues far more easily than applying AA to a texture, if a subsequent texture/pixel needs info from data already "in flight" through the GPU.

    Problems of a programmable architecture that will only get better over time, IMHO.
    I thought this was only the case for DX10....
    FX8350 @ 4.0Ghz | 32GB @ DDR3-1200 4-4-4-12 | Asus 990FXA @ 1400Mhz | AMD HD5870 Eyefinity | XFX750W | 6 x 128GB Sandisk Extreme RAID0 @ Aerca 1882ix with 4GB DRAM
    eXceed TJ07 worklog/build

  20. #1470
    Registered User
    Join Date
    Feb 2006
    Posts
    98
    Quote Originally Posted by cadaveca View Post
    Well, at least now ya know the source of "free 4xAA" for DX9 from DX10 cards.

Shaders that would normally be idle during rendering can be used to apply this AA method; however, the "thread arbiter" must be programmed for such, and in most cases this must be done on a per-app basis until 3Dc kicks in (three consecutive loadings of the same map will "optimize" the workflow as the driver "recompiles" the workload to what is needed).

This makes a lot of sense when applied to DX9 rendering formats, as it's far easier for the driver programmer to add in the AA for free and make use of the VLIW format of the GPU; however, this can come at a penalty and cause rendering errors if it's not done right. The other option is for the arbiter to pack many instructions at once, and this can cause timing issues far more easily than applying AA to a texture, if a subsequent texture/pixel needs info from data already "in flight" through the GPU.

    Problems of a programmable architecture that will only get better over time, IMHO.
I don't think this will get solved without resorting to per-app settings, because of the indoor/outdoor problem so well demonstrated by Oblivion. Indoors the GPU/CPU are generally overkill-fast, whilst outdoors they struggle (well, just the GPU, thanks to the grass). There is no way to guarantee that some part of the shaders is free all the time, unless the arbiter tries to keep some shaders free for this purpose. But then it would no longer be free. Per-app settings suck, so I wouldn't even go there without a big prodding stick

<turns brain on>
Oh wait, you mean POTENTIALLY FREE 4xAA. Yes, thanks to the clever prioritizing design of the ring bus, coupled with its overkill bandwidth, along with the overkill shader power, the 2900 can use slack time to perform AA without much of a performance penalty.

But there can be points in a game where it is no longer free (truly free cannot be achieved; guaranteed seemingly-free can only be achieved using hardware specifically meant for AA and only capable of AA, which does it in the space of a nanosecond). This stresses the need to expand the way we test videocards in games (we must ensure that the parts of the game we bench are representative).
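The "potentially free" idea fits a simple back-of-envelope model (my own assumption, not measured data): per-frame cost is set by the bottleneck stage, so AA shader work stays hidden while the shader core has slack, and shows up in full once it doesn't. The millisecond figures below are invented for illustration.

```python
# Toy bottleneck model of "potentially free" shader AA: the frame takes as
# long as the slower of two overlapping workloads, and the AA cost rides on
# the shader side.

def frame_time_ms(shader_ms, fixed_function_ms, aa_shader_ms):
    # The slower pipeline dominates the frame; AA adds to the shader load.
    return max(shader_ms + aa_shader_ms, fixed_function_ms)

AA_COST_MS = 3.0

# Indoors: shaders have slack (5 + 3 = 8 < 12), so AA is hidden, i.e. "free".
indoor = frame_time_ms(shader_ms=5.0, fixed_function_ms=12.0,
                       aa_shader_ms=AA_COST_MS)
print(indoor)   # 12.0

# Outdoors (grass!): shaders are already the bottleneck, so AA costs in full.
outdoor = frame_time_ms(shader_ms=14.0, fixed_function_ms=12.0,
                        aa_shader_ms=AA_COST_MS)
print(outdoor)  # 17.0
```

Same AA work, zero cost in one scene and the full cost in the other, which is exactly why benchmarking only the "indoor" parts of a game would be misleading.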

    Quote Originally Posted by madcho View Post
    32x AF pleaaaase :p
    I want at least 128xAF & 64xAA for Quake 3!

    Last edited by Noobie; 05-17-2007 at 11:07 AM.

  21. #1471
    Xtreme Mentor
    Join Date
    Oct 2005
    Location
    Portugal
    Posts
    3,410
    Quote Originally Posted by sladesurfer View Post
Check out these scores! What do you guys think? It's from a member of OCN

    I got 20536 marks X2900XT default & E6600 @ 3600mhz

    I got 17633 marks BFG 8800GTS OC (550/1600) & E6600 @ 3600mhz





    --3DMARK 2005 -- X2900XT 512mb DEFAULT (cat 8.37)-- E6600 @ 3600Mhz ( 20536 Marks)




    3Dmark2005 @ BFG 8800GTS OC (158.19) @ E6600 3600mhz 17633 Marks

    Last edited by mascaras; 05-17-2007 at 11:31 AM.

    [Review] Core i7 920 & UD5 » Here!! « .....[Review] XFX GTX260 216SP Black Edition » Here!! «
    [Review] ASUS HD4870X2 TOP » Here!! «
    .....[Review] EVGA 750i SLi FTW » Here!! «
    [Review] BFG 9800GTX 512MB » Here!! « .....[Review] Geforce 9800GX2 1GB » Here!! «
    [Review] EVGA GTX280 1GB GDDR3 » Here!! « .....[Review] Powercolor HD4870 512MB GDDR5 » Here!! «

  22. #1472
    I am Xtreme
    Join Date
    Mar 2005
    Location
    Edmonton, Alberta
    Posts
    4,594
    Quote Originally Posted by eXceededgoku View Post
    I thought this was only the case for DX10....
Yes, you are right, in a way; however, due to the superscalar nature of the GPU, it pretty much applies to DX9 as well, or if it is NOT applied, we get a performance hit. Oh, wait... we got that now...


LoL Noobie... I think you had it all straight right from the get-go, but wanted confirmation.

The "potentially free" (you got the right term, thanks) AA is possible due to the VLIW nature of the GPU. As it stands now, shaders aren't quite large enough to use VLIW to the max, so AA gets tossed into the mix in order to use more of the GPU. However, when we think about DX10, we must be aware of geometry operations happening as well, and a lot of rendering is going to be dependent on the geometry shading being complete before being able to continue. How does that relate? Well... in the scheduling, of course!

DX10 features shadow AA, and it would be far easier, IMHO, to get AA-free shadows, given HDR, if the edges of the textures were AA'd BEFORE HDR is applied. Using the traditional method (non-AA textures for HDR), performance falters a lot more, you would think, than using shader AA.
    Last edited by cadaveca; 05-17-2007 at 11:46 AM.

  23. #1473
    Xtreme Mentor
    Join Date
    Nov 2006
    Location
    Spain, EU
    Posts
    2,949
More personal tests, mascaras... I'm fed up with fanboys and want real numbers from XS members.
    Friends shouldn't let friends use Windows 7 until Microsoft fixes Windows Explorer (link)


    Quote Originally Posted by PerryR, on John Fruehe (JF-AMD) View Post
    Pretty much. Plus, he's here voluntarily.

  24. #1474
    Registered User
    Join Date
    Jan 2007
    Posts
    29
Hi there, nice thread, very useful. Sorry to interrupt, but I just received my new 2900XT and I can't run 3DMark, can't use RivaTuner or ATITool. Is it faulty? Please, some help guys? Many thanks, Danni

  25. #1475
    Xtreme Addict
    Join Date
    Sep 2006
    Location
    Stamford, UK
    Posts
    1,336
On 3DMark you need to add -nosysteminfo to the shortcut; RivaTuner and ATITool do not work as of yet... use the AMD GPU tool that was recently released. (http://www.techpowerup.com/downloads...Tool_v0.7.html)
    FX8350 @ 4.0Ghz | 32GB @ DDR3-1200 4-4-4-12 | Asus 990FXA @ 1400Mhz | AMD HD5870 Eyefinity | XFX750W | 6 x 128GB Sandisk Extreme RAID0 @ Aerca 1882ix with 4GB DRAM
    eXceed TJ07 worklog/build

