I picked up my HD2900XT yesterday from a local retailer here in Canada, $499.00CAD
Driver support on the HIS driver CD for the HD2900XT. No Vista?
http://i144.photobucket.com/albums/r...upportonCD.jpg
Wow, where did you get it in Canada? And yes, benchies would help!
Spent all last night building two PCs, and today will be spent overclocking/stress testing. Sorry, no watercooling anymore, and tomorrow is Mother's Day... Will have benchies on Monday, just like everybody else, hehe - patience, my young padawans!
So it's more than twice as fast as the GTX in DX10?
Cool =)
If this is correct then I don't think the slower performance in DX9 matters, cuz it's fast enough to handle those games.
Maybe that's why the 8800 hit the street so fast: they just didn't care about DX10, cuz they wanted to sell as many cards as possible before the shift to DX10.
Just what I'm thinking; I might be wrong.
Let's just see how games fare in DX10. It's funny: when there are strange scores in games in the R600 reviews, everyone says DRIVERS, DRIVERS! And here nVidia is having problems, and everyone goes: told you so, the G80 sucks in DX10! :p: j/k
Just wait till the 15th; the Lost Planet DX10 demo will be here then, and it's no doubt already in some of the R600 reviews. :)
Which doesn't mean anything at all; afaik they can't be compared directly to nVidia's. But at least it should show some benefit in DX9, and it clearly does not.
I mean, I really really really want super performance in DX10.....but I don't think it will happen. Pessimistic maybe, but at least then I won't feel too bad if it turns out to be not so good in DX10 as well. :)
Why should it? In DX9 a static vertex/pixel configuration is chosen by the driver. I think you can imagine that a lot of tweaking per game / graphics settings can be achieved. I for one am convinced that just changing the name of a game's executable to 3DMark06.exe (for example) changes the performance of the card in that particular game.
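To make the per-exe profile idea concrete, here is a tiny sketch of the kind of lookup a driver could do; the executable names and unit splits below are invented for illustration, not taken from any actual ATI or nVidia driver.

Code:
# Hypothetical per-application driver profile lookup, keyed on executable
# name. All names and numbers here are made up for illustration.
DEFAULT_PROFILE = {"vertex_units": 16, "pixel_units": 48}

APP_PROFILES = {
    "3dmark06.exe": {"vertex_units": 24, "pixel_units": 40},
    "oblivion.exe": {"vertex_units": 8, "pixel_units": 56},
}

def pick_profile(exe_name):
    """Return the tuned split for a known exe, otherwise the generic default."""
    return APP_PROFILES.get(exe_name.lower(), DEFAULT_PROFILE)

# Renaming a game's exe to "3DMark06.exe" would make it inherit that profile,
# which is why exe renaming can change how a game benches.
print(pick_profile("3DMark06.exe"))
print(pick_profile("somegame.exe"))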
I'm expecting the 64*5 shader config of the R600 to be more powerful than the simpler 128*1 config of the G80, but I have no facts to back it up. It's just that ATI probably knows more about unified architecture design now than Nvidia did many months ago before releasing the G80. If 64*5 weren't faster, ATI would have chosen the simpler 128*1 design.
What's important to me is the present. DX10 benches mean nothing when the games are not even out. By the time Crysis and other DX10 games roll out, the R650 on 65nm will be here, which will be cheaper to manufacture and use less power... so I don't exactly see the point in jumping the gun and spending on an R600 that allegedly has awesome DX10 performance. DX9-wise, it's not much better at all than the X1900XTX. That's a disappointment.
Hmm, although I'm not really that technical, shouldn't we see some degree of that stream processing power even with a fixed config? And I wouldn't say ATI knows more about unified architectures than nVidia; I mean, the thing they had is the Xenos GPU in the Xbox 360, so yes, this is their second gen, but the R600 was already under development when that happened. So I wouldn't say too loudly that they have more knowledge. Whatever nVidia did, they did it well. If all those stream processors are unified in DX10, then it should boost performance even more, instead of the fixed ratios, shouldn't it?
Well, I wouldn't say that. People have been waiting very long, so I don't think they want to wait for the R650; I'm not going to, that's for sure. Anyway, the 2900XT is much, much better than the X1900XTX in a lot of benches. Sure, in some the difference is small, but the GTS, for instance, seriously beats the X1900XTX in just about every game.
And I think Crysis is nearer than everyone thinks; I'd put my money on late June for a playable demo. The R650 will be September-ish, which is another 4.5(!) months from now.
I think the problem is that the R600 is very well geared for vector calculations, hence the 4+1 capability of a single shader unit. What happens when you are working with a scalar? Well, if you don't do any optimizations, you will be wasting power if you assign one scalar instruction per shader unit. Meaning, with better BIOS/drivers (I don't know which is more at work here), you would assign mutually independent operations to the same shader unit, and in a perfect world you wouldn't move on to another shader unit until the current ones are fully utilized, maximizing performance. What if you have a vector and 4 shader units available, or 2 units half in use? How well can they handle that? These predictions and optimizations will not be easy to get right. That is why, with better drivers, more shaders will be fully utilized and instructions will be done in fewer cycles.
I also have a feeling this has to do with the way DX9/OpenGL optimize for scalar designs.
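A toy packer shows the point about scalar work wasting the 4+1 units; this is only an illustration of the idea, not the real R600 shader compiler or driver logic.

Code:
# Each R600 shader unit has 5 slots (4 regular ALUs + 1 "fat" ALU).
# Independent operations can be packed into one 5-wide instruction word;
# a lone dependent scalar leaves most of the unit idle.
SLOTS_PER_UNIT = 5

def pack(independent_ops):
    """Greedily pack independent scalar ops into 5-wide instruction words."""
    words = []
    for i in range(0, len(independent_ops), SLOTS_PER_UNIT):
        word = independent_ops[i:i + SLOTS_PER_UNIT]
        # Unfilled slots are wasted ALU capacity for that cycle.
        words.append(word + ["idle"] * (SLOTS_PER_UNIT - len(word)))
    return words

print(pack(["x*x", "y*y", "z*z", "w*w", "rsqrt"]))  # vec4 + scalar: full unit
print(pack(["a+b"]))                                # lone scalar: 4 of 5 slots idle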
nVidia isn't 128 * 1, it's 128 * 3 (not sure about the 3, but fairly sure it is not * 1). Also, nVidia's 128 run faster.
Last rumors I heard ATi was a pure 320 * 5 instead of 320 * 4+1.
This is so much useless fun!
In some other thread around here you can see that ATi gets crushed by a factor of 3 in the G80-optimized DX10 variance shadow maps benchmark (which is out already). App optimization will remain important, I fear :x
nVidia is 128 1+1 or 2+1, can't remember (I think it's 1+1). They run at that 1.35GHz or so.
AMD is 64 4+1 running at ~750MHz.
320 is a PR gimmick.
Someone's using an old 8800 driver.... I can tell you that at the office we've seen very very good DX10 numbers out of the G80.
Exactly. Technically, going by AMD/ATi's math, NVidia has 256 shaders running at close to double the speed of ATi's. I love how people keep pointing out "but the R600 has 320 shaders" when in reality it only has 64. The full package can't be used in a lot of situations, as only the most complex shaders will be able to take advantage of it.
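For what it's worth, back-of-the-envelope peak numbers from the unit counts and clocks quoted in this thread come out surprisingly close (R600: 64 units of 5 ALUs at the stock 742MHz, G80: 128 scalar ALUs at 1.35GHz; a MADD counted as 2 flops, with the G80's debated extra MUL shown separately):

Code:
# Rough peak shader throughput in GFLOPS, using the figures quoted above.
r600_peak    = 64 * 5 * 2 * 0.742   # ~475 GFLOPS (MADD per ALU per clock)
g80_madd     = 128 * 2 * 1.35       # ~346 GFLOPS (MADD only)
g80_with_mul = 128 * 3 * 1.35       # ~518 GFLOPS if the extra MUL counts

print(round(r600_peak), round(g80_madd), round(g80_with_mul))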
Okay a little explanation then :)
For this example I'll use the NV40 (6800-series architecture). That chip had 16 pixel pipelines and 6 vertex pipelines. In some games a 12/10 config might have been faster, but the ratio was fixed in hardware, so it could not be changed by the driver. To use an easy number, let's say the R600 has 64 pipelines that can perform vertex operations as well as pixel operations. DX9 can only use a static configuration, so each of these 64 pipelines is told by the driver to act as either a vertex pipe or a pixel pipe.
A possible config is 32/32 pixel/vertex. However, some game may benefit from more pixel processing power and not need as much vertex processing power. For that game the driver can change the ratio to 48/16, yielding much better performance in that particular title.
To make this change the driver has to "know" what setting to use for each individual game (and maybe even each game's graphics settings); otherwise it will use the standard fixed ratio that I expect it falls back to when it doesn't recognize a game. This way R600 performance can be tweaked per game or per game setting, but it takes a lot of testing to tune the R600 for all the DX9 games currently out. Nvidia had the time to do this really well with the G80; ATI had a lot less time.
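A toy model of that static split makes it clear why one game wants 32/32 and another wants 48/16; the workload numbers are invented purely for illustration.

Code:
# 64 unified pipes must be divided between vertex and pixel work up front
# (the static DX9 configuration described above). Workload figures are
# arbitrary units of work per frame, invented for this example.
PIPES = 64

def frame_cost(vertex_work, pixel_work, vertex_pipes):
    pixel_pipes = PIPES - vertex_pipes
    # The frame is gated by whichever pool of pipes finishes last.
    return max(vertex_work / vertex_pipes, pixel_work / pixel_pipes)

def best_split(vertex_work, pixel_work):
    """Return the vertex-pipe count that minimizes the frame cost."""
    return min(range(1, PIPES), key=lambda v: frame_cost(vertex_work, pixel_work, v))

print(best_split(100, 100))  # balanced game: roughly 32 vertex / 32 pixel
print(best_split(25, 300))   # pixel-heavy game: only a handful of vertex pipes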
Quote:
And I wouldn't say ATI knows more about unified architectures than nVidia; I mean, the thing they had is the Xenos GPU in the Xbox 360, so yes, this is their second gen, but the R600 was already under development when that happened. So I wouldn't say too loudly that they have more knowledge.

Maybe so, but it's as good a guess as mine. We have no idea when the R600 and G80 designs were finalized, and those facts are critical to making a good judgement. My guess is based on the fact that ATI has had more time to change the R600 design while working in parallel with game devs on DX10 titles.

Quote:
Whatever nVidia did, they did it well. If all those stream processors are unified in DX10, then it should boost performance even more, instead of the fixed ratios, shouldn't it?

Yes, in the exact same game with the same graphical settings the game should run at higher framerates under DX10. However, the difference may be very small thanks to the DX9 vertex/pixel ratio tweaking.
Thank you very much for taking the time to explain it. I did understand it more or less, but... am I right in saying that whatever makes the card choose between vertex and pixel shaders on the fly in a DX10 game is the most crucial bit? Is it called a scheduler? If that part sucks, it could very well mean the R600 beats the G80, right? Or vice versa. :)
Thanks again, I try to read up on stuff like that, but most explanations still go way over my head unfortunately. I'm more of a gamer. :)
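On the scheduler question above: a very rough sketch of the "on the fly" balancing idea, contrasted with the fixed split, might look like the loop below. This is purely illustrative and makes no claim about how the R600's or G80's actual hardware scheduler works.

Code:
# Toy dynamic scheduler: each cycle, idle unified units are handed to
# whichever kind of work (vertex or pixel) has the most left to do,
# instead of living with a fixed per-game split.
def run_frame(vertex_work, pixel_work, units=64):
    cycles = 0
    while vertex_work > 0 or pixel_work > 0:
        total = vertex_work + pixel_work
        v_units = round(units * vertex_work / total)  # share units by demand
        p_units = units - v_units
        vertex_work = max(0, vertex_work - v_units)
        pixel_work = max(0, pixel_work - p_units)
        cycles += 1
    return cycles

# A pixel-heavy frame gets balanced automatically, with no per-game tuning.
print(run_frame(vertex_work=25, pixel_work=300))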