Spent all last night building two PCs and today will be spent overclocking/stress testing (sorry, no watercooling anymore), and tomorrow is Mother's Day... Will have benchies on Monday, just like everybody else, hehe - patience, my young padawans!
Intel Core i7 920 @ 3.8GHz (183x21)
Gigabyte EX58-DS4 BIOS F5
3GB PATRIOT PC3-10666 DDR3
Sapphire Radeon HD4870 512MB BLACK
2x500GB SEAGATE SATA-II 7200.11
OCZ GameXstream 750W PSU
Antec Three Hundred Chassis
Last edited by ownage; 05-12-2007 at 09:13 AM.
>i5-3570K
>Asrock Z77E-ITX Wifi
>Asus GTX 670 Mini
>Cooltek Coolcube Black
>CM Silent Pro M700
>Crucial M4 128GB mSATA
>Cooler Master Seidon 120M
Hell yes, it's a mini-ITX gaming rig!
So it's more than twice as fast as the GTX in DX10?
Cool =)
If this is correct, then I don't think the slower performance in DX9 matters, because it's fast enough to handle those games.
Maybe that's why the 8800 hit the street so fast: they just didn't care about DX10 because they wanted to sell as many cards as possible before the shift to DX10.
Just my thinking; I might be wrong.
Last edited by Ubermann; 05-12-2007 at 09:18 AM.
Everything extra is bad!
Let's just see how games fare in DX10. It's funny: with the strange scores in the R600 game reviews, everyone says DRIVERS, DRIVERS! And here is nVidia having problems, and everyone goes: told you so! G80 sucks in DX10! j/k
Just wait till the 15th; the Lost Planet DX10 demo will be here, and it's no doubt already in some of the R600 reviews.
Intel Core i7-3770K
ASUS P8Z77-I DELUXE
EVGA GTX 970 SC
Corsair 16GB (2x8GB) Vengeance LP 1600
Corsair H80
120GB Samsung 840 EVO, 500GB WD Scorpio Blue, 1TB Samsung Spinpoint F3
Corsair RM650
Cooler Master Elite 120 Advanced
OC: 5GHz | +0.185 offset : 1.352v
Blue Dolphin Reviews & Guides
Blue Reviews:
Gigabyte G-Power PRO CPU cooler
Vantec Nexstar 3.5" external HDD enclosure
Gigabyte Poseidon 310 case
Blue Guides:
Fixing a GFX BIOS checksum yourself
98% of the internet population has a Myspace. If you're part of the 2% that isn't an emo bastard, copy and paste this into your sig.
Which doesn't mean anything at all; afaik they can't be compared directly to nVidia's, but at least it should show some benefit in DX9, and it clearly does not.
I mean, I really really really want super performance in DX10... but I don't think it will happen. Pessimistic maybe, but at least then I won't feel too bad if it turns out to be not so good in DX10 as well.
Why should it? In DX9 a static vertex/pixel configuration is chosen by the driver. I think you can imagine that a lot of tweaking per game / graphics setting can be achieved. I for one am convinced that just changing the name of a game's executable to 3dmark06.exe (for example) changes the performance of the card in that particular game.
I'm expecting the 64*5 shader config of the R600 to be more powerful than the simpler 128*1 config of the G80, but I have no facts to back it up. It's just that ATI probably knows more about unified architecture design now than Nvidia did many months ago before releasing the G80. If 64*5 weren't faster, ATI would have chosen the simpler 128*1 design.
What's important to me is the present. DX10 benches mean nothing when the games are not even out. By the time Crysis and other DX10 games roll out, the R650 on 65nm will be out, which will be cheaper to manufacture and consume less power... so I don't exactly see the point in jumping the gun and spending on an R600 which allegedly has awesome DX10 performance. DX9-wise, it's not much better at all than the X1900XTX. That's a disappointment.
'Gaming' AMD FX-6300 @ 4.5GHz | Asus M5A97 | 16GB DDR3 2133MHz | GTX760 2GB + Antec Kuhler620 mod | Crucial m4 64GB + WD Blue 2x1TB Str
'HTPC' AMD A8-3820 @ 3.5GHz | Biostar TA75A+ | 4GB DDR3 | Momentus XT 500GB | Radeon 7950 3GB
'Twitch' AMD 720BE @ 3.5GHz | Gigabyte GA-78LMT-S2P | 4GB DDR3 | Avermedia Game Broadcaster
Desktop Audio: Optical Out > Matrix mini DAC > Virtue Audio ONE.2 > Tannoy Reveal Monitors + Energy Encore 8 Sub
HTPC: Optoma HD131XE Projector + Yamaha RX-V463 + 3.2 Speaker Setup
Hmm, although I'm not really that technical, shouldn't we see some degree of that stream processing power even with a fixed config? And I wouldn't say ATI knows more about unified architectures than nVidia. I mean, the thing they had is the Xenos GPU in the Xbox 360, so yes, this is their second gen, but the R600 was already under development when that happened. So I wouldn't say too loudly that they have more knowledge. Whatever nVidia did, they did it well. If all those stream processors are unified in DX10, then it should boost performance even more than the fixed ratios, shouldn't it?
Well I wouldn't say that. People have been waiting very long, so I don't think they want to wait for the R650; I'm not going to, that's for sure. Anyway, the 2900XT is much, much better than the X1900XTX in a lot of benches. Sure, in some the difference is small, but the GTS for instance seriously beats the X1900XTX in just about every game.
And I think Crysis is nearer than everyone thinks; I put my money on late June for a playable demo. The R650 will be September-ish, which is another 4.5(!) months from now.
Last edited by Tim; 05-12-2007 at 10:25 AM.
I think the problem is that the R600 is geared heavily toward vector calculations, hence the 4+1 capability of a single shader unit. What happens when you are working with a scalar? Well, if you don't do any optimizations, you will be wasting power if you assign one scalar instruction per shader unit. Meaning, with better BIOS/drivers (I don't know which is more at work here), you would assign mutually exclusive operations to the same shader unit, and in a perfect world you wouldn't move on to another shader unit until they are all fully utilized, maximizing performance. What if you have a vector and 4 shaders available, or 2 shaders half in use? How well can they handle that? These predictions and optimizations will not be easy to get right. That is why, with better drivers, more shaders will be fully utilized and instructions will be done in fewer cycles.
I also have a feeling this has to do with the way DX9/OpenGL optimize for scalar designs.
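The packing problem described above can be sketched with a toy model: a 4+1 unit can issue up to 5 ops per cycle, but only if the driver/compiler finds enough independent scalar work to fill the slots. The numbers here are purely illustrative, not real R600 behavior.

```python
# Toy model: cycles needed to drain a batch of scalar ops on a 5-wide
# VLIW-style unit, depending on how many independent ops can be packed
# into each slot. Illustrative only, not actual R600 internals.

def cycles_needed(scalar_ops: int, independent_per_cycle: int, width: int = 5) -> int:
    """Cycles to issue `scalar_ops` when only `independent_per_cycle`
    ops can be packed into each `width`-wide slot."""
    per_cycle = min(width, independent_per_cycle)
    return -(-scalar_ops // per_cycle)  # ceiling division

worst = cycles_needed(100, independent_per_cycle=1)  # nothing packs: 100 cycles
best = cycles_needed(100, independent_per_cycle=5)   # perfect packing: 20 cycles
print(worst, best)  # 100 20
```

Same hardware, 5x difference in cycles; that is the headroom the post expects better drivers to claw back.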
Last edited by ahmad; 05-12-2007 at 10:29 AM.
My watercooling experience
Water
Scythe Gentle Typhoons 120mm 1850RPM
Thermochill PA120.3 Radiator
Enzotech Sapphire Rev.A CPU Block
Laing DDC 3.2
XSPC Dual Pump Reservoir
Primochill Pro LRT Red 1/2"
Bitspower fittings + water temp sensor
Rig
E8400 | 4GB HyperX PC8500 | Corsair HX620W | ATI HD4870 512MB
I see what I see, and you see what you see. I can't make you see what I see, but I can tell you what I see is not what you see. Truth is, we see what we want to see, and what we want to see is what those around us see. And what we don't see is... well, conspiracies.
nVidia isn't 128 * 1, it's 128 * 3 (not sure about the 3, but fairly sure it is not * 1). Also nVidia's 128 run faster.
The last rumors I heard said ATi was a pure 320 * 5 instead of 320 * 4+1.
This is so much useless fun!
In some other thread around here you can see that ATi gets crushed by a factor of 3 in the G80-optimized DX10 variance shadow maps benchmark (which is out already). App optimization will remain important, I fear :x
nVidia is 128 1+1 or 2+1, can't remember (think it's 1+1). They run at those 1.35GHz or so.
AMD is 64 4+1 running at ~750MHz.
320 is a PR gimmick.
Crunching for Comrades and the Common good of the People.
Someone's using an old 8800 driver.... I can tell you that at the office we've seen very very good DX10 numbers out of the G80.
Exactly. Technically, going by AMD/ATi's math, nVidia has 256 shaders running at close to double the speed of ATi's. I love how people keep pointing out "but the R600 has 320 shaders" when in reality it only has 64. The total package won't be usable in a lot of situations; only the most complex shaders will be able to take advantage of it.
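Running the thread's own numbers (64x5 at ~750MHz vs 128 scalar at ~1.35GHz) makes the paper math concrete. This is a back-of-envelope sketch only; real performance depends on packing, scheduling, and drivers.

```python
# Paper-math peak shader throughput from the clocks and counts quoted
# in this thread. Back-of-envelope only; utilization decides the rest.
r600_peak = 64 * 5 * 750e6          # 240e9 ops/s if every 4+1 slot is full
r600_scalar_floor = 64 * 1 * 750e6  # 48e9 ops/s if nothing packs
g80 = 128 * 1 * 1.35e9              # 172.8e9 ops/s; scalar, so always "packed"
print(r600_peak / g80)              # ~1.39x on paper, only at full utilization
print(r600_scalar_floor / g80)      # ~0.28x in the worst case
```

So the same chip spans roughly 0.3x to 1.4x of the G80 on paper depending on how well the 4+1 slots get filled, which is why the shader-count marketing tells you so little.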
Okay, a little explanation then.
For this example I use the NV40 (6800-series architecture). This chip had 16 pixel pipelines and 6 vertex pipelines. In some games a 12/10 config might have been faster, but the ratio was fixed, so this could not be changed by the driver. To use an easy number, let's say the R600 has 64 pipelines that can perform vertex operations as well as pixel operations. DX9 can only use a static configuration, so each of these 64 pipelines is told by the driver to act as either a vertex pipe or a pixel pipe.
A possible config is 32/32 pixel/vertex. However, some game may benefit from more pixel processing power and not need so much vertex processing power. The ratio can be changed by the driver to 48/16, yielding much better performance in that particular game.
To make this change the driver has to "know" what setting to use for each individual game (and maybe even each graphics setting) to use anything other than the standard fixed ratio that I expect the driver to set if it doesn't recognize a game. This way R600 performance can be tweaked per game or per setting. It takes a lot of testing to tweak R600 performance for all the DX9 games currently out. Nvidia had the time to do this really well with the G80; ATI had a lot less time.
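The per-game tweaking described above boils down to a lookup table in the driver, keyed on something like the executable name, which is also why the exe-renaming trick mentioned earlier can change performance. A minimal sketch; all profile names and ratios here are invented for illustration.

```python
# Hypothetical driver-side profile table: pixel/vertex split per game.
# Every entry is made up; this only illustrates the mechanism.
PROFILES = {
    "shooter.exe": (48, 16),   # pixel-heavy game gets more pixel pipes
    "rtsgame.exe": (40, 24),
}
DEFAULT_SPLIT = (32, 32)       # safe ratio for unrecognized games

def pipeline_split(exe_name: str, total_pipes: int = 64) -> tuple:
    """Return the (pixel, vertex) split the driver would pick."""
    pixel, vertex = PROFILES.get(exe_name.lower(), DEFAULT_SPLIT)
    assert pixel + vertex == total_pipes
    return pixel, vertex

print(pipeline_split("SHOOTER.EXE"))   # (48, 16)
print(pipeline_split("unknown.exe"))   # (32, 32) - falls back to default
```

Rename an unprofiled game's executable to a profiled name and the lookup lands on a different split: exactly the behavior the earlier post suspected.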
Maybe so, but it's as good a guess as mine. We have no idea when the R600 and G80 designs were finalized, and those facts are critical to making a good judgement. My guess is based on ATI having had more time to change the R600's design while working in parallel with game devs working on DX10 titles.
Yes, in the exact same game with the same graphical settings, a game should run at higher framerates in DX10. However, the difference may be very small due to DX9 vertex/pixel ratio tweaking.
Last edited by alexio; 05-12-2007 at 10:56 AM.
Thank you very much for taking the time to explain it. I did understand that more or less, but... am I right in saying that whatever makes the card choose between vertex/pixel shaders on the fly in a DX10 game is the most crucial bit? Is it called a scheduler? If that sucks, then it could very well mean the R600 beats the G80, right? Or vice versa.
Thanks again. I try to read up on stuff like that, but most explanations still go way over my head, unfortunately. I'm more of a gamer.
Last edited by Tim; 05-12-2007 at 10:59 AM.
Good news on the power connector front: Kinc says you can use two 6-pin PCIe connectors if you don't have the 8-pin available, and I think we can safely assume Kinc knows, if you know what I mean.
Here's the link:
http://www.nordichardware.com/forum/...=8433&forum=28
Yes, the scheduler is important. In the G80 it runs at half the shader clock, so there are latencies involved here. The scheduler assigns a thread to a streaming processor. Of course it's more efficient to have the scheduler run at the same clock as the stream processors; I'm not sure how big the difference is, though. If the scheduler can assign two threads to one processor per tick, then the penalty is rather small. I don't know enough about the G80 architecture to tell you about the latencies involved.
I have no information at all regarding the R600's scheduler, so I can't tell you how it compares to the one in the G80. The architectures are so complex and so different from each other that it's hard to pick the better one from specs alone. Only benches will tell the truth after both companies have had time to optimize for DX10 (and ATI for DX9).
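The half-clock point above can be sketched as crude arithmetic (a toy model, not real G80 internals): a scheduler ticking at half the shader clock must hand out two threads per tick, or the stream processors starve.

```python
# Toy issue-rate model: a scheduler running at shader_clock / divider,
# issuing `threads_per_tick` threads each tick. Illustrative only;
# not a description of actual G80 scheduling.
def issue_rate_ghz(shader_clock_ghz: float, divider: int, threads_per_tick: int) -> float:
    """Threads the scheduler can hand out per nanosecond-scale unit (GHz-equivalent)."""
    return shader_clock_ghz / divider * threads_per_tick

starved = issue_rate_ghz(1.35, divider=2, threads_per_tick=1)  # 0.675: half-fed SPs
matched = issue_rate_ghz(1.35, divider=2, threads_per_tick=2)  # 1.35: keeps up
print(starved, matched)
```

If the half-speed scheduler issues two threads per tick, the aggregate issue rate matches the shader clock and the penalty is small, which is the "rather small penalty" case from the post.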