:up::up::up::up:
Many good memories of CS and Return to Castle Wolfenstein were made possible by those babies. :p:
I got amazing startling revolutionary news...
In DX9 and DX10, nVidia and AMD rendering should be identical, right? When you add AA and AF, the different algorithms might produce very slightly different colors, not noticeable to the eye.
But in DX11, with "tessellation", does that mean the driver can "optimize" how many more triangles to add?
i.e. original "ball" made using 60 triangles.
nVidia tessellation makes 300 triangles.
AMD tessellation makes 200 triangles - but it looks rounder!
:shrug: just wondering how we're gonna handle reference-quality images.. :shrug:
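For what it's worth, the triangle count isn't supposed to be the driver's call: in D3D11 the application's hull shader outputs the tessellation factors, and the fixed-function tessellator then subdivides deterministically (driver-side overrides like a tessellation slider aside). Here's a minimal C++ sketch of how triangle count scales with the factor, assuming uniform integer subdivision of a tri patch with all edge factors equal to n (a simplified model, not the exact D3D11 tessellator pattern):
Code:
// Simplified model: an edge factor of n splits each edge into n segments,
// yielding n*n sub-triangles, so equal factors should give equal triangle
// counts regardless of vendor.
#include <cstdio>

static unsigned trianglesForFactor(unsigned n) {
    return n * n;
}

int main() {
    for (unsigned n = 1; n <= 8; n *= 2)
        std::printf("factor %u -> %3u triangles per patch\n",
                    n, trianglesForFactor(n));
    return 0;
}
So identical factors should mean identical triangle counts on both vendors; a "rounder ball" would have to come from the domain shader's math, not from the driver adding triangles.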
I can't wait any longer.. just release the specs already, plz.
Just wait for The GT300/Fermi Thread - Part 3! It's on its way... :yepp:
@thread: does the lack of a lower-mainstream card based on Fermi mean that the 5750 and 5770 will not have anything to fight?
If that's true, the 5830 could sell like hot cakes if it's priced right... That is, unless people fall into the nVidia renaming trap and buy a GTX3xx based on a G200 core that supports DX10.1 but not DX11 :(
They will, especially since it's the lower mainstream; those folks don't know or care about DX11. The EVGA 7200GS is clearly a better choice than the SPARKLE 8400GS because both have 512MB of memory (so both are in the same power spot), but the first one is cheaper and from EVGA, which manufactures the whole card just like MSI, XFX, etc. (that GeForce word on all of them seems to confuse them, though). What?! Newer generation?! Stop repeating the crap you read at tech forums. What?! Native resolution? What's that?!
Even if they knew about DX11, it wouldn't matter; they're already lowering most of their graphics settings in games anyway. They're the bigger market share that gets everything, and we don't even get one liquid-cooling case :(
edit: from tags: troll party. hehe :) nice one
The specs are out; just the final clocks remain to be confirmed. It is hard to keep up, though, with all the chatter posts.
http://img403.imageshack.us/img403/4469/img0027828.png
What I find most impressive is the number of triangles (tessellation), and although the 5970's specs are higher, they are only higher due to the simple fact that it is two GPUs on one card; i.e. not a true reflection (as the resources are not shared in one unit).
Also a benchmark:
GTX285: 51 FPS
GF100 part: 84 FPS.
The 5970 gets 92.7 FPS, which I think could be caught with higher clocks. Of course you could always just overclock the 5970, but we all know the problem when you do that. My prediction is that the GF100 is going to be very fast, almost on par with the 5970, but by the time it does come out ATI would only need to release a refreshed 5870 to stay competitive. Like I said, what I really want to know is how the GF100 does in DX11, i.e. next gen. Being built from the ground up for it, I think it will be significantly faster than ATI in that department until ATI's next refresh.
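Putting rough numbers on that, here's a quick back-of-the-envelope in C++ using only the FPS figures quoted above (leaked numbers, nothing confirmed):
Code:
// Relative performance from the leaked FPS numbers quoted in this thread.
#include <cstdio>

int main() {
    const double gtx285 = 51.0, gf100 = 84.0, hd5970 = 92.7; // FPS from the post
    std::printf("GF100 vs GTX285: +%.0f%%\n", (gf100 / gtx285 - 1.0) * 100.0);
    std::printf("HD5970 vs GF100: +%.0f%%\n", (hd5970 / gf100 - 1.0) * 100.0);
    return 0;
}
That works out to roughly +65% over the GTX285 but about 10% behind the 5970, which is why a clock bump could plausibly close the gap.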
That's terrific, but I was talking about this.
http://www.xtremesystems.org/forums/...5970+overclock
Questions about that "G100 spec" chart.
What is up with these ratios? And the supposed MAD/triangle numbers are even harder to justify or figure out. I'm betting it's 90% likely fake (just like the rumours of 400 SPs on RV770).
Code:
The ratios (normalized to fillrate)
            GTX285   G100   5870
fillrate    1        1      1
samples     2.5      2      2.5
textures    2.5      8      10
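For anyone puzzling over those rows: they're just each chip's raw rates divided by its own fillrate, so every column starts at 1. A throwaway C++ sketch of that normalization (the raw per-clock numbers below are placeholders made up for illustration, not real specs):
Code:
#include <cstdio>

// Made-up raw per-clock rates for one hypothetical chip -- placeholders
// to illustrate the normalization, not actual GPU specs.
struct Rates { double fillrate, samples, textures; };

int main() {
    const Rates chip{32.0, 80.0, 80.0};
    const double base = chip.fillrate; // normalize so fillrate == 1
    std::printf("fillrate %.1f  samples %.1f  textures %.1f\n",
                chip.fillrate / base, chip.samples / base, chip.textures / base);
    return 0;
}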
Does anyone think they will get as high as 725/1400 core/shader on GTX480? I don't think core speed will be that high.
I think I read somewhere that the ratio between the core and shader clocks will be 1:2; that means 700:1400 or 650:1300 and so on.
True. Doesn't this thing have like 5 clock domains? Instead of core/mem or core/shader/mem, it's now something like core/ROPs/uncore/shader core/memory.
The "hot-clock" in GF100 is the shader domain, known by this name from the previous arch generations, and that includes the texture unit clock which runs at fixed 1/2 the shader rate. The "base" domain covers everything else -- ROPs, setup, display logic, etc.
The pixel rate for GF100 seems to be correct. Sure there are 48 ROPs in there, but the limitation comes from the scan-out capacity at 32 pixels per clock (same as Cypress), following the setup phase. In some cases all the ROP units could be saturated at once, like heavy colour-blending op's or processing some of the more taxing AA modes.
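A rough sanity check of that clocking scheme in C++, using the hypothetical 700/1400 split floated earlier in the thread (none of these numbers are confirmed specs):
Code:
#include <algorithm>
#include <cstdio>

int main() {
    const double shaderMHz  = 1400.0;          // hypothetical hot clock
    const double textureMHz = shaderMHz / 2.0; // fixed 1:2 shader:texture ratio
    const double baseMHz    = 700.0;           // ROPs, setup, display, etc.

    // 48 ROPs on paper, but scan-out after setup caps throughput at
    // 32 pixels per clock (same as Cypress), so that's the sustained limit.
    const double pixelsPerClock = std::min(48.0, 32.0);
    const double gpixPerSec = pixelsPerClock * baseMHz / 1000.0;

    std::printf("texture clock: %.0f MHz\n", textureMHz);
    std::printf("sustained pixel rate: %.1f Gpix/s\n", gpixPerSec);
    return 0;
}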
700 seems like a magical barrier for nVidia.
7900 GTX - 650 MHz
8600 GTS - 675 MHz
Except for the 9800GTX+/GTS250, I can't recall anything that made it to 700.
Now, looking back at the last couple of big-chip launches:
G70 - 7800GTX at 430 MHz
G80 - 8800GTX at 575 MHz
G200 - GTX280 at 602 MHz
Either a big DX change brings along a big change in clocks (i.e. 50% more), or, looking at just the last two, something around 600 MHz.
Now, if you look at nVidia's 40nm track record, things look grim:
GT220 - 625 MHz
GT240 - 550 MHz
This is the same 40nm process on which AMD's 5770 and 5870 run at 850 MHz. If nVidia hasn't overcome whatever issues are causing this gap, that suggests something around 500 MHz for Fermi.
Surely some of you will cry foul. Pish posh, clocks don't matter. But if the 4890 is part of a new trend, AMD's clocks will only improve. And already, 850 vs. 600 is a huge 42% gap.
I'm not sure what to say about triangles. Over at B3D, they brought up the fact that 2 x 5770s beat 1 x 5870 in triangle-intensive games like HAWX, which the GF100 was benched as being pretty fast in. Supposedly two cards means twice the triangles per clock?
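That's the usual reading: each GPU has its own setup engine running at roughly one triangle per clock, and under AFR each GPU rasterizes its own frame, so setup throughput effectively doubles. A back-of-the-envelope C++ sketch (the 1 tri/clock figure is an assumption; 850 MHz is the launch core clock of both the 5770 and the 5870):
Code:
#include <cstdio>

// Setup throughput if each GPU retires trisPerClock triangles per core clock.
static double trisPerSecond(double coreMHz, int gpus, double trisPerClock = 1.0) {
    return coreMHz * 1e6 * trisPerClock * gpus;
}

int main() {
    std::printf("1x 5870: %.2f Gtri/s\n", trisPerSecond(850.0, 1) / 1e9);
    std::printf("2x 5770: %.2f Gtri/s\n", trisPerSecond(850.0, 2) / 1e9);
    return 0;
}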
I think he meant that the shader and texture units are tied together, but the texture units operate at half the speed of the shader domain, hence the 2:1 shader:texture clock ratio.
This makes sense from what people have been saying: the GF100 has a lot of tessellation / triangles-per-clock power, but its texture and ROP performance is not much better than the GTX285's... meaning that in games that rely on texture and ROP performance, its performance is not much better than a 5870's, if at all, but in triangle/tessellation-intensive games it is much faster than the 5870. This is corroborated by the benchmarks: HAWX (a triangle-intensive game) sees the GF100 perform much, much faster than the 5870, but the quoted average % increase over the 5870 is a lot lower than the specs would suggest, meaning that in other games heavy in textures and/or ROPs it doesn't perform much better, if at all. We'll see soon enough, but that's the latest story.
new final/performance numbers are in:
gtx470 battles 57xx radeon series
gtx480 battles 58xx radeon series
gtx490 battles 59xx radeon series
and this is the best nvidia can do for 2010
you guys were right all along nvidia sux!!
I don't see this happening...
I haven't seen that since the FX series, and I really don't see it happening again, especially after such a big development effort.
And I can't understand why there are still people coming to this thread to say that nvidia sucks and AMD/ATI is great.
Yeah, nvidia sucks and ATI's the boss!! Who makes 2 good cards every decade. Really? Have you installed the latest hotfix? lol!