I've got amazing, startling, revolutionary news...
In DX9 and DX10, nVidia and AMD rendering should be identical, right? When you add AA and AF, the different algorithms might produce very slightly different colors, not noticeable to the eye.
But in DX11, with "tessellation", does that mean the driver can "optimize" how many triangles to add?
i.e. the original "ball" is made using 60 triangles.
nVidia tessellation makes 300 triangles.
AMD tessellation makes 200 triangles - but it looks rounder!
Just wondering how we're gonna handle reference-quality images...
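For what it's worth, a rough way to reason about those triangle counts: under uniform integer tessellation, a triangular patch whose three edge factors are all n subdivides into about n^2 smaller triangles, so the output count follows from whatever factors the application requests. A minimal Python sketch, reusing the hypothetical 60-triangle ball from above:
Code:
# Minimal sketch: with uniform integer tessellation, a triangular patch
# whose edge factors are all n subdivides into n^2 smaller triangles,
# so the triangle count is set by the factors the app requests.
def tessellated_triangles(patch_count, tess_factor):
    return patch_count * tess_factor ** 2

# The hypothetical 60-triangle "ball" from the post:
for factor in (1, 2, 3):
    print(factor, tessellated_triangles(60, factor))
# -> 60, 240, 540 triangles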
I can't wait any longer.. just release the specs already, plz.
Just wait for The GT300/Fermi Thread - Part 3! It's on its way...
@thread: does the lack of a lower-mainstream card based on Fermi mean that the 5750 and 5770 will have nothing to fight?
If that's true, the 5830 could sell like hot cakes if it's priced right... that is, unless people fall into the nvidia renaming trap and buy a GTX3xx based on a G200 core that works with DX10.1 but not DX11.
They will, especially since it's lower mainstream; those folks don't know or care about DX11. An EVGA 7200GS is clearly a better choice than a SPARKLE 8400GS because both have 512MB of memory (so both are in the same power spot), but the first one is cheaper and from EVGA, which manufactures the whole card just like MSI, XFX, etc. (that GeForce word on all of them seems to confuse them, though). What?! Newer generation?! Stop repeating the crap you read at tech forums. What?! Native resolution? What's that?!
Even if they knew about DX11, it wouldn't matter; they're already lowering most of their graphics settings in games anyway. They're the bigger market share who gets everything, and we don't even get one liquid-cooling case.
edit: from tags: troll party. hehe, nice one
Specs are out; just the final clocks are still to be known. It is hard to keep up, though, with all the chatter posts.
For me, the most impressive figure is the number of triangles (tessellation), and although the 5970's specs are higher, they are only higher due to the simple fact that it is two GPUs on one board; i.e. not a true reflection (as it is not shared/one unit).
Also, a benchmark:
GTX285: 51 FPS
GF100 part: 84 FPS.
The 5970 gets 92.7 FPS, which I think could be caught with higher clocks. Of course you could always just overclock the 5970, but we all know the problem when you do that. My prediction is that the GF100 is going to be very fast, almost level with the 5970, but by the time it does come out ATI would only need to release a higher-clocked 5870 to stay competitive. Like I said, what I really want to know is how the GF100 does on DX11, i.e. next-gen workloads. Being built from the ground up for it, I think it will be significantly faster than ATI in that department until ATI's next refresh.
That's terrific, but I was talking about this:
http://www.xtremesystems.org/forums/...5970+overclock
Questions about that "G100 spec" chart.
What is up with these ratios? And the supposed MAD/triangle numbers are even harder to justify/figure out. I'm betting 90% likely fake (just like the rumours of 400SP on RV770).
Code:
The ratios   GTX285   G100   5870
fillrate        1       1      1
samples         2.5     2      2.5
textures        2.5     8     10
Does anyone think they will get as high as 725/1400 core/shader on GTX480? I don't think core speed will be that high.
I think I read somewhere that the ratio between the core and shader clocks will be 1:2; that means 700:1400 or 650:1300 and so on.
True. Doesn't this thing have like 5 clock domains? Instead of core/mem or core/shader/mem, it's now something like core/ROPs/uncore/shader core/memory.
The "hot-clock" in GF100 is the shader domain, known by this name from the previous arch generations, and that includes the texture unit clock which runs at fixed 1/2 the shader rate. The "base" domain covers everything else -- ROPs, setup, display logic, etc.
The pixel rate for GF100 seems to be correct. Sure there are 48 ROPs in there, but the limitation comes from the scan-out capacity at 32 pixels per clock (same as Cypress), following the setup phase. In some cases all the ROP units could be saturated at once, like heavy colour-blending op's or processing some of the more taxing AA modes.
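To put that 32 pixels/clock scan-out limit into numbers, a quick back-of-the-envelope Python sketch (the 600 MHz base clock is purely a placeholder, since final clocks weren't public at this point):
Code:
# Peak pixel rate = pixels per clock * base clock (MHz -> Gpixels/s).
def peak_pixel_rate_gpix(pixels_per_clock, base_clock_mhz):
    return pixels_per_clock * base_clock_mhz / 1000.0

print(peak_pixel_rate_gpix(32, 600))  # 19.2 Gpix/s at the scan-out limit
print(peak_pixel_rate_gpix(48, 600))  # 28.8 Gpix/s if all 48 ROPs could be fed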
700 MHz seems like a magical barrier for nVidia.
7900 GTX - 650 MHz
8600 GTS - 675 MHz
Except for the 9800GTX+/GTS250, I can't recall anything that made it to 700.
Now, looking back at the last couple of big-chip launches:
G70 - 7800GTX at 430 MHz
G80 - 8800GTX at 575 MHz
G200 - GTX280 at 602 MHz
Either a big DX change brings along a big change in clocks (i.e. 50% more), or, looking at just the last two, something around 600 MHz.
Now, if you look at nVidia's 40nm track record, things look grim:
GT220 - 625 MHz
GT240 - 550 MHz
This is the same 40nm process that AMD's 5770 and 5870 run at 850 MHz on. If nVidia hasn't overcome whatever issues are causing this gap, it suggests something around 500 MHz for Fermi.
Surely some of you will cry foul. Pish posh, clocks don't matter. But if the 4890 is part of a new trend, AMD's clocks will only improve. And already, 850 vs 600 is a huge 42% gap.
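For the record, that 42% is just the relative clock gap between the two hypothetical clocks above:
Code:
# Relative clock gap of a hypothetical 850 MHz part over a 600 MHz part.
amd_clock, nv_clock = 850.0, 600.0
print((amd_clock - nv_clock) / nv_clock)  # 0.4166... ~= 42%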
I'm not sure what to say about triangles. Over at B3D, they brought up the fact that two 5770s beat one 5870 in triangle-intensive games like HAWX, which the GF100 was benched as being pretty fast in. Supposedly two cards means twice the tri/clock?
I think he meant that the shader and texture units are tied together - but the texture units operate at half the speed of the shader domain, hence a 2:1 clock ratio of shader:texture.
This makes sense from what people have been saying - the GF100 has a lot of tessellation / triangle-per-clock power, but its texture and ROP performance is not much better than the GTX285's. That means in games that rely on texture and ROP throughput its performance is not much better than a 5870's, if at all, while in triangle/tessellation-intensive games it is much faster than the 5870. This is corroborated by the benchmarks - HAWX (a tri-intensive game) sees the GF100 perform much, much faster than the 5870, but the quoted average % increase over the 5870 is a lot lower than the specs would suggest, meaning that in other games heavy on textures and/or ROPs it doesn't perform much better, if at all. We'll see soon enough, but that's the latest story.
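The B3D observation adds up if triangle setup is the bottleneck: triangle rate scales with the number of setup units times clock, and each GPU in a CrossFire pair brings its own setup unit. A rough Python sketch (the 1 triangle/clock rate is the commonly quoted figure for that generation and is an assumption here; both parts run 850 MHz):
Code:
# Triangle throughput ~= GPUs * triangles per clock * clock (MHz -> Mtris/s).
def tri_rate_mtris(gpus, tris_per_clock, clock_mhz):
    return gpus * tris_per_clock * clock_mhz

print(tri_rate_mtris(1, 1.0, 850))  # one 5870: ~850 Mtris/s of setup
print(tri_rate_mtris(2, 1.0, 850))  # two 5770s: ~1700 Mtris/s of setup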
new final/performance numbers are in:
gtx470 battles 57xx radeon series
gtx480 battles 58xx radeon series
gtx490 battles 59xx radeon series
and this is the best nvidia can do for 2010
you guys were right all along nvidia sux!!
Yeah, nvidia sucks and ATI's the boss!! Who makes two good cards every decade? Really? Have you installed the latest hotfix? lol!
Nvidia's Fermi GTX480 is broken and unfixable. Hot, slow, late and unmanufacturable.
As we have been saying since last May, Fermi GF100 is the wrong chip, made the wrong way, for the wrong reasons.
by Charlie Demerjian