Anyone know if it's being released early or late March?
sure
whatever lets you sleep at night.
Um no.
Intel's previous E6600 etc. naming scheme was VERY clear.
Intel's current naming scheme was changed to make clear which chips are Nehalem/Lynnfield/Clarkdale.
All i7s are quad core + HT. The 9xx are X58 and the i7-8xx are P55 (1156).
All i5s and i3s are 1156. The 7xx is Lynnfield without HT. The 6xx and 5xx are Clarkdale.
It's a very simple naming system.
AMD liked it so much, they called their Phenom II X4 "965" just like the Nehalem 965.
Contrast with nVidia. What is the difference between an 8800GTS and an 8800GTS? Is a GTS250 better than a 9800GTX+?
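Since the scheme really is that mechanical, you can decode it in a few lines. A minimal sketch (my own illustration of the rules above, desktop parts only, nothing official):
Code:
# Decode a Core i7/i5/i3 model per the rules in the post above.
# Sketch only - covers just the 2009/2010 desktop parts discussed here.
def decode(brand: str, model: int) -> str:
    if brand == "i7" and model >= 900:
        return "Bloomfield quad + HT, X58 (LGA1366)"
    if brand == "i7":
        return "Lynnfield quad + HT, P55 (LGA1156)"
    if brand == "i5" and model >= 700:
        return "Lynnfield quad, no HT, LGA1156"
    return "Clarkdale dual-core, LGA1156"  # i5-6xx and i3-5xx

print(decode("i7", 920))  # Bloomfield quad + HT, X58 (LGA1366)
print(decode("i5", 750))  # Lynnfield quad, no HT, LGA1156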
so does this thread contain any actual specs or true info other than it will be codenamed Fermi/GF100 and called GTX-4X0??? cuz i want like clock (core/mem/shader) info, memory amounts, interface, etc etc info.
guys don't get your skirts up.. 64 texture units at half shader clock makes Fermi mind-bogglingly efficient and it still beats the 5870 (44 gigatexels vs 68 gigatexels) :rofl:
the whitepaper doesn't state half/full or anything about the shader clock.. it only states "higher clock" ;)
the whitepaper is @ v1.4, and it won't remain @ v1.4 lol
only 64 texture units @ half shader clock cannot coexist.. one or the other will get a whitepaper update.. but then again it still beats the 5870's texturing power :rofl:
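For what it's worth, the texel-rate arithmetic behind those 44 vs 68 gigatexel figures is easy to redo. A minimal sketch, assuming the rumored 64 TMUs at half of an assumed 1400 MHz shader clock (no Fermi clock here is confirmed):
Code:
# Texel-rate check for the numbers above. The 1400 MHz shader clock
# is an assumption (rumored 1:2 core:shader ratio), not a known spec.
fermi_tmus = 64
fermi_tex_clock_mhz = 1400 / 2          # TMUs at half the shader clock
rv870_tmus, rv870_clock_mhz = 80, 850   # Radeon HD 5870, known specs

fermi_gt = fermi_tmus * fermi_tex_clock_mhz / 1000   # -> 44.8 GTexels/s
rv870_gt = rv870_tmus * rv870_clock_mhz / 1000       # -> 68.0 GTexels/s
print(f"Fermi (assumed): {fermi_gt:.1f} GT/s vs 5870: {rv870_gt:.1f} GT/s")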
@overclocking101 simply put..no :)
dude calm down, this is not that serious. we don't like re-branding any more than you do, but the point is, we try the best we can not to get taken advantage of. to be honest, in one way or another we all get taken advantage of by companies, even the ones who do research.
If you ran a company and someone put papers in front of you with your debt, and then laid down business models to pay that debt and keep the company rolling, and one of those was re-branding, you would choose to do it too, based on how effective it is. It's business.
I know it sounds like poor logic to just deal with it, but unfortunately, it's the truth. More importantly, you don't have to deal with it if you don't want to.
You're blowing this whole research thing out of the water trying to make your argument sound foolproof. it takes 10 minutes of research on a graphics card to find out a little of what you're buying (rebranding, performance and whatnot), not an 8 hour work day LOL. and of course we don't do this for every little thing; impulse/convenience buying is in everyone, some more than others.
seriously, you're saying you don't do this, yet you're in this very forum right now, finding out exactly what Fermi is, and I'm sure you're not taking days off work to do it.
and you say you're the consumer that would be tricked by this, yet you're, again, in this very forum showing your concern about re-branding on mobile chips.. It's your logic that I don't understand.
For someone who kept saying in the other threads that he didn't know what he was talking about, you sure love claiming "to know!"
Kind of like how you said that the GT200 was stomping the RV770 because 512-bit > 256-bit... except for that inconvenient fact about GDDR5, which is exactly what Fermi is using now, huh!
:rofl::ROTF:
Anyways, at B3D, they analyzed the numbers and it makes sense - in games which use a lot of geometry / tris per clock (such as HAWX), Fermi was quite a bit faster than the GTX 285. In those that didn't, though, it was close to 5870 performance... meaning there are games where the Fermi changes will improve its performance greatly, and others where it won't be much faster than the 5870, if at all. Hence the average performance values put it above the 5870 but not faster than the 5970.
And if you want to talk efficiency... how about comparing the card to other cards in its power envelope???
*broken record* you keep saying that^ :ROTF: just wth are you on about.. what claims ???
claimer: i dont know anything zip nada/am no gpu prophet :welcome:
fyi: even g80 stomps on rv770 :rofl:
b3d ?? ohhh those ati gpu prophets ? :rolleyes: right yeah yeah
power envelope ?? ati gpus dont eat watts vegan straight up :up:
WHAT!? :ROTF:
http://i47.tinypic.com/264szzl.jpg
If anyone had any doubts...
Some call this... sig worthy!!
So you can go ahead and claim you know exactly how Fermi will stomp ATI. Yeahhhh...
Quote:
claimer: i dont know anything zip nada/am no gpu prophet :welcome:
Yeah, you mean those guys who actually have technical knowledge of GPUs?
Quote:
b3d ?? ohhh those ati gpu prophets ? :rolleyes: right yeah yeah
But hey, because the experts' conclusions don't mesh with your "conclusions", they're ATI gpu prophets? LOL!
hahah this made me lol :D
i agree that intel's previous naming scheme was pretty good... not perfect but good...
about the current naming scheme... nah, it's really confusing...
there are i5s that beat i7s, there are even i3s that beat i7s and i5s!!!
intel made the mistake of creating hw categories that don't mean anything to end users... end users want to know what product to choose for what task/system... ie gaming = X, workstation = Y, office = Z
with i9 i7 i5 i3 that doesn't work very well at all...
and about amd liking intel's scheme so much they copied it...
nah, i don't think they care, they neither like nor dislike it, they copied it just because it's intel and people will be familiar with the scheme, more or less... ie people know the 965 is a fast cpu, so if they see an amd 965 they know it's a fast cpu as well...
very bad idea in my opinion tbh cause it makes amd look like a cheap asian copycat company instead of a proper western engineering powerhouse that... well it WANTS to be... and let's say it is, but i'm being really generous here :P
you have proven in your previous posts that you have no insider info about fermi whatsoever, so how can you rave about an unreleased product? why are you trying to convince us how great it is when you have no possible way of knowing anything about it yourself?
or do you think you are surrounded by ati fans and think it would be fun to poke at them and call them out? :D
This thread is rapidly turning into a troll banquet (yet again) :eek:
Yes Fermi will hopefully be released soon.
Yes it will be pretty fast.
Yes it will be :banana::banana::banana::banana:ing expensive.
End of discussion :)
Until we get some (more) hard numbers what is the point of creating another 100 page long pissing contest?
About AMD copying Intel's naming scheme, I think it's because they want people to think that their 965 is equal to Intel's 965.
Just like they did with the 3000+ etc, trying to convince people their 3000+ equals a Pentium 4 at 3000 MHz.
Could have something to do with the Tag "Troll Party" at the bottom. :rofl:
I can honestly say that, "All I know, is nothing at all"
I feel I may have even lost some ground on that after reading some 40-odd pages of the last Fermi thread and all of this new one. :rolleyes:
Let's hope the entertainment improves, eh. :rolleyes:
Honestly, it seems like whenever someone tries to interject with some facts, they are ignored while the trolls are replied to.
It's a vicious circle.
This and all other GF100-based threads need to be closed until additional, concrete info is available to the public. If not, these threads will continue to be littered with people claiming "insider knowledge".
who's trying to interject with some facts?.. i really hope Napalm isn't the person in reference. i mean, could you honestly believe someone who says the g80 "stomps" the rv770, which is easily disproved with a short google search by anyone who doesn't "know"... or is there someone else in here whose "facts" were missed because of all the BS surrounding them???
lmao @ the troll party tag
I remember one episode of this "vicious circle" a couple weeks ago. Somebody was making claims about the performance of the Fermi without having any documentation/benchmarks.
He was boldly claiming that he knew more about Fermi's performance back in July than most of us know today, because of his "insider knowledge".
In the end he wanted to use a picture of his beer belly to prove his groundless claims about the performance of an unreleased product. Just because of his claimed "insider knowledge".
I can't believe that I'm still visiting and posting in this thread!!
wow... and to think i've been ignoring this thread... partly because it was all speculation but mostly because i have no intention of buying a fermi based product.... (i already invested in ati this round)...
'but' i have to say this last page has had me in stitches.... very funny stuff.... troll parties are fun fun fun.... :)
sooo let me add a little conjecture of my own... ati will do dx11 better than nvidia and... no i think i'll stop there... that'll do it... ;)
^^ same reason I'm ignoring it.
If nVIDIA releases a faster card, I'll buy it. As long as ATI has the faster card, I'll use it. Napalm, what gives? You usually have constructive posts.
Look through this thread and the last dozen or so pages of the last Fermi thread.
- There IS a hardware-based tessellator in the PolyMorph Engine
- The GF 100 architecture IS highly scalable
- The cards ARE in volume production and have been for some time now
- The PM Engine WASN'T an afterthought
I could go on and on but each of these points and more were discussed shortly before trolls continued their assault on the proper discussion points of this thread.
Erh, don't read everything so literally. I wasn't apologizing for anything; and I sure as hell didn't take any days off ;)
BTW, I'm not the one suggesting that "thorough research" before making a purchase like a GPU is necessary... A good and honest naming convention goes a long way in helping consumers.
1st, what does an "honest" model naming convention for a graphics product tell you about the actual graphics product you are buying?
Answer: it tells you nothing if you don't spend some time "researching" to know/understand what it actually is. There's no shader count, no dx capability, no core clocks, no memory info, nada in the name itself, and without "researching" how do you even know what it is capable of?
So now that you know the name of the new fermi cards without doing any research at all, you must be able to extrapolate all the specs for all of us from the GTX480 & GTX470, because surely all the information you need is right there in the naming convention according to you....
you are digging too deep into that research thing ... he is trying to tell you that for a high percentage of customers it's easier to go just by the numbers (highest model number = highest performance for them)
It's not about research alone, it's about basic human common sense and using the rock on your shoulders for something more than looking at model numbers.
There's nothing easy about a totally uninformed customer going into a store, looking at a shelf full of video cards with model numbers across the board, and hoping to make sense of model numbers alone to figure out what they want; that would be quite daunting if you didn't have a clue.
If a customer doesn't have a basic understanding of what they are buying, what basis do they have to compare an item? Is a 5970 or gtx295 what they feel they need because they cost so much, or what is the difference with the 5870 or gtx275? If they don't know, they could end up with a dx10 card when they assumed they were all dx11, or even end up with a dx9 card when they were hoping to get a video card that lets the vista aero theme work.
All it takes is basic common sense to read the package, ask questions, or google for a few minutes; this isn't research for a thesis or phd.
How many people go into a store, pick up the gpu box with the biggest or lowest number, and simply proceed to the checkout knowing only the model number? who does that other than the hypothetical masses of zombie human consumers with no basic reasoning skills, seriously...
Why is the customer upgrading their video card anyways? I would wager most computer-hardware-illiterate folks never even open the computer case, much less upgrade a video card, and those who know enough to upgrade their video card have the basic common sense to figure out what offers the best performance for their budget by looking at all the pretty graphs posted by review sites or even on the box itself.
Go fermi....
Did everyone forget the first post in this thread.....
Originally Posted by Gomeler
Just as the title says, this is the Fermi thread. You find something you want to share about Fermi, stuff it in here. We don't need a new thread every day til the fictitious launch day.
Play nice or you'll get the hammer.
Originally Posted by Serra
+1
Remember:
- We can/do track who writes tags and don't appreciate seeing them played with. Good Tags == Traffic folks!
- Let's keep things on topic. Or, as there is next to no info available, as not off-topic as possible. When a mod or admin gets grumpy and wants to give infractions or play with the banhammer this will be a popular place to visit as the last thread was like shooting fish in a barrel.
If anyone has links to nVidia they feel are credible / official news feel free to send me a PM and I'll review and toss it up.
While I grant that a lot of GPU users out there who go to upgrade probably are smarter than the average consumer...
You're giving consumers wayyyy too much credit here. How else would you explain Best Buy stocking $200 mid-range cards from a year ago... and people actually buying them?
So many B&M stores prey on the uninformed it's ridiculous...
Like it's been said... think of your average American... then realize that he's only average and how many more are stupid?
And as technology improves/gets more advanced, the gap between the "knows" and the "dont knows" only gets larger
Any documentation or benchmarks that he could provide us...
Stop being a jerk to one of the few people that actually does have inside info.
People get mad that there isn't any leaks and then when people do leak info, they ignore it or attack it... Talk about a "vicious circle."
Whoa, I wouldn't call a gpu user smarter than the avg consumer, more hardware savvy than the avg consumer maybe. More like a lot of gpu users have an interest in computers, while the avg consumer is only interested in getting on, say, the internet, and couldn't care less about the mechanisms that allow them to actually browse the internet or how the computer works.
The consumer is you, me, and everyone else; you were not born knowing what you know now, nor were any of us. If you don't want to buy junk/outdated hardware it's not hard to get informed before buying, but if you don't care about the hardware you're buying it doesn't matter either way, because you don't care. Hence you can't really blame a company for selling someone something they bought when they themselves don't even know what they want and bought it anyways.
I would agree, which is what I've been going round in circles talking about. If you want to make the right choices your only option is to be well versed in regards to the item you plan to purchase. Blame can go both ways for the consumer and the store, but if the consumer doesn't wise up, the store will give the consumer what they think they want, no questions asked. Then again, if the customer doesn't know any better and is happy with their purchase in the end, who are we to say they got duped? They may have gotten exactly what they thought they wanted because they really don't know the difference, and even if they got the better hardware they probably wouldn't know the difference.
Most people here are hardware savvy because they are interested in hardware in general; that doesn't mean non-hardware-savvy consumers are stupid, they simply don't share an enthusiast's level of interest in regards to their hardware purchases. They probably have other interests and think computer hardware folks are crazy stupid for spending so much money on something that gets outdated pretty much annually.
Sure, but the "don't knows" don't matter because they don't care, or don't care enough to know what needs to be known to be in the "know", so the "don't knows" might also be considered "don't cares" because it doesn't matter to them.
So we have people who are hardware savvy and care enough to learn about hardware, out of interest, to make smart choices, and we have computer users who simply want to use a working computer and don't care about the hardware regardless.
i really mean no offence here, but personally i have never seen so many words say nothing, i see a lot of 'buzz' words like 'consumer' and 'don't cares' and 'don't knows'
here's what i think you're saying in a few less words:
'uninformed people get ripped off'
'people like us are tech savvy'
'we don't get ripped off'
and isn't that what the quoted guy was saying anyway?
It's now officially confirmed:
NVIDIA is simply renaming their cards for us to have a ... healthy debate on xtremesystems. It's not a marketing gimmick, nor is it an attempt to boost the bottom line from a company with no real product against the competition. Obviously, EVERYONE is spending at least a few days doing their "research" before buying a $200 GPU, right?
A big thank-you to NVIDIA for providing us with free entertainment! In this economic crisis, we need more companies like NVIDIA ;)
/sarcasm
http://eatourbrains.com/EoB/wp-conte...7/05/troll.jpg
this really doesn't belong in the gf100 thread, perhaps a thread in the ati section would be more appropriate.
<<deleted by myself, I don't think I was saying anything useful>>
is it? what really matters is the available bandwidth per frame rendered, isn't it? and for a dual-gpu card that bandwidth IS double...
if you look at how dual-gpu cards perform compared to single gpu cards of the same model, and to single gpu cards of the next generation with almost double the memory bandwidth per gpu and double the gpu horsepower per gpu... you'll notice that dual-gpu cards are a pretty reliable perf indication for next gen single gpu cards... not 100%, but a pretty good indication...
a dual gpu card is basically gpu raid0, you don't gain memory space by adding a second unit, but you gain bandwidth...
a dual fermi card thus has the effective bandwidth of a 768bit gddr5 bus... :slobber:
i venture a guess here that fermi2 and even fermi3 single gpu cards won't beat that :o
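To put rough numbers on that: a minimal sketch, assuming GF100's rumored 384-bit bus; the 4.0 Gbps GDDR5 data rate is purely a placeholder, not a leaked clock:
Code:
# Effective-bandwidth math for the dual-GPU argument above.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

single = bandwidth_gbs(384, 4.0)   # one GPU, 384-bit @ 4.0 Gbps (assumed)
dual = 2 * single                  # AFR: each GPU renders its own frame,
                                   # so per-frame bandwidth roughly doubles
print(f"single GPU: {single:.0f} GB/s, dual card: {dual:.0f} GB/s")
# same as a single 768-bit bus at the same data rate:
assert dual == bandwidth_gbs(768, 4.0)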
i heard nvidia is notorious for not promoting staff; only if people leave and then get hired back do they have a chance to sit in a more comfy seat or make more money... plus they are known to drive their employees pretty hard, 14+ hour working days... sooner or later that starts to hurt the employees' performance...
hey i totally agree... it IS ridiculous... but its the sad truth :shrug:
for the + rating from amd... no idea what their intentions were, but the idea behind the + rating during the a64 days was really good! amd never talked that much about it, but they actually ran a series of benchmarks (sysmark, office, excel, games, photoshop and a few others), and they based the + rating on the performance of a cpu in relation to an original athlon 1ghz (where applicable, since it didn't even have sse)
so when amd launched their dual cores, those + ratings were pretty good... they didn't relate to games and other applications all that much, but for productivity they were pretty spot on iirc
the sad part is that i know you won't even apologize for these remarks once it turns out he was right...
yes, and forums wouldn't exist; there would only be 1000 chatrooms on the whole net and 5 news websites... is that really what you want?
back on topic:
how do you guys think fermi will scale cpu-wise?
will it need as much cpu power as a 295 to max out, less, or more?
and how tdp/temp limited will it be when overclocking it?
Several of my contacts there (marketing, development, etc) have risen through the ranks over the years I have been talking to them, so I don't know where you got your info from.
As you increase the performance of a GPU, the CPU naturally needs to be faster to feed it information at a quick pace. However, as these GPUs quickly outpace game development, the CPU will continue to be a bottleneck all the way into high instances of AA. Luckily, it seems like DX11 has moved some emphasis off of the CPU, which bodes well for the future.
Quote:
how do you guys think fermi will scale cpu-wise?
will it need as much cpu power as a 295 to max out, less, or more?
This really depends on how good of a heatsink NVIDIA sticks on it. I have a feeling though that the combination of 8-pin / 6-pin power connectors and the PCI-E 2.0 slot will be able to provide more power than even an overclocked GF100 can ask for. Naturally, when you get into areas like voltage tweaks the consumption of any component will skyrocket.
Quote:
and how tdp/temp limited will it be when overclocking it?
It's not just the consumers; the shop managers also often have no clue about various brands and models of hardware. Whenever I see one of them 'helping' a customer... I really wish I didn't have to hear what they were suggesting. :shakes:
So no wonder renaming works for Nvidia so well.
Most people only look at the numbers... model number, price, VRAM amount, that's it. And a lot assume that the more they pay the better product they get... So there is a lot of room for ridiculous pricing, and people are more than happy to buy such stuff.
As I don't want to dig into this useless thread... What about GTX4XX? Wasn't it supposed to be GF100? Renaming again? What happened to GTX3XX, did they just skip that number?
They've renamed the GTX 2XX mobile chips to the GTX3xx cards.
So that 9800gt in your laptop with the fake title of GTX 280M, which imitates an architecture it doesn't have, has now become the GTX360M, faking yet another architecture change.
The 9800M is not a GT200 or a GF100.
We all know that nVidia is going to rename old junk anyways, but separating the naming category from the new products is a good, and welcome, new move.
There are few to no constructive posts in this thread, hence my post, in my opinion, is valid. It's just the same old same old rehashed over and over again. Now it's even more of a load of rubbish with even less info.
Forums would exist, and they would be a lot more interesting if people would post some more 'useful' things.
Every day I check this thread and it's bla bla this bla bla that. It's probably because Fermi is so late that I just can't bear this stuff anymore, usually it isn't as bad and actually quite entertaining sometimes, because there is actually stuff to discuss. Now there is nothing much to discuss anymore, and people still keep yapping.
Same experience here. Most of the time I overhear IT "advice" from department store staff, I realize that those store guys know pretty much nothing about hardware.
Actually, I know plenty of IT engineers, and surprisingly enough not THAT many of them follow hardware development closely.
Ditto on the stores. Also, surprisingly enough as you said, not many software developers really know much in-depth on the hardware side of things from what I have seen in the dev community :(. It was definitely a shock to me when I started to seriously work on developing.
since we're already into February now, who wants to speculate on a more specific launch date? I've heard both 'March 2' and 'end of March' recently :shrug:
^No one will, because no one can (or everybody can). :D
Guess we'll just have to wait till some solid info surfaces.
No offense, but I mean we're already on a 2nd thread for much of that "speculation". :eek:
GF100 is the codename for the silicon. Rumors/info first started out, for the same chip, as GT300/G300. Fermi is the codename for the architecture.
March 9th is the earliest it will launch.
That is a date for something else... not the "launch" date of Geforce parts.
I agree, if this is the start of some relatively normal naming/branding of their products. I say relatively because no system is perfect, but the last few years from Nvidia haven't exactly been good.
then what? the actual availability of geforce parts? :p:
cause THATS the date people really care about :D
mid april, that's late q1.... LATE q1... but yeah, fermi is launching and will be available in retail in mid november, mid q4 2009... right... :P
i don't believe any of the dates floating around... nvidia has changed fermi's dates so many times it's beyond ridiculous...
they should just say it's done when it's done, and that's that...
"it will be a kick4ss product when it launches and good things need time..."
what difference does it make if they launch it in april or may? as if 1 more month would make a difference...
Metro 2033 is supposedly some big TWIMTBP/PhysX title so I'm expecting it to at least paper launch around the same time - March 16th.
gtx: 470 vs 480: what if 480 is 50-70% faster ?? :D
then again maybe just 10-20% as the model numbers would suggest :)
You got it all wrong saaya :shakes:
Nvidia said stop, STOP! don't buy an Ati 5800, Fermi's almost done! it's gonna burn a hole out your underwear.........
(1 year later at launch) See, I told you fermi was tits on a ritz, but it's gonna cost you 2x as much as a 5850 for all the time and money we lost trying to make it. :down:
Ummm... You're going to bring up the value card?
Ti 4200? Ti 4400? Ti 4600?
Disagreeing with me just for the sake of disagreeing is a little bit silly tbh. :shrug:
And just for the record if all you're saying is that the MX series was :banana::banana::banana::banana:ty, I agree with you 100%.
But it sounds like you're trying to discredit my claim based on the MX series, and I do take issue with that.
Well, it was a huge fail, period. Lots and lots and lots of people bought the GF4 MX in those days (no one was used to the renaming crap yet) just because it was a GF4, yet it was pretty much a slightly optimised GF2 MX card (DX7!) with no shader support whatsoever... and it was waaaay too damn popular, sadly.
Even Carmack himself mentioned that it's one of the reasons Doom 3 was delayed, lol.
Yea I'm sorry if I came off overly defensive; you're absolutely right about all that. It was a complete dirt pile of a card. :ROTF:
I just loved my Ti4400, so I get a little up in arms when someone insults anything involving the words geforce and 4 in the same sentence. :D
I got amazing startling revolutionary news...
In DX9 and DX10, nVidia and AMD rendering should be identical, right? When you add AA and AF, the different algorithms might produce very slightly different colors not noticeable to the eye.
But in DX11, with "tessellation", does that mean the driver can "optimize" how many more triangles to add?
ie original "ball" made using 60 triangles.
nvidia tessellation makes 300 triangles.
AMD tessellation makes 200 triangles - but it looks rounder!
:shrug: just wondering how we gonna handle reference quality images.. :shrug:
I can't wait any longer.. just release the specs already, plz.
Just wait for The GT300/Fermi Thread - Part 3! its on its way... :yepp:
@thread: does the lack of a lower mainstream card based on fermi mean that the 5750 and 5770 will not have anything to fight?
If that's true the 5830 could sell like hot cakes if it's priced right... That is, unless people fall into the nvidia renaming trap and buy a GTX3xx based on a G200 core that works with DX10.1 but not DX11 :(
they will, especially since it's lower mainstream; those folks don't know or care about dx11. the EVGA 7200GS is clearly a better choice than the SPARKLE 8400GS because both have 512MB memory (so both are in the same power spot) but the first one is cheaper and from EVGA, which manufactures the whole card just like MSI, XFX, etc (that Geforce word on all of them seems to confuse them though). what?! newer generation!! stop repeating the crap you read at tech forums. what?! native resolution! what's that?!
even if they knew about dx11, it wouldn't matter, they are already lowering most of their graphics settings in games. they are the bigger market share who gets everything, and we don't even get one liquid cooling case :(
edit: from tags: troll party. hehe :) nice one
Specs are out; just the final clocks are yet to be known. It is hard to follow though with all the chatter posts.
http://img403.imageshack.us/img403/4469/img0027828.png
For me the most impressive part is the number of triangles (tessellation), and although the 5970's specs are higher, they are only higher due to the simple fact that it is two GPUs, i.e. not a true reflection (as it is not shared/one unit).
Also a benchmark
GTX285: 51 FPS
GF100 part: 84 FPS.
The 5970 gets 92.7 fps, which I think could be caught with higher clocks. Of course you could always just clock the 5970 higher, but we all know the problem when you do that. My prediction is that the GF100 is going to be very fast, almost level with the 5970, but by the time it does come out ATI would only need to release a higher-clocked 5870 to stay competitive. Like I said, what I really want to know is how the GF100 does on DX11, i.e. next gen. Being built from the ground up for it, I think it will be significantly faster than ATI in that department until ATI's next refresh.
That's terrific but i was talking about this.
http://www.xtremesystems.org/forums/...5970+overclock
Questions about that "G100 spec" chart.
what is up with these ratios? And the supposed MAD/triangle numbers are even harder to justify/figure out. I'm betting 90% likely fake (just like rumours of 400SP on RV770).
Code:
The ratios
           GTX285   G100   5870
fillrate     1        1      1
samples      2.5      2      2.5
textures     2.5      8     10
Does anyone think they will get as high as 725/1400 core/shader on GTX480? I don't think core speed will be that high.
I think I read somewhere that the ratio between the core and shader will be 1:2; that means 700:1400 or 650:1300 and so on.
true. Doesn't this thing have like 5 clock domains? instead of core/mem or core/shader/mem, it's now like core/rops/uncore/shadercore/memory
The "hot-clock" in GF100 is the shader domain, known by this name from the previous arch generations, and that includes the texture unit clock which runs at fixed 1/2 the shader rate. The "base" domain covers everything else -- ROPs, setup, display logic, etc.
The pixel rate for GF100 seems to be correct. Sure there are 48 ROPs in there, but the limitation comes from the scan-out capacity at 32 pixels per clock (same as Cypress), following the setup phase. In some cases all the ROP units could be saturated at once, like heavy colour-blending ops or processing some of the more taxing AA modes.
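A small sketch of that scan-out cap with a placeholder base clock (no final GF100 clocks are public), just to show why 48 ROPs don't translate into a 48-pixel-per-clock rate:
Code:
# Pixel-rate math for the scan-out limit described above.
BASE_CLOCK_MHZ = 650           # assumed base-domain clock, not a leak
ROPS = 48                      # ROP count from the whitepaper discussion
SCANOUT_PIXELS_PER_CLOCK = 32  # setup/scan-out cap (same as Cypress)

rop_bound = ROPS * BASE_CLOCK_MHZ / 1000                          # 31.2 Gpix/s
scanout_bound = SCANOUT_PIXELS_PER_CLOCK * BASE_CLOCK_MHZ / 1000  # 20.8 Gpix/s
print(f"ROP-limited: {rop_bound:.1f} Gpix/s, "
      f"scan-out-limited: {scanout_bound:.1f} Gpix/s")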
700 seems like a magical barrier for nVidia.
7900 GTX - 650 MHz
8600 GTS - 675 MHz
Except for the 9800GTX+/GTS250, I can't recall anything that made it to 700.
Now, looking back at the last couple of big chip launches:
G70 - 7800GTX at 430 MHz
G80 - 8800GTX at 575 MHz
G200 - GTX280 at 602 MHz
Either a big DX change brings along a big change in clocks (ie 50% more), or, looking at just the last two, something around 600 MHz.
Now, if you look at nVidia's 40nm track record, things look grim:
GT220 - 625 MHz
GT240 - 550 MHz
This is the same 40nm that AMD's 5770 and 5870 run at 850 MHz on. If nVidia hasn't overcome whatever issues are causing this gap, it suggests something around 500 MHz for Fermi.
Surely some of you will cry foul. Pish posh, clocks don't matter. But if the 4890 is part of a new trend, AMD's clocks will only improve. And already, 850 vs 600 is a huge 42% gap.
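That last gap figure checks out; a trivial sketch with the post's own numbers:
Code:
# Relative clock gap: AMD at 850 MHz on 40nm vs ~600 MHz guessed for
# nVidia's big chips (both numbers from the post above).
amd_mhz, nv_mhz = 850, 600
gap = (amd_mhz - nv_mhz) / nv_mhz
print(f"AMD clock advantage: {gap:.0%}")   # -> 42%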
I'm not sure what to say about triangles. Over at B3D, they brought up the fact that 2 x 5770s beat 1 x 5870 in triangle-intensive games like HAWX, which the GF100 was benched as being pretty fast in. Supposedly 2 cards means twice the tris per clock?
I think he meant that shader and texture units are tied together - but texture units operate at half the speed of the shader domain, hence a 2:1 clock ratio of shader:texture
This makes sense from what people have been saying - the GF100 has a lot of tessellation / triangles-per-clock power, but its texture and ROP performance is not much better than the GTX285's... meaning that in games that rely on texture and ROP performance, its performance is not much better than a 5870's, if at all, but in triangle/tessellation intensive games, it is much faster than the 5870. This is corroborated by the evidence in benchmarks - HAWX (a tri-intensive game) sees the GF100 perform much, much faster than the 5870, but the quoted average % increase over the 5870 is a lot lower than the specs would suggest, meaning that in other games heavy in textures and/or ROPs it doesn't perform much better, if at all. We'll see soon enough, but that's the latest story.
new final/performance numbers are in:
gtx470 battles 57xx radeon series
gtx480 battles 58xx radeon series
gtx490 battles 59xx radeon series
and this is the best nvidia can do for 2010
you guys were right all along nvidia sux!!
I don't see this happening...
I haven't seen that since the FX, and I really don't see it happening again, especially after so much development work.
And I can't understand why there are still people coming to this thread to say that nvidia sucks and AMD/ATI is great.
Yeah, nvidia sucks and Ati's the boss!! who makes 2 good cards every decade. Really? Have you installed the latest hotfix? lol!