Well, that is the kind of market that Tesla is targeting, not GeForce.
i'm not even sure if nvidia managed to produce 400 working gpus so far
well its not only about being smart... do you really think they put backup logic on fermi? its so huge that some of the blocks of a fully fledged GF100 probably ARE seen by nvidia as backup units... its just that nvidia can enable all of them on a few very good cores, while ati has backup logic that can only be enabled if something is broken... know what i mean?
hahah definitely :D
mhhhh in 3d? did you see it with glasses? what was your impression? i saw this at the display show here in taiwan and thought the bezel was terrible in 3d, cause it cuts the 3d objects in 2 pieces in a really weird way...
i didnt say the chips cant be produced... i just find it odd that the ref pcb for gt300 isnt finished yet... cause nvidia previously said they have the cards already and just wait for the chips... or am i mixing something up?
hehhehe :D it all depends on how you define mass production i guess :D
do you know if that was actual game perf then or just rop perf measured somehow?
naaahhh come on, the 5800 was a nightmare... it was slower in some games than its predecessor FFS! :D really cant compare that... and back then gpu power really mattered, nowadays you can play at max details even with mainstream cards :D
hmmmm what exactly does ai enabled do? does it reduce iq?
well that perfectly matches the FC2 results... :)
Too early to be too sure about the GPGPU performance of GF100, but everything is indicating a new era, with unheard-of personal-supercomputer performance just around the corner. :D
Leaving A.I. enabled in the FarCry 2 benchmark does the following:
1) Allows CPU performance (not the actual number of cores; a similarly clocked Core i7 920 with HT off will perform about the same as a 920 with HT on) to have a decent impact on the benchmark results, because it partially simulates normal gameplay (the PC "calculates" the moves of enemies and other "live" objects, i.e. the artificial-intelligence work).
2) In short benchmark runs it can introduce a little more variance between results (but that gets covered by the three runs of the bench).
It's like the Unreal Tournament "flyby" (rendering only) and "botmatch" (rendering + AI) benchmark modes.
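For anyone wondering what that means in practice, here's a minimal, completely made-up sketch of a benchmark loop; the function names and costs are placeholders, nothing from the actual FC2 tool:
[CODE]
import time

def simulate_ai(npcs):
    # Hypothetical stand-in for the per-frame AI work the CPU does when
    # "AI enabled" is ticked: pathfinding, target selection, etc.
    for npc in npcs:
        npc["next_move"] = (npc["x"] + 1, npc["y"])  # placeholder decision

def render_frame(scene):
    # Stand-in for the GPU-side rendering work; identical in both modes.
    time.sleep(0.001)

def run_benchmark(frames=100, ai_enabled=True):
    npcs = [{"x": i, "y": 0} for i in range(50)]
    scene = object()
    start = time.perf_counter()
    for _ in range(frames):
        if ai_enabled:
            simulate_ai(npcs)   # CPU cost shows up in the frame time
        render_frame(scene)     # GPU cost is there either way
    elapsed = time.perf_counter() - start
    return frames / elapsed     # average FPS

# With AI off the run behaves like a pure "flyby" (GPU-bound);
# with AI on, CPU speed starts to influence the score.
print(run_benchmark(ai_enabled=False), run_benchmark(ai_enabled=True))
[/CODE]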
Why spend 500,000 on a Ferrari or even 1,000,000 on a Bugatti?
I use Milkyway@home and it uses DP. I am not so sure about Collatz Conjecture, but I think it also uses DP, not sure though.
Our 4*5850 setup can crunch out more than anything nvidia has at the moment, and more than what GF100 could do in the future. Also, if I remember correctly, my friend's 4870 reached speeds of around 140-145 GFLOPS DP there....
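Quick back-of-envelope on those numbers, assuming RV770's usually quoted 1/5-rate DP (treat the inputs as assumptions, not measurements):
[CODE]
# Sanity check (assumptions: HD 4870 has 800 ALUs at 750 MHz,
# 2 FLOPs per ALU per clock in SP, and RV770 does DP at 1/5 of its SP rate).
alus, clock_ghz = 800, 0.750
sp_peak = alus * 2 * clock_ghz      # ~1200 GFLOPS single precision, theoretical
dp_peak = sp_peak / 5               # ~240 GFLOPS double precision, theoretical
measured = 142.5                    # mid-point of the 140-145 GFLOPS quoted above
print(dp_peak, round(measured / dp_peak, 2))   # ~0.59 -> roughly 60% of peak
[/CODE]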
Invalid comparison.
A Quadro has nearly identical specs and performance to a GeForce product. A Fiat 500 has neither of those compared to a Ferrari; they only share the things that make them a "car" and not much else. I'm not comparing an S3 Trio to a Quadro here.
Anyone know when the next NDA lifts?
Hopefully we might see a street date, or perhaps some benchmarks of cards from vendors and not NVidia... as far as performance goes, the jury's out until Trubitar, Tom's Hardware, HardOCP, etc. get a card to test.
"Why spend 500.000 on a Ferrari or even 1.000.000 on a Bugatti ?" Is wrong question. Actually the opposite question applies to GF100.
Why spend 500.000 on a supercomputer or even 1.000.000 on a the dig blue, when you can get the same performance by spending $700 on a GF100?
The new articulate of GF100 is aiming to dominate and change this landscape in 2010.
GPGPU is a very new technology, and has been in child-stage until now. It is really surprising to have 1 supercomputer in TOP-100 already based on the old technology. Stay tuned, everything indicating that you are going to get surprised soon.
pretty much, yeah...
im surprised how confident ati is though...
they knew that fermi is more than double the hw logic, and they knew that nvidia always targets to reach the same perf with a single gpu card as their previous dual gpu card... and from what nvidia showed at the deep dive event, thats exactly where fermi perf is at...
well yeah, nvidia didnt say if the card they showed was a 360 or 380, how many cores, what clocks etc... of course this created a wildfire of speculation... :D
whoever thought that fermi would be notably faster than a gtx295 wasnt anticipating but dreaming... :P
they are creating a massive expensive power hungry super performance part... hows that trying something different?
i think atis strategy to bring the same perf level down to much lower prices makes a lot more sense, especially if you consider how the pc industry has developed in recent years...
thx for all the interesting infos a couple of pages back LordEC911! :toast:
40% yields for fermi? thats not going to be fully functional chips though... i can believe that its 40% 484core... but 40% 512core? nah...
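Just to illustrate why a salvage bin is so much easier to hit than the full 512-core bin, here's a toy Poisson defect-yield sketch; every number in it is a made-up assumption, not real TSMC data:
[CODE]
import math

# Toy Poisson defect-yield model -- purely illustrative assumptions.
die_area_cm2 = 5.3      # ~530 mm^2, roughly the rumoured GF100 die size
defect_density = 0.4    # defects per cm^2, a made-up 40 nm-ish figure
lam = die_area_cm2 * defect_density     # expected defects per die (~2.1)

p_full = math.exp(-lam)                 # chance of a fully intact 512-core die
p_one_defect = lam * math.exp(-lam)     # exactly one defect -> fuse off one SM
p_usable = p_full + p_one_defect        # ignores defects in non-SM logic

print(f"fully working dies: {p_full:.0%}")        # ~12%
print(f"usable incl. salvage: {p_usable:.0%}")    # ~37%
[/CODE]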
well.. i wasnt impressed by it tbh... it looks a lot like the mairmade nalu demo nvidia used a long time ago, and perf wasnt orders of magnitude higher... so i doubt we will see games simulate hair like that and it will remain a tech demo just like the thing nvidia showed before...
i hope im wrong though! :D
hmmm neliz probably meant card MP then... it takes a couple of weeks from wafers to chips... but im sure you know that :D
thx for the headsup! :toast:
well, how would you prove he was wrong? im not saying hes lying, but he very well could be... hes famous for twisting words around...
so AI disabled basically means a FC2 run becomes a flyby, while with AI enabled the bench tool will pretend somebody is playing the game and do all the AI calculations?
didnt know that, thx! :toast:
Hitler aimed to rule the world.. just saying...
That's just as much as "ATI" has.. and their system is considered beyond poor.
Quote:
GPGPU is a very new technology and has been in its infancy until now. It is really surprising to already have one supercomputer in the TOP-100 based on the old technology.
You cannot surprise someone who knew more about Fermi back in July than you do now. You sound awfully optimistic.. too optimistic.
Quote:
Stay tuned, everything indicates that you are going to get surprised soon.
No I didn't.. the investors' "mass production" refers to the production RAMP; they are still quite a few weeks off from real mass production.
Quote:
hmmm neliz probably meant card MP then..
cypress=100$? that would mean its @ 33% yields? 0_o
and first he says gt300 is @ 40% yields right now, then he says quadro and tesla will subsidize geforce until nvidia can reach 40% yields with gt300... huh? :P
interesting read, but a bit weird... and i dont trust the numbers :P
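For what it's worth, here's roughly where a "$100 per Cypress at ~33% yield" figure could come from; the wafer price and gross die count are assumptions, only the yield figure comes from the posts above:
[CODE]
# Rough per-die cost sketch -- wafer price and die count are assumptions.
wafer_cost = 5000       # US$ per 40 nm wafer, a commonly quoted ballpark
gross_dies = 160        # ~334 mm^2 Cypress candidates on a 300 mm wafer
yield_rate = 0.33       # the figure being questioned above

good_dies = gross_dies * yield_rate      # ~53 sellable chips per wafer
print(round(wafer_cost / good_dies))     # ~95 -> the "$100 per Cypress" ballpark
[/CODE]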
ahhh so he WAS twisting words again :D
i thought there wasnt much you could interpret when saying MP has started... but i guess you can always claim you're doing MP in a special way...
so MP starts in your company means you set up a row of meetings for the next weeks at which you will discuss how and when to start MP :D
take it easy, no need to get rancid ;)
He's talking about A2 but I thought there was another revision after that? A3 or B1 or whatever
Hahahaha. So, in your little boxed up world something like Fermi just pops out of thin air and is there? It doesn't require years of work and nvidia doesn't update their partners at all? they just say "oh, here's our new product, go and have fun with it!"
So according to you there are no cards out at developers and partners and nv suddenly doesn't inform anyone.
I wonder how I leaked the Fermi name a couple of days before Fuad posted his first article about it, prior to GTC (though I must admit some people posted it before, but weren't quite sure). Theo's 24x24? I've been saying that for months, while "respected" websites and NV PR spinners have been saying it's smaller since last May. I hope they're not paying you to spread the Fermi word, because you're doing a pretty bad job.
The truth will come out. GF100 might look good now.. but let me tell you, AMD never planned on positioning HD5870 against Fermi.
A3 went into production back in Early December. What I'm hearing right now is that A3 will probably not be the final revision.
Much like GT200b1 was shown in May 08. Demo'ed with b2 in September, reviewed and sold in minor (tesla) quantities in b2 up to November and finally became available (many months after) in b3.
Sam_Oslo's reply here \/
Are you claiming to have inside info, or to have contact with some partner?
Even if you have heard something about the name and such, it doesn't mean you should get carried away and act as sure as you are acting.
Nothing is that sure yet, and nobody can say anything for sure before benchmarking the retail GPU, but those early benchmarks are showing something other than your speculation based on info you have heard from somebody. :p:
Even though i prefer ATI (9800 pro was my last desktop card:P) i really want Fermi to be successful, since i will buy a new system sometime soon and those prices have to go down:p: + it might push ATI harder. Compared to what MOST people are saying/hypothesizing/guessing/hoping here, neliz seems to know his stuff better. Anyway, as people have already said countless times, we can't be sure about anything until we get some more independent reviews, and probably by that time i will stop caring!
Thx :p: but it was bound to happen anyway. Nobody can act as sure as he does without proper documentation.
EDIT:
It is always fun to predict the trends and make some educated guesses and assumptions, but acting too sure about an unreleased product is not the proper way to act.
maybe you guys need to "Slow down on the kool-aid thar" may i suggest some guinness to balance you guys out a bit ?? :)
let me put it this way.. transistor for transistor, nvidia has higher efficiency/performance than ati, whether you agree or not, like it or not.. you guys can turn blue, it aint gonna change
oh btw top 512 fermi could go up against a single 3200sp radeon.. thats why it cuts thru 5970 like knife thru butter
You either like to play with fire or you're 100 percent sure about that, like in "I hold it in my hands" sure.
There've been others who were very sure about their claims (can't blame them if they just said what sources told them) like the whole "Fermi will be released in november/december" thingy.
Yeah neliz is J. H. Huang :D
FWIW he may be a forklift operator at TSMC, he might be a packaging guy at CoolerMaster.
We can all play dr nice, mr smart, and mr predictor hiding behind our own finger.
By the way neliz, you were not the first to come up with the name Fermi, neither about any of the technical characteristics of the GPU.
Nobody was actually.
Some people hear things, some real and some... not so real info, from friends, contacts, whatever.
Nobody in the media/manufacturing sector has been 100% right every time, nobody.
Now unless you're Derek, Haan, or somebody in TSMC or nVIDIA's GPU team stop stating things as facts.
Nobody likes to be in the "wrong" side in the end, trust me ;)
And if you in fact are somebody, hiding your identity & status isn't nice at all.
At least he's not the one presuming ATI engineers are idiots... unlike most of the green plants here.
The fact that you (and some others) take it so seriously to discredit him instead shows... I dunno... a huge inferiority complex? Hiding egos behind a graphics card now are we? :clap:
Am I the only person who thinks nVidia are dragging on Fermi like some old Soap Opera that has run 15 years too long?!?
BRING ON THE REAL REVIEWS FACTS AND FIGURES!!!
John
I only care because none of them offer what I want. (That being a decent >1GB card).
My only hope is Ati "canned" the "six" 2GB HD5870 to make something new and shiny, like say perhaps a 2GB (across the range) 5890 which is clocked @ 1Ghz and has faster memory and a few tweaks here and there...
And that Fermi has 1.5GB (at least) with some versions/editions offering 3GB!
John
lol, gotta love the thread tags :lol:
pot vs. kettle
they have to finalize the card, begin mass production, and release review boards first.... not that anyone has asked for a "real" review yet.
then what is the purpose of this post? the purpose of this thread is to SPECULATE ON THE FUTURE OF THE GF100. as its final version doesn't exist, the only thing anyone on this forum can do is speculate.
or troll...
The purpose was agreement with someone's opinion, and I would recheck why this thread was created, BTW. It was meant for facts, which is why I put in the power plant named Fermi, because we knew nothing of the damn card except the name at the time.
Yeah, the first ones, '4 G92's glued together' and 'The Ferminator', were mine... :rofl:
Just go by my numbers that I have been stating since Dec, they are pretty close.
No, A3 will be the first samples/review cards with small numbers leaking into retail. B1 is in the works and B2 will most likely be the final mass produced GF100.
Yes, he has very, very good info.
He has been pretty much spot on about everything the last year or so, when I first started reading his posts.
Why does it matter?
He passes on what he knows and we can verify it.
It doesn't matter who he is and what he does, all you need to know is he gets very good info.
You may be able to verify some of his info, but he doesn't even allow others to be optimistic :rolleyes: while he himself acts as if he knows everything in such detail that he can't get surprised :rolleyes:. All this without showing any benchmarks or documentation for his claims. He says:
I personally require benchmarks and documentation for these kinds of bold claims, otherwise I don't take them seriously. You can consider it good info if you want.
So multiple people saying almost all the info he gave is correct isn't enough?
Hopefully he doesn't mind me saying this, he told me back in August that GF100 was ~G200 size, not G200b, even though everyone was saying it was smaller than G200b.
He was also the first one, that I saw, that hinted that AMD wouldn't brand Hemlock as an x2 part.
Do some research and read his posts, you will find out just how accurate he is. He has posted quite a bit of info before Charlie or Fud or anyone else has posted it.
Edit- I don't think you understand how leaks work... It doesn't matter what you think about him or his info, take it or leave it. He doesn't have to prove anything; either the info is right or wrong and eventually we will know one way or the other. Since his track record is outstanding, I have no reason to doubt his info.
From what I hear the GF100 cards won't be in retail until March. Can anyone confirm this or shed some more light on the subject?
Reading this thread depresses me. For nearly half the pages in this thread it's just been people repeating themselves. How boring :(
On the upside, I love reading these tags. They're hilarious. It's probably the only reason I even open this thread anymore :rofl:
Agreed. The thread tags are awesome!
All the bickering and fanboyism in the thread are quite ... serious. It's good to know that some still have a sense of humour when creating the tags ;)
It's called dampening enthusiasm..
And when I did leak benchmark scores (P number was correct, X wasn't ;) ) I didn't hear you cheer either.. what's up with that?
I'm not quite anonymous either, it would take you less than 3 minutes to find my real name, my address, phone number etc. (pics of the family, me and my beer belly!)
I'm just warning people here that think that GF100 will be [strike]God's[/strike]nVidia's gift to man. If it was, it would've been last year.. not anymore. We're talking about an expensive, marginal (10-30%) upgrade over HD5870.
That's no speculation.
edit: Dagnabbit... anyone know how the strikethrough tags work here?
edit part Deux: availability will be the end of March, at the earliest.
Ohh come on, you all, i know neliz from back in Rage3D and even in those days he was more of a supporter than anything else.
If one does not like what neliz writes, just ignore it; geez, it's not such a huge issue to begin with...
actually they (AIBs) still dont have any cards at all i heard :lol:
heh, was way late last night :D
:lol:
:rolleyes:
depends on what perf segment you look at... for some reason cypress doesnt seem to scale very well, especially if you look at the 5970...
beats me why no single AIB has launched a 2GB 5870 :shrug:
1. its not even released, let alone widely available
2. i dont think he gives a damn about 6 monitor support, so why should he pay extra for that? on the contrary, you end up with mini dp connectors so you need an adapter for every display you hook up to it...
fully functional? thats 250$ a piece... thats just about good enough to not lose money on a 500$ card isnt it?
so when you go out on a date you ask her to prove that shes really single and actually female before talking to her, right? :lol:
geez, just take it for what it is... take it or leave it...
cheers :toast:
if price wouldnt be an issue, i think most people would def go with a fermi instead of a 5970... even if its slower... i hope ati has a faster single card up their sleeves...
And I thought G70/71 would be the first 512-bit GPU (turned out it was 2x256) boy.. did NV fool me there ;)
edit: on the Jen-Hsun Huang thing.. ugh...m.. mmh.. I am :(
http://www.facebook.com/profile.php?...00000590746752
While we are at it.... I'll take the chance to ask Neliz if he has some information on how ATI is going to counter the Fermi release...
Will it "only" be a 5870 with 2GB? Maybe with improved clocks?
Thanks.
...and any news on 2gb framebuffer variants?
I'm not dropping cash until I can buy a single gpu 2gb framebuffer 5870 / fermi / whatever the soon to be revealed fermi competitor is .....
Also, like some others here, I am pretty surprised that there are no 2GB 5850s or 5870s around.... why is this the case?
When you have insider info life becomes so much easier.:D thanks neliz, what you say matches up with my expectations quite well:up::D
edit:
by the way, any rumors about some big deal concerning nvidia floating around by any chance? ... that may influence the structure of the company in some way ?...:)
My newsletter came today :D
Some gumf is linked to here too.
Quote:
The next generation of NVIDIA GeForce GPU is coming – and it’s built for the ultimate gaming experience
Code named GF100, the upcoming addition to the GeForce lineup will deliver unparalleled 3D realism, brilliant Microsoft DirectX 11 graphics, and immersive NVIDIA® 3D Vision™ gaming; all with incredible performance. And yes, it’s ok to drool.
Seems DirectX 11 is not quite so insignificant for nVidia now ;)
Quote:
Packing in 3 billion transistors, double the CUDA cores of previous generation GPUs¹, a high speed GDDR5 memory interface, and full DirectX 11 support, GF100 is designed for groundbreaking graphics performance. With a revolutionary new scalable geometry pipeline and enhanced anti-aliasing capabilities, GF100 delivers both unrivalled performance and breathtaking image quality.
John
Quote:
Ray tracing:
GF100 brings interactive ray tracing to the consumer market for the first time, providing a glimpse into the future of visual realism in games. By tracing the path of light through a 3D scene, ray tracing uses the power of the GPU to create spectacular, photo-realistic visuals. Thanks to the innovative unified cache architecture in GF100, ray tracing runs up to 4 times faster than prior generation GPUs.
A shame that they forgot to mention the cut-down DP performance.
A 5870 and a 5970 refresh with higher clocks and more VRAM is ALL that AMD needs to counter Fermi. Unless dual Fermi is 50%+ faster than the 5970, which I doubt, AMD just needs to adjust prices in order to keep their sales up. By the time Fermi is selling in volume, the 5K series will have great yields and better efficiency; if they decide to compete hard on price/performance as they did with the 4K series, it's good enough till the 6K series is out.
Besides that, a dual Fermi that is 50% faster than a 5970 refresh would most likely also cost 100% more.
Where is this .. lets see what you got, and talk based on that.
Nobody asked for the pics of the family, you, or your beer belly. I don't care who you are; you need to provide proper documentation, including benchmarks, to act this sure about the performance.
Quote:
I'm not quite anonymous either, it would take you less then 3 minutes to find my real name, my address, phone number etc. (pics of the family, me and my beer belly!)
When did I say this is going to be cheap or a gift?
Quote:
I'm just warning people here that think that GF100 will be [strike]God's[/strike]nVidia's gift to man. If it was, it would've been last year.. not anymore. We're talking about an expensive, marginal (10-30%) upgrade over HD5870.
I've already said (several times in this very thread, i can link if you doubt it) this is going to be expensive without a competing refresh (or a new GPU) from ATi. I've never said anything for sure about the performance either, I have always said "it is too early to be sure about the performance" (several times in this very thread, i can link if you doubt it).
Are you trying to be funny or trying to misinform and mislead? I dare you to "QUOTE"-me on these childish claims you are making here about me.
Quote:
That's no speculation.
edit: Dagnabbit... anyone know how the strikethrough tags work here?
edit part Deux: availability will be the end of March, at the earliest.
You may predict the release time, price, or whatever you want, that is OK. But you can't ask others to shut up just because you know so much about the performance of an unreleased product, based on the pic of your beer belly.
I'm still waiting to see benchmarks and documentation.
hah, his best buddy is msi? not evga? :D
what did they mention in the details for *1?
double the cuda cores? GT200=240, sounds like the 380 will be 480 cores then?
i thought DP doesnt matter for desktop anyways, so... who cares... :)
Distributed-computing projects are one such group of "home" users running DP, where the application builder generally enjoys the splendid fruits of the "desktop" labor.
Saying no one uses double-precision at home is a simple lie.
well, not really a very widespread app...
its kinda ironic though if you think how hard nvidia pushed gpgpu, cuda and PhysX and then cripples it artificially on their latest and greatest card :D
i dont think most people care about it though... even after all the cuda propaganda barely anybody cares about it at all... and even those that do care are unlikely to have it notably influence their buying decision...
and for you guys and the other few who care... just get an ati card... :)
supposedly 600 something...
so if they cut it in 4, then itll be 150... about the same as a 5970 i guess...
It goes like this:
Step 1: Application is important enough to drive sales.
Step 2: Companies care.
Right now the amount of money they would lose on Tesla sales if they allow Geforces to have full DP capability is far more than the amount of money they would make from people who care about Milkyway@home.
Yes, but DP has a lot of advantages over SP for calculations in projects like medical research, math and physics. It's sad that nvidia went this way; we bought a 4*5850 setup specifically to contribute something.
This move by Nvidia has created a void in DP performance, because of which developers will prefer to release SP apps rather than more robust and better DP apps :shakes: The number of people who contribute to the cause is ever growing, and I am sure that the united processing power of these people can trump the combined power of every Tesla ever released.
In short: if Nvidia included full DP performance it would be better for humanity... :D
wait, what's going on now, only the Tesla Fermis will have good DP performance and the GeForce Fermis won't?
i guess dp performance is why teslas are so crazy expensive.
You've asked for the source in the previous post. It's not exactly quoting ISF themselves, but you can take it as is, and if you don't believe it ask any of the ISF guys over at AVS.
My response was to Nedjo's post, where he claimed that on a PC he played Crysis with an X360 gamepad, which allowed him not to care so much about the framerate since it is not as responsive as the m&kb combination. Then I "suggested" buying a PDP to perfectly complement that experience. It was a slightly OT remark, while we were already at it.
It has twice the pixels, yes, but as I've already pointed out, you can't just go by the higher pixel count and think you are getting the best possible IQ compared to other solutions for gaming. And of course, not everyone has the same needs and priorities when it comes to choosing a monitor.
Inform yourself about motion resolution: a 30" 60Hz LCD monitor simply can't resolve even half of its pixel real estate in motion, which matters most (obviously, the static picture looks great, no doubt about that). And that's only one thing; we are not even talking contrast, dynamic range, black levels, color accuracy, a bunch of stuff. Dynamic range is probably the most important aspect of image quality assessment when it comes to gaming. Seems to me that you're judging the quality of a monitor only by the pixel pitch, which is ridiculous and doesn't have any support in reality.
As I've already said, feel free to think/buy whatever your heart desires. :up: :wave:
you guys really think cutting dp perf on retail cards is a big deal? :confused:
i havent seen a single really useful cuda or opencl or direct compute app... i mean something everybody or most people will actually benefit from...
and dp perf doesnt matter at all in games... so really, both nvidia and ati could disable dp on their retail cards altogether... i couldnt care less :shrug:
neliz, so the FC2 numbers were a best case scenario? cause in there fermi was close to a 5970 and notably faster than a 5870.
if the average boost is just 10-30% for the fastest fermi part, then a 5870@1ghz and with 2gb mem will be able to sell for 400$+ easily vs a slightly faster fermi with 1.5gb... so ati can maintain their price point, they just have to up the clocks and increase the memory, both shouldnt hurt their margins much, if at all...
bad news for us consumers :/
price/perf wont improve much if at all in this year...
People were stating how awesome it was going to be to buy a GF100 and have that massive DP power.
People assumed Nvidia would leave full DP on the Geforce parts.
I dunno but I would assume that would be total yield... not for a specific bin.
Remember, that would just be for the silicon, not including other board costs, which Charlie is calculating to be ~$100 (not re-reading his article), and not including the cuts for AIBs and distros.
Think about that for a sec, they can't even price the salvage part against the 5870 without losing money, if those numbers are right.
$499 and $599 would be the minimum MSRPs in March. If they "launch" a 512CC part, I'm thinking it would be $649-$699 MSRP, at a minimum.
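Stacking up the rumoured figures from this thread with some assumed AIB/channel margins (a sketch, not a real costing), that $499 floor looks plausible:
[CODE]
# Hedged cost-stack sketch -- die and board figures are the rumours floated in
# this thread, the margins are assumptions.
silicon = 250        # the "$250 a piece for a fully functional chip" figure above
board_etc = 100      # PCB, GDDR5, cooler, assembly per Charlie's rough estimate
build_cost = silicon + board_etc

aib_margin = 0.15        # assumed cut for the board partner
channel_margin = 0.15    # assumed cut for distribution/retail
street_price = build_cost / ((1 - aib_margin) * (1 - channel_margin))
print(round(street_price))   # ~485 -> hard to see an MSRP much under $499
[/CODE]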
It is certainly bad news for people who like to fold on GPUs. I recall a developer talking about Milkyway@home and how DP is better and more logical for it than SP.
If DP were used in all folding apps, the efficiency and accuracy would only increase. I would think not many people will buy Teslas just for folding, and I know of many people who use the consumer part to fold; these kinds of people are at a loss.
To start with, GF100's SP score is not that great anyway, and the DP seemed very future-proof and certainly inviting. For games it would not matter, but for folding it certainly would, and for a company which talks so much about "GPU-CPU is the future" this is a bit of a downer :shakes:
The future is DP and the quadruple-precision floating-point format; there is no way around that fact for CPU processing or GPU-CPU processing.
I come to this thread and all I can read is
*whine whine whine*