It was never said that the GF104 was a dual-chip card.
They do have such a part, and have since November. More official info will arrive at CES in a week for you guys.
According to that post, it is.
Quote:
GF100 promises to deliver at least 40% more performance than the GTX295 for less money. GF104 promises double that.
Do they differ much from the performance figures presented in October?
Quote:
They do have such a part, and have since November. More official info will arrive at CES in a week for you guys.
Ah, that was a bit of weird wording. The GF100 is 40% faster than the GTX 295; the GF104 is 80% faster than the GTX 295 (double the 40%). That makes the GF104 about 29% faster than the GF100. Rahja cannot confirm or deny whether the GF104 is a single- or dual-chip part; it's still under his business NDA.
My guess is the GF100 could be something like a "GTX 360" with disabled parts or lower clocks, and the GF104 a "GTX 380" running full bore, both single-chip. We'll know for sure soon enough!
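For anyone double-checking the percentages above, here is a minimal sketch of the arithmetic, using the claimed speedups over a GTX 295 baseline (the 29% figure is just the ratio of the two claims, not a measured number):

```python
# Relative-performance arithmetic from the rumored claims.
# All values are normalized to the GTX 295; the speedups are
# rumor figures, not benchmark results.
gtx295 = 1.00           # baseline
gf100 = gtx295 * 1.40   # claimed 40% faster than the GTX 295
gf104 = gtx295 * 1.80   # claimed 80% faster (double the 40% gain)

# GF104 relative to GF100:
ratio = gf104 / gf100 - 1
print(f"GF104 is about {ratio:.0%} faster than GF100")  # about 29%
```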
I'm not sure what the October numbers were or where they came from, but it seems they will release some official information in ~2 days, and definitely at CES.
I'm sure a fully clocked, uncrippled Fermi can deliver those numbers. But can Nvidia/TSMC actually deliver such a chip in volume for the market? Because from the looks of things so far, even A3 silicon had to have SPs disabled and clocks lowered just to get it running.
Correct me if I'm wrong, please; I do not want to buy another dual-GPU card.
Well, maybe they are watercooling them; otherwise it is utter nonsense (or a hair-dryer engine is providing the airflow).
Also, a GF104/dual-Fermi card won't be able to double performance: first, SLI never scales 100%, and second, they won't be able to run both chips at full speed without exceeding the 300 W TDP limit.
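The dual-chip skepticism above can be sketched with some rough numbers. Note these are illustrative assumptions (the 85% scaling factor and 225 W per-chip TDP are made up for the sake of the argument, not real Fermi figures):

```python
# Hypothetical sketch of why a dual-chip card can't simply double
# single-chip performance. Scaling and TDP values are assumptions
# chosen only to illustrate the argument.
single_perf = 1.00   # one full-speed chip, normalized
single_tdp_w = 225   # assumed per-chip TDP in watts

sli_scaling = 0.85   # SLI rarely scales 100%; 85% is an assumption
dual_perf = single_perf * (1 + sli_scaling)
print(f"dual-chip performance: {dual_perf:.2f}x")  # 1.85x, not 2.00x

# Two full-speed chips would also exceed the 300 W board power limit,
# which is why dual cards usually ship with lowered clocks:
print(f"naive dual-chip TDP: {2 * single_tdp_w} W (limit: 300 W)")
```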
I'll believe those figures only if they're taken from an Nvidia PR slide or something, because for those slides they always pick games that favor Nvidia GPUs, show bad performance figures for the HD 5970, and probably measure load temps at some unrealistically low ambient (who says it has to be 20°C ambient?). :D
In a proper third-party review the numbers wouldn't be as nice, so I'm going to stick with my original speculation: the GTX 380, a single but high-end GPU, will be ~10% behind the HD 5970 on average.
Ahh sorry it's 5am and my brain is fried!
This post should answer your question....kinda:
http://www.overclock.net/8085950-post411.html
He did mention "A3 silicon has just finished up, and is looking great." in this post:
http://www.overclock.net/8085891-post400.html
Thanks for the info, Kuntz; now I know where the "Fermi is smaller than GT200" claim comes from. It's also good that he confirmed a launch in the second week of March, with availability some time after that.
He's basically saying the opposite of everything I hear from people I trust. He says the die size is surprising; I'm saying it's GT200-esque. He says performance and yields are great; I'm hearing they'll have a hard time getting quantities for launch, and by quantities I mean chips that actually hit the desired speed.
Also, did someone here say the numbers were taken at 1920x1200 with 4x SSAA?
http://www.overclock.net/8085586-post389.html
Hmm... if Crysis was just on High, that's not that huge of a jump.
Quote:
Here are a few more bits of FPS data for you. These were collected at 1920x1200 with 4x MSAA (no SSAA this time) and 16x AF. All settings are on High (perhaps not Extreme/Highest though; it isn't clear from the included documentation).
Crysis Warhead: 58 FPS
Left 4 Dead 2: 143 FPS (judging from this result, it appears to be CPU limited at this point)
Hawx: 127 FPS
World of Warcraft: 46 FPS (this result seems off to me, I assume this is a driver issue)
Fallout 3: 72 FPS (clearly a driver acceleration issue here again)
What are you talking about? Charlie did jump on the whole 448 SP thing, except he used Nvidia's own documents to do it instead of citing some unnamed source/rumour, for which he would have been flamed.
All of you will see what Fermi is in TWO days, and some will be very surprised :) :) :) Performance is BRUTAL!
Man this is going to be awesome. :D
I wouldn't even comment on what Rahja had to say; it's just way too fishy and vague.
In a few days I hope we get a bone or two.
At CES there will be a tri-SLI Fermi PC running...
A 5850 at 1 GHz has brutal performance (it just needs better drivers).
The same goes for a 285 or a 295.
Performance is good enough today for most purposes.
What we need is good overclockability, low prices, great drivers, and so on.
Visionfinity seems a ludicrous name, but Nvidia is fond of renaming, so I guess it fits them.
Can Fermi work well out of the box, with good drivers?
Or is the chip so dependent on drivers, and on the team writing them, that it will take the whole year before they are mature enough?
Are the thermal specs too much?
If it runs very hot, a lot of RMAs are going to be caused by heat issues.
Fermi is an adaptive chip, made for a new market: not for gaming first and foremost, but for Tesla and the folding market, which have different design requirements.
It will be interesting to see a working card in action, not some fake card held up by an excited CEO.
It may also be that the cards will be rare in the shops until summer.
Many questions, so few answers.
This Rahja dude is really "special":
http://www.guildwarsguru.com/forum/s...2&postcount=10
Quote:
Originally Posted by Rahja
http://www.guildwarsguru.com/forum/s...7&postcount=12
Quote:
Originally Posted by Rahja
The one tri-SLI setup I know of will be there; no predictions... :)