shame on NVIDIA:down:
Here's the Siggraph 2008 presentation where the HAIR demo comes from: http://developer.nvidia.com/object/s...2008-hair.html
(the second slide deck.)
@ neliz
Thanks! :up:
I think the most impressive part about the Hair demo is that it was done TWO YEARS ago.
Ironically I never even saw that coming.
I predicted that ATi would be better at the tessellation and nVidia would be better at the computational stuff and the FSAA + AF.
Big surprise to see they are also good at tessellation, especially considering ATi have been working with tessellation for years now.
John
I think it is too early to get into the details about performance within a few FPS either way. This GF100 looks really good based on early results (unless nVidia is really bluffing, but I don't think so).
We have to wait for official tests across several games; otherwise we're just splitting hairs, LOL.
That has me scratching my head, but more regarding AT's results than mine. I've never seen a case in FC2 where cards would lose so little in terms of framerates when going from 1920 to 2560 at 4xAA, even at Very High settings as seen here.
I see what you mean and maybe it has something to do with the particular bench run, some system detail, or poor methodology.
What I mean is that if you are calling AT's 2560x1600 numbers into question then you shouldn't claim the 1920x1200 numbers as vindication. Either use both the numbers or don't use either - accepting one number and denying the other because of personal expectations is cherry picking.
Oh believe me, there are games out there that need the power these cards bring. Four-year-old Oblivion with the Better Cities mod can bring the geometry performance of today's cards to their knees. Poly performance is in big need of a massive boost; all our performance gains in the last 6 years have pretty much been shader and AA/aniso.
Shattered Horizon needs the power of these new cards.
Arma2 and the latest stalker will.
More Fermi coverage over @ PCPER...
Pretty similar in terms of format and content to Hardwarecanucks but hey.... we like moar coverage, don't we? :D
Hopefully this means Tessellation will be used to the max and be the main differentiation between DX10 and DX11 games. At the moment you can't even tell them apart.
^ moar coverage ??
here you go.. 53+ news articles so far
http://news.google.com/news/more?um=...2-_QHpGbjZ8XdM
I don't think those 50-60% over HD5870 is going to be the average and typical result for most games.
All nVidia needs to do is get 20% ahead of the HD5870 with decent power consumption. Even if they could get much more out of the GF100, they would probably limit the performance of the retail GPU to just about 20% over the HD5870, I would think.
+1,
I agree, the same rule applies to the HD5970 too :)
EDIT: on this note, nVidia don't need to beat the performance of the HD5970; just matching it with a single GPU would be enough.
But I'm not so sure about the pricing... I hope you are right. If I'm not mistaken, nVidia has traditionally priced such a "superior" single GPU a good deal over the HD5970.
I believe the prices will depend more on ATi and how they manage to compete. I hope ATi drops a refresh, or even a new GPU, at the same time; then we can get a really good price on the GF100.
Benchmarks change as do which games benefit certain manufacturers. However, FC2 at Ultra High details remains one of the most demanding games in any category.
On the flip side of that coin, this was a PR event by NVIDIA which means they showed what they wanted to show and no more. I don't think average performance will be close to what was shown but then again, considering drivers can mature and clock speeds will increase, you never know...
You are right, and in that case we are looking at a single GPU which is going to match the HD5970. Then a GTX 380 at 50-60% over the HD5870 sounds just about right!
I was more focused on the HD5870, and was talking about the GTX 360, but I should have specified that. Sorry for a badly formulated post and for causing misunderstandings.
Rejecting their 2560x1600 numbers because they don't meet your expectations and accepting their 1920x1200 numbers because they do is not sound reasoning. If their methodology or test equipment is suspect then all their numbers are suspect.
http://www.techpowerup.com/reviews/A...D_5870/14.html
In the only game benchmark shown it's 31-39% faster than a 5870, not 50-60%.
But my calculations show the GTX 380 needs to perform 50-60% over the HD5870 to match the HD5970. The GTX 360 would do fine with 20% over the HD5870. This would be the best business scenario for nVidia, I believe, and I bet they will give it a go.
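To show where that 50-60% figure comes from, here is a rough back-of-the-envelope sketch; the FPS values are placeholders I picked for illustration, not benchmark results:

# Back-of-the-envelope scaling check; the FPS values are assumed placeholders.
hd5870_fps = 60.0                      # assumed HD5870 baseline
hd5970_fps = 92.0                      # assumed HD5970 result (~53% faster)

# Gain over the HD5870 a single GTX 380 would need in order to match the HD5970:
required_gain = (hd5970_fps / hd5870_fps - 1.0) * 100.0
print("GTX 380 needs roughly %.0f%% over the HD5870" % required_gain)

# A hypothetical GTX 360 sitting 20% above the HD5870:
gtx360_fps = hd5870_fps * 1.20
print("GTX 360 estimate: %.0f fps (HD5970: %.0f fps)" % (gtx360_fps, hd5970_fps))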
I agree, it sounds like an impossible task, especially after all the speculation and bashing some people (;)) have been throwing around this forum lately. But maybe they can get there with matured drivers?
If the numbers shown are from a 448 shader card then the full 512 shader cards could match a 5970. But if the numbers are from the full shader card then they won't be able to match a 5970 unless they make a dual-gpu card.
Yeah that can be one scenario, too.
But even if these numbers are from a GTX 380 (512 shaders), considering that they were run on very early drivers, and this is a totally new driver, don't you think they have a good chance of beating the HD5870 by 50-60% by launch time?
No, I don't.
Just going from a core-count standpoint, a 360 should be capable of trading blows with a 5870, and the 380 is probably what was being demoed, I would think. If the demo was, say, a lower-clocked A2-silicon 512-core 380, then the A3 respin may allow for better clock binning and possibly better performance by launch time.
At any rate the cards should be fast but I'm still waiting for some solid reviews...
Why don't you guys wonder why nVidia didn't bench DiRT or STALKER or other DX11 games to show their great tessellation power? I mean, if it can run the FC2 bench, then how about trying Fermi in real DX11 games instead of homemade demos?
My understanding is that Nvidia has aimed big, which has a lot of advantages and disadvantages. Now, when Nvidia does a presentation about this before the cards are out, they are obviously going to point out the advantages (tessellation, geometric computing, GPGPU etc.) and leave out the disadvantages (price, heat). And from what we are hearing, 1.25x 5870 is what Fermi is going to be, and that isn't exactly a great advantage.
So Nvidia leaves that out. Dirt and Stalker don't use enough tessellation to make Fermi really stand out, so it's going to be at best 1.3x better than a 5870.
This is a good point for everyone to remember. This was a PR presentation. They'll highlight the good and keep the bad under covers. After the hype wears off, read between the lines.
For example... they said 2.33x faster at 8xAA than a GTX 285. That's great... until one remembers that the GT200 and G92 cards were notorious for choking at 8xAA. I have no doubt Fermi will be the fastest single GPU, but I'm mindful of PR work at its best.
Why would they use A2? Since they left out clocks we don't know for sure but this was either A3 clocks, aka likely launch clocks, or their target clocks, which they seem unlikely to hit.
There is talk of a B1 in the works. Launch A3, give what few full-speed, full-shader parts there are to reviewers, then once B1 is ready just use those parts. I don't know how likely that is, but it was hinted at by a Chinese forumite.
Someone said it best a couple of pages back, but I admire Fermi much more than the 5xxx series (even if it performs similarly to the 5870).
What games REALLY need the horsepower this brings? Only multi-monitor setups at massive resolutions, with heaps of AA/AF you won't notice.
While it's not 'shown' (demonstrated with independent reviews), it appears that Fermi will dominate in "next gen" areas (tessellation, geometry, particle simulation...).
so basically 5870 = fermi in current/old games but fermi > 5870 in next gen games (of course next gen means non console remakes, so this is all moot; by the time we get non-consolized games [never?] much better hardware will be out)
I think fermi will serve nvidia incredibly well as if it is indeed "future proofing itself" then tweaks to that and improvements on classic rasterization will just make the architecture best of both worlds. I highly suspect 6xxx from AMD will turn out to be extremely similar to fermi in design principle (maybe not size but the way it handles information)
It has been hinted at for a while, since before Cypress, that a new AMD/ATi architecture would be able to handle more than 1 tri/clock, though that might have been due to Cypress' "dual rasterizer."
We can only hope that with their new architecture that they do up the ante in that area.
A fact? Really?
Two weeks ago I received my monthly investor update from TSMC which noted mass production has begun on nVidia's high end 40-nm GPU after months of delays caused by wafer production problems.
nVidia's CEO also clearly stated at CES 2010 that the GF100 is now in mass production.
Now I'm sure some of you out there are going to say "oh well.... yea.... nVidia lies all the time".
See.... if it were just the comment @ CES, I'd probably agree with you guys that nVidia is just trying to string buyers along so they don't buy an ATi card. But some people forget that it is HIGHLY ILLEGAL to lie in shareholder reports. In addition, why would TSMC break the law at the request of nVidia?
TSMC did over $10 billion in sales in 2008, and nVidia is only a tiny part of that.
I have heard that it is all gearing up for wide availability of Fermi in March.
Instead of doing an ATi and releasing the product early with insufficient drivers and poor availability (HD 58xx series cards did not come out in the UK until December/the last week of November at the earliest, and HD5970 cards are still not released here), nVidia will be releasing en masse in March, with relatively mature drivers too (well, I do say relatively; obviously there will still be the odd gremlin or two that you get with a new card for the first couple of months).
John
There's a difference between mass production of GPUs and mass production of cards. The second can't start before the first is finished. So I guess neliz is talking about the production of cards, i.e. the placement of GPUs on the boards; while TSMC is obviously talking about production of the dies, because well, it doesn't have anything else to do.
Dude, get over England and the fact that the cards were launched later there.
In Europe, the US and Asia, all 5xxx GPUs launched on the correct date, so the UK's problem is a local one, not a general one.
Problems with stock exist and have existed, but you could still buy one; lots of people bought 5xxx series cards.
And stop a_ss kissing nVidia "because they don't release the product since it has problems with availability". They don't have it ready yet; that's why they are not releasing it. When they have final cards in their hands, they will do a paper launch, just like with the GTX 295, don't worry about that.
[FUD] Dual GF100 Fermi should be in April ohhhh myyyyy gawwd :rolleyes:
Dual Fermi isn't going to be much faster than a single full Fermi; it's only being released because a single GF100 isn't enough to get past the 5970. Dual Fermi will probably be faster than a 5970 by the minimum amount required to call it the faster card. The preceding sentence rocked, right! Display of my elite English skillz.
Because if a single full Fermi is around 250W as rumored, a dual Fermi can only pull about 50W more than a single one, all the while being SLI on one board. There's only about a 40 percent overall difference between the 190W 5870 and the 300W 5970, so I figure the gap will be much smaller in the case of GF100.
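Putting rough numbers on that reasoning (all wattages below are the rumored/quoted figures from this thread, so treat it as an illustration only):

# Power-headroom sketch using the rumored/quoted figures from this thread.
single_gf100_w = 250          # rumored board power of a single full GF100
pcie_limit_w   = 300          # ceiling for a PCI-E compliant add-in card
hd5870_w, hd5970_w = 190, 300

headroom_w = pcie_limit_w - single_gf100_w              # ~50 W left for the second GPU
ati_power_step = (hd5970_w / hd5870_w - 1.0) * 100.0    # ~58% more power for ATi's dual card

print("Headroom for a dual GF100 under 300 W: %d W" % headroom_w)
print("HD5870 -> HD5970 power increase: %.0f%%" % ati_power_step)
# With so little headroom, a dual GF100 would need heavy binning/down-clocking,
# which is why its lead over a single full GF100 is expected to be small.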
nvidia could just put 8165632 8-pin power connectors on the card so it can pull 1.21 gigawatts to deliver benchmarks scores before you even clicked start
ati deliberately chose to make their dual card <300w. nothing stops manufacturers from making cards that require more power
OMG he's reposting his own "news".... http://www.fudzilla.com/content/view/17197/65/
Good job on the tags guys ( wait 'til Gautam or MM sees them ;) ).
2nd... Fuad isn't pulling the dual GPU in April out of his a*s.
As for the performance fighting and :banana::banana::banana::banana::banana:ing... simply wait until the real reviews and the cards are out.
If you're looking for a new VGA, go out and buy one now. But wait, I don't think you needed to hear that from somebody else, you're in control of yourself hopefully :)
They can't be labeled as PCI-e compliant which would shut out a good deal of OEM business. Basically, you wouldn't find the cards in any pre-built PC because manufacturers would be too scared to use a card that is out of power spec and deal with any potential problems it would cause.
if you google me you'll find me saying that a "395" will launch a "couple of weeks" after the 360/380.. with the latest partner information, I decided to lower those expectations to H2. Don't put any money on an April launch of the 395 (which was originally scheduled for March 2010, with the GTS parts in May.)
The product has to use less than 300W when it's offered to PCI-SIG for certification; they couldn't care less what your card uses when it's overclocked.
A dual GF100? Either nVidia is going for the total kill with a monster, or I have to make some major changes in my 50-60% calculations.
Really?
Where did you get it? (And by card I mean a 5850 or 5870; the 5700 series has been widely available since November.)
I am curious, as it could be that certain places have a good deal going with ATi, which means I will keep my eye on them for the 2GB model if Fermi does not deliver :up:
John
apparently this didn't hurt ati, their hd 4870 x2 is around 380w.
if you go to alienware and tell them they can have the fastest graphics card in the world, for a good price and some marketing development funds on top of that, but it's outside of some pci sig spec, do you think they will care?
which trouble? their 5870 is fairly low power consumption, by making the dual card like it is right now they leave space for a higher clocked product that is using better binned gpus. dont believe everything marketing tells you
there is/was no competition, so making the card faster gains you nothing
What? Impossible. 4870x2 is 8+6 pin so it should have a max TDP of 300W. Yes, I know that you can pull more than 75W from a 6pin, but that's outside of its specifications and if it was 380W it would definitely be 8+8 pin.
Plus (total consumption)
http://images.anandtech.com/graphs/r...3626/17191.png
And
http://img521.imageshack.us/img521/3...eon4800jk1.gif
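To see where the 300W figure comes from: the in-spec ceiling for any connector layout is just the sum of the per-source limits in the PCI-E spec (75W from the slot, 75W per 6-pin, 150W per 8-pin). A quick sketch of that tally:

# In-spec power ceiling per connector layout (PCI-E per-source limits, in W).
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def in_spec_ceiling(six_pins=0, eight_pins=0):
    # Maximum board power while staying inside the connector spec.
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(in_spec_ceiling(six_pins=1, eight_pins=1))   # 300 W -> 6+8 pin cards like the 4870 X2
print(in_spec_ceiling(eight_pins=2))               # 375 W -> 2x 8-pin layouts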
i measured 381w under load in furmark, card only, 273w peak 3dmark03 nature
test details here: http://www.techpowerup.com/reviews/H...0_IceQ/26.html
of course one could argue whether tdp includes stuff like furmark, as far as the spec is concerned i'd say it means "under no circumstances":
"Similarly, a PCI Express 300W Graphics add-in card is defined as a card that exceeds PCI Express
CEM 1.1 and PCI Express 150W 1.0 power delivery or thermal capability and, as such, consumes
greater than 225 W with support for up to 300 W inclusive."
check the gtx 295 in my results, jump is similar to 4870 x2 and above 300w
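reading that spec wording as a simple classification (just my sketch of how the CEM card classes line up, using the numbers measured above):

# rough mapping of sustained board power to the PCI-E add-in card power classes
def cem_power_class(board_power_w):
    if board_power_w <= 75:
        return "75 W class (slot power only)"
    if board_power_w <= 150:
        return "150 W card"
    if board_power_w <= 225:
        return "225 W card"
    if board_power_w <= 300:
        return "300 W card"
    return "outside the spec (no PCI-E certification possible)"

# 4870 X2 numbers measured above: 381 W in furmark, 273 W peak in 3dmark03 nature
for label, watts in (("4870 X2, furmark", 381), ("4870 X2, 3dmark03 nature", 273)):
    print(label, "->", cem_power_class(watts))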
why obvious?
Quote: "and obviously doesn't enter into the TDP equation"
asus mars is 453w in furmark btw, 2x8 pin = 375w
just stating that the interface is pci-express doesnt mean it is certified
http://www.pcisig.com/developers/com...list/pcie_2.0/
nothing there about 4870 x2, nor 5970, nor gtx 295
Yes, I was about to post that. Indeed, it looks like those cards didn't get the certification, but there's no mention of the GTX 285 or 275 either.
Well, dual Fermi = 800-1000$ card, anyone? :rofl:
Average price of the Radeon 5970 in the US... 680$.
Lowest price 619$, highest 780$...
Both cards are pointless and worthless for me, a single 5870/GTX3xxx is more than enough to enjoy all the latest and most demanding games on any monitor/resolution ( except eyefinity 4000+x2500+ )
The GF100 is not only a gaming GPU. I personally have been looking into GPGPU lately, and a dual GF100 could make a great GPGPU setup and serve as a personal supercomputer ;)
Prices depend on competition; the HD5970 became so ridiculously expensive just because of the lack of it.
If ATi can't put up a good fight (at least with a worthy refresh), then the GF100 could get really expensive. Let's hope ATi drops a new GPU at the same time.
Here are Evergreen's results:
http://images.anandtech.com/reviews/.../5870/5870.png
and here is fermi's -
http://www.flarerecord.com/wp-conten...-all-folks.jpg
That's where the problem is.
90% of gamers don't give a damn about this. All they want is a cheap, value-for-money graphics card to game on.
This is where nVidia is only hurting the whole PC gaming industry.
The architecture is solid. It should perform, but it's too expensive to manufacture right now (technically it's late, too).
This will push more people away from PC gaming.
Honestly, at this point ATI won't be worried at all. The main cash cow for them is the 5770 and lower cards, and there is no sign of competition there.
I would agree if game performance had been left out, as the early rumors/speculation were indicating, but these early results show great game performance too. The price will depend more on the competition, and less on the architecture.
On the other hand, many small and medium-sized businesses which can't afford a $100,000+ supercomputer can become the new consumers of GPGPU and get the same performance at a fraction of the price.
I see your concerns too, and it's going to be exciting to see where the price and gaming performance of the retail GPU end up.
If anyone were concerned about cheap cards and people who want to game on a budget, we'd have $100 5870s right now instead of the crap performance given by the cheap cards out there.
Quoted right from Charlie's mouth.
The statement is false. Large system builders don't care about extreme high end cards and filter most of them down to their boutique high performance shops like Dell's Alienware and HP's VoodooPC. These higher-end shops care less about compliance since their clients aren't concerned with having the systems run in datacenters, work computers, etc.
There are no potential problems 300W+ cards would cause other than on the power supply feeding them. The PCI-E slot is designed to provide up to 75W of power and no more while power connectors aren't an issue either since a pair of 8-pins can provide up to 300W plus the 75W from the slot for 375W.
As for board partners, they aren't concerned either. They can still claim a card is compatible with the PCI-E interface, BUT they won't be able to put the PCI-Express logo on their packaging nor claim compliance in their documentation.
All in all, not a big deal. :up:
i think you're forgetting that Quadro and Tesla are two of nvidia's cash cows... never underestimate the power of high margin products. the gpgpu features of the fermi arch will probably allow nvidia to dominate the professional and hpc markets. also, i've yet to see any proof of gpgpu features decreasing the performance or value of gf100. yes it's a big chip, but the performance improvements will probably justify the extra cost.
Wizzard, I'd be interested in your take on the TDP of Intel's CPUs. It's been shown they have the potential to draw 195W or more, yet they rate their TDP as an average, and not even their documented max power is anywhere near 195. Should there be, or are there, different definitions of TDP for CPUs versus GPUs? That's the confusing part to me. Where are the standards?
Dominating the professional and HPC markets is fine, but not having cards for the mass consumer market is risky, as it lets ATI grab that market with little/no competition. Nvidia should work on getting more mainstream cards out faster than they are currently doing, or they run the long-term risk of being pushed out of the lower markets and becoming a niche company. Sure, that's probably not happening any time soon, but they still need to make sure it doesn't happen later, either.
Edit: Now that I think about it, this feels a bit like disruption coming, kinda like the whole mainframe/mini-computer/PC-thing all those years ago.
if you're going to take issue with speculation, then you should take issue with this thread as we now have 60+ pages of conversation about a product that doesn't yet exist. i think you might have forgotten where you are....
i agree, these delays are costing them dearly, but i don't think it's all in vain. from what i understand the 'fermi' arch is going to be the basis for many future gpus after the gf100, similar to what amd did with r600.
That was probably an ulterior motive. As we know, it underclocks itself when it starts to overheat, even at stock. By keeping it under 300W they greatly reduce the chance of it overheating. If they had kept the 5870 clocks, the overheating issue would have been found much sooner and would have been blown out of proportion.
I don't think the 300W limit was something they really cared about in itself, but it was something they were also able to achieve while getting the 5970 to not downclock itself often.
Why they didn't just design a better heatsink though is beyond me.
It depends. Basically, the second you go over 300W you're no longer backwards-compatible with PCI-E 1.x which can only deliver 75W through the PCI-E slot while PCI-E 2.0 can deliver up to 150W. So, if you make a 300W card you have one of two options: add two 8-pin PCI-E connectors or risk alienating everyone with PCI-E 1.x mobos in addition to confusing the heck out of potential customers who don't know any better.