You're welcome =)
Unfortunately, no :(
I said almost all....
Somebody said this is ALL benchmarks...
http://www.computerbase.de/forum/sho...postcount=3418
Screams in Dirt 2, FarCry 2 and HAWX, 10% better than the 5870 in BC2, same as the 5870 in Crysis.
Incredibly high power consumption and temperature. Even if you don't care about power consumption, you will care about temperature.
http://i44.tinypic.com/4smiyx.png is this true? I mean, in DX11 the 5870 is very close to the 480?
Hey Fermi ROCKS on Unigine, I LOVE THIS GAME!
http://img260.imageshack.us/img260/8320/227qoz.jpg
Let's hope someone tests with 10.3.
http://img.hexus.net/v2/graphics_car...i/N174/OCb.jpg
Quote:
"We increased the fan-speed to 80 per cent (<4,000rpm) and then used EVGA's Precision tool to force up the clocks. From the default 700MHz/1,400MHz/3,698MHz clockings for core, shader and memory, respectively, we hit 800MHz/1,600MHz/ 4,224MHz, representing a 14 per cent increase over stock. System-wide power-draw increases from a peak 475W to 525W.
Looking at the 2,560x1,600 results, Far Cry 2 (8x AA) performance rose from 54.91fps to 61.3fps and DiRT 2 DX11 from 50.69fps to 56.78fps."
My verdict: if it weren't for the temperature it would be worth it, since the price can be justified by the performance. As it stands, it isn't.
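For what it's worth, a quick sanity check on the numbers in that Hexus quote. Just a throwaway Python sketch; the only inputs are the figures quoted above.
Code:
# Percentage math on the Hexus GTX 480 overclocking figures quoted above.
def pct_gain(before, after):
    return (after - before) / before * 100

print(f"Core clock:   +{pct_gain(700, 800):.1f}%")      # 700 -> 800 MHz
print(f"Memory clock: +{pct_gain(3698, 4224):.1f}%")    # 3698 -> 4224 MHz
print(f"System power: +{pct_gain(475, 525):.1f}%")      # 475 -> 525 W at the wall
print(f"Far Cry 2:    +{pct_gain(54.91, 61.3):.1f}%")   # 54.91 -> 61.3 fps
print(f"DiRT 2 DX11:  +{pct_gain(50.69, 56.78):.1f}%")  # 50.69 -> 56.78 fps

So the ~14% clock bump translates into roughly 11-12% more fps in those two games, for about 10% more power at the wall.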
From these numbers the 480 seems to be anywhere from 0 to 40% faster, with somewhere around 15-20% on average. Weird differences: no speedup in Crysis, massive in HAWX (no tessellation there).
There is one characteristic, however, that stays consistent with the transistor increase over the 5870: power consumption is a solid 50% higher :P.
Would like to see other reviews with more games, though.
That's alright.... we want MORE!
Hmmm, surprising that Nvidia was able to pull out a win in Hawx. Also surprising is that it seems like HD 5870 might be doing better at ultra high-res/AA, at least when it doesn't run out of memory.
I'm thinking Fermi isn't quite as bad as R600 was, but I can only see two differences: 1. AA isn't broken 2. Nvidia decided it would get the performance crown no matter what kind of cooling and how many watts it would mean :rotf:
AMD can definitely beat or at least tie for the single GPU performance crown with a new SKU, and hopefully they do. They might just position HD 5970 against it since it consumes less power(!), is a fair bit faster, and probably costs less to make. That would be a mistake though, a lot of people want a fast single GPU card.
not sure if anyone noticed, but the News section has over 800 viewers
Looking at those, the 480GTX isn't that much better than the 5870.
I'll be holding onto my 4870X2 longer it seems. I've no interest in Metro, so I won't be killing my X2 any time soon with a game.
Hopefully when the new ATI and second revision of Fermi is out there'll also be some good DX11 games. :)
Until then I'll be quite happy. Looks like my X2 might last as long as my 8800GTX did before it died :D
I wonder if it has ever happened before that a single GPU consumed more power than a dual-GPU card.
It also puts an nV dual-GPU card out of the question, period.
There should be a live blog update from the event here
http://www.techreaction.net/2010/03/...ent-live-blog/
Good god, those load temps are really quite frightening. Almost 100°C at load? As much as I would be up for buying one of these, those temps really have me concerned.
My room is normally close to 30°C in the summer; I just don't see how I could have a single card, much less SLI, function with anything less than 100% fan, and that's not even for overclocking. :down:
I guess we'll see what Hexus's temps were at 80% fan speed; that will be the make or break as far as I'm concerned. I just cannot buy something that pulls that much power and runs that hot.
I'm sure these chips will be fantastic watercooled, but air cooling seems to be pushing it, to say the least.
Quote (96redformula):
Hey Fermi ROCKS on Unigine, I LOVE THIS GAME!
That game was a big hit when the 5xxx series hit the market; now it will lose its luster :rolleyes:
lol @ hexus bang for buck and bang for watt; GTX480 soundly fails those.
Thanks.
Sad to say the least :( I am scared of seeing what the 470 is capable of now (or not capable of).
Three titles (which nvidia of course didn't optimize for :rolleyes:)
Compared to the 1GB 5870
Uses 155W at load over the 5870 wtf! (40 watts over my prediction, so close!)
And can someone explain to me what the hell the point of tessellation is if it's going to reduce performance?!?
http://www.abload.de/img/20drj8.png
http://www.abload.de/img/219sme.png
Look at Unigine: even if it is optimized for Nvidia, Evergreen scales poorly with tessellation. The 5970 goes from 50 to 26fps and the GTX 480 goes from 45 to 30fps.
DiRT 2 pretty much has DX11 slapped on, like how Crysis just threw in motion blur and called it DX10. http://www.bit-tech.net/gaming/pc/20...-look-dirt-2/3
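Quick math on those Unigine numbers, just to make the scaling comparison explicit (a minimal Python sketch using only the fps figures above):
Code:
# Relative fps loss when tessellation is turned on, from the figures above.
def fps_drop(no_tess, with_tess):
    return (no_tess - with_tess) / no_tess * 100

print(f"HD 5970: -{fps_drop(50, 26):.0f}% with tessellation")  # 50 -> 26 fps
print(f"GTX 480: -{fps_drop(45, 30):.0f}% with tessellation")  # 45 -> 30 fps

So the 5970 loses roughly half its frame rate while the GTX 480 loses about a third, which is the scaling gap being pointed at.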
I'm really disappointed by Nvidia. And yes, power consumption is a HUGE factor in Europe. I live in Denmark, so some of the most expensive electricity in Europe. My 8800 GTS is getting old. Time for an HD 5850 when the price is right.
for those who missed the other site, before it pulled the results...
http://www.mediafire.com/file/jgj3jyrntqy/fermi.7z
ATI just needs hardware physics support and a bit more memory.
Otherwise why would anyone buy a dinosaur like GeForce Fermi?
Too bad for desktop PCs.
DX11 tessellation is next gen. Fermi is the only real DX11 card.
http://img.hexus.net/v2/graphics_car...mi/N174/22.png
There's always a solution :rofl:
http://www.tech-caffe.com/gfx/powerfermi.jpg
Wow, looking at the Hexus benches, the 480 is crazy fast in some games, but so "slow" in others when compared to the 5870. Crysis is a major disappointment for me.
But let's see how the other reviews do. I hope that it will shine more in future heavily tessellated DX11 games.
We have already established Unigine is in NO WAY a good way to gauge real game experience. nVidia supposedly has the ability on Fermi to shift all its resources to tessellation, whereas the HD 5870 has a fixed amount of resources for it. During real gaming Fermi wouldn't be able to run like that, as it has to use its resources elsewhere.
holy mother of fanboy
btw fixed that picture :D
http://img401.imageshack.us/img401/1564/fermipsui.jpg
It might be driver related; the big variations in performance hint at that. GT200 was VERY consistent from game to game. It could be AA, like the GTX 280 shortly after launch; remember when the 4870 beat it in Crysis with AA on? When we get more reviews we can get a better idea of the drivers.
Apparently FarCry 2 and HAWX use a lot of triangles, which Fermi is good at, which is the reason for their significant leads
The performance delta of Fermi is astonishing though... the 480 goes from being 10% below a 5870 to 50% above depending on the game. It "averages" out to the 10-20% range depending on settings (if you take out the synthetic benchmarks), but I don't think we've ever seen a card have THAT big of a gap in performance over another card, i.e. most cards might be 10-30% faster than a previous card, not -10 to 50% for example.
This is true. My friend had to RMA his GTX280 after a few months... twice! (thank god EVGA is literally next door to us)
He picked up the GTX 285 later and the lower heat has prolonged his card life a lot
Ironically, that's what most Nvidia fans cared about.... but now it doesn't matter :rofl:
Talk about role reversals
I actually think this is due to the architecture, and not drivers.
At B3D, most people agree that Fermi has new tri/clock power. Games like HAWX take great advantage of triangles. Look at 5770 CF scaling vs. a 5870: a 5770 CF setup is spec-for-spec the same as a 5870, and yet it can perform far better than a 5870 due to the extra card's triangle setup advantage.
So in triangle intensive games, Fermi sees its biggest gains.
However, in traditional texture or shader powered games (Crysis for example), Fermi falls short.
So I don't think it's drivers at all, but architecture related.
Spotty performance (driver issues?), massive power reqs and high price. These are not living up to the hype from my point of view. I really wanted to like Nvidia again for something beyond folding.
Yeah, this means that the Fermi architecture is really different from both ATI and previous NV cards.
Call me an optimist but I do think the obvious lack of performance in some games might be corrected somewhat by future drivers.
Nobody should buy a card because of possible future driver optimizations, though
Oh, and once you enable DX11 effects in DiRT 2 that require shader power, does the GF100 have less power left over for triangle setup on so many polygons?
Tessellation in D2 (DX9 vs DX11) actually makes the GTX 480 less competitive.
So the GF100 will be blazing fast in Quake 3 and less effects-intensive games, but once you crank post-processing, texturing, and especially shading, there's less/no advantage? (Heaven doesn't use extensive pixel shaders or post-processing btw)
Good hypothesis?
@anihihilat0r - the possibility of them not optimizing for crysis is _________
Post-processing is makeup for a poorly made game (graphics wise). Back in the good old days, when post-processing wasn't used that much, games looked a lot better.
Anyway, you can see it still performs great in Crysis:
http://image3.it168.com//2010/3/26/1...50a09bcaac.jpg
I was thinking the same thing. I don't remember any card being like that. It goes from being slower than HD 5870 to being as fast or faster than HD 5970.
Is it drivers or just some engines suit the card better? That's the most important thing now. Single GPU card with HD 5970 performance would be great.
The same thing happened with tessellation and the Unigine benchmark on the other side.
Just see the Unigine 2.0 thread; it's got your answers as to how credible it is....
I personally like the benchmark itself as it shows tessellation in action, but hearing things like Nvidia licensed its own version 1.1, and some of the other things in the thread, keeps me from considering it credible for comparing cards.
It's looking like Fermi having 480/512 SPs is more due to power consumption than anything else. Having fewer SPs means they can clock higher while staying (just) within the 300W PCIe requirement, so it looks like they sacrificed some shader performance in order to increase texturing performance and pixel pushing power. If that's the case, we probably won't see a 512SP version until a shrink or significant respin, or as some kind of 'ASUS MARS' type product that exceeds 300W.
<WTF>
<WTF>
<WTF>
140Ws of greatness!
Quote:
Anyway, you can see it still performs great in Crysis:
http://image3.it168.com//2010/3/26/1...50a09bcaac.jpg
I guess that sort of answers my question, but it really doesn't.
In order to determine the true benefit of tessellation over standard polygon rendering, give me a scenario with similar complexity, one done with tessellation and the other using old-school methods. Otherwise you aren't really making a point.
That's like saying "look how slow this car goes without a turbo vs this other similar, but 1000 pounds lighter, car with a turbo!". The inverse of that analogy might make more sense, but I think both carry the same meaning: not a fair comparison.
Plenty of people are interested in single GPU vs single GPU. The 5970 is also ~$200 more than the 480, and a 5990? Come on, it's just a 5970 overclocked to 5870 speeds, and it will likely be extremely expensive, probably $300 more than the 480.
Not to mention, using your logic, since the 5970 already beats it, what's the point? :shrug:
evga pics!!
http://www.komplett.nl/img/p/800/590843.jpg
^^ the 480 costs more than the 5870 and 470 because it comes with a new feature: the George Foreman grill. :D
http://www.komplett.nl/img/p/800/590844.jpg
Has the major power usage thing been confirmed across different reviews? It's one thing if one review has it pegged at > 100W more, but if 3 or more say the same thing... then we have an issue
That depends on how tessellation is done in relation to the rest of the card.... which will be based on architecture.
Yes yes DX11 was slapped on for Dirt2 and other DX11 games, but the Hexus benches showed the card falling down in DX11 so I'm not sure how to explain that at all
Yes, the tessellation test in Unigine Heaven 2 has an advantage for Nvidia, because they perform tessellation using CUDA and can assign more card resources than they would be able to in a game where other effects are also rendered.
In games that use tessellation, like Dirt 2, performance is very similar to the ATI cards.
The card performs quite well though and will only improve with newer drivers. The ATI performance has gone up some 10 - 20% since launch due to more optimised drivers.
The performance per watt isn't very good though.
It's a lot more than just comparing how the GPU handles a lot of polygons vs tessellated polys. The designers apply a single map for the tessellation on top of the old model, which gives the game the ability to do unique things. There's a nice crab movie where it starts off basic, then they move a slider and it gets a whole bunch of spikes that just grow, all in real time. It's not easy to do that with a simple model, since the polys that come only from the displacement map cannot be individually identified.
But in a nutshell, the polygons from tessellation are added later in the rendering process, which can help reduce resource requirements, plus a bunch of other bonuses only game developers really give a whoot about.
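For anyone who hasn't seen the displacement-map idea in code, here's a very rough, hypothetical Python sketch (not how the crab demo or any real driver does it, and the height function is made up): the GPU keeps only a coarse mesh plus a height map, subdivides triangles on the fly, and pushes the new vertices out along the normal.
Code:
import math

# Hypothetical stand-in for a displacement (height) map lookup.
def sample_height(u, v):
    return 0.1 * math.sin(8 * math.pi * u) * math.sin(8 * math.pi * v)

def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def subdivide(tri):
    """Split one triangle into four by inserting edge midpoints."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def displace(vertex, normal=(0.0, 0.0, 1.0)):
    """Push a vertex along the surface normal by the sampled height."""
    x, y, z = vertex
    h = sample_height(x, y)          # using x,y as stand-in UVs
    return (x + normal[0] * h, y + normal[1] * h, z + normal[2] * h)

# One coarse triangle is all that is "stored"; the detail is generated on the fly.
coarse = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]

refined = coarse
for _ in range(3):                   # three tessellation levels: 1 -> 64 triangles
    refined = [t for tri in refined for t in subdivide(tri)]

detailed = [tuple(displace(v) for v in tri) for tri in refined]
print(f"{len(coarse)} stored triangle -> {len(detailed)} rendered triangles")

The point being made above is that only the coarse model and a small height map have to live in memory; the extra geometry exists only inside the pipeline, which is also why you can animate the detail (the spikes) with a slider.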
Yeah, I love how the GTX285 can run in DX11. And then you look at the lower resolution, and it is faster than the 5870
:rofl::ROTF:
Talk about a bad review. Most likely, the Nvidia cards defaulted to DX9 hence the score
Look at techreport's (a much more reliable site anyways) own benches on the matter
http://techreport.com/forums/viewtop...&sd=a#p1009567
Can someone with an HD 5870 confirm this? FPS looks kinda low here.
And since when is GTX285 DX11?
http://www.abload.de/img/0b889bc0-0b6f-47dd-b525c7a.jpg
2 (or 3?) hours to go!
ATI has a driver issue running 4x AA in Metro 2033, which will be fixed soon I'm sure. The testing and optimisations for the game were done on Fermi cards, so ATI needs some time to improve the performance and fix any bugs.
Advanced PhysX on or off has little effect on the FPS in Metro 2033, because the game is GPU limited and also supports multicore CPU PhysX. On my system I have no performance penalty when enabling advanced PhysX. Nvidia cards have also been shown to run faster using the CPU to run advanced PhysX and letting the GPU run the game.
anyone save the die shot from hexus review?
Based on Hexus' results, here's a quick peek at performance; all tests at 1920x1200 AA 4x, except Dirt 2/AA 8x/DX 11 at 2560x1600.
We need more data :(
http://i39.tinypic.com/2mc8xvo.jpg
A 1 million poly scene (average for games today) takes about 20 GFLOPS at 60fps for geometry. The bottleneck is memory capacity and in some cases bandwidth. Tessellation increases polys while using much less memory, about 10x less. It also keeps data on chip, so it uses less GDDR bandwidth.
DX11 games were developed on 5xxx cards, so they haven't really done much debugging or optimization for Nvidia. STALKER should favor ATI with tessellation because it has used non-DX11 tess for a while. Keep in mind none of those games have heavy tessellation yet; even Metro 2033 doesn't. It will be nice to see who can get the first good DX11 title out (graphics wise).
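To make the memory part of that concrete, here's a back-of-the-envelope sketch. Every number in it (vertex size, vertex sharing, mesh sizes, map size) is an assumption picked for illustration, not something from a review.
Code:
# Rough vertex-memory comparison: fully detailed mesh vs coarse mesh + height map.
# All sizes below are assumptions for illustration only.
BYTES_PER_VERTEX = 32      # assumed: packed position + normal + UV
VERTS_PER_TRI = 0.6        # assumed: vertex sharing in an indexed mesh

def mesh_bytes(triangles):
    return triangles * VERTS_PER_TRI * BYTES_PER_VERTEX

full_detail = mesh_bytes(1_000_000)   # store all 1M triangles in the vertex buffer
coarse      = mesh_bytes(50_000)      # coarse base mesh (tessellated up on chip)...
height_map  = 1024 * 1024             # ...plus an 8-bit 1K x 1K displacement map

print(f"1M-triangle mesh:   {full_detail / 2**20:.1f} MiB")
print(f"coarse mesh + map:  {(coarse + height_map) / 2**20:.1f} MiB")
print(f"ratio:              ~{full_detail / (coarse + height_map):.1f}x")

With those made-up inputs the saving lands in the same ballpark as the "about 10x less" figure above; the real number obviously depends on the mesh and the map.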
Don't forget that's system power consumption, so the 146.3% is deceptive. Power consumption of only the GFX card is probably more like 190-200%.
For a second I saw the power consumption (load) and thought... wow, 48% more performance! What game was that!... then I facepalmed.
Thanks for the chart though... it's much faster in DX9 DiRT 2, HAWX and Far Cry 2 (expected), but slower in Crysis Warhead by a little and just a bit faster in DiRT 2 DX11 and BFBC2...
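If anyone wants to see why the system-level percentage understates the GPU-level difference, here's the rough subtraction being implied. A Python sketch: the wattages and the rest-of-system baseline are made-up example numbers, only the 146.3% ratio is taken from the chart above.
Code:
# System draw vs GPU-only draw: subtracting a fixed platform baseline
# widens the ratio. All wattages here are illustrative assumptions.
SYSTEM_5870 = 300.0      # example system load draw with an HD 5870 (W)
SYSTEM_480  = 439.0      # example draw with a GTX 480, chosen to give ~146.3%
REST_OF_SYSTEM = 160.0   # assumed: CPU, board, drives, PSU losses (W)

gpu_5870 = SYSTEM_5870 - REST_OF_SYSTEM
gpu_480  = SYSTEM_480 - REST_OF_SYSTEM

print(f"system-level ratio: {SYSTEM_480 / SYSTEM_5870:.1%}")  # ~146%
print(f"GPU-only ratio:     {gpu_480 / gpu_5870:.1%}")        # considerably higher

Same measurement, but once the fixed chunk of the system is subtracted, the card-to-card gap nearly doubles, which is the point being made.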
I just bought a Sapphire 5850 from Newegg for $279 shipped.
The Supersonic Sled demo is up on the nVidia site.
Runs only on the GTX 4x0 series.
Yeah would have liked to see more reviews but it's a Sapphire. I'm confident it will be fine. And prices did just drop somewhat this morning but I don't expect that to last now.
Mine dropped $20 from when I first looked at like 10AM CST too. Then the cheapest was $305 w/ free shipping.
Power consumption is piss poor. It's pretty hard to ignore; I don't know how they can consume so much power with a single chip. Power-wise this is as much of a disaster as the Pentium 4 D editions. These things look like they are going to bench like barn cats under LN2, if they are still clocking 15% higher in very warm conditions.
This generation the 5850 is the card to get for price to performance and watt.
The delta between the 5850 and the 5870 is smaller than the delta between the 5870 and Fermi. Considering pricing, the 5850 has a $120 price difference compared to Fermi, and the 5870 a $100 price difference. If you don't mind CF solutions, a 5850 combo is pretty hard to ignore if you want to spend about $550.
I can't believe people are dismissing driver potential here, at least the team Red fans. This card is a brand spanking new architecture, and it shows with the up and down performance. The current drivers are so specialized for the GTX 28x / G80 it's not even funny. The very same people that were saying the 5870 didn't perform that well at launch because of drivers are the same ones dismissing Fermi's driver potential.
The fact is Fermi's biggest improvements are in games where previous generations performed poorly, which shows how new the architecture is. Considering that these chips have only had 4 months for driver development, I think it's unreasonable to think Fermi won't pick up big driver gains in the future.
I agree with you annihilator that future driver improvements should never be part of the purchasing equation. Nonetheless, I think it's impossible for Fermi not to pick up huge jumps in the future.
2 hrs 45 minutes remaining until DH review.
I figured it out: the other Sapphire card is $25 higher, but comes with CoD, so both cards are a good $40 cheaper than the competition, which is a nice bonus. But in my clicking around I saw a $514 5870 with a water block, which I think is the way I would go. OC the crap out of it until it consumes as much power as the 480, and see who can win then.
The question actually is how much the architecture is influencing performance in those games.
As B3D users have pointed out, Fermi might have changed how the card does setup and tri/clock, which is corroborated by its performance in HAWX (a triangle-intensive game). In traditional shader/texture games, like BFBC2 and Crysis, it doesn't seem to do all that much better than previous generations; then again, with a lower ROP count and the same TMU count as a GTX 285, that might be why it has limited gains.
So my point is that "up and down" performance isn't indicative of drivers when the other variable, a different architecture, is in play. If Fermi improves triangle rate, then it's not drivers at play; it's the architecture taking advantage of a game's engine, for example.
We'll need the full review of the architecture to see what's at play here, but that's probably the biggest reason there IS a performance delta that big between games.
furby or 5990...
Hmm, besides the 5990 costing more, I guess the only other con for it is being dual GPU; all the rest is better than furby.. *undecided* x_x
It's crazy: that Sapphire card, now $329 with 220 reviews, was just $259 on 10/2/2009. Half a year goes by, and the price rises $70. :shrug:
Go figure. Compare the 5850 to the GTX 280: it debuted at $659, and 6 months later the price had dropped by $300 to $359.