and 480 GTX will go down in history as "the other Voodoo 6000"
Haha I love what is rumored currently :) :ROTF:
yeah keep dreaming, then ATI can bend all of us over the table and demand $1500 for a high-end card like you Daamit fans always wanted.
Perhaps you are speaking of kids, I'm not sure, but most adults with a decent income can afford the best PC hardware, at least the people I know anyway. Define "real gamers" anyway, are there "fake gamers"? And when you say "spend more money on games", tell me, in the course of a year do you think they purchase 5 games? 10 games? Not very many "real gamer" titles come out.
Also you proved my point when you say "uses all that JUST for games", a lot of people with a 30" monitor and high end hardware just might use the PC for more than games, shocker!
mhhh that IS nice :D
but then look at games like crysis, stalker, metro 2033... crysis2 is coming up... forget about playing that with a single gpu at 2560 res...
crysis? stalker? metro 2033? assassins creed? and thats with multi gpu... like i said, while there have been a lot of improvements, id really avoid multi gpu when building a gaming rig...
charlie claimed 5-10k, ive heard 10-15k from others, and i think it was zed_x who mentioned 60k during the first weeks... though he also claimed the cards would have some weird new marketing name which they didnt, and that the 480 had 512sps etc...
good point... but i think most of the tweaks were related to new games that they had to optimize the drivers for, and then a lot was most likely reducing cpu dependence, so you dont need as much mhz to drive the cards... so if you compare old vs new drivers with a 4ghz intel cpu you wouldnt notice that much of a boost...
but dont forget, the 10.3 drivers brought the biggest boost for the 5800 series so far and it was UP TO 10% per gpu... on average it was maybe 3% across the board at most... thats why i said i dont think nvidia will be able to improve average perf across the board by much more than 5% in the next couple of weeks... in individual games, im sure we will see 10% boosts in some settings, maybe even 15%... but on average... dont think so...
no idea... but its not like scientists studying a UFO, they built the darn thing, so they should know how to optimize their drivers to make use of it, and id be surprised if they waited for final hardware to think about how to actually use the units and bandwidth with their driver/compiler... in my experience its very uncommon to see notable performance boosts across the board that go beyond tweaking and fixing the drivers for new games, 6+ months after a new architecture came out... you usually see notable boosts within 3 months of the launch and from there on there are barely any perf boosts across the board, just tweaks and fixes. fermi launched NOW, but was delayed by 6 months... so they started working on the drivers more than 6 months ago... theyve had more than 6 months to tweak their drivers!
there was an article about driver evolution on ati and nvidia cards on 2 sites in the last year iirc... they both concluded that within 6 months drivers rarely improve performance across the board by more than 5%, and the best was around 10% at some res for a certain card iirc.
thats nonsense! i couldnt disagree more! :D
hehehe, yeah i tend to disagree a lot... i dont mean that you are wrong though, so please dont take it personally :)
true... aa isnt that important at high res... unless its a low res alpha texture like a fence or twigs of a tree etc...
but think about it, would you prefer lower detail and quality settings at 2560 res with 2-4" more screen, or better iq on a slightly smaller display?
its a subjective thing... id def prefer a smaller screen with better iq...
and thats not even taking into account the cost of a bigger screen plus more gpu oomph...
i think thats mostly based on texturing performance and geometry performance though... if you look at the games where fermi does well, its games that are shader and geometry heavy. in games that are texture heavy 480 is as fast as a 295 or even slower... im not really sure, but thats the impression i got when reading the fermi reviews... crysis is very texture heavy, and fermi performs the same as 5870. stalker and metro 2033 are very shader heavy and fermi does very well there... 3dmark series have always been pretty texture heavy, and fermi is as fast as a 5870 there...
do you really think in 2010 this is still a valid claim?
i thought 5800 and fermi showed us that resolution is no longer a driving factor? i mean ati and nvidia had to come up with multi monitor solutions and 3d to somehow show a notable performance boost for the latest gen hardware, cause in 1080P, which is still NOT the standard, there really isnt a need to buy the latest and greatest hardware... at all...
i think we have reached a point where the number of pixels becomes less and less important, and pixel counts are increasing more and more slowly. its all about increasing calculation power per pixel now. thats what several game devs said at the last graphics convention as well...
~5970 performance, faster @ tessellation, less heat than a 5970, ~399$, q1 2011... and ~470 performance at half the power consumption for 249$ is what id expect as well...
i hope they focus on better 3d support and come up with a proper infrastructure and dont just tell customers to go and find displays and glasses themselves and figure out how to set it all up and get it working properly...
that about sums it up. looks like they didnt have enough time to really figure out how to utilize the new architecture for older games. and honestly why should they care, who wants 10 more fps when u already have 150fps, but for newer games, those extra 5 will really help when youre at 45fps. ill probably take a look at the hard ocp review to see what settings they found playable (i do like how they do that kind of thing), id bet you can enjoy every old game just fine, and with a few driver enhancements, up the 4xaa to 8x (woopty dooo. lol)
yes, we got it, you're in the money and so are your friends :P :D
its not about affording it, its about choosing to spend money on something you dont actually need or see no real gain from...
i wouldnt categorize gamers based on how many games they buy or how much they spend on hw but on how many hours a day they play games... then again, for hardware and software companies those gamers are actually not interesting at all as they dont make a lot of money off of them :lol:
yeah but once you go 30", for whatever reason, you HAVE to play at 2560... or else the image quality will suck... well not suck, but why spend so much on a big screen and then play below its native res with a slightly blurry image? thats what i mean... once you go 30" you HAVE to play at its native res and you HAVE to invest more in hardware... and you might be able to get 99% the same game play experience as on single gpu 24" screens, but it wont be the same or even better... show me any pro gamer that plays on a 30" screen... ANY! see what i mean? ask most benchers here on xs... they bench on tri or quad sli... but when they play games they prefer single gpu rigs...
maybe... or maybe its that older games are texturing limited on fermi... the newer the game, the more pixel and geometry heavy games get... mostly pixel shader heavy...
i think fermi is definitely texturing limited...
maybe it has enough texturing performance so they focused on pixel shader and geometry performance... but tbh im not very happy with the texturing in current games, and many people arent, just look at all the texture mods and hacks out in the wild... would be interesting to make an article about this... cant wait for some beyond3d or techreport or xbitlabs analysis of fermi :)
Its weird though. In the anandtech review, the 480 does significantly better at crysis warhead, 11-17 percent better with 33 percent higher minimums than the 5870. The SLI results are a complete beat down, with gtx480 SLI likely matching CF 5970s.
single gpu isnt too scary, u just cant use the ultra textures or 4x+AA, but with that kind of resolution, and dot pitch, u may not need to use more than 2xaa.
and yes if u can drop 1000$ on a monitor, it may just be better to drop 500$ on a good IPS around 25" and put an extra gpu in ur pc. thats what i was telling myself with the SSD i had, i could buy 2 more 4850s, or a 60GB SSD. but i was happy with my gpu performance (especially for WoW) and the SSD felt more worthwhile, even though its retarded expensive still. but back to the point, 1000$ on a monitor probably means an extra 200$ on GPU power.
and keep in mind its "only" about 78% more pixels than 1920x1200.
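quick sanity check on that pixel math (a minimal Python sketch, nothing assumed beyond the two resolutions): 2560x1600 works out to ~78% more pixels than 1920x1200; a ~40% figure only appears if you count the extra pixels against the 2560x1600 total.

```python
# pixel counts for the two displays being compared
px_30 = 2560 * 1600   # 4,096,000 pixels (30" panel)
px_24 = 1920 * 1200   # 2,304,000 pixels (24" panel)

extra = px_30 - px_24                 # 1,792,000 additional pixels

print(f"{extra / px_24:.0%} more pixels than 1920x1200")       # 78%
print(f"{extra / px_30:.0%} of the 2560x1600 total is extra")  # 44%
```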
This shouldn't be a problem with nv's supposed wizards at writing drivers. Can't have it both ways, either they are or they aren't. Saying nv's drivers are second to none, while all this speculation blames poor 4xx performance on poor drivers, seems to me like a dog chasing its tail.
Not really.
They can have better drivers than AMD and be good at making them. But just recently they've pulled 15-20% more performance out of some games, on an architecture that's extremely similar to the g80, which is 3 years old. Now are you honestly going to tell me that a BRAND SPANKING NEW ARCHITECTURE which has had basically four months of development won't get similar improvements?
Compare the 5870 with release drivers to the drivers now. This arch is really just a cleaned up version of the rv770 (which was based off of 2900 amirite?) and yet they still added a clean 10 - 15% increase across the board. Do you really expect Fermi to not have similar improvements if not MUCH greater improvements in the coming months?
+1,
i think its a lot of balancing the new gpu/memory power. the 4870s had much stronger ram relative to gpu power than we have now. i dont know much about building drivers, but when it comes to billions of calculations a second, i bet its a lot of trial and error testing, and then boom, they find the sweet spot
Yea but unlike GF100, Cypress (RV870 or whatever its new name is) was not a huge jump from the RV770 architecture. On the contrary, GF100 in comparison to G80, G92 or G200 is a huge jump in some areas. It may be a while before we see Fermi's real performance and by then it will probably already be too late.
I added one more smiley in my post so that nobody gets me wrong - won't happen and would be very bad anyway... I love benching different cards but Fermi still needs some time until it's worth spending some DICE or LN2 on it for me.
I don't see it like that. I see a few percentage points that you could attribute to refining optimizations. The big jumps I see are in DX11, which isn't exactly old. I think it was just released in September. There's no excuse for nv not having as much development time with it as AMD has. I also see big improvements in TWIMTBP'd titles, or other games where AMD doesn't get pre-release code to optimize for, which they basically have to buy off the shelf. Take Metro for example: optimizations are still coming as they never had access to the code. I can't think of a situation where nv wouldn't have the chance to optimize for their own games.
2 completely different situations I think.
You still think fermi will be worth nvidia's time in the future?
G80 is creaking and cracking, and the more I read about fermi the more I realize it's going to take a genius and an engineering miracle to bring a well balanced GPU out of that hungry and noisy beast.
If Nvidia can do it without a major redesign, I will be forever impressed. Advice to nvidia (worthless I know): it's time to move on. I don't want to see you like this.
it wont take a miracle to make a good gpu based off of fermi. it might catch us by surprise like rv770 but thats just from good engineering. they dont need a major redesign, if anyone does its ATi. fermi is really good at tessellation and designed to handle it very efficiently. they just need more work on process/yields.
to be honest i think the hardest thing about the new architecture is the predictive load balancing that needs to be done on the GPU. I think that with a bit of time and more cards in the wild, and in test groups, the drivers can easily be tweaked to provide better load balancing on a per game basis. I am guessing here though.
@saaya, I personally think that 1920x1200 is the perfect res to game at, on a 24" screen, tho sometimes i do feel like even 24" is a little large since i tend to subscribe to the "nose up against the screen" CS school of play.
I think that this thread has devolved into nvidia vs ATI, which is all I've read on any forum lately, and I'm getting a little tired of it. I personally don't lean towards either camp, but I also will not write off the gtx480 as quickly as some people on this board will; I'll give it a fair chance. For all we know this new architecture might be the future, look at where the pathetic 2900xt ended up ;)
Oooh, tessellation is such a big thing now that nvidia's got it. We know ATI has solid hardware under the hood. In fact, if you check this link you can see there isn't a single figure that nvidia has the edge on. Which shows ATI is still missing something: they are producing a beast with better specs on paper, fewer transistors (a billion fewer), and excellent power consumption.
Hardware or software? I am guessing ATI's processing units aren't saturated effectively, something in the hardware. Load balancing isn't doing its thing properly or maybe something entirely different.
Tessellation? I can make a guess that ATI can improve tessellation performance with better drivers, given the nature of tessellation.
considering people are still able to OC the crap out of these cards, i wonder if they can be undervolted and still run a solid 500-600mhz, but with a huge chunk less of power.
not really. its designed around big triangles. if you have small primitives from tessellation it pretty much destroys performance of pixel shaders.
you might want to look harder.
http://www.hardware.fr/medias/photos...IMG0028307.gif
http://www.hardware.fr/medias/photos...IMG0028308.gif
fyi heaven culls ~70% of triangles in the dragon scene.
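on the small-primitives point above: gpus rasterize and shade pixels in 2x2 quads, so a triangle that only touches part of a quad still pays for all 4 shader invocations. a toy model (just the quad bookkeeping, not measured hardware behavior) shows why heavily tessellated micro-triangles waste pixel shading work:

```python
# toy model of quad overshading: gpus shade in 2x2 pixel quads, so every
# touched quad costs 4 shader invocations even if the triangle only
# covers 1-3 of those pixels. numbers are illustrative, not measured.

def shading_efficiency(pixels_covered: int, quads_touched: int) -> float:
    """useful shader work divided by total shader work (4 invocations per quad)."""
    return pixels_covered / (quads_touched * 4)

# big triangle: ~10,000 covered pixels, mostly full quads
print(shading_efficiency(10_000, 2_600))  # ~0.96, almost no wasted work

# tessellated micro-triangle: 3 pixels straddling 3 different quads
print(shading_efficiency(3, 3))           # 0.25, 75% of the shading wasted
```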
their graphics pipeline is stalling on tessellation. its a hardware problem that will require a lot of thinking and problem solving to get right. im sure they will have a good solution in r900.
considering the low default voltage I am not that optimistic that undervolting can save you a huge chunk of power. Maybe a bit, but if you have to drop the clocks to make the undervolt stable, you basically end up with the performance and consumption of a different, slower card, so we are back at the point that it doesn't look good for undervolting. Also I am wondering why software voltage tools have still only been announced and not released -.-
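for a rough feel of what undervolting could buy, here is a back-of-the-envelope sketch assuming dynamic power scales with V^2 * f (leakage is ignored, and the stock/undervolt numbers are hypothetical placeholders, not measured GTX 480 values):

```python
# crude estimate: dynamic power ~ C * V^2 * f. leakage is ignored, so
# treat the result as an optimistic bound on undervolting savings.

def relative_power(v: float, mhz: float, v0: float, mhz0: float) -> float:
    """power at (v, mhz) relative to stock (v0, mhz0) under the V^2*f model."""
    return (v / v0) ** 2 * (mhz / mhz0)

stock_v, stock_mhz = 1.00, 700   # hypothetical stock voltage / core clock
uv_v, uv_mhz       = 0.85, 550   # hypothetical stable undervolt + downclock

ratio = relative_power(uv_v, uv_mhz, stock_v, stock_mhz)
print(f"~{ratio:.0%} of stock dynamic power")             # ~57%
print(f"at {uv_mhz / stock_mhz:.0%} of the stock clock")  # 79%
```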
oh come on, dont be such a party pooper :P
yeah... "only" ~78% more pixels, but for some reason perf collapses when going to 2560x1600 with current hw... i expected a lot from the 480s in that regard... i thought theyd be really really nice at high res and a perfect companion for a 30" screen...
yeah, if you buy a 2560 display you basically have to take into consideration that if you play games, you have to add some extra money to the cost of the display for upgrading your pcs graphics...
what drivers did anandtech use? what cpu speed and what windows version? i noticed big differences during the 5800 launch in reviews using a 2.66ghz 920 or a 4ghz 965, and there were weird differences in 32bit vs 64bit as well, some games seem to perform better on 64bit than others, and iirc ati gained slightly more from 64bit than nvidia did? cant remember...
yeah... and that sweet spot is probably different for every game, every res... if you use aa and af its different... and then they find another tweak and the sweet spot is different again... :D
lol@nose at the screen hehe :D
well tbh, if the pixels are big and the space between them is tiny, then theres no problem looking at the screen from close up i think :)
but yeah, 24" 1920x1080 or 1200 sounds like the best res to me as well...
i didnt know there were 27" screens with the same res tho... those might actually be very interesting as well :D
about ati vs nvidia... like i said before, i dont think theres a big difference between 5850 5870 470 and 480... what can you do on one of those that you cant do on another? if wed build 4 rigs one with each of them, would any of us be able to tell them apart by playing games on it? i doubt it...
for me it comes down to cost, driver preference and whether power and heat is important to the user...
i dont think even the most radical ati fanboy would call fermi a slow card... its hotter and costs more than a 5870, but i doubt any ati fan would refuse to use one if youd give him one for free :D