and the 480 GTX will go down in history as "the other Voodoo 6000"
Haha I love what is rumored currently :) :ROTF:
yeah keep dreaming, then ATI can bend all of us over the table and demand $1500 for a highend card as you Daamit fans always wanted.
Perhaps you are speaking of kids, I'm not sure, but most adults with a decent income can afford the best PC hardware, at least the people I know can. Define "real gamers" anyway, are there "fake gamers"? When you say "spend more money on games", tell me: in the course of a year do you think they purchase 5 games? 10 games? Not very many "real gamer" titles come out.
Also, you proved my point when you say "uses all that JUST for games": a lot of people with a 30" monitor and high end hardware just might use the PC for more than games, shocker!
mhhh that IS nice :D
but then look at games like crysis, stalker, metro 2033... crysis2 is coming up... forget about playing that with a single gpu at 2560 res...
crysis? stalker? metro 2033? assassins creed? and thats multi gpu... like i said, while there have been a lot of improvements, id really avoid multi gpu when building a gaming rig...
charlie claimed 5-10k, ive heard 10-15k from others, and i think it was zed_x who mentioned 60k during the first weeks... though he also claimed the cards would have some weird new marketing name which they didnt, and that the 480 had 512sps etc...
good point... but i think most of the tweaks were related to new games that they had to optimize the drivers for, and then a lot was most likely reducing cpu dependence, so you dont need as much mhz to drive the cards... so if you compare old vs new drivers with a 4ghz intel cpu you wouldnt notice that much of a boost...
but dont forget, the 10.3 drivers brought the biggest boost for the 5800 series so far and it was UP TO 10% per gpu... on average it was maybe 3% across the board at most... thats why i said i dont think nvidia will be able to improve average perf across the board by much more than 5% in the next couple of weeks... in individual games, im sure we will see 10% boosts in some settings, maybe even 15%... but on average... dont think so...
no idea... but its not like its scientists studying a UFO, they built the darn thing so they should know how to optimize their drivers to make use of it, and id be surprised if they waited for final hardware to think about how to actually use the units and bandwidth with their driver/compiler... in my experience its very uncommon to see notable performance boosts across the board that go beyond tweaking and fixing the drivers for new games 6+ months after a new architecture came out... you usually see notable boosts within 3 months of the launch and from there on there are barely any perf boosts across the board, just tweaks and fixes. fermi launched NOW, but is delayed by 6 months... so they started working on the drivers more than 6 months ago... so theyve had more than 6 months to tweak their drivers!
there was an article about driver evolution on ati and nvidia cards on 2 sites in the last year iirc... they both concluded that within 6 months drivers rarely improve performance across the board by more than 5%, and the best case was around 10% at some res for a certain card iirc.
thats nonsense! i couldnt disagree more! :D
hehehe, yeah i tend to disagree a lot... i dont mean that you are wrong though, so please dont take it personally :)
true... aa isnt that important at high res... unless its a low res alpha texture like a fence or twigs of a tree etc...
but think about it, would you prefer lower detail and quality settings at 2560 res and 2-4" more screen than better iq at a slightly lower display?
its a subjective thing... id def prefer a smaller screen with better iq...
and thats not even taking into account the cost of a bigger screen plus more gpu oomph...
i think thats mostly based on texturing performance and geometry performance though... if you look at the games where fermi does well, its games that are shader and geometry heavy. in games that are texture heavy 480 is as fast as a 295 or even slower... im not really sure, but thats the impression i got when reading the fermi reviews... crysis is very texture heavy, and fermi performs the same as 5870. stalker and metro 2033 are very shader heavy and fermi does very well there... 3dmark series have always been pretty texture heavy, and fermi is as fast as a 5870 there...
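That texturing hypothesis can at least be sanity-checked with back-of-envelope fillrate math. The TMU counts and clocks below are the commonly published spec-sheet figures (60 texture units at 700 MHz for the GTX 480, 80 at 850 MHz for the HD 5870); treat them as assumptions rather than gospel:

```python
# Rough peak bilinear texel-fillrate comparison from spec-sheet numbers.
def texel_fillrate_gtexels(tmus, core_mhz):
    """Peak bilinear texels/s = texture units * clock (in GTexels/s)."""
    return tmus * core_mhz / 1000.0

gtx480 = texel_fillrate_gtexels(60, 700)   # 42.0 GT/s
hd5870 = texel_fillrate_gtexels(80, 850)   # 68.0 GT/s

print(f"GTX 480: {gtx480:.1f} GT/s, HD 5870: {hd5870:.1f} GT/s")
print(f"5870 texturing advantage: {(hd5870 / gtx480 - 1) * 100:.0f}%")
```

On paper that is roughly a 62% texturing advantage for the 5870, which is at least consistent with the idea that texture-heavy games would not favor Fermi.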
do you really think in 2010 this is still a valid claim?
i thought 5800 and fermi showed us that resolution is no longer a driving factor? i mean ati and nvidia had to come up with multi monitor solutions and 3d to somehow show a notable performance boost of the latest gen hardware, cause at 1080P, which is still NOT the standard, there really isnt a need to buy the latest and greatest hardware... at all...
i think we have reached a point where the number of pixels becomes less and less important and resolutions increase more and more slowly. its all about increasing calculation power per pixel now. thats what several game devs said at the last graphics convention as well...
~5970 performance, faster @ tesselation, less heat than a 5970, ~399$, q1 2011... and ~470 performance at half the power consumption for 249$ is what id expect as well...
i hope they focus on better 3d support and come up with a proper infrastructure, and dont just tell customers to go and find displays and glasses themselves and figure out how to set it all up and get it working properly...
that about sums it up. looks like they didnt have enough time to really figure out how to utilize the new architecture for older games. and honestly why should they care, who wants 10 more fps when u already have 150fps? but for newer games, those extra fps will really help when youre at 45fps. ill probably take a look at the hard ocp review to see what settings they found playable (i do like how they do that kind of thing). id bet you can enjoy every old game just fine, and with a few driver enhancements, up the 4xaa to 8x (woopty dooo. lol)
yes, we got it, youre in the money and so are your friends :P :D
its not about affording it, its about choosing to spend money on something you dont actually need or see no real gain from...
i wouldnt categorize gamers based on how many games they buy or how much they spend on hw, but by how many hours a day they play games... then again, for hardware and software companies those gamers are actually not interesting at all as they dont make a lot of money off of them :lol:
yeah but once you go 30", for whatever reason, you HAVE to play at 2560... or else the image quality will suck... well not suck, but why spend so much on a big screen and then play below its native res with a slightly blurry image? thats what i mean... once you go 30" you HAVE to play at native res and you HAVE to invest more in hardware... and you might be able to get 99% the same game play experience as on single gpu 24" screens, but it wont be the same or even better... show me any pro gamer that plays on a 30" screen... ANY! see what i mean? ask most benchers here on xs... they bench on tri or quad sli... but when they play games they prefer single gpu rigs...
maybe... or maybe its that older games are texturing limited on fermi... the newer the game, the more pixel and geometry heavy games get... mostly pixel shader heavy...
i think fermi is definitely texturing limited...
maybe it has enough texturing performance so they focused on pixel shader and geometry performance... but tbh im not very happy with the texturing in current games, and many people arent, just look at all the texture mods and hacks out in the wild... would be interesting to make an article about this... cant wait for some beyond3d or techreport or xbitlabs analysis of fermi :)
It's weird though. In the anandtech review the 480 does significantly better at crysis warhead, 11-17 percent better with 33 percent higher minimums than the 5870. The SLI results are a complete beat down, with SLI gtx480 likely matching CF 5970s.
single gpu isnt too scary, u just cant use the ultra textures or 4x+AA, but with that kind of resolution, and dot pitch, u may not need to use more than 2xaa.
and yes if u can drop 1000$ on a monitor, it may just be better to drop 500$ on a good IPS around 25" and put an extra gpu in ur pc. thats what i was telling myself with the SSD drive i had, i could buy 2 more 4850s, or a 60GB SSD. but i was happy with my gpu performance (especially for WoW) and the SSD felt more worthwhile, even though its retarded expensive still. but back to the point, a 1000$ on a monitor probably means an extra 200$ on GPU power.
and keep in mind 2560x1600 is ~78% more pixels than 1920x1200.
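For reference, the pixel arithmetic between the resolutions discussed in this thread works out like this:

```python
# Pixel-count ratios between the common gaming resolutions mentioned above.
def pixels(w, h):
    return w * h

res_30in = pixels(2560, 1600)   # 30" panel: 4,096,000 px
res_24in = pixels(1920, 1200)   # 24" 16:10: 2,304,000 px
res_1080 = pixels(1920, 1080)   # 1080p:    2,073,600 px

print(f"2560x1600 vs 1920x1200: +{(res_30in / res_24in - 1) * 100:.0f}%")
print(f"2560x1600 vs 1920x1080: +{(res_30in / res_1080 - 1) * 100:.0f}%")
```

So a 30" panel pushes roughly 78% more pixels than a 24" 16:10 screen, and nearly double the pixels of 1080p.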
This shouldn't be a problem with nv's supposed wizards at writing drivers. Can't have it both ways, either they are or they aren't. Saying nv's drivers are second to none, with all this speculation on poor 4xx performance being due to poor drivers, seems to me like a dog chasing its tail.
Not really.
They can have better drivers than AMD and be good at making them. But just recently they've pulled 15-20% more performance out of some games, on an architecture that's extremely similar to the g80, which is 3 years old. Now are you honestly going to tell me that a BRAND SPANKING NEW ARCHITECTURE which has had basically four months of driver development won't get similar improvements?
Compare the 5870 with release drivers to the drivers now. This arch is really just a cleaned up version of the rv770 (which was based off of 2900 amirite?) and yet they still added a clean 10 - 15% increase across the board. Do you really expect Fermi to not have similar improvements if not MUCH greater improvements in the coming months?
+1,
i think its a lot of balancing the new gpu/memory power. the 4870s had much stronger ram relative to gpu power than we do now. i dont know much about building drivers, but when it comes to billions of calculations a second, i bet its a lot of trial and error testing, and then boom, they find the sweet spot
Yea, but unlike GF100, Cypress (Rv870 or whatever its new name is) was not a huge jump from the RV770 architecture. On the contrary, GF100 in comparison to G80, G92 or G200 is a huge jump in some areas. It may be a while before we see Fermi's real performance, and by then it will probably already be too late.
I added one more smiley in my post just so that nobody gets me wrong - won't happen and would be very bad anyway... love benching different cards but Fermi still needs some time until it's worth spending some DICE or LN2 on it for me.
I don't see it like that. I see a few percent that you could attribute to refining optimizations. The big jumps I see are in DX11, which isn't exactly old; I think it was just released in September. There's no excuse for nv not having as much development time with it as AMD has. I also see big improvements in TWIMTBP'd titles, or other games where AMD doesn't get pre-release code to optimize for, which they basically have to buy off the shelf. Take Metro for example: optimizations are still coming as they never had access to the code. I can't think of a situation where nv wouldn't have the chance to optimize for their own games.
2 completely different situations I think.
You still think fermi will be worth nvidias time in the future?
G80 is creaking and cracking and the more I read about fermi the more I realize its going to take a genius and an engineering miracle to bring out a well balanced GPU from that hungry and noisy beast.
If Nvidia can do it without a major redesign, I will be forever impressed. Advice to nvidia (worthless I know): its time to move on. I don't want to see you like this.
it wont take a miracle to make a good gpu based off of fermi. it might catch us by surprise like rv770 but thats just from good engineering. they dont need a major redesign, if anyone does its ATi. fermi is really good at tessellation and designed to handle it very efficiently. they just need more work on process/yields.
to be honest i think the hardest thing about the new architecture is the predictive load balancing that needs to be done on the GPU. I think that with a bit of time, and more cards in the wild and in test groups, the drivers can easily be tweaked to provide better load balancing on a per game basis. I am guessing here though.
@saaya, I personally think that 1920x1200 is the perfect res to game at, on a 24" screen, tho sometimes i do feel like even 24" is a little large since i tend to subscribe to the "nose up against the screen" CS style of play school.
I think that this thread has devolved into nvidia vs ATI, which is all I've read on any forum lately, and I'm getting a little tired of it. I personally don't lean towards either camp, but I also will not write off the gtx480 as quickly as some people on this board will. I will give it a fair chance; for all we know this new architecture might be the future, look at where the pathetic 2900xt ended up ;)
Oooh, tessellation is such a big thing now nvidia's got it. We know ATI has solid hardware under the hood. In fact, if you check this link you can see there isn't a single figure that nvidia has the edge on. Which shows ATI is still missing something: they are producing a beast with better specs on paper, fewer transistors (a billion fewer), and excellent power consumption.
Hardware or software? I am guessing ATI's processing units aren't saturated effectively, something in the hardware. Load balancing isn't doing its thing properly or maybe something entirely different.
Tessellation? I can make a guess that ATI can improve tessellation performance with better drivers, given the nature of tessellation.
considering people are still able to OC the crap out of these cards, i wonder if they can be undervolted and still run a solid 500-600mhz, but with a huge chunk less power.
not really. its designed around big triangles. if you have small primitives from tessellation it pretty much destroys performance of pixel shaders.
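The small-triangle penalty comes from the fact that GPUs shade pixels in 2x2 quads: every quad a triangle touches runs the pixel shader in full, even for the lanes outside the triangle. A toy rasterizer makes the effect visible; this is an idealized model, not any specific architecture:

```python
# Toy model of 2x2 quad shading efficiency: rasterize a right triangle on a
# pixel grid, then compare pixels actually covered vs pixels shaded (every
# touched 2x2 quad is shaded in full, helper lanes included).
def quad_efficiency(size):
    covered = set()
    for y in range(size):
        for x in range(size):
            if x + y < size:            # pixel centers inside the triangle
                covered.add((x, y))
    quads = {(x // 2, y // 2) for (x, y) in covered}
    shaded = 4 * len(quads)             # the whole quad runs the shader
    return len(covered) / shaded

for s in (4, 16, 64):
    print(f"{s:>2}px triangle edge: {quad_efficiency(s):.0%} shader efficiency")
```

The smaller the triangle, the larger the fraction of shader work wasted on helper lanes along its edges, which is why heavy tessellation can hurt pixel-shader throughput.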
you might want to look harder.
http://www.hardware.fr/medias/photos...IMG0028307.gif
http://www.hardware.fr/medias/photos...IMG0028308.gif
fyi heaven culls ~70% of triangles in the dragon scene.
their graphics pipeline is stalling on tessellation. its a hardware problem that will require a lot of thinking and problem solving to get right. im sure they will have a good solution in r900.
considering the low default voltage I am not that optimistic that undervolting can save you "a huge chunk" of power. Maybe a bit, but if you undervolt and downclock far enough to really cut consumption, you end up with the performance of a lower-tier card anyway, so we are back at the point that it doesn't look good for undervolting. Also I am wondering why software voltage tools have still only been announced and not released -.-
oh come on, dont be such a party pooper :P
yeah... ~78% more pixels but for some reason perf collapses when going to 2560x1600 with current hw... i expected a lot from the 480s in that regard... i thought theyd be really really nice at high res and a perfect companion for a 30" screen...
yeah, if you buy a 2560 display you basically have to take into consideration that if you play games, you have to add some extra money to the cost of the display for upgrading your pcs graphics...
what drivers did anandtech use? what cpu speed and what windows version? i noticed big differences during the 5800 launch in reviews using a 2.66ghz 920 or a 4ghz 965, and there were weird differences in 32bit vs 64bit as well, some games seem to perform better on 64bit than others, and iirc ati was slightly better in 64bit than nvidia vs 32bit? cant remember...
yeah... and that sweet spot is probably different for every game, every res... if you use aa and af its different... and then they find another tweak and the sweet spot is different again... :D
lol@nose at the screen hehe :D
well tbh, if the pixels are big and the space between them is tiny, then theres no problem looking at the screen from close up i think :)
but yeah, 24" 1920x1080 or 1200 sounds like the best res to me as well...
i didnt know there were 27" screens with the same res tho... those might actually be very interesting as well :D
about ati vs nvidia... like i said before, i dont think theres a big difference between 5850 5870 470 and 480... what can you do on one of those that you cant do on another? if wed build 4 rigs one with each of them, would any of us be able to tell them apart by playing games on it? i doubt it...
for me it comes down to cost, driver preference and whether power and heat is important to the user...
i dont think even the most radical ati fanboy would call fermi a slow card... its hotter and costs more than a 5870, but i doubt any ati fan would refuse to use one if youd give him one for free :D
I think it has long been said (rumored, and I think some interview also suggested it) that atis next arch will be a new one. It is always a long project, so it has been in development for a long time already.
On Fermi, I have mixed feelings, its not as bad as the hd2900 was :rofl:. But its not a big success either. I wouldn't buy it, i need value, not the highest performance card there is. But I hoped for better performance to push the hd5850 price down. Maybe i'll just stick with my old trusty hd3870 for a while still :shakes:
i clicked around a few reviews and see that people are getting 100mhz out of it, and at stock volts i believe. which means i think it should have no problem having the voltage knocked down a notch with the frequency left alone. idk, but that could mean 20-30 watts removed for no perf loss.
and yeah the idle volts look way wrong
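A rough way to estimate what undervolting could buy is the first-order CMOS dynamic-power relation P_dyn ~ V^2 * f. The wattage, voltage, and clock figures below are illustrative round numbers, not measured GTX 480 values, and leakage power is ignored entirely:

```python
# First-order undervolting estimate: dynamic power scales roughly with the
# square of voltage and linearly with clock (leakage/static power ignored).
def scaled_power(p0, v0, f0, v1, f1):
    """Estimate power at (v1, f1) given a baseline of p0 watts at (v0, f0)."""
    return p0 * (v1 / v0) ** 2 * (f1 / f0)

# Hypothetical baseline: 250 W at 1.00 V and 700 MHz (made-up round numbers).
p0, v0, f0 = 250.0, 1.00, 700.0
p1 = scaled_power(p0, v0, f0, 0.90, 600.0)
print(f"0.90 V @ 600 MHz: ~{p1:.0f} W ({p0 - p1:.0f} W saved)")
```

Under this model a 10% undervolt plus a 100 MHz downclock cuts dynamic power by roughly 30%, though real silicon adds leakage on top, so measured savings would be smaller.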
This is Xtreme Systems right? Not "Frugal Systems"
While I think it is great to save money some people are in this hobby like others are into race cars, R/C airplanes etc. This is a hobby and it isn't a cheap one if you want the latest and greatest, SPEED costs $$$$$$ in every single hobby.
So as I'm driving my Z06 today I checked the coolant temp, guess what? 214'F, which is quite a bit hotter than a standard Corvette. The Z06 is 505 HP vs 430 HP, that's only 75 HP more, only a 17.5% increase, and it was 20-25K more... So the Z06, and the ZR1 for that matter, run hotter, cost more in fuel, and are more expensive. Sounds like Fermi right? Yet it isn't easy to find a Z06 or ZR1 in stock, they sell well and they aren't for economy buyers.
Before you say it isn't a good comparison think about it for a minute, some people will spend $250K on a Ferrari is that a good deal? How about people that spend money on water cooling setup when it is not necessary? Or the tons of other custom "Xtreme" parts that go into a fast PC.
I think we are seeing GPU technology being pushed to the limits of current chip design, why else would Nvidia who isn't some no name company have issues getting this to market? They were ambitious and wanted to have something really revolutionary, well they did but at quite a cost.
Yep, I'm going with the Fermi (Ferrari) of GPUs, and not just 1 but 2 for SLI on my waste-of-money 30" 3008WFP. Will I enjoy my gaming experience? Yep :yepp:, do I enjoy driving my way too expensive Z06? Yep :yepp: Fortunately Nvidia IMO priced Fermi properly, I don't think they really had much of a choice TBH, but nevertheless it is priced pretty well.
:cool:
GTX 480 vs 5870 TOXIC
Very interesting indeed looks like AMD has really no real reason to lower prices...
Now, I'm not going to quote anyone specific here (had to read through like 10 pages lol) but I'd like to give my opinion on a few general "ideas" that some people are having...
1. Nvidia will not (and pretty much cannot) go bankrupt. It isn't a "fail" card, even if it does not meet YOUR requirements (and I am sure plenty of people feel this way) there is still a very big market that is susceptible to Nvidia's PR. I am sure Fermi will sell like hotcakes in Maingear, iBuyPower, CyberPC (or whatever) etc. as those people see "FASTEST SINGLE GPU" and instantly :banana::banana::banana::banana: themselves. It is also not a card that is built to make money by itself anyways, it is a marketing tool to sell laptop and low-end GPU's (which is where all the real money is made).
2. Heat - I am sure Nvidia could have made it cooler by adding an extra inch onto the end of the card... TBH this sort of surprises me, why bother making a short card that is still the hottest and loudest? It's not like they haven't made compromises already. It also does seem to have some big idle problems, and the high voltage at idle is probably because of the lack of double redundancy on interconnects (or whatever, sorry the term is very vague to me... I understand the concept but not the name :lol: ) which requires more voltage just to function... therefore don't pump enough volts into it and it simply won't work, I would think.
3. Overclocking - it can happen, look at all the reviews. I'm not sure why everyone has the idea that it is IMPOSSIBLE to OC because of heat problems.... it may hit 95C but that is in Furmark and with a relatively low fan speed (70%?) which will ramp up if it gets any hotter.
4. GTX485 - do want, and it probably will happen. My bet is Nvidia will have it out in 6-8 months... it takes them what, 4-6 months to develop a GPU at the bare minimum (risk wafers and all that take a while)?
5. GlobalFoundries - WHEN DO THEY OPEN!??! TSMC can go burn in hell.
Using that logic - everyone should go out and buy a 5970 as soon as possible. It's the most expensive, but it's also the fastest.
Also, you keep mentioning 2560x1600. But Fermi takes a huge hit (for now) at 2560x1600, so a 5970 has an even greater lead over a 480 at that res. So if speed at 2560x1600 is the only thing that matters to you (not power, heat, noise, single vs multi-gpu, or cost) then why pick a Fermi for this res rather than the fastest card?
That's a dual GPU card @ $700 IF you can find one in stock, and as I mentioned, Fermi SLI is the absolute fastest there is right now. Don't even try 5970 CF, because quad GPU scaling is NOT good, and if you can find them a 5970 is $700, so that'd be $1400 and again scaling isn't good. I did mention I'd be doing 2x GTX 480 SLI, didn't I? Always planned on it unless GTX 480 wasn't a good product. It is not the best overall release ever, but then again the 8800 Ultra was $800-1000 and the GTX 280 was $649 each, so GTX 480 SLI costs me $1000, that's $300 less than my GTX 280 SLI cost me.
It has already been identified that the 2560x1600 hit is unexpected and will be corrected in a few weeks (Nvidia report to one of the review sites) These GPUs are fully programmable, with Nvidia's history on driver revisions I fully expect and believe that the card's performance will improve and be an even better value.
Apparently XFX doesn't think so. :p:
http://www.legitreviews.com/news/7707/
Quote:
We just received confirmation that XFX, a division of PINE Technologies, will not be releasing any GeForce GTX 400 series graphics cards to the market when the cards become public next month. XFX said that the decision not to carry this series of GF100 graphics card was their decision and that they will still be carrying NVIDIA products. From our conversation with XFX they mentioned that they have "yet to see whether the fermented launch will reach an inglorious anti-climax" and mentioned they want to "Ferm up to who really has the big Guns". We are guessing they mean AMD and it sounds like they have something special cooking up there too.
^^ well that's a surprise after the XFX boxes for gf100 cards were leaked first....
^ Oh wow.
Regardless of which side you're on, that's quite a big move by XFX. You never NOT want a halo part when you're offered one... unless you get an awesome deal on the other side.
Show us a review where 2 480s beat 2 5970s. The only other review you will find (probably) will be this total failure of a "review": http://www.maingearforums.com/entry....GeForce-Part-2
It's even worse than the ixbt review everyone rejected, but as you can see, when scaling works, 2x 5970s match or beat 2x or even 3x 480s.
It can be fixed if it is a driver issue. But if it is something integral to the architecture, like fillrate, then there is nothing that can be done until the next respin. And here is the comment from HWC's conclusion on that topic:
Quote:
Quote:
As with any new architecture, there are still obviously areas for improvement and in the case of the GTX 480 there was a straw that almost broke the camel’s back. Our comparative testing charts were extremely eye-opening since they showed exactly what they were meant to: issues with resolution scaling. In Left 4 Dead 2, Aliens versus Predator, BattleField: Bad Company 2 and Unigine Heaven, we saw what looked to be an insurmountable lead at lower resolutions all but vanish at 2560 x 1600. Thankfully for NVIDIA, the GTX 480 was able to pull its butt out of the fire with strong AA performance. Nonetheless, this is particularly worrying since high resolution gaming is what the GTX 480 was supposedly built for and if it can’t maintain a sizable lead over the HD 5870 in exactly this area, many enthusiasts may question its price premium. We can’t state for a fact whether some of the performance drop-offs we saw at 2560x1600 were due to an architectural issue or immature drivers but it seems everyone we talked to (from fellow editors to NVIDIA themselves) had their own explanations. For now, we’ll keep an open mind and side with NVIDIA’s explanation which stated there are certain driver optimizations which have yet to be implemented.
Ya, because EVGA's lifetime warranty department will certainly love replacing a majority of cards for free when they "burn up". Get real man... How about this one: BFG, also with a lifetime warranty, is selling them too despite the rumor of going RED.
http://www.bfgtech.com/bfgrgtx4801534e.aspx
lets see, 72 fps to 133 fps, thats ~85% scaling
http://www.sweclockers.com/articles_...id=6236&page=9
what do you want 100%
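For what it's worth, the scaling figure can be computed directly from those two fps numbers:

```python
# Multi-GPU scaling efficiency: fraction of the ideal 2x speedup achieved.
def scaling_efficiency(fps_single, fps_dual):
    """1.0 means a perfect 2x; 0.0 means the second GPU added nothing."""
    return (fps_dual / fps_single - 1.0) / (2 - 1)

eff = scaling_efficiency(72, 133)
print(f"72 -> 133 fps: {eff:.0%} scaling efficiency")
```

72 to 133 fps works out to roughly 85% of a perfect doubling, which is indeed very good for SLI.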
LOL, magically their Metro 2033 numbers are so much better for ATI than in other reviews? The rest of those benchmarks are extremely suspect as well; the only one they seemed to get right was Crysis Warhead. How's your 5970 CF doing in that one?
http://www.sweclockers.com/articles_...id=6236&page=8
:ROTF:
they could well be switching teams, i heard they got the short end of the stick with gtx280 pricing and were rather cross about the ordeal. but then they would be the only nvidia partner to jump ship because of it.
hard to believe these are photochop'd.
It's obviously not scaling in that game, drivers could fix that. If it is valid to expect/hope that Nvidia fixes 2560x1600 scaling in several games then it is no less valid to expect/hope ATI fixes quad-gpu scaling in a few games.
Personally, I wouldn't buy a product based on the hope of future driver improvements though.
But there is no need to pursue this line of discussion any further if you don't want to. I just wanted to understand the motivation of someone who isn't a multi-GPU hater going with the slower GTX480 over a 5970. But your responses have answered that for me, if indirectly. Thank you.
Well, add to that the fact that my board is a 790I and I really don't have a need or desire to go with a new board and cpu right now. My 4GHz dual core sees about 60-75% per core usage if the game is multithreaded well. Crossfire isn't happening on a 790I, so that factors in a little bit, but less so than the dislike for ATI Catalyst drivers. If I had had good experiences with ATI in the past I'd possibly have changed MB+CPU 5-6 months ago when the 5870 came out and done CF with them.
I don't say I'll never use another ATI card again however it'll really have to be better circumstances than just less heat 86'C vs 93'C and fan noise is almost the same SLI vs CF, power usage I won't even acknowledge. Perhaps in my next major overhaul I'll take another look at the competition but thankfully for all of us competition is there. Both products are good and both products will appeal to different people for different reasons. ATI has a great head start, if they have been utilizing the last 6 months and have forward momentum they could do some really amazing things. I firmly believe both companies are pushing some technology limits (for now) and going beyond what is currently out will require some costly R&D on both sides.
In before April fools where fermi is decent
so your points are:
1. im rich!
2. im buying 2 gtx480s cause they are the best cards for 2560x1600
if you really cant find a 5970 then you must be doing something wrong, i can find several offers... even then, most boards nowadays have 2 if not 4 pciE 16x slots... how about getting 3-4 5870s or 5850s? i thought money doesnt matter and you just want the fastest possible? and multi gpu isnt a problem at all, right?
you obviously prefer nvidia... theres nothing wrong with that! i hate ccc and nvidia highend cards are pretty solid, expensive, but solid... but please dont tell us youre unbiased and just buying 2 480s cause they are oh so awesome :D
lol what? 2-3fps over a standard 5870, thats it? i expected a lot more from those beefed up 2gb cards :/
eeeeEEEeeEEEEWWWWwWWWww :S :D
sorry, but really... 790i... eew... :D
you can flash the dell XPS 730 H2C bios on your board and enable xfire that way... dell managed to force nvidia to remove their xfire lock on their chipsets... if you dont want to do that, then yeah, 480 sli sounds like the best option indeed... but tbh, if i were you... id rather get a pair of evga 470s with waterblock mounted already... they cost the same as a 480, clock higher, run cooler, and should perform better than 480s on air when comparing 480 sli max oc vs watercooled 470 sli max oc...
I can confirm the rumors are true. We've had the news post up for a while.
Everyone I know at XFX confirms this is the direction they are going.
Of course everyone should consider their own situation before buying an expensive piece of hardware. It is reasonable to decide what to buy partly based on what you have already. The only reason I haven't purchased my friend's 5970 (for $300 :D:D:D ) is because ATI multi-GPU boards don't work well with my current motherboard.
Really, I think people trying to state flatly "A gpu is x% better than B gpu across the board" are vastly oversimplifying. I think everyone should consider how each potential purchase works at the resolutions they use, with the quality settings they use, in the games they play, with the hardware config they have. It happens quite often that specific games work better with certain architectures, or that certain architectures perform better with specific quality settings, AA, etc. Just taking an average, overly broad number might not give an accurate picture of performance in your specific usage.
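That caveat about overly broad averages can be made concrete. Reviewers typically aggregate per-game performance ratios with a geometric mean, and even then the single number hides the spread; the per-game ratios below are made up purely for illustration:

```python
# Why one "x% faster across the board" number can mislead: aggregate
# hypothetical per-game fps ratios (card A / card B) with the geometric
# mean, the usual choice for averaging performance ratios.
import math

ratios = {                           # made-up A/B fps ratios
    "shader-heavy game":  1.25,
    "texture-heavy game": 0.95,
    "tessellation demo":  1.40,
    "older dx9 title":    1.00,
}

geo_mean = math.exp(sum(math.log(r) for r in ratios.values()) / len(ratios))
print(f"geometric mean: {geo_mean:.2f}x")  # one number hiding a 0.95x-1.40x spread
```

Here the headline average would say "card A is ~14% faster", yet in one of the four games card A actually loses, which is exactly the per-game nuance the post above is arguing for.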
As for drivers, I used to hate ATI drivers. Since my first ATI card (an All-In-Wonder) I always had trouble with their drivers, particularly bugs and OpenGL performance. But I have been switching between ATI and Nvidia a lot lately and I can say that I don't prefer either driver any more because the ATI ones have gotten a lot better (since R600 gen) and the NV ones have actually gotten worse. :(
Didn't you try to argue that you wanted the fastest no matter what the cost? If that's the case, then it is 5970 CF.
Yes, quad GPU's tend not to scale so well. And they do cost a lot. But like you said, if you want the fastest, it does cost money.
If you want something that's fast, doesn't have to be the fastest but it'd be nice if it was close, 5870 is the perfect card right now.
See, I just don't understand this. How is a 470 or 480 a better purchase than a 5850, 5870 or 5970?
I'm not anti-Nvidia or anything, but I really think that ATI has won this generation.
Cypress isn't the fastest GPU, but it's cheaper, much lower on power consumption and it works well in crossfire (which will give best performance if that's what you're after).
I don't really see a price point where GF100 is better suited than one or more Cypress cards/GPU's.
depends on what games you play...
in some games a 470 is faster than a 5870, and its supposed to cost 50$ less...
there are a few scenarios in which a 480 allows you to play a game at a higher res or with more details and better aa quality than the 5870...
if that happens to be a game you play a lot, and you happen to need that higher res, and you dont like multi gpu, and you dont care about power heat and noise... then a 480 is the only option really...
but ill repeat what i said before, for most people theres barely any difference between the 5850 5870 470 or 480, except for price... and preference based on brand or rma support or drivers...
the performance of those cards is enough for most games at 1920x1080, and at 2560 they are all too slow in most scenarios...
i had high hopes for an oced 470 to be a great deal, but i just read the pcgh review where they overclocked one...
at 750mhz core and same mem clocks as a 480 its still 10% slower than its bigger brother in crysis, despite 50mhz higher gpu and 100mhz higher shader clocks!
im seeing that as a confirmation of what i already suspected, gf100s Achilles heel is texturing performance...
it will take vmods to get a 470 faster than a 480 at stock in many scenarios...
if i wanted to buy a card now it would be a 5850 i think... the cheapest card thats fast enough for 95% of the games out there at high res... and its just below 300$
either that or i would wait for the second gen dx11 hw and get a gts250 1gb or 4850 1gb, both are selling for 99$ now which is a killer deal
all the highend cards right now seem unappealing to me... they dont really deliver any value that is in relation to their cost.
in the past you would be able to crank aa up and play at a really high res if you paid a lot extra... but the 480 and 5870 are both too slow for 2560x1600 in many situations while they are simply "too fast" for 1920x1080
the only point where it makes sense again is sli and xfire, including the 5970... at that point you get enough power for 2560 gaming.
Nothing wrong with that, i've already bought AMD cpu's when for the same price i could have gotten better performance from Intel.
It's really annoying hearing the same repetitive excuses: ati drivers suck, i had a bad experience with ATI 10 years ago, bla bla bla... who cares? buy what you like, no excuses, it's your money.
But when you have like 432 posts in a thread saying how you're gonna buy Nvidia and why, it starts to look like your goal is making someone else buy it too, and that's suspicious. :rolleyes:
Yep, agree 100%.
GTX 400's aren't winners but to me they aren't the flat out worst choice for everyone out there either. I would still get a 5870 over a GTX 480, but if you like Nvidia better, 470 and 480's are pretty solid cards too and you don't have to justify your decision with things that make little sense.
Yeah I'd much rather not hear justifications. The people who feel they have to tear down the other company in order to justify their own purchases are just sad.
I'll buy an AMD cpu over an Intel simply because AMD has done less to annoy me over the years. I already know that Intel chips are faster, I have an i7 in my laptop after all. But when I'm looking at parts for my next build I don't even consider Intel options, and yet you don't see me in every Intel thread in the news sections mouthing off. Wish I could say the same for some of the thread crappers here. :shakes:
I'd say the 5850 is the best right now, with the 5870 in 2nd place
back to sticky!!
i am waiting for the gtx 480 waterblocks, preferably by EK or swiftech... its stock cooler sux quite frankly.
hows EVGA's waterblock btw? does anyone have experience with it from previous generations?
making a choice this round is hard, no clear winner... i am still jumping back and forth between the 480, 5970 and 5870
I know what you were saying.
You were saying that people should choose a card based on their particular needs, etc...
However in this case, for the vast majority of people who buy fast cards, GF100 simply does not fit their needs.
Yeah, there are circumstances where a GF100 card would be a better choice. But they are basically few and far between compared to Cypress.
i would like to say that too, but honestly the Perf/$ is won by last generation cards, which makes me think we're all losers for giving in to this ridiculous pricing scheme. yes, 300$ for a 5850 is good, if this was late 2009, but it's now april, and the only happy people are the ones who got on board early. so my vote is a previous generation card, until dx11 becomes important, or until current gen cards drop 30% in price, which had better happen within 2 months.
I want to see some SLI GTX 470 benchmarks because it seems like 2 470's would beat or be really close to the 5970 or SLI 5870's for the same money or less. And you could overclock them pretty close to the 480 speeds.
2 470's would definitely beat a 5970. From what I have seen GF100 scales well, and a 5970 is basically two CF'ed 5850's which 470 usually beats.
However, the 5970 can be greatly overclocked. It can easily be used at 920mhz daily, roughly a 200mhz upgrade. The same can't be said about GTX 470's.
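As a quick sanity check on the claimed headroom (taking the 5970's stock core clock as 725 MHz, which is approximate, and the 920 MHz daily clock quoted above):

```python
# Overclocking headroom for a 5970, using the clocks quoted in the post.
# 725 MHz stock core is an assumption (approximate reference clock).
stock, oc = 725, 920
gain = oc - stock
print(f"+{gain} MHz, about {100 * gain / stock:.0f}% headroom")
```

That works out to just under 200 MHz, or roughly a quarter more clock speed over stock.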
RV770 was pretty killer in price/performance.
I'd say a significant reason the current gen isn't as cheap is because Cypress itself is bigger than RV770 (~330mm² vs ~256mm²).
Not to mention they use 1GB GDDR5 as standard, and also the PCB/PWM is more complex this time.
And for Nvidia, GF100 has been tough to manufacture. Yields will be preventing decent costs.
Though 5xx0 does have the advantage of great performance per watt (RV770 was somewhat lacking in that area)
Really it's 2 x 5870's (1600 shaders each vs the 5850's 1440), underclocked to reduce heat and power consumption.
I agree. But in the end it is up to the individual, not you or I, to decide what will work for them the best.
I'm simply advocating making intelligent purchases. Buying based on facts and rational thought is good, whichever brand one decides on. But buying based on emotions, propaganda, team think, and peer pressure seems to be the norm these days and I'm highly opposed to that kind of irrational thought. It does nothing for you as a customer (except maybe emotionally) and helps reinforce those behaviors as being effective sales tools - as well as financially supporting a company based on factors other than the quality of their product. If that kind of emotional purchasing continues in the long term, it ultimately degrades the quality of the products in the marketplace from all companies.
the price of the 4870 and 5870 match very closely when looking at $ per mm2, however thats using the launch prices of 300$ for a 4870 and 400$ for a 5870. and that launch was 6 months ago; 6 months after the 4870s launch it was down to ~200$. yes, a lack of nvidia competition does cause a few problems, but that shouldnt mean every video card should be stuck in limbo (seriously, since last summer, has any brand of any series gone down in price by more than 10%? and havent a few gone up?). the 5850 that was going for 280$ shocked me, and i praise them for taking the first step in the right direction.
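The $ per mm2 comparison above can be checked with a quick back-of-envelope calculation, using the launch prices and approximate die sizes quoted in this thread (not official figures):

```python
# $/mm^2 at launch, using prices and die sizes quoted in the thread
# (approximate figures: 4870 ~256 mm^2 at 300$, 5870 ~330 mm^2 at 400$).
hd4870 = {"price": 300, "die_mm2": 256}
hd5870 = {"price": 400, "die_mm2": 330}

for name, card in [("4870", hd4870), ("5870", hd5870)]:
    print(f"{name}: {card['price'] / card['die_mm2']:.2f} $/mm^2")
```

Both come out around 1.2 $/mm², which supports the point that the 5870's launch price roughly tracked its larger die rather than being pure markup.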
470's in SLI SOUNDS good, and I used to be interested in the value of SLI'ing/CF'ing good value cards until I read about microstuttering. Now, I'm of the belief that one card is the way to go until it starts dragging its feet...then you just buy a new one. This way, you don't have to worry about an optimal SLI/CF ready platform either. Plus, if you're actually PLAYING games and not just running benchmarks to break world records or gain inches on your e-peen, then a single 5850/470 is fine for any game out now.
Additionally from what I have seen Tri SLI seems to be the fastest solution.
IF SLI gtx 480's are sometimes beating a 5970 by 60 percent as in anandtech's review (and at the very worst beating it by 15 percent, with an average of about 40 percent), tri sli is bound to win even against a pair of 5970's. And they happen to more or less cost the same, although you might need a big power supply (and if it was me I would use at least 1 water block on the center card).
Also for some, 3d gaming might be another reason to go with NV. 3d everything is taking off and AMD doesn't have anything set up in this regard.
I have a question: my monitor is a 24 inch samsung lcd tv (1080p). if I get the 4xx in sli, can I connect them to my tv with my hdmi cable? will I get audio?
somebody asked if the evga blocks are any good.
i saw that they use swiftech blocks on the 470, which at least look very very nice... they even have a heatsink for the pwm which is connected to the main block via a heatpipe! sounds very nice... and their 470 with waterblock costs 499$, the same as a 480... but really, the 470 doesnt make much sense imo... check out the 470 ocing results we have so far... even a 750mhz 470 is 10% behind a 480 at 700mhz... the missing mem bandwidth and texturing power seem to have a big impact on perf... with a vmod the 470 should be able to reach 480 performance... but you need watercooling to keep it cool, and you end up spending the same as a 480, lol... so watercooling a 470 doesnt make too much sense imo...
1920x1080 without dx11 = 4850 or gts250 = 99$
1920x1080 with dx11 = 5850 = 299$
2560x1600 = 5970 or 470 sli = 700$+
the 480 and 5870 dont make sense at all unless your benching imo... :shrug:
in the end you pay 200$ for dx11... is that worth it? definitely not if you ask me... sure, a 5750 does dx11 as well and costs a lot less, but its too slow to actually play games with dx11 enabled and full details... its somewhat a compromise option but it makes more sense to spend a bit extra and get a proper dx11 card imo.
tons? samsung, acer and alienware... and they are based on the same panel, the only difference is the plastic bracket and stand.
correct me if im wrong but those are the only 3d displays i can find...
i like 3d... a lot!... but im waiting for 240hz 3d displays before i actually spend that much money on it...
can anyone answer my last question?
Not that experienced with SLi, let alone pass through audio on the Geforce cards. However, I would be almost certain that using the SPDIF pass through from soundcard/board to whichever 4xx card you'd have as the main display driver would allow the audio to still pass through in SLi. So yes.
I agree with you guys, but I mostly play Arma 2 and I need all the power I can get. But that was not my question, I wanted to know if I will be able to use the gpus through a HDMI cable and get audio and video.
correction, id go for a 3d display as soon as theres one that can do 200hz... 100hz per eye is enough...
i just came back from the cinema seeing the dreamworks 3d dragon movie and... wow... as soon as the camera moved, even at a slow pace, the whole scene began to flicker and everything blurred.
some scenes made my eyes water, and not cause they were emotionally touching lol :D
Probably easier to make it work properly with games, since they have all the 3D data and render it when you want to see it. Some movies can be "converted" into 3D after they are mostly done, which obviously introduces the problem of really only having a 2-dimensional source to work with.