That only means you actually do stuff according to specs provided by some vendor (don't remember the name) for the GTX 470 driver: running Crysis under Vista on a Pentium IV 2 GHz with 1 GB RAM :rofl:
So a mere 7-to-9-month wait gets you an extra 5% performance, probably with an additional 15% power consumption.
Impressive Nvidia, impressive. :rofl:
wow, you guys, like, determined everything. performance, price, tdp. who needs to actually see reviews and pricing?
http://www.anandtech.com/video/showdoc.aspx?i=3740
This.
At first I was like, WTH, the 5870 is slower than the 4870X2 :confused:
Then I read that ATI totally redesigned the chip midway into the cycle so that made sense.
I got my old baseball bat out of the closet to get a good whack at the dead horse.
IMHO, I know drivers.
1. What are drivers? There are many components that control hardware and display output... and then there is some sort of GUI. Bad GUI = bad driver??
Is it more important that brightness control over HDMI doesn't work, or that a game crashes on startup? What if the driver install fails, but only in some very rare situation like triple-SLI/CF that 99.99% of users will never see?
2. GUI accessibility. Before CCC (tree on the left side, preview on the right) ATI used a "Control Panel" with tabs... amateurish at best. CCC, built on .NET, had HORRENDOUS startup times (~40 sec), and even button-click lag. So nVidia wins by default, right? Well, I don't care much for huge mickey-mouse icons, or the Vista-like "what would you like to do" left column. I like the current nVidia Control Panel and its left "topic" tree, because it emulates CCC.
3. Bad game developer, sit. Let's face it, the industry is young, and not all of the hundreds of games made every year are properly checked out. So if a dev expects code to return 15, but it's coded to return 14.7, causing the game to crash, whose fault is that? Well, if nVidia gets a beta before release, notices the issue and adds an exe-detection workaround, every ATI card owner automatically assumes it's their ATI driver's fault, since it works on nVidia (a rough sketch of that kind of fragile check is at the end of this post).
4. BSOD. There are many charts. Many statistics. And until the recent ATI GSOD, nVidia was the only one with threads HUNDREDS of pages long on debilitating issues and BSODs unresolved for GF6, GF7, GF8, GF9... well, pretty much all generations. Check for yourself: try to install an nVidia driver on XP64 SP2 and you get a BSOD before even reaching the desktop... talk about a "great user experience". nVidia blames MS. MS blames nVidia. No such issues with ATI.
5. nVidia and MS sitting in a tree. Ever since the fall from grace when ATI was first with DX9 and won the contract for the Xbox 360, nVidia has been far behind the OS curve. After a lot of kicking and screaming about Vista's new driver model and what it required, nVidia caved in. If you were fortunate enough to be a beta tester for Vista (2006), you're probably trying to forget how clicking GUI buttons could cause crashes and BSODs. 3D, multi-display, rotation... either not supported or "experimental".
6. Finally, a shout-out to Intel. A million monkeys may not be able to recreate Shakespeare, but they have somehow managed to make a "3D" driver. Just 2-3 years ago, your stupidity in trying to play a game on an IGP would be rewarded with startup crashes. Due to a lack of vertex shaders. Due to memory allocation errors. Due to the wrong moon phase. Of course there was the long "exclusion" list: games it was hopeless to even try running. 9% of all Vista BSODs, caused by the Windows GUI and a little simple low-setting 3D graphics. Fortunately, the Intel driver has greatly improved just in time for actually working (!!) DX10, HDMI and Win7.
PS: Anybody here remember VIA GART drivers (back when the GART was on the chipset instead of the GPU), or special game-specific drivers (e.g. Tomb Raider, UT, S3 Metal)?
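To make point 3 concrete, here's a minimal sketch (hypothetical names and values, Python purely for illustration) of how a per-exe driver workaround can hide a game bug that then looks like the other vendor's driver problem:

```python
def buggy_game_init(queried_value: float) -> str:
    # The developer assumed the query returns exactly 15, but the spec
    # also allows 14.7 -- so this exact comparison fails on honest drivers.
    if queried_value == 15:
        return "init OK"
    raise RuntimeError("game crashes on startup")

def vendor_a_query(app_name: str) -> float:
    # Vendor A saw the crash in a beta and shipped an exe-detection
    # workaround: report exactly what this particular game expects.
    return 15.0 if app_name == "buggygame.exe" else 14.7

def vendor_b_query(app_name: str) -> float:
    # Vendor B simply returns the spec-legal value for everyone.
    return 14.7

for vendor, query in (("vendor A", vendor_a_query), ("vendor B", vendor_b_query)):
    try:
        print(vendor, "->", buggy_game_init(query("buggygame.exe")))
    except RuntimeError as crash:
        print(vendor, "->", crash, "(and this driver takes the blame)")
```

The bug lives in the game, but the crash report lands on whichever driver returns the honest value.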
Well said sir. I can't say I disagree with any of the points you've listed.
CCC has improved vastly performance-wise (I remember when it had 1 min+ startup times...) but in the past, yes, it was beyond unacceptable.
I cringed when you mentioned the Vista beta... I had used it as far back as the alpha and, my god... epic fail. It took a good year after it formally released before I'd call it remotely stable (and another year to regain my sanity).
PS: I remember the GART drivers :rofl: Facepalm to the max.
When is the supposed release date? I'm having a hard time deciding whether or not to sell my HD 5870.
March 26th is the supposed launch date. I just got news that Turkey will receive more Fermi-based cards than France in the first shipment. Also, I am able to buy a 5850 for $280. Keep your sodding EU to yourselves, guaiz! :D
That's what we like to hear ! :D
Fermi is supported in the 196.75 drivers, but the device ID was removed from the .inf file. In a few days a new version will be released to replace 196.75; when the cards launch you will be able to download the new drivers... or get them ONE week before launch from Guru3D or some Chinese site.
This card is great for a rough orientation on performance, because its clocks are lower than the retail piece. Logically, if the card beats the 5870 at lower clocks, won't the final revision be even better?
PS: The price of this card is very good, lower than the 5870 :)
So you're saying they've actually set clocks? Last I heard, they were still juggling them.
And, for that matter, have prices been set?
Personal opinion... I don't think these cards will be cheap, regardless of performance.
There's no reason for them to be cheap.
Yeah, but... what about the case where the card doesn't beat the HD 5870 at lower clocks, which is what's actually happening? Will the final GTX 470 be able to reach the HD 5870, or won't it?
Because the current numbers show a clear defeat in the two games tested (Crysis Warhead and Dirt 2), by 18.5% and 10% respectively, and an innocuous 7% win in what is nearly a synthetic benchmark.
That doesn't sound too promising, honestly. Let's wait for reviews with the final clock frequencies anyway...
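To put those percentages in perspective, here is a quick back-of-the-envelope sketch of my own (it assumes performance scales linearly with clock frequency, which is optimistic, and uses only the percentages above, no real fps figures):

```python
def clock_increase_needed(deficit: float) -> float:
    """Fractional clock bump required to erase a given performance deficit,
    assuming perfectly linear scaling (a best case)."""
    return 1.0 / (1.0 - deficit) - 1.0

for game, gap in (("Crysis Warhead", 0.185), ("Dirt 2", 0.10)):
    print(f"{game}: ~{clock_increase_needed(gap):.0%} higher clocks needed")
# Crysis Warhead: ~23% higher clocks needed
# Dirt 2: ~11% higher clocks needed
```

So final clocks alone would have to rise a lot to close the Crysis Warhead gap.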
How can that be? The price of a product depends entirely on how much people are willing to pay for it, not on any other consideration (like production costs, for example). And as I see it, performance is one of the main reasons people are willing to pay more or less for a piece of hardware.
Of course, there are other things that might affect this besides performance, like availability or brand recognition, but performance should be one of the main factors in the price...
drawing conclusions from their sheer transistor count is not a good idea...
and calling the 5870 a midrange chip is a pretty weird statement... ati had to go for multi display setups to find a configuration that makes use of all the graphics performance it offers...
it was a pciE rep? i thought somebody just checked the list on the pci-sig site? note that i looked at real measured power consumption, not tdp and peak values... of course you can build a mars-like card, but we all know that cooling was a major issue with that card and it wasn't stable at stock speeds for some people as it simply ran too hot. and that was with a huge and expensive heatsink already... like i said, above 300W you reach a point where every extra watt of power makes the pcb, pwm and heatsink designs exponentially more expensive.
yes, but why should software suddenly, magically, catch up? why should there not only be a lot of dx11 games, but good dx11 games, and then not only good dx11 games but good dx11 games that use compute shaders so much that gf100 gains an advantage from it? i just don't see that happening... sure, eventually games will demand a lot more tessellation and compute shader power, but by then we will have second and most likely third or 4th gen dx11 hardware and all this first-gen dx11 stuff will be useless.
yes, but the tdp values only matter for certification and verification with pci-sig... i'm more interested in the feasibility of a card above 300W than in whether it can be certified :)
more 8 vs 1 fps, 1.3gb vs 1gb nonsense...
it does... but this one is a fake :)
whoever did it made a loooot of mistakes, i think he wanted people to know it's fake, the mistakes are too obvious...
15fps average...and more 1.3gb vs 1gb nonsense...
that's like asking you what the difference between an elephant and a llama is, in your opinion :p: it's not an opinion, it IS a different standard... why evga keeps making this mistake, who knows... either they don't know, which is very possible, they are marketing people after all, or they say ddr+high number because it makes it sound more advanced... everybody knows their system is using ddr2 or ddr3 memory, and if they think the memory on the card is 2 or 3 generations ahead many n00bs probably go whOooOOoOAaAaaA :slobber: :lol:
yes, totally agree :toast:
buy a next gen card that DOES support the next standard, but when it comes to performance, focus on current games.
that doesnt make it correct, does it? :D
some cards actually do use ddr memory as it's cheaper, especially entry level and mainstream cards tend to use ddr2 and ddr3 these days as it's fast enough and cheaper.
yes, i totally agree... that was nvidia's strategy with gt200 as well, a wider bus means they can use cheaper, slower memory and still beat ati in bandwidth, and they don't need to push clocks really high, which can be a pita. since they need all the performance they can get though, i wouldn't be surprised if they actually go for fast gddr5 now... probably as fast as they can get it to run... since their gddr5 controller is first gen or maaaybe second gen, i'm not sure how high they will be able to get... the imc will be the same or a slightly tweaked version of the one in the gt21x 10.1 40nm cards, and those only clock in at 3500mhz effective...
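a quick back-of-the-envelope sketch of that trade-off (the widths and clocks below are just example numbers, not confirmed specs): bandwidth is roughly bus width in bytes times effective memory clock...

```python
def bandwidth_gb_s(bus_width_bits: int, effective_clock_mhz: int) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

# example: wide bus with slower memory vs narrow bus with faster memory
print(bandwidth_gb_s(384, 3600))  # 172.8 GB/s
print(bandwidth_gb_s(256, 4800))  # 153.6 GB/s
```

so a wide bus with slower, cheaper chips can still come out ahead of a narrower bus pushed to higher clocks.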
5870 xfire might be good enough though, and cheaper...
totally agree :toast:
hoarding hw performance "for later" is the most foolish thing you can do in IT :D
crysis warhead numbers look interesting!
well maaaybe, just maaaybe that's because nvidia was creating huge hype with several events and claiming 40-60% over the 5870? :D
gt200 (295) vs gt300 (470/480)
~5-20% extra performance
~5-20% extra power consumption
~5-40% higher price
+dx11
+single gpu instead of dual gpu
i think that's actually pretty damn good, and it's not like ati did any better...
the 5870 is slower than the 4870x2 and costs more, but it consumed less power, was a single gpu and had dx11, which made it acceptable... while the 470 will probably lose to the 295, the 480 definitely won't. more perf comes at a cost: more power and a higher price. i think the price doesn't justify the extra performance, especially because more performance at higher prices is not what 90% of the market needs and wants right now... but hey, market demand will take care of that, and i'm sure there are enough people who are willing to pay huge prices for the fastest single gpu card. the only problem i see for nvidia is availability...
if you compare the last product cycles from ati and nvidia, the differences are that nvidia uses more power and costs more, but also offers a performance boost, while ati couldn't even reach the performance of their previous gen high-end dual gpu card... ati was able to ship though, slowly and with a few bumps, but they could... even the 470 seems to be veeery limited in numbers :/
i think this is a classic example of pr hype actually hurting the product because it drove expectations too high... and it was a bad decision to focus so much on more performance for a higher price instead of the same performance for a lower price, as performance really isn't such a limiting factor for today's gaming pcs...
the specs look fine though, performance is good i think, the price is acceptable, and so are the power consumption and heat... but availability... that's a real issue... not so much for nvidia, but for its partners it's a huge problem... they need some business to make money...
I can actually see this. The test cards everyone has been seeing are A2. Regardless of how things went, A3 should be better than A2 in clocks, even if just barely. A1 was supposedly 500 MHz and A2 was 600 MHz, according to Charlie. A3 is unknown at this point. Charlie is saying things didn't get better, but if the rumors flying around about clocks are given equal weight, A2 is already at speeds of 625-650. Even with a garbage A3 respin, we should be seeing clocks very close to, if not at, 700 MHz, and shader clocks in the 1400 range.
I think the bad performance right now comes down to bad drivers more than anything, because the GTX 295, with terrible clocks of 576 and 1242 on the core and shaders respectively, is giving 5 percent or more extra performance over the 5870. We are already seeing possible signs that 8x AA is no longer the spot where AMD takes a vast lead over NV.
If there is a 512-core part out there with a 675 MHz core and 1400 MHz+ shaders, we should see a part that is 20 percent faster than the 5870 even with bad drivers. Considering the timing, we are likely looking at an A2 revision part producing the 84 fps (vs 50 fps on a GTX 285) at the earlier CES show. An A3 should do better than that and might get very close to the 100 fps that a 5970 produces. Even if this is just one benchmark, since it's not obscure settings, some of it has to translate into real-world gaming results.
http://ht4u.net/reviews/2009/amd_ati...70/index30.php
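To put rough numbers on that (a back-of-envelope assumption only: fps scaling linearly with core clock, which is a best case, using just the rumored clocks above):

```python
def scale_fps(fps_at_a2: float, a2_clock_mhz: float, a3_clock_mhz: float) -> float:
    """Best-case fps if an A3 respin only raises the core clock."""
    return fps_at_a2 * a3_clock_mhz / a2_clock_mhz

print(scale_fps(84.0, 625.0, 700.0))  # ~94 fps, still shy of the 5970's ~100 fps
```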
I think NV might be sandbagging its performance right now because, no matter what they do, they won't be able to affect 5870 sales, since yields are still low enough that there will always be a supply issue. In addition, the number of people who are interested in Fermi and are going to buy one already exceeds the small number of cards NV has produced. I think the reason NV is so silent is the very reason AMD was silent about the performance of the RV770: it would rather have a less prepared opponent than a highly prepared one. I think AMD even knows better than to fall for this like NV has done in the past, hence the new low-leakage R800 parts they are releasing.
NV screwed up badly in regard to performance with the G200, because its performance was way too close to the competition's to justify the price difference between the parts, and it had to do a really severe price cut. NV canned the G212 40nm GTX 28x replacement for a reason. A 40nm part with close to 360 shaders, using 10.1 tech (and GDDR5) plus G200 tech, would typically be more than enough to match a 5870, with a similar footprint to boot. I think NV went to DirectX 11 and a new part for a reason, because it had sacrificed a lot by not making the G212.
Charlie writes about NV like they are amateurs who don't have the right to make silicon because they are too incompetent, but I think we know better than that, although we might be pessimistic. I think Anand is right in saying NV didn't underestimate AMD this time. Although they might not have gotten the results they wanted, Fermi parts are not going to be garbage, and for some who want ultimate single-chip performance, the wait could have been worth it.
I am thinking the 470 will slightly beat the 5870 and the 480 will be on average 20-25% faster. Both cards will be inefficient in power usage, though. With mature drivers I am expecting the overclocked editions of Fermi especially to be 30-35% faster than the 5870.
tajoh, you're up in the clouds, man... :D
But wait... I was once told that the Nvidia 8800 trounces every single Radeon... So surely Fermi must be like... like... like... ZOMG AWESOME :confused:
And ATI drivers are rubbish and far less stable compared to Nvidia's... That's what I heard.
This heathen chart lies, don't listen to it, it is lies! They swapped ATI and Nvidia on purpose.
Buy Fermi.