If a dual-GPU Fermi passes 300W, then it would easily beat a 5970. But that doesn't necessarily mean ATI will just let them take the lead without a fight; there is plenty of room in the Evergreen architecture for >300W cards as well.
If it indeed passes 300W, it's going to be incredibly funny. Nodes shrink and game requirements barely go up, yet power keeps climbing because of the e-penis contest between Nvidia and ATI, and because people who don't even run three or more monitors keep buying these cards.
According to the engineers I spoke to, it is risky to do this due to the wide variance of rail-to-connector specifications on today's PSUs.
In addition, quite a few PSU manufacturers have begun using thinner, less capable wiring to their connectors. While these "cheaper"-made PSUs are capable of delivering 150W+ to a PCI-E connector, doing so becomes increasingly risky as the wire gauge decreases.
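A minimal sketch of why that matters, assuming a typical 8-pin connector with three 12V conductors and roughly textbook copper resistances (the lengths and gauges below are illustrative, not measurements from any particular PSU):

Code:
# Rough look at per-wire current and resistive heating on a PCI-E connector.
# Figures are approximate and for illustration only.

def per_wire_stats(power_w, n_12v_wires, ohms_per_m, length_m=0.6):
    current_total = power_w / 12.0                    # total current drawn over 12V
    current_per_wire = current_total / n_12v_wires    # split across the 12V conductors
    loss_per_wire = current_per_wire ** 2 * ohms_per_m * length_m  # I^2 R heating
    return current_per_wire, loss_per_wire

# Approximate copper resistance: 18 AWG ~0.021 ohm/m, 20 AWG ~0.033 ohm/m
for gauge, r in [("18 AWG", 0.021), ("20 AWG", 0.033)]:
    amps, watts = per_wire_stats(power_w=150, n_12v_wires=3, ohms_per_m=r)
    print(f"8-pin @ 150W, {gauge}: {amps:.1f} A/wire, ~{watts:.2f} W dissipated per wire")

The thinner the wire, the more of that 150W turns into heat in the cable itself, which is exactly the risk those engineers are describing.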
More tessellation-heavy games in the near future (2010 and beyond) will use that power as long as the hardware for it exists. It's the chicken-and-egg scenario, just like DX11: no one will make games for it if the hardware doesn't exist.
The software always has to catch up. There are a few games that will bring a 5970 or 3x GTX 285 to their knees, like ArmA 2 and the STALKER games with the eye candy cranked up, even at 1080p. We should all welcome moar power!
Oblivion has a zillion mods; I am using Qarl's Texture Pack and it's great...
Most of the badly coded games happen to be ports from the console side. I'm surprised at GTA4: it was made on PCs to be played on consoles, but PCs still can't run it well even after a year's gap. :(
It's about focus and how much time they're willing to spend on optimization. It's easier on a console because you have one hardware target to hit.
It's different on PC, where the same settings produce different results on different hardware, and performance can be inconsistent if not enough time is spent in the optimization department.
To sum it up, they cared more about optimizing the console version; once that goal was met, I'm sure things were cut to meet deadlines.
It wasn't until the 8800 GTX was released that Oblivion could be played at high quality, so it was definitely GPU-bound too :)
Even today, if you add every possible Oblivion IQ mod and set 8xAA for everything (including trees etc.), you can make a 5870 sweat once you bump up the resolution.
What's worse is how wrong the "but graphics aren't everything" crowd is. Historically, graphics ARE almost everything: not so much by making games shinier, but by enabling new possibilities, new styles of gameplay and storytelling. Whole genres were born from innovation in graphics: FPS, modern RPGs, strategy games, realistic rally games, and so on.
There is no telling how much more could be invented if game devs had an incentive to use the sheer power we are now merely sitting on. A few years of console domination and the stagnation in innovation is unparalleled; in no other era did gaming have so many sequels. It has almost become like Hollywood, except the whole scene is even duller.
I'm not a big gamer anymore for real-life reasons, but I would love to see game devs try, for once, to be innovative with new technologies and eventually become good users of them, but I don't see it coming anytime soon. Game development is increasingly becoming the Luddite of an industry it once led.
It is sad, because gaming applications and the hardware created to support them used to spill over into so many other fields, from cinema to medical imagery. The stagnation of the 21st century's defining genre is not only sad; it actually stops innovations we can't even imagine from being realized (who had thought of 3D GPS 30 years ago, or body rendering, or lifelike effects in movies, and many others? In all those fields we would, comparatively, still be in the stone age). Not to mention the new genres of gaming-entertainment-culture that would be "shyly" emerging by now. But that's not happening, and instead we're about to get the seventh Call of Duty in a short while.
The future is shaped by software, not hardware; hardware is a necessary ingredient but far from sufficient. Kudos to ATI and nVidia for continuing to create wonderful hardware, and shame on the gaming industry for not using it, thus burying the future...
Noooooooooooooo. That's what they want us to do.;)
Does anyone know what the mounting-hole dimensions are? Same as the GT200 cards?
Sorry if it's already been discussed, but I don't have time to read a 65-page thread.
tc, I don't think anyone has discussed that.
Here is a guide for you.
http://www.pwnordie.com/wp-content/u...9_fullsize.jpg
The first STALKER game started development in 2001. I saw screenshots showcasing the graphics in 2002, and they didn't really look that much worse than those from 2007. Yes, that's right, about nine years ago, in DirectX 8. Do you really think they remade the engine from scratch for CoP? Since it doesn't look that much better than SoC, I believe a whole ton of the code was written back in 2001; no wonder it's so sluggish...
Haha, brilliant! :rofl:
No STALKER game has ever looked better than "good", and CoP is no different. The CoP benchmark made me seriously cringe: with sun shafts enabled it shafts the system, yet the graphics are hardly better than something on the Source engine (which would perform 10x better).
The bugginess of every release should've been a sure enough sign of bad coding.
It's true that Valve and Blizzard games aren't as stunning visually as a lot of games out there, but they're so damn well optimized that they'll run on damn near anything from a GeForce 6600 onwards.
Scaling with respect to pixel count is really bad in the STALKER engine (almost linear, which is bad); that's why multi-GPU works so well in those games.
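To make that concrete, here is a toy frame-time model with made-up constants (illustration only): if frame time is roughly fixed overhead plus a per-pixel cost and the pixel term dominates, the renderer is pixel/shader bound, which is exactly the case where AFR multi-GPU scales well.

Code:
# Toy model: t_frame = fixed_ms + pixels * ns_per_pixel (all constants hypothetical)
def frame_time_ms(pixels, fixed_ms, ns_per_pixel):
    return fixed_ms + pixels * ns_per_pixel * 1e-6   # 1 ns = 1e-6 ms

pix_1080p = 1920 * 1080
pix_1440p = 2560 * 1440

for name, fixed, cost in [("pixel-bound", 2.0, 8.0), ("overhead-bound", 20.0, 2.0)]:
    t1 = frame_time_ms(pix_1080p, fixed, cost)
    t2 = frame_time_ms(pix_1440p, fixed, cost)
    print(f"{name}: 1080p {t1:.1f} ms, 1440p {t2:.1f} ms "
          f"(slowdown {t2 / t1:.2f}x vs pixel ratio {pix_1440p / pix_1080p:.2f}x)")

The closer the slowdown tracks the pixel ratio, the more pixel-bound the engine is, and the more an extra GPU rendering alternate frames helps.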
Hi,
A French online magazine, HARDWARE.FR, published details about the Fermi architecture on 18 January: http://www.hardware.fr/articles/782-...ometrique.html . You probably already have more information locally in the USA, but there may be some info there that helps.
It's in French, but there is an English translation; don't look at the publication date, it's wrong, the French version was published on 18 January:
English version http://www.behardware.com/articles/7...evolution.html
8 pages with a deep dive into Fermi. I don't know if this link has already been provided there, if not, enjoy the reading.
Best regards,
Cyril
In these days of the internet it really doesn't matter where you live, as long as you're online. (No, it doesn't contain any news; it's like all the other articles plus speculation.)
From Fermi to the Xbox 360, this thread is incredible. So much info inside that one can go crazy just by looking at the tags :P
The "first paper-made card" is also hilarious, because a card = several layers of paper :D
One last thing concerning the Xbox: it has a three-year warranty for the RROD, and after we sent my brother's in for repair (free of charge, of course), they decided to send us a new one back after 10 days. I was expecting much worse service from MS :p:
P.S.: I think it has to do with inadequate cooling.
Not necessarily. I work at GameStop and I've seen people get the Ring of Death within a month, even without heavy usage; the PS3 doesn't have those problems whatsoever. Anyway, on topic: if the dual-GPU card comes out, would it have 2x 8-pin connectors, since it's going to exceed the 300W mark?
Fermi benched in Far Cry 2:
Quote:
"The Fermi and GTX 285 numbers were provided by NVIDIA, in NVIDIA-controlled conditions, at an event in Las Vegas*
So make of it what you will.
http://www.hexus.net/content/item.php?item=21996
Thanks for the link
The GF100 is sandwiched between the 5870 and the 5970 in average frame rate :D
The interesting part is the max rate: the GF100 has the lowest max frame rate of the whole bunch. I get that the 5970 is limited due to CF, but why is that the case with the GF100?
Anyway, they should really test a 5870 @ 1GHz, and maybe a 5870 Eyefinity 6 as well, to complete the picture :D I reckon a 5870 @ 1GHz can score near the 70 FPS mark (BTW, the tested 5870 was at 875MHz).
And people say it's an 8800 GTX vs 3870 X2 situation... meh. The 5970 isn't even full 5870 CF, so Fermi needs a lot of improvement to catch the 5970.
That's why the best thing to look at is the average performance, where everything looks good.
I mean, a 5870 @ 1GHz (the MSI Lightning and the Gigabyte one) can achieve around 70 fps, the GF100 gets around 85 fps, and the 5970 around 100 fps: roughly a 15 fps gap between each step. The GF100 shown must have been one of the best-binned, special samples; it's highly unlikely that Nvidia will ship a GF100 card that does over 100 fps, IMO.
That is not the point.
The thing is, the 5970 is in a league of its own.
Fermi will be the single fastest GPU, but it cannot be priced like mad, because it cannot beat the $600 5970. So pricing will be a tricky thing for Nvidia.
Though I am really interested in a 300-350 SP part (can't be bothered to calculate the precise number :P).
They kept the machine screened. No one could see the hardware inside.
"...provided by NVIDIA, in NVIDIA-controlled conditions..."
Could have been an S3 ViRGE against a Matrox Mystique inside a Cray Jaguar for all we know. ;)
Speculation is fine and fun, but what is really needed are some independent benchmarks from a trusted source across a range of titles. :yepp:
Nice to see their results reflect ours almost exactly. Thanks for that! :up:
Not true at all. The side panel of the machine was open, allowing the people in attendance to see everything inside, from the memory to the GPU to the mobo to the PSU. Finally, NVIDIA offered to open up the dxdiag screen. How do you think their use of an i7 960 was confirmed? :shrug:
It's the MSRP, even if right now all the stores are overpricing it.
If Fermi comes out at $550 or $600, do you really think the stores will sell it at that price? I doubt it.
But Fermi at a $600 MSRP is uncompetitive.
AND the 5970 will beat it; it's a beast of a card.
...Could have been an S3 ViRGE against a Matrox Mystique inside a Cray Jaguar for all we know. ;)
I only read the Hexus piece in the link I sent, so I know nothing of the machine's innards being exposed. Besides, the dxdiag screen could have been faked, and if Nvidia had a hypnotist on the payroll, who is to say what was seen? Quote:
Not true at all. The side panel of the machine was open, allowing the people in attendance to see everything inside, from the memory to the GPU to the mobo to the PSU. Finally, NVIDIA offered to open up the dxdiag screen. How do you think their use of an i7 960 was confirmed? :shrug:
Is there a tongue-in-cheek smiley? Because the winking smiley failed in my previous post. :)
Good read.
I feel like I'm watching all the conspiracy theory movies and a few X-Files episodes with some of you here :D
What if nVIDIA used a Radeon HD 5870 and can't produce a working Fermi?
What if the world as we know it ends tomorrow?
What if...
Good lord...
come on spill it pretty please :p:
Maybe what I heard about those slides being true is true... :shrug:
Those slides could be true; I heard they were shown to the AIB partners, etc... who knows...
What if BenchZowner was a bit more fun and entertaining :) speculation is F--U---NN
^ lol :ROTF:
lol, well my guesstimate is $400-500
though still keep expectations low/leave room for surprise :)
spill what.. my glass is empty :D
All the Fermi vids and all the talk about those vids... that's "old" Fermi.
If all Nvidia wanted to do was beat the 5870, that's easy... peanuts! Fermi would be out in retail already.
- there's room for clocks: that's what's taking longer
- there's room for drivers: a lot of room for improvement
- there's room for OC: just as the 5870/5970 can OC ~20%, so can Fermi
- there's room for other stuff too :D
Some of you have forgotten, some of you don't know, and all some of you know about Nvidia is the renaming... but that's who the hell Nvidia is: they don't like to lose! :rofl:
If what you say is true, then that is a really good thing. I love competition, and I actually want Fermi to be up there with the 5970, because I work as a 3D artist and I need a Fermi as my next upgrade for GPU rendering purposes.
So let's all hold hands and sing praises so that Jen and his magic green goblin team can release a great product.
Right guyz, WE need graphics competition and, above all, we need kick*ss games; let's hope DX11 can bring a real difference vs consoles.
I agree
As soon as a >1GB VRAM card that pushes more pixels than a GTX 285 is released, I am going to buy one.
I am currently livid with BFG (see here as to why). It's no surprise BFG has pulled out of Europe :down:
I just hope eVGA or some other decent brand produces high quality Fermi cards.
John
^ Try Supreme Commander with 2 players, lots of units and triple buffering: 1.7GB easily on a 2GB GTX 285, and still playable, while a 1GB 285 can't sustain such brutality.
1GB cards, ATI and Nvidia alike, all they're good for is benches... at least the way I see it :D
Yes we need competition !
The best scenario could be something like this:
nVidia releases a single-chip GTX 380 which barely matches the performance of the HD 5970. That's all nVidia needs to do to ask a high price.
ATi releases a refresh (if not a new HD 5980?) right after, and then gets just ahead.
In this scenario we would be able to get a great GTX 380 for a good price; otherwise I'm afraid a superior nVidia single GPU (i.e. a repeat of the 8800 GTX) won't come cheap.
Let's play what if since you guys are so fond of these mind games :p:
@Sam oslo...
and what if nVIDIA releases a dual GF100 in April ? :p:
It is not about "what if". It is about a very possible competition scenario that has happened before :p: and there is a good chance of it happening again, sooner or later.
In this scenario, that dual GF100 would become even more interesting to follow. Don't you think so?
EDIT: Two scenarios could explain the necessity of releasing a dual GF100: either the GTX 380 is behind the HD 5970 by a good margin, or it will fall behind a refresh (or a new GPU which ATi is going to release soon). What else could be the reason for a dual GF100, do you think?
I knew I'd bring the positive out of you :)
Fermi vs 5970:
512 vs 3200 shaders
384-bit vs 2x 256-bit
512 Fermi shaders will beat 3200 Radeon shaders, just as the actual 384-bit memory bandwidth will beat the actual 2x 256-bit memory bandwidth, and that's even if they stick to 4.2Gbps.
So much for Radeon shaders being hyped as "more efficient".
Mind games... oh, that 400GB/s (5Gbps) Fermi? :)
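For what it's worth, here is the bus-width arithmetic behind numbers like these (bandwidth = bus width / 8 x data rate); the 4.2 Gbps Fermi figure above is a rumor, while the Radeon figures are the stock memory specs:

Code:
# GB/s = (bus width in bits / 8) * data rate in Gbps -- illustration only
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(f"Rumored Fermi, 384-bit @ 4.2 Gbps: {bandwidth_gbs(384, 4.2):.1f} GB/s")
print(f"HD 5870, 256-bit @ 4.8 Gbps:       {bandwidth_gbs(256, 4.8):.1f} GB/s")
print(f"HD 5970, 256-bit @ 4.0 Gbps each:  {bandwidth_gbs(256, 4.0):.1f} GB/s per GPU")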
I wouldn't go that far. ATi has hands-down the best shader units; the problem is that no games can use this, because it's 3D graphics, not pure shading.
I'm referring to actual 3D performance... that's just numbers.
What if they hold it together with wood screws? Haha, jk jk.
Looking forward to seeing what Fermi can do!
And then the ATI 5000 series will come down in price :p
:rofl::ROTF:
You do realize that the "3200" shaders are just how ATI counts them (marketing speak), right? You've got to divide that by 5 to get the equivalent count for Nvidia: so it's 640 vs 512, and that's before accounting for the fact that Nvidia runs its shaders on a hot clock, which makes them worth more, equivalently (rough numbers sketched below).
As for your bandwidth talk, that's hilarious, seeing as you were once championing 512-bit as a necessity, when 256-bit + GDDR5 did just fine in the RV770 vs GTX 285 days.
Slow down on the kool-aid thar.
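A minimal back-of-the-envelope of that point, comparing only peak single-precision throughput (exactly the kind of "just numbers" comparison being argued about); the Fermi hot clock below is an assumption, not a confirmed spec:

Code:
# Peak single-precision throughput, counting one MAD/FMA as 2 FLOPs (illustration only).
def peak_sp_gflops(alus, clock_ghz, flops_per_clock=2):
    return alus * clock_ghz * flops_per_clock

hd5970 = peak_sp_gflops(3200, 0.725)   # 2 GPUs x 1600 "stream processors" @ 725 MHz
hd5870 = peak_sp_gflops(1600, 0.850)   # 1600 SPs @ 850 MHz
gf100  = peak_sp_gflops(512, 1.4)      # 512 CUDA cores @ an ASSUMED ~1.4 GHz hot clock

print(f"HD 5970: {hd5970:.0f} GFLOPS, HD 5870: {hd5870:.0f} GFLOPS, "
      f"GF100 (assumed clock): {gf100:.0f} GFLOPS")

On paper ATI wins by a wide margin; the argument above is that real-game utilization of the VLIW5 units falls far below peak, while scalar hot-clocked cores are easier to keep busy.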
I don't see Fermi selling for $400-500 at all, unless its performance really is under the 5970's by a decent margin. No way has Nvidia EVER sold a top-end card without a premium. See: GTX 280 vs 260 prices at launch.
And I'm pretty sure people have gotten in trouble for claiming to know stuff on this forum without substantiating it.
Hmmm, and I thought you had some inside info; it seems you just based your claims on your own flawed logic. The shader comparison is funny, though: if you divide ATI's shaders by 5, that means ATI's 160 lower-clocked shader units (RV770) matched 240 highly clocked ones (GTX 280), and unless Fermi's shaders are some sort of miracle, it's really hard to believe that only 512 shaders can match 3200. Fermi does have a few undeniable advantages, though; higher memory bandwidth and being a single GPU are important ones.
How about this scenario: Fermi sits in between the 5870 and 5970 in current games (but we are still talking about >60 fps), BUT it is much quicker than the 5870, beats the 5970 in the Unigine benchmark (geometry), and is the quickest in DX11 games?
Now that would make a lot of people think.
New benchmarks are out here:
Intel Core i7 960, 6GB of 1600MHz memory, an Asus Rampage II Extreme, Win 7
http://images.hardwarecanucks.com/im...0/GF100-42.jpg
http://images.hardwarecanucks.com/im...0/GF100-43.jpg
http://images.hardwarecanucks.com/im...0/GF100-44.jpg
Source :
http://www.hardwarecanucks.com/forum...oscope-14.html
http://www.hardwarecanucks.com/forum...oscope-13.html
http://www.fudzilla.com/content/view/17325/1/
It's already been posted :)
February/March, lol...
Oh wait... wait... someone seems to forget that they actually need boards to surprise people. And without cards... no game.
If we want to be surprised next month, the people that make and sell the cards actually need to know they're getting them, which is not the case if NV told its partners last week that they don't expect any shipments in March.
I hope you're not expecting Twintech, Inno3D, Sparkle or Gainward to get the chips before eVGA, BFG and XFX :p:
Checking 70 pages isn't easy, though.
It has been rumored on some sites that the GTX 380 will sport a $520 sticker :)
That is a good price IMO: at around $500 it would be $100 cheaper than the 5970 and $100 more expensive than the 5870, not bad for the fastest single GPU. But the prices of the 5950 and the 5870 @ 1GHz are still to be inked, and it will be a bloodbath if the 5950 is priced around $500.
The 5870 @ 1GHz will be much cheaper and may be a good choice for someone who can't afford the GF100.
You have to realize that the two architectures are very different; you can't compare ATI's shaders to Nvidia's CUDA cores on a one-to-one basis. Quote:
Originally Posted by NapalmV5
It's similar to the old Athlon VS Pentium 4 debate.
It's not about the number of execution units, but how they work, and ultimately how good the performance is.
And BTW, you are assuming that Fermi will have the upper hand in that comparison, but most people don't share that view.
Think about this: why did Nvidia compare the GF100 to the 5870 (instead of the 5970)? That should give you an idea.
Considering what Nvidia said about XFX, I wouldn't expect them to be in the first batch of Fermi cards.
Anyone expecting SuperCompute functions in GF100 is badly mistaken...
(expect some more superb announcements in the coming month)
Thanks Jawed for the link. Quote:
I should pause to explain the asterisk next to the unexpectedly low estimate for the GF100's double-precision performance. By all rights, in this architecture, double-precision math should happen at half the speed of single-precision, clean and simple. However, Nvidia has made the decision to limit DP performance in the GeForce versions of the GF100 to 64 FMA ops per clock—one fourth of what the chip can do. This is presumably a product positioning decision intended to encourage serious compute customers to purchase a Tesla version of the GPU instead. Double-precision support doesn't appear to be of any use for real-time graphics, and I doubt many serious GPU-computing customers will want the peak DP rates without the ECC memory that the Tesla cards will provide. But a few poor hackers in Eastern Europe are going to be seriously bummed, and this does mean the Radeon HD 5870 will be substantially faster than any GeForce card at double-precision math, at least in terms of peak rates.
^ Think anyone could figure out a way to bypass that limit?
sure it is :)
http://www.xtremesystems.org/Forums/...hreadid=241120
Don't count on it; every soft-Quadro hack is just superficial (device ID and SPEC marks), and Teslas are even more of the same.
Now where is that "but it's still faster with 186 GFLOPS" crap? When it's taking 100W more and costing an estimated 150USD more, it :banana::banana::banana::banana: be doing that.
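For reference, a back-of-the-envelope check of where a DP figure in that ~186 GFLOPS ballpark could come from under the 64 FMA/clock cap quoted earlier; the 1.45 GHz hot clock is an assumption, not a confirmed spec, and the HD 5870 figure is its published peak:

Code:
# Capped GeForce GF100 double-precision peak, per the Tech Report quote above.
dp_fma_per_clock = 64     # GeForce cap (one fourth of what the chip can do)
flops_per_fma = 2         # one fused multiply-add = 2 FLOPs
hot_clock_ghz = 1.45      # ASSUMED shader clock

print(f"Capped GF100 DP peak: ~{dp_fma_per_clock * flops_per_fma * hot_clock_ghz:.0f} GFLOPS")
print(f"HD 5870 DP peak:      ~{2720 / 5:.0f} GFLOPS")  # DP runs at 1/5 of its 2720 GFLOPS SP rate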
Neliz, no specifics, just the 280W ceiling added onto the 520USD estimates.
But ultimately Cypress' pricing is dynamic, and I'm placing my bet on AMD keeping their distance (and opening up the 5800s to custom designs) beyond the price (in)elasticity of green users.
They've been "designing" for quite some time now first results in a bit more than two weeks.
edit: NOW!
http://techpowerup.com/113418/MSI_HD...assembled.html
http://www.techpowerup.com/img/10-01...card_naked.jpg
Double precision on the GPU is completely irrelevant for everyone in this forum; AFAIK there are no commercially available applications that use it. It's used in secret stuff that decides whether your company makes hundreds of millions of dollars more or not.
Expect ATI and AMD to make a big drama about double precision; ask them for real-world applications you could use to benchmark.
I've got a couple of customers who actually use DP (and yes, they are in the financial world), and they have been using GeForces/CUDA all this time (having started development on 8800 GTs). They were indeed quite expecting all their CUDA efforts to get a nice boost come Q2.
DP would be appreciated perhaps even in situations such as a GPGPU-based renderer (no, not the lame raytrace-everything kind, but an OpenCL implementation of an unbiased renderer).
nVidia's headache here is that Cypress achieved IEEE 754-compliant DP and perhaps has enough cache read/write flexibility and addressing capability to make it a decent choice for most devs.
As for real-world apps... is GPGPU even important enough yet to talk about speed? Aside from all those distributed computing projects, I thought the first round of CUDA/Stream apps were horrid jokes.
I know people who use DP-based software and run it on consumer chips; it saves them a wallop and it's almost as good.
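As a toy illustration of why the DP crowd above cares at all (CPU-side, not tied to any GPU or API): a long single-precision running sum drifts visibly, while double precision stays put.

Code:
import numpy as np

N = 10_000_000
vals = np.full(N, 0.1)

# cumsum accumulates sequentially, like a naive summation kernel would
sum32 = np.cumsum(vals.astype(np.float32), dtype=np.float32)[-1]
sum64 = np.cumsum(vals, dtype=np.float64)[-1]

print(f"float32 running sum of {N} x 0.1: {float(sum32):,.1f}")  # drifts well away from 1,000,000
print(f"float64 running sum of {N} x 0.1: {float(sum64):,.1f}")  # ~1,000,000.0

Whether that matters depends on the workload, which is why the financial users mentioned above care and most gamers never will.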