For you. There aren't many who would take a slower yet more expensive single-GPU card over a cheaper yet faster dual-GPU card.
If money were no problem for me, I'd go with the first option. SLI/CF just plain sucks.
I don't think anyone is arguing that the 5970 isn't the card ATI puts up against GF100 in price/performance. The point people are making is that NVidia's GPU looks interesting from what's known/seen so far, and that it appears to be more powerful than ATI's GPU.
Whether one side stacks two GPUs on one PCB as a counter is irrelevant in a GPU-to-GPU comparison. People are just trying to gauge how much more powerful NVidia's GPU is compared to ATI's, and to compare raw GPU performance and capabilities, that's all.
No need to go to the fanboy mattresses.
I don't really think so. If that's the GTX 380 and it is on average 20% faster than the 5870, it will be priced above the 5870 and close to, but below, the 5970. Right now AMD must like having a higher profit margin, so if Nvidia keeps prices high, AMD can stay the same...
Remember, if AMD had known that RV770 was going to be so competitive against GT200, they would have set the MSRP much higher. A 4870 at $399 and a 4850 at $299 might have been the debut prices had they known that GT200 was not as much of a beast as people predicted.
Not really... if the dual-GPU part and the single-GPU part are in the same power and TDP range, and the same price range as well, the comparison is valid: it's a single board sucking up the same power and putting out the same heat vs. another single board doing the same.
orly?

Quote: Originally Posted by Hardware Canucks
http://www.overclock.net/gallery/dat...mIIrevised.PNG
Yes, I agree... SLI and CF just don't work as well as a single-GPU card in some scenarios. Like I said before, when I threw in a second 260 to play FC2, the gameplay actually got worse even though the FPS went up... :shrug:
They don't have all the info, and there's too much text and technical stuff that nobody really cares about, IMO... They posted the rumored 725MHz GPU clock and 1500MHz memory clock, but I highly doubt that the final parts will launch at those speeds...
Yeah, but that's in FC2, which likes higher CPU clocks a lot, doesn't it?
What CPU clock was this at? 4GHz? 4.5GHz?
next time thumbnail pls, k? thx :D
Also, when you compare die size and how many chips they get per wafer, doesn't AMD churn out roughly twice as many per wafer as NVidia?
That gives them a pricing/profitability advantage, doesn't it? :confused:
I mean, assuming R&D and overhead for both were equal, which they clearly are not.
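Out of curiosity, here's a minimal sketch of that "twice as many" claim using a standard dies-per-wafer approximation; the 340mm² and 580mm² areas (quoted later in this thread) and 300mm wafers are assumptions, and defect yield is ignored:

[code]
import math

# Common dies-per-wafer approximation: gross wafer area divided by die
# area, minus an edge-loss term. Ignores scribe lines and defect yield.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(round(dies_per_wafer(340)))  # ~172 candidates for the smaller die
print(round(dies_per_wafer(580)))  # ~94 candidates for the bigger die
[/code]

That's roughly a 1.8x advantage in die candidates before yield differences, which would widen the gap further.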
You can bring up all kinds of metrics of what you think constitutes "logic", but in the end you are looking at two things: TDP and MSRP. Both are very close for R800 and GF100.
If you want to talk about "not fair"... is it fair to bring 340mm² to fight 580mm²?
Personally, I think that ATI has a much higher capacity to produce lots of GPUs; OEMs love it when you're able to supply lots of parts (giving them lower prices per 1000 units).
Nvidia, with low yields and big chips, will have a harder time meeting Fermi orders.
I don't know how to post a thumbnail =o(.
Here's a link to a result so you can see CPU-Z.
http://www.overclock.net/gallery/dat...ayarequest.PNG
GF100 vs. Radeon 5870 (theoretical throughput):

                  GF100    Radeon 5870
Gpixel/s           22.4           27.2
Gtexel/s           44.8           68.0
Mtriangles/s       2800            850
Bandwidth (GB/s)  214.7          143.1
RAM              1536MB         1024MB
Summary: GF100 should be slower than the Radeon 5870 in most games, except where tessellation and/or GPU physics is used, and except perhaps at high resolutions with AA. However, GF100 will be faster for the HPC market, as it has 716.8 double-precision GFLOPS while the Radeon 5870 has 544 double-precision GFLOPS. This is probably why Nvidia has been touting GF100's "compute" performance instead of its gaming performance.
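A quick sanity check on those two GFLOPS figures; a minimal sketch assuming GF100 does double precision at half its single-precision rate with the rumored 1400MHz shader clock and 512 SPs, and the 5870 at a one-fifth rate across its 1600 ALUs at 850MHz (the DP rates and the GF100 clock are rumor-based assumptions, not confirmed specs):

[code]
# Peak DP GFLOPS = ALUs x DP issue rate x 2 FLOPs per FMA x clock (GHz)
def dp_gflops(alus, dp_rate, clock_ghz):
    return alus * dp_rate * 2 * clock_ghz

print(dp_gflops(512, 0.5, 1.4))    # GF100: 716.8 GFLOPS
print(dp_gflops(1600, 0.2, 0.85))  # 5870:  544.0 GFLOPS
[/code]

Both come out exactly at the numbers quoted above.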
That is about as high an OC as anybody would get (without LN2) on a 5870, but it is in fact matching the GF100 score in FC2!
:eek:
Edit: BTW, does anyone know how many clock domains this thing has? It's something like ROPs&core / L2&scheduler / sampler / shader / memory. It's going to be kind of weird seeing five different MHz readings in your GPU-Z window.
NCspecV81... what are your driver settings?
I haven't used ATi in a long time, but isn't there a setting for texture levels (HQ, Q, P, HP on nVidia)? Also maybe the Catalyst AI settings?
I'd wager your software is set up slightly differently from the other benchmarks that have been posted.
Use this service: http://www.imageshack.us/
When the picture is uploaded, you'll get the option to copy a forum thumbnail link that you can just paste in here!
BTW, perhaps you should turn on 16x anisotropic filtering in CCC when benching FC2 ;)
Saaya, I think it's pretty conclusive that an overclocked 5870 can reach the speed a GF100 reaches.
What I want to know is what TDP a 5870 has at a 1080MHz core frequency. I suspect it will be under the 260-270W that GF100 is reckoned to draw.
I am really interested in performance per watt; I think the 5870, at just 190W, is doing a really good job.
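For what it's worth, here's a minimal perf-per-watt sketch using only the rumored numbers from this thread (190W for the 5870, 250-280W and +30% performance for GF100); since every input is a rumor, treat the output accordingly:

[code]
# Relative perf/W of GF100 vs. a 5870 baseline at 190 W
def relative_perf_per_watt(perf_ratio, watts, baseline_watts=190):
    return perf_ratio / (watts / baseline_watts)

print(round(relative_perf_per_watt(1.3, 250), 2))  # ~0.99x the 5870
print(round(relative_perf_per_watt(1.3, 280), 2))  # ~0.88x the 5870
[/code]

On those numbers, GF100 would at best match the 5870's efficiency.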
Yep, it does. But don't forget yields. I think it's safe to assume that Fermi has worse yields than RV870.
Someone made a claim that NVIDIA would have to sell Fermi at a loss in the desktop market. If things go bad for NVIDIA (AMD cuts prices, very bad yields, etc.), I can easily see this being true.
Given the same defect density per wafer of silicon, a bigger die will automatically have lower yield. (Take a piece of paper and put five defect dots on it with a pen, then cut it into 10 pieces; cut another dotted sheet into 20 pieces, and count how many dot-free pieces you get in each case.)
If Nvidia is smart (they probably are), they put some spares on their GPU, which are basically extra pieces of hardware that can replace pieces of silicon with defects. For example, you could imagine having a fifth GPC cluster that can replace one with defects. If you do the proper math, you can compute spare designs that statistically increase per-die yield even though such measures increase the die area.
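To put rough numbers on both points above, here's a toy Poisson yield model; the defect density, cluster sizes, and spare count are all made-up illustrative assumptions, not TSMC or NVIDIA figures:

[code]
from math import comb, exp

D = 0.004  # assumed defect density (defects per mm^2), purely illustrative

# P(zero defects on a monolithic die of a given area), Poisson model
def yield_simple(area_mm2):
    return exp(-D * area_mm2)

# Die works if the non-redundant base logic is clean AND at least
# n_needed of the n_built identical clusters are defect-free.
def yield_with_spares(cluster_area, n_needed, n_built, base_area):
    p = exp(-D * cluster_area)
    p_clusters = sum(comb(n_built, k) * p**k * (1 - p)**(n_built - k)
                     for k in range(n_needed, n_built + 1))
    return exp(-D * base_area) * p_clusters

print(f"{yield_simple(340):.1%}")  # ~25.7% for a 340 mm^2 die
print(f"{yield_simple(580):.1%}")  # ~9.8%  for a 580 mm^2 die
# The same 580 mm^2 built as 4 needed clusters of 100 mm^2, one spare
# cluster, and 80 mm^2 of non-redundant logic:
print(f"{yield_with_spares(100, 4, 5, 80):.1%}")  # ~34.0%
[/code]

Even though the redundant design is still 580mm², the spare cluster more than triples the modeled yield, which is exactly the trade-off described above.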
There are really two things ATI can do to counter the GF100 threat: 1. a 5890, and 2. a 5950.
1. A 5890 @ 1.1-1.2GHz would be close to GF100 performance, but could not really beat it.
2. A 5950, even around 700MHz, would be hard competition for the GF100.
The other countermeasure is already in effect: those 1GHz 5870s seem very tasty, and if sold around $400 they could make a killing. After all, who does not want a card that runs at 1GHz, or 1000MHz :)
I think the card in the video is a GF100. After all, the Deep Dive was about GF100, and it's been confirmed that GF100 = 512 SPs.
GF100 = ~250W.
Sweclockers (translated from Swedish): According to CNET Networks' sources among chassis and power supply manufacturers, at least the top model is a really thirsty graphics card. The data suggest a power consumption of around 250W, 25 percent more than the GeForce GTX 285.
Sounds as good as confirmed by Fudo.
I measured a reference-design 5870 at stock clocks as 212W in FurMark. So if you extrapolate GF100's rumored 30% performance advantage over the 5870 at similar efficiency, you get around 280W (212W × 1.3 ≈ 276W). And ATI did their homework regarding power consumption.
For me, power consumption under load really doesn't matter, as long as it's as low as possible at idle, because that's where the card is going to be >95% of the time in my case.
AMD has managed quite well with the 5 series, although there is still room for improvement.
You guys need to stop daydreaming.
First of all, you have to realize that both nVIDIA's and AMD's current base architectures (both the 5xxx series and the GTX 3xx series are based on previous designs; AMD started with the 2900 series and nVIDIA with the 8800GTX) have reached, or are reaching, their ceiling.
They won't be able to add more units to the cards to put out a refresh.
If anything, nVIDIA might be able to add another 64 "SPs" and AMD about 160 "shader units".
And even those numbers would barely be viable, even if TSMC or any other manufacturer gets the job done right at 40nm or lower.
Realistically, both nVIDIA and AMD have to come out with a totally new architecture for their next-generation cards, and that's not a simple thing. So no, unless the 6xxx series is just a refresh of the current GPU, it won't be out before mid-2011.
By the way, somebody has to develop a vB add-on called "hide Fermi-related threads"; I'm really tired of some people, some flaming, some cheering, some doubting, and some crying like poor babies.
Grow up, maybe?