http://img144.imageshack.us/img144/9...ohofx57wu7.jpg
GTX 280 is three times faster in F@H than HD 3870 and five times faster than PS3.
Source (a reliable Turkish news editor):
http://forum.donanimhaber.com/m_23433606/tm.htm
F@H supported on Nvidia Cards?
Btw the link doesn't work for me, it's an empty page ;/
That's my distorted picture! :mad:
The original can be found over at a certain site which reported on the NVIDIA F@H client but fumbled with the pictures. I didn't want to rat them out, but it's still interesting news...
http://www.nordichardware.com/news,7777.html
//Andreas
Quote:
GeForce GTX 280 is going to be a mighty beast. The GT200 core impressed us from the very beginning in matters of raw performance. Alas, there has been little information on the real-life performance of the card, other than statements that it runs Crysis fine at certain settings and that it scores 7000 points in the Vantage Extreme profile. We've been provided some more solid information based on performance in Stanford's Folding@Home client. A recently presented slide says that GeForce GTX 280 will be capable of folding slightly more than 500 mol/day, which is three times more than what Radeon HD 3870 can do, about 170 mol/day (according to the slide), or five times more than PlayStation 3 at 100 mol/day.
http://www.nordichardware.com/image3.php?id=5077
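A quick sanity check of the slide's ratios, using only the figures quoted above (nothing here is measured, it's just the claimed numbers):

# Sanity-checking the slide's claimed Folding@Home throughput (mol/day),
# taken straight from the NordicHardware quote above.
gtx_280 = 500
hd_3870 = 170
ps3 = 100
print(f"GTX 280 vs HD 3870: {gtx_280 / hd_3870:.2f}x")  # ~2.94x -> "three times"
print(f"GTX 280 vs PS3:     {gtx_280 / ps3:.2f}x")      # 5.00x -> "five times"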
The picture, and the angle it was taken from, have been altered from the original:
http://www.pcper.com/images/news/folding4.jpg
Try again, it's working now.
We do have a thread about this very topic: http://www.xtremesystems.org/forums/...=188726&page=1
And my setup can still take it.
4x OCed 3850s in CrossFireX will still outperform that.
But, I bought mine for games, so I'll just enjoy it for that.
The GTX 280 will cost at least 3x as much as a 3870, so I'd be really ****ed if it wasn't 3x faster...
nVidia presented us with the info that their new gear did exactly what it should. That shouldn't be news!
Sadly it's never been linear, but I don't believe in paying 4x as much for 10% more performance.
If everyone thought like that, the price structure of hardware would be different.
Value for money >>> all else.
Oh, and nV say their new card is 3x faster. I'll believe it when Joe Average pulls the same numbers.
I could care less; if I buy this card it's for games, and folding as a side project. I doubt someone would buy a $400+ video card only to fold with it.
gtx 280 is freakin insanely fast.
In the UK, the 3870 was the top card for a while.
Do you mean performance or price? The 3870 is no slouch in performance/£, although, yeah, it depends on the game.
Since the GTX 280 supports tri-SLI...
1 card = 3x faster than the HD 3870
2 cards = 6x faster than the HD 3870
3 cards = 9x faster than the HD 3870
In tri-SLI the 280 will be freaking much faster than any CPU.
If one card gets 500 points and a CPU gets around 4, it's 125x faster (quick math below).
In tri-SLI, i.e. 3 cards, it's a freaking 375x faster!
These numbers are ridiculous and maybe FUD.
But yeah, DAMN!
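The rough math behind those figures — a sketch assuming perfectly linear SLI scaling (optimistic) and a CPU doing ~4 mol/day, which is just what the 125x claim implies, not a published number:

# Implied tri-SLI scaling, assuming perfectly linear scaling and a
# CPU doing per_card / 125 = 4 mol/day (inferred, not published).
per_card = 500           # mol/day claimed for one GTX 280
cpu = per_card / 125     # 4.0 mol/day implied for the CPU
for cards in (1, 2, 3):
    total = per_card * cards
    print(f"{cards} card(s): {total} mol/day = {total / cpu:.0f}x the CPU")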
Hmm, if folding performance is at all representative of real-life gaming performance, and the R700 should also be roughly 3x the 3870 (R700 = 2x RV770, and RV770 = 1.5x RV670, so R700 = 3x RV670), then we should have a nice battle royale on our hands between the 4870X2 and the GTX 280.
Yeah, I thought it would be fairly linear for folding (isn't it for CPUs?)
Isn't folding memory/CPU-bound and nonlinear?
But we don't know what they did to get the number, and if there are no stats it doesn't count. All I see is NV with an ATI card and their new one. I'm sure it will score higher, but CUDA is complex and it would take a lot of work to make it efficient.
Also, how can they rate it? The NV client should be treated like the ATI one when it first came out, where it was only doing simple WUs and getting rechecked.
F@H scaling depends on the WU. I pitted an E2140 @ 3.2GHz with single-channel RAM against an E6600 @ 3.2GHz with dual-channel RAM, and the frame time for what I was working on at the time was the same.
I am always sceptical about the results a company shows when it comes to promoting its own product. It's never an equal setup; 9/10 times the testing setup is in favour of their own product. I will wait for the websites' reviews and benches.
I wonder what CPU they compared it with... and what units the y-axis has. I can only read xx/day; I hope it doesn't mean WU/day. If so, it's a joke by NV...
I mean the GeForce to the Radeons; it shouldn't be a linear comparison. Some of the runs should favor one or the other, and the results shouldn't be linear in relation to FLOPS (I assume that's the only metric here).
Can't read the article... Are those numbers based on theoretical musings or are they based on the NVIDIA F@H GPU beta?
I love how we're comparing the next-gen NVIDIA card to the current-gen ATI card. When we see a 280 vs a 4870 and 4870X2 I'll pay more attention. Hell, I'd like to see a G92 and G80 in that chart too. I have a funny feeling G80/G92 is probably right there with the 3870.
3X the folding power doesn't mean 3X the points per day.
The gpu clients are only about 500ppd because they can only get very specific work units.
IDK - I'm kind of skeptical there, the big deal about the 2X0 series was that they went to a more general stream processor design than g80/g92. I'm wondering where it is vs ATI's stream processing tech, but I guess we'll see soon enough.
lolol, I bet at some point we're going to see each company cheating, à la the old driver optimizations for various benchmarks.
Since nobody brought this up yet:
The points system for F@H is a mess. Work units score based on what hardware you're running them on, not the actual speed at which each processes them.
See this all the time around here :D
You are saying that you do care.
I think you mean "I could not care less"
or "I couldn't care less"
Sorry to be a grammar Nazi, and truth be told it is somewhat hypocritical of me, as I am often wrong myself. I am just trying to point out a really common error.
So...
NVIDIA is now sort of something like this?
http://i32.tinypic.com/23rzhl.jpg
What a lame-ish and FUD-ish presentation, comparing the competition's previous-generation mid-range to their yet-to-be-released top-range...
I wonder if this happened because nVidia is actually afraid of the RV7xx.
Y'know, it would be nice if the people being shown these presentations (from all major companies, not just nV) could stand up and point out the most glaring omissions, skewed facts, etc. without fear of the company going in the huff and kicking them off the early view/play/bench list.
Audience member: "Where's the 8800GTS in that lineup, or any last-gen nV card for that matter?"
nV: "The door's behind you, GTFO"
When will that be released to the public?
BTW, someone has to code a version for QMC.
True, they really should've waited for AMD to send them a few early 4870 samples to use in their presentation. I know I would have.....
And it is really lame of them to show numbers from their upcoming products at a conference about their upcoming products. How dastardly!
Everyone is forgetting that the GT200 will fight the 4870X2.
Let's assume a single 4870 is 50% faster than a 3870 (50% more shaders, 50% more TMUs, higher shader clock, high-speed GDDR5).
A GT200 would then be twice as fast as a single 4870 (300% / 150% = 2).
So it will be about as fast as a 4870X2 (F@H scales quite linearly) — see the sketch below.
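The chain of assumptions written out — every multiplier here is speculation (the 1.5x RV770 guess, NVIDIA's claimed 3x, linear CrossFire scaling), not a benchmark:

# Speculative scaling chain; all multipliers are assumptions from the
# post above, and F@H rarely scales perfectly linearly in practice.
rv670 = 1.0              # HD 3870 baseline
rv770 = 1.5 * rv670      # assumed: +50% shaders/TMUs, higher clocks
gt200 = 3.0 * rv670      # NVIDIA's claimed 3x folding figure
r700 = 2.0 * rv770       # 4870X2 = two RV770s, assuming linear scaling
print(f"GT200 vs single 4870: {gt200 / rv770:.2f}x")  # 2.00x
print(f"GT200 vs 4870X2:      {gt200 / r700:.2f}x")   # 1.00x -> a dead heat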
I don't see how this is such awesome news?
I hope it is better than this, or I hope F@H favors ATI cards, because the GTX 280 won't be faster than the 9800GX2, and certainly not the HD 4870X2, if it looks like this in gaming.
Not sure how they measured it, but this GeForce is looking like a monster.
You honestly think AMD would have sent NVIDIA a card that is yet to be released, and not high-end, to be put against the high end?
Maybe because NVIDIA didn't support F@H before? Plus it shows the power of the new card.
How can you claim that? Have you got all 3 cards and tested them?
Does anyone know if the GTX 280 will have aftermarket air cooling, or will it be like the 9800GX2, which can only be cooled with water?
Does the GTX 280 CUDA client beat the HD 3870 CAL client by 300%? I believe it would.
I want to see what CUDA would do against CTM results.
And you are Charlie Demerjian? Or the Oracle of Delphi?
Am I the only one who thinks it would be pathetic if a dual-card setup from AMD could not obliterate a single NVIDIA card? Though NV can release a dual card themselves, so it's pretty irrelevant whether a 4870X2 is superior.
Dunno, it seems impressive and it probably is as Dinos gave us a clear hint :D
:worship: gt280
:worship: 4xxx series.
more revenues for the 'checkout-chick' e-tailers.
Is there any confirmation which one is the fastest? GTX 280 or 4870X2?
NDA is June 17th
That's just folding, and just pre-marketing. Remember the old news where the 2900XT was supposed to be n times faster than the 8800GTX; they even got the monster specifications, but the results were, how should I put it: lame! Now nVidia is doing the same thing. I doubt the GTX 280 will be even 2x faster than the HD 3870 in games, even with its 6+8 power connectors... maybe 3 times the power consumption, now that I might believe.
If you can link/post a guide for doing this it would be much appreciated. According to the folding forum running a second gpu client on the second core of the 3870X2 doesn't work. See for example:
http://foldingforum.org/viewtopic.php?f=42&t=2145
The 2900 is way faster at Maya and CAD and Call of Warez (I think that's valid since it's sponsored by both ATI and NV), but in gaming, rendering takes a back seat to assembling frames, and it's not like low shader optimization and DX9 textures with DX10 shading/overlays help ATI out (don't flame; I'm saying NV = game, ATI = workstation).
There is a huge limit with slot bandwidth and with the CPU queuing up work for the GPU, and the client can't handle it (it will only touch 1 core).
I don't know if anyone has noticed, but this is actually a win for ATI.
The HD 3870 is 3.75x cheaper than the $600 GTX 280 (which works out to about $160): http://www.newegg.com/Product/Produc...82E16814161218
So technically you get more per dollar with the 3870 than you do with the GTX 280 (see the numbers below).
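Folding-per-dollar with the thread's own numbers — the slide's mol/day claims and a ~$160 street price for the 3870 (that's just what "3.75x cheaper" than $600 works out to, not a verified listing):

# Perf per dollar, using the slide's mol/day claims and the prices
# cited in this thread ($600 GTX 280; HD 3870 at ~$160 = 600 / 3.75).
cards = {
    "GTX 280": (500, 600),  # (mol/day, USD)
    "HD 3870": (170, 160),
}
for name, (perf, price) in cards.items():
    print(f"{name}: {perf / price:.2f} mol/day per dollar")
# HD 3870 ~1.06 vs GTX 280 ~0.83 -> the 3870 wins on folding per dollar.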
Perkam
Meow?
http://techreport.com/r.x/radeon-hd-3800/power-idle.gif
http://techreport.com/r.x/radeon-hd-3800/power-load.gif
That plus the price difference means Nvidia cannot claim F@H crunching superiority with this demonstration alone. Frankly I'm surprised this thread was not debunked much earlier.
Perkam
Techreport is not quite good with those figures. Here are the figures for the HD 3870X2:
http://techreport.com/r.x/radeon-hd-...power-idle.gif
http://techreport.com/r.x/radeon-hd-...power-load.gif
Absolutely no consistency. Also, I think he wanted you to scale AMD to the same performance. And price/performance on high-end parts vs low-end parts is... not right. It's like comparing a $100 2.4GHz Core 2 with a $1499 3.2GHz one.
Mine never got beyond 200W, what did they test it with?
No, not my picture, I would never violate NDA if I had signed it, but since they never even give us a chance to do that, I feel just fine posting things like this ;)
If I had been there I would've made sure to look you and some other guys up :)
This is the world map hanging on the wall at NVIDIA HQ:
http://img138.imageshack.us/img138/5241/mapqm6.th.jpg
See? No Nordics :(
//Andreas
^ LOL :D IIRC F@H doesn't run on X2 GPUs or CrossFire/SLI configs? Also, I've heard that you need a beefy CPU to "feed" these GPUs, and with the latest WUs even an OCed C2D isn't properly "feeding" these cards (3850/70) :shrug:
Don't worry, when the 4850 comes, $460 instead of $600, 220W instead of 236W, and at least the cards are going to be silent. :up: :D
@Zanzabar
ATI for now dislikes textures. If you noticed, Mass Effect and BioShock in DX9 weren't really texture heavy (close to an 8800 Ultra), and 16x AF kills a lot of performance in a way that would not hit the nVidia cards.