I wonder which CPU they compare it to... and what units the y-axis has. All I can read is xx/day; I hope it doesn't mean WU/day. If so, it's a joke by NV...
I mean comparing the GeForce to the Radeons: it shouldn't be a linear comparison. Some of the runs should favor one or the other, and the results shouldn't be linear in relation to FLOPS (I assume that's the only metric here).
Can't read the article... Are those numbers based on theoretical musings or are they based on the NVIDIA F@H GPU beta?
I love how we're comparing the next-gen NVIDIA card to the current-gen ATI card. When we see a 280 vs a 4870 and 4870X2 I'll pay more attention. Hell, I'd like to see a G92 and G80 in that chart too. I have a funny feeling G80/G92 is probably right there with the 3870.
3X the folding power doesn't mean 3X the points per day.
The gpu clients are only about 500ppd because they can only get very specific work units.
IDK - I'm kind of skeptical there, the big deal about the 2X0 series was that they went to a more general stream processor design than g80/g92. I'm wondering where it is vs ATI's stream processing tech, but I guess we'll see soon enough.
lolol, I bet at some point we're going to see each company cheating ala the old driver optimizations for various benchmarks.
Since nobody brought this up yet:
The points system for F@H is a mess. Work units score based on what hardware you're running them on, not the actual speed at which each one is processed.
I see this all the time around here :D.
You are saying that you do care.
I think you mean "I could not care less"
or "I couldn't care less"
Sorry to be a grammar Nazi, and truth be told it's somewhat hypocritical of me, as I am often wrong myself. I'm just trying to point out a really common error.
so...
NVIDIA is now sort of something like this?
http://i32.tinypic.com/23rzhl.jpg
What a lame-ish and FUD-ish presentation, comparing the competition's previous-generation mid-range card to their own yet-to-be-released top-end...
I wonder if this happened because nVidia is actually afraid of the RV7xx.
Y'know- it would be nice if the people being shown these presentations (from all major companies, not just nV) could stand up and point out the most glaring omissions/skewed facts etc without fear of the company going in the huff and kicking them off the early view/play/bench list
audience member- "Wheres the 8800GTS in that lineup, or any last-gen nV card for that matter?"
nV: "Doors behind you, GTFO"
When will that be released to the public?
BTW, someone has to code a version for QMC.
True, they really should've waited for AMD to send them a few early 4870 samples to use in their presentation. I know I would have.....
And it is really lame of them to show numbers from their upcoming products at a conference about their upcoming products. How dastardly!
everyone is forgetting that the GT200 will fight the 4870X2
Let's assume a single 4870 is 50% faster than a 3870 (50% more shaders, 50% more TMUs, higher shader clock, high-speed GDDR5).
A GT200 would then be twice as fast as a single 4870 (300% / 150% = 2).
So it would be about as fast as a 4870X2 (F@H scales quite linearly).
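A quick sanity check of that arithmetic as a minimal sketch. Note these are the thread's assumed figures (NVIDIA's claimed 3x over a 3870, an assumed 50% 4870 uplift, and ideal dual-GPU scaling), not measured results:

```python
# Back-of-envelope F@H throughput comparison.
# All values are relative to a single HD 3870 (= 1.0) and assume
# roughly linear scaling, per the assumptions in the post above.

hd3870 = 1.0
hd4870 = hd3870 * 1.5    # assumption: 50% faster than a 3870
gt200 = hd3870 * 3.0     # NVIDIA's claimed 3x figure from the slide
hd4870x2 = hd4870 * 2    # two 4870 GPUs with ideal scaling

print(gt200 / hd4870)    # 2.0 -> GT200 ~2x a single 4870
print(gt200 / hd4870x2)  # 1.0 -> roughly even with a 4870X2
```

Of course, if either input figure is off, or dual-GPU scaling is less than ideal, the conclusion shifts accordingly.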
I don't see how this is such awesome news?
I hope it is better than this, or that F@H favors ATI cards, because the GTX 280 won't be faster than a 9800GX2, and certainly not an HD4870X2, if it looks like this in gaming.
Not sure how they measured it, but this GeForce is looking like a monster.
You honestly think AMD would have sent NVIDIA a card that is yet to be released, and not high-end, to be put up against their high end?
Maybe because NVIDIA didn't support F@H before? Plus it shows the power of the new card.
How can you claim that? Do you have all 3 cards? Have you tested them?