Nice catch!:up:
Sweet, I hope it supports the G80 cards so I can use my Ultra!!!
Oooh yes, a graphics core with its massively parallel design can indeed be faster than a Cell processor. The "daughter cores" (SPEs) and the controller core are sort of tuned RISC processors, the latter being very close to a generic PowerPC chip, nothing giant or powerful about that one. The whole thing is just 9 cores tuned to work well together, shrunk and run at a higher clock speed. The thing with a Cell proc is that it is tuned to put out a high number of GFLOPS at low power, while a GPU is tuned purely for max performance. And it's way more parallel than the Cell. So don't get me wrong, I kind of like the Cell design, but a GPU is just way faster than this. You're not saying that Nehalem with its 8 cores will be faster than a GPU at specialized tasks, are you? Notice also that in the PS3 the Cell does not always have all eight of its SPEs; to increase yield IBM deactivates crappy cores on the die. But that is for the PS3 only, so mainframes get only fully functional procs.
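Just to illustrate what "massively parallel" means here, a rough CUDA-style sketch (a made-up kernel, nothing to do with the actual F@H code): on a GPU every data element gets its own thread and the hardware keeps thousands of them in flight at once, instead of splitting the same work over a handful of SPEs.

Code:
#include <cuda_runtime.h>

// Hypothetical example: scale a big array of forces.
// Every element gets its own thread; the GPU keeps thousands of them
// in flight, while a Cell has only 8 SPEs to share the same work.
__global__ void scale_forces(float *forces, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        forces[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;               // 1M elements, just as an example
    float *d_forces;
    cudaMalloc(&d_forces, n * sizeof(float));

    // 256 threads per block, enough blocks to cover all n elements
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale_forces<<<blocks, threads>>>(d_forces, 0.5f, n);
    cudaDeviceSynchronize();

    cudaFree(d_forces);
    return 0;
}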
BTW folding on an oc'd GT200? Imagine the electricity bill!
Edit: That jump from 3870 to gt200 in the slide is just INSANE.
Why are there no other GPU-enabled projects, like WCG or QMC?
Nice catch metro!
Here's an ImageShack mirror in case it goes down:
http://img86.imageshack.us/img86/4967/folding4wg0.jpg
The X1900 XTX was 2x faster than a PS3.
The 3870 is about 384 GF
and the 4870 is 1 TF ... so what's so insane about the GT200 at 500 GF?
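For what it's worth, those theoretical GFLOPS figures are basically just shader count x shader clock x FLOPs per shader per clock. A quick back-of-envelope sketch (the numbers in it are placeholders for illustration, not any card's real specs):

Code:
#include <cstdio>

// Back-of-envelope theoretical peak:
// shaders * clock (GHz) * FLOPs per shader per clock.
static double peak_gflops(int shaders, double clock_ghz, int flops_per_clock)
{
    return shaders * clock_ghz * flops_per_clock;
}

int main(void)
{
    // Hypothetical card: 320 shaders at 0.75 GHz doing 2 FLOPs (MADD) per clock.
    // Substitute the real shader count and clock for the card you care about.
    printf("~%.0f GFLOPS\n", peak_gflops(320, 0.75, 2));
    return 0;
}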
Have there actually been any tangible results from all this folding that's been happening? Or is it just one big experiment?
The recent ATI GPU client runs on Windows only and uses nearly a full CPU core feeding the GPU with data. I run one SMP client in parallel, which uses the other three cores of the Phenom 9850.
The high CPU load of the GPU client is one problem the F@H team has been working on for quite some time. But when they release a new client with nVidia support, maybe they will have managed to minimize CPU utilization.
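My guess (I haven't seen the actual client code) is that it's the usual spin-wait issue: by default the CUDA host thread busy-polls while waiting for the GPU, which shows up as 100% of a core doing nothing useful. Asking the runtime for blocking synchronization lets that core sleep instead. A minimal sketch of the idea, with a made-up stand-in kernel:

Code:
#include <cuda_runtime.h>

// Hypothetical stand-in for whatever the real client actually runs.
__global__ void crunch_workunit(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * 1.000001f;   // dummy work
}

int main(void)
{
    // Default behaviour: the CPU thread spin-waits in cudaDeviceSynchronize(),
    // burning a full core. Requesting blocking sync makes it sleep on an
    // interrupt instead, freeing the core for an SMP client.
    // Must be set before the CUDA context is created.
    cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);

    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));

    for (int pass = 0; pass < 1000; ++pass) {
        crunch_workunit<<<(n + 255) / 256, 256>>>(d_data, n);
        cudaDeviceSynchronize();   // now yields the CPU instead of spinning
    }

    cudaFree(d_data);
    return 0;
}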
What does Folding@Home mean? :shrug::shrug:
Protein folding, to help find a cure for cancer etc.
Are the 8x00 series all ready for this??
Please excuse my ignorance, but what does Folding@home actually stand for? What is it? And how can it benefit "us" nVidia GPU users?
At least now i will actually consider a Nvidia GPU. :up:
http://img144.imageshack.us/img144/9...ohofx57wu7.jpg
GTX280 will be very fast...
Is the legend for that red column "Radeon HD3870"?
http://folding.stanford.edu/
Like in all distributed computing projects, the end "user" gets nothing. You basically donate computing power to the project.
No benefit because Nvidia users don't get cancer. At least I think that was their latest sales pitch...
If it's anything like what we studied in one of my math classes, it's a program that simulates all kinds of ways DNA strands or protein chains get themselves in knots. These knots or "misfolds" can cause all kinds of different diseases, and this program will help us to understand them. Instead of having to use extremely expensive supercomputers, they just get millions of people to donate their computing power. Every boost in speed like this is a nice boost in progress.
Folding @ Home is a distributed computing project dedicated to curing cancer and other diseases such as Parkinson's and Alzheimer's. For people that already "fold" this is great because now we can use the GPU power to get more points. :up:
Info: http://folding.stanford.edu/English/Science
How fast the client is doesn't matter too much, really. What's more important is:
a) The WUs themselves
b) CPU utilization
Current work units for the SMP client...well...suck. They take forever, are unstable, and are worth very few points. Also, the deadlines are shorter than they used to be. What I'm worried about is that the work units for the CUDA client will be large, slow, not worth much, crash constantly, and by the time you get done crashing and actually complete the unit, the deadline will be up. So after a day of percussive maintenance trying to get it to work, you will get exactly 0 points and Stanford can't use any of the WU you did calculate. That's what I'm most worried about.
Also, when the Radeon GPU2 client was first released, it would use up to a whole core on the CPU just to sort out the graphics card's own calculations...which meant anyone folding SMP on a dual or dual SMP on a quad was losing PPD due to a whole core gone. Stanford has since fixed that, but I hope that issue doesn't plague the CUDA client.