Quote Originally Posted by D_A View Post
Let's for a moment assume they are the same amount of work.

The credit is based on what the benchmark score is and how long the unit takes, with a "fudge factor" possibly, but more or less that's it.
If a Linux box is chewing through the equivalent amount of work in less time because it's more efficient, but posts the same or close to the same benchmark scores, then it will logically claim fewer points per unit. If the benchmark scores are lower on Linux (which I suspect might be the case, as the Windows compilers are more efficient for those processes), then it will also claim fewer points per hour. That could be verified one way or the other if you have benchmarks for the same version of the client on both Win 7 and Linux.

I'd suggest credit per hour is the metric to make the call on, as it takes more factors into account, and no matter how good the code is on any system, time passes at close enough to a constant pace. The difference between claimed and granted credit has me intrigued, though. It's enough to swing things either way depending on what's causing the variation.

The credit systems on any project are, pretty much, crap anyway. At best they're only an educated guess of what the work is worth on an "average" system. At worst ...
That makes sense: benchmark score * runtime = claimed credit.
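If that's roughly how it works, here's a minimal sketch of a benchmark-based claim. The scale factor and the averaging of the FPU and integer scores are my assumptions for illustration, not BOINC's actual cobblestone formula; the point is just that the claim scales with benchmark times runtime:

[code]
# Rough sketch: claim ~ average(FPU, integer benchmark) * CPU time.
# The scale constant is arbitrary here; BOINC's real formula uses its
# own "cobblestone" scale and per-project adjustments.

def claimed_credit(fpu_score, int_score, cpu_hours, scale=1.0):
    """Credit claim proportional to benchmark score times runtime."""
    benchmark = (fpu_score + int_score) / 2.0
    return scale * benchmark * cpu_hours

# Same benchmark scores, but the more efficient box finishes the
# same unit in less time, so it claims less per unit:
print(claimed_credit(3000, 11000, cpu_hours=4.0))  # slower box
print(claimed_credit(3000, 11000, cpu_hours=3.5))  # faster box, smaller claim
[/code]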
Interestingly enough, Windows doesn't score higher on the benchmarks, at least not overall. I just ran them on both machines; here's what I got:

L5640:

Ubuntu: 2913 FPU / 11816 Integer
Win 7: 3123 FPU / 10085 Integer

980X:

Ubuntu: 3184 FPU / 12899 Integer
Win 7: 3346 FPU / 10776 Integer

BOINC 6.10.45 on both Windows installs, BOINC 6.10.17 on both Linux installs
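Plugging those posted scores into the same rough formula (again assuming the claim scales with the average of the two scores times runtime, which is only a guess at how the project weights them) gives a quick per-hour comparison between the two OSes:

[code]
# Per-hour claim comparison using the benchmark scores above,
# with the assumed claim ~ avg(FPU, integer) * hours formula.
scores = {
    "L5640 Ubuntu": (2913, 11816),
    "L5640 Win 7":  (3123, 10085),
    "980X Ubuntu":  (3184, 12899),
    "980X Win 7":   (3346, 10776),
}

for name, (fpu, integer) in scores.items():
    per_hour = (fpu + integer) / 2.0  # claim per CPU-hour, arbitrary units
    print(f"{name}: {per_hour:.0f} per hour")
[/code]

On that assumption the Linux installs actually come out slightly ahead per hour, since the higher integer score more than offsets the lower FPU score, so the claimed-versus-granted gap would have to come from something else.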