
Thread: Linux vs. Windows on HCC - a study

  1. #51
    Xtreme Enthusiast
    Join Date
    Nov 2009
    Location
    Lévis,Québec,Canada
    Posts
    741
    Quote Originally Posted by pirogue View Post
    <rant>In some respects the points are bogus anyway, especially for overall BOINC points. I mean what's the point of having 20 5890s blasting away at endless, useless calculations figuring out if Collatz was right or not? So someone can say they have the most BOINC points or highest BOINC PPD or used the most KWH last month?</rant>
    Because those mathematical problems can be the missing piece in calculations for physics or chemistry, and by solving them you can also solve a lot of other science problems. That's why I am running Collatz and will be running Edges@home, yoyo@home, and ufluids. Yes, finding a cure for a disease is very nice, but as a young man with a lot of interest in science, I want to help figure out how and why everything happens. That's why I am not just running WCG and GPUGrid, though they still use most of my computer's resources.
    Quote Originally Posted by DDtung
    We overclock and crunch you to the ground

  2. #52
    Banned
    Join Date
    May 2009
    Posts
    676
    It could be that Linux is granted less credit because there are fewer dedicated systems, yet it is also claiming less credit even though it is working faster...

    ohh,
    found this:
    claimed credit = ([whetstone]+[dhrystone]) * wu_cpu_time_in_sec / 1728000
    http://www.boinc-wiki.info/Claimed_Credit
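    Just to sanity-check the units, here's that formula as a quick Python sketch (the benchmark numbers below are made-up placeholders, not real measurements):
    Code:
    # BOINC claimed credit, per the wiki formula above:
    # claimed = (whetstone + dhrystone) * wu_cpu_time_in_sec / 1728000
    def claimed_credit(whetstone, dhrystone, wu_cpu_time_sec):
        return (whetstone + dhrystone) * wu_cpu_time_sec / 1728000.0

    # Placeholder benchmarks summing to 15000, on a 4-hour work unit:
    print(claimed_credit(3600, 11400, 4 * 3600))  # -> 125.0 credits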

  3. #53
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    Quote Originally Posted by onex View Post
    so what you're actually saying is that if one CPU does the same work faster than a second CPU, it would get fewer points;
    they seem to grant credit per time crunched and not per work done..
    E: or else pfm is right.

    that is confusing.

    can't it actually be tested? i.e. - should a 980 at 4GHz be granted less credit than a 980 at 3GHz?
    Wow, I just read my previous post and I should have had my morning coffee before posting!

    When I compared the two, the Linux machine appears slower on the WCG benchmarks than the Windows machine (slightly, something like 5-10% when the benchmarks are run several times to obtain average values).

    However, the Linux machine appears to be able to complete work units faster than the Windows 7 x64 Ultimate machine. The Linux machine also generates about 15% less PPD than the Windows machine.

    Obviously the benchmark numbers have a large effect on the PPD, but I would have expected that Linux would have ultimately been more efficient and so, regardless of the benchmark values, would have produced more PPD.

    I would like to note that my test was to determine the maximum PPD, not necessarily the most work units performed.

    Overall, my little test concluded that Linux should have provided more PPD, but it did not. I came to this conclusion because I figured that since Linux appears to have a 5-10% benchmark penalty compared to Windows, Linux would get 5-10% less work completed and 5-10% less PPD. Instead, Linux appears to do up to 30% more work than Windows, but gets about 35% less PPD.

    Just using Jcool's numbers on L5640:

    Credit per hour per thread Win7: 22.18
    Credit per hour per thread Linux: 18.6

    Runtime per WU on Win7: 1.976 hours
    Runtime per WU on Linux: 1.227 hours

    Based on those numbers, it seems logical to say that the Linux benchmark doesn't do justice to the processor's horsepower.

    There's a lot of variability in how WCG arrives at the PPD value, so it could be that I'm out to lunch estimating it, or WCG somehow puts Linux at a disadvantage. I'm sure this effect isn't deliberate.

    I don't know if I still have my numbers at home. When I looked at them all, the conflicts made me dismiss my data as garbage, because I thought I had clearly been making some bad assumptions. Maybe I wasn't too far off.

    What would interest me, though, is a comparison of Windows 7 x64 Ultimate to Home Premium x64. Surely Ultimate has more overhead, so perhaps running Home Premium x64 would be better? A comparison of Windows 7 x64 Ultimate to Windows 7 Starter might also be interesting. I know that x64 gives more PPD, but maybe the Starter edition has so much less overhead that it comes out ahead in PPD. Without numbers I don't want to rely on that hypothesis.

    I don't know how many of you remember some of the threads/comments I made about WCG back before I switched from F@H, but this is why I asked those questions. There were no numbers, just people saying that this was how it was. I don't like hearing "this is how it is"; I like to see numbers or something beyond someone's opinion proving that it really is true. I'm an engineer, and my job is not about providing opinions, it's about providing numbers and facts to support them.

    Great work to everyone that has done this research. I'm sure more is to follow and maybe we can figure out what is really going on in terms of PPD.

    -Josh

  4. #54
    Xtreme Member
    Join Date
    Apr 2010
    Location
    Portugal
    Posts
    352
    Quote Originally Posted by onex View Post
    It could be that Linux is granted less credit because there are fewer dedicated systems, yet it is also claiming less credit even though it is working faster...

    ohh,
    found this:
    claimed credit = ([whetstone]+[dhrystone]) * wu_cpu_time_in_sec / 1728000
    http://www.boinc-wiki.info/Claimed_Credit
    Based on the benchmarks seen, that formula would indicate that the Linux system should claim more credit for the same runtime (without taking into account the number of work units done), yet that doesn't seem to happen...

  5. #55
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Quote Originally Posted by pirogue View Post
    Here's some more data for you guys to chew on.

    This is an unscientific comparison between two very similar machines using the first page of valid results for FAAH.

    Credit was granted based on 1 machine (no wingman) except for the last one in each list.
    Both run 64-bit versions.
    Both are overclocked to 4.0GHz.
    Memory speed is 1603MHz.
    Win7 is P6X58D Premium.
    Win7 benchmarks: 3631/11470
    Win7 is my main PC used for browsing, email, etc. and nothing really heavy-duty.
    Fedora is P6X58D-E.
    Fedora benchmarks: 3386/12638
    Fedora is dedicated cruncher.
    [snip]
    ...
    Who says the point system isn't perfect?
    What project is that? From the looks of it, not HCC. ONLY HCC is that much faster-running on Linux.. there's maybe a 10-15% difference on the other projects (according to some people over at WCG forums, I have only tested HCC so far).

    Quote Originally Posted by pfm3136 View Post
    It scores lower at FPU, higher at integer.. Could it be that points are mostly based on the FPU part of the benchmark?
    That would explain fewer points and more work done in the same time, as it seems that HCC can make really good use of Linux's better integer performance.

    This could explain the granted credit, but not the claimed credit, I think...
    Quote Originally Posted by onex View Post
    If they are indeed using that formula, then both scores seem equally important, meaning Linux should score better overall with its higher integer score.

    BOINC benchmarks are slowed by HT, so if I disabled HT, I'd get higher scores. I might try to re-run the BOINC bench with HT turned off (even though changing the number of active threads usually messes up the program...)
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  6. #56
    Xtreme Cruncher
    Join Date
    Mar 2008
    Location
    Los Angeles, CA
    Posts
    280
    Quote Originally Posted by onex View Post
    It could be that Linux is granted less credit because there are fewer dedicated systems, yet it is also claiming less credit even though it is working faster...

    ohh,
    found this:
    claimed credit = ([whetstone]+[dhrystone]) * wu_cpu_time_in_sec / 1728000
    http://www.boinc-wiki.info/Claimed_Credit
    That is a claimed-credit calculation though, not granted credit from WCG. I've been trying to determine how WCG calculates granted credit and cannot find much. Does anyone know how WCG calculates granted credit? Has anyone compared the claimed credit scores between Linux and Windows (not granted)?

    I, unfortunately, have not run any similar/identical systems on both OSes for any significant amount of time, so I don't have any data to compare with. Based on the BOINC claimed-credit calculation, though, I suspect that on average Linux crunchers will CLAIM more than Windows crunchers. When comparing GRANTED credits, though, it would be less, since so far it seems Linux machines are awarded less.

    -EDIT- I found an article on how granted credit is calculated, still reading it.
    Last edited by Someguy1982; 05-30-2010 at 10:12 AM.

  7. #57
    Registered User
    Join Date
    Apr 2010
    Location
    NC
    Posts
    389
    Quote Originally Posted by jcool View Post
    What project is that? From the looks of it, not HCC. ONLY HCC is that much faster-running on Linux.. there's maybe a 10-15% difference on the other projects (according to some people over at WCG forums, I have only tested HCC so far).
    Quote Originally Posted by pirogue
    This is an unscientific comparison between two very similar machines using the first page of valid results for FAAH.
    FightAIDS@Home

    I used it for the example because the granted credit doesn't depend on any other machines (usually).


    Edit: I was focused on Linux vs Windows and completely missed the HCC part of the thread title. At least it's not a problem specific to HCC. I'll shut up now unless someone asks me a specific question.
    Last edited by pirogue; 05-30-2010 at 10:29 AM.

  8. #58
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Doh.. right. Must have overlooked that part of the sentence...
    Good to know the PPD between Windows and Linux is equal there, though.
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  9. #59
    Xtreme Cruncher
    Join Date
    Mar 2008
    Location
    Los Angeles, CA
    Posts
    280
    First, I made an error earlier. Based on the claimed-credit calculation, I believe that on average Linux machines would claim less credit per result returned if they are spending less CPU time per WU. They should also return more work per day on average if they are completing WUs faster. So it should essentially come out in the wash.

    According to the WCG wiki the credit system works as follows...

    http://wcg.wikia.com/wiki/Credit_system

    The way that credit is awarded in a quorum of two is that the two claimed credits are compared and if they are within 30% of each other, then they are averaged and the average value is granted. Over 85% of workunits have the granted credit determined this way.

    If the two claimed credit values are further than 30% apart, then the code looks at a field in the database which stores the recent average credit granted per second for each computer. Whichever computer's claimed credit per second for the workunit is closer to their recent average credit granted per second has its claimed credit used as the credit granted for the workunit.
    So based on all this, per WU, Linux machines should be granted less credit. They should, however, be completing more work per day, since they take less time to complete the work. Earlier someone mentioned that there may be some Linux servers which crunch, but only part time. Perhaps this is skewing the granted credit for people crunching on dedicated Linux machines?
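    To make that quorum rule concrete, here's a rough Python sketch of it as the wiki describes it. The variable names and my reading of "within 30%" are guesses for illustration, not WCG's actual code:
    Code:
    # Sketch of the WCG quorum-of-two rule quoted above. Not WCG's code;
    # names and the 30% test are one plausible reading of the wiki text.
    def granted_credit(claim_a, claim_b, cpu_sec_a, cpu_sec_b,
                       recent_avg_per_sec_a, recent_avg_per_sec_b):
        lo, hi = sorted((claim_a, claim_b))
        if hi <= lo * 1.30:                  # claims within 30%: grant the average
            return (claim_a + claim_b) / 2.0
        # Otherwise grant the claim whose credit/second is closer to that
        # host's recent average granted credit per second.
        dev_a = abs(claim_a / cpu_sec_a - recent_avg_per_sec_a)
        dev_b = abs(claim_b / cpu_sec_b - recent_avg_per_sec_b)
        return claim_a if dev_a <= dev_b else claim_b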

    Also, while reading through some of this information... it seems like we could "predict" BOINC PPD using the standard credit calculation.

    Code:
    (([whetstone]+[dhrystone])*86400/1728000)*[number of threads]
    Where 86400 is the total number of seconds in a day. I've checked it against all my crunchers and in most cases it is close. My quad socket is way off, though.
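    For anyone who wants to try it, here's that estimate in Python (I've plugged in pirogue's Win7 benchmark numbers from earlier in the thread as example inputs, with 8 threads assumed):
    Code:
    # Predicted BOINC PPD from the standard credit formula above.
    # 86400 = seconds in a day; inputs are example values from this thread.
    def predicted_ppd(whetstone, dhrystone, n_threads):
        return (whetstone + dhrystone) * 86400 / 1728000.0 * n_threads

    print(predicted_ppd(3631, 11470, 8))  # pirogue's Win7 benches, 8 threads: ~6040 PPD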

    -EDIT- Just realized this thread is supposed to be centered around HCC, sorry!

  10. #60
    Xtreme Member
    Join Date
    Apr 2010
    Location
    Portugal
    Posts
    352
    Actually there's a note in here about the claimed credit:

    Note: There is an additional refinement in that the Project can determine and specify the ratio of Floating Point to Integer mathematics, with a fraction that is used to scale the benchmark multipliers. With the variable "a" representing the scaling fraction, this gives us:

    claimed credit = ( a * whetstone + (1 - a) * dhrystone) * wu_cpu_time_in_sec / 1728000
    So it might not be as straightforward as initially thought.
    It depends on the real formula used for the WCG project, which might weight FPU performance more heavily, and that might explain Linux claiming less credit for the same runtime.
    If someone could find the real formula used for the WCG project, maybe that could shed a little more light on this matter.
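    As a rough sketch of how that could play out, here's the scaled formula in Python. WCG's real value of "a" is unknown, so the 0.9 below is a pure guess chosen to show how an FPU-heavy weighting could flip the comparison (benchmarks are pirogue's from earlier in the thread):
    Code:
    # Scaled claimed-credit formula from the note above. 'a' weights
    # whetstone (FP) against dhrystone (integer); WCG's real 'a' is unknown.
    def claimed_credit_scaled(a, whetstone, dhrystone, wu_cpu_time_sec):
        return (a * whetstone + (1 - a) * dhrystone) * wu_cpu_time_sec / 1728000.0

    # One hour of CPU time, with a guessed FP-heavy weighting a = 0.9:
    print(claimed_credit_scaled(0.9, 3631, 11470, 3600))  # Win7:   ~9.20
    print(claimed_credit_scaled(0.9, 3386, 12638, 3600))  # Fedora: ~8.98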


    Nonetheless, and in spite of whatever might be found about points, I've already burned Ubuntu to a CD and will switch my Q6600 over next weekend, as it'll get more work done in the same time.
    If we can accelerate and cut the time to finish a project in half, why not?
    Last edited by pfm3136; 05-30-2010 at 01:03 PM.

  11. #61
    Xtreme Cruncher
    Join Date
    Mar 2008
    Location
    Los Angeles, CA
    Posts
    280
    Ahh, I didn't see that. Well, I'll keep looking around and if I find anything useful I'll post it.

  12. #62
    Banned
    Join Date
    May 2009
    Posts
    676
    Overall, my little test concluded that Linux should have provided more PPD, but it did not. I came to this conclusion because I figured that since Linux appears to have a 5-10% benchmark penalty compared to Windows, Linux would get 5-10% less work completed and 5-10% less PPD. Instead, Linux appears to do up to 30% more work than Windows, but gets about 35% less PPD.
    the issue with Linux claiming less credit probably comes down to how the whetstone+dhrystone sum trades off against the CPU time per WU in seconds;
    i.e., Linux gets a higher whetstone+dhrystone sum, but its shorter runtime can outweigh that advantage when multiplying and dividing by 1728000.
    Looking closer at pirogue's results (last page), the whetstone+dhrystone sum is 6.39% higher for Linux while the average calculation time is ~6.22% shorter (add the spread between WUs and it comes out about the same).
    i.e., if a WU runs 4 hours on Windows (14,400 sec) and 6.3% less on Linux (13,464 sec), and the whetstone+dhrystone sums differ by the same 6.3%, 15001 vs 16024, then:
    Windows: (15001*14400) / 1728000 = 125.008
    Linux: (16024*13464) / 1728000 = 124.85...

    so what we can see here is that there is a difference, even though it is slight, which means that even though Linux is the faster cruncher, it can claim fewer points.
    As for the granted points, it is indeed possible that most heavy crunchers are using Windows, on OC'ed systems, 24/7!
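    Plugging those numbers into the formula confirms the near-tie (same figures as above, just run through Python):
    Code:
    # Reproducing the comparison above: Linux benches ~6.3% higher but
    # finishes ~6.3% faster, so the two effects almost cancel out.
    def claimed(bench_sum, cpu_sec):
        return bench_sum * cpu_sec / 1728000.0

    print(claimed(15001, 14400))  # Windows: ~125.01
    print(claimed(16024, 13464))  # Linux:   ~124.85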

    E: pfm, just saw your post;
    that might explain the differences up and down between projects, and the inaccuracies of the original formula.

    now,
    have a look here:
    Points are calculated in a two-step process which attempts to give a consistent number of points for similar amounts of research computation. First, the computational power/speed of the computer is determined by periodically running a benchmark calculation. Then, based on the central processing unit (CPU) time spent computing the research result for a work unit, the benchmark result is used to convert the time spent on a work unit into points. This adjusts the point value so that a slow computer or a fast computer would produce about the same number of points for calculating the research result for the same work unit. This value is the number of point credits "claimed" by the client.
    https://secure.worldcommunitygrid.or...ame=points#177
    quite communistic, isn't it..?
    yet, on second thought,
    if they went purely by CPU power, a slower CPU would get a lousy result and claim only a few points while a faster CPU got a high one,
    and the granted credit between the two would then push the slower CPU up significantly and pull the faster CPU down significantly.
    that would not be good for future compatibility either: as some processors get faster and more advanced while others stay slow, it would penalize their efforts even further compared to older CPUs.
    the basic advantage of using a faster CPU, then, comes not from its ability to process any single WU faster, but from its ability to process more WUs per day;
    that's its actual benefit.

    That is a claimed-credit calculation though, not granted credit from WCG.
    what we care about is the claimed credit, because granted credit does not represent anything we can work with;
    granted credit works by comparing the claimed credit for a WU across its crunching participants and taking some kind of average.
    that is why people have said Linux could be suffering from a lack of OC'ed machines and dedicated crunchers in comparison to Windows, which is much more popular, even more so among casual users.

    -EDIT- Just realized this thread is supposed to be centered around HCC, sorry!
    p.s. - guys, it seems to be OK to flow between different projects; otherwise it would be difficult for everyone to compare results when not using HCC specifically.
    As long as it contributes to the thread and Jcool doesn't mind in particular, go on with it.

  13. #63
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Feel free to compare Linux and Windows on other projects as well.. we've already established that Linux is faster in HCC in post 1.
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  14. #64
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    I think a point weight on the WU based on expected completion time should be used. The longer they are expected to take, the more points they are worth.

    That way benchmark numbers don't matter; actual work performed does.

    All along the watchtower the watchmen watch the eternal return.

  15. #65
    Xtreme crazy bastid
    Join Date
    Apr 2007
    Location
    On mah murder-sickle!
    Posts
    5,878
    The benchmark is, in theory at least, important because it gives an impression of the work potential of a given machine. Hence the work potential and the actual time spent should be very good factors in determining what a machine's expended effort is.

    The other alternative is to make all work units worth a fixed amount of credit, based on their completion times on a single, standardised machine (which is how Folding@Hades operates, from memory). That has the advantage of being "fair", in that a work unit is worth x no matter what machine or OS does the work. The trick (and I'd bet some substantial arguments) comes in determining what the "standard" machine is.

    edit: Disclosure - Yes I realise that if all HCC work units were worth the same point value regardless of time, Linux machines would SLAUGHTER Windows machines (AT THE MOMENT. That could change with the next application release, who knows.)
    Last edited by D_A; 05-30-2010 at 03:54 PM.


  16. #66
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840

    Are we sure Microsoft isn't involved in this?

    Forgive me for being a conspiracy theorist, but I've been pondering this all afternoon. Believe me, I am not generally a conspiracy theorist. I just don't see how Linux could be as fast as it appears, yet still have less PPD than Windows.

    Is it at all possible that Microsoft is somehow involved in the reason why Windows gives more points than Linux, despite all of the information provided? I did look at the WCG "partners" and Microsoft is not directly listed, but Microsoft has been known to be involved via subsidiaries. There are 426 partners listed on WCG, so I'm not sure we'd be able to identify whether Microsoft is a partner at all. If Microsoft were involved in this, I wouldn't be too surprised. It definitely wouldn't be the first time that MS has been caught involving itself in projects solely to make its products look better than the competition.

    Truth be told, if people were building lots of machines just to fold (and gee, nobody does that here, right?) a lot of people would be buying extra copies of Windows, wouldn't they? That's A LOT of money if you think about it, so Microsoft does stand to lose some easy money from those who build a farm of computers to crunch, if Linux were used instead of Windows...

    I want to reiterate that this is ALL open for conjecture, I have ZERO proof of this, just a hypothesis.

  17. #67
    Xtreme Cruncher
    Join Date
    Mar 2008
    Location
    Los Angeles, CA
    Posts
    280
    Quote Originally Posted by onex View Post
    what we care about is the claimed credit, because granted credit does not represent anything we can work with;
    granted credit works by comparing the claimed credit for a WU across its crunching participants and taking some kind of average.
    that is why people have said Linux could be suffering from a lack of OC'ed machines and dedicated crunchers in comparison to Windows, which is much more popular, even more so among casual users
    I brought it up because both Linux machines I run take heavy hits on granted credit vs. claimed on a couple of projects. Sometimes it will grant me as little as 18% of the claimed credit, so initially I suspected that Linux was somehow hit harder points-wise by these projects, for some reason I cannot determine. I don't seem to have a similar problem with Windows.

    What it boils down to is that I thought something in the calculation of granted credit vs. claimed credit might be suspect. And since the granted credit is what actually gets reflected in WCG point totals, it seems to me that BOINC claimed credit is worthless if the granted credit is going to cut the score down by 82%.
    Last edited by Someguy1982; 05-30-2010 at 06:50 PM. Reason: Granted Points much lower then 50%

  18. #68
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    Quote Originally Posted by D_A View Post
    The benchmark is, in theory at least, important because it gives an impression of the work potential of a given machine. Hence the work potential and the actual time spent should be very good factors in determining what a machine's expended effort is.

    The other alternative is to make all work units worth a fixed amount of credit, based on their completion times on a single, standardised machine (which is how Folding@Hades operates, from memory). That has the advantage of being "fair", in that a work unit is worth x no matter what machine or OS does the work. The trick (and I'd bet some substantial arguments) comes in determining what the "standard" machine is.

    edit: Disclosure - Yes I realise that if all HCC work units were worth the same point value regardless of time, Linux machines would SLAUGHTER Windows machines (AT THE MOMENT. That could change with the next application release, who knows.)
    Basing the credit per WU on a single machine would be worse than what we've got now.

    2-hour WU: 2 points
    2.5-hour WU: 2.5 points
    3-hour WU: 3 points...

    If you finish the 3-hour WU in 2 hours, you get 3 points..

    All along the watchtower the watchmen watch the eternal return.

  19. #69
    Xtreme crazy bastid
    Join Date
    Apr 2007
    Location
    On mah murder-sickle!
    Posts
    5,878
    Quote Originally Posted by STEvil View Post
    Basing the credit per WU on a single machine would be worse than what we've got now.

    2-hour WU: 2 points
    2.5-hour WU: 2.5 points
    3-hour WU: 3 points...

    If you finish the 3-hour WU in 2 hours, you get 3 points..
    Yeah, you'd get 3 points, but you'd get them in two hours instead of three. If your machine is 50% faster than the standard then you'd be getting 50% more credit. How exactly is that worse than what we have now?


  20. #70
    Xtreme Cruncher
    Join Date
    Feb 2009
    Location
    Iowa, USA
    Posts
    705
    I think that method sounds better...
    Main: i7-930 @ 2.8GHz HT on; 1x GIGABYTE GTX 660 Ti OC 100% GPUGrid
    2nd: i7-920 @ 2.66GHz HT off; 1x EVGA GTX 650 Ti SSC 100% GPUGrid
    3rd: i7-3770k @ 3.6GHz HT on, 3 threads GPUGrid CPU; 2x GIGABYTE GTX 660 Ti OC 100% GPUGrid
    Part-time: FX-4100 @ 3.6GHz, 2 threads GPUGrid CPU; 1x EVGA GTX 650 100% GPUGrid

  21. #71
    Xtreme Member
    Join Date
    Apr 2010
    Location
    Portugal
    Posts
    352
    Quote Originally Posted by Someguy1982 View Post
    I brought it up because both Linux machines I run take heavy hits on granted credit vs. claimed on a couple of projects. Sometimes it will grant me as little as 18% of the claimed credit, so initially I suspected that Linux was somehow hit harder points-wise by these projects, for some reason I cannot determine. I don't seem to have a similar problem with Windows.
    Well, I have the same problem as you on most of the work units, and I'm using Windows; not all the WUs, but most. It seems to depend too much on the wingman.
    As for FAAH, which doesn't normally use a wingman, sometimes it seems like an obscene robbery of points.

    Also, if you check the statistics for the different projects within WCG, you might note that there's a huge difference in points per hour between some of them.


    Quote Originally Posted by D_A View Post
    Yeah, you'd get 3 points, but you'd get them in two hours instead of three. If your machine is 50% faster than the standard then you'd be getting 50% more credit. How exactly is that worse than what we have now?
    Yep, I also think it would be more just/fair.
    Last edited by pfm3136; 05-30-2010 at 09:16 PM.

  22. #72
    Banned
    Join Date
    May 2009
    Posts
    676
    Windows: (15001*14400) / 1728000 = 125.008
    Linux: (16024*13464) / 1728000 = 124.85...

    so what we can see here is that there is a difference, even though it is slight, which means that even though Linux is the faster cruncher, it can claim fewer points.
    actually, this is not true:
    by simple arithmetic, ([D]+[W]) * WU_sec / 1728000 stays nearly equal if one factor goes up ~6% while the other goes down ~6% (1.063 * 0.937 ≈ 0.996), so the two results should come out about the same.
    even though the D+W sum is higher on Linux than on Windows, Linux performs the task in proportionally less time, so the multiplication in the formula gives ~the same result;
    i.e., as the pointing-scheme explanation says, a faster processor should get about the same points as a slower processor for the same WU.
    maybe they do the comparison between the WUs' PPD in order to remove any chance of people tampering with their results..

    The benchmark is, in theory at least, important because it gives an impression of the work potential of a given machine.
    it is said that they bench the machine every now and then to measure its current performance.

    The other alternative is to make all work units worth a fixed amount of credit, based on their completion times on a single, standardised machine (which is how Folding@Hades operates, from memory). That has the advantage of being "fair", in that a work unit is worth x no matter what machine or OS does the work. The trick (and I'd bet some substantial arguments) comes in determining what the "standard" machine is.
    sounds like a good alternative.

    What it boils down to is that I thought something in the calculation of granted credit vs. claimed credit might be suspect. And since the granted credit is what actually gets reflected in WCG point totals, it seems to me that BOINC claimed credit is worthless if the granted credit is going to cut the score down by 82%.
    yeah,
    you lose a decent portion of your CPU's potential.
    maybe it has something to do with Linux;
    still, this pointing scheme does create some problems:
    by the original formula, two CPUs should give ~the same results, yet what actually happens is that one drags the other up or down.
    as an aside,
    it is not the points themselves that really matter, yet they give the user a good estimate of their work; it is like watching a benchmark.

    Josh,
    currently, I doubt it...
    Last edited by onex; 05-30-2010 at 09:55 PM.

  23. #73
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    Quote Originally Posted by D_A View Post
    Yeah, you'd get 3 points, but you'd get them in two hours instead of three. If your machine is 50% faster than the standard then you'd be getting 50% more credit. How exactly is that worse than what we have now?
    Worse?

    It's better. If you do more WUs per unit of time, you generate more points. More points drive people toward better efficiency.. more efficiency, more work done. Points would be based on work done, not theoretical performance.

    All along the watchtower the watchmen watch the eternal return.

  24. #74
    Xtreme crazy bastid
    Join Date
    Apr 2007
    Location
    On mah murder-sickle!
    Posts
    5,878
    Ok, now I'm confused.

    The whole point of basing returned credit on how long a "standard" machine takes to do the work unit is completely congruous with what you proposed. The system we have now tries to give credit based on how long the unit takes and the assumption that different machines will take different times to do it. Running a sample unit on a "standard" machine and then concluding that all the units in that batch are worth n points is what I'm suggesting and exactly the same as what you counter-proposed. In both cases, a given unit is worth a fixed value. If you throw a more powerful machine at it and do it quicker, then you end up doing more units and hence get more PPD. No crazy benchmark formulae, no quorum to decide credit; if it's valid you get n. Period. If you do them faster then you get more ns, and more credit than the machine that only does one a week instead of six hundred a day.
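    A toy calculation, with made-up numbers, of why a fixed value per unit still rewards the faster machine:
    Code:
    # Fixed-credit-per-WU scheme sketched above: a unit is worth n points
    # no matter which machine or OS returns it. Numbers are invented.
    n_points_per_wu = 3.0            # set by timing the unit on the "standard" machine
    standard_hours_per_wu = 3.0

    for speedup in (1.0, 1.5):       # 1.5 = a machine 50% faster than standard
        wus_per_day = 24.0 / (standard_hours_per_wu / speedup)
        print(speedup, wus_per_day * n_points_per_wu)  # 24.0 PPD vs 36.0 PPD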


  25. #75
    Banned
    Join Date
    May 2009
    Posts
    676
    Running a sample unit on a "standard" machine and then concluding that all the units in that batch are worth n points is what I'm suggesting
    maybe I'm missing something here, yet
    WCG is doing some 600K results per day, and it would need 400 batches for that (1,500 WUs per batch);
    the other thing is that not all WUs within a batch are similar, so they cannot say every unit within a specific batch is worth x points.
    they might be able to do an estimate, yet even in the BOINC manager, estimates change as the WU is being processed...
