I know you folks don't know me well, so if you disagree please just let me know and I'll move along :-)
I think a quicker and more accurate way to create direct comparisons would be to process the exact same WUs (shut BOINC down, copy and paste the BOINC data folder, suspend network activity) and then look at total runtime for the entire set. The set would only need to be as large as the maximum number of WUs that can be processed in 12 hours, because anything beyond that would not tell us anything different from a statistical perspective.
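For what it's worth, the snapshot step could even be scripted. Here is a minimal sketch in Python, assuming a default Linux install path (on Windows the data folder is typically C:\ProgramData\BOINC, and the ".baseline" name is just my placeholder):

import subprocess, shutil, time

# Ask the running client to shut down cleanly, then snapshot the data folder
# so the exact same WU set can be re-run under different settings.
subprocess.run(["boinccmd", "--quit"], check=True)
time.sleep(30)  # crude wait for the client to finish exiting
shutil.copytree("/var/lib/boinc-client",
                "/var/lib/boinc-client.baseline")  # pristine copy to restore before each test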
This eliminates all the caveats of trying to use points as a comparative basis (each WU set is completely different, wingmen, WCG changing the target runtime length, when you last ran the BOINC benchies, the list goes on). It then becomes very easy to identify changes that produce significant improvements ... BCLK, vCore, QPI, RAM amount, bandwidth, timings, HDD vs. SSD, graphics, power usage at 110 VAC vs. 220 VAC. You can add any data element you want, and as long as you continue to use the same WU set you will have directly comparable results. I know there is a lot of conventional wisdom on some of these elements, so we could skip testing those (or do a quick sanity check to make sure) and move on to the ones we do not already have a handle on.
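Totaling the runtimes can be automated too. If I remember right, the client writes a per-project job_log_<project>.txt with elapsed seconds after an "et" tag; here is a minimal sketch that sums it and compares two runs of the same WU set (treat the field layout as my assumption and sanity-check it against your own log first):

# Sum elapsed time ("et" field, seconds) across every WU line in a job log.
def total_elapsed_hours(path):
    total = 0.0
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if "et" in tokens:
                total += float(tokens[tokens.index("et") + 1])
    return total / 3600.0

baseline  = total_elapsed_hours("job_log_baseline.txt")   # same WU set, stock settings
candidate = total_elapsed_hours("job_log_candidate.txt")  # same WU set, new settings
print(f"runtime change: {(candidate - baseline) / baseline:+.2%}")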
Now someone will come along and ask if we can swap WUs between different OSes; the answer is no, and that is where we have to handle the averages as best we can. First, I would suggest that comparisons be done within compatible architectures. Instead of trial and error, I could email WCG, as they already have these groups defined. Once there is a *best* consensus within a group, we could start to compare runtimes from X number of WUs between groups for the same subproject ... I am still trying to stay away from points because of all the issues mentioned. We will certainly need to refine this strategy because the RICE and CMD2 subprojects have soft and hard stops based on time limits if you have not finished the WU. We can figure out those deets when we get there.
If there is interest I will start to lay out what elements we want to test and create some form of data structure (Excel or maybe some free SQL tool). I think this will not only provide results we can all benefit from right away but also position us with a testing methodology to quickly evaluate new platforms, OSes, crunching-efficiency theories, etc.
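If we go the SQL route, here is a minimal sketch of what the results table might look like, using Python's built-in sqlite3 (the column list is just a starting point, not a final schema):

import sqlite3

# One row per test run: the hardware/settings knobs plus the total runtime
# for the duplicated WU set, so any two rows are directly comparable.
con = sqlite3.connect("wu_tests.db")
con.execute("""
CREATE TABLE IF NOT EXISTS runs (
    run_id      INTEGER PRIMARY KEY,
    host        TEXT,     -- e.g. 'i7 920 C0'
    os          TEXT,
    bclk        INTEGER,
    multiplier  INTEGER,
    vcore       REAL,
    ram_mhz     INTEGER,
    storage     TEXT,     -- 'HDD' or 'SSD'
    wu_set      TEXT,     -- identifier for the duplicated WU set
    total_hours REAL      -- total runtime for the whole set
)""")
con.commit()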
All that being said, thanks for your patience. Is there a configuration you would like me to test on my i7 920 C0 (currently at 19x207)? And are there any suggestions on a cheap and accurate multimeter?
I know the original focus is on power efficiency, but for those of us not running farms and/or not really concerned about power costs, I would suggest that a 1% improvement in runtime is significant but also at the bottom limit of what we should be looking for. Roughly, for an i7:
8 cores * 24 hours = 192 core-hours per day
192 * 1% = 1.92 core-hours per day
With an average WU of roughly 4 hours, that works out to one extra WU every two days, and each 12-hour test that finds a 1% runtime improvement pays for itself in 12 / 1.92 ≈ 6.25 days ... an ROI of just under one week. Yes, I'm convinced this is a good low-end target.
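For anyone who wants to plug in their own numbers, here is the same math in a few lines of Python (the 4-hour average WU length is my rough assumption; swap in your own):

# ROI estimate for a runtime-improvement test; every value here is a
# placeholder you should replace with your own measurements.
cores = 8            # logical cores on an i7 920 with HT
improvement = 0.01   # 1% runtime improvement
avg_wu_hours = 4.0   # assumed average WU length in core-hours
test_hours = 12      # length of one comparison test

core_hours_per_day = cores * 24                      # 192
saved_per_day = core_hours_per_day * improvement     # 1.92 core-hours/day
extra_wus_per_day = saved_per_day / avg_wu_hours     # ~0.5, i.e. one WU every 2 days
break_even_days = test_hours / saved_per_day         # ~6.25 days

print(f"saved per day:     {saved_per_day:.2f} core-hours")
print(f"extra WUs per day: {extra_wus_per_day:.2f}")
print(f"break-even:        {break_even_days:.2f} days")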