Hmmm.. maybe I should switch my 32-thread dual-Xeon machine to run Test4Theory tasks on all cores ?
but then you'd lose your #1 place
haha, nice try... but I've got like 50 more cores I could throw at it
Stats are in.......11,081 for the day
but it looks like we need a bit more production; we dropped down to 3rd in the dailies
Last edited by skycrane; 04-15-2014 at 09:59 AM.
Its not overkill if it works.
Based on RAC, XS is now #3! Not bad for a team with only 4 active crunchers. Congrats!
And happy Easter!
Just noticed this over the last few days... while only observational, it appears the more BOINC instances I run, the more virtual disk corruption takes place. I was trying to add a new core every few days, but I'm going to back down a little... kills me, but I lost about 48 hours across multiple instances over the last 2 days.
back up to #2 in the dailies, and a personal best of 7,122 for me today
Congrats to everyone who's putting up the points. I'm hoping to add a few more this week.
Hehe, well I've added a few more; the last 2 days we were over 20k, but dropped down to only 17k today.
I found a down-and-dirty easy way to add new comps... it takes about 2 minutes per core, and that's with typing everything in.
Let me know if y'all need it. I'm going to have it run on every core I've got and see if I can get the #1 spot in the project dailies.
Something like this? http://lhcathome2.cern.ch/test4theor...ad.php?id=1463
<cc_config>
<log_flags>
</log_flags>
<options>
<ncpus>1</ncpus>
<data_dir>c:\programdata\boinc1</data_dir>
</options>
</cc_config>
"C:\Program Files\BOINC\boinc.exe" --allow_multiple_clients --dir c:\programdata\boinc1\ --gui_rpc_port 31417
No, just a concise way to run the switches. Just increase the \boinc1 by one for each core, and the port 31417 as well.
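If you'd rather not type it all in per core, the recipe above can be scripted. A minimal sketch (not the poster's actual script, and the helper name `make_instances` is my own): create one data directory per extra client, drop the same cc_config.xml into each, and bump the RPC port by one each time. The BOINC paths and the base port 31417 come from the post; adjust for your install.

```python
from pathlib import Path

# Same cc_config as in the post above, minus the per-instance data_dir
# (the data directory is passed on the command line via --dir here).
CC_CONFIG = """<cc_config>
<log_flags>
</log_flags>
<options>
<ncpus>1</ncpus>
</options>
</cc_config>
"""

def make_instances(base_dir, count, first_port=31417):
    """Create boinc1..boincN data dirs and return one launch command each."""
    commands = []
    for i in range(1, count + 1):
        data_dir = Path(base_dir) / f"boinc{i}"
        data_dir.mkdir(parents=True, exist_ok=True)
        # Each instance gets its own copy of cc_config.xml.
        (data_dir / "cc_config.xml").write_text(CC_CONFIG)
        # Each instance gets its own data dir and RPC port.
        commands.append(
            '"C:\\Program Files\\BOINC\\boinc.exe" --allow_multiple_clients '
            f'--dir {data_dir} --gui_rpc_port {first_port + i - 1}'
        )
    return commands
```

Calling `make_instances(r"c:\programdata", 4)` would set up boinc1 through boinc4 and return four launch commands on ports 31417 through 31420, matching the one-port-per-core scheme above.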
You guys looked lonely so I figured I throw you a core.
Thanks for joining in the fun of smashing atoms
Wooohooo, top 100 for me, and the team is climbing nicely up the ranks... we're at 37 now. So everyone turn it up a bit, throw some more cores on it, bring your friends, and let's have some fun!
<app_config>
<app>
<name>vboxwrapper</name>
<max_concurrent>1</max_concurrent>
</app>
<app_version>
<app_name>vboxwrapper</app_name>
<plan_class>vbox64</plan_class>
<avg_ncpus>1.5</avg_ncpus>
</app_version>
</app_config>
I've also found this app_config that some people are using to give the VM 2 cores... not sure how it works, but I'm going to try it.
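For what it's worth, the <avg_ncpus> line in the file above is what tells the BOINC scheduler how many cores to budget for each VM task (1.5 as posted). A sketch of the same file asking for 2 full cores instead, with the app and plan-class names unchanged:

```xml
<app_config>
<app>
<name>vboxwrapper</name>
<max_concurrent>1</max_concurrent>
</app>
<app_version>
<app_name>vboxwrapper</app_name>
<plan_class>vbox64</plan_class>
<avg_ncpus>2.0</avg_ncpus>
</app_version>
</app_config>
```

Whether the VM actually uses the extra core (and whether credit scales with it) depends on the project's app, so treat this as something to experiment with.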
I'll try that too. Wondering if the credit granted is then doubled too...
Last edited by Mumak; 05-19-2014 at 10:25 AM.
I'm back for a while ... at least until I hit 250 K ... I've been bouncing around a bit lately but have got back on the CERN bandwagon ... Test4Theory, LHC classic sixtrack, and now the new ATLAS project
AFAIK, ATLAS is still in beta, and ATLAS tasks will run at vLHC@Home (ex-Test4Theory) soon.
In case y'all haven't realized it yet, they FINALLY have vLHC running 2 cores max now. I still haven't figured out why they won't let more than that run on a comp... let us worry about how much hardware we need to run it; your job is to write the program to run on every core...
end rant...
Also, vLHC and ATLAS will NOT run on the same comp. They fight each other for access to VirtualBox.
I *think* some of the contention comes when multiple apps try to write snapshots at the same time: it simply takes too long for the HDD to write, and the VM for one of the apps decides the host system cannot service it (which it can't), and then it magically shuts itself down for 24 hours (ugh).
On a 4670K with core at 4.1, NB at 3.9, RAM at 2133, OS Win7 on SSD... I had no issues running 1x vLHC and 1x ATLAS.
When WCG challenge is over I'll be back and see how far we can push the concurrency.