You misunderstood. We commonly suspend the project to realign WUs when they get out of sync. After the pause, I forgot to resume the project to align the WUs.
I'll give it a try, anyway! With many WUs running, you get the most WUs processed by stressing the GPU and CPU to the fullest. The idea is to offset the WUs so that enough are in their CPU phase at any given time, but not so many that the other WUs catch up. I have 16 WUs running and I try to keep them in groups of 4, so that one group is always being processed by the CPU. That leaves the GPU 12 WUs to process most of the time instead of 16, resulting in higher PPD output. This isn't an exact description of what really happens, but the point is to load up the CPU and GPU as much as possible, and offsetting WUs helps to do that.
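If you want to set the offsets without babysitting the Manager, boinccmd (which ships with the client) can suspend and resume individual tasks from a script. A rough sketch, with the task name as a placeholder you'd fill in per task:

   # list the tasks and their names
   boinccmd --get_tasks
   # suspend one group of 4 so the others get a head start (repeat per task)
   boinccmd --task http://www.worldcommunitygrid.org/ <task_name> suspend
   # a few minutes later, once the groups are offset, let it run again
   boinccmd --task http://www.worldcommunitygrid.org/ <task_name> resume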
And I thought I was doing something wrong. I've been doing that for weeks. Thanks for the info. Makes me feel better.
For some reason, for the last three days I haven't been able to run more than two GPU WUs at a time regardless of the app_config file. Anyone else?
I'm still running app_info and still running 8 threads easy. Hey, it works, and I didn't want to screw up a good thing :p:
Side note - I don't have very many resends in my queue (those are the _2 labeled ones, right?).
Running 19 now. I did have a problem with BOINC Manager. My estimated WU times were way off, causing me to get 600 pages of work :shocked:
App_config is much easier than the dated app_info. I can set the number of WUs to run, save the file, and tell the BOINC client to re-read the config file, and it runs that many tasks without shutting the client down. It's very convenient. I suppose it doesn't really matter with a few days left.
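For anyone who hasn't made the switch, a minimal app_config.xml sketch (hcc1 is the short app name for this project; a gpu_usage of 0.2 makes 5 tasks share each GPU, so tweak it to get the count you want):

   <app_config>
      <app>
         <name>hcc1</name>
         <gpu_versions>
            <gpu_usage>0.2</gpu_usage>
            <cpu_usage>0.2</cpu_usage>
         </gpu_versions>
      </app>
   </app_config>

Save it in the projects\www.worldcommunitygrid.org folder and use the Manager's "Read config files" item to apply it without a restart.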
Yeh, when it is fetching tasks the BOINC client (v7.0.44 onwards) on my GPU machine sometimes sets the time estimates for the WUs that are Ready to Start to 3m43s, causing it to request too many. Then I'll look again and the time estimates will be back to their proper value of around 13m (for 13 tasks on my 7870).
If you want to count the number of HCC tasks in your cache, start a command prompt (Windows) or shell (*nix). cd to the WCG project directory (where your app_config.xml file lives). Do "dir x*.zip" (no quotes) (Windows) or "ls x*.zip | wc -l" (*nix). That should show the number of files that match x*.zip.
If there's a problem with the *nix version, try:
ls | grep '^x.*\.zip$' | wc -l
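On Windows, dir's summary line already shows the count, but if you want just the bare number, this old cmd idiom should work too (untested here):

   dir /b x*.zip | find /c /v ""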
PS: Only had 6500 jobs so I bumped the cache up a bit ;)
Are we there yet? I seem to get resend tasks only now..
I just realized that going from stock to overclocked settings on my 7950, power consumption goes from 257W to 512W. Power went up to almost 200% of stock while PPD output only went up to about 130%, so I'm getting roughly two-thirds of the PPD per watt. Talk about efficiency!
I didn't notice that much of a power increase on mine going from 800 to 1000.
I now have a _3. http://serve.mysmiley.net/sad/sad0012.gif
I have mostly _0s and _1s, with a few _2s thrown in for seasoning.
That's normal (although not very common). It just means two people errored out or aborted the same WU, or someone sent back an errored unit that got marked "inconclusive". But it does signal the end...
Do we have any idea where to put the GPUs once this is over? Thought I saw something on that, but I can't find it at the moment. I know GPUGrid is great for Nvidia, but what about ATI? There are a bunch of projects, but what's the most worthy (read: humanitarian) one that runs on ATI?
(BTW, hello again! Been a while)
:wave:
Otis, you need to fix your Signature. FYI - I have 10 AMD 7770 GPU's.
I've signed up with the XtremeSystems Einstein@Home team, and the Milkyway@Home team, and the Seti@Home team.
http://einstein.phys.uwm.edu/index.php
http://milkyway.cs.rpi.edu/milkyway/index.php
http://setiathome.berkeley.edu/index.php
Asteroids isn't doing GPUs right now.
Is anyone receiving large amounts of resends yet? There are estimated to be less than 2.5 days of available work.
For real?? I thought there was something like a week left...
I"ve got WU's on one 'puter until Monday, and the other GPU enabled one has WU"s until Wed. I am starting to see some of the resend/2's now.
I have w/u's dated 5/1 and all are new.
Requesting tasks for ATI ==> No tasks are available for Help Conquer Cancer
Last task in cache due 2/5/13 04:00:25 AM local time, ie 1 May 2013 18:00:25 UTC.
It would have been fetched 24 Apr 18:00 UTC, about 10 hours ago.
I bumped up the work cache a few days ago so no longer qualify as a "fast returner" & won't get resends.
Oh, well:
C:\BOINC_Data\projects\www.worldcommunitygrid.org> dir x*.zip
... says ...
6384 File(s) 647,979,278 bytes
so they'll still be crunching for about 5 days.
BTW, working with such a long cache seems to really slow down the BOINC daemon. It takes ages to suspend/resume all tasks when adjusting the startup offsets of the running tasks. In task manager, boinc.exe is currently showing 19h14m CPU time used! Mind you, it's been running continuously for many weeks :)
Not sure what to do with the graphics card either. Its host Q9650 will be retired, as I just put a new 3770K on line. Bitcoin?
[Edit:]HCC GPU tasks are flowing again ... Might be one of your last chances to greedily grab some extras. But maybe not :shrug: [/Edit]
"No tasks available" starting to pop up occasionally in the event log, and when it does download tasks they are mostly resends due within the next couple days, putting off the regular ones that are due 5/1. Hm. This looks like the end :(
Remember to abort non-GPU tasks so they don't start running at high priority and ruin your productivity :S
Event Log excerpt:
4/24/2013 9:01:13 PM | World Community Grid | No tasks sent
4/24/2013 9:01:13 PM | World Community Grid | No tasks are available for Say No to Schistosoma
4/24/2013 9:01:13 PM | World Community Grid | No tasks are available for Drug Search for Leishmaniasis
4/24/2013 9:01:13 PM | World Community Grid | No tasks are available for The Clean Energy Project - Phase 2
4/24/2013 9:01:13 PM | World Community Grid | No tasks are available for Help Conquer Cancer
4/24/2013 9:01:13 PM | World Community Grid | No tasks are available for Human Proteome Folding - Phase 2
4/24/2013 9:01:13 PM | World Community Grid | No tasks are available for FightAIDS@Home
Huh, POEM is actually pretty good - any idea on how efficiently it runs on ATI?
And start a team! XS is very supportive of new start ups, especially with medical related projects.
My biggest problem with F@H is (last time I checked) it didn't use ATI efficiently. It was much better to use an NVIDIA GPU... have they fixed that?
(IMO BOINC is also preferable, but that's a minor issue)
Yeah, I know I gotta fix it... just don't remember the place and haven't cared enough to dig around for it. Asked SNURK, should be fixed shortly.
Actually, I'm going to start another thread on this specifically.
Thanks guys! (And nice to meet y'all two!)
I'm not sure on F@H, I haven't played with that on my GPU for a long time. I kinda feel bad about it, since that was my very first project and all :(
I tried running it on my GTS250 on my work PC, but it totally made that PC unusable while it was running.
*edit* I guess they found some more new work as I just got a whole bunch of _0 and _1 units all of a sudden.
F@H has an OpenCL core for both GPU brands, and AMD cards are used effectively. The only thing left for F@H is GPU support on Linux.
I'm not sure what to do after HCC either. The Athlon machine is getting disbanded. The 7770 might get a temp home in an 8-core PD machine until its intended use makes it homeless again.
What I can't find is a project that can use the ATI GPU as effectively as WCG's hcc1 did.
I can run the hcc1 tasks in a 0.2C + 0.2ATI configuration. So far, all the GPU taskers I've looked at require a 0.2C + 1.0ATI configuration.
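For reference, that 0.2C + 0.2ATI shape is what these app_config.xml lines produce; BOINC works out the concurrency as 1 / gpu_usage, so 0.2 lets five tasks share one card (values from my setup, not gospel):

   <gpu_versions>
      <gpu_usage>0.2</gpu_usage>
      <cpu_usage>0.2</cpu_usage>
   </gpu_versions>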
I have some WUs ending in _2 and _3.
XS actually has a team, but only one guy, Lionheart, has contributed in the past month. There are a couple of XS'ers on the team that have been active in WCG, including pirogue, one shot, snow crash, and vitchilo.
Supposedly AMD cards of 5000 series or above work on Folding@Home. I don't know how well AMD cards work vs. NVIDIA. There are claims that NVIDIA cards work better on the Folding projects.
http://i1007.photobucket.com/albums/...sig/305484.gif ; Otis11
305484 is the number you need. It's magic! :up:
Well, I worked on POEM for a while but I switched to HCC... Will go back to POEM once HCC is finished. Quote:
XS actually has a team, but only one guy, Lionheart, has contributed in the past month. There are a couple of XS'ers on the team that have been active in WCG, including pirogue, one shot, snow crash, and vitchilo.
My PPD average has been climbing recently, are these resends worth more or something?
I'm still getting new WUs and some resends. Some tasks end in 0 and others in 1 and 2.
_0 and _1 are the original copies of new tasks; _2 and higher are resends or additional verification copies.
When you click on the task in the WCG results web interface, it shows task creation time. So it seems there are still new tasks being generated...
I tried to sign up with POEM, but they don't seem to have ATI tasks. They also don't show their sub-projects the way other BOINC-based accounts do.
Can anyone tell me what is happening here?
Thanks
4/25/2013 10:16:39 AM | Poem@Home | Sending scheduler request: To fetch work.
4/25/2013 10:16:39 AM | Poem@Home | Requesting new tasks for ATI
4/25/2013 10:16:41 AM | Poem@Home | Scheduler request completed: got 0 new tasks
I have downloaded a few tasks with a <name>poemcl</name> app_config.xml entry.
It is a start. Funny, I had to click on the CPU option to get the ATI downloads. Go figure?
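For anyone else poking at POEM, the entry I mentioned sits in app_config.xml like this (poemcl is the app name from my downloads; the usage numbers are just what I'm trying, not a recommendation):

   <app_config>
      <app>
         <name>poemcl</name>
         <gpu_versions>
            <gpu_usage>1.0</gpu_usage>
            <cpu_usage>0.2</cpu_usage>
         </gpu_versions>
      </app>
   </app_config>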
I don't think POEM has the staff/budget to supply millions of tasks a day the way HCC has been.
IMHO something fishy has been going on at POEM for quite a while. Crunchers in the project's home country could get 7+ million BOINC credits per day while I couldn't get enough tasks to keep one 7770 busy 24/7. :shrug: When SETIUSA was closing in on P3DN to take first place in overall credit, all of a sudden P3DN's production tripled. Makes me wonder.
Good News!!
http://www.worldcommunitygrid.org/fo...st_post,419510
"The researchers have identified additional batches for this project which need to be run (and weren't included in the original 25 day estimate). I'm in the process of getting them loaded into BOINC and should be able to provide a more accurate estimated number of days left once this is complete. At minimum though, there should be more than five days left of work before we exhaust the supply of non-resend work units.
Seippel"
Bill P
http://www.wcgsig.com/402500.gif
Good news!
I wondered. My queue has barely a handful of resends; everything else is new stuff.
Reference: http://stats6.free-dc.org/stats.php?page=team&proj=poe&team=264#map
Looks like I'm the only XS member working POEM.
Updated OP with new estimation. Thanks, Bill P.
I can't remember where I saw that, but it was on the HCC forums... they were working on "batch number 11"... which would be available in early May... so that would mean a lot more work...
That post was for the Help Fight Childhood Cancer project, not Help Conquer Cancer, which is estimated to have only 3 days of new work left to download.
HFCC isn't for GPUs, but I'm looking forward to crunching those WUs. :)
We must be getting pretty close to the end. Except for some repair WUs, I'm seeing a constant stream of:
4/27/2013 1:38:31 PM Scheduler request completed: got 0 new tasks
4/27/2013 1:38:31 PM No tasks sent
4/27/2013 1:38:31 PM Tasks are committed to other platforms
4/27/2013 1:38:31 PM Tasks for CPU are available, but your preferences are set to not accept them
That should only be temporary until the next set is loaded (typically happens fairly quickly). The WCG techs are pretty good with their estimates and they are targeting another couple of days of new work being distributed.
I did however stuff my caches to the gills so blame it on me if you think any one of us makes that much of a difference :rofl:
<edit> and new work is flowing again (I'm in EST so this was at 9:05 PM UTC)</edit>
Quote:
4/27/2013 5:04:54 PM | World Community Grid | Requesting new tasks for ATI
4/27/2013 5:05:01 PM | World Community Grid | Scheduler request completed: got 1 new tasks
4/27/2013 5:05:04 PM | World Community Grid | Started download of X0900098371237200803201010_X0900098371237200803201010.zip
4/27/2013 5:05:06 PM | World Community Grid | Finished download of X0900098371237200803201010_X0900098371237200803201010.zip
I see they're flowing again. I just sat down and did my normal "how's everything running" check and noticed those messages on every PC. I obviously jumped to the wrong conclusion.
However, I am getting many, many more repair WUs today.
I'm at 500 pages of work in progress. I think that is the max?
15 WUs per page × 3000 pages = 45,000 WUs; 45,000 / 7,500 per card = 6 cards (min) :eek:
unless of course you've played the "slow card shuffle", where you temporarily add a second slow card to a fast rig to bump the total number of WUs it can download, and then move the slow card to another rig ... rinse and repeat :up:
Slow card shuffle? Now would I actually do something like that? :rolleyes: Sounds like way too much work. Set and (basically) forget for me. I do have one dual-card rig running 2 7770s, but I played with the min/max cache settings until I could keep the amount in there to around 7500 +/- 100. For some reason, if I allowed more than that, I would get continuous PITA upload issues on that machine only, which I was never able to resolve. I would have loved to store up 15,000 on one machine but it wasn't worth the hassle. BoincTasks had enough lag managing 7500. :yepp: The other 5 rigs are single 7970s.
Final estimate:
So the end is May 1. Quote:
All remaining work for the project is now loaded into BOINC and there are now about 2.5 days of non-resend work remaining for this project.
Seippel
damn haha, I needed 4 days to catch Movieman......
Thanks for the update!
How much left?
How many pending WU pages do you have in your cache now on your 7970 and 7950...??? What are your cache settings?
You not passing Movieman may be a blessing for him... He has been run over so many times lately that he should seriously think of changing his on-line name from "Movieman" to "RoadKill" :rofl:
In any version 7 of BOINC you must have a value in both cache settings; if you leave either one at 0 it won't work. FWIW, I usually set my minimum work buffer to 0.50 and use the max additional work buffer to get the needed number of tasks in cache. The max additional setting needed to fill your cache can vary widely depending on project, hardware, and WU return-time limits. Hope this helps you.
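If you'd rather set those two knobs in a file than through the Manager's Computing preferences dialog, they live in global_prefs_override.xml in the BOINC data directory, something like this (the 3.00 is just an example max, not a recommendation):

   <global_preferences>
      <work_buf_min_days>0.50</work_buf_min_days>
      <work_buf_additional_days>3.00</work_buf_additional_days>
   </global_preferences>

Then tell the client to re-read it (boinccmd --read_global_prefs_override) and watch the cache fill.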
FWIW - I'm learning that when you have multiple Projects on one computer, WCG doesn't want to send you replacements. As soon as you suspend the other Projects, WCG bombards you with replacement tasks. I've decided that as soon as this rush for hcc1 tasks is over, I'm assigning one Project per computer. That way there won't be any bias between Projects.
Anyone else getting a computation error on every single HCC 7.05 WU immediately when they start? This happened today!
According to WCG forums, it should end around May 1... so a day or so left.
I still have 6641 WUs on my main, and 2680 on my server; should keep my computers busy for a while.
I have 7466 WUs in my queue for HCC and I've lowered my clocks so I'm crunching through about 1250 WUs/day. I also hit 4 years on HCC yesterday.
Started getting M series rather than X series work units today. Looks like we are getting to the last of the GPU work units.
I have some C series as well as M series and the X series. Where is Vanna White, when you want to buy a vowel?
Maybe it's the 320.00 beta that's causing the problem? I'll try to go back to 314.
Sent from my GT-I9300 using Tapatalk 2
I'm getting several "Internal server error" or other HTTP errors in the log when trying to report/download tasks. Anyone else seeing this too?
Getting repair tasks only now.
I get that when my cache gets to around 8200 WUs on a single machine. WCG has a limit of 7500 WUs per GPU, but I have not been able to get more than around 8200 on multi-GPU rigs.
Anyways, here's hoping for about 6 more days of work! :up:
http://img594.imageshack.us/img594/538/1bil.jpg
Yep, I have 8250 WUs buffered on that machine (2xHD7950).
There should be about 1 day of new work available, then resends only (for at least a week, then another week of resends of resends).
Same issue here. Since HCC is just about done, it's a little late for a cure, but the only way I could get those errors to stop on a multi-GPU machine was to keep lowering the max cache setting by .5 until the errors stopped. The cache would then vary between 7200 and 8000, but no more HTTP upload errors. Something to remember for the next GPU project.
Indeed, I assume the cache will be at max only about a day, so it should recover soon (expecting ~0.5 day of new units and then resends only). Meanwhile it works sometimes (report/dl) and sometimes it doesn't...
I've been getting resends only for the last few hours.. anyone else seeing the same?
I'm doing just M Series now.
http://www.worldcommunitygrid.org/fo...st_post,420071
"Last of the new work for Help Conquer Cancer Sent
We have just finished distributing the last of the new work for the Help Conquer Cancer project. It will take about 7-10 days for the in progress work for the project to complete. During this time there will periodically be additional results sent out to finish up those in-progress workunits.
THANK YOU so much for your contribution and efforts on this project!"
Posted by knreed about 2 minutes ago
Bill P
http://www.wcgsig.com/402500.gif