You misunderstood. We commonly suspend the project to realign WUs when they get out of sync. After the pause, I forgot to resume the project to align the WUs.
I'll give it a try, anyway! With many WUs running, you get the most WUs processed by stressing the GPU and CPU to the fullest. The trick is to offset the WUs so that enough of them are in their CPU phase at any given time, but not so many that the other WUs catch up. I have 16 WUs running and I try to keep them in staggered groups of 4, so that some WUs are always being processed by the CPU. That leaves the GPU 12 to process most of the time instead of 16, resulting in higher PPD output. This isn't an exact description of what really happens, but the point is to load up the CPU and GPU as much as possible, and offsetting WUs helps do that.
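If you'd rather do the staggering from the command line than by clicking around in the Manager, boinccmd can suspend and resume individual tasks (or the whole project, as mentioned above). A rough sketch; the task names here are placeholders, so substitute the real ones from --get_tasks:

boinccmd --get_tasks
# lists every task; the "name:" lines are what you feed to --task

boinccmd --task http://www.worldcommunitygrid.org/ X000_placeholder_0 suspend
boinccmd --task http://www.worldcommunitygrid.org/ X000_placeholder_1 suspend
# wait until the running group is partway through its CPU phase, then:
boinccmd --task http://www.worldcommunitygrid.org/ X000_placeholder_0 resume
boinccmd --task http://www.worldcommunitygrid.org/ X000_placeholder_1 resume

boinccmd --project http://www.worldcommunitygrid.org/ suspend pauses everything at once, which is the project-suspend trick described above.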
And I thought I was doing something wrong. I've been doing that for weeks. Thanks for the info. Makes me feel better.
For some reason, for the last three days I haven't been able to run more than two GPU WUs at a time regardless of the app_config file. Anyone else?
I'm still running app_info and still running 8 threads easy. Hey, it works, and I didn't want to screw up a good thing :p:
Side note - I don't have very many resends in my queue (those are the _2 labeled ones, right?).
Running 19 now. I did have a problem with BOINC Manager: my estimated WU times were way off, causing me to get 600 pages of work :shocked:
App_config is much easier than the dated app_info. I can set the number of WUs to run, save the file, and tell the BOINC client to read the config file, and it runs that many tasks without shutting the client down. It's very convenient. I suppose it doesn't really matter with a few days left, though.
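For anyone who hasn't switched yet, here's roughly what mine looks like; treat it as a sketch rather than a drop-in file. I believe the HCC GPU app is named hcc1, but check the <name> entries in your client_state.xml before trusting that. gpu_usage is just 1 divided by the number of tasks you want sharing each GPU:

<app_config>
  <app>
    <name>hcc1</name>
    <!-- never run more than 16 HCC tasks at once -->
    <max_concurrent>16</max_concurrent>
    <gpu_versions>
      <!-- 0.0625 = 1/16, so sixteen tasks share one GPU -->
      <gpu_usage>0.0625</gpu_usage>
      <!-- reserve one CPU core per GPU task -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

Save it as app_config.xml in the WCG project directory, then use Advanced -> Read config files in BOINC Manager (that's the "tell the client to read the config file" step) and the new limit takes effect without restarting anything.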
Yeh, when it is fetching tasks the BOINC client (v7.0.44 onwards) on my GPU machine sometimes sets the time estimates for the WUs that are Ready to Start to 3m43s, causing it to request too many. Then I'll look again and the time estimates will be back to their proper value of around 13m (for 13 tasks on my 7870).
If you want to count the number of HCC tasks in your cache, open a command prompt (Windows) or shell (*nix) and cd to the WCG project directory (where your app_config.xml file lives). Run "dir x*.zip" (no quotes) on Windows, or "ls x*.zip | wc -l" on *nix. That should show the number of files matching x*.zip.
If there's a problem with the *nix version, try:
ls | grep '^x.*\.zip$' | wc -l
(grep takes a regex, not a shell glob, so it's .* rather than *, and the dot in .zip needs escaping)
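And if you want a plain number on Windows too, instead of eyeballing the summary at the bottom of dir's output, this old cmd idiom should do it (I haven't tested it on every Windows version): dir /b prints bare filenames, and find /c /v "" counts the lines:

dir /b x*.zip | find /c /v ""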
PS: Only had 6500 jobs so I bumped the cache up a bit ;)
Are we there yet? I seem to be getting only resend tasks now...
I just realized that from stock to overclocked settings on my 7950, power consumption goes from 257W to 512W. That's almost 200% of stock power (512/257 is about 1.99) for only about 130% of stock PPD output. Talk about efficiency!
I didn't notice that much of a power increase on mine going from 800 to 1000.
I now have a _3. :(
I have mostly _0s and _1s, with a few _2s thrown in for seasoning.
That's normal (although not very common). It just means two people errored out or aborted the same WU, or someone sent back an errored result and the WU is marked "inconclusive". But it does signal the end...
Do we have any idea where to put the GPUs once this is over? I thought I saw something on that, but I can't find it at the moment. I know GPUGrid is great for Nvidia, but what about ATI? There are a bunch of projects, but which is the most worthy (read: humanitarian) one that runs on ATI?
(BTW, hello again! Been a while)
:wave:
Otis, you need to fix your signature. FYI, I have 10 AMD 7770 GPUs.
I've signed up with the XtremeSystems teams at Einstein@Home, Milkyway@Home, and Seti@Home.
http://einstein.phys.uwm.edu/index.php
http://milkyway.cs.rpi.edu/milkyway/index.php
http://setiathome.berkeley.edu/index.php
Asteroids isn't doing GPUs right now.
Is anyone receiving large numbers of resends yet? There are estimated to be fewer than 2.5 days of available work left.
For real?? I thought there was something like a week left...
I"ve got WU's on one 'puter until Monday, and the other GPU enabled one has WU"s until Wed. I am starting to see some of the resend/2's now.