Thread: Crunching WCG GPU and GPUGrid

  1. #1
    Xtreme Cruncher
    Join Date
    Jan 2009
    Location
    Nashville
    Posts
    4,162

    Crunching WCG GPU and GPUGrid

    'ello all

    Things have finally cooled off enough to utilize a few GPUs around here. But I have a concern.

    If/when I get WCG HCC GPU work units, how will BOINC handle scheduling between GPUGrid and HCC GPU? WCG is set to 100% and GPUGrid is set to 20%. The long GPUGrid work units run 12+ hours, and so far the HCC GPU units run about 6 minutes. Will BOINC wait for a GPUGrid WU to finish, or interrupt GPUGrid to run the HCC GPU WUs?

    I really like the project badges GPUGrid uses now. Very nice.


    Last edited by PoppaGeek; 09-29-2012 at 03:46 PM.

  2. #2
    Xtreme Cruncher
    Join Date
    Dec 2008
    Location
    Texas
    Posts
    5,152
    This. ^^ Anyone know?


    24 hour prime stable? Please, I'm 24/7/365 WCG stable!

    So you can do Furmark, Can you Grid???

  3. #3
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
    This may be something you can control with the "Switch projects every" setting... but that is just a guess.

    If that setting is made to be the short WU's run time plus a bit, then you should see it crunch a short one, run the long one for an equal period, crunch another short one, and so on.

    You could apportion time like this on an equal basis with any setting, but the longest WU time and the shortest WU time make the most sense to me. Not sure though, because at this stage we are guessing... they may even introduce another tool when HCC GPU goes live.
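
    If you want to set that locally rather than through the website preferences, I believe it can go in a global_prefs_override.xml in the BOINC data directory. Just a sketch from memory; the tag name and the 60-minute value are my guesses at a sensible starting point, so double-check against the BOINC docs for your client version:

    <global_preferences>
       <cpu_scheduling_period_minutes>60</cpu_scheduling_period_minutes>
    </global_preferences>

    After saving it, Advanced -> Read local prefs file in BOINC Manager should pick it up without a restart, if I remember right.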


    My Biggest Fear Is When I die, My Wife Sells All My Stuff For What I Told Her I Paid For It.
    79 SB threads and 32 IB Threads across 4 rigs 111 threads Crunching!!

  4. #4
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    +1 OldChap for the 'switch every xxx seconds' idea. I've done this with multiple CPU and GPU projects at the same time, and it does work as advertised.

    BUT ...

    I am going to suggest caution in setting the switch frequency very low. GPU tasks don't really have a "leave in memory" option, so you lose any crunching done after the last checkpoint when they get switched out. GPUGrid does checkpoint fairly frequently, but you could still end up losing a good deal of time.

    Once WCG gets rolling for good, I can see setting the "switch" quite high, but then only allowing 1 GPUGrid task to download and setting that project to NNT (No New Tasks) until you really do want another (I'm guessing maybe every other day or so to get the ratio Poppa's looking for).

    One more note ... GPUGrid pays bonus points for WUs returned in less than 24 hours, so don't pull WUs until you are ready to crunch them. If you do get 2 from GPUGrid and are only going to crunch 1 before switching, I would suggest setting GPUGrid to NNT first and then aborting the second WU.

    Last thought ... maybe change the title of this thread to 'Crunching WCG and GPUGrid' so anyone who wants to do both can find the basic setup quickly ... maybe even cross-post on the WCG forum.
    Last edited by Snow Crash; 09-29-2012 at 03:42 PM.

  5. #5
    Xtreme Cruncher
    Join Date
    Jan 2009
    Location
    Nashville
    Posts
    4,162
    Found this at WCG forums:

    Hi Gerald

    I've just joined in this GPU Malarkey. I'm running GPUGrid flat out

    However the Beta's from WCG (last time out anyway) have a really short return time

    As a result they kick out the GPUGrid WU's as they have a higher urgency and take over your PC's GPU(s) until they run out.

    Which for me, like you, with a great affection for WCG....is ideal.

    (I joined the moonlighting MOT team over there too )

    Dave

    I think after the beta it will be best to have one project per GPU.

  6. #6
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    I *think* you might be able to craft an anonymous-platform app_info.xml to handle this, but I'm not really certain of it. It would be / is a royal PITA to keep them updated, as you have to manually update the file whenever a project changes its apps (even down to the version level). If this starts to become important, I've seen folks who are really good at creating these files (it's just XML, but you have to be EXACTLY correct) who would likely be willing to help, and would likely know right off the top of their head whether it is even possible to split multiple projects to individual cards on a multi-GPU rig.
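
    For anyone who hasn't seen one, the general shape is something like this. All the names below are placeholders, not real WCG or GPUGrid app names; the real app and file names have to match exactly what the project sends you, which is exactly why keeping it updated is the painful part:

    <app_info>
       <app>
          <name>example_app</name>
       </app>
       <file_info>
          <name>example_app_1.00_windows_x86_64__cuda.exe</name>
          <executable/>
       </file_info>
       <app_version>
          <app_name>example_app</app_name>
          <version_num>100</version_num>
          <plan_class>cuda</plan_class>
          <avg_ncpus>0.5</avg_ncpus>
          <coproc>
             <type>CUDA</type>
             <count>1</count>
          </coproc>
          <file_ref>
             <file_name>example_app_1.00_windows_x86_64__cuda.exe</file_name>
             <main_program/>
          </file_ref>
       </app_version>
    </app_info>

    It goes in that project's folder under the BOINC data directory, and as far as I know it only controls which apps run for that one project, so by itself it doesn't pin different projects to different cards.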

  7. #7
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
    2.5 years on, is there still no love for AMD on the grid?... If not, then it solves this problem for me


    My Biggest Fear Is When I die, My Wife Sells All My Stuff For What I Told Her I Paid For It.
    79 SB threads and 32 IB Threads across 4 rigs 111 threads Crunching!!

  8. #8
    Xtreme Cruncher
    Join Date
    Dec 2008
    Location
    Texas
    Posts
    5,152
    Quote Originally Posted by OldChap View Post
    2.5 years on, is there still no love for AMD on the grid?... If not, then it solves this problem for me
    Well there's D@H, which does Bitcoin for GPUGrid... but other than that, no.

    And SC - I have some XML experience and would be willing to help... but don't have much hardware to test stuff on...


    24 hour prime stable? Please, I'm 24/7/365 WCG stable!

    So you can do Furmark, Can you Grid???

  9. #9
    Xtreme Cruncher
    Join Date
    Feb 2009
    Location
    Iowa, USA
    Posts
    705
    You can do 1 project per GPU with this, in the <options> section of cc_config.xml:

    <exclude_gpu>
    <url>project_URL</url>
    [<device_num>N</device_num>]
    [<type>nvidia|ati</type>]
    [<app>appname</app>]
    </exclude_gpu>
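
    For example, something like this should keep GPUGrid on device 0 and WCG on device 1 by excluding each project from the other card. The URLs are from memory and the device numbers depend on your rig, so check the project URLs and the GPU numbering in the client's event log before trusting it:

    <cc_config>
       <options>
          <exclude_gpu>
             <url>http://www.gpugrid.net/</url>
             <device_num>1</device_num>
          </exclude_gpu>
          <exclude_gpu>
             <url>http://www.worldcommunitygrid.org/</url>
             <device_num>0</device_num>
          </exclude_gpu>
       </options>
    </cc_config>

    Then restart the client (or re-read the config file from BOINC Manager's Advanced menu) to apply it.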


    There is also this:

    <fetch_minimal_work>0|1</fetch_minimal_work>

    "Fetch only 1 job per device (CPU, GPU). Used with --exit_when_idle, the client will process one job per device, then exit." But it doesn't seem to be able to specify which devices; it just gets 1 per device for all devices.

    Maybe we could get the BOINC devs to implement the minimal-work option for a specific project or device (just like the exclude_gpu option is set up).


    For now I wouldn't recommend multiple GPU projects per machine. I had 1 machine with 2 projects, and even though one project was set to a resource share of 0 it was still doing work occasionally.
    Main: i7-930 @ 2.8GHz HT on; 1x GIGABYTE GTX 660 Ti OC 100% GPUGrid
    2nd: i7-920 @ 2.66GHz HT off; 1x EVGA GTX 650 Ti SSC 100% GPUGrid
    3rd: i7-3770k @ 3.6GHz HT on, 3 threads GPUGrid CPU; 2x GIGABYTE GTX 660 Ti OC 100% GPUGrid
    Part-time: FX-4100 @ 3.6GHz, 2 threads GPUGrid CPU; 1x EVGA GTX 650 100% GPUGrid

  10. #10
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    Quote Originally Posted by OldChap View Post
    2.5 years on, is there still no love for AMD on the grid?... If not, then it solves this problem for me
    That's more of an issue with AMD than with GPUGrid. They regularly compile their app for AMD and test it ... the latest results (which were from quite a while ago) were 10x slower than CUDA. As much as they have tried working with AMD, for the heavy science being done the API is just not up to snuff for them.
