Page 2 of 2
Results 26 to 45 of 45

Thread: Best GPU Projects?

  1. #26
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    So some results

    Milkyway@Home:
    ----------------
    HD7950: ~37 / ~47 secs per WU; ~80 / ~107 pts per WU

    Einstein@Home BRP4:
    ---------------------
    650 Ti: ~3000 secs for 2 WUs
    660 Ti: ~1200 secs for 1 WU, ~2200 secs for 2 WUs
    HD7770: ~2300 secs for 1 WU, ~4000 secs for 2 WUs
    HD7950: ~1050 secs for 1 WU
    Each GPU WU is worth 500 pts

    GPUs are clocked:
    HD7950 - 1120 MHz
    HD7770 - 1120 MHz
    660 Ti - default
    650 Ti - 1020 MHz

  2. #27
    Registered User
    Join Date
    Feb 2013
    Location
    Middle Earth
    Posts
    208
    Mumak - do you run these WUs virgin or do they go through the app_config file?
    If app_config, what settings are you using?

  3. #28
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    Quote Originally Posted by Gandalf View Post
    Mumak - do you run these WUs virgin or do they go through the app_config file?
    If app_config, what settings are you using?
    For MW@H I run a single WU per GPU, so no app_config.

    For E@H there's no need to use app_config for running multiple WUs/GPU, since you can specify this in the E@H Preferences page - "GPU utilization factor of BRP apps" (set to 0.5 for 2 WUs/GPU).
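    For reference, the same effect could be had via app_config if you prefer to keep everything client-side. A minimal sketch; the <name> must match the app name in your client_state.xml, and "einsteinbinary_BRP4" plus the cpu_usage value are my assumptions, so verify on your own host:
    Code:
    ```xml
    <app_config>
       <app>
          <!-- app name assumed; check client_state.xml on your host -->
          <name>einsteinbinary_BRP4</name>
          <gpu_versions>
             <!-- 0.5 GPU per WU = 2 WUs per GPU, same as utilization factor 0.5 -->
             <gpu_usage>0.5</gpu_usage>
             <!-- CPU reservation per GPU WU (assumed value) -->
             <cpu_usage>0.5</cpu_usage>
          </gpu_versions>
       </app>
    </app_config>
    ```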

    For GPUGrid it doesn't make much sense to run multiple WUs/GPU, since they already utilize the GPU pretty high (>90% depending on GPU). Though I use app_config there to reserve a full CPU core for each WU on Kepler (by default the requested CPU reservation is much lower there, but on Kepler it seems the tasks require much more CPU resources). So there I use:
Code:
<app_config>

   <app>
      <name>acemdlong</name>
      <max_concurrent>1</max_concurrent>
      <gpu_versions>
         <gpu_usage>1</gpu_usage>
         <cpu_usage>1</cpu_usage>
      </gpu_versions>
   </app>

   <app>
      <name>acemdshort</name>
      <max_concurrent>1</max_concurrent>
      <gpu_versions>
         <gpu_usage>1</gpu_usage>
         <cpu_usage>1</cpu_usage>
      </gpu_versions>
   </app>

   <app>
      <name>acemd2</name>
      <max_concurrent>1</max_concurrent>
      <gpu_versions>
         <gpu_usage>1</gpu_usage>
         <cpu_usage>1</cpu_usage>
      </gpu_versions>
   </app>

</app_config>

  4. #29
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    Quote Originally Posted by Otis11 View Post
    Snow - How's POEM working? any apparent shortages? Is Docking@Home GPU based?
    POEM appears to have a 100-WU in-process limit. It isn't stated on their site, but I never get more than that, and once I start reporting them in I fill back up to 100. I'm running 6 at a time via app_config on my 7950 because a single WU only uses about 40% GPU; each additional concurrent WU seems to take about another 10%. Six at a time takes about 26 minutes. Points are pretty good (if that matters) at 2,925.23 per WU, but CPU usage is pretty heavy: they have it set to 1 GPU + 1 CPU, but I found you really only need between 0.5 and 0.75 CPU for each GPU WU.
    app_config ...
    Code:
    <app_config>
    
       <app>
          <!-- the name tag is the name of the application you want to control with this app_config file -->
          <name>poemcl</name>
    
          <!-- set max_concurrent to the total number of both CPU and GPU WUs to run at a time on your rig -->
          <max_concurrent>5</max_concurrent>
    
          <!-- gpu_version tag is where you take control of how BOINC allocates your resources for this app -->
          <!-- this line is unnecessary but adds control especially for those with two different cards in a rig -->
          <gpu_versions>
    
         <!-- gpu_usage calculation is 1 divided by the number of WUs you wish to run on any single card -->
             <!-- don't change this when adding a card as each will try to run this qty (if you have enough CPU) --> 
             <gpu_usage>0.20</gpu_usage>
    
             <!-- cpu_usage is how much cpu resources BOINC will reserve to run each gpu WU-->
             <!-- calculation is the number of CPU cores/threads for all GPU work divided by the number of GPU WUs -->
             <cpu_usage>0.75</cpu_usage>
    
          </gpu_versions>
    
       </app>
    
    </app_config>


    Docking@Home is a CPU only project ... I just need 25k to make one of my stat goals (#2 on team XS for number of projects w/ 25K plus)
    Docking@Home is a project which uses Internet-connected computers to perform scientific calculations that aid in the creation of new and improved medicines. The project aims to help cure diseases such as Human Immunodeficiency Virus (HIV). Docking@Home is a collaboration between the University of Delaware, The Scripps Research Institute, and the University of California - Berkeley. It is part of the Dynamically Adaptive Protein-Ligand Docking System project and is supported by the National Science Foundation.
    Last edited by Snow Crash; 05-05-2013 at 11:58 AM.

  5. #30
    Linux those diseases
    Join Date
    Mar 2008
    Location
    Planet eta pie
    Posts
    2,930
    As team captain for GPUgrid, I'd just like to point out that unlike every other project XS is involved with, we are at #1 position there. So if you have an Nvidia card (500 series or better) come join our success & help keep us there

  6. #31
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    Quote Originally Posted by stoneageman View Post
    As team captain for GPUgrid, I'd just like to point out that unlike every other project XS is involved with, we are at #1 position there. So if you have an Nvidia card (500 series or better) come join our success & help keep us there
    Are Noelia tasks finally working there? I had so many issues with those that I didn't want to continue until they're fixed...

  7. #32
    Linux those diseases
    Join Date
    Mar 2008
    Location
    Planet eta pie
    Posts
    2,930
    I've not seen a Noelia task for 3 weeks
    Last edited by stoneageman; 05-05-2013 at 01:14 PM.

  8. #33
    Registered User
    Join Date
    Feb 2013
    Location
    Middle Earth
    Posts
    208
    Quote Originally Posted by stoneageman View Post
    As team captain for GPUgrid, I'd just like to point out that unlike every other project XS is involved with, we are at #1 position there. So if you have an Nvidia card (500 series or better) come join our success & help keep us there
    Stone man - has Nvidia resolved the one WU per GPU issue?

    P.S. - nice avatar!

  9. #34
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    Quote Originally Posted by stoneageman View Post
    As team captain for GPUgrid, I'd just like to point out that unlike every other project XS is involved with, we are at #1 position there. So if you have an Nvidia card (500 series or better) come join our success & help keep us there
    As soon as I get some personal stats settled, I'll be back with a 670 + 660Ti combo

  10. #35
    Xtreme Addict Evantaur's Avatar
    Join Date
    Jul 2011
    Location
    Finland
    Posts
    1,043
    Quote Originally Posted by Gandalf View Post
    Stone man - has Nvidia resolved the one WU per GPU issue?
    Long tasks run with 100% utilization, so that shouldn't be a problem there.

    Will see if I have money to buy a couple of Titans; most likely not.
    Last edited by Evantaur; 05-05-2013 at 01:38 PM.

    I like large posteriors and I cannot prevaricate

  11. #36
    Xtreme Cruncher
    Join Date
    Feb 2003
    Location
    Estonia
    Posts
    1,097
    Computing considered - Titan should be the biggest-baddest kid in town at the moment?
    Member of XS WCG since 2006-11-25

  12. #37
    Linux those diseases
    Join Date
    Mar 2008
    Location
    Planet eta pie
    Posts
    2,930
    Quote Originally Posted by Gandalf View Post
    Stone man - has Nvidia resolved the one WU per GPU issue?

    P.S. - nice avatar!
    From the testing done by others, overall there is nothing to be gained at present by running more than 1 WU per GPU.

  13. #38
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    Titan doesn't work for most projects yet. It seems the current CUDA 4 and 5 apps don't like it and they haven't been fixed yet. So, for example, you won't be able to run GPUGrid on Titans.
    Though I think the Einstein@Home BRP4 app (CUDA 3.1) is able to run on Titans, the performance is not so great.

    Here's a performance comparison of different GPUs for E@H: http://www.dskag.at/images/Research/...rmancelist.pdf
    Last edited by Mumak; 05-05-2013 at 02:00 PM.

  14. #39
    Xtreme Addict Evantaur's Avatar
    Join Date
    Jul 2011
    Location
    Finland
    Posts
    1,043
    Quote Originally Posted by Mumak View Post
    you won't be able to run GPUGrid on Titans.
    well then http://youtu.be/r-bA9FYB8HY?t=49s sums it up for me pretty well:P

    I like large posteriors and I cannot prevaricate

  15. #40
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    I checked GPU usage when running MW@H tasks and it seems there might be some room left for running multiple WUs. So I changed it to run 2 WUs/GPU, more doesn't seem to make sense. The app name for app_config is "milkyway".
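    In app_config terms, 2 WUs/GPU for MW@H would look something like the sketch below. The gpu_usage follows from the usual 1-divided-by-WUs-per-card rule; the cpu_usage value is my assumption, since MW@H GPU tasks use little CPU:
    Code:
    ```xml
    <app_config>
       <app>
          <name>milkyway</name>
          <gpu_versions>
             <!-- 0.5 GPU per WU = 2 WUs per GPU -->
             <gpu_usage>0.5</gpu_usage>
             <!-- MW@H GPU tasks need little CPU; 0.05 is an assumed reservation -->
             <cpu_usage>0.05</cpu_usage>
          </gpu_versions>
       </app>
    </app_config>
    ```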

  16. #41
    Xtreme Addict Evantaur's Avatar
    Join Date
    Jul 2011
    Location
    Finland
    Posts
    1,043
    E@H seems to need 0.5 cores per WU to run well, so I'm only running 2 (WCG > everything, after all)

    I like large posteriors and I cannot prevaricate

  17. #42
    Xtreme Member
    Join Date
    Oct 2003
    Location
    Missouri
    Posts
    149
    supercoolin is tearing up the Milkyway!!!!

  18. #43
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    My 2x7950 are doing ~560,000 pts/day there

  19. #44
    Xtreme Cruncher
    Join Date
    Dec 2008
    Location
    Texas
    Posts
    5,152
    Found another - anyone knowledgeable about Primegrid? Here's all I got:

    Primegrid - Cryptography (searching for various types of primes in an attempt to improve encryption algorithms, as well as other algorithms)
    Last edited by Otis11; 05-10-2013 at 09:13 PM.


    24 hour prime stable? Please, I'm 24/7/365 WCG stable!

    So you can do Furmark, Can you Grid???

  20. #45
    Xtreme crazy bastid
    Join Date
    Apr 2007
    Location
    On mah murder-sickle!
    Posts
    5,878
    I ran it for a little while. It's a small budget affair but the work has some important applications if you're into cryptography.

