Page 5 of 11 (Results 101 to 125 of 270)

Thread: Countdown to HCC Completion Thread!

  1. #101
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    390
    Quote Originally Posted by Gandalf View Post
    That is why God, Microsoft, created the Startup folder.
    You misunderstood. We commonly suspend the project to realign WUs when they get out of sync. After the pause, I forgot to resume the project to align the WUs.

  2. #102
    Registered User
    Join Date
    Feb 2013
    Location
    Middle Earth
    Posts
    208
    Quote Originally Posted by 0ne.shot View Post
    You misunderstood. We commonly suspend the project to realign WUs when they get out of sync. After the pause, I forgot to resume the project to align the WUs.
    Would you explain, to the new guy on the block, what this align is all about, please.


    "I refuse to answer that question on the grounds that I don't know the answer" [Douglas Adams (11 March 1952 - 11 May 2001)]

  3. #103
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    390
    Quote Originally Posted by Gandalf View Post
    Would you explain, to the new guy on the block, what this align is all about, please.
    I'll give it a try, anyway! With many WUs running, you get the most work done by keeping both the GPU and CPU loaded to the fullest. The idea is to offset the WUs so that some are in their CPU-bound phase at any given time, but not so many that the remaining WUs catch up to them. I have 16 WUs running and I try to keep them in groups of 4, so that only about 4 WUs are in their CPU phase at a given moment. That leaves the GPU a steady 12 WUs to process most of the time instead of 16, resulting in higher PPD output. This isn't an exact description of what really happens, but the point is to load up the CPU and GPU as much as possible, and offsetting the WUs helps do that.
    Last edited by 0ne.shot; 04-21-2013 at 10:22 PM.
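    A toy model makes the grouping concrete (the tick lengths and phase split below are invented for illustration; real HCC phase lengths vary per WU):

    ```python
    # Toy model of WU staggering: each WU cycles through 1 "CPU tick"
    # followed by 3 "GPU ticks" (period 4). Numbers are illustrative only.
    period, cpu_ticks, n = 4, 1, 16

    def gpu_load(offsets):
        """Fewest WUs on the GPU at any tick, given each WU's phase offset."""
        return min(sum((t - o) % period >= cpu_ticks for o in offsets)
                   for t in range(period))

    aligned = [0] * n                                     # all 16 hit the CPU phase together
    staggered = [o for o in range(4) for _ in range(4)]   # 4 groups of 4, one tick apart

    print(gpu_load(aligned))    # 0  -> GPU sits idle while every WU is on the CPU
    print(gpu_load(staggered))  # 12 -> the GPU always has 12 WUs to chew on
    ```

    Same average load in both cases, but the staggered schedule never lets the GPU starve, which is where the extra PPD comes from in this model.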

  4. #104
    Registered User
    Join Date
    Feb 2013
    Location
    Middle Earth
    Posts
    208
    And I thought I was doing something wrong. I've been doing that for weeks. Thanks for the info; makes me feel better.


    "I refuse to answer that question on the grounds that I don't know the answer" [Douglas Adams (11 March 1952 - 11 May 2001)]

  5. #105
    Registered User
    Join Date
    Feb 2010
    Location
    bermuda dunes CA
    Posts
    95
    Quote Originally Posted by Evantaur View Post
    Yeah, that's why i'm asking if someone knows how boinc gets those peak numbers (actually calculate or get it from a driver) 50 gflops is lower than that of a g-series APU
    7200 GFLOPS is what my 7870 is saying.





  6. #106
    NooB MOD
    Join Date
    Jan 2006
    Location
    South Africa
    Posts
    5,799
    For some reason, for the last three days I haven't been able to run more than two GPU WUs at a time regardless of the app_config file. Anyone else?
    Xtreme SUPERCOMPUTER
    Nov 1 - Nov 8 Join Now!


    Quote Originally Posted by Jowy Atreides View Post
    Intel is about to get athlon'd
    Athlon64 3700+ KACAE 0605APAW @ 3455MHz 314x11 1.92v/Vapochill || Core 2 Duo E8500 Q807 @ 6060MHz 638x9.5 1.95v LN2 @ -120'c || Athlon64 FX-55 CABCE 0516WPMW @ 3916MHz 261x15 1.802v/LN2 @ -40c || DFI LP UT CFX3200-DR || DFI LP UT NF4 SLI-DR || DFI LP UT NF4 Ultra D || Sapphire X1950XT || 2x256MB Kingston HyperX BH-5 @ 290MHz 2-2-2-5 3.94v || 2x256MB G.Skill TCCD @ 350MHz 3-4-4-8 3.1v || 2x256MB Kingston HyperX BH-5 @ 294MHz 2-2-2-5 3.94v

  7. #107
    Xtreme Member
    Join Date
    Oct 2012
    Posts
    448
    Quote Originally Posted by [XC] Oj101 View Post
    For some reason, for the last three days I haven't been able to run more than two GPU WUs at a time regardless of the app_config file. Anyone else?
    My HD7770 is still running 8 GPU WUs at a time with no problem. My HD6670, running only one GPU WU at a time, however, had computational errors on a bunch of Thursday and Friday WUs. I think I'm up to 6 or 7 pages now.
    Desktop rigs:
    Oysterhead- Intel i5-2320 CPU@3.0Ghz, Zalman 9500AT2, 8Gb Patriot 1333Mhz DDR3 RAM, 120Gb Kingston V200+ SSD, 1Tb Seagate HD, Linux Mint 17 Cinnamon 64 bit, LG 330W PSU

    Flying Frog Brigade-Intel Xeon W3520@2.66Ghz, 6Gb Hynix 1066Mhz DDR3 RAM, 640Gb Hitachi HD, 512Mb GDDR5 AMD HD4870, Mac OSX 10.6.8/Linux Mint 14 Cinnamon dual boot

    Laptop:
    Colonel Claypool-Intel T6600 Core 2 Duo, 4Gb 1066Mhz DDR3 RAM, 1Gb GDDR3 Nvidia 230M,240Gb Edge SATA6 SSD, Windows 7 Home 64 bit




  8. #108
    Xtremely High Voltage Sparky's Avatar
    Join Date
    Mar 2006
    Location
    Ohio, USA
    Posts
    16,040
    I'm still running app_info and still running 8 threads easy. Hey, it works, and I didn't want to screw up a good thing

    Side note - I don't have very many resends in my queue (those are the _2 labeled ones, right?).
    The Cardboard Master
    Crunch with us, the XS WCG team
    Intel Core i7 2600k @ 4.5GHz, 16GB DDR3-1600, Radeon 7950 @ 1000/1250, Win 10 Pro x64

  9. #109
    Xtreme Member
    Join Date
    Mar 2009
    Location
    Naples FL
    Posts
    206
    Running 19 now. I did have a problem with BOINC Manager: my estimated w/u times were way off, causing me to get 600 pages of work.

  10. #110
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    390
    Quote Originally Posted by Sparky View Post
    I'm still running app_info and still running 8 threads easy. Hey, it works, and I didn't want to screw up a good thing

    Side note - I don't have very many resends in my queue (those are the _2 labeled ones, right?).
    App_config is much easier than the dated app_info. I can set the number of WUs to run, save the file, and tell the BOINC client to read the config file, and it runs that many tasks without shutting the client down. It's very convenient. I suppose it doesn't really matter with only a few days left.
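    For reference, a minimal app_config.xml along these lines does the trick; the app name "hcc1" is an assumption here (check the `<app>` entries in your client_state.xml for the exact name), and the usage fractions are examples:

    ```xml
    <app_config>
       <app>
          <name>hcc1</name>
          <gpu_versions>
             <gpu_usage>0.25</gpu_usage>  <!-- 1 / 0.25 = 4 concurrent WUs per GPU -->
             <cpu_usage>0.2</cpu_usage>   <!-- CPU fraction reserved per GPU WU -->
          </gpu_versions>
       </app>
    </app_config>
    ```

    Drop it in the WCG project folder, then use the Manager's Read config files command to apply it with no client restart.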

  11. #111
    Xtreme Member
    Join Date
    May 2008
    Location
    Sydney, Australia
    Posts
    242
    Quote Originally Posted by BlindShot View Post
    Running 19 now. I did have a problem with BOINC Manager: my estimated w/u times were way off, causing me to get 600 pages of work.
    Yeh, when it is fetching tasks the BOINC client (v7.0.44 onwards) on my GPU machine sometimes sets the time estimates for the WUs that are Ready to Start to 3m43s, causing it to request too many. Then I'll look again and the time estimates will be back to their proper value of around 13m (for 13 tasks on my 7870).

    If you want to count the number of HCC tasks in your cache, open a command prompt (Windows) or shell (*nix) and cd to the WCG project directory (where your app_config.xml file lives). Run "dir x*.zip" (no quotes) on Windows or "ls x*.zip | wc -l" on *nix. That should show the number of files matching x*.zip.

    If the glob version gives you trouble on *nix, try:
    ls | grep -c '^x.*\.zip$'
    (grep takes a regex, not a shell glob, so a pattern like 'x*.zip' wouldn't match what you'd expect).

    PS: Only had 6500 jobs so I bumped the cache up a bit
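    If you want to sanity-check the *nix one-liner, try it in a scratch directory first (the file names below are invented):

    ```shell
    # Build a scratch directory with a known mix of files.
    dir=$(mktemp -d)
    cd "$dir"
    touch x001_res.zip x002_res.zip notes.txt
    # Let the shell expand the glob, count the matches; tr strips wc's padding.
    ls x*.zip 2>/dev/null | wc -l | tr -d ' '
    ```

    That prints 2 here, one line per matching file counted.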

  12. #112
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    Are we there yet? I seem to get resend tasks only now..

  13. #113
    Registered User
    Join Date
    Feb 2013
    Location
    Middle Earth
    Posts
    208
    Quote Originally Posted by Mumak View Post
    Are we there yet? I seem to get resend tasks only now..
    I'm still doing the _0 and _1 tasks.


    "I refuse to answer that question on the grounds that I don't know the answer" [Douglas Adams (11 March 1952 - 11 May 2001)]

  14. #114
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    390
    I just realized that going from stock to overclocked settings on my 7950, power consumption goes from 257 W to 512 W. Power draw is at almost 200% of stock (nearly doubled), while PPD output only rose to about 130% of stock. Talk about efficiency!
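    Running the numbers (taking the quoted ~130% PPD figure at face value as 1.3x stock):

    ```python
    # Figures from the post above; ppd_ratio assumes "about 130%" means 1.3x stock.
    power_ratio = 512 / 257          # ~1.99x the stock power draw
    ppd_ratio = 1.30                 # assumed: PPD at ~130% of stock
    ppd_per_watt = ppd_ratio / power_ratio
    print(round(power_ratio, 2))     # 1.99
    print(round(ppd_per_watt, 2))    # 0.65 -> points-per-watt drops ~35%
    ```

    So the overclock buys raw PPD at the cost of roughly a third of the efficiency, which is the trade-off being joked about.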

  15. #115
    Xtremely High Voltage Sparky's Avatar
    Join Date
    Mar 2006
    Location
    Ohio, USA
    Posts
    16,040
    I didn't notice that much of a power increase on mine going from 800 to 1000.
    The Cardboard Master
    Crunch with us, the XS WCG team
    Intel Core i7 2600k @ 4.5GHz, 16GB DDR3-1600, Radeon 7950 @ 1000/1250, Win 10 Pro x64

  16. #116
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    390
    Quote Originally Posted by Sparky View Post
    I didn't notice that much of a power increase on mine going from 800 to 1000.
    I went from 880MHz 993 mV to 1230 MHz 1218 mV.

  17. #117
    Xtreme X.I.P.
    Join Date
    Jan 2008
    Posts
    727
    Quote Originally Posted by Sparky View Post
    I didn't notice that much of a power increase on mine going from 800 to 1000.
    Sparky, if you did not change the VDDC for your video card, the wattage change will not be that big going from 800 to 1000 MHz.

    0ne.shot is going to a higher core speed (1230 MHz) with a significant VDDC increase, hence the notable rise in wattage.
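    As a first-order sanity check, dynamic power scales roughly with clock times voltage squared. Plugging in the 880 MHz / 0.993 V to 1230 MHz / 1.218 V jump quoted above (a rough model only; leakage and the rest of the system are ignored):

    ```python
    # Rough dynamic-power model: P ~ f * V^2 (first-order approximation only).
    f0, v0 = 880, 0.993       # stock clock (MHz) and VDDC (V), from the thread
    f1, v1 = 1230, 1.218      # overclocked
    predicted = (f1 / f0) * (v1 / v0) ** 2
    print(round(predicted, 2))  # ~2.1x, close to the measured 512/257 ~ 1.99x
    ```

    The measured near-doubling of wall power lines up well with the VDDC increase doing most of the damage.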


  18. #118
    Registered User
    Join Date
    Feb 2013
    Location
    Middle Earth
    Posts
    208
    Quote Originally Posted by Mumak View Post
    Are we there yet? I seem to get resend tasks only now..
    I now have a _3.


    "I refuse to answer that question on the grounds that I don't know the answer" [Douglas Adams (11 March 1952 - 11 May 2001)]

  19. #119
    Xtremely High Voltage Sparky's Avatar
    Join Date
    Mar 2006
    Location
    Ohio, USA
    Posts
    16,040
    I have mostly _0s and _1s, with a few _2s thrown in for seasoning.
    The Cardboard Master
    Crunch with us, the XS WCG team
    Intel Core i7 2600k @ 4.5GHz, 16GB DDR3-1600, Radeon 7950 @ 1000/1250, Win 10 Pro x64

  20. #120
    Xtreme Cruncher
    Join Date
    Dec 2008
    Location
    Texas
    Posts
    5,152
    Quote Originally Posted by Gandalf View Post
    I now have a _3.
    That's normal (although not very common). It just means two people errored out/aborted the same WU, or someone sent back an errored result that got marked "inconclusive". But it does signal the end...

    Do we have any idea where to put the GPUs once this is over? I thought I saw something on that, but I can't find it at the moment. I know GPUGrid is great for Nvidia, but what about ATI? There are a bunch, but what's the most worthy (read: humanitarian) project that runs on ATI?

    (BTW, hello again! Been a while)



    24 hour prime stable? Please, I'm 24/7/365 WCG stable!

    So you can do Furmark, Can you Grid???

  21. #121
    Xtreme Member
    Join Date
    Oct 2012
    Posts
    448
    Quote Originally Posted by Otis11 View Post
    Do we have any idea where to put the GPUs once this is over? I thought I saw something on that, but I can't find it at the moment. I know GPUGrid is great for Nvidia, but what about ATI? There are a bunch, but what's the most worthy (read: humanitarian) project that runs on ATI?

    (BTW, hello again! Been a while)

    This is something that's been on my mind. I'm wavering between Folding@Home and POEM. I like more of the projects at POEM, but there doesn't appear to be an active XS team for them.
    Desktop rigs:
    Oysterhead- Intel i5-2320 CPU@3.0Ghz, Zalman 9500AT2, 8Gb Patriot 1333Mhz DDR3 RAM, 120Gb Kingston V200+ SSD, 1Tb Seagate HD, Linux Mint 17 Cinnamon 64 bit, LG 330W PSU

    Flying Frog Brigade-Intel Xeon W3520@2.66Ghz, 6Gb Hynix 1066Mhz DDR3 RAM, 640Gb Hitachi HD, 512Mb GDDR5 AMD HD4870, Mac OSX 10.6.8/Linux Mint 14 Cinnamon dual boot

    Laptop:
    Colonel Claypool-Intel T6600 Core 2 Duo, 4Gb 1066Mhz DDR3 RAM, 1Gb GDDR3 Nvidia 230M,240Gb Edge SATA6 SSD, Windows 7 Home 64 bit




  22. #122
    Registered User
    Join Date
    Feb 2013
    Location
    Middle Earth
    Posts
    208
    Quote Originally Posted by Otis11 View Post
    That's normal (although not very common). That just means two people errored out/aborted the same WU, or someone sent back an errored unit and therefore is marked "inconclusive". But it does signal the end...

    Do we have any idea where to put the GPUs once this is over? I thought I saw something on that, but I can't find it at the moment. I know GPUGrid is great for Nvidia, but what about ATI? There are a bunch, but what's the most worthy (read: humanitarian) project that runs on ATI?

    (BTW, hello again! Been a while)

    Otis, you need to fix your signature. FYI, I have 10 AMD 7770 GPUs.

    I've signed up with the XtremeSystems Einstein@Home, Milkyway@Home, and Seti@Home teams.

    http://einstein.phys.uwm.edu/index.php

    http://milkyway.cs.rpi.edu/milkyway/index.php

    http://setiathome.berkeley.edu/index.php

    Asteroids isn't doing GPUs right now.
    Last edited by Gandalf; 04-23-2013 at 07:08 PM.


    "I refuse to answer that question on the grounds that I don't know the answer" [Douglas Adams (11 March 1952 - 11 May 2001)]

  23. #123
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    390
    Is anyone receiving large amounts of resends yet? There are estimated to be less than 2.5 days of available work.

  24. #124
    Xtreme Member
    Join Date
    Jan 2010
    Posts
    323
    For real?? I thought there was something like a week left...

  25. #125
    Xtreme Member
    Join Date
    Oct 2012
    Posts
    448
    I've got WUs on one 'puter until Monday, and the other GPU-enabled one has WUs until Wednesday. I am starting to see some of the resend _2s now.
    Desktop rigs:
    Oysterhead- Intel i5-2320 CPU@3.0Ghz, Zalman 9500AT2, 8Gb Patriot 1333Mhz DDR3 RAM, 120Gb Kingston V200+ SSD, 1Tb Seagate HD, Linux Mint 17 Cinnamon 64 bit, LG 330W PSU

    Flying Frog Brigade-Intel Xeon W3520@2.66Ghz, 6Gb Hynix 1066Mhz DDR3 RAM, 640Gb Hitachi HD, 512Mb GDDR5 AMD HD4870, Mac OSX 10.6.8/Linux Mint 14 Cinnamon dual boot

    Laptop:
    Colonel Claypool-Intel T6600 Core 2 Duo, 4Gb 1066Mhz DDR3 RAM, 1Gb GDDR3 Nvidia 230M,240Gb Edge SATA6 SSD, Windows 7 Home 64 bit




