
Thread: Very long work units

  1. #1
    Xtreme Cruncher
    Join Date
    Jan 2009
    Location
    Nashville
    Posts
    4,162

    Very long work units

    I have an IBUCH work unit that has run 15 hours and is 41% done; it shows another 18 hours to go. This is on a GTX460, overclocked, showing 85-90% GPU usage.

    On a GTX260 I have a TONI that has run 10 hours and is 15% done. The driver shows it running at full clocks.

    Looked through the project forums but did not see anything about this. Ideas?

    EDIT:

    The GTX260 is on a new build, and the GTX460 had a bunch of errors; could that have anything to do with it? I seem to remember mention of a test WU or something.
    Last edited by PoppaGeek; 09-07-2011 at 04:13 PM.

  2. #2
    Xtreme Cruncher
    Join Date
    Dec 2008
    Location
    Texas
    Posts
    5,152
    That might be what's affecting me too... I gotta check as I don't have regular access to the machine, but that would explain why the points are so low even though it's connecting...

    Yep... got one over 31 hours on an OC'd 260 that validated... http://www.gpugrid.net/workunit.php?wuid=2677215

    It's GIANNI_KKFREE
    Last edited by Otis11; 09-07-2011 at 05:33 PM.


    24 hour prime stable? Please, I'm 24/7/365 WCG stable!

    So you can do Furmark, Can you Grid???

  3. #3
    Xtreme Cruncher
    Join Date
    Jan 2009
    Location
    Nashville
    Posts
    4,162
    I am still getting very long work units, 30+ hours. Not all are this long, and it's not just one of my cards; all of them have gotten at least one. I have looked through the threads at GPUGrid.net and I see no mention of this at all. With the longer turn-in time announced, I am wondering why they would not mention this and why no one else is noticing.

  4. #4
    Xtreme Cruncher
    Join Date
    Dec 2008
    Location
    Texas
    Posts
    5,152
    I've gotten two 20-plus-hour WUs...



  5. #5
    Xtreme Cruncher
    Join Date
    Jan 2009
    Location
    Nashville
    Posts
    4,162
    I posted to their forums, waiting for reply.

  6. #6
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    I have not been running much lately, but the KKFREE WUs I received yesterday and today are "normal", and by that I mean they are taking about 10.5 hours on my 480, which is the same ballpark as when that WU type first came out.

    I took a look at the results and it looks like the WUs have the same number of "steps" but your times per step are way off ... you guys are not having any issues with downclocking, are you?

    PG ... you are losing out on bonus points because your cache is pulling WUs long before you really need them. I'm guessing you could bump 20-30k easy by fixing that. (sorry Otis )
    Last edited by Snow Crash; 09-10-2011 at 01:04 PM.

  7. #7
    Xtreme Cruncher
    Join Date
    Jan 2009
    Location
    Nashville
    Posts
    4,162
    While I have one card, a GTX260, with a history of downclocking, my other 3 cards do not. I have checked and double-checked the last few days and everything is running at full clocks and showing 80-90% load. When I originally posted about this I was concerned about one card under Linux, but now all my cards are showing long run times. One card shows one work unit at 16 ms per step and another WU at 30 ms.

    EDIT:

    Just discovered that the 260 with a history of downclocking shows 118 ms per step on a WU. VERY frustrating, as the Nvidia panel shows it at full clocks. This is under Linux and I am not aware of another way to check clocks.
    Last edited by PoppaGeek; 09-10-2011 at 01:14 PM.

  8. #8
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    stumped ...

    they're not Fermi cards so you never used swan_sync and consequently never needed to turn it back off,

    no downclocking (PG - while you might be having issues with Linux, Otis's general issue is on Win7),

    no announcement on GPUGrid about any changes,

    same number of steps per WU ...

    it almost looks like they are testing a different molecule branch, but that typically has a new "batch" type name,

    maybe they are hitting some new parameters in the batch that are just really tough ... I doubt that, as WUs in a given batch are usually fairly consistent.

    I know a while back GDF bumped the internal thread priority (which I don't know how to check), but I thought that was at the application level and not WU-specific.

    I see you have not jumped to the latest version of BOINC (6.13.xx), which I've read is causing GPU difficulties on some projects.
    Last edited by Snow Crash; 09-10-2011 at 01:26 PM.

  9. #9
    Xtreme Cruncher
    Join Date
    Jan 2009
    Location
    Nashville
    Posts
    4,162
    Quote Originally Posted by Snow Crash View Post
    stumped ...

    they're not Fermi cards so you never used swan_sync and consequently never needed to turn it back off,

    no downclocking (PG - while you might be having issues with Linux, Otis's general issue is on Win7),

    no announcement on GPUGrid about any changes,

    same number of steps per WU ...

    it almost looks like they are testing a different molecule branch, but that typically has a new "batch" type name,

    maybe they are hitting some new parameters in the batch that are just really tough ... I doubt that, as WUs in a given batch are usually fairly consistent.

    I know a while back GDF bumped the internal thread priority (which I don't know how to check), but I thought that was at the application level and not WU-specific.

    I see you have not jumped to the latest version of BOINC (6.13.xx), which I've read is causing GPU difficulties on some projects.
    He went back to a lower priority. The high priority was causing freezes. I stopped running GPUGrid because of it.

    I set the work cache to a lower number. Thanks for that.

    To see the priority on Windows, open Task Manager and click the Processes tab. If there is not a column that says Base Priority, click View > Select Columns. Under Linux, open a terminal and run top.
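    If you'd rather script the check than dig through the GUI, here's a minimal sketch of my own (not from the project) covering the Linux side; it assumes Python 3 on a POSIX system, where os.getpriority reads the same nice value top shows in its NI column:

```python
import os

def nice_of(pid: int = 0) -> int:
    """Return the nice value of a process (pid 0 = this process).

    Higher numbers mean lower scheduling priority, so if a science
    app's nice value suddenly drops toward 0, that would line up
    with the priority bump discussed above.
    """
    return os.getpriority(os.PRIO_PROCESS, pid)

# Inspect this script's own priority; swap in the real PID of a
# running GPUGrid task (found via top or pgrep) to check it instead.
print(nice_of())
```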
    Last edited by PoppaGeek; 09-10-2011 at 02:16 PM.

  10. #10
    Xtreme Cruncher
    Join Date
    Jan 2009
    Location
    Nashville
    Posts
    4,162
    On the Linux box:
    Without GPUGrid running, open the Nvidia driver panel and it shows Performance Level 0, which is low clocks.
    Start GPUGrid and it jumps to Performance Level 3, which is full speed. Pause GPUGrid and it goes back to Level 0. So it SEEMS to be working. Temps go up about 7C when running.
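    For a scriptable version of that check, here's a sketch of my own (assuming a driver new enough that nvidia-smi supports --query-gpu; it returns None rather than crashing when the tool or GPU is absent):

```python
import shutil
import subprocess

def current_gpu_clocks():
    """Query the current SM and memory clocks via nvidia-smi.

    Returns the CSV text nvidia-smi prints, or None when nvidia-smi
    is missing or the query fails (e.g. no NVIDIA GPU present).
    """
    if shutil.which("nvidia-smi") is None:
        return None
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=clocks.sm,clocks.mem", "--format=csv"],
        capture_output=True, text=True,
    )
    return result.stdout if result.returncode == 0 else None

print(current_gpu_clocks() or "nvidia-smi not available")
```

    Run it once while a WU is crunching and again while paused; if the clocks don't rise under load, that's downclocking regardless of what the panel says.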

    Current work has run 21 hours and is 32% done.

  11. #11
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    I know nothing about Linux, but what you are describing does not sound like what we have seen before as down-clocking ... unless it is doing it after some period of time and, when you "look" at it, the monitoring tool itself causes it to wake up???


    Quote Originally Posted by PoppaGeek View Post
    He went back to a lower priority. The high priority was causing freezes. I stopped running GPUGrid because of it.
    That was when he set the process priority too high ... he then got info from Mr. Haslegrove on how to bump the thread priority. http://www.gpugrid.net/forum_thread....rap=true#21425

    I set the work cache to a lower number. Thanks for that.
    I *assume* you also have the report_immediately option set in your cc_config
    <cc_config>
      <options>
        <report_results_immediately>1</report_results_immediately>
      </options>
    </cc_config>

    To see the priority on Windows, open Task Manager and click the Processes tab. If there is not a column that says Base Priority, click View > Select Columns. Under Linux, open a terminal and run top.
    This is different than the thread priority which is what got bumped the last time.

    I found another post about super long WUs ... but no real answer, just the usual non-explanations ...
    http://www.gpugrid.net/forum_thread.php?id=2555#21511
    Last edited by Snow Crash; 09-10-2011 at 02:45 PM.

  12. #12
    Xtreme Cruncher
    Join Date
    Jan 2009
    Location
    Nashville
    Posts
    4,162
    I'm done.

    After aborting a WU that had run 175,577 seconds and still had 20 hours to go, I get up today and find another machine where GPUGrid errored out its WU and took down all the other WUs sent by GPUGrid and WCG with it. And yes, that is what did it. I do not feel good enough for this frustration. This project is getting more like F@H every day: upping the priority on WHATEVER, me spending days trying to figure out what is wrong with my machine, one WU crashing and taking all the others with it, needing to set queues to .01 on WCG to get GPUGrid WUs in on time.


    I have set the project to No More Tasks. Maybe when it is 30F outside and I need the heat I'll come back.
    Last edited by PoppaGeek; 09-12-2011 at 11:44 AM.

  13. #13
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    Sorry to hear of your frustrations and I completely understand your decision. You know we will always be here to support you in any way we can whenever you decide to come back ... heck, we don't even care if you're crunching; you're a blast and a big part of our involvement with the project ... besides, who's gonna keep Otis in line if you're not hanging around?

  14. #14
    Xtreme Cruncher
    Join Date
    Dec 2008
    Location
    Texas
    Posts
    5,152
    Quote Originally Posted by Snow Crash View Post
    Sorry to hear of your frustrations and I completely understand your decision. You know we will always be here to support you in any way we can whenever you decide to come back ... heck, we don't even care if you're crunching; you're a blast and a big part of our involvement with the project ... besides, who's gonna keep Otis in line if you're not hanging around?
    Otis don't have time to get outta line no more...

    PG - Don't be a stranger... And keep an eye on your mirror.



  15. #15
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    Well ... the project team got to looking at this in general and found a batch in their cancer research sub-project that needs to have the "size" of each WU adjusted. Short term, they have stopped them until they get that straightened out. The upshot is that it was only a sizing error; the results are all good!
    http://www.gpugrid.net/forum_thread....rap=true#22101

    This doesn't account for the long runs on KKFREE, but it does account for some of the WUs Otis processed.

  16. #16
    Xtreme Cruncher
    Join Date
    Dec 2008
    Location
    Texas
    Posts
    5,152
    Well that's good to hear.

    I couldn't find anything wrong with my rig so it's still crunching away... Hope that's everything.


