
Thread: Badge hunting for both hpf2 and gfam. efficiency drive.


#11 · HWiNFO Author · Join Date: Apr 2006 · Location: /dev/null · Posts: 801
    My hosts were overloaded with GFAM tasks that I couldn't finish on time, so I aborted about 150 WUs. Now I should hold enough GFAM WUs to finish the target with my own hosts, and I can start building a cache for HPF2. The current run time returned is 32-36 days/day, so I believe the GFAM target could be reached on Sunday.
    Will monitor how the situation evolves and adjust accordingly...
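    The ETA reasoning above is just remaining runtime divided by the daily return rate. A minimal sketch; only the 32-36 days/day rate comes from the post, and the remaining-runtime figure is a hypothetical placeholder:

    ```python
    # Back-of-the-envelope ETA: calendar days until a runtime target is met.
    # Only the 32-36 days/day return rate is from the post; the remaining
    # runtime-days figure below is hypothetical.

    def days_to_target(remaining_runtime_days, daily_return_rate):
        """Calendar days needed at the given daily return rate."""
        return remaining_runtime_days / daily_return_rate

    # e.g. ~128 runtime-days left at the slower 32 days/day pace:
    print(days_to_target(128, 32))  # 4.0 calendar days, i.e. "Sunday"
    ```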

    EDIT: OC, I posted this before I read your previous post, so that was my thought too. Currently my rigs hold ~6 days of GFAM work; the rest is committed to HPF2, and they are all set to buffer 10 days. I'll drop more GFAM WUs the closer I get to the target and the more certain I am of reaching it, so the HPF2 WUs fill up before the GFAM ones dry out.

    EDIT2: I have also disabled GFAM in my profile to avoid downloading further WUs (yep, I still got several resends), since I believe there's enough work buffered to reach the target. All machines on my account currently have 1200 GFAM WUs buffered! That's plenty for the ~100 more days needed.
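    The buffer-vs-target check can be sketched like this; the per-WU runtime is an assumption (WU lengths vary by host), and only the 1200-WU buffer and the ~100-day figure come from the post:

    ```python
    # Hedged sketch: does a WU buffer cover the remaining runtime target?
    # hours_per_wu is hypothetical; 1200 WUs and ~100 days are from the post.

    def buffer_covers_target(buffered_wus, hours_per_wu, runtime_days_needed):
        """True if the buffered work amounts to at least the needed runtime."""
        return buffered_wus * hours_per_wu >= runtime_days_needed * 24

    # 1200 WUs at an assumed 2 h each = 2400 runtime-hours vs 100 days needed:
    print(buffer_covers_target(1200, 2, 100))  # True
    ```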

    EDIT3: The current HPF2 buffer across all machines is ~1000 WUs and going up...

    EDIT4: I think all helpers can abort most of the GFAM WUs in their caches and leave ~2-3 days of GFAM work, so the HPF2 WUs get loaded. Then continuously reduce the GFAM buffers as the status becomes more precise. This way GFAM will be finished ASAP and a full switch to HPF2 can be made, but it all depends on how soon new HPF2 work runs out... Due to the nature of HPF2, I believe there might be a lot of resends that we can catch even after the end date...
    Last edited by Mumak; 06-07-2013 at 12:40 AM.
