
Thread: Runtime: how about increasing it

  1. #26
    Xtreme Member
    Join Date
    Mar 2010
    Posts
    368
My IO rig is now crunching with 36 threads. I am still not convinced this will increase the number of results or points per day; on the other hand, it should increase the runtime.
    I will now let the rig run over the weekend and check the stats. If there is a sizeable increase in performance, I will propagate the change to all the rigs. Wait and see.

  2. #27
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
A single large leap to a higher number of WUs is unlikely to show an increase in points, whereas small increments may do so for the duration of the move from a lesser to a larger amount.

After the rig settles I would expect to see a similar or slightly elevated points tally, and the same goes for results. The increase in runtime should be related to the percentage increase in WUs running.

It is early days to be certain, but it may be that the number of CPU cores/threads times an odd number (i.e. 8*3 or 8*5, and 12*3 or 12*5) looks after the offset better for longer. This is just an observation on this rig which I have not really proven yet, and it may not hold on later hardware.


    My Biggest Fear Is When I die, My Wife Sells All My Stuff For What I Told Her I Paid For It.
    79 SB threads and 32 IB Threads across 4 rigs 111 threads Crunching!!

  3. #28
    Xtreme Member
    Join Date
    Jun 2010
    Location
    Crab Nebula
    Posts
    493
    Quote Originally Posted by EtaCarinae View Post
My IO rig is now crunching with 36 threads. I am still not convinced this will increase the number of results or points per day; on the other hand, it should increase the runtime.
    I will now let the rig run over the weekend and check the stats. If there is a sizeable increase in performance, I will propagate the change to all the rigs. Wait and see.
If you were running 10 GPU and 2 CPU before, I can guarantee that you'll get a very noticeable increase in production.



    You'll never know what you're living for until you know what you're willing to die for.

  4. #29
    Xtreme Member
    Join Date
    Mar 2010
    Posts
    368
    Quote Originally Posted by nanoprobe View Post
If you were running 10 GPU and 2 CPU before, I can guarantee that you'll get a very noticeable increase in production.
Indeed, nanoprobe. I was running 10 GPU and 2 CPU, so let's see what happens. At least 36 threads make better use of the 6 GB of RAM I have in each rig.
    And even if it is better, what is the maximum then? 100 threads, or 20 threads? Or up to the maximum that memory capacity will allow?

  5. #30
    Not Yours!
    Join Date
    Nov 2009
    Location
    Oslo-Norway
    Posts
    550
Running two 7970s here with 12 WUs per GPU; it looks like I've got a bit of headroom on the temps.






Going to increase the OC on these cards during Easter when I get the time to figure out the vcore settings.
    Do you think I should increase the number of WUs per GPU too? CPU load is rarely above 60%.
    Last edited by Mydog; 03-21-2013 at 01:05 PM.

  6. #31
    Xtreme Member
    Join Date
    Jun 2010
    Location
    Crab Nebula
    Posts
    493
    Quote Originally Posted by EtaCarinae View Post
Indeed, nanoprobe. I was running 10 GPU and 2 CPU, so let's see what happens. At least 36 threads make better use of the 6 GB of RAM I have in each rig.
    And even if it is better, what is the maximum then? 100 threads, or 20 threads? Or up to the maximum that memory capacity will allow?
That's a hard call. My maximum is 32 on my current hardware with 8 GB of RAM (your mobo's RAM speed will make a difference); any more than that and the efficiency dropped off. With an HT hex-core you could possibly run up to 48 tasks. I don't know how much RAM each task requires, but I would think 6 GB would be enough to run 48. Running 36 will cause a backlog of PVs (pending validations), so I would give your current settings 4 or more days to get a baseline for results. After 36, try 42 and see what happens. If you don't see a better average run time per task, then 36 is your limit. If 42 gives you better times (it may only be a few seconds per task), then bump it up to 48 and see what happens. My best guess is that 48 will be your limit if 36 or 42 isn't. At 32, my average run time per task is about 37.5 seconds (32 tasks every 20 minutes).
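    For reference, here is the arithmetic behind that last figure as a small Python sketch. Only the 32-tasks-every-20-minutes number comes from the post above; the 48-in-28-minutes case is purely illustrative.
    [CODE]
# Effective throughput when running many WUs concurrently on one GPU.
# Each task still takes the full wall-clock time to finish, but with N
# tasks in flight the completion interval is wall_time / N.

def effective_interval(batch_minutes: float, concurrent_tasks: int) -> float:
    """Seconds between task completions once the pipeline is full."""
    return batch_minutes * 60.0 / concurrent_tasks

def tasks_per_day(batch_minutes: float, concurrent_tasks: int) -> float:
    """Completed tasks per day at a steady state."""
    return 24 * 60 / batch_minutes * concurrent_tasks

# nanoprobe's figure: 32 tasks every ~20 minutes
print(effective_interval(20, 32))   # 37.5 s between completions
print(tasks_per_day(20, 32))        # ~2304 tasks/day per card

# Raising concurrency only pays off if the batch time grows more slowly
# than the task count; e.g. 48 tasks finishing in 28 minutes would still win:
print(effective_interval(28, 48))   # 35.0 s
    [/CODE]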
    Are we having fun yet?




  7. #32
    Xtreme Member
    Join Date
    Jun 2010
    Location
    Crab Nebula
    Posts
    493
    Quote Originally Posted by Mydog View Post
Running two 7970s here with 12 WUs per GPU; it looks like I've got a bit of headroom on the temps.






Going to increase the OC on these cards during Easter when I get the time to figure out the vcore settings.
    Do you think I should increase the number of WUs per GPU too? CPU load is rarely above 60%.
Running dual cards on a system is a whole different ballgame. There is a point where you run into bottlenecks, so I can't advise you on how to proceed; trial and error is the only way to find out. Maybe someone else here who is running a dual-card setup can chime in.




  8. #33
    Xtreme Member
    Join Date
    Mar 2010
    Posts
    368
I did two tests: running 36 threads and running 12 threads. The result is that with 36 threads the daily point production is at least 12% lower than with 10 GPU threads. In fact the maximum is with 12 threads, which means dedicating a full CPU core to each GPU thread; that gives me at least +8% over a 10+2 configuration, all at the same hardware settings.
    The reason is that HCC WUs are still CPU intensive. The CPU is heavily used at the end of each diffraction image, i.e. twice per WU, on top of all the logistics such as loading and unloading the WUs on the GPU, networking and the other PC functions. Going above 12 threads could still give better results, but that would require being able to define the starting time of each WU, so as to avoid collisions where multiple WUs use CPU resources at the same time.
    The reason I am using the 10+2 configuration is that, in case there is a shortage of GPU WUs and the cache runs empty (it has happened already), I avoid having powered but idle machines: the CPU still crunches at least two WUs, which is better than nothing.
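    For readers wondering how a 12-per-GPU or 10 GPU + 2 CPU split is set up in practice: with BOINC 7.x clients this is usually done through an app_config.xml in the project's directory, where gpu_usage = 1/N runs N WUs concurrently per GPU and cpu_usage reserves CPU time for each of them. The sketch below simply writes such a file; the HCC application name ("hcc1"), the path and the values are assumptions to adapt to your own client (check client_state.xml for the real app name).
    [CODE]
# Minimal sketch of a BOINC app_config.xml for running many HCC WUs per GPU.
# Assumptions: BOINC 7.x, the WCG project directory path below, and the HCC
# app short name "hcc1" -- verify all three against your own installation.

from pathlib import Path

WUS_PER_GPU = 12      # e.g. EtaCarinae's 12-per-GPU setup; tune per rig
CPU_PER_WU = 1.0      # reserve a full CPU thread for each GPU WU

app_config = f"""<app_config>
  <app>
    <name>hcc1</name>
    <gpu_versions>
      <gpu_usage>{1.0 / WUS_PER_GPU:.4f}</gpu_usage>
      <cpu_usage>{CPU_PER_WU}</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
"""

# Path is illustrative; on Windows the BOINC data dir is usually C:\ProgramData\BOINC.
project_dir = Path("projects/www.worldcommunitygrid.org")
(project_dir / "app_config.xml").write_text(app_config)
print(app_config)
    [/CODE]
    After writing the file, "Read config files" in the BOINC Manager (or a client restart) picks up the change; any CPU threads not reserved this way stay free for CPU-only work, which is one way a 10+2 style split can come about.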
    Last edited by EtaCarinae; 03-25-2013 at 12:26 AM.

  9. #34
    Xtreme Member
    Join Date
    May 2008
    Location
    Sydney, Australia
    Posts
    242
    Quote Originally Posted by EtaCarinae View Post
... The reason I am using the 10+2 configuration is that, in case there is a shortage of GPU WUs and the cache runs empty (it has happened already), I avoid having powered but idle machines: the CPU still crunches at least two WUs, which is better than nothing.
    If we run some CPU WUs in addition to HCC GPU ones, I think we have to do some micromanagement of the work cache to keep things running optimally.

You have to have "HCC" ticked in your machine's WCG Device Profile in order to get GPU work. This means that you get some HCC CPU work as well as the GPU work. IMHO, crunching HCC WUs with a CPU is now a huge waste of energy and computing resources, because it's so much more efficient to crunch them with a GPU. To avoid this wastage, you need to abort all HCC CPU WUs received! The CPU time that you would have used for HCC should go to a CPU-only project instead - for those who want to crunch cancer work, HFCC work is still available.
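    If aborting those CPU WUs by hand gets tedious, it can in principle be scripted against the client with boinccmd. The sketch below is only a starting point, assuming boinccmd is on the PATH, that a "resources:" line in the --get_tasks output distinguishes CPU from GPU tasks, and that HCC task names can be matched on "hcc"; all three assumptions need checking against your own client's output before trusting it.
    [CODE]
# Sketch: abort WCG HCC tasks that arrived as CPU-only work, via boinccmd.
# Assumptions (verify first): boinccmd on PATH, the WCG project URL below,
# "hcc" appearing in HCC task names, and a "resources:" field in the
# output of `boinccmd --get_tasks`.
import subprocess

PROJECT_URL = "http://www.worldcommunitygrid.org/"

def list_tasks():
    """Parse `boinccmd --get_tasks` output into a list of field dicts."""
    out = subprocess.run(["boinccmd", "--get_tasks"],
                         capture_output=True, text=True, check=True).stdout
    tasks, current = [], {}
    for line in out.splitlines():
        line = line.strip()
        if line.endswith("-----------"):      # "1) -----------" starts a task block
            if current:
                tasks.append(current)
            current = {}
        elif ": " in line:
            key, value = line.split(": ", 1)
            current[key] = value
    if current:
        tasks.append(current)
    return tasks

for task in list_tasks():
    name = task.get("name", "")
    resources = task.get("resources", "")
    # An HCC task with no GPU in its resource string is CPU-only work.
    if "hcc" in name.lower() and "GPU" not in resources:
        print("aborting", name)
        subprocess.run(["boinccmd", "--task", PROJECT_URL, name, "abort"],
                       check=True)
    [/CODE]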


Interesting that you found 12 GPU threads to be best. I'm running a Q9650 with 1 x HD7870, no CPU WUs, and 12 threads seems to be best with it too. I just tried 15, 14 and 13, but throughput was worse. Earlier I tried all the numbers from 4 to 12, plus 16, and chose 12. Running 1 thread per CPU core (4) was definitely worse, i.e. there was no "magic" in having 1 CPU core per GPU thread.

  10. #35
    Xtreme Cruncher
    Join Date
    Apr 2007
    Location
    Western Canada
    Posts
    1,004
I'm beginning to think the methodology behind the max-production numbers being posted is flawed. There is no doubt that running CPU tasks alongside GPU tasks will slow down production; that is a given and makes sense.

However, on the subject of GPU-only crunching, something isn't adding up!

A while back I was running 16 WUs on the 79xx cards and 8 on the 7770s. Someone said that 10 was optimal for 79xx cards, so I dropped mine to 10 and added another 7950 and 7770 to the farm. After adding those cards my point production barely went up and stayed around the 530k mark. Reading this thread a couple of weeks ago, I decided to put the 79xx's back to 16 WUs, then 24, where they have been running for just over a week. I've seen my PPD jump up significantly since then, averaging well over 620k PPD, with nothing else changed (have a look at the pie chart). I'm not doing any fancy math trying to figure out immediate production or anything, just looking at BOINC stats and Free-DC. Also, my pendings were increasing throughout that last period; I see today that they are finally coming down from the 600-page mark.

So what the heck is going on? My PPD is not jibing with what other people are seeing/saying. Like I've always said, I couldn't care less about points other than as an indication of work done, but I find this odd.

FWIW, my current farm consists of one 7970, two 7950s and five 7770s, all running on quad cores with hyperthreading.
    Last edited by Johnmark; 03-25-2013 at 08:01 AM.

  11. #36
    Xtreme X.I.P.
    Join Date
    Jan 2008
    Posts
    727
    JohnMark,

Your latest posted numbers for the 7970 and 7950 are good. The dual-7770 rig you run appears to scale poorly compared to the other single-7770 rigs that you operate.

I believe that each rig has its own sweet spot insofar as the number of WUs to run for the best results count or optimum runtime. The rig's components, and whether you are running a single, Crossfire or Tri-Fire video card setup, will dictate the ideal number of concurrent WUs for best efficiency. You must fine-tune each rig individually; there is no magic WU number that can be applied to a whole video card series. What folks provide in this forum are good WU benchmark numbers to use when fine-tuning your own rigs. Even if you have the same hardware as someone else, you may not exactly match their PPD, as variables such as application versions, background processes, daily WU spacing and internet speed will influence your result output.

    How am I ever going to catch up to you with them new Xtreme results you're now putting out...???
    Last edited by jeanguy2; 03-25-2013 at 08:57 AM.


  12. #37
    Xtreme Member
    Join Date
    Jun 2010
    Location
    Crab Nebula
    Posts
    493
    Quote Originally Posted by Johnmark View Post

FWIW, my current farm consists of one 7970, two 7950s and five 7770s, all running on quad cores with hyperthreading.
    What type of quads?




  13. #38
    Xtreme X.I.P.
    Join Date
    Jan 2008
    Posts
    727
    Quote Originally Posted by nanoprobe View Post
    What type of quads?
Nano..., the list of JohnMark's rigs is here.


  14. #39
    Xtreme Cruncher
    Join Date
    Apr 2007
    Location
    Western Canada
    Posts
    1,004
I hear what you're saying, jeanguy2; it's just interesting that there is such a significant difference in output!
    Some of it makes sense; however, I can't help feeling I'm missing something in this whole GPU/WU vs. production thing.

You've got a slice or two of my pie already, isn't that good enough?

nanoprobe, the CPUs are Intel quads: a 950, 920, 860, 875K and four 2600Ks.

    Quote Originally Posted by jeanguy2 View Post
    JohnMark,

Your latest posted numbers for the 7970 and 7950 are good. The dual-7770 rig you run appears to scale poorly compared to the other single-7770 rigs that you operate.

I believe that each rig has its own sweet spot insofar as the number of WUs to run for the best results count or optimum runtime. The rig's components, and whether you are running a single, Crossfire or Tri-Fire video card setup, will dictate the ideal number of concurrent WUs for best efficiency. You must fine-tune each rig individually; there is no magic WU number that can be applied to a whole video card series. What folks provide in this forum are good WU benchmark numbers to use when fine-tuning your own rigs. Even if you have the same hardware as someone else, you may not exactly match their PPD, as variables such as application versions, background processes, daily WU spacing and internet speed will influence your result output.

    How am I ever going to catch up to you with them new Xtreme results you're now putting out...???
    Last edited by Johnmark; 03-25-2013 at 03:16 PM.

  15. #40
    Xtreme Member
    Join Date
    Jun 2010
    Location
    Crab Nebula
    Posts
    493
    Quote Originally Posted by Johnmark View Post
I hear what you're saying, jeanguy2; it's just interesting that there is such a significant difference in output!
    Some of it makes sense; however, I can't help feeling I'm missing something in this whole GPU/WU vs. production thing.

You've got a slice or two of my pie already, isn't that good enough?

nanoprobe, the CPUs are Intel quads: a 950, 920, 860, 875K and four 2600Ks.
Here's my take, FWIW. I did a lot of testing from the time the WCG GPU app started. Four of my 7970s run on 2600k's @ 4.2 GHz. All have 8 GB of 1600 MHz memory. The core clock on all the cards is 1125 MHz on stock voltage, with the memory downclocked to 1000 MHz (I found no advantage to running the memory higher, and trying to run any lower would cause the card to revert back to factory default). I started with 8 tasks each and worked my way up 4 tasks at a time until I reached the 32 tasks per card I'm running now. The number of tasks completed and the PPD stayed about the same from 16 on up; what did increase as I ran more than 16 tasks was the amount of run time I accumulated. Running more than 32 caused fewer tasks completed and lower PPD. The "trick" with running that many is getting the proper spacing between each task so the GPU stays as close to its maximum load as possible all the time (see the spacing sketch at the end of this post).
    Overclocking your CPU/GPU and mobo memory will produce more results and more points, but I'm looking for longevity, so I chose not to run mine any harder.
    Here are my latest stats for one machine, and why I think my setup is optimal for me. They were lower on 3/13 because the machine was offline part of the day for dust removal, and on 3/22 for a re-image onto a new drive.
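    A rough sketch of the spacing arithmetic behind that "trick". Only the 32 tasks at roughly 20 minutes each comes from the post above; the evenly spread offsets and the suspend/resume method are just the usual way people describe setting this up, not something prescribed here.
    [CODE]
# Spacing concurrent GPU WUs so their CPU-heavy phases do not collide.
# With N tasks in flight and a per-task wall time T, an even stagger of
# T / N means roughly one task is finishing (and hitting the CPU) at a time.

def stagger_offsets(task_minutes: float, concurrent_tasks: int) -> list:
    """Ideal start offsets, in seconds, for each concurrent task."""
    step = task_minutes * 60.0 / concurrent_tasks
    return [round(i * step, 1) for i in range(concurrent_tasks)]

# nanoprobe's setup: 32 tasks of roughly 20 minutes each
offsets = stagger_offsets(20, 32)
print(offsets[:5])   # [0.0, 37.5, 75.0, 112.5, 150.0] -> one completion every 37.5 s

# One common way to realise this is to suspend all but one task in the BOINC
# Manager and resume them one at a time at roughly these intervals.
    [/CODE]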





  16. #41
    Xtreme X.I.P.
    Join Date
    Jan 2008
    Posts
    727
You've got a slice or two of my pie already, isn't that good enough?
    JohnMark, I went down to the Fraser river today and asked the local Sto:Lo medicine man about what to do to challenge you. He told me to move my wapiti rack from the north side of the cabin to the south side of the cabin so that it faces the spirit of the river. I am also to hang my snowshoes beneath it with their pointed heels facing upwards. I will do so tomorrow.

Hypothetically, as MM would say, you could be in trouble as this is very strong medicine....
    Last edited by jeanguy2; 03-25-2013 at 05:00 PM.


  17. #42
    Xtreme Cruncher
    Join Date
    Apr 2007
    Location
    Western Canada
    Posts
    1,004
    Well I guess I asked for it
    Quote Originally Posted by jeanguy2 View Post
    JohnMark, I went down to the Fraser river today and asked the local Sto:Lo medicine man about what to do to challenge you. He told me to move my wapiti rack from the north side of the cabin to the south side of the cabin so that it faces the spirit of the river. I am also to hang my snowshoes beneath it with their pointed heels facing upwards. I will do so tomorrow.

Hypothetically, as MM would say, you could be in trouble as this is very strong medicine....

  18. #43
    Xtreme Member
    Join Date
    May 2007
    Location
    The Netherlands
    Posts
    935
Johnmark, I have a question for you. How do you get THIS cruncher to do so well? When I compare it to THIS one of mine, I'm losing too many points each day.

    I have it running GPU-only and was at 4 WUs at a time. The 2600k is @ 4.5 GHz and the GPU @ 1000 MHz.

  19. #44
    Xtreme Cruncher
    Join Date
    Apr 2007
    Location
    Western Canada
    Posts
    1,004
Rob, that machine is running at 4600 MHz with 8 GB of RAM and the HIS GPU at 1200 MHz on stock voltage. It was running 8 WUs from day one and I bumped it up to 12 the other day. That's all, Rob!

    Quote Originally Posted by Rob_B View Post
Johnmark, I have a question for you. How do you get THIS cruncher to do so well? When I compare it to THIS one of mine, I'm losing too many points each day.

    I have it running GPU-only and was at 4 WUs at a time. The 2600k is @ 4.5 GHz and the GPU @ 1000 MHz.

  20. #45
    Xtreme Member
    Join Date
    May 2007
    Location
    The Netherlands
    Posts
    935
Thanks. I will play a little with the number of WUs and try to overclock the GPU.

  21. #46
    Xtreme Member
    Join Date
    May 2007
    Location
    The Netherlands
    Posts
    935
One more question: how many WUs does that 7770 crunch each day? Mine does 650 WUs on average.

    I thought that with 4 WUs it completed them the quickest. I tried 6, and that lowered the total WUs per day because it took much longer to crunch them.

  22. #47
    Xtreme Cruncher
    Join Date
    Apr 2007
    Location
    Western Canada
    Posts
    1,004
800-900 a day; the last few days have been 900-1000.
    Quote Originally Posted by Rob_B View Post
One more question: how many WUs does that 7770 crunch each day? Mine does 650 WUs on average.

    I thought that with 4 WUs it completed them the quickest. I tried 6, and that lowered the total WUs per day because it took much longer to crunch them.

  23. #48
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Quote Originally Posted by Rob_B View Post
One more question: how many WUs does that 7770 crunch each day? Mine does 650 WUs on average.

    I thought that with 4 WUs it completed them the quickest. I tried 6, and that lowered the total WUs per day because it took much longer to crunch them.
With a 2600K I would do 8 WUs and space them apart 2 at a time.

    Also, why is that GPU only running at 1000?
    24/7 Cruncher #1
    Crosshair VII Hero, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer II 420 AIO, 4x8GB GSKILL 3600MHz C15, ASUS TUF 3090 OC
    Samsung 980 1TB NVMe, Samsung 870 QVO 1TB, 2x10TB WD Red RAID1, Win 10 Pro, Enthoo Luxe TG, EVGA SuperNOVA 1200W P2

    24/7 Cruncher #2
    ASRock X470 Taichi, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer 280 AIO, 2x16GB GSKILL NEO 3600MHz C16, EVGA 3080ti FTW3 Ultra
Samsung 970 EVO 250GB NVMe, Samsung 870 EVO 500GB, Win 10 Ent, Enthoo Pro, Seasonic FOCUS Plus 850W

    24/7 Cruncher #3
    GA-P67A-UD4-B3 BIOS F8 mod, 2600k (L051B138) @ 4.5 GHz, 1.260v full load, Arctic Liquid 120, (Boots Win @ 5.6 GHz per Massman binning)
    Samsung Green 4x4GB @2133 C10, EVGA 2080ti FTW3 Hybrid, Samsung 870 EVO 500GB, 2x1TB WD Red RAID1, Win10 Ent, Rosewill Rise, EVGA SuperNOVA 1300W G2

    24/7 Cruncher #4 ... Crucial M225 64GB SSD Donated to Endurance Testing (Died at 968 TB of writes...no that is not a typo!)
    GA-EP45T-UD3LR BIOS F10 modded, Q6600 G0 VID 1.212 (L731B536), 3.6 GHz 9x400 @ 1.312v full load, Zerotherm Zen FZ120
    OCZ 2x2GB DDR3-1600MHz C7, Gigabyte 7950 @1200/1250, Crucial MX100 128GB, 2x1TB WD Red RAID1, Win10 Ent, Centurion 590, XFX PRO650W

    Music System
    SB Server->SB Touch w/Android Tablet as a remote->Denon AVR-X3300W->JBL Studio Series Floorstanding Speakers, JBL LS Center, 2x SVS SB-2000 Subs


  24. #49
    Xtreme Member
    Join Date
    May 2007
    Location
    The Netherlands
    Posts
    935
    Quote Originally Posted by bluestang View Post
    Also, why is that GPU only running at 1000?
Not sure how that happened, but I will fix that today. I just don't think it will do 1200, but it had done 1100 just fine before.

  25. #50
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
My MSI 7770 is running 1200 at stock without invalids.

