
Thread: Optimizing # ATI GPUs Crunching for Best Production Efficiency

  1. #26
    Registered User
    Join Date
    Feb 2009
    Posts
    470
    OC, the number of WUs has nothing to do with invalids. I'm running 16 on a 7950 @ 1000 as well and it's all smooth, with no invalids for a week. Check the voltage on the GPU/CPU and maybe adjust the OC.


    Tell it it's a :banana::banana::banana::banana::banana: and threaten it with replacement

    D_A on an UPS and life

  2. #27
    Xtreme Cruncher
    Join Date
    Apr 2007
    Location
    Western Canada
    Posts
    1,004
    kjeldoran, I sure hope Haswell will bring us something, because it doesn't appear as though IB has brought anything to the table. How many WUs are you running on a GPU-only rig? What does your latest app_info look like for a GPU-only cruncher?

    OK OC, my trusted advisor, I'll see what increasing the WUs does.

  3. #28
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
    Quote Originally Posted by haschioz View Post
    OC, the number of WUs has nothing to do with invalids. I'm running 16 on a 7950 @ 1000 as well and it's all smooth, with no invalids for a week. Check the voltage on the GPU/CPU and maybe adjust the OC.
    Yeah, I may have dropped the GPU volts a bit too far. All was well running 10 at those volts, but maybe it needs a tad more for 16.


    My Biggest Fear Is When I die, My Wife Sells All My Stuff For What I Told Her I Paid For It.
    79 SB threads and 32 IB Threads across 4 rigs 111 threads Crunching!!

  4. #29
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
    Johnmark: Running lots seems to help me, but be aware that when the difference in runtime exceeds the separation between WUs there will be, at any one time, a few that seem to be running together. That is to say, I set things up with a separation of around 40 seconds, and looking just now I see six that are within 15 seconds of each other. With the CPU completion time at the end of each WU being on the order of 20 seconds here, you can see that there will be a certain degree of overlap.

    In general, though, from one day to the next I see much the same picture every time I look.

    What is not happening is everything running at either 49% or 99%, as happens on my 5870s.


    My Biggest Fear Is When I die, My Wife Sells All My Stuff For What I Told Her I Paid For It.
    79 SB threads and 32 IB Threads across 4 rigs 111 threads Crunching!!

  5. #30
    Xtreme Member
    Join Date
    Feb 2007
    Location
    St. Louis
    Posts
    477
    Quote Originally Posted by Johnmark View Post
    kjeldoran, I sure hope Haswell will bring us something, because it doesn't appear as though IB has brought anything to the table. How many WUs are you running on a GPU-only rig? What does your latest app_info look like for a GPU-only cruncher?

    OK OC, my trusted advisor, I'll see what increasing the WUs does.
    I have a 7850 with 8 GPU threads. I tried more threads on it, but it wasn't giving me more WUs in one day. It is probably due to the lower shader count compared to a 7950/7970.
    I'm hoping for 20-25% over Sandy when Haswell comes out. That would be a worthwhile upgrade.
    Main Rig: i7 2600K @ 4.5ghz, Thermalright HR-02 Macho, Gigabyte Z68MA-DH2-B3, 4x4GB Gskill DDR3-1600, Visiontek Radeon 7850, OCZ Vertex 2 120GB, OCZ Agility 60GB, Silverstone TJ08B-E, Seasonic X750, Win 7 Ultimate 64bit
    Fiance's Rig: Apple iMac 21.5" 2011, i5 2.5ghz, 4GB DDR3-1333, Radeon 6750m, 500GB

  6. #31
    Xtreme Cruncher
    Join Date
    Apr 2007
    Location
    Western Canada
    Posts
    1,004
    Quote Originally Posted by OldChap View Post
    Johnmark: Running lots seems to help me, but be aware that when the difference in runtime exceeds the separation between WUs there will be, at any one time, a few that seem to be running together. That is to say, I set things up with a separation of around 40 seconds, and looking just now I see six that are within 15 seconds of each other. With the CPU completion time at the end of each WU being on the order of 20 seconds here, you can see that there will be a certain degree of overlap.

    In general, though, from one day to the next I see much the same picture every time I look.

    What is not happening is everything running at either 49% or 99%, as happens on my 5870s.
    Changed the app_info to run 12 WUs. Although the time it takes to complete a unit hasn't changed much (if any), I'm now able to get a reasonable spread between units. I've been watching the 7970 at work while offsetting the units, etc. The spread between units is now such that the CPU rarely slows down the process and the GPU is usually at 100%. With the 2600K running @ 4.8 GHz, it seems the system can handle 6/7 WUs in transition before my CPU becomes maxed out. Things slow down and get backed up. Then of course all WUs are either at 49.707% or 99.707% and it's all on the CPU's shoulders.

    It would be nice to have a tag that controlled the spacing between WUs, or something similar.


    kjeldoran, I think I'll bump up the WUs a few more and see how she responds.

  7. #32
    Xtreme Cruncher
    Join Date
    Apr 2007
    Location
    Western Canada
    Posts
    1,004
    I've been using Asus GPU Tweak for OCing and monitoring the GPU, and noticed that my 7970 GPU never hits 100% (HWiNFO shows the same loading). Are your Asus 7950/7970s showing 100% load, or 96-98% like mine? The GPU temps are around 60C, which seems good, so is there something else that could be throttling the GPU?

    Now running 14 threads trying to get a spread between units, but they still get bunched up, and I hate seeing the GPU load drop to 50-60% while the CPU is loaded at 100%.

    Suggestions welcome and appreciated.

  8. #33
    Xtreme Member
    Join Date
    Mar 2012
    Posts
    390
    On my system the CPU part of the WU takes about 30 seconds, so I make sure there is a bit more spacing than that. While half the WUs are doing the CPU part, the other WUs get all the GPU processing. Working out the time differences and how much time to put between them will help them not bunch up. I'd pause all WUs, resume half and let them get 50% done, then resume the rest. That seemed to work for me.

  9. #34
    Registered User
    Join Date
    Apr 2010
    Location
    NC
    Posts
    389
    Quote Originally Posted by Johnmark View Post
    I've been using Asus GPU Tweak for OCing and monitoring the GPU, and noticed that my 7970 GPU never hits 100% (HWiNFO shows the same loading). Are your Asus 7950/7970s showing 100% load, or 96-98% like mine? The GPU temps are around 60C, which seems good, so is there something else that could be throttling the GPU?

    Now running 14 threads trying to get a spread between units, but they still get bunched up, and I hate seeing the GPU load drop to 50-60% while the CPU is loaded at 100%.

    Suggestions welcome and appreciated.
    Even running 24 on a 980X, my GPUs max out at 97-98%. On your 2600K, you can easily run 16.

    On mine with 16/24 set up, I've found that a good way to keep them mostly separated is to:
    -suspend all of the "waiting to run" WUs
    -make the change to app_info
    -wait for the running ones to finish
    -restart BOINC
    -resume 4
    -wait for the progress to start to increment
    -repeat resume/wait
    -resume remaining queue

    Trying to keep it so that you never have multiples in CPU "mode" is next to impossible. With the variability in the WU times, it's not worth trying.

  10. #35
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
    Do yours really ALL get bunched up?

    I tried 10 through 16 on my 7950, and whilst at any time during 24 hours the WUs may overlap some, the general trend is not to bunch.

    I check the runtime for 16 WUs to complete, then take that less a minute and divide by the number of WUs running.

    So, to clarify: start 16 WUs and, when they have all finished, take that time less about a minute and divide by 16. Then, as the next batch starts, suspend first 15, then 14, then 13, etc., starting each at the calculated interval but taking into account the time it takes to go from suspended to starting a new set of WUs.
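
    To put illustrative (made-up) numbers on that: if a batch of 16 takes 11 min 40 s = 700 s from first start to last finish, the spacing works out to (700 s - 60 s) / 16 = 40 s between starts.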

    Following the above process, I find the following day that I have maybe a couple of pairs running at much the same time, but the spread is still mostly clear. Another 24 hours on and things are not much different.

    I will take a punt and say that CPU time relative to WU time is the critical thing here, but I am not prepared to try tuning the relative speeds of each to improve matters.

    It is not perfect, so you might want to try correcting that time again, this time with the runtime for the 16 offset WUs, without removing the "about a minute", and do all this again.

    This will never be exactly the way we really want it, as WUs can be a little different in length by design.
    Last edited by OldChap; 12-06-2012 at 12:54 PM.


    My Biggest Fear Is When I die, My Wife Sells All My Stuff For What I Told Her I Paid For It.
    79 SB threads and 32 IB Threads across 4 rigs 111 threads Crunching!!

  11. #36
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    So what is the consensus on how many WUs to run on a 7950 + 2600K... 10, 12, 16? I'm currently running 10 and averaging 2100+ WUs a day.
    24/7 Cruncher #1
    Crosshair VII Hero, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer II 420 AIO, 4x8GB GSKILL 3600MHz C15, ASUS TUF 3090 OC
    Samsung 980 1TB NVMe, Samsung 870 QVO 1TB, 2x10TB WD Red RAID1, Win 10 Pro, Enthoo Luxe TG, EVGA SuperNOVA 1200W P2

    24/7 Cruncher #2
    ASRock X470 Taichi, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer 280 AIO, 2x16GB GSKILL NEO 3600MHz C16, EVGA 3080ti FTW3 Ultra
    Samsung 970 EVO 250GB NVMe, Samsung 870 EVO 500GBWin 10 Ent, Enthoo Pro, Seasonic FOCUS Plus 850W

    24/7 Cruncher #3
    GA-P67A-UD4-B3 BIOS F8 mod, 2600k (L051B138) @ 4.5 GHz, 1.260v full load, Arctic Liquid 120, (Boots Win @ 5.6 GHz per Massman binning)
    Samsung Green 4x4GB @2133 C10, EVGA 2080ti FTW3 Hybrid, Samsung 870 EVO 500GB, 2x1TB WD Red RAID1, Win10 Ent, Rosewill Rise, EVGA SuperNOVA 1300W G2

    24/7 Cruncher #4 ... Crucial M225 64GB SSD Donated to Endurance Testing (Died at 968 TB of writes...no that is not a typo!)
    GA-EP45T-UD3LR BIOS F10 modded, Q6600 G0 VID 1.212 (L731B536), 3.6 GHz 9x400 @ 1.312v full load, Zerotherm Zen FZ120
    OCZ 2x2GB DDR3-1600MHz C7, Gigabyte 7950 @1200/1250, Crucial MX100 128GB, 2x1TB WD Red RAID1, Win10 Ent, Centurion 590, XFX PRO650W

    Music System
    SB Server->SB Touch w/Android Tablet as a remote->Denon AVR-X3300W->JBL Studio Series Floorstanding Speakers, JBL LS Center, 2x SVS SB-2000 Subs


  12. #37
    Registered User
    Join Date
    Apr 2010
    Location
    NC
    Posts
    389
    Quote Originally Posted by bluestang View Post
    So what is the consensus on how many WUs to run on a 7950 + 2600K... 10, 12, 16? I'm currently running 10 and averaging 2100+ WUs a day.
    I don't think there is a "best" for everyone. Because of the additional time each takes, going from 10 to 12 to 16 to 24 doesn't seem to make a material difference. Based on what I've seen, averaging a WU every ~40 seconds is about as good as it's ever going to get, which is ~2160 WUs/day. Some days are going to be higher and some are going to be lower because of wingmen and WU variability.
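
    To spell out the arithmetic behind that figure: 86,400 seconds in a day / 40 seconds per WU = 2,160 WUs per day.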

    For the most part, the number of points is going to be fairly consistent. About the only difference will be in the runtime. I'm trying to get to 20 years, so I'm running 16 (2600K) and 24 (980X).

  13. #38
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
    This is mine running 16 instances (after the 28th) @1100 on a 3770K @ 4.8



    I truly do not think there is much difference in output once 8 threads is exceeded. The gains of less switching with fewer threads are offset, for me, by the more permanent separation you get with more threads.

    Incidentally, I have yet to confirm an observation on my 5850: offset WUs complete faster, so more finished WUs a day, BUT they seem to claim fewer points than when running together. So from a points perspective this may be an area worth investigating further.
    Last edited by OldChap; 12-06-2012 at 01:37 PM.


    My Biggest Fear Is When I die, My Wife Sells All My Stuff For What I Told Her I Paid For It.
    79 SB threads and 32 IB Threads across 4 rigs 111 threads Crunching!!

  14. #39
    Xtreme Cruncher
    Join Date
    Mar 2009
    Location
    kingston.ma
    Posts
    2,139
    Quote Originally Posted by OldChap View Post
    Incidentally, I have yet to confirm an observation on my 5850: offset WUs complete faster, so more finished WUs a day, BUT they seem to claim fewer points than when running together. So from a points perspective this may be an area worth investigating further.
    I think this is because once you establish a baseline on runtimes, when your card suddenly takes longer WCG *assumes* you did more work. If that is true, in theory you could OC high and then over the course of time notch the OC down to claim more points. While I am not 100% positive it would work, I am 100% positive it would rub my ethics the wrong way.

  15. #40
    Xtreme Cruncher
    Join Date
    Apr 2007
    Location
    Western Canada
    Posts
    1,004
    one.shot, if only one part of a WU is being processed it takes 36 sec; if the CPU goes to 100% it takes 43 sec or a bit longer. I wish the files checkpointed so they would start again where they stopped, instead of restarting after being paused.

    bluestang, would you please tell me the appropriate app_info entries you are using? You seem to be getting AWESOME results and mine suck.

    pirogue, thanks for sharing your GPU loading, and I like your approach to restarting the WUs. I also appreciate that it's near impossible to get all WUs in perfect order. On both the 7950 and 7970 (both on 2600K CPUs) I find that the CPU can handle 5 WUs (or parts of them) without bogging anything down.

    OC, I can get the WUs on the 7950 separated, but the ones on the 7970 end up bunching up. I'll bump the WUs up to 16 and see if I can get them separated and keep them there, or at least close. Just to be sure: in the app_info file for a 2600K to run 16 tasks I should have <avg_ncpus>.5, <max_ncpus>1.0, <count>.0625, right? If that doesn't work I'll just go back to eight WUs and see where things end up. Unfortunately, with all the playing around I've done on the machines with the 7000-series cards, I have not got consistent data sets. To date my best results were: 7970 = 12/05/2012, 0:012:03:15:24, 968,078, 2,205; and 7950 = 12/04/2012, 0:010:02:22:46, 824,794, 1,876.

    Thank You for all your help guys !!!
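
    For reference, a 16-concurrent setup follows the same layout as bluestang's full app_info below, with <count> set to 1/16. This is only a sketch of the relevant app_version lines under those assumptions, not a tested file:

    <avg_ncpus>0.5</avg_ncpus>
    <max_ncpus>1.0</max_ncpus>
    <coproc>
        <type>ATI</type>
        <count>0.0625</count> <!-- each task claims 1/16 of the GPU, so 16 run at once -->
    </coproc>

    As used throughout this thread, the <count> value is what sets the concurrency: 0.10 gives 10 WUs per GPU, roughly 0.0833 gives 12, and 0.0625 gives 16.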

  16. #41
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    OK, I'm feeling much better about my 10 WUs then. I may try 12 in a few days to see what it brings.

    But since I kicked up the OC on my 7950 again a few days ago, I do see a little improvement.



    EDIT: Here you go Johnmark...
    <app_info>
        <app>
            <name>hcc1</name>
            <user_friendly_name>Help Conquer Cancer</user_friendly_name>
        </app>
        <file_info>
            <name>wcg_hcc1_img_7.05_windows_intelx86__ati_hcc1</name>
            <executable/>
        </file_info>
        <file_info>
            <name>hcckernel.cl.7.05</name>
            <executable/>
        </file_info>
        <app_version>
            <app_name>hcc1</app_name>
            <version_num>705</version_num>
            <platform>windows_intelx86</platform>
            <plan_class>ati_hcc1</plan_class>
            <avg_ncpus>0.80</avg_ncpus>
            <max_ncpus>1.0</max_ncpus>
            <flops>45000000000.000000</flops>
            <coproc>
                <type>ATI</type>
                <count>.10</count>
            </coproc>
            <file_ref>
                <file_name>wcg_hcc1_img_7.05_windows_intelx86__ati_hcc1</file_name>
                <main_program/>
            </file_ref>
            <file_ref>
                <file_name>hcckernel.cl.7.05</file_name>
                <open_name>hcckernel.cl</open_name>
            </file_ref>
        </app_version>
    </app_info>
    Last edited by bluestang; 12-06-2012 at 05:08 PM.


  17. #42
    Xtreme Cruncher
    Join Date
    Apr 2007
    Location
    Western Canada
    Posts
    1,004
    Ah yeah, pretty darn consistent too!! I'd say 10 is giving you excellent results.

    I'll give the current setups a chance to run; hopefully the machines stay running. I have the Asus Matrix 7970 running @ 1200 MHz.

  18. #43
    Xtreme Cruncher
    Join Date
    Dec 2004
    Location
    South Carolina
    Posts
    631
    Hey guys, on the rig with the 990X and three 7950s I am running 24 WUs, 8 per card and .5 CPU per WU, but my CPU stays loaded at 100%. Do you think it would produce better results if I were to drop it to 6 WUs per card and .75 on the CPU? I have no issues with the WUs bunching up; I just start out with all WUs suspended and start 4 WUs every 30 seconds, and they never seem to align.
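
    As a rough sanity check on those numbers (arithmetic only, not a recommendation): 24 WUs x 0.5 CPU each = 12 CPU threads, which is every thread a 6-core/12-thread 990X has, so a CPU pinned at 100% is exactly what that configuration asks for; 18 WUs x 0.75 = 13.5, which on paper asks for even more.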
    Samsung 42" LCD/Antec 1200 Case/Corsair 1000W PS/ Gigabyte GA-X58A-UD7 / Intel I7 990X 3.47 @ 4.5 / 3 x RX360 rad /Apogee Xt /2 x 128gb Patriot Torqx M28's @ Raid 0/ WD 600Gb VelociRaptor / Kingston Hyper X 12Gb (6x2) DDR3 2000/ XFX DD HD 7970


  19. #44
    Xtreme Cruncher
    Join Date
    Apr 2007
    Location
    Western Canada
    Posts
    1,004
    Well, I managed to get decent separation between units, but by morning 6-8 units were in transition, loading up the CPU and dropping the GPU load for a short period. I spent a few hours playing with it last night, and this appears to be how she is going to run with 12 or 16 WUs per GPU.

    sRHunt3r, sorry, no idea about multi-card setups.

  20. #45
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
    Sitting here wondering about the possible advantages/disadvantages of running:

    14 on a 7950 (28 Compute Units, 1792 Stream Processors, 112 Texture Units)

    or

    16 on a 7970 (32 Compute Units, 2048 Stream Processors, 128 Texture Units)

    It is the weekend, so I may look at this a little to see whether making the compute units, stream processors and texture units divisible by the number of instances has any impact at all.
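
    For what it's worth, both splits divide evenly: 28 CUs / 14 = 2 CUs (1792 / 14 = 128 SPs, 112 / 14 = 8 TUs) per instance on the 7950, and 32 CUs / 16 = 2 CUs (2048 / 16 = 128 SPs, 128 / 16 = 8 TUs) per instance on the 7970, so on paper each WU would get an identical slice of hardware either way.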


    My Biggest Fear Is When I die, My Wife Sells All My Stuff For What I Told Her I Paid For It.
    79 SB threads and 32 IB Threads across 4 rigs 111 threads Crunching!!

  21. #46
    Registered User
    Join Date
    Feb 2009
    Posts
    470
    That's what I wondered as well. But didn't one of us (one.shot?) run 14 WUs and find it no different from 16? Anyway, as soon as I am near Sapphire on HCC I will tinker with the thread count once again.


    Tell it it's a :banana::banana::banana::banana::banana: and threaten it with replacement

    D_A on an UPS and life

  22. #47
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Just curious, has anyone hit a million WCG points for a day on their 7950 yet?


  23. #48
    Registered User
    Join Date
    Feb 2009
    Posts
    470
    You should be the closest, with 950k. It should actually be doable with about 1200 on the core.


    Tell it it's a :banana::banana::banana::banana::banana: and threaten it with replacement

    D_A on an UPS and life

  24. #49
    Xtreme Member
    Join Date
    Mar 2010
    Posts
    368
    I feel a little puzzled. If you are able to get nearly a million with an OC'd 7950, you are not far from my level (a little over 1M PPD), but I run a 7970. I should be able to pull more from it. I have to check my overclocks again.

  25. #50
    Xtreme Member
    Join Date
    Mar 2010
    Posts
    368
    The only solution I see is for every GPU thread to have one full CPU thread. For a twelve-core (logical) CPU that would mean 12 WUs at most; I noticed that beyond that the CPU performance loss has too big an impact.
    And indeed the only way is to push the CPU and memory frequencies as much as possible.
    In the past I made some tests to check the performance of hyperthreading. I remember that hyperthreading (running 12 threads on a six-core CPU) improved the overall crunching power by 15-20%. So keeping hyperthreading on and using the 12 logical cores makes sense, but no more than that.
