
Thread: Badge hunting for both hpf2 and gfam. efficiency drive.

  1. #226
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    My hosts were overloaded with GFAM tasks which I couldn't finish on time, so I have aborted about 150 WUs. Now I should maintain enough GFAM WUs to finish the target with my own hosts and can start building a cache for HPF2. Current run time returned is 32-36 days/day, so I believe the GFAM target could be reached on Sunday.
    Will monitor how the situation evolves and adjust accordingly...

    EDIT: OC, I posted this before I read your previous post, so that was my thought too. Currently my rigs hold ~6 days of GFAM work, the rest is committed to HPF2; they are all set to buffer 10 days. I'll drop more GFAM WUs the closer I am to the target and the more I'm sure I can reach it, so the HPF2 WUs fill up before they dry out.

    EDIT2: I have also disabled GFAM in my profile to avoid downloading further WUs (yep, I still got several resends), since I believe there's enough work buffered to reach the target. All machines on my account currently have 1200 GFAM WUs buffered! That's plenty for the ~100 more days needed.

    EDIT3: The current HPF2 buffer among all machines is ~1000 WUs and going up...

    EDIT4: I think all helpers can abort most GFAM WUs in cache and leave ~2-3 days of GFAM work in their caches, so the HPF2 WUs get loaded. Then continuously reduce the GFAM buffers depending on more precise status. This way GFAM will be finished ASAP and a full switch to HPF2 can be made, but it all depends on how soon HPF2 work will run out... Due to the nature of HPF2, I believe there might be a lot of resends that we can catch even after the end date...
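The buffer arithmetic behind this plan can be sketched roughly in Python. The ~4 h/WU average is an assumption based on runtimes mentioned later in the thread, and the helper names are mine, not anything from BOINC:

```python
# Converting between "WUs buffered" and "days of work" for one host.
# hours_per_wu ~4 h is an assumed average, not a measured figure.

def wus_for_days(days: float, cores: int, hours_per_wu: float = 4.0) -> int:
    """Number of WUs a host with `cores` threads chews through in `days`."""
    return round(days * 24 * cores / hours_per_wu)

def days_for_wus(wus: int, cores: int, hours_per_wu: float = 4.0) -> float:
    """Wall-clock days needed to clear `wus` WUs on `cores` threads."""
    return wus * hours_per_wu / (24 * cores)

# e.g. a 2.5-day GFAM reserve on an 8-core rig:
print(wus_for_days(2.5, cores=8))              # 120 WUs
# and how long 1200 buffered WUs last across 20 cores:
print(days_for_wus(1200, cores=20))            # 10.0 days
```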
    Last edited by Mumak; 06-07-2013 at 12:40 AM.

  2. #227
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    GFAM update: 1:272
    HPF2 status unchanged - all results returned are still PV

  3. #228
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    With the large caches of WUs we need to set, we are almost guaranteed to be running in high priority, right? Or am I the only one having this issue?

    My cache settings are now set for 8 days minimum + 2 days max additional.

    How does running in high priority affect getting new work?
    24/7 Cruncher #1
    Crosshair VII Hero, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer II 420 AIO, 4x8GB GSKILL 3600MHz C15, ASUS TUF 3090 OC
    Samsung 980 1TB NVMe, Samsung 870 QVO 1TB, 2x10TB WD Red RAID1, Win 10 Pro, Enthoo Luxe TG, EVGA SuperNOVA 1200W P2

    24/7 Cruncher #2
    ASRock X470 Taichi, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer 280 AIO, 2x16GB GSKILL NEO 3600MHz C16, EVGA 3080ti FTW3 Ultra
    Samsung 970 EVO 250GB NVMe, Samsung 870 EVO 500GBWin 10 Ent, Enthoo Pro, Seasonic FOCUS Plus 850W

    24/7 Cruncher #3
    GA-P67A-UD4-B3 BIOS F8 mod, 2600k (L051B138) @ 4.5 GHz, 1.260v full load, Arctic Liquid 120, (Boots Win @ 5.6 GHz per Massman binning)
    Samsung Green 4x4GB @2133 C10, EVGA 2080ti FTW3 Hybrid, Samsung 870 EVO 500GB, 2x1TB WD Red RAID1, Win10 Ent, Rosewill Rise, EVGA SuperNOVA 1300W G2

    24/7 Cruncher #4 ... Crucial M225 64GB SSD Donated to Endurance Testing (Died at 968 TB of writes...no that is not a typo!)
    GA-EP45T-UD3LR BIOS F10 modded, Q6600 G0 VID 1.212 (L731B536), 3.6 GHz 9x400 @ 1.312v full load, Zerotherm Zen FZ120
    OCZ 2x2GB DDR3-1600MHz C7, Gigabyte 7950 @1200/1250, Crucial MX100 128GB, 2x1TB WD Red RAID1, Win10 Ent, Centurion 590, XFX PRO650W

    Music System
    SB Server->SB Touch w/Android Tablet as a remote->Denon AVR-X3300W->JBL Studio Series Floorstanding Speakers, JBL LS Center, 2x SVS SB-2000 Subs


  4. #229
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    I don't seem to be having any problems running tasks in High Priority - I'm still getting new WUs.

  5. #230
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Ok, TestPC is now downloading more work. I was able to get two batches of 30 WUs.

    Still says quota 19 days, but I now have 84 WUs in cache @ 3.75 hrs each, so about 13 days of runtime / 6.5 days of work.
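That cache-to-days conversion, spelled out (TestPC is a 2-core host per the core list later in the thread; the 3.75 h/WU figure is bluestang's own estimate):

```python
# Sanity-check of the cache figures above.
wus, hours_per_wu, cores = 84, 3.75, 2
runtime_days = wus * hours_per_wu / 24   # total CPU runtime banked
wallclock_days = runtime_days / cores    # how long the cache lasts on 2 cores
print(runtime_days, wallclock_days)      # 13.125 6.5625
```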

  6. #231
    Xtreme Member
    Join Date
    Jul 2012
    Posts
    219
    Bocksie is having trouble receiving new WUs. It's only looking for HPF2 tasks but comes back with 0.
    Fresh non-GUI Debian (?), 7.0.27. Is this an issue of being a new machine? Or do I have some settings wrong? (Should be default BOINC settings.)
    Richland 6790K @ 4.713 Ghz / 2208 NB / 1123 gpu / 2304 Ram [96 Bclk]
    F2A85-M Pro, Mushkin Black 2133, iGPU (8760D)
    9.7L case (excluding 230mm fan) or 11.6L w/2nd rad fan

  7. #232
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    Carrissa machine seems to return lots of errors...

  8. #233
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    Quote Originally Posted by Yeroon View Post
    Bocksie is having trouble receiving new WUs. It's only looking for HPF2 tasks but comes back with 0.
    Fresh non-GUI Debian (?), 7.0.27. Is this an issue of being a new machine? Or do I have some settings wrong? (Should be default BOINC settings.)
    The default profile is currently set to get HPF2 tasks only. Bocksie is the machine name? Did it get no tasks at all, and if so, for how long? Maybe check the Event Log for more details.

  9. #234
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Quote Originally Posted by Mumak View Post
    Carrissa machine seems to return lots of errors...
    It did on my account as well, along with Snow Crash's rigs and my 2 rigs. I want to say we probably had 200+ WUs error out in our quest.

    They do it right away though, so no worries. Except it might affect resends.

  10. #235
    Xtreme Member
    Join Date
    Jul 2012
    Posts
    219
    It has not received any WUs at all since it fired up late last night.
    07-Jun-2013 10:25:44 [World Community Grid] update requested by user
    07-Jun-2013 10:25:44 [World Community Grid] Sending scheduler request: Requested by user.
    07-Jun-2013 10:25:44 [World Community Grid] Requesting new tasks for CPU
    07-Jun-2013 10:25:47 [World Community Grid] Scheduler request completed: got 0 new tasks
    07-Jun-2013 10:25:47 [World Community Grid] No tasks sent
    07-Jun-2013 10:25:47 [World Community Grid] No tasks are available for Human Proteome Folding - Phase 2
    07-Jun-2013 10:25:47 [World Community Grid] No tasks are available for the applications you have selected.

  11. #236
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    There should be HPF2 tasks available - I got the last one about 20 mins ago. I have no idea why that is; maybe others might know?

  12. #237
    Xtreme Member
    Join Date
    Jul 2012
    Posts
    219
    Swapped weak keys with my account; it downloads new tasks from other projects, so I switched it back to Mumak's so it can download new WUs if available. None yet though. Does your "default" profile happen to have a core count limit that is possible to reach with this many crunchers? My only other guess.
    Kyalami got 5 tasks when requested, to see if new tasks were available, but through yojimbo's account.

  13. #238
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    There's no such limit set, all the other machines are getting tasks successfully.

  14. #239
    Xtreme Member
    Join Date
    Jul 2012
    Posts
    219
    Yeah, my bad on the cores thing - it's per machine, so I doubt that could cause my problem. Looks like I'll keep updating and see if I pick any WUs up, unless someone's got a better idea.

  15. #240
    Xtreme Member
    Join Date
    May 2007
    Location
    The Netherlands
    Posts
    935
    Could it be because you are running Linux? Not sure if there is a difference in WUs between Linux and Windows on this project.

  16. #241
    Xtreme Member
    Join Date
    Jul 2012
    Posts
    219
    Don't think that's the issue, as both Kyalami and Ra are running Linux, and I know at least one of them is able to receive WUs (probably both, though).

  17. #242
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
    Quote Originally Posted by Yeroon View Post
    Bocksie is having trouble receiving new WUs. It's only looking for HPF2 tasks but comes back with 0.
    Fresh non-GUI Debian (?), 7.0.27. Is this an issue of being a new machine? Or do I have some settings wrong? (Should be default BOINC settings.)
    When I started up this Linux rig I used an old SSD that already had Mint 14 Cinnamon installed, then:
    apt-get update
    apt-get dist-upgrade
    then
    got 7.0.65 like this: http://www.xtremesystems.org/forums/...=1#post5024891 (don't forget to change all instances of the BOINC version)

    Attached to WCG, then immediately

    detached using the CLI like this:
    cd to BOINCFOLDER, then:
    boinccmd --project http://www.worldcommunitygrid.org detach
    boinccmd --project_attach http://www.worldcommunitygrid.org 829514_0d906f88e81e7441291e0a6b5bccca65

    The above weak key is Mumak's from post 11 of this thread.

    Once it had downloaded the basics it pretty much started work straight away.

    Not sure if any of this will help, but...

    You may want to check Tools > Preferences > CPU and network settings just to be sure.

    EDIT:
    The rig in question has 180-200 WUs a day added for each of the deadline days (14th, 15th and 16th) and so far today has around 150 for the 17th, so it still seems to be getting work fine.
    Last edited by OldChap; 06-07-2013 at 10:01 AM.


    My Biggest Fear Is When I die, My Wife Sells All My Stuff For What I Told Her I Paid For It.
    79 SB threads and 32 IB Threads across 4 rigs 111 threads Crunching!!

  18. #243
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
    Quote Originally Posted by 0ne.shot View Post
    Update #8

    Okay, currently I'm at: GO Fight Against Malaria 1:242:00:20:29


    05/31/2013 0:044:22:21:28

    06/01/2013 0:050:11:18:21

    06/02/2013 0:054:19:19:03

    06/03/2013 0:060:13:08:32

    06/04/2013 0:053:16:57:37

    06/05/2013 0:064:11:41:05

    06/06/2013 0:072:13:45:45

    06/07/2013 0:000:15:17:33

    I have 102 PV WUs right now, so (102 tasks * ~4 hours) / 24 = 17 days. 123 days left - 17 = 106 more days; with ~60 days/day of compute time, that's around 2 more days needed on GFAM for Sapphire. 123 * 6 WUs per day = 738 WUs (4 hr/WU), or 123 * ~6.8 WUs per day = 835 WUs (3.5 hr/WU), needed to reach the goal. Of course I'm assuming the runtimes are in the 3.5-4 hr/WU range.

    I currently have 1094 GFAM WUs in queue among all devices. We need between 738 and 835 WUs, so we have a surplus of between 259 and 356 WUs.

    I'm at HPF2 1:026:09:01:13. There are 3243 HPF2 WUs in queue among all rigs. I have 275 HPF2 PVs. I'm guessing an average of 5 hours per WU. 275 PVs * 5 hours / 24 = 57 days. This places us at 282 more days needed for Sapphire on HPF2. The WUs in queue give us 3243 * 5 / 24 = 675 days in queue for HPF2.

    I've been working a lot, so the updates haven't been as frequent. I hope this helps clear things up as to where we're at.
    Excellent post, One.shot!!! It contains ALL the pertinent information and calculations needed to show me that we don't need to do anything else for you... and more especially that we don't have to worry about having enough HPF2 work.

    Very happy with this.
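For anyone wanting to double-check, 0ne.shot's arithmetic above reproduces cleanly (using his own assumed runtimes of ~4 h per GFAM WU and ~5 h per HPF2 WU):

```python
# Reproducing 0ne.shot's badge arithmetic from the quoted post.
gfam_days_needed = 123                     # runtime-days still needed for GFAM
pv_days = 102 * 4 / 24                     # 102 PV WUs at ~4 h each
wus_needed_4h = gfam_days_needed * 24 // 4 # WUs needed at 4 h/WU
surplus_4h = 1094 - wus_needed_4h          # spare WUs in the GFAM queue

hpf2_pv_days = 275 * 5 / 24                # HPF2 PVs pending validation
hpf2_queue_days = 3243 * 5 / 24            # runtime-days of HPF2 banked
print(pv_days, wus_needed_4h, surplus_4h, int(hpf2_queue_days))
# 17.0 738 356 675
```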



  19. #244
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
    Originally Posted by yojimbo197
    Human Proteome Folding - Phase 2 run time: 1:046:18:58:46

    319 days, 6 hours still needed

    I have my 3 rigs(10 cores) plus:
    Aco2-508 8 cores PecosRiverM
    Bco2-508 8 cores PecosRiverM <<< Thanks
    a 12 cores - Fallwind
    francine 8 cores - jeanguy2 <<< will be moving
    ginette 12 cores - jeanguy2 <<< will be moving
    Kyalami 3 cores - Yeroon
    Home-PC 8 cores - Bluestang
    cordell 4 cores - Bluestang
    Not sure if you cannot post here, PecosRiverM, or if we have 2 different names for the same guy, but whichever... thank you from me. My email is listed on WCG if you want to clarify.

    yojimbo: You should have 46 cores on this, which gives you a maximum of around 550 days with a 10-day cache + 2 days running; ~460 days with an 8-day cache, so all is looking great.

    How many WUs in cache overall? I would be looking for about 1900 to be really sure.
    Last edited by OldChap; 06-07-2013 at 10:25 AM.



  20. #245
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
    Quote Originally Posted by Mumak View Post
    My hosts were overloaded with GFAM tasks which I couldn't finish on time, so I have aborted about 150 WUs. Now I should maintain enough GFAM WUs to finish the target with my own hosts and can start building a cache for HPF2. Current run time returned is 32-36 days/day, so I believe the GFAM target could be reached on Sunday.
    Will monitor how the situation evolves and adjust accordingly...

    EDIT: OC, I posted this before I read your previous post, so that was my thought too. Currently my rigs hold ~6 days of GFAM work, the rest is committed to HPF2; they are all set to buffer 10 days. I'll drop more GFAM WUs the closer I am to the target and the more I'm sure I can reach it, so the HPF2 WUs fill up before they dry out.

    EDIT2: I have also disabled GFAM in my profile to avoid downloading further WUs (yep, I still got several resends), since I believe there's enough work buffered to reach the target. All machines on my account currently have 1200 GFAM WUs buffered! That's plenty for the ~100 more days needed.

    EDIT3: The current HPF2 buffer among all machines is ~1000 WUs and going up...

    EDIT4: I think all helpers can abort most GFAM WUs in cache and leave ~2-3 days of GFAM work in their caches, so the HPF2 WUs get loaded. Then continuously reduce the GFAM buffers depending on more precise status. This way GFAM will be finished ASAP and a full switch to HPF2 can be made, but it all depends on how soon HPF2 work will run out... Due to the nature of HPF2, I believe there might be a lot of resends that we can catch even after the end date...
    I only dumped about 25 WUs. We need to be sure not to get this bit wrong as much as we need to get HPF2 work. I feel we have another 24 hours before we REALLY need to know how much work is already in cache for HPF2, so no more dumping please, at least not yet.

    Take a look at One.shot's post and maybe give us as much of that sort of info as you possibly can for the next couple of days. The total of WUs in HPF2 caches interests me greatly.

    I will try to get number of cores info updated by this time tomorrow.



  21. #246
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    I have updated the list of helper machines here: http://www.xtremesystems.org/forums/...=1#post5192178

    As of now, there are 90 GFAM PVs; very roughly that should give ~15 days of work, so let's take 10 to be safe.
    94 days of work are needed for the GFAM target; minus PVs = 84 days needed.
    Current rate is ~30 days of work/day

    Currently there are 2530 HPF2 tasks in caches among all machines including mine.

  22. #247
    Xtreme Legend
    Join Date
    Mar 2008
    Location
    Plymouth (UK)
    Posts
    5,279
    So that's already 420 days @ 4 hours average per WU.

    I am still running GFAM, so I have no point of reference for the HPF2 average runtime, but I'm thinking 4 hours is a bit on the low side.

    The very least you will get is emerald

    save me the pain guys... EDITED

    simracingmedia2 - Rob - 8 cores
    Kyalami - Yeroon - 3 cores
    Quad - OC - 4 cores

    TestPC - bluestang - 2 cores
    ginette - jeanguy2 - 12 cores
    francine - jeanguy2 - 8 cores
    d -fallwind - 8 cores
    e -fallwind - 8 cores
    f -fallwind - 8 cores
    Carrissa - fallwind - 12 cores
    ra - Yeroon's bro - 6 cores (8 really but not full time)
    MainToy - PecosRiverM - 8 cores
    Daughter - PecosRiverM - 4 cores
    another - PecosRiverM - 4 cores

    ...am I right so far? 91 cores + another machine, and was it 20 cores of Mumak's own?

    35 cores running GFAM now, able to get 7 days of cache, would mean something in the order of 245 days.

    64 cores able to run now and get an 8-day cache too could be 640 days.

    Mumak: that cache of yours could be growing pretty damn fast... I predict >5000 WUs in total.

    2 years @ 4 hours per WU needs 4380 WUs evenly spread in time across all the rigs.
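The estimates above follow from the same simple model (cores × days of cache, plus ~2 days of work in flight where counted; the ~4 h/WU figure is still an assumption at this point in the thread):

```python
# Spelling out OldChap's cache estimates.
def cache_days(cores: int, buffer_days: int, running_days: int = 2) -> int:
    """Runtime-days held by `cores` threads with a given buffer."""
    return cores * (buffer_days + running_days)

print(cache_days(35, 7, 0))   # 245 days for the 35 cores still on GFAM
print(cache_days(64, 8))      # 640 days for the 64 HPF2-ready cores
print(2 * 365 * 24 // 4)      # 4380 WUs for two years at 4 h/WU
```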
    Last edited by OldChap; 06-07-2013 at 02:39 PM.



  23. #248
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    TestPC - bluestang - 2 cores

    Carrissa - fallwind - 12 cores
    Last edited by bluestang; 06-07-2013 at 11:33 AM.

  24. #249
    Xtreme Cruncher
    Join Date
    Jun 2007
    Location
    SK, Canada
    Posts
    836
    Carrissa=12 cores. d, e and f are 8 cores each. All have been set to 7 days cache.
    i7 3970X @ 4500MHz 1.28v
    Asus Rampage IV Extreme
    4x4GB Corsair Dominator GT 2133MHz 9-11-10-27
    Gigabyte Windforce 7970 OC 3-way Crossfire
    Windows 7 Ultimate x64
    HK 3.0-MCP655-Phobya 400mm rad
    Corsair AX1200i
    Sandisk Exrtreme 240GB
    3x2TB WD Greens for storage
    TT Armor VA8003SWA





  25. #250
    HWiNFO Author
    Join Date
    Apr 2006
    Location
    /dev/null
    Posts
    801
    @OC: yes, I have 20 cores.
    I have checked the returned HPF2 results and it seems the run time ranges between ~3 and ~5.7 hours. If you need a more exact mean value based on results returned so far, I can calc that...
    The current total HPF2 cache is 3165 WUs and I'm sure it will grow...

