Well, you can set up UDMON with preset flags or custom ones for each node. :D:D...
Right. Is there a guide for setting up that second SMP client? Is it already here somewhere, or is there one online? I'd like to incorporate it into this thread. Or are there any volunteers? I'd do it myself, but frankly I'm not sure what I'm talking about.
I have noticed that closing down one client on a dual SMP machine (WinXP32, latest beta) kills the WU on the other SMP. :(
This is the log of the SMP I didn't shut down.
[01:43:38] Writing local files
[01:43:38] Completed 2450000 out of 5000000 steps (49 percent)
[01:54:25] Writing local files
[01:54:25] Completed 2500000 out of 5000000 steps (50 percent)
[02:00:17] CoreStatus = 7B (123)
[02:00:17] Client-core communications error: ERROR 0x7b
[02:00:17] Deleting current work unit & continuing...
[02:00:17] Using generic mpiexec calls
[02:01:51] Killing all core threads
[02:01:51] Killing 2 cores
[02:01:51] Killing core 0
[02:01:51] Killing core 1
Odd, one shouldn't affect the other... are you using the -local flag?
C:\FAH1\Folding@home-Win32-x86.exe -smp -local -verbosity 9
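For anyone following along, a dual-SMP setup on a quad is usually just two copies of the console client in separate directories, each started from its own shortcut or batch file. A rough sketch (directory names are examples only, not a recommendation):
Code:
:: launch-both.bat - start two SMP console clients, one per folder
:: C:\FAH1 and C:\FAH2 are example paths; each folder needs its own client.cfg
start "FAH1" /D C:\FAH1 C:\FAH1\Folding@home-Win32-x86.exe -smp -local -verbosity 9
start "FAH2" /D C:\FAH2 C:\FAH2\Folding@home-Win32-x86.exe -smp -local -verbosity 9
As I understand it, -local keeps each client's config and work files in its own directory, which is what keeps the two instances from stepping on each other.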
I can also add that the chances of me stabbing someone to death are running very high. 50% WU? Gone? Thank God I don't own a cat. And live alone.
Also... it says it's shutting down 2 cores. It's a quad and it's supposed to be running, god dammit, 4 CORES!!! AAAARGGGHGH
A dual SMP machine with the affinity changer means each SMP client is using only 2 of the 4 cores ;)
Don't kill me :sofa:
Maybe someone who has run dual SMPs can chime in on this. I have not.
curious, why don't you use it? I think it is supposed to make dual SMPs work nicer together.
I've run dual SMPs in Windows with and without the affinity changer... you get a pretty nice little boost with AC, plus it's an easy one-click install. No config needed: you just click next, next, done, and the service starts. You never have to fool with it again; it starts with Windows. The only thing it touches in Windows is the fahcore_a1 process... all it really does is sort the fahcore_a1 threads out so the ones using more CPU are actually doing more work. Not sure of the exact details, but I don't think it splits up the cores the way taskset does in Linux on dual SMPs.
http://www.codeplex.com/affinitychan...ReleaseId=9432
Can you tell me what the SMP version does exactly? Does it use both of my cores instead of just one? And for the GPU, should I run the GPU version as well?
Stanford's site has the information you need on what SMP does.
http://folding.stanford.edu/English/FAQ-SMP
...but you're essentially correct. Use the guides on XS for help on setting up SMP+GPU. We're always here to help :up:
SMP is working just fine. As I understand it, I need to run two clients, SMP + the GPU2 tool. I have a few other questions. Since I have SMP + GPU2, should I configure a different machine ID for each of them? Also, how do I pause SMP? GPU2 has a tray icon with an option to pause.
Yes, a different machine ID for each.
SMP can't really be paused; however, you probably won't notice it while playing games. I don't. If you want to stop it, though, just exit it with Ctrl+C and then restart it when you're done.
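If you'd rather set the machine ID by hand instead of re-running the client's configuration prompts, it lives in each console client's client.cfg. Something like this (key names from memory, so double-check against your own file):
Code:
[settings]
username=YourName
team=0
machineid=1
Give the SMP client machineid=1 and the GPU2 client machineid=2 (any two different numbers will do); the GPU2 systray client should expose the same setting in its config dialog.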
Thank you for your help. I just keep coming up with more and more questions. ;-)
How can I measure the speed of the CPU calculations? I'm interested to know what kind of increase I'd get if I overclock. Honestly, with my C2D I can fold for like 16-17 h daily.
FahMon or FahSpy for PPD calculations.
The increase in PPD is generally proportional to the amount (%) of OC.
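For example, taking a chip from 2.4 GHz stock to 3.0 GHz is a 25% overclock, so as a rough rule of thumb you'd expect somewhere around 25% more PPD on the same work units, assuming nothing else (memory, thermals) becomes the bottleneck.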
Hey Mike. I ran AC. It's a neat app, but I didn't notice a huge increase in PPD. I noticed a bit... but not huge. Maybe 100 PPD across 6 machines. However... what I did notice was that on the few SMPs that might have been stalled uploading a WU, with AC running it leaves 2 cores on a quad with no work. Without AC, the working SMP takes over the 2 cores while the other one is uploading (and sometimes I find it stalls completely...). So I'm not going to use AC from now on. Sure, it would be good for guys with great upload speed who can zap a WU to the servers in a few minutes... but not for me.
Yeah, a 1000 PPD across 6 machines isn't what I see... I see about a 1000 PPD increase per Q6600...
You're right though, when one sends it drops the CPU to 50%. I don't usually have freeze issues though...
I did before I started launching both install.bat's, one for each dir (sketched below), back in the old days in Nov. At least that was what seemed to stop the hangs; could have been coincidence. You know how these SMP woes go... sometimes you fix it and have no idea what actually did the trick.
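(For reference, the "both install.bat's" bit just means running the MPICH install.bat that ships with the Windows SMP beta once in each client directory before first launch, something like this, with the paths being examples only:
Code:
cd /d C:\FAH1
call install.bat
cd /d C:\FAH2
call install.bat
)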
Here's my 3 Q6600s now: X4, X5, and X6... (X5 and X6 have 8800GTs folding in them; all are Vista.)
Quote:
Project : 2665
Core : SMP Gromacs
Frames : 100
Credit : 1920
-- X4-smp2-Vista64-Q6600-3420mhz-6.22R3 --
Min. Time / Frame : 16mn 28s - 1679.03 ppd
Avg. Time / Frame : 16mn 42s - 1655.57 ppd
Cur. Time / Frame : 16mn 44s - 1652.27 ppd
R3F. Time / Frame : 16mn 44s - 1652.27 ppd
Eff. Time / Frame : 34mn 13s - 808.03 ppd
-- X4-smp1-Vista64-Q6600-3420mhz-6.22R3 --
Min. Time / Frame : 16mn 03s - 1722.62 ppd
Avg. Time / Frame : 16mn 15s - 1701.42 ppd
Cur. Time / Frame : 16mn 15s - 1701.42 ppd
R3F. Time / Frame : 16mn 15s - 1701.42 ppd
Eff. Time / Frame : 22mn 05s - 1251.98 ppd
-- X5-smp1-Vista64-Q6600-3330mhz-6.22R3 --
Min. Time / Frame : 16mn 36s - 1665.54 ppd
Avg. Time / Frame : 17mn 23s - 1590.49 ppd
Cur. Time / Frame : 17mn 49s - 1551.81 ppd
R3F. Time / Frame : 17mn 49s - 1551.81 ppd
Eff. Time / Frame : 18mn 32s - 1491.80 ppd
-- X5-smp2-Vista64-Q6600-3330mhz-6.22R3 --
Min. Time / Frame : 16mn 00s - 1728.00 ppd
Avg. Time / Frame : 16mn 49s - 1644.08 ppd
Cur. Time / Frame : 17mn 02s - 1623.17 ppd
R3F. Time / Frame : 17mn 02s - 1623.17 ppd
Eff. Time / Frame : 17mn 57s - 1540.28 ppd
-- X6-smp2-Vista64-Q6600-3330mhz-6.22R3 --
Min. Time / Frame : 18mn 38s - 1483.79 ppd
Avg. Time / Frame : 18mn 58s - 1457.72 ppd
Cur. Time / Frame : 18mn 43s - 1477.19 ppd
R3F. Time / Frame : 18mn 43s - 1477.19 ppd
Eff. Time / Frame : 18mn 52s - 1465.44 ppd
-- X6-smp1-Vista64-Q6600-3330mhz-6.22R3 --
Min. Time / Frame : 17mn 09s - 1612.13 ppd
Avg. Time / Frame : 17mn 16s - 1601.24 ppd
Cur. Time / Frame : 17mn 16s - 1601.24 ppd
R3F. Time / Frame : 17mn 16s - 1601.24 ppd
Eff. Time / Frame : 18mn 04s - 1530.33 ppd
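For anyone wondering where FahMon gets those figures: PPD is just the WU credit divided by how long a whole WU takes at that frame time, expressed per day. Taking the X4-smp2 "Min." line as a worked example: 16 min 28 s per frame × 100 frames ≈ 1,647 minutes ≈ 1.144 days per WU, and 1920 credit ÷ 1.144 days ≈ 1679 PPD, which matches the figure shown.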
Strange, Riptide, that app should give you a bigger boost (unless, before, you were manually setting affinities and priorities in taskman all the time).
:confused:
You're right, Marvin. I just went back... it seems the total PPD did not change, but for OTHER reasons. Individually, though, the machines that are working correctly DID change PPD. :)
Lol, glad to hear that :rofl: Really, I am, because either I was nuts for expecting the bigger raise, or I was right and you were wrong. I know sanity isn't one of my virtues lol, but at least I was on the money with this one.
Mind telling me what those other reasons were? I don't run AC, but from what I heard it needs a break-in time to show the change; other than that, there are no other steps that I know about.
*sigh* The other reasons are that clients stall uploading, I get IO errors, I get missing work file errors.... blah blah fcking blah :rolleyes: So that in turn has castrated what should be a >20k PPD setup to a 12K PPD setup actually awarded. I could give a long and glorious, poetic and philosophical rant... but that would not solve anything. :)
Which is only due to your slow upload? Because you're the first I've heard of getting so many issues with AC; just want to try and narrow down the culprit, so to say :)
It's not AC issues. It's SMP issues. For example: when I shut down one SMP on some machines, it kills and destroys the other WU and SMP client.
Here's one, for example...
Can you imagine what a dead WU at 42% does to a highly strung Irishman? :yepp:
Code:
[03:20:09] Completed 205000 out of 500000 steps (41 percent)
[03:28:10] Writing local files
[03:28:10] Completed 210000 out of 500000 steps (42 percent)
[03:29:52] CoreStatus = 7B (123)
[03:29:52] Client-core communications error: ERROR 0x7b
[03:29:52] Deleting current work unit & continuing...
[03:29:52] Using generic mpiexec calls
[03:32:15] - Warning: Could not delete all work unit files (5): Core returned invalid code
[03:32:15] Trying to send all finished work units
Looks like MPICH issues then. I would move the SMPs to Deino and retest AC with the Deino package.
And highly strung :rofl: I won't tell you what I read the first time :rofl: :horse:
Idk, what does it do? Make them eyes go all red and devilish, make steam come out of your ears and lightning out of yer ass? :confused: :up: