That is not where mine ended up on any of my installs but I get the picture
My Biggest Fear Is When I die, My Wife Sells All My Stuff For What I Told Her I Paid For It. 79 SB threads and 32 IB threads across 4 rigs = 111 threads Crunching!!
checked and found:
m@M ~ $ ulimit -n
1024
want to increase this to lots more (say 10 times bigger)
Would one of you adepts save me the pain of finding the answer on how to do this
The aim of the game here is to increase the number of concurrent network connections, which I think is covered by this. I expected to find 1024 somewhere in /etc/security/limits.conf, but when I didn't see it I got to thinking... ask one of the guys.
Thanks
Edit: Incidentally this is so that I can efficiently dual purpose a crunching rig
Last edited by OldChap; 10-11-2013 at 10:19 AM.
ulimit -n displays the maximum number of open file descriptors.
I don't think this is the command you're looking for.
Here's an article about it: http://www.itworld.com/operating-sys...-limits-ulimit
Yeah, this has not been an easy find. I decided to trial changes to the limits.conf file just to see what happens. I am basing this on comments HERE
Do you think a reboot is in order? Up until now all I did was log out and back in, then restart WCG etc. Testing cannot commence until around 2 am here, when my internet connection gets quieter.
Rebooting anyway; then we will see if I got this right.
OK added this:
root hard nofile 10240
myname hard nofile 10240
myname soft nofile 10240
After a reboot the new limits show up in ulimit -n and ulimit -a. Time will tell if this is enough.
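A quick way to confirm the new limits took effect after logging back in (a sketch; the 10240 figure assumes the limits.conf entries above applied to your session):

```shell
# soft limit on open file descriptors -- should now report 10240
# if the limits.conf change above applied to this login session
ulimit -n

# hard limit, set by the "hard nofile" line
ulimit -Hn
```

Note that processes started before the change keep the old limit, so restart BOINC/WCG from a fresh login shell.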
Last edited by OldChap; 10-11-2013 at 03:09 PM.
I think the answer from the thread you quoted has what you need. Something to do with the port range and reuse times
Maximum number of connections are impacted by certain limits on both client & server sides, albeit a little differently.
On the client side: increase the ephemeral port range and decrease the fin_timeout.
To find out the default values:
sysctl net.ipv4.ip_local_port_range
sysctl net.ipv4.tcp_fin_timeout
The ephemeral port range defines the maximum number of outbound sockets a host can create from a particular IP address. The fin_timeout defines the minimum time these sockets will stay in TIME_WAIT state (unusable after being used once). Usual system defaults are:
net.ipv4.ip_local_port_range = 32768 61000
net.ipv4.tcp_fin_timeout = 60
This basically means your system cannot guarantee more than (61000 - 32768) / 60 = 470 sockets at any given time. If you are not happy with that, you could begin by increasing the port range. Setting the range to 15000 61000 is pretty common these days. You could further increase availability by decreasing the fin_timeout. Do both and you should see over 1500 outbound connections, more readily.
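The arithmetic above can be checked directly in the shell (the 15000 floor and the 30-second timeout are the suggested tuned values, not defaults):

```shell
# sustainable concurrent outbound sockets ~= (port range size) / TIME_WAIT secs
echo $(( (61000 - 32768) / 60 ))   # defaults: 470
echo $(( (61000 - 15000) / 60 ))   # widened port range only: 766
echo $(( (61000 - 15000) / 30 ))   # widened range + fin_timeout=30: 1533
```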
Added this in my edit:
The above should not be interpreted as the factors impacting the system's capability for making outbound connections per second. Rather, these factors affect the system's ability to handle concurrent connections in a sustainable manner over long periods of activity.
Default sysctl values on a typical Linux box for tcp_tw_recycle and tcp_tw_reuse would be:
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 0
These defaults do not allow reuse of a connection in the wait state, forcing it to last the complete TIME_WAIT cycle. I recommend setting them to:
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
This allows fast cycling of sockets in time_wait state and re-using them. But before you do this change make sure that this does not conflict with the protocols that you would use for the application that needs these sockets.
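A sketch of applying the two settings above. One caution the quoted advice predates: tcp_tw_recycle is known to break clients behind NAT and was removed entirely in Linux 4.12, so on modern kernels only tcp_tw_reuse is available.

```shell
# runtime change, lost on reboot (tcp_tw_recycle assumes a pre-4.12 kernel)
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
sudo sysctl -w net.ipv4.tcp_tw_recycle=1

# to persist across reboots, add the same keys to /etc/sysctl.conf:
#   net.ipv4.tcp_tw_reuse = 1
#   net.ipv4.tcp_tw_recycle = 1
# then reload with:
sudo sysctl -p
```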
On the server side: the net.core.somaxconn value has an important role. It limits the maximum number of requests queued to a listen socket. If you are sure of your server application's capability, bump it up from the default of 128 to something between 128 and 1024. You can then take advantage of this increase by modifying the listen backlog variable in your application's listen call to an equal or higher integer.
The txqueuelen parameter of your Ethernet cards also has a role to play. The default value is 1000; bump it up to 5000 or even more if your system can handle it.
Similarly, bump up the values for net.core.netdev_max_backlog and net.ipv4.tcp_max_syn_backlog. Their default values are 1000 and 1024 respectively.
Now remember to start both your client- and server-side applications after increasing the FD ulimits in the shell.
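The server-side tunables above can be applied in one go. A sketch; the concrete values follow the suggestions in the quoted advice, and "eth0" is a placeholder interface name:

```shell
# listen-queue depth (default 128); 1024 is the upper end suggested above
sudo sysctl -w net.core.somaxconn=1024

# packet backlog and SYN queue (defaults 1000 and 1024 respectively)
sudo sysctl -w net.core.netdev_max_backlog=5000
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=4096

# NIC transmit queue (default 1000); substitute your own interface for eth0
sudo ifconfig eth0 txqueuelen 5000
```

As with the TIME_WAIT settings, sysctl -w changes are lost on reboot; persist them in /etc/sysctl.conf.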
One step at a time; see my post above. Maybe that way I can run down the real reason for a lack of scalability when adding nodes.
Edit: Made no difference.
Found some info and downloaded ethtool. Increased the TX and RX buffers to max, but with my 100Mb connection currently @ 8Mb I must wait.
If this also doesn't help, then port range and re-use is next.
Last edited by OldChap; 10-12-2013 at 08:19 AM.
Memory failure..... Not the machine... Mine
What was the fix for "apt-get install ia32-libs" when that stopped working??
try sudo apt-get install lib32z1
http://www.xtremesystems.org/forums/...=1#post5218990
Would someone please show how to set up remote_hosts.cfg and gui_rpc_auth.cfg from cli linux
Use nano; it should be self-explanatory:
Code:sudo nano remote_hosts.cfg
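For the contents, a sketch of both files from the CLI. The path and service name assume the Debian/Ubuntu boinc-client package, and the password and IP addresses are placeholders:

```shell
# BOINC data directory; /var/lib/boinc-client is the Debian/Ubuntu
# package default -- adjust if your install lives elsewhere
cd /var/lib/boinc-client

# gui_rpc_auth.cfg holds the GUI-RPC password as a single line
# ("mypassword" is a placeholder -- pick your own)
echo 'mypassword' | sudo tee gui_rpc_auth.cfg

# remote_hosts.cfg lists allowed hosts, one name or IP per line;
# lines beginning with # are comments
sudo tee remote_hosts.cfg <<'EOF'
# machines allowed to control this client remotely
192.168.1.10
192.168.1.11
EOF

# restart the client so it re-reads both files
sudo service boinc-client restart
```

You then connect from the remote BOINC Manager using this host's name/IP and the password from gui_rpc_auth.cfg.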
Ha !! Superb !!!
As Guides go that one is up there
Thanks Mick
No problem, happy to help.
Would there be use for a headless, USB-bootable BOINC distro that runs in RAM and syncs settings on startup/exit?
I like large posteriors and I cannot prevaricate
There might. I'm happy with the way my headless machines are set up but I'm sure there are others who would run such a thing.
I played a bit with Fatdog 700, but it was still in alpha and I am unclear how one saves the whole setup once everything is going the way you want it. It runs well though, is very small, and naturally runs 100% in memory.
I figure that once I get it figured out, the stick could be imaged for all to use. I am waiting until it gets out of beta first, though.
I'm thinking about using BitTorrent Sync so all BOINC files are stored online, but I need to be sure it's 100% secure first (maybe use extra encryption too, just to be sure).