Quote Originally Posted by OldChap
checked and found:

m@M ~ $ ulimit -n
1024

want to increase this to lots more (say 10 times bigger)

Would one of you adepts save me the pain of finding the answer on how to do this?

The aim of the game here is to increase the number of concurrent network connections, which I think is covered by this. I expected to find 1024 somewhere in /etc/security/limits.conf, but when I didn't see it I got to thinking... ask one of the guys.

Thanks

Edit: Incidentally this is so that I can efficiently dual purpose a crunching rig
I think the answer from the thread you quoted has what you need. It comes down to the ephemeral port range and how quickly used sockets can be reused.

The maximum number of connections is affected by certain limits on both the client and server sides, albeit a little differently.

On the client side: increase the ephemeral port range and decrease the fin_timeout.

To find out the default values:

sysctl net.ipv4.ip_local_port_range
sysctl net.ipv4.tcp_fin_timeout
The ephemeral port range defines the maximum number of outbound sockets a host can create from a particular IP address. The fin_timeout defines the minimum time these sockets will stay in TIME_WAIT state (unusable after being used once). Usual system defaults are:

net.ipv4.ip_local_port_range = 32768 61000
net.ipv4.tcp_fin_timeout = 60
This basically means the system cannot sustain more than roughly (61000 - 32768) / 60 ≈ 470 new outbound connections per second over a long run, because each used port sits in TIME_WAIT for the full fin_timeout. If you are not happy with that, you could begin by increasing the port range. Setting the range to 15000 61000 is pretty common these days. You could further increase availability by decreasing the fin_timeout. If you do both, you should be able to sustain well over 1500 outbound connections per second.
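
If you decide to change these, a rough sketch of how it is usually done (the 15000 61000 range and a fin_timeout of 30 are only example values, not requirements):

# apply at runtime, as root; these settings are lost on reboot
sysctl -w net.ipv4.ip_local_port_range="15000 61000"
sysctl -w net.ipv4.tcp_fin_timeout=30

# to persist, add the same two settings to /etc/sysctl.conf and reload
sysctl -p

# rough sustained-rate estimate with these example values:
# (61000 - 15000) / 30 ≈ 1533 outbound connections per second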

Added this in my edit:

The above should not be read as the factors limiting how many outbound connections the system can open per second in a burst; rather, these factors affect the system's ability to handle concurrent connections in a sustainable manner over long periods of activity.

Default sysctl values on a typical Linux box for tcp_tw_recycle and tcp_tw_reuse would be:

net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 0
These do not allow a socket to be reused while it is in TIME_WAIT state, forcing it to sit out the complete time_wait cycle. I recommend setting them to:

net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
This allows fast cycling of sockets in TIME_WAIT state and re-using them. But before you make this change, make sure it does not conflict with the protocols used by the application that needs these sockets.
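
A sketch of applying the reuse setting at runtime, with one caveat worth flagging: tcp_tw_recycle is known to break connections from clients behind NAT and was removed entirely in Linux 4.12, so on recent kernels only tcp_tw_reuse is available:

# allow reuse of sockets in TIME_WAIT for new outbound connections
sysctl -w net.ipv4.tcp_tw_reuse=1

# legacy kernels only; skip this if your peers may sit behind NAT
# sysctl -w net.ipv4.tcp_tw_recycle=1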

On the server side: the net.core.somaxconn value has an important role. It limits the maximum number of requests queued to a listen socket. If you are sure of your server application's capability, bump it up from the default of 128 to something like 1024. Then take advantage of the increase by raising the backlog argument in your application's listen() call to an equal or higher value; an illustration follows below.
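
As an illustration (1024 is just an example figure; match it to what your application actually passes to listen()):

# raise the accept-queue ceiling for listening sockets
sysctl -w net.core.somaxconn=1024

# the kernel caps the backlog at somaxconn, so the application's
# listen() call must also request 1024 (or more) to benefit

# inspect listening sockets; for a listener, the Send-Q column shows its backlog
ss -lnt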

The txqueuelen parameter of your Ethernet interfaces also has a role to play. The default value is 1000, so bump it up to 5000 or even more if your system can handle it.
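
For example, assuming the interface is named eth0 (substitute your own device name from ip link):

# show the current queue length for eth0
ip link show eth0

# raise it to 5000 (example value)
ip link set dev eth0 txqueuelen 5000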

Similarly, bump up the values for net.core.netdev_max_backlog and net.ipv4.tcp_max_syn_backlog. Their default values are 1000 and 1024 respectively.
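
Along the same lines, a sketch with illustrative values only (tune them to your traffic):

# queue of received packets waiting to be processed by the kernel
sysctl -w net.core.netdev_max_backlog=5000

# queue of half-open connections awaiting the final ACK
sysctl -w net.ipv4.tcp_max_syn_backlog=4096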

Finally, remember to start both your client- and server-side applications after raising the FD ulimits in the shell.
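
Which brings it back to the original ulimit -n question. A sketch of the usual approach, assuming the user is m (as in the prompt above) and a target of roughly ten times the current 1024:

In the shell, before launching the application (this only raises the soft limit for that session, up to the hard limit):

ulimit -n 10240

To make it permanent, add lines like these to /etc/security/limits.conf and log out and back in:

m    soft    nofile    10240
m    hard    nofile    10240

Then check with ulimit -n in a new shell.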