Monthly Archives: December 2015
Note to self: http server keepalive timeout

Note: While I found this issue on Apache httpd, it may apply to any http server out there.

HTTP KeepAlive

The “KeepAlive” concept is simple: the browser opens a connection to the server and sends out multiple requests (e.g. for the main page, for stylesheets, javascript includes and images) through a single connection. This effectively reduces the page load time, providing – or at least that’s what the theory says – a better customer experience. All good from this perspective.

Server Configuration

The Apache httpd server controls this feature through 3 configuration directives, all of them in the core module:

  • KeepAlive: on/off, default on;

  • KeepAliveTimeout: seconds, default 5;

  • MaxKeepAliveRequests: positive number, default 100.

There is no setting in Apache httpd that mixes KeepAlive and non-KeepAlive behavior at the same time (e.g. serving up to 100 requests over KeepAlive connections and treating anything above that differently). One must choose the server behavior from the very beginning.

The Traffic

Now it’s math time. Starting from the MaxClients value (default 256), what is the request rate (new clients / second) that can be served without compromising the user experience? “MaxClients per second,” you may say, but let’s not draw conclusions too fast. There are some issues to consider:

  • The time between opening the connection and the KeepAliveTimeout expiration. On a default configuration, at the very least 5 seconds, but on a more typical side, maybe 7-10 seconds;

  • Traffic fluctuation (spikes);

  • Internal Apache httpd time (spawning new processes to handle new connections, etc).

It’s getting complicated. But assuming a typical 7 seconds of occupancy for every process that serves a client and a uniform behavior (all or almost all clients keep a server process busy or waiting for data for 7 seconds), on the default setting of 256 MaxClients, the uniform traffic rate that can be served is 256/7 ≈ 36 (new) requests / second. Any spike larger than that will cause page loading delays and a poor user experience.
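The arithmetic above fits in a few lines of Python; note the 7-second hold time is the working assumption from the text, not an Apache figure:

```python
# Back-of-the-envelope capacity estimate for a KeepAlive-enabled server.
max_clients = 256   # Apache MaxClients default
hold_time = 7       # assumed seconds each client occupies one worker

sustained_rate = max_clients / hold_time  # new clients per second
print(int(sustained_rate))                # -> 36
```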

Better Planning

If disabling KeepAlive is not an option, all planning should start from the worst expected spike that has to be handled (e.g. 2x or 3x the average during the busiest period). For a 50 req/s busy-period average, the server should be able to handle 100 or even 150 req/s without compromising the user experience. If the configuration is already maxed out from the hardware point of view, then other solutions should be looked into (multiple servers, load balancers; I won’t dwell on that here).

Assuming a maximum rate of 150 req/s, with a KeepAliveTimeout at the 5 seconds default, one may need to adjust MaxClients (and ServerLimit if prefork mpm is used) to somewhere in the area of 1,000.
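The reverse calculation can be sketched the same way; the 30% headroom factor below is an illustrative assumption on my part, not an Apache setting:

```python
# Estimate the MaxClients needed for a given worst-case spike.
peak_rate = 150   # worst expected spike, new clients per second
hold_time = 5     # KeepAliveTimeout default, in seconds
headroom = 1.3    # assumed safety margin for process churn, spikes

needed = round(peak_rate * hold_time * headroom)
print(needed)  # 975, i.e. "somewhere in the area of 1,000"
```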

Conclusion

Don’t leave the defaults on; always start the parameter calculation from the desired outcome. On suboptimal hardware (memory-wise), disabling KeepAlive is the way to go to squeeze out a bit more performance. Oh, and don’t assume the hardware is capable of keeping up with your calculations; turn any failure or mis-planning into a learning opportunity.

That’s it for today, have fun!

Python Cheat Sheet: Lists

Lists are linear sequences that provide constant-time access by index (lookup by value is a linear search). They can be resized, searched, sorted (optionally with a custom compare function) and are not restricted to a single data type (e.g. you can define lists with mixed data). Lists in Python are 0-indexed.

Defining a pre-initialized list with 10 numeric values (all zeros):
A = [0] * 10
Defining a pre-initialized list with 10 numeric values (powers of 2):
A = [ 2**i for i in xrange(0, 10) ]

Note 1: xrange above returns values from 0 to 9 inclusive.

Note 2: this syntax is known as a list comprehension.

Iterating through all the elements (read only):
A = [ 0, 'a', {'b':'c'} ]
for e in A:
	print e
Iterating through all the elements (read/write):
A = [ 0, 'a', {'b':'c'} ]
for i in xrange(len(A)):
	print i, A[i]
Adding new elements at the end of the list:
A = [ 0, 1, 2, 3 ]
A += [ 4 ]

Note: there is an append method that can also be used for this purpose.
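For completeness, the same addition written with append:

```python
A = [0, 1, 2, 3]
A.append(4)   # equivalent to A += [4] for a single element
print(A)      # [0, 1, 2, 3, 4]
```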

Insert new elements:
A = [ 0, 1, 2, 3 ]
A.insert(0, -1)	#position, value

Note: the insert above puts a new element at the front of the list.

Find elements in the list:
A = [ 0, 1, 2, 3 ]
A.index(2)

Note: index does a linear search for the element with the value provided. A ValueError exception is raised if the element cannot be found.
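A defensive lookup that catches the exception could look like this (the -1 sentinel is just an illustrative convention):

```python
A = [0, 1, 2, 3]
try:
    pos = A.index(5)   # 5 is not in the list
except ValueError:
    pos = -1           # sentinel value for "not found"
print(pos)             # -1
```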

Remove elements from the list:
A = [ 0, 1, 2, 3 ]
A.remove(2)		#by value
del A[1]		#by index
Using a Python List as a Stack:
A = [ ]
A.append(1)
A.append(2)			#always add elements at the end
stacktop = A.pop()	#returns 2, the last element added

Note: pop raises an IndexError exception if the list is empty.

Using a Python List as a Queue:
A = [ ]
A.insert(0, 1)
A.insert(0, 2)	#always insert at the beginning of the list
elem = A.pop()	#returns 1, the first element added

Note: for an optimized implementation for both Stacks and Queues you may want to look at the collections.deque data structure.
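As a quick sketch of that alternative (append, pop and popleft are all genuine collections.deque methods):

```python
from collections import deque

# Queue: O(1) at both ends, unlike list.insert(0, x) which is O(n).
Q = deque()
Q.append(1)          # enqueue at the right end
Q.append(2)
first = Q.popleft()  # dequeue from the left end; returns 1

# Stack: same structure, push and pop at the same end.
S = deque()
S.append('a')        # push
S.append('b')
top = S.pop()        # pop; returns 'b'
```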

That’s it for today, have fun!


Setting up a secure Linux server

There is no such thing as a completely secure server: as long as you provide public access to services running on a server, there is a risk that somebody at some point is going to try something like a privilege escalation or denial of service attack. What one can do is minimize the chance of success of such an attack, or at least minimize the damage.

I am not going to provide here some “high tech” security mechanisms but rather some “common sense” ones; such measures will most likely prevent speculative attackers or bots from doing their stuff. Let’s start with the first trick from the book:

1. Set up iptables

One may think: why set the firewall up? If I provide 3 services to the world and those are the only ones with listening sockets, why would I need a completely configured firewall?

The answer is: the firewall is always necessary. A “deny all + exceptions” policy will render useless any rogue service that an attacker may start through a privilege escalation attack.

On a RedHat (CentOS) system one will find the firewall configuration file as /etc/sysconfig/iptables. A typical restrictive configuration might be:

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m multiport --dports 22,80,443 -j ACCEPT
COMMIT

The configuration above allows new tcp traffic (new connections) to ports 22 (ssh), 80 (http) and 443 (https), while allowing all outgoing traffic and also the responses received in relation to such traffic. It also allows icmp (ping), but denies everything else.
