Category Archives: Configuration
A small introduction to Chef


If you know what Chef is, you may skip this one – or stay for a (hopefully) interesting read. Either way, Chef is a tool for automating node configuration – e.g. installed software and its configuration files, system configuration files, NFS mounts and so on.

When interacting with Chef, one quickly realizes that there are three main components involved:

  • The Chef Server, which is actually just a big data repository.
  • The Chef Client, which is installed and runs on endpoints and does all the “dirty work” such as changing files and installing packages. The client authenticates itself to the server by a public/private key mechanism.
  • The Knife: this is the tool sysadmins use to actually do work with Chef.

What is stored in the repository known as Chef Server?

  • Cookbooks;
  • Data Bags;
  • Environment and Node data.

A Cookbook is a small project written in Ruby. This project contains files for:

  • Recipes – individual files that contain rules to be applied on clients (e.g. install an RPM package, deploy a config file from a template). Each recipe is plain Ruby code using constructs (calls to libraries) provided by Chef; see the short recipe sketch after this list.
  • Attributes – or, better phrased, default attributes (e.g. port numbers, sizes, paths etc).
  • Templates – templatized configuration files in which attributes are replaced by template variables, initialized in the recipe from default or user-provided values.
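
To make this concrete, here is a minimal recipe sketch – the package, file and attribute names are made up for illustration, not taken from a real cookbook. It installs a package, renders a config file from a template and keeps a service enabled and running:

package 'ntp'

template '/etc/ntp.conf' do
  source 'ntp.conf.erb'
  # template variables filled from attribute defaults (or overrides)
  variables(servers: node['ntp']['servers'])
  notifies :restart, 'service[ntpd]'
end

service 'ntpd' do
  action [:enable, :start]
end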

These small projects are versioned and, for development purposes, usually also maintained in an external repository (most likely git based). There is a system of includes and dependencies, so applying a single recipe may trigger the installation of a full environment.

The “data bag” is an interesting concept: data bags are pieces of JSON data stored inside the Chef Server. They can be encrypted, which makes them suitable for sensitive data such as passwords or private keys.
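
As a quick sketch (the bag name, item name and secret file path below are just examples), a data bag item is a JSON file with an “id” field, uploaded with knife – and optionally encrypted with a shared secret file:

{
  "id": "db_password",
  "password": "s3cret"
}

# knife data bag create credentials
# knife data bag from file credentials db_password.json --secret-file /etc/chef/encrypted_data_bag_secret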

What is a (Chef) Environment? One may see it as a form of grouping nodes (servers). This allows for shared, common data across all the nodes within a certain environment, and also for running commands against groups of nodes identified by the environment they belong to. The (Chef) Node also has a data record associated with it that may override the settings inherited from the environment. Such a data record (also JSON encoded) contains overrides for the default attributes described before and, on individual nodes, the list of recipes to be applied (also called the “run list”).
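
For illustration, a simplified (and hypothetical) node record could look like this – environment membership, attribute overrides and the run list in a single JSON document:

{
  "name": "web01.example.com",
  "chef_environment": "production",
  "normal": { "nginx": { "worker_processes": 4 } },
  "run_list": [ "recipe[nginx]", "role[webserver]" ]
}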


As I have mentioned before, the effective work is performed with knife; the full documentation is available on the Chef website.

From a DevOps perspective, the most frequent operations are (sketched as knife invocations after the list):

  • Modifying environment and node data;
  • Uploading and downloading cookbooks without dependency checking; for dependencies there is another tool, berks;
  • Running the same shell command on a group of nodes (query based).
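
The node, environment and cookbook names below are made up, but the commands themselves are the typical ones:

# knife environment edit staging
# knife node edit web01.example.com
# knife cookbook upload mycookbook
# berks upload
# knife ssh "chef_environment:staging" "sudo chef-client"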

Is it a fun tool? It sure is. But you know the saying, “don’t drink and drive”; in this context, due to the impact of individual commands, one must be extra careful (and most likely sober).

That’s it for a small introduction on Chef concepts. Thank you for the read and have a nice day!

Adding swap space to a node

Why would you need to add swap, you may ask (or not); nevertheless, the answer is that swap space is not added by default to any node in AWS – or with any other cloud provider. There are many reasons for this, the main one most likely being the complexity involved, along with the possibility of wasting storage resources. Then again, these days one can get instances with 200+ gigabytes of memory, so why would anybody still need swap?

Enabling swap can indeed be an option for nodes with 1-2 gigabytes of memory such as t2.micro, but not for production workloads, due to the performance penalty. If you want to put up a node for experiments, dev testing or QA, such instances are a cost-effective approach for most use cases (t2.micro is also free to use, within limits, during the first year).

Note: When adding swap space to an EBS-backed node in AWS, please keep the EBS storage limitations in mind when going on with this approach.

Adding swap space (e.g. 1 GB) is possible with a few commands:

# dd if=/dev/zero of=/swapfile bs=1024 count=1048576
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile
# echo -e "/swapfile\t\tswap\t\t\tswap\tdefaults\t0 0" >> /etc/fstab

This creates a special 1 GB file in the root directory (the chmod makes it readable by root only, as mkswap would otherwise warn about insecure permissions), tells the kernel to use it as swap space and makes the setup persistent through /etc/fstab.
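
To verify the result, the usual commands apply:

# swapon -s
# free -m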

Just defining the swap space may not be enough for the desired setup. One may also want to alter the “swappiness” behavior – that is, when memory pages are sent to swap to free up space for running applications. For this, a kernel parameter, /proc/sys/vm/swappiness, must be altered to suit one’s needs.

This parameter holds a numeric value between 0 and 100 that controls how aggressively the kernel swaps memory pages out; as a rule of thumb, a value of “10” means that memory pages will not be sent to swap until free memory drops under roughly 10%. Using the console, the parameter can be set with sysctl:

# sysctl vm.swappiness=10

One may also add it to the system configuration so that it survives reboots:

# echo "vm.swappiness=10" >> /etc/sysctl.conf

Hope you enjoyed this one. Have a nice day!


proftpd and secure ftp; not so fast

Switching to “secure mode”, from an FTP perspective, usually means creating a server certificate (X.509, the same type used by other secure protocols such as HTTPS), loading the “secure module” in the daemon configuration – this step is usually already done by default – and filling in the gaps in the config file.

For proftpd the steps are easy:

First, create the certificate with a validity of 10 years (yes, yes…):

# openssl req -new -x509 -days 3650 -nodes -out /etc/proftpd/ssl/proftpd.cert.pem -keyout /etc/proftpd/ssl/proftpd.key.pem

Then, ensure that you set the right permissions over the files:

-rw------- 1 root root 1298 Oct 10 15:53 proftpd.cert.pem
-rw------- 1 root root 887 Oct 10 15:53 proftpd.key.pem
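
One way to get there, assuming the paths used above:

# chmod 600 /etc/proftpd/ssl/proftpd.cert.pem /etc/proftpd/ssl/proftpd.key.pem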

And finally wrap things up in the config file; just pick the defaults from the documentation:

<IfModule mod_tls.c>
TLSEngine                       on
TLSLog                          /var/log/proftpd/tls.log
# note: SSLv23 means "accept any protocol version"; on a modern
# install you may want to restrict this, e.g. TLSProtocol TLSv1.2
TLSProtocol                     SSLv23
TLSOptions                      NoCertRequest
TLSRSACertificateFile           /etc/proftpd/ssl/proftpd.cert.pem
TLSRSACertificateKeyFile        /etc/proftpd/ssl/proftpd.key.pem
TLSVerifyClient                 off
TLSRequired                     on
TLSRenegotiate                  none
</IfModule>

Restarting the proftpd daemon (e.g. with the service command) means that a modern client such as FileZilla can connect, ask you to accept the certificate and then proceed with normal operation. Well, not so fast!


Actually, at this point, even if everything runs smoothly, there is a security issue waiting to blow up just around the corner: you have multiple open ports in the iptables configuration. A secure iptables setup means a policy of “deny all traffic but this particular exception list”, and having “free” ports left open by the firewall is an invitation for somebody to inject a script and set up a listening socket accepting commands from outside. This may be used to attempt privilege escalation attacks or just do some old-school DDoS; either way, this is not something one wants happening on one’s shift.

What does this have to do with the FTP protocol? Well, FTP uses the well-known port 21 (TCP) for control (e.g. sending commands from the client to the server), while the data transfer (e.g. directory listings, file contents) happens on a different port. There are actually two protocol variants here (see the example exchange after this list):

  • Passive – the server opens a listening socket on a dynamic port (allocated and managed by the kernel from the 1024:65535 dynamic pool) and sends it to the client through the control channel; the client then connects to that port and performs the data transfer.

  • Active – the client advertises a listening socket on a local port and the server connects from port 20 to that advertised port.
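
To illustrate passive mode (with made-up addresses), the server encodes the IP address and port in its reply on the control channel; the port is the fifth number times 256 plus the sixth:

PASV
227 Entering Passive Mode (192,168,1,10,242,48)

Here the client would connect to 192.168.1.10 on port 242 * 256 + 48 = 62000 for the data transfer.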

“Active mode” is no longer used by default by modern FTP clients, as most workstations sit behind firewalls these days; only when “passive mode” fails – and this happens when the server does not actually support it or has it disabled in the configuration – does the FTP client switch to “active mode”. Proftpd does not have a configuration switch for disabling “passive mode”, though.

So, wrapping things up: how do you make the passive mode ports accessible through iptables without compromising the node’s security? You may have learned before about “ip_conntrack_ftp”, the kernel module that allows such traffic to go through the firewall even when all the possible “passive mode” ports are explicitly filtered by iptables. On Red Hat based distributions (e.g. RHEL, CentOS, Fedora), this kernel module can be enabled in “/etc/sysconfig/iptables-config”:

# Load additional iptables modules (nat helpers)
#   Default: -none-
# Space separated list of nat helpers (e.g. 'ip_nat_ftp ip_nat_irc'), which
# are loaded after the firewall rules are applied. Options for the helpers are
# stored in /etc/modprobe.conf.
IPTABLES_MODULES="ip_conntrack_ftp"
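
To load and check the module by hand (note that on newer kernels the module is named nf_conntrack_ftp instead):

# modprobe ip_conntrack_ftp
# lsmod | grep conntrack_ftp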

Does it work? The better question should be “How does it work?”.

This kernel module works by looking inside the contents of the control packets; if it positively identifies a data transfer associated with an already open connection between an FTP client and the local FTP server, it lets the data packets go through. The problem with secure FTP (FTPS – not SFTP, which is a different kind of beast) is that the control channel is now encrypted, so this type of traffic analysis can no longer happen and the packets get silently dropped. This means that a previously working (plain) FTP configuration behind a strict firewall no longer works with secure FTP.

Are there any fixes for this? Yes: you can explicitly open some ports in iptables and make proftpd aware of them, although that carries the security risk I explained before:

# iptables -A INPUT -p tcp -m tcp --dport 62000:62500 -j ACCEPT

You may have an iptables configuration file somewhere on the node where such a rule can be persisted, like “/etc/sysconfig/iptables” (and then reload the rules).
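
On a Red Hat style setup, for instance, the persisted rule and the reload would look like this (the port range matches the one opened above):

-A INPUT -p tcp -m tcp --dport 62000:62500 -j ACCEPT

# service iptables restart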

Once the ports are no longer filtered, the proftpd configuration may be completed with:

PassivePorts 62000 62500

Restarting proftpd should allow FTPS to finally work – that is, until the number of simultaneous data transfers (for this particular configuration) exceeds 501, the size of the port range; at that point it will no longer work. For a soft-fail approach one should consider setting MaxClients, MaxClientsPerUser and MaxTransfersPerUser in the proftpd configuration file.
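
A minimal sketch of such limits – the numbers are illustrative, pick values that fit your port range and workload:

MaxClients              100 "Sorry, the maximum number of clients was reached"
MaxClientsPerUser       5
MaxTransfersPerUser     RETR 2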

That’s it for today – or, to phrase it differently, that’s why I have not personally enabled FTPS in any production environment so far. Have fun – and stay safe!
