Why would you need to add swap, you may ask (or not); the short answer is that swap space is not configured by default on any node in AWS, or on any other cloud provider. The reasons for this are many, the main one most likely being the added complexity, along with the risk of wasting storage resources. Besides, these days one can get instances with 200+ gigabytes of RAM, so why would anybody still need swap?
Enabling swap can indeed be an option for nodes with 1-2 gigabytes of memory, such as t2.micro, but not for production workloads, due to performance issues. If you want to spin up a node for experiments, dev testing or QA, such instances are a cost-effective approach for most use cases (t2.micro is also free to use, within limits, during the first year).
Note: when adding swap space to an EBS-backed node in AWS, keep the EBS storage limitations in mind before going ahead with this approach.
Adding swap space (e.g. 1 GB) takes only a few commands:
# dd if=/dev/zero of=/swapfile bs=1024 count=1048576
# mkswap /swapfile
# swapon /swapfile
# echo -e "/swapfile\t\tswap\t\t\tswap\tdefaults\t0 0" >> /etc/fstab
This creates a 1 GB file in the root directory, tells the kernel to use it as swap space and records the setup in /etc/fstab so it is re-enabled at boot.
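To verify that the swap space is actually active, one can check the kernel's view of things:
# swapon -s          # lists active swap areas; /swapfile should show up here
# free -m            # the "Swap" line should now report roughly 1024 MB total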
Just defining the swap space may not be enough for the desired setup. One may also want to alter the “swappiness” behavior – how readily memory pages are sent to swap to free up space for running applications. For this, the kernel parameter /proc/sys/vm/swappiness must be adjusted to suit one’s needs.
This parameter holds a value between 0 and 100 that controls how aggressively the kernel swaps out memory pages instead of reclaiming page cache. A low value such as “10” tells the kernel to avoid swapping until memory becomes scarce (roughly, until free memory drops under 10%). Using the console, the parameter can be set with sysctl:
# sysctl vm.swappiness=10
One may also add it to the system configuration so that it survives reboots:
# echo "vm.swappiness=10" >> /etc/sysctl.conf
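To make sure the setting is picked up, and to read it back afterwards, one can reload the sysctl configuration and query the parameter:
# sysctl -p                       # re-reads /etc/sysctl.conf
# cat /proc/sys/vm/swappiness     # should print 10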
Hope you enjoyed this one. Have a nice day!
… is ignoring the run order of the scriptlets, because you probably want to arrive at a result close to this one:
- When installing the rpm for the first time, add the service to chkconfig and get it in the running state;
- When removing (for good), stop the service and remove it from chkconfig;
- When upgrading, stop the currently running service, do the binary upgrade and start the new service (do not alter chkconfig).
A quick approach to this would ignore the upgrade part completely, hoping that coupling together the removal and the installation would do the trick. Something like the following in the .spec file:
%post
chkconfig --add mywow
service mywow start

%preun
service mywow stop
chkconfig --del mywow
This will actually do the job if installation and removal of the rpm package were the only operations to consider. The problem is that during an upgrade this goes horribly wrong, as the scriptlets run in a different order than the one the previous sample expects:
- %pre of the new package
- %post of the new package
- %preun of the old package
- %postun of the old package
The obvious result is that an upgrade with those scriptlets will leave the service stopped and out of chkconfig. A real disaster.
The solution is to look at the first argument passed to each scriptlet. rpm sets it to the number of package instances that will be installed after the operation completes: in %post it is 1 for a fresh install and 2 (or more) for an upgrade, while in %preun it is 0 for a full removal and 1 for an upgrade. With that value in hand we can write better scripts:
%post
if [ $1 -gt 1 ] ; then
    service mywow restart
else
    chkconfig --add mywow
    service mywow start
fi

%preun
if [ $1 -eq 0 ] ; then
    service mywow stop
    chkconfig --del mywow
fi
Fixed. Now we are good with this.
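To double-check the behavior on a test box, one can walk through the full cycle and look at the service state after each step (the package file names below are just placeholders):
# rpm -ivh mywow-1.0-1.x86_64.rpm   # fresh install: %post runs with $1=1, service added and started
# rpm -Uvh mywow-1.1-1.x86_64.rpm   # upgrade: new %post gets $1=2, old %preun gets $1=1 and does nothing
# chkconfig --list mywow            # still registered; the service was restarted with the new binary
# rpm -e mywow                      # removal: %preun runs with $1=0, service stopped and deregistered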
Hope you enjoyed it!
Switching to “secure mode” from an ftp perspective usually means creating a server certificate (X509, the type used for other secure protocols such as https), loading the “secure module” in the daemon configuration – this step is usually done by default – and filling in the gaps in the config file.
For proftpd the steps are easy:
First, create the certificate with a validity of 10 years (yes, yes…):
# openssl req -new -x509 -days 3650 -nodes -out /etc/proftpd/ssl/proftpd.cert.pem -keyout /etc/proftpd/ssl/proftpd.key.pem
Then, ensure that the files have the right permissions:
-rw------- 1 root root 1298 Oct 10 15:53 proftpd.cert.pem
-rw------- 1 root root 887 Oct 10 15:53 proftpd.key.pem
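For example, assuming the files were generated in the location above:
# chown root:root /etc/proftpd/ssl/proftpd.cert.pem /etc/proftpd/ssl/proftpd.key.pem
# chmod 600 /etc/proftpd/ssl/proftpd.cert.pem /etc/proftpd/ssl/proftpd.key.pem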
And finally wrap things up in the config file; just pick the defaults from the documentation:
<IfModule mod_tls.c>
    TLSEngine on
    TLSLog /var/log/proftpd/tls.log
    TLSProtocol SSLv23
    TLSOptions NoCertRequest
    TLSRSACertificateFile /etc/proftpd/ssl/proftpd.cert.pem
    TLSRSACertificateKeyFile /etc/proftpd/ssl/proftpd.key.pem
    TLSVerifyClient off
    TLSRequired on
    TLSRenegotiate none
</IfModule>
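Before firing up a GUI client, the TLS handshake can be sanity-checked from the command line (replace ftp.example.com with your own server name; this is just a quick smoke test):
# service proftpd restart
# openssl s_client -connect ftp.example.com:21 -starttls ftp   # should print the server certificate and a verify return code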
After restarting the proftpd daemon (e.g. with the service command), a modern client such as FileZilla can connect, ask you to accept the certificate and then proceed with normal operation. Well, not so fast!
Actually, at this point, even if everything runs smoothly, there is a security issue waiting to blow up just around the corner: multiple open ports in the iptables configuration. A secure iptables setup follows a policy of “deny all traffic except this particular exception list”; leaving “free” ports unfiltered by the firewall is an invitation for somebody to inject a script and set up a listening socket that accepts commands from outside. This may be used for privilege escalation attempts or just some old-school DDoS; either way, not something one wants to happen on their shift.
What does this have to do with the ftp protocol? Well, ftp uses the well-known port 21 (tcp) for control (sending commands from the client to the server), while the data transfer (directory listings, file contents) happens on a different port. There are actually two protocol variants here:
- Passive – the server opens a listening socket on a dynamic port (allocated and managed by the kernel in the 1024:65535 dynamic pool) and sends it to the client through the control channel; the client then connects to that port and does the data transfer.
- Active – the client advertises a listening socket on a local port and the server connects from port 20 to that advertised port.
Modern ftp clients no longer use “active mode” by default, as most workstations today sit behind firewalls; only when “passive mode” fails – which happens when the server does not actually support it or has it disabled in the configuration – does the client fall back to “active mode”. Proftpd does not have a configuration switch for disabling “passive mode”, though.
So, wrapping things up, how do you make the passive-mode ports accessible through iptables without compromising the node security? You may have learned before about “ip_conntrack_ftp”, the kernel module that allows such traffic to go through the firewall even when all the possible “passive mode” ports are explicitly filtered by iptables. This kernel module can be enabled on RedHat distributions (e.g. RHEL, CentOS, Fedora) in “/etc/sysconfig/iptables-config”:
# Load additional iptables modules (nat helpers)
#     Default: -none-
# Space separated list of nat helpers (e.g. 'ip_nat_ftp ip_nat_irc'), which
# are loaded after the firewall rules are applied. Options for the helpers are
# stored in /etc/modprobe.conf.
IPTABLES_MODULES="ip_conntrack_ftp"
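After restarting the firewall one can confirm that the helper module is actually loaded (the module name differs slightly between kernel versions):
# service iptables restart
# lsmod | grep conntrack_ftp     # ip_conntrack_ftp on older kernels, nf_conntrack_ftp on newer ones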
Does it work? The better question should be “How does it work?”.
This kernel module works by looking inside the ftp control traffic; when it identifies a data transfer being negotiated over an already open connection between an ftp client and the local ftp server, it lets the corresponding data packets through. The problem with secure ftp (ftps – not sftp, which is a different kind of beast) is that the control channel is encrypted, so this type of traffic analysis can no longer happen and the data packets get silently dropped. This means that a plain ftp configuration that previously worked through the firewall no longer works once TLS is enabled.
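One way to see the helper at work with plain ftp is to look at the connection-tracking “expectations” it creates for the upcoming data connection while a transfer is negotiated (the exact /proc path depends on the kernel version):
# cat /proc/net/ip_conntrack_expect    # older kernels; newer ones expose /proc/net/nf_conntrack_expect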
Are there any fixes for this? Yes, you can explicitly open some ports in iptables and make proftpd aware of them, but that’s a security risk as I have explained before:
# iptables -A INPUT -p tcp -m tcp --dport 62000:62500 -j ACCEPT
You may also have an iptables configuration file somewhere on the node, such as “/etc/sysconfig/iptables”, where such a rule can be added permanently (and then reload the rules).
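On a RedHat-style node that could look roughly like the excerpt below, assuming the usual layout where a catch-all REJECT rule sits at the end of the INPUT chain:
# /etc/sysconfig/iptables (excerpt) – keep the rule above the final REJECT line
-A INPUT -p tcp -m tcp --dport 62000:62500 -j ACCEPT
A “service iptables restart” afterwards reloads the rule set.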
Once the ports are no longer filtered, the proftpd configuration may be completed with:
PassivePorts 62000 62500
Restarting proftpd should allow ftps to finally work – that is, until the number of simultaneous data connections exceeds the 501 ports available in this particular range; at that point new passive transfers will start to fail. For a soft-fail approach one should consider setting MaxClients, MaxClientsPerUser and MaxTransfersPerUser in the proftpd configuration file.
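A minimal sketch of such limits (the numbers are arbitrary and should be tuned to the actual workload; MaxTransfersPerUser limits a given command group – downloads via RETR in this example – check the proftpd documentation for the exact syntax):
# limits for the proftpd configuration file – values are illustrative
MaxClients          100
MaxClientsPerUser   5
MaxTransfersPerUser RETR 2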
That’s it for today – or, to put it another way, that’s why I have not personally enabled ftps in any production environment so far. Have fun – and stay safe!