Why would you need to add swap, you may ask (or not); the short answer is that swap space is not added by default to AWS nodes, nor, as far as I know, on any other cloud provider. There are several reasons for this, the main one most likely being the extra complexity involved, along with the risk of wasting storage resources. Then again, these days you can get instances with 200+ gigabytes of RAM, so why would anybody still need swap?
Enabling swap can indeed be an option for nodes with 1-2 gigabytes of memory, such as t2.micro, but not for production workloads, due to the performance impact. If you want to put up a node for experiments, dev testing or QA, such instances are a cost-effective approach for most use cases (t2.micro is also free to use, within limits, during the first year).
Note: when adding swap space to an EBS-backed node in AWS, please keep the EBS storage limitations in mind before going down this road.
Adding swap space (e.g. 1 GB) takes only a few commands:
# dd if=/dev/zero of=/swapfile bs=1024 count=1048576
# mkswap /swapfile
# swapon /swapfile
# echo -e "/swapfile\t\tswap\t\t\tswap\tdefaults\t0 0" >> /etc/fstab
This creates a 1 GB file in the root directory, formats it as swap, tells the kernel to start using it and adds an entry to /etc/fstab so the setup survives a reboot.
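To double-check that the new swap area is actually in use, the standard tools will show it right away; for example (nothing AWS-specific here):

# swapon -s
# free -m

The Swap line reported by free should now show a non-zero total.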
Just defining the swap space may not be enough for the desired setup. One may also want to tune the “swappiness” behavior – how eagerly memory pages are sent to swap to free up space for running applications. This is controlled by a kernel parameter, /proc/sys/vm/swappiness, which can be altered to suit one’s needs.
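Before changing anything, it is worth checking the current value (the kernel default is 60 on most distributions):

# cat /proc/sys/vm/swappiness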
This parameter takes a numeric value between 0 and 100 that controls how aggressively the kernel swaps out memory pages. Roughly speaking, a value of 10 means that no memory pages will be sent to swap until free memory drops to around 10%. Using the console, the parameter can be set with sysctl:
# sysctl vm.swappiness=10
One may also add it to the system configuration so that it survives reboots:
# echo "vm.swappiness=10" >> /etc/sysctl.conf
Hope you enjoyed this one. Have a nice day!
… is ignoring the run order of the scriptlets. Chances are you want to end up with a result close to this one:
- When installing the rpm for the first time, add the service to chkconfig and get it into the running state;
- When removing it (for good), stop the service and remove it from chkconfig;
- When upgrading, stop the currently running service, do the binary upgrade and start the new service (do not alter chkconfig).
A quick approach to this would ignore the upgrade part completely, hoping that coupling together the removal and the installation would do the trick. Something like the following in the .spec file:
%post
chkconfig --add mywow
service mywow start

%preun
service mywow stop
chkconfig --del mywow
This will actually do the job if installing and removing the rpm package were the only operations to consider. The problem is that during an upgrade things go horribly wrong, because the scriptlets run in a different order from the one the previous sample expects:
- %pre of the new package
- %post of the new package
- %preun of the old package
- %postun of the old package
The obvious result is that an upgrade with those scriptlets will leave the service stopped and out of chkconfig. A real disaster.
The solution is to look at the first argument ($1) that rpm passes to each scriptlet: it holds the number of instances of the package that will remain installed once the current operation completes, which is enough to tell an installation or removal apart from an upgrade. Taking that value into account, we can write some better scriptlets:
%post
if [ $1 -gt 1 ] ; then
    service mywow restart
else
    chkconfig --add mywow
    service mywow start
fi

%preun
if [ $1 -eq 0 ] ; then
    service mywow stop
    chkconfig --del mywow
fi
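As a quick reference, and assuming the same scriptlets as above (the package file names below are only placeholders), the values of $1 map out like this:

# rpm -ivh mywow-1.0-1.noarch.rpm   -> %post runs with $1 = 1 (first install)
# rpm -Uvh mywow-2.0-1.noarch.rpm   -> %post of 2.0 runs with $1 = 2, %preun of 1.0 with $1 = 1 (upgrade)
# rpm -e mywow                      -> %preun runs with $1 = 0 (removal)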
Fixed. Now we are good with this.
Hope you enjoyed it!