
There is no such thing as a completely secure server: as long as you provide public access to services running on a server, there is a risk that somebody, at some point, will try something like a privilege escalation or a denial of service. What one can do is minimize the chance of success of such an attack, or at least minimize the damage.
I am not going to present “high tech” security mechanisms here, but rather “common sense” ones; such measures will most likely stop speculative attackers or bots in their tracks. Let’s start with the first trick in the book:
1. Set up iptables
One may think: why set the firewall up? If I provide 3 services to the world and those are the only ones with listening sockets, why would I need a completely configured firewall?
The answer is: the firewall is always necessary. Having a policy of “deny all + exceptions” will render useless any rogue service that an attacker may inject through some privilege escalation attack.
On a RedHat (CentOS) system the firewall configuration lives in /etc/sysconfig/iptables. A typical restrictive configuration might be:
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m multiport --dports 22,80,443 -j ACCEPT
COMMIT
The configuration above allows new tcp traffic (new connections) to ports 22 (ssh), 80 (http) and 443 (https), while allowing all outgoing traffic and also the responses received in relation to such traffic. It also allows icmp (ping), but denies everything else.
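To actually apply such a configuration, one would write it to /etc/sysconfig/iptables and reload the firewall as root (e.g. with iptables-restore). A minimal sketch that stages the rules in a temporary file and sanity-checks them before the (root-only, here commented out) load step — the file path and the checks are illustrative:

```shell
# Stage the rules in a temporary file first, so that a typo does not
# lock us out of the machine the moment they are loaded.
rules=$(mktemp)
cat > "$rules" <<'EOF'
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m multiport --dports 22,80,443 -j ACCEPT
COMMIT
EOF

# Sanity checks: the allowed ports and the default-drop policy are present.
grep -c -- '--dports 22,80,443' "$rules"   # expect 1
grep -c '^:INPUT DROP' "$rules"            # expect 1

# As root, one would then load and persist the rules, e.g.:
#   iptables-restore < "$rules" && cp "$rules" /etc/sysconfig/iptables
```

Loading with iptables-restore is atomic: either the whole ruleset applies or none of it does, which is exactly what you want for a “deny all + exceptions” policy.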
You can find the first article of the series here: Crazy DevOps interview questions.
Question 1:
Suppose you run the following commands:
# cd /tmp
# mkdir a
# cd a
# mkdir b
# cd b
# ln /tmp/a a
… what is the result?
At this point one may point out that the hard link being defined would basically create a circular reference, which is a correct answer on its own. It’s not complete, though: how would the operating system (the file system) actually handle such a command?
A command-line guru may simply dismiss the question, saying that hard links are not allowed for directories, and that’s about it. Another guru may point out that we’re missing the -d parameter to ln, so the command will fail before anything else is even considered. Correct, but still not the complete answer the interviewer expects.
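The guru's point is easy to verify: on a typical Linux box, GNU ln refuses the directory hard link outright. A quick reproduction of the sequence above, using a scratch directory instead of /tmp/a to avoid collisions:

```shell
# Reproduce the question's sequence in a scratch directory.
scratch=$(mktemp -d)
mkdir "$scratch/a"
mkdir "$scratch/a/b"
cd "$scratch/a/b"

# GNU coreutils rejects this with:
#   ln: .../a: hard link not allowed for directory
if ln "$scratch/a" a 2>/dev/null; then
    echo "hard link created (unexpected on Linux)"
else
    echo "ln refused the directory hard link"
fi
# prints: ln refused the directory hard link

cd / && rm -rf "$scratch"
```

Even with -d, the underlying link(2) call returns EPERM for directories on Linux, so root fares no better.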
The complete answer must point out that:
- Not all file systems disallow directory hard links, although most do. The notable exception is HFS+ (Apple's OS X).
- Hard links are, by definition, multiple directory entries pointing to a single inode. There is a “hardlink counter” field within the inode; deleting a hard link will not delete the file unless that counter is 1.
- Directory hard links are not, by definition, so dangerous that they must be disallowed. Their major problem is the circular reference situation described above; detecting such cycles is a graph traversal problem, but the implementation is both CPU and memory intensive.
- The decision to disallow hard links for directories was taken with this computational cost in mind, a cost that grows with the size of the file system.
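For regular files, where hard links are allowed, the “hardlink counter” behaviour from the second point is easy to observe with GNU stat (the scratch file names are illustrative):

```shell
# Watch the inode link counter as hard links come and go.
d=$(mktemp -d)
echo "payload" > "$d/file"

stat -c %h "$d/file"        # prints 1: a single directory entry
ln "$d/file" "$d/hardlink"  # second directory entry, same inode
stat -c %h "$d/file"        # prints 2

rm "$d/file"                # the counter drops back to 1...
cat "$d/hardlink"           # ...and the data is still reachable: prints "payload"

rm -rf "$d"
```

Only when the last link is removed does the counter reach 0 and the inode (and its data blocks) get freed.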
Granted, such a comprehensive answer is usually only expected when interviewing with one of the “Big Four” technology companies.
Every large deployment has the odd Windows node; well, not quite every deployment, as many places run Windows-only environments. The reason is quite simple: most closed-source SDKs do not run on anything but Windows, so one needs to write a Windows service to access those functionalities. Fortunately for command-line gurus, Microsoft has created an extraordinary toolset: PowerShell.
When coding something in PowerShell, one must remember two things:
- The script files should end in .ps1 rather than .bat (and one must usually override a security setting, the execution policy, to get the system to run them);
- Most tasks are one-liners.
Coming from Linux scripting, some things may look odd (e.g. a directory cannot always be moved directly; across volumes one must copy it recursively and then delete the source), but in the end the job gets done. Below are a few basic tasks that one may at some point need to perform.
1. Getting a zip file from an http server and unzipping it:
$source_file = "http://internal.repo/package/module.zip"
$tmp_folder = "C:\Temp"
$dest_folder = "C:\Program Files\Internal\Application"
$source_file_name = [io.path]::GetFileName($source_file)
$dest_zip = "$tmp_folder\$source_file_name"
New-Item -ItemType Directory -Force -Path $tmp_folder
Invoke-WebRequest $source_file -OutFile $dest_zip
New-Item -ItemType Directory -Force -Path $dest_folder
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::ExtractToDirectory($dest_zip, $dest_folder)
Remove-Item -Path $dest_zip -Force
# No -Force for the temp folder
Remove-Item -Path $tmp_folder