You can find the first article of the series here: Crazy DevOps interview questions.
Suppose you run the following commands:
# cd /tmp
# mkdir a
# cd a
# mkdir b
# cd b
# ln /tmp/a a
… what is the result?
At this point one may point out that the hard link being defined would essentially create a circular reference, which is a correct answer on its own. It is not complete, though: how would the operating system (or rather, the file system) handle such a command anyway?
A command-line guru may simply dismiss the question, saying that hard links are not allowed for directories, and that's about it. Another guru may point out that we are missing the -d parameter to ln, so the command will fail before anything else is even considered. Correct, but still not the complete answer the interviewer expects.
The complete answer must point out that:
Not all file systems disallow directory hard links, although most do. The notable exception is HFS+ (Apple's OS X).
Hard links are, by definition, multiple directory entries pointing to a single inode. The inode contains a "link counter" field: deleting a hard link decrements it, and the file's data is only freed when the counter drops to zero.
Directory hard links are not so inherently dangerous that they must be disallowed by definition. Their major problem is the circular-reference situation described above. It could be solved with graph-theoretic cycle detection, but such an implementation is both CPU- and memory-intensive.
The decision to disallow hard links for directories was taken with this computational cost in mind; the cost grows with the size of the file system.
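The link-counter behavior described above is easy to observe with regular files, and the directory refusal can be demonstrated in the same breath. A minimal sketch (all paths are throwaway temporaries; `stat -c %h` is GNU coreutils syntax):

```shell
set -e
d=$(mktemp -d)
echo "hello" > "$d/original"
ln "$d/original" "$d/extra"      # second directory entry, same inode
stat -c %h "$d/original"         # link counter is now 2
rm "$d/extra"                    # removes one entry, not the data
stat -c %h "$d/original"         # back to 1; contents still intact
cat "$d/original"
ln "$d" "$d/selflink" 2>/dev/null \
  && echo "directory hardlink allowed" \
  || echo "directory hardlink refused"   # refused on ext4/xfs/btrfs
rm -rf "$d"
```

On a typical Linux file system the last command prints "directory hardlink refused", confirming the rule discussed above.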
I will grant you that such a comprehensive answer is usually only expected in an interview with a company among the "Big Four" technology companies.
Suppose you run the following command on Linux:
# mv /home/testuser/magicalapp /usr/local/
… what does it do? How does it work (or, if it does not work, why not)?
The answer lies in mv's basic functionality. This command may or may not actually move data around; most of the time it simply creates a new directory entry at the destination, pointing to the same inode as the source (the source directory entry is deleted in the process).
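The rename-in-place behavior is easy to verify by comparing inode numbers before and after an mv within one file system. A minimal sketch (the paths are throwaway temporaries, so source and destination are guaranteed to be on the same file system):

```shell
set -e
d=$(mktemp -d)              # source and target both live under the same fs
mkdir "$d/src" "$d/dst"
echo "data" > "$d/src/app"
stat -c %i "$d/src/app"     # note the inode number
mv "$d/src/app" "$d/dst/"
stat -c %i "$d/dst/app"     # same inode: no data was copied
rm -rf "$d"
```

The two stat calls print the same inode number: only the directory entry moved.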
When the source and the destination are on different file systems, mv usually works in the same way as:
# cp -r /home/testuser/magicalapp /usr/local/
# rm -rf /home/testuser/magicalapp
Note (1): very old versions of mv were not able to move directories between file systems.
Note (2): in PowerShell the equivalent, Move-Item, works differently; it can move files between file systems but has no recurse functionality.
This is a nice question to ask in an interview, don’t you think? Not sure how many people would answer it correctly, though.
What does each field in top's CPU usage line mean? (e.g. us, sy, wa, …)
The "general" answer is that these fields contain percentage values, all adding up to 100. Each value is a quota: the percentage of CPU time consumed by that particular type of task. The tasks are:
us: user space programs, all the general processing, not otherwise identified.
sy: everything kernel related – kernel mode processing.
ni: user space programs running in “nice” mode.
id: idle time; the CPU effectively does nothing (the kernel's idle loop).
wa: waiting for a resource (e.g. hard disk, network). This is not busy waiting: it is time the CPU sits idle while at least one I/O request to an external device is outstanding.
hi: time spent handling hardware interrupts (generated by external devices such as the physical network interface). A large value here may indicate a DDoS hitting the network interface.
si: time spent handling software interrupts (softirqs), i.e. deferred kernel work such as network packet processing and timers.
st: stolen CPU time (only meaningful in virtualized environments, not on physical nodes). This is CPU time the hypervisor gave to other guests while this virtual machine wanted to run.
This was the second episode of the interview questions. Hope you enjoyed it!