Tag Archives: script
A bit of PowerShell scripting

Every large deployment has the odd Windows node; well, not every deployment, as many places run Windows-only environments. The reason is quite simple: most closed-source SDKs do not run on anything but Windows, so one ends up writing a Windows service to access that functionality. Fortunately for command-line gurus, Microsoft has created an extraordinary toolset: PowerShell.

When coding something in PowerShell, one must remember 2 things:

  • The script files should end in .ps1 rather than .bat, and one must usually override a security setting to get the system to run them (see the example after this list);

  • Most tasks are one-liners.
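
Regarding the first point: the usual way around the security setting is to relax the execution policy, either persistently or per invocation. A minimal sketch (the RemoteSigned choice and the script name are placeholders of my own):

# Allow locally written scripts to run, for the current user only
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

# Or bypass the policy for a single run
powershell.exe -ExecutionPolicy Bypass -File .\deploy.ps1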

Coming from Linux scripting, some things may look odd (e.g. it is not always possible to directly move directories; one must copy them recursively and then delete the source), but in the end the job gets done. I have put below a few basic tasks that one may at some point need to perform.
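
As a quick aside before the tasks, the directory “move” workaround mentioned above boils down to something like this (both paths are placeholders):

# "Move" a directory by copying it recursively, then removing the source
Copy-Item -Path "C:\Data\OldDir" -Destination "D:\Archive\OldDir" -Recurse -Force
Remove-Item -Path "C:\Data\OldDir" -Recurse -Force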

1. Getting a zip file from some HTTP server and unzipping it:

# Where to download from, where to stage the download and where to unpack it
$source_file = "http://internal.repo/package/module.zip"
$tmp_folder = "C:\Temp"
$dest_folder = "C:\Program Files\Internal\Application"

# Keep the original file name for the temporary copy
$source_file_name = [io.path]::GetFileName($source_file)
$dest_zip = "$tmp_folder\$source_file_name"

# Download the archive into the temp folder (created if missing)
New-Item -ItemType Directory -Force -Path $tmp_folder
Invoke-WebRequest $source_file -OutFile $dest_zip

# Unpack into the destination folder using the .NET zip support
New-Item -ItemType Directory -Force -Path $dest_folder
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::ExtractToDirectory($dest_zip, $dest_folder)

# Clean up the downloaded archive
Remove-Item -Path "$dest_zip" -Force

# No -Force for the temp folder
Remove-Item -Path "$tmp_folder"


Doing Backups in AWS

This text is about doing backups for data already existing in AWS, not for outside data, although some methods apply to both cases. But let’s start from the beginning:

What Data?

Your data can be located on EC2 nodes (virtual servers) or you may be using some dedicated database service such as RDS. The dedicated services have the backup functionality built-in already, with settings easily accessible through the interface. I won’t deal with those but rather with the “raw” data you may have on a node.

The data on the node falls into two categories, or rather can be looked at from two different perspectives:

  1. When one wants to capture the "system state" at a certain point in time. This perspective does not consider the composition of the data but the functionality, which is captured for use at a later date as a known-good fallback point.

  2. When one wants to get the state of a specific subsystem (e.g. a subset of the local storage, a subset of the local database). This is the “classical backup” as it is widely known.

Capturing State

AWS offers full support for taking snapshots of volumes:

[Image: AWS volume snapshot example]

One does not need to use only the interface; all of this functionality is available programmatically. One may also want to look at the Boto Python library.
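
As an illustration, a minimal sketch using the boto3 flavour of the library; the region, volume ID and description are placeholders, and credentials are assumed to be already configured:

import boto3

# Placeholder region and volume ID
ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly state capture",
)
print(snapshot["SnapshotId"])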

Classical Backup

One can store files in the Amazon cloud through programmatic means (anything from cron-based scripts to full-fledged backup software that runs on a schedule), using the following services:

  • Simple Storage Service (S3): this is the easiest to use as it offers instant storage, instant retrieval and also versioning (e.g. you may mirror some directory contents to the storage at various points in time; see the sketch after this list). It is not a cost-effective method of storage for huge amounts of data (multiple terabytes) over long periods of time.

  • Glacier: this is the equivalent of tape storage. Retrieval is not instant (one must schedule it in advance). It does not support versioning by default. It is 3-4 times cheaper than S3, though.

  • A dedicated EC2 node (or multiple nodes organized as a backup storage cluster): this is not cost effective but may work in certain scenarios (e.g. live data mirroring).

  • A dedicated database in RDS: this is far from cost effective but is the solution if one wants to use some existing backup software that can store data to a database only.
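
As referenced in the S3 item above, here is a minimal boto3 sketch for pushing a file to S3; the bucket name, key layout and local path are placeholders:

import datetime
import boto3

# Placeholder bucket and file; a dated key prefix gives simple point-in-time copies
s3 = boto3.client("s3")
key = "backups/{}/app-data.tar.gz".format(datetime.date.today().isoformat())
s3.upload_file("/var/backups/app-data.tar.gz", "my-backup-bucket", key)

With versioning enabled on the bucket, repeatedly uploading to the same key also keeps the older copies around.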

That was my introduction to doing backups in AWS. Thanks for reading!

Crazy DevOps interview questions

Who likes interviews? Me neither. Well, depends…

If you get any of the following questions during an interview then either the interviewer read this post or they’re getting info from the same sources as I am. Either way, let’s move one step forward.


Question 1:

Suppose you log into a node and find the following situation:

# ls -la /bin/chmod
-rw-r--r-- 1 root root 56032 Jan 14  2015 /bin/chmod

Is it fixable?

… yes. Most of the time. Let’s remember how executables are started on Linux: through the dynamic loader, ld-linux-something. Let’s check:

# ldd /bin/chmod 
	linux-vdso.so.1 =>  (0x00007ffdf27fc000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fb11a650000)
	/lib64/ld-linux-x86-64.so.2 (0x00007fb11aa15000)

OK, got it:

# /lib64/ld-linux-x86-64.so.2 /bin/chmod +x /bin/chmod

Fun, isn’t it? The follow-up question obviously is “what if the loader’s permissions are improperly set too?” or something similar. The answer to that is that not all filesystem issues are fixable with easy commands. One may even have to mount the filesystem on a different installation (e.g. boot from a Live CD, or attach the virtual storage to another, “good” node) and fix things up from there.
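
A rough sketch of that last scenario, assuming the broken disk shows up as /dev/sdb1 on the rescue system (the device name is a placeholder):

# mount /dev/sdb1 /mnt
# chmod 755 /mnt/bin/chmod
# umount /mnt

Since chmod on the rescue system works fine, it can fix the permissions of the broken one (or of its loader) under /mnt.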


