Cloud Integration – a good start

Introduction

Going back in time to the early 2000s, one can find a lot of businesses relying on “web hosting” or providing services directly from bare-metal servers. After all, the “cloud” only became a thing and gained traction in the last 5 to 10 years. Many such businesses have survived to the present day.

What was (and in many scenarios still is) the reality of being able to provide services from a colocated or on-premises bare-metal server?

  • The physical server had to be purchased before anything else; requirements on the enclosure size sometimes prevented consumer-grade hardware from being used;

  • The operating system had to be installed by hand, from installation media;

  • The configuration was a tedious process, with many details only getting fixed in the days after going live;

  • Unplanned issues, failures and upgrades (both hardware and software) were difficult to deal with and at times required taking the server down completely. Meeting any SLA was mostly a matter of luck.

Even with the above reality, things sometimes worked properly.

Cloud Integration

The Cloud came along with a different paradigm, completely separating the service from the actual hardware it runs on. Servers could be created on demand, and uptime could be preserved by using the failover features built into the Cloud platform. It was not all fun and games, though; actual service performance was too often much slower than the bare-metal equivalent. This brings me to the problem at hand: how does one integrate bare-metal servers, colocated or on-premises, with cloud services?

First things first: why would one supplement actual servers with cloud services? Taking another look at the list above:

  • Operating system installation may not be completely automated in every scenario, but one can keep versioned images that bring up an operating system with minimal intervention;

  • Configuration can be managed automatically with a tool like Chef. The Chef Server can be kept in the cloud and connected to the existing infrastructure through a VPN;

  • Data backups can be kept in the Cloud, along with tools that can restore anything at any time with minimal human intervention.

Such changes can bring quieter nights, cut a lot of manual work and move the SLAs into “predictable” territory.

Using Amazon Web Services

The AWS services relevant to the integration scenario above are S3, EC2, CodeCommit and OpsWorks.

1. Backups

Data backups can be stored in one or more S3 buckets, using one of the following approaches:

  • a specialized tool like Amanda that integrates nicely with AWS;

  • a good old solution combining tar and find to handle incremental backups;

  • a plain mirror of the data into a versioned S3 bucket (sketched below).
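
As a rough illustration of the third approach, here is a minimal boto3 sketch that mirrors a local directory into a versioned S3 bucket. The bucket name, local path and key prefix are placeholders of my choosing, and a real backup script would add error handling and some pruning policy.

    import os
    import boto3

    # Hypothetical values -- replace with your own bucket and paths.
    BUCKET = "example-backups"   # assumed to exist, with versioning enabled
    LOCAL_ROOT = "/srv/data"     # directory tree to mirror
    PREFIX = "nightly/"          # key prefix inside the bucket

    s3 = boto3.client("s3")

    def mirror_to_s3():
        """Upload every file under LOCAL_ROOT into the bucket.

        With versioning enabled, each upload of a changed file creates
        a new object version, so older states remain recoverable.
        """
        for dirpath, _dirnames, filenames in os.walk(LOCAL_ROOT):
            for name in filenames:
                path = os.path.join(dirpath, name)
                # Derive the S3 key from the path relative to LOCAL_ROOT.
                key = PREFIX + os.path.relpath(path, LOCAL_ROOT)
                s3.upload_file(path, BUCKET, key)

    if __name__ == "__main__":
        mirror_to_s3()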

2. Configuration

There are two options:

  • use OpsWorks and have Amazon set up a Chef Server – this is simple but monthly costs can escalate pretty fast;

  • get a normal EC2 instance and install the Chef Server yourself (even a t2.micro from the free tier will do; a launch sketch follows below).

Regardless of the option chosen, the Chef Server must be connected to the on-premises or colocated servers through a VPN solution (e.g. OpenVPN).
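
To make the second option concrete, below is a minimal boto3 sketch that launches a t2.micro instance on which the Chef Server can then be installed by hand or through a user-data script. The AMI ID, key pair and security group are hypothetical placeholders; the security group would need to allow the Chef and VPN ports.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholder identifiers -- substitute your own AMI, key pair
    # and security group before running this.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical Ubuntu LTS AMI
        InstanceType="t2.micro",           # covered by the free tier
        KeyName="chef-server-key",
        SecurityGroupIds=["sg-0123456789abcdef0"],
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "chef-server"}],
        }],
    )

    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched Chef Server instance:", instance_id)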

Development files (cookbooks, node attributes, data bags) can easily be stored in AWS CodeCommit, a hosted Git repository service that should incur no costs in normal usage scenarios. The alternative is, of course, to host a Git repository on an EC2 instance separate from the Chef Server (to avoid a single point of failure).
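
Setting up such a repository takes a single API call; the sketch below uses boto3, with "chef-repo" as an arbitrary example name. The returned HTTPS clone URL can then be added as a regular git remote on the workstation holding the cookbooks.

    import boto3

    codecommit = boto3.client("codecommit", region_name="us-east-1")

    # "chef-repo" is just an example name for the cookbook repository.
    repo = codecommit.create_repository(
        repositoryName="chef-repo",
        repositoryDescription="Cookbooks, node attributes and data bags",
    )

    # Use the clone URL as an ordinary git remote.
    print(repo["repositoryMetadata"]["cloneUrlHttp"])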

Conclusion

This is just the beginning of the path that leads to the Cloud. Thank you for reading!

