While a public yum repository is easy to set up with S3, a private one is more difficult: the privacy must be enforced by a plugin that retrieves files from the S3 bucket through the API, usually with stored credentials. Storing credentials can be avoided on EC2 instances that are assigned an appropriate IAM role, but outside EC2 the credentials have to live somewhere on the machine.
Nevertheless, the first steps are common for both public and private setups:
1. Create the proper directory structure
This is achievable with the “createrepo” binary, which can be installed on both Red Hat and Debian-based systems (e.g. Fedora/CentOS or Ubuntu). Running it creates a “repodata” sub-directory containing a few files with the parsed RPM metadata.
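A minimal sketch of this step, assuming the RPMs have already been copied into ~/localrepo (the directory name is an example, not a requirement):

```shell
# Example local repository directory; adjust the path to taste.
REPO_DIR="$HOME/localrepo"
mkdir -p "$REPO_DIR"

# Generate (or refresh) the repodata/ metadata. --update reuses existing
# metadata so unchanged packages are not parsed again on subsequent runs.
command -v createrepo >/dev/null 2>&1 && createrepo --update "$REPO_DIR"
```

Re-run the same command after every package addition or removal so the metadata stays in sync with the RPMs.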
2. Sync the local repository with the S3 bucket
Installing the AWS Command Line Interface makes things really easy, as this can be solved with a one-liner:
$ aws s3 sync ~/localrepo s3://yumrepobucket/remoterepo --delete
Running such a command from cron is a bit trickier, as the credentials may not be picked up in cron's stripped-down environment. Setting the AWS_CONFIG_FILE environment variable explicitly will certainly help.
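A sketch of such a crontab entry, with AWS_CONFIG_FILE set inline; the schedule, user home, and paths below are assumptions for illustration:

```shell
# Sync every 15 minutes, pointing the AWS CLI at an explicit config file
# because cron does not inherit the interactive shell's environment.
CRON_LINE='*/15 * * * * AWS_CONFIG_FILE=/home/repo/.aws/config aws s3 sync /home/repo/localrepo s3://yumrepobucket/remoterepo --delete'

# To install it, append to the current user's crontab (uncomment to apply):
# ( crontab -l 2>/dev/null; echo "$CRON_LINE" ) | crontab -
echo "$CRON_LINE"
```

Using absolute paths on both the config file and the local repository avoids surprises, since cron jobs do not necessarily start in the user's home directory.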
3. The yum plugin
While writing such a plugin yourself is possible with some Python knowledge, grabbing an existing one from GitHub is the fastest option.
The plugin I found to work best for my needs was yum-s3-iam. The installation was simple:
s3iam.py went to /usr/lib/yum-plugins/;
s3iam.conf went to /etc/yum/pluginconf.d/;
A customized version of s3iam.repo went to /etc/yum.repos.d.
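For reference, a customized s3iam.repo might look roughly like the sketch below; the repository id, name, and bucket URL are placeholders based on this post's example bucket, not values mandated by the plugin:

```ini
[private-repo]
name=Private S3 yum repository
baseurl=https://yumrepobucket.s3.amazonaws.com/remoterepo
enabled=1
s3_enabled=1
gpgcheck=0
```

Check the plugin's own sample s3iam.repo for the authoritative option names, as they may differ between versions.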
While not specified anywhere in the documentation, the key_id and secret_key parameters can be added to the .repo file. Otherwise, configuring an EC2 role (if applicable) should do the trick. From a security perspective, neither the role nor the user (if a role cannot be used) should have more than read-only access to the bucket.
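The read-only access mentioned above can be expressed as an IAM policy attached to the role or user; a minimal sketch for this post's example bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::yumrepobucket",
        "arn:aws:s3:::yumrepobucket/*"
      ]
    }
  ]
}
```

Note that s3:ListBucket applies to the bucket ARN itself while s3:GetObject applies to the objects inside it, which is why both Resource entries are needed.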
At this point one should have a private yum repository on an S3 bucket. Make good use of it!