I’ve been using Docker containers on Linux systems for a while now, and have recently developed a neat solution to what I imagine is a fairly common problem.
I had a number of Docker containers that I wanted to launch from a common image, but with a slightly different configuration depending on the environment in which I was launching them. For example, a web application container might connect to a different database in a staging environment, or a MongoDB replica set name might be different. This meant my options basically looked like:
- Maintain multiple containers / Dockerfiles.
- Maintain the configuration in separate data volumes and use `--volumes-from` to pull the relevant container in.
- Bundle the configuration files into one container, and manually specify the `ENTRYPOINT` values to pick this up.
None of those really appealed, due to either needless duplication or the complexity of an approach that would necessitate really long `docker run` commands. So I knocked up a quick & simple Ruby script that I could use across all my containers, which does the following:
- Generates configuration files from ERB templates
- Uses values provided in YAML files, one per environment
- Copies the generated templates to the correct location and specifies permissions
- Executes a replacement process once it’s finished (e.g. mongod, nginx, supervisord, etc.)
- Provides a plugin architecture to grab values from a variety of sources – environment variables, Consul clusters and so on.
This way I can keep all my configuration together in the container (or provide it at run-time), and just tell Docker which environment to use when I start it.
This project is now called Tiller and has grown significantly in scale and popularity since it was first announced.
The “TL;DR” pitch for Tiller is that it’s a tool that normally runs as the `EXEC` inside a Docker container; it generates dynamic configuration files for your application or services, and then runs the application as a replacement process. It has a pluggable architecture and comes bundled with many plugins, so you can get values for your configuration files from environment variables, a Consul cluster, JSON files, and many other sources. It was created to provide a simple, generic way of building flexible “parameterized containers” that can grab their configuration at run-time – think of a service that needs a different DB connection string in a production environment, or secrets such as passwords that you want to pass in at run-time instead of baking them into your container.
And, as the end result is a set of plain-text files, your application doesn’t need to know how to talk directly to any of these services; it still only needs to know how to load a configuration file.
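In outline, that pluggable architecture can be sketched as a set of data sources sharing one interface, so the rendering code never cares where values come from. This is a hypothetical sketch, not Tiller’s real plugin API; the class and method names are assumptions:

```ruby
require 'yaml'

# Base interface: every plugin returns a hash of values
# for a given environment.
class DataSource
  def values(environment)
    raise NotImplementedError
  end
end

# Pull values straight from process environment variables.
class EnvironmentDataSource < DataSource
  def values(_environment)
    ENV.to_h
  end
end

# Pull values from a per-environment YAML file on disk.
class FileDataSource < DataSource
  def initialize(dir)
    @dir = dir
  end

  def values(environment)
    YAML.load_file(File.join(@dir, "#{environment}.yaml"))
  end
end

# Merge all sources; later sources override earlier ones, so e.g.
# an environment variable can override a value baked into the container.
def merged_values(sources, environment)
  sources.reduce({}) { |acc, source| acc.merge(source.values(environment)) }
end
```

Adding a new source (a Consul cluster, say) then just means writing another class with a `values` method, which is the kind of extensibility the plugin design buys.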
For more information, see the documentation at the Github page, join the Gitter chatroom or check this blog for more updates & examples. A short selection of recent blog posts is provided below as a helpful “jumping off” point:
- Tiller Project and Docker Container Configuration
- Tiller v0.0.7 Now Supports Environment Variables
- Tiller and Docker Environment Variables – includes a walkthrough and example Dockerfile to download.
- Building Dynamic Docker Images With JSON and Tiller 0.1.4 – more practical examples and a walk-through of the new JSON data source.
- Querying Tiller configuration from a running Docker container – how to use the new Tiller API to query the generated configuration of a running container.
- Tiller 0.3.0 and defaults datasource – how to use the new “defaults” data source.
- Tiller and Consul – a full walkthrough that shows how to retrieve templates and values from a Consul cluster.