markround.com

Solaris packages, DevOps, Bass and other things of interest...

Centralized Docker Container Configuration With Tiller


If you’ve been using Docker for a while, you probably know that you can use Tiller to generate configuration files inside your containers. This means you can provide a single container for running in a variety of different environments (think of a web application that needs different database URIs and credentials depending on where it is run).

You can also use it to provide ‘parameterized’ containers – where you allow end-users to provide some aspects of the configuration. For example, the team over at StackStorm are using Tiller and the environment plugin to let users easily configure their containers. I’ve provided a variety of plugins to help with this, and have written about it previously.

There was always a catch, though. Until recently, Tiller obtained configuration from a ‘local’ source – either configuration files bundled in a container, or from environment variables passed in at container run time. This is probably fine for most situations, but did prevent a lot of interesting use-cases. However, the 0.6.x releases now support fetching configuration and templates from a variety of network sources. Currently (as of 0.6.1), you can retrieve data from either a ZooKeeper cluster or an HTTP server, with more services to be supported in later releases.

This is a massive change as it now means you can have a central store for all your container configuration, and opens the door for a whole bunch of really cool possibilities. You could, for example, have a service that allows users to edit configuration files or set parameters through a web interface. Or you could plug Tiller into your automation stack so you can alter container configuration on the fly. You could even use Tiller “on the metal” to configure physical/VM hosts when they boot.

Of course, this does introduce an external dependency when launching your containers, so before you implement this in a production setting, you should ensure you have carefully considered all your points of failure!

Enabling these plugins is straightforward – simply add them to common.yaml, e.g.

common.yaml
data_sources:
  - file
  - http

template_sources:
  - http
  - file

As with all Tiller plugins, the ordering is significant. In the above example, values from the HTTP data source will take precedence over YAML files, but templates loaded from files will take precedence over templates stored in HTTP. You should tweak this as appropriate for your environment.
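
The precedence rule can be pictured as plain hash merges done in list order, with later sources winning. This is just a sketch of the semantics, not Tiller's actual internals:

```ruby
# Illustrative only: data sources are merged in the order they are
# listed in common.yaml, so values from later sources override
# earlier ones.
def merge_in_order(sources)
  sources.reduce({}) { |merged, source| merged.merge(source) }
end

file_values = { 'port' => '8080', 'domain' => 'example.com' }
http_values = { 'port' => '9090' }  # listed after 'file', so it wins

merged = merge_in_order([file_values, http_values])
puts merged  # 'port' comes from HTTP, 'domain' from the file source
```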

You also do not need to enable both plugins; for example you may just want to retrieve values for your templates from a web server, but continue to use files to store your actual template content.

As the HTTP plugins can fetch everything (a list of templates, contents and values) from a webservice, if you accept the defaults your environment files can now literally be reduced to:

environment.yaml
common:
  http:
    uri: 'http://tiller.example.com'

You can check out the documentation for these new plugins over at the Github project page. If you have any suggestions for improvements or feature requests, please feel free to open an issue or leave a comment below!

Tiller 0.5.0 Released


Just a quick “heads up” for users of Tiller – version 0.5.0 has just been released and has a few new features, and some other changes. Firstly, I have added support for per-environment common settings. Normally, you’d do things like enable the API, set the exec parameter and so on in common.yaml, but as per issue #10, you can now specify or over-ride these settings in your environment files. Simply drop them under a common: section, e.g.

common:
  api_enable: true
  api_port: 1234
  exec: /usr/bin/foo -v

This will also hopefully come in handy later on for some other plugins, such as the planned etcd integration.
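
Conceptually this is just a hash merge, with the environment's common: block overriding the top-level settings. A rough sketch (not Tiller's actual code, with the YAML inlined for brevity):

```ruby
require 'yaml'

common = YAML.load(<<~YAML)
  api_enable: false
  exec: /usr/sbin/nginx
YAML

environment = YAML.load(<<~YAML)
  common:
    api_enable: true
    api_port: 1234
YAML

# Settings from the environment's common: section win over common.yaml.
settings = common.merge(environment.fetch('common', {}))
puts settings
```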

There are also two more changes; hopefully these won’t affect anyone! Firstly, I’ve moved to using spawn instead of the previous “fork & exec” method, as this provides many more useful options when forking the replacement process. As this method was introduced in the Ruby 1.9.x series, I have dropped support for 1.8.x and made required_ruby_version = '>= 1.9.2' a requirement of the gem. I never properly tested on 1.8.x anyway, and decided to end implicit support given that 1.8.7 was released in 2008 and extended maintenance finally ended last year. If you’re running 1.8.x anywhere, you really should upgrade to something more recent, as few projects these days support it.
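
For the curious, the difference between the two approaches looks roughly like this (a sketch, not Tiller's exact code):

```ruby
# Old approach (works on Ruby 1.8): fork, then exec in the child.
pid = fork do
  exec('echo', 'child process')
end
Process.wait(pid)

# New approach (Ruby >= 1.9): spawn forks and execs in one call, and
# accepts extra options such as output redirection or :pgroup.
pid = spawn('echo', 'child process', out: File::NULL)
Process.wait(pid)
```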

Of course, if you really, really need to use Tiller under Ruby 1.8.x, you can always change the spawn call back to the old “fork & exec” method, but as time goes on I’ll probably rely on more modern Ruby features. So for now, consider this a totally unsupported hack to buy you some time. However if it causes too many problems for people, I can always look at providing switchable behaviour but I’d really like to avoid this – and it’s still unlikely that I’ll run any tests against it.

Tiller 0.3.0 and New Defaults Datasource


Tiller v0.3.0 has just been released, which brings a couple of changes. The first is that the ordering of plugins specified in common.yaml is now significant. Tiller will run these plugins in the order that they are specified; this is important as before the order was effectively random, so your templates may change with this release if you rely on merging values from different sources (hence the major version bump indicating a breaking change).

The reason for this (apart from making Tiller’s behaviour more deterministic) is that there is now a DefaultsDataSource, which allows you to specify global and per-template values in a defaults.yaml (or in separate YAML files under /etc/tiller/defaults.d/) and then over-ride these with other data sources later.

You’ll hopefully find this useful if you have a lot of environments you want to deploy your Docker containers into (Development, Integration, Staging, NFT, Production, etc.) but only a few values change between each one.

Examples

Note: If you’re new to Tiller, I recommend reading the documentation and my other articles on this blog.

The following is a simple example of how you might use the new DefaultsDataSource to generate a fictional application configuration file. Here’s what your common.yaml might look like :

/etc/tiller/common.yaml
exec: /usr/local/bin/myapp

data_sources:
  - defaults
  - file
  - environment_json

template_sources:
  - file

As mentioned above, the order you load data and template sources in is significant. Tiller will use each one in the order it is listed, from top to bottom, so you now have control over which module has priority. If you wanted to change it so the file module over-rides values from the environment_json module (see http://www.markround.com/blog/2014/10/17/building-dynamic-docker-images-with-json-and-tiller-0-dot-1-4/), you’d swap the order above :

data_sources:
  - defaults
  - environment_json
  - file

Now, here’s our example template configuration file that we want to ship with our Docker container :

/etc/tiller/templates/app.conf.erb
[http]
http.port=<%= port %>
http.hostname=<%= environment %>.<%= domain_name %>

[smtp]
mail.domain_name=<%= domain_name %>

[db]
db.host=<%= database %>

In this, you can see that there are a few dynamic values defined, but we probably don’t want to have to specify them in all our environment files if they’re the same for most of our environments. For example, the domain_name is used in a couple of places, and we’ll also assume that the HTTP port will remain the same in every environment apart from staging. You can see that if we had a lot of templates to generate, being able to specify the domain_name and other shared variables in a single place will be much neater.

Let’s now fill in the defaults for our templates. This is done by creating the new defaults.yaml file in your Tiller configuration directory, which is usually /etc/tiller :

/etc/tiller/defaults.yaml
global:
  domain_name: 'example.com'

app.conf.erb:
  port: '8080'

Now, for all our environments, we only need to provide the values that will change, or that we want to over-ride. Let’s take our “production” environment first – the only thing we want to specify in this example is the database name:

/etc/tiller/environments/production.yaml
app.conf.erb:
  target: /tmp/app.conf
  config:
    database: 'prd-db-1.example.com'

Now run Tiller to generate the file :

$ tiller -n
tiller v0.3.0 (https://github.com/markround/tiller) <[email protected]>
Template generation completed

$ cat /tmp/app.conf
[http]
http.port=8080
http.hostname=production.example.com

[smtp]
mail.domain_name=example.com

[db]
db.host=prd-db-1.example.com

Let’s now create a new “staging” environment, and demonstrate over-riding the port as well as setting the database; notice how we’re only setting the values that have changed for this environment :

/etc/tiller/environments/staging.yaml
app.conf.erb:
  target: /tmp/app.conf
  config:
    port: '8081'
    database: 'stg-db-1.dev.example.com'

And now run Tiller to create our config file for this environment:

$ tiller -e staging -n
tiller v0.3.0 (https://github.com/markround/tiller) <[email protected]>
Warning, merging duplicate data values.
port => '8080' being replaced by : '8081' from FileDataSource
Template generation completed

$ cat /tmp/app.conf
[http]
http.port=8081
http.hostname=staging.example.com

[smtp]
mail.domain_name=example.com

[db]
db.host=stg-db-1.dev.example.com

You’ll notice that Tiller warned you about the value from the DefaultsDataSource being replaced with one from the FileDataSource; you can see here how the ordering of plugins loaded in common.yaml is important.
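
The layering you've just seen boils down to ordered hash merges, with later data sources overriding earlier ones. A simplified sketch (illustrative only, not Tiller's internals):

```ruby
# defaults.yaml values first, then the environment file's values,
# merged in the order the data sources appear in common.yaml.
defaults = { 'domain_name' => 'example.com', 'port' => '8080' }
staging  = { 'port' => '8081', 'database' => 'stg-db-1.dev.example.com' }

values = defaults.merge(staging)  # FileDataSource wins over defaults
puts values
```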

And there you have it. A short example (and I’ve omitted the creation of the other example environments and templates), but you can see how this new behaviour will make life much easier when you use Tiller as the CMD or ENTRYPOINT in your container. Hopefully this will mean more efficient Tiller configs and will help you create more flexible Docker images. Any feedback or queries, just leave them in the comments section below, or report a bug/request a new feature on the Github issue tracker.

Solaris Bash Package and Other Updates


A quick update for users of my Solaris 11 x86 packages. I’ve created a GNU Bash 4.3 package which includes the patch for the much-publicized Shellshock vulnerability. As the package name “bash” also matches the one provided by Oracle, as usual you’ll just need to specify the full FMRI when installing:

$ pkg install pkg://markround/mar/shell/bash

And just to confirm you’re safe from Shellshock, using the test script at shellshocker.net:

$ export PATH=/opt/mar/bin:$PATH
$ ./shellshock_test.sh
CVE-2014-6271 (original shellshock): not vulnerable
CVE-2014-6277 (segfault): not vulnerable
CVE-2014-6278 (Florian's patch): not vulnerable
CVE-2014-7169 (taviso bug): not vulnerable
CVE-2014-7186 (redir_stack bug): not vulnerable
CVE-2014-7187 (nested loops off by one): not vulnerable
CVE-2014-//// (exploit 3 on http://shellshocker.net/): not vulnerable

I’ve also updated the following packages:

  • HAProxy – 1.5.9. New major version, includes native SSL support and much more.
  • NGinX – 1.6.2. Bump to latest stable version from 1.6.0.
  • rsync – 3.1.1. Bumped from 3.1.0
  • redis – 2.8.17. Latest stable version including many bug fixes.

These have all been in the /dev branch for a while, and have now been promoted to /stable.

Querying Tiller Configuration From a Running Docker Container


Tiller 0.2.2 now brings a simple HTTP API that allows you to query the state of the Tiller configuration inside running Docker containers. This may be particularly useful when you’re attempting to debug configuration issues; you can quickly and easily check the templates, global values and Tiller configuration.

You can enable this API by passing the --api (and optional --api-port) command-line arguments. Alternatively, you can also set these in common.yaml :

api_enable: true
api_port: 6275

Now, once Tiller has forked a child process (specified by the exec parameter), you will see a message on stdout informing you the API is starting :

Tiller API starting on port 6275

If you want to expose this port from inside a Docker container, you will need to add this port to your list of mappings (e.g. docker run ... -p 6275:6275 ...). As a demonstration, here’s a walk-through using the Docker container built from my Tiller and Environment Variables blog post. Assuming that you’ve run through that tutorial and built the Docker container, just make a small addition to your common.yaml, so it now looks like:

data/tiller/common.yaml
exec: /usr/sbin/nginx
data_sources:
  - file
  - environment
template_sources:
  - file
api_enable: true
api_port: 6275

And rebuild your container:

$ docker build --no-cache -t tiller-docker-example .

Now, run it again using the previous article’s example, but also map port 6275:

$ docker run -e environment=production -e name=Mark \
 -t -i -p 80:80 -p 6275:6275 tiller-docker-example
 
tiller v0.2.2 (https://github.com/markround/tiller) <[email protected]>
Using configuration from /etc/tiller
Using plugins from /usr/local/lib/tiller
Using environment production
Template sources loaded [FileTemplateSource]
Data sources loaded [FileDataSource, EnvironmentDataSource]
Templates to build ["welcome.erb"]
Building template welcome.erb
Setting ownership/permissions on /usr/share/nginx/html/welcome.html
Template generation completed
Executing /usr/sbin/nginx...
Child process forked.
Tiller API starting on port 6275

And you should now be able to ping the API (replace $DOCKER_HOST_IP with the IP address or hostname of your Docker host, e.g. localhost):

$ curl -D - http://$DOCKER_HOST_IP:6275/ping

HTTP/1.1 200 OK
Content-Type: application/json
Server: Tiller 0.2.2 / API v1

{ "ping": "Tiller API v1 OK" }

You can check out the Tiller config:

$ curl http://$DOCKER_HOST_IP:6275/v1/config

And the result (in formatted JSON):

{
    "tiller_base": "/etc/tiller",
    "tiller_lib": "/usr/local/lib",
    "environment": "production",
    "no_exec": false,
    "verbose": true,
    "api_enable": true,
    "exec": "/usr/sbin/nginx",
    "data_sources": [
        "file",
        "environment"
    ],
    "template_sources": [
        "file"
    ],
    "api_port": 6275
}

Or a particular template :

curl http://$DOCKER_HOST_IP:6275/v1/template/welcome.erb

This returns the following JSON:

{
    "merged_values": {
        "environment": "production",
        "env_home": "/",
        "env_path": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "env_hostname": "0aaed58b34db",
        "env_term": "xterm",
        "env_environment": "production",
        "env_name": "Mark",
        "env_color": "Blue"
    },
    "target_values": {
        "target": "/usr/share/nginx/html/welcome.html"
    }
}

For other API endpoints, see the documentation. And please bear in mind that this is intended as a development / debugging tool – there are serious security implications involved in exposing this port (and your configuration details) to an untrusted network!

Building Dynamic Docker Images With JSON and Tiller 0.1.4


Tiller 0.1.4 has just been released, and brings a few new improvements. Firstly, you can now use the -b, -l and -e command-line flags to set the tiller_base, tiller_lib, and environment values respectively. This makes things a little neater when debugging or testing new configurations on the command line.

I also added an environment_json data source, based on an idea by Florent Valdelievre (thanks for the feedback, Florent!)

This means you can now pass in arbitrary JSON data through the tiller_json environment variable, and use the resulting structure in your templates. As it merges values with the rest of the global values from other data sources, you can also use it to over-ride a default setting in your environment files; this may be particularly useful if you build Docker containers that are provided to end-users who wish to customise their settings.
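
Conceptually, the data source just parses the tiller_json variable and merges the result over values from earlier sources. A rough sketch (not the plugin's actual code):

```ruby
require 'json'

ENV['tiller_json'] = '{ "default_value": "Hello, World!", "servers": ["server1"] }'

file_values = { 'default_value' => 'This is a default value that may be overridden' }
json_values = JSON.parse(ENV.fetch('tiller_json', '{}'))

# Values from tiller_json override those from the environment file.
merged = file_values.merge(json_values)
puts merged['default_value']
puts merged['servers'].inspect
```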

To illustrate this, here are a few quick examples, also showing the new command-line flags when developing your templates.

Update: Please note that although this feature was added in Tiller 0.1.4, I’m using 0.1.5 in these examples as it includes newline suppression for ERb templates (the -%> syntax you’ll see below) which makes templates with loop constructs much neater.

Firstly, install the Tiller gem, create a work directory with the usual Tiller structure and change into it :

$ gem install tiller
$ mkdir json_example
$ cd json_example
$ mkdir templates environments

Then, create your common.yaml which enables the relevant data & template sources :

common.yaml
data_sources:
  - file
  - environment_json
template_sources:
  - file

Create your environment file (environments/production.yaml) :

environments/production.yaml
json_template.erb:
  target: parsed_template.txt
  config:
    default_value: 'This is a default value that may be overridden'

And your template (templates/json_template.erb) :

templates/json_template.erb
Default value : <%= default_value %>
List of servers follows...
<% servers.each do |server| -%>
  Server : <%= server %>
<% end -%>

Now, run Tiller and pass in your JSON as an environment variable (you can add the -v flag to tiller to make the output more verbose) :

$ tiller_json='{ "servers" : [ "server1" , "server2" , "server3" ] }' tiller -b $PWD -n
tiller v0.1.5 (https://github.com/markround/tiller) <[email protected]>
Template generation completed

$ cat parsed_template.txt
Default value : This is a default value that may be overridden
List of servers follows...

  server1
  server2
  server3

As mentioned above, you can also use this to over-ride a default. Notice that Tiller will warn you of this :

$ export tiller_json='{ "default_value" : "Hello, World!" , "servers" : [ "server1" ] }'
$ tiller -b $PWD -n
tiller v0.1.5 (https://github.com/markround/tiller) <[email protected]>
Warning, merging duplicate global and local values.
default_value => 'This is a default value that may be overridden' being replaced by : 'Hello, World!' from merged configuration
Template generation completed

$ cat parsed_template.txt
Default value : Hello, World!
List of servers follows...

  server1

More complicated structures can easily be built up. However, these can be quite error-prone to pass “on the fly”, so instead create a file called config.json with the following contents :

config.json
{
  "servers" : [
      {
        "hostname" : "server1",
        "port" : "80"
      },
      {
        "hostname" : "server2",
        "port" : "8080"
      }
    ],
  "username" : "mark",
  "password" : "tiller"
}

This is much easier to read and check for syntax errors! This example contains a list of server configuration hashes, and a couple of simple key:value pairs.
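
If you want to check a file for syntax errors before handing it to Tiller, JSON.parse raises on invalid input, so a tiny (hypothetical) helper like this does the trick:

```ruby
require 'json'

# Returns true if the given text parses as JSON, false otherwise.
def valid_json?(text)
  JSON.parse(text)
  true
rescue JSON::ParserError
  false
end

puts valid_json?('{ "username" : "mark" }')  # valid
puts valid_json?('{ "username" : mark }')    # invalid: unquoted value
```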

Edit your template as follows :

templates/json_template.erb
Username : <%= username %>
Password : <%= password %>
List of servers follows...
<% servers.each do |server|  -%>
  http://<%= server['hostname'] %>:<%= server['port'] %> 
<% end -%>

Now you get the following produced :

$ tiller_json="$(cat config.json)" tiller -n -b $PWD
$ cat parsed_template.txt

Username : mark
Password : tiller
List of servers follows...

  http://server1:80
  http://server2:8080

Assuming you use Tiller as your Docker CMD or ENTRYPOINT command and set up a suitable exec in your common.yaml, you can now include complex configuration in your Docker containers at runtime simply by doing something like :

$ docker run -e tiller_json="$(cat config.json)" \
  -d -t -i ...rest of arguments...

Hope that’s of some help! If you have any feedback, just use the comments feature on this blog and I’ll reply ASAP. You may also want to check out some of my other articles on Tiller, particularly the walkthrough with a sample Dockerfile, and how to use the API to query a running Docker container.

Tiller and Docker Environment Variables


After a recent query was posted on the Tiller issue tracker, I thought it might be useful to show a complete example of using Tiller and one of its plugins to create a dynamic Docker container. I assume you’re at least somewhat familiar with Tiller. If not, take a quick look through my documentation and other related blog posts.

This example will create a container image that runs a web-server through Tiller, and generates a simple web page populated with environment variables; two from Docker -e flags, and one from the Dockerfile ENV declaration.

This is obviously a very contrived example as the most common usage of Tiller is to populate configuration files. However, changes to a web page are more easily visualised than twiddling some configuration options! I also thought I’d demonstrate how you can use one of the plugins (EnvironmentDataSource in this case) to fetch the values, instead of the more familiar static values specified in a Tiller environment file.

You can download a compressed archive of the files used in this example here. Alternatively, you can follow along below and create the files by copying & pasting.

As an aside: If you’d like to ship a Docker container with some default values, but would like to allow end-users to override them with their own configuration, take a look at Building dynamic Docker images with JSON and Tiller and Tiller 0.3.0 and new Defaults data source.

Dockerfile configuration

So first off, let’s create our Dockerfile that will install NginX, and then setup Tiller to produce a templated web page:

Dockerfile
FROM ubuntu:latest

ENV color Blue

RUN apt-get -y update && apt-get -y install nginx
RUN apt-get -y install ruby && gem install tiller
RUN echo "daemon off;" >> /etc/nginx/nginx.conf

ADD data/tiller /etc/tiller

CMD ["/usr/local/bin/tiller" , "-v"]

And that’s it. Pretty simple; it just installs NginX, configures it to run under Docker without forking and copies our Tiller configuration in. It also defines an environment variable called “color”. This will be used later. Note that I’m also using the -v argument when calling Tiller; this makes the output more verbose so we can see what’s going on in more detail.

Tiller configuration

Under data/tiller, you’ll find the usual files that get copied to /etc/tiller :

etc
└── tiller
    ├── common.yaml
    ├── environments
    │   └── production.yaml
    └── templates
        └── welcome.erb

Let’s take a quick look at these files.

common.yaml
exec: /usr/sbin/nginx
data_sources:
  - file
  - environment
template_sources:
  - file

Nothing out of the ordinary here; we just pass control over to NginX when we’re done and load a couple of data sources. Note that even though we intend to populate the template file with values from environment variables, we still also need the file datasource. This is because the environment datasource cannot provide meta-data such as where to install the templates, their permissions, and so on.

environments/production.yaml
welcome.erb:
  target: /usr/share/nginx/html/welcome.html
  config: {}

This tells Tiller to process a single template file, and to install it to the default document root for NginX. Note that instead of the usual key: "value" pairs passed in the config section, there’s just an empty hash, as we’re not providing any values from this file – we’ll fill everything in using environment variables.

templates/welcome.erb
<h1>Tiller env_ demonstration</h1>
Hello, <%= env_name %>. <br />
You are running in the <%= env_environment %> environment. <br />
Your favourite color is <%= env_color %>.

You can see we’re using 3 environment variables, and they are all available in lower-case format, and are prefixed by env_. One of them (color) we defined in the Dockerfile, the others will be passed in at runtime.

Running the container

First, build the container and tag it with its name (“tiller-docker-example”):

$ docker build -t tiller-docker-example .

And now, let’s run the container, and pass in two variables, environment and name. As well as being referenced in the template, environment is used by Tiller to select which yaml file under /etc/tiller/environments to use (although you can omit it, and it will use production by default).

I’ll omit the -d flag so it keeps running in the foreground, and you can see the Tiller output :

$ docker run -e environment=production -e name=Mark \
 -t -i -p 80:80 tiller-docker-example

tiller v0.1.3 (https://github.com/markround/tiller) <[email protected]>
Using configuration from /etc/tiller
Using plugins from /usr/local/lib/tiller
Using environment production
Template sources loaded [FileTemplateSource]
Data sources loaded [EnvironmentDataSource, FileDataSource]
Templates to build ["welcome.erb"]
Building template welcome.erb
Setting ownership/permissions on /usr/share/nginx/html/welcome.html
Template generation completed, about to exec replacement process.
Calling /usr/sbin/nginx...

And in a web browser, check out the welcome page :

[Screenshot: Environment variables example]

And that’s it! As I stated before, this is a fairly contrived example, but you should still be able to see both how the various plugins work, and how you can maintain a single container but still alter its runtime configuration through environment variables. See the Github documentation for more examples and details on how to write your own plugins.

If you have any questions or feedback, just use the comments feature on this blog, or open an issue on the Github project page and I’ll do my best to help!

Tiller v0.0.7 Now Supports Environment Variables


Just a quick update : Tiller has just been updated to v0.0.7. I’ve just added a new EnvironmentDataSource which is super-simple, but very convenient. It makes all your environment variables accessible to templates (as global values) by converting them to lower-case, and prefixing them with env_.
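
Conceptually, it does no more than this (a sketch, not the plugin's actual code):

```ruby
# Every environment variable becomes a global template value,
# lower-cased and prefixed with env_.
ENV['HOSTNAME'] = 'web01.example.com'

env_values = ENV.to_h.map { |k, v| ["env_#{k.downcase}", v] }.to_h
puts env_values['env_hostname']
```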

To use it, just add it to your datasources in /etc/tiller/common.yaml :

/etc/tiller/common.yaml
data_sources:
  - file
  - environment

And then you can do things like this in your templates :

Hello, world!
My auto-generated hostname is <%= env_hostname %>

Check it out at GitHub or install the Ruby gem.

Tiller Project and Docker Container Configuration


After several days of hacking on my Runner.rb project, I’m pleased to say that I’ve produced a much more polished and complete solution to shipping multiple configuration files in a single Docker container. The project is now known as Tiller and has been published as a Ruby gem, so you can just run gem install tiller to install it. You can find a good overview of it, and how it works, over at the Github README.md, but it’s still essentially the same approach :

  • Provide templates of your configuration files as ERB templates
  • Provide a source for each “environment” containing the values that should be placed in those templates, and where you want them installed. This is usually done with a YAML configuration file, but you can now use other sources for this information (see later in this blog post).

The first change (apart from the name) is that there’s no longer a nasty config hash to use inside your templates; you can simply declare a value in an environment file:

example.erb:
  target: /var/www/html/example.html
  config:
    welcome_text: 'Hello, world!'

And then reference it straight inside the template :

example.erb
<h1>This is generated from example.erb</h1>
<%= welcome_text %>

However, a much bigger change is that I have abstracted out the data generation and sources of templates. I’ve bundled two providers (FileDataSource and FileTemplateSource) that simply read the contents of ERB files under /etc/tiller/templates, and use YAML files under /etc/tiller/environments so that it mimics the old Runner.rb.

This means you can now write your own plugins (and I’ll also work on some additional ones to ship with later releases) to do things like pull templates from a remote HTTP server, populate the values with custom/dynamic variables such as the FQDN or IP address of the host, or even fetch them from an LDAP server instead of pulling them off a YAML file on disk.

As a very simple example of a custom “global data source”, suppose your Docker containers all use the name.site.example.com FQDN structure (e.g. example01.london.example.com), and you wanted to extract the site part to use in a configuration file template. You could write a file called /usr/local/lib/tiller/data/example.rb :

/usr/local/lib/tiller/data/example.rb
require 'socket'

class ExampleDataSource < Tiller::DataSource

  def global_values
    # site: Just the second part of the FQDN
    # This assumes DNS is working correctly!
    { 'site' =>  Socket.gethostbyname(Socket.gethostname).first.split('.')[1] }
  end
end

And then load it in /etc/tiller/common.yaml along with the default file data source :

/etc/tiller/common.yaml
data_sources:
  - file
  - example

And now all your templates can use this by referencing <%= site %>.
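
For instance, rendering a template with that global value works just like any other ERB substitution; a standalone sketch using plain ERB:

```ruby
require 'erb'

site = 'london'  # what the example data source above would extract
template = 'This container is running at the <%= site %> site.'
puts ERB.new(template).result(binding)
```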

There’s much more you can do with this, including defining values for specific templates, and writing TemplateSources to provide the templates themselves, but that’s a bit too much detail for this introductory blog post. Go and check out the documentation at the Github page, browse through some of the examples, and check this blog for more updates & examples.

Dynamic Configuration Files With Docker Containers


I’ve been using Docker containers on Linux systems for a while now, and have recently developed a neat solution to what I imagine is a fairly common problem.

I had a number of Docker containers that I wanted to launch from a common image, but with a slightly different configuration depending on the environment in which I was launching them. For example, a web application container might connect to a different database in a staging environment, or a MongoDB replica set name might be different. This meant my options basically looked like:

  • Maintain multiple containers / Dockerfiles.
  • Maintain the configuration in separate data volumes and use --volumes-from to pull the relevant container in.
  • Bundle the configuration files into one container, and manually specify the CMD or ENTRYPOINT values to pick this up.

None of those really appealed due to needless duplication, or the complexity of an approach that would necessitate really long docker run commands. So I knocked up a quick & simple Ruby script that I could use across all my containers, which does the following :

  • Generates configuration files from ERB templates
  • Uses values provided in YAML files, one per environment
  • Copies the generated templates to the correct location and specifies permissions
  • Executes a replacement process once it’s finished (e.g. mongod, nginx, supervisord, etc.)
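
The steps above can be sketched in a few lines of Ruby. This is a simplified stand-in for the real script; the file paths and template content here are made up for the example:

```ruby
require 'erb'
require 'yaml'

environment = ENV.fetch('environment', 'staging')

# Stands in for a per-environment file such as environments/staging.yaml
config = YAML.load(<<~YAML)
  mongodb.conf.erb:
    target: /tmp/mongodb.conf
    config:
      replica_set: 'rs-staging'
YAML

config.each do |name, settings|
  values = settings['config']
  # Stands in for reading templates/mongodb.conf.erb from disk
  template = 'replSet = <%= values["replica_set"] %>'
  File.write(settings['target'], ERB.new(template).result(binding))
end

# Finally, hand over to the replacement process:
# exec('/usr/bin/supervisord')
```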

This way I can keep all my configuration together in the container, and just tell Docker which environment to use when I start it. As a simple example, here’s what it looks like when you run it :

# docker run -t -i -e environment=staging markround/demo_container:latest
Runner.rb v0.0.1
Using runner configuration from /etc/runner
Using environment staging
Parsing /etc/runner/templates/mongodb.conf.erb
Setting ownerships and privileges on /etc/mongodb.conf
Template generation completed, about to exec replacement process.
Calling /usr/bin/supervisord...

Update

UPDATE: This project is now called tiller and has changed significantly. For more information, see the documentation at its Github page, or my recent blog posts in chronological order: