And now a diversion from most of my geeky posts! I’ve just finished (well, as “finished” as most of my musical projects get) my latest track: “The Pleiades”, featuring the talents of my sister, psy-trance producer Spinney Lainey on flute. I’ve still got a long way to go on my journey through the world of music production, but this is the first thing I’ve felt more or less happy with and wanted to share it with the world. Hope you enjoy!
It’s only a minor version number bump, but Tiller 0.7.8 now brings a major new plugin which I’m really excited about: it now supports the awesome Consul system from HashiCorp. Consul is a distributed key/value store and service discovery mechanism – the basic idea is that you have a Consul cluster consisting of multiple nodes for high availability and performance. You then place all your configuration values in it, and also register your services (web server backends, databases, cache nodes and so on) with it. This means you can have a dynamic environment where components discover their configuration and the other nodes in your infrastructure on the fly: no more hard-coding database URIs or load balancer pools!
This makes it an ideal fit for a “cloud” Docker environment using something like Swarm, Kubernetes or Mesosphere/Marathon. Your containers can run on any host, advertise whatever ports they like, and Consul will make sure that everything can find what it needs to. The only sticking point is how to get your configuration to your applications.
If you’re writing your own microservices from scratch, you can of course talk directly to the Consul API, but for other things that require configuration files (Nginx, Apache, Rails applications and so on) you need a tool to talk to Consul and generate the files for you. HashiCorp (the company behind Consul) does have a tool called consul-template which does this, but Tiller has (from my admittedly biased point of view!) several advantages, not least the ability to use straightforward Ruby ERB templates and embedded Ruby code instead of Go template syntax, and the ability to load other data source plugins.
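To give a flavour of what "straightforward Ruby ERB" means in practice, here's a tiny standalone sketch. Tiller itself injects values from its data sources into the template; below we fake that with an OpenStruct binding (the variable names and the OpenStruct trick are purely illustrative, not Tiller's internals):

```ruby
require 'erb'
require 'ostruct'

# A Tiller-style template is plain ERB: any Ruby expression works inside <%= %>.
template = "server_name <%= domain %>\nlisten <%= port %>\n"

# Fake the value injection with an OpenStruct binding; Tiller does this
# for you from its configured data sources.
values = OpenStruct.new(domain: 'example.com', port: 8080)
puts ERB.new(template).result(values.instance_eval { binding })
# server_name example.com
# listen 8080
```

Compared with Go's template syntax, anything you can write in a Ruby one-liner is fair game inside the tags.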
So although you can fetch everything from Consul, Tiller lets you do things like store templates in files or retrieve them from an HTTP web service, and then “chain” key/value stores: provide values from defaults files first, then check Consul, and finally over-ride some settings with environment variables at run-time.
If you’re new to Tiller, I recommend having a quick look at the documentation, and following some of my Tiller blog posts, in particular this article which walks through a practical example of using Tiller inside Docker.
That said, here follows a quick example of using Tiller to fetch data from Consul. In the examples below, I’m just generating some demonstration template files for simplicity. In a real-world situation, these template files would be application configuration files like nginx.conf, mongod.conf and so on.
The Tiller Consul plugin requires the diplomat Ruby gem to be installed, so assuming you have a working Ruby environment, this should be all you need:
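The original install commands didn't survive, but a minimal sketch (assuming RubyGems, and that you want Tiller itself as well) would be:

```shell
gem install tiller
gem install diplomat
```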
Start a Consul server
Go to the Consul downloads page, and grab both the Web UI and binary download for your platform, and unzip them in the same location. I’ll do this with shell commands below, for the Mac OS platform (replace “darwin” with “linux” if you are using a Linux system):
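Something along these lines — the version number and file names here are placeholders, so check the Consul downloads page for the current release:

```shell
wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_darwin_amd64.zip
wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_web_ui.zip
unzip consul_0.6.4_darwin_amd64.zip
unzip consul_0.6.4_web_ui.zip -d dist
```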
Now, start up Consul in stand-alone mode:
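A single-node setup suitable for local testing looks something like this (flags as per the Consul agent docs of the era; -ui-dir should point at wherever you unzipped the web UI):

```shell
./consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -ui-dir ./dist
```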
You’ll see some startup messages scroll past as the agent comes up.
Leave this process running in your shell, and check your server is up and running by visiting http://localhost:8500/ui with a browser, where you should see the Consul web UI.
We’ll now populate it with some test data. I have a script to do this, so download and run it, passing the base URL of your Consul server as the only argument:
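I can't reproduce the original script here, but as an illustration, this is the kind of data it loads — plain HTTP PUTs against Consul's KV API (the /tiller/... key layout is an assumption based on the description below):

```shell
curl -X PUT -d 'development' http://localhost:8500/v1/kv/tiller/globals/development/env_name
curl -X PUT -d 'Hello from Consul' http://localhost:8500/v1/kv/tiller/values/development/template1.txt/greeting
```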
Now, if you visit your Consul page and click on the “Key / Value” link at the top, you’ll see a bunch of data under the /tiller path, including the global values.
And if you click around further, you can find the template definitions also stored in Consul.
Incidentally, this is the same data that is used in my automated tests that check all the features of Tiller are working correctly whenever I make any changes. You can see the results of these tests over at Travis CI, or run them yourself if you clone the Git source and run bundle exec rake features.
Now your Consul server is ready to go, so here’s how to hook Tiller up to it. Just create your common.yaml file with the following contents (we just run “true” after we’ve finished, for demonstration purposes – in a real Docker container, this would be your application or webserver binary):
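A sketch of what that common.yaml could look like — the consul: option names here are assumptions, so check the Consul plugin documentation for the exact keys:

```yaml
exec: ["true"]
data_sources: ["consul"]
template_sources: ["consul"]

consul:
  url: "http://localhost:8500"

environments:
  development:
  production:
```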
And run Tiller against it, using the “development” environment:
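For example (flags as I understand Tiller's CLI: -n skips the exec stage, -v is verbose, -b sets the base directory, -e selects the environment — check tiller -h for your version):

```shell
tiller -n -v -b . -e development
```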
And there you have it. You’ll end up with a couple of files, including template2.txt, in your current directory, entirely populated with templates and values served from Consul.
Have a try at running Tiller with the “production” environment and see what changes. You can also try changing the values or even the templates themselves inside Consul to see the changes reflected whenever you run Tiller.
What next?
So, assuming you’re using Tiller as a Docker ENTRYPOINT to generate your configuration files before handing over to a replacement process, you can now create dynamic containers that are populated with data entirely from a Consul cluster. Or you can chain one of the many other plugins (file and so on) together, so your container can get its core values from a variety of sources.
You can also make use of Consul’s service registration system (perhaps by using the fantastic Registrator) and populate your configuration files dynamically with auto-discovered microservice backends and much more.
As always, all feedback and comments welcome!
Thanks to a nice suggestion by kydorn in issue #18, the file datasource now supports a global_values: block, so you can create defaults per environment (and use the defaults datasource to provide a default for all environments). This is available in Tiller 0.7.7, which was just released.
This means if you have a common value that you want available to all templates (but may be different in each environment), you don’t have to repeat the definition for every template. In conjunction with the defaults datasource, it lets you do things like:
- Use the defaults datasource to specify default values for all environments
- Use global_values: for defaults specific to each environment
- Optionally over-write them with local config: values on templates
There’s now an example in the README.md which demonstrates this:
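I can't copy the README example verbatim here, but a sketch of the idea (the structure is my best reading of the feature — the README has the canonical version) looks like:

```yaml
data_sources: ["defaults", "file"]
template_sources: ["file"]

defaults:
  global:
    domain: "example.com"      # default for every environment

environments:
  development:
    global_values:
      env_name: "development"  # available to all templates in this environment
    template1.erb:
      target: template1.txt
      config:
        port: 8080             # local value, over-rides anything above
  production:
    global_values:
      env_name: "production"
    template1.erb:
      target: template1.txt
      config:
        port: 80
```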
Hopefully that will cut down the size of some config files, and also introduce some more flexibility to your Tiller setup. Thanks again, Kydorn, for the suggestion!
Another super-quick update : There’s now a Gitter chatroom for Tiller. Feel free to drop by for help and chat with other users!
I recently released Tiller 0.7.6 with a new feature for the environment_json plugin. This can be used to provide dynamic configuration for Docker containers, using JSON in an environment variable. I previously posted a complete example of how you’d make use of this. However, most Tiller plugins provide values as “global” sources, which will be over-ridden by local, template-specific values. The documentation makes note of this in the “Gotchas” section, but there’s now an alternative to using global plugins all the way through.
If your JSON structure contains a "_version": 2 key/value pair, Tiller will source global values from a global key, and treat other keys as being template-specific. For example:
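Based on that description, a version-2 payload might look like this (the key names other than _version and global are illustrative template names and values):

```json
{
  "_version": 2,
  "global": {
    "domain": "example.com"
  },
  "template1.erb": {
    "port": 8080
  }
}
```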
Only a minor update, but hopefully it should add some flexibility around value precedence :)
Yesterday, I released Tiller 0.7.0, and as you might imagine with a version “bump” from the 0.6.x series, there are a few changes. Some of it is internal stuff that you’ll probably only notice if you’re building your own plugins for Tiller, but there is a fairly big new feature for end users: single-file configuration.
Recently, when I’ve written some documentation or examples on how to use Tiller, I’ve been struck by how I had to create several files in different directories just to illustrate a simple point. This approach has some benefits when dealing with larger configurations or inherited Dockerfiles, but for simple jobs it’s a little unwieldy.
Take for instance my JSON walkthrough example as a case in point: you have to create 3 different files (4 if you include the template) before you can get started. Instead, with the new-format common.yaml, we can put everything apart from the template content in a single file:
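A sketch of the single-file layout (the block names follow the old per-environment files; the exact details are in the Tiller docs):

```yaml
exec: ["true"]
data_sources: ["file", "environment_json"]
template_sources: ["file"]

environments:
  development:
    template.erb:
      target: template.txt
      config:
        greeting: "hello"
```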
I’m sure you’ll agree that makes things much easier to read and follow!
Admittedly, for complex environments you may well want to keep the “one file per environment” approach – particularly if you have a base Docker container with a standard common.yaml that all your other containers inherit. For that reason, Tiller will continue to support both methods, and I have no intention of removing support for “old style” configuration. So don’t think of the way things used to be done as deprecated in any way; it’s just that there’s now a new approach that may make your life easier :)
As always, I welcome all feedback, bug reports & feature requests – just leave a comment on this blog, email me, or open a ticket on the GitHub issues page.
If you’ve been using Docker for a while, you probably know that you can use Tiller to generate configuration files inside your containers. This means you can provide a single container for running in a variety of different environments (think of a web application that needs different database URIs and credentials depending on where it is run).
You can also use it to provide ‘parameterized’ containers – where you allow end-users to provide some aspects of the configuration. For example, the team over at StackStorm are using Tiller and the environment plugin to let users easily configure their containers. I’ve provided a variety of plugins to help with this, and have written about it previously.
There was always a catch, though. Until recently, Tiller obtained configuration from a ‘local’ source – either configuration files bundled in a container, or environment variables passed in at container run time. This is probably fine for most situations, but did prevent a lot of interesting use-cases. However, the 0.6.x releases now support fetching configuration and templates from a variety of network sources. Currently (as of 0.6.1), you can retrieve data from either a ZooKeeper cluster or an HTTP server, with more services to be supported in later releases.
Update: Tiller now supports Consul!
This is a massive change as it now means you can have a central store for all your container configuration, and opens the door for a whole bunch of really cool possibilities. You could, for example, have a service that allows users to edit configuration files or set parameters through a web interface. Or you could plug Tiller into your automation stack so you can alter container configuration on the fly. You could even use Tiller “on the metal” to configure physical/VM hosts when they boot.
Of course, this does introduce an external dependency when launching your containers, so before you implement this in a production setting, make sure you have carefully considered all your points of failure!
Enabling these plugins is straightforward – simply add them to your common.yaml:
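Something like the following — the http: connection keys are assumptions (see the plugin docs), but the ordering is chosen to match the precedence described below:

```yaml
data_sources: ["file", "http"]
template_sources: ["http", "file"]

http:
  uri: "http://tiller.example.com"
```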
As with all Tiller plugins, the ordering is significant. In the above example, values from the HTTP data source will take precedence over YAML files, but templates loaded from files will take precedence over templates stored in HTTP. You should tweak this as appropriate for your environment.
You also do not need to enable both plugins; for example you may just want to retrieve values for your templates from a web server, but continue to use files to store your actual template content.
As the HTTP plugins can fetch everything (the list of templates, their contents and values) from a webservice, if you accept the defaults your environment configuration blocks can now literally be reduced to just a few lines.
You can check out the documentation for these new plugins over at the Github project page. If you have any suggestions for improvements or feature requests, please feel free to open an issue or leave a comment below!
Just a quick “heads up” for users of Tiller – version 0.5.0 has just been released with a few new features, and some other changes. Firstly, I have added support for per-environment common settings. Normally, you’d do things like enable the API, set the exec parameter and so on in common.yaml, but as per issue #10, you can now specify or over-ride these settings in your environment configuration blocks. Simply drop them under a common: section, e.g.:
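For instance, a per-environment file might over-ride the replacement process and API settings like this (a sketch — the exact setting names are in the Tiller documentation):

```yaml
# environments/production.yaml
common:
  exec: "/usr/sbin/nginx"
  api_enable: true
```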
This will also hopefully come in handy later on for some other plugins, such as the planned etcd integration.
There are also two more changes; hopefully these won’t affect anyone! Firstly, I’ve moved to using spawn instead of the previous “fork & exec” method, as this provides many more useful options when forking the replacement process. As this method was introduced in the Ruby 1.9.x series, I have dropped support for 1.8.x and made required_ruby_version = '>= 1.9.2' a requirement of the gem. I never properly tested on 1.8.x anyway, and decided to end implicit support given that 1.8.7 was released in 2008 and extended maintenance finally ended last year. If you’re running 1.8.x anywhere, you really should upgrade to something more recent, as few projects these days support it.
Of course, if you really, really need to use Tiller under Ruby 1.8.x, you can always change the spawn call back to the old “fork & exec” method, but as time goes on I’ll probably rely on more modern Ruby features. So for now, consider this a totally unsupported hack to buy you some time. If it causes too many problems for people, I can always look at providing switchable behaviour, but I’d really like to avoid this – and it’s still unlikely that I’ll run any tests against it.
Tiller v0.3.0 has just been released, which brings a couple of changes. The first is that the ordering of plugins specified in common.yaml is now significant. Tiller will run these plugins in the order that they are specified; this is important, as before the order was effectively random, so your templates may change with this release if you rely on merging values from different sources (hence the major version bump indicating a breaking change).
The reason for this (apart from making Tiller’s behaviour more deterministic) is that there is now a DefaultsDataSource, which allows you to specify global and per-template values in a defaults.yaml (or in separate YAML files under /etc/tiller/defaults.d/) and then over-ride these with other data sources later.
You’ll hopefully find this useful if you have a lot of environments you want to deploy your Docker containers into (Development, Integration, Staging, NFT, Production, etc.) but only a few values change between each one.
Note: Tiller v0.7.0 and later support a new configuration system, where you can place most configuration blocks in one file, instead of splitting them out over different environment files. However, this article still refers to the old “one file per environment” approach, as it’s still supported and won’t be removed.
The following is a simple example of how you might use the new DefaultsDataSource to generate a fictional application configuration file. Here’s what your common.yaml might look like:
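A sketch consistent with the ordering discussion that follows — defaults first, then the file source, with environment_json having the final say (the exec path is purely illustrative):

```yaml
exec: "/usr/bin/my_app"

data_sources:
  - defaults
  - file
  - environment_json

template_sources:
  - file
```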
As mentioned above, the order in which you load data and template sources is significant. Tiller will use each one in the order it is listed, from top to bottom, so you now have control over which module has priority. If you wanted to change it so that the file module over-rides values from the environment_json module (see http://www.markround.com/blog/2014/10/17/building-dynamic-docker-images-with-json-and-tiller-0-dot-1-4/), you’d swap the order above:
data_sources:
  - defaults
  - environment_json
  - file
Now, here’s our example template configuration file that we want to ship with our Docker container:
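I can't reproduce the original template, but based on the values discussed below, it would have been an ERB file along these lines (the variable names other than domain_name are assumptions):

```erb
# application.conf.erb -- a fictional application configuration
app_url: http://www.<%= domain_name %>:<%= http_port %>
admin_email: admin@<%= domain_name %>
database: <%= database %>
```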
In this, you can see that there are a few dynamic values defined, but we probably don’t want to have to specify them in all our environment files if they’re the same for most of our environments. For example, the domain_name is used in a couple of places, and we’ll also assume that the HTTP port will remain the same in every environment apart from staging. You can see that if we had a lot of templates to generate, being able to specify the domain_name and other shared variables in a single place will be much neater.
Let’s now fill in the defaults for our templates. This is done by creating the new defaults.yaml file in your Tiller configuration directory, which is usually /etc/tiller:
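A sketch of that defaults.yaml, providing the values shared by most environments (the global: block makes them available to every template):

```yaml
global:
  domain_name: "example.com"
  http_port: 80
```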
Now, for all our environments, we only need to provide the values that will change, or that we want to over-ride. Let’s take our “production” environment first – the only thing we want to specify in this example is the database name:
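So the production environment file only needs the database name — something like this (following the usual environments/ file convention; paths and names are illustrative):

```yaml
# environments/production.yaml
application.conf.erb:
  target: /etc/application.conf
  config:
    database: "production_db"
```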
Now run Tiller with the “production” environment to generate the file.
Let’s now create a new “staging” environment, and demonstrate over-riding the port as well as setting the database; notice how we’re only setting the values that have changed for this environment:
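A sketch of the staging file, over-riding the port as well as the database name:

```yaml
# environments/staging.yaml
application.conf.erb:
  target: /etc/application.conf
  config:
    database: "staging_db"
    http_port: 8080
```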
And now run Tiller again to create our config file for this environment.
You’ll notice that Tiller warned you about the value from the DefaultsDataSource being replaced with one from the FileDataSource; you can see here how the ordering of plugins loaded in common.yaml is important.
And there you have it. A short example (and I’ve omitted the creation of the other example environments and templates), but you can see how this new behaviour will make life much easier when you use Tiller as the ENTRYPOINT in your container. Hopefully this will mean more efficient Tiller configs and will help you create more flexible Docker images. Any feedback or queries, just leave them in the comments section below, or report a bug / request a feature on the Github issue tracker.
A quick update for users of my Solaris 11 x86 packages. I’ve created a GNU Bash 4.3 package which includes the patch for the much-publicized Shellshock vulnerability. As the package name “bash” also matches the one provided by Oracle, as usual you’ll just need to specify the full FMRI when installing.
And just to confirm you’re safe from Shellshock, using the test script at shellshocker.net:
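The script runs a series of exploit probes; the best-known one (CVE-2014-6271) can be reproduced by hand. On a patched bash, it prints only the test string:

```shell
# A vulnerable bash imports the "function" in x, executes the trailing
# commands, and prints "vulnerable" before the echo below.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
# → this is a test
```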
I’ve also updated the following packages:
- HAProxy – 1.5.9. New major version, includes native SSL support and much more.
- nginx – 1.6.2. Bump to latest stable version from 1.6.0.
- rsync – 3.1.1. Bumped from 3.1.0
- redis – 2.8.17. Latest stable version including many bug fixes.