markround.com

Solaris packages, DevOps, Bass and other things of interest...

Tiller 0.3.0 and New Defaults Datasource


Tiller v0.3.0 has just been released, and it brings a couple of changes. The first is that the ordering of plugins specified in common.yaml is now significant: Tiller runs the plugins in the order they are listed. This matters because the order was previously effectively random, so your templates may change with this release if you rely on merging values from different sources (hence the version bump indicating a breaking change).

The reason for this (apart from making Tiller’s behaviour more deterministic) is that there is now a DefaultsDataSource, which allows you to specify global and per-template values in a defaults.yaml (or in separate YAML files under /etc/tiller/defaults.d/) and then over-ride these with other data sources later.

You’ll hopefully find this useful if you have a lot of environments you want to deploy your Docker containers into (Development, Integration, Staging, NFT, Production, etc.) but only a few values change between each one.

Examples

Note : If you’re new to Tiller, I recommend reading the documentation and my other articles on this blog.

The following is a simple example of how you might use the new DefaultsDataSource to generate a fictional application configuration file. Here’s what your common.yaml might look like :

/etc/tiller/common.yaml
exec: /usr/local/bin/myapp

data_sources:
  - defaults
  - file
  - environment_json

template_sources:
  - file

As mentioned above, the order in which you list data and template sources is significant: Tiller uses each one in the order it appears, from top to bottom, so you now have control over which module takes priority. If you wanted the file module to over-ride values from the environment_json module (see http://www.markround.com/blog/2014/10/17/building-dynamic-docker-images-with-json-and-tiller-0-dot-1-4/), you’d swap the order above :

data_sources:
  - defaults
  - environment_json
  - file
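
The merge itself can be pictured as a simple “later sources win” rule. Here’s a rough Ruby sketch of that behaviour (not Tiller’s actual code; the example values are invented for illustration):

```ruby
# Each data source contributes a hash of values; merging them in the
# order listed in common.yaml means later sources override earlier ones.
sources = [
  { 'port' => '8080', 'domain_name' => 'example.com' }, # defaults
  { 'database' => 'prd-db-1.example.com' },             # environment_json
  { 'port' => '9090' }                                  # file
]
merged = sources.reduce({}) { |acc, values| acc.merge(values) }
# merged['port'] is now '9090', taken from the last source listed
```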

Now, here’s our example template configuration file that we want to ship with our Docker container :

/etc/tiller/templates/app.conf.erb
[http]
http.port=<%= port %>
http.hostname=<%= environment %>.<%= domain_name %>

[smtp]
mail.domain_name=<%= domain_name %>

[db]
db.host=<%= database %>

In this, you can see there are a few dynamic values defined, but we probably don’t want to have to specify them in all our environment files if they’re the same for most environments. For example, domain_name is used in a couple of places, and we’ll also assume that the HTTP port is the same for every environment apart from staging. If we had a lot of templates to generate, being able to specify domain_name and other shared variables in a single place is much neater.

Let’s now fill in the defaults for our templates. This is done by creating the new defaults.yaml file in your Tiller configuration directory, which is usually /etc/tiller :

/etc/tiller/defaults.yaml
global:
  domain_name: 'example.com'

app.conf.erb:
  port: '8080'
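
One way to picture how a defaults.yaml like this resolves for a given template: the global section applies everywhere, and the per-template section is merged on top. A hypothetical sketch, not Tiller’s implementation:

```ruby
require 'yaml'

defaults = YAML.safe_load(<<~YAML)
  global:
    domain_name: 'example.com'
  app.conf.erb:
    port: '8080'
YAML

# Global values first, then any per-template values merged on top.
def values_for(defaults, template)
  defaults.fetch('global', {}).merge(defaults.fetch(template, {}))
end

values_for(defaults, 'app.conf.erb')
# => {"domain_name"=>"example.com", "port"=>"8080"}
```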

Now, for all our environments, we only need to provide the values that will change, or that we want to over-ride. Let’s take our “production” environment first – the only thing we want to specify in this example is the database name:

/etc/tiller/environments/production.yaml
app.conf.erb:
  target: /tmp/app.conf
  config:
    database: 'prd-db-1.example.com'

Now run Tiller to generate the file :

$ tiller -n
tiller v0.3.0 (https://github.com/markround/tiller) <[email protected]>
Template generation completed

$ cat /tmp/app.conf
[http]
http.port=8080
http.hostname=production.example.com

[smtp]
mail.domain_name=example.com

[db]
db.host=prd-db-1.example.com

Let’s now create a new “staging” environment, and demonstrate over-riding the port as well as setting the database; notice how we’re only setting the values that have changed for this environment :

/etc/tiller/environments/staging.yaml
app.conf.erb:
  target: /tmp/app.conf
  config:
    port: '8081'
    database: 'stg-db-1.dev.example.com'

And now run Tiller to create our config file for this environment:

$ tiller -e staging -n
tiller v0.3.0 (https://github.com/markround/tiller) <[email protected]>
Warning, merging duplicate data values.
port => '8080' being replaced by : '8081' from FileDataSource
Template generation completed

$ cat /tmp/app.conf
[http]
http.port=8081
http.hostname=staging.example.com

[smtp]
mail.domain_name=example.com

[db]
db.host=stg-db-1.dev.example.com

You’ll notice that Tiller warned you about the value from the DefaultsDataSource being replaced with one from the FileDataSource; you can see here how the ordering of plugins loaded in common.yaml is important.

And there you have it. A short example (and I’ve omitted the creation of the other example environments and templates), but you can see how this new behaviour will make life much easier when you use Tiller as the CMD or ENTRYPOINT in your container. Hopefully this will mean more efficient Tiller configs and will help you create more flexible Docker images. Any feedback or queries, just leave them in the comments section below, or report a bug/request a new feature on the Github issue tracker.

Solaris Bash Package and Other Updates


A quick update for users of my Solaris 11 x86 packages. I’ve created a GNU Bash 4.3 package which includes the patch for the much-publicized Shellshock vulnerability. As the package name “bash” also matches the one provided by Oracle, as usual you’ll just need to specify the full FMRI when installing:

$ pkg install pkg://markround/mar/shell/bash

And just to confirm you’re safe from Shellshock, using the test script at shellshocker.net:

$ export PATH=/opt/mar/bin:$PATH
$ ./shellshock_test.sh
CVE-2014-6271 (original shellshock): not vulnerable
CVE-2014-6277 (segfault): not vulnerable
CVE-2014-6278 (Florian's patch): not vulnerable
CVE-2014-7169 (taviso bug): not vulnerable
CVE-2014-7186 (redir_stack bug): not vulnerable
CVE-2014-7187 (nested loops off by one): not vulnerable
CVE-2014-//// (exploit 3 on http://shellshocker.net/): not vulnerable

I’ve also updated the following packages:

  • HAProxy – 1.5.9. New major version, includes native SSL support and much more.
  • NginX – 1.6.2. Bump to latest stable version from 1.6.0.
  • rsync – 3.1.1. Bumped from 3.1.0.
  • redis – 2.8.17. Latest stable version including many bug fixes.

These have all been in the /dev branch for a while, and have now been promoted to /stable.

Querying Tiller Configuration From a Running Docker Container


Tiller 0.2.2 now brings a simple HTTP API that allows you to query the state of the Tiller configuration inside running Docker containers. This may be particularly useful when you’re attempting to debug configuration issues; you can quickly and easily check the templates, global values and Tiller configuration.

You can enable this API by passing the --api (and optional --api-port) command-line arguments. Alternatively, you can also set these in common.yaml :

api_enable: true
api_port: 6275

Now, once Tiller has forked a child process (specified by the exec parameter), you will see a message on stdout informing you the API is starting :

Tiller API starting on port 6275

If you want to expose this port from inside a Docker container, you will need to add this port to your list of mappings (e.g. docker run ... -p 6275:6275 ...). As a demonstration, here’s a walk-through using the Docker container built from my Tiller and Environment Variables blog post. Assuming that you’ve run through that tutorial and built the Docker container, just make a small addition to your common.yaml, so it now looks like:

data/tiller/common.yaml
exec: /usr/sbin/nginx
data_sources:
  - file
  - environment
template_sources:
  - file
api_enable: true
api_port: 6275

And rebuild your container:

$ docker build --no-cache -t tiller-docker-example .

Now, run it again using the previous article’s example, but also map port 6275:

$ docker run -e environment=production -e name=Mark \
 -t -i -p 80:80 -p 6275:6275 tiller-docker-example
 
tiller v0.2.2 (https://github.com/markround/tiller) <[email protected]>
Using configuration from /etc/tiller
Using plugins from /usr/local/lib/tiller
Using environment production
Template sources loaded [FileTemplateSource]
Data sources loaded [FileDataSource, EnvironmentDataSource]
Templates to build ["welcome.erb"]
Building template welcome.erb
Setting ownership/permissions on /usr/share/nginx/html/welcome.html
Template generation completed
Executing /usr/sbin/nginx...
Child process forked.
Tiller API starting on port 6275

And you should now be able to ping the API (replace $DOCKER_HOST_IP with the IP address or hostname of your Docker host, e.g. localhost):

$ curl -D - http://$DOCKER_HOST_IP:6275/ping

HTTP/1.1 200 OK
Content-Type: application/json
Server: Tiller 0.2.2 / API v1

{ "ping": "Tiller API v1 OK" }

You can check out the Tiller config:

$ curl http://$DOCKER_HOST_IP:6275/v1/config

And the result (in formatted JSON):

{
    "tiller_base": "/etc/tiller",
    "tiller_lib": "/usr/local/lib",
    "environment": "production",
    "no_exec": false,
    "verbose": true,
    "api_enable": true,
    "exec": "/usr/sbin/nginx",
    "data_sources": [
        "file",
        "environment"
    ],
    "template_sources": [
        "file"
    ],
    "api_port": 6275
}
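
Since the responses are plain JSON, they’re easy to consume from scripts as well as with curl. For instance, pulling individual fields out of a /v1/config payload (the body below is a trimmed-down stand-in for the real response):

```ruby
require 'json'

# A trimmed-down stand-in for the /v1/config response body shown above.
body = '{"environment": "production", "data_sources": ["file", "environment"]}'

config = JSON.parse(body)
config['environment']  # => "production"
config['data_sources'] # => ["file", "environment"]
```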

Or a particular template :

$ curl http://$DOCKER_HOST_IP:6275/v1/template/welcome.erb

This returns the following JSON:

{
    "merged_values": {
        "environment": "production",
        "env_home": "/",
        "env_path": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "env_hostname": "0aaed58b34db",
        "env_term": "xterm",
        "env_environment": "production",
        "env_name": "Mark",
        "env_color": "Blue"
    },
    "target_values": {
        "target": "/usr/share/nginx/html/welcome.html"
    }
}

For other API endpoints, see the Documentation. And please bear in mind that this is intended as a development / debugging tool – there are serious security implications involved in exposing this port (and your configuration details) to an untrusted network!

Building Dynamic Docker Images With JSON and Tiller 0.1.4


Tiller 0.1.4 has just been released, and brings a few new improvements. Firstly, you can now use the -b, -l and -e command-line flags to set the tiller_base, tiller_lib, and environment values respectively. This makes things a little neater when debugging or testing new configurations on the command line.

I also added an environment_json data source, based on an idea by Florent Valdelievre (thanks for the feedback, Florent!).

This means you can now pass in arbitrary JSON data through the tiller_json environment variable, and use the resulting structure in your templates. As it merges values with the rest of the global values from other data sources, you can also use it to over-ride a default setting in your environment files; this may be particularly useful if you build Docker containers that are provided to end-users who wish to customise their settings.
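
Conceptually, the data source is doing something like the following (a sketch of the behaviour, not the actual plugin code):

```ruby
require 'json'

# Simulate running under `tiller_json='{...}' tiller ...`
ENV['tiller_json'] = '{ "servers" : [ "server1", "server2", "server3" ] }'

# Parse the variable if present; the resulting hash is merged into the
# global values made available to every template.
globals = ENV['tiller_json'] ? JSON.parse(ENV['tiller_json']) : {}
globals['servers'] # => ["server1", "server2", "server3"]
```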

To illustrate this, here are a few quick examples, also showing the new command-line flags when developing your templates.

Update: Please note that although this feature was added in Tiller 0.1.4, I’m using 0.1.5 in these examples as it includes newline suppression for ERb templates (the -%> syntax you’ll see below) which makes templates with loop constructs much neater.

Firstly, install the Tiller gem, create a work directory with the usual Tiller structure and change into it :

$ gem install tiller
$ mkdir json_example
$ cd json_example
$ mkdir templates environments

Then, create your common.yaml which enables the relevant data & template sources :

common.yaml
data_sources:
  - file
  - environment_json
template_sources:
  - file

Create your environment file (environments/production.yaml) :

environments/production.yaml
json_template.erb:
  target: parsed_template.txt
  config:
    default_value: 'This is a default value that may be overridden'

And your template (templates/json_template.erb) :

templates/json_template.erb
Default value : <%= default_value %>
List of servers follows...
<% servers.each do |server| -%>
  Server : <%= server %>
<% end -%>

Now, run Tiller and pass in your JSON as an environment variable (you can add the -v flag to tiller to make the output more verbose) :

$ tiller_json='{ "servers" : [ "server1" , "server2" , "server3" ] }' tiller -b $PWD -n
tiller v0.1.5 (https://github.com/markround/tiller) <[email protected]>
Template generation completed

$ cat parsed_template.txt
Default value : This is a default value that may be overridden
List of servers follows...

  server1
  server2
  server3

As mentioned above, you can also use this to over-ride a default. Notice that Tiller will warn you of this :

$ export tiller_json='{ "default_value" : "Hello, World!" , "servers" : [ "server1" ] }'
$ tiller -b $PWD -n
tiller v0.1.5 (https://github.com/markround/tiller) <[email protected]>
Warning, merging duplicate global and local values.
default_value => 'This is a default value that may be overridden' being replaced by : 'Hello, World!' from merged configuration
Template generation completed

$ cat parsed_template.txt
Default value : Hello, World!
List of servers follows...

  server1

More complicated structures can easily be built up. However, these can be quite error-prone to pass “on the fly”, so instead create a file called config.json with the following contents :

config.json
{
  "servers" : [
      {
        "hostname" : "server1",
        "port" : "80"
      },
      {
        "hostname" : "server2",
        "port" : "8080"
      }
    ],
  "username" : "mark",
  "password" : "tiller"
}

This is much easier to read and check for syntax errors! This example contains a list of server configuration hashes, and a couple of simple key:value pairs.

Edit your template as follows :

templates/json_template.erb
Username : <%= username %>
Password : <%= password %>
List of servers follows...
<% servers.each do |server|  -%>
  http://<%= server['hostname'] %>:<%= server['port'] %> 
<% end -%>

Now you get the following produced :

$ tiller_json="$(cat config.json)" tiller -n -b $PWD
$ cat parsed_template.txt

Username : mark
Password : tiller
List of servers follows...

  http://server1:80
  http://server2:8080

Assuming you use Tiller as your Docker CMD or ENTRYPOINT command and set up a suitable exec in your common.yaml, you can now include complex configuration in your Docker containers at runtime simply by doing something like :

$ docker run -e tiller_json="$(cat config.json)" \
  -d -t -i ...rest of arguments...

Hope that’s of some help! If you have any feedback, just use the comments feature on this blog and I’ll reply ASAP. You may also want to check out some of my other articles on Tiller, particularly the walkthrough with a sample Dockerfile, and how to use the API to query a running Docker container.

Tiller and Docker Environment Variables


After a recent query was posted on the Tiller issue tracker, I thought it might be useful to show a complete example of using Tiller and one of its plugins to create a dynamic Docker container. I assume you’re at least somewhat familiar with Tiller; if not, take a quick look through my documentation and other related blog posts.

This example will create a container image that runs a web-server through Tiller, and generates a simple web page populated with environment variables; two from a Docker -e flag, and one from the Dockerfile ENV declaration.

This is obviously a very contrived example as the most common usage of Tiller is to populate configuration files. However, changes to a web page are more easily visualised than twiddling some configuration options! I also thought I’d demonstrate how you can use one of the plugins (EnvironmentDataSource in this case) to fetch the values, instead of the more familiar static values specified in a Tiller environment file.

You can download a compressed archive of the files used in this example here. Alternatively, you can follow along below and create the files by copying & pasting.

As an aside : If you’d like to ship a Docker container with some default values, but would like to allow end-users override them with their own configuration, take a look at Building dynamic Docker images with JSON and Tiller and Tiller 0.3.0 and new Defaults data source.

Dockerfile configuration

So first off, let’s create our Dockerfile that will install NginX, and then set up Tiller to produce a templated web page:

Dockerfile
FROM ubuntu:latest

ENV color Blue

RUN apt-get -y update && apt-get -y install nginx
RUN apt-get -y install ruby && gem install tiller
RUN echo "daemon off;" >> /etc/nginx/nginx.conf

ADD data/tiller /etc/tiller

CMD ["/usr/local/bin/tiller" , "-v"]

And that’s it. Pretty simple; it just installs NginX, configures it to run under Docker without forking and copies our Tiller configuration in. It also defines an environment variable called “color”. This will be used later. Note that I’m also using the -v argument when calling Tiller; this makes the output more verbose so we can see what’s going on in more detail.

Tiller configuration

Under data/tiller, you’ll find the usual files that get copied to /etc/tiller :

etc
└── tiller
    ├── common.yaml
    ├── environments
    │   └── production.yaml
    └── templates
        └── welcome.erb

Let’s take a quick look at these files.

common.yaml
exec: /usr/sbin/nginx
data_sources:
  - file
  - environment
template_sources:
  - file

Nothing out of the ordinary here; we just pass control over to NginX when we’re done, and load a couple of data sources. Note that even though we intend to populate the template file with values from environment variables, we still need the file datasource as well. This is because the environment datasource cannot provide the meta-data such as where to install the templates, their permissions, and so on.

environments/production.yaml
welcome.erb:
  target: /usr/share/nginx/html/welcome.html
  config: {}

This tells Tiller to process a single template file, and to install it to the default document root for NginX. Note that instead of the usual key: "value" pairs passed in the config section, there’s just an empty hash, as we’re not providing any values from this file – we’ll fill everything in using environment variables.

templates/welcome.erb
<h1>Tiller env_ demonstration</h1>
Hello, <%= env_name %>. <br />
You are running in the <%= env_environment %> environment. <br />
Your favourite color is <%= env_color %>.

You can see we’re using 3 environment variables; they are all available in lower-case form, prefixed with env_. One of them (color) we defined in the Dockerfile; the others will be passed in at runtime.

Running the container

First, build the container and tag it with its name (“tiller-docker-example”):

$ docker build -t tiller-docker-example .

And now, let’s run the container, and pass in two variables, environment and name. As well as being referenced in the template, environment is used by Tiller to select which yaml file under /etc/tiller/environments to use (although you can omit it, and it will use production by default).

I’ll omit the -d flag so the container keeps running in the foreground and you can see the Tiller output :

$ docker run -e environment=production -e name=Mark \
 -t -i -p 80:80 tiller-docker-example

tiller v0.1.3 (https://github.com/markround/tiller) <[email protected]>
Using configuration from /etc/tiller
Using plugins from /usr/local/lib/tiller
Using environment production
Template sources loaded [FileTemplateSource]
Data sources loaded [EnvironmentDataSource, FileDataSource]
Templates to build ["welcome.erb"]
Building template welcome.erb
Setting ownership/permissions on /usr/share/nginx/html/welcome.html
Template generation completed, about to exec replacement process.
Calling /usr/sbin/nginx...

And in a web browser, check out the welcome page :

Environment variables example

And that’s it! As I stated before, this is a fairly contrived example, but you should still be able to see both how the various plugins work, and how you can maintain a single container while altering its runtime configuration through environment variables. See the Github documentation for more examples and details on how to write your own plugins.

If you have any questions or feedback, just use the comments feature on this blog, or open an issue on the Github project page and I’ll do my best to help!

Tiller v0.0.7 Now Supports Environment Variables


A quick update : Tiller has just been updated to v0.0.7. I’ve added a new EnvironmentDataSource which is super-simple, but very convenient. It makes all your environment variables accessible to templates (as global values) by converting them to lower-case, and prefixing them with env_.
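
The transformation is easy to picture; here’s a sketch of the behaviour (not the plugin’s actual source):

```ruby
# Each environment variable is down-cased and prefixed with env_
# before being exposed as a global template value.
env = { 'HOSTNAME' => 'web01', 'TERM' => 'xterm' }
values = env.each_with_object({}) do |(key, val), h|
  h["env_#{key.downcase}"] = val
end
values # => {"env_hostname"=>"web01", "env_term"=>"xterm"}
```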

To use it, just add it to your data_sources in /etc/tiller/common.yaml :

/etc/tiller/common.yaml
data_sources:
  - file
  - environment

And then you can do things like this in your templates :

Hello, world!
My auto-generated hostname is <%= env_hostname %>

Check it out at GitHub or install the Ruby gem.

Tiller Project and Docker Container Configuration


After several days of hacking on my Runner.rb project, I’m pleased to say I now have a much more polished and complete solution to shipping multiple configuration files in a single Docker container. The project is now known as Tiller and has been published as a Ruby gem, so you can just run gem install tiller to install it. You can find a good overview of it and how it works over at the Github README.md, but it’s still essentially the same approach :

  • Provide templates of your configuration files as ERB templates
  • Provide a source for each “environment” containing the values that should be placed in those templates, and where you want them installed. This is usually done with a YAML configuration file, but you can now use other sources for this information (see later in this blog post).

The first change (apart from the name) is that there’s no longer a nasty config hash to use inside your templates; you can simply declare a value in an environment file:

example.erb:
  target: /var/www/html/example.html
  config:
    welcome_text: 'Hello, world!'

And then reference it straight inside the template :

example.erb
<h1>This is generated from example.erb</h1>
<%= welcome_text %>

However, a much bigger change is that I have abstracted out the data generation and sources of templates. I’ve bundled two providers (FileDataSource and FileTemplateSource) that simply read the contents of ERB files under /etc/tiller/templates, and use YAML files under /etc/tiller/environments so that it mimics the old Runner.rb.

This means you can now write your own plugins (and I’ll also work on some additional ones to ship with later releases) to do things like pull templates from a remote HTTP server, populate the values with custom/dynamic variables such as FQDN, IP address of the host, or even fetch them from a LDAP server instead of pulling them off a YAML file on disk.

As a very simple example of a custom “global data source”, suppose your Docker containers all use the name.site.example.com FQDN structure (e.g. example01.london.example.com), and you wanted to extract the site part to use in a configuration file template. You could write a file called /usr/local/lib/tiller/data/example.rb :

/usr/local/lib/tiller/data/example.rb
require 'socket'

class ExampleDataSource < Tiller::DataSource

  def global_values
    # site: just the second part of the FQDN
    # This assumes DNS is working correctly!
    { 'site' => Socket.gethostbyname(Socket.gethostname).first.split('.')[1] }
  end
end

And then load it in /etc/tiller/common.yaml along with the default file data source :

/etc/tiller/common.yaml
data_sources:
  - file
  - example

And now all your templates can use this by referencing <%= site %>.

There’s much more you can do with this, including defining values for specific templates, and writing TemplateSources to provide the templates themselves, but that’s a bit too much detail for this introductory blog post. Go and check out the documentation at the Github page, browse through some of the examples, and check this blog for more updates & examples.

Dynamic Configuration Files With Docker Containers


I’ve been using Docker containers on Linux systems for a while now, and have recently developed a neat solution to what I imagine is a fairly common problem.

I had a number of Docker containers that I wanted to launch from a common image, but with a slightly different configuration depending on the environment in which I was launching them. For example, a web application container might connect to a different database in a staging environment, or a MongoDB replica set name might be different. This meant my options basically looked like:

  • Maintain multiple containers / Dockerfiles.
  • Maintain the configuration in separate data volumes and use --volumes-from to pull the relevant container in.
  • Bundle the configuration files into one container, and manually specify the CMD or ENTRYPOINT values to pick this up.

None of those really appealed due to needless duplication, or the complexity of an approach that would necessitate really long docker run commands. So I knocked up a quick & simple Ruby script that I could use across all my containers, which does the following :

  • Generates configuration files from ERB templates
  • Uses values provided in YAML files, one per environment
  • Copies the generated templates to the correct location and specifies permissions
  • Executes a replacement process once it’s finished (e.g. mongod, nginx, supervisord, etc.)

This way I can keep all my configuration together in the container, and just tell Docker which environment to use when I start it. As a simple example, here’s what it looks like when you run it :

# docker run -t -i -e environment=staging markround/demo_container:latest
Runner.rb v0.0.1
Using runner configuration from /etc/runner
Using environment staging
Parsing /etc/runner/templates/mongodb.conf.erb
Setting ownerships and privileges on /etc/mongodb.conf
Template generation completed, about to exec replacement process.
Calling /usr/bin/supervisord...
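
The heart of the script is only a few lines of ERB plus YAML. Here’s a stripped-down sketch of the idea (the real Runner.rb also handled multiple templates, permissions and the final exec; the replica_set value here is invented for illustration):

```ruby
require 'erb'
require 'yaml'

# Values for the chosen environment, as they would come from a YAML file.
values = YAML.safe_load("replica_set: 'staging-rs0'")

# Render an ERB template against those values...
template = ERB.new("replSet = <%= values['replica_set'] %>")
output = template.result(binding)
# ...then a real script would write it out and hand over control:
#   File.write('/etc/mongodb.conf', output)
#   exec('/usr/bin/supervisord')
output # => "replSet = staging-rs0"
```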

Update

UPDATE: This project is now called tiller and has changed significantly. For more information, see the documentation at its Github page, or my recent blog posts in chronological order:

Getting Started With Sensu on Solaris


Sensu is a monitoring framework written in Ruby. It’s small and very easy to extend, as well as being extremely scalable. It uses a message bus to communicate with its different components (clients, servers, API hosts) as well as external systems such as Graphite, which can be used to produce graphs and store metrics.

It uses a subscription-based model, where clients register with the server, instead of the traditional Nagios-like model where the server requires a long list of clients, checks and other configuration. This makes Sensu particularly well-suited for a fast-changing and dynamic cloud infrastructure, where hosts may be added or removed on a regular basis.

In this tutorial, I’ll show how to install the various components that make up a basic Sensu installation, along with the Uchiwa dashboard. In future articles I’ll show how to add checks, manage alerts, and configure a range of 3rd-party components such as the previously mentioned Graphite.

While the package management details, paths and so on are all Solaris-specific and use my packages, the steps and general principles behind this article should be applicable to other systems. You could also use a different source of packages (Solaris 11.2 will come with a bundled RabbitMQ server for example), or even compile your own manually.

For the sake of this tutorial, I’ll set up all the Sensu server components on a single machine, although there’s no reason why you couldn’t split them up over multiple hosts.

Install RabbitMQ and Redis

Firstly, we need to install two pre-requisites for Sensu – Redis and RabbitMQ. Redis is used by Sensu as a datastore, and RabbitMQ is used to pass messages (check commands, and the results of those checks) between the various components of Sensu and its clients.

To get started, add one of my package repositories to your system. You can pick either the /dev or /stable branches, depending on your requirements. See the documentation for more information on how to do this, and the difference between the repository branches.

Once you’ve done that, an install of redis and rabbitmq is simple :

$ sudo pkg install redis rabbitmq
           Packages to install:  4
       Create boot environment: No
Create backup boot environment: No
            Services to change:  1

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                                4/4     5189/5189    52.2/52.2    0B/s

PHASE                                          ITEMS
Installing new actions                     5690/5690
Updating package state database                 Done
Updating image state                            Done
Creating fast lookup database                   Done

You should now see both services come online :

$ svcs -a | egrep "(rabbit|redis)"
online         13:01:22 svc:/network/redis:default
online         13:01:24 svc:/network/rabbitmq:default

Testing the services

Redis is very easy to test with the “ping” command. If all is working properly, it should reply with “PONG” :

$ redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> exit

Your RabbitMQ installation can be verified by browsing to the login screen on port 15672 (e.g. http://address_of_your_server:15672). You should see the following :

RabbitMQ login page

RabbitMQ creates a default user called “guest”, with the password also set to “guest”. However, it initially restricts access to this account from the local machine. As you’re most likely going to want to administer the RabbitMQ server from another machine (e.g. one with a GUI desktop and web browser), you’ll need to set up access control permitting this.

Following the example in the RabbitMQ manual (https://www.rabbitmq.com/access-control.html), run the following to create a new, minimal rabbitmq.config file, and unset the loopback_users configuration item so that all users are permitted to log in remotely :

$ echo "[{rabbit, [{loopback_users, []}]}]." | sudo tee /opt/mar/etc/rabbitmq/rabbitmq.config
$ sudo /sbin/svcadm restart rabbitmq

And after logging in as the default “guest” account, you’ll see the following page :

RabbitMQ main page

Configuring RabbitMQ

We now need to create a new “vhost” and user for Sensu to use. You can easily do this through the GUI, although I’ll show how to do it through the command line, so you can just copy & paste. First, create the vhost :

$ sudo rabbitmqctl add_vhost /sensu

Then add the user sensu with the password password123 (obviously, in a production environment, you’d change this to something more secure!) :

$ sudo rabbitmqctl add_user sensu password123

Finally, grant that user full permissions on the /sensu vhost. The three “.*” arguments are regular expressions granting configure, write and read permissions respectively :

$ sudo rabbitmqctl set_permissions -p /sensu sensu ".*" ".*" ".*"

If you want to be able to use that account to log in to the web interface, you will also need to apply the administrator tag to the account :

$ sudo rabbitmqctl set_user_tags sensu administrator

Note that doing this is discouraged for security reasons. If you want to remove the administrator tag after you’ve confirmed you can log in, simply set an empty list of tags :

$ sudo rabbitmqctl set_user_tags sensu

For more information on all these commands, see the rabbitmqctl man page.

Install Sensu

Install the server and verify connections

Now we have the basic requirements set up, we can start to install the different components that make up a Sensu installation. Start by installing my sensu-server meta-package, which will pull in Ruby, various libraries, configuration files and an SMF manifest :

$ sudo pkg install sensu-server

You should then see the sensu-server service come online :

$ svcs sensu-server
STATE          STIME    FMRI
online         15:08:32 svc:/network/sensu-server:default

A quick look at the files under /opt/mar/etc/sensu/conf.d shows why it started without any extra configuration: the files telling Sensu where and how to connect to Redis and RabbitMQ are all pre-set to the example values used above (e.g. localhost, with the password of ‘password123’) :

$ cat /opt/mar/etc/sensu/conf.d/rabbitmq.json
{
  "rabbitmq": {
    "user": "sensu",
    "port": 5672,
    "vhost": "/sensu",
    "host": "localhost",
    "password": "password123"
  }
}

You’ll obviously want to change these credentials in a production setting! You can also verify that the server has started correctly by looking at the RabbitMQ admin console: if you click on the “Connections” tab, you’ll see a new connection from the Sensu server :

RabbitMQ connections
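On the subject of changing those default credentials: the RabbitMQ side is just `sudo rabbitmqctl change_password sensu <newpass>`, and the JSON side can be scripted. Here’s a hedged sketch that works on a copy under /tmp (the real file lives at /opt/mar/etc/sensu/conf.d/rabbitmq.json, and ‘s3cureExample’ is a placeholder, not a recommendation) :

```shell
# Recreate the shipped config as a scratch copy (in real life, edit the
# file under /opt/mar/etc/sensu/conf.d/ and restart sensu-server).
cat > /tmp/rabbitmq.json <<'EOF'
{
  "rabbitmq": {
    "user": "sensu",
    "port": 5672,
    "vhost": "/sensu",
    "host": "localhost",
    "password": "password123"
  }
}
EOF

NEWPASS='s3cureExample'   # placeholder value - generate your own
# Portable in-place edit (Solaris sed lacks -i): write a temp file, then move.
sed 's/"password": "[^"]*"/"password": "'"$NEWPASS"'"/' /tmp/rabbitmq.json \
    > /tmp/rabbitmq.json.new && mv /tmp/rabbitmq.json.new /tmp/rabbitmq.json
grep '"password"' /tmp/rabbitmq.json
```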

Install the API

The next component to install is the API service, which lets other tools communicate with the Sensu server. As before, this is a simple “pkg install” away :

$ sudo pkg install sensu-api

And verify :

$ svcs sensu-api
STATE          STIME    FMRI
online         15:21:49 svc:/network/sensu-api:default

You’ll also see another connection showing up in the RabbitMQ console.
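You can also poke the API directly: Sensu’s API listens on port 4567 by default. A hedged sketch, assuming curl is installed (which isn’t a given on a stock Solaris image) :

```shell
# /info returns version and transport details as JSON when the API is up;
# fall back to a marker string if the endpoint can't be reached.
info=$(curl -fs http://localhost:4567/info 2>/dev/null) || info="unreachable"
echo "sensu-api: $info"
```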

Install the client

We’re installing a client on the same machine to demonstrate this process. Again, one more package to install :

$ sudo pkg install sensu-client

This, however, will fail to start :

$ svcs -xv
svc:/network/sensu-client:default (?)
 State: maintenance since Thu Jun 12 15:25:25 2014
Reason: Start method failed repeatedly, last exited with status 2.
   See: http://support.oracle.com/msg/SMF-8000-KS
   See: /var/svc/log/network-sensu-client:default.log
Impact: This service is not running.

The reason is simple – it needs a configuration to be present before it will start. If you look at the sample configuration provided (/opt/mar/etc/sensu/conf.d/client.json.example), you’ll see a few things that need to be changed. Copy this file and edit it :

 $ sudo cp /opt/mar/etc/sensu/conf.d/client.json.example /opt/mar/etc/sensu/conf.d/client.json
 $ sudo vi /opt/mar/etc/sensu/conf.d/client.json

And modify the address and hostname fields accordingly. Also, change the “subscriptions” to an empty array (we’ll cover that later). For example, on my host “demo.markround.com” with the IP address of 192.168.0.143, my configuration looks like :

{
  "client": {
    "address": "192.168.0.143",
    "safe_mode": false,
    "name": "demo.markround.com",
    "subscriptions": []
  }
}
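A malformed client.json is the most common reason the service stays in maintenance, so it’s worth syntax-checking the JSON before clearing the service. A quick sketch, assuming a python3 interpreter is on the PATH (the Ruby pulled in by the Sensu packages could do the same job with `ruby -rjson`) :

```shell
# Write the example config to a scratch location and syntax-check it
# (in real life, point json.tool at /opt/mar/etc/sensu/conf.d/client.json).
cat > /tmp/client.json <<'EOF'
{
  "client": {
    "address": "192.168.0.143",
    "safe_mode": false,
    "name": "demo.markround.com",
    "subscriptions": []
  }
}
EOF
python3 -m json.tool < /tmp/client.json > /dev/null && echo "client.json: valid JSON"
```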

Save this file, and then clear the service – you should see it come online immediately :

$ sudo /sbin/svcadm clear sensu-client
$ svcs sensu-client
STATE          STIME    FMRI
online         15:31:15 svc:/network/sensu-client:default

You’ll also see one more connection in the RabbitMQ console, with messages being passed between the components every few seconds :

RabbitMQ connections

Wrap-up

You can now use the “sensu-cli” tool (a 3rd party component, but one I’ve bundled with my sensu-common package) to further verify Sensu is up and running. On the first run, it will create a configuration file in your home directory, so you’ll need to enter the same command twice :

$ sensu-cli client list
We created the configuration file for you at /home/mark/.sensu/settings.rb.  
You can also place this in /etc/sensu/sensu-cli. Edit the settings as needed.

$ sensu-cli client list
-------
address:  192.168.0.143
safe_mode:  false
name:  demo.markround.com
subscriptions:  []
timestamp:  1402583695
1 total items

We’ll now also install the Uchiwa dashboard, to provide a graphical view into Sensu. This is a 3rd party tool written in Node.js and is a big improvement over the default sensu-dashboard tool. Installation is straightforward from my package repositories :

$ sudo pkg install uchiwa
$ svcs uchiwa
online         13:51:22 svc:/network/uchiwa:default

And you can now browse to http://address_of_your_server:3000, and log in with the username “admin” and password “secret” (change these in /opt/mar/etc/uchiwa/config.js). You’ll see the following screen :

Uchiwa dashboard

And there you have it! A very basic Sensu installation. Of course, it doesn’t actually monitor anything yet, but I’ll cover that in a future document. Any questions or feedback, just use the comments on this blog article. See you next time!

Ruby Gems Update

| Comments

I had a bash at finishing off the Ruby gem dependencies for Sensu on Solaris 11 over the last few days (and a bunch of other stuff, but I’ll write about that a bit later).

These have been pushed out to the /dev branch of my repositories, and should be available via a simple “pkg” install command. I’m going to build various meta-packages that pull all this in automatically, but for now if you want to give them a try, just use something like :

$ sudo pkg install 'pkg://markround/mar/libraries/rubygem-*'
           Packages to install:  52
            Packages to update:   1
       Create boot environment:  No
Create backup boot environment: Yes

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                              53/53   13711/13711    30.4/30.4    0B/s

PHASE                                          ITEMS
Installing new actions                   16437/16437
Updating modified actions                        1/1
Updating package state database                 Done
Updating package cache                           1/1
Updating image state                            Done
Creating fast lookup database                   Done

These are again Solaris 11 x86-only. Some of these will have to remain that way – in particular, the libv8 and therubyracer gems require an x86 architecture as the V8 JavaScript Engine doesn’t support SPARC. However, these are only used by the sensu-dashboard gem, so as long as you don’t require the GUI components you should be OK. The whole Sensu architecture is distributed, though, so you can always run the dashboard elsewhere on a supported platform.

Anyway, after installing you should end up with the following gems ready to use :

$ gem list

*** LOCAL GEMS ***

addressable (2.3.6)
amq-client (1.0.2)
amq-protocol (1.2.0)
amqp (1.0.0)
async_sinatra (1.0.0)
bigdecimal (1.2.0)
builder (3.2.2)
bundler (1.6.2)
carrot-top (0.0.7)
coffee-script (2.2.0)
coffee-script-source (1.7.0)
commonjs (0.2.7)
cookiejar (0.3.0)
daemons (1.1.9)
em-http-request (1.0.3)
em-redis-unified (0.4.2)
em-socksify (0.3.0)
em-worker (0.0.2)
eventmachine (1.0.3)
execjs (2.0.2)
handlebars_assets (0.15)
hike (1.2.3)
hirb (0.7.1)
http_parser.rb (0.6.0)
io-console (0.4.2)
ipaddress (0.8.0)
json (1.7.7)
less (2.5.0)
libv8 (3.16.14.3 x86_64-solaris-2.11)
minitest (4.3.2)
mixlib-cli (1.4.0)
mixlib-config (2.1.0)
mixlib-log (1.6.0)
mixlib-shellout (1.3.0)
multi_json (1.9.2)
ohai (6.16.0)
oj (2.0.9)
psych (2.0.0)
rack (1.5.2)
rack-protection (1.5.2)
rainbow (2.0.0)
rake (0.9.6)
rdoc (4.0.0)
ref (1.0.5)
sass (3.3.4)
sinatra (1.3.5)
slim (2.0.2)
sprockets (2.12.0)
systemu (2.6.4, 2.5.2)
temple (0.6.7)
test-unit (2.0.0.0)
therubyracer (0.12.1)
thin (1.5.0)
tilt (1.4.1)
trollop (2.0)
yajl-ruby (1.2.0)
yui-compressor (0.12.0)