Thu, Mar 26, 2015
You already know about it: the latest hype is all about application containers and Docker. At Tryolabs we try to keep up to date with the latest developments, so we have been using Docker extensively for quite some time, both for development and for production systems.
The idea is that we have all our applications containerized so they and their dependencies can coexist without affecting each other. For example, say your application A depends on version 1.0 of some obscure system library, while application B depends on version 1.1. Trying to make these two versions coexist and play along with each other could become a big mess. With the Docker way, it is much simpler: you just create a container in which you compile version 1.0 of the library and then install application A on it, and do the same with container B and version 1.1. You now have two independent apps with different system dependencies. In theory it sounds very simple; in practice it may not be quite so, but it's definitely worth considering. If you haven't, you should check out the Docker documentation.
As with most technologies that are in their infancy, many of the best practices are not yet well established. This post walks you through one of the first problems most newcomers to Docker will face: how to make our containers configurable.
Your application has local development, different testing environments, staging, and finally production; maybe even different deployments in different regions of the world. There is no escape.
In the Docker ecosystem there is a general principle we want to follow for maintainability: use the same container for all environments.
This has obvious advantages. Most importantly, it reduces the likelihood of those hard-to-diagnose bugs that result from different dependencies in different environments. Let's take a real use case. We have a web application that depends on nginx for serving requests. We use Amazon Web Services a lot, so suppose our production setup runs on EC2. Because of security requirements, our application must work over HTTPS, so we got an SSL certificate from some provider. For development and testing, our nginx can be configured to handle said certificate. But production needs to be highly scalable, so there we are going to use an Elastic Load Balancer, which deals with the certificate itself, leaving our production nginx to receive plain HTTP requests with no special certificate configuration needed. Real enough?
First of all, we need a Docker container with nginx set up. Usually in the Docker world someone has already created a container with the tool we need, and nginx being such popular software, we can be almost certain of it. Checking the Docker registry we even find an official image, which we can fetch with docker pull nginx.
Once we inspect the container, we see the nginx configuration file is located at /etc/nginx/conf.d/default.conf. To further set up our nginx, we need to modify this file. I am also going to suppose that we have two files, ssl-cert.pem and ssl-cert.key – if you don't, you can use the Ubuntu snakeoil certificates for testing.
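For instance, we can take a quick peek at that default config without even keeping a container around:

```bash
# print the default config shipped with the official image and exit
docker run --rm nginx cat /etc/nginx/conf.d/default.conf
```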
Provided our application server (i.e. uWSGI or whatever we use) listens on port 8080, the nginx config file for SSL would look similar to this:
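Something along these lines should do – the certificate paths and the proxy_pass target are my assumptions for the sake of the example:

```nginx
# /etc/nginx/conf.d/default.conf – SSL variant (sketch)
server {
    listen 443 ssl;
    server_name localhost;

    # certificate files; the /etc/nginx/ssl/ location is an assumption
    ssl_certificate     /etc/nginx/ssl/ssl-cert.pem;
    ssl_certificate_key /etc/nginx/ssl/ssl-cert.key;

    location / {
        # forward requests to the application server listening on port 8080
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```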
While the one for plain HTTP requests would be:
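```nginx
# /etc/nginx/conf.d/default.conf – plain HTTP variant (sketch, same assumptions)
server {
    listen 80;
    server_name localhost;

    location / {
        # same application server, no certificate handling at all
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
    }
}
```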
The first idea that comes to mind is to create two different Dockerfiles, one for each of these situations. For SSL:
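Something like this, assuming the SSL config above is saved as nginx-ssl.conf next to the Dockerfile (file names are my own choice):

```dockerfile
# Dockerfile.ssl – bakes the SSL config and the certificates into the image (sketch)
FROM nginx
COPY nginx-ssl.conf /etc/nginx/conf.d/default.conf
COPY ssl-cert.pem   /etc/nginx/ssl/ssl-cert.pem
COPY ssl-cert.key   /etc/nginx/ssl/ssl-cert.key
```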
And for non-SSL:
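Assuming the plain config is saved as nginx-nossl.conf:

```dockerfile
# Dockerfile.nossl – only the plain HTTP config, no certificates (sketch)
FROM nginx
COPY nginx-nossl.conf /etc/nginx/conf.d/default.conf
```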
While this works, it is far from optimal. We would be building different Docker containers for dev and production, which is not really desirable. As we said above, we actually want to be able to use the exact same container.
Turns out docker run has a -v option that can bind mount a volume (file or directory) from the host into the container. This is perfect for our case: we can keep all the files in the host, and don't need to build two different Dockerfiles. In fact, for this special case we don't need to build any Dockerfile, since we can run the vanilla nginx container and inject the files we care about.
For our SSL nginx, we would run it like:
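Something along these lines, with the config and certificates living on the host (the exact paths are up to you):

```bash
# bind mount the SSL config and the certificates into the vanilla nginx container
docker run -d -p 443:443 \
    -v $(pwd)/nginx-ssl.conf:/etc/nginx/conf.d/default.conf:ro \
    -v $(pwd)/ssl-cert.pem:/etc/nginx/ssl/ssl-cert.pem:ro \
    -v $(pwd)/ssl-cert.key:/etc/nginx/ssl/ssl-cert.key:ro \
    nginx
```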
And the plain HTTP one:
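```bash
# only the plain HTTP config needs to be mounted here
docker run -d -p 80:80 \
    -v $(pwd)/nginx-nossl.conf:/etc/nginx/conf.d/default.conf:ro \
    nginx
```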
The advantages are clear. Now we have a single container that we can use for all our environments. The drawback? Imagine we have more environments than just dev/test/production, and that our nginx configuration file is much bigger and more complex. In this case, we would need to create (and worse, maintain!) a different configuration file for each of these environments. If we are not careful, these files can get out of sync, and that is when bugs appear. We need a better way.
The Twelve-Factor App is a general guideline outlining some of the best practices for building applications. One of the factors explains why it is a very good idea to store application configuration (data that varies between deploys) in environment variables.
The docker run command does have -e and --env-file options to provide environment variables to processes inside the container, at container run time. We would be all set, but unfortunately nginx – like many existing applications – does not support environment variables in its configuration file.
What we need is a tool that can read environment variables and generate a configuration file from a template. Meet j2cli! I am familiar with Python, so using the Jinja2 templating language is definitely a plus for me. But of course, you could accomplish the same feat with some other tool that uses a different templating language.
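As a minimal illustration (the template name here is made up), rendering a template from environment variables looks like this:

```bash
# j2cli is a Python package; j2 renders a Jinja2 template, falling back to
# the environment variables as context when no data file is given
pip install j2cli
SSL=true j2 default.conf.j2 > default.conf
```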
First, we should create a Jinja2 template for the nginx config:
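Here is a sketch of what it could look like – I am keying everything off a single SSL environment variable, which is my own naming choice:

```jinja
{# /etc/nginx/conf.d/default.conf.j2 – rendered into default.conf at startup #}
server {
{% if SSL %}
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/ssl-cert.pem;
    ssl_certificate_key /etc/nginx/ssl/ssl-cert.key;
{% else %}
    listen 80;
{% endif %}
    server_name localhost;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
    }
}
```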
For this to work, we need j2cli installed inside the container, the template copied into it, and the real configuration file rendered from the template (using the environment variables) every time the container starts.
This can be achieved with the following Dockerfile and custom entrypoint script:
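A possible Dockerfile – it assumes a Debian-based nginx image and the file names used so far:

```dockerfile
# Dockerfile – single image for every environment (sketch)
FROM nginx

# j2cli is a Python package, so we need pip inside the container
RUN apt-get update && \
    apt-get install -y python-pip && \
    pip install j2cli && \
    rm -rf /var/lib/apt/lists/*

# template, certificates and entrypoint; the certificates are baked in even if
# the plain HTTP configuration ends up being rendered
COPY default.conf.j2 /etc/nginx/conf.d/default.conf.j2
COPY ssl-cert.pem    /etc/nginx/ssl/ssl-cert.pem
COPY ssl-cert.key    /etc/nginx/ssl/ssl-cert.key
COPY docker-entrypoint.sh /docker-entrypoint.sh

ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
```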
And our docker-entrypoint.sh – make sure to give it +x permissions and set the shebang at the top to save some headache:
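Something as small as this should do:

```bash
#!/bin/bash
# docker-entrypoint.sh – render the config from the template, then run the CMD
set -e

# j2 uses the environment variables as the template context when no data
# file is passed, so whatever we give docker run with -e ends up here
j2 /etc/nginx/conf.d/default.conf.j2 > /etc/nginx/conf.d/default.conf

exec "$@"
```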
We build with something like docker build -t our-nginx . and then to run we do:
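Presumably something like this, if the template keys off the SSL variable from the sketch above:

```bash
docker run -d -e SSL=true -p 443:443 our-nginx
```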
for SSL, and just:
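```bash
# no SSL variable set, so the template renders the plain HTTP server block
docker run -d -p 80:80 our-nginx
```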
for regular HTTP.
In the end, we have a single container that can be parametrized via environment variables. Most importantly, we have arrived at a pattern that can be reused for any application that requires a configuration file and does not natively support environment variables. And trust me, there are quite a few :)