What I want
- Boot a postgres instance
- Boot a redis instance
- Boot a rabbitmq instance
- Boot an app server instance
- Do some more stuff
I had a project running postgres 8.4 in production, so I had to ignore/hold ArchLinux updates. But I also needed postgres 9.1 for another project. Besides, it is a very good idea to develop in an environment similar to production (in my case, CentOS).
Docker is an awesome tool that makes this possible, the 'Instantly' part in particular. It uses Linux Containers (lxc) instead of traditional virtualization methods; since containers share the host kernel, everything is really fast.
Explaining docker in detail is out of the scope of this article. If you aren't using docker yet, you can find detailed instructions on installing and running it in the getting started guide. Also, check out this article on using docker as a development environment.
There are a few ways to configure your docker images: you may use a Dockerfile, or boot the image, install packages and then commit. Both are detailed in the links above.
In my case, I set up the container using Provy, since I use its roles to provision the production servers I'm working on and I wanted both environments to be as similar as possible. But if that is not one of your needs, you may just pull a postgres image from the docker index, e.g.:
$ docker pull zaiste/postgresql
# to test it:
$ docker run -i -t zaiste/postgresql /bin/bash
To run the postgres container I call docker like this:
$ docker run -v [/path/on/host/:/path/on/container/] -p [port to be exposed to host] -i -t [container_name] [command to execute]
The most interesting part here is the 'v' option: it allows us to share a folder from the host with the container, so the postgres databases will persist when we kill the postgres container. Also, not keeping data in the image makes it smaller. In my case, I have this:
$ docker run -v $HOME/.d_volumes/postgres/:/var/lib/pgsql/ -p 5432 -i -t centos_postgres /bin/su postgres -c '/usr/pgsql-9.1/bin/postgres -D /var/lib/pgsql/9.1/data'
Do the same for all services you want to run.
Autoenv automatically executes the .env file whenever you cd into a directory containing one. I already use it for environment variables, so making it work with docker was as simple as putting the docker calls at the beginning of the .env file.
To avoid creating a new docker container every time I cd into the project folder, I'm using the cidfile option: docker writes the container id to a file and will not start another container while that file exists.
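The effect of the cidfile guard can be sketched without docker at all; here a plain file and a hypothetical id stand in for the real container, just to show why cd-ing into the folder twice is harmless:

```shell
# Sketch of the cidfile idea: the first run writes the id file,
# later runs see the file and skip starting anything.
CIDFILE=/tmp/demo.cid
rm -f "$CIDFILE"                              # clean slate for the demo
for attempt in 1 2; do
  if [ ! -f "$CIDFILE" ]; then
    echo "fake-container-id" > "$CIDFILE"     # docker would write the real id here
    echo "attempt $attempt: started"
  else
    echo "attempt $attempt: already running, skipped"
  fi
done
# prints:
# attempt 1: started
# attempt 2: already running, skipped
```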
So my .env file looks like this:
# /path/to/project/.env

# postgres
docker run -cidfile $HOME/.postgres.cid -d -v $HOME/.d_volumes/postgres/:/var/lib/pgsql/ -p 5432 -i -t centos_postgres /bin/su postgres -c '/usr/pgsql-9.1/bin/postgres -D /var/lib/pgsql/9.1/data' 2> /dev/null
PG_IP=$(docker inspect $(cat $HOME/.postgres.cid) | grep IPAddress | cut -d '"' -f 4)
export DATABASE_URL=postgres://postgres:@$PG_IP:5432/db_name

# Django app
docker run -cidfile $HOME/.app.cid -d -e DATABASE_URL=$DATABASE_URL -v $HOME/.d_volumes/app/:/srv/ -p 8000 -i -t app_server /bin/su user -c '~/.virtualenvs/project/srv/project/manage.py runserver 0.0.0.0:8000'
Note that I also use the docker inspect command to get the postgres IP from docker. My django app is ready to read it from environment variables (thanks also to dj-database-url).
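The grep/cut pipeline from the .env file can be tried in isolation; here a hard-coded line of `docker inspect`-style output stands in for a live container:

```shell
# Sketch: pulling the IP out of docker inspect output with grep + cut.
# cut -d '"' splits on double quotes; field 4 is the address itself.
SAMPLE='        "IPAddress": "172.17.0.2",'
PG_IP=$(echo "$SAMPLE" | grep IPAddress | cut -d '"' -f 4)
echo "postgres://postgres:@$PG_IP:5432/db_name"
# prints: postgres://postgres:@172.17.0.2:5432/db_name
```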
Now, cd into the project, and that's all: everything I need is running as expected. To see the site in the browser, first check which port docker exposed by running docker ps, or, in our case:
$ docker port $(cat $HOME/.app.cid) 8000
49164
Then we can open the browser at localhost:[port from docker].
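Capturing that port in a variable makes the last step scriptable too; 49164 below is a made-up value standing in for whatever `docker port` actually reports:

```shell
# Sketch: building the browser URL from the port docker reports.
PORT=49164    # in practice: PORT=$(docker port $(cat $HOME/.app.cid) 8000)
echo "http://localhost:$PORT"
# prints: http://localhost:49164
```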
The most obvious next step is to use the same docker container in production and in the dev environment, since this is one of the most interesting docker features. But I think I'll wait for the docker 1.0 release.
This method has the advantage of automating things without requiring code changes while still being project-specific. In my case, I just kept my development workflow and made it better.