In depth: Doing our bit for the (development) environment with Docker
18th Dec 2018
Originally posted on our ‘Base’ Medium account.
In September 2019, Base became Passenger.
As the team behind Passenger has grown, so have our approaches and processes in software engineering. One such evolution has been our recent move to Docker, after we decided that our Vagrant/Ansible solution was no longer fit for purpose.
On the surface, Passenger (Base's core product offering) is an easy-to-use, customer-facing digital product. Look more deeply and you'll find that the system comprises a number of different technologies and databases. It is the combination of these, working as one beneath Passenger's app and web surface, that attracts such heartwarming feedback from users on a daily basis.
We recently invested in Docker to help us build, manage and secure each Passenger application in a more efficient way. Read on to better understand the work we’re doing to make Passenger a stronger, more reliable solution for all that use it.
Why we moved to Docker!
Passenger's numerous development environments were previously built using Vagrant and our production Ansible playbooks: scripts and configuration files which turn blank machines into fully configured boxes that can serve requests.
As Vagrant uses virtual machines as its backend, Passenger benefited from strong separation between its development environments. However, it was also a very resource-intensive approach. Vagrant creates a new VM for each application used at Passenger, each requiring a fixed allocation of CPU, disk and memory. We previously used an Ubuntu LTS base image for this, one of the most popular distributions. The Ansible playbook would then run against this VM, producing a fully configured environment ready to serve requests.
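For illustration, a Vagrantfile along these lines (a simplified sketch, not our actual configuration; the box name, resource figures and playbook path are placeholders) shows how each VM gets a fixed, up-front allocation and is then provisioned with Ansible:

```ruby
# Illustrative sketch only -- box, resources and playbook path are assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"        # Ubuntu LTS base image

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048                       # fixed memory allocation per VM
    vb.cpus   = 2                          # fixed CPU allocation per VM
  end

  # Provision the VM with the same Ansible playbook used in production
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
```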
These “full-fat” VMs with fixed resource allocation can be a time sink. Rebuilding the whole Passenger stack takes around an hour — not to mention requiring expensive hardware in order to run the stack performantly. When developing, full requests — depending on the complexity — could take 5–20 seconds to compute. An eternity in development time.
When we increased the number of engineers working on the Passenger stack this summer (2018), it quickly became apparent that this development environment was no longer the fit-for-purpose solution it had been before. Technology and common practice have shifted towards containerized environments.
Containing the problem
As containers use dynamic resource allocation, they do not require an up-front allocation of disk, CPU or memory (although you can reserve these for each container if required). Importantly, containers are also a much more native approach, as no virtualization is required: the application runs natively on the host machine, but using a different base image than the host. (Developers even use different operating systems; Debian-based Linux distributions are the most popular, but we also have openSUSE Tumbleweed, Fedora and macOS.)
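For example, limits can be applied per container at run time; the image name and figures below are arbitrary examples:

```
# Cap this container at 512 MB of memory and one CPU (example values)
docker run --memory=512m --cpus=1 my-app-image
```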
The most popular, easiest and most portable way of implementing containerized environments today is Docker. With this approach, the environment is defined in a special Dockerfile, and Docker then creates a container from a base image. We chose an Alpine base image: an extremely small Linux distribution built against musl rather than the larger, more standard glibc. The Dockerfile contains the list of commands that brings the container up to a fully functioning environment.
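As a rough illustration (not Passenger's actual Dockerfile; the runtime and application here are placeholder assumptions), a Dockerfile built on Alpine might look like this:

```dockerfile
# Illustrative sketch only -- the runtime and application are placeholders.
FROM alpine:3.8

# Install the runtime the application needs
RUN apk add --no-cache python3

# Copy the application source into the image
WORKDIR /app
COPY . /app

# Command the container runs when it starts
EXPOSE 8000
CMD ["python3", "-m", "http.server", "8000"]
```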
This approach results in far fewer lines of configuration than the Vagrant/Ansible solution — with the added benefit of keeping everything in a single file, which makes the whole application environment visible at a glance.
As our services often require other programs that are not suited to running in the same container as the application, e.g. MariaDB, Elasticsearch and Redis, we've also adopted Docker Compose, a tool that enables us to further define how Passenger's multiple application containers come together to build an overall service.
For each service, Docker Compose lets us write a specification in a docker-compose.yml that defines how its containers fit together. We also have environment-specific Compose files, which overlay the base file to apply adjustments, such as enabling debugging and disabling master/slave replication in development.
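As a sketch of how this fits together (service names, images and settings here are illustrative assumptions, not our production configuration), a base file and a development overlay might look like this:

```yaml
# docker-compose.yml -- illustrative base specification
version: "3.4"
services:
  app:
    build: .
    depends_on:
      - db
      - cache
      - search
  db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: example
  cache:
    image: redis:4-alpine
  search:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    environment:
      discovery.type: single-node
```

```yaml
# docker-compose.dev.yml -- development overlay; only the differences appear here
version: "3.4"
services:
  app:
    environment:
      APP_DEBUG: "1"   # hypothetical flag to switch on debugging in development
```

Running `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up` merges the two files, so development-only tweaks never touch the base specification.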
What does all this mean for Passenger?
The process of dockerizing our development environments was not an easy one.
First off, the diversity of the Base engineering team and the variety of development systems they prefer caused several headaches.
Furthermore, Docker on macOS does not support bridged networking with the host. A container cannot be reached at its own unique IP address; instead, a different set of ports must be published to the host for each container.
We also hit issues with MariaDB on the osxfs filesystem, which required us to use Docker volumes rather than bind-mounting the data directory from the host filesystem.
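Both workarounds show up in the Compose file. In a sketch like the one below (ports and volume names are example values, not our real configuration), the database publishes a port to the host and stores its data in a named volume instead of a host bind mount:

```yaml
# Illustrative snippet -- ports and volume names are example values
version: "3.4"
services:
  db:
    image: mariadb:10.3
    ports:
      - "3306:3306"              # published to the host, as containers get no routable IP on macOS
    volumes:
      - db-data:/var/lib/mysql   # named Docker volume instead of a host bind mount

volumes:
  db-data:
```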
But, despite these early issues…
- We can now rebuild our entire development stack in ~8 minutes rather than 1 hour
- Our development machines have significantly more available CPU, memory and disk space:
  - We have gone from 8GB of memory used to < 1GB
  - …and from ~100GB of disk space to < 8GB
- Requests with full debugging have dropped from 2–20 seconds to 125–800 milliseconds
So… a little bit of hard work and some smart investment in tooling has really paid off. We'll keep improving our build tools and engineering infrastructure as time goes on, ensuring that the product features our customers are buying into are supported by everything needed to deliver them reliably and at scale.