Over the last decade, a massive transformation has taken place in how applications are deployed: containers. This handbook outlines how containers are built and run, and how they can be employed to best effect.

Container Basics

In the old days, meaning only about five years ago, responsibility for running a production service was split between two roles:

  • developers, who write the code,
  • sysadmins (ops personnel), who operate it on on-premises servers.

However, as so often happens in software, these roles were frequently in conflict.

Much of the software that developers write requires system administrators to install supporting components on the server before it can run. For example, to run PHP code written by developers, sysadmins need to install a PHP runtime environment on the server. Likewise, sysadmins need to install the .NET runtime on the servers to run a .NET application.

However, things broke if the developers used a different version of such a runtime during development than the one installed on the server.

Containers, at their core, are not like virtual machines. The basic concept is that the containerized application runs as usual, but the operating system (Linux, Windows) presents it with a different environment. For example, the operating system can show the application only a part of the disk or a different folder structure. It can also route network traffic differently and limit the CPU and memory the application can use. Still, fundamentally, the app is running on the same operating system as the host. This is different from virtual machines, where each virtual machine on a host system runs its own operating system.
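
To make this concrete, here is a minimal sketch of what such limits look like with Docker (the image name myapp is a placeholder):

    # Run a container with a constrained view of the system:
    # limit it to half a CPU core and 256 MB of memory, and
    # mount a single host folder into its file system.
    docker run \
      --cpus=0.5 \
      --memory=256m \
      -v /srv/myapp-data:/data \
      myapp:latest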

Containers offer a remedy for this situation by packaging the runtime environment and the software together. The process works as follows (a command-line sketch follows the list):

  1. Preparing a recipe for creating the runtime environment. This recipe documents both how the container image is built and how the container system should execute it.
  2. Pushing the container image into a container registry, where all servers can fetch the new image.
  3. On the production server: fetching the container image, creating a new container from it, and launching the software.
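
As a rough sketch, assuming Docker as the container engine and a hypothetical registry at registry.example.com, the three steps might look like this:

    # Step 1: build the image from the recipe (a Dockerfile).
    docker build -t registry.example.com/myapp:1.0 .

    # Step 2: push the image to the container registry.
    docker push registry.example.com/myapp:1.0

    # Step 3: on the production server, fetch the image and
    # launch a new container from it.
    docker pull registry.example.com/myapp:1.0
    docker run -d --name myapp registry.example.com/myapp:1.0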

This process brings several advantages. Since the production server runs the same software environment as the development system, the developer can be sure that the development environment matches the production environment exactly. Furthermore, the same image can run everywhere, and the production environment is not updated in place but replaced.

This is called reproducibility. Ever since the advent of Docker, containers have not been built and installed by hand. Instead, the instructions for installing the software in the container are codified and written down in a Dockerfile. Docker or other container engines can execute this file automatically to create a container image. This image contains the application and all the data it needs and can be shipped to different servers for execution.
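
For illustration, a minimal Dockerfile for the PHP example above might look like the following; the src/ folder is an assumption about where the application code lives, and the recipe is written here as a shell heredoc so the whole snippet can be run as-is:

    # Write a minimal recipe: start from an official PHP + Apache
    # image, then copy the application code into the web root.
    cat > Dockerfile <<'EOF'
    FROM php:8.2-apache
    COPY src/ /var/www/html/
    EOF

    # Build a container image from the recipe.
    docker build -t myapp:latest .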

This has a distinct advantage: the installation of the application is documented, and the container can be tested in its entirety before it is shipped to a production system.

Container Registries

A concept called a container registry eases the process of moving container images between servers. This is a simple web service where container images can be uploaded and downloaded. What's more, an image is not uploaded as a single file but in multiple parts called layers. These layers build on top of each other, and each contains only the difference from the previous layer. This way, if a build process takes multiple steps, only the parts that have changed between two build runs need to be uploaded.
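
As a quick sketch, you can observe this mechanism locally using the official open-source registry image (the port and names below are arbitrary choices):

    # Start a throwaway registry on localhost:5000.
    docker run -d -p 5000:5000 --name registry registry:2

    # Tag an existing local image for that registry and upload it.
    docker tag myapp:latest localhost:5000/myapp:latest
    docker push localhost:5000/myapp:latest

    # Any machine that can reach the registry can now fetch it.
    docker pull localhost:5000/myapp:latest

If you push again after a small change, only the layers that changed are transferred; the rest are reported as already present.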

Container Architectures

In a widespread container deployment scenario, the Dockerfile and the related application files are stored in a code versioning system such as git. When a developer pushes a new piece of code into the versioning system, it triggers the so-called CI/CD system to start a build. This build takes the Dockerfile, creates the container image, and then pushes it into the registry.
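
The heart of such a build job is usually just a few commands; a minimal sketch (the registry address and image name are placeholders) might be:

    #!/bin/sh
    set -e

    # Tag the image with the current commit hash so that every
    # build is uniquely identifiable.
    TAG=$(git rev-parse --short HEAD)

    docker build -t registry.example.com/myapp:"$TAG" .
    docker push registry.example.com/myapp:"$TAG"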

Finally, depending on the configuration, the CI/CD system can trigger a release in the orchestrator API to roll out the new version. The orchestrator then does a zero-downtime update, replacing one container at a time. The data, of course, is preserved if an appropriate storage system is connected.
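
With Kubernetes as the orchestrator, for example, such a rollout can be triggered and watched like this (deployment and image names are again placeholders):

    # Point the deployment at the freshly built image; Kubernetes
    # performs a rolling update, replacing one container at a time.
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.1

    # Wait until the rollout has completed (or failed).
    kubectl rollout status deployment/myapp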

Docker Swarm, Kubernetes & Others

Containers are a standardized technology, and there are multiple implementations. One such implementation is Docker, which also ships its own orchestrator, Docker Swarm. However, in the past couple of years, Kubernetes has significantly overtaken Swarm in popularity.

The reason for Kubernetes' popularity is that it provides a standard, cloud-independent base layer for running workloads. Therefore, a deployment built on the steps described above can run on any cloud provider that offers basic IaaS services.

K8s decides which machine each workload should run on, takes care of the internal networking between the individual containers, and sets up firewalls. In case of an infrastructure node failure, it automatically moves the workload elsewhere.
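
A minimal sketch of how such a workload is declared (all names are placeholders); Kubernetes then decides on which machines the three replicas run:

    # Declare a replicated workload; the scheduler picks the nodes.
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: registry.example.com/myapp:1.0
    EOF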

Furthermore, it is driven entirely by standardized Application Programming Interfaces (APIs). Thus, developers can easily write extensions for it, and the cloud community has written many, from custom network implementations to security features.
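
Everything the command-line tooling does goes through this same API; as a quick sketch, you can talk to it directly:

    # Open an authenticated local proxy to the cluster API.
    kubectl proxy --port=8001 &

    # Every object in the cluster is an addressable REST resource.
    curl http://127.0.0.1:8001/api/v1/namespaces/default/pods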

Since Kubernetes has a built-in permission system (role-based access control), it can be safely integrated with a CI/CD system to deploy workloads without giving the CI/CD system admin-level permissions. (This is not possible with Docker Swarm.)
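
As a sketch of how this looks in practice (all names are placeholders), the following grants a dedicated CI/CD service account the right to update deployments in one namespace and nothing else:

    # Allow the "cicd" service account to update deployments in
    # the "production" namespace, and nothing more.
    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: cicd-deployer
      namespace: production
    rules:
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["get", "list", "update", "patch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: cicd-deployer
      namespace: production
    subjects:
    - kind: ServiceAccount
      name: cicd
      namespace: production
    roleRef:
      kind: Role
      name: cicd-deployer
      apiGroup: rbac.authorization.k8s.io
    EOF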

Storage Systems

However, when you want to run a database, the setup becomes tricky because databases need an underlying storage system. If the storage is attached to the container directly from the host machine, the service must be pinned to the computer it is running on. Otherwise, a move to a different host machine may leave the service without access to its data.

Alternatively, you can use network-based storage systems, such as Ceph, OpenEBS, iSCSI, NFS, etc. However, neither Kubernetes itself nor Exoscale, at this time, offers network-attached storage, which means that you have to deploy such a storage system yourself.
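
Once such a storage system is deployed and exposed as a storage class, a workload can request space from it declaratively; a minimal sketch (the storage class name ceph-rbd is an assumption about your setup):

    # Ask the cluster for 10 GB of persistent storage; which
    # storage system fulfils the claim depends on the class.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myapp-data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: ceph-rbd
      resources:
        requests:
          storage: 10Gi
    EOF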

Rancher & Others

Kubernetes is highly complicated under the hood, and setting it up from scratch can be a complex task. The open-source Kubernetes project has started developing tools, such as kubeadm, to ease the burden, but they are rudimentary. It is also worth noting that a default Kubernetes deployment does not contain many of the tools required to operate a thriving container ecosystem, such as storage or a registry.
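
For instance, bootstrapping a cluster with the project's kubeadm tool boils down to a few commands, while everything around them (networking, storage, upgrades) remains your responsibility; a rough sketch, with placeholder values:

    # On the machine that will become the control plane:
    kubeadm init

    # kubeadm prints a join command with a token; run it on each
    # worker node to attach it to the cluster.
    kubeadm join 10.0.0.1:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>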

Fortunately, commercial offerings, such as Rancher, OpenShift, and others, have picked up the slack and provide easier-to-use ways to deploy Kubernetes. Rancher stands out by providing a free and open-source solution and charging only for commercial support.