For many, Kubernetes can be described as a multilayered, multifunctional container orchestration platform. As an engineer responsible for designing, deploying, and maintaining Kubernetes infrastructure, there are plenty of concepts to learn and functionalities to understand. Obtaining the knowledge required to manage Kubernetes at the service level is often sufficient when maintaining a relatively basic deployment of Kubernetes. When we take a deeper look at what Kubernetes actually provides us with, though, we uncover a key low-level component of the orchestration engine: the container runtime.
Container runtimes are a key component of Kubernetes. They’re the software that grants a system the ability to run containers. Without container runtimes, Kubernetes and other container orchestration platforms could not exist. In this article, we’ll take an extensive look at what container runtimes are, the functions they provide, and the different types of container runtimes.
What is a Container Runtime?
Before discussing what a container runtime is, let’s briefly review what a container is. A container, much like a virtual machine, provides a way to package software and all of its dependencies into a self-contained unit that runs consistently in any supported environment. Virtual machines, however, are larger and take longer to become operational. Containers have a much smaller footprint, and are therefore much faster to bring up (and tear down) than virtual machines. This is because containers, unlike virtual machines, don’t each maintain a full copy of an operating system; they share the kernel of the host system.
Containers have become today’s preferred method of deploying cloud-native applications that utilize microservices architecture. Cloud service providers are increasingly offering containers-as-a-service solutions as part of their product lines. And every day, more and more features are added to these services, offering new ways of managing these applications.
Here’s the question, though: How does a host system actually run a container? A system can run containers only because of its container runtime. The container runtime is the software installed on a host system that allows it to isolate resources for containers, pull down container images, and manage the lifecycle of containers. With respect to Kubernetes, a container runtime is required on every node in your cluster; without one, a node simply can’t run pods.
Container Runtime Use Cases
There are two main use cases for container runtimes:
- A higher-level container orchestration tool requires a container runtime to be installed. The container runtime acts as the backbone and does the heavy lifting of running the containers, while the orchestration tool communicates with the runtime to manage them.
- A user has varying workloads that are running on the same host, and wants to isolate them from each other. In this case, the user can directly interact with the container runtime to create and run containers. This is suitable if the workloads are not critical enough to warrant the overhead of using an orchestration tool like Kubernetes.
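As a quick illustration of the second use case, here’s a sketch of running a container directly through containerd’s bundled <terminal inline>ctr<terminal inline> CLI, with no orchestrator involved. This assumes containerd is installed and its daemon is running, and it typically requires root:

```shell
# Pull an image into containerd's local image store
sudo ctr images pull docker.io/library/alpine:latest

# Run a one-off container from that image; --rm cleans it up on exit
sudo ctr run --rm docker.io/library/alpine:latest demo echo "hello from the runtime"

# List containers known to containerd
sudo ctr containers list
```

This is exactly the kind of work an orchestrator would otherwise do for you: the runtime pulls the image, creates the container, and supervises its process, without any scheduler involved.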
A Closer Look at Container Runtimes
We’ll start by discussing a brief history of container runtimes, followed by the low-level container runtimes that adhere to the Open Container Initiative (OCI) runtime specification, and the high-level container runtimes that implement the Container Runtime Interface (CRI).
Brief History of Container Runtimes
In 2007, the Linux kernel mainlined cgroups, which opened the door for containerization. In the years that followed, many projects combined cgroups with kernel namespaces to enable containerization, the most notable being LXC, systemd-nspawn, and rkt.
In 2008, the company dotCloud (later renamed Docker) was founded. In 2013, it released Docker, which initially built on LXC with the goal of making containers easier to use. In 2014, Docker dropped LXC as the default runtime and adopted its own libcontainer library in its place. In 2015, Docker and other industry partners launched the Open Container Initiative, with the goal of establishing an open standard for container runtimes and images.
Open Container Initiative (OCI) Runtimes
The Open Container Initiative is a Linux Foundation project, primarily focused on defining open standards and specifications for container formats and runtimes.
The OCI Runtime Specifications mostly deal with managing the container lifecycle and configuration for various platforms, such as Linux, Windows, and Solaris. Container runtimes that conform to the OCI specification are considered “low-level” runtimes.
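Concretely, the OCI runtime spec defines a “bundle”: a directory containing a root filesystem plus a <terminal inline>config.json<terminal inline> describing the container. A heavily trimmed sketch of such a config might look like this (field names follow the OCI runtime spec; the values are purely illustrative):

```json
{
  "ociVersion": "1.0.2",
  "process": {
    "terminal": false,
    "user": { "uid": 0, "gid": 0 },
    "args": ["sh", "-c", "echo hello"],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": true
  },
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "mount" },
      { "type": "network" }
    ]
  }
}
```

A low-level runtime reads this file, sets up the requested namespaces and cgroups, and executes the process defined in <terminal inline>process.args<terminal inline> inside the root filesystem.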
The project’s flagship deliverable is runC, a low-level container runtime written in Go. runC was originally created by Docker (extracted from its libcontainer library), donated to the OCI, and serves as the reference implementation of the OCI runtime spec.
Low-level container runtimes are responsible solely for the creation and management of containers; higher-level concerns, like image management, are left to other tools.
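To make that concrete, here’s a sketch of driving the container lifecycle directly with runC. It assumes runc is installed and requires root, and it borrows a root filesystem from an existing image via Docker (any other source of a rootfs works just as well):

```shell
# Prepare an OCI bundle: a directory with a rootfs and a config.json
mkdir -p mycontainer/rootfs
cd mycontainer

# Export a root filesystem from an existing image into rootfs/
docker export "$(docker create alpine)" | tar -C rootfs -xf -

# Generate a default config.json for the bundle
runc spec

# Create and run the container defined by the bundle
sudo runc run mycontainer-id

# Inspect and clean up once the container's process has exited
sudo runc list
sudo runc delete mycontainer-id
```

Notice that runC does nothing image-related itself; the bundle had to be assembled first. That’s precisely the gap that the high-level runtimes below fill.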
There are other popular low-level container runtimes, including:
- crun: A container runtime project led by Red Hat, written in the C language.
- Railcar: Originally created by Oracle, written in Rust. It’s now deprecated.
The above-mentioned runtimes are all native runtimes—they run the containerized processes on the host kernel. There are a few sandboxed and virtualized runtimes, as well, which offer better isolation of processes by not running them directly on the host kernel. Popular examples include the following:
- gVisor and Nabla are examples of sandboxed runtimes. These runtimes run the containerized processes on a kernel proxy layer that interacts with the host kernel.
- runV, Clear Containers, and Kata Containers are examples of virtualized runtimes. These runtimes run the containerized processes inside lightweight virtual machines. Both runV and Clear Containers have been deprecated, and their features merged into the Kata Containers project.
Container Runtime Interface (CRI) Runtimes
There are also a number of high-level container runtimes that implement the Container Runtime Interface (CRI). In fact, this is the interface that the container runtime installed on the nodes of your Kubernetes cluster(s) must support.
When Kubernetes was first released, it used Docker as its hardcoded container runtime. As the platform grew, so did the need to support alternative runtimes. To make Kubernetes runtime-agnostic, the Container Runtime Interface was created. It’s a high-level specification focused on container orchestration: unlike the OCI spec, CRI covers additional aspects of container management, such as image management, snapshots, and networking, while delegating actual container execution to an OCI-compliant runtime.
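In practice, CRI is a gRPC API that the runtime serves over a Unix socket, with the kubelet as its client. You can speak that same API yourself using crictl, the CRI debugging CLI. A sketch, assuming containerd’s default socket path (adjust the endpoint for your runtime):

```shell
# Point crictl at the runtime's CRI socket (containerd's default shown here)
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock

# Ask the runtime for its identity and version over CRI
sudo -E crictl version

# List pod sandboxes, containers, and images the runtime knows about
sudo -E crictl pods
sudo -E crictl ps -a
sudo -E crictl images
```

This is useful for debugging node problems: if crictl can’t reach the runtime, neither can the kubelet.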
The container runtimes that adhere to the Container Runtime Interface (CRI) specification are:
- Dockershim was a component of Kubernetes that added the required CRI abstraction in front of Docker Engine to make Kubernetes recognize Docker Engine as CRI compatible. It was deprecated in Kubernetes v1.20 and removed in v1.24.
- containerd is the most popular container runtime engine. It was originally developed at Docker, which still builds Docker Engine on top of it, and was later donated to the Cloud Native Computing Foundation (CNCF). It uses runC under the hood for container execution.
- CRI-O was designed specifically for Kubernetes by Red Hat. It’s a lightweight CRI implementation that can drive any OCI-compatible runtime.
Container Runtimes and Kubernetes
Kubernetes supports any CRI-compliant container runtime. When you’re setting up a Kubernetes cluster, you must install a container runtime in each node of the cluster so that pods can be run there. However, in most cases, you don’t need to worry about container runtimes beyond this—they’re usually behind-the-scenes components.
To get started with a runtime, you can follow the Kubernetes documentation on container runtimes. Once you have a runtime installed, you can install kubeadm as usual and specify the container runtime to use. This is done by editing <terminal inline>/var/lib/kubelet/kubeadm-flags.env<terminal inline>, setting <terminal inline>--container-runtime=remote<terminal inline>, and pointing the <terminal inline>--container-runtime-endpoint<terminal inline> flag at the runtime’s socket. For example, for CRI-O, you’d use <terminal inline>--container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock<terminal inline>. Note, however, that kubeadm can automatically detect some of the most popular runtimes by scanning for their sockets.
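For instance, a <terminal inline>/var/lib/kubelet/kubeadm-flags.env<terminal inline> pointing the kubelet at CRI-O might look roughly like this (contents are illustrative; your file will typically carry additional flags set by kubeadm):

```shell
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock"
```

After editing the file, restart the kubelet (for example, with <terminal inline>systemctl restart kubelet<terminal inline>) for the change to take effect.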
Even though container runtimes are rarely thought of once they’ve been installed, there are a few scenarios where you would want to get more hands-on with the container runtime of a node.
- You need to change the container runtime. There can be multiple reasons for this, such as improving performance or addressing security concerns. If you’re using Docker as the container runtime and are worried about Kubernetes’ removal of dockershim, you can check whether the change affects you; if it does, you can switch to something like containerd.
- You need to upgrade the container runtime to fix bugs or patch security vulnerabilities.
Although very rare, these situations can happen.
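In either scenario, a useful first step is checking which runtime and version each node is actually using; the kubelet reports this to the API server:

```shell
# The CONTAINER-RUNTIME column shows each node's runtime and version
kubectl get nodes -o wide

# Or inspect a single node's reported runtime version
kubectl describe node <node-name> | grep "Container Runtime Version"
```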
Most would agree that Kubernetes is a complex platform. In this article, you dug into one of its lower layers, exploring the world of container runtimes and why they’re a vital component of the Kubernetes architecture. You touched on the different types of container runtimes and looked at examples of each to gain a better understanding of the container runtime landscape.
Container runtimes provide you with the ability to manage the lifecycle of containers. Containers allow you to run parallel workloads on the same system, and safely isolate them from one another. Without them, container orchestration platforms would not exist, and deploying containerized applications—especially those that utilize microservice architectures—would be far more complex.