Pluggability is part of the success story of Kubernetes, and as a community we have ensured that many layers – including storage, networking, and schedulers – can be replaced and improved without changing the Kubernetes user experience. Earlier this year, the Kubernetes project created an API called the Container Runtime Interface (CRI) to make the way a container is run on Kubernetes pluggable. This week, two projects implementing the Kubernetes CRI API are nearing maturity: CRI-O, which has hit version 1.0.0, and containerd, which is on 1.0.0-beta.2 and is anticipated to leave beta soon.
These releases are early milestones towards production use, so congratulations to the teams working on the projects. And, as is the case with most projects in their early days, today may be the first time many people are hearing about these efforts. So, we wanted to provide some history and context on their use inside the Kubernetes ecosystem.
If you are a day-to-day user of Kubernetes, you can stop reading here if you want. The CRI and its various implementations, like Docker Engine, containerd, and CRI-O, are intended to be transparent to the user and leave the Kubernetes API and experience unchanged. Largely, they are internal plumbing of the system, and most implementations will share 99 percent of the same features; by analogy, it's like having GNU sed vs. BSD sed on your laptop. And if you are a CoreOS Tectonic user, you can be confident that our full-stack automated operations and flexible cloud and on-premises installer will enable you to deploy and operate the best possible configuration of Kubernetes over time as the internals change.
The terminology of containers
Before we get into what these projects mean for the Kubernetes community, let's first do a quick refresher on the terminology:
- Container images package all dependencies for an application into a single named asset that can be copied between many machines. Think of them as the server equivalent of a mobile app: self-contained and easily installed.
- Container runtimes, in the Kubernetes ecosystem, are internal components of the Kubernetes system that download and run applications packaged in container images. Examples of container runtimes include containerd, CRI-O, Docker Engine, frakti, and rkt.
- The Open Container Initiative (OCI) Runtime Specification defines a schema for host-specific container runtime details like user IDs, bind mounts, kernel namespaces, and cgroups that are created when a container image is launched. This format is important for ensuring container runtime interoperability, but isn't something an infrastructure administrator will interact with in day-to-day work, if at all.
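To make that last point concrete, here is a heavily abridged sketch of the `config.json` file that an OCI-compliant runtime consumes. The field names follow the OCI Runtime Specification, but the values are purely illustrative (a real file generated by a runtime contains many more fields):

```json
{
  "ociVersion": "1.0.0",
  "process": {
    "user": { "uid": 1000, "gid": 1000 },
    "args": ["/usr/bin/myapp"],
    "cwd": "/"
  },
  "root": { "path": "rootfs", "readonly": true },
  "mounts": [
    {
      "destination": "/data",
      "type": "bind",
      "source": "/var/lib/myapp",
      "options": ["rbind", "rw"]
    }
  ],
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "network" },
      { "type": "mount" }
    ],
    "cgroupsPath": "/myapp"
  }
}
```

The runtime reads this file, creates the requested kernel namespaces and cgroups, performs the bind mounts, and then launches the process as the specified user; this is exactly the kind of host-specific detail that stays below the level most administrators ever touch.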
As containers have seen wide and accelerating use over the last few years, driven by their popularity with developers and infrastructure owners alike, the need arose to ensure long-term format stability and inter-tool compatibility. We helped start the industry conversation around container interoperability in 2014 with appc. Since then, we and other container industry companies formed the Open Container Initiative (OCI), which, after two years of work, recently published v1.0 of its image and runtime specifications.
It is partly from this effort that containerd was born. Drawing upon code from the Docker Engine, containerd is a standalone container runtime component that implements the OCI specifications. Similarly, the CRI-O project started from OCI-developed components with the goal of creating a from-scratch Container Runtime Interface implementation for Kubernetes. Today both CRI-O and containerd can download container images, set up containers using the OCI runtime format, and run them.
Container runtimes and how they work in Kubernetes
Kubernetes 1.5, released in late 2016, was the first version of Kubernetes to implement the Container Runtime Interface. An abstraction layer based on work done in the Kubernetes project's SIG Node, along with additional work from Google, CoreOS, and others, the CRI defines an interface between Kubernetes and the container runtime. The goal was twofold: to enable mocking of this interface for testing, and to allow container runtimes such as Docker Engine, CRI-O, rkt, and containerd to be swapped into Kubernetes.
More details about the CRI itself are available on the Kubernetes Project's Container Runtime Interface blog post.
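In rough terms, the CRI is a pair of gRPC services that the kubelet calls into. The sketch below is a heavily abridged view of the interface shape; the actual protobuf definitions live in the Kubernetes source tree and include many more RPCs, and the request/response message types are elided here for brevity:

```protobuf
// Abridged sketch of the CRI gRPC interface. Message
// definitions and most RPCs are omitted.
service RuntimeService {
  // Pod sandbox lifecycle: the shared environment (network
  // namespace, etc.) that a pod's containers run inside.
  rpc RunPodSandbox(RunPodSandboxRequest) returns (RunPodSandboxResponse) {}
  rpc StopPodSandbox(StopPodSandboxRequest) returns (StopPodSandboxResponse) {}

  // Container lifecycle within a sandbox.
  rpc CreateContainer(CreateContainerRequest) returns (CreateContainerResponse) {}
  rpc StartContainer(StartContainerRequest) returns (StartContainerResponse) {}
  rpc StopContainer(StopContainerRequest) returns (StopContainerResponse) {}
}

service ImageService {
  // Pulling and inspecting container images.
  rpc PullImage(PullImageRequest) returns (PullImageResponse) {}
  rpc ListImages(ListImagesRequest) returns (ListImagesResponse) {}
}
```

Any runtime that serves these two gRPC services can, in principle, be plugged in beneath the kubelet, which is what allows projects like CRI-O and containerd to be swapped in without changing the Kubernetes API.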
Container Runtime Interface impact
Although projects like containerd and CRI-O are interesting developments and well worth watching, so far the primary benefits of the CRI for the Kubernetes community have been better code organization and improved code coverage in the Kubelet itself, resulting in a code base that's both higher quality and more thoroughly tested than ever before.
For nearly all deployments, however, we expect the Kubernetes community will continue to use Docker Engine in the near term, because introducing a new container runtime could break existing functionality without delivering immediate new value to existing Kubernetes users.
The work on the CRI has made it easy to swap container runtimes inside Kubernetes and make the end-to-end tests pass. Over the last three years, however, a plethora of integrations between Kubernetes and the Docker Engine have created dependencies that make assumptions about things like log paths on disk, Linux container internals, container image storage, and other subtle interactions.
Working out these dependencies will require adjustments and improved abstractions, which will take time to develop. And until these changes are made, any discussion of whether containerd, CRI-O, or any other CRI implementation is best for the Kubernetes community will remain theoretical for most production users. In all, we look forward to continued work with the Kubernetes community on the next steps.
More options for container runtimes? No problem.
If you are a user of Kubernetes, rest assured that CoreOS will help you navigate and adopt the best-in-breed container infrastructure, as we have done consistently in the past. Readers new to the ecosystem can take this away: as the ecosystem of container runtimes matures, CoreOS will evaluate their suitability in production Kubernetes environments as compared to the Docker Engine. If alternative container runtimes provide significant improvements to Kubernetes users, we will advocate for their adoption in upstream projects and provide them as supported options on the CoreOS Tectonic platform.
Further, the entire Tectonic platform has been designed to enable our customers to navigate and upgrade through inevitable internal changes to the Kubernetes platform with its unique automated operations capabilities. As the Kubernetes ecosystem continues to evaluate container runtime options to ship in future releases of Kubernetes, Tectonic is ready to enable our customers with the flexibility of leveraging the best and most secure choice.
If you'd like to see for yourself how Tectonic is the easiest route to enterprise-ready Kubernetes in your environment today, you can download Tectonic and deploy a cluster of up to ten nodes for free. Or, if you'd just like to explore Tectonic on your own laptop, the Tectonic Sandbox is a unique test and experimentation environment that you can run on your own machine, with no additional hardware and without a cloud account.
Experiment locally for free