During OpenStack Summit Austin, we partnered with Intel on an initiative to automate and orchestrate the deployment and management of OpenStack with Kubernetes. Today, we are pleased to announce a technical preview of the project, affectionately called "Stackanetes", available with the latest OpenStack release, "Newton". This tech preview of OpenStack on Kubernetes provides high availability with simple scaling and control-plane self-healing, virtual machine live migration, and the full complement of OpenStack IaaS features – deployed, managed, and scaled with Kubernetes automation. With today's preview, you can deploy OpenStack on Kubernetes in 15 minutes with a single kpm command.
Along with the ability to deploy, monitor, and manage the OpenStack control and compute planes with Kubernetes orchestration, this preview release also demonstrates the powerful modularity of the Kubernetes container runtime interface. The rkt container engine, an alternative to the default node container manager in Kubernetes, can now run the entire OpenStack deployment, including the Nova libvirt driver and the Neutron/Open vSwitch network agents. rkt's pluggable isolation offers a choice of execution mechanisms, ranging from standard software container isolation to full VM protection of container workloads – a choice that is especially interesting in an OpenStack context.
Simplifying OpenStack deployment and management
OpenStack has a reputation for complexity that can sometimes rival its power. Kubernetes cluster orchestration makes OpenStack much easier to deploy and manage. OpenStack is composed of several stateless applications that communicate to provide services – a familiar model in the world of microservices that Kubernetes and containers call home. Of course, OpenStack also relies on a set of stateful data stores that underpin those services. The Stackanetes work demonstrates these applications can also be deployed, scaled, and managed by Kubernetes.
Controlling OpenStack installation dependencies
Ordering dependencies is the linchpin of OpenStack installation. Services need to deploy, start, and register in a precise order. Isolating the OpenStack components in containers orchestrated by Kubernetes makes enforcing this order much easier. Kubernetes Jobs provide a mechanism for one-time initialization procedures such as Keystone registration. A new entrypoint in each container in turn allows OpenStack applications to be aware of their own dependencies, and to query the Kubernetes API to verify their availability. These two facilities let us define and enforce the entire OpenStack initialization workflow.
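As a rough illustration of the first facility, an initialization step like Keystone's database sync can be expressed as a Kubernetes Job that retries until it succeeds. This is a minimal sketch only – the image name, labels, and command are hypothetical, not the actual Stackanetes manifests:

```yaml
# Hypothetical example of an OpenStack init task run as a Kubernetes Job.
apiVersion: batch/v1
kind: Job
metadata:
  name: keystone-db-sync
spec:
  template:
    metadata:
      labels:
        app: keystone-db-sync
    spec:
      # OnFailure makes Kubernetes retry the container until the
      # initialization step completes successfully.
      restartPolicy: OnFailure
      containers:
      - name: keystone-db-sync
        image: example.registry/keystone:newton   # illustrative image name
        command: ["keystone-manage", "db_sync"]
```

The second facility – the dependency-aware entrypoint – would then have each service container wait, on startup, until the Kubernetes API reports its prerequisites (for example, the database and Keystone endpoints) as available before launching the OpenStack process itself.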
Scaling and service discovery for OpenStack
It takes less than three hundred lines of the YAML markup that defines Kubernetes objects to declare a highly available and self-healing Glance deployment, including its initialization tasks. The Kubernetes Services abstraction provides cluster-wide service discovery and load balancing to the OpenStack deployment, forever banishing static IP addresses from OpenStack configuration files. Coupled with the recently added Kubernetes Ingress resources, the same services can securely expose the OpenStack APIs to internet users.
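To show how Services replace static IPs, here is a hedged sketch of what a Glance API Service might look like – the names, labels, and selector are illustrative assumptions, not the actual Stackanetes definitions:

```yaml
# Hypothetical Service fronting the Glance API pods.
apiVersion: v1
kind: Service
metadata:
  name: glance-api
spec:
  selector:
    app: glance-api      # matches the pods created by the Glance deployment
  ports:
  - name: api
    port: 9292           # Glance's conventional API port
    targetPort: 9292
```

With this in place, other OpenStack components can reference Glance by its stable DNS name (e.g. `http://glance-api:9292`) in their configuration files, and Kubernetes load-balances requests across whatever Glance replicas are currently healthy.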
Automatic zero downtime OpenStack upgrades
Perhaps most powerfully, Stackanetes harnesses the native Kubernetes rolling update automation and, in a few cases, some basic traffic shifting, to make manual upgrades of the OpenStack control plane obsolete.
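The rolling-update behavior comes from the standard Deployment update strategy. The following fragment is an illustrative sketch – the service name, replica count, and image are assumptions, not the actual Stackanetes values:

```yaml
# Hypothetical rolling-update settings for an OpenStack control-plane service.
apiVersion: extensions/v1beta1   # Deployments were still beta in this era
kind: Deployment
metadata:
  name: nova-api
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # keep at least two replicas serving during an upgrade
      maxSurge: 1         # allow one extra replica while new pods come up
  template:
    metadata:
      labels:
        app: nova-api
    spec:
      containers:
      - name: nova-api
        image: example.registry/nova:newton   # illustrative image name
```

Updating the image tag in the Deployment then triggers Kubernetes to replace pods one at a time, keeping the Nova API serving traffic throughout the upgrade.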
CoreOS rkt runs Stackanetes
CoreOS is a founding member of the Cloud Native Computing Foundation (CNCF) that manages the Kubernetes project, and a leading contributor. Several key Kubernetes pieces, such as the etcd distributed key-value store, began at CoreOS. We worked with the community to modularize the Kubernetes interface to node container runtimes, including the rkt container engine. This work is about more than just rkt. It's also about refining and exercising Kubernetes interfaces, and paving the way for other modular runtimes in the future.
Keeping Kubernetes open and modular
As it has matured, the work around rkt and other container engines has grown into a more formalized Container Runtime Interface (CRI), to match the modular approach to managing dynamic subsystems taken by another Kubernetes foundation component, the Container Network Interface (CNI). These architectural facilities make Kubernetes more powerful, more flexible, and more capable of supporting the intricacies of real world infrastructure.
Containerizing low-level applications with rkt
With today's tech preview release, we are proud to announce that Kubernetes clusters using the rkt container engine can support a full OpenStack deployment on par with the default container runtime. In fact, rkt's architecture facilitates putting certain low-level applications, for instance the Nova libvirt driver, into containers so they can be orchestrated on the cluster.
Kubernetes makes OpenStack easier
Kubernetes's features, flexibility, and scalability make Stackanetes a robust solution for managing OpenStack. The combined effort between CoreOS, Intel, and Mirantis demonstrates that Kubernetes is a mature solution for improving the lifecycle management, monitoring, and scaling of complex production platforms like OpenStack.
Try it out – deploy OpenStack on Kubernetes today in 15 minutes with a single kpm command. To learn more about running OpenStack on Kubernetes, please see the reference architecture here. If you are on site at OpenStack Summit Barcelona, check out the talk from CoreOS, Intel, and Mirantis on October 27th.