Today, we celebrate this week’s release of Kubernetes 1.12, which brings many incremental feature enhancements and bug fixes that help close issues encountered by enterprises adopting modern containerized systems. Each release cycle, we’re frequently asked about the theme of the release. There are always exciting enhancements to highlight, but an important theme to note is trust and stability.
The Kubernetes project has grown immensely over the last few years and has come to be respected as a leader in container orchestration and management solutions. With that stature comes the responsibility to build APIs and tools that are well-tested, easy to maintain, highly performant, and scalable: in short, trusted and stable. In each upcoming release cycle, we expect to see a continued community effort to prioritize the maturation and stabilization of existing functionality over the delivery of new features.
Red Hat has been a leader in these efforts, currently working across a substantial number of upstream Kubernetes SIGs (Special Interest Groups), the Release Team, and the Steering Committee so that Kubernetes as a project can continue to deliver a product that is trusted by both enterprises and community members.
Improving TLS in Kubernetes
Security is a cornerstone of what we aim to provide with Kubernetes. Since Kubernetes 1.4, we’ve been working on a framework to provide cluster operators the ability to manage TLS assets for the Kubernetes control plane components and the kubelet.
In Kubernetes 1.12, TLS bootstrapping officially graduates to general availability! This feature significantly streamlines the process of adding nodes to and removing nodes from a cluster.
TLS bootstrapping is just the beginning of the story though. Cluster operators are also responsible for ensuring the TLS assets they manage remain up-to-date and can be rotated in the face of security events. With this in mind, a mechanism for generating CSRs (Certificate Signing Requests) and submitting them to an in-cluster CA (Certificate Authority) was developed. We will see this server certificate rotation functionality graduate to beta in this release.
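As a rough illustration of how an operator opts into this behavior, here is a minimal KubeletConfiguration sketch. The field names reflect the upstream kubelet configuration API; whether server certificate rotation is honored also depends on the cluster's feature gates and the in-cluster CA approving the resulting CSRs, so treat this as a starting point rather than a complete setup.

```yaml
# Sketch: kubelet configuration enabling certificate bootstrapping and rotation.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate the kubelet's client certificate by submitting CSRs as expiry approaches.
rotateCertificates: true
# Request the kubelet's serving certificate from the cluster CA (beta; also
# requires the RotateKubeletServerCertificate feature gate on the kubelet).
serverTLSBootstrap: true
```

Server certificates requested this way still appear as CertificateSigningRequest objects that an approver (human or automated) must sign before the kubelet can use them.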
Multitenancy Gets Even Better
Multitenancy is the principle that software can be architected in a manner that allows for multiple “tenants”, while maintaining some degree of isolation between them. We see this pattern in websites and databases, so it’s fair to assume that this should be possible for infrastructure as well.
Classically, infrastructure operators will define these boundaries through a combination of configuring isolated hardware and defining routing restrictions (route tables, VLANs, firewall rules). As the lines continue to blur between the physical, virtual, and containerized environments, we need to adapt. Being able to leverage multitenancy features in Kubernetes means we can further take advantage of the economies of scale of using a container orchestrator, while keeping our tenants safe.
One of the problems we face is determining what priority one tenant’s workload should have relative to another tenant’s workload across the cluster. This release adds the ability to scope resource quotas by priority via the new ResourceQuotaScopeSelector feature. This enhances the existing priority and preemption feature that was delivered in Kubernetes 1.11.
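To make this concrete, the sketch below shows a ResourceQuota scoped to pods of a particular PriorityClass. The quota, namespace, and PriorityClass names here are hypothetical; the `scopeSelector` structure follows the upstream ResourceQuota API.

```yaml
# Sketch: a quota that only counts pods running at the "high" priority class.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: high-priority-quota   # hypothetical name
  namespace: tenant-a         # hypothetical tenant namespace
spec:
  hard:
    pods: "10"
    requests.cpu: "8"
  scopeSelector:
    matchExpressions:
    - scopeName: PriorityClass
      operator: In
      values: ["high"]        # hypothetical PriorityClass name
```

Pods in `tenant-a` with a different (or no) priority class would not be counted against this quota, letting operators budget high-priority capacity per tenant.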
On the network security front, two NetworkPolicy components graduate to GA: egress and ipBlock.
Egress, as the name implies, enables administrators to define how network traffic leaves a Pod and which segments of a network that traffic can be delivered to. The ipBlock functionality allows CIDR ranges to be used in NetworkPolicy definitions.
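Here is a minimal sketch combining the two: a NetworkPolicy that restricts egress from all pods in a (hypothetical) tenant namespace to a CIDR range, carving out one subnet as an exception. The names and addresses are illustrative; the structure follows the `networking.k8s.io/v1` API.

```yaml
# Sketch: allow egress only to 10.0.0.0/16, except the 10.0.5.0/24 subnet.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress   # hypothetical name
  namespace: tenant-a     # hypothetical tenant namespace
spec:
  podSelector: {}         # empty selector: applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/16
        except:
        - 10.0.5.0/24
```

Because the policy declares `Egress` in `policyTypes`, any traffic not matched by an egress rule is denied, which is what gives the tenant boundary its teeth.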
Autoscaling Improvements
HPA (Horizontal Pod Autoscaler) and VPA (Vertical Pod Autoscaler) are analogous to non-containerized autoscaling principles: the HPA reacts to changes in load by creating or deleting pods, while the VPA expands or contracts the resource requests of individual pods. Each of these features helps alleviate the need for a cluster operator to provision additional compute nodes to handle capacity issues.
For Kubernetes 1.12, the HPA has improved algorithms to help ensure that the pod count reaches the appropriate size faster, with fewer spikes when scaling down. Additionally, a new version of the HPA API (v2beta2) gives improved support for using custom metrics in a wider range of situations to determine when the pod count needs to increase or decrease.
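For reference, a minimal `autoscaling/v2beta2` HPA manifest scaling on a custom per-pod metric might look like the sketch below. The Deployment name and the metric name are hypothetical and would need to be served by a metrics adapter in a real cluster.

```yaml
# Sketch: scale a Deployment on a custom per-pod metric (v2beta2 API).
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: requests_per_second   # hypothetical custom metric
      target:
        type: AverageValue
        averageValue: "100"         # scale to keep ~100 req/s per pod
```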
The VPA, an optional add-on component for Kubernetes clusters, moves to beta in this release, continuing this theme of stabilizing existing ecosystem components.
CSI Support Continues to Evolve
As consumers move more and more stateful workloads into Kubernetes, it’s important to provide a stable framework to present storage to clusters. CSI (Container Storage Interface) is that framework and we’ve observed consistent improvements to it over the last few release cycles.
One critical aspect of storage presentation is considering where the storage is located relative to pods. Having storage geographically distant from Kubernetes introduces latency and reliability concerns. To resolve this, CSI now supports the notion of topology awareness and this functionality moves to beta in Kubernetes 1.12. What this means is that stateful workloads can now have a conceptual understanding of where storage resources live, whether it be a rack, datacenter, availability zone, or region.
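One common way topology awareness surfaces to operators is through a StorageClass that defers volume binding until a pod is scheduled and constrains where volumes may be provisioned. The sketch below assumes a hypothetical CSI driver name and uses the zone label common in this era of Kubernetes.

```yaml
# Sketch: topology-aware StorageClass for a hypothetical CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-zonal               # hypothetical name
provisioner: csi.example.com     # hypothetical CSI driver
# Delay binding until a pod is scheduled, so the volume lands near the pod.
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-east-1a                 # hypothetical zone
```

With `WaitForFirstConsumer`, the scheduler picks a node first and provisioning respects that node's topology, avoiding the case where a volume is created in a zone no suitable node occupies.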
Building an Extensible Framework for kubectl
Finally, we’re excited to announce kubectl plugins, introduced as alpha in this release. The design follows the familiar git-style plugin model. As operators become more entrenched in using kubectl on a day-to-day basis, patterns have developed around common use cases, like targeting a kubectl command at a specific namespace.
With kubectl plugins, developers can build extensions to kubectl that accommodate their administration scenarios without being baked into the core kubectl codebase. This allows teams to develop and deliver kubectl functionality faster and in a more consistent manner.
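In the git-style model, a plugin is simply an executable named `kubectl-<name>` placed on your PATH, and it becomes invocable as `kubectl <name>`. Below is a toy sketch of such a plugin script; the plugin name and its behavior are purely illustrative.

```shell
#!/usr/bin/env sh
# Hypothetical plugin: save as an executable named "kubectl-ns" on your PATH,
# then invoke it as `kubectl ns <namespace>`. It just echoes its target
# namespace; a real plugin would shell out to kubectl with that namespace.
ns="${1:-default}"
echo "operating in namespace: $ns"
```

After `chmod +x kubectl-ns` and dropping it into a directory on PATH, `kubectl ns team-a` would route to this script with `team-a` as its first argument.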
A prime example of this is the oc tool, which we use to interact with OpenShift.
Kicking the Tires on Kubernetes 1.12
Kubernetes 1.12 is scheduled to be released this week and will be available through the Kubernetes project’s official release channels.
Get ready to be able to leverage all of these improvements to the security, scalability, and extensibility of the Kubernetes ecosystem!
For enterprises interested in productized Kubernetes 1.12, we’re planning to include this in a future release of Red Hat OpenShift.
We’ll be hosting an OpenShift Commons Briefing on October 4 at 9 AM PT to discuss Kubernetes 1.12. Join me, Red Hatter and Kubernetes Product Management Chair, Stephen Augustus, for a deep dive on each of the highlighted features as well as a look into what we expect to see in future release cycles.