With the release of Kubernetes version 1.3 just around the corner, we’d like to share a preview of the CoreOS contributions helping guide the community toward this important milestone. Kubernetes is seeing a lot of early adoption in the enterprise and that continues to drive rapid feature development. The CoreOS team chose to concentrate on a few key areas that will further Kubernetes uptake in data centers around the world:
- Authentication and authorization for Kubernetes APIs
- Improving the Kubernetes installation and upgrade process
- Scaling improvements through the introduction of a preliminary etcd v3 for cluster metadata
- And last but not least: rktnetes, an initial release of Kubernetes using the rkt container runtime
Authentication and authorization for Kubernetes APIs
Developers and operators like Kubernetes for its ability to deploy and manage containers across a cluster through an API. However, in any real world environment, it’s important to be able to control that access. Not every developer needs super-user rights on the CI cluster. Production deployments require even tighter controls, like managing access to individual resources, simplifying management by grouping users together, and continuously auditing user actions. Kubernetes needs new authentication (authN) and authorization (authZ) mechanisms to fulfill these requirements.
AuthN: OpenID Connect Identity Providers
As part of version 1.3, CoreOS developed an OpenID Connect (OIDC) AuthProvider plugin, allowing OIDC Identity Providers (IdPs) to authenticate kubectl and other clients on behalf of the API Server. This OIDC AuthProvider can be used to connect Kubernetes to any OIDC IdP for authentication decisions. In the cloud, popular OIDC IdPs include Google Identity and Amazon Cognito.
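To make this concrete, here is a sketch of the wiring involved (the issuer URL, client ID, and user name below are illustrative placeholders, not values from any real deployment): the API server is configured to trust identities asserted by an OIDC IdP, and kubectl is configured to present the resulting tokens.

```sh
# API server: accept identities asserted by an OIDC IdP
# (issuer URL, client ID, and claim are illustrative)
kube-apiserver \
  --oidc-issuer-url=https://accounts.example.com \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  ...

# kubectl: use the OIDC AuthProvider with tokens obtained from the IdP
kubectl config set-credentials alice@example.com \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://accounts.example.com \
  --auth-provider-arg=client-id=kubernetes \
  --auth-provider-arg=id-token=REDACTED \
  --auth-provider-arg=refresh-token=REDACTED
```

With a refresh token configured, the plugin can renew the short-lived ID token without the user re-authenticating for every kubectl invocation.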
Another OIDC Identity Provider is CoreOS’s open source Dex. Dex provides a flexible way to federate authentication with several services, including existing LDAP servers and Active Directory’s LDAP interface. Together, the Kubernetes OIDC AuthProvider and Dex integrate cluster access with an organization’s existing authentication policy.
AuthZ: Role Based Access Control
CoreOS developers have also been hard at work integrating Role Based Access Control (RBAC) into Kubernetes, and it has gone upstream with the “alpha” label in v1.3. This RBAC implementation is based on Red Hat’s work in OpenShift. Controlling access based on a user’s role is a familiar best practice at most organizations, and allows admins to regulate what a user can and can’t do according to job function. For example, users with the “QA Team” role may have read-only access to a subset of nodes in the production cluster used for release testing.
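Under the alpha RBAC API in 1.3, the “QA Team” scenario above might be expressed roughly as follows. This is a hedged sketch: the namespace, group, and resource names are hypothetical, and the alpha field names may change before RBAC stabilizes.

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  namespace: release-testing   # hypothetical namespace for release testing
  name: qa-read-only
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]   # read-only verbs
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  namespace: release-testing
  name: qa-team-read-only
subjects:
- kind: Group
  name: qa-team                # hypothetical group from the IdP
roleRef:
  kind: Role
  name: qa-read-only
```

Because roles and bindings are ordinary API resources, they can be created, audited, and revoked with kubectl rather than by editing static policy files.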
The RBAC system builds atop Kubernetes resources to make roles, rights, and relationships dynamic first-class API citizens, leaving behind the static flat files of previous Kubernetes authZ facilities. The RBAC features added in Kubernetes 1.3 simplify the creation of multi-tenant environments for different business groups, teams, or accounting regimes in an enterprise. We will examine the developing Kubernetes authN and authZ philosophy and features in a future post.
Simplifying Kubernetes installation and upgrades
Kubernetes can be thought of as an operating system for an entire cluster, and sometimes that complexity shows through in the process of installing and managing its various components. The current recommended update method — essentially “create a new cluster, migrate the workload, destroy the old cluster” — isn’t ideal for operations teams. CoreOS is part of the continuing community effort to make this process easier, and the following improvements are scheduled to merge in Kubernetes version 1.3:
- A number of upgrades and refinements to the popular AWS Kubernetes installation tool, kube-aws, and step-by-step documentation guides for multiple environments.
- Work towards supporting the “self-hosted” kubelet, which, as it matures, will underpin a seamless Kubernetes upgrade process controlled by the Kubernetes API itself.
- The kubelet can now run from within a rkt App Container Image (ACI). This makes kubelet updates a simple matter of deploying a new container image.
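As one sketch of what running the kubelet from an ACI might look like on a node (the image name, version tag, volumes, and flags here are illustrative assumptions, not the exact units CoreOS ships):

```ini
# Hypothetical systemd unit launching the kubelet from an ACI via rkt
[Unit]
Description=Kubelet running inside a rkt container

[Service]
ExecStart=/usr/bin/rkt run \
  --volume etc-kubernetes,kind=host,source=/etc/kubernetes \
  --mount volume=etc-kubernetes,target=/etc/kubernetes \
  quay.io/coreos/hyperkube:v1.3.0_coreos.0 \
    --exec=/hyperkube -- kubelet \
      --api-servers=https://master.example.com \
      --config=/etc/kubernetes/manifests
Restart=always

[Install]
WantedBy=multi-user.target
```

In this model, upgrading the kubelet means changing the image tag and restarting the unit, rather than re-provisioning the host.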
etcd v3 storage layer
You may already know that Kubernetes relies on the open source etcd key-value store for coordinating cluster state. Kubernetes version 1.3 adds experimental support for the new etcd v3 backend. The addition of etcd v3 sets a solid base for future Kubernetes scalability work: running more pods on more nodes, with more consistent performance under load. Version 3 of etcd also brings a more powerful feature set, including Multi-Version Concurrency Control (MVCC) and efficient multi-key transactions, and will help scale future releases of Kubernetes to thousands of nodes and beyond.
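Opting in to the experimental backend is a matter of API server configuration; as a sketch (the etcd endpoint is a placeholder):

```sh
# Select the experimental etcd v3 storage backend on the API server
kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --storage-backend=etcd3 \
  ...
```

The default remains the etcd v2 backend in this release, so existing clusters are unaffected unless the flag is set.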
At this stage, we’re looking to gather feedback on the new etcd backend, and use that shared experience to drive it toward release as a production feature in Kubernetes v1.4. We’ll share more details in a series of posts about etcd v3 changes, migration from etcd v2 to v3, and updates on etcd v3 and Kubernetes integration in the coming weeks.
rktnetes: rkt container engine in Kubernetes
The work referred to as rktnetes introduces a new container engine in Kubernetes. CoreOS’s open source rkt container runtime has useful features that align it with Kubernetes and a container cluster worldview, like considering the pod as the unit of control, and employing the Container Network Interface (CNI) to simplify container connectivity. rktnetes teaches Kubernetes to use rkt to fetch and execute container images on each node.
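Switching a node over to rkt is done through kubelet flags; as a sketch (the binary path shown is an example):

```sh
# Ask the kubelet to drive containers through rkt instead of Docker
kubelet \
  --container-runtime=rkt \
  --rkt-path=/usr/bin/rkt \
  ...
```

The rest of the Kubernetes control plane is unchanged; pods scheduled to the node are simply fetched and executed by rkt.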
In Kubernetes 1.3, rktnetes is in the “experimental” track and ready to run on many clusters. It is targeted to become “stable” in version 1.4. Work on rktnetes helps assure the generality of essential Kubernetes interfaces, and paves the way for a field of specialized, interchangeable container runtimes to bloom, each suited to the particular needs of an architecture, site, or deployment.
Kubernetes 1.3 and beyond: Join the mission to secure the Internet
For the reasons highlighted above, along with a bevy of other essential features made possible by hard work and clever thinking across the entire community, Kubernetes 1.3 is a substantial release, and the CoreOS team is proud to be a part of it. Watch this space and the official Kubernetes blog for details about the official release of Kubernetes v1.3 in the coming weeks, and start trying out rktnetes, the etcd v3 API, and the many other new features. If you’re into working on projects like Kubernetes and etcd at the cutting edge of production distributed systems, join us. If you're interested in learning more about how CoreOS can help with enterprise Kubernetes deployments, contact us to set up a private Kubernetes training session or try out Tectonic Starter.