Over the past two years, we’ve seen a shift in the way organizations think about and manage distributed applications. At CoreOS, work toward this shift began with fleet, a simple distributed service manager released in 2014. Today, the community is seeing widespread adoption of Kubernetes, a system with origins at Google that is becoming the de facto standard for open source container orchestration.
For numerous technical and market reasons, among them its extensible API, its broad community, and its roots in Google's production experience, Kubernetes is the best tool for managing and automating container infrastructure at massive scale.
To this end, CoreOS will remove fleet from Container Linux on February 1, 2018, and support for fleet will end at that time. fleet has already been in maintenance mode for some time, receiving only security and bugfix updates, and this move reflects our focus on Kubernetes and Tectonic for cluster orchestration and management. It also simplifies the deployment picture for users while keeping the automatically updated Container Linux operating system at the absolute minimum surface area and size.
New cluster deployments should use one of the following:
- CoreOS Tectonic for production Kubernetes clusters backed by expert support and turnkey deployments and updates, or
- open source Kubernetes on Container Linux, or
- open source minikube for a first look at Kubernetes in general.
After February 1, 2018, a fleet container image will continue to be available from the CoreOS Quay registry, but will not be shipped as part of Container Linux. Current fleet users with Container Linux Support can get help with migration from their usual support channel until the final deprecation date. We also include documentation on this migration below.
We’ll also stand ready to help fleet users with their questions on the CoreOS-User mailing list throughout this period. To help fleet admins get a head start, we’re hosting a live webinar on the move from fleet to Kubernetes with CoreOS CTO Brandon Philips on February 14 at 10 AM PT. It’s your chance to get questions answered live.
fleet: First steps on a journey
CoreOS started working on cluster orchestration from the moment we launched our company and our operating system, now known as CoreOS Container Linux. We were among the first developers exploring software containers as a way to allow automated deployment and scheduling on the cluster resources offered by cloud providers. The result of those early efforts was fleet, an open-source cluster scheduler designed to treat a group of machines as though they shared an init system.
A little less than a year into our work on fleet, Google introduced the open source Kubernetes project. We were flattered that it leveraged the CoreOS etcd distributed key-value backing store that we created for fleet and Container Linux, but more importantly Kubernetes offered direction and solutions we’d identified but not yet implemented for fleet. Kubernetes was designed around a solid, extensible API, and had already laid down code for service discovery, container networking, and other features essential for scaling the core concepts. Beyond that, it was backed by the decades of experience in the Google Borg, Omega, and SRE groups.
Kubernetes and Tectonic: How we orchestrate containers today
For those reasons, we decided to bet on Kubernetes as the future of our container orchestration plans, and dedicated developer resources to begin contributing to the Kubernetes code base and community right away, well before Kubernetes 1.0. CoreOS is also a charter member of the Cloud Native Computing Foundation (CNCF), the industry consortium to which Google donated the Kubernetes copyrights, making the software a truly industry-wide effort.
CoreOS developers lead Kubernetes release cycles, Special Interest Groups (SIGs), and have worked over the last two years to make Kubernetes simpler to deploy, easier to manage and update, and more capable in production. The CoreOS flannel SDN is a popular mechanism for Kubernetes networking, in part because the Kubernetes network interface model is the Container Network Interface (CNI) pioneered by CoreOS and now shared by many containerized systems. Our teams worked closely on the design and implementation of the Kubernetes Role-Based Access Control (RBAC) system, and our open-source dex OIDC provider complements it with federation to major authentication providers and enterprise solutions like LDAP. And of course, etcd, originally a data store for fleet, carries the flag of those early efforts into the Kubernetes era.
fleet explored a vision for automating many cluster chores, but as CEO Alex Polvi likes to say, Kubernetes "completed our sentence". We're thankful for the community's feedback on and support of fleet over the years, and beyond what you've done for fleet, we've carried your experiences and ideas forward into Kubernetes, Tectonic, and the current world of container cluster orchestration.
Getting started with Kubernetes on CoreOS Tectonic
If you’re deploying a new cluster, the easiest way to get started is to check out Tectonic. Tectonic delivers simple installation and automated upgrades of the cluster orchestration software, atop pure open source Kubernetes. A free license for a cluster of up to 10 machines is available to enable you to test your applications on either of the two supported platforms: AWS or bare metal in your own datacenter.
Note on minikube: An easy first look at Kubernetes
If you are new to container orchestration, minikube, a tool that runs Kubernetes locally, is an easy way to get a first look by deploying a cluster on your laptop or any local computer.
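As a quick illustration (assuming a current minikube release and kubectl are installed on your workstation), standing up and inspecting a local cluster takes only a few commands:

```shell
# Start a single-node Kubernetes cluster in a local VM
minikube start

# Confirm the node is up and see where the API server is running
kubectl get nodes
kubectl cluster-info

# Open the Kubernetes dashboard in your browser to explore the cluster
minikube dashboard

# Shut the local cluster down when you're finished
minikube stop
```

The exact driver and startup flags vary by platform; see the minikube documentation for the options appropriate to your environment.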
Getting started with Kubernetes on CoreOS Container Linux
To dive into the details of Kubernetes, take a look at the guides for deploying Kubernetes on CoreOS Container Linux. These docs offer a general introduction to Kubernetes concepts, some peeks under the covers, and paths to deployment on platforms beyond the initial two supported by Tectonic.
Sustaining fleet clusters with the fleet container
After fleet is removed from the Container Linux Alpha channel in February 2018, it will subsequently be removed from the Beta and Stable channels. After those releases, users who want to keep running fleet will need to run it in a container. A small wrapper script knows where to fetch the fleet application container and how to run it.
Admins can migrate toward a containerized fleet deployment by tuning this example fleet Ignition config. The Ignition machine provisioner can place the configured wrapper on intended fleet nodes and activate the service.
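To sketch the shape of such a migration (the unit name, image tag, and rkt flags below are illustrative assumptions; the linked example Ignition config is the canonical reference), an Ignition config can define a systemd unit that runs the fleet image from the CoreOS Quay registry:

```json
{
  "ignition": { "version": "2.0.0" },
  "systemd": {
    "units": [{
      "name": "fleet.service",
      "enable": true,
      "contents": "[Unit]\nDescription=fleet daemon (containerized)\nAfter=etcd-member.service\n\n[Service]\nExecStart=/usr/bin/rkt run --net=host docker://quay.io/coreos/fleet:v1.0.0\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\n"
    }]
  }
}
```

Provisioning a node with a config like this yields a fleet daemon that behaves as before, but is sourced from the container registry rather than shipped in the OS image.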
Next steps from fleet to Kubernetes
Get more information and register for our live webinar on February 14, which will be available for playback immediately afterward. And finally, sign up for a Kubernetes training in your area to get started with Kubernetes in a classroom with an expert from CoreOS.