Kubernetes on CoreOS

Achieve high availability and improve the efficiency of container deployments

Kubernetes is powerful container management software inspired by Google's operational experience with containers. Essential features such as service discovery, automatic load balancing, and container replication are built in, and it's all driven through an HTTP API.

If you already have Docker containers that you'd like to launch and load balance, Kubernetes is the best way to run them.
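For example, if you have an existing Docker image (the image name below, quay.io/example/hello:1.0, is hypothetical), a Deployment and Service manifest along these lines replicates the container and load balances traffic across the replicas; the exact fields depend on your Kubernetes version:

```yaml
# Sketch of a Deployment that runs three replicas of an existing Docker image,
# plus a Service that load balances across them. Names and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: quay.io/example/hello:1.0   # any existing Docker image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
```

Submitting both objects with `kubectl apply -f hello.yaml` (or through the HTTP API directly) is enough to get a replicated, load-balanced service.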

These docs are the best place to learn how to install, run, and use Kubernetes on CoreOS Container Linux. Anyone can submit changes to these docs via GitHub. For more in-depth support, jump into #coreos on IRC, email the dev list, or file a bug.

For commercial support, Tectonic delivers self-driving Kubernetes for the enterprise, and is available in a free tier for small clusters.


Kubernetes Concepts

Container Linux runs on most cloud providers, virtualization platforms, and bare-metal servers. Running a local VM on your laptop makes a great development environment. Following the Quick Start guide is the fastest way to get set up.


Installing Kubernetes

Follow these guides to install and configure Kubernetes on Container Linux. Each of these guides follows Kubernetes best practices and embodies the "CoreOS Way":


Upgrading Kubernetes

Upgrade your cluster by modifying the pod manifests for the Kubernetes services (proxy, podmaster, and so on) on each worker and controller node.
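For instance, a worker's kube-proxy is typically defined by a static pod manifest similar to the sketch below; the path, image tag, and flags are illustrative, and an upgrade amounts to editing the image tag (and any changed flags) so the kubelet restarts the pod with the new version:

```yaml
# Illustrative /etc/kubernetes/manifests/kube-proxy.yaml on a worker node.
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.6.1_coreos.0   # bump this tag to upgrade
    command:
    - /hyperkube
    - proxy
    - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
    securityContext:
      privileged: true
```

The kubelet watches the manifest directory, so saving the edited file is all that is needed to roll that node to the new version.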


Building on Open Source Kubernetes

The CoreOS Kubernetes guides reflect the best practices for configuring Kubernetes and deploying software the "CoreOS Way". Below you'll find information on how CoreOS works with the upstream Kubernetes project and community.