Kubernetes is powerful container management software inspired by Google’s operational experience with containers. Essential features like service discovery, automatic load-balancing, container replication and more are built in. Plus, it’s all powered via an HTTP API.
If you already have Docker containers that you'd like to launch and load balance, Kubernetes is the best way to run them.
These docs are the best place to learn how to install, run and use Kubernetes on CoreOS. Anyone can submit changes to these docs via GitHub. For more in-depth support, join #coreos on IRC, email the dev list, or file a bug.
CoreOS runs on most cloud providers, virtualization platforms and bare metal servers. Running a local VM on your laptop is a great dev environment. Following the Quick Start guide is the fastest way to get set up.
Learn how your containers are grouped into pods and how they communicate with each other
Launch and scale your pods in a highly available manner
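These two ideas can be sketched in a single manifest: a pod template groups the containers that should run together, and a ReplicationController keeps a fixed number of pod replicas running. The names, labels, and image below are illustrative placeholders, not part of any specific guide:

```yaml
# Illustrative example: keep three replicas of an nginx pod running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

If a pod dies, the ReplicationController notices the replica count has dropped below 3 and schedules a replacement, which is what makes scaling and high availability declarative rather than manual.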
Follow these guides to install and configure Kubernetes on CoreOS. Each of these guides follows Kubernetes best practices and embodies the "CoreOS Way":
Deploy a production cluster on AWS
Deploy a production cluster on bare metal
You provide TLS certs, scripts do the rest
Get up and running anywhere you can run CoreOS:

Step 1: Getting Started
Step 2: Deploy Kubernetes Master
Step 3: Deploy Kubernetes Workers
Step 4: Configure Kubectl
Step 5: Deploy Add-ons

Create CA and Cluster Certificates with OpenSSL
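The certificate step starts with a cluster CA. A minimal sketch with stock OpenSSL commands looks like the following; the CN and validity period are illustrative values you would adjust for your cluster:

```shell
# Generate a 2048-bit RSA private key for the cluster CA.
openssl genrsa -out ca-key.pem 2048

# Create a self-signed CA certificate from that key.
# "-nodes" leaves the key unencrypted so automated tooling can read it.
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 \
  -out ca.pem -subj "/CN=kube-ca"
```

The resulting `ca.pem` and `ca-key.pem` are then used to sign the API server and worker certificates in the later steps.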
Upgrading your cluster is done by modifying the pod manifests on each worker or controller node that runs the Kubernetes services, such as the proxy and podmaster.
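For example, upgrading a service like the proxy amounts to editing its static pod manifest on the node; the kubelet watches the manifest directory and restarts the pod with the new image. The path, image, and flags below are illustrative assumptions, not the exact contents of any guide:

```yaml
# /etc/kubernetes/manifests/kube-proxy.yaml (illustrative path)
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.1.2_coreos.0  # bump this tag to upgrade
    command:
    - /hyperkube
    - proxy
    - --master=https://127.0.0.1:443
```

Because the kubelet reconciles running pods against these on-disk manifests, no cluster-wide tooling is needed for the upgrade itself.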
The CoreOS Kubernetes guides reflect the best practices for configuring Kubernetes and deploying software the "CoreOS Way". Below you'll find information on how CoreOS works with the upstream Kubernetes project and community.