Upgrading Kubernetes

There's a better way! The Tectonic Installer provides a Terraform-based Kubernetes installation. It is open source, uses upstream Kubernetes, and can be easily customized: Install Kubernetes with Tectonic.

This guide will not be maintained going forward and is not tested against current releases of Kubernetes.

This document describes how to upgrade the Kubernetes components on a cluster's master and worker nodes. For general information on Kubernetes cluster management and upgrades (including more advanced topics such as major API version upgrades), see the upstream Kubernetes documentation and version upgrade notes.

NOTE: The following upgrade documentation is for installations based on the CoreOS + Kubernetes step-by-step installation guide.

Upgrading the Kubelet

The Kubelet runs on both master and worker nodes, and is distributed as a hyperkube container image. The image version is usually set as an environment variable in the kubelet.service file, which is then passed to the kubelet-wrapper script.

To update the image version, modify the kubelet service file on each node (/etc/systemd/system/kubelet.service) to reference the new hyperkube image.

For example, modifying the KUBELET_IMAGE_TAG environment variable in the following service file would change the container image version used when launching the kubelet via the kubelet-wrapper script.

/etc/systemd/system/kubelet.service

Environment=KUBELET_IMAGE_TAG=v1.6.1_coreos.0
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --api-servers=https://master [...]
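The tag bump above can be scripted. The following sketch performs the edit against a local sample copy of the unit file so the steps are visible end to end; on a real node you would edit /etc/systemd/system/kubelet.service in place, then reload systemd and restart the kubelet. The target tag v1.6.2_coreos.0 is an assumed example version, not a recommendation.

```shell
# Sketch of the kubelet image bump, shown against a sample copy of the unit
# file. On a real node, edit /etc/systemd/system/kubelet.service instead.
cat > kubelet.service <<'EOF'
Environment=KUBELET_IMAGE_TAG=v1.6.1_coreos.0
ExecStart=/usr/lib/coreos/kubelet-wrapper
EOF

NEW_TAG="v1.6.2_coreos.0"  # assumed example target version
sed -i "s/^Environment=KUBELET_IMAGE_TAG=.*/Environment=KUBELET_IMAGE_TAG=${NEW_TAG}/" kubelet.service

grep KUBELET_IMAGE_TAG kubelet.service
# On a real node, follow the edit with:
#   sudo systemctl daemon-reload && sudo systemctl restart kubelet
```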

Upgrading Calico

The Calico agent runs on both master and worker nodes and is distributed as a container image. It runs self-hosted under Kubernetes.

To upgrade Calico, follow the documentation here.

Note: If you are running Calico as a systemd service, you will first need to change to a self-hosted install by following this guide.

Upgrading Master Nodes

Master nodes run the following Kubernetes components:

  • kube-proxy
  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler
  • policy-controller

While upgrading the master components, user pods on worker nodes will continue to run normally.

Upgrading Master Node Components

The master node components (kube-controller-manager, kube-scheduler, kube-apiserver, and kube-proxy) run as "static pods": the pod definition is a file on disk (default location: /etc/kubernetes/manifests). To update one of these components, simply update its static manifest file. When the manifest changes on disk, the kubelet will pick up the change and restart the local pod.

For example, to upgrade the kube-apiserver version you could update the pod image tag in /etc/kubernetes/manifests/kube-apiserver.yaml:

From: image: quay.io/coreos/hyperkube:v1.0.6_coreos.0

To: image: quay.io/coreos/hyperkube:v1.0.7_coreos.0
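That one-line change can also be made with sed. The sketch below runs against a local sample copy of the manifest so the result is easy to inspect; on a master node you would edit /etc/kubernetes/manifests/kube-apiserver.yaml in place.

```shell
# Sketch: bump the hyperkube image tag in the kube-apiserver manifest.
# Shown against a sample copy; on a master node, edit
# /etc/kubernetes/manifests/kube-apiserver.yaml directly.
cat > kube-apiserver.yaml <<'EOF'
spec:
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.0.6_coreos.0
EOF

sed -i 's|hyperkube:v1.0.6_coreos.0|hyperkube:v1.0.7_coreos.0|' kube-apiserver.yaml
grep 'image:' kube-apiserver.yaml
```

Once the file is saved, no further action is needed: the kubelet watches the manifest directory and restarts the pod with the new image.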

In high-availability deployments, the control-plane components (apiserver, scheduler, and controller-manager) are deployed to all master nodes. Upgrading these components therefore requires updating them on each master node.

NOTE: Because a particular master node may not be elected to run a particular component (e.g. kube-scheduler), updating the local manifest may not update the currently active instance of the Pod. You should update the manifests on all master nodes to ensure that no matter which is active, all will reflect the updated manifest.
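One way to keep all masters in sync is a small loop over their hostnames. The hostnames below are assumed placeholders, and the loop is shown as a dry run (it only prints the commands); drop the `echo` to copy the manifest for real.

```shell
# Hedged sketch: push the updated manifest to every master so that whichever
# node's instance is active, all nodes carry the new definition.
# Hostnames are assumed placeholders; remove `echo` to execute the copies.
MASTERS="master-0 master-1 master-2"
for host in $MASTERS; do
  echo scp kube-apiserver.yaml "core@${host}:/etc/kubernetes/manifests/"
done
```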

Upgrading Worker Nodes

Worker nodes run the following Kubernetes components:

  • kube-proxy

Upgrading the kube-proxy

The kube-proxy runs as a "static pod". To upgrade it, modify the pod manifest at /etc/kubernetes/manifests/kube-proxy.yaml. The kubelet will pick up the change and relaunch the kube-proxy pod.

Example Upgrade Process:

  1. Prepare new pod manifests for master nodes
  2. Prepare new pod manifests for worker nodes
  3. For each master node:
    1. Back up existing manifests
    2. Update manifests
  4. Repeat item 3 for each worker node
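The per-node portion of the process above (steps 3.1 and 3.2) can be sketched as a short script. Paths and versions here are illustrative: a demo directory stands in for /etc/kubernetes/manifests, and the prepared replacement manifests are assumed to be staged in new-manifests/.

```shell
# Sketch of steps 3.1-3.2 for a single node: back up the existing static pod
# manifests, then drop in the prepared replacements. A demo directory stands
# in for /etc/kubernetes/manifests; image versions are illustrative.
MANIFESTS="demo/etc/kubernetes/manifests"
mkdir -p "$MANIFESTS" new-manifests
echo 'image: quay.io/coreos/hyperkube:v1.0.6_coreos.0' > "$MANIFESTS/kube-proxy.yaml"
echo 'image: quay.io/coreos/hyperkube:v1.0.7_coreos.0' > new-manifests/kube-proxy.yaml

# 1. Back up existing manifests (restore point if the upgrade misbehaves)
cp -a "$MANIFESTS" manifests-backup

# 2. Update manifests; the kubelet restarts the affected pods automatically
cp new-manifests/*.yaml "$MANIFESTS/"
```

Keeping the backup outside the manifest directory matters: the kubelet treats every file in /etc/kubernetes/manifests as a pod definition, so a backup copy left in place would be launched as a duplicate pod.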