Running Kubernetes Example on CoreOS, Part 2

July 30, 2014 · By Kelsey Hightower

Edit: The most up-to-date Kubernetes + CoreOS guide can be found on the Kubernetes GitHub project.

In the previous post I outlined how to set up a single node Kubernetes cluster manually. This was a great way to get started and try out the basic Kubernetes examples. Now it's time to take it up a notch.

In this post I will demonstrate how to get Kubernetes installed on a three node CoreOS cluster running on VMware Fusion. The goal is to set up an environment that closely mimics a cluster of bare-metal machines supporting the Kubernetes networking model.

Before we jump into the tutorial we’ll take a moment to understand what Kubernetes is all about. I tend to think of Kubernetes as a project that builds on the idea that the datacenter is the computer, and Kubernetes provides an API to manage it all.

The deployment workflow provided by Kubernetes allows us to think about the cluster as a whole and less about each individual machine. In the Kubernetes model we don't deploy containers to specific hosts; instead, we describe the containers we want running, and Kubernetes figures out how and where to run them. This declarative style of managing infrastructure opens the door to large-scale deployments and self-healing infrastructure. Yeah, welcome to the future.

But how does it all work?

Kubernetes introduces the concept of a Pod, which is a group of dependent containers that share networking and a filesystem. Pods are great for applications that benefit from colocating services, such as a caching server or log manager. Pods are created within a Kubernetes cluster through specification files that are pushed to the Kubernetes API. Kubernetes then schedules Pod creation on a machine in the cluster.

At this point the Kubelet service running on the machine kicks in and starts the related containers via the Docker API. It’s the Kubelet’s responsibility to keep the scheduled containers up and running until the Pod is either deleted or scheduled onto another machine.
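
To make this concrete, here is a rough sketch of a Pod specification against the v1beta1 API that was current when this post was written; the id, image, and ports are placeholders, not values from a real deployment:

{
  "id": "redis",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "redis",
      "containers": [{
        "name": "redis",
        "image": "dockerfile/redis",
        "ports": [{"containerPort": 6379}]
      }]
    }
  },
  "labels": {
    "service": "redis",
    "environment": "production"
  }
}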

Next, Kubernetes provides a form of service discovery based on labels. Each collection of Pods can have an arbitrary number of labels, which can be used to locate them. For example, to locate the Redis service running in production you could use the following labels to perform a label query:

service=redis,environment=production
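
Using the kubecfg command-line tool, that query might look something like the following; this assumes the -l flag for label selectors, and flag names may differ as the tooling evolves:

kubecfg -l "service=redis,environment=production" list pods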

Finally, Kubernetes ships with a TCP proxy that runs on every host. The TCP proxy is responsible for mapping service ports to label queries. The idea is that consumers of a service can contact any machine in a Kubernetes cluster on a service port, and the request is load balanced across the Pods responsible for that service.

The service proxy works even if the target Pods run on other hosts. It should also be noted that every Pod in a Kubernetes cluster has its own IP address and can communicate with other Pods within the cluster.
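
Service definitions follow the same pattern as Pods. A sketch along these lines (again v1beta1, with placeholder values) maps a service port to a label query; any Pod matching the selector receives traffic sent to that port on any host's proxy:

{
  "id": "redis",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 10000,
  "selector": {
    "service": "redis",
    "environment": "production"
  }
}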

If you are interested in the inner workings of Kubernetes, be sure to check out the Kubernetes documentation.

Now it's time to build our own Kubernetes cluster.

The Environment

The examples in this post were tested with VMware Fusion 6 on OS X 10.9.4, but should also work with other virtualization products.

The Network

The Kubernetes networking model aims to provide each Pod (container set) with its own IP address and supports cross-host communication. To make this work, create a custom VMware network, vmnet2, dedicated to containers. The vmnet2 network should have DHCP and NAT disabled to mimic a basic switch.
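
On Fusion the per-network settings live in /Library/Preferences/VMware Fusion/networking. The vmnet2 entries should end up looking roughly like the following; this is a sketch of the relevant lines, not a complete file:

answer VNET_2_DHCP no
answer VNET_2_NAT no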

Next create 3 VMs with the following virtual hardware specs:

  • 1 CPU
  • 512 MB RAM
  • 20 GB HDD
  • 2 Network Interfaces
  • CD-ROM (used for CoreOS installation and config-drive)

The second network interface for each VM will be connected to the vmnet2 network and will later become a member of the cbr0 bridge used by Docker.

Each VM plays a specific role in the Kubernetes cluster, either a Master or a Minion in Kubernetes terms. The Master runs the apiserver, the controller manager, and optionally the kubelet and proxy servers. Minions, on the other hand, run only the kubelet and proxy servers. While you can have more than one Minion, there can only be a single Master per cluster, a Kubernetes limitation that will be removed in the near future.

Next, install CoreOS on each of the VMs. For this environment I followed the ISO installation method and performed a local install, which boils down to roughly the following on each VM (assuming /dev/sda is the target disk; adjust the device and channel for your setup):
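
sudo coreos-install -d /dev/sda -C stable
sudo reboot

At this point you should have a clean set of VMs and are ready to install Kubernetes.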

Installing Kubernetes

The recommended way of setting up Kubernetes on CoreOS is via Cloud-Config. The cloud-config files used for this tutorial can be found on GitHub.

There is one cloud-config file for each VM:

  • master.yml
  • node1.yml
  • node2.yml

Using cloud-config files we can automate 100% of the Kubernetes installation and setup process. There are a number of items being set up and configured in each cloud-config file (a heavily abbreviated sketch follows the list):

  • Set the hostname
  • Configure static networking for the primary interface
  • Set up the cbr0 bridge and assign it a subnet from the 10.244.0.0/16 range
  • Configure iptables to NAT non-container traffic through the primary interface
  • Configure Docker to use the cbr0 bridge and disable adding rules to iptables
  • Download and install the Kubernetes binaries
  • Install the systemd units for each Kubernetes service
  • Configure etcd to use a static set of peers
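
To give a feel for the files, here is a heavily abbreviated sketch of the kinds of stanzas involved; it is not a copy of the files in the repo, and the hostname, bridge address, and Docker flags shown here are examples only:

#cloud-config

hostname: node1
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2..
write_files:
  - path: /etc/systemd/network/cbr0.netdev
    content: |
      [NetDev]
      Kind=bridge
      Name=cbr0
  - path: /etc/systemd/network/cbr0.network
    content: |
      [Match]
      Name=cbr0

      [Network]
      Address=10.244.1.1/24
coreos:
  units:
    - name: docker.service
      command: start
      content: |
        [Service]
        ExecStart=/usr/bin/docker -d --bridge=cbr0 --iptables=false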

Customize the Cloud-Config Files

The cloud-config files should not be used as is; the following changes must be made:

  • Change the static host IP address throughout the cloud-config file.
  • Change the SSH authorized key

Not excited about updating a bunch of cloud-config files? I figured as much, so I wrote kubeconfig to help you out. Instead of editing the cloud-configs manually you can use kubeconfig to generate the cloud-config files specifically for your environment.

wget http://storage.googleapis.com/kubernetes/darwin/kubeconfig -O /usr/local/bin/kubeconfig
chmod +x /usr/local/bin/kubeconfig

Before we can use kubeconfig we need to gather the following bits of information:

  • Network information to configure static networking on each of the VMs
    • DNS server
    • Gateway
    • 3 IP addresses
  • SSH Public Key

Gather 3 IP Addresses

First allocate 3 IP addresses to be used by the primary network interfaces of the VMs. For my setup the primary interfaces are connected to the vmnet8 virtual network.

To avoid picking IP addresses within the VMware DHCP range for the vmnet8 network, view the DHCP configuration file with the following command:

cat "/Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf"

On my system I get the following output:

subnet 192.168.12.0 netmask 255.255.255.0 {
  range 192.168.12.128 192.168.12.254;
  option broadcast-address 192.168.12.255;
  option domain-name-servers 192.168.12.2;
  option domain-name localdomain;
  default-lease-time 1800;                # default is 30 minutes
  max-lease-time 7200;                    # default is 2 hours
  option netbios-name-servers 192.168.12.2;
  option routers 192.168.12.2;
}

From this output I can see the DHCP range, DNS (domain-name-servers), and Gateway (routers) I should use in my config.yml file.

In my environment I’ve chosen the following network settings:

dns: 192.168.12.2
gateway: 192.168.12.2
master_ip: 192.168.12.10
node1_ip: 192.168.12.11
node2_ip: 192.168.12.12

Gather the SSH authorized key

Finally, set the sshkey key to your own public key. Skipping this step will prevent you from logging into your VMs.

sshkey: ssh-rsa AAAAB3NzaC1yc2..
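
If you need the value, your public key typically lives at ~/.ssh/id_rsa.pub:

cat ~/.ssh/id_rsa.pub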

Your final config.yml file should look something like this:

dns: 192.168.12.2
gateway: 192.168.12.2
master_ip: 192.168.12.10
node1_ip: 192.168.12.11
node2_ip: 192.168.12.12
sshkey: <public ssh key>

To generate the cloud-config files run the following command:

kubeconfig -c config.yml

Create Configuration Drives

While we could use the cloud-config files generated by kubeconfig directly, it's easier to use config-drives to expose our cloud-configs to the VMs. Again, we can use kubeconfig to automate this process.

kubeconfig -c config.yml -iso

The result of running the above command is a config-drive for all three VMs:

  • master.iso
  • node1.iso
  • node2.iso
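
Under the hood a config-drive is simply an ISO9660 volume labeled config-2 with the cloud-config stored at openstack/latest/user_data. If you prefer to build one by hand on OS X, something like this should produce an equivalent image (shown for master.yml):

mkdir -p /tmp/config-drive/openstack/latest
cp master.yml /tmp/config-drive/openstack/latest/user_data
hdiutil makehybrid -iso -joliet -default-volume-name config-2 -o master.iso /tmp/config-drive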

For each VM, attach the related config-drive and boot it.

At this point you should be able to log in as the core user on each of your VMs. It will take a few minutes for the Kubernetes binaries to download and the related services to start.
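
Using the master IP from the earlier config.yml, logging in looks like this:

ssh core@192.168.12.10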

Use the systemd journal to monitor progress of the VMs after they boot up:

journalctl -f

You can check the status of each of the Kubernetes components with the following commands:

sudo systemctl status apiserver
sudo systemctl status controller-manager
sudo systemctl status kubelet
sudo systemctl status proxy
sudo systemctl status etcd
sudo systemctl status docker

You are now ready to start running the examples hosted on the Kubernetes GitHub repo.

Running Kubernetes Examples

You have two choices for running the examples: you can log on to the Master and run the kubecfg command from there, or you can download the kubecfg command-line tool to your local machine and execute commands remotely.
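
If you go the remote route, point kubecfg at the master's API server. Something along these lines should work, assuming the apiserver listens on port 8080 (the default at the time of writing) and the master IP from the earlier config.yml:

kubecfg -h http://192.168.12.10:8080 list minions
kubecfg -h http://192.168.12.10:8080 list pods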

Conclusion

This post demonstrated how to set up a CoreOS cluster running Kubernetes on virtual hardware that mimics a bare-metal infrastructure. Kubernetes consists of many moving parts, but as you can see, the installation and setup of everything can be fully automated when using CoreOS with Cloud-Config.

Kubernetes is still in the early stages and things are rapidly evolving. The good news is you’ll have the ability to help shape the direction of the Kubernetes project by kicking the tires and joining the community. CoreOS will be here to support you by providing the best platform for running Linux containers and hosting projects like Kubernetes.