Running Kubernetes Example on CoreOS, Part 1

July 10, 2014 · By Kelsey Hightower

Edit: The most up-to-date Kubernetes + CoreOS guide can be found on the Kubernetes GitHub project.

This post is not kept up to date with the latest developments! The latest documentation can be found at this GitHub repository: https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides/coreos/units

Kubernetes is the container cluster manager from Google that offers a unique workflow for managing containers across multiple machines. Kubernetes introduces the concept of a pod, which represents a group of containers that should be deployed as a single logical service. The pod concept tends to fit well with the popular pattern of running a single service per container. Kubernetes makes it easy to run multiple pods on a single machine or across an entire cluster for better resource utilization and high availability. Kubernetes also actively monitors the health of pods to ensure they are always running within the cluster.

Kubernetes is able to achieve this advanced level of cluster-wide orchestration by leveraging etcd, the distributed key-value store started and maintained by CoreOS. etcd takes care of storing and replicating the data Kubernetes uses across the entire cluster, and thanks to the Raft consensus algorithm it can recover from hardware failures and network partitions. The combination of Kubernetes and etcd is even more powerful when deployed on CoreOS, an operating system built for exactly this kind of workload.

Now let's see Kubernetes in action.

Feel free to follow along at home as we step through getting Kubernetes installed on CoreOS and bringing up a single pod running a Redis database. The first step is to get CoreOS up and running; a single-node cluster will work for this tutorial. Take a look at the CoreOS installation docs and choose your favorite platform. Next we need to download the Kubernetes components onto our CoreOS instance.

For convenience we've precompiled all of the Kubernetes components, including:

  • apiserver
  • controller-manager
  • kubecfg
  • kubelet
  • proxy

Log in to your CoreOS instance and run the following commands:

sudo mkdir -p /opt/kubernetes/bin 
sudo chown -R core: /opt/kubernetes
cd /opt/kubernetes
wget https://github.com/kelseyhightower/kubernetes-coreos/releases/download/v0.0.1/kubernetes-coreos.tar.gz
tar -C bin/ -xvf kubernetes-coreos.tar.gz

At this point you should have all of the Kubernetes components installed under the /opt/kubernetes/bin directory. Kubernetes also requires Docker and etcd; both ship with CoreOS out of the box, so there is nothing extra to install. While CoreOS recommends running third-party applications in Docker containers, CoreOS does support running Kubernetes directly on the OS. The best way to do that is to create systemd unit files and start the Kubernetes components with systemctl.

git clone https://github.com/kelseyhightower/kubernetes-coreos.git
sudo cp kubernetes-coreos/units/* /etc/systemd/system/
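To give a sense of what these units contain, here is a sketch of what the API server unit might look like. The flag names and etcd address below are illustrative of the early apiserver binary, not necessarily what the repository ships; check the copied files for the exact contents.

```ini
[Unit]
Description=Kubernetes API Server
# The API server stores all cluster state in etcd, so etcd must be up first.
Requires=etcd.service
After=etcd.service

[Service]
# Paths and flags are assumptions based on the layout used in this post.
ExecStart=/opt/kubernetes/bin/apiserver \
  --address=127.0.0.1 \
  --port=8080 \
  --etcd_servers=http://127.0.0.1:4001
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The Requires/After lines encode the dependency on etcd, which is why etcd needs to be started before the Kubernetes services below.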

Before starting everything, make sure etcd is started:

sudo systemctl start etcd

With the systemd units in place we are ready to start the Kubernetes services.

sudo systemctl start apiserver
sudo systemctl start controller-manager
sudo systemctl start kubelet
sudo systemctl start proxy

Congratulations! You now have Kubernetes up and running on CoreOS. Take a moment and tweet that!

Creating a Kubernetes pod

Pods are how Kubernetes groups containers into a logical deployment unit. Let's take a look at a simple pod configuration for a Redis database.

{
  "id": "redis",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "redis",
      "containers": [{
        "name": "redis",
        "image": "dockerfile/redis",
        "ports": [{
          "containerPort": 6379,
          "hostPort": 6379 
        }]
      }]
    }
  },
  "labels": {
    "name": "redis"
  }
}

You'll notice that a pod configuration describes a Docker container and a port mapping. Pods also define labels, which are used for service discovery. We'll cover service discovery in more detail in the next post, but for now know that it's possible for other tools to discover the pods you spin up using Kubernetes.
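If you are writing or editing a pod file by hand, it is worth validating the JSON before handing it to kubecfg. A minimal sketch, assuming the file is saved as redis.json and a Python 3 interpreter is available (CoreOS itself does not ship Python, so run this wherever you edit the file):

```shell
# Recreate the pod definition from above (a heredoc keeps this self-contained).
cat > redis.json <<'EOF'
{
  "id": "redis",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "redis",
      "containers": [{
        "name": "redis",
        "image": "dockerfile/redis",
        "ports": [{
          "containerPort": 6379,
          "hostPort": 6379
        }]
      }]
    }
  },
  "labels": {
    "name": "redis"
  }
}
EOF

# json.tool exits non-zero on malformed input, catching typos early.
python3 -m json.tool < redis.json > /dev/null && echo "redis.json parses cleanly"
```

A malformed file fails here with a parse error instead of a confusing error from the API server.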

Finally, we can create the Redis database pod with the kubecfg command-line tool and the Redis pod configuration file.

/opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 -c kubernetes-coreos/pods/redis.json create /pods

(This command might take a minute to complete, depending on how fast the docker pull is.)

At this point the Redis database pod is up and running. We can validate this by connecting to our pod with a Redis client through the docker0 interface.

docker run -t -i dockerfile/redis /usr/local/bin/redis-cli -h 172.17.42.1

You can also use the kubecfg command to list running pods.

/opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 list /pods

Once you are done with a pod, you can simply delete it.

/opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 delete /pods/redis

Conclusion

Hopefully this blog post has demonstrated that Kubernetes can be installed outside of GCE on just about any Linux platform. CoreOS makes it simple to set up Kubernetes since it ships with Docker, etcd, and systemd out of the box. All you have to do is install a few binaries and systemd units and you are ready to go.

In the next post we will take a look at the true power of Kubernetes: managing Linux containers across multiple machines, service discovery, and full lifecycle management of your Linux containers.

* All code examples, systemd units, and precompiled binaries can be found at the following GitHub repository: https://github.com/kelseyhightower/kubernetes-coreos