Kubernetes in Minutes with Minikube and rkt Container Engine

September 21, 2016 · By Sergiusz Urbaniak

Today CoreOS is excited to announce support for the rkt container engine in Minikube, a tool to run Kubernetes locally.

Get started with Kubernetes and rkt using Minikube

Minikube is an easy way to get started with Kubernetes. Minikube runs Kubernetes inside a virtual machine on your laptop or workstation. On macOS, supported virtualization providers include VirtualBox, VMware Fusion, and the native hypervisor; on Linux, VirtualBox or KVM.

The Minikube guide spins up a single-node Kubernetes cluster ready to run containers with either the rkt container engine or the Docker runtime. Minikube launches the virtual Kubernetes cluster, and you use the kubectl command line tool to control it: deploying, scheduling, and scaling your in-development applications and microservices.

Running containers using rkt became a supported feature in Kubernetes v1.3. This work aims to be compatible with existing Kubernetes practices: Run the same Docker container images, use the same kubectl CLI workflows, and deploy Kubernetes manifests without modifications while using the rkt engine.
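
For example, a stock pod manifest runs as-is under rkt. The nginx image and the names below are purely illustrative; nothing in the manifest is rkt-specific:

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.11
    ports:
    - containerPort: 80
EOF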

Minikube + rkt = <3

With rkt support added to Kubernetes, the obvious next step was to integrate it into Minikube. We are happy to announce that as of Minikube v0.10, you can enable rkt easily using the --container-runtime and --network-plugin flags. After installing Minikube v0.10 (or later), you'll be able to fire up your own rkt-backed Kubernetes cluster.
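
A start command along these lines does the trick; note that the values below are illustrative (cni is the network plugin value commonly paired with the rkt runtime, and the ISO URL is a placeholder to replace with the latest CoreOS Minikube-ISO release):

$ minikube start \
    --container-runtime=rkt \
    --network-plugin=cni \
    --iso-url=https://example.com/minikube-iso/minikube-coreos.iso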

See the Minikube installation guide for instructions on installing the minikube binary for your platform.

The additional --iso-url parameter points to a minimal ISO system image containing rkt (and Docker). Keep an eye on the latest CoreOS Minikube-ISO release to get the newest updates and bug fixes. While you must specify this option today, the Minikube project is in the process of making the CoreOS Minikube-ISO image the default. Watch Minikube issue #571 for updates.

Depending on your platform, you can select the hypervisor Minikube uses with the --vm-driver flag.
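
For example, a Linux host could switch from the default VirtualBox driver to KVM. The driver name here is just one of the supported values; pick whichever matches your hypervisor:

$ minikube start --vm-driver=kvm --container-runtime=rkt --network-plugin=cni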

Using the Minikube Kubernetes cluster

Once the system becomes available, you will see a message that Minikube is ready. Now you can use kubectl to inspect the cluster, the node, and the pre-installed Minikube pods:

…
Kubectl is now configured to use the cluster.
$ kubectl get nodes
NAME       STATUS    AGE
minikube   Ready     6m
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
kube-system   kube-addon-manager-minikube   1/1       Running   0          6m
…

Logging into the Minikube Kubernetes node

You can also ssh into the Minikube virtual machine to interact with rkt directly:

$ minikube ssh
$ rkt list
UUID      APP                   IMAGE NAME                                                   STATE    CREATED        STARTED        NETWORKS
4b4fce22  kube-addon-manager    gcr.io/google-containers/kube-addon-manager-amd64:v2        running  9 minutes ago  9 minutes ago
6567accd  kubernetes-dashboard  gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1  running  9 minutes ago  9 minutes ago

This output shows the two default Minikube pods. Next, you can deploy your own pods using the kubectl run or create subcommands. rkt converts and runs common Docker container images on the fly, so you don't have to change your container build process or CI pipelines to use rkt in your cluster: developers can build with Docker and run with rkt.
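
As a quick illustration, here is the standard Minikube hello-world flow running a Docker-built image under rkt. The deployment name and echoserver image are the usual tutorial examples rather than anything specific to this release, and the minikube service helper is assumed to be available in your Minikube version:

# run a Docker-format image; rkt converts it on the fly
$ kubectl run hello-node --image=gcr.io/google_containers/echoserver:1.4 --port=8080
# expose it and get a URL reachable from your host
$ kubectl expose deployment hello-node --type=NodePort
$ minikube service hello-node --url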

Feedback

We're working hard to make rkt a first-class citizen in Kubernetes, and to keep the Kubernetes container runtime interface clean and modular. In the future, this will allow CoreOS to deliver extended security and platform capabilities both upstream in Kubernetes and as part of Tectonic, our enterprise-grade Kubernetes distribution. Minikube support makes it even easier to get started participating in that effort. We hope you'll try Kubernetes with rkt in Minikube and tell us about your experience, including any bugs or issues you encounter.

Next steps with Tectonic

Now that you have a Kubernetes cluster right on your laptop, try out Tectonic Starter. Signing up for Starter is free, and it includes the Tectonic Console, a graphical control panel for running, scaling, and monitoring your container applications on Kubernetes.

Work on Tectonic and Kubernetes

Tectonic is the CoreOS enterprise Kubernetes distribution for on-premises, cloud, and hybrid production deployments. We're working upstream and on our own code to make Tectonic atop Kubernetes the most versatile, secure, and production-grade suite of tools for container cluster orchestration. Join us! We're hiring engineers in New York, Berlin, and San Francisco.