We’re integrating Tectonic with Red Hat OpenShift

We are bringing the best of Tectonic to Red Hat OpenShift to build the most secure, hybrid Kubernetes application platform.

Workload separation

Tectonic 1.6 and later versions leverage Kubernetes' new beta support for taints, tolerations, and pod anti-affinity to guarantee a clear separation between control-plane and user workloads, and to improve the reliability of the control plane by spreading its services across multiple nodes.

Every Tectonic master node registers itself with the cluster with a taint applied. A taint is a marker applied to a node that describes a special property. Before a pod can be scheduled onto such a node, it must tolerate the taint. Since most pods do not tolerate this taint, they are scheduled only to worker nodes. This behavior also applies to pods generated from Deployments, ReplicaSets, and DaemonSets.
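As a sketch of how a toleration looks in practice (the exact taint key used by Tectonic masters is an assumption here; `node-role.kubernetes.io/master` is the conventional name), a pod that should be allowed onto a tainted master would carry a matching toleration in its spec:

```yaml
# Illustrative pod spec fragment: tolerate a NoSchedule taint on master nodes.
# The taint key below is assumed; check the actual taint with
# `kubectl describe node <master-node>`.
apiVersion: v1
kind: Pod
metadata:
  name: taint-tolerating-pod
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```

Without the `tolerations` entry, the scheduler filters the tainted master nodes out of the candidate set, which is exactly what keeps ordinary user workloads on workers.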

Components of the control plane, however, are specifically marked to tolerate the taint and are therefore allowed to be scheduled on master nodes. Furthermore, most of them are constrained to run exclusively on these nodes and thus are not colocated with any other workloads (see below).
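Constraining a component to master nodes is typically done by combining the toleration with a node selector. A minimal sketch, assuming the masters carry the conventional `node-role.kubernetes.io/master` label (the label name is an assumption, not confirmed by the source):

```yaml
# Pod template fragment for a control-plane component:
# the toleration permits scheduling onto tainted masters,
# and the nodeSelector restricts scheduling to masters only.
spec:
  nodeSelector:
    node-role.kubernetes.io/master: ""
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```

The two mechanisms are complementary: the taint keeps everything else off the masters, while the node selector keeps the control plane on them.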

Finally, to limit the impact of host failures, the Kubernetes scheduler is hinted, via the pod anti-affinity mechanism, to avoid running multiple instances of the same component on the same node. While not strictly related to workload separation, disruption control ensures that at least one instance of each component is running at any given time, regardless of update or rescheduling operations.
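The two mechanisms in the paragraph above can be sketched as follows (label names and the `kube-scheduler` selector are illustrative, not taken from Tectonic's actual manifests): pod anti-affinity spreads replicas across hosts, and a PodDisruptionBudget guarantees a minimum number of running replicas during voluntary disruptions such as node drains during updates.

```yaml
# A PodDisruptionBudget keeping at least one replica available.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kube-scheduler-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      k8s-app: kube-scheduler
---
# Pod template fragment: prefer not to co-locate replicas of the
# same component on one node (keyed on the node's hostname).
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            k8s-app: kube-scheduler
        topologyKey: kubernetes.io/hostname
```

Using the "preferred" (soft) form of anti-affinity matches the "hinted" wording above: the scheduler spreads replicas when it can, but can still place them together if no other node qualifies.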

Control plane component scheduling

The following table provides a reference for the scheduling of the control plane components, as of Tectonic 1.6.4.

| Component | Type | Replicas (default) | Node role | Anti-affinity | Disruption control |
|---|---|---|---|---|---|
| kube-apiserver | DaemonSet | n | master | N/A | N/A |
| kube-controller-manager | Deployment | 2 | master | yes | yes |
| kube-scheduler | Deployment | 2 | master | yes | yes |
| kube-dns | Deployment | 1 | master | N/A | no |
| kube-flannel | DaemonSet | n | all | N/A | N/A |
| kube-proxy | DaemonSet | n | all | N/A | N/A |
| kube-etcd-network-checkpointer | DaemonSet | n | master | N/A | N/A |
| pod-checkpointer | DaemonSet | n | master | N/A | N/A |
| etcd-operator | Deployment | 1 | any | N/A | no |
| kube-etcd-* | Pod* | 1* | master | yes | N/A |
| heapster | ReplicaSet | 1 | any | N/A | N/A |
| All other workloads | Any | ? | worker | ? | ? |

n: number of qualified nodes — *: managed by an operator