Upgrading Tectonic & Kubernetes

Use the Tectonic Console to control how Tectonic and Kubernetes are updated. Clusters are attached to an update channel and are set to update either automatically or with admin approval.

Cluster update settings in the Console

During an update, the latest versions of the Tectonic and Kubernetes components are downloaded and installed in a seamless rolling update. A cluster admin can pause the update at any time.

Note that the update process may affect any or all components in the tectonic-system and kube-system namespaces.

To learn more about how this process is executed, read about Operators.

Production and pre-production channels

Tectonic clusters can be configured to track either a production or pre-production update channel for the desired "minor" version of Kubernetes, such as 1.8 or 1.7.

Pre-production
  Workload: Development or testing environments running real workloads, with the goal of catching capability bugs early
  Availability: Available upon release

Production
  Workload: Clusters serving any amount of production traffic and/or clusters with high reliability and uptime requirements
  Availability: Promoted 2-4 weeks after initial release
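The channel descriptions above amount to a simple promotion policy: a release appears on the pre-production channel immediately, and on the production channel roughly 2-4 weeks later. A minimal sketch of that policy, assuming the 2-week lower bound from the table (the function name and structure are illustrative, not a Tectonic API):

```python
from datetime import date, timedelta

# Illustrative delay from the table above: production receives a release
# 2-4 weeks after it appears on pre-production. We model the lower bound.
PRODUCTION_DELAY = timedelta(weeks=2)

def available_on(channel, release_date, today):
    """Return True if a release published on release_date is expected
    to be available on the given channel by today."""
    if channel == "pre-production":
        return today >= release_date
    if channel == "production":
        return today >= release_date + PRODUCTION_DELAY
    raise ValueError(f"unknown channel: {channel}")

release = date(2017, 11, 1)
print(available_on("pre-production", release, date(2017, 11, 1)))  # True
print(available_on("production", release, date(2017, 11, 1)))      # False
print(available_on("production", release, date(2017, 11, 15)))     # True
```

In practice the promotion date is decided by the channel operators, not a fixed clock, so treat the delay here as an expectation rather than a guarantee.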

Configure your desired update channel in the Cluster Settings screen in the Console.

Upgrading between minor versions of Kubernetes

Before attempting an upgrade to a new "minor" version, ensure you are running the latest Tectonic version of the current minor version. For example, you must be running 1.7.9-tectonic.3 before upgrading to the next available version, 1.8.4-tectonic.1.
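The prerequisite above can be checked mechanically. The sketch below parses version strings of the form shown in the example (1.7.9-tectonic.3) and verifies that the cluster is on the latest release of its current minor version; the helper names are hypothetical, not part of any Tectonic tooling:

```python
import re

def parse_version(v):
    """Split a version like '1.7.9-tectonic.3' into a tuple of
    (k8s_major, k8s_minor, k8s_patch, tectonic_release)."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)-tectonic\.(\d+)", v)
    if not m:
        raise ValueError(f"unrecognized version string: {v}")
    return tuple(int(x) for x in m.groups())

def ready_for_minor_upgrade(current, latest_on_current_channel):
    """True when the cluster already runs the newest release of its
    current minor version -- the stated prerequisite for switching
    channels to the next minor version."""
    return parse_version(current) == parse_version(latest_on_current_channel)

# Using the example from the text: 1.7.9-tectonic.3 is the latest 1.7 release.
print(ready_for_minor_upgrade("1.7.9-tectonic.3", "1.7.9-tectonic.3"))  # True
print(ready_for_minor_upgrade("1.7.9-tectonic.1", "1.7.9-tectonic.3"))  # False
```

Tuple comparison of the parsed components also gives a correct ordering between any two releases, since each numeric field is compared left to right.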

To start the opt-in upgrade process, switch your channel to the new minor version, and choose either production or pre-production. After this selection, click Check for Updates to query the new channel for updates.

Preserve & Restore etcd

If you'd like to preserve and restore etcd data to the new cluster, see the etcd disaster recovery guide.