This guide walks through a bare-metal installation of Tectonic using PXE-based tools. This document will cover:

1. Overview: Review the types of machines in the cluster and the networking requirements.
2. Provisioning Infrastructure: Download and install bootcfg, and generate TLS assets.
3. Configure Networking: Set up DHCP, TFTP, and DNS services, and configure DNS for the cluster.
4. Tectonic Installer: Install Kubernetes and Tectonic.
5. Tectonic Console: You're done! Your cluster is ready!
A minimum of 3 machines is required to run Tectonic.
The provisioner node runs the network boot and provisioning service (bootcfg) and PXE services if you don't already run them elsewhere. You may use CoreOS or any Linux distribution for this node. This node needs to be up and running to provision nodes, but won't join Tectonic clusters.
A Tectonic cluster consists of two types of nodes:
Controller nodes run the control plane of the cluster. These services will be highly available and will execute automatic leader election should a node fail. These nodes will have their own trust zone since they are critical to the cluster. CoreOS will be installed to disk upon boot.
Worker nodes run your applications. New worker nodes will join the cluster by talking to the controller nodes for admission. CoreOS will be installed to disk upon boot.
This guide requires familiarity with PXE booting, the ability to configure network services, and the ability to add DNS names. These steps are walked through in detail below.
bootcfg is a service for network booting and provisioning bare-metal nodes into CoreOS clusters.
bootcfg should be installed on a provisioner node to serve configs during provisioning.
The commands to set up bootcfg should be performed on the provisioner node.
Download the latest Tectonic release to the provisioner node and extract it.
$ wget https://releases.tectonic.com/releases/tectonic_v1.4.3.tar.gz
$ tar xzvf tectonic_v1.4.3.tar.gz
$ cd tectonic/coreos-baremetal
The bootcfg API allows client apps such as the Tectonic Installer to manage how machines are provisioned. TLS credentials are needed for client authentication and to establish a secure communication channel.
If your organization manages public key infrastructure and a certificate authority, create a server certificate and key for the bootcfg service, along with a client certificate and key.

Otherwise, generate a self-signed ca.crt, a server certificate and key (server.crt, server.key), and client credentials (client.crt, client.key) with the scripts/tls/cert-gen script. Export the DNS name or IP address (discouraged) of the provisioner node.
$ cd scripts/tls
# DNS or IP Subject Alt Names where bootcfg can be reached
$ export SAN=DNS.1:bootcfg.example.com,IP.1:192.168.1.42
$ ./cert-gen
Place the TLS credentials in the default location:
$ sudo mkdir -p /etc/bootcfg
$ sudo cp ca.crt server.crt server.key /etc/bootcfg/
The ca.crt will be used by the Tectonic Installer later.
The bootcfg binary is available for general Linux distributions. Copy the bootcfg static binary to an appropriate location on the provisioner node.

$ cd tectonic/coreos-baremetal
$ sudo cp bootcfg /usr/local/bin
The bootcfg service should be run by a non-root user with access to the bootcfg data directory (/var/lib/bootcfg). Create a bootcfg user and group.

$ sudo useradd -U bootcfg
$ sudo mkdir -p /var/lib/bootcfg/assets
$ sudo chown -R bootcfg:bootcfg /var/lib/bootcfg
Copy the provided bootcfg systemd unit file.

$ sudo cp contrib/systemd/bootcfg.service /etc/systemd/system/
$ sudo systemctl daemon-reload
The example unit exposes the bootcfg HTTP config endpoints on port 8080 and exposes the API on port 8081 (used by the Tectonic Installer). Customize the port settings to suit your preferences.
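For reference, the shipped unit is roughly along these lines. This is a sketch, not the exact file from contrib/systemd/ in your release; in particular, the -address and -rpc-address flag names for the HTTP and API listeners are assumptions to adjust against the unit you copied:

```ini
[Unit]
Description=CoreOS bootcfg provisioning service

[Service]
User=bootcfg
Group=bootcfg
# Assumed flags: -address serves HTTP configs on 8080,
# -rpc-address serves the TLS-authenticated API on 8081
ExecStart=/usr/local/bin/bootcfg \
  -address=0.0.0.0:8080 \
  -rpc-address=0.0.0.0:8081 \
  -data-path=/var/lib/bootcfg \
  -assets-path=/var/lib/bootcfg/assets

[Install]
WantedBy=multi-user.target
```

If you change the ports here, use the same values when opening the firewall and when pointing the Tectonic Installer at the API.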
Be sure to allow your chosen ports through the provisioner's firewall so the Tectonic Installer can access the service. For hosts using firewalld:

$ sudo firewall-cmd --zone=MYZONE --add-port=8080/tcp --permanent
$ sudo firewall-cmd --zone=MYZONE --add-port=8081/tcp --permanent
$ sudo firewall-cmd --reload
Start the bootcfg service, and enable it if you'd like it to start on every boot.

$ sudo systemctl start bootcfg.service
$ sudo systemctl enable bootcfg.service
Verify the bootcfg service is running and can be reached by nodes (those being provisioned). It is recommended that you define a DNS name for this purpose (see Networking).
$ systemctl status bootcfg
$ dig bootcfg.example.com
Verify you receive a response from the HTTP and API endpoints. All of the following responses are expected:
$ curl http://bootcfg.example.com:8080
bootcfg

$ cd tectonic/coreos-baremetal/scripts/tls
$ openssl s_client -connect bootcfg.example.com:8081 -CAfile /etc/bootcfg/ca.crt -cert client.crt -key client.key
CONNECTED(00000003)
depth=1 CN = fake-ca
verify return:1
depth=0 CN = fake-server
verify return:1
---
Certificate chain
 0 s:/CN=fake-server
   i:/CN=fake-ca
---
...
bootcfg can serve CoreOS images to reduce bandwidth usage and increase the speed of CoreOS PXE boots and installs to disk. Tectonic Installer will use this feature.
Download a recent CoreOS stable release with signatures.
$ cd coreos-baremetal/scripts
$ ./get-coreos stable 1122.2.0 .   # note the "." 3rd argument
Move the images to /var/lib/bootcfg/assets.

$ sudo cp -r coreos /var/lib/bootcfg/assets
$ tree /var/lib/bootcfg/assets
/var/lib/bootcfg/assets/
├── coreos
│   └── 1122.2.0
│       ├── CoreOS_Image_Signing_Key.asc
│       ├── coreos_production_image.bin.bz2
│       ├── coreos_production_image.bin.bz2.sig
│       ├── coreos_production_pxe_image.cpio.gz
│       ├── coreos_production_pxe_image.cpio.gz.sig
│       ├── coreos_production_pxe.vmlinuz
│       └── coreos_production_pxe.vmlinuz.sig

Verify the images are accessible.
$ curl http://bootcfg.example.com:8080/assets/coreos/SOME-VERSION/
<pre>
...
A bare-metal Tectonic cluster requires PXE infrastructure, which we'll set up next.
Review network setup with your network administrator to set up DHCP, TFTP, and DNS services on your network. At a high level, your goals are to:

- Chainload PXE clients to iPXE
- Point iPXE clients to the bootcfg iPXE HTTP endpoint (e.g. http://bootcfg.example.com:8080/boot.ipxe)
A simple approach (which may be suitable) is to run a proxy DHCP and TFTP service with dnsmasq, alongside an existing DHCP server. You'd add a dnsmasq configuration, prepare /var/lib/tftpboot, and configure the firewall.
dhcp-range=192.168.1.1,proxy,255.255.255.0
enable-tftp
tftp-root=/var/lib/tftpboot
pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe
dhcp-userclass=set:ipxe,iPXE
pxe-service=tag:ipxe,x86PC,"iPXE",http://bootcfg.example.com:8080/boot.ipxe
log-queries
log-dhcp
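After saving the snippet to a dnsmasq config file (the path /etc/dnsmasq.d/bootcfg.conf below is chosen for illustration; use whatever your distribution's dnsmasq reads), dnsmasq's built-in syntax check can validate it before you restart the service:

```shell
# Validate the dnsmasq configuration without starting the daemon
sudo dnsmasq --test -C /etc/dnsmasq.d/bootcfg.conf
```

A syntax error here is reported with the offending line number, which is easier to debug than a failed PXE boot later.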
CoreOS provides dnsmasq as a container image, quay.io/coreos/dnsmasq, if you wish to use rkt or Docker.
Ensure each node has a domain name on your network and that the following names resolve:

- bootcfg.example.com resolves to your provisioner node running the bootcfg service.
- tectonic.example.com resolves to any controller node in the Tectonic cluster.
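If you manage the zone with BIND-style records, entries along these lines would satisfy both requirements (the hostnames mirror the examples above; the IP addresses are placeholders for illustration):

```
; example.com zone fragment (illustrative addresses)
bootcfg   IN  A  192.168.1.42   ; provisioner node running bootcfg
tectonic  IN  A  192.168.1.21   ; any controller node
```

Round-robin A records or a load balancer in front of the controllers also works for tectonic.example.com, as long as the name reaches a controller node.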
The Tectonic Installer is a graphical application, run on your laptop, that creates Tectonic clusters. It authenticates to bootcfg via its API.
Your laptop running the Tectonic Installer app must be able to access your bootcfg instance. You will need the client credentials (client.crt, client.key) created when setting up bootcfg to complete the flow, as well as the ca.crt.
The commands to run the Tectonic Installer should be performed on your laptop.
Download the latest Tectonic release to your laptop and extract it.

$ wget https://releases.tectonic.com/releases/tectonic_v1.4.3.tar.gz
$ tar xzvf tectonic_v1.4.3.tar.gz
$ cd tectonic
On your laptop, run the Tectonic Installer binary that matches your platform.
A tab should open in your browser. Follow the instructions to enter information needed for provisioning. You will need to enter machine MAC addresses, domain names, and your SSH public key.
Then, you'll be prompted to power on your machines via IPMI or by pressing the power button and guided through the rest of the bring-up.
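To gather the MAC addresses the installer asks for, you can read them from each machine's /sys filesystem (a quick sketch for Linux hosts; interface names vary by machine):

```shell
# Print each network interface with its MAC address
for iface in /sys/class/net/*; do
  printf '%s %s\n' "$(basename "$iface")" "$(cat "$iface/address")"
done
```

Use the address of the interface that will PXE boot; `ip link show` gives the same information with more detail.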
After the installer is complete, you'll have a Tectonic cluster and be able to access the Tectonic console. You are ready to deploy your first application on to the cluster!
You can remove all Tectonic components if you'd like to perform a fresh installation to the same cluster.
WARNING: these actions are irreversible.
Delete all components in the tectonic-system namespace.

$ kubectl delete ns tectonic-system
namespace "tectonic-system" deleted
Remove all Tectonic PostgreSQL data to avoid conflicts if you are planning another installation.
Drop the database named:
If you chose to run PostgreSQL in-cluster, delete database data on disk.
Since the PostgreSQL data is pinned to one worker node, you may have to log in to each worker node before you find the data directory.
$ ssh core@WORKER_NODE
On the remote machine:
# cleanup /var/lib/postgres and /var/lib/postgresql
$ sudo rm -rf /var/lib/postgres*
Finally, run the bootkube-start script from a controller node. This should re-create the Tectonic components.