We are bringing the best of Tectonic to Red Hat OpenShift to build the most secure, hybrid Kubernetes application platform.
The following tools and access rights are required to use Tectonic Installer with an Amazon Web Services (AWS) account.
To obtain the License and Pull Secret required during Tectonic installation, go to https://account.coreos.com/login and sign up for a CoreOS account. This account entitles you to up to 10 free nodes on a production-ready Tectonic cluster.
The AWS credentials you provide require access to the following AWS services:
An importable AWS policy containing the minimum privileges needed to run the Tectonic installer can be found here.
The following steps demonstrate how to generate and use temporary AWS credentials in conjunction with the Tectonic Installer:
1. Install the AWS CLI tool. For example, via `dnf install` on Fedora:

   ```sh
   $ sudo dnf install awscli
   ```

2. Configure your AWS credentials:

   ```sh
   $ aws configure
   ```

3. Create a `tectonic-installer` role in AWS with the trust policy detailed here. The trust relationship policy grants an entity permission to assume the role.

   ```sh
   $ aws iam create-role --role-name tectonic-installer --assume-role-policy-document file://Documentation/files/aws-sts-trust-policy.json
   ```

   The `file://` prefix is required before the filepath.

4. Add a policy to the `tectonic-installer` role containing the minimum privileges needed to run the Tectonic Installer. The policy is available here.

   ```sh
   $ aws iam put-role-policy --role-name tectonic-installer --policy-name TectonicInstallerPolicy --policy-document file://Documentation/files/aws-policy.json
   ```

5. Add permission for your AWS user to assume the `tectonic-installer` role. To do so, click on the *Trust Relationships* tab and then on the *Edit Trust Relationship* button to bring up the trusted entities JSON editor, then add a new section for your user's ARN. The example trust relationship below has been edited to add the ARN of a user named `tectonic`:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com",
        "AWS": "arn:aws:iam::477645798577:user/tectonic"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
6. Assume the `tectonic-installer` role with your AWS user using the AWS CLI tool as follows:

   ```sh
   $ aws sts assume-role --role-arn=<TECTONIC_INSTALLER_ROLE_ARN> --role-session-name=<DESIRED_SESSION_NAME>
   ```
The returned response will look like:
```json
{
  "Credentials": {
    "SecretAccessKey": "<SECRET_ACCESS_KEY>",
    "AccessKeyId": "<ACCESS_KEY_ID>",
    "Expiration": "2016-12-14T02:21:37Z",
    "SessionToken": "<SESSION_TOKEN>"
  },
  ...
}
```

Use the `SECRET_ACCESS_KEY`, `ACCESS_KEY_ID`, and `SESSION_TOKEN` to authenticate in the installer.
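These temporary values can also be handed to any AWS-aware tooling through the standard AWS environment variables; a minimal sketch, with the angle-bracket placeholders standing in for the fields returned by `assume-role`:

```shell
# Export the temporary credentials returned by `aws sts assume-role`.
# The angle-bracket values are placeholders; substitute the real
# AccessKeyId, SecretAccessKey, and SessionToken from the response.
export AWS_ACCESS_KEY_ID="<ACCESS_KEY_ID>"
export AWS_SECRET_ACCESS_KEY="<SECRET_ACCESS_KEY>"
export AWS_SESSION_TOKEN="<SESSION_TOKEN>"

# Any tool honoring the standard AWS environment variables (the AWS
# CLI, Terraform, and similar) will now use these credentials until
# they expire.
```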
If building the Tectonic cluster using the CLI directly, you can configure `terraform` to perform the STS `assume-role` operation automatically on every run. Terraform will then retrieve and use the temporary credentials each time, so you don't have to refresh them manually when they expire.

To enable Terraform to perform the `assume-role` operation, edit the file `platforms/aws/main.tf` and change the `provider "aws" { ... }` block to include the following configuration:
```hcl
provider "aws" {
  region = "${var.tectonic_aws_region}"

  assume_role {
    role_arn     = "<tectonic-installer-ROLE-ARN>"
    session_name = "terraform"
  }
}
```
You can then run Terraform as an unprivileged user that only has permission to assume the `tectonic-installer` role.
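Such an unprivileged user needs little more than permission to call `sts:AssumeRole` on the role. A minimal identity policy sketch, reusing the example account ID from the trust relationship above (substitute your own account ID):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::477645798577:role/tectonic-installer"
    }
  ]
}
```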
The final step of the Tectonic install requires an SSH key and access to standard utilities like `ssh` and `scp`. Setting up a new key on AWS should take less than 5 minutes.
Tectonic uses AWS S3 to store all credentials, using server-side AES encryption for storage, and TLS encryption for upload/download. Any pod run in the system can query the AWS metadata, get node AWS credentials, and pull down cluster credentials from AWS S3. CoreOS plans to address this issue in a later release.
First, create a key.

1. Check for existing keys with `ls ~/.ssh/`. If you've previously created a key, you may see a file like `id_rsa.pub`. If you'd like to use this key, skip to uploading the key to AWS below.
2. Run `ssh-keygen --help` to validate that you have the OpenSSH utilities installed. If you cannot find the binaries on your system, please consult your distro's documentation.
3. Generate a key with `ssh-keygen -t rsa -b 4096 -C "aws tectonic for alice@example.com"`. The content after `-C` is a comment. Replace alice@example.com with an appropriate AWS email or IAM account.
4. By default, the public key is written to `$HOME/.ssh/id_rsa.pub`. Otherwise, the key pair is in your current directory.

Next, upload the key to AWS. Be sure to upload the public half of the key pair, the file ending in `.pub`.

For additional information about AWS and SSH keys, consult the official AWS guide.
In order to access the cluster, two ELB-backed services are exposed. Both are accessible over the standard TLS port (443).
With temporary credentials and an SSH key, you'll be ready to install Tectonic. Head over to the install doc to get started.
The following table includes the high level networking features required to install Tectonic into new or existing VPCs, with or without public access to cluster services.
| | Public facing cluster | Internal cluster |
|---|---|---|
| New VPC | Installer creates public subnets | Select 'internal' in Tectonic Installer |
| Existing VPC | 2 subnets, connected to an IGW | Create 2 subnets, establish a VPN |
If you are experiencing issues with an install involving VPC-internal components, you may find the troubleshooting section useful.
By default, Tectonic Installer creates a new AWS Virtual Private Cloud (VPC) for each cluster. Advanced users can choose to use an existing VPC instead. An existing VPC must have an Internet Gateway. Tectonic Installer will not create an Internet Gateway in an existing VPC.
An existing VPC for a public cluster must have a public subnet for controllers, and a private subnet for workers. An existing VPC for an internal cluster must have 2 private subnets, one each for controllers and workers.
Public subnets have a default route to the Internet Gateway and should auto-assign IP addresses. Private subnets have a default route to a default gateway, such as a NAT Gateway or a Virtual Private Gateway.
The DHCP options set attached to the VPC must have an AWS private domain name. In the us-east-1 region, the AWS private domain name is `ec2.internal`, whereas other regions use `<region>.compute.internal`.
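The required private domain name can be derived mechanically from the region; a small helper sketch capturing the rule above (the function name is ours, not part of any AWS tooling):

```shell
# Return the AWS private domain name a VPC's DHCP options set must
# carry for a given region: us-east-1 is special-cased as ec2.internal;
# every other region uses <region>.compute.internal.
expected_private_domain() {
  if [ "$1" = "us-east-1" ]; then
    echo "ec2.internal"
  else
    echo "$1.compute.internal"
  fi
}
```

For example, `expected_private_domain eu-west-1` prints `eu-west-1.compute.internal`.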
When using an existing VPC, tag the AWS VPC subnets with the `kubernetes.io/cluster/my-cluster-name = shared` tag. The `shared` value is used to tag resources shared between multiple clusters, which should not be destroyed if any individual cluster is destroyed. If this tag is not specified, AWS ELB integration with Tectonic may not be able to use the VPC subnets.
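Applying the tag from the CLI can be sketched with `aws ec2 create-tags`; the helper name is ours, and the cluster name and subnet IDs in the usage example are placeholders for your own values:

```shell
# Tag existing VPC subnets as shared for a given Tectonic cluster so
# Kubernetes' AWS ELB integration can discover them without any single
# cluster claiming exclusive ownership of the subnets.
tag_shared_subnets() {
  cluster="$1"
  shift
  aws ec2 create-tags \
    --resources "$@" \
    --tags "Key=kubernetes.io/cluster/${cluster},Value=shared"
}
```

For example, `tag_shared_subnets my-cluster-name subnet-aaaa1111 subnet-bbbb2222` (placeholder subnet IDs; requires configured AWS credentials).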