When installation is complete, log in to your Tectonic Console to set up cluster credentials, and deploy a simple application.
In this tutorial you will configure cluster credentials for kubectl, then deploy and monitor a simple application.
Applications are usually deployed on Tectonic clusters by passing a YAML manifest file to the kubectl create CLI tool. Once the application is deployed, it can be monitored and scaled using the Tectonic web interface.
To configure credentials for your cluster, populate a kubeconfig file with valid authentication credentials, then configure kubectl to use them to connect to a Tectonic cluster.
First, log in to Tectonic Console to authenticate.
Then, download the kubectl binary for your operating system.
Click I’m Done to exit the download window, and return to the console.
Next, move the downloaded kubectl file to /usr/local/bin (or any other directory in your PATH).
$ chmod +x kubectl
$ mv kubectl /usr/local/bin/kubectl
Make the downloaded kubectl-config file kubectl’s default by copying it to a .kube directory on your machine.
$ mkdir -p ~/.kube/                                  # create the directory
$ cp path/to/file/kubectl-config $HOME/.kube/config  # rename the file and copy it into the directory
Once you've downloaded and copied the kubectl-config file to the .kube directory, you’re ready to start using kubectl. Protect .kube/config with appropriate file permissions, as it contains cluster access credentials.
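As a minimal sketch of that locking-down step (assuming the file was copied to ~/.kube/config as above), restrict the kubeconfig to the owning user:

```shell
# Restrict the kubeconfig to its owner: it contains cluster access credentials.
# The touch below is only a guard so this sketch runs even if the copy step
# has not happened yet; it does not create valid credentials.
mkdir -p "$HOME/.kube"
touch "$HOME/.kube/config"
chmod 600 "$HOME/.kube/config"   # owner read/write; no group/other access
```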
To use kubectl with a different kubeconfig than the default, name the desired kubeconfig in an environment variable or on the kubectl command line.

Set KUBECONFIG to the location of the selected kubeconfig:

$ export KUBECONFIG=/path/to/kubectl-config

Or, pass it with the --kubeconfig flag:

$ kubectl --kubeconfig=/path/to/kubectl-config get pods
Once kubectl is properly configured, it can be used to explore Kubernetes entities:
$ kubectl get nodes
NAME         LABELS                              STATUS
10.0.0.197   kubernetes.io/hostname=10.0.0.197   Ready
10.0.0.198   kubernetes.io/hostname=10.0.0.198   Ready
10.0.0.199   kubernetes.io/hostname=10.0.0.199   Ready
Review our kubectl documentation for more setup help.
As an example application, we will deploy a simple, stateless website for a local bakery to the cluster: The Cookie Shop.
This example allows us to explore two useful Kubernetes concepts, deployments and services. Both are top-level Kubernetes objects, just like pods:

Deployments: Run multiple copies of a container across multiple nodes
Services: An endpoint that load balances traffic to containers run by a deployment
Copy the following YAML into a file named simple-deployment.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: simple-deployment
  namespace: default
  labels:
    k8s-app: simple
spec:
  selector:
    matchLabels:
      k8s-app: simple
  replicas: 3
  template:
    metadata:
      labels:
        k8s-app: simple
    spec:
      containers:
        - name: nginx
          image: quay.io/coreos/example-app:v1.0
          ports:
            - name: http
              containerPort: 80
The line replicas: 3 will create 3 running copies.
image: quay.io/coreos/example-app:v1.0 defines the container image to run, hosted on Quay.io.
Then, copy the following YAML into a file named simple-service.yaml:
kind: Service
apiVersion: v1
metadata:
  name: simple-service
  namespace: default
spec:
  selector:
    k8s-app: simple
  ports:
    - protocol: TCP
      port: 80
  type: LoadBalancer
To connect the Service to the containers run by the Deployment, the Deployment’s containerPort and the Service’s port must match.
Instantiate the cluster objects specified in the simple-deployment.yaml and simple-service.yaml manifests by passing the file paths to kubectl create. Check that they were created successfully by listing out the objects afterwards:
$ kubectl create -f simple-deployment.yaml
deployment "simple-deployment" created

$ kubectl get deployments
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/simple-deployment   3         3         3            3           7m
$ kubectl create -f simple-service.yaml
service "simple-service" created

$ kubectl get services -o wide
NAME                 CLUSTER-IP   EXTERNAL-IP                                                                PORT(S)        AGE   SELECTOR
svc/simple-service   10.3.0.204   a9b5de374e28611e6945f02c590b59c5-2010998492.us-west-2.elb.amazonaws.com    80:32567/TCP   7m    k8s-app=simple
The manifests specify the pods required for a replicated Deployment, and connect them to a Kubernetes external load balancer Service. On AWS, this Service connects to an Elastic Load Balancer (ELB) through which it is exposed to the internet.
The EXTERNAL-IP column gives the DNS name of the externally routable port for the service.
Check your setup by navigating to the URL listed. (It may take a few minutes for AWS to set up the ELB, and for the URL to resolve.)
Use the Tectonic Console to monitor your app’s public IP, Service, Deployment, and related Pods.
Go to Routing > Services to monitor the site’s services.
Go to Workloads > Deployments and click on the deployment name to monitor the deployment’s Pods.
You can also deploy an application using the Tectonic Console by navigating to the Deployments page, creating a new deployment or service, and copying the above YAML contents.
This example will use the simple app deployed above. To create an identical app using the Tectonic Console, first delete the existing app with kubectl delete:
$ kubectl delete deploy/simple-deployment svc/simple-service
deployment "simple-deployment" deleted
service "simple-service" deleted
To deploy using the Tectonic console, use the content of the two YAML files already created.
First, deploy the sample app.
Copy the contents of simple-deployment.yaml and paste them into the YAML pane, replacing its contents.
The Console will create your deployment, and display its Overview window.
Then, add the service.
Copy the contents of simple-service.yaml into the pane, replacing the default content.
The Console will create your service, and display its Overview window. Copy the External Load Balancer URL displayed into a browser to check your work. (It may take several minutes for AWS to update their ELB.)
The examples above used container images that have been shared publicly. To generate and host your own container images, we suggest using Quay.io or Quay Enterprise. The Quay container registry offers sophisticated access controls, easy automated builds, and automated security scanning, free for public projects.
Substitute your custom image and version (known as a "tag") in the Deployment above by changing:

containers:
  - name: nginx
    image: quay.io/coreos/example-app:v1.0