The etcd operator provides the following options for saving cluster backups:

- Persistent Volume (PV) on GCE or AWS
- S3 bucket on AWS

This doc talks about how to configure the etcd operator to use these backup options.
By default, the operator supports saving backups to a PV on GCE.
This is done by passing the flag
`--pv-provisioner=kubernetes.io/gce-pd` to the operator, which is also the default value.
This essentially saves backups to an instance of GCE PD.
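As a sketch, the provisioner flag can be set on the operator's container in its Deployment manifest. The deployment name, image tag, and binary path below are hypothetical; only the flag itself comes from this doc:

```yaml
# Hypothetical operator Deployment fragment; only the --pv-provisioner flag matters here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: etcd-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: etcd-operator
  template:
    metadata:
      labels:
        name: etcd-operator
    spec:
      containers:
      - name: etcd-operator
        image: quay.io/coreos/etcd-operator:latest    # hypothetical image tag
        command:
        - /usr/local/bin/etcd-operator                # hypothetical binary path
        - --pv-provisioner=kubernetes.io/gce-pd       # default; use kubernetes.io/aws-ebs on AWS
```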
If running Kubernetes on AWS, pass the flag
`--pv-provisioner=kubernetes.io/aws-ebs` to the operator.
See AWS deployment.
This essentially saves backups to an instance of AWS EBS.
Saving backups to S3 is also supported. The S3 backup policy can be set at two levels:

- Operator level: applies to all clusters managed by the operator, unless overridden.
- Cluster level: applies only to the given cluster.

If configurations for both levels are specified, the cluster-level configuration overrides the operator-level configuration.
See the S3 backup deployment template on how to configure the operator to enable S3 backups. The following flags need to be passed to the operator:

- `--backup-aws-secret`: The name of the kube secret object that stores the AWS credential file. The file name must be 'credentials'. The profile must be "default".
- `--backup-aws-config`: The name of the kube configmap object that stores the AWS config file. The file name must be 'config'.
- `--backup-s3-bucket`: The name of the S3 bucket to store backups in.

Both the secret and configmap objects must be created in the same namespace that the etcd-operator is running in.
For example, let's say we have AWS credentials:

```
$ cat ~/.aws/credentials
[default]
aws_access_key_id = XXX
aws_secret_access_key = XXX
```
We create a secret "aws":

```
$ kubectl -n <namespace-name> create secret generic aws --from-file=$AWS_DIR/credentials
```
We have an AWS config:

```
$ cat ~/.aws/config
[default]
region = us-west-1
```
We create a configmap "aws":

```
$ kubectl -n <namespace-name> create configmap aws --from-file=$AWS_DIR/config
```
What we have:

- A secret "aws" containing the AWS credentials file.
- A configmap "aws" containing the AWS config file.
We will start the etcd operator with the following flags:

```
$ ./etcd-operator ... --backup-aws-secret=aws --backup-aws-config=aws --backup-s3-bucket=etcd_backups
```
Then we can start using S3 storage for backups. See spec examples on how to configure a cluster that uses an S3 bucket as its storage type.
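As a hedged sketch, a cluster that relies on the operator-level S3 configuration might select S3 storage in its backup spec like this. The apiVersion, kind, cluster name, and the interval/retention values are assumptions for illustration; `storageType: S3` is assumed to be the selector for S3-backed backups:

```yaml
# Hypothetical cluster manifest; apiVersion/kind and numeric values are illustrative.
apiVersion: etcd.coreos.com/v1beta1
kind: Cluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3
  backup:
    backupIntervalInSecond: 30   # illustrative value
    maxBackups: 5                # illustrative value
    storageType: S3              # use the operator-level S3 settings
```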
See the S3 backup with cluster specific configuration spec to see how the cluster's
`spec.backup` field should be configured to set a cluster specific S3 backup configuration. The following additional fields need to be set under the cluster spec's `backup.s3` field:

- `s3Bucket`: The name of the S3 bucket to store backups in.
- `awsSecret`: The name of the secret object, which should contain two files named 'credentials' and 'config'. The profile to use in both files must be "default".
```
$ cat ~/.aws/credentials
[default]
aws_access_key_id = XXX
aws_secret_access_key = XXX

$ cat ~/.aws/config
[default]
region = us-west-1
```
We can then create the secret named "aws" from the two files by:
```
$ kubectl -n <namespace-name> create secret generic aws --from-file=$AWS_DIR/credentials --from-file=$AWS_DIR/config
```
Once the secret is created, it can be used to configure a new cluster or update an existing one with the specific S3 configurations:
```yaml
spec:
  backup:
    s3:
      s3Bucket: example-s3-bucket
      awsSecret: aws
```
For AWS k8s users: if the
credentials file is not given, the
operator and backup sidecar pods will make use of the AWS IAM roles on the nodes where they are deployed.