Enabling HTTPS in an existing etcd cluster

This guide outlines the process of migrating an existing etcd cluster from HTTP communication to encrypted HTTPS. For added security, it also shows how to require TLS peer certificates to authenticate connections.

Prepare cluster components

Check insecure port availability

By default, etcd communicates with clients over two ports: 2379, the current and official IANA port designation, and 4001, for clients who may implement versions of the protocol older than 0.4. We leverage this quirk of legacy support to migrate a running cluster from insecure plain-HTTP communication (on the old port, 4001) to encrypted HTTPS (on the current port, 2379) without cluster downtime. We restrict the legacy communication on port 4001 to the local interface.

If you've configured flannel, fleet, or other components to use custom ports, or port 2379 only, they must be reconfigured to use port 4001 before proceeding.

If etcd isn't listening on port 4001, it must also be reconfigured. If you used a Container Linux Config to spin up your machines, you can retrieve the --listen-client-urls value from /etc/systemd/system/etcd-member.service.d/20-clct-etcd-member.conf to verify the etcd ports:

$ grep listen-client-urls /etc/systemd/system/etcd-member.service.d/20-clct-etcd-member.conf
  --listen-client-urls="" \

In this case etcd is listening only on port 2379. Add port 4001 with a systemd drop-in unit file. Edit the line that starts with --listen-client-urls in the /etc/systemd/system/etcd-member.service.d/20-clct-etcd-member.conf file and append the new URL on port 4001 to the existing value retrieved in the previous step:
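For example, if the existing value listed a single URL on port 2379, the edited line might look like the following sketch, where is a placeholder for the member's own address:

```
--listen-client-urls=",,http://localhost:2379,http://localhost:4001" \
```

The exact value depends on the URLs already present; keep every existing entry and only append the port 4001 variants.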


Run systemctl daemon-reload followed by systemctl restart etcd-member.service to restart etcd. Check cluster status using the etcdctl commands:

$ etcdctl member list
$ etcdctl cluster-health

Repeat these steps on each etcd cluster member.

Generate TLS key pairs

Follow the guide to generating self-signed certificates to create the certificate/key pairs needed for each etcd cluster member.
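If you only need illustrative files to follow along with this guide, a CA and a member key pair can also be produced with plain openssl. This is a sketch, not a substitute for the linked guide: the names, the placeholder address, and the validity periods are assumptions to adjust for your cluster.

```shell
# Illustrative only: self-signed CA for the example cluster
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=etcd-ca" -keyout ca-key.pem -out ca.pem

# Key and certificate signing request for the first member
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=server1" -keyout server1-key.pem -out server1.csr

# Sign with the CA, adding the subject-alternative names that
# etcd checks when verifying peer and client connections
printf 'subjectAltName=IP:,DNS:server1\n' > san.cnf
openssl x509 -req -days 365 -in server1.csr \
  -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -extfile san.cnf -out server1.pem
```

Repeat the last two commands for each remaining member, changing the name and address.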

Copy key pairs to nodes

Assume we have three Container Linux machines running three etcd cluster members: server1, server2, and server3, with corresponding IP addresses,, and (placeholder addresses; substitute your own throughout this guide). We will use the following key pair file names in our example:

server1.pem and server1-key.pem for server1
server2.pem and server2-key.pem for server2
server3.pem and server3-key.pem for server3

Create the /etc/ssl/etcd directory, then copy the corresponding certificate and key there. Set permissions to secure the directory and key file:

$ chown -R etcd:etcd /etc/ssl/etcd
$ chmod 600 /etc/ssl/etcd/*-key.pem

Copy the ca.pem CA certificate file into /etc/ssl/etcd as well.

Alternatively, copy ca.pem into /etc/ssl/certs instead, and run the update-ca-certificates script to update the system certificates bundle. After doing so, the added CA will be available to any program running on the node, and it will not be necessary to set the CA path for each application.

Repeat this step on the rest of the cluster members.

Using an etcd proxy

If you typically connect to a remote etcd cluster, this is a good time to configure an etcd proxy that handles the remote connection and TLS termination, and to reconfigure your apps to communicate through the proxy on localhost. In this case, you must generate a client key pair (e.g. client1.pem and client1-key.pem) and repeat the Copy Key Pairs step above with the client key pair files.

It is also necessary to modify any systemd unit files or drop-ins which use etcdctl in ExecStart*= or ExecStop*= directives, replacing the invocation of /usr/bin/etcdctl with /usr/bin/etcdctl --no-sync. Without --no-sync, etcdctl retrieves the cluster member list and talks to the remote members directly, bypassing the local proxy; with it, all operations go through the proxy.
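As an illustration, a hypothetical drop-in that writes a key at service start would change as follows (the unit, key, and value are invented for this example):

```ini
[Service]
# Before: ExecStartPre=/usr/bin/etcdctl set /services/myapp/ready true
# After, forcing the write through the local proxy:
ExecStartPre=/usr/bin/etcdctl --no-sync set /services/myapp/ready true
```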

Configure etcd key pair

Now we will configure etcd to use the new certificates. Create a /etc/systemd/system/etcd-member.service.d/30-certs.conf drop-in file with the following contents:
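(Shown here as a sketch for server1, assuming the key pair file names from the Copy Key Pairs step. Each ETCD_* environment variable maps to the etcd flag of the same name, and the ETCD_PEER_* entries enable the peer certificate authentication mentioned in the introduction.)

```ini
[Service]
Environment="ETCD_CERT_FILE=/etc/ssl/etcd/server1.pem"
Environment="ETCD_KEY_FILE=/etc/ssl/etcd/server1-key.pem"
Environment="ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.pem"
Environment="ETCD_CLIENT_CERT_AUTH=true"
Environment="ETCD_PEER_CERT_FILE=/etc/ssl/etcd/server1.pem"
Environment="ETCD_PEER_KEY_FILE=/etc/ssl/etcd/server1-key.pem"
Environment="ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.pem"
Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
```

Use the matching file names on server2 and server3.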


Reload systemd configs with systemctl daemon-reload then restart etcd by invoking systemctl restart etcd-member.service. Check cluster health:

$ etcdctl member list
$ etcdctl cluster-health

Repeat this step on the rest of the cluster members.

Configure etcd proxy key pair

If proxying etcd connections as discussed above, create a systemd drop-in unit file named /etc/systemd/system/etcd-member.service.d/30-certs.conf. Assuming the client key pair from the Copy Key Pairs step, its contents will look something like:

[Service]
# Listen only on loopback interface.
Environment="ETCD_PROXY=on"
Environment="ETCD_LISTEN_CLIENT_URLS=http://localhost:2379,http://localhost:4001"
Environment="ETCD_CERT_FILE=/etc/ssl/etcd/client1.pem"
Environment="ETCD_KEY_FILE=/etc/ssl/etcd/client1-key.pem"
Environment="ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.pem"

Reload systemd configs with systemctl daemon-reload, then restart etcd with systemctl restart etcd-member.service. Check proxy status with, e.g.:

$ curl http://localhost:2379/v2/members

Change etcd peer URLs

Use etcdctl piped through an awk filter to print the commands needed to reconfigure the cluster peer URLs. After reviewing the command lines, we will invoke them on one etcd cluster member.

Current etcd

On etcd v2.2 or later, invoke:

$ etcdctl member list | awk -F'[: =]' '{print "etcdctl member update "$1" https:"$7":"$8}'

A series of command lines will be printed, one for each etcd cluster member:

etcdctl member update 2428639343c1baab
etcdctl member update 50da8780fd6c8919
etcdctl member update 81901418ed658b78

On any one etcd cluster member, run each of the printed commands – except the last one. We will change the peer URL for the last etcd cluster member only after completing any proxy configuration below.

Check cluster health:

$ etcdctl member list
$ etcdctl cluster-health

If the cluster members report as expected, we can move on to configuring the local etcd proxy, if needed, then invoking the last of the printed command lines on a cluster member.

etcd proxy

An operating etcd proxy will automatically adopt the new peer URLs within 30 seconds of each update (that is, the default --proxy-refresh-interval of 30000 milliseconds).

etcd versions 2.1 and older

For etcd versions 2.1 and earlier, the awk filter to produce the peer URL change commands is different:

$ etcdctl member list | awk -F'[: =]' '{print "curl -XPUT -H \"Content-Type: application/json\" http://localhost:2379/v2/members/"$1" -d \x27{\"peerURLs\":[\"https:"$7":"$8"\"]}\x27"}'

This will produce a set of command lines like:

curl -XPUT -H "Content-Type: application/json" http://localhost:2379/v2/members/2428639343c1baab -d '{"peerURLs":[""]}'
curl -XPUT -H "Content-Type: application/json" http://localhost:2379/v2/members/50da8780fd6c8919 -d '{"peerURLs":[""]}'
curl -XPUT -H "Content-Type: application/json" http://localhost:2379/v2/members/81901418ed658b78 -d '{"peerURLs":[""]}'

Apply the changes in the same manner described above, by running each of the printed commands, except the last one, on any one etcd cluster member. Finally, invoke the last printed command after completing any etcd proxy configuration in the previous section.

Change etcd client URLs

Edit the lines that start with --listen-client-urls, --advertise-client-urls, and --listen-peer-urls in the /etc/systemd/system/etcd-member.service.d/20-clct-etcd-member.conf file, switching the URLs to the https scheme and restricting the legacy port 4001 to the loopback interface (shown here for server1 with the example address

--advertise-client-urls="" \
--listen-client-urls=",http://127.0.0.1:4001" \
--listen-peer-urls="" \

Reload systemd configs with systemctl daemon-reload and restart etcd by issuing systemctl restart etcd-member.service. Check that HTTPS connections are working properly with, e.g.:

$ curl --cacert /etc/ssl/etcd/ca.pem --cert /etc/ssl/etcd/server1.pem --key /etc/ssl/etcd/server1-key.pem

Check cluster health with etcdctl, now under HTTPS encryption:

$ etcdctl --ca-file /etc/ssl/etcd/ca.pem --cert-file /etc/ssl/etcd/server1.pem --key-file /etc/ssl/etcd/server1-key.pem member list
$ etcdctl --ca-file /etc/ssl/etcd/ca.pem --cert-file /etc/ssl/etcd/server1.pem --key-file /etc/ssl/etcd/server1-key.pem cluster-health

The certificate options can be read from environment variables to shorten the commands:

$ export ETCDCTL_CERT_FILE=/etc/ssl/etcd/server1.pem
$ export ETCDCTL_KEY_FILE=/etc/ssl/etcd/server1-key.pem
$ export ETCDCTL_CA_FILE=/etc/ssl/etcd/ca.pem
$ etcdctl member list
$ etcdctl cluster-health

Check etcd status and availability of the insecure port on the loopback interface:

$ systemctl status etcd-member.service
$ curl http://127.0.0.1:4001/v2/stats/self
$ curl http://127.0.0.1:4001/v2/members

Check fleet and flannel:

$ journalctl -u flanneld -f
$ journalctl -u fleet -f

Error messages from an otherwise functional cluster may be ignored on the last etcd restart, but they should not appear thereafter.

Once again, after verifying status, repeat this step on each etcd cluster member.