CoreOS Delivers etcd v2.3.0 with Increased Stability and v3 API Preview

Today we’re happy to announce etcd v2.3.0, a release focused on improving stability and reliability. This release also introduces an experimental implementation of the next-generation v3 API, including a client and command line tool, giving developers early access to the future of etcd.

etcd is an open source, distributed, consistent key-value store. etcd is leveraged by hundreds of projects for shared configuration, service discovery, and scheduler coordination, and is an essential component of the CoreOS stack. It is the primary scheduling and service discovery data store in the Kubernetes cluster orchestration system.

You can get involved in the etcd community and find new binaries on GitHub. Read more about the latest updates below.

Stable Auth API

We introduced an experimental version of a v2 authorization API a few months ago. Since then, real-world testing has hardened the etcd Auth API, and it is now stable and ready for production use and development integration. The Auth API provides mechanisms for identifying users and granting capabilities within an etcd cluster based on that identity.
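
As a rough sketch of the workflow, a typical bootstrap looks something like the following (commands follow the etcd v2 authentication guide; the role, user, and path names here are illustrative, and exact flag spellings may differ between releases):

# create the root user (etcdctl prompts for a password)
etcdctl user add root

# create a role that can read and write one part of the keyspace
etcdctl role add fleet
etcdctl role grant fleet -path '/fleet/*' -readwrite

# turn authentication on for the cluster
etcdctl auth enable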

Reduced latency by up to 2x

To ensure durability, an etcd cluster persists every request to the disks of a majority of its members before replying to the request. Previously, this disk I/O happened sequentially: followers wrote an entry to disk only after the leader had finished writing it, so the minimum latency was 2 * fsync latency + network latency. To reduce this bottleneck, we implemented an optimization described in the Raft thesis. etcd 2.3.0 performs the leader's and followers' disk I/O in parallel, bringing the minimum latency down to fsync latency + network latency, since the followers' writes no longer wait on the leader's write.
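With purely illustrative numbers (say, a 2 ms fsync and 1 ms of network latency), the latency floor for a write drops from roughly 5 ms to roughly 3 ms.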

Functional testing dashboard

We have been subjecting etcd to long-running functional testing over the past year. Our etcd clusters have passed tens of thousands of rounds of these functional tests, which include tests that kill nodes and drop network packets. This testing has been extremely helpful in detecting runtime issues before our users do, and has markedly increased the reliability of etcd releases.

Up-to-date results of these functional tests are now publicly visible in a slick new etcd tests dashboard.

etcd functional testing dashboard
The dashboard views include essential stats, such as the total rounds of functional tests executed on this etcd cluster since it last restarted.

The test failure rate is typically below 0.1%. The etcd team investigates each functional test failure, and drives improvements in etcd or in the testing infrastructure itself with the results of that analysis. These functional tests even discovered an interesting bug in an etcd dependency, and enabled us to create an upstream fix, benefiting the wider open source community.

A high proportion of recent test failures have actually revealed shortcomings in the tests and the testing environment, rather than in etcd itself. For instance, the etcd testing cluster runs on commodity cloud infrastructure, where oversubscribed virtual resources can trigger false failures in tests that make timing assumptions. These false alarms have even extended into the reporting mechanism itself, which has delivered false positives in the past when it should have waited another second or two. Improvements are underway as we continue testing and using this infrastructure.

v3 API and new storage engine

etcd 2.3 introduces an experimental implementation of the new v3 API. The etcd v3 API is designed to make etcd scale to more clients more efficiently. To better support these new API features, this release also includes a new experimental storage engine. The new storage engine uses a b+tree-based on-disk format with full MVCC support. This enables etcd to take incremental snapshots and perform inexpensive “time travel” queries, such as accessing previous versions of a key.
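
As a rough sketch of what MVCC makes possible against a running demo cluster (the key and values are illustrative, and the revision flag is an assumption based on later etcd v3 tooling, so its spelling may differ in this experimental preview):

# each write to the same key creates a new revision instead of overwriting in place
./bin/etcdctlv3 put color blue      # stored at some revision N
./bin/etcdctlv3 put color green     # stored at revision N+1
./bin/etcdctlv3 range color         # returns the latest value, "green"
./bin/etcdctlv3 range --rev N color # "time travel": read the key as it was at revision N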

All v2 API requests continue to use the stable, existing storage engine and on-disk format; only the v3 API operates on the new b+tree storage system. The v2 and v3 APIs have very different data models, and it is non-trivial to make them manipulate the same key-value namespace without compromising the API guarantees of both versions. So, while unifying the data storage layer for both APIs is possible in theory, it would deliver little practical benefit to current v2 users. For now, maintaining separate storage engines for the two API versions makes more sense and simplifies the implementation.

We have no intention of deprecating the v2 API, and the two APIs will continue to exist side by side as we work toward the etcd v3 release. We will provide a migration guide for developers and users looking to smoothly upgrade their applications from the etcd v2 API to the v3 API.

Storage engine use by the two APIs
The etcd v2 and v3 APIs use different storage engines.

Get started with etcd v3 API

We encourage your feedback and involvement to guide the continued development of the v3 API. To try out new v3 features, start etcd with the additional --experimental-v3demo and --experimental-gRPC-addr options. To prepare to integrate etcd v3 into your applications, read the client library documentation.
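
For example, a single local member with the v3 demo enabled can be started like this (the gRPC listen address shown is just an illustrative choice):

etcd --experimental-v3demo --experimental-gRPC-addr 127.0.0.1:2378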

If you have a Go development environment, you can get started with the etcd v3 API in a few steps:

go get github.com/coreos/etcd
cd $GOPATH/src/github.com/coreos/etcd
go build -o bin/etcdctlv3 ./etcdctlv3
goreman -f V3DemoProcfile start
./bin/etcdctlv3 put hello "v3 API"
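
To confirm the write, read the key back; in this experimental preview the read command is range (command names may change before the final v3 release):

./bin/etcdctlv3 range hello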

Be sure to join the etcd-dev mailing list to stay updated. As we continue to invest in etcd and build requested features, we also welcome your contributions! Haven’t used etcd yet? Get started here.