Today we are excited to share with the community that Istio has reached its 1.0 milestone. Work on Istio began in 2016 to answer the growing need for a service mesh within cloud native environments.
At CoreOS, and now at Red Hat, we believe in minimizing the time and expertise needed to develop the next generation of applications through open source software. The ecosystem we have created around Kubernetes and OpenShift is nurtured with this in mind. Using Red Hat OpenShift Application Runtimes (RHOAR), developers can focus on the application logic and business processes that put their products ahead of competitors. Similarly, Istio allows developers to stand on the shoulders of open source software, decreasing the amount of minutiae they need to implement to deliver scalable and more secure applications through the concept of a "service mesh."
What is a “service mesh” though?
In its most simplified form, a service mesh can be thought of as an additional application tier that glues all of the components of an application together. Through the intelligent use of proxy servers managed by a control plane, the service mesh can provide service discovery, application tracing and observability, and advanced traffic management that controls the flow of application requests.
Istio aids us in achieving this vision of automated operations by increasing the capabilities of Kubernetes in all of those areas and more.
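To make the traffic management piece concrete, here is a sketch of what a weighted route looks like in Istio's v1alpha3 API. The service name `reviews` and the `v1`/`v2` subsets are illustrative placeholders (the subsets would be defined in a corresponding DestinationRule), not part of any real deployment:

```shell
# Hypothetical example: send 90% of requests for the "reviews" service to
# subset v1 and 10% to subset v2. All names here are placeholders.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
EOF
```

Shifting the weights between subsets is how patterns like canary releases and gradual rollouts are expressed, without touching application code.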
When you consult the CNCF projects page you will notice that both Linkerd and Envoy are listed as “service mesh” solutions. In this case, both of those components provide the functions of a service mesh, but the application developer and cluster administrator still need to orchestrate the configuration of the individual proxy instances.
Ultimately, Istio is the control plane for managing a service mesh. Its purpose is to deploy a series of Envoy sidecars and coordinate them through the container orchestration layer. Istio aims to work with Kubernetes, Apache Mesos, and other orchestration systems. Within Kubernetes, pods participating in the service mesh have the proxy sidecar scheduled alongside their application containers. These pods implement network policies in kernel space (which can be achieved via iptables, nftables, eBPF, etc.) that force all traffic entering or leaving the pod to pass through the proxy. It’s through this process of traffic interception that Istio is able to perform its “magic” of automatically connecting the components of a service together.
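The interception step can be sketched with iptables rules similar in spirit to those Istio's init container installs in each pod's network namespace. This is a simplified illustration, not the exact rule set; the sidecar port (15001) and proxy UID (1337) follow Istio's documented conventions:

```shell
# Simplified sketch of the NAT rules used to redirect pod traffic through
# the Envoy sidecar (must run as root inside the pod's network namespace).
iptables -t nat -N ISTIO_REDIRECT
# Redirect intercepted TCP traffic to the Envoy sidecar's listener port.
iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-port 15001
# Route outbound traffic through the redirect chain, skipping traffic
# generated by Envoy itself (identified by its UID) to avoid a loop.
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner 1337 -j ISTIO_REDIRECT
# Route inbound TCP traffic through the same redirect chain.
iptables -t nat -A PREROUTING -p tcp -j ISTIO_REDIRECT
```

Because the redirection happens at the kernel level, application containers need no awareness of the proxy at all.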
New in Istio 1.0
The primary focus of development between Istio versions 0.8 and 1.0 was resolving bugs and improving performance. In support of this, the upstream developers have declared that all “core features” are now “ready for production use.” Though the focus was on stability and performance, new features were brought in as well. This release brings:
- Better handling of role-based access control (RBAC): Istio provides strong authentication based on non-replayable identities to protect against replay attacks from a compromised service. It also provides flexible authorization policies for enabling micro-segmentation based on identity or any request attribute, such as IP address. In this release, Istio revamped its system of “attributes” to give more fine-grained control over policy enforcement.
- Improved transport layer security (TLS) handling: With the simplification of mutual TLS configuration, Istio provides greater security for service-to-service and end-user-to-service communication.
- And more improvements to policy, telemetry, and security: The latest Istio version also brings JWT authentication, telemetry buffering, a new policy cache, and expanded (and refactored) test suites.
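As a sketch of the simplified mutual TLS configuration mentioned above, a namespace-wide authentication policy in Istio 1.0 looks roughly like this. The namespace shown is a placeholder:

```shell
# Hypothetical example: require mutual TLS for all services in the
# "default" namespace using Istio 1.0's authentication Policy resource.
kubectl apply -f - <<EOF
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: default
spec:
  peers:
  - mtls: {}
EOF
```

Note that this only sets the server-side expectation; clients in the mesh would also need a DestinationRule whose traffic policy sets TLS mode `ISTIO_MUTUAL` so their sidecars originate mutual TLS.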
There’s a lot more to read about, and you can review the release notes here.
What does this mean?
For the Red Hat engineering team, the work continues. While the upstream project has declared Istio “ready for production use,” their definition differs from ours. As it stands, many of the APIs within Istio are still evolving, and anyone who has attempted integration with an API knows the pain that can follow when its grammar changes. This is not to say that we feel the software is unstable, only that our customers demand more rigorous qualification of the available feature sets. We will now work across many other teams at Red Hat to validate the quality statements we make for our customers.