What is Istio and what are its top alternatives?
Istio is a popular open-source service mesh that helps connect, secure, control, and observe services within a network. Its key features include traffic management, security, observability, and policy enforcement. However, Istio can be complex to set up and manage: it requires a solid understanding of networking concepts and can add overhead to your microservices architecture.
- Linkerd: Linkerd is a lightweight service mesh designed to be beginner-friendly, with features like traffic management, security, and reliability. It has a strong focus on simplicity and ease of use. Pros: Easy setup, minimal resource consumption. Cons: Less feature-rich than Istio.
- Consul: Consul by HashiCorp is a service mesh and service discovery tool that provides features like service discovery, health checking, and key-value storage. It offers a robust solution for connecting and securing services. Pros: Built-in service discovery, decentralized architecture. Cons: Not as dedicated to service mesh as Istio.
- Kuma: Kuma is a modern service mesh with advanced traffic routing, observability, and security capabilities. It is designed to be both powerful and easy to use, making it a strong alternative to Istio. Pros: Multi-mesh support, native Kubernetes integration. Cons: Relatively new to the market.
- Consul Connect: Consul Connect is a feature of Consul that focuses specifically on service-to-service connectivity and security. It provides sidecar proxies for secure communication and traffic management (the toy proxy sketch after this list illustrates the sidecar pattern). Pros: Deep integration with Consul, strong security features. Cons: May require familiarity with Consul.
- AWS App Mesh: AWS App Mesh is a fully managed service mesh that provides traffic management, observability, and security features for microservices running on AWS. It offers seamless integration with AWS services. Pros: Easy integration with AWS ecosystem, managed service. Cons: Limited to AWS environment.
- SuperGloo: SuperGloo is a service mesh management plane that simplifies the installation and management of multiple service meshes. It provides a unified control plane for different meshes and offers advanced features for traffic control and security. Pros: Multi-mesh management, integration with Istio and other meshes. Cons: Requires an additional layer of abstraction.
- Traefik Mesh: Traefik Mesh is a service mesh based on Traefik proxy that offers features like traffic splitting, load balancing, and SSL termination. It focuses on simplicity and ease of use for managing microservices. Pros: Lightweight, simple configuration. Cons: Limited feature set compared to other service meshes.
- NGINX Service Mesh: NGINX Service Mesh is built on top of NGINX and provides advanced traffic management, security, and monitoring capabilities for microservices. It offers a scalable and robust solution for service-to-service communication. Pros: High performance, deep integration with NGINX. Cons: Requires familiarity with NGINX.
- Pomerium: Pomerium is an identity-aware access proxy that can function as a lightweight service mesh for securing access to internal services. It provides features like access control, authentication, and encrypted communication. Pros: Identity-based access control, flexible deployment options. Cons: Limited in scope compared to full-fledged service meshes.
- Octarine: Octarine focuses on securing Kubernetes clusters and microservices by providing visibility, compliance, and threat detection capabilities. It integrates with Istio and other service meshes to enhance security posture. Pros: Kubernetes-native security, threat detection. Cons: More focused on security than traffic management.
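To make the sidecar pattern mentioned above concrete, here is a minimal Go sketch of the idea at the heart of every mesh on this list: the application talks to a local proxy, which forwards traffic to the real destination and gives the mesh a single place to add mTLS, retries, and metrics. The addresses are hypothetical, and real meshes inject and configure a production proxy (such as Envoy or linkerd2-proxy) transparently rather than asking you to write this yourself.

```go
// Toy sidecar proxy: accept local connections and forward them upstream.
// A real mesh data plane adds mTLS, retries, load balancing, and metrics here.
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// The port the application is pointed at (hypothetical).
	listener, err := net.Listen("tcp", "127.0.0.1:15001")
	if err != nil {
		log.Fatal(err)
	}
	for {
		inbound, err := listener.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go func(in net.Conn) {
			defer in.Close()
			// A real mesh resolves the upstream via service discovery
			// and wraps the connection in mutual TLS.
			out, err := net.Dial("tcp", "10.0.0.7:8080")
			if err != nil {
				log.Print(err)
				return
			}
			defer out.Close()
			go io.Copy(out, in) // client -> upstream
			io.Copy(in, out)    // upstream -> client
		}(inbound)
	}
}
```

Everything a mesh adds on top of this (routing rules, policy, observability) is configuration applied to proxies like this one.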
Top Alternatives to Istio
- linkerd
linkerd is an out-of-process network stack for microservices. It functions as a transparent RPC proxy, handling everything needed to make inter-service RPC safe and sane, including load balancing, service discovery, instrumentation, and routing. ...
- Envoy
Originally built at Lyft, Envoy is a high performance C++ distributed proxy designed for single services and applications, as well as a communication bus and “universal data plane” designed for large microservice “service mesh” architectures. ...
- Kubernetes
Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. ...
- Conduit
Conduit is a lightweight open source service mesh designed for performance, power, and ease of use when running applications on Kubernetes. Conduit is incredibly fast, lightweight, fundamentally secure, and easy to get started with. ...
- Kong
Kong is a scalable, open source API Layer (also known as an API Gateway, or API Middleware). Kong controls layer 4 and 7 traffic and is extended through Plugins, which provide extra functionality and services beyond the core platform. ...
- AWS App Mesh
AWS App Mesh is a service mesh based on the Envoy proxy that makes it easy to monitor and control containerized microservices. App Mesh standardizes how your microservices communicate, giving you end-to-end visibility and helping to ensure high availability for your applications. App Mesh gives you consistent visibility and network traffic controls for every microservice in an application. You can use App Mesh with Amazon ECS (using the Amazon EC2 launch type), Amazon EKS, and Kubernetes on AWS. ...
- Apigee
API management, design, analytics, and security are at the heart of modern digital architecture. The Apigee intelligent API platform is a complete solution for moving business to the digital world. ...
- Consul
Consul is a tool for service discovery and configuration. Consul is distributed, highly available, and extremely scalable. ...
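Since several of the tools above center on service discovery and health checking, here is a hedged sketch of what registering a service with Consul looks like using HashiCorp's official Go client (github.com/hashicorp/consul/api). The service name, port, and health endpoint are illustrative, not taken from any of the posts below.

```go
// Sketch: register a service plus an HTTP health check with a local Consul
// agent via the official Go client. Name, port, and endpoint are made up.
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig()) // talks to 127.0.0.1:8500
	if err != nil {
		log.Fatal(err)
	}
	registration := &api.AgentServiceRegistration{
		Name: "payments",
		Port: 8080,
		Check: &api.AgentServiceCheck{
			HTTP:     "http://127.0.0.1:8080/healthz",
			Interval: "10s",
			Timeout:  "1s",
		},
	}
	if err := client.Agent().ServiceRegister(registration); err != nil {
		log.Fatal(err)
	}
	log.Println("registered; other services can discover 'payments' via DNS or the catalog API")
}
```

Once registered, Consul runs the health check on the given interval and only advertises healthy instances to other services.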
Istio alternatives & related posts
Pros of linkerd
- CNCF Project (3)
- Service Mesh (1)
- Fast Integration (1)
- Pre-check permissions (1)
- Lightweight (1)
related linkerd posts
related Envoy posts
We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. Behind the scenes the Config API is built with Go, gRPC, and Envoy.
At Segment, we build new services in Go by default. The language is simple so new team members quickly ramp up on a codebase. The tool chain is fast so developers get immediate feedback when they break code, tests or integrations with other systems. The runtime is fast so it performs great at scale.
For the newest round of APIs we adopted the gRPC service #framework.
The Protocol Buffer service definition language makes it easy to design type-safe and consistent APIs, thanks to ecosystem tools like the Google API Design Guide for API standards, uber/prototool for formatting and linting .protos, lyft/protoc-gen-validate for defining field validations, and grpc-gateway for defining REST mappings.
With a well designed .proto, it's easy to generate a Go server interface and a TypeScript client, providing type-safe RPC between languages.
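To illustrate the pattern the post describes, here is a rough Go sketch of implementing a protoc-generated server interface. The request/response types and interface below are hypothetical stand-ins for generated code, not Segment's actual Config API.

```go
// Sketch of the .proto-to-Go pattern: protoc generates a server interface,
// and your service implements it. The types and names below are hypothetical
// stand-ins for generated code, not Segment's actual API.
package main

import (
	"context"
	"fmt"
)

// What a protoc-generated request/response pair might look like.
type GetSourceRequest struct{ SourceID string }
type GetSourceResponse struct{ Name string }

// What a protoc-generated server interface might look like.
type ConfigAPIServer interface {
	GetSource(ctx context.Context, req *GetSourceRequest) (*GetSourceResponse, error)
}

// Your implementation satisfies the generated interface; grpc-gateway can
// then expose the same method over REST/JSON.
type server struct{}

func (s *server) GetSource(ctx context.Context, req *GetSourceRequest) (*GetSourceResponse, error) {
	return &GetSourceResponse{Name: "source-" + req.SourceID}, nil
}

func main() {
	var srv ConfigAPIServer = &server{}
	resp, err := srv.GetSource(context.Background(), &GetSourceRequest{SourceID: "42"})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Name)
}
```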
For the API gateway and RPC we adopted the Envoy service proxy.
The internet-facing segmentapis.com endpoint is an Envoy front proxy that rate-limits and authenticates every request. It then transcodes a #REST / #JSON request to an upstream gRPC request. The upstream gRPC servers are running an Envoy sidecar configured for Datadog stats.
The result is API #security, #reliability, and consistent #observability through Envoy configuration, not code.
We experimented with Swagger service definitions, but the spec is sprawling and the generated clients and server stubs leave a lot to be desired. gRPC, .proto, and the Go implementation feel better designed and implemented. Thanks to the gRPC tooling and ecosystem you can generate Swagger from .protos, but it's effectively impossible to go the other way.
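As a rough illustration of what that Envoy front proxy does, here is a toy Go reverse proxy that authenticates and rate-limits before forwarding upstream. In Segment's setup this logic lives in Envoy configuration rather than application code; the upstream URL, listen port, and auth check here are all made up.

```go
// Toy imitation of the front-proxy behavior described above: authenticate,
// rate-limit, then forward to the upstream API.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"

	"golang.org/x/time/rate"
)

func main() {
	upstream, err := url.Parse("http://localhost:9000") // hypothetical backend
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)
	limiter := rate.NewLimiter(rate.Limit(100), 200) // 100 req/s, burst of 200

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Stand-in for real authentication of every request.
		if r.Header.Get("Authorization") == "" {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		if !limiter.Allow() {
			http.Error(w, "rate limited", http.StatusTooManyRequests)
			return
		}
		proxy.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8443", nil))
}
```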
At uSwitch we wanted a way to load balance between our multiple Kubernetes clusters in AWS to give us added redundancy. We already had ingresses defined for all our applications so we wanted to build on top of that, instead of creating a new system that would require our various teams to change code/config etc.
Envoy seemed to tick a lot of boxes:
- Load balancing capabilities right out of the box: health checks, circuit breaking, retries, etc.
- Tracing and Prometheus metrics support
- Lightweight
- Good community support
This was all good, but what really sold us was the API that supports dynamic configuration. This would allow us to dynamically configure Envoy to route to ingresses and clusters as they were created or destroyed.
To do this we built a tool called Yggdrasil using their Go SDK. Yggdrasil effectively just creates Envoy configuration from Kubernetes ingress objects: you point Yggdrasil at your kube clusters, it generates config from the ingresses, and Envoy can then load balance between your clusters for you. This is all done dynamically, so as soon as a new ingress is created the Envoy nodes get updated with the new config. Importantly, this all worked with what we already had; there was no need to create new config for every application, we just put this on top of it.
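Here is a much-simplified sketch of the translation Yggdrasil performs, grouping ingress rules by host into Envoy-style clusters. The real tool speaks Envoy's dynamic xDS APIs through a control-plane SDK; these structs are a hand-rolled approximation for illustration only, and the hostnames are invented.

```go
// Simplified illustration of the Yggdrasil idea: turn Kubernetes ingress
// rules into Envoy-style cluster config that load balances across clusters.
package main

import (
	"encoding/json"
	"fmt"
)

// One ingress rule per Kubernetes cluster, as discovered from its API.
type IngressRule struct {
	Host    string // e.g. app.example.com
	Backend string // host:port of that cluster's ingress endpoint
}

// A hand-rolled stand-in for an Envoy cluster definition.
type EnvoyCluster struct {
	Name      string   `json:"name"`
	Endpoints []string `json:"endpoints"` // Envoy load balances across these
}

// buildConfig groups ingress rules by host, so one route maps to a cluster
// whose endpoints span all the Kubernetes clusters serving that host.
func buildConfig(rules []IngressRule) map[string]EnvoyCluster {
	clusters := map[string]EnvoyCluster{}
	for _, r := range rules {
		c := clusters[r.Host]
		c.Name = r.Host
		c.Endpoints = append(c.Endpoints, r.Backend)
		clusters[r.Host] = c
	}
	return clusters
}

func main() {
	// The same app exposed by two Kubernetes clusters, as in the post.
	rules := []IngressRule{
		{Host: "app.example.com", Backend: "cluster-a.example.com:443"},
		{Host: "app.example.com", Backend: "cluster-b.example.com:443"},
	}
	out, _ := json.MarshalIndent(buildConfig(rules), "", "  ")
	fmt.Println(string(out))
}
```

In the real system this config is pushed to the Envoy nodes dynamically whenever an ingress appears or disappears, rather than printed once.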
Kubernetes
Pros of Kubernetes
- Leading Docker container management solution (166)
- Simple and powerful (129)
- Open source (107)
- Backed by Google (76)
- The right abstractions (58)
- Scale services (25)
- Replication controller (20)
- Permission management (11)
- Supports autoscaling (9)
- Simple (8)
- Cheap (8)
- Self-healing (6)
- Open, powerful, stable (5)
- Reliable (5)
- No cloud platform lock-in (5)
- Promotes modern/good infrastructure practice (5)
- Scalable (4)
- Quick cloud setup (4)
- Custom and extensibility (3)
- Captain of Container Ship (3)
- Cloud agnostic (3)
- Backed by Red Hat (3)
- Runs on Azure (3)
- A self-healing environment with rich metadata (3)
- Everything of CaaS (2)
- GKE (2)
- Golang (2)
- Easy setup (2)
- Expandable (2)
Cons of Kubernetes
- Steep learning curve (16)
- Poor workflow for development (15)
- Orchestrates only infrastructure (8)
- High resource requirements for on-prem clusters (4)
- Too heavy for simple systems (2)
- Additional vendor lock-in (Docker) (1)
- More moving parts to secure (1)
- Additional technology overhead (1)
related Kubernetes posts
How Uber developed Jaeger, the open source, end-to-end distributed tracing system and now a CNCF project:
Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.
Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:
https://eng.uber.com/distributed-tracing/
(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)
Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark
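For a sense of what this instrumentation looks like in code, here is a minimal Go sketch using the OpenTracing API that Jaeger's clients implement. With no tracer registered it falls back to the no-op global tracer; wiring a real Jaeger tracer at startup is omitted, and the operation names are made up.

```go
// Minimal shape of OpenTracing instrumentation in Go. In a real service you
// would install a Jaeger tracer as the global tracer at startup (omitted).
package main

import (
	"fmt"

	opentracing "github.com/opentracing/opentracing-go"
)

func fetchUser(parent opentracing.Span, id string) {
	// Child span: appears nested under the request span in the Jaeger UI.
	span := opentracing.GlobalTracer().StartSpan(
		"fetchUser", opentracing.ChildOf(parent.Context()))
	defer span.Finish()
	span.SetTag("user.id", id)
	// ... the actual database call would go here ...
}

func main() {
	root := opentracing.GlobalTracer().StartSpan("handleRequest")
	defer root.Finish()
	fetchUser(root, "42")
	fmt.Println("request traced (no-op tracer unless one is registered)")
}
```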
Our first experience with .NET Core was when we developed our OSS feature management platform, Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics, and can fit in a multi-container architecture. We decided to implement our rule engine processor in F#; our main service was implemented in C#, and other components were built using JavaScript / TypeScript and Go.
Visual Studio Code worked really well for us too: it handled all our polyglot services, and the .NET Core integration provided a great cross-platform developer experience (to be fair, F# was a bit trickier). In fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm until we decided to adopt Kubernetes, with an almost seamless migration process.
After our positive experience of running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# gained back some of its popularity (originally lost to Node.js), and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions, and other projects...
related Conduit posts
Pros of Kong
- Easy to maintain (37)
- Easy to install (32)
- Flexible (26)
- Great performance (21)
- API blueprint (7)
- Custom plugins (4)
- Kubernetes-native (3)
- Security (2)
- Has a good plugin infrastructure (2)
- Agnostic (2)
- Load balancing (1)
- Documentation is clear (1)
- Very customizable (1)
related Kong posts
We needed a lightweight and completely customizable #microservices #gateway that could generate #JWT and introspect #OAuth2 tokens as well. The #gateway was going to front all #APIs for our single-page web app as well as externalized #APIs for our partners.
Contenders: We looked at Tyk Cloud and Kong. Kong's plugins are all Lua-based and its core is NGINX and OpenResty. Although it's open source, it's not the greatest platform to customize. On top of that, enterprise features are paid and expensive. Tyk is Go-based, the nomenclature used within Tyk (like "sessions") was bizarre, and again the enterprise features were paid.
Decision: We ultimately decided to roll our own with ExpressJS, building Express Gateway, because the use case for using ExpressJS as an #API #gateway was tried and true; in fact, all the enterprise features the other two charge for, such as #OAuth2 introspection, were freely available as ExpressJS middleware.
Outcome: We open sourced Express Gateway with a core set of plugins, and the community started writing their own, which they could do quickly by rolling lots of ExpressJS middleware into Express Gateway.
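For comparison, here is a hedged Go sketch of the OAuth2 token introspection (RFC 7662) that made the gateway choice matter in the post above. Express Gateway implements the equivalent as ExpressJS middleware; the introspection URL is a placeholder, and the client authentication the auth server would require is omitted.

```go
// Sketch of OAuth2 token introspection (RFC 7662) as HTTP middleware.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"net/url"
	"strings"
)

func introspect(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		// Placeholder endpoint; real servers also require client auth here.
		resp, err := http.PostForm("https://auth.example.com/oauth2/introspect",
			url.Values{"token": {token}})
		if err != nil {
			http.Error(w, "auth server unreachable", http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		var result struct {
			Active bool `json:"active"` // RFC 7662: the one required field
		}
		if json.NewDecoder(resp.Body).Decode(&result) != nil || !result.Active {
			http.Error(w, "invalid token", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello from behind the gateway"))
	})
	log.Fatal(http.ListenAndServe(":8080", introspect(api)))
}
```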
related AWS App Mesh posts
Pros of Apigee
- Highly scalable and secure API management platform (12)
- Good documentation (6)
- Quick jumpstart (6)
- Fast and adjustable caching (3)
- Easy to use (3)
Cons of Apigee
- Expensive (11)
- Doesn't support hybrid natively (1)
related Apigee posts
Amazon API Gateway vs. Apigee: how do they compare as API gateways? What is the equivalent functionality, and what are the similarities and differences when moving from Apigee API GW to AWS API GW?
Pros of Consul
- Great service discovery infrastructure (61)
- Health checking (35)
- Distributed key-value store (29)
- Monitoring (26)
- High availability (23)
- Web UI (12)
- Token-based ACLs (10)
- Gossip clustering (6)
- DNS server (5)
- Not Java (4)
- Docker integration (1)
- JavaScript (1)
related Consul posts
As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.
We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.
Since the beginning, Cal Henderson has been the CTO of Slack. Earlier this year, he commented on a Quora question summarizing their current stack.
Apps:
- Web: a mix of JavaScript/ES6 and React.
- Desktop: Electron to ship it as a desktop application.
- Android: a mix of Java and Kotlin.
- iOS: written in a mix of Objective-C and Swift.
- The core application and the API are written in PHP/Hack and run on HHVM.
- The data is stored in MySQL using Vitess.
- Caching is done using Memcached and MCRouter.
- The search service is built on SolrCloud, with various Java services.
- The messaging system uses WebSockets with many services in Java and Go.
- Load balancing is done using HAProxy with Consul for configuration.
- Most services talk to each other over gRPC; some use Thrift or JSON-over-HTTP.
- The voice and video calling service was built in Elixir.
- Built using open source tools including Presto, Spark, Airflow, Hadoop and Kafka.
- For server configuration and management we use Terraform, Chef and Kubernetes.
- We use Prometheus for time series metrics and ELK for logging.