
Alternatives to linkerd

Istio, HAProxy, Kubernetes, Hystrix, and Consul are the most popular alternatives and competitors to linkerd.

What is linkerd and what are its top alternatives?

linkerd is an out-of-process network stack for microservices. It functions as a transparent RPC proxy, handling everything needed to make inter-service RPC safe and sane, including load balancing, service discovery, instrumentation, and routing.
linkerd is a tool in the Microservices Tools category of a tech stack.
linkerd is an open source tool; its repository is hosted on GitHub.
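
To make the "out-of-process proxy" idea concrete, here is a minimal Go sketch of the kind of work such a proxy performs: sit next to the application and spread its outbound requests across several service instances. This is illustrative only, not linkerd's code; the backend addresses and listen port are placeholders.

```go
package main

// A minimal sketch (not linkerd itself) of an out-of-process proxy:
// accept local traffic and round-robin it across backend instances.

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	backends := []string{
		"http://10.0.0.11:8080", // placeholder service instances
		"http://10.0.0.12:8080",
		"http://10.0.0.13:8080",
	}

	var counter uint64
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Pick the next backend in round-robin order.
			i := atomic.AddUint64(&counter, 1)
			target, _ := url.Parse(backends[i%uint64(len(backends))])
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}

	// Applications talk to the local proxy; it handles routing and balancing.
	log.Fatal(http.ListenAndServe("127.0.0.1:4140", proxy))
}
```

A real service-mesh proxy layers service discovery, retries, instrumentation, and routing policy on top of this basic forwarding loop.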

Top Alternatives to linkerd

  • Istio

    Istio is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. Istio's control plane provides an abstraction layer over the underlying cluster management platform, such as Kubernetes, Mesos, etc.

  • HAProxy

    HAProxy (High Availability Proxy) is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications.

  • Kubernetes

    Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions.

  • Hystrix

    Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable.

  • Consul

    Consul is a tool for service discovery and configuration. Consul is distributed, highly available, and extremely scalable.

  • Envoy

    Originally built at Lyft, Envoy is a high performance C++ distributed proxy designed for single services and applications, as well as a communication bus and “universal data plane” designed for large microservice “service mesh” architectures.

  • Conduit

    Conduit is a lightweight open source service mesh designed for performance, power, and ease of use when running applications on Kubernetes. Conduit is incredibly fast, lightweight, fundamentally secure, and easy to get started with.

  • NGINX

    nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server, written by Igor Sysoev. According to Netcraft nginx served or proxied 30.46% of the top million busiest sites in Jan 2018.

linkerd alternatives & related posts


Istio

Open platform to connect, manage, and secure microservices, by Google, IBM, and Lyft
PROS OF ISTIO
  • Zero code for logging and monitoring (14)
  • Service Mesh (9)
  • Great flexibility (8)
  • Resiliency (5)
  • Powerful authorization mechanisms (5)
  • Ingress controller (5)
  • Easy integration with Kubernetes and Docker (4)
  • Full security (4)
CONS OF ISTIO
  • Performance (17)

related Istio posts

Shared insights on Istio and Dapr

At my company, we are trying to move away from a monolith to a microservices-led architecture. We are now stuck with the problem of establishing a communication mechanism between microservices. Since we are planning to use service meshes and something like Dapr/Istio, we are not sure how to split services between the two. Service meshes offer traffic routing and splitting, whereas Dapr can offer state management and service-to-service invocation. At the same time, both of them provide mTLS, metrics, resiliency and tracing. How do we choose which should offer what?
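
For readers weighing these options, a tiny illustrative sketch of what "traffic splitting" means in practice may help: requests are routed to service versions according to configured weights. The sketch below is plain Go with made-up service names and weights, not Istio or Dapr code; in a mesh, this logic lives in the proxy and is driven by configuration rather than application code.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Hypothetical upstream versions and their traffic weights (90/10 canary split).
var upstreams = []struct {
	addr   string
	weight int
}{
	{"reviews-v1:9080", 90},
	{"reviews-v2:9080", 10},
}

// pick returns an upstream address with probability proportional to its weight.
func pick() string {
	total := 0
	for _, u := range upstreams {
		total += u.weight
	}
	n := rand.Intn(total)
	for _, u := range upstreams {
		if n < u.weight {
			return u.addr
		}
		n -= u.weight
	}
	return upstreams[0].addr // unreachable with positive weights
}

func main() {
	counts := map[string]int{}
	for i := 0; i < 10000; i++ {
		counts[pick()]++
	}
	fmt.Println(counts) // roughly 9000 vs 1000 requests
}
```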

Anas MOKDAD
Shared insights on Kong and Istio

Regarding Kong's new support for the service mesh pattern, I wonder how it compares to Istio?


HAProxy

The Reliable, High Performance TCP/HTTP Load Balancer
PROS OF HAPROXY
  • Load balancer (132)
  • High performance (102)
  • Very fast (69)
  • Proxying for TCP and HTTP (58)
  • SSL termination (55)
  • Open source (31)
  • Reliable (27)
  • Free (20)
  • Well-documented (18)
  • Very popular (12)
  • Runs health checks on backends (7)
  • Suited for very high traffic web sites (7)
  • Scalable (6)
  • Ready to Docker (5)
  • Powers many of the world's most visited sites (4)
  • Simple (3)
  • SSL offloading (2)
  • Works with NTLM (2)
  • Available as a plugin for OPNsense (1)
  • Redis (1)
CONS OF HAPROXY
  • Becomes your single point of failure (6)

related HAProxy posts

Around the time of their Series A, Pinterest’s stack included Python and Django, with Tornado and Node.js as web servers. Memcached / Membase and Redis handled caching, with RabbitMQ handling queueing. NGINX, HAProxy and Varnish managed static delivery and load balancing, with persistent data storage handled by MySQL.

Tom Klein

We're using Git through GitHub for public repositories and GitLab for our private repositories due to its easy-to-use features. Docker and Kubernetes are a must-have for our highly scalable infrastructure, complemented by HAProxy with Varnish in front of it. We are using a lot of npm and Visual Studio Code in our development sessions.


Kubernetes

Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops
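
Kubernetes' declarative model (described earlier: the cluster works to make actual state match what you declared) can be inspected directly through its API. Below is a minimal, hedged sketch using the official Go client, assuming a kubeconfig at the default path and the "default" namespace; it compares the desired replica count of each Deployment with the count currently ready.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from ~/.kube/config (adjust for in-cluster use).
	home, _ := os.UserHomeDir()
	config, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Compare declared vs. actual state for every Deployment in "default".
	deps, err := clientset.AppsV1().Deployments("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range deps.Items {
		want := int32(1) // Kubernetes defaults Replicas to 1 when unset
		if d.Spec.Replicas != nil {
			want = *d.Spec.Replicas
		}
		fmt.Printf("%s: want %d replicas, %d ready\n", d.Name, want, d.Status.ReadyReplicas)
	}
}
```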
PROS OF KUBERNETES
  • Leading Docker container management solution (166)
  • Simple and powerful (129)
  • Open source (107)
  • Backed by Google (76)
  • The right abstractions (58)
  • Scale services (25)
  • Replication controller (20)
  • Permission management (11)
  • Supports autoscaling (9)
  • Simple (8)
  • Cheap (8)
  • Self-healing (6)
  • Open, powerful, stable (5)
  • Reliable (5)
  • No cloud platform lock-in (5)
  • Promotes modern/good infrastructure practice (5)
  • Scalable (4)
  • Quick cloud setup (4)
  • Custom and extensibility (3)
  • Captain of Container Ship (3)
  • Cloud agnostic (3)
  • Backed by Red Hat (3)
  • Runs on Azure (3)
  • A self-healing environment with rich metadata (3)
  • Everything of CaaS (2)
  • GKE (2)
  • Golang (2)
  • Easy setup (2)
  • Expandable (2)
  • Sfg (2)
CONS OF KUBERNETES
  • Steep learning curve (16)
  • Poor workflow for development (15)
  • Orchestrates only infrastructure (8)
  • High resource requirements for on-prem clusters (4)
  • Too heavy for simple systems (2)
  • Additional vendor lock-in (Docker) (1)
  • More moving parts to secure (1)
  • Additional technology overhead (1)

related Kubernetes posts

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber · 44 upvotes · 12.6M views

How Uber developed Jaeger, its open source, end-to-end distributed tracing system and now a CNCF project:

Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

https://eng.uber.com/distributed-tracing/

(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark

Yshay Yaacobi

Our first experience with .NET core was when we developed our OSS feature management platform - Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics and can fit in a multi-container architecture. We decided to implement our rule engine processor in F# , our main service was implemented in C# and other components were built using JavaScript / TypeScript and Go.

Visual Studio Code worked really well for us too: it handled all our polyglot services, and the .NET Core integration had a great cross-platform developer experience (to be fair, F# was a bit trickier) - in fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm until we decided to adopt Kubernetes, with an almost seamless migration process.

After our positive experience of running .Net core workloads in containers and developing Tweek's .Net services on non-windows machines, C# had gained back some of its popularity (originally lost to Node.js), and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), cli tools, serverless functions and other projects...


Hystrix

Latency and fault tolerance library
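
The pattern Hystrix popularized is the circuit breaker: after enough consecutive failures, calls to the troubled dependency fail fast for a cooldown period instead of piling up behind a dying service. Below is a toy Go sketch of that idea, not Hystrix's Java implementation; thresholds and timings are arbitrary.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// Breaker trips "open" after maxFailures consecutive errors and rejects
// calls until cooldown has elapsed, then lets a trial call through.
type Breaker struct {
	mu          sync.Mutex
	failures    int
	maxFailures int
	cooldown    time.Duration
	openedAt    time.Time
}

var ErrOpen = errors.New("circuit open: failing fast")

func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.failures >= b.maxFailures && time.Since(b.openedAt) < b.cooldown {
		b.mu.Unlock()
		return ErrOpen // reject immediately instead of calling the dependency
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.maxFailures {
			b.openedAt = time.Now() // (re)open the circuit
		}
		return err
	}
	b.failures = 0 // success closes the circuit again
	return nil
}

func main() {
	b := &Breaker{maxFailures: 3, cooldown: 5 * time.Second}
	flaky := func() error { return errors.New("upstream timeout") }

	for i := 0; i < 6; i++ {
		fmt.Println(b.Call(flaky))
	}
	// After three failures, the remaining calls fail fast with ErrOpen.
}
```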
PROS OF HYSTRIX
  • Circuit breaker (2)
CONS OF HYSTRIX
None listed yet.

related Hystrix posts


Consul

A tool for service discovery, monitoring and configuration
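
In practice, service discovery with Consul often means asking the local agent's HTTP API for the healthy instances registered under a service name. Below is a minimal Go sketch against the default agent address; the "payments" service name is a made-up example, and the response struct is trimmed to the fields used here.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Shape of the /v1/health/service response, trimmed to what we need.
type entry struct {
	Node    struct{ Address string }
	Service struct {
		Address string
		Port    int
	}
}

func main() {
	// Ask the local Consul agent for healthy instances of a hypothetical
	// "payments" service; ?passing filters out failing health checks.
	resp, err := http.Get("http://127.0.0.1:8500/v1/health/service/payments?passing")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var entries []entry
	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		addr := e.Service.Address
		if addr == "" {
			addr = e.Node.Address // a service may inherit the node's address
		}
		fmt.Printf("payments instance at %s:%d\n", addr, e.Service.Port)
	}
}
```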
PROS OF CONSUL
  • Great service discovery infrastructure (61)
  • Health checking (35)
  • Distributed key-value store (29)
  • Monitoring (26)
  • High availability (23)
  • Web UI (12)
  • Token-based ACLs (10)
  • Gossip clustering (6)
  • DNS server (5)
  • Not Java (4)
  • Docker integration (1)
  • Javascript (1)
CONS OF CONSUL
None listed yet.

related Consul posts

John Kodumal

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.


Since the beginning, Cal Henderson has been the CTO of Slack. Earlier this year, he commented on a Quora question summarizing their current stack.

Apps
• Web: a mix of JavaScript/ES6 and React.
• Desktop: Electron, to ship it as a desktop application.
• Android: a mix of Java and Kotlin.
• iOS: written in a mix of Objective-C and Swift.
Backend
• The core application and the API are written in PHP/Hack and run on HHVM.
• The data is stored in MySQL using Vitess.
• Caching is done using Memcached and MCRouter.
• The search service takes help from SolrCloud, with various Java services.
• The messaging system uses WebSockets with many services in Java and Go.
• Load balancing is done using HAProxy with Consul for configuration.
• Most services talk to each other over gRPC; some use Thrift or JSON-over-HTTP.
• Voice and video calling service was built in Elixir.
Data warehouse
• Built using open source tools including Presto, Spark, Airflow, Hadoop and Kafka.

Envoy

C++ front/service proxy
PROS OF ENVOY
  • gRPC-Web (9)
CONS OF ENVOY
None listed yet.

related Envoy posts

Noah Zoschke
Engineering Manager at Segment · 30 upvotes · 303.4K views

We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. Behind the scenes the Config API is built with Go, gRPC and Envoy.

At Segment, we build new services in Go by default. The language is simple so new team members quickly ramp up on a codebase. The tool chain is fast so developers get immediate feedback when they break code, tests or integrations with other systems. The runtime is fast so it performs great at scale.

For the newest round of APIs we adopted the gRPC service #framework.

The Protocol Buffer service definition language makes it easy to design type-safe and consistent APIs, thanks to ecosystem tools like the Google API Design Guide for API standards, uber/prototool for formatting and linting .protos and lyft/protoc-gen-validate for defining field validations, and grpc-gateway for defining REST mapping.

With a well designed .proto, it's easy to generate a Go server interface and a TypeScript client, providing type-safe RPC between languages.

For the API gateway and RPC we adopted the Envoy service proxy.

The internet-facing segmentapis.com endpoint is an Envoy front proxy that rate-limits and authenticates every request. It then transcodes a #REST / #JSON request to an upstream gRPC request. The upstream gRPC servers are running an Envoy sidecar configured for Datadog stats.

The result is API #security, #reliability and consistent #observability through Envoy configuration, not code.

We experimented with Swagger service definitions, but the spec is sprawling and the generated clients and server stubs leave a lot to be desired. gRPC and .proto and the Go implementation feel better designed and implemented. Thanks to the gRPC tooling and ecosystem you can generate Swagger from .protos, but it's effectively impossible to go the other way.
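
As a rough, stdlib-only illustration of what such a front proxy does (authenticate, rate-limit, then forward upstream), here is a Go sketch. It is a conceptual stand-in, not Envoy and not Segment's implementation; the bearer token, limits, and upstream address are invented for the example.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
	"time"
)

// naive fixed-window rate limiter keyed by client address (illustrative only).
type limiter struct {
	mu     sync.Mutex
	counts map[string]int
}

func (l *limiter) allow(key string, max int) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.counts[key]++
	return l.counts[key] <= max
}

func main() {
	upstream, _ := url.Parse("http://localhost:9000") // hypothetical upstream service
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	lim := &limiter{counts: map[string]int{}}
	go func() { // reset the rate-limit window every second
		for range time.Tick(time.Second) {
			lim.mu.Lock()
			lim.counts = map[string]int{}
			lim.mu.Unlock()
		}
	}()

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Authorization") != "Bearer demo-token" { // placeholder auth check
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		if !lim.allow(r.RemoteAddr, 100) { // allow 100 requests per second per client
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		proxy.ServeHTTP(w, r) // forward everything that passes the checks
	})

	log.Fatal(http.ListenAndServe(":8443", handler))
}
```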

Joseph Irving
DevOps Engineer at uSwitch · 7 upvotes · 543.1K views
Shared insights on Kubernetes, Envoy, and Golang

At uSwitch we wanted a way to load balance between our multiple Kubernetes clusters in AWS to give us added redundancy. We already had ingresses defined for all our applications so we wanted to build on top of that, instead of creating a new system that would require our various teams to change code/config etc.

Envoy seemed to tick a lot of boxes:

• Load balancing capabilities right out of the box: health checks, circuit breaking, retries etc.
• Tracing and Prometheus metrics support
• Lightweight
• Good community support

This was all good, but what really sold us was the API that supported dynamic configuration. This would allow us to dynamically configure Envoy to route to ingresses and clusters as they were created or destroyed.

To do this we built a tool called Yggdrasil using their Go SDK. Yggdrasil effectively just creates Envoy configuration from Kubernetes ingress objects, so you point Yggdrasil at your kube clusters, it generates config from the ingresses, and then Envoy can load balance between your clusters for you. This is all done dynamically, so as soon as a new ingress is created the Envoy nodes get updated with the new config. Importantly, this all worked with what we already had; there was no need to create new config for every application, we just put this on top of it.
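
A stripped-down sketch of that idea (not Yggdrasil itself): list the cluster's Ingress objects with the official Kubernetes Go client and print the host-to-backend routes a config generator would hand to a proxy. Kubeconfig loading and namespace handling are simplified assumptions.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from ~/.kube/config; a real tool would watch each cluster.
	home, _ := os.UserHomeDir()
	config, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// List every Ingress in the cluster and print the host -> service mapping
	// that a config generator would translate into proxy routes.
	ingresses, err := clientset.NetworkingV1().Ingresses("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, ing := range ingresses.Items {
		for _, rule := range ing.Spec.Rules {
			if rule.HTTP == nil {
				continue
			}
			for _, path := range rule.HTTP.Paths {
				if path.Backend.Service == nil {
					continue // skip non-service backends
				}
				fmt.Printf("route %s%s -> service %s\n",
					rule.Host, path.Path, path.Backend.Service.Name)
			}
		}
	}
}
```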


Conduit

Open-source service mesh for Kubernetes
PROS OF CONDUIT
None listed yet.
CONS OF CONDUIT
None listed yet.

related Conduit posts


NGINX

A high-performance, free, open source web server powering the busiest sites on the Internet.
PROS OF NGINX
  • High-performance HTTP server (1.4K)
  • Performance (894)
  • Easy to configure (730)
  • Open source (607)
  • Load balancer (530)
  • Free (289)
  • Scalability (288)
  • Web server (226)
  • Simplicity (175)
  • Easy setup (136)
  • Content caching (30)
  • Web accelerator (21)
  • Capability (15)
  • Fast (14)
  • High-latency (12)
  • Predictability (12)
  • Reverse proxy (8)
  • The best of them (7)
  • Supports HTTP/2 (7)
  • Great community (5)
  • Lots of modules (5)
  • Enterprise version (5)
  • High-performance proxy server (4)
  • Embedded Lua scripting (3)
  • Streaming media delivery (3)
  • Streaming media (3)
  • Reverse proxy (3)
  • Blash (2)
  • gRPC-Web (2)
  • Lightweight (2)
  • Fast and easy to set up (2)
  • Slim (2)
  • SaltStack (2)
  • Virtual hosting (1)
  • Narrow focus. Easy to configure. Fast (1)
  • Along with Redis Cache it's the most superior (1)
  • Ingress controller (1)
CONS OF NGINX
  • Advanced features require subscription (10)

related NGINX posts

Simon Reymann
Senior Fullstack Developer at QUANTUSflow Software GmbH · 30 upvotes · 11M views

Our whole DevOps stack consists of the following tools:

• GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) as collaborative review and code management tool
• Respectively Git as revision control system
• SourceTree as Git GUI
• Visual Studio Code as IDE
• CircleCI for continuous integration (to automate the development process)
• Prettier / TSLint / ESLint as code linters
• SonarQube as quality gate
• Docker as container management (incl. Docker Compose for multi-container application management)
• VirtualBox for operating system simulation tests
• Kubernetes as cluster management for Docker containers
• Heroku for deploying in test environments
• nginx as web server (preferably used as facade server in production environment)
• SSLMate (using OpenSSL) for certificate management
• Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
• PostgreSQL as preferred database system
• Redis as preferred in-memory database/store (great for caching)

The main reason we have chosen Kubernetes over Docker Swarm is related to the following factors:

• Key features: Easy and flexible installation, clear dashboard, great scaling operations, monitoring as an integral part, great load balancing concepts, monitors the condition and ensures compensation in the event of failure.
• Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
• Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
• Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
• Scalability: All-in-one framework for distributed systems.
• Other benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), has a huge community among container orchestration tools, and is an open source and modular tool that works with any OS.
John-Daniel Trask
Co-founder & CEO at Raygun · 19 upvotes · 283.8K views

We chose AWS because, at the time, it was really the only cloud provider to choose from.

We tend to use their basic building blocks (EC2, ELB, Amazon S3, Amazon RDS) rather than vendor-specific components like databases and queuing. We deliberately decided to do this to ensure we could provide multi-cloud support or potentially move to another cloud provider if the offering was better for our customers.

We’ve utilized c3.large nodes for both the Node.js deployment and then for the .NET Core deployment. Both sit as backends behind an nginx instance and are managed using scaling groups in Amazon EC2 sitting behind a standard AWS Elastic Load Balancer (ELB).

While we’re satisfied with AWS, we do review our decision each year and have looked at Azure and Google Cloud offerings.

#CloudHosting #WebServers #CloudStorage #LoadBalancerReverseProxy
