What is Zuul and what are its top alternatives?
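Zuul is Netflix's JVM-based edge service and API gateway: it fronts backend services and applies routing, authentication, and monitoring logic through a chain of filters. As a rough illustration, here is a minimal pre-filter sketch, assuming Spring Cloud Netflix's ZuulFilter API is on the classpath (the class name and logging are hypothetical):

```java
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;

// Hypothetical example: logs every incoming request before Zuul routes it.
public class RequestLogFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre"; // run before the request is routed to a backend
    }

    @Override
    public int filterOrder() {
        return 1; // position within the "pre" filter chain
    }

    @Override
    public boolean shouldFilter() {
        return true; // apply to every request
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        System.out.printf("Routing %s %s%n",
                ctx.getRequest().getMethod(),
                ctx.getRequest().getRequestURI());
        return null; // the return value is ignored by Zuul 1.x
    }
}
```

The tools below cover the same ground (routing, load balancing, service discovery, and API management) in different ways.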
Top Alternatives to Zuul
- Apigee
API management, design, analytics, and security are at the heart of modern digital architecture. The Apigee intelligent API platform is a complete solution for moving business to the digital world. ...
- Eureka
Eureka is a REST (Representational State Transfer) based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers. ...
- Kong
Kong is a scalable, open source API Layer (also known as an API Gateway, or API Middleware). Kong controls layer 4 and 7 traffic and is extended through Plugins, which provide extra functionality and services beyond the core platform. ...
- HAProxy
HAProxy (High Availability Proxy) is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. ...
- Istio
Istio is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. Istio's control plane provides an abstraction layer over the underlying cluster management platform, such as Kubernetes, Mesos, etc. ...
- NGINX
nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server, written by Igor Sysoev. According to Netcraft, nginx served or proxied 30.46% of the top million busiest sites in January 2018. ...
- Jenkins
In a nutshell, Jenkins CI is the leading open-source continuous integration server. Built with Java, it provides over 300 plugins to support building and testing virtually any project. ...
- Consul
Consul is a tool for service discovery and configuration. Consul is distributed, highly available, and extremely scalable. ...
Zuul alternatives & related posts
Pros of Apigee:
- Highly scalable and secure API management platform
- Good documentation
- Quick jumpstart
- Fast and adjustable caching
- Easy to use
Cons of Apigee:
- Expensive
- Doesn't support hybrid natively
related Apigee posts
Amazon API Gateway vs Apigee. How do they compare as an API Gateway? What is the equivalent functionality, similarities, and differences moving from Apigee API GW to AWS API GW?
Pros of Eureka:
- Easy setup and integration with Spring Cloud
- Web UI
- Health checking
- Monitoring
- Circuit breaker
- Netflix battle-tested components
- Service discovery
- Open source
Cons of Eureka:
- Nada (nothing)
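To make the "easy setup and integration with Spring Cloud" point concrete, registration is typically a single annotation. A minimal sketch, assuming the spring-cloud-starter-netflix-eureka-client dependency and a hypothetical service name:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// Hypothetical catalog service that registers itself with Eureka on startup.
@SpringBootApplication
@EnableDiscoveryClient
public class CatalogServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(CatalogServiceApplication.class, args);
    }
}
```

With eureka.client.serviceUrl.defaultZone pointing at the Eureka server in application.yml, the instance appears in the registry (and the web UI) and becomes discoverable for client-side load balancing.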
related Eureka posts
Pros of Kong:
- Easy to maintain
- Easy to install
- Flexible
- Great performance
- API Blueprint
- Custom plugins
- Kubernetes-native
- Security
- Has a good plugin infrastructure
- Agnostic
- Load balancing
- Documentation is clear
- Very customizable
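Several of these points (plugins, load balancing, customizability) come together in Kong's declarative configuration. A minimal DB-less kong.yml sketch, with hypothetical service names, that proxies a path to an upstream and rate-limits it via a bundled plugin:

```yaml
_format_version: "2.1"

services:
  - name: orders-api                # hypothetical upstream service
    url: http://orders.internal:8080
    routes:
      - name: orders-route
        paths:
          - /orders                 # public path exposed by the gateway
    plugins:
      - name: rate-limiting         # bundled Kong plugin
        config:
          minute: 60                # max 60 requests per minute
          policy: local
```

Loaded in DB-less mode (or pushed through the Admin API), this gives you routing plus rate limiting without writing any Lua.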
related Kong posts
We needed a lightweight and completely customizable #microservices #gateway to be able to generate #JWT and introspect #OAuth2 tokens as well. The #gateway was going to front all #APIs for our single page web app as well as externalized #APIs for our partners.
Contenders: We looked at Tyk Cloud and Kong. Kong's plugins are all Lua-based and its core is NGINX and OpenResty. Although it's open source, it's not the greatest platform to customize. On top of that, enterprise features are paid and expensive. Tyk is Go-based, the nomenclature used within Tyk (like "sessions") was bizarre, and again, enterprise features were paid.
Decision: We ultimately decided to roll our own by turning ExpressJS into Express Gateway, because the use case for using ExpressJS as an #API #gateway was tried and true; in fact, all the enterprise features that the other two charge for (#OAuth2 introspection, etc.) were freely available as ExpressJS middleware.
Outcome: We open sourced Express Gateway with a core set of plugins, and the community started writing their own; they could do so quickly by rolling lots of ExpressJS middleware into Express Gateway.
Pros of HAProxy:
- Load balancer
- High performance
- Very fast
- Proxying for TCP and HTTP
- SSL termination
- Open source
- Reliable
- Free
- Well-documented
- Very popular
- Runs health checks on backends
- Suited for very high traffic web sites
- Scalable
- Docker-ready
- Powers many of the world's most visited sites
- Simple
- SSL offloading
- Works with NTLM
- Available as a plugin for OPNsense
- Redis
Cons of HAProxy:
- Becomes your single point of failure
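The load-balancer and "runs health checks on backends" points translate into a very small configuration. A minimal haproxy.cfg sketch with hypothetical backend addresses:

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web_in
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin                  # rotate requests across servers
    server app1 10.0.0.11:8080 check    # 'check' enables periodic health checks
    server app2 10.0.0.12:8080 check    # unhealthy servers are pulled from rotation
```

The single-point-of-failure con noted above is typically addressed by running HAProxy in pairs with keepalived and a floating virtual IP.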
related HAProxy posts
Around the time of their Series A, Pinterest's stack included Python and Django, with Tornado and Node.js as web servers. Memcached/Membase and Redis handled caching, with RabbitMQ handling queueing. Nginx, HAProxy and Varnish managed static delivery and load balancing, with persistent data storage handled by MySQL.
We're using Git through GitHub for public repositories and GitLab for our private repositories due to its easy-to-use features. Docker and Kubernetes are a must-have for our highly scalable infrastructure, complemented by HAProxy with Varnish in front of it. We are using a lot of npm and Visual Studio Code in our development sessions.
Pros of Istio:
- Zero code for logging and monitoring
- Service mesh
- Great flexibility
- Resiliency
- Powerful authorization mechanisms
- Ingress controller
- Easy integration with Kubernetes and Docker
- Full security
Cons of Istio:
- Performance
related Istio posts
At my company, we are trying to move away from a monolith into a microservices-led architecture. We are now stuck with the problem of establishing a communication mechanism between microservices. Since we are planning to use service meshes and something like Dapr/Istio, we are not sure how to split services between the two. Service meshes offer traffic routing or splitting, whereas Dapr can offer state management and service-to-service invocation. At the same time, both of them provide mTLS, metrics, resiliency and tracing. How do we choose which should offer what?
As for Kong's new support of the service mesh pattern, I wonder how it compares to Istio?
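For the traffic-splitting question above, Istio expresses routing as Kubernetes custom resources. A sketch of a VirtualService that shifts 10% of traffic to a new version, using hypothetical service and subset names (the subsets would be defined in an accompanying DestinationRule):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders                  # in-mesh service this rule applies to
  http:
    - route:
        - destination:
            host: orders
            subset: v1        # stable version keeps 90% of traffic
          weight: 90
        - destination:
            host: orders
            subset: v2        # canary version receives 10%
          weight: 10
```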
Pros of NGINX:
- High-performance HTTP server
- Performance
- Easy to configure
- Open source
- Load balancer
- Free
- Scalability
- Web server
- Simplicity
- Easy setup
- Content caching
- Web accelerator
- Capability
- Fast
- Low latency
- Predictability
- Reverse proxy
- Supports HTTP/2
- The best of them
- Great community
- Lots of modules
- Enterprise version
- High-performance proxy server
- Embedded Lua scripting
- Streaming media delivery
- Streaming media
- gRPC-Web
- Lightweight
- Fast and easy to set up
- Slim
- SaltStack
- Virtual hosting
- Narrow focus; easy to configure; fast
- Along with Redis cache, it's the most superior
- Ingress controller
Cons of NGINX:
- Advanced features require subscription
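The reverse-proxy and load-balancer pros boil down to a few directives. A minimal nginx.conf server-block sketch with hypothetical upstream addresses:

```nginx
upstream app_backend {
    server 10.0.0.21:3000;              # hypothetical app instances
    server 10.0.0.22:3000;              # round-robin by default
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;  # forward to the upstream group
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```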
related NGINX posts
Our whole DevOps stack consists of the following tools:
- GitHub (incl. GitHub Pages/Markdown for documentation, getting-started guides and how-tos) as collaborative review and code management tool
- Git as the underlying revision control system
- SourceTree as Git GUI
- Visual Studio Code as IDE
- CircleCI for continuous integration (automating the development process)
- Prettier / TSLint / ESLint as code linter
- SonarQube as quality gate
- Docker as container management (incl. Docker Compose for multi-container application management)
- VirtualBox for operating system simulation tests
- Kubernetes as cluster management for docker containers
- Heroku for deploying in test environments
- nginx as web server (preferably used as facade server in production environment)
- SSLMate (using OpenSSL) for certificate management
- Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
- PostgreSQL as preferred database system
- Redis as preferred in-memory database/store (great for caching)
The main reason we have chosen Kubernetes over Docker Swarm comes down to the following factors:
- Key features: easy and flexible installation, clear dashboard, great scaling operations, monitoring as an integral part, great load balancing concepts; it monitors node health and compensates in the event of failure.
- Applications: an application can be deployed using a combination of pods, deployments, and services (or micro-services), as sketched just after this list.
- Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
- Monitoring: it supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
- Scalability: all-in-one framework for distributed systems.
- Other benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), has a huge community among container orchestration tools, and is an open-source, modular tool that works with any OS.
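As a sketch of the pods/deployments/services combination mentioned in the list above (names and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                     # two pods for basic availability
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25       # hypothetical container image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                      # load-balances across the pods above
  ports:
    - port: 80
```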
We chose AWS because, at the time, it was really the only cloud provider to choose from.
We tend to use their basic building blocks (EC2, ELB, Amazon S3, Amazon RDS) rather than vendor-specific components like databases and queuing. We deliberately decided to do this to ensure we could provide multi-cloud support or potentially move to another cloud provider if the offering was better for our customers.
We've utilized c3.large nodes for both the Node.js deployment and the .NET Core deployment. Both sit as backends behind an nginx instance and are managed using scaling groups in Amazon EC2, sitting behind a standard AWS Elastic Load Balancer (ELB).
While we’re satisfied with AWS, we do review our decision each year and have looked at Azure and Google Cloud offerings.
#CloudHosting #WebServers #CloudStorage #LoadBalancerReverseProxy
Pros of Jenkins:
- Hosted internally
- Free open source
- Great to build, deploy or launch anything async
- Tons of integrations
- Rich set of plugins with good documentation
- Has support for build pipelines
- Easy setup
- It is open source
- Workflow plugin
- Configuration as code
- Very powerful tool
- Many plugins
- Continuous integration
- Great flexibility
- Git and Maven integration is better
- 100% free and open source
- GitHub integration
- Slack integration (plugin)
- Easy customisation
- Self-hosted GitLab integration (plugin)
- Docker support
- Pipeline API
- Fast builds
- Platform independence
- Hosted externally
- Excellent Docker integration
- It's worked
- Customizable
- Can be run as a Docker container
- It's everywhere
- Job DSL
- AWS integration
- Easily extendable with seamless integration
- PHP support
- Build PR branch only
- NodeJS support
- Ruby/Rails support
- Universal controller
- Loose coupling
Cons of Jenkins:
- Workarounds needed for basic requirements
- Groovy with cumbersome syntax
- Plugin compatibility issues
- Lack of support
- Limited abilities with declarative pipelines
- No YAML syntax
- Too tied to plugin versions
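The pipeline and configuration-as-code pros are usually exercised through a Jenkinsfile checked into the repository. A minimal declarative-pipeline sketch with hypothetical build commands:

```groovy
pipeline {
    agent any                    // run on any available executor
    stages {
        stage('Build') {
            steps {
                sh 'make build'  // hypothetical build command
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
    }
    post {
        always {
            // keep build outputs even when a stage fails
            archiveArtifacts artifacts: 'dist/**', allowEmptyArchive: true
        }
    }
}
```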
related Jenkins posts
Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, assuming the current methodology consists only of a readme.md and wishes of good luck (as it usually does ;)).
It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over: convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.
I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works and how development works, which almost always causes major pains in the back of the neck, and with use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up; but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN - you name it.
We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things; creating triggers, reports, and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.
If we are happy with the state of the Ansible setup, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, and doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built-in with TeamCity). It also comes with all the common handy plugins, like Slack or Apache Maven integration.
The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:
1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around.
2. All security credentials besides the development environment's must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing; because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management.
3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automated identification and tagging of the author (nothing like automated regression testing!).
Speaking of deployments, I generally try to keep it simple, but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I'm also constantly peeking at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline, which could be migrated away from the cloud and onto bare-metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, much the same way as you do for Amazon EC2 (Ansible supports both greatly), and you are good to go. One does not exclude the other; quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
Releasing new versions of our services is done by Travis CI. Travis first runs our test suite. Once it passes, it publishes a new release binary to GitHub.
Common tasks, such as installing dependencies for the Go project or building a binary, are automated using plain old Makefiles. (We know, crazy old school, right?) Our binaries are compressed using UPX.
Travis has come a long way over the past years. I used to prefer Jenkins in some cases since it was easier to debug broken builds. With the addition of the aptly named “debug build” button, Travis is now the clear winner. It’s easy to use and free for open source, with no need to maintain anything.
#ContinuousIntegration #CodeCollaborationVersionControl
Pros of Consul:
- Great service discovery infrastructure
- Health checking
- Distributed key-value store
- Monitoring
- High availability
- Web UI
- Token-based ACLs
- Gossip clustering
- DNS server
- Not Java
- Docker integration
- JavaScript
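Service discovery, health checking, and the DNS server pros all meet in Consul's service definitions. A sketch of a JSON definition registered with a local agent, using hypothetical names, port, and health endpoint:

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```

Once registered, other services can resolve healthy instances through Consul's DNS interface (e.g. web.service.consul) or its HTTP API.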
related Consul posts
As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.
We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.
Since the beginning, Cal Henderson has been the CTO of Slack. Earlier this year, he commented on a Quora question summarizing their current stack.
Apps:
- Web: a mix of JavaScript/ES6 and React.
- Desktop: Electron to ship it as a desktop application.
- Android: a mix of Java and Kotlin.
- iOS: written in a mix of Objective-C and Swift.
Backend and infrastructure:
- The core application and the API are written in PHP/Hack and run on HHVM.
- The data is stored in MySQL using Vitess.
- Caching is done using Memcached and MCRouter.
- The search service takes help from SolrCloud, with various Java services.
- The messaging system uses WebSockets, with many services in Java and Go.
- Load balancing is done using HAProxy, with Consul for configuration.
- Most services talk to each other over gRPC, with some Thrift and JSON-over-HTTP.
- Voice and video calling services were built in Elixir.
- Data engineering is built using open source tools including Presto, Spark, Airflow, Hadoop and Kafka.
- For server configuration and management we use Terraform, Chef and Kubernetes.
- We use Prometheus for time series metrics and ELK for logging.