What is Zuul and what are its top alternatives?
Top Alternatives to Zuul
API management, design, analytics, and security are at the heart of modern digital architecture, and the Apigee intelligent API platform positions itself as a complete solution for moving a business into the digital world. ...
Eureka is a REST (Representational State Transfer)-based service, used primarily in the AWS cloud, for locating services to support load balancing and failover of middle-tier servers. ...
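To give a sense of how Eureka is typically consumed, here is a minimal sketch of a Spring Boot service registering itself with a Eureka server. The class and service names are illustrative, and it assumes the spring-cloud-starter-netflix-eureka-client dependency plus an application.yml pointing eureka.client.serviceUrl.defaultZone at a running Eureka server.

```java
// Hypothetical service that registers itself with Eureka via Spring Cloud Netflix.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@SpringBootApplication
@EnableEurekaClient // registers this instance with the Eureka server on startup
public class CatalogServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(CatalogServiceApplication.class, args);
    }
}
```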
Kong is a scalable, open source API layer (also known as an API gateway or API middleware). Kong controls Layer 4 and Layer 7 traffic and is extended through plugins, which provide extra functionality and services beyond the core platform. ...
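As a rough illustration of that plugin model, here is a minimal sketch of Kong's declarative (DB-less) configuration; the service name, upstream URL, route path, and rate limit are all illustrative.

```yaml
# kong.yml - hypothetical service with one route and a rate-limiting plugin
_format_version: "2.1"
services:
  - name: orders-api
    url: http://orders.internal:8080   # illustrative upstream
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: rate-limiting            # plugin layered on top of the core proxy
        config:
          minute: 60
          policy: local
```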
HAProxy (High Availability Proxy) is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. ...
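A minimal haproxy.cfg sketch of the load-balancing role described above; the backend addresses and timeouts are illustrative.

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin
    server app1 10.0.0.11:8080 check   # "check" enables periodic health checks
    server app2 10.0.0.12:8080 check
```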
Istio is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. Istio's control plane provides an abstraction layer over the underlying cluster management platform, such as Kubernetes, Mesos, etc. ...
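To make the traffic-management idea concrete, here is a hedged sketch of an Istio VirtualService that splits traffic between two versions of a service. The service name and subsets are illustrative and assume a matching DestinationRule that defines the v1/v2 subsets.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews            # illustrative in-mesh service name
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90     # 90% of traffic stays on v1
        - destination:
            host: reviews
            subset: v2
          weight: 10     # 10% canaries to v2
```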
nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server, written by Igor Sysoev. According to Netcraft, nginx served or proxied 30.46% of the top million busiest sites in January 2018. ...
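A minimal reverse-proxy sketch in nginx configuration; the server name and upstream address are illustrative.

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # illustrative upstream application
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```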
In a nutshell, Jenkins CI is the leading open-source continuous integration server. Built with Java, it provides over 300 plugins to support building and testing virtually any project. ...
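A minimal declarative Jenkinsfile sketch of a two-stage pipeline; the shell commands assume a hypothetical Makefile-driven project.

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // hypothetical build target
            }
        }
        stage('Test') {
            steps {
                sh 'make test'    // hypothetical test target
            }
        }
    }
}
```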
Consul is a tool for service discovery and configuration. Consul is distributed, highly available, and extremely scalable. ...
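As a sketch of the service-discovery side, here is a hypothetical Consul service definition (HCL) with an HTTP health check; the service name, port, and check URL are illustrative. It would be registered with `consul services register web.hcl`.

```hcl
service {
  name = "web"
  port = 8080

  check {
    id       = "web-http"
    http     = "http://localhost:8080/health"  # illustrative health endpoint
    interval = "10s"
    timeout  = "1s"
  }
}
```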
Zuul alternatives & related posts
Pros of Apigee
- Highly scalable and secure API Management Platform (11)
- Quick jumpstart (5)
- Good documentation (5)
- Fast and adjustable caching (3)
- Easy to use (3)
related Apigee posts
Pros of Eureka
- Easy setup and integration with spring-cloud (20)
- Health checking (8)
- Web UI (8)
- Circuit breaker (7)
- Netflix battle-tested components (6)
- Service discovery (6)
- Open source (4)
related Eureka posts
Pros of Kong
- Easy to maintain (36)
- Easy to install (30)
- Great performance (20)
- API blueprint (5)
- Custom plugins (4)
- Documentation is clear (1)
related Kong posts
We needed a lightweight and completely customizable #microservices #gateway that could generate #JWT and introspect #OAuth2 tokens as well. The #gateway was going to front all #APIs for our single page web app as well as externalized #APIs for our partners.

Contenders
We looked at Tyk Cloud and Kong. Kong's plugins are all Lua based and its core is NGINX and OpenResty. Although it's open source, it's not the greatest platform to customize. On top of that, enterprise features are paid and expensive. Tyk is Go, the nomenclature used within Tyk, like "sessions", was bizarre, and again enterprise features were paid.

Decision
We ultimately decided to roll our own using ExpressJS, which became Express Gateway, because the use case for using ExpressJS as an #API #gateway was tried and true; in fact, all the enterprise features that the other two charge for (#OAuth2 introspection, etc.) were freely available as ExpressJS middleware.

Outcome
We open sourced Express Gateway with a core set of plugins, and the community started writing their own, which they could do quickly by rolling lots of ExpressJS middleware into Express Gateway.
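For flavor, here is a hedged sketch of an Express Gateway gateway.config.yml that verifies JWTs before proxying; the ports, paths, upstream URL, and secret are illustrative, not the poster's actual configuration.

```yaml
http:
  port: 8080
apiEndpoints:
  api:
    host: '*'
    paths: /api/*
serviceEndpoints:
  backend:
    url: http://localhost:3000           # illustrative upstream service
policies:
  - jwt
  - proxy
pipelines:
  default:
    apiEndpoints:
      - api
    policies:
      - jwt:
          - action:
              secretOrPublicKey: 'change-me'   # illustrative HMAC secret
      - proxy:
          - action:
              serviceEndpoint: backend
```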
Pros of HAProxy
- Load balancer (130)
- High performance (100)
- Very fast (69)
- Proxying for TCP and HTTP (57)
- SSL termination (55)
- Open source (30)
- Very popular (12)
- Suited for very high traffic websites (7)
- Runs health checks on backends (7)
- Ready for Docker (5)
- Powers many of the world's most visited sites (4)
- Works with NTLM (2)
- SSL offloading (2)
Cons of HAProxy
- Becomes your single point of failure (3)
related HAProxy posts
Around the time of their Series A, Pinterest's stack included Python and Django, with Tornado and Node.js as web servers. Memcached / Membase and Redis handled caching, with RabbitMQ handling queueing. Nginx, HAProxy and Varnish managed static delivery and load balancing, with persistent data storage handled by MySQL.
We're using Git through GitHub for public repositories and GitLab for our private repositories due to its easy-to-use features. Docker and Kubernetes are a must-have for our highly scalable infrastructure, complemented by HAProxy with Varnish in front of it. We are using a lot of npm and Visual Studio Code in our development sessions.
Pros of Istio
- Zero code for logging and monitoring (12)
- Service Mesh (8)
- Great flexibility (7)
- Ingress controller (4)
- Easy integration with Kubernetes and Docker (3)
- Full Security (3)
- Powerful authorization mechanisms (3)
Pros of NGINX
- High-performance HTTP server (1.4K)
- Easy to configure (728)
- Open source (606)
- Load balancer (529)
- Web server (222)
- Easy setup (134)
- Content caching (29)
- Web Accelerator (19)
- Reverse Proxy (7)
- Supports HTTP/2 (6)
- The best of them (4)
- Lots of Modules (4)
- Enterprise version (4)
- Great Community (4)
- High-performance proxy server (3)
- Streaming media (3)
- Embedded Lua scripting (3)
- Reverse Proxy (3)
- Streaming media delivery (3)
- Fast and easy to set up (2)
- Virtual hosting (1)
- Ingress controller (1)
- Narrow focus. Easy to configure. Fast (1)
- Along with Redis Cache, it's the most superior (1)
Cons of NGINX
- Advanced features require subscription (8)
related NGINX posts
Recently I have been working on an open source stack to help people consolidate their personal health data in a single database so that AI and analytics apps can be run against it to find personalized treatments. We chose to go with a #containerized approach leveraging Docker #containers with a local development environment setup with Docker Compose and nginx for container routing. For the production environment we chose to pull code from GitHub and build/push images using Jenkins and using Kubernetes to deploy to Amazon EC2.
We also implemented a dashboard app to handle user authentication/authorization, as well as a custom SSO server that runs on Heroku, which allows experts to easily visit more than one instance without having to log in repeatedly. The #Backend was implemented using my favorite #Stack, which consists of FeathersJS on top of Node.js and ExpressJS, with PostgreSQL as the main database. The #Frontend was implemented using React, Redux.js, Semantic UI React, and the FeathersJS client. Though testing was light on this project, we chose to use AVA as well as ESLint to keep the codebase clean and consistent.
We switched to Traefik so we can use the REST API to dynamically configure subdomains and have the ability to redirect between multiple servers.
We still use nginx with Docker Compose to expose the traffic from our APIs and TCP microservices, but for managing routing to the internet Traefik does a much better job.
The biggest win for naologic was the ability to set dynamic configurations without having to restart the server.
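For context, the dynamic-configuration flow they describe maps to Traefik's rest provider. A hedged sketch, assuming the provider is enabled with --providers.rest=true; the router, service names, host rule, and addresses are all illustrative.

```bash
# Push a new router and service to a running Traefik instance; no restart needed.
curl -X PUT http://localhost:8080/api/providers/rest \
  -H 'Content-Type: application/json' \
  -d '{
    "http": {
      "routers": {
        "tenant-a": { "rule": "Host(`tenant-a.example.com`)", "service": "tenant-a" }
      },
      "services": {
        "tenant-a": {
          "loadBalancer": { "servers": [ { "url": "http://10.0.0.5:8080" } ] }
        }
      }
    }
  }'
```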
Pros of Jenkins
- Hosted internally (521)
- Free open source (465)
- Great to build, deploy or launch anything async (314)
- Tons of integrations (243)
- Rich set of plugins with good documentation (210)
- Has support for build pipelines (109)
- Open source and tons of integrations (72)
- Easy setup (63)
- It is open-source (61)
- Workflow plugin (54)
- Configuration as code (11)
- Very powerful tool (10)
- Many plugins (9)
- Continuous Integration (8)
- Great flexibility (8)
- Git and Maven integration is better (8)
- GitHub integration (6)
- 100% free and open source (6)
- Slack Integration (plugin) (6)
- Easy customisation (5)
- Self-hosted GitLab Integration (plugin) (5)
- Docker support (4)
- Excellent Docker integration (3)
- Platform independence (3)
- Fast builds (3)
- Pipeline API (3)
- Can be run as a Docker container (2)
- It's worked (2)
- Hosted Externally (2)
- It's Everywhere (2)
- AWS Integration (2)
- NodeJS Support (1)
- PHP Support (1)
- Ruby/Rails Support (1)
- Universal controller (1)
- Easily extendable with seamless integration (1)
- Build PR Branch Only (1)
Cons of Jenkins
- Workarounds needed for basic requirements (12)
- Groovy with cumbersome syntax (8)
- Limited abilities with declarative pipelines (6)
- Plugin compatibility issues (6)
- Lack of support (5)
- No YAML syntax (4)
- Too tied to plugin versions (2)
related Jenkins posts
Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, assuming that the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).
It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to get over: converting all the instructions/scripts into Ansible playbook(s), and stopping only when a clean `vagrant up` or `vagrant reload` yields a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.

I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as `vagrant up`, but the meat of our product lies in Ansible, which will do the bulk of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, on the open net, behind a VPN - you name it.
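As an illustration of that "one set of playbooks for everything" idea, a minimal hypothetical playbook skeleton; the group and role names are made up, and only the inventory would change between a Vagrant box, EC2, or bare metal.

```yaml
# site.yml - the same roles target dev and production; the inventory decides where.
- hosts: app_servers
  become: true
  roles:
    - common    # base packages, users, hardening
    - app       # deploy and configure the application itself
```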
We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.
If we are happy with the state of the Ansible roles, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust, and unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built-in with TeamCity). It also comes with all the commonly handy plugins, like Slack or Apache Maven integration.
The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:
1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around.
2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management.
3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).
Speaking of deployments, I generally try to keep it simple but also keep a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I also constantly peek at the load and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and onto bare metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied in to using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, much the same way as you do for Amazon EC2 (Ansible supports both nicely), and you are good to go. One does not exclude the other; quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
Releasing new versions of our services is done by Travis CI. Travis first runs our test suite. Once it passes, it publishes a new release binary to GitHub.
Common tasks such as installing dependencies for the Go project, or building a binary are automated using plain old Makefiles. (We know, crazy old school, right?) Our binaries are compressed using UPX.
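A hedged sketch of what such a .travis.yml could look like: run the Makefile targets, then publish the binary to GitHub Releases on tagged builds. The file paths and targets are illustrative, not the project's actual configuration.

```yaml
language: go
script:
  - make test          # run the test suite first
  - make build         # produce the release binary
deploy:
  provider: releases   # publish to GitHub Releases
  api_key: $GITHUB_TOKEN
  file: bin/myapp      # illustrative binary path
  skip_cleanup: true   # keep build artifacts for the deploy step
  on:
    tags: true         # only release tagged commits
```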
Travis has come a long way over the past years. I used to prefer Jenkins in some cases since it was easier to debug broken builds. With the addition of the aptly named “debug build” button, Travis is now the clear winner. It’s easy to use and free for open source, with no need to maintain anything.
Pros of Consul
- Great service discovery infrastructure (59)
- Health checking (35)
- Distributed key-value store (27)
- Token-based ACLs (10)
- Gossip clustering (6)
- DNS server (5)
- Not Java (2)
- Docker integration (1)
related Consul posts
As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.
We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, and we have shifted to Amazon Kinesis instead of Kafka.
Since the beginning, Cal Henderson has been the CTO of Slack. Earlier this year, he commented on a Quora question summarizing their current stack.

Apps
- Desktop: Electron to ship it as a desktop application.
- Android: a mix of Java and Kotlin.
- iOS: written in a mix of Objective C and Swift.
- The core application and the API are written in PHP/Hack and run on HHVM.
- The data is stored in MySQL using Vitess.
- Caching is done using Memcached and MCRouter.
- The search service is powered by SolrCloud, with various Java services.
- The messaging system uses WebSockets, with many services in Java and Go.
- Load balancing is done using HAProxy, with Consul for configuration.
- Most services talk to each other over gRPC, with some Thrift and JSON-over-HTTP.
- Voice and video calling service was built in Elixir.
- Built using open source tools including Presto, Spark, Airflow, Hadoop and Kafka.