HAProxy vs nginx

HAProxy vs nginx: What are the differences?

Developers describe HAProxy as "The Reliable, High Performance TCP/HTTP Load Balancer". HAProxy (High Availability Proxy) is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. On the other hand, nginx is described as "A high performance free open source web server powering the busiest sites on the Internet". nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server, written by Igor Sysoev. According to Netcraft, nginx served or proxied 30.46% of the top million busiest sites in January 2018.

HAProxy belongs to the "Load Balancer / Reverse Proxy" category of the tech stack, while nginx is primarily classified under "Web Servers".

"Load balancer", "High performance" and "Very fast" are the key factors why developers consider HAProxy; whereas "High-performance http server", "Performance" and "Easy to configure" are the primary reasons why nginx is favored.

nginx is an open source tool with 9K GitHub stars and 3.41K GitHub forks; HAProxy, by contrast, has no public GitHub repository.

According to the StackShare community, nginx has broader approval, being mentioned in 8631 company stacks and 2495 developer stacks, compared to HAProxy, which is listed in 452 company stacks and 205 developer stacks.


What is HAProxy?

HAProxy (High Availability Proxy) is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications.
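
A minimal HAProxy configuration for HTTP load balancing might look like the sketch below; addresses, ports and timeouts are illustrative placeholders:

    # haproxy.cfg -- minimal sketch of an HTTP load balancer;
    # backend addresses are placeholders
    global
        maxconn 4096

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend www
        bind *:80
        default_backend app_servers

    backend app_servers
        balance roundrobin
        server app1 10.0.0.11:8080 check
        server app2 10.0.0.12:8080 check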

What is nginx?

nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server, written by Igor Sysoev. According to Netcraft, nginx served or proxied 30.46% of the top million busiest sites in January 2018.
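
A minimal nginx server block acting as a reverse proxy might look like this sketch; the hostname, port and headers are illustrative:

    # nginx server block -- minimal sketch of a reverse proxy in front
    # of an application server; addresses are placeholders
    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }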

    What are some alternatives to HAProxy and nginx?
    Traefik
    A modern HTTP reverse proxy and load balancer that makes deploying microservices easy. Traefik integrates with your existing infrastructure components and configures itself automatically and dynamically.
    Envoy
    Originally built at Lyft, Envoy is a high performance C++ distributed proxy designed for single services and applications, as well as a communication bus and “universal data plane” designed for large microservice “service mesh” architectures.
    Squid
    Squid reduces bandwidth and improves response times by caching and reusing frequently-requested web pages. Squid has extensive access controls and makes a great server accelerator. It runs on most available operating systems, including Windows and is licensed under the GNU GPL.
    Varnish
    Varnish Cache is a web application accelerator also known as a caching HTTP reverse proxy. You install it in front of any server that speaks HTTP and configure it to cache the contents. Varnish Cache is really, really fast. It typically speeds up delivery with a factor of 300 - 1000x, depending on your architecture.
    Pound
    Pound was developed to enable distributing the load among several Web-servers and to allow for a convenient SSL wrapper for those Web servers that do not offer it natively.
    Decisions about HAProxy and nginx
    HAProxy · Varnish · Tornado · Django · Redis · RabbitMQ · nginx · Memcached · MySQL · Python · Node.js

    Around the time of their Series A, Pinterest's stack included Python and Django, with Tornado and Node.js as web servers. Memcached / Membase and Redis handled caching, with RabbitMQ handling queueing. Nginx, HAProxy and Varnish managed static delivery and load balancing, with persistent data storage handled by MySQL.

    StackShare Editors
    HAProxy · StatsD · nginx

    The frontline API is proxied through an HAProxy load balancer with NGINX as the frontend, which also handles SSL termination. This frontline API consists of 600 stateless endpoints that join together multiple services.
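
    A rough sketch of the pattern described above (NGINX terminating SSL and handing requests to an HAProxy tier that spreads them across the stateless API nodes) might look like this; all hostnames, ports and certificate paths are assumptions, not the actual configuration:

        # nginx: terminate SSL and forward to the local HAProxy frontend
        # (certificate paths and addresses are placeholders)
        server {
            listen 443 ssl;
            server_name api.example.com;
            ssl_certificate     /etc/ssl/api.crt;
            ssl_certificate_key /etc/ssl/api.key;

            location / {
                proxy_pass http://127.0.0.1:8000;
                proxy_set_header Host $host;
            }
        }

        # haproxy.cfg: balance across the stateless API nodes
        frontend api
            bind 127.0.0.1:8000
            mode http
            default_backend api_nodes

        backend api_nodes
            mode http
            balance roundrobin
            server node1 10.0.0.21:9000 check
            server node2 10.0.0.22:9000 check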

    As part of the Marketplace stack, engineers in this area integrate with various other internal services, including logtron for logging to disk and to Kafka, and uber-statsd-client, the Node.js client for StatsD.

    Tim Abbott · Founder at Zulip · 5 upvotes · 38.7K views
    Apache HTTP Server · nginx

    We've been happy with nginx as part of our stack. Since Zulip is an open source web application that folks install on-premise, the configuration system for the web server is pretty important to us. I have a few complaints (e.g. the configuration syntax for conditionals is a pain), but overall we've found it pretty easy to build a configurable set of options (see link) for how to run Zulip on nginx, both directly and with a remote reverse proxy in front of it, with a minimum of code duplication.
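
    For readers unfamiliar with the complaint about conditionals: nginx only offers a bare "if" directive with a single condition and no "else", which is what tends to make conditional configuration awkward. An illustrative (not Zulip-specific) example:

        # illustrative only -- nginx "if" takes one condition and has no
        # "else"; more complex logic usually needs map blocks or
        # duplicated location blocks
        server {
            listen 80;
            server_name example.com;

            if ($http_x_forwarded_proto != "https") {
                return 301 https://$host$request_uri;
            }
        }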

    Certainly I've been a lot happier with it than I was working with Apache HTTP Server in past projects.

    Go · Lua · OpenResty · nginx · Logstash · Prometheus

    At Kong while building an internal tool, we struggled to route metrics to Prometheus and logs to Logstash without incurring too much latency in our metrics collection.

    We replaced nginx with OpenResty on the edge of our tool which allowed us to use the lua-nginx-module to run Lua code that captures metrics and records telemetry data during every request’s log phase. Our code then pushes the metrics to a local aggregator process (written in Go) which in turn exposes them in Prometheus Exposition Format for consumption by Prometheus. This solution reduced the number of components we needed to maintain and is fast thanks to NGINX and LuaJIT.
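
    A simplified sketch of the pattern described (Lua running in nginx's log phase and recording per-request metrics) is shown below. Since cosockets are unavailable in the log phase, this version records into a shared dict that a background process would periodically forward to the aggregator; the names and sizes are assumptions, not Kong's actual code:

        # nginx.conf fragment (OpenResty) -- sketch only; names are illustrative
        http {
            lua_shared_dict metrics 10m;

            server {
                listen 8080;

                location / {
                    proxy_pass http://127.0.0.1:9000;

                    log_by_lua_block {
                        -- record request count and latency into the shared
                        -- dict; a timer or sidecar process forwards them to
                        -- the local aggregator in Prometheus exposition format
                        local latency = tonumber(ngx.var.request_time) or 0
                        ngx.shared.metrics:incr("requests_total", 1, 0)
                        ngx.shared.metrics:incr("latency_seconds_total", latency, 0)
                    }
                }
            }
        }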

    Scott Mebberson · CTO / Chief Architect at Idearium · 5 upvotes · 22.5K views
    Caddy · nginx

    We used to primarily use nginx as our static web server and as a proxy in front of Node.js. Now, we use Caddy. And we couldn't be happier.

    Caddy is simpler on all fronts. Configuration is easier. Free HTTPS out of the box. Some fantastic plugins. And for the most part, it's fast.
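
    As an illustration of that simplicity, a Caddyfile (v2 syntax) covering the static-files-plus-Node.js setup described above might be as short as this; the hostname, paths and port are placeholders, and Caddy obtains the HTTPS certificate for the hostname automatically:

        # Caddyfile -- sketch of a static site with an API proxied to Node.js
        example.com {
            root * /var/www/site
            file_server

            handle_path /api/* {
                reverse_proxy 127.0.0.1:3000
            }
        }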

    Don't get me wrong, it's not lost on me that Nginx is actually a superior product.

    But for the times when you don't need that extra performance, and complexity - take a look at Caddy.

    Simon Bettison · Managing Director at Bettison.org Limited · 6 upvotes · 89.4K views
    Amazon EC2 Container Service · Docker · Amazon VPC · Amazon Route 53 · Amazon SQS · Amazon SES · Amazon CloudFront · nginx · Unicorn · Ruby · Travis CI · Selenium · RSpec · Rails · Amazon ElastiCache · Redis · Sidekiq · Elasticsearch · PostgreSQL

    In 2010 we made the very difficult decision to entirely re-engineer our existing monolithic LAMP application from the ground up, in order to address some growing concerns about its long-term viability as a platform.

    A full application re-write is almost never the answer, because of the risks involved. However, the situation warranted drastic action, as it was clear that the existing product was going to face severe scaling issues. We felt it better to address these sooner rather than later, and also to take the opportunity to improve the international architecture and to refactor the database in order that it better matched the changes in core functionality.

    PostgreSQL was chosen for its reputation as a solid, ACID-compliant database backend, and it was available as an AWS RDS offering, which reduced the management overhead of having to configure it ourselves. In order to reduce read load on the primary database we implemented an Elasticsearch layer for fast and scalable search operations. Synchronisation of these indexes was to be achieved through the use of Sidekiq's Redis-based background workers on Amazon ElastiCache. Again, the AWS solution here looked to be an easy way to keep our involvement in managing this part of the platform at a minimum, allowing us to focus on our core business.

    Rails was chosen for its ability to quickly get core functionality up and running, its MVC architecture, and its focus on Test Driven Development using RSpec and Selenium, with Travis CI providing continuous integration. We also liked Ruby for its terse, clean and elegant syntax. Though YMMV on that one!

    Unicorn was chosen for its support for continual deployment and its reputation as a reliable application server, and nginx for its reputation as a fast and stable reverse proxy. We also took advantage of the Amazon CloudFront CDN here to further improve performance by caching static assets globally.
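
    The Unicorn-behind-nginx pattern described here is typically wired up with an upstream pointing at Unicorn's Unix socket; the following is a hedged sketch with placeholder paths and names, not the actual Bettison.org configuration:

        # nginx -- sketch of proxying a Rails app served by Unicorn
        upstream unicorn {
            server unix:/var/run/unicorn.sock fail_timeout=0;
        }

        server {
            listen 80;
            server_name app.example.com;
            root /srv/app/public;

            # serve static assets directly, everything else via Unicorn
            try_files $uri/index.html $uri @unicorn;

            location @unicorn {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_pass http://unicorn;
            }
        }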

    We tried to strike a balance between having control over the management and configuration of our core application and the convenience of being able to leverage AWS hosted services for ancillary functions (Amazon SES, Amazon SQS and Amazon Route 53, all hosted securely inside Amazon VPC of course!).

    Whilst there is some compromise here with potential vendor lock-in, the tasks being performed by these ancillary services are not particularly specialised, which should mitigate this risk. Furthermore, we have already containerised the stack in our development environment using Docker, and are looking at how best to bring this into production - potentially using Amazon EC2 Container Service.

    Chris McFadden · VP, Engineering at SparkPost · 7 upvotes · 60.1K views
    Lua · OpenResty · nginx

    We use nginx and OpenResty as our API proxy running on EC2 for auth, caching, and some rate limiting for our dozens of microservices. Since OpenResty supports embedded Lua, we were able to write a custom access module that calls out to our authentication service with the resource, method, and access token. If that succeeds, then critical account info is passed down to the underlying microservice. This proxy approach keeps all authentication and authorization in one place and provides a unified CX for our API users. Nginx is fast and cheap to run, though we are always exploring alternatives that are also economical. What do you use?
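
    A much-simplified sketch of the kind of access-phase check described above follows; the auth service URL, headers and the use of the lua-resty-http library are assumptions, not SparkPost's actual module:

        # OpenResty location block -- illustrative access-phase auth check
        location /v1/ {
            access_by_lua_block {
                local http  = require "resty.http"   -- assumes lua-resty-http is installed
                local httpc = http.new()
                local res = httpc:request_uri("http://127.0.0.1:4000/authz", {
                    method = "POST",
                    headers = {
                        ["X-Resource"]    = ngx.var.uri,
                        ["X-Method"]      = ngx.req.get_method(),
                        ["Authorization"] = ngx.var.http_authorization,
                    },
                })
                if not res or res.status ~= 200 then
                    return ngx.exit(ngx.HTTP_UNAUTHORIZED)
                end
                -- pass critical account info down to the microservice
                ngx.req.set_header("X-Account-Info", res.headers["X-Account-Info"])
            }

            proxy_pass http://127.0.0.1:9000;
        }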

    nginx

    I use nginx because it is very lightweight. Where Apache tries to include everything in the web server, nginx opts to have external programs/facilities take care of that so the web server can focus on efficiently serving web pages. While this can seem inefficient, it limits the number of new bugs found in the web server, which is the element that faces the client most directly.

    Marcel Kornegoor · CTO at AT Computing · 6 upvotes · 13.1K views
    Apache HTTP Server · nginx

    nginx or Apache HTTP Server: that's the question. The best choice depends on what you need to serve. In general, Nginx performs better with static content, while Apache and Nginx score roughly the same when it comes to dynamic content. Since most webpages and web applications use both static and dynamic content, a combination of both platforms may be the best solution.

    Since both web servers are easy to deploy and free to use, setting up a performance or feature comparison test is no big deal. This way you can see which solution suits your application or content best. Don't forget to look at other aspects, like security, back-end compatibility (ease of integration) and manageability, as well.
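
    The combined setup suggested above usually means nginx serving static files directly and proxying dynamic requests through to Apache on an internal port; a minimal sketch with placeholder paths and ports:

        server {
            listen 80;
            server_name www.example.com;
            root /var/www/site;

            # static assets straight from disk
            location ~* \.(css|js|png|jpg|gif|ico)$ {
                expires 7d;
            }

            # everything else goes to Apache listening on an internal port
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }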

    A reasonably good comparison between the two can be found in the link below.

    How developers use HAProxy and nginx
    MaxCDN uses nginx

    The original API performed a synchronous Nginx reload after provisioning a zone, which often took up to 30 seconds or longer. While important, this step shouldn't block the response to the user (or API) that a new zone has been created, or block subsequent requests to adjust the zone. With the new API, an independent worker reloads Nginx configurations based on zone modifications. It's like ordering a product online: don't pause the purchase process until the product's been shipped. Say the order has been created, and you can still cancel or modify shipping information. Meanwhile, the remaining steps are being handled behind the scenes. In our case, the zone provision happens instantly, and you can see the result in your control panel or API. Behind the scenes, the zone will be serving traffic within a minute.

    Cloudcraft uses nginx

    Nginx serves as the load balancer, router and SSL terminator of cloudcraft.co. As one of our app server nodes is spun up, an Ansible orchestration script adds the new node dynamically to the nginx load balancer config, which is then reloaded for a zero-downtime, seamless rolling deployment. By putting nginx in front of whatever web and API servers you might have, you gain a ton of flexibility. While previously I've cobbled together HAProxy and Stun as a poor man's load balancer, nginx just does a much better job and is far simpler in the long run.
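
    The zero-downtime part of that flow relies on nginx's graceful reload: the orchestration script rewrites the upstream block and asks the master process to re-read it, while the old workers finish their in-flight requests. A hedged sketch, with addresses that are placeholders rather than Cloudcraft's actual config:

        upstream app_nodes {
            server 10.0.0.11:3000;
            server 10.0.0.12:3000;
            server 10.0.0.13:3000;   # node just added by the Ansible run
        }

        # validate the new config, then reload without dropping connections:
        #   nginx -t && nginx -s reload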

    datapile uses nginx

    We use nginx for exactly what it is great for: serving static content in a cache-friendly, load-balanced manner.

    It is used exclusively for production web page hosting; we don't use nginx internally, only on the public-facing versions of static sites / Angular & Backbone/Marionette applications.

    Pēteris Caune uses nginx

    We use NGINX both as a reverse HTTP proxy and as an SMTP proxy, to handle incoming email.

    We previously handled incoming email with Mandrill, and then later with AWS SES. Handling incoming email yourself is not that much more difficult and saves quite a bit on operational costs.
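
    Handling incoming mail with nginx uses its mail proxy module: an auth_http endpoint tells nginx which backend should receive each message. A minimal sketch, where the hostname and auth endpoint are placeholders rather than the author's actual setup:

        mail {
            server_name mx.example.com;
            auth_http   127.0.0.1:8080/mail/auth;   # decides where to route each message

            server {
                listen    25;
                protocol  smtp;
                smtp_auth none;
                proxy_pass_error_message on;
            }
        }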

    Trello uses HAProxy

    We use HAProxy to load balance between our web servers. It balances TCP connections between the machines round-robin and leaves everything else to Node.js, keeping connections open with a reasonably long time-to-live to support WebSockets and re-use of a TCP connection for AJAX polling.
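
    The pattern Trello describes maps to HAProxy's TCP mode with long client and server timeouts so WebSocket and long-polling connections stay open; a hedged sketch with placeholder addresses and timeouts:

        defaults
            mode tcp
            timeout connect 5s
            timeout client  1h
            timeout server  1h

        frontend web
            bind *:80
            default_backend node_servers

        backend node_servers
            balance roundrobin
            server node1 10.0.0.31:3000 check
            server node2 10.0.0.32:3000 check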

    Wirkn Inc. uses nginx

    NGINX sits in front of all of our web servers. It is fantastic at load balancing traffic as well as serving as a cache at times when under massive load. It's a robust tool that we're happy to have at the front lines of all Wirkn web apps.

    The Independent uses HAProxy

    HAProxy manages internal and origin load balancing using Keepalived. Two small servers host the entire site, never moving above 15% load even during the largest load spikes.
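
    Keepalived's role in such a pair is to float a virtual IP between the two HAProxy servers so either can take over if the other fails; a minimal sketch where the interface, priorities and addresses are placeholders, not The Independent's configuration:

        vrrp_instance HAPROXY_VIP {
            state MASTER              # BACKUP on the second server
            interface eth0
            virtual_router_id 51
            priority 101              # lower priority on the backup node
            virtual_ipaddress {
                192.0.2.10
            }
        }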

    Packet uses HAProxy

    We use HAProxy to balance traffic at various points in our stack, including nginx nodes on different physical machines, and API nodes on the backend.

    ssshake uses HAProxy

    I use HAProxy primarily for application routing and SSL termination. I also use its logs and statistics to visualize incoming traffic in Kibana.

    Clarabridge Engage uses HAProxy

    We use HAProxy to load balance web requests for our web application, but also for some internal load balancing of microservices.
