Nginx or Apache - Help me decide
Nginx & Apache are the two most used web servers on the internet. Together, they own over 60% of the total market share.
But which one should you use? That's exactly the purpose of this article.
The first thing you should think about when choosing a critical piece of software like a web server is its performance characteristics.
Not only how many requests they can serve per second, but also how they behave under heavy load & what their resource usage (RAM, CPU) looks like.
This is our testing setup:
- Ubuntu 18.04
- Apache 2.4.29 (mpm_event)
- Nginx 1.14.0
- Default settings
- 1GB ram
- 1 CPU
As a benchmarking tool we're going to use wrk with the following settings:
- -d 60 (duration of the test, in seconds)
- -c 40 (number of concurrent connections)
- --latency (latency distribution)
Our target URL returns a small HTML file with no server language involved.
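Putting those flags together, the full wrk invocation looks something like this (the URL is a placeholder for your own test server):

```shell
# Run a 60-second benchmark with 40 concurrent connections
# and print the latency distribution at the end.
# http://test-server/index.html is a placeholder; point it at your own box.
wrk -d 60 -c 40 --latency http://test-server/index.html
```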
Running this test we get the following results (requests/second):
| Apache | Nginx |
| ------ | ----- |
| 670.53 | 660.15 |
It seems like Nginx & Apache are about the same speed!
But what about resource usage?
While running this test, Apache averaged a CPU usage of 20% & 18MB RAM:
Nginx CPU usage averaged 12% & only 8MB RAM:
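If you want comparable numbers for your own setup, one rough way to sample them (a sketch, not necessarily the exact method used here) is to sum CPU and resident memory across the server's processes with ps while the benchmark runs:

```shell
# Sum CPU% and resident memory (converted from KB to MB) across all nginx processes.
# Swap -C nginx for -C apache2 to measure Apache instead.
ps -C nginx -o pcpu=,rss= \
  | awk '{cpu+=$1; mem+=$2} END {printf "%.1f%% CPU, %.1f MB RAM\n", cpu, mem/1024}'
```

Run it a few times during the test and average the samples to approximate the figures above.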
While this benchmark might not be representative of all real-world use cases & you should consider running your own benchmarks for your particular setup, it can give you a general idea of how these servers perform.
In conclusion, if your biggest concern is performance & efficient use of your resources you should consider using Nginx.
Both servers come with a good set of core features which should be enough for most people...
...but sometimes you need that little extra.
That's why you can extend both servers using modules.
Modules can be compiled into the main server binary, or they can be added as dynamic modules that can be installed separately from the binary.
Dynamic modules are more flexible because they can be updated on their own, and you can add new modules without having to recompile your server.
Most Apache modules are dynamic, while Nginx only started supporting dynamic modules relatively recently (version 1.9.11, released in 2016).
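Once a dynamic module is installed, loading it in Nginx is a one-line directive at the top of nginx.conf (the module name here is just an example; use whichever module you actually installed):

```nginx
# Load a dynamically built module at startup.
# ngx_http_geoip_module.so is an example .so name.
load_module modules/ngx_http_geoip_module.so;
```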
Let's take a look at some useful modules for both servers.
- modsecurity: Available for Apache. This module adds a Web Application Firewall (WAF) in front of your application. There is an Nginx version, but it doesn't appear to be maintained; you can use Naxsi instead.
- page_speed: Available for Apache & Nginx. This module can optimize images on the fly & add other optimizations to improve page loading times.
- ngx_mruby / mod_ruby: Available for Apache & Nginx. This module allows you to use the Ruby programming language to process requests & make decisions to redirect to another page, return some file contents, etc. The nginx version is well-maintained & faster.
Many popular modules are available for both servers, so module availability may not be a factor when deciding what server to use.
For a complete list of available modules you can go here:
Installing a new module:
Adding a new module to Apache is easier than adding one to Nginx.
You can install Apache modules from your package repository, then use the a2enmod command to enable them & restart your server.
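For example, on Ubuntu 18.04 installing and enabling ModSecurity for Apache looks like this (package and module names are Debian/Ubuntu-specific):

```shell
# Install a module from the package repository (example: ModSecurity)
sudo apt-get install libapache2-mod-security2
# Enable it by name, then restart Apache to load it
sudo a2enmod security2
sudo systemctl restart apache2
```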
Nginx may require you to compile from source to install some modules, since dynamic modules must be built against the same version of Nginx that you're running.
However, you can do this on a non-production server, then copy the dynamic module (.so file) into production.
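A sketch of that build-elsewhere-then-copy workflow (paths and the module source directory are examples, and the source version must match `nginx -v` on the production server):

```shell
# On a build machine: fetch the nginx source matching the production version
wget http://nginx.org/download/nginx-1.14.0.tar.gz
tar xzf nginx-1.14.0.tar.gz
cd nginx-1.14.0

# Configure the module as dynamic, then build only the modules.
# ../some-module-src is a placeholder for the module's source directory.
./configure --with-compat --add-dynamic-module=../some-module-src
make modules

# Copy the resulting .so into the production modules directory
scp objs/*.so production:/etc/nginx/modules/
```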
If you think you'll need to be changing modules frequently this is something to consider, but that's not often the case.
The popularity of a piece of open-source software matters because the most popular projects usually get the most attention. This can translate into better documentation, the ability to find solutions to specific problems & how well maintained the software itself is.
So exactly how popular are Apache & Nginx?
According to the August 2018 Web Server Survey conducted by netcraft.com, these are the stats for active sites:
- Apache 38.68% (-0.62 from previous month)
- nginx 22.67% (+0.11 from previous month)
The rate of change is pretty small, but that's to be expected from an established technology.
Looking at the big picture, Apache has been losing a lot of ground over the last 7 years. In 2011, Apache owned 60% of the active sites market share, while Nginx (released in October 2004) only had 10% that same year.
If the trend continues, Nginx is going to overtake Apache as the "king of web servers" in a few years.
Maybe earlier than we expect.
Something to keep in mind when making your decision.
Most Common Uses
Let's have a look at the most common uses for Apache & Nginx; this will help you decide whether your use case matches what each server does best naturally.
Apache:
- Runs PHP applications (like WordPress) without external software; just install mod_php if it isn't already part of the default install for your distribution.
- Works great in a shared environment (like a hosting provider) because it supports directory-based configuration with .htaccess files.
Nginx:
- Serves static assets very efficiently thanks to its event-driven approach to handling requests.
- Is a great proxy & cache layer for the same reason.
- You can easily implement custom logic with modules like ngx_mruby. Cloudflare makes great use of this for their custom WAF (Web Application Firewall).
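Installing mod_php from the first Apache point above is typically a couple of commands on Ubuntu 18.04 (package and module names vary by distribution and PHP version):

```shell
# Install the PHP integration module for Apache and enable it
sudo apt-get install libapache2-mod-php
sudo a2enmod php7.2   # module name depends on the installed PHP version
sudo systemctl restart apache2
```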
A few more things to consider before making your final decision:
- Nginx offers an enterprise-grade solution in the form of Nginx PLUS. This adds professional support & a few extra capabilities (like monitoring) which may be important to you if you are running a big operation.
- Apache & Nginx can be used together, with Nginx proxying non-static asset requests to Apache. This can add significant complexity to your setup, but it's something to consider if you want to use features from both.
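A minimal sketch of that hybrid setup, assuming Apache listens locally on port 8080 (the domain, paths, and ports here are all examples):

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve static assets directly from disk with Nginx
    location /static/ {
        root /var/www/example;
    }

    # Proxy everything else (e.g. PHP handled by mod_php) to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```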
We hope you found this comparison useful!
Apache HTTP Server vs nginx: What are the differences?
What is Apache HTTP Server? The most popular web server on the Internet since April 1996. The Apache HTTP Server is a powerful and flexible HTTP/1.1 compliant web server. Originally designed as a replacement for the NCSA HTTP Server, it has grown to be the most popular web server on the Internet.
What is nginx? A high performance, free, open source web server powering the busiest sites on the Internet. nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server, written by Igor Sysoev. According to Netcraft, nginx served or proxied 30.46% of the top million busiest sites in Jan 2018.
Apache HTTP Server and nginx belong to "Web Servers" category of the tech stack.
"Web server", "Most widely-used web server" and "Virtual hosting" are the key factors why developers consider Apache HTTP Server; whereas "High-performance http server", "Performance" and "Easy to configure" are the primary reasons why nginx is favored.
Apache HTTP Server and nginx are both open source tools. It seems that nginx with 9.1K GitHub stars and 3.43K forks on GitHub has more adoption than Apache HTTP Server with 2.21K GitHub stars and 657 GitHub forks.
According to the StackShare community, nginx has broader approval, being mentioned in 8669 company stacks & 2556 developer stacks; compared to Apache HTTP Server, which is listed in 6194 company stacks and 1067 developer stacks.
We used to primarily use nginx for our static web server and proxy in-front of Node.js. Now, we use Caddy. And we couldn't be happier.
Caddy is simpler on all fronts. Configuration is easier. Free HTTPS out of the box. Some fantastic plugins. And for the most part, it's fast.
Don't get me wrong, it's not lost on me that Nginx is actually a superior product.
But for the times when you don't need that extra performance, and complexity - take a look at Caddy.
In 2010 we made the very difficult decision to entirely re-engineer our existing monolithic LAMP application from the ground up in order to address some growing concerns about its long-term viability as a platform.
A full application re-write is almost never the answer, because of the risks involved. However, the situation warranted drastic action, as it was clear that the existing product was going to face severe scaling issues. We felt it better to address these sooner rather than later, and also to take the opportunity to improve the international architecture and refactor the database in order that it better matched the changes in core functionality.
PostgreSQL was chosen for its reputation as a solid, ACID-compliant database backend, and it was available via the AWS RDS service, which reduced the management overhead of us having to configure it ourselves. In order to reduce read load on the primary database we implemented an Elasticsearch layer for fast and scalable search operations. Synchronisation of these indexes was to be achieved through the use of Sidekiq's Redis-based background workers on Amazon ElastiCache. Again, the AWS solution here looked to be an easy way to keep our involvement in managing this part of the platform to a minimum, allowing us to focus on our core business.
Rails was chosen for its ability to quickly get core functionality up and running, its MVC architecture and also its focus on Test Driven Development using RSpec and Selenium, with Travis CI providing continuous integration. We also liked Ruby for its terse, clean and elegant syntax. Though YMMV on that one!
Unicorn was chosen for its continual deployment and reputation as a reliable application server, nginx for its reputation as a fast and stable reverse-proxy. We also took advantage of the Amazon CloudFront CDN here to further improve performance by caching static assets globally.
We tried to strike a balance between having control over management and configuration of our core application with the convenience of being able to leverage AWS hosted services for ancillary functions (Amazon SES , Amazon SQS Amazon Route 53 all hosted securely inside Amazon VPC of course!).
Whilst there is some compromise here with potential vendor lock-in, the tasks being performed by these ancillary services are not particularly specialised, which should mitigate this risk. Furthermore, we have already containerised the stack in our development environment using Docker, and are looking at how best to bring this into production - potentially using Amazon EC2 Container Service.
We use nginx and OpenResty as our API proxy running on EC2 for auth, caching, and some rate limiting for our dozens of microservices. Since OpenResty supports embedded Lua, we were able to write a custom access module that calls out to our authentication service with the resource, method, and access token. If that succeeds, critical account info is passed down to the underlying microservice. This proxy approach keeps all authentication and authorization in one place and provides a unified CX for our API users. Nginx is fast and cheap to run, though we are always exploring alternatives that are also economical. What do you use?
The original API performed a synchronous Nginx reload after provisioning a zone, which often took up to 30 seconds or longer. While important, this step shouldn't block the response to the user (or API) that a new zone has been created, or block subsequent requests to adjust the zone. With the new API, an independent worker reloads Nginx configurations based on zone modifications. It's like ordering a product online: don't pause the purchase process until the product's been shipped. Say the order has been created, and you can still cancel or modify shipping information. Meanwhile, the remaining steps are being handled behind the scenes. In our case, the zone provision happens instantly, and you can see the result in your control panel or API. Behind the scenes, the zone will be serving traffic within a minute.
Nginx serves as the loadbalancer, router and SSL terminator of cloudcraft.co. As one of our app server nodes is spun up, an Ansible orchestration script adds the new node dynamically to the nginx loadbalancer config, which is then reloaded for a zero-downtime, seamless rolling deployment. By putting nginx in front of whatever web and API servers you might have, you gain a ton of flexibility. While previously I've cobbled together HAProxy and Stunnel as a poor man's loadbalancer, nginx just does a much better job and is far simpler in the long run.
Used nginx as exactly what it is great for: serving static content in a cache-friendly, load balanced manner.
It is exclusively for production web page hosting, we don't use nginx internally, only on the public-facing versions of static sites / Angular & Backbone/Marionette applications.
We use NGINX both as reverse HTTP proxy and also as a SMTP proxy, to handle incoming email.
We previously handled incoming email with Mandrill, and then later with AWS SES. Handling incoming email yourself is not that much more difficult and saves quite a bit on operational costs.
NGINX sits in front of all of our web servers. It is fantastic at load balancing traffic as well as serving as a cache at times when under massive load. It's a robust tool that we're happy to have at the front lines of all Wirkn web apps.
We use httpd in front of our Tomcat web server. Apache terminates the TLS connections and forwards to the embedded Tomcat server(s) for request processing. We also use it as load balancer for multi-server deployments.
The best-known web server. We are using Apache due to its .htaccess feature, but it's just a backend to process PHP. In front of Apache we are using NGINX to serve static files.
Apache splits static traffic from application traffic, as well as providing a selection of tools to assist in the running of the site (rewrites, logging, etc).
Primary web server, delivers PHP-rendered pages as well as static HTML content. Ruby CGIs deliver objects to browser-side code using REST/JSON