nginx: a high performance, free, open-source web server powering the busiest sites on the Internet.
Apache httpd: the most popular web server on the Internet since April 1996.
Internet Information Services (IIS): a web server for Microsoft Windows.
Why people like using this tool
Companies using this service
nginx acts as our main webserver, proxying all connections to our other servers (both Python servers and Go servers). We use nginx because it's crazy fast, and it's proven to be reliable for the entire time we've been using it.
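The setup described above can be sketched in nginx config terms; the upstream names, ports, and URL split are illustrative placeholders, not the actual production configuration:

```nginx
# Sketch: nginx as the front webserver, proxying to separate
# Python and Go backend pools (addresses/paths are assumptions).
upstream python_backend {
    server 127.0.0.1:8000;
}
upstream go_backend {
    server 127.0.0.1:9000;
}
server {
    listen 80;
    location /api/ {
        proxy_pass http://go_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
    location / {
        proxy_pass http://python_backend;
        proxy_set_header Host $host;
    }
}
```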
Nginx's easy load balancing made it an essential part of this stack. The simple implementation of an SSL certificate also helped us a lot in this stack.
Great resources in spinning up proper configurations for proxying public URLs to local node apps and services, plus it's a very solid and fast webserver in general.
NGINX sits in front of all of our web servers. It is fantastic at load balancing traffic as well as serving as a cache at times when under massive load. It's a robust tool that we're happy to have at the front lines of all Wirkn web apps.
This is used as our primary reverse proxy, allowing us to bypass the firewall restrictions most of our users have by routing all our APIs via layer 7.
Experience with nginx as a reverse proxy for a number of backend projects (with SSL support). My personal website runs nginx in a Docker container.
With nginx we deploy all web content and use it as a reverse proxy for unified SSL and encryption, as well as burst control and traffic filtering for all web traffic.
NGINX reverse proxies node.js applications to Varnish and handles the serving of static files
Nginx is used as a proxy between all of our backend Docker services. It allows us to keep the SSL termination on a single service so we don't have the overhead being added to our Node.js processes.
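Keeping SSL termination on a single nginx service, as described above, might look like the following sketch; the certificate paths and the `node_app` upstream name are placeholders:

```nginx
# Sketch: terminate TLS at nginx, forward plain HTTP to a Node.js
# container inside the Docker network (names/paths are assumptions).
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/example.com.pem;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
    location / {
        proxy_pass http://node_app:3000;  # no TLS overhead in the Node process
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```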
My preferred web server. Updated more often than other competitors, and was the first web server I was introduced to when creating Django apps.
NGINX acts as the gateway server, decoding TLS and load balancing requests to the various downstream servers.
Used nginx as exactly what it is great for: serving static content in a cache-friendly, load balanced manner.
It is exclusively for production web page hosting, we don't use nginx internally, only on the public-facing versions of static sites / Angular & Backbone/Marionette applications.
I use Nginx as an SSL terminator, load balancer, and static content web server.
The webserver is lightweight, easy to setup, and highly configurable.
All incoming requests pass through our high redundancy load balanced nginx instances, no matter if it's static resources for the websites, API calls via uwsgi or TCP stream-based services.
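Handling static resources, uwsgi-backed API calls, and raw TCP services in one nginx instance combines the `http` and `stream` contexts; this sketch uses placeholder sockets and paths:

```nginx
# Sketch: one nginx fronting static files, a uWSGI app, and a TCP service.
http {
    server {
        listen 80;
        location /static/ { root /srv/www; }
        location / {
            include uwsgi_params;         # standard uwsgi parameter set
            uwsgi_pass 127.0.0.1:3031;    # uWSGI application socket (assumed)
        }
    }
}
# TCP stream-based services go through the stream module:
stream {
    server {
        listen 5432;
        proxy_pass backend_db:5432;       # illustrative TCP backend
    }
}
```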
Simple, no hassle, small dependencies and utilized to serve different stuffs easily (fcgi, reverse proxy to golang, nodejs and rails)
Nginx serves as a proxy for dynamic data and delivers static content.
Nginx, as usual, proxies dynamic requests to the backend and serves static content.
NGINX works with PHP to serve the content for our pages, its event driven system is both lightweight and responsive.
Nginx serves as the loadbalancer, router and SSL terminator of cloudcraft.co. As one of our app server nodes is spun up, an Ansible orchestration script adds the new node dynamically to the nginx loadbalancer config which is then reloaded for a zero downtime seamless rolling deployment. By putting nginx in front or whatever web and API servers you might have, you gain a ton of flexibility. While previously I've cobbled together HAProxy and Stun as a poor man's loadbalancer, nginx just does a much better job and is far simpler in the long run.
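The rolling-deployment pattern described above amounts to rewriting the upstream block and reloading; the addresses below are placeholders for whatever the orchestration script manages:

```nginx
# Sketch: orchestration appends the new app node to the upstream
# block, then triggers a zero-downtime reload (old workers finish
# in-flight requests before exiting).
upstream app_servers {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;   # new node added dynamically by the script
}
# The reload step is then: nginx -t && nginx -s reload
```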
→ Yet Core
Reverse proxy supporting HSTS and Elliptic-Curve Diffie-Hellman key exchange for perfect forward secrecy; AWS ELB doesn't support the range of HTTPS options that nginx does.
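The hardening described above roughly corresponds to this server block; the cipher list, paths, and `max-age` are example values, not a recommendation for any specific site:

```nginx
# Sketch: TLS with ECDHE key exchange (forward secrecy) plus an HSTS header.
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/site.pem;
    ssl_certificate_key /etc/nginx/certs/site.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers on;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    location / {
        proxy_pass http://127.0.0.1:8080;  # illustrative backend
    }
}
```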
NGINX is quite possibly our favorite tool on the serverside, because of how easy it makes load-balancing. You just throw it in, tweak the config and boom, you're up and running in minutes.
Nginx is a caching proxy that provides basic load balancing for our API and serves our rich client applications to end-user browsers.
Nginx serves client HTTPS requests and acts as an SSL proxy to the Gunicorn application server. It also rewrites request headers.
nginx is frequently used in our web applications as a proxy that also serves a few static files or adds some HTTP response headers, e.g. for client-side caching.
Allows us to run an Apache Webserver and a Node.js websocket server on port 80 on a single host
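Multiplexing one host's port 80 between Apache and a websocket server works by routing on path and passing the Upgrade handshake through; the paths and internal ports below are assumptions:

```nginx
# Sketch: nginx on port 80 splits traffic between a Node.js websocket
# server and Apache, both listening on internal ports.
server {
    listen 80;
    # Websocket traffic, with the HTTP Upgrade handshake preserved:
    location /ws/ {
        proxy_pass http://127.0.0.1:8081;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    # Everything else goes to Apache on an internal port:
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```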
We use NGINX both as reverse HTTP proxy and also as a SMTP proxy, to handle incoming email.
We previously handled incoming email with Mandrill, and then later with AWS SES. Handling incoming email yourself is not that much more difficult and saves quite a bit on operational costs.
Needed for our microservices architecture, and the best way to scale the whole system into Docker containers in the cloud.
We use nginx to serve all of our static files, and to load balance our multiple backends. Our nginx frontend servers are themselves load-balanced using a Linode NodeBalancer.
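Serving static files locally while load balancing the backends, as described above, can be sketched like this; the addresses, root path, and cache lifetime are placeholders:

```nginx
# Sketch: local static files with long-lived cache headers, dynamic
# requests balanced across multiple backends (addresses are assumptions).
upstream app_backends {
    server 10.0.0.11:8000;
    server 10.0.0.12:8000;
}
server {
    listen 80;
    location /static/ {
        root /srv/www;
        expires 30d;             # cache-friendly headers for static assets
    }
    location / {
        proxy_pass http://app_backends;
    }
}
```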
We switched from Apache to nginx for a speed boost, since we only need a light web server.
We are using NGINX to serve and cache static files and as a load balancer in front of our Apache webservers.
Nginx is used to serve the site from within Docker via Passenger, as well as for error page and www to non-www HTTP redirects.
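The www-to-non-www redirect mentioned above is a small dedicated server block; the domain is a placeholder:

```nginx
# Sketch: permanent redirect from www to the bare domain.
server {
    listen 80;
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}
```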
Not many web servers can offer all four, but nginx can.
Nginx is the webserver we use in front of the Unicorn application servers. We have multiple Unicorn workers for every application (depending on their performance requirements), and Nginx basically functions as a load balancer.
It used to be Apache, but the configs, robustness and performance of nginx made the switch an obvious choice.
This functions as our load balancer and traffic cop directing API and incoming requests to the correct component.
The original API performed a synchronous Nginx reload after provisioning a zone, which often took up to 30 seconds or longer. While important, this step shouldn't block the response to the user (or API) that a new zone has been created, or block subsequent requests to adjust the zone. With the new API, an independent worker reloads Nginx configurations based on zone modifications.

It's like ordering a product online: don't pause the purchase process until the product's been shipped. Say the order has been created, and you can still cancel or modify shipping information. Meanwhile, the remaining steps are being handled behind the scenes. In our case, the zone provision happens instantly, and you can see the result in your control panel or API. Behind the scenes, the zone will be serving traffic within a minute.
The API is hosted on a VPS alongside the WordPress site for efficient access to the database.
Easy to set up, though I plan to replace it with nginx since there is no server-side processing planned.
Workhorse server. Does the job, without too many complaints, and with good answers on SO.
→ Sud Web
Our websites run on Apache. We use the reverse proxy feature to serve static websites hosted on GitHub directly on our domain.
mod_proxy_balancer provides highly concurrent end-user browser connections and manages session routing to the application cluster.
Primary web server, delivers PHP-rendered pages as well as static HTML content. Ruby CGIs deliver objects to browser-side code using REST/JSON
The best-known webserver. We are using Apache for its .htaccess feature, but it's just a backend to process PHP. In front of Apache we are using NGINX to serve static files.
Apache splits static traffic from application traffic, as well as providing a selection of tools to assist in running of the site (rewrites, logging etc).
This is a legacy system requirement. We have some portions of our website written in PHP. Normally this wouldn't be an issue, but at the time they decided to use PHP+Windows, they were also trying to use MSSQL databases (all the Microsoft influence was due to some Azure credits the company received early on). The particular driver they ended up picking forced them into using the mssql_* functions instead of PDO. This meant that the majority of the site used these rather outdated calls, and replacing them was a rather large endeavour. So while we migrate some of the PHP backend away to various node.js API systems, we are simply sustaining the existing PHP portions.