Jetty vs nginx


Jetty vs nginx: What are the differences?

Jetty: An open-source project providing an HTTP server, HTTP client, and javax.servlet container. Jetty is used in a wide variety of projects and products, both in development and production. It can be easily embedded in devices, tools, frameworks, application servers, and clusters; see the Jetty Powered page for more uses of Jetty. nginx: A high-performance, free, open-source web server powering some of the busiest sites on the Internet. nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server, written by Igor Sysoev. According to Netcraft, nginx served or proxied 30.46% of the top million busiest sites in January 2018.

Jetty and nginx can be primarily classified as "Web Servers" tools.

"Lightweight" is the top reason why over 12 developers like Jetty, while over 1437 developers mention "High-performance http server" as the leading reason for choosing nginx.

Jetty and nginx are both open source tools. With 9K GitHub stars and 3.41K forks, nginx appears to be more popular than Jetty, which has 2.54K stars and 1.39K forks.

According to the StackShare community, nginx has broader approval, being mentioned in 8631 company stacks and 2494 developer stacks, compared to Jetty, which is listed in 58 company stacks and 16 developer stacks.

What is Jetty?

Jetty is used in a wide variety of projects and products, both in development and production. Jetty can be easily embedded in devices, tools, frameworks, application servers, and clusters. See the Jetty Powered page for more uses of Jetty.

What is nginx?

nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server, written by Igor Sysoev. According to Netcraft, nginx served or proxied 30.46% of the top million busiest sites in January 2018.
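As a sketch of the reverse-proxy role described above, a minimal nginx server block might look like the following. The server name, port, and upstream address are placeholders for illustration, not taken from this page:

```nginx
# Minimal reverse-proxy sketch; all names and addresses are hypothetical.
server {
    listen 80;
    server_name example.com;

    location / {
        # Forward every request to a local application server
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```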


    What are some alternatives to Jetty and nginx?
    Apache Tomcat
    Apache Tomcat powers numerous large-scale, mission-critical web applications across a diverse range of industries and organizations.
    Netty
    Netty is a NIO client server framework which enables quick and easy development of network applications such as protocol servers and clients. It greatly simplifies and streamlines network programming such as TCP and UDP socket server.
    Wildfly
    It is a flexible, lightweight, managed application runtime that helps you build amazing applications. It supports the latest standards for web development.
    JBoss
    An application platform for hosting your apps that provides an innovative modular, cloud-ready architecture, powerful management and automation, and world class developer productivity.
    GlassFish
    An application server that can manage Java EE applications. Use GlassFish for Java EE enterprise applications; a separate web server is mostly needed in a production environment.
    Decisions about Jetty and nginx
    Tim Abbott
    Founder at Zulip | 6 upvotes · 46K views
    at Zulip
    nginx · Apache HTTP Server

    We've been happy with nginx as part of our stack. As an open source web application that folks install on-premise, the configuration system for the webserver is pretty important to us. I have a few complaints (e.g. the configuration syntax for conditionals is a pain), but overall we've found it pretty easy to build a configurable set of options (see link) for how to run Zulip on nginx, both directly and with a remote reverse proxy in front of it, with a minimum of code duplication.

    Certainly I've been a lot happier with it than I was working with Apache HTTP Server in past projects.

    Prometheus · Logstash · nginx · OpenResty · Lua · Go

    At Kong while building an internal tool, we struggled to route metrics to Prometheus and logs to Logstash without incurring too much latency in our metrics collection.

    We replaced nginx with OpenResty on the edge of our tool which allowed us to use the lua-nginx-module to run Lua code that captures metrics and records telemetry data during every request's log phase. Our code then pushes the metrics to a local aggregator process (written in Go) which in turn exposes them in Prometheus Exposition Format for consumption by Prometheus. This solution reduced the number of components we needed to maintain and is fast thanks to NGINX and LuaJIT.
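A rough sketch of the log-phase approach described above, using lua-nginx-module's `log_by_lua_block` directive. The shared-dictionary name, ports, and upstream are illustrative; this is not Kong's actual code:

```nginx
http {
    # Shared memory zone for counters; name and size are hypothetical
    lua_shared_dict metrics 10m;

    upstream backend {
        server 127.0.0.1:3000;   # placeholder application server
    }

    server {
        listen 8080;
        location / {
            proxy_pass http://backend;
            # Runs in the log phase, after the response has been sent,
            # so metric collection adds no latency to the request itself
            log_by_lua_block {
                local m = ngx.shared.metrics
                m:incr("requests_total", 1, 0)
                m:incr("latency_sum", tonumber(ngx.var.request_time), 0)
            }
        }
    }
}
```

A separate process (the Go aggregator in the story above) would read these counters and expose them in Prometheus format.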

    Scott Mebberson
    CTO / Chief Architect at Idearium | 5 upvotes · 29.6K views
    nginx · Caddy

    We used to primarily use nginx for our static web server and proxy in-front of Node.js. Now, we use Caddy. And we couldn't be happier.

    Caddy is simpler on all fronts. Configuration is easier. Free HTTPS out of the box. Some fantastic plugins. And for the most part, it's fast.

    Don't get me wrong, it's not lost on me that Nginx is actually a superior product.

    But for the times when you don't need that extra performance and complexity, take a look at Caddy.

    Simon Bettison
    Managing Director at Bettison.org Limited | 6 upvotes · 106.8K views
    at Bettison.org Limited
    PostgreSQL · Elasticsearch · Sidekiq · Redis · Amazon ElastiCache · Rails · RSpec · Selenium · Travis CI · Ruby · Unicorn · nginx · Amazon CloudFront · Amazon SES · Amazon SQS · Amazon Route 53 · Amazon VPC · Docker · Amazon EC2 Container Service

    In 2010 we made the very difficult decision to entirely re-engineer our existing monolithic LAMP application from the ground up, in order to address some growing concerns about its long-term viability as a platform.

    A full application re-write is almost never the answer, because of the risks involved. However, the situation warranted drastic action, as it was clear that the existing product was going to face severe scaling issues. We felt it better to address these sooner rather than later, and also to take the opportunity to improve the international architecture and to refactor the database in order that it better matched the changes in core functionality.

    PostgreSQL was chosen for its reputation as a solid, ACID-compliant database backend; it was also available through the AWS RDS service, which reduced the management overhead of configuring it ourselves. In order to reduce read load on the primary database we implemented an Elasticsearch layer for fast and scalable search operations. Synchronisation of these indexes was achieved through Sidekiq's Redis-based background workers on Amazon ElastiCache. Again, the AWS solution here looked to be an easy way to keep our involvement in managing this part of the platform to a minimum, allowing us to focus on our core business.

    Rails was chosen for its ability to quickly get core functionality up and running, its MVC architecture, and its focus on Test-Driven Development using RSpec and Selenium, with Travis CI providing continuous integration. We also liked Ruby for its terse, clean and elegant syntax. Though YMMV on that one!

    Unicorn was chosen for its support for continual deployment and its reputation as a reliable application server; nginx for its reputation as a fast and stable reverse proxy. We also took advantage of the Amazon CloudFront CDN here to further improve performance by caching static assets globally.
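A typical pattern for this pairing is nginx serving static assets directly and proxying everything else to Unicorn over a Unix socket; the paths and socket name below are illustrative placeholders, not Bettison.org's actual configuration:

```nginx
upstream unicorn {
    # Unicorn workers listen on a local Unix socket; the path is hypothetical
    server unix:/var/run/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    root /srv/app/public;          # Rails public/ directory (placeholder)
    try_files $uri @unicorn;       # serve static files, else hit the app

    location @unicorn {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://unicorn;
    }
}
```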

    We tried to strike a balance between having control over management and configuration of our core application, and the convenience of being able to leverage AWS hosted services for ancillary functions (Amazon SES, Amazon SQS, Amazon Route 53; all hosted securely inside Amazon VPC, of course!).

    Whilst there is some compromise here with potential vendor lock-in, the tasks being performed by these ancillary services are not particularly specialised, which should mitigate this risk. Furthermore, we have already containerised the stack in our development environment using Docker, and are looking at how best to bring this into production, potentially using Amazon EC2 Container Service.

    Chris McFadden
    VP, Engineering at SparkPost | 7 upvotes · 80.5K views
    at SparkPost
    nginx · OpenResty · Lua

    We use nginx and OpenResty as our API proxy running on EC2 for auth, caching, and some rate limiting for our dozens of microservices. Since OpenResty supports embedded Lua, we were able to write a custom access module that calls out to our authentication service with the resource, method, and access token. If that succeeds, then critical account info is passed down to the underlying microservice. This proxy approach keeps all authentication and authorization in one place and provides a unified CX for our API users. Nginx is fast and cheap to run, though we are always exploring alternatives that are also economical. What do you use?
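The access-module pattern described here can be sketched with OpenResty's `access_by_lua_block`: an internal subrequest to the auth service, with account info passed down as a header on success. All names (locations, headers, upstreams, ports) are illustrative, not SparkPost's actual module:

```nginx
# Internal-only route that proxies to the authentication service
location = /auth {
    internal;
    proxy_pass http://127.0.0.1:9000/check;   # placeholder auth service
}

location /api/ {
    access_by_lua_block {
        -- Call the auth service with method, resource, and access token
        local res = ngx.location.capture("/auth", {
            args = {
                method   = ngx.req.get_method(),
                resource = ngx.var.uri,
                token    = ngx.var.http_authorization or "",
            }
        })
        if res.status ~= 200 then
            return ngx.exit(ngx.HTTP_UNAUTHORIZED)
        end
        -- Pass critical account info down to the microservice
        ngx.req.set_header("X-Account-Info", res.header["X-Account-Info"])
    }
    proxy_pass http://microservices;   # placeholder upstream
}
```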

    nginx

    I use nginx because it is very lightweight. Where Apache tries to include everything in the web server, nginx opts to have external programs/facilities take care of that, so the web server can focus on efficiently serving web pages. While this can seem inefficient, it limits the number of new bugs found in the web server, which is the element that faces the client most directly.

    Marcel Kornegoor
    CTO at AT Computing | 6 upvotes · 19.6K views
    nginx · Apache HTTP Server

    nginx or Apache HTTP Server: that's the question. The best choice depends on what it needs to serve. In general, Nginx performs better with static content, while Apache and Nginx score roughly the same when it comes to dynamic content. Since most webpages and web applications use both static and dynamic content, a combination of both platforms may be the best solution.

    Since both web servers are easy to deploy and free to use, setting up a performance or feature comparison test is no big deal. This way you can see which solution suits your application or content best. Don't forget to look at other aspects as well, like security, back-end compatibility (ease of integration) and manageability.
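One common way to combine the two, as suggested above, is nginx serving static files directly while proxying dynamic requests to Apache on a local port. The paths and ports here are placeholders for illustration:

```nginx
server {
    listen 80;
    root /var/www/site;   # hypothetical document root

    # nginx handles static assets itself, with far-future caching
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        expires 7d;
    }

    # everything else (dynamic content) goes to Apache on a local port
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```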

    A reasonably good comparison between the two can be found in the link below.

    How developers use Jetty and nginx
    MaxCDN uses nginx

    The original API performed a synchronous Nginx reload after provisioning a zone, which often took up to 30 seconds or longer. While important, this step shouldn't block the response to the user (or API) that a new zone has been created, or block subsequent requests to adjust the zone. With the new API, an independent worker reloads Nginx configurations based on zone modifications. It's like ordering a product online: don't pause the purchase process until the product's been shipped. Say the order has been created, and you can still cancel or modify shipping information. Meanwhile, the remaining steps are being handled behind the scenes. In our case, the zone provision happens instantly, and you can see the result in your control panel or API. Behind the scenes, the zone will be serving traffic within a minute.

    Cloudcraft uses nginx

    Nginx serves as the loadbalancer, router and SSL terminator of cloudcraft.co. As one of our app server nodes is spun up, an Ansible orchestration script adds the new node dynamically to the nginx loadbalancer config, which is then reloaded for a zero-downtime, seamless rolling deployment. By putting nginx in front of whatever web and API servers you might have, you gain a ton of flexibility. While previously I've cobbled together HAProxy and Stunnel as a poor man's loadbalancer, nginx just does a much better job and is far simpler in the long run.
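The load-balancing and SSL-terminating role described here usually reduces to an `upstream` block plus a TLS server block; an orchestration tool (Ansible, in Cloudcraft's case) rewrites the upstream list and triggers a reload. The addresses and certificate paths below are placeholders:

```nginx
upstream app_nodes {
    # Orchestration adds/removes servers here, then reloads nginx
    server 10.0.1.10:3000;
    server 10.0.1.11:3000;
}

server {
    listen 443 ssl;
    server_name example.com;                   # placeholder
    ssl_certificate     /etc/ssl/site.crt;     # placeholder paths
    ssl_certificate_key /etc/ssl/site.key;

    location / {
        proxy_pass http://app_nodes;           # round-robin by default
        proxy_set_header Host $host;
    }
}
```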

    datapile uses nginx

    Used nginx as exactly what it is great for: serving static content in a cache-friendly, load balanced manner.

    It is exclusively for production web page hosting, we don't use nginx internally, only on the public-facing versions of static sites / Angular & Backbone/Marionette applications.

    Pēteris Caune uses nginx

    We use NGINX both as a reverse HTTP proxy and as an SMTP proxy, to handle incoming email.

    We previously handled incoming email with Mandrill, and then later with AWS SES. Handling incoming email yourself is not that much more difficult and saves quite a bit on operational costs.
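nginx's mail module can act as the SMTP proxy described above; a minimal sketch follows (it assumes nginx built with the mail module, and the auth endpoint, hostnames, and ports are hypothetical):

```nginx
mail {
    server_name mail.example.com;          # placeholder hostname
    # HTTP endpoint that tells nginx where to route each incoming message
    auth_http 127.0.0.1:8000/smtp-auth;    # placeholder auth service

    server {
        listen    25;
        protocol  smtp;
        smtp_auth none;      # accept incoming mail without SMTP authentication
        proxy     on;
    }
}
```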

    Wirkn Inc. uses nginx

    NGINX sits in front of all of our web servers. It is fantastic at load balancing traffic as well as serving as a cache at times when under massive load. It's a robust tool that we're happy to have at the front lines of all Wirkn web apps.

    How much does Jetty cost? Pricing unavailable.
    How much does nginx cost? Pricing unavailable.