
Alternatives to NSQ

RabbitMQ, Kafka, Redis, NATS, and gRPC are the most popular alternatives and competitors to NSQ.

What is NSQ and what are its top alternatives?

NSQ is a realtime distributed messaging platform designed to operate at scale, handling billions of messages per day. It promotes distributed and decentralized topologies without single points of failure, enabling fault tolerance and high availability coupled with a reliable message delivery guarantee.
NSQ is a tool in the Message Queue category of a tech stack.
NSQ is an open source tool with 20.5K GitHub stars and 2.6K GitHub forks. Here's a link to NSQ's open source repository on GitHub.
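
To make NSQ's pub/sub model concrete, here is a rough sketch of publishing a message to a local nsqd over its HTTP API and consuming it with the pynsq client; the topic and channel names are illustrative, and nsqd's default ports (4151 HTTP, 4150 TCP) are assumed:

```python
# Publish: POST the message body to nsqd's HTTP /pub endpoint (default port 4151).
import requests

requests.post("http://127.0.0.1:4151/pub", params={"topic": "events"}, data=b"hello nsq")

# Consume: a pynsq reader subscribed to the same topic on an illustrative channel.
import nsq

def handler(message):
    print(message.body)
    return True  # marks the message as successfully finished

reader = nsq.Reader(
    message_handler=handler,
    nsqd_tcp_addresses=["127.0.0.1:4150"],
    topic="events",
    channel="workers",
)
nsq.run()
```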

Top Alternatives to NSQ

  • RabbitMQ

    RabbitMQ gives your applications a common platform to send and receive messages, and your messages a safe place to live until received. ...

  • Kafka

    Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...

  • Redis

    Redis is an open source, BSD licensed, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets. ...

  • NATS

    Unlike traditional enterprise messaging systems, NATS has an always-on dial tone that does whatever it takes to remain available. This forms a great base for building modern, reliable, and scalable cloud and distributed systems. ...

  • gRPC

    gRPC is a modern open source high performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking... ...

  • MQTT

    It was designed as an extremely lightweight publish/subscribe messaging transport. It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium. ...

  • ZeroMQ

    The 0MQ lightweight messaging kernel is a library which extends the standard socket interfaces with features traditionally provided by specialised messaging middleware products. 0MQ sockets provide an abstraction of asynchronous message queues, multiple messaging patterns, message filtering (subscriptions), seamless access to multiple transport protocols and more. ...

  • Amazon SQS

    Transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available. With SQS, you can offload the administrative burden of operating and scaling a highly available messaging cluster, while paying a low price for only what you use. ...

NSQ alternatives & related posts

RabbitMQ

Open source multiprotocol messaging broker
PROS OF RABBITMQ
  • 229
    It's fast and it works with good metrics/monitoring
  • 79
    Ease of configuration
  • 58
    I like the admin interface
  • 50
    Easy to set-up and start with
  • 20
    Durable
  • 18
    Intuitive work through python
  • 18
    Standard protocols
  • 10
    Written primarily in Erlang
  • 8
    Simply superb
  • 6
    Completeness of messaging patterns
  • 3
    Scales to 1 million messages per second
  • 3
    Reliable
  • 2
    Better than most traditional queue based message broker
  • 2
    Distributed
  • 2
    Supports AMQP
  • 1
    Inubit Integration
  • 1
    Supports MQTT
  • 1
    Runs on Open Telecom Platform
  • 1
    High performance
  • 1
    Reliability
  • 1
    Clusterable
  • 1
    Clear documentation with different scripting language
  • 1
    Great ui
  • 1
    Better routing system
  • 1
    Delayed messages
CONS OF RABBITMQ
  • 9
    Too complicated cluster/HA config and management
  • 6
    Needs Erlang runtime. Need ops good with Erlang runtime
  • 5
    Configuration must be done first, not by your code
  • 4
    Slow
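
As a reference point for the AMQP workflow behind the pros above, here is a minimal sketch using the pika client against a local broker; the queue name and connection details are placeholders:

```python
import pika

# Connect to a local RabbitMQ broker (default AMQP port 5672).
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare a durable queue so messages survive a broker restart.
channel.queue_declare(queue="tasks", durable=True)

# Publish a persistent message to the queue via the default exchange.
channel.basic_publish(
    exchange="",
    routing_key="tasks",
    body=b"hello rabbit",
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)

# Pull one message back and acknowledge it.
method, properties, body = channel.basic_get(queue="tasks", auto_ack=False)
if method:
    print(body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection.close()
```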

related RabbitMQ posts

James Cunningham
Operations Engineer at Sentry · 18 upvotes · 1.3M views
Shared insights on Celery and RabbitMQ at Sentry

As Sentry runs throughout the day, there are about 50 different offline tasks that we execute—anything from “process this event, pretty please” to “send all of these cool people some emails.” There are some that we execute once a day and some that execute thousands per second.

Managing this variety requires a reliably high-throughput message-passing technology. We use Celery's RabbitMQ implementation, and we stumbled upon a great feature called Federation that allows us to partition our task queue across any number of RabbitMQ servers and gives us the confidence that, if any single server gets backlogged, others will pitch in and distribute some of the backlogged tasks to their consumers.

#MessageQueue
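
Celery's RabbitMQ integration mentioned in this post boils down to pointing Celery at an AMQP broker URL. A minimal sketch (the broker address and task are illustrative, not Sentry's actual setup):

```python
from celery import Celery

# Use RabbitMQ as the Celery broker via an AMQP URL (placeholder credentials/host).
app = Celery("tasks", broker="amqp://guest:guest@localhost:5672//")

@app.task
def send_email(user_id):
    # Placeholder task body; in practice this would render and send the email.
    print(f"sending email to user {user_id}")

# Enqueue the task onto RabbitMQ; a worker started with
#   celery -A tasks worker
# will pick it up and execute it.
send_email.delay(42)
```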

Yogesh Bhondekar
Co-Founder at weconnect.chat · 15 upvotes · 94.4K views

Hi, I am building an enhanced web-conferencing app that will have voice/video calls, live chat, live notifications, live discussions, screen sharing, etc. Ref: Zoom.

I need advice finalizing the tech stack for this app. I am considering the tech stack below:

  • Frontend: React
  • Backend: Node.js
  • Database: MongoDB
  • IAAS: #AWS
  • Containers & Orchestration: Docker / Kubernetes
  • DevOps: GitLab, Terraform
  • Brokers: Redis / RabbitMQ

I need advice at the platform level as to what could be considered to support concurrent video streaming seamlessly.

Also, please suggest what could be a better tech stack for my app?

#SAAS #VideoConferencing #WebAndVideoConferencing #zoom #stack

Kafka

Distributed, fault tolerant, high throughput pub-sub messaging system
PROS OF KAFKA
  • 122
    High-throughput
  • 116
    Distributed
  • 87
    Scalable
  • 81
    High-Performance
  • 65
    Durable
  • 36
    Publish-Subscribe
  • 19
    Simple-to-use
  • 15
    Open source
  • 10
Written in Scala and Java; runs on the JVM
  • 6
    Message broker + Streaming system
  • 4
    Avro schema integration
  • 2
Supports multiple clients
  • 2
    Robust
  • 2
    KSQL
  • 2
Partitioned, replayable log
  • 1
    Fun
  • 1
    Extremely good parallelism constructs
  • 1
    Simple publisher / multi-subscriber model
  • 1
    Flexible
CONS OF KAFKA
  • 27
    Non-Java clients are second-class citizens
  • 26
    Needs Zookeeper
  • 7
    Operational difficulties
  • 2
    Terrible Packaging
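
To illustrate the partitioned, replayable pub/sub log described above, here is a minimal kafka-python sketch; the broker address, topic, and consumer group are placeholders:

```python
from kafka import KafkaProducer, KafkaConsumer

# Produce a few messages to a topic (placeholder broker address).
producer = KafkaProducer(bootstrap_servers="localhost:9092")
for i in range(3):
    producer.send("events", key=str(i).encode(), value=f"event-{i}".encode())
producer.flush()

# Consume from the beginning of the log; replaying is just a matter of
# resetting offsets, since the log is retained independently of consumers.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="example-group",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating when no new messages arrive
)
for record in consumer:
    print(record.partition, record.offset, record.value)
```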

related Kafka posts

Eric Colson
Chief Algorithms Officer at Stitch Fix · 21 upvotes · 2M views

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.


#DataScience #DataStack #Data

John Kodumal

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, and we have shifted to Amazon Kinesis instead of Kafka.

Redis

An in-memory database that persists on disk
PROS OF REDIS
  • 875
    Performance
  • 535
    Super fast
  • 511
    Ease of use
  • 441
    In-memory cache
  • 321
    Advanced key-value cache
  • 190
    Open source
  • 179
    Easy to deploy
  • 163
    Stable
  • 153
    Free
  • 120
    Fast
  • 40
    High-Performance
  • 39
    High Availability
  • 34
    Data Structures
  • 32
    Very Scalable
  • 23
    Replication
  • 20
    Great community
  • 19
    Pub/Sub
  • 17
    "NoSQL" key-value data store
  • 14
    Hashes
  • 12
    Sets
  • 10
    Sorted Sets
  • 9
    Lists
  • 8
    BSD licensed
  • 8
    NoSQL
  • 7
    Async replication
  • 7
Integrates super easily with Sidekiq for Rails background jobs
  • 7
    Bitmaps
  • 6
    Open Source
  • 6
    Keys with a limited time-to-live
  • 5
    Strings
  • 5
    Lua scripting
  • 4
    Awesomeness for Free!
  • 4
    Hyperloglogs
  • 3
    outstanding performance
  • 3
    Runs server side LUA
  • 3
    Networked
  • 3
    LRU eviction of keys
  • 3
    Written in ANSI C
  • 3
    Feature Rich
  • 3
    Transactions
  • 2
    Data structure server
  • 2
    Performance & ease of use
  • 1
    Existing Laravel Integration
  • 1
    Automatic failover
  • 1
    Easy to use
  • 1
    Object [key/value] size each 500 MB
  • 1
    Simple
  • 1
    Channels concept
  • 1
    Scalable
  • 1
    Temporarily kept on disk
  • 1
Doesn't save data if no subscribers are found
CONS OF REDIS
  • 14
    Cannot query objects directly
  • 2
    No secondary indexes for non-numeric data types
  • 1
    No WAL
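
As a quick illustration of the data structure server idea behind the pros above, here is a minimal redis-py sketch; the key names are arbitrary:

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Plain string key with a time-to-live.
r.set("greeting", "hello", ex=60)

# Hash: field/value pairs under one key.
r.hset("user:1", mapping={"name": "Ada", "plan": "pro"})

# Set and sorted set.
r.sadd("tags", "python", "cache", "queue")
r.zadd("leaderboard", {"ada": 120, "grace": 95})

print(r.get("greeting"), r.hgetall("user:1"), r.zrange("leaderboard", 0, -1, withscores=True))
```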

related Redis posts

Robert Zuber

We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).

When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
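
The rate-limiting use case mentioned in this post is commonly built on Redis's atomic INCR plus a key TTL. A hypothetical sketch (the limit, window, and key naming are made up for illustration, not the author's actual implementation):

```python
import time

import redis

r = redis.Redis()

def allow_request(partner: str, limit: int = 60, window_seconds: int = 60) -> bool:
    """Fixed-window rate limiter: allow at most `limit` calls per partner per window."""
    window = int(time.time() // window_seconds)
    key = f"ratelimit:{partner}:{window}"
    count = r.incr(key)                # atomic increment; creates the key at 1 if missing
    if count == 1:
        r.expire(key, window_seconds)  # let the counter expire with the window
    return count <= limit

if allow_request("github"):
    pass  # make the outbound API call
```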


I'm working as one of the engineering leads at RunaHR. As our platform is a SaaS, we thought it'd be good to have an API (we chose Ruby on Rails for this) and a SPA (built with React and Redux) connected. We started the SPA with Create React App since it's pretty easy to get started with.

We use Jest as the testing framework and react-testing-library to test React components. In Rails we make tests using RSpec.

Our main database is PostgreSQL, but we also use MongoDB to store some types of data. We started to use Redis for caching and other time-sensitive operations.

We have a couple of extra projects: One is an Employee app built with React Native and the other is an internal back office dashboard built with Next.js for the client and Python in the backend side.

Since we have different frontend apps, we have found it useful to have Bit to document visual components and utils in JavaScript.

NATS

Lightweight publish-subscribe & distributed queueing messaging system
PROS OF NATS
  • 20
    Fastest pub-sub system out there
  • 14
    Rock solid
  • 10
    Easy to grasp
  • 3
    Light-weight
  • 3
    Easy, Fast, Secure
  • 1
    Robust Security Model
CONS OF NATS
  • 1
    No Order
  • 1
    Persistence with Jetstream supported
  • 1
    No Persistence
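
For reference, publish/subscribe with the official nats-py asyncio client looks roughly like this; the server URL and subject are placeholders:

```python
import asyncio

import nats

async def main():
    # Connect to a local NATS server (default port 4222).
    nc = await nats.connect("nats://127.0.0.1:4222")

    async def handler(msg):
        print(f"received on {msg.subject}: {msg.data.decode()}")

    # Subscribe to a subject, then publish a message to it.
    await nc.subscribe("updates", cb=handler)
    await nc.publish("updates", b"hello nats")

    await nc.flush()          # make sure the publish reaches the server
    await asyncio.sleep(0.1)  # give the callback a moment to fire
    await nc.drain()          # unsubscribe cleanly and close

asyncio.run(main())
```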

related NATS posts

Reza Saadat
IoT Solutions Architect at GreenEdge · 5 upvotes · 2.6K views
Shared insights on MQTT and NATS

I want to use NATS for my IoT platform and replace the MQTT broker with it. Is there any added value in doing that?

gRPC

A high performance, open-source universal RPC framework
PROS OF GRPC
  • 19
High performance
  • 10
    The future of API
  • 10
    Easy setup
  • 4
    Contract-based
  • 3
    Polyglot
CONS OF GRPC
No cons have been listed yet.
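
To show what the contract-based approach from the pros above looks like in practice, here is a rough client-side sketch based on gRPC's Python quickstart; it assumes the helloworld_pb2 / helloworld_pb2_grpc stubs have been generated from helloworld.proto with grpcio-tools and that an example server is listening on the default port:

```python
import grpc

# Generated from helloworld.proto (the gRPC Python quickstart example) with:
#   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. helloworld.proto
import helloworld_pb2
import helloworld_pb2_grpc

# Open an insecure channel to the example server and call the Greeter service.
with grpc.insecure_channel("localhost:50051") as channel:
    stub = helloworld_pb2_grpc.GreeterStub(channel)
    response = stub.SayHello(helloworld_pb2.HelloRequest(name="stackshare"))
    print(response.message)
```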

related gRPC posts

Shared insights on gRPC, SignalR, and .NET

We need to interact from several different web applications (remote) with a client-side application (an .exe on .NET Framework, Windows.Console, under our controlled environment). From the web applications, we need to send and receive data and invoke methods on the client-side .exe on JavaScript events such as a user's onclick. SignalR is one of the .NET alternatives for doing that, but it adds overhead for what we need. Is it better to add SignalR to both the client-side application and the remote web application, or to use gRPC, as it sounds lighter and is multilingual?

SignalR or gRPC would always be sending and receiving data on the client side (from browser to .exe and back to browser), and the web application is used for graphical visualization of data to the user. There is no need for the local .exe to send to or interact with a remote web API. Which architecture or framework do you suggest using in this case?

Shared insights on Kafka and gRPC at Uber

By mid-2015, Uber’s rider growth coupled with its cadence of releasing new services, like Eats and Freight, was pressuring the infrastructure. To allow the decoupling of consumption from production, and to add an abstraction layer between users, developers, and infrastructure, Uber built Catalyst, a serverless internal service mesh.

Uber decided to build its own serverless solution, rather than using something like AWS Lambda, for the speed it needed in its global production environments as well as for introspectability.

MQTT

A machine-to-machine Internet of Things connectivity protocol
PROS OF MQTT
  • 2
    Varying levels of Quality of Service to fit a range of use cases
  • 1
    Very easy to configure and use with open source tools
  • 1
    Lightweight with a relatively small data footprint
CONS OF MQTT
  • 1
    Easy to configure in an insecure manner
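
The Quality of Service levels mentioned in the pros are exposed directly by clients such as paho-mqtt. A small sketch against a placeholder broker, using QoS 1 (at-least-once delivery):

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()} (qos={msg.qos})")

# paho-mqtt 1.x style constructor; 2.x additionally expects a CallbackAPIVersion argument.
client = mqtt.Client()
client.on_message = on_message

# Connect to a broker (placeholder host; 1883 is the standard unencrypted port).
client.connect("broker.example.com", 1883)

# Subscribe and publish with QoS 1 (at-least-once delivery).
client.subscribe("sensors/temperature", qos=1)
client.publish("sensors/temperature", payload="21.5", qos=1)

client.loop_forever()
```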

related MQTT posts

ZeroMQ

Fast, lightweight messaging library that allows you to design complex communication system without much effort
PROS OF ZEROMQ
  • 24
    Fast
  • 20
    Lightweight
  • 11
    Transport agnostic
  • 7
    No broker required
  • 4
    Low latency
  • 4
    Low level APIs are in C
  • 1
    Open source
CONS OF ZEROMQ
  • 5
    No message durability
  • 3
    Not a very reliable system - message delivery wise
  • 1
    M x N problem with M producers and N consumers
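
The push/pull pattern discussed in the post below is one of the socket patterns ZeroMQ provides. A minimal pyzmq sketch with both ends in one process; the port number is arbitrary:

```python
import zmq

context = zmq.Context()

# Producer side: PUSH socket binds and distributes work to connected pullers.
sender = context.socket(zmq.PUSH)
sender.bind("tcp://127.0.0.1:5557")

# Worker side: PULL socket connects and receives work items.
receiver = context.socket(zmq.PULL)
receiver.connect("tcp://127.0.0.1:5557")

sender.send_string("work item 1")
print(receiver.recv_string())

# Note: there is no broker and no persistence; if no PULL socket is connected
# (or the process dies), queued messages are held only in memory.
sender.close()
receiver.close()
context.term()
```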

related ZeroMQ posts

Meili Triantafyllidi
Software engineer at Digital Science · 5 upvotes · 168.7K views
Shared insights on Amazon SQS, RabbitMQ, and ZeroMQ

Hi, we have a ZeroMQ setup in a push/pull pattern, and we are starting to have more traffic and cases where the service is unavailable or stuck. We want to:

  • Not lose messages during service outages
  • Safely restart the service without losing messages (ZeroMQ seems to require manually closing the socket in the receiver before a restart)

Do you have experience with this setup with ZeroMQ? Would you suggest RabbitMQ or Amazon SQS (we are on an AWS setup) instead? Something else?

Thank you for your time.

Amazon SQS

Fully managed message queuing service
PROS OF AMAZON SQS
  • 60
    Easy to use, reliable
  • 39
    Low cost
  • 27
    Simple
  • 13
    No need to maintain it
  • 8
    It is Serverless
  • 4
    Has a max message size (currently 256K)
  • 3
    Easy to configure with Terraform
  • 3
    Triggers Lambda
  • 3
    Delayed delivery up to 15 mins only
  • 3
    Delayed delivery up to 12 hours
  • 1
    JMS compliant
  • 1
    Support for retry and dead letter queue
CONS OF AMAZON SQS
  • 2
    Has a max message size (currently 256K)
  • 2
    Proprietary
  • 2
    Difficult to configure
  • 1
    Has a maximum 15 minutes of delayed messages only
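
For reference, the basic send/receive/delete cycle with boto3, including the delayed delivery mentioned above (capped at 900 seconds), looks roughly like this; the queue name and region are placeholders:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # placeholder region

# Create (or look up) a queue and send a message with a short delivery delay.
queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody="hello sqs",
    DelaySeconds=30,  # delayed delivery, up to a maximum of 900 seconds (15 minutes)
)

# Long-poll for messages, then delete them once processed.
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    WaitTimeSeconds=10,
)
for message in response.get("Messages", []):
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```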

related Amazon SQS posts

Praveen Mooli
Engineering Manager at Taylor and Francis · 14 upvotes · 2M views

We are in the process of building a modern content platform to deliver our content through various channels. We decided to go with Microservices architecture as we wanted scale. Microservice architecture style is an approach to developing an application as a suite of small independently deployable services built around specific business capabilities. You can gain modularity, extensive parallelism and cost-effective scaling by deploying services across many distributed servers. Microservices modularity facilitates independent updates/deployments, and helps to avoid single point of failure, which can help prevent large-scale outages. We also decided to use Event Driven Architecture pattern which is a popular distributed asynchronous architecture pattern used to produce highly scalable applications. The event-driven architecture is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events.

To build our #Backend capabilities we decided to use the following:

  1. #Microservices - Java with Spring Boot, Node.js with ExpressJS and Python with Flask
  2. #Eventsourcingframework - Amazon Kinesis, Amazon Kinesis Firehose, Amazon SNS, Amazon SQS, AWS Lambda
  3. #Data - Amazon RDS, Amazon DynamoDB, Amazon S3, MongoDB Atlas

To build #Webapps we decided to use Angular 2 with RxJS

#Devops - GitHub, Travis CI, Terraform, Docker, Serverless

Tim Specht
Co-Founder and CTO at Dubsmash · 14 upvotes · 621.2K views

In order to accurately measure & track user behaviour on our platform, we moved over quickly from the initial solution using Google Analytics to a custom-built one due to resource & pricing concerns we had.

While this does sound complicated, it’s as easy as clients sending JSON blobs of events to Amazon Kinesis from where we use AWS Lambda & Amazon SQS to batch and process incoming events and then ingest them into Google BigQuery. Once events are stored in BigQuery (which usually only takes a second from the time the client sends the data until it’s available), we can use almost-standard-SQL to simply query for data while Google makes sure that, even with terabytes of data being scanned, query times stay in the range of seconds rather than hours. Before ingesting their data into the pipeline, our mobile clients are aggregating events internally and, once a certain threshold is reached or the app is going to the background, sending the events as a JSON blob into the stream.

In the past we had workers running that continuously read from the stream and would validate and post-process the data and then enqueue them for other workers to write them to BigQuery. We went ahead and implemented the Lambda-based approach in such a way that Lambda functions would automatically be triggered for incoming records, pre-aggregate events, and write them back to SQS, from which we then read them, and persist the events to BigQuery. While this approach had a couple of bumps on the road, like re-triggering functions asynchronously to keep up with the stream and proper batch sizes, we finally managed to get it running in a reliable way and are very happy with this solution today.

#ServerlessTaskProcessing #GeneralAnalytics #RealTimeDataProcessing #BigDataAsAService
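
A hypothetical sketch of the Lambda stage described above, a handler triggered by Kinesis that decodes records and forwards them to SQS in batches; this is illustrative only (the queue URL is a placeholder and the pre-aggregation step is elided), not Dubsmash's actual code:

```python
import base64
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/events"  # placeholder

def handler(event, context):
    """Triggered by a Kinesis stream; forwards decoded records to SQS in batches."""
    entries = []
    for i, record in enumerate(event["Records"]):
        payload = base64.b64decode(record["kinesis"]["data"]).decode("utf-8")
        # A real pipeline would pre-aggregate/validate events here before forwarding.
        entries.append({"Id": str(i), "MessageBody": json.dumps({"event": payload})})

    # The SQS batch API accepts at most 10 messages per call.
    for start in range(0, len(entries), 10):
        sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries[start:start + 10])

    return {"forwarded": len(entries)}
```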
