Alternatives to RabbitMQ

Kafka, ActiveMQ, ZeroMQ, Amazon SNS, and Redis are the most popular alternatives and competitors to RabbitMQ.

What is RabbitMQ and what are its top alternatives?

RabbitMQ gives your applications a common platform to send and receive messages, and your messages a safe place to live until received.
RabbitMQ is a tool in the Message Queue category of a tech stack.
RabbitMQ is an open source tool with 6.7K GitHub stars and 2.2K GitHub forks. Here’s a link to RabbitMQ's open source repository on GitHub.

RabbitMQ alternatives & related posts

related Kafka posts

Eric Colson, Chief Algorithms Officer at Stitch Fix · 19 upvotes · 448.5K views

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3-based data warehouse. Apache Spark on YARN is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling YARN clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.
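As an illustration of that decoupling, here is a minimal PySpark sketch of an S3-to-S3 ETL job; the bucket paths and schema are hypothetical, not Stitch Fix's actual layout.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Read a periodic PostgreSQL snapshot that has already landed in the S3 warehouse as Parquet.
orders = spark.read.parquet("s3a://example-warehouse/snapshots/orders/")

# A simple transformation: daily order counts per region.
daily = (
    orders
    .groupBy(F.to_date("created_at").alias("day"), "region")
    .agg(F.count("*").alias("orders"))
)

# Write the result back to S3, partitioned so downstream tools (e.g. Presto) can prune by day.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3a://example-warehouse/marts/daily_orders/"
)
```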

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

#DataScience #DataStack #Data

John Kodumal, CTO at LaunchDarkly · 15 upvotes · 267.1K views

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.
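For context, a minimal sketch of the TimescaleDB side of that setup using psycopg2; the table, columns, and connection string are hypothetical.

```python
import psycopg2

# Hypothetical connection to the self-managed PostgreSQL primary (HA handled by Patroni/Consul).
conn = psycopg2.connect("dbname=metrics user=app host=pg-primary.internal")

with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS flag_evaluations (
            time        TIMESTAMPTZ NOT NULL,
            flag_key    TEXT        NOT NULL,
            evaluations BIGINT      NOT NULL
        );
    """)
    # TimescaleDB turns the plain table into a time-partitioned hypertable for time-series data.
    cur.execute(
        "SELECT create_hypertable('flag_evaluations', 'time', if_not_exists => TRUE);"
    )

conn.close()
```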

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, and we have shifted to Amazon Kinesis instead of Kafka.


related ActiveMQ posts

Naushad Warsi, software developer at Klingelnberg · 1 upvote · 89.1K views

I use ActiveMQ because RabbitMQ has stopped supporting AMQP 1.0 and above, and the earlier versions of AMQP don't provide the functionality to support OAuth.

If OAuth is not required and you can go with AMQP 0.9, then I still recommend RabbitMQ.
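For reference, a minimal sketch of publishing over AMQP 0-9-1 to RabbitMQ with the pika client; the broker address, queue name, and payload are illustrative.

```python
import pika

# Connect to a RabbitMQ broker speaking AMQP 0-9-1 (hypothetical local broker).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare a durable queue so messages survive a broker restart.
channel.queue_declare(queue="orders", durable=True)

channel.basic_publish(
    exchange="",          # default exchange routes directly by queue name
    routing_key="orders",
    body=b"order created",
    properties=pika.BasicProperties(delivery_mode=2),  # mark the message persistent
)

connection.close()
```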

ZeroMQ

Fast, lightweight messaging library that allows you to design complex communication systems without much effort
Amazon SNS

Fully managed push messaging service
Redis

An in-memory database that persists on disk

related Redis posts

Robert Zuber, CTO at CircleCI · 22 upvotes · 388.6K views

We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
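A minimal sketch of that kind of Redis-backed rate limiting with redis-py follows; the key scheme and limits are illustrative, not CircleCI's actual values.

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379)

def allow_request(partner: str, limit: int = 100, window_seconds: int = 60) -> bool:
    """Fixed-window rate limiter: allow up to `limit` calls per partner per window."""
    window = int(time.time() // window_seconds)
    key = f"ratelimit:{partner}:{window}"
    count = r.incr(key)                  # atomically count this request
    if count == 1:
        r.expire(key, window_seconds)    # let the counter expire with its window
    return count <= limit

if allow_request("github"):
    pass  # safe to call the partner API
```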

When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.

Thierry Schellenbach, CEO at Stream · 17 upvotes · 169.5K views

Stream 1.0 leveraged Cassandra for storing the feed. Cassandra is a common choice for building feeds. Instagram, for instance, started out with Redis but eventually switched to Cassandra to handle their rapid usage growth. Cassandra can handle write-heavy workloads very efficiently.

Cassandra is a great tool that allows you to scale write capacity simply by adding more nodes, though it is also very complex. This complexity made it hard to diagnose performance fluctuations. Even though we had years of experience with running Cassandra, it still felt like a bit of a black box. When building Stream 2.0 we decided to go for a different approach and build Keevo. Keevo is our in-house key-value store built upon RocksDB, gRPC and Raft.

RocksDB is a highly performant embeddable database library developed and maintained by Facebook’s data engineering team. RocksDB started as a fork of Google’s LevelDB that introduced several performance improvements for SSD. Nowadays RocksDB is a project on its own and is under active development. It is written in C++ and it’s fast. Have a look at how this benchmark handles 7 million QPS. In terms of technology it’s much simpler than Cassandra.

This translates into reduced maintenance overhead, improved performance and, most importantly, more consistent performance. It’s interesting to note that LinkedIn also uses RocksDB for their feed.

#InMemoryDatabases #DataStores #Databases


related Beanstalkd posts

Frédéric MARAND, Core Developer at OSInet · 2 upvotes · 103.5K views

I used Kafka originally because it was mandated as part of the top-level IT requirements at a Fortune 500 client. What I found was that it was orders of magnitude more complex (and more powerful) than my daily Beanstalkd, and far more flexible, resilient, and manageable than RabbitMQ.

So for any case where utmost flexibility and resilience are part of the deal, I would use Kafka again. But due to the complexities involved, for any time where this level of scalability is not required, I would probably just use Beanstalkd for its simplicity.

I tend to find RabbitMQ to be in an uncomfortable middle place between these two extremities.

gRPC

A high performance, open-source universal RPC framework

related gRPC posts

StackShare Editors, on Uber Technologies · 2 upvotes · 29.3K views

By mid-2015, Uber’s rider growth coupled with its cadence of releasing new services, like Eats and Freight, was pressuring the infrastructure. To allow the decoupling of consumption from production, and to add an abstraction layer between users, developers, and infrastructure, Uber built Catalyst, a serverless internal service mesh.

Uber decided to build its own serverless solution, rather than using something like AWS Lambda, for speed in its global production environments as well as for introspectability.

MQTT

A machine-to-machine Internet of Things connectivity protocol

    related Celery posts

James Cunningham, Operations Engineer at Sentry · 18 upvotes · 217.3K views

    As Sentry runs throughout the day, there are about 50 different offline tasks that we execute—anything from “process this event, pretty please” to “send all of these cool people some emails.” There are some that we execute once a day and some that execute thousands per second.

    Managing this variety requires a reliably high-throughput message-passing technology. We use Celery's RabbitMQ implementation, and we stumbled upon a great feature called Federation that allows us to partition our task queue across any number of RabbitMQ servers and gives us the confidence that, if any single server gets backlogged, others will pitch in and distribute some of the backlogged tasks to their consumers.
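A minimal sketch of a Celery task backed by a RabbitMQ broker is shown below; the broker URL and task body are illustrative, not Sentry's actual configuration.

```python
from celery import Celery

# Hypothetical RabbitMQ broker URL; with Federation, this could be any node in the federated cluster.
app = Celery("offline_tasks", broker="amqp://guest:guest@localhost:5672//")

@app.task(acks_late=True)  # acknowledge only after the work finishes, so a crashed worker's task is redelivered
def process_event(event_id):
    ...  # fetch, normalize, and store the event

# Producers enqueue work without knowing which RabbitMQ server or worker will handle it.
process_event.delay("event-123")
```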

    #MessageQueue

Michael Mota, CEO & Founder at AlterEstate · 4 upvotes · 52.5K views

Automations are what make a CRM powerful. With Celery and RabbitMQ we've been able to build powerful automations that truly work for our clients, such as automatic daily reports, reminders for their activities, important notifications about their clients' activities and actions on the website, and more.

We use Celery for basically everything that needs to be scheduled for the future, and using RabbitMQ as our queue broker is amazing since it fully integrates with Django and Celery, storing the results of completed tasks in our database so we can immediately see if anything fails.
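A minimal sketch of how such a scheduled automation might look with Celery beat, a RabbitMQ broker, and the django-celery-results backend; the task name and schedule are hypothetical.

```python
from celery import Celery
from celery.schedules import crontab

app = Celery("crm", broker="amqp://guest:guest@localhost:5672//")
app.conf.result_backend = "django-db"  # django-celery-results stores task outcomes in the database

@app.task
def send_daily_report(agent_id):
    ...  # build and email the agent's daily activity report

# Celery beat enqueues the task every morning; workers pick it up from RabbitMQ.
app.conf.beat_schedule = {
    "daily-reports": {
        "task": "tasks.send_daily_report",  # hypothetical module path
        "schedule": crontab(hour=7, minute=0),
        "args": ("agent-42",),
    },
}
```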

MSMQ

A technology for asynchronous messaging

      related Amazon SQS posts

Tim Specht, Co-Founder and CTO at Dubsmash · 14 upvotes · 135.6K views

In order to accurately measure and track user behaviour on our platform, we quickly moved from the initial Google Analytics solution to a custom-built one due to resource and pricing concerns.

      While this does sound complicated, it’s as easy as clients sending JSON blobs of events to Amazon Kinesis from where we use AWS Lambda & Amazon SQS to batch and process incoming events and then ingest them into Google BigQuery. Once events are stored in BigQuery (which usually only takes a second from the time the client sends the data until it’s available), we can use almost-standard-SQL to simply query for data while Google makes sure that, even with terabytes of data being scanned, query times stay in the range of seconds rather than hours. Before ingesting their data into the pipeline, our mobile clients are aggregating events internally and, once a certain threshold is reached or the app is going to the background, sending the events as a JSON blob into the stream.

      In the past we had workers running that continuously read from the stream and would validate and post-process the data and then enqueue them for other workers to write them to BigQuery. We went ahead and implemented the Lambda-based approach in such a way that Lambda functions would automatically be triggered for incoming records, pre-aggregate events, and write them back to SQS, from which we then read them, and persist the events to BigQuery. While this approach had a couple of bumps on the road, like re-triggering functions asynchronously to keep up with the stream and proper batch sizes, we finally managed to get it running in a reliable way and are very happy with this solution today.
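A minimal sketch of that Lambda-based step, triggered by Kinesis and writing a pre-aggregated batch to SQS with boto3; the queue URL, event shape, and aggregation logic are illustrative.

```python
import base64
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/events-to-bigquery"  # hypothetical

def handler(event, context):
    # Kinesis delivers records base64-encoded; decode and parse the JSON event blobs.
    events = [
        json.loads(base64.b64decode(record["kinesis"]["data"]))
        for record in event["Records"]
    ]

    # Pre-aggregate: count events per type before handing the batch to the next stage.
    counts = {}
    for e in events:
        event_type = e.get("type", "unknown")
        counts[event_type] = counts.get(event_type, 0) + 1

    # Write the aggregated batch to SQS, from which a downstream step persists it to BigQuery.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(counts))
```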

      #ServerlessTaskProcessing #GeneralAnalytics #RealTimeDataProcessing #BigDataAsAService

Praveen Mooli, Engineering Manager at Taylor and Francis · 12 upvotes · 343.9K views

      We are in the process of building a modern content platform to deliver our content through various channels. We decided to go with Microservices architecture as we wanted scale. Microservice architecture style is an approach to developing an application as a suite of small independently deployable services built around specific business capabilities. You can gain modularity, extensive parallelism and cost-effective scaling by deploying services across many distributed servers. Microservices modularity facilitates independent updates/deployments, and helps to avoid single point of failure, which can help prevent large-scale outages. We also decided to use Event Driven Architecture pattern which is a popular distributed asynchronous architecture pattern used to produce highly scalable applications. The event-driven architecture is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events.

To build our #Backend capabilities we decided to use the following:

1. #Microservices - Java with Spring Boot, Node.js with ExpressJS, and Python with Flask
2. #Eventsourcingframework - Amazon Kinesis, Amazon Kinesis Firehose, Amazon SNS, Amazon SQS, AWS Lambda
3. #Data - Amazon RDS, Amazon DynamoDB, Amazon S3, MongoDB Atlas
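As a small illustration of the event-driven microservice style described above, here is a sketch of a Flask endpoint that accepts a request and publishes an event to Amazon SNS; the topic ARN, route, and payload are hypothetical.

```python
import json

import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)
sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:content-events"  # hypothetical topic

@app.route("/articles", methods=["POST"])
def create_article():
    article = request.get_json()
    # ...persist the article to this service's own datastore (e.g. Amazon RDS or DynamoDB)...

    # Publish a decoupled event so other components (SQS subscriptions, Lambda, etc.) can react.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps({"event": "ArticleCreated", "id": article.get("id")}),
    )
    return jsonify(status="accepted"), 202

if __name__ == "__main__":
    app.run()
```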

To build #Webapps we decided to use Angular 2 with RxJS.

#Devops - GitHub, Travis CI, Terraform, Docker, Serverless

WCF

A runtime and a set of APIs for building connected, service-oriented applications