What is Beanstalkd and what are its top alternatives?
Top Alternatives to Beanstalkd
- RabbitMQ
RabbitMQ gives your applications a common platform to send and receive messages, and your messages a safe place to live until received. ...
- Redis
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams. ...
- Resque
Background jobs can be any Ruby class or module that responds to perform. Your existing classes can easily be converted to background jobs or you can create new classes specifically to do work. Or, you can do both. ...
- Kafka
Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...
- Gearman
Gearman allows you to do work in parallel, to load balance processing, and to call functions between languages. It can be used in a variety of applications, from high-availability web sites to the transport of database replication events. ...
- Celery
Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well. ...
- ZeroMQ
The 0MQ lightweight messaging kernel is a library which extends the standard socket interfaces with features traditionally provided by specialised messaging middleware products. 0MQ sockets provide an abstraction of asynchronous message queues, multiple messaging patterns, message filtering (subscriptions), seamless access to multiple transport protocols and more. ...
- Sidekiq
Sidekiq uses threads to handle many jobs at the same time in the same process. It does not require Rails but will integrate tightly with Rails 3/4 to make background processing dead simple. ...
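For comparison with the alternatives listed above, here is a minimal sketch of how Beanstalkd itself is typically used from Python. It assumes the third-party greenstalk client (`pip install greenstalk`) and a beanstalkd server on the default port; the tube name and job body are illustrative.

```python
# Minimal Beanstalkd producer/consumer sketch, assuming the third-party
# "greenstalk" client and a beanstalkd server on localhost:11300.
import greenstalk

# Producer: put a delayed job on a named tube (queue).
producer = greenstalk.Client(('127.0.0.1', 11300))
producer.use('emails')                           # tube to write to
producer.put('send-welcome-email:42', delay=5)   # job becomes ready after 5 seconds
producer.close()

# Consumer: reserve a job, do the work, then delete (acknowledge) it.
worker = greenstalk.Client(('127.0.0.1', 11300))
worker.watch('emails')                           # tubes to reserve jobs from
job = worker.reserve()                           # blocks until a job is ready
print('processing', job.body)
worker.delete(job)                               # remove the job once done
worker.close()
```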
Beanstalkd alternatives & related posts
RabbitMQ
Pros of RabbitMQ
- It's fast and it works with good metrics/monitoring
- Ease of configuration
- I like the admin interface
- Easy to set up and start with
- Durable
- Intuitive to work with through Python
- Standard protocols
- Written primarily in Erlang
- Simply superb
- Completeness of messaging patterns
- Scales to 1 million messages per second
- Reliable
- Distributed
- Supports AMQP
- Better than most traditional queue-based message brokers
- Inubit integration
- Delayed messages
- Supports MQTT
- Runs on the Open Telecom Platform
- High performance
- Reliability
- Clusterable
- Clear documentation with examples for different scripting languages
- Great UI
- Better routing system
Cons of RabbitMQ
- Cluster/HA configuration and management are too complicated
- Needs the Erlang runtime, and ops staff comfortable with it
- Configuration must be done up front, not from your code
- Slow
related RabbitMQ posts
As Sentry runs throughout the day, there are about 50 different offline tasks that we execute—anything from “process this event, pretty please” to “send all of these cool people some emails.” There are some that we execute once a day and some that execute thousands per second.
Managing this variety requires a reliably high-throughput message-passing technology. We use Celery's RabbitMQ implementation, and we stumbled upon a great feature called Federation that allows us to partition our task queue across any number of RabbitMQ servers and gives us the confidence that, if any single server gets backlogged, others will pitch in and distribute some of the backlogged tasks to their consumers.
#MessageQueue
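As a rough illustration of the Celery-on-RabbitMQ pattern described above, here is a minimal sketch assuming a local RabbitMQ broker with default credentials; the module and task names are hypothetical.

```python
# tasks.py -- minimal Celery app pointed at a local RabbitMQ broker (illustrative names).
from celery import Celery

app = Celery('tasks', broker='amqp://guest:guest@localhost:5672//')

@app.task
def send_email(user_id):
    # Placeholder for the real work; runs on whichever worker picks up the message.
    print(f'sending email to user {user_id}')

# Enqueueing from application code:
#   send_email.delay(42)
# Running a worker:
#   celery -A tasks worker --loglevel=info
```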
Hi, I am building an enhanced web-conferencing app that will have voice/video calls, live chat, live notifications, live discussions, screen sharing, and similar features (reference: Zoom).
I need advice on finalizing the tech stack for this app. I am considering the stack below:
- Frontend: React
- Backend: Node.js
- Database: MongoDB
- IAAS: #AWS
- Containers & Orchestration: Docker / Kubernetes
- DevOps: GitLab, Terraform
- Brokers: Redis / RabbitMQ
I need advice at the platform level on what should be considered to support concurrent video streaming seamlessly.
Also, please suggest a better tech stack for my app if you see one.
#SAAS #VideoConferencing #WebAndVideoConferencing #zoom #stack
Redis
Pros of Redis
- Performance
- Super fast
- Ease of use
- In-memory cache
- Advanced key-value cache
- Open source
- Easy to deploy
- Stable
- Free
- Fast
- High performance
- High availability
- Data structures
- Very scalable
- Replication
- Pub/Sub
- Great community
- "NoSQL" key-value data store
- Hashes
- Sets
- Sorted sets
- Lists
- BSD licensed
- NoSQL
- Integrates easily with Sidekiq for Rails background jobs
- Async replication
- Bitmaps
- Keys with a limited time-to-live
- Open source
- Strings
- Lua scripting
- HyperLogLogs
- Awesomeness for free!
- Transactions
- Runs server-side Lua
- Outstanding performance
- Networked
- LRU eviction of keys
- Written in ANSI C
- Feature rich
- Performance & ease of use
- Data structure server
- Simple
- Channels concept
- Scalable
- Temporarily kept on disk
- Doesn't save data if no subscribers are found
- Automatic failover
- Easy to use
- Existing Laravel integration
- Object [key/value] size of up to 500 MB each
Cons of Redis
- Cannot query objects directly
- No secondary indexes for non-numeric data types
- No WAL
related Redis posts
We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.
As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
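The Redis rate-limiting use case mentioned above can be sketched with a simple fixed-window counter. This is a minimal sketch using the redis-py client; the key layout, partner name, and limits are made up for illustration.

```python
# Fixed-window rate limiting sketch with redis-py; key names and limits are illustrative.
import redis

r = redis.Redis(host='localhost', port=6379)

def allow_request(partner: str, limit: int = 100, window_seconds: int = 60) -> bool:
    key = f'ratelimit:{partner}'
    count = r.incr(key)                # atomically count this request
    if count == 1:
        r.expire(key, window_seconds)  # start the window on the first hit
    return count <= limit

if allow_request('github'):
    pass  # make the partner API call
```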
I'm working as one of the engineering leads at RunaHR. As our platform is a SaaS, we thought it'd be good to have an API (we chose Ruby and Rails for this) connected to a SPA (built with React and Redux). We started the SPA with Create React App since it's pretty easy to get started with.
We use Jest as the testing framework and react-testing-library to test React components. In Rails, we write tests using RSpec.
Our main database is PostgreSQL, but we also use MongoDB to store some types of data. We started to use Redis for caching and other time-sensitive operations.
We have a couple of extra projects: one is an employee app built with React Native, and the other is an internal back-office dashboard built with Next.js on the client and Python on the backend.
Since we have different frontend apps, we have found it useful to have Bit to document visual components and utilities in JavaScript.
Resque
Pros of Resque
- Free
- Scalable
- Easy to use on Heroku
related Resque posts
Kafka
Pros of Kafka
- High throughput
- Distributed
- Scalable
- High performance
- Durable
- Publish-subscribe
- Simple to use
- Open source
- Written in Scala and Java; runs on the JVM
- Message broker + streaming system
- Avro schema integration
- Robust
- KSQL
- Supports multiple clients
- Partitioned, replayable log
- Flexible
- Extremely good parallelism constructs
- Simple publisher / multi-subscriber model
- Fun
Cons of Kafka
- Non-Java clients are second-class citizens
- Needs ZooKeeper
- Operational difficulties
- Terrible packaging
related Kafka posts
The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3-based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.
Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).
At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into our systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize the models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.
For more info:
- Our Algorithms Tour: https://algorithms-tour.stitchfix.com/
- Our blog: https://multithreaded.stitchfix.com/blog/
- Careers: https://multithreaded.stitchfix.com/careers/
#DataScience #DataStack #Data
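For a concrete picture of the "events flowing through Kafka" pattern described above, here is a minimal producer/consumer sketch. It assumes the third-party kafka-python package and a broker on localhost:9092; the topic, group id, and payload are illustrative.

```python
# Sketch using the third-party kafka-python package; broker address, topic,
# consumer group, and payload are assumptions for illustration.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('events', b'{"type": "click", "item_id": 123}')  # asynchronous send
producer.flush()  # block until buffered records have been delivered

consumer = KafkaConsumer('events',
                         bootstrap_servers='localhost:9092',
                         group_id='analytics',
                         auto_offset_reset='earliest')
for message in consumer:
    print(message.topic, message.offset, message.value)
```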
As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.
We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, and we have shifted to Amazon Kinesis instead of Kafka.
Gearman
Pros of Gearman
- Ease of use and very simple APIs
- Free
- Polyglot
- No single point of failure
- Scalable
- High throughput
- Foreground & background processing
- Very fast
- Works as a channel between different programming languages
- Many supported programming languages
related Gearman posts
Celery
Pros of Celery
- Task queue
- Python integration
- Django integration
- Scheduled tasks
- Publish/subscribe
- Various backend brokers
- Easy to use
- Great community
- Workflow
- Free
- Dynamic
Cons of Celery
- Sometimes loses tasks
- Depends on broker
related Celery posts
Hi! I am creating a scraping system in Django, which involves long-running tasks that take between 1 minute and 1 day. As I am new to message brokers and task queues, I need advice on which architecture to use for my system (Amazon SQS, RabbitMQ, or Celery). The system should be autoscalable using Kubernetes (K8s) based on the number of pending tasks in the queue.
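For the long-running-task part of that question, here is a minimal sketch of a Celery task configured with time limits suited to day-long jobs. The broker URL, task name, and limits are illustrative assumptions, not a recommendation for the whole architecture.

```python
# Sketch: a long-running scrape task in Celery; broker URL, names, and limits are illustrative.
from celery import Celery
from celery.exceptions import SoftTimeLimitExceeded

app = Celery('scraper', broker='amqp://guest:guest@localhost:5672//')

@app.task(acks_late=True,             # re-deliver the message if the worker dies mid-task
          soft_time_limit=23 * 3600,  # raises SoftTimeLimitExceeded inside the task first
          time_limit=24 * 3600)       # hard upper bound of one day
def scrape_site(url):
    try:
        pass  # long-running scraping work goes here
    except SoftTimeLimitExceeded:
        pass  # persist partial progress before the hard limit kills the task
```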
ZeroMQ
Pros of ZeroMQ
- Fast
- Lightweight
- Transport agnostic
- No broker required
- Low-level APIs are in C
- Low latency
- Open source
- Publish-subscribe
Cons of ZeroMQ
- No message durability
- Not very reliable in terms of message delivery
- M x N problem with M producers and N consumers
related ZeroMQ posts
Hi, we have a ZeroMQ setup in a push/pull pattern, and we are starting to see more traffic and cases where the service is unavailable or stuck. We want to:
- Not lose messages during service outages
- Safely restart the service without losing messages (ZeroMQ seems to require manually closing the socket on the receiver before a restart)
Do you have experience with this ZeroMQ setup? Would you suggest RabbitMQ or Amazon SQS (we are on AWS) instead? Something else?
Thank you for your time
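For reference, the push/pull pattern discussed above looks roughly like the following sketch with pyzmq; the port and payload are assumptions. It also illustrates the durability con from the list: queued messages live only in socket buffers, so they are lost if a peer dies.

```python
# Sketch of the PUSH/PULL pattern with pyzmq; port and payloads are illustrative.
# Note: plain PUSH/PULL buffers only in memory -- pending messages vanish if a peer dies.
import zmq

context = zmq.Context()

# Worker side: connect a PULL socket (normally in a separate process).
pull = context.socket(zmq.PULL)
pull.connect('tcp://localhost:5557')

# Producer side: bind a PUSH socket and send a job.
push = context.socket(zmq.PUSH)
push.bind('tcp://*:5557')
push.send_string('job-1')          # blocks until a PULL peer is connected

job = pull.recv_string()           # blocks until a message arrives
print('got', job)
```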
Sidekiq
Pros of Sidekiq
- Simple
- Efficient background processing
- Scalability
- Better than Resque
- Great documentation
- Admin tool
- Great community
- Integrates with Redis automatically, with zero config
- Great support
- Stupidly simple to integrate and run on Rails/Heroku
- Freemium
- Ruby
- Pro version
- Fast
- Dashboard w/ live polling
- Great ecosystem of add-ons
related Sidekiq posts
We decided to use AWS Lambda for several serverless tasks such as
- Managing AWS backups
- Processing emails received by Amazon SES, stored to Amazon S3, and notified via Amazon SNS, which pushes a message onto our Redis instance so our Sidekiq Rails workers can process inbound emails
- Pushing some relevant Amazon CloudWatch metrics and alarms to Slack
I'm building a new process management tool. I decided to build with Rails as my backend, using Sidekiq for background jobs. I chose to work with these tools because I've worked with them before and know that they're able to get the job done. They may not be the sexiest tools, but they work and are reliable, which is what I was optimizing for. For data stores, I opted for PostgreSQL and Redis. Because I'm planning on offering dashboards, I wanted a SQL database instead of something like MongoDB that might work early on but would be difficult to use as soon as I want to facilitate aggregate queries.
On the front-end I'm using Vue.js and Vuex in combination with #Turbolinks. In effect, I want to render most pages on the server side, with key interactions managed by Vue.js. This is the first project I'm working on where I've explicitly decided not to include jQuery. I have found React and Redux.js more confusing to set up. I appreciate the opinionated approach from the Vue.js community and that things just work together the way that I'd expect. To manage my JavaScript dependencies, I'm using Yarn.
For CSS frameworks, I'm using #Bulma.io. I really appreciate its minimal nature and that there are no hard JavaScript dependencies. And to add a little spice, I'm using #font-awesome.