Alternatives to Beanstalkd

RabbitMQ, Redis, Resque, Kafka, and Gearman are the most popular alternatives and competitors to Beanstalkd.

What is Beanstalkd and what are its top alternatives?

Beanstalkd's interface is generic, but it was originally designed to reduce the latency of page views in high-volume web applications by running time-consuming tasks asynchronously.
Beanstalkd is a tool in the Background Processing category of a tech stack.
Beanstalkd is an open source tool; its source code is hosted in Beanstalkd's repository on GitHub.
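
To make the queueing model concrete, here is a minimal producer/worker sketch in Python. It assumes a beanstalkd server on the default port (11300) and the third-party greenstalk client; the client choice and the job payload are illustrative assumptions rather than anything Beanstalkd itself prescribes.

```python
# Sketch only: assumes `pip install greenstalk` and a beanstalkd server on localhost:11300.
import greenstalk

# Producer (e.g. a web request handler): enqueue the slow work instead of doing it inline.
producer = greenstalk.Client(('127.0.0.1', 11300))
producer.put('{"task": "send_welcome_email", "user_id": 42}')
producer.close()

# Worker (a separate process): reserve a job, do the work, then delete it.
worker = greenstalk.Client(('127.0.0.1', 11300))
job = worker.reserve()          # blocks until a job is ready
print('processing', job.body)   # ...perform the actual task here...
worker.delete(job)              # acknowledge so the job is not retried
worker.close()
```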

Top Alternatives to Beanstalkd

  • RabbitMQ

    RabbitMQ gives your applications a common platform to send and receive messages, and your messages a safe place to live until received. ...

  • Redis

    Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams. ...

  • Resque

    Background jobs can be any Ruby class or module that responds to perform. Your existing classes can easily be converted to background jobs or you can create new classes specifically to do work. Or, you can do both. ...

  • Kafka

    Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...

  • Gearman

    Gearman allows you to do work in parallel, to load balance processing, and to call functions between languages. It can be used in a variety of applications, from high-availability web sites to the transport of database replication events. ...

  • Celery

    Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well. ...

  • ZeroMQ

    The 0MQ lightweight messaging kernel is a library which extends the standard socket interfaces with features traditionally provided by specialised messaging middleware products. 0MQ sockets provide an abstraction of asynchronous message queues, multiple messaging patterns, message filtering (subscriptions), seamless access to multiple transport protocols and more. ...

  • Sidekiq

    Sidekiq uses threads to handle many jobs at the same time in the same process. It does not require Rails but will integrate tightly with Rails 3/4 to make background processing dead simple. ...

Beanstalkd alternatives & related posts

RabbitMQ

Open source multiprotocol messaging broker
PROS OF RABBITMQ
  • 234
    It's fast and it works with good metrics/monitoring
  • 80
    Ease of configuration
  • 59
    I like the admin interface
  • 50
    Easy to set-up and start with
  • 21
    Durable
  • 18
    Standard protocols
  • 18
    Intuitive work through python
  • 10
    Written primarily in Erlang
  • 8
    Simply superb
  • 6
    Completeness of messaging patterns
  • 3
    Scales to 1 million messages per second
  • 3
    Reliable
  • 2
    Distributed
  • 2
    Supports AMQP
  • 2
    Better than most traditional queue-based message brokers
  • 2
    Supports MQTT
  • 1
    Clusterable
  • 1
    Clear documentation with different scripting language
  • 1
    Great ui
  • 1
    Inubit Integration
  • 1
    Better routing system
  • 1
    High performance
  • 1
    Runs on Open Telecom Platform
  • 1
    Delayed messages
  • 1
    Reliability
  • 1
    Open-source
CONS OF RABBITMQ
  • 9
    Too complicated cluster/HA config and management
  • 6
    Needs the Erlang runtime, and ops who know Erlang well
  • 5
    Configuration must be done first, not by your code
  • 4
    Slow
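
To make the "common platform to send and receive messages" idea concrete, here is a minimal publish/consume sketch in Python using the pika client. The queue name, message body and connection details are illustrative assumptions; this is a generic AMQP example, not a prescribed RabbitMQ setup.

```python
# Sketch only: assumes `pip install pika` and a RabbitMQ broker on localhost:5672.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='tasks', durable=True)  # queue survives broker restarts

# Producer: publish a persistent message to the queue.
channel.basic_publish(
    exchange='',
    routing_key='tasks',
    body='resize-image:42',
    properties=pika.BasicProperties(delivery_mode=2),  # mark the message persistent
)

# Consumer: acknowledge each message only after it has been handled.
def handle(ch, method, properties, body):
    print('received', body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='tasks', on_message_callback=handle)
channel.start_consuming()
```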

related RabbitMQ posts

James Cunningham
Operations Engineer at Sentry · 18 upvotes · 1.6M views
Shared insights on Celery and RabbitMQ

As Sentry runs throughout the day, there are about 50 different offline tasks that we execute—anything from “process this event, pretty please” to “send all of these cool people some emails.” There are some that we execute once a day and some that execute thousands per second.

Managing this variety requires a reliably high-throughput message-passing technology. We use Celery's RabbitMQ implementation, and we stumbled upon a great feature called Federation that allows us to partition our task queue across any number of RabbitMQ servers and gives us the confidence that, if any single server gets backlogged, others will pitch in and distribute some of the backlogged tasks to their consumers.

#MessageQueue

Yogesh Bhondekar
Product Manager | SaaS | Traveller · 15 upvotes · 409.4K views

Hi, I am building an enhanced web-conferencing app that will have voice/video calls, live chat, live notifications, live discussions, screen sharing, and similar features (ref: Zoom).

I need advice on finalizing the tech stack for this app. I am considering the stack below:

  • Frontend: React
  • Backend: Node.js
  • Database: MongoDB
  • IAAS: #AWS
  • Containers & Orchestration: Docker / Kubernetes
  • DevOps: GitLab, Terraform
  • Brokers: Redis / RabbitMQ

I need advice at the platform level as to what could be considered to support concurrent video streaming seamlessly.

Also, what would be a better tech stack for my app?

#SAAS #VideoConferencing #WebAndVideoConferencing #zoom #stack

Redis

Open source (BSD licensed), in-memory data structure store
PROS OF REDIS
  • 886
    Performance
  • 542
    Super fast
  • 513
    Ease of use
  • 444
    In-memory cache
  • 324
    Advanced key-value cache
  • 194
    Open source
  • 182
    Easy to deploy
  • 164
    Stable
  • 155
    Free
  • 121
    Fast
  • 42
    High-Performance
  • 40
    High Availability
  • 35
    Data Structures
  • 32
    Very Scalable
  • 24
    Replication
  • 22
    Great community
  • 22
    Pub/Sub
  • 19
    "NoSQL" key-value data store
  • 16
    Hashes
  • 13
    Sets
  • 11
    Sorted Sets
  • 10
    NoSQL
  • 10
    Lists
  • 9
    Async replication
  • 9
    BSD licensed
  • 8
    Bitmaps
  • 8
    Integrates super easily with Sidekiq for Rails background jobs
  • 7
    Keys with a limited time-to-live
  • 7
    Open Source
  • 6
    Lua scripting
  • 6
    Strings
  • 5
    Awesomeness for Free
  • 5
    Hyperloglogs
  • 4
    Transactions
  • 4
    Outstanding performance
  • 4
    Runs server side LUA
  • 4
    LRU eviction of keys
  • 4
    Feature Rich
  • 4
    Written in ANSI C
  • 4
    Networked
  • 3
    Data structure server
  • 3
    Performance & ease of use
  • 2
    Don't save data if no subscribers are found
  • 2
    Automatic failover
  • 2
    Easy to use
  • 2
    Temporarily kept on disk
  • 2
    Scalable
  • 2
    Existing Laravel Integration
  • 2
    Channels concept
  • 2
    Object [key/value] size each 500 MB
  • 2
    Simple
CONS OF REDIS
  • 15
    Cannot query objects directly
  • 3
    No secondary indexes for non-numeric data types
  • 1
    No WAL

related Redis posts

Robert Zuber

We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).

When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
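
As a rough illustration of the rate-limiting use mentioned above, here is a fixed-window counter sketch in Python with the redis-py client. The key scheme, the limit, and the partner-API framing are assumptions for the example, not a description of the author's actual implementation.

```python
# Sketch only: assumes `pip install redis` and a Redis server on localhost:6379.
import redis

r = redis.Redis(host='localhost', port=6379)

def allow_request(api_name: str, limit: int = 100, window_seconds: int = 60) -> bool:
    """Fixed-window rate limiter: allow at most `limit` calls per window per API."""
    key = f'ratelimit:{api_name}'
    count = r.incr(key)                 # atomically increment the call counter
    if count == 1:
        r.expire(key, window_seconds)   # first hit in the window starts the TTL
    return count <= limit

if allow_request('github'):
    pass  # call the partner API
else:
    pass  # back off, or queue the request for later
```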


I'm working as one of the engineering leads at RunaHR. As our platform is a SaaS, we thought it'd be good to have an API (we chose Ruby and Rails for this) and an SPA (built with React and Redux) connected. We started the SPA with Create React App since it's pretty easy to get started with.

We use Jest as the testing framework and react-testing-library to test React components. In Rails we make tests using RSpec.

Our main database is PostgreSQL, but we also use MongoDB to store some types of data. We started to use Redis for caching and other time-sensitive operations.

We have a couple of extra projects: one is an Employee app built with React Native, and the other is an internal back-office dashboard built with Next.js on the client and Python on the backend.

Since we have different frontend apps, we have found it useful to use Bit to document visual components and utils in JavaScript.

Resque

A Redis-backed Ruby library for creating background jobs, placing them on multiple queues, and processing them later
PROS OF RESQUE
  • 5
    Free
  • 3
    Scalable
  • 1
    Easy to use on heroku
CONS OF RESQUE
    No cons listed yet.

    related Resque posts

    Kafka

    Distributed, fault tolerant, high throughput pub-sub messaging system
    PROS OF KAFKA
    • 126
      High-throughput
    • 119
      Distributed
    • 92
      Scalable
    • 86
      High-Performance
    • 66
      Durable
    • 38
      Publish-Subscribe
    • 19
      Simple-to-use
    • 18
      Open source
    • 12
      Written in Scala and Java; runs on the JVM
    • 9
      Message broker + Streaming system
    • 4
      Avro schema integration
    • 4
      Robust
    • 4
      KSQL
    • 3
      Supports multiple clients
    • 2
      Partitioned, replayable log
    • 1
      Flexible
    • 1
      Extremely good parallelism constructs
    • 1
      Fun
    • 1
      Simple publisher / multi-subscriber model
    CONS OF KAFKA
    • 32
      Non-Java clients are second-class citizens
    • 29
      Needs Zookeeper
    • 9
      Operational difficulties
    • 5
      Terrible Packaging
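
For a concrete feel of the pub-sub model described above, here is a minimal produce/consume sketch in Python using the kafka-python client. The broker address, topic name, and consumer group are illustrative assumptions.

```python
# Sketch only: assumes `pip install kafka-python` and a Kafka broker on localhost:9092.
from kafka import KafkaProducer, KafkaConsumer

# Producer: append events to a topic (a partitioned, replicated log).
producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('page-views', b'{"user": 42, "path": "/pricing"}')
producer.flush()

# Consumer: read the log as part of a consumer group, starting from the beginning.
consumer = KafkaConsumer(
    'page-views',
    bootstrap_servers='localhost:9092',
    group_id='analytics',
    auto_offset_reset='earliest',
)
for message in consumer:
    print(message.offset, message.value)
```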

    related Kafka posts

    Eric Colson
    Chief Algorithms Officer at Stitch Fix · 21 upvotes · 5.7M views

    The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

    Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

    At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.


    #DataScience #DataStack #Data

    John Kodumal

    As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

    We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.

    Gearman

    A generic application framework to farm out work to other machines or processes
    PROS OF GEARMAN
    • 11
      Ease of use and very simple APIs
    • 11
      Free
    • 6
      Polyglot
    • 5
      No single point of failure
    • 3
      Scalable
    • 3
      High-throughput
    • 2
      Foreground & background processing
    • 2
      Very fast
    • 1
      Different Programming Languages Channel
    • 1
      Many supported programming languages
    CONS OF GEARMAN
      No cons listed yet.
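
As a rough sketch of the "call functions between languages" model, below is a worker/client pair in Python using the python-gearman package. The package choice, its API, the function name and the server address are all assumptions for illustration; treat this as an approximation rather than a reference.

```python
# Sketch only: assumes the python-gearman package and a gearmand server on localhost:4730.
import gearman

# --- worker process: register a named function that clients in any language can call ---
def run_worker():
    def reverse(worker, job):
        return job.data[::-1]

    worker = gearman.GearmanWorker(['localhost:4730'])
    worker.register_task('reverse', reverse)
    worker.work()  # blocks, pulling 'reverse' jobs from the server

# --- client process (could just as well be PHP, Java, ...): submit a job by name ---
def run_client():
    client = gearman.GearmanClient(['localhost:4730'])
    result = client.submit_job('reverse', 'hello gearman')
    print(result.result)  # -> 'namraeg olleh'
```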

      related Gearman posts

      Celery

      Distributed task queue
      PROS OF CELERY
      • 98
        Task queue
      • 63
        Python integration
      • 40
        Django integration
      • 30
        Scheduled Task
      • 19
        Publish/subscribe
      • 8
        Various backend broker
      • 6
        Easy to use
      • 5
        Great community
      • 5
        Workflow
      • 4
        Free
      • 1
        Dynamic
      CONS OF CELERY
      • 4
        Sometimes loses tasks
      • 1
        Depends on broker
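
A minimal sketch of defining and enqueuing a Celery task, with RabbitMQ as the broker as in the posts below. The module name, broker URL and task body are illustrative assumptions.

```python
# tasks.py (sketch only): assumes `pip install celery` and a RabbitMQ broker on localhost.
from celery import Celery

app = Celery('tasks', broker='amqp://guest:guest@localhost//')

@app.task
def send_welcome_email(user_id):
    # ...render and send the email here...
    return f'sent to {user_id}'

# From application code, enqueue the work instead of running it inline:
#   send_welcome_email.delay(42)
# A worker started with `celery -A tasks worker` picks the task up and executes it.
```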

      related Celery posts

      James Cunningham
      Operations Engineer at Sentry · 18 upvotes · 1.6M views
      Shared insights on Celery and RabbitMQ

      As Sentry runs throughout the day, there are about 50 different offline tasks that we execute—anything from “process this event, pretty please” to “send all of these cool people some emails.” There are some that we execute once a day and some that execute thousands per second.

      Managing this variety requires a reliably high-throughput message-passing technology. We use Celery's RabbitMQ implementation, and we stumbled upon a great feature called Federation that allows us to partition our task queue across any number of RabbitMQ servers and gives us the confidence that, if any single server gets backlogged, others will pitch in and distribute some of the backlogged tasks to their consumers.

      #MessageQueue

      Michael Mota

      Automations are what make a CRM powerful. With Celery and RabbitMQ we've been able to build powerful automations that truly work for our clients: for example, automatic daily reports, reminders for their activities, and important notifications regarding their clients' activities and actions on the website.

      We use Celery basically for everything that needs to be scheduled for the future, and using RabbitMQ as our queue broker is amazing since it fully integrates with Django and Celery, storing the results of completed tasks in our database so we can see immediately if anything fails.

      ZeroMQ

      Fast, lightweight messaging library that allows you to design complex communication systems without much effort
      PROS OF ZEROMQ
      • 23
        Fast
      • 20
        Lightweight
      • 11
        Transport agnostic
      • 7
        No broker required
      • 4
        Low level APIs are in C
      • 4
        Low latency
      • 1
        Open source
      • 1
        Publish-Subscribe
      CONS OF ZEROMQ
      • 5
        No message durability
      • 3
        Not a very reliable system - message delivery wise
      • 1
        M x N problem with M producers and N consumers
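
Since ZeroMQ is a brokerless library rather than a server, a queue here is just a pair of sockets. Below is a minimal PUSH/PULL sketch in Python with pyzmq, the same pattern discussed in the post that follows; the port and message contents are assumptions.

```python
# Sketch only: assumes `pip install pyzmq`; no broker process is involved.
import zmq

context = zmq.Context()

# Producer side: a PUSH socket distributes messages round-robin to connected workers.
def run_producer():
    sender = context.socket(zmq.PUSH)
    sender.bind('tcp://*:5557')
    for i in range(10):
        sender.send_string(f'task {i}')

# Worker side: a PULL socket receives its share of the pushed messages.
def run_worker():
    receiver = context.socket(zmq.PULL)
    receiver.connect('tcp://localhost:5557')
    while True:
        task = receiver.recv_string()
        print('got', task)
```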

      related ZeroMQ posts

      Meili Triantafyllidi
      Software engineer at Digital Science · 6 upvotes · 415.2K views
      Shared insights on Amazon SQS, RabbitMQ and ZeroMQ

      Hi, we have a ZeroMQ setup in a push/pull pattern, and we are starting to see more traffic and more cases where the service is unavailable or stuck. We want to:

      • Not lose messages during service outages
      • Safely restart the service without losing messages (ZeroMQ seems to require manually closing the socket in the receiver before a restart)

      Do you have experience with this setup with ZeroMQ? Would you suggest RabbitMQ or Amazon SQS (we are on an AWS setup) instead? Something else?

      Thank you for your time

      Sidekiq

      Simple, efficient background processing for Ruby
      PROS OF SIDEKIQ
      • 123
        Simple
      • 99
        Efficient background processing
      • 60
        Scalability
      • 37
        Better than Resque
      • 26
        Great documentation
      • 15
        Admin tool
      • 14
        Great community
      • 8
        Integrates with redis automatically, with zero config
      • 7
        Great support
      • 7
        Stupidly simple to integrate and run on Rails/Heroku
      • 3
        Freemium
      • 3
        Ruby
      • 2
        Pro version
      • 1
        Fast
      • 1
        Dashboard w/live polling
      • 1
        Great ecosystem of addons
      CONS OF SIDEKIQ
        No cons listed yet.

        related Sidekiq posts

        Cyril Duchon-Doris

        We decided to use AWS Lambda for several serverless tasks such as

        • Managing AWS backups
        • Processing emails received on Amazon SES and stored to Amazon S3, with notification via Amazon SNS, which pushes a message onto our Redis so our Sidekiq Rails workers can process the inbound emails
        • Pushing some relevant Amazon CloudWatch metrics and alarms to Slack
        Simon Bettison
        Managing Director at Bettison.org Limited · 8 upvotes · 678.9K views

        In 2012 we made the very difficult decision to entirely re-engineer our existing monolithic LAMP application from the ground up, in order to address some growing concerns about its long-term viability as a platform.

        A full application re-write is almost never the answer, because of the risks involved. However, the situation warranted drastic action, as it was clear that the existing product was going to face severe scaling issues. We felt it better to address these sooner rather than later, and also to take the opportunity to improve the international architecture and to refactor the database so that it better matched the changes in core functionality.

        PostgreSQL was chosen for its reputation as a solid, ACID-compliant database backend, and it was available as an AWS RDS offering, which reduced the management overhead of configuring it ourselves. In order to reduce read load on the primary database we implemented an Elasticsearch layer for fast and scalable search operations. Synchronisation of these indexes was to be achieved through the use of Sidekiq's Redis-based background workers on Amazon ElastiCache. Again, the AWS solution here looked to be an easy way to keep our involvement in managing this part of the platform to a minimum, allowing us to focus on our core business.

        Rails was chosen for its ability to quickly get core functionality up and running, its MVC architecture, and its focus on Test-Driven Development using RSpec and Selenium, with Travis CI providing continuous integration. We also liked Ruby for its terse, clean and elegant syntax, though YMMV on that one!

        Unicorn was chosen for its support for continual deployment and its reputation as a reliable application server, and nginx for its reputation as a fast and stable reverse proxy. We also took advantage of the Amazon CloudFront CDN here to further improve performance by caching static assets globally.

        We tried to strike a balance between having control over the management and configuration of our core application and the convenience of being able to leverage AWS-hosted services for ancillary functions (Amazon SES, Amazon SQS and Amazon Route 53, all hosted securely inside an Amazon VPC, of course!).

        Whilst there is some compromise here with potential vendor lock-in, the tasks being performed by these ancillary services are not particularly specialised, which should mitigate this risk. Furthermore, we have already containerised the stack in our development environment using Docker, and are looking at how best to bring this into production, potentially using Amazon EC2 Container Service.
