Alternatives to Sidekiq

Resque, Celery, RabbitMQ, delayed_job, and Kafka are the most popular alternatives and competitors to Sidekiq.

What is Sidekiq and what are its top alternatives?

Sidekiq uses threads to handle many jobs at the same time in the same process. It does not require Rails but will integrate tightly with Rails 3/4 to make background processing dead simple.
Sidekiq is a tool in the Background Processing category of a tech stack.
Sidekiq is an open source tool with 9.9K GitHub stars and 1.7K GitHub forks. Sidekiq's open source repository is available on GitHub.
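Sidekiq's one-process, many-threads model can be illustrated with a minimal, dependency-free sketch. This is plain Ruby, not Sidekiq's actual API: a shared queue of JSON job payloads feeds several worker threads in a single process.

```ruby
require "json"

# Toy thread-based job processor, illustrating (not reproducing) how
# Sidekiq runs many jobs concurrently inside one process.
jobs = Queue.new
results = Queue.new

# Enqueue jobs as JSON payloads, similar in spirit to how Sidekiq
# serializes a job's class and arguments.
10.times { |i| jobs << JSON.generate("class" => "HardWorker", "args" => [i]) }

workers = 5.times.map do
  Thread.new do
    loop do
      payload = begin
        jobs.pop(true) # non-blocking pop; raises ThreadError when empty
      rescue ThreadError
        break
      end
      job = JSON.parse(payload)
      results << job["args"].first * 2 # stand-in for real work
    end
  end
end
workers.each(&:join)

puts results.size # => 10
```

Because the workers are threads sharing one process, memory overhead stays low compared to forking a process per job, which is the trade-off the description above alludes to.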

Sidekiq alternatives & related posts

Resque

A Redis-backed Ruby library for creating background jobs, placing them on multiple queues, and processing them later
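Resque's queueing convention is simple: each queue is a Redis list holding JSON payloads of the form `{"class": ..., "args": [...]}`. The sketch below models that convention in plain Ruby, with a Hash of Arrays standing in for Redis; it is illustrative only and uses none of Resque's real API.

```ruby
require "json"

# A Hash of Arrays stands in for Redis lists (assumption: illustration only).
redis = Hash.new { |h, k| h[k] = [] }

# Push a job payload onto the named queue, Resque-style.
def enqueue(redis, queue, job_class, *args)
  redis["queue:#{queue}"] << JSON.generate("class" => job_class, "args" => args)
end

# Pop the oldest payload off the queue and decode it.
def reserve(redis, queue)
  payload = redis["queue:#{queue}"].shift
  payload && JSON.parse(payload)
end

enqueue(redis, :mailers, "SendEmail", "user@example.com")
job = reserve(redis, :mailers)
puts job["class"] # => "SendEmail"
```

The key contrast with Sidekiq is not the queue format (both use Redis) but the worker model: Resque forks a process per job, while Sidekiq threads within one process.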

related Celery posts

James Cunningham
Operations Engineer at Sentry · 18 upvotes · 73.4K views

As Sentry runs throughout the day, there are about 50 different offline tasks that we execute—anything from “process this event, pretty please” to “send all of these cool people some emails.” There are some that we execute once a day and some that execute thousands per second.

Managing this variety requires a reliably high-throughput message-passing technology. We use Celery's RabbitMQ implementation, and we stumbled upon a great feature called Federation that allows us to partition our task queue across any number of RabbitMQ servers and gives us the confidence that, if any single server gets backlogged, others will pitch in and distribute some of the backlogged tasks to their consumers.

#MessageQueue


related RabbitMQ posts

Tim Abbott
Founder at Zulip · 9 upvotes · 39.5K views
We've been using RabbitMQ as Zulip's queuing system since we needed a queuing system. What I like about it is that it scales really well and has good libraries for a wide range of platforms, including our own Python. So aside from getting it running, we've had to put basically 0 effort into making it scale for our needs.

However, there are several things that could be better about it:

  • Its error messages are absolutely terrible; if one of our users ever gets an error with RabbitMQ (even for simple things like a misconfigured hostname), they always end up needing help from the Zulip team, because the error logs are just inscrutable. As an open source project, we've handled this issue by carefully scripting the installation into a failure-proof configuration (in this case, setting the RabbitMQ hostname to 127.0.0.1, so that no user-controlled configuration can break it). But it was a real pain to get there, and the process of determining we needed to do that caused a significant amount of pain for folks installing Zulip.
  • The pika library for Python takes a lot of time to start up a RabbitMQ connection; this means that Zulip server restarts are more disruptive than would be ideal.
  • It's annoying that you need to run the rabbitmqctl management commands as root.

But overall, I like that it has clean, clear semantics and high scalability, and I haven't been tempted to do the work to migrate to something like Redis (which has its own downsides).

delayed_job

Database backed asynchronous priority queue -- Extracted from Shopify

related delayed_job posts

Jerome Dalbert
Senior Backend Engineer at StackShare · 4 upvotes · 20.2K views
at Gratify Commerce

delayed_job is a great Rails background job library for new projects, as it only uses what you already have: a relational database. We happily used it during the company’s first two years.

But it started to falter as our web and database transactions significantly grew. Our app interacted with users via SMS texts sent inside background jobs. Because the delayed_job daemon only ran every couple of seconds, users often waited several long seconds before getting text replies, which was not acceptable. Moreover, job processing was done inside AWS Elastic Beanstalk web instances, which were already under stress and not meant to handle jobs.
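The latency problem described here is inherent to polling: a worker that wakes up every few seconds to query the database can only pick up a job at the next poll tick. A toy calculation (illustrative only, not delayed_job's real implementation; the 5-second interval is an assumption) shows the bound:

```ruby
# A delayed_job-style worker polls the jobs table every POLL_INTERVAL
# seconds, so a job enqueued just after a tick waits almost a full
# interval before it even starts.
POLL_INTERVAL = 5 # seconds (assumed for illustration)

# The next poll tick at or after the enqueue time.
def pickup_time(enqueued_at)
  (enqueued_at / POLL_INTERVAL.to_f).ceil * POLL_INTERVAL
end

# A job enqueued at t=11s is picked up at t=15s.
latency = pickup_time(11) - 11
puts latency # => 4
```

Push-based systems like Sidekiq avoid this floor because workers block on Redis and are woken the moment a job arrives.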

We needed a fast background job system that could process jobs in near real-time and integrate well with AWS. Sidekiq is a fast and popular Ruby background job library, but it does not leverage the Elastic Beanstalk worker architecture, and you have to maintain a Redis instance.

We ended up choosing active-elastic-job, which seamlessly integrates with worker instances and Amazon SQS. SQS is a fast queue and you don’t need to worry about infrastructure or scaling, as AWS handles it for you.

We noticed significant performance gains immediately after making the switch.

#BackgroundProcessing

Jerome Dalbert
Senior Backend Engineer at StackShare · 3 upvotes · 7K views

We use Sidekiq to process millions of Ruby background jobs a day under normal loads. We sometimes process more than that when running one-off backfill tasks.

With so many jobs, it wouldn't really make sense to use delayed_job, as it would put our main database under unnecessary load, which would make it a bottleneck with most DB queries serving jobs and not end users. I suppose you could create a separate DB just for jobs, but that can be a hassle. Sidekiq uses a separate Redis instance so you don't have this problem. And it is very performant!

I also like that its free version comes "batteries included" with:

  • A web monitoring UI that provides some nice stats.
  • An API that can come in handy for one-off tasks, like changing the queue of certain already enqueued jobs.

Sidekiq is a pleasure to use. All our engineers love it!


related Kafka posts

Eric Colson
Chief Algorithms Officer at Stitch Fix · 19 upvotes · 196.2K views

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3-based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.


#DataScience #DataStack #Data

John Kodumal
CTO at LaunchDarkly · 15 upvotes · 90.9K views

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.


related Amazon SQS posts

Tim Specht
Co-Founder and CTO at Dubsmash · 14 upvotes · 34.3K views

In order to accurately measure & track user behaviour on our platform, we quickly moved from the initial Google Analytics solution to a custom-built one due to resource & pricing concerns.

While this does sound complicated, it’s as easy as clients sending JSON blobs of events to Amazon Kinesis from where we use AWS Lambda & Amazon SQS to batch and process incoming events and then ingest them into Google BigQuery. Once events are stored in BigQuery (which usually only takes a second from the time the client sends the data until it’s available), we can use almost-standard-SQL to simply query for data while Google makes sure that, even with terabytes of data being scanned, query times stay in the range of seconds rather than hours. Before ingesting their data into the pipeline, our mobile clients are aggregating events internally and, once a certain threshold is reached or the app is going to the background, sending the events as a JSON blob into the stream.
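The client-side aggregation at the end of that pipeline follows a common pattern: buffer events locally, then flush them as one JSON blob once a size threshold is reached or the app goes to the background. A language-agnostic sketch (in Ruby for consistency with this page; the real Dubsmash clients are mobile apps, and the class and threshold here are assumptions):

```ruby
require "json"

# Buffers events and flushes them as a single JSON blob, either when a
# threshold is reached or explicitly (e.g. on app backgrounding).
class EventBuffer
  def initialize(threshold:, &flusher)
    @threshold = threshold
    @events = []
    @flusher = flusher # receives the serialized blob, e.g. a Kinesis put
  end

  def track(event)
    @events << event
    flush if @events.size >= @threshold
  end

  def flush
    return if @events.empty?
    @flusher.call(JSON.generate(@events)) # one blob into the stream
    @events.clear
  end
end

sent = []
buffer = EventBuffer.new(threshold: 3) { |blob| sent << blob }
4.times { |i| buffer.track("name" => "video_played", "n" => i) }
buffer.flush # simulate the app going to the background
puts sent.size # => 2 (one full batch of 3, one final batch of 1)
```

Batching like this trades a little delivery latency for far fewer network calls, which matters on mobile connections.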

In the past we had workers running that continuously read from the stream and would validate and post-process the data and then enqueue them for other workers to write them to BigQuery. We went ahead and implemented the Lambda-based approach in such a way that Lambda functions would automatically be triggered for incoming records, pre-aggregate events, and write them back to SQS, from which we then read them, and persist the events to BigQuery. While this approach had a couple of bumps on the road, like re-triggering functions asynchronously to keep up with the stream and proper batch sizes, we finally managed to get it running in a reliable way and are very happy with this solution today.

#ServerlessTaskProcessing #GeneralAnalytics #RealTimeDataProcessing #BigDataAsAService

Praveen Mooli
Technical Leader at Taylor and Francis · 11 upvotes · 81.2K views

We are in the process of building a modern content platform to deliver our content through various channels. We decided to go with Microservices architecture as we wanted scale. Microservice architecture style is an approach to developing an application as a suite of small independently deployable services built around specific business capabilities. You can gain modularity, extensive parallelism and cost-effective scaling by deploying services across many distributed servers. Microservices modularity facilitates independent updates/deployments, and helps to avoid single point of failure, which can help prevent large-scale outages. We also decided to use Event Driven Architecture pattern which is a popular distributed asynchronous architecture pattern used to produce highly scalable applications. The event-driven architecture is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events.

To build our #Backend capabilities we decided to use the following: 1. #Microservices - Java with Spring Boot , Node.js with ExpressJS and Python with Flask 2. #Eventsourcingframework - Amazon Kinesis , Amazon Kinesis Firehose , Amazon SNS , Amazon SQS, AWS Lambda 3. #Data - Amazon RDS , Amazon DynamoDB , Amazon S3 , MongoDB Atlas

To build #Webapps we decided to use Angular 2 with RxJS

#Devops - GitHub , Travis CI , Terraform , Docker , Serverless


related Beanstalkd posts

Frédéric MARAND
Core Developer at OSInet · 2 upvotes · 87.2K views

I used Kafka originally because it was mandated as part of the top-level IT requirements at a Fortune 500 client. What I found was that it was orders of magnitude more complex, and more powerful, than my daily Beanstalkd, and far more flexible, resilient, and manageable than RabbitMQ.

So for any case where utmost flexibility and resilience are part of the deal, I would use Kafka again. But due to the complexities involved, for any time where this level of scalability is not required, I would probably just use Beanstalkd for its simplicity.

I tend to find RabbitMQ to be in an uncomfortable middle place between these two extremities.

Kue

Kue is a priority job queue backed by redis, built for node.js

Hangfire

Perform background processing in .NET and .NET Core applications

PHP-FPM

An alternative FastCGI daemon for PHP

Bull

Premium Queue package for handling jobs and messages in NodeJS

Que

A Ruby job queue that uses PostgreSQL's advisory locks for speed and reliability

Faktory

Background jobs for any language, by the makers of Sidekiq

runit

Cross-platform Unix init scheme with service supervision

Posthook

Simple job scheduling as a service