What is Resque and what are its top alternatives?
Resque is a popular job queueing system for Ruby applications, built on top of Redis. It provides a simple UI for managing jobs, prioritizing them, and monitoring their status. Resque is known for its reliability, scalability, and performance, making it a go-to choice for background processing in many Ruby projects. However, one of its limitations is that it requires a separate Redis instance to function, adding complexity to the setup process.
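Resque's job contract is small: a class with a `@queue` class instance variable and a `self.perform` method. A minimal sketch (the job name and arguments are illustrative; the Redis-dependent enqueue call is left as a comment):

```ruby
# A minimal Resque-style job: the class itself is plain Ruby, so it
# can be defined without the gem or Redis. Names here are illustrative.
class ArchiveJob
  @queue = :archive  # Resque reads this class instance variable to pick the queue

  def self.perform(record_id)
    # Real work goes here; this sketch just reports what it would do.
    "archived record #{record_id}"
  end
end

# With the resque gem installed and a Redis server running, the job
# would be enqueued like this, and a separate worker process would
# later call ArchiveJob.perform(42):
#
#   require "resque"
#   Resque.enqueue(ArchiveJob, 42)

puts ArchiveJob.perform(42)  # => archived record 42
```

Because `perform` is a plain class method, jobs written this way are easy to unit-test without any queueing infrastructure.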
- Sidekiq: Sidekiq is a popular alternative to Resque that is known for its high performance and low memory usage. It utilizes threads instead of separate processes for better efficiency. Pros: Efficient memory usage, support for multi-threading. Cons: Paid options for some advanced features.
- Delayed::Job: Delayed::Job is a simple, database-backed job queue that is easy to set up and use. It is a lightweight alternative to Resque with fewer dependencies. Pros: Easy to set up, minimal configuration required. Cons: May not be as efficient for high-traffic applications.
- Que: Que is a high-performance job queue for Ruby applications built on top of PostgreSQL. It offers advanced features like job priority and scheduling. Pros: Seamless integration with PostgreSQL, supports job dependencies. Cons: Limited scalability compared to Redis-based solutions.
- Sucker Punch: Sucker Punch is a simple, single-process background processing library that runs jobs inside your application process, suitable for lightweight job queuing needs. Pros: Easy to set up, lightweight, no separate infrastructure. Cons: Jobs are held in memory, so they are lost if the process restarts; limited scalability compared to dedicated worker processes.
- Shoryuken: Shoryuken is a concurrent job processor for Amazon SQS that is highly scalable and efficient. It is designed to work well with Rails applications and offers built-in support for handling large volumes of jobs. Pros: Scalable, efficient processing of jobs. Cons: Specific to Amazon SQS, may not be suitable for other queueing systems.
- Quebert: Quebert is a versatile job queuing library that supports multiple backends, including Redis and in-memory queues. It offers features like job retry logic and error handling. Pros: Flexible backend support, robust error handling. Cons: May require additional configuration for specific use cases.
- Sneakers: Sneakers is a fast and scalable queuing system for Ruby applications that is built on top of RabbitMQ. It is suitable for high-throughput and mission-critical applications. Pros: High performance, built-in support for RabbitMQ. Cons: Requires a RabbitMQ server to function, may add complexity to the setup.
- Kue: Kue is a feature-rich job queuing system for Node.js applications that offers a user-friendly UI for managing jobs. It supports priority queues, delayed jobs, and job progress tracking. Pros: User-friendly interface, comprehensive feature set. Cons: Limited to Node.js applications, may not be suitable for Ruby projects.
- Brpoplpush: Brpoplpush is a lightweight Redis queueing library for Ruby that is known for its simplicity and low overhead. It provides basic queueing functionality without the need for additional dependencies. Pros: Lightweight, minimal overhead. Cons: Limited features compared to more advanced job queuing systems.
- Backburner: Backburner is a flexible and extensible job queuing system for Ruby applications, built on Beanstalkd, that offers features like batch processing and job lifecycle management. It is suitable for handling complex background processing needs. Pros: Extensible architecture, advanced features. Cons: Requires a Beanstalkd server, and may need additional setup for specific use cases.
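Several of the options above differ mainly in their concurrency model: Sidekiq runs many jobs as threads inside one process, while Resque forks a child process per job. The thread model can be illustrated with nothing but stdlib Ruby; this is a toy sketch, not any library's internals:

```ruby
# Toy illustration of the threaded-worker model: one process, one shared
# queue, several worker threads. Plain stdlib Ruby, not any gem's internals.
jobs    = Thread::Queue.new
results = Thread::Queue.new

5.times { |i| jobs << i }  # enqueue five jobs up front

# Three worker threads drain the shared queue concurrently.
workers = 3.times.map do
  Thread.new do
    loop do
      i = begin
        jobs.pop(true)     # non-blocking pop; raises ThreadError when empty
      rescue ThreadError
        break              # queue drained, worker exits
      end
      results << i * i     # "process" the job
    end
  end
end
workers.each(&:join)

squares = []
squares << results.pop until results.empty?
puts squares.sort.inspect  # => [0, 1, 4, 9, 16]
```

Threads share one heap, which is why a threaded processor typically uses far less memory than forking a fresh process for every job.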
Top Alternatives to Resque
- Sidekiq
Sidekiq uses threads to handle many jobs at the same time in the same process. It does not require Rails but will integrate tightly with Rails 3/4 to make background processing dead simple. ...
- delayed_job
Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks. ...
- Celery
Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well. ...
- Beanstalkd
Beanstalkd's interface is generic, but it was originally designed for reducing the latency of page views in high-volume web applications by running time-consuming tasks asynchronously. ...
- RabbitMQ
RabbitMQ gives your applications a common platform to send and receive messages, and your messages a safe place to live until received. ...
- Rake
It is a software task management and build automation tool. It allows the user to specify tasks and describe dependencies as well as to group tasks in a namespace. ...
- Hangfire
It is an open-source framework that helps you to create, process and manage your background jobs, i.e. operations you don't want to put in your request processing pipeline. It supports all kinds of background tasks – short-running and long-running, CPU-intensive and I/O-intensive, one-shot and recurrent. ...
- PHP-FPM
It is an alternative PHP FastCGI implementation with some additional features useful for sites of any size, especially busier sites. It includes adaptive process spawning, advanced process management with graceful stop/start, emergency restart in case of accidental opcode cache destruction, etc. ...
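Of the Ruby options above, delayed_job has the smallest job contract: anything that responds to `perform` can be enqueued. A hedged sketch (`SmsJob` and its fields are made up; the enqueue call needs the gem and its database table, so it stays a comment):

```ruby
# delayed_job convention: a job is any object with a #perform method.
# A Struct keeps the job's arguments serializable alongside it.
SmsJob = Struct.new(:phone, :text) do
  def perform
    "sent #{text.inspect} to #{phone}"
  end
end

# With the delayed_job gem and its delayed_jobs table migrated:
#
#   Delayed::Job.enqueue SmsJob.new("+15550100", "hi")
#
# A worker process then polls the table and calls #perform.

puts SmsJob.new("+15550100", "hi").perform  # => sent "hi" to +15550100
```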
Resque alternatives & related posts
Pros of Sidekiq
- Simple (124)
- Efficient background processing (99)
- Scalability (60)
- Better than Resque (37)
- Great documentation (26)
- Admin tool (15)
- Great community (14)
- Integrates with Redis automatically, with zero config (8)
- Stupidly simple to integrate and run on Rails/Heroku (7)
- Great support (7)
- Ruby (3)
- Freemium (3)
- Pro version (2)
- Dashboard w/ live polling (1)
- Great ecosystem of addons (1)
- Fast (1)
related Sidekiq posts
We decided to use AWS Lambda for several serverless tasks such as
- Managing AWS backups
- Processing emails received on Amazon SES, stored to Amazon S3, and notified via Amazon SNS, pushing a message onto our Redis so our Sidekiq Rails workers can process inbound emails
- Pushing some relevant Amazon CloudWatch metrics and alarms to Slack
In 2012 we made the very difficult decision to entirely re-engineer our existing monolithic LAMP application from the ground up in order to address some growing concerns about its long-term viability as a platform.
A full application rewrite is almost never the answer, because of the risks involved. However, the situation warranted drastic action, as it was clear that the existing product was going to face severe scaling issues. We felt it better to address these sooner rather than later, and to take the opportunity to improve the international architecture and refactor the database in order that it better matched the changes in core functionality.
PostgreSQL was chosen for its reputation as a solid, ACID-compliant database backend. It was available as an AWS RDS offering, which reduced the management overhead of having to configure it ourselves. To reduce read load on the primary database we implemented an Elasticsearch layer for fast and scalable search operations. Synchronisation of these indexes was to be achieved through Sidekiq's Redis-based background workers on Amazon ElastiCache. Again, the AWS solution here looked to be an easy way to keep our involvement in managing this part of the platform to a minimum, allowing us to focus on our core business.
Rails was chosen for its ability to quickly get core functionality up and running, its MVC architecture, and its focus on Test-Driven Development using RSpec and Selenium, with Travis CI providing continuous integration. We also liked Ruby for its terse, clean and elegant syntax. Though YMMV on that one!
Unicorn was chosen for its support for continuous deployment and its reputation as a reliable application server, nginx for its reputation as a fast and stable reverse proxy. We also took advantage of the Amazon CloudFront CDN here to further improve performance by caching static assets globally.
We tried to strike a balance between having control over the management and configuration of our core application and the convenience of being able to leverage AWS hosted services for ancillary functions (Amazon SES, Amazon SQS, Amazon Route 53, all hosted securely inside Amazon VPC of course!).
Whilst there is some compromise here with potential vendor lock-in, the tasks being performed by these ancillary services are not particularly specialised, which should mitigate this risk. Furthermore, we have already containerised the stack in our development environment using Docker, and are looking at how best to bring this into production, potentially using Amazon EC2 Container Service.
Pros of delayed_job
- Easy to get started (3)
- Reliable (2)
- Doesn't require Redis (1)
related delayed_job posts
delayed_job is a great Rails background job library for new projects, as it only uses what you already have: a relational database. We happily used it during the company’s first two years.
But it started to falter as our web and database transactions significantly grew. Our app interacted with users via SMS texts sent inside background jobs. Because the delayed_job daemon ran every couple of seconds, users often waited several long seconds before getting text replies, which was not acceptable. Moreover, job processing was done inside AWS Elastic Beanstalk web instances, which were already under stress and not meant to handle jobs.
We needed a fast background job system that could process jobs in near real-time and integrate well with AWS. Sidekiq is a fast and popular Ruby background job library, but it does not leverage the Elastic Beanstalk worker architecture, and you have to maintain a Redis instance.
We ended up choosing active-elastic-job, which seamlessly integrates with worker instances and Amazon SQS. SQS is a fast queue and you don’t need to worry about infrastructure or scaling, as AWS handles it for you.
We noticed significant performance gains immediately after making the switch.
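The waiting described above is inherent to polling: a job enqueued just after a poll sits until the next one. A toy arithmetic model of the worst case (the 5-second interval matches delayed_job's default sleep delay, but treat the numbers as assumptions):

```ruby
# Worst-case wait for a polling worker: a job enqueued at time t
# is not seen until the next poll at the next multiple of the interval.
POLL_INTERVAL = 5.0 # seconds; delayed_job's default sleep delay

def worst_case_wait(enqueued_at, interval)
  next_poll = (enqueued_at / interval).ceil * interval
  next_poll - enqueued_at
end

# An SMS job enqueued 0.1s after a poll waits almost the full interval.
puts worst_case_wait(0.1, POLL_INTERVAL)  # => 4.9
```

A push-based system like Sidekiq (Redis `BRPOP`) or an SQS worker avoids this floor, which is why the switch above removed the multi-second replies.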
#BackgroundProcessing
We use Sidekiq to process millions of Ruby background jobs a day under normal loads. We sometimes process more than that when running one-off backfill tasks.
With so many jobs, it wouldn't really make sense to use delayed_job, as it would put our main database under unnecessary load, which would make it a bottleneck with most DB queries serving jobs and not end users. I suppose you could create a separate DB just for jobs, but that can be a hassle. Sidekiq uses a separate Redis instance so you don't have this problem. And it is very performant!
I also like that its free version comes "batteries included" with:
- A web monitoring UI that provides some nice stats.
- An API that can come in handy for one-off tasks, like changing the queue of certain already enqueued jobs.
Sidekiq is a pleasure to use. All our engineers love it!
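The "changing the queue of already enqueued jobs" use case mentioned above can be done with Sidekiq's documented `Sidekiq::API`. A hedged sketch: the Redis-dependent parts stay as comments, and the queue-switching step is factored into a plain-Ruby helper (`retarget` is a made-up name):

```ruby
# A Sidekiq job payload is just a Hash; retargeting it at another
# queue is a pure transformation. Helper name is hypothetical.
def retarget(job_payload, new_queue)
  job_payload.merge("queue" => new_queue)
end

# With the sidekiq gem and Redis available, a one-off move could look like:
#
#   require "sidekiq/api"
#   Sidekiq::Queue.new("low").each do |job|
#     Sidekiq::Client.push(retarget(job.item, "critical"))
#     job.delete
#   end

payload = { "class" => "ReportWorker", "args" => [42], "queue" => "low" }
puts retarget(payload, "critical")["queue"]  # => critical
```

Keeping the transformation pure makes it easy to test the one-off script before pointing it at a production queue.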
Pros of Celery
- Task queue (99)
- Python integration (63)
- Django integration (40)
- Scheduled tasks (30)
- Publish/subscribe (19)
- Various backend brokers (8)
- Easy to use (6)
- Great community (5)
- Workflow (5)
- Free (4)
- Dynamic (1)
Cons of Celery
- Sometimes loses tasks (4)
- Depends on broker (1)
related Celery posts
As Sentry runs throughout the day, there are about 50 different offline tasks that we execute—anything from “process this event, pretty please” to “send all of these cool people some emails.” There are some that we execute once a day and some that execute thousands per second.
Managing this variety requires a reliably high-throughput message-passing technology. We use Celery's RabbitMQ implementation, and we stumbled upon a great feature called Federation that allows us to partition our task queue across any number of RabbitMQ servers and gives us the confidence that, if any single server gets backlogged, others will pitch in and distribute some of the backlogged tasks to their consumers.
#MessageQueue
Automations are what make a CRM powerful. With Celery and RabbitMQ we've been able to build powerful automations that truly work for our clients, such as automatic daily reports, reminders for their activities, important notifications regarding their clients' activities and actions on the website, and more.
We use Celery for basically everything that needs to be scheduled for the future, and using RabbitMQ as our queue broker is amazing, since it fully integrates with Django and Celery, storing the results of completed tasks in our database so we can immediately see if anything fails.
Pros of Beanstalkd
- Fast (23)
- Free (12)
- Does one thing well (12)
- Scalability (9)
- Simplicity (8)
- External admin UI developer friendly (3)
- Job delay (3)
- Job prioritization (2)
- External admin UI (2)
related Beanstalkd posts
I used Kafka originally because it was mandated as part of the top-level IT requirements at a Fortune 500 client. What I found was that it was orders of magnitude more complex ... and powerful than my daily Beanstalkd, and far more flexible, resilient, and manageable than RabbitMQ.
So for any case where utmost flexibility and resilience are part of the deal, I would use Kafka again. But due to the complexities involved, for any time where this level of scalability is not required, I would probably just use Beanstalkd for its simplicity.
I tend to find RabbitMQ to be in an uncomfortable middle place between these two extremities.
Pros of RabbitMQ
- It's fast and it works with good metrics/monitoring (234)
- Ease of configuration (79)
- I like the admin interface (59)
- Easy to set up and start with (50)
- Durable (21)
- Intuitive to work with through Python (18)
- Standard protocols (18)
- Written primarily in Erlang (10)
- Simply superb (8)
- Completeness of messaging patterns (6)
- Scales to 1 million messages per second (3)
- Reliable (3)
- Distributed (2)
- Supports MQTT (2)
- Better than most traditional queue-based message brokers (2)
- Supports AMQP (2)
- Clusterable (1)
- Clear documentation with different scripting languages (1)
- Great UI (1)
- Inubit Integration (1)
- Better routing system (1)
- High performance (1)
- Runs on Open Telecom Platform (1)
- Delayed messages (1)
- Reliability (1)
- Open source (1)
Cons of RabbitMQ
- Too complicated cluster/HA config and management (9)
- Needs Erlang runtime; need ops good with Erlang runtime (6)
- Configuration must be done first, not by your code (5)
- Slow (4)
related RabbitMQ posts
Around the time of their Series A, Pinterest’s stack included Python and Django, with Tornado and Node.js as web servers. Memcached / Membase and Redis handled caching, with RabbitMQ handling queueing. Nginx, HAproxy and Varnish managed static-delivery and load-balancing, with persistent data storage handled by MySQL.
related Rake posts
- Integrated UI dashboard (7)
- Simple (5)
- Robust (3)
- In memory (2)