What is Hangfire and what are its top alternatives?
Top Alternatives to Hangfire
- RabbitMQ
RabbitMQ gives your applications a common platform to send and receive messages, and your messages a safe place to live until received. ... (a minimal publish/consume sketch follows this list)
- NServiceBus
Performance, scalability, pub/sub, reliable integration, workflow orchestration, and everything else you could possibly want in a service bus. ...
- Azure Functions
Azure Functions is an event driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in virtually any Azure or 3rd party service as well as on-premises systems. ...
- Kafka
Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...
- Sidekiq
Sidekiq uses threads to handle many jobs at the same time in the same process. It does not require Rails but will integrate tightly with Rails 3/4 to make background processing dead simple. ...
- Resque
Background jobs can be any Ruby class or module that responds to perform. Your existing classes can easily be converted to background jobs or you can create new classes specifically to do work. Or, you can do both. ...
- Beanstalkd
Beanstalkd's interface is generic, but was originally designed for reducing the latency of page views in high-volume web applications by running time-consuming tasks asynchronously. ...
- PHP-FPM
It is an alternative PHP FastCGI implementation with some additional features useful for sites of any size, especially busier sites. It includes Adaptive process spawning, Advanced process management with graceful stop/start, Emergency restart in case of accidental opcode cache destruction etc. ...
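To ground the RabbitMQ item above, here is a bare-bones publish/consume round trip using the pika Python client. The host, queue name, and message body are made-up examples, and a production setup would add acknowledgements and error handling; this is a sketch, not a recommended configuration.

```python
# rabbitmq_sketch.py - minimal publish/consume example with pika.
# Queue name, message body, and host are illustrative placeholders.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare the queue that gives messages "a safe place to live until received".
channel.queue_declare(queue="tasks", durable=True)

# Producer: drop a message on the queue and move on.
channel.basic_publish(exchange="", routing_key="tasks", body=b"resize image 42")

# Consumer: pull one message back off (a worker would normally loop or
# register a callback with basic_consume instead of polling once).
method, properties, body = channel.basic_get(queue="tasks", auto_ack=True)
if body is not None:
    print("received:", body)

connection.close()
```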
Hangfire alternatives & related posts
RabbitMQ
Pros of RabbitMQ
- It's fast and it works with good metrics/monitoring (232)
- Ease of configuration (79)
- I like the admin interface (58)
- Easy to set up and start with (50)
- Durable (20)
- Standard protocols (18)
- Intuitive to work with through Python (18)
- Written primarily in Erlang (10)
- Simply superb (8)
- Completeness of messaging patterns (6)
- Scales to 1 million messages per second (3)
- Reliable (3)
- Better than most traditional queue-based message brokers (2)
- Distributed (2)
- Supports MQTT (2)
- Supports AMQP (2)
- Inubit integration (1)
- Open source (1)
- Delayed messages (1)
- Runs on the Open Telecom Platform (1)
- High performance (1)
- Reliability (1)
- Clusterable (1)
- Clear documentation with examples in different scripting languages (1)
- Great UI (1)
- Better routing system (1)
Cons of RabbitMQ
- Overly complicated cluster/HA configuration and management (9)
- Needs the Erlang runtime, and ops staff comfortable with it (6)
- Configuration must be done up front, not from your code (5)
- Slow (4)
related RabbitMQ posts
As Sentry runs throughout the day, there are about 50 different offline tasks that we execute—anything from “process this event, pretty please” to “send all of these cool people some emails.” Some of them we execute once a day, and some execute thousands of times per second.
Managing this variety requires a reliably high-throughput message-passing technology. We use Celery's RabbitMQ implementation, and we stumbled upon a great feature called Federation that allows us to partition our task queue across any number of RabbitMQ servers and gives us the confidence that, if any single server gets backlogged, others will pitch in and distribute some of the backlogged tasks to their consumers.
#MessageQueue
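The Sentry post above describes fanning offline work out through Celery running on RabbitMQ. As a rough illustration of that pattern (the broker URL, task names, and bodies below are placeholders, not Sentry's actual code), a minimal Celery setup looks something like this:

```python
# tasks.py - a minimal Celery app backed by RabbitMQ (illustrative only).
# Broker URL, task names, and task bodies are placeholders.
from celery import Celery

app = Celery("offline_tasks", broker="amqp://guest:guest@localhost:5672//")

@app.task
def process_event(event_id):
    # Placeholder for "process this event, pretty please"
    print(f"processing event {event_id}")

@app.task
def send_digest_email(user_id):
    # Placeholder for the daily "send these people some emails" job
    print(f"emailing user {user_id}")

# A producer enqueues work without waiting for it:
#   process_event.delay("evt_123")
# Any number of workers consume from the queue:
#   celery -A tasks worker --loglevel=info
```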
Hi, I am building an enhanced web-conferencing app that will have voice/video calls, live chat, live notifications, live discussions, screen sharing, and similar features. Ref: Zoom.
I need advice on finalizing the tech stack for this app. I am considering the tech stack below:
- Frontend: React
- Backend: Node.js
- Database: MongoDB
- IAAS: #AWS
- Containers & Orchestration: Docker / Kubernetes
- DevOps: GitLab, Terraform
- Brokers: Redis / RabbitMQ
I need advice at the platform level on what should be considered to support concurrent video streaming seamlessly.
Also, please suggest what would be a better tech stack for my app.
#SAAS #VideoConferencing #WebAndVideoConferencing #zoom #stack
NServiceBus
Cons of NServiceBus
- Brings on-prem issues to the cloud (1)
- Not as good as alternatives, good job security (1)
related NServiceBus posts
Azure Functions
Pros of Azure Functions
- Pay only when invoked (13)
- Great developer experience for C# (10)
- Multiple languages supported (8)
- Great debugging support (6)
- Can be used as a lightweight HTTPS service (4)
- Cost (3)
- Easy scalability (3)
- Event driven (2)
- Azure component events for Storage, services etc. (2)
- WebHooks (2)
Cons of Azure Functions
- Poor developer experience for C# (2)
- No persistent (writable) file system available (1)
- Poor support for Linux environments (1)
- Sporadic server & language runtime issues (1)
- Not suited for long-running applications (1)
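To make the "event driven, compute-on-demand" description from the list at the top concrete, here is a minimal sketch of an HTTP-triggered function using the Azure Functions Python programming model. The route and response text are arbitrary examples, not taken from any post here.

```python
# function_app.py - a minimal HTTP-triggered Azure Function (Python v2 model).
# Route name and response body are arbitrary; this is an illustrative sketch only.
import azure.functions as func

app = func.FunctionApp()

@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # The platform invokes this only when the HTTP event fires,
    # which is where the "pay only when invoked" pro comes from.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```

The same decorator-based model supports other triggers (timers, queues, blobs), which is how the event-driven pros in the list above play out in practice.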
related Azure Functions posts
Since CodeFactor is a #SAAS product, our goal was to run on cloud-native infrastructure from day one. We wanted to stay product-focused rather than work on the infrastructure that supports the application. We needed a cloud-hosting provider that would be reliable, economical, and the most efficient for our product.
CodeFactor.io aims to provide an automated and frictionless code review service for software developers. That requires agility, instant provisioning, autoscaling, security, availability and compliance management features. We looked at the top three #IAAS providers that take up the majority of market share: Amazon's Amazon EC2 , Microsoft's Microsoft Azure, and Google Compute Engine.
AWS has been available since 2006 and has developed the most extensive variety of services and tools at massive scale. Azure and GCP are about half AWS's age, but they also satisfied our technical requirements.
It is worth noting that even though all three providers support Docker containerization services, GCP has the most robust offering due to their investments in Kubernetes. Also, if you are a Microsoft shop and develop in .NET with Visual Studio, Azure shines at integration there, and all your existing .NET code works seamlessly on Azure. All three providers have serverless computing offerings (AWS Lambda, Azure Functions, and Google Cloud Functions). Additionally, all three providers have machine learning tools, but GCP appears to be the most developer-friendly, intuitive and complete when it comes to #Machinelearning and #AI.
The prices between providers are competitive across the board. For our requirements, AWS would have been the most expensive, GCP the least expensive, and Azure was in the middle. Plus, if you #Autoscale frequently with large deltas, note that Azure and GCP have per-minute billing, where AWS bills you per hour. We also applied for the #Startup programs with all three providers, and this is where Azure shined. While AWS and GCP for startups would have covered us for about one year of infrastructure costs, Azure Sponsorship would cover about two years of CodeFactor's hosting costs. Moreover, the Azure team was terrific - I felt that they wanted to work with us, whereas for AWS and GCP we were just another startup.
In summary, we were leaning towards GCP. GCP's advantages in containerization, automation toolset, #Devops mindset, and pricing were the driving factors there. Nevertheless, we could not say no to Azure's financial incentives and a strong sense of partnership and support throughout the process.
Bottom line is, IAAS offerings with AWS, Azure, and GCP are evolving fast. At CodeFactor, we aim to be platform agnostic where it is practical and retain the flexibility to cherry-pick the best products across providers.
In a couple of recent projects we had an opportunity to try out the new serverless approach to building web applications. It wasn't necessarily a question of whether we should use any particular vendor, but rather whether we could consider serverless a viable option for building apps. Obviously our goal was also to get a feel for this technology and gain some hands-on experience.
We did consider AWS Lambda, Firebase from Google as well as Azure Functions. Eventually we went with AWS Lambdas.
PROS
- No servers to manage (obviously!)
- Limited fixed costs – you pay only for used time
- Automated scaling and balancing
- Automatic failover (or, at this level of abstraction, no failover problem at all)
- Security easier to provide and audit
- Low overhead at the start (with a certain level of knowledge)
- Short time to market
- Easy handover - deployment coupled with code
- Perfect choice for lean startups with fast-paced iterations
- Augmentation for the classic cloud, server(full) approach
CONS
- Not much know-how and best practices available on the market about structuring code and projects
- Not suitable for complex business logic due to the risk of producing highly coupled code
- Cost difficult to estimate (helpful tools: serverlesscalc.com)
- Difficulty in migration to other platforms (vendor lock-in ⚠️)
- Few engineers with serverless experience on the job market
- Steep learning curve for engineers without any cloud experience
More details are on our blog: https://evojam.com/blog/2018/12/5/should-you-go-serverless-meet-the-benefits-and-flaws-of-new-wave-of-cloud-solutions. I hope it helps 🙌 and I'm curious about your experiences.
Kafka
Pros of Kafka
- High-throughput (126)
- Distributed (119)
- Scalable (90)
- High-performance (85)
- Durable (65)
- Publish-subscribe (37)
- Simple to use (19)
- Open source (18)
- Written in Scala and Java; runs on the JVM (11)
- Message broker + streaming system (8)
- Robust (4)
- Avro schema integration (4)
- KSQL (4)
- Supports multiple clients (3)
- Partitioned, replayable log (2)
- Extremely good parallelism constructs (1)
- Fun (1)
- Flexible (1)
- Simple publisher / multi-subscriber model (1)
Cons of Kafka
- Non-Java clients are second-class citizens (31)
- Needs ZooKeeper (28)
- Operational difficulties (8)
- Terrible packaging (3)
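The Kafka blurb near the top describes a distributed, partitioned, replicated commit log exposed through a messaging-style API. As a rough sketch of what that looks like from Python (using the third-party kafka-python client; the broker address, topic name, and group id are made-up examples, not anything from the posts below):

```python
# kafka_sketch.py - minimal produce/consume round trip with kafka-python.
# Broker address, topic name, and group id are illustrative placeholders.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
# Each message is appended to a partition of the "events" topic,
# i.e. to the commit log described above.
producer.send("events", key=b"user-42", value=b"signed_up")
producer.flush()

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="example-group",
    auto_offset_reset="earliest",
)
for record in consumer:
    # Consumers track their own offset into the log, which is what
    # makes the log "replayable".
    print(record.partition, record.offset, record.key, record.value)
    break
```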
related Kafka posts
The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.
Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).
At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.
For more info:
- Our Algorithms Tour: https://algorithms-tour.stitchfix.com/
- Our blog: https://multithreaded.stitchfix.com/blog/
- Careers: https://multithreaded.stitchfix.com/careers/
#DataScience #DataStack #Data
As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.
We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.
Sidekiq
Pros of Sidekiq
- Simple (123)
- Efficient background processing (99)
- Scalability (60)
- Better than Resque (37)
- Great documentation (26)
- Admin tool (15)
- Great community (14)
- Integrates with Redis automatically, with zero config (8)
- Great support (7)
- Stupidly simple to integrate and run on Rails/Heroku (7)
- Freemium (3)
- Ruby (3)
- Pro version (2)
- Fast (1)
- Dashboard with live polling (1)
- Great ecosystem of add-ons (1)
related Sidekiq posts
We decided to use AWS Lambda for several serverless tasks, such as:
- Managing AWS backups
- Processing emails received on Amazon SES, stored to Amazon S3, and notified via Amazon SNS, pushing a message onto our Redis so our Sidekiq Rails workers can process inbound emails (see the sketch below)
- Pushing some relevant Amazon CloudWatch metrics and alarms to Slack
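As a rough sketch of the second bullet: the SNS event envelope and the SES notification fields are standard, but the Redis host, list key, and message shape below are assumptions, not the poster's actual code, and a real handoff to Sidekiq would need Sidekiq's own job payload format rather than the plain message shown here.

```python
# lambda_handler.py - illustrative sketch of an SES -> SNS -> Lambda -> Redis hop.
# Redis host, list key, and message shape are assumptions, not the poster's code.
import json
import redis

r = redis.Redis(host="my-redis.example.internal", port=6379)

def handler(event, context):
    # SNS delivers one or more records; each carries the SES notification as JSON.
    for record in event["Records"]:
        notification = json.loads(record["Sns"]["Message"])
        message_id = notification["mail"]["messageId"]
        # Hand a pointer to the stored email off to the Rails side.
        # (Real Sidekiq workers expect Sidekiq's own job payload format;
        # this just pushes a plain message for illustration.)
        r.lpush("inbound_emails", json.dumps({"s3_message_id": message_id}))
    return {"processed": len(event["Records"])}
```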
I'm building a new process management tool. I decided to build with Rails as my backend, using Sidekiq for background jobs. I chose to work with these tools because I've worked with them before and know that they're able to get the job done. They may not be the sexiest tools, but they work and are reliable, which is what I was optimizing for. For data stores, I opted for PostgreSQL and Redis. Because I'm planning on offering dashboards, I wanted a SQL database instead of something like MongoDB that might work early on, but be difficult to use as soon as I want to facilitate aggregate queries.
On the front end I'm using Vue.js and Vuex in combination with #Turbolinks. In effect, I want to render most pages on the server side, with only the key interactions being managed by Vue.js. This is the first project I'm working on where I've explicitly decided not to include jQuery. I have found React and Redux.js more confusing to set up. I appreciate the opinionated approach of the Vue.js community and that things just work together the way I'd expect. To manage my JavaScript dependencies, I'm using Yarn.
For CSS frameworks, I'm using #Bulma.io. I really appreciate its minimal nature and that there are no hard JavaScript dependencies. And to add a little spice, I'm using #font-awesome.
Resque
Pros of Resque
- Free (5)
- Scalable (3)
- Easy to use on Heroku (1)
related Resque posts
Beanstalkd
Pros of Beanstalkd
- Fast (23)
- Free (12)
- Does one thing well (12)
- Scalability (9)
- Simplicity (8)
- External admin UI developer friendly (3)
- Job delay (3)
- Job prioritization (2)
- External admin UI (2)
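The Beanstalkd entry near the top describes pushing time-consuming work out of the request path onto a queue. A bare-bones sketch of that producer/worker split, using the third-party greenstalk Python client (host, port, and job body are illustrative assumptions):

```python
# beanstalk_sketch.py - put a job on Beanstalkd and work it off elsewhere.
# Uses the third-party greenstalk client; address and job body are placeholders.
import greenstalk

# Producer side: runs inside the web request and returns immediately.
producer = greenstalk.Client(("127.0.0.1", 11300))
producer.put("resize-image:42")
producer.close()

# Worker side: a separate long-running process doing the slow work.
worker = greenstalk.Client(("127.0.0.1", 11300))
job = worker.reserve()          # blocks until a job is available
print("working on", job.body)   # e.g. "resize-image:42"
worker.delete(job)              # acknowledge completion
worker.close()
```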
related Beanstalkd posts
I used Kafka originally because it was mandated as part of the top-level IT requirements at a Fortune 500 client. What I found was that it was orders of magnitude more complex... and more powerful than my daily Beanstalkd, and far more flexible, resilient, and manageable than RabbitMQ.
So for any case where utmost flexibility and resilience are part of the deal, I would use Kafka again. But because of the complexity involved, any time that level of scalability is not required, I would probably just use Beanstalkd for its simplicity.
I tend to find RabbitMQ to sit in an uncomfortable middle ground between these two extremes.