
Celery vs Apache Spark: What are the differences?

Celery: Distributed task queue. Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well.

Apache Spark: Fast and general engine for large-scale data processing. Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.

Celery and Apache Spark are primarily classified as "Message Queue" and "Big Data" tools respectively.

"Task queue" is the top reason why over 84 developers like Celery, while over 45 developers mention "Open-source" as the leading cause for choosing Apache Spark.

Celery and Apache Spark are both open source tools. Apache Spark with 22.3K GitHub stars and 19.3K forks on GitHub appears to be more popular than Celery with 12.7K GitHub stars and 3.3K GitHub forks.

According to the StackShare community, Apache Spark has broader developer approval, being mentioned in 263 company stacks and 111 developer stacks, compared to Celery, which is listed in 271 company stacks and 77 developer stacks.

What is Celery?

Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well.
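
As a minimal sketch (not taken from the Celery docs; the broker URL and task body are illustrative assumptions), defining and enqueuing a Celery task looks like this:

```python
# Minimal Celery sketch; broker URL and task body are illustrative assumptions.
from celery import Celery

# Point the app at a message broker (RabbitMQ here; Redis also works).
app = Celery("tasks", broker="amqp://guest@localhost//")

@app.task
def add(x, y):
    # Executes asynchronously on a worker process, not in the caller.
    return x + y

# Callers enqueue work instead of running it inline:
# result = add.delay(2, 3)
```

A worker started with `celery -A tasks worker` picks jobs off the queue as they arrive.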

What is Apache Spark?

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
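
A minimal PySpark sketch of the same idea (the input path and column name are illustrative assumptions, not from the Spark docs):

```python
# Minimal PySpark sketch; the input path and column name are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example").getOrCreate()

# Spark can read from HDFS, HBase, Cassandra, Hive, etc.; a local JSON file here.
df = spark.read.json("events.json")

# Transformations are lazy; the aggregation runs across the cluster on show().
df.groupBy("event_type").count().show()

spark.stop()
```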

What are some alternatives to Celery and Apache Spark?
RabbitMQ
RabbitMQ gives your applications a common platform to send and receive messages, and your messages a safe place to live until received.
Kafka
Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.
Amazon SQS
Transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available. With SQS, you can offload the administrative burden of operating and scaling a highly available messaging cluster, while paying a low price for only what you use.
ActiveMQ
Apache ActiveMQ is fast, supports many Cross Language Clients and Protocols, comes with easy to use Enterprise Integration Patterns and many advanced features while fully supporting JMS 1.1 and J2EE 1.4. Apache ActiveMQ is released under the Apache 2.0 License.
ZeroMQ
The 0MQ lightweight messaging kernel is a library which extends the standard socket interfaces with features traditionally provided by specialised messaging middleware products. 0MQ sockets provide an abstraction of asynchronous message queues, multiple messaging patterns, message filtering (subscriptions), seamless access to multiple transport protocols and more.
Decisions about Celery and Apache Spark
StackShare Editors
Presto
Apache Spark
Hadoop

Around 2015, Uber's growing use of data exposed limitations in its ETL- and Vertica-centric setup, not to mention increasing costs. “As our company grew, scaling our data warehouse became increasingly expensive. To cut down on costs, we started deleting older, obsolete data to free up space for new data.”

To overcome these challenges, Uber rebuilt their big data platform around Hadoop. “More specifically, we introduced a Hadoop data lake where all raw data was ingested from different online data stores only once and with no transformation during ingestion.”

“In order for users to access data in Hadoop, we introduced Presto to enable interactive ad hoc user queries, Apache Spark to facilitate programmatic access to raw data (in both SQL and non-SQL formats), and Apache Hive to serve as the workhorse for extremely large queries.”
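
Uber's code isn't shown in the post, but "programmatic access to raw data" via Spark might look roughly like the following sketch, where the HDFS path and columns are hypothetical:

```python
# Hypothetical sketch only: the path and columns are illustrative, not Uber's.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("raw-data-access").getOrCreate()

# Read raw, untransformed data straight out of the Hadoop data lake...
trips = spark.read.parquet("hdfs:///data/raw/trips")

# ...then query it through SQL or the DataFrame API.
trips.createOrReplaceTempView("trips")
spark.sql("SELECT city_id, COUNT(*) AS n FROM trips GROUP BY city_id").show()
```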

StackShare Editors
Presto
Apache Spark
Hadoop

To improve platform scalability and efficiency, Uber transitioned from JSON to Parquet, and built a central schema service to manage schemas and integrate different client libraries.

While the first generation big data platform was vulnerable to upstream data format changes, “ad hoc data ingestions jobs were replaced with a standard platform to transfer all source data in its original, nested format into the Hadoop data lake.”

These platform changes enabled Uber to meet the scaling challenges it was facing around that time: “On a daily basis, there were tens of terabytes of new data added to our data lake, and our Big Data platform grew to over 10,000 vcores with over 100,000 running batch jobs on any given day.”

StackShare Editors
Presto
Apache Spark
Scala
MySQL
Kafka

Slack’s data team works to “provide an ecosystem to help people in the company quickly and easily answer questions about usage, so they can make better and data informed decisions.” To achieve that goal, they rely on a complex data pipeline.

An in-house tool called Sqooper scrapes MySQL backups and pipes them to S3. Job queue and log data are sent to Kafka, then persisted to S3 using Secor, an open source tool created by Pinterest.

For compute, Amazon’s Elastic MapReduce (EMR) creates clusters preconfigured for Presto, Hive, and Spark.

Presto is then used for ad-hoc questions, validating data assumptions, exploring smaller datasets, and creating visualizations for some internal tools. Hive is used for larger data sets or longer time series data, and Spark allows teams to write efficient and robust batch and aggregation jobs. Most of the Spark pipeline is written in Scala.
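
Slack writes these jobs in Scala; purely as an illustration of the batch-aggregation pattern, an equivalent PySpark sketch (with invented S3 paths and columns) might look like this:

```python
# Illustration only: Slack's pipeline is Scala, and these paths/columns are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-usage-rollup").getOrCreate()

# Read a day of raw events persisted to S3 (e.g. by Secor from Kafka).
events = spark.read.parquet("s3://example-bucket/events/dt=2019-01-01/")

# Aggregate per-team usage and write the rollup back for downstream queries.
daily = events.groupBy("team_id").agg(F.count("*").alias("event_count"))
daily.write.mode("overwrite").parquet("s3://example-bucket/rollups/daily/dt=2019-01-01/")
```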

Thrift binds all of these engines together with a typed schema and structured data.

Finally, the Hive Metastore serves as the ground truth for all data and its schema.

James Cunningham
Operations Engineer at Sentry · 18 upvotes · 118.7K views
RabbitMQ
Celery
#MessageQueue

As Sentry runs throughout the day, there are about 50 different offline tasks that we execute—anything from “process this event, pretty please” to “send all of these cool people some emails.” There are some that we execute once a day and some that execute thousands of times per second.

Managing this variety requires a reliably high-throughput message-passing technology. We use Celery's RabbitMQ implementation, and we stumbled upon a great feature called Federation that allows us to partition our task queue across any number of RabbitMQ servers and gives us the confidence that, if any single server gets backlogged, others will pitch in and distribute some of the backlogged tasks to their consumers.
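
Federation itself is configured on the RabbitMQ side, but the Celery half of a setup like this, pointing the app at RabbitMQ and splitting work across queues, might look roughly like the following sketch (broker URL, queue names, and tasks are invented, not Sentry's code):

```python
# Illustrative sketch; broker URL, queues, and tasks are invented, not Sentry's.
# RabbitMQ Federation is configured on the broker itself, not in this file.
from celery import Celery

app = Celery("app", broker="amqp://guest@rabbit-1:5672//")

# Route different kinds of offline work to different queues so one busy
# task type can't starve the rest.
app.conf.task_routes = {
    "tasks.process_event": {"queue": "events"},
    "tasks.send_email": {"queue": "email"},
}

@app.task(name="tasks.process_event")
def process_event(event_id):
    ...  # "process this event, pretty please"

@app.task(name="tasks.send_email")
def send_email(user_id):
    ...  # "send all of these cool people some emails"
```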


StackShare Editors
Apache Thrift
Kotlin
Presto
HHVM (HipHop Virtual Machine)
gRPC
Kubernetes
Apache Spark
Airflow
Terraform
Hadoop
Swift
Hack
Memcached
Consul
Chef
Prometheus

Cal Henderson has been Slack's CTO since the company's founding. Earlier this year, he summarized their current stack in an answer to a Quora question.

Apps
  • Web: a mix of JavaScript/ES6 and React.
  • Desktop: Electron, which ships the web client as a desktop application.
  • Android: a mix of Java and Kotlin.
  • iOS: written in a mix of Objective-C and Swift.
Backend
  • The core application and the API are written in PHP/Hack and run on HHVM.
  • The data is stored in MySQL using Vitess.
  • Caching is done using Memcached and MCRouter.
  • The search service takes help from SolrCloud, with various Java services.
  • The messaging system uses WebSockets with many services in Java and Go.
  • Load balancing is done using HAproxy with Consul for configuration.
  • Most services talk to each other over gRPC; some use Thrift or JSON-over-HTTP.
  • The voice and video calling service was built in Elixir.
Data warehouse
  • Built using open source tools including Presto, Spark, Airflow, Hadoop and Kafka.
Eric Colson
Chief Algorithms Officer at Stitch Fix · 19 upvotes · 289.7K views
Amazon EC2 Container Service
Docker
PyTorch
R
Python
Presto
Apache Spark
Amazon S3
PostgreSQL
Kafka
#AWS
#Etl
#ML
#DataScience
#DataStack
#Data

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on YARN is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling YARN clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.
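
As a rough sketch of the decoupled storage/compute pattern described here (bucket names and columns are invented, not Stitch Fix's):

```python
# Sketch of decoupled storage (S3) and compute (Spark on YARN); paths are invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-job").getOrCreate()

# Read a PostgreSQL snapshot that has landed in the S3 warehouse...
orders = spark.read.parquet("s3://example-warehouse/snapshots/orders/")

# ...transform it, and write the result back to S3. Since compute holds no
# durable state, YARN clusters can be resized or replaced freely.
orders.filter("status = 'shipped'") \
      .write.mode("overwrite") \
      .parquet("s3://example-warehouse/derived/shipped_orders/")
```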

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into our systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.


Michael Mota
CEO & Founder at AlterEstate · 4 upvotes · 14.9K views
Django
RabbitMQ
Celery

Automations are what make a CRM powerful. With Celery and RabbitMQ we've been able to build automations that truly work for our clients: automatic daily reports, reminders for their activities, and important notifications about their clients' activities and actions on the website, among others.

We use Celery for basically everything that needs to be scheduled to run in the future, and RabbitMQ as our queue broker, which is amazing since it fully integrates with Django and Celery, storing task results in our database so we can see immediately if anything fails.
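
A minimal sketch of that kind of scheduling with Celery beat (the task name and timing are invented; storing results in Django's database is typically handled by the django-celery-results extension):

```python
# Illustrative beat schedule; the task name and timing are invented.
from celery import Celery
from celery.schedules import crontab

app = Celery("crm", broker="amqp://guest@localhost//")

app.conf.beat_schedule = {
    # e.g. the "automatic daily reports" mentioned above
    "daily-report": {
        "task": "crm.tasks.send_daily_report",
        "schedule": crontab(hour=7, minute=0),
    },
}
```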

See more
How developers use Celery and Apache Spark
Kalibrr uses Celery

All of our background jobs (e.g., image resizing, file uploading, email and SMS sending) are done through Celery (using Redis as its broker). Celery's scheduling and retrying features are especially useful for error-prone tasks, such as email and SMS sending.
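
A hedged sketch of that retry pattern (the broker URL, task, and provider call are placeholders, not Kalibrr's code):

```python
# Illustrative retry setup for an error-prone task; everything here is a placeholder.
from celery import Celery

app = Celery("jobs", broker="redis://localhost:6379/0")

class DeliveryError(Exception):
    """Placeholder for a provider-specific failure."""

def deliver(phone, body):
    """Placeholder for the actual SMS provider call."""

@app.task(bind=True, max_retries=3, default_retry_delay=60)
def send_sms(self, phone, body):
    try:
        deliver(phone, body)
    except DeliveryError as exc:
        # Re-enqueue the task; Celery waits default_retry_delay seconds
        # and gives up after max_retries attempts.
        raise self.retry(exc=exc)
```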

Cloudify uses Celery

For orchestrating the creation of the correct number of instances, managing errors and retries, and finally managing the deallocation of resources, we use RabbitMQ in conjunction with the Celery framework, along with a self-developed workflow engine.

MOKA Analytics uses Celery

We maintain a fork of Celery 3 that adds HTTPS support for Redis brokers. The Winning Model currently uses Celery 3 because Celery 4 dropped support for Windows.

We plan on migrating to Celery 4 once Azure ASE supports Linux apps.

Yaakov Gesher uses Celery

We used Celery, in combination with RabbitMQ and celery-beat, to run periodic tasks as well as some user-initiated long-running tasks on the server.

Dieter Adriaenssens uses Celery

Using Celery, the web service creates tasks that are executed by a background worker. Celery uses a RabbitMQ instance as a task queue.

Wei Chen uses Apache Spark

Spark is good at managing parallel data processing. We wrote a neat program to handle the terabytes of data we receive every day.

Ralic Lo uses Apache Spark

Used the Spark DataFrame API via SparkR for big data analysis.

BrainFinance uses Apache Spark

As part of our big data machine learning stack (SMACK).

Kalibrr uses Apache Spark

We use Apache Spark in computing our recommendations.

Dotmetrics uses Apache Spark

Big data analytics and nightly transformation jobs.

How much does Celery cost?
Pricing unavailable
How much does Apache Spark cost?
Pricing unavailable