Apache Spark vs Druid: What are the differences?

Introduction:

Apache Spark and Druid are two popular open-source tools for big data processing and analytics. While both are built to handle large volumes of data, they differ in architecture, data-processing capabilities, and target use cases.

1. Data Processing Model:

Apache Spark is a distributed computing system that offers in-memory processing and supports batch processing, real-time streaming, and machine learning workloads. It provides a high-level API for developers to write distributed data processing applications.
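To make that high-level API concrete, here is a minimal Scala sketch of a batch aggregation (the input path and field name are hypothetical):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventCounts {
  def main(args: Array[String]): Unit = {
    // Master and resources are supplied by spark-submit.
    val spark = SparkSession.builder.appName("event-counts").getOrCreate()

    // Load semi-structured data; Spark infers a schema from the JSON.
    val events = spark.read.json("hdfs:///data/events.json") // hypothetical path

    // A declarative aggregation that Spark executes in parallel on the cluster.
    events.groupBy("eventType")
      .agg(count("*").as("n"))
      .orderBy(desc("n"))
      .show()

    spark.stop()
  }
}
```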

Druid, on the other hand, is a real-time analytics database specifically designed for handling time-series data. It is optimized for fast querying and aggregation of large datasets and provides low-latency access to real-time data.

2. Workload Types:

Spark is well-suited for a wide range of data processing use cases, including batch processing, real-time streaming, machine learning, and graph processing. It can handle both structured and unstructured data and can scale horizontally by adding more compute resources.

Druid, by contrast, is built for use cases that require fast aggregation and querying of time-series data, such as monitoring real-time metrics, event analytics, and log analysis. It is not designed for transactional data processing or complex, multi-stage analytics.

3. Data Storage and Indexing:

Spark itself does not store data; it typically reads from a distributed file system such as Hadoop HDFS (or from object storage) and supports file formats like Parquet, Avro, and ORC. It relies on a distributed computing model in which data is loaded into memory across the cluster for processing.

Druid, for its part, has its own column-oriented storage format that is optimized for time-series data. It uses a combination of memory and disk-based storage to achieve fast querying and provides indexing strategies such as bitmap and inverted indexes.

4. Querying Capabilities:

Spark provides Spark SQL, a module that lets users run SQL queries on structured data. It also offers APIs in programming languages such as Scala, Java, and Python for more flexibility, and it can serve both batch and streaming queries.
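As a minimal sketch of mixing the DataFrame API with SQL (the file path, view, and column names are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

// Read a columnar file produced elsewhere in the pipeline.
val metrics = spark.read.parquet("hdfs:///warehouse/metrics") // hypothetical path

// Expose the DataFrame to SQL and query it declaratively.
metrics.createOrReplaceTempView("metrics")
spark.sql(
  """SELECT host, AVG(latency_ms) AS avg_latency
    |FROM metrics
    |GROUP BY host
    |ORDER BY avg_latency DESC""".stripMargin
).show()
```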

Druid's native query language is JSON submitted over HTTP; it also provides a SQL layer, Druid SQL, that compiles down to those native queries. Both are designed specifically for querying time-series data and support fast aggregations, filtering, and complex queries on large datasets.
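As a hedged sketch, a Druid SQL query can be posted as JSON to the router's documented /druid/v2/sql endpoint (the host and the "metrics" datasource are hypothetical):

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

val client = HttpClient.newHttpClient()

// Druid SQL queries are posted as a small JSON envelope to the router.
val sql = "SELECT TIME_FLOOR(__time, 'PT1H') AS hour, COUNT(*) AS events " +
  "FROM metrics GROUP BY 1 ORDER BY 1"
val request = HttpRequest.newBuilder()
  .uri(URI.create("http://druid-router:8888/druid/v2/sql")) // hypothetical host
  .header("Content-Type", "application/json")
  .POST(HttpRequest.BodyPublishers.ofString(s"""{"query": "$sql"}"""))
  .build()

val response = client.send(request, HttpResponse.BodyHandlers.ofString())
println(response.body()) // JSON array of result rows
```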

5. Scalability:

Spark is known for its scalability and ability to handle large-scale data processing. It can scale horizontally by adding more nodes to the cluster, allowing it to process petabytes of data. Spark also provides built-in fault-tolerance mechanisms for handling failures.

Druid is designed to scale horizontally as well, but it is more optimized for low-latency, real-time queries on time-series data. It achieves high query throughput by leveraging distributed data storage, parallel processing, and caching techniques.

6. Ecosystem Integration:

Spark has a rich ecosystem with support for various data sources, connectors, and libraries. It integrates well with other components in the Hadoop ecosystem, including HDFS, Hive, and HBase. It also provides connectors for popular databases like MySQL, PostgreSQL, and Cassandra.
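For example, a relational table can be pulled in through Spark's built-in JDBC source (connection details are hypothetical, and the PostgreSQL JDBC driver must be on the classpath); this reuses a SparkSession like the one built in the earlier sketches:

```scala
// `spark` is an active SparkSession, as in the earlier sketches.
val ordersDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://db-host:5432/shop") // hypothetical database
  .option("dbtable", "public.orders")
  .option("user", "reporting")
  .option("password", sys.env("DB_PASSWORD"))
  .load()

ordersDF.groupBy("status").count().show()
```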

Druid also has a growing ecosystem, but its focus is primarily on real-time analytics use cases. It provides connectors for data sources like Kafka and supports integration with visualization tools like Superset and Tableau.

In summary, Apache Spark and Druid are both powerful tools for big data processing and analytics. Spark is a general-purpose distributed computing system suitable for a wide range of workloads, while Druid is specifically optimized for real-time analytics on time-series data.

Advice on Druid and Apache Spark
Nilesh Akhade, Technical Architect (self-employed)

We have a Kafka topic containing events of two types, A and B. We need to perform an inner join on both types of events using a common field (a primary key), and insert the joined events into Elasticsearch.

Usually, type A and type B events with the same key arrive within about 15 minutes of each other, but in some cases they can be as much as 6 hours apart, and sometimes an event of one type never arrives at all.

In all cases, we should be able to see joined events immediately after they are joined, and unjoined events within 15 minutes.

Replies (2)
Recommends: Elasticsearch

The first solution that came to me is to use upserts to update Elasticsearch:

  1. Use the primary key as the ES document id.
  2. Upsert each record to ES as soon as you receive it. Because it is an upsert, the second record with the same primary key will not overwrite the first one but will be merged with it (see the sketch below).

Con: the load on ES will be higher because of the upserts.
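A minimal sketch of that upsert using Elasticsearch's documented _update API with doc_as_upsert over plain HTTP (the host, index name, and payloads are hypothetical):

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

val http = HttpClient.newHttpClient()

// Merge `docJson`'s fields into the document with this id, creating it if absent.
def upsert(id: String, docJson: String): Int = {
  val body = s"""{"doc": $docJson, "doc_as_upsert": true}"""
  val request = HttpRequest.newBuilder()
    .uri(URI.create(s"http://es-host:9200/joined-events/_update/$id")) // hypothetical
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString(body))
    .build()
  http.send(request, HttpResponse.BodyHandlers.ofString()).statusCode()
}

// Whichever event arrives first creates the doc; the other merges into it.
upsert("order-42", """{"a_payload": {"price": 10}}""")
upsert("order-42", """{"b_payload": {"qty": 3}}""")
```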

Alternatively, to use Flink (sketched below):

  1. Create a KeyedStream partitioned by the primary key.
  2. In a KeyedProcessFunction, save the first record in state; at the same time, register a timer for 15 minutes in the future.
  3. When the second record arrives, read the first record from state, merge the two, emit the result, and clear the state and the timer if it has not fired.
  4. When the timer fires, read the first record from state and emit it on its own.
  5. If you are not using windowing, add a second timer of 6 hours (or more) to clean up the state.

Pro: this fits naturally if you already have Flink ingesting this stream. Otherwise, I would just go with the first solution.
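A condensed Scala sketch of steps 1-4 (the event schema, key field, and merge logic are hypothetical, and the 6-hour cleanup timer from step 5 is omitted for brevity):

```scala
import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.util.Collector

case class Event(key: String, kind: String, payload: String) // hypothetical schema

class JoinFn extends KeyedProcessFunction[String, Event, Event] {
  // Per-key state holding the first event seen for this key.
  private var pending: ValueState[Event] = _

  override def open(parameters: Configuration): Unit =
    pending = getRuntimeContext.getState(
      new ValueStateDescriptor[Event]("pending", classOf[Event]))

  override def processElement(
      e: Event,
      ctx: KeyedProcessFunction[String, Event, Event]#Context,
      out: Collector[Event]): Unit = {
    val first = pending.value()
    if (first == null) {
      // First event for this key: stash it and arm a 15-minute timer.
      pending.update(e)
      ctx.timerService.registerProcessingTimeTimer(
        ctx.timerService.currentProcessingTime + 15 * 60 * 1000L)
    } else {
      // Second event: merge (naively concatenating payloads here) and emit.
      out.collect(first.copy(kind = "joined", payload = first.payload + e.payload))
      pending.clear()
    }
  }

  override def onTimer(
      ts: Long,
      ctx: KeyedProcessFunction[String, Event, Event]#OnTimerContext,
      out: Collector[Event]): Unit = {
    // Timer fired with no partner yet: emit the unjoined event within 15 minutes.
    // State is kept so a very late partner (the 6-hour case) can still join.
    val first = pending.value()
    if (first != null) out.collect(first)
  }
}

// Usage: events.keyBy(_.key).process(new JoinFn)
```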

Akshaya Rawat, Senior Specialist Platform at Publicis Sapient
Recommends: Apache Spark

Please refer to the "Structured Streaming" feature of Spark, specifically the "Stream-Stream Joins" section at https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#stream-stream-joins . In short, you need to define watermark delays on both inputs and define a constraint on event time across the two inputs.
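A hedged Scala sketch of that stream-stream join for this use case (the broker, topic names, and the 6-hour bound are assumptions drawn from the question; it requires the spark-sql-kafka package):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr

val spark = SparkSession.builder.appName("ab-join").getOrCreate()

// Read one event type from Kafka; real parsing of `value` is elided here.
def stream(topic: String) = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
  .option("subscribe", topic)
  .load()
  .selectExpr("CAST(key AS STRING) AS id", "timestamp")

val a = stream("events-a")
  .withColumnRenamed("id", "aId").withColumnRenamed("timestamp", "aTime")
  .withWatermark("aTime", "6 hours") // how late an A event may arrive
val b = stream("events-b")
  .withColumnRenamed("id", "bId").withColumnRenamed("timestamp", "bTime")
  .withWatermark("bTime", "6 hours")

// Inner join on the key with an event-time constraint, so Spark can bound
// its state instead of buffering both streams forever.
val joined = a.join(b, expr(
  "aId = bId AND bTime BETWEEN aTime - INTERVAL 6 HOURS AND aTime + INTERVAL 6 HOURS"))

// For the real pipeline you would write to Elasticsearch (e.g. via the
// es-spark connector); the console sink is only for illustration.
joined.writeStream.format("console").start().awaitTermination()
```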

Pros of Druid
  • Real-time aggregations (15)
  • Batch and real-time ingestion (6)
  • OLAP (5)
  • OLAP + OLTP (3)
  • Combining stream and historical analytics (2)
  • OLTP (1)

Pros of Apache Spark
  • Open-source (61)
  • Fast and flexible (48)
  • One platform for every big data problem (8)
  • Great for distributed SQL-like applications (8)
  • Easy to install and use (6)
  • Works well for most data science use cases (3)
  • Interactive query (2)
  • Machine learning libraries, streaming in real time (2)
  • In-memory computation (2)


Cons of Druid
  • Limited SQL support (3)
  • Joins are not supported well (2)
  • Complexity (1)

Cons of Apache Spark
  • Speed (4)



What is Druid?

Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments. Druid excels as a data-warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful calculations.

What is Apache Spark?

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.


What are some alternatives to Druid and Apache Spark?
HBase
Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Apache Hadoop.
MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
Cassandra
Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added to and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL.
Prometheus
Prometheus is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.
Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack).
See all alternatives