Apache Parquet vs Apache Spark


Apache Spark vs Apache Parquet: What are the differences?

Developers describe Apache Spark as "Fast and general engine for large-scale data processing". Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. On the other hand, Apache Parquet is described as "a free and open-source column-oriented data storage format". It is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model or programming language.

Apache Spark and Apache Parquet can be primarily classified as "Big Data" tools.

Some of the features offered by Apache Spark are:

  • Run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk
  • Write applications quickly in Java, Scala or Python
  • Combine SQL, streaming, and complex analytics
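
To make the "combine SQL, streaming, and complex analytics" point concrete, here is a minimal PySpark sketch that mixes the DataFrame API with plain SQL on the same data (the data and column names are made up for illustration, and a local pyspark installation is assumed):

```python
from pyspark.sql import SparkSession

# Start a local Spark session (assumes pyspark is installed).
spark = SparkSession.builder.appName("quick-demo").getOrCreate()

# A tiny in-memory DataFrame; in practice this could come from HDFS, Kafka, etc.
events = spark.createDataFrame(
    [("click", 3), ("view", 10), ("click", 7)],
    ["event_type", "count"],
)

# Mix the DataFrame API with plain SQL on the same data.
events.createOrReplaceTempView("events")
spark.sql("""
    SELECT event_type, SUM(count) AS total
    FROM events
    GROUP BY event_type
""").show()

spark.stop()
```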

On the other hand, Apache Parquet provides the following key features:

  • Columnar storage format
  • Type-specific encoding
  • Pig integration
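
Because Parquet is columnar, a reader that needs only a few columns can skip the rest of the file. A minimal sketch of writing and then reading Parquet from PySpark (the path /tmp/demo.parquet and the schema are illustrative assumptions):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-demo").getOrCreate()

df = spark.createDataFrame(
    [(1, "alice", 34.0), (2, "bob", 12.5)],
    ["id", "name", "amount"],
)

# Write a Parquet file; column types and encodings are stored with the data.
df.write.mode("overwrite").parquet("/tmp/demo.parquet")

# Reading back only two columns lets Spark prune the rest at the storage layer.
spark.read.parquet("/tmp/demo.parquet").select("id", "amount").show()

spark.stop()
```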

Apache Spark and Apache Parquet are both open-source tools. Apache Spark, with 23.2K GitHub stars and 19.9K forks, appears to be more popular than Apache Parquet, with 918 stars and 805 forks.

According to the StackShare community, Apache Spark has broader approval, being mentioned in 360 company stacks and 587 developer stacks, compared to Apache Parquet, which is listed in 6 company stacks and 7 developer stacks.

Advice on Apache Parquet and Apache Spark
Nilesh Akhade
Technical Architect at Self Employed · 5 upvotes · 363.5K views

We have a Kafka topic with events of type A and type B. We need to perform an inner join on both types of events using a common field (primary key). The joined events are to be inserted into Elasticsearch.

Usually, type A and type B events with the same key arrive within 15 minutes of each other. But in some cases they may be far apart, say 6 hours. Sometimes an event of one of the types never arrives.

In all cases, we should be able to find joined events immediately after they are joined, and not-joined events within 15 minutes.

Replies (2)
Recommends: Elasticsearch

The first solution that came to me is to use upsert to update Elasticsearch:

  1. Use the primary-key as ES document id
  2. Upsert the records to ES as soon as you receive them. As you are using upsert, the 2nd record of the same primary-key will not overwrite the 1st one, but will be merged with it.

Cons: The load on ES will be higher, due to upsert.
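
For illustration, a minimal sketch of that upsert using the official Python Elasticsearch client (the index name joined-events, the example fields, and the 8.x-style keyword arguments are assumptions; older clients take the same options inside a request body):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

def upsert_event(primary_key: str, record: dict) -> None:
    """Merge this record into the document whose id is the primary key.

    The first event of a key creates the document; the second event of the
    same key is merged into it field by field (a partial-document update),
    so neither side overwrites the other.
    """
    es.update(
        index="joined-events",   # assumed index name
        id=primary_key,          # the primary key doubles as the document id
        doc=record,              # fields from this event
        doc_as_upsert=True,      # create the document if it does not exist yet
    )

# Example: a type-A event arrives, then later a type-B event with the same key.
upsert_event("order-42", {"a_field": "payment_received"})
upsert_event("order-42", {"b_field": "shipment_created"})
```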

To use Flink:

  1. Create a KeyedStream keyed by the primary key
  2. In the ProcessFunction, save the first record in a State. At the same time, create a Timer for 15 minutes in the future
  3. When the 2nd record comes, read the 1st record from the State, merge the two, send out the result, and clear the State (and the Timer, if it has not fired yet)
  4. When the Timer fires, read the 1st record from the State and send it out as the output record.
  5. Have a 2nd Timer of 6 hours (or more) to clean up the State, if you are not using Windowing

Pro: this makes sense if you already have Flink ingesting this stream. Otherwise, I would just go with the 1st solution.
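
As a rough illustration of steps 1 to 4 above, here is a PyFlink sketch using a KeyedProcessFunction with keyed state and a processing-time timer (the dict-shaped events, the "key" field, and the omission of the 6-hour cleanup timer from step 5 are simplifications, not part of the original reply):

```python
from pyflink.common.typeinfo import Types
from pyflink.datastream.functions import KeyedProcessFunction, RuntimeContext
from pyflink.datastream.state import ValueStateDescriptor

FIFTEEN_MINUTES_MS = 15 * 60 * 1000

class JoinOrTimeout(KeyedProcessFunction):
    """Join the two events of a key, or emit the lone event after 15 minutes."""

    def open(self, runtime_context: RuntimeContext):
        # Keyed state holding the first event seen for this primary key.
        self.first = runtime_context.get_state(
            ValueStateDescriptor("first-event", Types.PICKLED_BYTE_ARRAY()))

    def process_element(self, value, ctx: 'KeyedProcessFunction.Context'):
        stored = self.first.value()
        if stored is None:
            # First event for this key: remember it and start the 15-minute timer.
            self.first.update(value)
            ctx.timer_service().register_processing_time_timer(
                ctx.timer_service().current_processing_time() + FIFTEEN_MINUTES_MS)
        else:
            # Second event: merge the two dicts, emit, and clear the buffered state.
            yield {**stored, **value}
            self.first.clear()

    def on_timer(self, timestamp, ctx: 'KeyedProcessFunction.OnTimerContext'):
        # Timer fired before a partner arrived: emit the lone event as-is.
        stored = self.first.value()
        if stored is not None:
            yield stored
            self.first.clear()

# Usage (assuming `events` is a DataStream of dicts that carry a "key" field):
# joined = events.key_by(lambda e: e["key"]).process(JoinOrTimeout())
```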

Akshaya Rawat
Senior Specialist Platform at Publicis Sapient · 3 upvotes · 235.6K views
Recommends: Apache Spark

Please refer to Spark's "Structured Streaming" feature, specifically the "Stream-Stream Joins" section at https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#stream-stream-joins. In short, you need to define watermark delays on both inputs and define a constraint on event time across the two inputs.
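
For illustration, a hedged PySpark Structured Streaming sketch of that stream-stream join, following the pattern in the linked guide (the Kafka topic names, field names, 6-hour watermarks, and the console sink are assumptions; the spark-sql-kafka connector package must be on the classpath):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr, from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("ab-stream-join").getOrCreate()

# Assumed event shape: a shared primary key plus an event timestamp.
schema = StructType([
    StructField("key", StringType()),
    StructField("event_time", TimestampType()),
])

def read_topic(topic: str, prefix: str):
    """Read one Kafka topic and parse its JSON value (field names are illustrative)."""
    raw = (spark.readStream.format("kafka")
           .option("kafka.bootstrap.servers", "localhost:9092")
           .option("subscribe", topic)
           .load())
    parsed = raw.select(from_json(col("value").cast("string"), schema).alias("e"))
    return parsed.select(
        col("e.key").alias(f"{prefix}_key"),
        col("e.event_time").alias(f"{prefix}_time"),
    )

# Watermark delays bound how long state for unmatched events is kept.
a = read_topic("events-a", "a").withWatermark("a_time", "6 hours")
b = read_topic("events-b", "b").withWatermark("b_time", "6 hours")

# Time constraint across the two inputs, as the guide requires.
joined = a.join(
    b,
    expr("""
        a_key = b_key AND
        b_time BETWEEN a_time - INTERVAL 6 HOURS AND a_time + INTERVAL 6 HOURS
    """),
    "inner",
)

query = joined.writeStream.format("console").start()
query.awaitTermination()
```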

Pros of Apache Parquet

No pros have been listed yet.

Pros of Apache Spark (upvote counts in parentheses)

  • Open-source (59)
  • Fast and flexible (48)
  • One platform for every big data problem (8)
  • Great for distributed SQL-like applications (7)
  • Easy to install and to use (6)
  • Works well for most data science use cases (3)
  • Interactive query (2)
  • In-memory computation (2)
  • Machine learning libraries, streaming in real time (2)

Cons of Apache Parquet

No cons have been listed yet.

Cons of Apache Spark (upvote counts in parentheses)

  • Speed (3)

      What are some alternatives to Apache Parquet and Apache Spark?
      Avro
      It is a row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format.
      Apache Kudu
      A new addition to the open source Apache Hadoop ecosystem, Kudu completes Hadoop's storage layer to enable fast analytics on fast data.
      JSON
      JavaScript Object Notation is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language.
      Cassandra
Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL.
      HBase
Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Apache Hadoop.