What is Apache Parquet?
It is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model or programming language.
Apache Parquet is a tool in the Databases category of a tech stack.
Apache Parquet is an open source tool; its source repository is available on GitHub.
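As a quick illustration of the columnar format, here is a minimal sketch of writing and reading a Parquet file with the pyarrow Python library (one of several Parquet implementations); the column names and file path are illustrative, not taken from this page.

```python
# Minimal write/read round trip with pyarrow; all names below are made up.
import pyarrow as pa
import pyarrow.parquet as pq

# Build an in-memory table. Parquet stores each column contiguously,
# which is what enables column-at-a-time scans and good compression.
table = pa.table({
    "user_id": [1, 2, 3],
    "country": ["US", "GB", "IN"],
    "spend": [10.5, 3.2, 7.8],
})

pq.write_table(table, "events.parquet")

# Read back only the columns that are actually needed.
subset = pq.read_table("events.parquet", columns=["user_id", "spend"])
print(subset.to_pydict())
```

Because each column is stored and compressed separately, a reader can fetch just the columns a query touches instead of scanning whole rows.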
Who uses Apache Parquet?
Companies
25 companies reportedly use Apache Parquet in their tech stacks, including Walmart, Skyscanner, and platform.
Developers
65 developers on StackShare have stated that they use Apache Parquet.
Apache Parquet Integrations
Java, Hadoop, Apache Hive, Apache Impala, and Apache Thrift are some of the popular tools that integrate with Apache Parquet. Here's a list of all 11 tools that integrate with Apache Parquet.
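Because a Parquet file embeds its own schema in the footer, data written by one of these tools (for example a Hive or Spark job running on Hadoop) can usually be read by another without a shared metastore. The sketch below assumes a Hive-style partitioned directory layout (year=.../month=... subdirectories), which is a common convention but not something stated on this page.

```python
# Hedged sketch: open a Hive-partitioned Parquet dataset with pyarrow.
# The path and partitioning scheme are assumptions made for illustration.
import pyarrow.dataset as ds

dataset = ds.dataset("/data/events", format="parquet", partitioning="hive")
print(dataset.schema)                      # schema recovered from the files themselves
table = dataset.to_table(columns=["user_id", "spend"])
```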
Decisions about Apache Parquet
Here are some stack decisions, common use cases and reviews by companies and developers who chose Apache Parquet in their tech stack.
Pardha Saradhi
Technical Lead at Incred Financial Solutions
Hi,
We are currently storing the data in Amazon S3 using Apache Parquet format. We are using Presto to query the data from S3 and catalog it using AWS Glue catalog. We have Metabase sitting on top of Presto, where our reports are present. Currently, Presto is becoming too costly for us, and we are looking for alternatives for it but want to use the remaining setup (S3, Metabase) as much as possible. Please suggest alternative approaches.
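For readers with a similar setup, the sketch below shows one way to touch the same S3-hosted Parquet data directly with pyarrow, which can be useful for ad-hoc checks (it is not a drop-in replacement for a query engine like Presto). The bucket, prefix, region, and column names are hypothetical.

```python
# Read Parquet files straight from S3 with pyarrow; all names are made up.
import pyarrow.dataset as ds
import pyarrow.fs as pafs

s3 = pafs.S3FileSystem(region="us-east-1")   # credentials come from the environment
dataset = ds.dataset("my-bucket/warehouse/orders", filesystem=s3, format="parquet")

# Only the referenced columns, and the row groups whose statistics can
# match the filter, are downloaded and decoded.
table = dataset.to_table(columns=["order_id", "amount"],
                         filter=ds.field("amount") > 100)
print(table.num_rows)
```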
Apache Parquet's Features
- Columnar storage format
- Type-specific encoding
- Pig integration
- Cascading integration
- Crunch integration
- Apache Arrow integration
- Apache Scrooge integration
- Adaptive dictionary encoding
- Predicate pushdown
- Column stats (both demonstrated in the sketch after this list)
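Two of the features above, predicate pushdown and column stats, are closely related: Parquet records min/max statistics per column in each row group, and query engines use them to skip row groups that cannot match a filter. Here is a minimal sketch with pyarrow; the file and column names are illustrative only.

```python
# Write a file with several row groups, then show per-column statistics
# and a filtered read that can skip non-matching row groups.
import pyarrow as pa
import pyarrow.parquet as pq
import pyarrow.dataset as ds

pq.write_table(
    pa.table({"day": list(range(100)), "value": [i * 1.5 for i in range(100)]}),
    "metrics.parquet",
    row_group_size=25,                     # four row groups, each with its own stats
)

# Column stats: min/max per column are stored in the file footer.
meta = pq.ParquetFile("metrics.parquet").metadata
print(meta.row_group(0).column(0).statistics)

# Predicate pushdown: row groups whose stats rule out the filter are skipped.
table = ds.dataset("metrics.parquet", format="parquet").to_table(
    filter=ds.field("day") >= 75
)
print(table.num_rows)                      # 25
```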
Apache Parquet Alternatives & Comparisons
What are some alternatives to Apache Parquet?
Avro
It is a row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format.
Apache Kudu
A new addition to the open source Apache Hadoop ecosystem, Kudu completes Hadoop's storage layer to enable fast analytics on fast data.
JSON
JavaScript Object Notation is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language.
Cassandra
Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added to and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL.
HBase
Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Apache Hadoop.