Pig vs Presto: What are the differences?

  1. Data Processing Language: Pig uses a language called Pig Latin, a high-level procedural data flow language for working with Hadoop. Presto, on the other hand, uses SQL as its query language, making it more approachable for anyone familiar with traditional database querying (a side-by-side sketch follows this list).
  2. Data Types: Pig has a limited set of data types compared to Presto, which supports a wider range of data types including arrays, maps, and structs out of the box. This allows for more flexibility in data manipulation and storage.
  3. Execution Engine: Pig relies on MapReduce for execution, which can lead to slower performance because data is processed in batch jobs and intermediate results are written to disk between stages. Presto, in contrast, has its own in-memory distributed SQL execution engine that pipelines data between stages, allowing more efficient, interactive processing in a distributed environment.
  4. Ecosystem Integration: Pig is closely integrated with the Hadoop ecosystem and is commonly used for ETL (extract, transform, load) jobs within Hadoop clusters. Presto, through its connector architecture, can query various data sources including Hadoop, relational databases, and cloud storage systems, making it a more versatile tool for working with diverse data.
  5. Data Locality: Pig inherits MapReduce's locality-aware task placement, but because every MapReduce stage materializes its intermediate output to disk, multi-stage Pig jobs still incur significant data shuffling. Presto schedules splits close to the data when the connector exposes location information and streams intermediate results between stages in memory, minimizing data movement across the network and maximizing parallel processing on the nodes where the data resides.
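
As a rough illustration of the language difference, here is a hypothetical filter-and-aggregate job (table, field, and path names are made up) written first in Pig Latin and then as the equivalent Presto SQL:

```pig
-- Pig Latin: a procedural data flow, one relation per statement
logs    = LOAD 'hdfs:///data/page_views' USING PigStorage('\t')
          AS (user_id:chararray, url:chararray, duration:int);
long_v  = FILTER logs BY duration > 60;
by_url  = GROUP long_v BY url;
counts  = FOREACH by_url GENERATE group AS url, COUNT(long_v) AS views;
STORE counts INTO 'hdfs:///output/long_views';
```

```sql
-- Presto SQL: the same result as a single declarative query
-- (assumes the data is exposed as a table through the Hive connector)
SELECT url, COUNT(*) AS views
FROM hive.web.page_views
WHERE duration > 60
GROUP BY url;
```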

In summary, Pig and Presto differ in their data processing languages, supported data types, execution engines, ecosystem integration, and handling of data locality.

Decisions about Pig and Presto
Ashish Singh
Tech Lead, Big Data Platform at Pinterest · 38 upvotes · 2.9M views

To provide employees with the critical need of interactive querying, we’ve worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest’s scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, cluster auto-scaling, graceful cluster shutdown, and impersonation support for the LDAP authenticator.

Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.

We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances with over 100 TB of memory and 14K vCPU cores in total. Within Pinterest, more than 1,000 monthly active users (out of 1,600+ Pinterest employees) use Presto, running about 400K queries on these clusters per month.

Each query submitted to a Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest that we talked about in a previous post. Each query is logged both when it is submitted and when it finishes. When a Presto cluster crashes, we will have query-submitted events without corresponding query-finished events, and these orphaned events let us capture the effect of cluster crashes over time.
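
A minimal sketch of how that crash impact could be surfaced from the logged events, assuming a hypothetical query_events table (event_type, query_id, cluster_name, and event_time are illustrative column names, not Pinterest's actual schema):

```sql
-- Queries that were submitted but never finished, bucketed by hour:
-- spikes in this count line up with cluster crash windows.
SELECT s.cluster_name,
       date_trunc('hour', s.event_time) AS crash_hour,
       COUNT(*)                         AS unfinished_queries
FROM query_events s
LEFT JOIN query_events f
       ON  f.query_id   = s.query_id
       AND f.event_type = 'finished'
WHERE s.event_type = 'submitted'
  AND f.query_id IS NULL
GROUP BY 1, 2
ORDER BY crash_hour;
```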

Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform gives us the ability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on Kubernetes is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.

#BigData #AWS #DataScience #DataEngineering

Karthik Raveendran
CPO at Attinad Software · 3 upvotes · 208.3K views

The platform deals with time series data from sensors, aggregated against things (event data that originates at periodic intervals). We use Cassandra as our distributed database to store the time series data. Aggregated data insights from Cassandra are delivered as a web API for consumption by other applications. Presto, as a distributed SQL query engine, can provide faster execution times, provided the queries are tuned for proper distribution across the cluster. Another objective we had was to combine Cassandra table data with other business data from an RDBMS or other big data systems, where Presto, through its connector architecture, opens up a whole lot of options for us.
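
A sketch of what such a cross-source query could look like, assuming Cassandra and MySQL connectors have been configured as Presto catalogs (all catalog, schema, table, and column names below are hypothetical placeholders):

```sql
-- Join sensor readings stored in Cassandra with device metadata from an RDBMS
-- in a single Presto query; each catalog maps to a connector configuration.
SELECT d.device_name,
       avg(r.value) AS avg_reading
FROM cassandra.iot.sensor_readings AS r
JOIN mysql.inventory.devices       AS d
  ON d.device_id = r.device_id
WHERE r.reading_time >= timestamp '2019-01-01 00:00:00'
GROUP BY d.device_name;
```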

Pros of Pig
  • 2
    Finer-grained control on parallelization
  • 1
    Proven at Petabyte scale
  • 1
    Open-source
  • 1
    Join optimizations for highly skewed data

Pros of Presto
  • 18
    Works directly on files in S3 (no ETL)
  • 13
    Open-source
  • 12
    Join multiple databases
  • 10
    Scalable
  • 7
    Gets ready in minutes
  • 6
    MPP



What is Pig?

Pig is a dataflow programming environment for processing very large files. Pig's language is called Pig Latin. A Pig Latin program consists of a directed acyclic graph where each node represents an operation that transforms data. Operations are of two flavors: (1) relational-algebra style operations such as join, filter, project; (2) functional-programming style operators such as map, reduce.
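
A minimal hypothetical Pig Latin script illustrating that model (file names and fields are made up); each statement adds a node to the dataflow graph, mixing relational-style and functional-style operations:

```pig
users  = LOAD 'users.tsv'  AS (user_id:chararray, country:chararray);
clicks = LOAD 'clicks.tsv' AS (user_id:chararray, url:chararray);
joined = JOIN clicks BY user_id, users BY user_id;   -- relational-style join
us     = FILTER joined BY users::country == 'US';    -- relational-style filter
pairs  = FOREACH us GENERATE clicks::url AS url;     -- projection over each tuple
DUMP pairs;                                          -- triggers execution of the graph
```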

What is Presto?

Distributed SQL Query Engine for Big Data


What are some alternatives to Pig and Presto?
Capybara
Capybara helps you test web applications by simulating how a real user would interact with your app. It is agnostic about the driver running your tests and comes with Rack::Test and Selenium support built in. WebKit is supported through an external gem.
Apache Spark
Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
Splunk
It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data.
Apache Flink
Apache Flink is an open source system for fast and versatile data analytics in clusters. Flink supports batch and streaming analytics, in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala.
Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.