Pig vs Apache Spark: What are the differences?

Developers describe Pig as a "Platform for analyzing large data sets". Pig is a dataflow programming environment for processing very large files. Pig's language is called Pig Latin. A Pig Latin program consists of a directed acyclic graph where each node represents an operation that transforms data. Operations are of two flavors: (1) relational-algebra style operations such as join, filter, and project; (2) functional-programming style operators such as map and reduce. On the other hand, Apache Spark is described as a "Fast and general engine for large-scale data processing". Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or in Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and newer workloads such as streaming, interactive queries, and machine learning.

Pig and Apache Spark can be primarily classified as "Big Data" tools.

Pig and Apache Spark are both open source tools. It seems that Apache Spark, with 22.5K GitHub stars and 19.4K forks, has more adoption than Pig, with 583 GitHub stars and 449 forks.

According to the StackShare community, Apache Spark has broader approval, being mentioned in 266 company stacks and 112 developer stacks, compared to Pig, which is listed in 9 company stacks and 4 developer stacks.

What is Pig?

Pig is a dataflow programming environment for processing very large files. Pig's language is called Pig Latin. A Pig Latin program consists of a directed acyclic graph where each node represents an operation that transforms data. Operations are of two flavors: (1) relational-algebra style operations such as join, filter, project; (2) functional-programming style operators such as map, reduce.
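
To make the idea of a directed acyclic graph of data-transforming operations concrete, here is a minimal sketch in plain Python. It is illustrative only: it is not Pig Latin, and the tiny in-memory users/visits datasets are hypothetical; it just shows the filter/join/project shape that a Pig Latin script would express over very large files.

    # Illustrative sketch only (not Pig Latin): each step below is one node in a
    # small dataflow that transforms tuples, mirroring Pig's relational-style
    # operations (filter, join, project).
    users = [("alice", "US"), ("bob", "DE")]          # hypothetical data
    visits = [("alice", "example.com"), ("alice", "foo.org"), ("bob", "bar.net")]

    # FILTER: keep only users located in the US.
    us_users = [u for u in users if u[1] == "US"]

    # JOIN: match visits to the filtered users on the user name.
    joined = [(name, site) for (name, _) in us_users
              for (visitor, site) in visits if visitor == name]

    # PROJECT: keep only the visited-site column.
    sites = [site for (_, site) in joined]

    print(sites)  # ['example.com', 'foo.org']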

What is Apache Spark?

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
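
As a rough illustration of what a small Spark batch job looks like, here is a minimal PySpark sketch. It assumes the pyspark package is installed and a local Spark runtime is available; the file name events.csv and the column names status and country are hypothetical placeholders.

    # Minimal PySpark batch job: read a CSV, filter rows, and aggregate.
    # Assumes pyspark is installed; "events.csv" and its columns are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("example").master("local[*]").getOrCreate()

    df = spark.read.csv("events.csv", header=True, inferSchema=True)

    summary = (df.filter(df["status"] == "ok")   # keep successful events
                 .groupBy("country")             # group by country code
                 .count())                       # count rows per country

    summary.show()
    spark.stop()

The same program shape applies when the input comes from HDFS, Hive, or another Hadoop-compatible source, which is what the compatibility claim above refers to.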

What are some alternatives to Pig and Apache Spark?
Capybara
Capybara helps you test web applications by simulating how a real user would interact with your app. It is agnostic about the driver running your tests and comes with Rack::Test and Selenium support built in. WebKit is supported through an external gem.
Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
Apache Flink
Apache Flink is an open source system for fast and versatile data analytics in clusters. Flink supports batch and streaming analytics in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala.
Presto
Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes, ranging from gigabytes to petabytes.
Druid
Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments. Druid excels as a data warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful calculations.
How developers use Pig and Apache Spark

Wei Chen uses Apache Spark

Spark is good at parallel data processing management. We wrote a neat program to handle the TBs of data we get every day.

Ralic Lo uses Apache Spark

Used the Spark DataFrame API via SparkR for big data analysis.

Kalibrr uses Apache Spark

We use Apache Spark in computing our recommendations.

BrainFinance uses Apache Spark

As part of our big data machine learning stack (SMACK).

Dotmetrics uses Apache Spark

Big data analytics and nightly transformation jobs.
