AWS Glue vs Presto vs Apache Spark

What is AWS Glue?

A fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.
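
As a rough illustration of what that looks like in practice, here is a minimal sketch of a Glue ETL script using Glue's PySpark library; the database, table, and bucket names are hypothetical.

import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job boilerplate: wrap a SparkContext in a GlueContext.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a table that a Glue crawler has already catalogued.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Transform: rename and retype columns.
mapped = ApplyMapping.apply(frame=orders, mappings=[
    ("order_id", "string", "order_id", "string"),
    ("amount", "string", "amount", "double"),
])

# Load: write the result back to S3 as Parquet, ready for analytics.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-curated-bucket/orders/"},
    format="parquet",
)
job.commit()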

What is Presto?

Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes ranging from gigabytes to petabytes.
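
As a rough sketch of what querying Presto looks like from Python (assuming the presto-python-client package; the host, catalog, and table names are hypothetical):

import prestodb

# Connect to a Presto coordinator; the catalog/schema decide which connector is queried.
conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="web",
)

cur = conn.cursor()
# The same ANSI SQL can target Hive tables, MySQL, Kafka, and other connectors.
cur.execute(
    "SELECT country, COUNT(*) AS visits "
    "FROM pageviews GROUP BY country ORDER BY visits DESC LIMIT 10"
)
for country, visits in cur.fetchall():
    print(country, visits)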

What is Apache Spark?

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
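
As a minimal PySpark sketch of both styles of use (the input path and column names are hypothetical):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example").getOrCreate()

# Batch: load files (from HDFS, S3, etc.) into a DataFrame.
events = spark.read.json("hdfs:///data/events/")

# Interactive-style queries: the same data can be queried with SQL ...
events.createOrReplaceTempView("events")
daily = spark.sql("SELECT date, COUNT(*) AS n FROM events GROUP BY date")

# ... or with the DataFrame API.
daily_df = events.groupBy("date").agg(F.count("*").alias("n"))

daily.show()
spark.stop()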

What are some alternatives to AWS Glue, Presto, and Apache Spark?
AWS Data Pipeline
AWS Data Pipeline is a web service that provides a simple management system for data-driven workflows. Using AWS Data Pipeline, you define a pipeline composed of the “data sources” that contain your data, the “activities” or business logic such as EMR jobs or SQL queries, and the “schedule” on which your business logic executes. For example, you could define a job that, every hour, runs an Amazon Elastic MapReduce (Amazon EMR)–based analysis on that hour’s Amazon Simple Storage Service (Amazon S3) log data, loads the results into a relational database for future lookup, and then automatically sends you a daily summary email.
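
As a rough boto3 sketch of defining and activating a pipeline (the names are hypothetical, and a real definition would also declare the S3 data nodes and EmrActivity described above):

import boto3

dp = boto3.client("datapipeline", region_name="us-east-1")

# Register an empty pipeline shell.
pipeline_id = dp.create_pipeline(
    name="hourly-log-summary", uniqueId="hourly-log-summary-v1")["pipelineId"]

# Minimal definition: a default object plus an hourly schedule.
dp.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[
        {"id": "Default", "name": "Default", "fields": [
            {"key": "scheduleType", "stringValue": "cron"},
            {"key": "schedule", "refValue": "HourlySchedule"},
        ]},
        {"id": "HourlySchedule", "name": "HourlySchedule", "fields": [
            {"key": "type", "stringValue": "Schedule"},
            {"key": "period", "stringValue": "1 hour"},
            {"key": "startDateTime", "stringValue": "2021-01-01T00:00:00"},
        ]},
    ],
)

# Nothing runs until the pipeline is activated.
dp.activate_pipeline(pipelineId=pipeline_id)
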
Airflow
Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command-line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
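
A minimal DAG sketch (Airflow 2.x style; the task commands are placeholders):

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    transform = BashOperator(task_id="transform", bash_command="echo transform")
    load = BashOperator(task_id="load", bash_command="echo load")

    # The >> operator declares the dependency edges of the DAG.
    extract >> transform >> load
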
Talend
It is an open source software integration platform that helps you effortlessly turn data into business insights. It uses native code generation that lets you run your data pipelines seamlessly across all cloud providers and get optimized performance on all platforms.
Alooma
Get the power of big data in minutes with Alooma and Amazon Redshift. Simply build your pipelines and map your events using Alooma’s friendly mapping interface. Query, analyze, visualize, and predict now.
Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
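
As a rough boto3 sketch of running an Athena query (the database, table, and result bucket are hypothetical):

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Queries run asynchronously; results land in the S3 location you choose.
resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while True:
    state = athena.get_query_execution(
        QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
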
Decisions about AWS Glue, Presto, and Apache Spark

StackShare Editors
Hadoop · Apache Spark · Presto

Around 2015, the growing use of Uber’s data exposed limitations in the ETL- and Vertica-centric setup, not to mention the increasing costs. “As our company grew, scaling our data warehouse became increasingly expensive. To cut down on costs, we started deleting older, obsolete data to free up space for new data.”

To overcome these challenges, Uber rebuilt their big data platform around Hadoop. “More specifically, we introduced a Hadoop data lake where all raw data was ingested from different online data stores only once and with no transformation during ingestion.”

“In order for users to access data in Hadoop, we introduced Presto to enable interactive ad hoc user queries, Apache Spark to facilitate programmatic access to raw data (in both SQL and non-SQL formats), and Apache Hive to serve as the workhorse for extremely large queries.”

StackShare Editors
Hadoop · Apache Spark · Presto

To improve platform scalability and efficiency, Uber transitioned from JSON to Parquet and built a central schema service to manage schemas and integrate different client libraries.

While the first-generation big data platform was vulnerable to upstream data format changes, “ad hoc data ingestion jobs were replaced with a standard platform to transfer all source data in its original, nested format into the Hadoop data lake.”

These platform changes enabled Uber to handle the scale it was facing around that time: “On a daily basis, there were tens of terabytes of new data added to our data lake, and our Big Data platform grew to over 10,000 vcores with over 100,000 running batch jobs on any given day.”
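
As a rough PySpark sketch of the kind of JSON-to-Parquet conversion described above (the paths are hypothetical):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json_to_parquet").getOrCreate()

# Read raw, nested JSON from the data lake and rewrite it as columnar Parquet.
raw = spark.read.json("hdfs:///datalake/raw/trips/")
raw.write.mode("overwrite").parquet("hdfs:///datalake/parquet/trips/")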

StackShare Editors
Kafka · MySQL · Scala · Apache Spark · Presto

Slack’s data team works to “provide an ecosystem to help people in the company quickly and easily answer questions about usage, so they can make better and data informed decisions.” To achieve that goal, they rely on a complex data pipeline.

An in-house tool called Sqooper scrapes MySQL backups and pipes them to S3. Job queue and log data are sent to Kafka, then persisted to S3 using Secor, an open source tool created by Pinterest.

For compute, Amazon’s Elastic MapReduce (EMR) creates clusters preconfigured for Presto, Hive, and Spark.

Presto is then used for ad-hoc questions, validating data assumptions, exploring smaller datasets, and creating visualizations for some internal tools. Hive is used for larger data sets or longer time series data, and Spark allows teams to write efficient and robust batch and aggregation jobs. Most of the Spark pipeline is written in Scala.

Thrift binds all of these engines together with a typed schema and structured data.

Finally, the Hive Metastore serves as the ground truth for all data and its schema.
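
As a rough sketch of the kind of batch aggregation job described above, shown in PySpark for brevity (Slack writes most of theirs in Scala; the paths and columns are hypothetical):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_job_stats").getOrCreate()

# Job-queue logs persisted to S3 by Secor, partitioned by date.
logs = spark.read.json("s3://example-logs/jobqueue/dt=2021-06-01/")

# Aggregate per job type and write the summary back for Presto/Hive to query.
summary = logs.groupBy("job_type").agg(
    F.count("*").alias("runs"),
    F.avg("duration_ms").alias("avg_duration_ms"),
)
summary.write.mode("overwrite").parquet("s3://example-warehouse/job_stats/dt=2021-06-01/")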

StackShare Editors
Prometheus · Chef · Consul · Memcached · Hack · Swift · Hadoop · Terraform · Airflow · Apache Spark · Kubernetes · gRPC · HHVM (HipHop Virtual Machine) · Presto · Kotlin · Apache Thrift

Cal Henderson has been Slack’s CTO since the beginning. Earlier this year, he answered a Quora question with a summary of their current stack.

Apps
• Web: a mix of JavaScript/ES6 and React.
• Desktop: Electron, used to ship the web app as a desktop application.
• Android: a mix of Java and Kotlin.
• iOS: written in a mix of Objective-C and Swift.

Backend
• The core application and the API are written in PHP/Hack and run on HHVM.
• The data is stored in MySQL using Vitess.
• Caching is done using Memcached and MCRouter.
• The search service is built on SolrCloud, with various Java services.
• The messaging system uses WebSockets, with many services in Java and Go.
• Load balancing is done using HAProxy, with Consul for configuration.
• Most services talk to each other over gRPC; some use Thrift or JSON-over-HTTP.
• The voice and video calling service was built in Elixir.

Data warehouse
• Built using open source tools including Presto, Spark, Airflow, Hadoop, and Kafka.
Eric Colson
Chief Algorithms Officer at Stitch Fix
Kafka · PostgreSQL · Amazon S3 · Apache Spark · Presto · Python · R · PyTorch · Docker · Amazon EC2 Container Service
#AWS #Etl #ML #DataScience #DataStack #Data

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3-based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage l