AWS Data Pipeline vs AWS Glue


Overview

AWS Data Pipeline
Stacks: 94
Followers: 398
Votes: 1

AWS Glue
Stacks: 461
Followers: 819
Votes: 9

AWS Data Pipeline vs AWS Glue: What are the differences?

Introduction

AWS Data Pipeline and AWS Glue are both services provided by Amazon Web Services for managing and processing data. Although their capabilities overlap, there are key differences between the two. In this article, we will explore these differences and look at when to use each service.

  1. Data Transformation Capabilities: AWS Data Pipeline provides a range of built-in activities and pre-defined templates for data transformation tasks. These activities allow for straightforward data manipulation, such as filtering, aggregating, and joining, through simple configuration settings. AWS Glue, on the other hand, offers a more powerful and flexible approach to data transformation: you can define and create complex ETL (Extract, Transform, Load) jobs that run on a managed Apache Spark environment, enabling more sophisticated transformations (a minimal job script is sketched after the summary below).

  2. Dependency and Scheduling: AWS Data Pipeline is designed for orchestrating and managing the execution of batch workflows. It allows you to define dependencies between activities and schedule them accordingly, using a visual interface and a JSON pipeline definition to define and manage these workflows (a trimmed definition is sketched after the summary below). In contrast, AWS Glue focuses on data cataloging and data preparation. While it does provide scheduling capabilities, it does not offer the same level of dependency management as AWS Data Pipeline.

  3. Data Catalog and Metadata Management: AWS Glue includes a fully managed data catalog that automatically discovers and categorizes metadata about your data sources. This catalog allows you to search, query, and explore your data assets effortlessly (a short example of crawling and querying the catalog appears after the summary below). AWS Glue also lets you create and manage custom metadata, making it easier to understand and manage your data. In comparison, AWS Data Pipeline has no built-in capabilities for data cataloging or metadata management.

  4. Data Format Support: AWS Glue supports a wide range of data formats out of the box, including popular file formats like JSON, CSV, Parquet, and Avro. It can automatically infer the schema of your data and generate code to perform data processing tasks. AWS Data Pipeline, on the other hand, has more limited support for data formats, primarily focusing on relational databases, Hadoop Distributed File System (HDFS), and Amazon S3.

  5. Data Source Connectivity: AWS Glue offers seamless connectivity to various data sources, including Amazon S3, relational databases (RDS), and Redshift. It provides native connectors for these data sources, simplifying the process of data extraction and loading. AWS Data Pipeline also supports these data sources but requires additional configuration and setup to establish connections.

  6. Pricing Model: AWS Glue follows a pay-as-you-go model in which ETL jobs and crawlers are billed for the Data Processing Units (DPUs) they consume and the time they run. AWS Data Pipeline, on the other hand, is priced per activity or precondition per month, with the rate depending on how frequently it runs and whether it runs on AWS or on-premises; the compute resources the pipeline launches (for example EC2 or EMR) are billed separately. AWS Data Pipeline's pricing is therefore oriented around workflow orchestration and automation rather than data processing (a back-of-the-envelope comparison is sketched after the summary below).

In summary, AWS Data Pipeline and AWS Glue differ in their data transformation capabilities, dependency and scheduling features, data catalog and metadata management, data format support, data source connectivity, and pricing models. Choose the service that best fits your specific requirements and use case.
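
To make point 1 concrete, here is a minimal sketch of an AWS Glue ETL job script in PySpark. The S3 paths and the filter column are placeholders rather than values from this comparison; the sketch also illustrates the format handling described in point 4, since JSON is read with an inferred schema and written back out as Parquet.

    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.transforms import Filter
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read JSON from S3; Glue infers the schema into a DynamicFrame.
    events = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={"paths": ["s3://example-bucket/raw/events/"]},
        format="json",
    )

    # Keep only active records, then write the result back to S3 as Parquet.
    active = Filter.apply(frame=events, f=lambda row: row["status"] == "active")
    glue_context.write_dynamic_frame.from_options(
        frame=active,
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/curated/events/"},
        format="parquet",
    )

    job.commit()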
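
For point 2, the boto3 sketch below shows how a pipeline definition expresses scheduling and dependencies: the second activity declares that it dependsOn the first. It is deliberately trimmed and uses placeholder names; a real definition also needs a Default object (IAM roles, failure handling, log location) and an appropriate compute resource or worker group for each activity.

    import boto3

    dp = boto3.client("datapipeline")

    pipeline_id = dp.create_pipeline(
        name="example-nightly-etl", uniqueId="example-nightly-etl-v1"
    )["pipelineId"]

    objects = [
        # Run the pipeline once per day.
        {"id": "DailySchedule", "name": "DailySchedule", "fields": [
            {"key": "type", "stringValue": "Schedule"},
            {"key": "period", "stringValue": "1 day"},
            {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
        ]},
        # First activity: stage the raw data (placeholder command).
        {"id": "StageData", "name": "StageData", "fields": [
            {"key": "type", "stringValue": "ShellCommandActivity"},
            {"key": "command", "stringValue": "echo staging"},
            {"key": "workerGroup", "stringValue": "example-workers"},
            {"key": "schedule", "refValue": "DailySchedule"},
        ]},
        # Second activity: runs only after StageData succeeds.
        {"id": "LoadWarehouse", "name": "LoadWarehouse", "fields": [
            {"key": "type", "stringValue": "ShellCommandActivity"},
            {"key": "command", "stringValue": "echo loading"},
            {"key": "workerGroup", "stringValue": "example-workers"},
            {"key": "schedule", "refValue": "DailySchedule"},
            {"key": "dependsOn", "refValue": "StageData"},
        ]},
    ]

    dp.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)
    dp.activate_pipeline(pipelineId=pipeline_id)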
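
For point 3, a short boto3 sketch that crawls an S3 prefix into the Glue Data Catalog and then lists the tables the crawler registered. The crawler name, IAM role, bucket, and database are placeholder values, and the snippet does not wait for the crawler run to finish.

    import boto3

    glue = boto3.client("glue")

    # A crawler scans the path, infers schemas, and registers tables in the catalog.
    glue.create_crawler(
        Name="example-raw-crawler",
        Role="arn:aws:iam::123456789012:role/example-glue-role",
        DatabaseName="example_raw_db",
        Targets={"S3Targets": [{"Path": "s3://example-bucket/raw/"}]},
    )
    glue.start_crawler(Name="example-raw-crawler")

    # After the crawler has run, table metadata is searchable in the catalog.
    for table in glue.get_tables(DatabaseName="example_raw_db")["TableList"]:
        columns = [col["Name"] for col in table["StorageDescriptor"]["Columns"]]
        print(table["Name"], columns)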
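
For point 6, a back-of-the-envelope cost comparison. The rates are illustrative assumptions, not quoted prices (check the current AWS pricing pages), and the workload is hypothetical: a daily 15-minute job on 10 DPUs versus three scheduled high-frequency activities.

    # Assumed, illustrative rates (not quoted AWS prices).
    GLUE_RATE_PER_DPU_HOUR = 0.44       # assumed Glue rate per DPU-hour
    DP_RATE_PER_ACTIVITY_MONTH = 1.00   # assumed per-activity monthly rate

    # AWS Glue: pay for the DPU time actually consumed.
    # 10 DPUs * 0.25 hours per run * 30 runs per month.
    glue_monthly = 10 * 0.25 * 30 * GLUE_RATE_PER_DPU_HOUR
    print(f"AWS Glue (usage-based):           ~${glue_monthly:.2f}/month")

    # AWS Data Pipeline: flat per-activity charge; the EC2/EMR resources the
    # pipeline launches are billed separately on top of this.
    dp_monthly = 3 * DP_RATE_PER_ACTIVITY_MONTH
    print(f"AWS Data Pipeline (per activity): ~${dp_monthly:.2f}/month + resource costs")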

Advice on AWS Data Pipeline, AWS Glue

Vamshi

Data Engineer at Tata Consultancy Services

May 29, 2020

Needs advice on PySpark, Azure Data Factory, and Databricks

I have to collect data from multiple sources and store it in a single cloud location, then perform cleaning and transformation using PySpark and push the end results to other applications such as reporting tools. What would be the best solution? I can only think of Azure Data Factory + Databricks. Are there any alternatives to #AWS services + Databricks?
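
As a rough illustration of the clean-and-publish step described above, here is a minimal PySpark sketch; the storage paths and column names are placeholders, and the same pattern runs on Databricks, AWS Glue, or any other Spark environment.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("clean-and-publish").getOrCreate()

    # Raw data collected from multiple sources, landed in one cloud location.
    raw = spark.read.json("s3://example-landing-bucket/raw/")

    # Basic cleaning: drop duplicates, fill gaps, derive a date column.
    clean = (
        raw.dropDuplicates(["record_id"])
           .na.fill({"country": "unknown"})
           .withColumn("event_date", F.to_date("event_timestamp"))
    )

    # Publish curated output where reporting tools can pick it up.
    clean.write.mode("overwrite").parquet("s3://example-curated-bucket/events/")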

datocrats-org

Jul 29, 2020

Needs advice on Amazon EC2, Tableau, and PowerBI

We need to perform ETL from several databases into a data warehouse or data lake. We want to

  • keep raw and transformed data available to users to draft their own queries efficiently
  • be able to give users custom permissions and SSO
  • move between open-source on-premises development and cloud-based production environments

We want to use only inexpensive Amazon EC2 instances, on medium-sized data sets (16 GB to 32 GB), feeding into Tableau Server or PowerBI for reporting and data analysis purposes.

Pavithra

Mar 12, 2020

Needs advice on Amazon S3, Amazon Athena, and Amazon Redshift

Hi all,

Currently, we need to ingest data from Amazon S3 into a database, either Amazon Athena or Amazon Redshift. The problem with the data is that it is in .PSV (pipe-separated values) format and is over 200 GB in size. Query performance in Athena/Redshift is not up to the mark: queries time out or run far slower than in Google BigQuery. How would I optimize the performance and query result time? Can anyone please help me out?


Detailed Comparison

AWS Data Pipeline

AWS Data Pipeline is a web service that provides a simple management system for data-driven workflows. Using AWS Data Pipeline, you define a pipeline composed of the "data sources" that contain your data, the "activities" or business logic such as EMR jobs or SQL queries, and the "schedule" on which your business logic executes. For example, you could define a job that, every hour, runs an Amazon Elastic MapReduce (Amazon EMR)-based analysis on that hour's Amazon Simple Storage Service (Amazon S3) log data, loads the results into a relational database for future lookup, and then automatically sends you a daily summary email.

Features:
  • You can find (and use) a variety of popular AWS Data Pipeline tasks in the AWS Management Console's template section
  • Hourly analysis of Amazon S3-based log data
  • Daily replication of Amazon DynamoDB data to Amazon S3
  • Periodic replication of on-premises JDBC database tables into RDS

Statistics: Stacks 94 · Followers 398 · Votes 1

Pros:
  • Easy to create DAG and execute it (1 vote)

Integrations: no integrations listed

AWS Glue

A fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.

Features:
  • Easy - AWS Glue automates much of the effort in building, maintaining, and running ETL jobs. AWS Glue crawls your data sources, identifies data formats, and suggests schemas and transformations. AWS Glue automatically generates the code to execute your data transformations and loading processes.
  • Integrated - AWS Glue is integrated across a wide range of AWS services.
  • Serverless - AWS Glue is serverless. There is no infrastructure to provision or manage. AWS Glue handles provisioning, configuration, and scaling of the resources required to run your ETL jobs on a fully managed, scale-out Apache Spark environment. You pay only for the resources used while your jobs are running.
  • Developer Friendly - AWS Glue generates ETL code that is customizable, reusable, and portable, using familiar technology: Scala, Python, and Apache Spark. You can also import custom readers, writers, and transformations into your Glue ETL code. Since the code AWS Glue generates is based on open frameworks, there is no lock-in. You can use it anywhere.

Statistics: Stacks 461 · Followers 819 · Votes 9

Pros:
  • Managed Hive Metastore (9 votes)

Integrations: Amazon Redshift, Amazon S3, Amazon RDS, Amazon Athena, MySQL, Microsoft SQL Server, Amazon EMR, Amazon Aurora, Oracle, Amazon RDS for PostgreSQL

What are some alternatives to AWS Data Pipeline and AWS Glue?

Apache Spark

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.

Presto

Distributed SQL Query Engine for Big Data

Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Apache Flink

Apache Flink is an open source system for fast and versatile data analytics in clusters. Flink supports batch and streaming analytics, in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala.

lakeFS

It is an open-source data version control system for data lakes. It provides a “Git for data” platform enabling you to implement best practices from software engineering on your data lake, including branching and merging, CI/CD, and production-like dev/test environments.

Druid

Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments. Druid excels as a data warehousing solution for fast aggregate queries on petabyte sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful calculations.

Apache Kylin

Apache Kylin™ is an open source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Hadoop/Spark supporting extremely large datasets, originally contributed from eBay Inc.

Splunk

It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data.

Apache Impala

Impala is a modern, open source, MPP SQL query engine for Apache Hadoop. Impala is shipped by Cloudera, MapR, and Amazon. With Impala, you can query data, whether stored in HDFS or Apache HBase – including SELECT, JOIN, and aggregate functions – in real time.

Vertica

It provides a best-in-class, unified analytics platform that will forever be independent from underlying infrastructure.
