AWS Data Pipeline vs AWS Glue: What are the differences?
What is AWS Data Pipeline? AWS Data Pipeline is a web service that provides a simple management system for data-driven workflows, letting you process and move data between different AWS compute and storage services. Using AWS Data Pipeline, you define a pipeline composed of the “data sources” that contain your data, the “activities” or business logic (such as EMR jobs or SQL queries), and the “schedule” on which your business logic executes. For example, you could define a job that, every hour, runs an Amazon Elastic MapReduce (Amazon EMR)–based analysis on that hour’s Amazon Simple Storage Service (Amazon S3) log data, loads the results into a relational database for later lookup, and then automatically sends you a daily summary email.
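To make the data sources / activities / schedule model concrete, here is a minimal sketch of defining and activating such a pipeline with boto3. It assumes boto3 is configured with credentials; the pipeline name, S3 paths, and the EMR step command are illustrative placeholders, not part of any real setup:

```python
import boto3

client = boto3.client("datapipeline")

# Create an empty pipeline; uniqueId guards against duplicate creation.
pipeline = client.create_pipeline(
    name="hourly-log-analysis", uniqueId="hourly-log-analysis-v1"
)
pipeline_id = pipeline["pipelineId"]

# A pipeline definition is a list of typed objects, each a bag of key/value
# fields. "stringValue" holds literals, "refValue" points at another object.
client.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[
        # The "schedule": run the business logic every hour.
        {
            "id": "HourlySchedule",
            "name": "HourlySchedule",
            "fields": [
                {"key": "type", "stringValue": "Schedule"},
                {"key": "period", "stringValue": "1 hour"},
                {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
            ],
        },
        # The Default object wires the schedule into every component.
        {
            "id": "Default",
            "name": "Default",
            "fields": [
                {"key": "scheduleType", "stringValue": "cron"},
                {"key": "schedule", "refValue": "HourlySchedule"},
            ],
        },
        # The "activity": an EMR step analyzing that hour's S3 log data.
        {
            "id": "LogAnalysis",
            "name": "LogAnalysis",
            "fields": [
                {"key": "type", "stringValue": "EmrActivity"},
                {"key": "runsOn", "refValue": "AnalysisCluster"},
                # Placeholder step: submit a Spark script stored in S3.
                {
                    "key": "step",
                    "stringValue": "command-runner.jar,spark-submit,"
                    "s3://my-bucket/scripts/analyze_logs.py",
                },
            ],
        },
        # The transient EMR cluster the activity runs on.
        {
            "id": "AnalysisCluster",
            "name": "AnalysisCluster",
            "fields": [
                {"key": "type", "stringValue": "EmrCluster"},
                {"key": "terminateAfter", "stringValue": "50 Minutes"},
            ],
        },
    ],
)

# Nothing runs until the pipeline is activated.
client.activate_pipeline(pipelineId=pipeline_id)
```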
What is AWS Glue? AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.
AWS Data Pipeline belongs to the "Data Transfer" category of the tech stack, while AWS Glue is primarily classified under "Big Data Tools".
Some of the features offered by AWS Data Pipeline are:
- A template section in the AWS Management Console with a variety of popular, ready-to-use pipeline definitions
- Hourly analysis of Amazon S3‐based log data
- Daily replication of Amazon DynamoDB data to Amazon S3
On the other hand, AWS Glue provides the following key features:
- Easy - AWS Glue automates much of the effort in building, maintaining, and running ETL jobs. AWS Glue crawls your data sources, identifies data formats, and suggests schemas and transformations, then automatically generates the code to execute your data transformations and loading processes (a script along the lines of the sketch after this list).
- Integrated - AWS Glue is integrated across a wide range of AWS services.
- Serverless - AWS Glue is serverless. There is no infrastructure to provision or manage. AWS Glue handles provisioning, configuration, and scaling of the resources required to run your ETL jobs on a fully managed, scale-out Apache Spark environment. You pay only for the resources used while your jobs are running.
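As a rough illustration of the "Easy" point above, here is a minimal sketch of the kind of PySpark script Glue generates and runs on its managed Spark environment, using Glue's own awsglue library. The database, table, column, and S3 path names are placeholders standing in for whatever a crawler has registered in your Data Catalog:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job setup: a GlueContext wrapping a Spark context.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a table the crawler added to the Data Catalog
# ("logs_db" / "raw_logs" are placeholder names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="logs_db", table_name="raw_logs"
)

# Transform: apply a schema mapping of the kind Glue suggests,
# as (source column, source type, target column, target type) tuples.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("request_time", "string", "request_time", "timestamp"),
        ("status", "string", "status", "int"),
        ("url", "string", "url", "string"),
    ],
)

# Load: write the transformed data to S3 as Parquet for analytics.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/curated/logs/"},
    format="parquet",
)

job.commit()
```

Because Glue is serverless, running this script requires no cluster management: you point the job at the script, and Glue provisions, scales, and tears down the underlying Spark resources, billing only while the job runs.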