AWS Glue vs Pipelines: What are the differences?

Introduction: AWS Glue and Pipelines are two popular tools for data processing and workflow management. AWS Glue is an Amazon Web Services offering, while Pipelines here refers to Kubeflow Pipelines (see "What is Pipelines?" below). Both play a crucial role in data pipelines, but they have distinct features that cater to specific needs.

  1. Data Processing Approach: AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy to prepare and load data for analytics. It automatically discovers metadata and dependencies, making it easier to create ETL pipelines (a minimal job sketch appears after this list's summary). Pipelines, on the other hand, is a web-based service that lets users build complex data processing workflows through a visual interface, without writing code. It focuses on workflow management rather than ETL capabilities.

  2. Scalability and Flexibility: When it comes to scalability, AWS Glue excels at handling large volumes of data and can automatically adjust resources based on the workload. It integrates well with other AWS services and is suitable for heavy data processing tasks. In contrast, Pipelines offers more flexibility in defining custom workflows and dependencies between tasks, allowing for intricate data pipelines, but it may require more manual intervention to scale.

  3. Pricing Structure: AWS Glue follows a pay-as-you-go pricing model, where users only pay for the resources they consume, making it cost-effective for smaller workloads. Pipelines, on the other hand, has a tiered pricing structure based on the number of workflow runs and active pipelines, which can be beneficial for organizations with predictable usage patterns but might be less cost-effective for fluctuating workloads.

  4. Integration with Other Services: AWS Glue seamlessly integrates with various AWS services like S3, Redshift, and Athena, enabling data transformation and loading from different sources. It also supports integration with external data sources through JDBC connectors. Pipelines, on the other hand, offers integrations with third-party services like GitHub, Slack, and JIRA, allowing users to build end-to-end automation pipelines that extend beyond data processing.

  5. Monitoring and Logging Capabilities: AWS Glue provides comprehensive monitoring and logging tools to track the progress of ETL jobs, identify issues, and optimize performance. It offers detailed metrics and logs that help in troubleshooting and improving data processing workflows. In comparison, Pipelines offers basic monitoring capabilities like job status tracking and execution history, but lacks advanced logging features for in-depth analysis and performance tuning.

  6. User Interface and Learning Curve: AWS Glue provides a user-friendly console for designing ETL jobs and managing data catalogs, making it easy for users familiar with SQL and Python to get started quickly. Pipelines, on the other hand, offers a more intuitive visual interface for creating workflows, which benefits users without programming experience, though advanced configurations and customizations come with a steeper learning curve.

In summary, AWS Glue is tailored for ETL tasks, scalable data processing, and seamless AWS integration, while Pipelines offers flexibility in workflow design, custom task dependencies, third-party integrations, and a visual interface for easy workflow creation.
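
To make the Glue side concrete, here is a minimal sketch of a Glue ETL job script in Python. The catalog database, table, and S3 path (my_database, raw_events, s3://my-bucket/curated/) are hypothetical, and the script assumes it runs inside a Glue job environment where the awsglue library is available.

    import sys
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    # Standard Glue job setup: resolve arguments and create contexts.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read a table the Glue crawler has already discovered and catalogued.
    source = glue_context.create_dynamic_frame.from_catalog(
        database="my_database",   # hypothetical catalog database
        table_name="raw_events",  # hypothetical catalogued table
    )

    # Rename and cast columns as a simple transform step.
    mapped = ApplyMapping.apply(
        frame=source,
        mappings=[
            ("event_id", "string", "event_id", "string"),
            ("ts", "string", "event_time", "timestamp"),
        ],
    )

    # Write the result back to S3 as Parquet, ready for analytics.
    glue_context.write_dynamic_frame.from_options(
        frame=mapped,
        connection_type="s3",
        connection_options={"path": "s3://my-bucket/curated/"},
        format="parquet",
    )
    job.commit()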

Advice on AWS Glue and Pipelines

We need to perform ETL from several databases into a data warehouse or data lake. We want to

  • keep raw and transformed data available to users to draft their own queries efficiently
  • give users the ability to give custom permissions and SSO
  • move between open-source on-premises development and cloud-based production environments

We want to use inexpensive Amazon EC2 instances only, on medium-sized data sets (16 GB to 32 GB), feeding into Tableau Server or Power BI for reporting and data analysis purposes.

Replies (3)
John Nguyen
Recommends Airflow and AWS Lambda

You could also use AWS Lambda with a CloudWatch Events schedule if you know when the function should be triggered. The benefit is that you can use any language and the respective database client.
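
As a rough illustration, a scheduled Lambda handler in Python might look like the sketch below. The bucket environment variable and object key are hypothetical, and the function assumes a CloudWatch Events/EventBridge rule (e.g. rate(1 hour)) is configured as the trigger.

    import json
    import os

    import boto3  # AWS SDK, bundled with the Lambda Python runtime

    s3 = boto3.client("s3")

    def handler(event, context):
        # 'event' carries the scheduled CloudWatch Events payload.
        # In a real job this step would query the source database with
        # its native client (psycopg2, pymysql, ...).
        rows = [{"id": 1, "value": "example"}]  # placeholder for query results

        # Land the extracted rows in S3 for the warehouse to pick up.
        s3.put_object(
            Bucket=os.environ["TARGET_BUCKET"],  # hypothetical env var
            Key="extracts/latest.json",
            Body=json.dumps(rows),
        )
        return {"rows_written": len(rows)}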

But if you orchestrate ETLs, then it makes sense to use Apache Airflow. This requires Python knowledge.
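
For a sense of the Python involved, here is a minimal Airflow DAG sketch; the DAG id, schedule, and task bodies are hypothetical placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull rows from the source database")       # placeholder step

    def load():
        print("write transformed rows to the warehouse")  # placeholder step

    with DAG(
        dag_id="example_etl",             # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task  # load runs only after extract succeeds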

Recommends Airflow

Though we have always built something custom, Apache Airflow (https://airflow.apache.org/) stood out as a key contender/alternative among open-source options. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift as such is expensive.

Recommends

You may want to look into a data virtualization product called Conduit. It connects to disparate data sources in AWS, on-prem, Azure, and GCP, and exposes them as a single unified Spark SQL view to Power BI (direct query) or Tableau. It allows auto-query and caching policies to enhance query speeds and experience, has a GPU query engine with optimized Spark as a fallback, and can be deployed on your AWS VM or on-prem, scaling up and out. It sounds like the ideal solution for your needs.

Vamshi Krishna
Data Engineer at Tata Consultancy Services

I have to collect different data from multiple sources and store them in a single cloud location, then perform cleaning and transformation using PySpark and push the end results to other applications like reporting tools. What would be the best solution? I can only think of Azure Data Factory + Databricks. Are there any alternatives to #AWS services + Databricks?
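
As a rough sketch of the cleaning and transformation step in PySpark (the paths and column names here are hypothetical):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("clean_and_transform").getOrCreate()

    # Read the raw files landed in a single cloud location.
    raw = spark.read.json("s3://my-bucket/landing/")  # hypothetical path

    # Basic cleaning: drop duplicates and rows missing the key field.
    clean = raw.dropDuplicates(["event_id"]).filter(F.col("event_id").isNotNull())

    # Simple transformation: derive a date column for reporting tools.
    result = clean.withColumn("event_date", F.to_date("event_time"))

    # Write curated output where downstream reporting apps can read it.
    result.write.mode("overwrite").parquet("s3://my-bucket/curated/")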


Hi all,

Currently, we need to ingest data from Amazon S3 into a database, either Amazon Athena or Amazon Redshift. The problem with the data is that it is in .PSV (pipe-separated values) format and is over 200 GB in size. Query performance in Athena/Redshift is not up to the mark, with timeouts, and it is slow compared to Google BigQuery. How would I optimize the performance and query result time? Can anyone please help me out?

Replies (4)

You can use the AWS Glue service to convert your pipe-delimited data to Parquet format and thus achieve data compression. You should then choose Redshift to copy your data into, since it is very large. To manage your data, partition it in the S3 bucket and also distribute it across the Redshift cluster.
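
A minimal sketch of that PSV-to-Parquet conversion, written as the kind of PySpark a Glue job runs (bucket paths and the partition column are hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("psv_to_parquet").getOrCreate()

    # Read the pipe-separated files; '|' is the delimiter in .PSV data.
    df = (spark.read
          .option("header", "true")
          .option("delimiter", "|")
          .csv("s3://my-bucket/raw-psv/"))  # hypothetical input path

    # Write compressed, partitioned Parquet for faster scans.
    (df.write
       .mode("overwrite")
       .partitionBy("event_date")           # hypothetical partition column
       .option("compression", "snappy")
       .parquet("s3://my-bucket/parquet/"))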

Carlos Acedo
Data Technologies Manager at SDG Group Iberia
Recommends Amazon Redshift

First of all, you should make your choice between Redshift and Athena based on your use case, since they are two very different services: Redshift is an enterprise-grade MPP data warehouse, while Athena is a SQL layer on top of S3 with limited performance. If performance is a key factor, users are going to execute unpredictable queries, and direct and management costs are not a problem, I'd definitely go for Redshift. If performance is not so critical and queries will be somewhat predictable, I'd go for Athena.

Once you select the technology, you'll need to optimize your data to get queries executed as fast as possible. In both cases you may need to adapt the data model to fit your queries better. If you go for Athena, you'd probably also need to change your file format to Parquet or Avro and review your partition strategy based on your most frequent type of query. If you choose Redshift, you'll need to ingest the data from your files into it and maybe carry out some tuning tasks for performance gains.
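
For the Redshift path, the ingest step is a COPY from S3. A minimal sketch in Python follows; the cluster endpoint, credentials, table, bucket, and IAM role are all hypothetical.

    import psycopg2  # standard PostgreSQL driver; Redshift speaks its wire protocol

    conn = psycopg2.connect(
        host="my-cluster.abc123.eu-west-1.redshift.amazonaws.com",  # hypothetical
        port=5439, dbname="analytics", user="etl_user", password="...",
    )

    # COPY loads pipe-separated files from S3 in parallel across the cluster.
    copy_sql = """
        COPY events
        FROM 's3://my-bucket/raw-psv/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        DELIMITER '|'
        IGNOREHEADER 1;
    """

    with conn, conn.cursor() as cur:
        cur.execute(copy_sql)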

I'll recommend Redshift for now since it can address a wider range of use cases, but we could give you better advice if you described your use case in depth.

Alexis Blandin
Recommends Amazon Athena

It depends on the nature of your data (structured or not?) and of course your queries (ad hoc or predictable?). For example, you can look at partitioning and columnar formats to maximize MPP capabilities for both Athena and Redshift.

Recommends

You can convert your PSV-format data to the Parquet file format with AWS Glue, and then your query performance will be improved.

Pros of AWS Glue
  • Managed Hive Metastore (9 upvotes)

Pros of Pipelines
  • No pros listed yet

What is AWS Glue?

A fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.

What is Pipelines?

Kubeflow is a machine learning (ML) toolkit dedicated to making deployments of ML workflows on Kubernetes simple, portable, and scalable. Kubeflow Pipelines are reusable end-to-end ML workflows built using the Kubeflow Pipelines SDK.
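
To give a flavor of that SDK, here is a minimal sketch using the v1 Kubeflow Pipelines SDK (kfp); the pipeline name, container image, and output file are hypothetical.

    import kfp
    from kfp import dsl

    def echo_op():
        # A single containerized step; any image/command pair works here.
        return dsl.ContainerOp(
            name="echo",
            image="alpine:3.18",  # hypothetical image
            command=["sh", "-c"],
            arguments=['echo "hello from a pipeline step"'],
        )

    @dsl.pipeline(name="hello-pipeline", description="Minimal one-step pipeline")
    def hello_pipeline():
        echo_op()

    # Compile to a workflow spec that can be uploaded to a Kubeflow cluster.
    kfp.compiler.Compiler().compile(hello_pipeline, "hello_pipeline.yaml")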



What are some alternatives to AWS Glue and Pipelines?

AWS Data Pipeline
AWS Data Pipeline is a web service that provides a simple management system for data-driven workflows. Using AWS Data Pipeline, you define a pipeline composed of the “data sources” that contain your data, the “activities” or business logic such as EMR jobs or SQL queries, and the “schedule” on which your business logic executes. For example, you could define a job that, every hour, runs an Amazon Elastic MapReduce (Amazon EMR)–based analysis on that hour’s Amazon Simple Storage Service (Amazon S3) log data, loads the results into a relational database for future lookup, and then automatically sends you a daily summary email.

Airflow
Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command-line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.

Apache Spark
Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.

Talend
It is an open-source software integration platform that helps you effortlessly turn data into business insights. It uses native code generation that lets you run your data pipelines seamlessly across all cloud providers and get optimized performance on all platforms.

Alooma
Get the power of big data in minutes with Alooma and Amazon Redshift. Simply build your pipelines and map your events using Alooma’s friendly mapping interface. Query, analyze, visualize, and predict now.