Alternatives to AWS Glue

AWS Data Pipeline, Airflow, Apache Spark, Talend, and Alooma are the most popular alternatives and competitors to AWS Glue.

What is AWS Glue and what are its top alternatives?

AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. It is a tool in the Big Data Tools category of a tech stack.
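To give a sense of what working with AWS Glue looks like, here is a minimal sketch of a PySpark-based Glue ETL job script; the catalog database, table name, dropped column, and S3 path are hypothetical placeholders rather than part of any real stack.

```python
# Minimal sketch of an AWS Glue ETL job script (PySpark).
# The database, table, column, and S3 path are hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table previously catalogued by a Glue crawler.
events = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_events"
)

# Drop a column that is not needed downstream and write the result to S3 as Parquet.
curated = events.drop_fields(["debug_payload"])
glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/events/"},
    format="parquet",
)

job.commit()
```

Glue supplies the managed Spark runtime and the awsglue library; the script itself only declares where the data comes from, how it is transformed, and where it goes.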

Top Alternatives to AWS Glue

  • AWS Data Pipeline

    AWS Data Pipeline is a web service that provides a simple management system for data-driven workflows. Using AWS Data Pipeline, you define a pipeline composed of the “data sources” that contain your data, the “activities” or business logic such as EMR jobs or SQL queries, and the “schedule” on which your business logic executes. For example, you could define a job that, every hour, runs an Amazon Elastic MapReduce (Amazon EMR)–based analysis on that hour’s Amazon Simple Storage Service (Amazon S3) log data, loads the results into a relational database for future lookup, and then automatically sends you a daily summary email. ...

  • Airflow

    Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed (see the DAG sketch after this list). ...

  • Apache Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...

  • Talend

    Talend is an open source software integration platform that helps you effortlessly turn data into business insights. It uses native code generation that lets you run your data pipelines seamlessly across all cloud providers and get optimized performance on all platforms. ...

  • Alooma

    Get the power of big data in minutes with Alooma and Amazon Redshift. Simply build your pipelines and map your events using Alooma’s friendly mapping interface. Query, analyze, visualize, and predict now. ...

  • Databricks

    Databricks Unified Analytics Platform, from the original creators of Apache Spark™, unifies data science and engineering across the Machine Learning lifecycle from data preparation to experimentation and deployment of ML applications. ...

  • Splunk

    Splunk provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze, and visualize machine data. ...

  • Apache Flink

    Apache Flink is an open source system for fast and versatile data analytics in clusters. Flink supports batch and streaming analytics, in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala. ...
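As referenced in the Airflow entry above, Airflow workflows are authored as DAGs in Python. Below is a minimal sketch of such a DAG; the DAG id, schedule, and echo commands are hypothetical stand-ins for real tasks.

```python
# Minimal sketch of an Airflow DAG: two placeholder tasks with one dependency.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_etl",              # hypothetical DAG name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="echo 'pull data from the source system'",
    )
    load = BashOperator(
        task_id="load",
        bash_command="echo 'load data into the warehouse'",
    )

    # The scheduler runs "load" only after "extract" succeeds.
    extract >> load
```

The scheduler then executes these tasks on its workers in dependency order, which is the behavior described in the Airflow entry.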

AWS Glue alternatives & related posts

AWS Data Pipeline

Process and move data between different AWS compute and storage services

    Airflow

    A platform to programmatically author, schedule and monitor data pipelines, by Airbnb

    related Airflow posts

    Shared insights on Jenkins and Airflow

    I am looking for an open-source scheduler tool with cross-functional application dependencies. Some of the tasks I am looking to schedule are as follows:

    1. Trigger Matillion ETL loads
    2. Trigger Attunity Replication tasks that have downstream ETL loads
    3. Trigger Golden gate Replication Tasks
    4. Shell scripts, wrappers, file watchers
    5. Event-driven schedules

    I have used Airflow in the past, and I know we need to create DAGs for each pipeline. I am not familiar with Jenkins, but I know it works with configuration without much underlying code. I want to evaluate both and would appreciate any advice.

    I am looking for the best tool to orchestrate #ETL workflows in non-Hadoop environments, mainly for regression testing use cases. Would Airflow or Apache NiFi be a good fit for this purpose?

    For example, I want to run an Informatica ETL job and then run an SQL task as a dependency, followed by another task from Jira. What tool is best suited to set up such a pipeline?

    Apache Spark

    Fast and general engine for large-scale data processing

    related Apache Spark posts

    Eric Colson, Chief Algorithms Officer at Stitch Fix:

    The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

    Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

    At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

    #DataScience #DataStack #Data
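To make the Spark-on-S3 pattern described in this post concrete, here is a minimal PySpark sketch of a read-transform-write batch job; the bucket paths and column names are invented for illustration and are not Stitch Fix's actual pipeline.

```python
# Minimal PySpark sketch of an S3-to-S3 batch transform.
# Bucket paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3_etl_sketch").getOrCreate()

# Read raw event logs from the storage layer (S3).
events = spark.read.parquet("s3a://example-raw-bucket/events/")

# Aggregate events per user per day.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Write the result back to S3, partitioned by date.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-warehouse-bucket/daily_user_events/"
)
```

Because the job reads from and writes to object storage rather than cluster-local disks, the compute cluster can be resized or replaced without touching the data, which is the elasticity the post describes.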

    Conor Myhrvold, Tech Brand Mgr, Office of CTO at Uber:

    Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:

    Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse to any sink leveraging the use of Apache Spark. The name, Marmaray, comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink depending on customer preference:

    https://eng.uber.com/marmaray-hadoop-ingestion-open-source/

    (Direct GitHub repo: https://github.com/uber/marmaray)

    Talend

    A single, unified suite for all integration needs

    Alooma

    Integrate any data source like databases, applications, and any API - with your own Amazon Redshift

    Databricks

    A unified analytics platform, powered by Apache Spark

    Splunk

    Search, monitor, analyze and visualize machine data

    related Splunk posts

    Shared insights on Kibana, Splunk, and Grafana

    I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk; however, it is light years above grepping through log files. We previously used Grafana but found it annoying to maintain a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.

    Apache Flink

    Fast and reliable large-scale data processing engine

    related Apache Flink posts

    Surabhi Bhawsar, Technical Architect at Pepcus:
    Shared insights on Kafka and Apache Flink

    I need to build an Alert & Notification framework using a scheduled program. We will analyze the events from the database table, filter the events that fall within a one-day timespan, and send these event messages over email. Currently, we are using Kafka Pub/Sub for messaging. The customer wants us to move to Apache Flink, and I am trying to understand how Apache Flink could be a better fit for us.
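For orientation, here is a minimal PyFlink sketch of the filtering step described in the question above. It uses an in-memory sample instead of Kafka, and the event layout, cutoff, and print sink are hypothetical; a real alerting job would consume from a Kafka source and hand matching events to a notification or email sink.

```python
# Minimal PyFlink sketch: keep only events from the last 24 hours.
# Sample events, field layout, and the print sink are hypothetical;
# a real job would read from Kafka and forward matches to a notification sink.
import time

from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

now = time.time()
cutoff = now - 24 * 60 * 60  # one day ago, as a Unix timestamp

# Each event is a (event_id, event_time) pair.
events = env.from_collection([
    ("evt-1", now - 2 * 60 * 60),       # two hours ago -> should alert
    ("evt-2", now - 3 * 24 * 60 * 60),  # three days ago -> filtered out
])

# Keep only events that fall within the one-day timespan.
recent = events.filter(lambda event: event[1] >= cutoff)
recent.print()

env.execute("alert_filter_sketch")
```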

    I have to build a data processing application with an Apache Beam stack and an Apache Flink runner on an Amazon EMR cluster. I have seen some instability with the process, and the EMR clusters keep going down. Here, the Apache Beam application gets inputs from Kafka and sends the accumulative data streams to another Kafka topic. Any advice on how to make the process more stable?
