Alternatives to AWS Glue

AWS Data Pipeline, Airflow, Apache Spark, Talend, and Alooma are the most popular alternatives and competitors to AWS Glue.

What is AWS Glue and what are its top alternatives?

AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. It is a tool in the Big Data Tools category of a tech stack.

Top Alternatives to AWS Glue

  • AWS Data Pipeline

    AWS Data Pipeline is a web service that provides a simple management system for data-driven workflows. Using AWS Data Pipeline, you define a pipeline composed of the “data sources” that contain your data, the “activities” or business logic such as EMR jobs or SQL queries, and the “schedule” on which your business logic executes. For example, you could define a job that, every hour, runs an Amazon Elastic MapReduce (Amazon EMR)–based analysis on that hour’s Amazon Simple Storage Service (Amazon S3) log data, loads the results into a relational database for future lookup, and then automatically sends you a daily summary email. ...

  • Airflow

    Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command-line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed. ...

  • Apache Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...

  • Talend

    Talend is an open source software integration platform that helps you effortlessly turn data into business insights. It uses native code generation that lets you run your data pipelines seamlessly across all cloud providers and get optimized performance on all platforms. ...

  • Alooma

    Get the power of big data in minutes with Alooma and Amazon Redshift. Simply build your pipelines and map your events using Alooma’s friendly mapping interface. Query, analyze, visualize, and predict now. ...

  • Databricks

    Databricks Unified Analytics Platform, from the original creators of Apache Spark™, unifies data science and engineering across the Machine Learning lifecycle from data preparation to experimentation and deployment of ML applications. ...

  • MySQL

    The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. ...

  • PostgreSQL

    PostgreSQL is an advanced object-relational database management system that supports an extended subset of the SQL standard, including transactions, foreign keys, subqueries, triggers, user-defined types and functions. ...
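
    As a hedged illustration of the transactional behavior mentioned above, here is a minimal sketch using psycopg2 from Python; the connection string and the accounts table are hypothetical, not taken from any post below:

        import psycopg2

        conn = psycopg2.connect("dbname=app user=app")
        try:
            with conn:  # commits on success, rolls back on any exception
                with conn.cursor() as cur:
                    cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = %s", (1,))
                    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = %s", (2,))
        finally:
            conn.close()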

AWS Glue alternatives & related posts

AWS Data Pipeline

Process and move data between different AWS compute and storage services.
PROS OF AWS DATA PIPELINE
  • Easy to create DAG and execute it (1)
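
To make the "data sources, activities, schedule" model concrete, here is a hedged sketch of registering and activating a pipeline with boto3. The pipeline name and object fields are illustrative; a real definition would also declare the EMR activity and S3 data nodes described above:

    import boto3

    dp = boto3.client("datapipeline", region_name="us-east-1")

    # Register an empty pipeline shell.
    created = dp.create_pipeline(name="hourly-log-analysis", uniqueId="hourly-log-analysis-v1")
    pipeline_id = created["pipelineId"]

    # Attach a minimal definition: a default object wired to an hourly schedule.
    dp.put_pipeline_definition(
        pipelineId=pipeline_id,
        pipelineObjects=[
            {"id": "Default", "name": "Default", "fields": [
                {"key": "scheduleType", "stringValue": "cron"},
                {"key": "schedule", "refValue": "HourlySchedule"},
            ]},
            {"id": "HourlySchedule", "name": "HourlySchedule", "fields": [
                {"key": "type", "stringValue": "Schedule"},
                {"key": "period", "stringValue": "1 hour"},
                {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
            ]},
        ],
    )

    dp.activate_pipeline(pipelineId=pipeline_id)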

Airflow

A platform to programmatically author, schedule and monitor data pipelines, by Airbnb.
PROS OF AIRFLOW
  • Features (53)
  • Task Dependency Management (14)
  • Beautiful UI (12)
  • Cluster of workers (12)
  • Extensibility (10)
  • Open source (6)
  • Complex workflows (5)
  • Python (5)
  • Good API (3)
  • Apache project (3)
  • Custom operators (3)
  • Dashboard (2)
CONS OF AIRFLOW
  • Observability is not great when the DAGs exceed 250 (2)
  • Running it on a Kubernetes cluster is relatively complex (2)
  • Open source - provides minimal or no support (2)
  • Logical separation of DAGs is not straightforward (1)

    related Airflow posts

    Data science and engineering teams at Lyft maintain several big data pipelines that serve as the foundation for various types of analysis throughout the business.

    Apache Airflow sits at the center of this big data infrastructure, allowing users to “programmatically author, schedule, and monitor data pipelines.” Airflow is an open source tool, and “Lyft is the very first Airflow adopter in production since the project was open sourced around three years ago.”

    There are several key components of the architecture. A web UI allows users to view the status of their queries, along with an audit trail of any modifications to the query. A metadata database stores things like job status and task instance status. A multi-process scheduler handles job requests, and triggers the executor to execute those tasks.

    Airflow supports several executors, though Lyft uses CeleryExecutor to scale task execution in production. Airflow is deployed to three Amazon Auto Scaling Groups, each associated with a Celery queue.

    Audit logs supplied to the web UI are powered by the existing Airflow audit logs as well as Flask signals.

    Datadog, Statsd, Grafana, and PagerDuty are all used to monitor the Airflow system.
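
    As a concrete illustration of the "programmatically author, schedule, and monitor" model this post describes, here is a minimal DAG sketch in Airflow 2.x style; the DAG id, schedule, and tasks are illustrative, not Lyft's actual pipelines:

        from datetime import datetime

        from airflow import DAG
        from airflow.operators.bash import BashOperator

        with DAG(
            dag_id="hourly_etl",
            start_date=datetime(2023, 1, 1),
            schedule_interval="@hourly",
            catchup=False,
        ) as dag:
            extract = BashOperator(task_id="extract", bash_command="echo extract")
            transform = BashOperator(task_id="transform", bash_command="echo transform")
            load = BashOperator(task_id="load", bash_command="echo load")

            # The >> operator declares dependencies, which form the DAG.
            extract >> transform >> load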


    We are a young start-up with 2 developers and a team in India looking to choose our next ETL tool. We have a few processes in Azure Data Factory but are looking to switch to a better platform. We were debating Trifacta and Airflow. Or even staying with Azure Data Factory. The use case will be to feed data to front-end APIs.

Apache Spark

Fast and general engine for large-scale data processing.
PROS OF APACHE SPARK
  • Open-source (61)
  • Fast and Flexible (48)
  • One platform for every big data problem (8)
  • Great for distributed SQL like applications (8)
  • Easy to install and to use (6)
  • Works well for most data science use cases (3)
  • Interactive Query (2)
  • Machine learning libraries, streaming in real time (2)
  • In-memory computation (2)
CONS OF APACHE SPARK
  • Speed (4)
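
Before the posts below, a minimal PySpark sketch of the batch-plus-interactive pattern described above; the input path and query are illustrative:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("example").getOrCreate()

    # Batch: read JSON logs from any Hadoop-compatible store (HDFS, S3, ...).
    logs = spark.read.json("hdfs:///logs/2023/01/")

    # Interactive-style SQL over the same data.
    logs.createOrReplaceTempView("logs")
    spark.sql(
        "SELECT status, COUNT(*) AS hits FROM logs GROUP BY status ORDER BY hits DESC"
    ).show()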

    related Apache Spark posts

    Eric Colson
    Chief Algorithms Officer at Stitch Fix · 21 upvotes · 6.1M views

    The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

    Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

    At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

    For more info:

    #DataScience #DataStack #Data

    See more
    Patrick Sun
    Software Engineer at Stitch Fix · 10 upvotes · 59.4K views

    As a frontend engineer on the Algorithms & Analytics team at Stitch Fix, I work with data scientists to develop applications and visualizations to help our internal business partners make data-driven decisions. I envisioned a platform that would assist data scientists in the data exploration process, allowing them to visually explore and rapidly iterate through their assumptions, then share their insights with others. This would align with our team's philosophy of having engineers "deploy platforms, services, abstractions, and frameworks that allow the data scientists to conceive of, develop, and deploy their ideas with autonomy", and solve the pain of data exploration.

    The final product, code-named Dora, is built with React, Redux.js and Victory, backed by Elasticsearch to enable fast and iterative data exploration, and uses Apache Spark to move data from our Amazon S3 data warehouse into the Elasticsearch cluster.
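
    A hedged sketch of that last hop (Amazon S3 into Elasticsearch via Spark), using the elasticsearch-hadoop connector; the bucket, index, and node names are illustrative, not Stitch Fix's actual configuration:

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("dora-load").getOrCreate()

        events = spark.read.parquet("s3a://example-warehouse/events/")

        # Requires the elasticsearch-hadoop package on the Spark classpath.
        (events.write
            .format("org.elasticsearch.spark.sql")
            .option("es.nodes", "elasticsearch.internal:9200")
            .option("es.resource", "events")
            .mode("append")
            .save())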

Talend

A single, unified suite for all integration needs.

Alooma

Integrate any data source like databases, applications, and any API - with your own Amazon Redshift.

Databricks

A unified analytics platform, powered by Apache Spark.
PROS OF DATABRICKS
  • Best performance on large datasets (1)
  • True lakehouse architecture (1)
  • Scalability (1)
  • Databricks doesn't get access to your data (1)
  • Usage-based billing (1)
  • Security (1)
  • Data stays in your cloud account (1)
  • Multicloud (1)

related Databricks posts

Jan Vlnas
Senior Software Engineer at Mews · 5 upvotes · 455.3K views

From my point of view, both OpenRefine and Apache Hive serve completely different purposes. OpenRefine is intended for interactive cleaning of messy data locally. You could work with its libraries to use some of OpenRefine's features as part of your data pipeline (there are pointers in the FAQ), but OpenRefine in general is intended for single-user, local operation.

I can't recommend a particular alternative without a better understanding of your use case. But if you are looking for an interactive tool to work with big data at scale, take a look at notebook environments like Jupyter, Databricks, or Deepnote. If you are building a data processing pipeline, also consider Apache Spark.

Edit: Fixed references from Hadoop to Hive, which is actually closer to Spark.

MySQL

The world's most popular open source database.
PROS OF MYSQL
  • Sql (800)
  • Free (679)
  • Easy (562)
  • Widely used (528)
  • Open source (490)
  • High availability (180)
  • Cross-platform support (160)
  • Great community (104)
  • Secure (79)
  • Full-text indexing and searching (75)
  • Fast, open, available (26)
  • Reliable (16)
  • SSL support (16)
  • Robust (15)
  • Enterprise Version (9)
  • Easy to set up on all platforms (7)
  • NoSQL access to JSON data type (3)
  • Relational database (1)
  • Easy, light, scalable (1)
  • Sequel Pro (best SQL GUI) (1)
  • Replica Support (1)
CONS OF MYSQL
  • Owned by a company with their own agenda (16)
  • Can't roll back schema changes (3)

              related MySQL posts

              Nick Rockwell
              SVP, Engineering at Fastly · 46 upvotes · 4.1M views

              When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) Stack and the front end framework, in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company when that was called MIT FIVE when it had launched. And been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?

              So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.

              React is now talking to GraphQL as a primary API. There's a Node.js back end for the front end, which is mainly for server-side rendering as well.

              Behind there, the main repository for the GraphQL server is a big table repository that we call Bodega, because it's a convenience store. And that reads off of a Kafka pipeline.

              Tim Abbott

              We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults, to bad query plans which required a lot of manual query tweaks.

              We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

              And then after that, we've just gotten a ton of value out of postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
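
              For context, a hedged sketch of the kind of partial index mentioned above, written with Django's ORM (Django 2.2+); the model and fields are illustrative, not Zulip's actual schema:

                  from django.db import models
                  from django.db.models import Q

                  class UserMessage(models.Model):
                      user_id = models.IntegerField()
                      message_id = models.IntegerField()
                      unread = models.BooleanField(default=True)

                      class Meta:
                          indexes = [
                              # Index only unread rows; queries filtered on unread=True
                              # stay fast without indexing the far larger read set.
                              models.Index(
                                  fields=["user_id", "message_id"],
                                  condition=Q(unread=True),
                                  name="unread_message_idx",
                              ),
                          ]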

              I can't recommend it highly enough.

PostgreSQL

A powerful, open source object-relational database system.
PROS OF POSTGRESQL
  • Relational database (763)
  • High availability (510)
  • Enterprise class database (439)
  • Sql (383)
  • Sql + nosql (304)
  • Great community (173)
  • Easy to setup (147)
  • Heroku (131)
  • Secure by default (130)
  • Postgis (113)
  • Supports Key-Value (50)
  • Great JSON support (48)
  • Cross platform (34)
  • Extensible (33)
  • Replication (28)
  • Triggers (26)
  • Multiversion concurrency control (23)
  • Rollback (23)
  • Open source (21)
  • Heroku Add-on (18)
  • Stable, Simple and Good Performance (17)
  • Powerful (15)
  • Lets be serious, what other SQL DB would you go for? (13)
  • Good documentation (11)
  • Scalable (9)
  • Free (8)
  • Reliable (8)
  • Intelligent optimizer (8)
  • Transactional DDL (7)
  • Modern (7)
  • One stop solution for all things sql no matter the os (6)
  • Relational database with MVCC (5)
  • Faster Development (5)
  • Full-Text Search (4)
  • Developer friendly (4)
  • Excellent source code (3)
  • Free version (3)
  • Great DB for Transactional system or Application (3)
  • Relational database (3)
  • Search (3)
  • Open-source (3)
  • Text (2)
  • Full-text (2)
  • Can handle up to petabytes worth of size (1)
  • Composability (1)
  • Multiple procedural languages supported (1)
  • Native (0)
CONS OF POSTGRESQL
  • Table/index bloat (10)

              related PostgreSQL posts

              Simon Reymann
              Senior Fullstack Developer at QUANTUSflow Software GmbH · 30 upvotes · 11.1M views

              Our whole DevOps stack consists of the following tools:

              • GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) for collaborative review and code management tool
              • Git as the underlying revision control system
              • SourceTree as Git GUI
              • Visual Studio Code as IDE
              • CircleCI for continuous integration (automating the development process)
              • Prettier / TSLint / ESLint as code linter
              • SonarQube as quality gate
              • Docker as container management (incl. Docker Compose for multi-container application management)
              • VirtualBox for operating system simulation tests
              • Kubernetes as cluster management for docker containers
              • Heroku for deploying in test environments
              • nginx as web server (preferably used as a facade server in production environments)
              • SSLMate (using OpenSSL) for certificate management
              • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
              • PostgreSQL as preferred database system
              • Redis as preferred in-memory database/store (great for caching)

              The main reason we have chosen Kubernetes over Docker Swarm is related to the following artifacts:

              • Key features: Easy and flexible installation, Clear dashboard, Great scaling operations, Monitoring is an integral part, Great load balancing concepts, Monitors the condition and ensures compensation in the event of failure.
              • Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
              • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
              • Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
              • Scalability: All-in-one framework for distributed systems.
              • Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.

              Jeyabalaji Subramanian

              Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

              We set ourselves the following criteria for the optimal tool that would do this job:

                • The data replication must be near real-time, yet it should NOT impact the production database
                • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

              Based on the above criteria, we selected the following tools to perform the end to end data replication:

              We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using stitch triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

              We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

              In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

              Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload and mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as an event-driven & horizontally scalable Lambda service is dumb-easy.
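
              A hedged sketch of what such a handler might look like; the table, the payload shape (op / id / doc), and the connection URL are illustrative guesses, not the authors' actual code:

                  import json
                  import os

                  from sqlalchemy import create_engine, text

                  # Created once per Lambda container; the URL is illustrative.
                  engine = create_engine(os.environ.get("WAREHOUSE_URL", "postgresql://user:pass@host/dw"))

                  def handler(event, context):
                      # With an SQS trigger, Lambda delivers a batch in event["Records"].
                      for record in event["Records"]:
                          change = json.loads(record["body"])
                          with engine.begin() as conn:  # one transaction per change event
                              if change["op"] in ("insert", "update", "replace"):
                                  conn.execute(
                                      text(
                                          "INSERT INTO events (id, doc) VALUES (:id, :doc) "
                                          "ON CONFLICT (id) DO UPDATE SET doc = EXCLUDED.doc"
                                      ),
                                      {"id": change["id"], "doc": json.dumps(change["doc"])},
                                  )
                              elif change["op"] == "delete":
                                  conn.execute(
                                      text("DELETE FROM events WHERE id = :id"),
                                      {"id": change["id"]},
                                  )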

              In the end, we got to implement a highly scalable, near-realtime Change Data Replication service that "works" and deployed it to production in a matter of a few days!
