
AWS Glue vs Apache Hive: What are the differences?

AWS Glue and Apache Hive are popular tools used for big data processing. Let's explore the key differences between them.

  1. Data Processing Engine: AWS Glue is a fully managed, serverless extract, transform, and load (ETL) service provided by Amazon Web Services (AWS). It runs ETL jobs on a managed Apache Spark engine. Apache Hive, on the other hand, is a data warehouse infrastructure built on top of Hadoop that traditionally uses MapReduce as its execution engine (newer releases can also run on Tez or Spark). A minimal Glue job sketch follows this list.

  2. Ease of Use and Maintenance: AWS Glue is designed to be a fully managed service, which means AWS takes care of all the infrastructure and maintenance tasks. It provides an easy-to-use graphical interface for configuring and scheduling ETL jobs, as well as automatic data schema discovery. In contrast, Apache Hive requires manual setup and configuration of Hadoop and Hive clusters, making it more complex to deploy and maintain.

  3. Performance and Scalability: AWS Glue leverages the elasticity of its serverless infrastructure and the parallelism of Apache Spark, with Amazon S3 as the underlying storage layer. It can process large volumes of data in parallel, making it highly scalable. Apache Hive, on the other hand, relies on Hadoop's MapReduce framework, whose batch-oriented, disk-heavy execution model can be less efficient for processing large datasets.

  4. Data Catalog and Metadata Management: AWS Glue provides a centralized data catalog that automatically crawls, catalogs, and indexes metadata from various data sources. It allows users to define custom schemas and transformations, and provides a unified view of data across different storage systems. Apache Hive also provides a metastore for managing metadata, but it requires explicit schema definitions and manual updates to the catalog. A crawler sketch follows the summary below.

  5. Integration with Ecosystem: AWS Glue seamlessly integrates with other AWS services, such as Amazon S3, Amazon Redshift, and Amazon Athena, enabling users to easily build end-to-end data processing pipelines in the AWS ecosystem. It also provides built-in integration with popular data integration tools like AWS Data Pipeline and AWS Glue DataBrew. Apache Hive, on the other hand, is part of the Apache Hadoop ecosystem and can interact with various components like HDFS, YARN, and HBase.

  6. Cost and Pricing Model: AWS Glue follows a pay-as-you-go pricing model, where users are charged for the data processing units (DPUs) consumed by crawlers and ETL jobs, plus the cost of data stored in Amazon S3. Apache Hive, being open source, is free to use, but it requires manual infrastructure provisioning and maintenance, which can incur additional costs in terms of hardware, software licenses, and administration.
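To make points 1 and 4 above concrete, here is a minimal sketch of an AWS Glue job in PySpark. It assumes a table has already been registered in the Glue Data Catalog; the database, table, and bucket names are hypothetical placeholders, and the `awsglue` modules are only available inside the Glue job runtime.

```python
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue boilerplate: resolve job arguments and build the Glue/Spark contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table previously registered in the Glue Data Catalog (names are placeholders).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="analytics", table_name="raw_orders"
)

# Rename/cast a couple of columns as a trivial transform step.
mapped = ApplyMapping.apply(
    frame=orders,
    mappings=[("order_id", "string", "order_id", "string"),
              ("amount", "string", "amount", "double")],
)

# Write the result back to S3 as Parquet.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```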

In summary, AWS Glue is a fully managed extract, transform, and load (ETL) service that simplifies the process of building and managing data pipelines for analytics. In contrast, Apache Hive is an open-source data warehouse infrastructure built on top of Hadoop, offering SQL-like query capabilities and schema-on-read functionality for large-scale data processing. While AWS Glue offers a serverless and managed approach to data transformation with integration into the AWS ecosystem, Apache Hive provides a flexible and scalable solution for data warehousing within Hadoop environments.
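As an illustration of the catalog difference (point 4 above), this is roughly what pointing a Glue crawler at an S3 prefix looks like with boto3: the crawler infers the schema and registers the table automatically, whereas in Hive you would declare the table definition yourself. The crawler name, role ARN, and paths below are hypothetical.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Create a crawler that scans an S3 prefix and registers whatever tables it finds
# in the "analytics" catalog database (all identifiers here are placeholders).
glue.create_crawler(
    Name="orders-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="analytics",
    Targets={"S3Targets": [{"Path": "s3://my-bucket/raw/orders/"}]},
    SchemaChangePolicy={"UpdateBehavior": "UPDATE_IN_DATABASE",
                        "DeleteBehavior": "LOG"},
)

# Kick off the crawl; once it finishes, the table is queryable from Athena, Glue jobs, etc.
glue.start_crawler(Name="orders-crawler")
```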

Advice on Apache Hive and AWS Glue

We need to perform ETL from several databases into a data warehouse or data lake. We want to

  • keep raw and transformed data available to users to draft their own queries efficiently
  • give users the ability to give custom permissions and SSO
  • move between open-source on-premises development and cloud-based production environments

We want to use only inexpensive Amazon EC2 instances, on medium-sized data sets (roughly 16 GB to 32 GB), feeding into Tableau Server or Power BI for reporting and data analysis purposes.

Replies (3)
John Nguyen
Recommends Airflow and AWS Lambda

You could also use AWS Lambda with a CloudWatch Events schedule if you know when the function should be triggered. The benefit is that you could write it in any language and use the respective database client.

But if you orchestrate several ETLs, it makes sense to use Apache Airflow. This requires Python knowledge; a minimal DAG sketch follows.
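For reference, a nightly extract-and-load DAG in Airflow might look like the sketch below. The task functions, schedule, and names are placeholders, not a full pipeline.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_source():
    # Placeholder: pull rows from the source database with its native client.
    pass

def load_to_warehouse():
    # Placeholder: write the extracted data into the warehouse or data lake.
    pass

with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_source)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load
```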

Recommends Airflow

Though we have always built something custom, Apache Airflow (https://airflow.apache.org/) stood out as a key contender/alternative among the open-source options. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift as such is expensive.

Recommends

You may want to look into a Data Virtualization product called Conduit. It connects to disparate data sources in AWS, on prem, Azure, GCP, and exposes them as a single unified Spark SQL view to PowerBI (direct query) or Tableau. Allows auto query and caching policies to enhance query speeds and experience. Has a GPU query engine and optimized Spark for fallback. Can be deployed on your AWS VM or on prem, scales up and out. Sounds like the ideal solution to your needs.

Vamshi Krishna
Data Engineer at Tata Consultancy Services · 4 upvotes · 242.7K views

I have to collect different data from multiple sources and store them in a single cloud location. Then perform cleaning and transforming using PySpark, and push the end results to other applications like reporting tools, etc. What would be the best solution? I can only think of Azure Data Factory + Databricks. Are there any alternatives to #AWS services + Databricks?


Hi all,

Currently, we need to ingest data from Amazon S3 into a DB, either Amazon Athena or Amazon Redshift. But the problem with the data is that it is in .PSV (pipe-separated values) format, and its size is above 200 GB. Query performance and timeouts in Athena/Redshift are not up to the mark; it is too slow compared to Google BigQuery. How would I optimize the performance and query result time? Can anyone please help me out?

Replies (4)

You can use the AWS Glue service to convert your pipe-delimited data to Parquet format, and thus achieve data compression. Since your data is very large, Redshift is the better choice to COPY it into. To manage your data, you should partition it in the S3 bucket and also distribute it across the Redshift cluster (for example with a distribution key).
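As a rough sketch of that conversion step (which could run as a Glue job or on any Spark cluster), the PySpark below reads the pipe-separated files and rewrites them as compressed, partitioned Parquet. The S3 paths and partition column are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("psv-to-parquet").getOrCreate()

# Read the pipe-separated files from S3 (paths and options are placeholders).
df = (spark.read
      .option("sep", "|")
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://my-bucket/raw/events/"))

# Write partitioned Parquet so Athena or Redshift Spectrum can prune data at query time.
(df.write
   .mode("overwrite")
   .partitionBy("event_date")
   .parquet("s3://my-bucket/curated/events/"))
```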

Carlos Acedo
Data Technologies Manager at SDG Group Iberia · 5 upvotes · 234.8K views
Recommends
on
Amazon RedshiftAmazon Redshift

First of all, you should base your choice of Redshift or Athena on your use case, since they are two very different services: Redshift is an enterprise-grade MPP data warehouse, while Athena is a SQL layer on top of S3 with limited performance. If performance is a key factor, users are going to execute unpredictable queries, and direct and management costs are not a problem, I'd definitely go for Redshift. If performance is not so critical and queries will be somewhat predictable, I'd go for Athena.

Once you select the technology, you'll need to optimize your data in order to get queries executed as fast as possible. In both cases you may need to adapt the data model to fit your queries better. If you go for Athena, you'd also probably need to change your file format to Parquet or Avro and review your partition strategy depending on your most frequent type of query (a conversion sketch follows this reply). If you choose Redshift, you'll need to ingest the data from your files into it and maybe carry out some tuning tasks for performance gains.

I'll recommend Redshift for now since it can address a wider range of use cases, but we could give you better advice if you described your use case in depth.
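If the Athena route is chosen, one way to do the Parquet conversion entirely within Athena is a CTAS statement submitted with boto3, as in the sketch below. The database, table, and bucket names are hypothetical, and note that in Athena CTAS the partition column must be the last column selected.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# CTAS query converting a raw PSV-backed table into partitioned Parquet
# (all identifiers and paths are placeholders).
ctas = """
CREATE TABLE analytics.events_parquet
WITH (format = 'PARQUET',
      external_location = 's3://my-bucket/curated/events_parquet/',
      partitioned_by = ARRAY['event_date'])
AS SELECT order_id, amount, event_date FROM analytics.events_raw
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)
```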

Alexis Blandin
Recommends Amazon Athena

It depends on the nature of your data (structured or not?) and of course your queries (ad hoc or predictable?). For example, you can look at partitioning and columnar formats to maximize MPP capabilities for both Athena and Redshift.

Recommends

You can convert your PSV-format data to the Parquet file format with AWS Glue, and then your query performance will be improved.

Decisions about Apache Hive and AWS Glue
Ashish Singh
Tech Lead, Big Data Platform at Pinterest · 38 upvotes · 2.9M views

To provide employees with the critical need of interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, auto-scaling clusters, graceful cluster shutdown, and impersonation support for the LDAP authenticator.

Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.

We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters are composed of a fleet of 450 r4.8xl EC2 instances. Together, the Presto clusters have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ Pinterest employees in total) using Presto, who run about 400K queries on these clusters per month.

Each query submitted to a Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest, and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query-submitted events without corresponding query-finished events. These events enable us to capture the effect of cluster crashes over time.
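Purely as an illustration of that submit/finish event pattern (not Pinterest's actual Singer agent), a producer-side sketch with the kafka-python client could look like this; the topic name and event fields are made up.

```python
import json
import time
from kafka import KafkaProducer

# Hypothetical producer for query lifecycle events; topic and fields are illustrative only.
producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

def log_query_event(query_id: str, state: str) -> None:
    # Emit one event per state transition: "submitted" when the query arrives,
    # "finished" when it completes. Missing "finished" events point at cluster crashes.
    producer.send("presto_query_events", {
        "query_id": query_id,
        "state": state,
        "timestamp": time.time(),
    })

log_query_event("20240101_000001_00042_abcde", "submitted")
# ... query runs ...
log_query_event("20240101_000001_00042_abcde", "finished")
producer.flush()
```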

Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on the Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.
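As a hedged illustration of how quickly workers can be added or removed on Kubernetes, scaling a hypothetical Presto worker Deployment with the official Kubernetes Python client is a single call; the deployment name, namespace, and replica count below are placeholders.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (inside a cluster, use load_incluster_config()).
config.load_kube_config()
apps = client.AppsV1Api()

# Scale the (hypothetical) Presto worker Deployment to 50 replicas.
apps.patch_namespaced_deployment_scale(
    name="presto-worker",
    namespace="presto",
    body={"spec": {"replicas": 50}},
)
```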

#BigData #AWS #DataScience #DataEngineering

Karthik Raveendran
CPO at Attinad Software · 3 upvotes · 208.8K views

The platform deals with time-series data from sensors, aggregated against things (event data that originates at periodic intervals). We use Cassandra as our distributed database to store the time-series data. Aggregated data insights from Cassandra are delivered as a web API for consumption by other applications. Presto, as a distributed SQL query engine, can provide faster execution times, provided the queries are tuned for proper distribution across the cluster. Another objective we had was to combine Cassandra table data with other business data from RDBMS or other big data systems, where Presto, through its connector architecture, opens up a whole lot of options for us.

Pros of Apache Hive

    No pros have been listed yet.

Pros of AWS Glue

    • Managed Hive Metastore (9 upvotes)


    What is Apache Hive?

    Hive facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage.

    What is AWS Glue?

    A fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.


    What are some alternatives to Apache Hive and AWS Glue?
    HBase
    Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Apache Hadoop.
    Apache Spark
    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
    Presto
    Distributed SQL Query Engine for Big Data
    Hadoop
    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
    Apache Impala
    Impala is a modern, open source, MPP SQL query engine for Apache Hadoop. Impala is shipped by Cloudera, MapR, and Amazon. With Impala, you can query data, whether stored in HDFS or Apache HBase – including SELECT, JOIN, and aggregate functions – in real time.