Amazon Athena vs Apache Spark vs Presto: What are the differences?
Introduction:
In the world of big data and analytics, several tools are available for processing and analyzing large volumes of data. Three popular options are Amazon Athena, Apache Spark, and Presto. All three offer powerful capabilities, but key differences between them make them suitable for different use cases.
Data Processing Paradigm: Amazon Athena is a serverless interactive query service that lets you analyze data stored in Amazon S3 directly using standard SQL. It is designed for ad-hoc queries and requires no infrastructure provisioning. Presto is likewise a distributed SQL query engine built for interactive analytics, but you operate the cluster yourself. Apache Spark, by contrast, is a general-purpose distributed computing framework that supports batch processing, real-time streaming, machine learning, and graph processing, giving it a more comprehensive set of data processing capabilities.
Scalability: Amazon Athena is highly scalable and can handle large volumes of data, but its performance can degrade with the size and complexity of the dataset, and you cannot tune the underlying resources. Apache Spark and Presto are designed to scale horizontally across clusters you control, letting you process and analyze massive amounts of data efficiently and build large-scale data pipelines.
Flexibility: Amazon Athena is tightly integrated with Amazon S3 and is optimized for querying data stored there. It supports various file formats such as Parquet, ORC, CSV, and JSON. Apache Spark and Presto, on the other hand, are agnostic to the underlying storage system and support a wide range of data sources, including the Hadoop Distributed File System (HDFS), Amazon S3, and Apache Kafka, giving them more flexibility in data source compatibility.
Processing Speed: Amazon Athena provides interactive query performance by scanning data directly in Amazon S3, though latency varies with the size, layout, and complexity of the data. Apache Spark can be faster for iterative workloads thanks to its in-memory computing: it caches data in memory and distributes computation across the cluster. Presto achieves low query latency through massively parallel, pipelined in-memory execution.
Data Types and Functions: Amazon Athena provides a wide range of built-in functions and supports common data types for querying data, but its SQL surface is more limited. Apache Spark and Presto offer more extensive sets of built-in functions and data types, and Spark adds libraries for data manipulation, aggregation, and analysis. This makes them more suitable for complex data transformations and advanced analytics tasks.
Ecosystem and Integration: Amazon Athena is part of the broader AWS ecosystem and integrates seamlessly with other AWS services such as AWS Glue for data cataloging and AWS Lambda for automating data workflows. Apache Spark and Presto have rich open-source ecosystems and integrate with many data processing frameworks, storage systems, and machine learning libraries, including big data technologies like Hadoop, Hive, and HBase.
In summary, Amazon Athena is a serverless query service optimized for analyzing data stored in Amazon S3 using SQL; Presto is a self-managed distributed SQL engine for interactive analytics over many data sources; and Apache Spark is a versatile distributed computing framework that also covers batch processing, real-time streaming, machine learning, and graph processing. The key differences lie in their processing paradigms, scalability, flexibility, speed, SQL capabilities, and ecosystem integrations.
We have a Kafka topic with events of type A and type B. We need to perform an inner join on both types of events using a common field (primary key), and insert the joined events into Elasticsearch.
Usually, type A and type B events with the same key arrive within 15 minutes of each other. In some cases, though, they can be as far apart as, say, 6 hours, and sometimes an event of one of the types never arrives.
In all cases, we should be able to find joined events instantly after they are joined, and un-joined events within 15 minutes.
The first solution that came to me is to use upserts to update Elasticsearch:
- Use the primary-key as ES document id
- Upsert each record to ES as soon as you receive it. Because you are using upsert, the 2nd record with the same primary key will not overwrite the 1st one, but will be merged with it.
Cons: The load on ES will be higher, due to upsert.
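The merge behaviour that makes this work can be shown with a small in-memory sketch (an assumption-laden stand-in for ES, illustrative only; with the official Python `elasticsearch` client the equivalent call would be roughly `es.update(index="events", id=pk, doc=partial, doc_as_upsert=True)`):

```python
# In-memory sketch of Elasticsearch upsert semantics (not the real client).
index = {}  # document id -> document


def upsert(doc_id, partial_doc):
    """Merge partial_doc into the stored document, creating it if absent."""
    doc = index.setdefault(doc_id, {})
    doc.update(partial_doc)  # new fields are merged into the existing doc
    return dict(doc)


# Event A and event B with the same primary key end up in one joined document.
upsert("pk-42", {"a_field": 1})
joined = upsert("pk-42", {"b_field": 2})
```

Because the document id is the primary key, the join happens implicitly in ES, and a record whose partner never arrives simply stays a partial document.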
To use Flink:
- Create a KeyedDataStream by the primary-key
- In the ProcessFunction, save the first record in a State. At the same time, create a Timer for 15 minutes in the future
- When the 2nd record comes, read the 1st record from the State, merge those two, and send out the result, and clear the State and the Timer if it has not fired
- When the Timer fires, read the 1st record from the State and send out as the output record.
- Have a 2nd Timer of 6 hours (or more), if you are not using windowing, to clean up the State
Pro: this makes sense if you already have Flink ingesting this stream. Otherwise, I would just go with the 1st solution.
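The keyed-state-plus-timer logic above can be sketched in plain Python (an assumption-based simulation of what a Flink `KeyedProcessFunction` would do, not the Flink API; timestamps are in minutes and events are assumed to arrive in timestamp order):

```python
# Simulation of the keyed-state + timer join; not Flink code.
JOIN_TIMEOUT = 15  # minutes: emit an un-joined event after this delay


def run(events):
    """events: iterable of (key, event_type, payload, timestamp)."""
    state = {}   # key -> first event seen for that key
    timers = {}  # key -> time at which the 15-minute timer fires
    out = []

    def fire_due_timers(now):
        for key in [k for k, t in timers.items() if t <= now]:
            out.append(("unjoined", state.pop(key)))  # timer fired: emit alone
            del timers[key]

    for key, etype, payload, ts in events:
        fire_due_timers(ts)
        if key in state:  # 2nd record: read the 1st, merge, clear state/timer
            first = state.pop(key)
            del timers[key]
            out.append(("joined", first, (key, etype, payload, ts)))
        else:             # 1st record: buffer it and register the timer
            state[key] = (key, etype, payload, ts)
            timers[key] = ts + JOIN_TIMEOUT

    fire_due_timers(float("inf"))  # end of stream: flush remaining timers
    return out
```

In real Flink, the timer callback and the state clean-up would live in `onTimer` and `processElement` of the process function, with the 6-hour timer guarding state size.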
Please refer to the "Structured Streaming" feature of Spark, specifically the "Stream-Stream Joins" section at https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#stream-stream-joins . In short, you need to define watermark delays on both inputs and define a constraint on event time across the two inputs.
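Those two settings can be illustrated with a small pure-Python sketch (an assumption: this models the concepts only, not the Spark API; in PySpark they correspond to `withWatermark(...)` on each input and an interval condition in the join expression):

```python
# Illustration of the two requirements for a Spark stream-stream join:
# a watermark delay on both inputs and a time constraint across them.
from datetime import datetime, timedelta

WATERMARK_DELAY = timedelta(minutes=15)  # how late an event may arrive
MAX_JOIN_GAP = timedelta(hours=6)        # time constraint across the inputs


def can_join(a_time, b_time):
    """An A/B pair is joinable only if their event times are within the gap."""
    return abs(a_time - b_time) <= MAX_JOIN_GAP


def state_droppable(buffered_time, watermark):
    """Buffered state can be cleaned up once the watermark guarantees no
    matching event within the join gap can still arrive."""
    return watermark > buffered_time + MAX_JOIN_GAP + WATERMARK_DELAY
```

For the 6-hour case in the question, the join gap would be 6 hours, which also bounds how long Spark keeps un-matched events in state.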
Hi all,
Currently, we need to ingest data from Amazon S3 into either Amazon Athena or Amazon Redshift. The problem is that the data is in .PSV (pipe-separated values) format and is over 200 GB in size. Query performance in Athena/Redshift is not up to the mark: queries time out or run too slowly compared to Google BigQuery. How can I optimize performance and query response time? Can anyone please help me out?
You can use the AWS Glue service to convert your pipe-format data to Parquet format, and thus achieve data compression. Given how large the data is, you should choose Redshift to copy your data into. To manage your data, partition it in the S3 bucket and also distribute it across the Redshift cluster.
First of all, you should make your choice between Redshift and Athena based on your use case, since they are two very different services: Redshift is an enterprise-grade MPP data warehouse, while Athena is a SQL layer on top of S3 with limited performance. If performance is a key factor, users are going to execute unpredictable queries, and direct and management costs are not a problem, I'd definitely go for Redshift. If performance is not so critical and queries will be somewhat predictable, I'd go for Athena.
Once you select the technology, you'll need to optimize your data to get queries executed as fast as possible. In both cases you may need to adapt the data model to fit your queries better. If you go for Athena, you'd also probably need to change your file format to Parquet or Avro and review your partition strategy based on your most frequent type of query. If you choose Redshift, you'll need to ingest the data from your files into it and maybe carry out some tuning tasks for performance gains.
I'd recommend Redshift for now, since it addresses a wider range of use cases, but we could give you better advice if you described your use case in depth.
It depends on the nature of your data (structured or not?) and of course on your queries (ad hoc or predictable?). For example, you can look at partitioning and columnar formats to maximize MPP capabilities for both Athena and Redshift.
You can convert your PSV-format data to the Parquet file format with AWS Glue, and your query performance will improve.
To provide employees with the critical capability of interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, such as supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, cluster auto-scaling, graceful cluster shutdown, and impersonation support for the LDAP authenticator.
Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.
We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances. Together, the clusters have over 100 TB of memory and 14K vcpu cores. Within Pinterest, more than 1,000 monthly active users (out of 1,600+ Pinterest employees) use Presto, running about 400K queries on these clusters per month.
Each query submitted to Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query submitted events without corresponding query finished events. These events enable us to capture the effect of cluster crashes over time.
Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on Kubernetes is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.
#BigData #AWS #DataScience #DataEngineering
The platform deals with time series data from sensors aggregated against things (event data that originates at periodic intervals). We use Cassandra as our distributed database to store the time series data. Aggregated data insights from Cassandra are delivered as web APIs for consumption by other applications. Presto, as a distributed SQL query engine, can provide faster execution times provided the queries are tuned for proper distribution across the cluster. Another objective we had was to combine Cassandra table data with business data from an RDBMS or other big data systems, where Presto, through its connector architecture, would open up a whole range of options for us.
Pros of Amazon Athena
- Use SQL to analyze CSV files
- Glue crawlers give an easy data catalogue
- Cheap
- Query all my data without running servers 24x7
- No database servers, yay
- Easy integration with QuickSight
- Query and analyse CSV, Parquet, JSON files in SQL
- Glue and Athena use the same data catalog
- No configuration required
- Ad hoc checks on data made easy
Pros of Presto
- Works directly on files in S3 (no ETL)
- Open-source
- Join multiple databases
- Scalable
- Gets ready in minutes
- MPP
Pros of Apache Spark
- Open-source
- Fast and flexible
- One platform for every big data problem
- Great for distributed SQL-like applications
- Easy to install and to use
- Works well for most data science use cases
- Interactive query
- Machine learning library, streaming in real time
- In-memory computation
Cons of Amazon Athena
Cons of Presto
Cons of Apache Spark
- Speed