Amazon Redshift vs Presto: What are the differences?
Introduction
Amazon Redshift, a managed cloud data warehouse, and Presto, a distributed SQL query engine, are both popular choices for processing big data. While both offer powerful capabilities, there are key differences that make each suitable for different use cases. In this article, we will explore the main differences between Amazon Redshift and Presto.
Data Storage and Formats: One of the key differences between Amazon Redshift and Presto lies in how they store and handle data. Redshift uses a columnar storage model optimized for high-performance analytics on large datasets, and it can load and query open data formats such as Parquet, ORC, and Avro (a hedged loading example appears after the summary below). Presto, on the other hand, is a federated query engine that can access data from multiple data sources, including the Hadoop Distributed File System (HDFS), Amazon S3, and relational databases, without the need to replicate or move the data.
Query Performance: Another significant difference between Redshift and Presto is their query performance characteristics. Redshift is designed for high-performance analytics on large-scale datasets, using techniques like columnar storage and advanced query optimization to deliver fast query execution times. It is a massively parallel processing (MPP) database that can scale horizontally by adding more compute nodes. In contrast, Presto is a distributed SQL query engine that offers excellent flexibility but may not achieve the same query performance as Redshift for very large datasets.
Data Consistency and ACID: When it comes to data consistency and ACID (Atomicity, Consistency, Isolation, Durability) properties, there is a difference between Redshift and Presto. Redshift provides ACID-compliant transactions for data loading, updates, and deletes, using serializable isolation to ensure transactional consistency across the cluster. Presto, on the other hand, does not offer built-in support for ACID transactions, as it focuses on interactive query processing across multiple data sources.
Ease of Use and Deployment: Redshift is a fully managed service provided by Amazon Web Services (AWS), which means that the deployment and management of the infrastructure are taken care of by AWS. It offers a user-friendly interface and integrates well with other AWS services. Presto, on the other hand, requires more manual configuration and setup, as it is an open-source project that needs to be deployed on a cluster. It offers more flexibility and control but may require additional technical expertise to set up and maintain.
Cost and Pricing Model: Redshift operates on a pay-as-you-go model, where you pay for the compute resources and storage used. It offers various pricing options, including on-demand, reserved instances, and spot instances, allowing you to choose the most cost-effective option for your workload. Presto, being an open-source project, has no direct cost associated with it. However, you need to consider the cost of the underlying infrastructure and storage for your data sources.
Ecosystem and Integration: Both Redshift and Presto have rich ecosystems and integrate with various tools, frameworks, and data sources. Redshift integrates seamlessly with other AWS services, such as AWS Glue, AWS Data Pipeline, and Amazon Athena, and has connectors for popular BI tools and SQL clients. Presto, being a federated query engine, can access various data sources, including Hive, HDFS, MySQL, PostgreSQL, and more. It also integrates with tools like Apache Kafka, Apache Spark, and Apache Airflow.
In summary, Amazon Redshift and Presto are both powerful distributed query engines, but they differ in terms of data storage, query performance, data consistency, ease of use, cost, and ecosystem integration. The choice between Redshift and Presto depends on the specific requirements of your use case, such as the size of the dataset, query performance needs, ACID compliance requirements, and whether you prefer a fully managed service or more hands-on control.
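As a rough illustration of the storage-format point above, the sketch below loads Parquet files from S3 into a Redshift table with a COPY statement. The cluster endpoint, credentials, table, bucket, and IAM role are all placeholders; Redshift speaks the PostgreSQL wire protocol, so psycopg2 is used as the client.

```python
import psycopg2

# Placeholder connection details for a Redshift cluster (PostgreSQL wire protocol).
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="change-me",
)

with conn, conn.cursor() as cur:
    # Load columnar Parquet files from S3 directly into a Redshift table.
    cur.execute("""
        COPY analytics.page_views
        FROM 's3://my-bucket/page_views/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS PARQUET;
    """)
```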
We need to perform ETL from several databases into a data warehouse or data lake. We want to
- keep raw and transformed data available to users to draft their own queries efficiently
- give users custom permissions and SSO
- move between open-source on-premises development and cloud-based production environments
We want to use only inexpensive Amazon EC2 instances, on a medium-sized data set (16 GB to 32 GB), feeding into Tableau Server or Power BI for reporting and data analysis.
You could also use AWS Lambda with a CloudWatch Events schedule if you know when the function should be triggered. The benefit is that you can use any language and the respective database client.
But if you orchestrate ETLs then it makes sense to use Apache Airflow. This requires Python knowledge.
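For reference, here is a minimal Airflow DAG sketch for a nightly ETL run; the DAG id, schedule, and the ETL callable are placeholders, not a prescribed setup.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder ETL step: pull from the source databases and load into the warehouse.
    pass


with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # cron schedule (older Airflow versions use schedule_interval)
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```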
Though we have always built something custom, Apache Airflow (https://airflow.apache.org/) stood out as a key contender/alternative when it comes to open source. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift itself is expensive.
You may want to look into a data virtualization product called Conduit. It connects to disparate data sources in AWS, on-prem, Azure, and GCP, and exposes them as a single unified Spark SQL view to Power BI (DirectQuery) or Tableau. It supports automatic query and caching policies to improve query speed and experience, has a GPU query engine with optimized Spark as a fallback, and can be deployed on your own AWS VMs or on-prem, scaling both up and out. It sounds like a good fit for your needs.
A cloud data warehouse is the centerpiece of a modern data platform, so choosing the most suitable solution is fundamental.
Our benchmark compared BigQuery and Snowflake. Both solutions seem to match our goals, but they take very different approaches.
BigQuery is notably the only 100% serverless cloud data warehouse, requiring absolutely NO maintenance: no re-clustering, no compression, no index optimization, no storage management, no performance management. Snowflake requires you to set up (paid) re-clustering processes, manage the performance allocated to each profile, and so on. We can also mention Redshift, which we eliminated because it requires even more operational work.
BigQuery can therefore be set up with almost zero human-resource cost. Its on-demand pricing is particularly well suited to small workloads: zero cost when the solution is not in use, and you only pay for the queries you run. As usage grows, though, slots (with monthly or per-minute commitments) drastically reduce the cost of use. We cut the cost of our nightly batches by a factor of 10 by using flex slots.
Finally, a major advantage of BigQuery is its almost perfect integration with Google Cloud Platform services: Cloud Functions, Dataflow, Data Studio, etc.
BigQuery is still evolving very quickly. The next milestone, BigQuery Omni, will allow running queries over data stored in an external cloud platform (Amazon S3, for example). It will be a major breakthrough in the history of cloud data warehouses. Omni will compensate for a weakness of BigQuery: transferring data in near real time from S3 to BigQuery is not easy today, and it is simpler to implement with Snowflake's Snowpipe.
We also plan to use the machine learning features built into BigQuery to accelerate our deployment of data-science-based projects, an opportunity only offered by the BigQuery solution.
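As a hedged sketch of what that could look like, the example below trains a simple model with BigQuery ML through the google-cloud-bigquery client; the project, dataset, model name, and columns are all hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# Train a simple regression model directly inside BigQuery with BigQuery ML.
client.query("""
    CREATE OR REPLACE MODEL `analytics.nightly_sales_model`
    OPTIONS (model_type = 'linear_reg', input_label_cols = ['sales']) AS
    SELECT day_of_week, store_id, sales
    FROM `analytics.daily_sales`
""").result()
```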
To meet employees' critical need for interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, such as supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, cluster auto-scaling, graceful cluster shutdown, and impersonation support for the LDAP authenticator.
Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.
We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances. Together, the Presto clusters have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ Pinterest employees) using Presto, who run about 400K queries on these clusters per month.
Each query submitted to a Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest, and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query-submitted events without corresponding query-finished events. These events enable us to capture the effect of cluster crashes over time.
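Singer itself is Pinterest-internal, so as a rough sketch of the same idea, the snippet below publishes submit/finish events to a Kafka topic with the kafka-python client; the broker address, topic name, and event fields are hypothetical.

```python
import json
import time

from kafka import KafkaProducer

# Hypothetical broker and topic; values are JSON-encoded query lifecycle events.
producer = KafkaProducer(
    bootstrap_servers="kafka-broker:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def log_query_event(query_id, event):
    # event is "submitted" or "finished"; missing "finished" events indicate a crash window.
    producer.send("presto_query_events", {"query_id": query_id, "event": event, "ts": time.time()})

log_query_event("20240101_000000_00001_abcde", "submitted")
log_query_event("20240101_000000_00001_abcde", "finished")
producer.flush()
```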
Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform gives us the ability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on Kubernetes is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.
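A minimal sketch of scaling a pool of Presto workers on Kubernetes, assuming the workers run as a Deployment (the deployment name, namespace, and replica count are hypothetical), using the official Kubernetes Python client:

```python
from kubernetes import client, config

# Load cluster credentials from the local kubeconfig (use load_incluster_config() inside a pod).
config.load_kube_config()
apps = client.AppsV1Api()

# Scale the hypothetical "presto-worker" Deployment to 20 replicas.
apps.patch_namespaced_deployment_scale(
    name="presto-worker",
    namespace="presto",
    body={"spec": {"replicas": 20}},
)
```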
#BigData #AWS #DataScience #DataEngineering
The platform deals with time series data from sensors, aggregated against things (event data that originates at periodic intervals). We use Cassandra as our distributed database to store the time series data. Aggregated data insights from Cassandra are delivered as a web API for consumption by other applications. Presto, as a distributed SQL query engine, can provide faster execution times provided the queries are tuned for proper distribution across the cluster. Another objective we had was to combine Cassandra table data with other business data from an RDBMS or other big data systems, where Presto, through its connector architecture, opens up a whole lot of options for us.
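As a hedged sketch of that kind of federated query, the snippet below joins a Cassandra catalog with a MySQL catalog through the presto-python-client; the coordinator host, catalogs, schemas, tables, and columns are all hypothetical.

```python
import prestodb  # presto-python-client

# Hypothetical coordinator and catalogs configured on the Presto cluster.
conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="analyst",
    catalog="cassandra",
    schema="sensors",
)

cur = conn.cursor()
# Join time series readings stored in Cassandra with device metadata stored in MySQL.
cur.execute("""
    SELECT d.device_name, avg(r.value) AS avg_value
    FROM cassandra.sensors.readings AS r
    JOIN mysql.inventory.devices AS d
      ON r.device_id = d.id
    GROUP BY d.device_name
""")
rows = cur.fetchall()
```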
Pros of Amazon Redshift
- Data Warehousing (41)
- Scalable (27)
- SQL (17)
- Backed by Amazon (14)
- Encryption (5)
- Cheap and reliable (1)
- Isolation (1)
- Best Cloud DW Performance (1)
- Fast columnar storage (1)
Pros of Presto
- Works directly on files in S3 (no ETL) (18)
- Open-source (13)
- Join multiple databases (12)
- Scalable (10)
- Gets ready in minutes (7)
- MPP (6)