Dremio vs Presto: What are the differences?
Introduction
In this article, we will explore the key differences between Dremio and Presto, two popular data query engines. Both Dremio and Presto are used for querying and analyzing large volumes of data in a distributed fashion, but they have some distinct features and functionalities that set them apart.
Data Source Support: Dremio connects to a wide range of sources, including relational databases (like MySQL, PostgreSQL), NoSQL databases (like MongoDB), cloud storage services (like Amazon S3), and Hadoop distributed file systems, and layers a unified semantic view on top of them. Presto likewise reaches many sources through its connector architecture (Hive, Cassandra, Kafka, MySQL, PostgreSQL, MongoDB, and others), though its historical focus has been querying data already stored in Hadoop, cloud storage, and relational databases.
Architecture: Dremio ships as an integrated platform: a distributed query engine built on Apache Arrow, together with an acceleration layer ("Data Reflections") that maintains optimized materializations of source data. Presto follows a coordinator/worker architecture: the coordinator parses, plans, and schedules queries, while workers execute query fragments; storage is delegated entirely to external systems via connectors, allowing compute to scale independently of the data.
Network Optimization: Dremio combines data locality awareness, a columnar in-memory format, and intelligent caching to optimize data access over the network, fetching only the columns and rows a query actually needs from remote nodes. Presto reduces network traffic mainly through pushdown: where a connector supports it, predicates and projections are evaluated at the source; data that cannot be pushed down must be shipped to the workers, which can mean higher network overhead for some sources.
SQL Compatibility: Both Dremio and Presto speak ANSI SQL and offer a broadly similar set of functionality, including complex data types, nested queries, and window functions. Dremio adds conveniences for curating data, such as virtual datasets defined in SQL; Presto is likewise strong at interactive SQL, though the exact set of supported types and functions differs at the edges, so specific workloads are worth testing on both.
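As an illustration, a query of the kind both engines aim to support, a window function over a nested subquery, might look like this (the table and column names are hypothetical):

```sql
-- Hypothetical table: orders(order_id, customer_id, amount, order_date)
-- Rank each customer's orders by amount using a window function.
SELECT customer_id,
       order_id,
       amount,
       RANK() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS amount_rank
FROM (
    -- Nested query: restrict to recent orders before ranking
    SELECT order_id, customer_id, amount
    FROM orders
    WHERE order_date >= DATE '2024-01-01'
) recent_orders;
```

Standard ANSI SQL of this shape should run unchanged on either engine; it is the less common type and function extensions where the two diverge.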
Administration and Management: Dremio offers a comprehensive web-based user interface for managing and monitoring the system, including fine-grained access control, query profiling, and performance tuning capabilities. Presto ships a web UI for monitoring running queries, but day-to-day administration is done mainly through configuration files and command-line tools, and more advanced management tasks often call for additional external tooling.
Deployment Flexibility: Dremio provides a single, integrated platform that can be deployed on-premises or in the cloud, with a Kubernetes-based option for containerized environments. Presto is designed for distributed deployment across a cluster of machines and sits naturally alongside big data ecosystems, most commonly querying Apache Hive tables stored in HDFS or cloud object storage.
In summary, Dremio and Presto have some significant differences in terms of data source support, architecture, network optimization, SQL compatibility, administration, and deployment flexibility. However, both offer powerful data querying capabilities and are suited for different use cases depending on specific requirements.
We need to perform ETL from several databases into a data warehouse or data lake. We want to
- keep raw and transformed data available to users to draft their own queries efficiently
- support custom permissions and SSO for users
- move between open-source on-premises development and cloud-based production environments
We want to use only inexpensive Amazon EC2 instances, on a medium-sized data set (16 GB to 32 GB), feeding into Tableau Server or Power BI for reporting and data analysis purposes.
You could also use AWS Lambda with a CloudWatch Events schedule if you know when the function should be triggered. The benefit is that you can use any language and the respective database client.
But if you need to orchestrate ETL pipelines, then it makes sense to use Apache Airflow. This requires Python knowledge.
Though we have always built something custom, Apache Airflow (https://airflow.apache.org/) stood out as a key contender/alternative among open-source options. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift as such is expensive.
You may want to look into a Data Virtualization product called Conduit. It connects to disparate data sources in AWS, on-prem, Azure, and GCP, and exposes them as a single unified Spark SQL view to Power BI (direct query) or Tableau. It allows auto-query and caching policies to enhance query speed and experience, has a GPU query engine with optimized Spark as fallback, and can be deployed on your AWS VM or on-prem, scaling up and out. Sounds like the ideal solution to your needs.
I am trying to build a data lake by pulling data from multiple data sources ( custom-built tools, excel files, CSV files, etc) and use the data lake to generate dashboards.
My question is which is the best tool to do the following:
- Create pipelines to ingest the data from multiple sources into the data lake
- Help me in aggregating and filtering data available in the data lake.
- Create new reports by combining different data elements from the data lake.
I need to use only open-source tools for this activity.
I appreciate your valuable inputs and suggestions. Thanks in Advance.
Hi Karunakaran. I obviously have an interest here, as I work for the company, but the problem you are describing is one that Zetaris can solve. Talend is a good ETL product, and Dremio is a good data virtualization product, but the problem you are describing best fits a tool that can combine the five styles of data integration (bulk/batch data movement, data replication/data synchronization, message-oriented movement of data, data virtualization, and stream data integration). I may be wrong, but Zetaris is, to the best of my knowledge, the only product in the world that can do this. Zetaris is not a dashboarding tool - you would need to combine us with Tableau or Qlik or PowerBI (or whatever) - but Zetaris can consolidate data from any source and any location (structured, unstructured, on-prem or in the cloud) in real time to allow clients a consolidated view of whatever they want whenever they want it. Please take a look at www.zetaris.com for more information. I don't want to do a "hard sell", here, so I'll say no more! Warmest regards, Rod Beecham.
To provide employees with the critical need of interactive querying, we’ve worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest’s scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, auto-scaling clusters, graceful cluster shutdown, and impersonation support for the LDAP authenticator.
Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.
We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances. Together, the clusters have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active Presto users (out of 1,600+ Pinterest employees), who run about 400K queries on these clusters per month.
Each query submitted to Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query submitted events without corresponding query finished events. These events enable us to capture the effect of cluster crashes over time.
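A sketch of how such paired events can be analyzed, assuming the Kafka topic has been landed into a queryable table (the name `presto_query_events` and its columns are hypothetical): submitted queries with no matching finished event point at crash windows.

```sql
-- Hypothetical events table landed from the Kafka topic:
--   presto_query_events(query_id, event_type, event_time)
-- Submitted queries with no corresponding finished event
-- capture the effect of a cluster crash.
SELECT s.query_id,
       s.event_time AS submitted_at
FROM presto_query_events s
LEFT JOIN presto_query_events f
       ON s.query_id = f.query_id
      AND f.event_type = 'finished'
WHERE s.event_type = 'submitted'
  AND f.query_id IS NULL;
```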
Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform gives us the capability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on Kubernetes is that our Presto deployment becomes agnostic to cloud vendor, instance types, OS, etc.
#BigData #AWS #DataScience #DataEngineering
The platform deals with time-series data from sensors, aggregated against things (event data that originates at periodic intervals). We use Cassandra as our distributed database to store time-series data. Aggregated data insights from Cassandra are delivered as a web API for consumption by other applications. Presto, as a distributed SQL query engine, can provide faster execution times provided the queries are tuned for proper distribution across the cluster. Another objective we had was to combine Cassandra table data with other business data from RDBMS or other big data systems, where Presto, through its connector architecture, opened up a whole lot of options for us.
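A cross-catalog query of the kind described above might look as follows, assuming a Cassandra catalog and a MySQL catalog have been configured via Presto connectors (all catalog, schema, and table names here are hypothetical):

```sql
-- Hypothetical catalogs configured via Presto connectors:
--   cassandra.iot.sensor_readings(thing_id, reading_time, value)
--   mysql.crm.things(thing_id, owner, region)
-- Join time-series data in Cassandra with business data in an RDBMS.
SELECT t.region,
       avg(r.value) AS avg_reading
FROM cassandra.iot.sensor_readings r
JOIN mysql.crm.things t
  ON r.thing_id = t.thing_id
WHERE r.reading_time >= TIMESTAMP '2024-01-01 00:00:00'
GROUP BY t.region;
```

Because each catalog is just a connector configuration, the same join syntax works across any pair of configured sources.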
Pros of Dremio
- Nice GUI to enable more people to work with data (3 upvotes)
- Connect NoSQL databases with RDBMS (2 upvotes)
- Easier to deploy (2 upvotes)
- Free (1 upvote)
Pros of Presto
- Works directly on files in S3 (no ETL) (18 upvotes)
- Open-source (13 upvotes)
- Join multiple databases (12 upvotes)
- Scalable (10 upvotes)
- Gets ready in minutes (7 upvotes)
- MPP (6 upvotes)
Cons of Dremio
- Works only on Iceberg structured data (1 upvote)