Denodo vs Dremio


Overview

              Denodo    Dremio
Stacks        40        116
Followers     120       348
Votes         0         8
GitHub Stars  0         –
GitHub Forks  0         –

Denodo vs Dremio: What are the differences?

Introduction

Denodo and Dremio are both data virtualization tools with overlapping functionality, but they differ in several important ways. The key differences are outlined below.

  1. Data Source Support: Denodo supports a wide range of data sources including relational databases, big data sources, cloud platforms, and web services. It provides extensive connectivity options for integration with various data sources. On the other hand, Dremio focuses more on big data sources such as Hadoop, NoSQL databases, and cloud storage systems. It offers native support for these types of data sources and provides advanced optimization techniques for query acceleration.

  2. Query Performance: Denodo improves query performance with a data caching mechanism that reduces the number of queries sent to the underlying data sources, and it optimizes query execution through techniques such as result set caching, query pipelining, and query parallelization. Dremio, on the other hand, focuses on query acceleration for big data workloads, leveraging technologies like Apache Arrow, columnar storage, and vectorized execution to deliver high-speed queries on large datasets (see the client sketch after this list).

  3. Data Governance and Security: Denodo offers robust data governance and security features. It provides fine-grained access control, data masking, encryption, and auditing capabilities to ensure data privacy and compliance. It also supports data lineage tracking, data quality management, and metadata management. Dremio, although it provides basic security features like user authentication and authorization, lacks some advanced data governance capabilities provided by Denodo.

  4. Data Transformation and Integration: Denodo offers a comprehensive set of tools for data integration, transformation, and data virtualization, including a visual ETL (Extract, Transform, Load) interface, data modeling tools, and data pipeline automation. Dremio, on the other hand, focuses primarily on data virtualization and exploration; it provides only limited data transformation capabilities and lacks the advanced data integration features offered by Denodo.

  5. Deployment Options: Denodo can be deployed on-premises, in the cloud, or in a hybrid environment. It supports major cloud platforms such as AWS, Azure, and Google Cloud, and provides options for scaling and high availability. Dremio, on the other hand, is designed primarily for cloud and container deployments, for example on AWS, Azure, or Kubernetes, and offers automatic scaling and infrastructure optimization for those environments.

  6. Community and Support: Denodo has a large and active community of users and provides comprehensive technical support. It offers a range of resources including documentation, forums, knowledge base articles, and training courses. Dremio has a smaller community compared to Denodo but provides good support through documentation, forums, and customer support channels.
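
The Apache Arrow point in item 2 is easiest to see from the client side. Below is a minimal sketch, assuming a Python client with pyarrow installed, of submitting a SQL query to a Dremio coordinator over Arrow Flight; the host, port, credentials, and table name are illustrative placeholders, not values from this comparison. Dremio conventionally exposes Flight on port 32010, but verify this against your own deployment.

```python
# Minimal sketch: querying Dremio over Apache Arrow Flight with pyarrow.
# Host, port, credentials, and the table name are illustrative assumptions.
from pyarrow import flight

# Connect to the (assumed) Dremio Arrow Flight endpoint.
client = flight.FlightClient("grpc+tcp://dremio.example.com:32010")

# Basic auth returns a bearer-token header to attach to subsequent calls.
token = client.authenticate_basic_token("demo_user", "demo_password")
options = flight.FlightCallOptions(headers=[token])

# Describe the query, then fetch the result as columnar Arrow record batches.
query = 'SELECT region, SUM(amount) AS total FROM "lake".sales GROUP BY region'
info = client.get_flight_info(flight.FlightDescriptor.for_command(query), options)
reader = client.do_get(info.endpoints[0].ticket, options)

table = reader.read_all()          # an Arrow Table, already columnar
print(table.to_pandas().head())    # hand off to pandas for inspection
```

Because the results travel as Arrow record batches, they stay columnar end to end, which is the property the vectorized-execution claim above relies on.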

In summary, Denodo and Dremio differ in terms of data source support, query performance optimization, data governance, data transformation capabilities, deployment options, and community support. Depending on specific use cases and requirements, organizations can choose the tool that best aligns with their needs.

Advice on Denodo and Dremio

karunakaran

Consultant

Jun 26, 2020

Needs advice

I am trying to build a data lake by pulling data from multiple sources (custom-built tools, Excel files, CSV files, etc.) and to use the data lake to generate dashboards.

My question is: which is the best tool to do the following?

  1. Create pipelines to ingest the data from multiple sources into the data lake.
  2. Aggregate and filter the data available in the data lake.
  3. Create new reports by combining different data elements from the data lake.

I need to use only open-source tools for this activity.

I appreciate your valuable inputs and suggestions. Thanks in advance.
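
For a rough picture of how open-source pieces can fit together here, the sketch below assumes pandas, PyArrow, and DuckDB (all open source): each source is normalized into Parquet files in a lake directory, and the lake is then aggregated with plain SQL. The file paths, sheet name, and column names are made-up placeholders, not a recommendation of one specific stack.

```python
# Sketch of an open-source ingest-and-query flow: CSV/Excel sources -> Parquet
# "data lake" directory -> SQL aggregation with DuckDB. Paths and column names
# are illustrative placeholders.
from pathlib import Path

import duckdb
import pandas as pd

LAKE = Path("datalake/raw")
LAKE.mkdir(parents=True, exist_ok=True)

# Ingest: normalize each source into Parquet (columnar, schema-preserving).
pd.read_csv("exports/tool_a.csv").to_parquet(LAKE / "tool_a.parquet")
pd.read_excel("exports/budget.xlsx", sheet_name="2020").to_parquet(
    LAKE / "budget.parquet"
)

# Aggregate/filter: DuckDB can query Parquet files in place with plain SQL.
report = duckdb.sql(
    """
    SELECT department, SUM(amount) AS total_spend
    FROM 'datalake/raw/budget.parquet'
    WHERE amount > 0
    GROUP BY department
    ORDER BY total_spend DESC
    """
).df()

# The resulting DataFrame can feed an open-source dashboard tool
# (e.g. Superset or Metabase, both mentioned later on this page).
print(report)
```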

80.5k views80.5k
Comments
datocrats-org

Jul 29, 2020

Needs advice on Amazon EC2, Tableau, and PowerBI

We need to perform ETL from several databases into a data warehouse or data lake. We want to

  • keep raw and transformed data available to users to draft their own queries efficiently
  • grant users custom permissions and support SSO
  • move between open-source on-premises development and cloud-based production environments

We want to use only inexpensive Amazon EC2 instances on medium-sized data sets (16 GB to 32 GB) feeding into Tableau Server or PowerBI for reporting and data analysis.
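
A hedged sketch of the portability requirement above: extract from each source database with SQLAlchemy, keep a raw Parquet copy so users can draft their own queries, and write a transformed copy next to it. Swapping the BASE path between a local directory and an s3:// bucket (with s3fs installed) is one way to move between on-premises development and a cloud deployment on EC2. The connection strings, table names, and the transformation step are placeholders, not a prescribed design.

```python
# Sketch: ETL from several databases into raw + curated Parquet layers.
# Connection strings, table names, and the transformation are illustrative.
from pathlib import Path

import pandas as pd
from sqlalchemy import create_engine

# Point BASE at a local path for on-prem development, or an s3:// URI
# (requires s3fs) for a cloud deployment.
BASE = "warehouse"  # e.g. "s3://my-company-lake"

SOURCES = {
    "orders": "postgresql://etl_user:secret@orders-db:5432/sales",
    "customers": "mysql+pymysql://etl_user:secret@crm-db:3306/crm",
}

# Local filesystems need the layer directories to exist; S3 does not.
if not BASE.startswith("s3://"):
    Path(BASE, "raw").mkdir(parents=True, exist_ok=True)
    Path(BASE, "curated").mkdir(parents=True, exist_ok=True)

for name, url in SOURCES.items():
    engine = create_engine(url)
    df = pd.read_sql(f"SELECT * FROM {name}", engine)

    # Raw layer: keep the untouched extract available for ad hoc queries.
    df.to_parquet(f"{BASE}/raw/{name}.parquet")

    # Curated layer: a placeholder cleanup step (dedupe + typed timestamps).
    cleaned = df.drop_duplicates()
    if "created_at" in cleaned.columns:
        cleaned["created_at"] = pd.to_datetime(cleaned["created_at"])
    cleaned.to_parquet(f"{BASE}/curated/{name}.parquet")
```

Tableau Server or PowerBI can then read the curated layer directly or through a query engine sitting on top of it.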

Detailed Comparison

Denodo: It is the leader in data virtualization, providing data access, data governance, and data delivery capabilities across the broadest range of enterprise, cloud, big data, and unstructured data sources without moving the data from their original repositories. Highlights: data virtualization; data query; data views.

Dremio: The data lake engine; it operationalizes your data lake storage and speeds up your analytics with a high-performance, high-efficiency query engine while democratizing data access for data scientists and analysts. Highlights: democratize all your data; make your data engineers more productive; accelerate your favorite tools; self-service for everybody.
Statistics

              Denodo    Dremio
GitHub Stars  0         –
GitHub Forks  0         –
Stacks        40        116
Followers     120       348
Votes         0         8
Pros & Cons

Denodo: no community feedback yet.

Dremio pros (upvotes in parentheses):
  • Nice GUI to enable more people to work with data (3)
  • Easier to deploy (2)
  • Connect NoSQL databases with RDBMS (2)
  • Free (1)

Dremio cons:
  • Works only on Iceberg-structured data (1)
Integrations

DataRobot, AtScale, Vertica, Trifacta, Apache Kylin, SAP HANA, Amazon S3, Python, Tableau, Azure Database for PostgreSQL, Qlik Sense, PowerBI

What are some alternatives to Denodo and Dremio?

Metabase

It is an easy way to generate charts and dashboards, ask simple ad hoc queries without using SQL, and see detailed information about rows in your Database. You can set it up in under 5 minutes, and then give yourself and others a place to ask simple questions and understand the data your application is generating.

Google BigQuery

Run super-fast, SQL-like queries against terabytes of data in seconds, using the processing power of Google's infrastructure. Load data with ease. Bulk load your data using Google Cloud Storage or stream it in. Easy access. Access BigQuery by using a browser tool, a command-line tool, or by making calls to the BigQuery REST API with client libraries such as Java, PHP or Python.

Apache Spark

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.

Amazon Redshift

It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.

Qubole

Qubole is a cloud-based service that makes big data easy for analysts and data engineers.

Presto

Distributed SQL Query Engine for Big Data

Amazon EMR

It is used in a variety of applications, including log analysis, data warehousing, machine learning, financial analysis, scientific simulation, and bioinformatics.

Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Superset

Superset's main goal is to make it easy to slice, dice and visualize data. It empowers users to perform analytics at the speed of thought.

Apache Flink

Apache Flink is an open source system for fast and versatile data analytics in clusters. Flink supports batch and streaming analytics, in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala.
