Superset: Superset's main goal is to make it easy to slice, dice, and visualize data. It empowers users to perform analytics at the speed of thought. Its features include:
- A rich set of visualizations to analyze your data, as well as a flexible way to extend the capabilities
- An extensible, high-granularity security model allowing intricate rules on who can access which features, and integration with major authentication providers (database, OpenID, LDAP, OAuth, and REMOTE_USER through Flask AppBuilder); a configuration sketch follows the comparison tables below
- A simple semantic layer that controls how data sources are displayed in the UI by defining which fields show up in which dropdown and which aggregations and functions (metrics) are made available to the user
- Deep integration with Druid, which allows Caravel to stay blazing fast while slicing and dicing large, real-time datasets

The compared tool: Its Virtual Data Warehouse delivers performance, security, and agility to exceed the demands of modern-day operational analytics. Its features include:
- Multiple SQL-on-Hadoop Engine Support
- Access Data Where it Lays
- Built-in Support for Complex Data Types
- Single Drop-in Gateway Node Deployment
Statistics:
- Superset: 420 stacks, 1.0K followers, 45 votes
- The compared tool: 25 stacks, 83 followers, 0 votes

Pros & Cons: No community feedback yet.

Integrations: No integrations available.
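To make the authentication bullet in the feature list above more concrete, here is a minimal sketch of selecting an auth backend in Superset's superset_config.py, assuming an OAuth provider; the client ID, secret, and scope are placeholders rather than values taken from this comparison.

```python
# superset_config.py: minimal sketch of switching Superset's authentication
# backend via Flask AppBuilder. The OAuth provider details are placeholders.
from flask_appbuilder.security.manager import AUTH_OAUTH

AUTH_TYPE = AUTH_OAUTH                  # alternatives: AUTH_DB, AUTH_LDAP, AUTH_OID, AUTH_REMOTE_USER
AUTH_USER_REGISTRATION = True           # create users on first successful login
AUTH_USER_REGISTRATION_ROLE = "Gamma"   # default role assigned to new users

OAUTH_PROVIDERS = [
    {
        "name": "google",
        "icon": "fa-google",
        "token_key": "access_token",
        "remote_app": {
            "client_id": "<client-id>",          # placeholder
            "client_secret": "<client-secret>",  # placeholder
            "client_kwargs": {"scope": "email profile"},
            "server_metadata_url": "https://accounts.google.com/.well-known/openid-configuration",
        },
    }
]
```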

It is an easy way to generate charts and dashboards, ask simple ad hoc queries without using SQL, and see detailed information about rows in your database. You can set it up in under 5 minutes, and then give yourself and others a place to ask simple questions and understand the data your application is generating.

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
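As a rough sketch of the batch-processing side of that description, a minimal PySpark job might look like the following; the HDFS path and column name are hypothetical.

```python
# Minimal PySpark batch job: count events per type from a hypothetical
# JSON dataset stored in HDFS, then print the largest groups.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-counts").getOrCreate()

events = spark.read.json("hdfs:///data/events/")    # hypothetical location
counts = (
    events
    .groupBy("event_type")                           # hypothetical column
    .count()
    .orderBy(F.desc("count"))
)

counts.show(20)
spark.stop()
```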

Distributed SQL Query Engine for Big Data

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
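A hedged sketch of that workflow with boto3 follows; the database, table, and results bucket are hypothetical placeholders.

```python
# Submit a query to Athena, poll until it finishes, then print the results.
# The database, table, and S3 output location are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM web_logs GROUP BY status",  # hypothetical table
    QueryExecutionContext={"Database": "analytics"},                            # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},          # hypothetical bucket
)
query_id = execution["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```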

Apache Flink is an open source system for fast and versatile data analytics in clusters. Flink supports batch and streaming analytics in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala.
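As a rough sketch of the same idea using Flink's Python Table API (PyFlink) rather than the Java or Scala APIs mentioned above, a bounded aggregation could look like this; the rows and column names are made up.

```python
# Small PyFlink Table API sketch: a bounded (batch) aggregation over
# in-memory rows. Table and column names are made up for illustration.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_batch_mode())

events = t_env.from_elements(
    [("click", 1), ("view", 1), ("click", 1)],
    ["event", "cnt"],
)
t_env.create_temporary_view("events", events)

result = t_env.sql_query("SELECT event, SUM(cnt) AS total FROM events GROUP BY event")
result.execute().print()
```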

It is an open-source data version control system for data lakes. It provides a “Git for data” platform enabling you to implement best practices from software engineering on your data lake, including branching and merging, CI/CD, and production-like dev/test environments.
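One way the branch model surfaces in practice is through lakeFS's S3-compatible gateway, where objects are addressed as repository/branch/path. A sketch of reading from a branch with boto3 might look like this; the endpoint, credentials, repository, and branch names are all hypothetical.

```python
# Read an object from a lakeFS branch through its S3-compatible gateway.
# Everything below (endpoint, keys, repository "analytics", branch
# "feature-cleanup") is a placeholder.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://lakefs.example.com",   # hypothetical lakeFS endpoint
    aws_access_key_id="<lakefs-access-key>",     # placeholder credentials
    aws_secret_access_key="<lakefs-secret-key>",
)

obj = s3.get_object(Bucket="analytics", Key="feature-cleanup/tables/orders.parquet")
data = obj["Body"].read()
print(f"read {len(data)} bytes from the feature-cleanup branch")
```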

Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments. Druid excels as a data warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful computations.
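As a sketch of how such aggregate queries are commonly issued, Druid exposes a SQL-over-HTTP endpoint at /druid/v2/sql/; the router URL and datasource below are hypothetical.

```python
# Issue a SQL query against Druid's HTTP SQL endpoint (/druid/v2/sql/).
# The router URL and datasource name are placeholders.
import requests

DRUID_SQL_URL = "http://druid-router.example.com:8888/druid/v2/sql/"

query = {
    "query": """
        SELECT channel, COUNT(*) AS edits
        FROM wikipedia            -- hypothetical datasource
        WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
        GROUP BY channel
        ORDER BY edits DESC
        LIMIT 10
    """
}

response = requests.post(DRUID_SQL_URL, json=query, timeout=30)
response.raise_for_status()
for row in response.json():
    print(row["channel"], row["edits"])
```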

Cube: the universal semantic layer that makes it easy to connect BI silos, embed analytics, and power your data apps and AI with context.

It aims to provide interactive visualizations and business intelligence capabilities with an interface simple enough for end users to create their own reports and dashboards.

Apache Kylin™ is an open source Distributed Analytics Engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop/Spark, supporting extremely large datasets; it was originally contributed by eBay Inc.
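As a hedged sketch, a SQL query can be submitted to Kylin's REST query endpoint roughly as follows; the host and project are placeholders, the table mirrors Kylin's bundled sample data, and the stock ADMIN/KYLIN credentials should of course be replaced.

```python
# Query Apache Kylin's REST API with a SQL statement. The host and project
# are hypothetical; the default credentials are for illustration only.
import requests

KYLIN_URL = "http://kylin.example.com:7070/kylin/api/query"

payload = {
    "sql": "SELECT part_dt, SUM(price) FROM kylin_sales GROUP BY part_dt",  # sample-style query
    "project": "learn_kylin",   # hypothetical project
    "limit": 100,
}

response = requests.post(KYLIN_URL, json=payload, auth=("ADMIN", "KYLIN"), timeout=60)
response.raise_for_status()
for row in response.json().get("results", []):
    print(row)
```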