Dremio, the data lake engine, operationalizes your data lake storage and speeds up your analytics with a high-performance, high-efficiency query engine, while also democratizing data access for data scientists and analysts. | It is a unified framework for privacy-preserving data intelligence and machine learning. It provides an abstract device layer consisting of plain devices and secret devices that encapsulate various cryptographic protocols. |
Democratize all your data;
Make your data engineers more productive;
Accelerate your favorite tools;
Self-service, for everybody
| Supports various privacy-computing technologies, which can be assembled flexibly to meet the needs of different scenarios;
Builds a unified technical framework that keeps underlying technology iteration transparent to upper-layer applications, with high cohesion and low coupling;
Data in scenarios backed by different underlying technologies can be connected to each other |
Statistics | |
GitHub Stars | - | 2.6K
GitHub Forks | - | 452
Stacks | 116 | 0
Followers | 348 | 2
Votes | 8 | 0
Pros & Cons | |
Pros
Cons
| No community feedback yet |
Integrations | |

Run super-fast, SQL-like queries against terabytes of data in seconds, using the processing power of Google's infrastructure. Load data with ease: bulk-load your data using Google Cloud Storage or stream it in. Easy access: access BigQuery through a browser tool, a command-line tool, or by making calls to the BigQuery REST API with client libraries such as Java, PHP, or Python.

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.

It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.
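The description above refers to TensorFlow's original graph model, where nodes are operations and edges carry tensors. A minimal sketch of that idea, assuming TensorFlow 2.x with its default eager execution (the same code runs unchanged on CPU or GPU):

```python
# Minimal sketch: each TensorFlow op is a "node" operating on tensors.
# Assumes TensorFlow 2.x (eager mode); values here are illustrative only.
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # a 2x2 tensor (graph edge)
y = tf.matmul(x, x)                        # node: matrix multiplication
z = tf.reduce_sum(y)                       # node: sum of all elements
result = float(z)                          # bring the scalar back to Python
```

In TF 2.x the same function can be traced into a data-flow graph with `tf.function` when graph-level optimization is wanted.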

Qubole is a cloud based service that makes big data easy for analysts and data engineers.

Distributed SQL Query Engine for Big Data

It is used in a variety of applications, including log analysis, data warehousing, machine learning, financial analysis, scientific simulation, and bioinformatics.

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

scikit-learn is a Python module for machine learning built on top of SciPy and distributed under the 3-Clause BSD license.
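scikit-learn's core API follows a fit/predict pattern shared by all estimators. A minimal sketch using a built-in dataset and a standard estimator (the specific model and dataset here are illustrative choices, not mandated by the library):

```python
# Minimal sketch of the scikit-learn estimator API: construct, fit, predict.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                     # small built-in dataset
clf = LogisticRegression(max_iter=1000).fit(X, y)     # fit on features/labels
preds = clf.predict(X[:5])                            # predict class labels
accuracy = clf.score(X, y)                            # mean training accuracy
```

Every scikit-learn estimator exposes the same `fit`/`predict` (or `fit`/`transform`) interface, which is what makes pipelines and model swapping straightforward.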

PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python, so you can use it naturally, much as you would use numpy, scipy, or scikit-learn.
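The NumPy-like feel described above comes from PyTorch's imperative tensor API: operations execute immediately and compose like ordinary Python code. A minimal sketch (the particular shapes and values are illustrative):

```python
# Minimal sketch of PyTorch's imperative, NumPy-like tensor API.
import torch

a = torch.arange(6, dtype=torch.float32).reshape(2, 3)  # [[0,1,2],[3,4,5]]
b = torch.ones(3, 1)                                    # column of ones
c = a @ b                                               # matmul, like numpy's @
total = c.sum().item()                                  # scalar back to Python
```

Because execution is eager, intermediate tensors can be inspected with ordinary `print` and debugged with standard Python tooling, which is the integration the paragraph above is pointing at.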