Dremio: Dremio, the data lake engine, operationalizes your data lake storage and speeds up your analytics processes with a high-performance, high-efficiency query engine, while also democratizing data access for data scientists and analysts.
Deep Lake: Deep Lake retains all the features you love from data lakes, with one key twist: it is built explicitly for deep learning workflows with image, audio, and video datasets. This saves time on building complex data infrastructure and enables shipping AI models into production much faster.
Dremio features:
Democratize all your data
Make your data engineers more productive
Accelerate your favorite tools
Self-service, for everybody

Deep Lake features (a usage sketch follows this list):
Storage-agnostic API
Compressed storage
Lazy, NumPy-like indexing
Dataset version control
Integrations with deep learning frameworks
Distributed transformations
100+ of the most popular image, video, and audio datasets available in seconds
Instant visualization support in the Activeloop platform
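
As a rough illustration of the lazy, NumPy-like indexing and framework integrations listed above, here is a minimal sketch assuming the open-source deeplake Python package and one of Activeloop's publicly hosted datasets (the hub://activeloop/mnist-train path, the tensor names, and the ds.pytorch helper are assumptions, not taken from this page):

import deeplake

# Load a hosted dataset; samples are streamed lazily rather than downloaded up front.
ds = deeplake.load("hub://activeloop/mnist-train")

# Lazy, NumPy-like indexing: only the requested sample is fetched and decompressed.
first_image = ds.images[0].numpy()
print(first_image.shape)

# Framework integration: wrap the dataset as a PyTorch-style data loader.
train_loader = ds.pytorch(batch_size=32, shuffle=True)
for batch in train_loader:
    pass  # batches are dictionaries keyed by tensor name, e.g. batch["images"]
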
Statistics | Dremio | Deep Lake
GitHub Stars | - | 8.9K
GitHub Forks | - | 691
Stacks | 116 | 1
Followers | 348 | 0
Votes | 8 | 0
Pros & Cons
No community feedback yet
Integrations

Run super-fast, SQL-like queries against terabytes of data in seconds, using the processing power of Google's infrastructure. Load data with ease. Bulk load your data using Google Cloud Storage or stream it in. Easy access. Access BigQuery by using a browser tool, a command-line tool, or by making calls to the BigQuery REST API with client libraries such as Java, PHP or Python.
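For instance, a minimal Python sketch using the google-cloud-bigquery client library (credentials are assumed to be configured in the environment; the query targets a well-known public dataset):

from google.cloud import bigquery

# Assumes Google Cloud credentials are already set up in the environment.
client = bigquery.Client()

# Run a standard-SQL query against a public dataset and iterate over the rows.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row["name"], row["total"])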

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
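A small PySpark sketch of the batch and interactive-query side (the file path is a placeholder; a local or YARN-managed cluster is assumed to be available):

from pyspark.sql import SparkSession

# Start (or reuse) a Spark session; on a Hadoop cluster this would run under YARN.
spark = SparkSession.builder.appName("example").getOrCreate()

# Batch processing: read a CSV from HDFS (placeholder path) into a DataFrame.
df = spark.read.csv("hdfs:///data/events.csv", header=True, inferSchema=True)

# Interactive queries: register the DataFrame and query it with SQL.
df.createOrReplaceTempView("events")
spark.sql("SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type").show()

spark.stop()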

Amazon Redshift is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more, and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.
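A tiny sketch of the op/tensor model described above, using TensorFlow's Python API (the values are arbitrary):

import tensorflow as tf

# Nodes are operations; the values flowing along the edges are tensors.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0], [6.0]])

# matmul is an op whose inputs and output are multidimensional tensors.
c = tf.matmul(a, b)
print(c.numpy())  # [[17.], [39.]]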

Qubole is a cloud-based service that makes big data easy for analysts and data engineers.

Presto: a distributed SQL query engine for big data.

It is used in a variety of applications, including log analysis, data warehousing, machine learning, financial analysis, scientific simulation, and bioinformatics.

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
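A minimal boto3 sketch of submitting an Athena query (the database, table, and S3 output location are placeholders):

import boto3

athena = boto3.client("athena")

# Submit a standard-SQL query against data in S3; results are written to the
# output location below (bucket name is a placeholder).
response = athena.start_query_execution(
    QueryString="SELECT * FROM my_table LIMIT 10",
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-query-results/"},
)
print(response["QueryExecutionId"])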

scikit-learn is a Python module for machine learning built on top of SciPy and distributed under the 3-Clause BSD license.
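For example, fitting a classifier on one of the bundled datasets takes only a few lines:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small bundled dataset and split it into train and test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a model and report accuracy on the held-out data.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))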

PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python. You can use it naturally like you would use numpy / scipy / scikit-learn etc.
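
A short sketch of that NumPy-style interop (values are arbitrary):

import numpy as np
import torch

# Tensors convert to and from NumPy arrays without copying the underlying data.
arr = np.array([[1.0, 2.0], [3.0, 4.0]])
t = torch.from_numpy(arr)
back = t.numpy()

# Tensors are used much like NumPy arrays, with autograd layered on top.
x = torch.ones(2, 2, requires_grad=True)
y = (x * 3).sum()
y.backward()
print(x.grad)  # gradient of y with respect to each element is 3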