DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. It can train deep learning models with over a hundred billion parameters on the current generation of GPU clusters while achieving more than 5x the system performance of the state of the art. Early adopters of DeepSpeed have already produced a language model (LM) with over 17B parameters, Turing-NLG, establishing a new state of the art in the LM category.

Tensor2Tensor is a library of deep learning models and datasets designed to make deep learning more accessible and to accelerate ML research. It was developed by researchers and engineers on the Google Brain team together with a community of users.
DeepSpeed features (see the first sketch after these lists):
- Distributed training with mixed precision
- Model parallelism
- Memory and bandwidth optimizations
- Simplified training API
- Gradient clipping
- Automatic loss scaling with mixed precision
- Simplified data loader
- Performance analysis and debugging

Tensor2Tensor features:
- Many state-of-the-art and baseline models are built in, and new models can be added easily
- Many datasets across modalities (text, audio, image) are available for generation and use, and new ones can be added easily
- Models can be used with any dataset and input mode (or even multiple); all modality-specific processing (e.g. embedding lookups for text tokens) is done with bottom and top transformations, which are specified per feature in the model
- Support for multi-GPU machines and for both synchronous (one master, many workers) and asynchronous (independent workers synchronizing through a parameter server) distributed training
- Easily swap datasets and models via command-line flags to the data generation script t2t-datagen and the training script t2t-trainer (see the second sketch below)
- Train on Google Cloud ML and Cloud TPUs
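
A rough sketch of DeepSpeed's simplified training API: the wrapper call and config keys below follow deepspeed.initialize as documented in the DeepSpeed README, while the toy model and the config values are placeholders.

```python
import torch
import deepspeed

# A toy model; any torch.nn.Module works here.
model = torch.nn.Linear(10, 1)

# This dict mirrors DeepSpeed's JSON config file: fp16 enables mixed
# precision with automatic loss scaling; gradient_clipping is built in.
ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "gradient_clipping": 1.0,
}

# initialize() returns an engine that owns the optimizer, loss scaling,
# and distributed data parallelism behind one training interface.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# Training then reduces to three calls per batch:
#   loss = model_engine(inputs)
#   model_engine.backward(loss)
#   model_engine.step()
```

And a sketch of the Tensor2Tensor flag-driven workflow, wrapped in subprocess only to keep the example in Python; the flags follow the T2T README walkthrough, and the paths are placeholders.

```python
import subprocess

# Generate data for a problem, then train on it; swapping --problem or
# --model is all it takes to change datasets or models.
subprocess.run([
    "t2t-datagen",
    "--data_dir=/tmp/t2t_data",
    "--tmp_dir=/tmp/t2t_datagen",
    "--problem=translate_ende_wmt32k",
], check=True)

subprocess.run([
    "t2t-trainer",
    "--data_dir=/tmp/t2t_data",
    "--problem=translate_ende_wmt32k",
    "--model=transformer",
    "--hparams_set=transformer_base_single_gpu",
    "--output_dir=/tmp/t2t_train",
], check=True)
```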
Statistics (DeepSpeed | Tensor2Tensor):
GitHub Stars: - | 16.7K
GitHub Forks: - | 3.7K
Stacks: 11 | 4
Followers: 16 | 12
Votes: 0 | 0
Integrations: none listed | none listed

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.
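
A minimal sketch of that graph model: each operation below is a node, and the tensors flowing between operations are the edges (TF 2.x runs this eagerly; tf.function traces the same code into an explicit graph).

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])    # 1x2 tensor: an edge in the graph
b = tf.constant([[3.0], [4.0]])  # 2x1 tensor
c = tf.matmul(a, b)              # matmul node consuming both tensors

print(c.numpy())  # [[11.]]
```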

scikit-learn is a Python module for machine learning built on top of SciPy and distributed under the 3-Clause BSD license.
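
A minimal sketch of scikit-learn's uniform estimator API; the choice of classifier and dataset here is only for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Every estimator exposes the same fit/predict/score interface.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=200).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # mean accuracy on held-out data
```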

PyTorch is not a Python binding into a monolithic C++ framework; it is built to be deeply integrated into Python, so you can use it as naturally as you would use NumPy, SciPy, or scikit-learn.
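
A minimal sketch of that NumPy-like style, with autograd layered on the same imperative code; the values are arbitrary.

```python
import numpy as np
import torch

# Tensors interoperate with NumPy arrays and ordinary Python arithmetic.
x = torch.from_numpy(np.array([1.0, 2.0, 3.0]))
y = torch.sin(x) + x ** 2
print(y.numpy())  # converts straight back to a NumPy array

# Autograd rides on top of the same imperative expressions.
w = torch.tensor(2.0, requires_grad=True)
loss = (3.0 * w - 1.0) ** 2
loss.backward()
print(w.grad)  # d(loss)/dw = 2 * (3w - 1) * 3 = 30
```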

Keras is a deep learning library for Python with support for convnets, recurrent neural networks, and more. It runs on top of TensorFlow or Theano. https://keras.io/
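
A minimal sketch of Keras's Sequential API; the layer sizes are arbitrary.

```python
from keras.models import Sequential
from keras.layers import Dense

# Stack layers declaratively; compile() picks the optimizer and loss.
model = Sequential([
    Dense(32, activation="relu", input_shape=(16,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```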

The Kubeflow project is dedicated to making Machine Learning on Kubernetes easy, portable, and scalable by providing a straightforward way to spin up best-of-breed OSS solutions.
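
One concrete piece of the project is the Kubeflow Pipelines Python SDK; below is a hedged sketch with the kfp v2 SDK, where each component runs as a containerized step on Kubernetes.

```python
from kfp import dsl, compiler

# A toy component; Kubeflow Pipelines runs it as a container step.
@dsl.component
def add(a: float, b: float) -> float:
    return a + b

@dsl.pipeline(name="add-demo")
def add_pipeline(a: float = 1.0, b: float = 2.0):
    add(a=a, b=b)

# Compile to a spec the Kubeflow Pipelines backend can execute.
compiler.Compiler().compile(add_pipeline, "add_pipeline.yaml")
```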

TensorFlow.js lets you use flexible and intuitive APIs to build and train models from scratch, using either the low-level JavaScript linear algebra library or the high-level layers API.

An enterprise-grade open source platform for building, training, and monitoring large scale deep learning applications.

Streamlit is an app framework built specifically for Machine Learning and Data Science teams. You can rapidly build the tools you need: apps in a dozen lines of Python with a simple API.
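
A minimal sketch of that API: saved as app.py and launched with streamlit run app.py, this is already a complete interactive app.

```python
import streamlit as st

st.title("Quick demo")
n = st.slider("n", 1, 100, 10)             # widgets are one-liners
st.write(f"The square of {n} is {n * n}")  # the script reruns on interaction
```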

MLflow is an open source platform for managing the end-to-end machine learning lifecycle.
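
A minimal sketch of MLflow's tracking component, one piece of that lifecycle; the values are arbitrary, and runs land in the local ./mlruns store by default.

```python
import mlflow

# One tracked run: parameters and metrics are recorded per run.
with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    for step, acc in enumerate([0.71, 0.82, 0.88]):
        mlflow.log_metric("accuracy", acc, step=step)
```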

H2O.ai is the maker behind H2O, the leading open source machine learning platform for smarter applications and data products. H2O operationalizes data science by developing and deploying algorithms and models for R and Python, along with the Sparkling Water API for Spark.
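
A hedged sketch of H2O's Python API; the file name and column layout are placeholders.

```python
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()  # starts or connects to a local H2O cluster

# Load data into an H2OFrame and train a gradient boosting model.
frame = h2o.import_file("train.csv")
model = H2OGradientBoostingEstimator(ntrees=50)
model.train(x=frame.columns[:-1], y=frame.columns[-1], training_frame=frame)
print(model.model_performance())
```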