A deep learning framework that puts Python first

What is PyTorch?

PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python, so you can use it as naturally as you would use NumPy, SciPy, or scikit-learn.
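For example, tensors interoperate directly with NumPy arrays, and support the same style of array operations (a minimal sketch):

```python
import numpy as np
import torch

# Create a tensor from a NumPy array (on CPU the two can share memory).
a = np.array([[1.0, 2.0], [3.0, 4.0]])
t = torch.from_numpy(a)

# Familiar NumPy-style operations work on tensors.
s = t.sum()   # tensor(10., dtype=torch.float64)
m = t @ t     # matrix multiply, like np.matmul

# Convert back to a NumPy ndarray when another library needs one.
result = m.numpy()
```

The same code runs on a GPU by moving the tensor with `t.to("cuda")`, with no change to the operations themselves.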
PyTorch is a tool in the Machine Learning Tools category of a tech stack.
PyTorch is an open source tool with 40K GitHub stars and 10.4K GitHub forks. Here's a link to PyTorch's open source repository on GitHub.

Who uses PyTorch?

67 companies reportedly use PyTorch in their tech stacks, including WISESIGHT, Hepsiburada, and Walmart.

479 developers on StackShare have stated that they use PyTorch.

PyTorch Integrations

Python, Databricks, Streamlit, and Flair are some of the popular tools that integrate with PyTorch. Here's a list of all 21 tools that integrate with PyTorch.
Private Decisions about PyTorch

Here are some stack decisions, common use cases, and reviews by members with PyTorch in their tech stack.

Yonas B.
Software engineer at clearforce | 1 upvote · 12.8K views

I used PyTorch when I was working on an AI application: image classification using deep learning.
Public Decisions about PyTorch

Here are some stack decisions, common use cases and reviews by companies and developers who chose PyTorch in their tech stack.

Eric Colson
Chief Algorithms Officer at Stitch Fix | 19 upvotes · 1.3M views

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3-based data warehouse. Apache Spark on YARN is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling YARN clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced.

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.
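Khan itself is internal to Stitch Fix and its API is not public, but the general pattern it automates, serializing a trained model so a serving container can load it, can be sketched in plain PyTorch (the file name and model here are hypothetical stand-ins):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a data scientist's trained model.
model = nn.Linear(4, 2)

# Serialize only the learned parameters (the state_dict) -- the artifact
# a packaging service would typically bake into a Docker image.
torch.save(model.state_dict(), "model.pt")

# Inside the serving container: rebuild the architecture, load the weights,
# and switch to inference mode.
serving_model = nn.Linear(4, 2)
serving_model.load_state_dict(torch.load("model.pt"))
serving_model.eval()

# A request handler would then run inference like this:
with torch.no_grad():
    prediction = serving_model(torch.randn(1, 4))
```

Saving the `state_dict` rather than the whole model object keeps the artifact small and decouples it from the training code's module layout.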

#DataScience #DataStack #Data

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber | 7 upvotes · 951.8K views

Why we built an open source, distributed training framework for TensorFlow, Keras, and PyTorch:

At Uber, we apply deep learning across our business: from self-driving research to trip forecasting and fraud prevention, deep learning enables our engineers and data scientists to create better experiences for our users.

TensorFlow has become a preferred deep learning library at Uber for a variety of reasons. To start, the framework is one of the most widely used open source frameworks for deep learning, which makes it easy to onboard new users. It also combines high performance with an ability to tinker with low-level model details; for instance, we can use both high-level APIs, such as Keras, and implement our own custom operators using NVIDIA's CUDA toolkit.

Uber has introduced Michelangelo, an internal ML-as-a-service platform that democratizes machine learning and makes it easy to build and deploy these systems at scale. In this article, we pull back the curtain on Horovod, an open source component of Michelangelo's deep learning toolkit which makes it easier to start, and speed up, distributed deep learning projects with TensorFlow.
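As a rough sketch (not Uber's actual code), Horovod wraps an ordinary training loop; the example below is a plain single-process PyTorch loop that runs as-is, with comments marking where Horovod's calls would go in a real multi-GPU job:

```python
import torch
import torch.nn as nn

# In a real Horovod job you would roughly add:
#   import horovod.torch as hvd
#   hvd.init()                              # one process per GPU
#   optimizer = hvd.DistributedOptimizer(   # averages gradients across workers
#       optimizer, named_parameters=model.named_parameters())
#   hvd.broadcast_parameters(model.state_dict(), root_rank=0)

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Toy regression data standing in for each worker's shard of the dataset.
x = torch.randn(64, 10)
y = torch.randn(64, 1)

for _ in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()    # with Horovod, gradients are all-reduced here
    optimizer.step()
```

The appeal of this design is that distributing an existing script requires only a handful of added lines, rather than a rewrite against a different API.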



PyTorch's Features

  • Tensor computation (like numpy) with strong GPU acceleration
  • Deep Neural Networks built on a tape-based autograd system
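A minimal sketch of the tape-based autograd in action: operations on a tensor marked `requires_grad=True` are recorded as they run, and `backward()` replays that tape to compute gradients.

```python
import torch

# requires_grad=True tells autograd to record operations on this tensor.
x = torch.tensor(2.0, requires_grad=True)

# Forward pass: the "tape" records y = x**3 + 3*x as it is computed.
y = x**3 + 3 * x

# Backward pass: replay the tape to get dy/dx = 3*x**2 + 3, which is 15 at x = 2.
y.backward()
print(x.grad)  # tensor(15.)
```

Because the tape is rebuilt on every forward pass, models can use ordinary Python control flow (loops, conditionals) and still be differentiated.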

PyTorch Alternatives & Comparisons

What are some alternatives to PyTorch?
TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.
Keras: Deep learning library for Python. Convnets, recurrent neural networks, and more. Runs on TensorFlow or Theano.
Caffe2 is deployed at Facebook to help developers and researchers train large machine learning models and deliver AI-powered experiences in our mobile apps. Now, developers will have access to many of the same tools, allowing them to run large-scale distributed training scenarios and build machine learning applications for mobile.
MXNet: A deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, it contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly.
Torch: Easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.
See all alternatives

PyTorch's Followers
607 developers follow PyTorch to keep up with related blogs and decisions.
Hendrik Klopries
Terry Lennon
Wang Zhe
Nilap Nilap
Raveen Beemsingh
Emilio Ramirez
Joven Barola
essor tick
Mark Worthington