What is Pachyderm?
Pachyderm is an open source MapReduce engine that uses Docker containers for distributed computations.
Pachyderm belongs to the Big Data Tools category of a tech stack.
Pachyderm is an open source tool with 4.4K GitHub stars and 418 GitHub forks; its source code is hosted on GitHub.
Who uses Pachyderm?
5 companies reportedly use Pachyderm in their tech stacks, including AgFlow, NearSt, and data-science.
6 developers on StackShare have stated that they use Pachyderm.
Pros of Pachyderm
- Git-like File System
- Dockerized MapReduce
- Microservice Architecture
- Deployed with CoreOS
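The Dockerized MapReduce model above is driven by declarative pipeline specs: a pipeline maps a Docker image and command over the files in an input repo, writing results to an output repo. The JSON below is a minimal sketch of that spec shape (the repo name, image, and script are hypothetical):

```json
{
  "pipeline": { "name": "word-count" },
  "input": { "pfs": { "repo": "documents", "glob": "/*" } },
  "transform": {
    "image": "example/word-count:latest",
    "cmd": ["python3", "/count.py", "/pfs/documents", "/pfs/out"]
  }
}
```

Because the input repo is versioned like a Git repository, committing new data triggers the pipeline only on what changed.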
Pachyderm Alternatives & Comparisons
What are some alternatives to Pachyderm?
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
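The "simple programming model" Hadoop refers to is MapReduce: a map phase emits key/value pairs, and a reduce phase groups by key and aggregates. A minimal word-count sketch in plain Python (no Hadoop involved, purely to illustrate the model):

```python
from collections import defaultdict

def map_phase(documents):
    """Emit (word, 1) pairs, as a Hadoop mapper would."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Group pairs by key and sum the values, as a reducer would."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the quick brown fox", "the lazy dog"]
print(reduce_phase(map_phase(docs)))  # per-word counts across all documents
```

In a real cluster, the map and reduce phases run in parallel across machines, with a shuffle step routing each key to one reducer.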
Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
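The core idea behind a DAG scheduler, executing each task only after everything it depends on has finished, can be illustrated without Airflow itself. The sketch below uses generic Python (not Airflow's API) with a hypothetical ETL task graph:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical ETL tasks: each key maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load", "transform"},
}

# static_order() yields tasks so that dependencies always come first.
order = list(TopologicalSorter(dag).static_order())
for task in order:
    print(f"running {task}")  # a real scheduler dispatches this to a worker
```

Airflow adds scheduling, retries, and distribution of these tasks across workers on top of the same dependency-ordering idea.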
Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.
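A partitioned commit log is a set of append-only sequences in which each message is addressed by a (partition, offset) pair. The toy in-memory sketch below (plain Python, not Kafka's API) shows how keyed messages hash to a fixed partition, preserving per-key ordering:

```python
class CommitLog:
    """Toy in-memory partitioned log, illustrating Kafka's core abstraction."""

    def __init__(self, num_partitions=3):
        self.partitions = [[] for _ in range(num_partitions)]

    def append(self, key, value):
        # Messages with the same key always land in the same partition,
        # so consumers see them in the order they were written.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append(value)
        return p, len(self.partitions[p]) - 1  # (partition, offset)

    def read(self, partition, offset):
        return self.partitions[partition][offset]

log = CommitLog()
p, off = log.append("user-42", "clicked")
print(log.read(p, off))  # → clicked
```

Kafka layers replication, retention, and consumer groups on top of this log structure, which is what distinguishes it from a conventional message queue.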
DVC is an open-source Version Control System for data science and machine learning projects. It is designed to handle large files, data sets, machine learning models, and metrics as well as code.