What is PySpark?

PySpark brings together Apache Spark and Python: it is the Python API for Spark, letting you harness the simplicity of Python and the power of Apache Spark to tame Big Data.
PySpark is a tool in the Data Science Tools category of a tech stack.
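
To give a flavor of the API, here is a minimal sketch of a PySpark program; the app name, column names, and values are illustrative, not from any particular project:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start (or reuse) a local Spark session.
spark = SparkSession.builder.appName("intro").getOrCreate()

# Build a small DataFrame from plain Python data, then let Spark aggregate it.
df = spark.createDataFrame(
    [("alice", 34), ("bob", 36), ("carol", 29)],
    ["name", "age"],
)
df.groupBy().agg(F.avg("age").alias("avg_age")).show()

spark.stop()
```

The same DataFrame code scales from a laptop to a cluster without changes, which is the core appeal of pairing Python with Spark.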

Who uses PySpark?

Companies
25 companies reportedly use PySpark in their tech stacks, including trivago, Walmart, and Runtastic.

Developers
213 developers on StackShare have stated that they use PySpark.

PySpark Integrations

Dask, AWS Data Wrangler, Feathr, Mage.ai, and Serra are the five tools known to integrate with PySpark.
Decisions about PySpark

Here are some stack decisions, common use cases and reviews by companies and developers who chose PySpark in their tech stack.

Vamshi Krishna
Data Engineer at Tata Consultancy Services

I have to collect different data from multiple sources and store them in a single cloud location. Then perform cleaning and transforming using PySpark, and push the end results to other applications like reporting tools, etc. What would be the best solution? I can only think of Azure Data Factory + Databricks. Are there any alternatives to #AWS services + Databricks?
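
For context, the cleaning-and-transforming step described in this question might look roughly like the following in PySpark; the bucket paths, file formats, and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("multi-source-etl").getOrCreate()

# Hypothetical inputs: the single cloud location holds raw extracts
# from several sources in different formats.
orders = spark.read.parquet("s3://my-bucket/raw/orders/")
customers = spark.read.option("header", True).csv("s3://my-bucket/raw/customers/")

# Cleaning: drop duplicates and rows missing a join key, fix a column type.
clean = (
    orders.dropDuplicates(["order_id"])
          .na.drop(subset=["customer_id"])
          .withColumn("amount", F.col("amount").cast("double"))
)

# Transforming: join and aggregate into a reporting-friendly shape.
report = (
    clean.join(customers, "customer_id")
         .groupBy("country")
         .agg(F.sum("amount").alias("total_amount"))
)

# Push the end result somewhere a reporting tool can read it.
report.write.mode("overwrite").parquet("s3://my-bucket/curated/sales_by_country/")
```

Whatever orchestrator is chosen (Azure Data Factory, AWS Glue, or plain Databricks jobs), the PySpark transformation logic itself stays largely the same.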

PySpark Alternatives & Comparisons

What are some alternatives to PySpark?
Scala
Scala is an acronym for “Scalable Language”. This means that Scala grows with you. You can play with it by typing one-line expressions and observing the results, but you can also rely on it for large, mission-critical systems, as many companies, including Twitter, LinkedIn, and Intel, do. To some, Scala feels like a scripting language: its syntax is concise and low-ceremony, and its types get out of the way because the compiler can infer them.
Python
Python is a general-purpose programming language created by Guido van Rossum. It is most praised for its elegant syntax and readable code; if you are just beginning your programming career, Python suits you best.
Apache Spark
Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
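
As a sketch of that batch/streaming duality, the same DataFrame API covers both modes; this example uses Spark's built-in "rate" test source, and the batch input path is hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("modes").getOrCreate()

# Batch: read a static dataset once. (Path is hypothetical.)
batch_df = spark.read.json("/data/events/")

# Streaming: the built-in "rate" source emits rows continuously,
# yet the query is expressed with the same DataFrame operations.
stream_df = spark.readStream.format("rate").option("rowsPerSecond", 5).load()
query = (
    stream_df.writeStream
             .format("console")
             .outputMode("append")
             .start()
)
query.awaitTermination(10)  # run briefly for the example
query.stop()
```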
Pandas
Pandas is a flexible and powerful data analysis/manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more.
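
A quick taste of those labeled data structures (the names and values below are illustrative):

```python
import pandas as pd

# A labeled, in-memory table, comparable to an R data.frame.
df = pd.DataFrame({"name": ["alice", "bob", "carol"], "age": [34, 36, 29]})

# Built-in statistical functions operate column-wise by label.
print(df["age"].mean())  # 33.0
print(df.describe())
```

Unlike PySpark, pandas runs on a single machine, which makes it a common choice for datasets that fit in memory.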
Hadoop
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.

PySpark's Followers
286 developers follow PySpark to keep up with related blogs and decisions.