Empowering Pinterest Data Scientists and Machine Learning Engineers with PySpark

Pinterest
Pinterest is a social bookmarking site where users collect and share photos of their favorite events, interests and hobbies. One of the fastest growing social networks online, Pinterest is the third-largest such network behind only Facebook and Twitter.

Data scientists and machine learning engineers at Pinterest found themselves hitting major challenges with existing tools. Hive and Presto were readily accessible for large-scale data transformations, but complex logic is difficult to express in SQL. Some engineers wrote that complex logic as Cascading or Scala Spark jobs, but these have a steep learning curve, and learning the tools and building jobs takes significantly more time. Furthermore, data scientists and machine learning engineers often trained models in a small-scale notebook environment but lacked the tools to perform large-scale inference.

To combat these challenges, we (machine learning and data processing platform engineers) built and productionized a PySpark infrastructure that gives our users the following capabilities:

  • Writing logic using the familiar Python language and libraries, in isolated environments that allow experimenting with new packages.
  • Rapid prototyping from our JupyterHub deployment, enabling users to interactively try out feature transformations, model ideas, and data processing jobs.
  • Integration with our internal workflow system, so that users can easily productionize their PySpark applications as scheduled workflows.

PySpark on Kubernetes as a minimum viable product (MVP)

We first built an MVP PySpark infrastructure on Pinterest Kubernetes infrastructure with Spark Standalone Mode and tested with users for feedback.

Figure 1. An overview of the MVP architecture

The infrastructure consists of Kubernetes pods carrying out different tasks:

  • Spark Master managing cluster resources
  • Workers — where Spark executors are spawned
  • Jupyter servers assigned to each user

When users launch PySpark applications from those Jupyter servers, Spark drivers are created in the same pod as Jupyter and the requested executors in worker pods.

This architecture enabled our users to experience the power of PySpark for the first time. Data scientists were able to quickly grasp Python UDFs, transform features, and perform batch inference of TensorFlow models with terabytes of data.

This architecture, however, had some limitations:

  • Jupyter notebook and PySpark driver share resources since they are in the same pod.
  • Driver’s port and address are hard-coded in the config.
  • Users can launch only one PySpark application per assigned Jupyter server.
  • Managing Python dependencies per user/team is difficult.
  • Resource management is limited to a FIFO approach across all users (no queues defined).

As the demand for PySpark grew, we worked on a production-grade PySpark infrastructure based on Yarn, Livy, and Sparkmagic.

Production-grade PySpark infrastructure

Figure 2: An overview of the production architecture

In this architecture, each Spark application runs on the YARN cluster. We use Apache Livy to proxy between our internal JupyterHub, the Spark application and the YARN cluster. On Jupyter, Sparkmagic provides a PySpark kernel that forwards the PySpark code to a running Spark application. Conda provides isolated Python environments for each application.
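
To make the proxying concrete, here is a hedged sketch of how a client might ask Livy's REST API (POST /sessions) to start a PySpark session on YARN; the host, queue, and archive names are hypothetical:

```python
import json
from urllib import request


def build_session_payload(queue, conda_archive):
    """Body for Livy's POST /sessions: a PySpark session on a YARN queue
    with an isolated conda environment shipped as an archive."""
    return {
        "kind": "pyspark",
        "queue": queue,
        "archives": [f"{conda_archive}#env"],  # unpacked as ./env on YARN
        "conf": {"spark.yarn.appMasterEnv.PYSPARK_PYTHON": "env/bin/python"},
    }


def create_session(livy_url, queue, conda_archive):
    """Send the session request; Livy replies with the session id and state."""
    body = json.dumps(build_session_payload(queue, conda_archive)).encode()
    req = request.Request(f"{livy_url}/sessions", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())


payload = build_session_payload("data_science", "s3://bucket/envs/team.zip")
```

In practice Sparkmagic issues these calls for the user; the sketch only shows what crosses the wire between Jupyter and the cluster.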

With this architecture, we offer two development approaches.

Interactive development:

  1. A user creates a conda environment zip containing Python packages they need, if any.
  2. From JupyterHub, they create a notebook with PySpark kernel from Sparkmagic.
  3. In the notebook, they declare resources required, conda environment, and other configuration. Livy launches a Spark application on the YARN cluster.
  4. Sparkmagic ships the user’s Jupyter cells (via Livy) to the PySpark application. Livy proxies results back to the Jupyter notebook.
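
Step 3 above, in a Sparkmagic notebook, might look like the following `%%configure` cell (the queue name, resource sizes, and archive path are all hypothetical):

```
%%configure -f
{
    "queue": "data_science",
    "driverMemory": "4g",
    "numExecutors": 50,
    "executorMemory": "8g",
    "archives": ["s3://bucket/envs/team.zip#env"],
    "conf": {
        "spark.yarn.appMasterEnv.PYSPARK_PYTHON": "env/bin/python"
    }
}
```

Sparkmagic forwards this JSON to Livy, which launches the Spark application on YARN with the requested resources and conda environment.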

See the Appendix for a fully annotated example of a Jupyter notebook.

Non-interactive development (ad-hoc and production workflow runs):

  1. A Pinterest-internal Job Submission Service acts as the gateway to the YARN cluster.
  2. In development, the user’s local Python code base is packaged into an archive and submitted to launch a PySpark application in YARN.
  3. In scheduled production runs, the production build’s archive is submitted instead.

Benefits

This infrastructure offers us the following benefits:

  1. No resource sharing between Jupyter notebooks and PySpark drivers
  2. No hard-coded driver ports and addresses
  3. Users can launch many PySpark applications concurrently
  4. Efficient resource allocation and isolation, with aggressive dynamic allocation for high resource utilization
  5. Per-user Python dependencies are supported
  6. Resource accountability
  7. Dr. Elephant for PySpark job analysis

Technical details

Pinterest JupyterHub Integration: (benefits #1,2,3)

We made the Sparkmagic kernel available in Jupyter. When the kernel is selected, a config managed by ZooKeeper is loaded with all necessary dependencies.

We set up Apache Livy, which provides a REST API proxy from Jupyter to the YARN cluster and PySpark applications.

A YARN cluster: (benefit #4)

  • Efficient resource allocation and isolation. We define a queue structure with the Fair Scheduler so that each queue gets dedicated resources that become preemptable under certain conditions (e.g. after waiting for at least 10 minutes), while a portion of non-preemptable resources is held for queues with minResources set. Scheduler and resource manager logs are used to manage cluster resources.
  • Aggressive dynamic allocation policy for high resource utilization. We set a policy where a PySpark application holds at most a certain number of executors and automatically releases resources once they are no longer needed. This policy ensures resources are recycled faster, leading to better overall resource utilization.
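
The dynamic allocation policy can be sketched in spark-defaults.conf form; the numbers below are illustrative, not Pinterest's actual limits:

```
spark.dynamicAllocation.enabled              true
spark.dynamicAllocation.maxExecutors         100    # cap per application
spark.dynamicAllocation.executorIdleTimeout  60s    # release idle executors quickly
spark.shuffle.service.enabled                true   # required for dynamic allocation on YARN
```

A short idle timeout is what makes the policy "aggressive": executors are handed back to the queue as soon as an application stops using them.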

Python Dependency Management: (benefit #5)

Users can try various Python libraries (e.g. different ML frameworks) without asking platform engineers to install them. To that end, we created a Jenkins job that packages a conda environment based on a requirements file and archives it as a zip file on S3. PySpark applications are launched with "--archives" to ship the zip file to the driver along with all executors, and set both "PYSPARK_PYTHON" (for the driver) and "spark.yarn.appMasterEnv.PYSPARK_PYTHON" (for the executors). That way, each application runs in an isolated Python environment with all the libraries it needs.
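
For a non-interactive run, the launch described above might look like the following sketch (the bucket, archive, and script names are hypothetical; in practice the zip comes from the Jenkins job):

```
export PYSPARK_PYTHON=env/bin/python   # interpreter for the driver
spark-submit \
  --master yarn \
  --archives s3://bucket/envs/team-env.zip#env \
  --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=env/bin/python \
  my_job.py
```

The `#env` suffix tells YARN to unpack the archive under the alias `env` in each container, which is why both interpreter paths point at `env/bin/python`.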

Integrating with Pinterest-internal Job Submission Service (JSS): (benefit #6)

To productionize PySpark applications, users schedule them through the internal workflow system. We provide a workflow template that integrates with the job submission interfaces, where users specify the code location, parameters, and the Python environment artifact to use.

Self-service job performance analysis: (benefit #7)

We forked the open-source Dr. Elephant and added new heuristics that analyze an application's configuration together with various kinds of runtime metrics (executor, job, stage, …). This service provides tuning suggestions and offers guidelines on how to write a Spark job properly. It alleviates users' debugging-and-troubleshooting pain, boosting development velocity; moreover, it avoids resource waste and improves cluster stability. Below is an example of the performance analysis.

Figure 3: An overview of Dr. Elephant

Impacts

PySpark is now used throughout our Product Analytics, Data Science, and Ads teams for a wide range of use cases.

  • Training: users can train models with MLlib or any Python machine learning framework (e.g. TensorFlow) iteratively, on data of any size.
  • Inference: users can test and productionize their Python inference code without depending on platform engineers.
  • Ad-hoc analyses: users can perform various ad-hoc analyses as needed.

Moreover, our users now have the freedom to explore various Python dependencies and use Python UDF for large scale data.

Acknowledgement

We thank David Liu (EM, Machine Learning Platform team), Ang Zhang (EM, Data Processing Platform team), Tais (our TPM), the Pinterest Product Analytics and Data Science organization (Sarthak Shah, Grace Huang, Minli Zhang, Dan Lee, Ladi Ositelu), the Compute Platform team (Harry Zhang, June Liu), the Data Processing Platform team (Zaheen Aziz), and the Jupyter team (Prasun Ghosh — Tech Lead) for their support and collaboration.

Appendix — An example of our use case:

Below is an example of how our users train a model, and run inference logic at scale from their Jupyter notebook with PySpark. We leave explanations in each cell.
