Pinterest Flink Deployment Framework

Pinterest
Pinterest is a social bookmarking site where users collect and share photos of their favorite events, interests and hobbies. One of the fastest growing social networks online, Pinterest is the third-largest such network behind only Facebook and Twitter.

By Rainie Li | Software Engineer, Stream Processing Platform Team


Background

At Pinterest, stream processing allows us to unlock value from real-time data for Pinners and partners. The Stream Processing Platform team is building a reliable and scalable platform to support many critical streaming applications, including real-time experiment analytics and real-time machine learning signals.

Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. It provides exactly-once guarantees, low latency, high throughput, and a powerful computation model. At Pinterest, we adopted Flink as our unified stream processing engine.

Requirements

Standardize Flink Build

At Pinterest, we use Bazel as our build system. We need a standardized Bazel rule to build all Flink jobs without changing Makefiles. Once the build is done, instead of asking users to copy Flink JARs to YARN clusters, the JARs should be automatically uploaded to remote storage.

Deployment and Operations History

Users used to copy Flink JARs to YARN clusters and manually run commands. It was hard to track previous execution history when we needed to recover failed jobs. We need to provide standard Flink operations such as launching, killing, triggering a savepoint, and resuming jobs from the most recent savepoint.

Job Deduplication

Flink applications are deployed as services, so only one instance of each Flink application should be running at a time. We need to prevent users from accidentally deploying the same job twice; otherwise, both instances would write to the same Kafka topic, doubling the writes and affecting downstream jobs.

Deployment Framework

We built our Flink deployment framework on top of Bazel, Hermez (internal continuous deployment platform), Job Submission Service (internal service), and YARN clusters.

Figure 1. Deployment high level architecture

Create Bazel BUILD file

The BUILD file needs to contain load("flink_release"). Users also need to insert a Bazel rule like this:
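For illustration, such a rule might look like the following. `flink_release` is an internal Pinterest rule, so the attribute names below are assumptions, not its actual schema:

```python
load("flink_release", "flink_release")

# Hypothetical attributes for illustration; the internal flink_release
# rule defines the real schema.
flink_release(
    name = "my_flink_job",
    deps = [":my_flink_job_lib"],  # the library target with the job code
)
```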

Define Hermez Deployment File

Hermez is the Pinterest continuous deployment system. To launch a Flink job with Hermez, users need to create a Hermez.yml file. This file contains information such as which YARN cluster the Flink job runs in, what YARN parameters to use, what resources to use, etc. For each instance of a Flink job, users should set up a separate YAML file. For example, if users run their job in dev, staging, and prod environments, they will need three different YAML files (one for each environment).

Here’s an example of yml file:
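A minimal sketch of what such a file might contain is shown below. The Hermez schema is internal to Pinterest, so every field name, path, and value here is a hypothetical assumption for illustration only:

```yaml
# Hypothetical structure; the real Hermez schema may differ.
job_name: my_flink_job
cluster: flink-prod-yarn          # which YARN cluster to run in
jar: s3://flink-artifacts/my_flink_job.jar
yarn:
  queue: streaming
  jobmanager_memory: 2048m
  taskmanager_memory: 4096m
  task_slots: 2
flink_config:
  parallelism.default: 64
```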

Automatic Flink Job Building

The following numbers refer to the steps in Figure 1: Deployment high level architecture.

Whenever a user lands a change to the Git repo, a Jenkins job is triggered to build the Flink job JARs (1). The Jenkins job follows the flink_release rules described in the BUILD file to build the Flink JAR and upload it to an S3 bucket (3). Meanwhile, it uploads the deployment-related Hermez YAML files to Artifactory (2). Hermez monitors Artifactory; when it sees a new yml file, it displays it in the UI to allow users to launch a job from that yml (5).

Flink Job Launching

When users launch a Flink job, Hermez converts the yml file into JSON and submits it to the Job Submission Service (JSS) (6). JSS is a service maintained by Pinterest that can schedule and launch Flink jobs on YARN clusters.

JSS examines the request and ensures that the Flink JARs and Flink job state exist in S3 (7). If everything checks out, JSS first launches a shell-runner job, which executes a command on a YARN cluster (8). The shell-runner downloads the Flink job’s JAR from S3 and then kicks off the actual Flink job using the configuration provided by JSS (9). We add a shell-runner job to keep JSS a thin layer that does not have to deal with different compute engine clients (Flink, Spark, MapReduce, etc.) or different configurations for each cluster.
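As a rough sketch, the commands a shell-runner executes for steps (8) and (9) might resemble the following. The bucket, paths, and flags are illustrative assumptions, not Pinterest’s actual setup:

```shell
# Download the job JAR from S3, then submit it to YARN in per-job mode
# using the configuration JSS passed in.
aws s3 cp s3://flink-artifacts/my_flink_job.jar /tmp/my_flink_job.jar
flink run -m yarn-cluster \
  -yjm 2048m -ytm 4096m \
  /tmp/my_flink_job.jar
```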

JSS Deduplication

When resuming a Flink job, we provide several options: resuming from the most recent savepoint or checkpoint, starting from fresh state, or specifying a savepoint or checkpoint path. The job deduplication feature ensures that only one instance of a Flink job is running at a time.

Job deduplication works by giving each job a unique name at submission time. If an instance of the job is already running, JSS triggers a savepoint and stops it first, then submits the new job. If the stop request fails because the savepoint fails, the new submission fails and the running instance keeps running. If a deployment is already in progress, the new job submission is rejected.
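The deduplication decision described above can be sketched in a few lines. The class and function names here are hypothetical, for illustration only, not JSS’s actual API:

```python
class RunningJob:
    """Stand-in for a running Flink job instance tracked by JSS."""

    def __init__(self, savepoint_succeeds=True):
        self.savepoint_succeeds = savepoint_succeeds

    def trigger_savepoint_and_stop(self):
        # Trigger a savepoint and stop the job; report whether it worked.
        return self.savepoint_succeeds


def handle_submission(job_name, running_jobs, deployments_in_progress):
    """Return the action taken for a new submission of job_name."""
    if job_name in deployments_in_progress:
        return "reject"            # a deployment is already in flight
    job = running_jobs.get(job_name)
    if job is not None:
        if not job.trigger_savepoint_and_stop():
            return "fail"          # old instance keeps running
    return "submit"                # safe to launch the new instance
```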

Flink Job Configuration Hotfix

Because Flink configuration is packaged together with the Flink job binary, users used to check config changes into the repo and rebuild the package. This whole process could take more than 10 minutes, which is a problem when we need to quickly adjust parameters during incidents. For example, when Flink jobs failed in production due to a lack of resources, we used to go through the entire build process to roll out resource config changes; after the incident was resolved, we needed to check in another change to roll those configs back. To speed this up, Hermez provides a hotfix feature that overwrites Flink job configuration without a code change. Users can adjust Flink configuration values during deployment; behind the scenes, Hermez directly overwrites these values in the YAML files it read from Artifactory.
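Conceptually, a hotfix is a deep merge of user-supplied overrides into the deployment config before launch. A minimal sketch, with hypothetical function and key names:

```python
def apply_hotfix(config, overrides):
    """Return a copy of config with override values applied (deep merge).

    Nested dicts are merged key by key; scalar values are replaced.
    The original config is left untouched.
    """
    merged = dict(config)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = apply_hotfix(merged[key], value)
        else:
            merged[key] = value
    return merged
```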

What’s Next

Reducing Deployment Latency

The current approach launches the shell-runner first; the shell-runner then launches the Flink job on the YARN cluster, which adds latency. We plan to improve this process to reduce end-to-end Flink job launch time.

Automatic Job Failover

To further improve platform and Flink application availability, we built YARN clusters in multiple AWS Availability Zones (AZs) to provide backup when one cluster or one AZ becomes unavailable. We are also building a service that automatically detects cluster failures and fails over affected jobs to backup clusters in different AZs, or detects application failures and restarts the application automatically.

Stay tuned!

Acknowledgments

Thanks to Steven Bairos-Novak and Yu Yang for their countless contributions. Thanks to Ang Zhang for updating this blog. This project is a joint effort across multiple teams at Pinterest. Thanks to the Engineering Productivity team for Hermez support.
