Simplifying Web Deploys

Pinterest
Pinterest is a social bookmarking site where users collect and share photos of their favorite events, interests and hobbies. One of the fastest growing social networks online, Pinterest is the third-largest such network behind only Facebook and Twitter.

In 2019, Pinterest moved to a CI/CD model for our API and web layers, which significantly improved agility by reducing the time between merge and production. Prior to that, we had been deploying our web code the same way for years, and it had begun to show its age. That mechanism is known internally as A/B deploys and externally as Blue-Green deploys. In this post we describe how and why we replaced it with rolling deploys.

Our old deployment model (Blue-Green deploys)

Since the early days, the CD approach for the web layer of our main web property was based on the blue-green deployment model, where we kept two instances of the web layer deployed at all times. These instances were called A and B, therefore we commonly referred to this deployment model as A/B (not to be confused with A/B testing).

At any given time, only one of these instances would be active and taking traffic (let’s say A for example), so we’d deploy a new version to the other instance (B in this case) and switch over as soon as it had been verified with some canary traffic. B would then be on the latest version, active and receiving traffic. The cycle would then repeat with the next deploy happening on A and so on.
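As a minimal sketch of the cycle above (class and method names are illustrative, not Pinterest's actual tooling), the Blue-Green model boils down to a toggle between two instances, with the cut-over gated on canary verification:

```python
# Illustrative sketch of the Blue-Green (A/B) deploy cycle.
# Instance names and the verify hook are hypothetical, not Pinterest's real system.

class BlueGreenDeployer:
    def __init__(self):
        self.instances = {"A": None, "B": None}  # version deployed to each instance
        self.active = "A"                        # instance currently taking traffic

    @property
    def inactive(self):
        return "B" if self.active == "A" else "A"

    def deploy(self, version, verify):
        """Deploy to the inactive instance, then cut over if canary passes."""
        target = self.inactive
        self.instances[target] = version
        if verify(version):            # canary verification
            self.active = target       # virtually instant cut-over
            return True
        return False                   # regression caught: active instance untouched

deployer = BlueGreenDeployer()
deployer.deploy("v2", verify=lambda v: True)
assert deployer.active == "B" and deployer.instances["B"] == "v2"
# The next deploy targets A again, and the cycle repeats.
```

Note that a rollback in this model is just flipping `active` back, which is why rollbacks were instant.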

This model had a few positive aspects:

Instant rollbacks

When a regression somehow made it past integration tests and canary traffic, only to be detected later in production, we could instantly remove it by reactivating the previous version.

Only one version of the application runs at a given time

With only one of the instances active at a time and the cut-over happening virtually instantly, we could always rely on the fact that we were serving only one version of the application at a given time, which really simplified dealing with production metrics.

No capacity loss during deploys

Because deploys only targeted inactive instances, we could deploy very fast and then proceed to activate the new version when it was available everywhere. You can’t really do this that fast if you are updating production endpoints in-place.

However, things weren’t perfect. Here are a few things we didn’t like about this setup:

Need to keep two instances running

Since we had two instances of the webapp running at almost all times, our fleet had to be sized accordingly in terms of memory, disk and CPU. We also had to handle other aspects of running duplicate instances, e.g. port and naming conflicts, which added complexity to our code.

No ramp up

To turn on a new version, we went from 0% to 100% instantly. A certain family of regressions did not show up during the canary phase, and by the time they surfaced in production it was too late.

Statefulness

We had to maintain a lot of state in ZooKeeper to keep track of what had previously been served, when the new version became ready, etc. Over the years, the state machine controlling all of this grew wildly complex to the point where it was hard to change something without causing an incident.

Complex routing logic

The logic to ensure that requests were routed to the right version is hard to get right when you have more than a few possible states. We had to account for all the possible combinations of A serving, B serving, canary serving A, canary serving B, and so on. This, coupled with logic to signal version upgrades to our JavaScript code, made the code base hard to maintain and even harder to extend to support new use cases.

Uniqueness

Most other stateless clusters at Pinterest use the well-known rolling deploy model based on Teletraan, so there's a real cognitive tax in having a hard-to-understand deploy model just for our web cluster.

The new deployment model (Rolling deploys)

Last year we decided it was time to move to a rolling deployment model. A cross-functional team was assembled to plan and execute the project, comprising engineers from the delivery platform, traffic and web teams.

After exploring multiple approaches — each one essentially differing in how much complexity happened at the client vs the frontend proxy vs backend web clusters — we decided that we could handle the bulk of our routing logic in our Envoy ingress cluster.

Rolling deploys from the web application perspective

From the application perspective, the move to rolling deploys represented a fundamental change in the way we dealt with production metrics and issues: we could no longer simply rely on the fact that only one version was being served at a given time; in fact, mid-deploy we would have two different versions each running on half of the fleet. Therefore one of our action items was to update our systems and metrics to be more version-aware.

The version of our client-side application also became a key point of discussion, since we have long had a requirement for version affinity between the client-side and the server-side portions of our application. That means that 1) XHR requests coming from a client running a certain version of the app should be processed by server-side code from the same version and 2) our client refreshes to a new version when a new server-side version is detected.

Graph showing web client refreshes during the day; each color represents a new version being rolled out to the web clients. The peaks coincide with the periods when a new version is being deployed to our servers. At that moment, we signal to web clients that a new version is available on the server side and instruct them to refresh. Once the deploy is complete, the number of refreshes rolls off until a new deploy starts.

We decided to maintain this approach since it provides a number of benefits in development and operations as a consequence of the consistency between client-side and server-side code. However, with rolling deploys the cut-over to a new version is no longer a single point in time but a longer interval during which two or more server versions can co-exist. We quickly learned that we would need to roll the client updates along with the server-side updates to maintain a healthy ratio of requests per host while keeping the version affinity mechanism.
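The version-affinity handshake can be sketched roughly as follows. This is an illustrative sketch only: the field names, the version strings, and the response shape are assumptions, not Pinterest's actual protocol.

```python
# Sketch of server-side version affinity: serve the request when the client's
# version matches this server's build; otherwise signal the client to refresh.
# SERVER_VERSION and the response fields are hypothetical.

SERVER_VERSION = "v42"  # version baked into this server's build

def handle_xhr(client_version):
    """Process an XHR only under version affinity; else ask the client to refresh."""
    if client_version == SERVER_VERSION:
        return {"status": 200, "body": "..."}
    # Version mismatch: a new server-side version exists, instruct a refresh.
    return {"status": 200, "refresh_to": SERVER_VERSION}

assert "refresh_to" not in handle_xhr("v42")
assert handle_xhr("v41")["refresh_to"] == "v42"
```

During a rolling deploy, requests from a not-yet-refreshed client must still be routed to a server running the old version, which is what the routing filter described below takes care of.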

A day in the life of the Pinterest web app.

The graph above shows active user sessions, with each color representing a different version.

Notice how we “roll” web clients from one version to the other following our deploys throughout the day. The smaller blue peak represents a deployment that was rolled back when an issue was identified before its completion. It shows one of the many advantages of this model: early incident detection.

Rolling deploys and traffic routing

Last year, the Traffic team replaced our Varnish-based ingress tier with the new and powerful Envoy proxy. Envoy is easily extensible via filters written in modern C++. The ability to extend our edge load balancers with custom functionality and powerful metrics gave us the confidence to explore a replacement for the Blue-Green deployment model. We set out with the goal of a deployment model almost identical to every other cluster's, while maintaining version affinity between client and server during deploys so the Web team could carry on with the existing premise. This is important because switching versions comes with a cost (e.g., a browser refresh), so it should happen at most once per active Pinner during a deploy.

We first simplified the client-side logic to ensure that the state machine which handles version switching had only one entry point, to make it easier to operate. Because of our unique requirements we couldn’t just use Envoy’s existing routing mechanism. Our requirements were:

  • During deprecation, both deployment types should be supported (Blue-Green and Rolling)
  • We should be able to gracefully shift over a % of traffic across stages
  • Behaviors should be as deterministic as possible, e.g. when forcing an existing session into a new version, it shouldn't jump back to the previous one unless there's a rollback

So, we designed and prototyped a routing filter that would be in charge of distributing requests during a rolling deploy, while honoring the above requirements.

The first requirement is critical; most successful migrations succeed because they provide a good story for gracefully moving from the Old World to the New World. This allowed us to build confidence as we went, even though it came with the tax of supporting more complexity.

The Envoy filter’s state machine ended up looking something like this:

  • If a request has no routing id, assign it one
  • For a given routing id, pick a stage. E.g.: hash(routing_id) % len(stages)
  • Within a given stage, if it’s using rolling deploys then pick a version. E.g.: hash(routing_id) % len(versions_for_that_stage)
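The three steps above can be sketched like this. This is a simplified Python model of the C++ filter's logic, with one deliberate simplification: stage selection here is a uniform modulo, whereas the real filter weights stages by the route map's traffic percentages.

```python
# Sketch of the routing filter's state machine: assign a routing id,
# hash it to a stage, then hash it to a version within that stage.
import hashlib
import uuid

def _stable_hash(s):
    # Python's built-in hash() is salted per process, so use a stable digest.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def route(routing_id, stages):
    """stages: ordered list of (stage_name, [versions]) pairs."""
    if routing_id is None:
        routing_id = str(uuid.uuid4())                        # step 1: assign an id
    name, versions = stages[_stable_hash(routing_id) % len(stages)]   # step 2: pick a stage
    version = versions[_stable_hash(routing_id) % len(versions)]      # step 3: pick a version
    return routing_id, name, version

stages = [("prod", ["v41", "v42"]), ("canary", ["v42"])]
rid, stage, version = route("pinner-123", stages)
# The same routing id always maps to the same stage and version.
assert (stage, version) == route("pinner-123", stages)[1:]
```

Because the mapping is a pure function of the routing id and the available stages and versions, any Envoy instance with the same view of the route map makes the same decision.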

To avoid permanently sticking users to a stage, we established that a routing id expires after 24 hours. We also came up with the concept of a Route Map, which describes the traffic distribution across stages and versions. Here’s an example map:
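The map itself appears as an image in the original post. A hypothetical reconstruction of its contents, matching the traffic split described next, might look like this; the field names and schema are illustrative assumptions (the real map lives in ZooKeeper in whatever format the config pipeline uses):

```python
# Hypothetical route map: 99.5% of traffic to prod, 0.5% to canary,
# with rolling distribution across versions within each stage.
route_map = {
    "stages": [
        {"name": "prod",   "weight": 99.5, "mode": "rolling"},
        {"name": "canary", "weight": 0.5,  "mode": "rolling"},
    ],
}

# Weights across stages should account for all traffic.
assert sum(s["weight"] for s in route_map["stages"]) == 100.0
```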

This route map will send 99.5% of traffic to prod and 0.5% to canary. Within each stage, it distributes traffic dynamically and consistently across versions. Dynamically means it routes based on the available capacity for each version. Consistently means it applies an ordering between a routing id and the available versions, ensuring that a given routing_id does not jump across versions during a deploy and jumps at most once.

The route map is stored in ZooKeeper and distributed via our config pipeline. The capacity per version per stage is calculated from the available endpoints in our published serversets (which also live in ZooKeeper). That is, endpoints carry metadata about their versions, which is then used for the capacity calculation. This was all very convenient, because we could rely on existing and battle-tested systems. However, it also comes with the challenge of eventual consistency: not all Envoy servers have the same view of the world at the same time.

To work around this, we extended our filter with a notion of “deployment direction”. When a route map is changing, you can infer which version is being deployed by observing how capacity changes: a version that is increasing in capacity is the new version. Thus, when there’s a mismatch between the version a session wants and the version the filter thinks it should get, we use the deployment direction to break the ambiguity. This proved very useful for quelling the version bouncing caused by the lack of synchronization across Envoy instances.
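The direction inference can be sketched as a comparison of two capacity snapshots. This is an illustrative reconstruction of the idea, not the filter's actual code; the function and variable names are assumptions.

```python
# Sketch of "deployment direction": given two snapshots of per-version
# endpoint counts, the version gaining capacity is the one being deployed.

def deployment_direction(old_capacity, new_capacity):
    """Return the version whose capacity is growing, or None if nothing is ramping up."""
    for version, count in new_capacity.items():
        if count > old_capacity.get(version, 0):
            return version
    return None

# Mid-deploy: v42 endpoints are replacing v41 endpoints.
old = {"v41": 100}
new = {"v41": 60, "v42": 40}
assert deployment_direction(old, new) == "v42"
```

When a session asks for a version the local Envoy doesn't expect, preferring the growing version resolves the ambiguity in the same way on every instance, even while their route-map views are momentarily out of sync.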

Conclusion

Deployment strategies and traffic routing are fun challenges. Getting them right can really smooth out your developer and operational experience. They can also greatly increase your reliability, when the pipeline is easy to reason about and debug. Being able to build this on top of Envoy really made things easier, given the vitality of the project and how easy it is to extend its core logic via filters.

Changing core infrastructure that has been around for years is always challenging because there’s a lot of undocumented behavior. However, our approach of a phased transition across deployment models made it possible to get steady feedback and ensure an incident-free migration.

This project was a joint effort across multiple teams: Delivery Platform, Core Web, Service Framework and Traffic. During the process we also received very valuable feedback from other teams and actors.

Credits for design ideas & code reviews: James Fish, Derek Argueta, Scott Beardsley, Micheal Benedict, Chris Lloyd

We’re building the world’s first visual discovery engine. More than 250 million people around the world use Pinterest to dream about, plan and prepare for things they want to do in life. Come join us!
