Pinterest

Decision at Pinterest about Hadoop

The MapReduce workflow starts processing experiment data nightly, once the previous day's data has been copied over from Kafka. At that point, all of the raw log requests are transformed into meaningful experiment results and in-depth analyses. To populate the experiment data for the dashboard, we have around 50 jobs running to do all the calculations and data transforms.
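
To make the shape of those jobs concrete, here is a minimal sketch of one such nightly aggregation, written as a Hadoop Streaming mapper and reducer in Python. The tab-separated log format, the field order, and the metric being counted are assumptions for illustration, not the actual job.

    # mapper.py -- hypothetical streaming mapper: one raw log line in,
    # one (experiment:group, 1) pair out. The tab-separated format is assumed.
    import sys

    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 3:
            continue  # skip malformed log lines
        user_id, experiment, group = fields[:3]
        print("%s:%s\t1" % (experiment, group))

    # reducer.py -- sums the counts emitted by the mapper; Hadoop delivers the
    # input sorted by key, so a single running total per key works.
    import sys

    current_key, total = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current_key:
            if current_key is not None:
                print("%s\t%d" % (current_key, total))
            current_key, total = key, 0
        total += int(value)
    if current_key is not None:
        print("%s\t%d" % (current_key, total))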

Decision at Pinterest about Jenkins

Jenkins is our continuous integration system for packaging builds and running unit tests after each check-in.

Decision at Pinterest about Kafka

Front-end messages are logged to Kafka by our API and application servers. We have both a batch-processing pipeline and a real-time-processing pipeline for the experiment data. For batch processing, after the daily raw logs land in S3, we start our nightly experiment workflow to figure out experiment user groups and experiment metrics. We use our in-house workflow management system, Pinball, to manage the dependencies between all of these MapReduce jobs.
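
As a rough illustration of the first hop in that pipeline, the sketch below logs a front-end event to Kafka with the kafka-python client; the topic name, event fields, and broker address are placeholders rather than the actual schema.

    import json
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers=["kafka-broker:9092"],
        value_serializer=lambda event: json.dumps(event).encode("utf-8"),
    )

    def log_event(user_id, event_type, payload):
        # Each front-end action becomes one message on a raw-log topic;
        # the nightly batch workflow and the real-time pipeline both read it.
        producer.send("frontend_events", {
            "user_id": user_id,
            "event": event_type,
            "payload": payload,
        })

    log_event(42, "pin_closeup", {"pin_id": "12345"})
    producer.flush()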

Decision at Pinterest about Storm

In addition to batch processing, we also wanted real-time data processing. For example, to improve the success rate of experiments, we needed to figure out experiment group allocations in real time as soon as an experiment configuration was pushed to production. We used Storm to tail Kafka and compute aggregated metrics in real time to provide crucial stats.
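
The snippet below is a rough sketch, not the actual topology, of what such a Storm bolt could look like using the streamparse Python bindings: it counts experiment group allocations as events stream in from a Kafka spout, so downstream logic can alert on odd allocation ratios.

    from collections import Counter
    from streamparse import Bolt

    class GroupAllocationCounter(Bolt):
        outputs = ["experiment", "group", "count"]

        def initialize(self, conf, ctx):
            # Running per-(experiment, group) counts held in bolt-local memory.
            self.counts = Counter()

        def process(self, tup):
            experiment, group = tup.values[0], tup.values[1]
            self.counts[(experiment, group)] += 1
            # Emit the running count so a downstream bolt can compare group
            # sizes and flag allocation problems in real time.
            self.emit([experiment, group, self.counts[(experiment, group)]])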

Decision at Pinterest about HBase

The final output is inserted into HBase to serve the experiment dashboard. We also load the output data into Redshift for ad-hoc analysis. For real-time experiment data processing, we use Storm to tail Kafka, process the data in real time, and insert metrics into MySQL, so we can identify group allocation problems and send out real-time alerts and metrics.
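
As a minimal sketch of that last step, assuming the happybase client and a hypothetical experiment_metrics table, this is roughly what writing one row of batch output into HBase for the dashboard could look like; the row-key layout and column names are made up.

    import happybase

    # Connect through an HBase Thrift gateway (hostname is a placeholder).
    connection = happybase.Connection("hbase-thrift-host")
    table = connection.table("experiment_metrics")

    # Row key = experiment id + date; one column per dashboard metric.
    table.put(b"exp_homefeed_ranking:2015-07-01", {
        b"metrics:impressions": b"1283441",
        b"metrics:repins": b"90321",
        b"metrics:ctr": b"0.0703",
    })
    connection.close()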

Decision at Pinterest about Amazon S3

Amazon S3 is where we keep our builds. It's a simple way to share data and scales with no intervention on our end.
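
For illustration only, pushing a packaged build to S3 with boto3 looks roughly like this; the bucket name and key layout are invented.

    import boto3

    s3 = boto3.client("s3")
    s3.upload_file(
        Filename="dist/webapp-build-421.tar.gz",
        Bucket="example-build-artifacts",
        Key="webapp/2015-07-01/webapp-build-421.tar.gz",
    )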

Decision at Pinterest about Zookeeper

Zookeeper manages our state and tells each node what version of code it should be running.
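
A sketch of that pattern with the kazoo client: each node watches a znode that holds the build version it should run and reacts when it changes. The znode path and the deploy hook are assumptions, not the actual scheme.

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
    zk.start()

    @zk.DataWatch("/deploy/webapp/current_version")
    def on_version_change(data, stat):
        # Called once at registration and again whenever the znode changes.
        if data is not None:
            version = data.decode("utf-8")
            print("ZooKeeper says this node should run build %s" % version)
            # ...pull that build from S3 and restart the local workers...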

Decision at Pinterest about GitHub

GitHub Enterprise is our version-control overlay: it manages code reviews, facilitates code merging, and has a great API.
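
Purely as an example of that API, listing open pull requests against a GitHub Enterprise instance with the requests library looks like this; the host, repo, and token are placeholders.

    import requests

    GHE_API = "https://github.example.com/api/v3"
    headers = {"Authorization": "token <personal-access-token>"}

    resp = requests.get(
        "%s/repos/example-org/example-repo/pulls" % GHE_API,
        params={"state": "open"},
        headers=headers,
    )
    resp.raise_for_status()
    for pr in resp.json():
        print("#%d %s (%s)" % (pr["number"], pr["title"], pr["user"]["login"]))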

Decision at Pinterest about Varnish

When you visit the site, you talk to a load balancer, which chooses a Varnish front end, which in turn talks to our web front ends, which used to run nine Python processes each. Each of those processes serves the exact same version of the code on any given web front end.
