Pinterest uses Hadoop
The MapReduce workflow starts processing experiment data nightly, once the previous day's data has been copied over from Kafka. At that point, all the raw log requests are transformed into meaningful experiment results and in-depth analysis. To populate experiment data for the dashboard, we run around 50 jobs that perform all of the calculations and data transforms.
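As a rough illustration of what one such transform step could look like, here is a minimal Hadoop Streaming sketch in Python that rolls raw log requests up into per-experiment, per-group counts. The log schema, field names, and script name are assumptions made for illustration; the quote above does not describe Pinterest's actual job code.

```python
#!/usr/bin/env python
"""Minimal sketch of a nightly transform step as a Hadoop Streaming job.
The JSON log format and the "experiment"/"group" field names are assumed
for illustration only, not Pinterest's actual schema or code."""
import json
import sys
from collections import Counter


def mapper():
    # One raw JSON log request per line -> emit "experiment<TAB>group<TAB>1".
    for line in sys.stdin:
        try:
            event = json.loads(line)
        except ValueError:
            continue  # skip malformed log lines
        print("%s\t%s\t1" % (event["experiment"], event["group"]))


def reducer():
    # Aggregate counts per (experiment, group) pair.
    counts = Counter()
    for line in sys.stdin:
        experiment, group, n = line.rstrip("\n").split("\t")
        counts[(experiment, group)] += int(n)
    for (experiment, group), n in sorted(counts.items()):
        print("%s\t%s\t%d" % (experiment, group, n))


if __name__ == "__main__":
    # Invoked twice by Hadoop Streaming, e.g.:
    #   -mapper "experiment_counts.py map" -reducer "experiment_counts.py reduce"
    mapper() if "map" in sys.argv[1:] else reducer()
```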
Vital Labs, Inc. uses Hazelcast
Hazelcast is the foundation for the distributed system that hosts our APIs and intelligent workflows. We wrap the core Hazelcast functions in Clojure protocols to implement microservices on top of a coherent, single-process instance per virtual node.
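The quote refers to Clojure protocols; the sketch below shows the same wrapping idea as a Python analogue using hazelcast-python-client and an abstract base class. The KVStore interface, class names, and map name are hypothetical, not Vital Labs' actual code.

```python
"""Hypothetical Python analogue of wrapping core Hazelcast calls behind a
narrow interface, so services program against the protocol rather than the
client (Vital Labs does this with Clojure protocols)."""
import abc

import hazelcast


class KVStore(abc.ABC):
    """Narrow interface that services depend on; hides the Hazelcast client."""

    @abc.abstractmethod
    def put(self, key, value): ...

    @abc.abstractmethod
    def get(self, key): ...


class HazelcastKVStore(KVStore):
    """Implementation backed by a Hazelcast distributed map."""

    def __init__(self, client, map_name="workflow-state"):
        self._map = client.get_map(map_name).blocking()

    def put(self, key, value):
        self._map.put(key, value)

    def get(self, key):
        return self._map.get(key)


if __name__ == "__main__":
    # Connect to the cluster member running on this virtual node.
    client = hazelcast.HazelcastClient()
    store = HazelcastKVStore(client)
    store.put("workflow:42", "running")
    print(store.get("workflow:42"))
    client.shutdown()
```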
Yelp uses Hadoop
In 2009 we open-sourced mrjob, which allows any engineer to write a MapReduce job without contending for resources. We're only limited by the number of machines in an Amazon data center (an issue we've rarely encountered).
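An mrjob job is a plain Python class with mapper and reducer methods. The example below is a minimal, hypothetical word-count job, not code from Yelp's codebase.

```python
"""Minimal mrjob example (hypothetical, not from Yelp's codebase):
count word occurrences across lines of review text."""
from mrjob.job import MRJob


class MRReviewWordCount(MRJob):
    def mapper(self, _, line):
        # Each mapper call receives one line of input text.
        for word in line.split():
            yield word.lower(), 1

    def reducer(self, word, counts):
        # All counts for the same word arrive at one reducer call.
        yield word, sum(counts)


if __name__ == "__main__":
    MRReviewWordCount.run()
```

The same script runs unchanged locally (`python mr_review_word_count.py reviews.txt`), on a Hadoop cluster (`-r hadoop`), or on Amazon EMR (`-r emr`), which is how a job scales out to as many machines as EMR will provision.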
Pinterest uses Hadoop
The massive volume of discovery data that powers Pinterest and enables people to save Pins, create boards and follow other users is generated through daily Hadoop jobs...