Pinterest uses Hadoop
The MapReduce workflow kicks off nightly, once the previous day's data has been copied over from Kafka. At that point, all the raw request logs are transformed into meaningful experiment results and in-depth analyses. Around 50 jobs run to perform the calculations and data transforms that populate the experiment dashboard.
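Pinterest hasn't published the jobs themselves, but the shape of such a nightly transform is easy to sketch. Below is a minimal, hypothetical Hadoop Streaming job in Python: it assumes each raw log line is JSON with `experiment`, `group`, and `event` fields, and counts events per experiment group. The field and file names are illustrative, not Pinterest's.

```python
#!/usr/bin/env python
"""Hypothetical Hadoop Streaming job (a sketch, not Pinterest's actual code).

Run as the mapper with `experiment_counts.py map` and as the reducer
otherwise. Assumes each raw log line is JSON with "experiment", "group",
and "event" fields.
"""
import json
import sys


def mapper():
    for line in sys.stdin:
        try:
            record = json.loads(line)
        except ValueError:
            continue  # skip malformed log lines rather than failing the job
        # Join the composite key with "|" so Hadoop Streaming's default
        # key handling (everything before the first tab) groups correctly.
        key = "|".join((record["experiment"], record["group"], record["event"]))
        print("%s\t1" % key)


def reducer():
    # Hadoop sorts mapper output by key, so identical keys arrive contiguously
    # and a single pass suffices to sum the counts.
    current_key, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current_key:
            if current_key is not None:
                print("%s\t%d" % (current_key, count))
            current_key, count = key, 0
        count += int(value)
    if current_key is not None:
        print("%s\t%d" % (current_key, count))


if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```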
Pinterest uses HBase
The final output is inserted into HBase to serve the experiment dashboard. We also load the output data into Redshift for ad-hoc analysis. For real-time experiment data processing, we use Storm to tail Kafka, process the data in real time, and insert metrics into MySQL, so we can identify group-allocation problems and send out real-time alerts and metrics.
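Storm itself is JVM-based, so the following is only an illustration of the same tail-Kafka-and-write-metrics pattern, sketched with the kafka-python and PyMySQL libraries. The topic name, connection details, and table schema are all hypothetical.

```python
# A minimal sketch of the pattern Pinterest describes: tail a Kafka topic and
# keep per-group counters in MySQL so a skewed experiment split (a group-
# allocation problem) shows up in near real time. Names are hypothetical.
import json

import pymysql
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "experiment-activations",              # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
db = pymysql.connect(host="localhost", user="metrics",
                     password="secret", database="experiments")

with db.cursor() as cursor:
    for message in consumer:               # blocks, tailing the topic
        event = message.value
        # Assumes a unique key on (experiment, grp) so the upsert increments.
        cursor.execute(
            "INSERT INTO group_counts (experiment, grp, n) VALUES (%s, %s, 1) "
            "ON DUPLICATE KEY UPDATE n = n + 1",
            (event["experiment"], event["group"]),
        )
        db.commit()
```

An alerting job can then poll `group_counts` and flag experiments whose group sizes diverge from the configured allocation.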
King's Digital Lab uses CouchDB
A document (JSON) database.
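The document model is visible directly in CouchDB's plain HTTP/JSON API. A minimal sketch using the `requests` library (the server URL, credentials, database name, and document fields are all hypothetical):

```python
# Store and fetch a schemaless JSON document via CouchDB's HTTP API.
import requests

BASE = "http://admin:password@localhost:5984"   # hypothetical server/credentials

requests.put(f"{BASE}/catalogue")               # create the database if absent

# Documents are free-form JSON; CouchDB assigns an _id and a revision (_rev).
doc = {"type": "manuscript", "title": "Beowulf", "folios": 116}
created = requests.post(f"{BASE}/catalogue", json=doc).json()

# Fetch it back by _id; the stored document carries _id and _rev fields.
fetched = requests.get(f"{BASE}/catalogue/{created['id']}").json()
print(fetched["title"], fetched["_rev"])
```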
Yelp uses Hadoop
In 2009 we open-sourced mrjob, which allows any engineer to write a MapReduce job without contending for resources. We're limited only by the number of machines in an Amazon data center (an issue we've rarely encountered).
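mrjob's appeal is how little ceremony a job needs. The classic word count, close to the example in mrjob's own documentation, is a single class:

```python
# word_count.py -- a minimal mrjob job. The same class runs locally for
# testing, on a Hadoop cluster, or on Amazon EMR.
from mrjob.job import MRJob


class MRWordCount(MRJob):
    def mapper(self, _, line):
        # Emit one (word, 1) pair per token in the input line.
        for word in line.split():
            yield word.lower(), 1

    def reducer(self, word, counts):
        # Hadoop groups pairs by key; sum the counts for each word.
        yield word, sum(counts)


if __name__ == "__main__":
    MRWordCount.run()
```

Running `python word_count.py input.txt` executes the job locally, while `python word_count.py -r emr input.txt` ships the same code to Amazon EMR, which is how a job scales out without contending for in-house cluster resources.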
Smileupps uses CouchDB
By being built on, of, in and around CouchDB, Smileupps offers its customers secure and reliable CouchDB hosting, plus a CouchDB-based app store for building and selling serious, business-ready web applications.
Pinterest uses Hadoop
The massive volume of discovery data that powers Pinterest and enables people to save Pins, create boards and follow other users is generated through daily Hadoop jobs...