Hadoop vs Memcached: What are the differences?
Developers describe Hadoop as "Open-source software for reliable, scalable, distributed computing". The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. On the other hand, Memcached is detailed as "High-performance, distributed memory object caching system". Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.
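Memcached's interface boils down to get/set operations on string keys with optional expirations. A real deployment talks to a memcached server over the network via a client library; the class below is only an in-process sketch of those semantics, with all names illustrative:

```python
import time

class TinyCache:
    """Illustrative in-memory key-value store with Memcached-like
    get/set semantics and per-key expiration (TTL in seconds)."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None):
        expires_at = time.monotonic() + ttl if ttl else None
        self._store[key] = (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # cache miss
        value, expires_at = entry
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

cache = TinyCache()
cache.set("user:42", {"name": "Ada"}, ttl=30)
print(cache.get("user:42"))  # {'name': 'Ada'} until the TTL elapses
```

Everything lives in RAM and vanishes on restart, which is exactly the trade-off Memcached makes: it is a cache in front of a durable store, not the store itself.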
Hadoop and Memcached are both often grouped under the "Databases" category, though Hadoop is more precisely a distributed data-processing framework and Memcached a caching layer that sits in front of a database.
"Great ecosystem" is the primary reason developers choose Hadoop over its competitors, whereas "Fast object cache" is the key factor cited for picking Memcached.
Hadoop and Memcached are both open source tools. Hadoop, with 9.26K GitHub stars and 5.78K forks, appears to have somewhat more adoption than Memcached, with 8.99K stars and 2.6K forks.
Facebook, Instagram, and Dropbox are some of the popular companies that use Memcached, whereas Hadoop is used by Airbnb, Uber Technologies, and Spotify. Memcached has broader approval, being mentioned in 755 company stacks and 267 developer stacks, compared to Hadoop, which is listed in 237 company stacks and 127 developer stacks.
The MapReduce workflow starts processing experiment data nightly, once the previous day's data has been copied over from Kafka. At that point, all the raw log requests are transformed into meaningful experiment results and in-depth analysis. To populate experiment data for the dashboard, we have around 50 jobs running to perform all the calculations and transformations of the data.
In 2009 we open sourced mrjob, which allows any engineer to write a MapReduce job without contending for resources. We're only limited by the number of machines in an Amazon data center (which is an issue we've rarely encountered).
The massive volume of discovery data that powers Pinterest and enables people to save Pins, create boards and follow other users, is generated through daily Hadoop jobs...
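Hadoop's "simple programming model" referenced above is MapReduce: a mapper emits key/value pairs, the framework shuffles them by key, and a reducer aggregates each key's values. The single-process word count below is only a sketch of that shape (no Hadoop cluster, and not the mrjob API, involved):

```python
from collections import defaultdict

def mapper(line):
    """Map phase: emit (word, 1) for each word in a line of input."""
    for word in line.lower().split():
        yield word, 1

def shuffle(pairs):
    """Group mapper output by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    """Reduce phase: sum the counts for one word."""
    return key, sum(values)

lines = ["the quick brown fox", "the lazy dog"]
mapped = (pair for line in lines for pair in mapper(line))
counts = dict(reducer(k, vs) for k, vs in shuffle(mapped).items())
print(counts["the"])  # 2
```

In a real cluster, the mapper and reducer run in parallel on many machines over HDFS blocks, and the shuffle happens over the network; the per-record logic an engineer writes stays this small.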
As part of the caching system within Drupal.
Memcached mainly took care of creating and rebuilding the REST API cache once changes had been made within Drupal.
Distributed cache exposed through Google App Engine APIs; used to stage fresh data (incoming and recently processed) for faster access in the data processing pipeline.
Memcache caches database results and articles, reducing overall DB load and allowing seamless DB maintenance during quiet periods.
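The usage described in these quotes is the cache-aside pattern: read through the cache, fall back to the database on a miss, and invalidate on writes so stale copies are never served. A minimal sketch, using a plain dict in place of a real database and a Memcached client (all names are illustrative):

```python
# Toy "database" and cache standing in for a real DB and a Memcached client.
database = {"article:1": "Hello, world"}
cache = {}

def get_article(article_id):
    """Cache-aside read: serve from cache, fall back to the DB on a miss."""
    key = f"article:{article_id}"
    if key in cache:
        return cache[key]          # cache hit: no database load
    value = database[key]          # cache miss: query the database
    cache[key] = value             # populate for subsequent reads
    return value

def update_article(article_id, text):
    """Write to the DB, then invalidate so stale copies aren't served."""
    key = f"article:{article_id}"
    database[key] = text
    cache.pop(key, None)

print(get_article(1))   # miss, then cached
update_article(1, "Updated")
print(get_article(1))   # invalidation forced a fresh read
```

Because most reads hit the cache, the database sees only a fraction of the traffic, which is what makes maintenance during quiet periods seamless.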
Used to cache our clients' most-used files. Connected with the CloudFlare Railgun Optimizer.
Importing/exporting data and interpreting results; possible integration with SAS.
TBD. Good to have I think. Analytics on loads of data, recommendations?