Hadoop vs RocksDB: What are the differences?
What is Hadoop? Open-source software for reliable, scalable, distributed computing. The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
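As a concrete illustration of those "simple programming models", here is a minimal word count written against Hadoop Streaming, which lets any program that reads stdin and writes stdout serve as the mapper or reducer. This is a sketch, not Hadoop's only interface (the native API is Java), and the file names are illustrative.

```python
#!/usr/bin/env python3
# mapper.py -- Hadoop Streaming feeds each input split to stdin and
# expects tab-separated key/value pairs on stdout.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- Hadoop sorts mapper output by key before the reduce
# phase, so all counts for a given word arrive on consecutive lines.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

The pair is submitted with the streaming jar that ships with Hadoop, along the lines of `hadoop jar hadoop-streaming.jar -input in/ -output out/ -mapper mapper.py -reducer reducer.py -files mapper.py,reducer.py` (the exact jar path depends on the installation); Hadoop takes care of splitting the input, shuffling, and retrying failed tasks across the cluster.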
What is RocksDB? Embeddable persistent key-value store for fast storage, developed and maintained by the Facebook Database Engineering Team. RocksDB can also serve as the foundation for a client-server database, but its current focus is on embedded workloads. RocksDB builds on LevelDB to scale to servers with many CPU cores, to use fast storage efficiently, to support IO-bound, in-memory, and write-once workloads, and to stay flexible enough to allow for innovation.
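Because RocksDB is embedded, reads and writes are library calls inside your own process rather than network round-trips to a database server. A minimal sketch, assuming the third-party python-rocksdb bindings (RocksDB itself is a C++ library, with official Java bindings as well):

```python
# Open (or create) a local RocksDB database directory and do a
# simple put/get. Keys and values are raw bytes.
import rocksdb

opts = rocksdb.Options(create_if_missing=True)
db = rocksdb.DB("example.db", opts)

db.put(b"user:42", b"alice")
print(db.get(b"user:42"))  # b'alice'
```

Compaction, caching, and durability behavior are tuned through `rocksdb.Options`, which is where the "flexible to allow for innovation" claim shows up in practice.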
Hadoop and RocksDB both belong to the "Databases" category of the tech stack.
"Great ecosystem" is the primary reason developers cite for choosing Hadoop over its competitors, whereas "Very fast" is the key factor cited in picking RocksDB.
Hadoop and RocksDB are both open source tools. Going by GitHub activity, RocksDB, with 14.3K stars and 3.12K forks, appears to have more community traction than Hadoop, with 9.26K stars and 5.78K forks.
Airbnb, Uber Technologies, and Spotify are some of the popular companies that use Hadoop, whereas RocksDB is used by Facebook, LinkedIn, and Skry, Inc. Hadoop has broader adoption, being mentioned in 237 company stacks and 127 developer stacks, compared to RocksDB, which is listed in 6 company stacks and 7 developer stacks.
The MapReduce workflow starts to process experiment data nightly, when the previous day's data is copied over from Kafka. At that point, all the raw request logs are transformed into meaningful experiment results and in-depth analyses. To populate experiment data for the dashboard, we have around 50 jobs running to do all the calculation and transformation of data.
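To make that concrete, one of those nightly jobs can be as small as the following mapper; the tab-separated log schema here is invented for illustration (it is not the actual format), and the summing reducer from the word-count sketch above would aggregate its output unchanged.

```python
#!/usr/bin/env python3
# experiment_mapper.py -- hypothetical mapper for a nightly job that
# turns raw request logs into per-experiment counts. Assumes (for
# illustration only) log lines of the form:
#   timestamp<TAB>experiment_id<TAB>variant<TAB>event
import sys

for line in sys.stdin:
    _, experiment_id, variant, event = line.rstrip("\n").split("\t")
    print(f"{experiment_id}/{variant}/{event}\t1")
```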
In 2009 we open-sourced mrjob, which allows any engineer to write a MapReduce job without contending for resources. We're only limited by the number of machines in an Amazon data center (which is an issue we've rarely encountered).
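For comparison, this is roughly what a job looks like in mrjob: the mapper and reducer are methods on a single class, and the same file runs locally for testing or against a Hadoop/EMR cluster. The word count below follows the library's canonical example, not the quoted team's production code.

```python
# word_count.py -- a minimal mrjob job. Run locally with
#   python word_count.py input.txt
# or on Elastic MapReduce with
#   python word_count.py -r emr input.txt
from mrjob.job import MRJob

class MRWordCount(MRJob):
    def mapper(self, _, line):
        # mrjob calls mapper once per input line; the key is unused here.
        for word in line.split():
            yield word, 1

    def reducer(self, word, counts):
        # counts is an iterator over every value emitted for this word.
        yield word, sum(counts)

if __name__ == "__main__":
    MRWordCount.run()
```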
The massive volume of discovery data that powers Pinterest and enables people to save Pins, create boards and follow other users, is generated through daily Hadoop jobs...
Importing/exporting data and interpreting results, with possible integration with SAS.