Hadoop vs Neo4j: What are the differences?
Developers describe Hadoop as "Open-source software for reliable, scalable, distributed computing". The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Neo4j, on the other hand, is described as "The world's leading Graph Database". Neo4j stores data in nodes connected by directed, typed relationships, with properties on both, a model also known as a Property Graph. It is a high-performance graph store with all the features expected of a mature and robust database, including a friendly query language and ACID transactions.
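To make the Property Graph model concrete, here is a minimal sketch in plain Python. This is not Neo4j's API, and the labels, relationship types, and properties are made-up examples; it just illustrates nodes and directed, typed relationships, each carrying properties:

```python
# Illustrative sketch of the Property Graph model (not Neo4j's API):
# nodes and directed, typed relationships, each with a dict of properties.

nodes = {
    1: {"labels": ["Person"], "props": {"name": "Alice"}},
    2: {"labels": ["Person"], "props": {"name": "Bob"}},
}

# Directed, typed relationships carry properties of their own.
relationships = [
    {"start": 1, "end": 2, "type": "KNOWS", "props": {"since": 2015}},
]

def neighbors(node_id, rel_type):
    """Follow outgoing relationships of a given type."""
    return [
        nodes[r["end"]]["props"]["name"]
        for r in relationships
        if r["start"] == node_id and r["type"] == rel_type
    ]

print(neighbors(1, "KNOWS"))  # ['Bob']
```

In real Neo4j the same traversal would be a one-line Cypher pattern match rather than an explicit loop.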
Hadoop and Neo4j are primarily classified as "Databases" and "Graph Databases" tools respectively.
"Great ecosystem" is the primary reason why developers consider Hadoop over the competitors, whereas "Cypher – graph query language" was stated as the key factor in picking Neo4j.
Hadoop and Neo4j are both open source tools. Hadoop, with 9.18K stars and 5.74K forks on GitHub, appears to be more popular than Neo4j, with 6.56K stars and 1.62K forks.
Slack, Shopify, and SendGrid are some of the popular companies that use Hadoop, whereas Neo4j is used by Movielala, Hinge, and Sportsy. Hadoop has broader approval, being mentioned in 237 company stacks and 116 developer stacks, compared to Neo4j, which is listed in 114 company stacks and 47 developer stacks.
Since the beginning, Cal Henderson has been the CTO of Slack. Earlier this year, he commented on a Quora question summarizing their current stack.

Apps:
- Desktop: Electron, to ship it as a desktop application.
- Android: a mix of Java and Kotlin.
- iOS: written in a mix of Objective C and Swift.
- The core application and the API are written in PHP/Hack and run on HHVM.
- The data is stored in MySQL using Vitess.
- Caching is done using Memcached and MCRouter.
- The search service is backed by SolrCloud, with various Java services.
- The messaging system uses WebSockets with many services in Java and Go.
- Load balancing is done using HAproxy with Consul for configuration.
- Most services talk to each other over gRPC; some use Thrift and JSON-over-HTTP.
- The voice and video calling service was built in Elixir.
- Built using open source tools including Presto, Spark, Airflow, Hadoop and Kafka.
Neo4j is a great graph database, but it's also a great fit for applications in general. The data model is easy to reason about and flexible enough to evolve with your application in the early stages. Once you have a firm data model, you can add constraints to enforce data consistency. The built-in admin tool makes it easy to review the data and see how your application is being used, and it includes a great query-plan visualizer for when you want to optimize for performance.
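The constraints mentioned above are defined in Cypher. The sketch below only assembles the statements as strings; the `User` label and `email` property are hypothetical examples, and actually running them requires a live Neo4j instance via the official driver:

```python
# Hypothetical Cypher statements for the kind of constraint described above.
# The User label and email property are made-up examples; in practice these
# strings are executed through the official neo4j driver against a server.

create_node = "CREATE (u:User {email: 'alice@example.com', name: 'Alice'})"

# Uniqueness constraint: enforces data consistency once the model is firm.
unique_email = (
    "CREATE CONSTRAINT user_email_unique IF NOT EXISTS "
    "FOR (u:User) REQUIRE u.email IS UNIQUE"
)

print(unique_email)
```

With the constraint in place, a second `CREATE` with the same email would fail the transaction rather than silently duplicate the node.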
To be evaluated:
- (+) Leading graph DB provider with a large community
- (+) Rich query language (Cypher)
- (+) Tools to visualize and interact with results

Possible alternative to a triple store. Open questions:
- Does it support full-text search?
- Does it support some sort of inference or derived relationships (e.g. transitivity, symmetry)?
- Does it support faceted search?
The MapReduce workflow starts to process experiment data nightly, when data from the previous day is copied over from Kafka. At this time, all the raw log requests are transformed into meaningful experiment results and in-depth analyses. To populate experiment data for the dashboard, we have around 50 jobs running to do all the calculations and data transforms.
In 2009 we open-sourced mrjob, which allows any engineer to write a MapReduce job without contending for resources. We're only limited by the number of machines in an Amazon data center (which is an issue we've rarely encountered).
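The map/reduce model that a tool like mrjob wraps can be sketched in pure Python: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step aggregates each group. This is a local illustration of the pattern, not mrjob's actual API:

```python
from collections import defaultdict

# Local sketch of the MapReduce word-count pattern (not mrjob's API).

def map_step(line):
    # Emit a (word, 1) pair for every word in the input line.
    for word in line.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_step(key, values):
    # Sum the counts for one word.
    return key, sum(values)

lines = ["Hadoop and Neo4j", "Hadoop and Spark"]
mapped = [pair for line in lines for pair in map_step(line)]
counts = dict(reduce_step(k, v) for k, v in shuffle(mapped).items())
print(counts["hadoop"])  # 2
```

In a real cluster the map and reduce steps run on different machines and the shuffle moves data over the network; the programming model, however, is exactly this small.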
The massive volume of discovery data that powers Pinterest and enables people to save Pins, create boards and follow other users, is generated through daily Hadoop jobs...
Importing/exporting data, interpreting results. Possible integration with SAS.
TBD. Good to have, I think. Analytics on loads of data, recommendations?