ArangoDB vs Hadoop: What are the differences?
Data Model: ArangoDB is a multi-model database that supports key-value pairs, documents, and graphs, allowing for flexible data modeling. On the other hand, Hadoop is primarily designed for handling large-scale distributed data processing using a file system approach.
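The three models can be hard to picture side by side, so here is a minimal sketch of them as plain Python structures. The collection names, keys, and edge fields are invented for illustration; the `_from`/`_to` edge convention mirrors how ArangoDB stores graph edges as documents.

```python
# Key-value: a document reduced to a key and an opaque value.
kv_store = {"session:42": {"user": "alice", "ttl": 3600}}

# Document: schema-free JSON-like records in a collection.
users = {
    "alice": {"name": "Alice", "age": 30},
    "bob": {"name": "Bob", "age": 25},
}

# Graph: edges are themselves documents linking two vertex keys.
knows = [
    {"_from": "alice", "_to": "bob", "since": 2019},
]

# A toy "traversal" over this graph: who does Alice know?
friends = [users[e["_to"]]["name"] for e in knows if e["_from"] == "alice"]
print(friends)  # ['Bob']
```

Because edges are ordinary documents, the same engine can answer key lookups, document filters, and graph traversals without moving data between systems.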
Query Language: ArangoDB uses its own query language, AQL (ArangoDB Query Language), which is SQL-like and supports complex queries involving joins, aggregations, and graph traversals. Hadoop, by contrast, relies on MapReduce for data processing, which typically means writing and executing custom Java code for queries.
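To make the contrast concrete, here is a toy word count written in the map/shuffle/reduce style that Hadoop popularized, sketched in Python rather than Java for brevity. In AQL the equivalent would be a single declarative aggregation (roughly a `COLLECT ... WITH COUNT INTO` query); with MapReduce you spell out each phase yourself. The input lines are invented sample data.

```python
from collections import defaultdict
from functools import reduce

lines = ["the quick brown fox", "the lazy dog", "the fox"]

# Map phase: each line emits a list of (word, 1) pairs.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle phase: group the pairs by key (word).
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: sum the counts for each word.
counts = {word: reduce(lambda a, b: a + b, vals)
          for word, vals in grouped.items()}
print(counts["the"])  # 3
```

Real Hadoop jobs add job configuration, serialization, and cluster scheduling on top of these three phases, which is where much of the complexity comes from.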
Scalability: ArangoDB can scale both vertically and horizontally, allowing for increased performance as data volume grows by adding more resources to a single server or distributing data across multiple servers. In contrast, Hadoop excels in horizontal scalability by distributing data and processing across a cluster of commodity hardware.
Real-time Processing: ArangoDB supports real-time processing and analytics on live data streams, making it suitable for applications requiring instant insights or responses. Hadoop, in comparison, is better suited for batch processing and offline analytics due to its reliance on MapReduce.
Ease of Use: ArangoDB provides a user-friendly interface and comprehensive documentation, making it easier for developers to get started with data modeling, querying, and administration. Hadoop, on the other hand, has a steeper learning curve due to its complex ecosystem of tools and dependencies, requiring specialized skills to set up and manage clusters effectively.
Use Cases: ArangoDB is well-suited for applications requiring a combination of document, key-value, and graph data models, such as social networking, content management, and recommendation systems. Meanwhile, Hadoop is commonly used for processing large volumes of data in batch mode, such as log analysis, data warehousing, and ETL (Extract, Transform, Load) operations.
In summary, ArangoDB excels in its flexibility and ease of use for multi-model data processing, real-time analytics, and diverse use cases, while Hadoop is optimized for handling large-scale distributed data processing through its MapReduce framework.
Hello All, I'm building an app that will enable users to create documents using the CKEditor or TinyMCE editor. The data is then stored in a database and retrieved for display to the user; these docs can also contain image data. A single document can run up to 1000 pages, so by design each page is stored as a separate JSON document. I'm wondering which database is the right one to choose between ArangoDB and PostgreSQL. Your thoughts and advice, please. Thanks, Kashyap
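A minimal sketch of the per-page storage design described above, assuming one JSON document per page keyed by a hypothetical `(doc_id, page_no)` pair. All field names here are invented for illustration; either ArangoDB or PostgreSQL's `jsonb` could store documents shaped like this.

```python
import json

def make_page(doc_id: str, page_no: int, html: str, images: list) -> str:
    """Serialize a single editor page as a standalone JSON document."""
    return json.dumps({
        "_key": f"{doc_id}:{page_no}",  # natural key makes page lookups direct
        "doc_id": doc_id,
        "page_no": page_no,
        "html": html,                   # CKEditor/TinyMCE output for this page
        "images": images,               # e.g. object-store URLs rather than blobs
    })

page = json.loads(make_page("report-7", 1, "<p>Hello</p>", []))
print(page["_key"])  # report-7:1
```

One design note: storing image URLs instead of binary data keeps each page document small, which matters when a single logical document can span up to 1000 pages.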
Which Graph DB features are you planning to use?
For a property and casualty insurance company, we currently use MarkLogic and Hadoop for our raw data lake. We're trying to figure out how Snowflake fits into the picture. Does anybody have good suggestions/best practices for when to use each and what data to store in MarkLogic versus Snowflake versus Hadoop, or are all three of these platforms redundant with one another?
As I see it, you can use Snowflake as your data warehouse and MarkLogic as your data lake. You can land all your raw data in MarkLogic, curate it into a company data model, and then supply that to Snowflake. You could try to implement the data-warehouse functionality on MarkLogic itself, but it would just cost you a lot of time. If you are using the AWS version of Snowflake, you can use the MarkLogic Spark connector to access the data. As a bonus, you can also use MarkLogic as an operational reporting system if you pair it with a reporting tool like Power BI. With additional APIs, you can also provide data to other systems with MarkLogic as the source.
I have a lot of data that's currently sitting in a MariaDB database, including tables that weigh 200 GB with indexes. Most of the large tables have a date column that is always filtered, but there are usually 4-6 additional columns that are filtered and used for statistics. I'm trying to figure out the best tool for storing and analyzing large amounts of data, preferably self-hosted or a cheap solution. The current problem I'm running into is speed: even with pretty good indexes, loading a large dataset is slow.
Druid could be an amazing solution for your use case. My understanding and assumption is that you are looking to export your data from MariaDB for analytical workloads. Druid can serve as a time-series database as well as a data warehouse, and it can be scaled horizontally as your data grows. It's pretty easy to set up in any environment (cloud, Kubernetes, or a self-hosted *nix system). Some important features that make it a good fit for your use case:
1. It can do streaming ingestion (Kafka, Kinesis) as well as batch ingestion (files from local or cloud storage, or databases like MySQL and Postgres); in your case MariaDB, which uses the same drivers as MySQL.
2. It's a columnar database, so you can query just the fields that are required, which automatically makes your queries faster.
3. Druid intelligently partitions data by time, so time-based queries are significantly faster than in traditional databases.
4. Scale up or down by just adding or removing servers, and Druid automatically rebalances; its fault-tolerant architecture routes around server failures.
5. It provides an amazing centralized UI to manage data sources, queries, and tasks.
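A toy illustration of why time partitioning (point 3 above) helps with the always-filtered date column: rows are bucketed by day at ingestion time, so a date-filtered query scans only the matching bucket instead of the whole table. All names and data below are invented; real Druid segments also add columnar storage and indexing on top of this idea.

```python
from collections import defaultdict
from datetime import date

rows = [
    {"day": date(2023, 1, 1), "status": "ok"},
    {"day": date(2023, 1, 1), "status": "err"},
    {"day": date(2023, 1, 2), "status": "ok"},
]

# "Ingestion": bucket rows into per-day partitions.
partitions = defaultdict(list)
for row in rows:
    partitions[row["day"]].append(row)

# "Query": count errors on Jan 1; only that one partition is scanned,
# regardless of how many other days exist.
target = partitions[date(2023, 1, 1)]
errors = sum(1 for r in target if r["status"] == "err")
print(errors)  # 1
```

The win grows with table size: a query over one day of a multi-year table touches a tiny fraction of the data, which is exactly the access pattern described in the question.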
Pros of ArangoDB
- Graphs and documents in one DB (37)
- Intuitive and rich query language (26)
- Good documentation (25)
- Open source (25)
- Joins for collections (21)
- Foxx is a great platform (15)
- Great out-of-the-box web interface with API playground (14)
- Good driver support (6)
- Low maintenance efforts (6)
- Clustering (6)
- Easy microservice creation with Foxx (5)
- You can write true backendless apps (4)
- Managed solution available (2)
- Performance (0)
Pros of Hadoop
- Great ecosystem (39)
- One stack to rule them all (11)
- Great load balancer (4)
- Amazon AWS (1)
- Java syntax (1)
Cons of ArangoDB
- Web UI still has room for improvement (3)
- No support for the Blueprints standard; uses custom AQL (2)