What is Dgraph and what are its top alternatives?
Dgraph is a distributed, transactional, and scalable graph database built specifically for handling massive datasets and complex queries in real time. Its key features include horizontal scalability, a distributed architecture, sharding, GraphQL support, ACID transactions, and customizable indexing. However, its limitations include a relatively steep learning curve for beginners, limited driver support compared to other graph databases, and potential performance bottlenecks under heavy write workloads.
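To make the GraphQL support concrete, here is a minimal sketch (not taken from the Dgraph documentation): it pushes a small GraphQL schema to a local Dgraph Alpha and queries the auto-generated /graphql endpoint. The Person type, the field names, and the default port 8080 are assumptions, and it relies on Node 18+ for the built-in fetch.

```typescript
// Minimal sketch of Dgraph's GraphQL support (assumes a local Dgraph Alpha with
// its HTTP endpoint on the default port 8080; Person and its fields are hypothetical).

const schema = `
  type Person {
    name: String! @search(by: [term])
    friends: [Person]
  }
`;

async function main(): Promise<void> {
  // Dgraph generates queries and mutations (addPerson, queryPerson, ...) from the schema.
  await fetch("http://localhost:8080/admin/schema", { method: "POST", body: schema });

  const query = `
    query {
      queryPerson(filter: { name: { anyofterms: "Alice" } }) {
        name
        friends { name }
      }
    }
  `;

  const res = await fetch("http://localhost:8080/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  console.log(JSON.stringify(await res.json(), null, 2));
}

main().catch(console.error);
```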
- Neo4j: Neo4j is a popular graph database known for its high performance and powerful query language, Cypher. Key features include ACID transactions, native graph storage, and flexible schema. Pros include strong community support, comprehensive documentation, and a user-friendly interface. However, Neo4j can be resource-intensive and costly for large-scale deployments.
- ArangoDB: ArangoDB is a multi-model database that supports documents, key-value pairs, and graphs. It offers a flexible data model, distributed architecture, and a rich query language (AQL). Pros include multi-model capabilities, horizontal scalability, and strong community support. However, ArangoDB may not be as optimized for graph-specific workloads as Dgraph.
- TigerGraph: TigerGraph is a high-performance graph database designed for real-time deep link analytics. Key features include parallel graph processing, scalable graph storage, and built-in machine learning algorithms. Pros include fast query performance, easy data import/export, and strong support for complex graph algorithms. However, TigerGraph may be overkill for simpler graph applications compared to Dgraph.
- Amazon Neptune: Amazon Neptune is a fully managed graph database service offered by AWS. It supports both property graph and RDF models, with features like high availability, automatic backups, and encryption at rest. Pros include seamless integration with other AWS services, global scalability, and managed infrastructure. However, Neptune may have limitations in terms of customization and control compared to self-managed solutions like Dgraph.
- JanusGraph: JanusGraph is an open-source distributed graph database built on the Apache TinkerPop stack. It offers support for various storage backends, graph analytics, and integration with popular big data tools like Apache Hadoop and Apache Spark. Pros include flexibility in storage options, strong community backing, and robust scalability. However, JanusGraph may require more manual configuration and maintenance compared to Dgraph.
- OrientDB: OrientDB is a multi-model database that supports documents, graphs, and key-value pairs. It offers features like ACID transactions, distributed architecture, and a SQL-like query language. Pros include versatile data modeling capabilities, built-in visualization tools, and active community support. However, OrientDB may lack specialized optimizations for graph-specific workloads compared to Dgraph.
- AllegroGraph: AllegroGraph is an RDF triplestore and graph database designed for semantic data management. It provides features such as semantic reasoning, SPARQL query support, and geospatial querying. Pros include advanced semantic data processing capabilities, high performance for graph analytics, and cross-platform compatibility. However, AllegroGraph may have a steeper learning curve and higher resource requirements compared to Dgraph.
- GraphBase: GraphBase is a distributed graph database that focuses on high availability and fault tolerance. It offers features like real-time data replication, automatic failover, and distributed query processing. Pros include robust data consistency guarantees, performance optimizations for distributed systems, and ease of integration with existing infrastructure. However, GraphBase may have limitations in terms of query expressiveness and advanced graph algorithms compared to Dgraph.
- AnzoGraph DB: AnzoGraph DB is a scalable graph database designed for analyzing complex relationships in large datasets. It offers features like MPP query processing, GPU acceleration, and native RDF support. Pros include fast query performance on large graphs, easy data loading capabilities, and compatibility with semantic web standards. However, AnzoGraph DB may be more specialized for RDF-based applications compared to the general-purpose capabilities of Dgraph.
Top Alternatives to Dgraph
- Neo4j
Neo4j stores data in nodes connected by directed, typed relationships with properties on both, also known as a Property Graph. It is a high performance graph store with all the features expected of a mature and robust database, like a friendly query language and ACID transactions. ...
- Titan
Titan is a scalable graph database optimized for storing and querying graphs containing hundreds of billions of vertices and edges distributed across a multi-machine cluster. Titan is a transactional database that can support thousands of concurrent users executing complex graph traversals in real time. ...
- ArangoDB
A distributed, free, and open-source database with a flexible data model for documents, graphs, and key-values. Build high-performance applications using a convenient SQL-like query language or JavaScript extensions. ...
- Cayley
Cayley is an open-source graph database inspired by the graph database behind Freebase and Google's Knowledge Graph. Its goal is to be a part of the developer's toolbox where Linked Data and graph-shaped data (semantic webs, social networks, etc.) in general are concerned. ...
- GraphQL
GraphQL is a data query language and runtime designed and used at Facebook to request and deliver data to mobile and web apps since 2012. ...
- MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding. ...
- JanusGraph
It is a scalable graph database optimized for storing and querying graphs containing hundreds of billions of vertices and edges distributed across a multi-machine cluster. It is a transactional database that can support thousands of concurrent users executing complex graph traversals in real time. ...
- Neptune
It brings organization and collaboration to data science projects. All the experiment-related objects are backed up and organized, ready to be analyzed, reproduced, and shared with others. It works with all common technologies and integrates with other tools. ...
Dgraph alternatives & related posts
Neo4j
Pros of Neo4j
- Cypher – graph query language (69)
- Great graphdb (61)
- Open source (33)
- Rest api (31)
- High-Performance Native API (27)
- ACID (23)
- Easy setup (21)
- Great support (17)
- Clustering (11)
- Hot Backups (9)
- Great Web Admin UI (8)
- Powerful, flexible data model (7)
- Mature (7)
- Embeddable (6)
- Easy to Use and Model (5)
- Highly-available (4)
- Best Graphdb (4)
- It's awesome, I wanted to try it (2)
- Great onboarding process (2)
- Great query language and built in data browser (2)
- Used by Crunchbase (2)
Cons of Neo4j
- Comparably slow (9)
- Can't store a vertex as JSON (4)
- Doesn't have a managed cloud service at low cost (1)
related Neo4j posts
Hello StackShare. I'm currently doing some research on real-time reporting and analytics architectures. We have a use case with 1 million+ user records, 4 million+ activities, and messages that we want to report against. The first attempt was to present it directly from MySQL, which didn't go well and put a heavy load on the database. Can anybody suggest something where we can feed in the data and report in real time? I read some articles about ElasticSearch and Kafka: https://medium.com/@D11Engg/building-scalable-real-time-analytics-alerting-and-anomaly-detection-architecture-at-dream11-e20edec91d33 EDIT: also considering Neo4j
Google Maps lets "property owners and their authorized representatives" upload indoor maps, but this appears to lack navigation ("wayfinding").
MappedIn is a platform and has SDKs for building indoor mapping experiences (https://www.mappedin.com/) and ESRI ArcGIS also offers some indoor mapping tools (https://www.esri.com/en-us/arcgis/indoor-gis/overview). Finally, there used to be a company called LocusLabs that is now a part of Atrius and they were often integrated into airlines' apps to provide airport maps with wayfinding (https://atrius.com/solutions/personal-experiences/personal-wayfinder/).
I previously worked at Mapbox and while I believe that it's a great platform for building map-based experiences, they don't have any simple solutions for indoor wayfinding. If I were doing this for fun as a side-project and prioritized saving money over saving time, here is what I would do:
Create a graph-based dataset representing the walking paths around your university, where nodes/vertexes represent the intersections of paths, and edges represent paths (literally paths outside, hallways, short path segments that represent entering rooms). You could store this in a hosted graph-based database like Neo4j, Amazon Neptune, or Azure Cosmos DB (with its Gremlin API) and use built-in "shortest path" queries, or deploy a PostgreSQL service with pgRouting.
Add two properties to each edge: one property for the distance between its nodes (libraries like @turf/helpers will have a distance function if you have the latitude & longitude of each node), and another property estimating the walking time (based on the distance). Once you have these values saved in a graph-based format, you should be able to easily query and find the data representation of paths between two points.
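As a rough sketch of those two steps, and not a definitive implementation: the snippet below computes an edge's length with Turf (the distance helper lives in @turf/distance, alongside @turf/helpers), stores the segment in Neo4j, and retrieves a route with Cypher's built-in shortestPath. The Intersection label, PATH relationship, credentials, and walking speed are all assumptions. Note that shortestPath minimizes hop count; distance- or time-weighted routing would need something like the Graph Data Science library's Dijkstra procedure.

```typescript
// Rough sketch of the approach described above (hypothetical labels and properties).
import { point } from "@turf/helpers";
import distance from "@turf/distance";
import neo4j from "neo4j-driver";

const WALK_KMH = 5; // assumed average walking speed

const driver = neo4j.driver("bolt://localhost:7687", neo4j.auth.basic("neo4j", "password"));

// Create one path segment (edge) with its distance and estimated walking time.
async function addSegment(
  a: { id: string; lng: number; lat: number },
  b: { id: string; lng: number; lat: number }
): Promise<void> {
  const km = distance(point([a.lng, a.lat]), point([b.lng, b.lat]), { units: "kilometers" });
  const session = driver.session();
  try {
    await session.run(
      `MERGE (x:Intersection {id: $aId}) SET x.lng = $aLng, x.lat = $aLat
       MERGE (y:Intersection {id: $bId}) SET y.lng = $bLng, y.lat = $bLat
       MERGE (x)-[r:PATH]-(y) SET r.km = $km, r.walkMinutes = $mins`,
      { aId: a.id, aLng: a.lng, aLat: a.lat, bId: b.id, bLng: b.lng, bLat: b.lat,
        km, mins: (km / WALK_KMH) * 60 }
    );
  } finally {
    await session.close();
  }
}

// Fewest-hops route between two intersections, plus total estimated walking time.
async function route(fromId: string, toId: string) {
  const session = driver.session();
  try {
    const res = await session.run(
      `MATCH (a:Intersection {id: $fromId}), (b:Intersection {id: $toId})
       MATCH p = shortestPath((a)-[:PATH*]-(b))
       RETURN [n IN nodes(p) | n.id] AS stops,
              reduce(mins = 0.0, r IN relationships(p) | mins + r.walkMinutes) AS totalMinutes`,
      { fromId, toId }
    );
    return res.records[0]?.toObject();
  } finally {
    await session.close();
  }
}
```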
At this point, you'd have the routing problem solved and it would come down to building a UI. Mapbox arguably leads the industry in developer tools for custom map experiences. You could convert your nodes/edges to GeoJSON, then either upload to Mapbox and create a Tileset to visualize the paths, or add the GeoJSON to the map on the fly.
*You might be able to use open source routing tools like OSRM (https://github.com/Project-OSRM/osrm-backend/issues/6257) or Graphhopper (instead of a custom graph database implementation), but it would likely be more involved to maintain these services.
related Titan posts
ArangoDB
Pros of ArangoDB
- Graphs and documents in one DB (37)
- Intuitive and rich query language (26)
- Good documentation (25)
- Open source (25)
- Joins for collections (21)
- Foxx is a great platform (15)
- Great out-of-the-box web interface with API playground (14)
- Good driver support (6)
- Low maintenance efforts (6)
- Clustering (6)
- Easy microservice creation with Foxx (5)
- You can write true backendless apps (4)
- Managed solution available (2)
- Performance (0)
Cons of ArangoDB
- Web UI still has room for improvement (3)
- No support for Blueprints standard, using custom AQL (2)
related ArangoDB posts
We have an in-house-built experiment management system. We produce samples as input to the next step, which can then produce one sample (1-to-1) or many samples (1-to-many). There are many steps like this. So far, we are tracking genealogy (limited tracking) in the MySQL database, and it is becoming hard to trace back to the original material or sample (I can give more details if required). So, we are considering a graph database. I am requesting advice from the experts.
- Is a graph database the right choice, or can we manage with RDBMS?
- If RDBMS, which RDBMS, and which feature or approach could make this manageable or sustainable?
- If a graph database (Neo4j, OrientDB, Azure Cosmos DB, Amazon Neptune, ArangoDB), which one is good, and what are the best practices?
I am sorry that this might be a loaded question.
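For what it's worth, the lineage-tracing part of this question is where a graph database tends to shine: a variable-length path traversal can walk back to the original material in a single query, whereas an RDBMS would typically need a recursive CTE. A minimal sketch, assuming a hypothetical Sample label and DERIVED_FROM relationship, with Neo4j used purely as an example:

```typescript
// Hedged sketch: tracing a sample back to its original material with a
// variable-length traversal (hypothetical Sample/DERIVED_FROM model).
import neo4j from "neo4j-driver";

const driver = neo4j.driver("bolt://localhost:7687", neo4j.auth.basic("neo4j", "password"));

async function lineage(sampleId: string) {
  const session = driver.session();
  try {
    const res = await session.run(
      // Each sample points at the sample(s) it was derived from; follow the
      // chain upward for any number of steps and return the full ancestry.
      `MATCH (s:Sample {id: $sampleId})-[:DERIVED_FROM*1..]->(ancestor:Sample)
       RETURN ancestor.id AS id, ancestor.step AS step`,
      { sampleId }
    );
    return res.records.map((r) => r.toObject());
  } finally {
    await session.close();
  }
}

// Usage: lineage("SAMPLE-42").then(console.log);
```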
Hello All, I'm building an app that will enable users to create documents using the CKEditor or TinyMCE editor. The data is then stored in a database and retrieved to display to the user; these docs can also contain image data. The number of pages generated for a single document can go up to 1,000, so by design each page is stored as a separate JSON document. I'm wondering which database is the right one to choose between ArangoDB and PostgreSQL. Your thoughts and advice, please. Thanks, Kashyap
Cayley
Pros of Cayley
- Full open source (7)
related Cayley posts
GraphQL
Pros of GraphQL
- Schemas defined by the requests made by the user (75)
- Will replace RESTful interfaces (63)
- The future of API's (62)
- The future of databases (49)
- Get many resources in a single request (12)
- Self-documenting (12)
- Ask for what you need, get exactly that (6)
- Query Language (6)
- Fetch different resources in one request (3)
- Type system (3)
- Evolve your API without versions (3)
- Ease of client creation (2)
- GraphiQL (2)
- Easy setup (2)
- "Open" document (1)
- Fast prototyping (1)
- Supports subscription (1)
- Standard (1)
- Good for apps that query at build time (SSR/Gatsby) (1)
- 1. Describe your data (1)
- Better versioning (1)
- Backed by Facebook (1)
- Easy to learn (1)
Cons of GraphQL
- Hard to migrate from GraphQL to another technology (4)
- More code to type (4)
- Takes longer to build compared to schemaless (2)
- No support for caching (1)
- All the pros sound like NFT pitches (1)
- No support for streaming (1)
- Works just like any other API at runtime (1)
- N+1 fetch problem (1)
- No built in security (1)
related GraphQL posts
I just finished the very first version of my new hobby project: #MovieGeeks. It is a minimalist online movie catalog for saving the movies you want to see and rating the movies you have already seen. This is just the beginning, as I am planning to add more features along the lines of sharing and discovery.
For the #BackEnd I decided to use Node.js, GraphQL, and MongoDB:
Node.js has a huge community, so it will always be a safe choice in terms of libraries and finding solutions to problems you may have.
GraphQL because I needed to improve my skills with it and because I was never comfortable with the usual REST approach. I believe GraphQL is a better option: it feels more natural for writing APIs, it improves development velocity, it fixes by design the over-fetching and under-fetching problems that are so common in REST APIs, and on top of that, the community is getting bigger and bigger.
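As a generic illustration of that over-/under-fetching point (a sketch, not code from the MovieGeeks project, with a hypothetical endpoint and schema): the client names exactly the fields it needs and the response mirrors that shape, where a typical REST endpoint would return the entire movie resource.

```typescript
// Generic sketch: a GraphQL client asks only for the fields it needs
// (hypothetical endpoint and schema).
const query = `
  query MoviesToWatch {
    movies(watched: false) {
      title
      year
      myRating
    }
  }
`;

async function fetchMovies(): Promise<unknown> {
  const res = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  // The response mirrors the query shape: { data: { movies: [{ title, year, myRating }] } }
  return (await res.json()).data.movies;
}
```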
MongoDB was my choice for the database as I already have a lot of experience working with it and because, despite some bad reputation it has acquired in recent months, I still believe it is a powerful database for a very long list of use cases, such as the one I needed for my website.
When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) Stack and the front end framework, in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company when that was called MIT FIVE when it had launched. And been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?
So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.
React is now talking to GraphQL as a primary API. There's a Node.js back end, to the front end, which is mainly for server-side rendering, as well.
Behind there, the main repository for the GraphQL server is a big table repository, that we call Bodega because it's a convenience store. And that reads off of a Kafka pipeline.
MongoDB
Pros of MongoDB
- Document-oriented storage (829)
- NoSQL (594)
- Ease of use (554)
- Fast (465)
- High performance (410)
- Free (255)
- Open source (219)
- Flexible (180)
- Replication & high availability (145)
- Easy to maintain (112)
- Querying (42)
- Easy scalability (39)
- Auto-sharding (38)
- High availability (37)
- Map/reduce (31)
- Document database (27)
- Easy setup (25)
- Full index support (25)
- Reliable (16)
- Fast in-place updates (15)
- Agile programming, flexible, fast (14)
- No database migrations (12)
- Easy integration with Node.js (8)
- Enterprise (8)
- Enterprise Support (6)
- Great NoSQL DB (5)
- Support for many languages through different drivers (4)
- Schemaless (3)
- Aggregation Framework (3)
- Drivers support is good (3)
- Fast (2)
- Managed service (2)
- Easy to Scale (2)
- Awesome (2)
- Consistent (2)
- Good GUI (1)
- ACID compliant (1)
Cons of MongoDB
- Very slow for connected models that require joins (6)
- Not ACID compliant (3)
- Proprietary query language (2)
related MongoDB posts
Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.
We set ourselves the following criteria for the optimal tool that would do this job:
- The data replication must be near real-time, yet it should NOT impact the production database
- The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient
Based on the above criteria, we selected the following tools to perform the end-to-end data replication:
We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.
We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.
In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.
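A minimal sketch of what that change-forwarding step could look like, using the plain AWS SDK v3 SQS client rather than Stitch's built-in AWS service bindings; the queue URL, region, and event shape are assumptions rather than the team's actual code.

```typescript
// Hedged sketch: forward a MongoDB change event (insert/update/delete/replace)
// to an SQS queue using the AWS SDK v3 (queue URL and event shape are assumed).
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });
const QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mongo-changes"; // hypothetical

interface ChangeEvent {
  operationType: "insert" | "update" | "delete" | "replace";
  ns: { db: string; coll: string };
  documentKey: { _id: unknown };
  fullDocument?: Record<string, unknown>;
}

export async function forwardChange(event: ChangeEvent): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: QUEUE_URL,
      MessageBody: JSON.stringify({
        op: event.operationType,
        collection: `${event.ns.db}.${event.ns.coll}`,
        key: event.documentKey,
        doc: event.fullDocument ?? null,
      }),
    })
  );
}
```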
Next, we wrote a minimal microservice in Python to listen to the message events on SQS, pick up the data payload, and mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling the target table structures with SQLAlchemy. We deployed this microservice as an AWS Lambda function with Zappa. With Zappa, deploying your services as event-driven, horizontally scalable Lambda functions is dumb-easy.
In the end, we got to implement a highly scalable, near real-time change data replication service that "works", and it was deployed to production in a matter of a few days!
We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.
As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
related JanusGraph posts
Neptune
Pros of Neptune
- AWS managed services (1)
- Supports both Gremlin and openCypher query languages (1)
Cons of Neptune
- Doesn't have much support for openCypher clients (1)
- Doesn't have proper clients for different languages (1)
- Doesn't have much community support (1)