What is FaunaDB and what are its top alternatives?
Top Alternatives to FaunaDB
Firebase is a cloud service designed to power real-time, collaborative applications. Simply add the Firebase library to your application to gain access to a shared data structure; any changes you make to that data are automatically synchronized with the Firebase cloud and with other clients within milliseconds. ...
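For example, here is a minimal sketch using the modular Firebase JS SDK; the project config values and the `games/game-1/score` data path are placeholders for your own:

```typescript
import { initializeApp } from "firebase/app";
import { getDatabase, ref, onValue, set } from "firebase/database";

// Placeholder config: use the values from your own Firebase project.
const app = initializeApp({
  apiKey: "YOUR_API_KEY",
  databaseURL: "https://your-project.firebaseio.com",
});

const db = getDatabase(app);
const scoreRef = ref(db, "games/game-1/score");

// Writes are synchronized to the Firebase cloud and to other clients.
set(scoreRef, { home: 3, away: 1 });

// Subscribe to the shared data structure; fires on every remote update too.
onValue(scoreRef, (snapshot) => {
  console.log("score is now", snapshot.val());
});
```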
CockroachDB allows you to deploy a database on-prem, in the cloud, or even across clouds, all as a single store. It is a simple and straightforward bridge to your future, cloud-based data architecture. ...
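Because CockroachDB speaks the PostgreSQL wire protocol, the standard node-postgres client works against it unchanged. A rough sketch (the connection string and `accounts` table are hypothetical; 26257 is CockroachDB's default port):

```typescript
import { Client } from "pg";

// Placeholder DSN; swap in your own cluster address and credentials.
const client = new Client({
  connectionString: "postgresql://user:pass@localhost:26257/defaultdb",
});

async function main() {
  await client.connect();
  await client.query(
    "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"
  );
  // UPSERT is CockroachDB's insert-or-update statement.
  await client.query("UPSERT INTO accounts (id, balance) VALUES ($1, $2)", [1, 500]);
  const { rows } = await client.query("SELECT balance FROM accounts WHERE id = $1", [1]);
  console.log(rows[0].balance);
  await client.end();
}

main().catch(console.error);
```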
Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL. ...
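A small sketch with the DataStax Node.js driver shows how close CQL is to SQL; the contact point, data center name, keyspace, and `users` table are placeholders (the keyspace must already exist):

```typescript
import { Client } from "cassandra-driver";

const client = new Client({
  contactPoints: ["127.0.0.1"],
  localDataCenter: "datacenter1", // must match your cluster's DC name
  keyspace: "demo",
});

async function main() {
  // Looks like SQL, but the PRIMARY KEY also determines partitioning.
  await client.execute(
    "CREATE TABLE IF NOT EXISTS users (user_id uuid PRIMARY KEY, name text)"
  );
  await client.execute(
    "INSERT INTO users (user_id, name) VALUES (uuid(), ?)",
    ["Ada"],
    { prepare: true }
  );
  const result = await client.execute("SELECT user_id, name FROM users LIMIT 10");
  console.log(result.rows);
  await client.shutdown();
}

main().catch(console.error);
```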
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding. ...
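The flexible schema is easiest to see in code. A minimal sketch with the official Node.js driver (connection string, database, and collection names are placeholders):

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");

async function main() {
  await client.connect();
  const products = client.db("shop").collection("products");

  // Documents in one collection can vary in structure: no migration needed.
  await products.insertOne({ name: "pen", price: 2 });
  await products.insertOne({ name: "notebook", price: 5, tags: ["paper", "a5"] });

  const cheap = await products.find({ price: { $lt: 4 } }).toArray();
  console.log(cheap);
  await client.close();
}

main().catch(console.error);
```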
FoundationDB is a NoSQL database with a shared-nothing architecture. Designed around a "core" ordered key-value database, additional features and data models are supplied in layers. The key-value database, as well as all layers, supports full, cross-key and cross-server ACID transactions. ...
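To make the "layers" idea concrete, here is an illustrative toy in TypeScript, not the real FoundationDB client API: an in-memory ordered key-value store standing in for the core, with a minimal document "layer" encoded on top of it.

```typescript
// Toy stand-in for the core: an ordered key-value store with range reads.
class OrderedKV {
  private data = new Map<string, string>();
  set(key: string, value: string) { this.data.set(key, value); }
  get(key: string) { return this.data.get(key); }
  // Ordered range reads over key prefixes are what layers build on.
  range(prefix: string): Array<[string, string]> {
    return [...this.data.entries()]
      .filter(([k]) => k.startsWith(prefix))
      .sort(([a], [b]) => (a < b ? -1 : 1));
  }
}

// A minimal "layer": a document model mapped onto key-value pairs.
class DocumentLayer {
  constructor(private kv: OrderedKV) {}
  save(collection: string, id: string, doc: Record<string, string>) {
    for (const [field, value] of Object.entries(doc)) {
      this.kv.set(`${collection}/${id}/${field}`, value);
    }
  }
  load(collection: string, id: string): Record<string, string> {
    return Object.fromEntries(
      this.kv.range(`${collection}/${id}/`).map(([k, v]) => [k.split("/")[2], v])
    );
  }
}

const docs = new DocumentLayer(new OrderedKV());
docs.save("users", "42", { name: "Ada", city: "London" });
console.log(docs.load("users", "42")); // { name: "Ada", city: "London" }
```

In the real system, every layer operation runs inside the core's ACID transactions, which is what keeps the different data models consistent with each other.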
Hasura is an open source GraphQL engine that deploys instant, realtime GraphQL APIs on any Postgres database. ...
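Querying the generated API is a plain GraphQL POST to Hasura's `/v1/graphql` endpoint. In this sketch the `todos` table and its fields are hypothetical (they come from your own Postgres schema), as is the admin secret:

```typescript
async function fetchTodos() {
  const res = await fetch("http://localhost:8080/v1/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Only needed if an admin secret is configured on the instance.
      "x-hasura-admin-secret": "YOUR_ADMIN_SECRET",
    },
    body: JSON.stringify({
      // Hasura auto-generates filters like _eq from the table schema.
      query: `query { todos(where: { done: { _eq: false } }) { id title } }`,
    }),
  });
  const { data } = await res.json();
  return data.todos;
}

fetchTodos().then(console.log).catch(console.error);
```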
Prisma is an open-source database toolkit. It replaces traditional ORMs and makes database access easy with an auto-generated query builder for TypeScript & Node.js. ...
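A minimal sketch of the generated query builder; the `user` model and its fields are hypothetical examples that would come from your own `schema.prisma`:

```typescript
import { PrismaClient } from "@prisma/client";

// The client is generated from your schema, so queries are fully typed.
const prisma = new PrismaClient();

async function main() {
  await prisma.user.create({
    data: { email: "ada@example.com", name: "Ada" },
  });

  // The compiler knows the shape of `users` at every step.
  const users = await prisma.user.findMany({
    where: { email: { endsWith: "@example.com" } },
  });
  console.log(users);
}

main().finally(() => prisma.$disconnect());
```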
The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. ...
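For the multi-user, heavy-load case, a connection pool is the usual pattern from Node.js. A sketch with the mysql2 client (credentials and the `products` table are placeholders):

```typescript
import mysql from "mysql2/promise";

async function main() {
  // A pool lets many concurrent requests share connections to the server.
  const pool = mysql.createPool({
    host: "localhost",
    user: "app",
    password: "secret",
    database: "shop",
    connectionLimit: 10,
  });

  const [rows] = await pool.execute(
    "SELECT id, name FROM products WHERE price < ?",
    [10]
  );
  console.log(rows);
  await pool.end();
}

main().catch(console.error);
```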
FaunaDB alternatives & related posts
Pros of Firebase
- Realtime backend made easy (361)
- Fast and responsive (264)
- Easy setup (234)
- Backed by Google (121)
- Angular adaptor (81)
- Great customer support (36)
- Great documentation (26)
- Real-time synchronization (23)
- Mobile friendly (20)
- Rapid prototyping (17)
- Great security (12)
- Automatic scaling (11)
- Freakingly awesome (10)
- AngularFire is an amazing addition! (8)
- Super fast development (8)
- Awesome next-gen backend (6)
- iOS adaptor (6)
- Built-in user auth/OAuth (5)
- Firebase hosting (5)
- Speed of light (4)
- Very easy to use (4)
- It's made development super fast (3)
- Brilliant for startups (3)
- Great all-round functionality (2)
- Low battery consumption (2)
- I can quickly create static web apps with no backend (2)
- The concurrent updates create a great experience (2)
- JS offline and sync support (2)
- Faster workflow (1)
- Free SSL (1)
- Good free limits (1)
- Push notification (1)
- Easy to use (1)
- Easy React.js integration (1)
Cons of Firebase
- Can become expensive (28)
- Scalability is not infinite (15)
- Not open source; you depend on an external company (14)
- Not flexible enough (9)
- Can't filter queries (5)
- Very unstable server (3)
- Too many errors (2)
- No relational data (2)
related Firebase posts
This is my stack in Application & Data.
My Utilities Tools: Google Analytics, Postman, Elasticsearch.
My DevOps Tools: Git, GitHub, GitLab, npm, Visual Studio Code, Kibana, Sentry, BrowserStack.
My Business Tools: …
related CockroachDB posts
Pros of Cassandra
- High performance (93)
- High availability (80)
- Easy scalability (74)
- Multi datacenter deployments (26)
- Schema optional (7)
- Open source (6)
- Workload separation (via MDC) (2)
- Reliability of replication (2)
related Cassandra posts
1.0 of Stream leveraged Cassandra for storing the feed. Cassandra is a common choice for building feeds. Instagram, for instance, started out with Redis but eventually switched to Cassandra to handle their rapid usage growth. Cassandra can handle write-heavy workloads very efficiently.
Cassandra is a great tool that allows you to scale write capacity simply by adding more nodes, though it is also very complex. This complexity made it hard to diagnose performance fluctuations. Even though we had years of experience with running Cassandra, it still felt like a bit of a black box. When building Stream 2.0 we decided to go for a different approach and build Keevo. Keevo is our in-house key-value store built upon RocksDB, gRPC and Raft.
RocksDB is a highly performant embeddable database library developed and maintained by Facebook's data engineering team. RocksDB started as a fork of Google's LevelDB that introduced several performance improvements for SSDs. Nowadays RocksDB is a project on its own and is under active development. It is written in C++ and it's fast. Have a look at how this benchmark handles 7 million QPS. In terms of technology it's much simpler than Cassandra.
This translates into reduced maintenance overhead, improved performance and, most importantly, more consistent performance. It’s interesting to note that LinkedIn also uses RocksDB for their feed.
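"Embeddable" here means the store lives inside your process rather than behind a server. As a hedged sketch, this is roughly what that looks like from Node.js via the community `rocksdb` package (which exposes the callback-style abstract-leveldown API); the data path and keys are placeholders:

```typescript
import rocksdb from "rocksdb";

// No server to run: the database is just a directory on local disk.
const db = rocksdb("./feed-data");

db.open((err) => {
  if (err) throw err;
  db.put("user:1:feed:001", "first activity", (err) => {
    if (err) throw err;
    db.get("user:1:feed:001", (err, value) => {
      if (err) throw err;
      console.log(value.toString());
      db.close(() => {});
    });
  });
});
```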
#InMemoryDatabases #DataStores #Databases
Developing a solution that collects telemetry data from different devices: at least 1,000 devices and at most 12,000, each sending 2 packets per second. This is time-series data, and the data definitions and various reports (building information, maintenance records, etc.) are saved in PostgreSQL. I want to know the best solution. This data is required to run different math and ML algorithms. The data itself is raw; the definitions and metadata are stored in PostgreSQL. Initially I went with TimescaleDB because of its PostgreSQL support, but as the number of sites increased I started facing many issues with TimescaleDB in terms of flexibility of storing data.
Another major requirement is replication of the database for reporting and other purposes. You may also suggest options other than Druid and Cassandra, but an open source solution is appreciated.
Pros of MongoDB
- Document-oriented storage (823)
- NoSQL (590)
- Ease of use (546)
- High performance (405)
- Open source (215)
- Replication & high availability (142)
- Easy to maintain (109)
- Easy scalability (37)
- High availability (35)
- Document database (26)
- Easy setup (24)
- Full index support (24)
- Fast in-place updates (14)
- Agile programming, flexible, fast (13)
- No database migrations (11)
- Easy integration with Node.js (7)
- Enterprise support (5)
- Great NoSQL DB (4)
- Aggregation Framework (3)
- Support for many languages through different drivers (3)
- Drivers support is good (3)
- Managed service (2)
- Easy to scale (2)
Cons of MongoDB
- Very slow for connected models that require joins (5)
- Not ACID compliant (3)
- Proprietary query language (1)
related MongoDB posts
Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.
We set ourselves the following criteria for the optimal tool that would do this job:
- The data replication must be near real-time, yet it should NOT impact the production database
- The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient
Based on the above criteria, we selected the following tools to perform the end to end data replication:
We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using stitch triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.
We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.
In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.
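As a hedged sketch of that kind of minimal function (not the post's actual code): the queue URL and `onDatabaseChange` name are hypothetical, the event shape mirrors MongoDB change streams, and the AWS SDK v3 SQS client does the forwarding. The Stitch runtime would wire the change event in through its own trigger context.

```typescript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });
// Hypothetical queue URL.
const QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/db-changes";

export async function onDatabaseChange(changeEvent: {
  operationType: "insert" | "update" | "delete" | "replace";
  documentKey: { _id: string };
  fullDocument?: Record<string, unknown>;
}) {
  // Forward the raw change onto the queue; the consumer downstream
  // mirrors it into the target data warehouse.
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: QUEUE_URL,
      MessageBody: JSON.stringify(changeEvent),
    })
  );
}
```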
Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload & mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as event-driven & horizontally scalable Lambda services is dumb-easy.
In the end, we got to implement a highly scalable near realtime Change Data Replication service that "works" and deployed to production in a matter of few days!
We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.
As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
Pros of FoundationDB
- ACID transactions (6)
- Linear scalability (4)
- Multi-model database (3)
- Key-Value Store (3)
- Great Foundation (3)
- SQL Layer (1)
related FoundationDB posts
Pros of Hasura
- Easy GraphQL subscriptions (16)
- Easy setup of relationships and permissions (14)
- Automatically generates your GraphQL schema (13)
- Minimal learning curve (13)
- No back-end code required (12)
- Works with new and existing databases (11)
- Instant production ready GraphQL (10)
- Great UX (10)
- Low usage of resources (2)
Cons of Hasura
- Cumbersome validations (2)
related Hasura posts
Pros of Prisma
- Type-safe database access (9)
- Open source (8)
- Auto-generated query builder (7)
- Increases confidence during development (6)
- Built specifically for Postgres and TypeScript (4)
- Supports multiple database systems (4)
- Productive application development (4)
- Supports multiple RDBMSs (0)
- Robust migrations system (0)
Cons of Prisma
- Doesn't support downward/back migrations (1)
related Prisma posts
I just finished a web app meant for a business that offers training programs for certain professional courses. I chose this stack to test out my skills in GraphQL and React. I used Node.js, GraphQL and MySQL for the #Backend, utilizing Prisma as a database interface for MySQL to provide CRUD APIs, and graphql-yoga as a server. For the #frontend I chose React, styled-components for styling, Next.js for routing and SSR, and Apollo for data management. I really liked the outcome and I will definitely use this stack in future projects.
In my last side project, I built a web posting application that has similar features to Facebook and is hosted on Heroku. Users can register an account, create posts, upload images and share them with others. I took advantage of graphql-subscriptions to handle realtime notifications in the comments section. Currently, I'm at the last stage of styling and building layouts.
For the #Backend I used graphql-yoga, Prisma, GraphQL with PostgreSQL database. For the #FrontEnd: React, styled-components with Apollo. The app is hosted on Heroku.
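For context, realtime comment notifications with graphql-subscriptions are usually wired up roughly like this sketch; the `COMMENT_ADDED` event name and resolver shapes are hypothetical, not from the post:

```typescript
import { PubSub } from "graphql-subscriptions";

const pubsub = new PubSub();
const COMMENT_ADDED = "COMMENT_ADDED";

const resolvers = {
  Mutation: {
    addComment: async (_: unknown, args: { postId: string; body: string }) => {
      const comment = { id: Date.now().toString(), ...args };
      // ...persist the comment (e.g. via Prisma) here...
      // Notify every subscribed client about the new comment.
      await pubsub.publish(COMMENT_ADDED, { commentAdded: comment });
      return comment;
    },
  },
  Subscription: {
    commentAdded: {
      // Clients subscribed to this field get pushed each published event.
      subscribe: () => pubsub.asyncIterator(COMMENT_ADDED),
    },
  },
};
```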
Pros of MySQL
- Widely used (525)
- Open source (485)
- High availability (180)
- Cross-platform support (158)
- Great community (103)
- Full-text indexing and searching (75)
- Fast, open, available (25)
- SSL support (14)
- Enterprise Version (8)
- Easy to set up on all platforms (7)
- NoSQL access to JSON data type (2)
- Relational database (1)
- Easy, light, scalable (1)
- Sequel Pro (best SQL GUI) (1)
- Replica Support (1)
Cons of MySQL
- Owned by a company with their own agenda (14)
- Can't roll back schema changes (1)
related MySQL posts
We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans that required a lot of manual query tweaks.
We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).
And then after that, we've just gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
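To illustrate the partial-index idea (the table and column names here are hypothetical, not Zulip's schema): the index covers only the rows a hot query touches, instead of a separate table or a full index over everything.

```typescript
import { Client } from "pg";

// Index only the unread rows; queries for a user's unread messages
// stay fast without indexing the (much larger) set of read rows.
async function createUnreadIndex(client: Client) {
  await client.query(`
    CREATE INDEX IF NOT EXISTS idx_messages_unread
    ON user_messages (user_id, message_id)
    WHERE NOT read
  `);
}
```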
I can't recommend it highly enough.
Our most popular (& controversial!) article to date on the Uber Engineering blog in 3+ yrs. Why we moved from PostgreSQL to MySQL. In essence, it was due to a variety of limitations of Postgres at the time. Fun fact -- earlier in Uber's history we'd actually moved from MySQL to Postgres before switching back for good, & though we published the article in Summer 2016 we haven't looked back since:
The early architecture of Uber consisted of a monolithic backend application written in Python that used Postgres for data persistence. Since that time, the architecture of Uber has changed significantly, to a model of microservices and new data platforms. Specifically, in many of the cases where we previously used Postgres, we now use Schemaless, a novel database sharding layer built on top of MySQL (https://eng.uber.com/schemaless-part-one/). In this article, we’ll explore some of the drawbacks we found with Postgres and explain the decision to build Schemaless and other backend services on top of MySQL: