What is Fauna and what are its top alternatives?
Top Alternatives to Fauna
- Firebase
Firebase is a cloud service designed to power real-time, collaborative applications. Simply add the Firebase library to your application to gain access to a shared data structure; any changes you make to that data are automatically synchronized with the Firebase cloud and with other clients within milliseconds (a minimal usage sketch follows this list). ...
- CockroachDB
CockroachDB is a distributed SQL database that can be deployed as a serverless, dedicated, or on-premises cluster. It offers elastic scale, multi-active availability for resilience, and low-latency performance. ...
- Cassandra
Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL. ...
- MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding. ...
- FoundationDB
FoundationDB is a NoSQL database with a shared nothing architecture. Designed around a "core" ordered key-value database, additional features and data models are supplied in layers. The key-value database, as well as all layers, supports full, cross-key and cross-server ACID transactions. ...
- Hasura
An open source GraphQL engine that deploys instant, realtime GraphQL APIs on any Postgres database. ...
- Prisma
Prisma is an open-source database toolkit. It replaces traditional ORMs and makes database access easy with an auto-generated query builder for TypeScript & Node.js. ...
- MySQL
The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. ...
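To make the Firebase description above concrete, here is a minimal sketch using the Firebase JS SDK (v8-style namespaced API). The config value and the `messages` path are placeholders, not taken from any real project:

```typescript
import firebase from "firebase/app"; // v8-style namespaced SDK
import "firebase/database";

// Placeholder config: real values come from the Firebase console.
firebase.initializeApp({ databaseURL: "https://YOUR-APP.firebaseio.com" });

const messages = firebase.database().ref("messages");

// Every client listening on this ref receives new children within milliseconds.
messages.on("child_added", (snapshot) => {
  console.log("new message:", snapshot.val());
});

// Writes sync to the Firebase cloud and to all other connected clients.
messages.push({ text: "hello", sentAt: Date.now() });
```

The same shared-data-structure model applies across Firebase's web, iOS, and Android SDKs; the listener-plus-push pattern above is the core of it.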
Fauna alternatives & related posts
Firebase
Pros of Firebase
- Realtime backend made easy (371)
- Fast and responsive (270)
- Easy setup (242)
- Real-time (215)
- JSON (191)
- Free (134)
- Backed by Google (128)
- Angular adaptor (83)
- Reliable (68)
- Great customer support (36)
- Great documentation (32)
- Real-time synchronization (25)
- Mobile friendly (21)
- Rapid prototyping (19)
- Great security (14)
- Automatic scaling (12)
- Freakingly awesome (11)
- Super fast development (8)
- Angularfire is an amazing addition! (8)
- Chat (8)
- Firebase hosting (6)
- Built-in user auth/OAuth (6)
- Awesome next-gen backend (6)
- iOS adaptor (6)
- Speed of light (4)
- Very easy to use (4)
- Great (3)
- It's made development super fast (3)
- Brilliant for startups (3)
- Free hosting (2)
- Cloud functions (2)
- JS offline and sync support (2)
- Low battery consumption (2)
- .NET (2)
- The concurrent updates create a great experience (2)
- Push notification (2)
- I can quickly create static web apps with no backend (2)
- Great all-round functionality (2)
- Free authentication solution (2)
- Easy React.js integration (1)
- Google's support (1)
- Free SSL (1)
- CDN & cache out of the box (1)
- Easy to use (1)
- Large (1)
- Faster workflow (1)
- Serverless (1)
- Good free limits (1)
- Simple and easy (1)
Cons of Firebase
- Can become expensive (31)
- Not open source; you depend on an external company (16)
- Scalability is not infinite (15)
- Not flexible enough (9)
- Can't filter queries (7)
- Very unstable server (3)
- No relational data (3)
- Too many errors (2)
- No offline sync (2)
related Firebase posts
Hi Otensia! I'd definitely recommend using the skills you've already got, and building with JavaScript is a smart way to go these days. Most platform services have JavaScript/Node SDKs or NPM packages, many serverless platforms support Node in case you need to write any backend logic, and JavaScript is incredibly popular, meaning it will be easy to hire for, should you ever need to.
My advice would be "don't reinvent the wheel". If you already have a skill set that will work well to solve the problem at hand, and you don't need it for any other projects, don't spend the time jumping into a new language. If you're looking for an excuse to learn something new, it would be better to invest that time in learning a new platform/tool that complements your knowledge of JavaScript. For this project, I might recommend using Netlify, Vercel, or Google Firebase to quickly and easily deploy your web app. If you need to add user authentication, there are great examples out there for Firebase Authentication, Auth0, or even Magic (a newcomer on the Auth scene, but very user friendly). All of these services work very well with a JavaScript-based application.
For inboxkitten.com, an open-source disposable email service:
We migrated our serverless workload from Cloud Functions for Firebase to Cloudflare Workers, taking advantage of the lower cost and faster-performing edge computing of the Cloudflare network. This was made possible by the extremely low CPU and RAM overhead of our serverless functions.
If I were to summarize the limitations of Cloudflare (as opposed to Firebase/GCP functions), they would be:
- A <5 ms CPU time limit
- Incompatibility with Express.js
- A one-script-per-domain limitation
These are limitations our workload is able to conform to (YMMV).
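To make those constraints concrete, here is a minimal sketch (not inboxkitten's actual code) of the shape a Worker takes: a single script, hand-rolled routing, no Express. The route and response are invented for illustration:

```typescript
// Types like FetchEvent come from @cloudflare/workers-types.
// One script serves every route on the domain, so routing is done by hand
// inside a single fetch handler; Express.js middleware is unavailable here.
addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handle(event.request));
});

async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (url.pathname === "/api/mail" && request.method === "GET") {
    // Keep per-request CPU work tiny to stay under the ~5 ms CPU budget.
    return new Response(JSON.stringify({ ok: true }), {
      headers: { "content-type": "application/json" },
    });
  }
  return new Response("Not found", { status: 404 });
}
```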
For hosting of static files, we migrated from Firebase to CommonsHost.
More details on the trade-offs between the two serverless providers are in the article.
CockroachDB
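CockroachDB exposes the PostgreSQL wire protocol, so standard Postgres drivers work against it. A hedged sketch using the Node.js `pg` driver (the connection string and `accounts` table are assumptions for illustration):

```typescript
import { Pool } from "pg"; // CockroachDB is wire-compatible with PostgreSQL

// Hypothetical connection string; 26257 is CockroachDB's default port.
const pool = new Pool({ connectionString: "postgresql://root@localhost:26257/bank" });

async function transfer(from: string, to: string, amount: number): Promise<void> {
  const client = await pool.connect();
  try {
    // Transactions look exactly like Postgres; CockroachDB runs them
    // with serializable isolation across the distributed cluster.
    await client.query("BEGIN");
    await client.query("UPDATE accounts SET balance = balance - $1 WHERE id = $2", [amount, from]);
    await client.query("UPDATE accounts SET balance = balance + $1 WHERE id = $2", [amount, to]);
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```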
related CockroachDB posts
Cassandra
Pros of Cassandra
- Distributed (119)
- High performance (98)
- High availability (81)
- Easy scalability (74)
- Replication (53)
- Reliable (26)
- Multi datacenter deployments (26)
- Schema optional (10)
- OLTP (9)
- Open source (8)
- Workload separation (via MDC) (2)
- Fast (1)
Cons of Cassandra
- Reliability of replication (3)
- Size (1)
- Updates (1)
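The description above calls CQL a close relative of SQL. A minimal sketch with the DataStax `cassandra-driver` for Node.js makes that concrete; the contact point, keyspace, and `users` table are assumptions:

```typescript
import { Client } from "cassandra-driver"; // DataStax Node.js driver

const client = new Client({
  contactPoints: ["127.0.0.1"],   // assumed local node
  localDataCenter: "datacenter1", // default single-DC name
  keyspace: "demo",               // assumed keyspace
});

async function main(): Promise<void> {
  await client.connect();
  // CQL reads almost exactly like SQL, but rows are partitioned across
  // the cluster by the table's partition key (here, `id`).
  await client.execute(
    "INSERT INTO users (id, name) VALUES (?, ?)",
    ["42", "Ada"],
    { prepare: true },
  );
  const result = await client.execute(
    "SELECT name FROM users WHERE id = ?",
    ["42"],
    { prepare: true },
  );
  console.log(result.rows[0].name);
  await client.shutdown();
}

main().catch(console.error);
```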
related Cassandra posts
1.0 of Stream leveraged Cassandra for storing the feed. Cassandra is a common choice for building feeds. Instagram, for instance, started out with Redis but eventually switched to Cassandra to handle their rapid usage growth. Cassandra can handle write-heavy workloads very efficiently.
Cassandra is a great tool that allows you to scale write capacity simply by adding more nodes, though it is also very complex. This complexity made it hard to diagnose performance fluctuations. Even though we had years of experience with running Cassandra, it still felt like a bit of a black box. When building Stream 2.0 we decided to go for a different approach and build Keevo. Keevo is our in-house key-value store built upon RocksDB, gRPC and Raft.
RocksDB is a highly performant embeddable database library developed and maintained by Facebook's data engineering team. RocksDB started as a fork of Google's LevelDB that introduced several performance improvements for SSD. Nowadays RocksDB is a project on its own and is under active development. It is written in C++ and it's fast. Have a look at how this benchmark handles 7 million QPS. In terms of technology it's much simpler than Cassandra.
This translates into reduced maintenance overhead, improved performance and, most importantly, more consistent performance. It’s interesting to note that LinkedIn also uses RocksDB for their feed.
#InMemoryDatabases #DataStores #Databases
Trying to establish a data lake (or maybe puddle) for my org's Data Sharing project. The idea is that outside partners would send cuts of their PHI data, regardless of format/variables/systems, to our Data Team, who would then harmonize the data, create data marts, and eventually use it for something. End-to-end, I'm envisioning:
- Ingestion -> Secure, role-based, self-service portal for users to upload data (1a. bonus points if it can perform basic validations/masking)
- Storage -> Amazon S3 seems like the cheapest. We probably won't need very much, even at full capacity. Our current storage is a secure Box folder that has ~4GB with several batches of test data, code, presentations, and planning docs.
- Data Catalog -> AWS Glue? Azure Data Factory? Snowplow? Is the main difference basically the vendor? We also will have Data Dictionaries/Codebooks from submitters. Where would they fit in?
- Partitions -> I've seen Cassandra and YARN mentioned, but have no experience with either
- Processing -> We want to use SAS if at all possible. What will work with SAS code?
- Pipeline/Automation -> The check-in and verification processes that have been outlined are rather involved. Some sort of automated messaging or approval workflow would be nice
- I have very little guidance on what a "Data Mart" should look like, so I'm going with the idea that it would be another "experimental" partition. Unless there's an actual mart-building paradigm I've missed?
- An end user might use the catalog to pull certain de-identified data sets from the marts. Again, role-based access and a self-service GUI would be preferable.
I'm the only full-time tech person on this project, but I'm mostly an OOP, HTML, JavaScript, and some SQL programmer. Most of this is out of my repertoire. I've done a lot of research, but I can't be an effective evangelist without hands-on experience. Since we're starting a new year of our grant, they've finally decided to let me try some stuff out. Any pointers would be appreciated!
MongoDB
Pros of MongoDB
- Document-oriented storage (827)
- NoSQL (593)
- Ease of use (553)
- Fast (464)
- High performance (410)
- Free (255)
- Open source (218)
- Flexible (180)
- Replication & high availability (145)
- Easy to maintain (112)
- Querying (42)
- Easy scalability (39)
- Auto-sharding (38)
- High availability (37)
- Map/reduce (31)
- Document database (27)
- Easy setup (25)
- Full index support (25)
- Reliable (16)
- Fast in-place updates (15)
- Agile programming, flexible, fast (14)
- No database migrations (12)
- Easy integration with Node.js (8)
- Enterprise (8)
- Enterprise support (6)
- Great NoSQL DB (5)
- Support for many languages through different drivers (4)
- Schemaless (3)
- Aggregation Framework (3)
- Drivers support is good (3)
- Fast (2)
- Managed service (2)
- Easy to scale (2)
- Awesome (2)
- Consistent (2)
- Good GUI (1)
- ACID compliant (1)
Cons of MongoDB
- Very slow for connected models that require joins (6)
- Not ACID compliant (3)
- Proprietary query language (2)
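The pros above center on flexible, JSON-like documents. A minimal sketch with the official Node.js driver (the connection string, database, and collection names are placeholders) shows two differently shaped documents living in one collection:

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI

async function main(): Promise<void> {
  await client.connect();
  const events = client.db("app").collection("events");

  // Documents in the same collection can vary in structure; no migration needed.
  await events.insertMany([
    { type: "signup", user: "ada", plan: "free" },
    { type: "purchase", user: "grace", items: [{ sku: "X1", qty: 2 }] },
  ]);

  // Queries can reach into nested fields regardless of each document's shape.
  const purchases = await events.find({ "items.sku": "X1" }).toArray();
  console.log(purchases.length);

  await client.close();
}

main().catch(console.error);
```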
related MongoDB posts
Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.
We set ourselves the following criteria for the optimal tool that would do this job:
- The data replication must be near real-time, yet it should NOT impact the production database
- The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient
Based on the above criteria, we selected the following tools to perform the end to end data replication:
We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.
We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.
In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.
Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload & mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as event-driven & horizontally scalable Lambda services is dumb-easy.
In the end, we got to implement a highly scalable, near-realtime Change Data Replication service that "works", and deployed it to production in a matter of a few days!
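This is not the team's actual code, but a minimal sketch of the forwarding step described above, assuming the AWS SDK v3 for JavaScript; the queue URL and the change-event shape are placeholders:

```typescript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" }); // placeholder region

// Shape loosely modeled on a MongoDB change event; fields are illustrative.
interface ChangeEvent {
  operationType: "insert" | "update" | "delete" | "replace";
  documentKey: { _id: string };
  fullDocument?: Record<string, unknown>;
}

// Forward each database change onto SQS for the replication service to consume.
export async function forwardChange(event: ChangeEvent): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/db-changes", // placeholder
      MessageBody: JSON.stringify(event),
    }),
  );
}
```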
We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.
As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
FoundationDB
Pros of FoundationDB
- ACID transactions (6)
- Linear scalability (5)
- Multi-model database (3)
- Key-value store (3)
- Great foundation (3)
- SQL layer (1)
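To illustrate the cross-key ACID transactions listed above: a rough sketch assuming the community `foundationdb` Node.js bindings (exact API details vary by version, so treat this as a sketch rather than a definitive implementation):

```typescript
import * as fdb from "foundationdb"; // community Node.js bindings (assumed API)

fdb.setAPIVersion(620);
const db = fdb.open(); // uses the default cluster file

async function main(): Promise<void> {
  // Everything inside the callback commits atomically, even across keys
  // that live on different servers; on conflict the callback is retried.
  await db.doTransaction(async (tn) => {
    tn.set("balance/alice", "90");
    tn.set("balance/bob", "110");
  });
  const alice = await db.doTransaction(async (tn) => tn.get("balance/alice"));
  console.log(alice?.toString());
}

main().catch(console.error);
```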
related FoundationDB posts
Hasura
Pros of Hasura
- Fast (23)
- Easy GraphQL subscriptions (18)
- Easy setup of relationships and permissions (16)
- Automatically generates your GraphQL schema (15)
- Minimal learning curve (15)
- No back-end code required (13)
- Works with new and existing databases (13)
- Instant production-ready GraphQL (12)
- Great UX (11)
- Low usage of resources (4)
- Simple (4)
Cons of Hasura
- Cumbersome validations (3)
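To show what the auto-generated schema looks like from the client side, here is a minimal sketch that queries Hasura's GraphQL endpoint with plain fetch. The `/v1/graphql` path is Hasura's default; the host, admin secret, and `users` table are assumptions:

```typescript
// Query Hasura's auto-generated GraphQL API with plain fetch (Node 18+).
async function fetchUsers(): Promise<unknown> {
  const res = await fetch("http://localhost:8080/v1/graphql", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-hasura-admin-secret": "myadminsecret", // placeholder secret
    },
    body: JSON.stringify({
      // Hasura generated this root field from an assumed `users` table.
      query: "query { users(limit: 10) { id name } }",
    }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data.users;
}

fetchUsers().then(console.log).catch(console.error);
```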
related Hasura posts
Prisma
Pros of Prisma
- Type-safe database access (12)
- Open source (10)
- Auto-generated query builder (8)
- Supports multiple database systems (6)
- Increases confidence during development (6)
- Built specifically for Postgres and TypeScript (4)
- Productive application development (4)
- Supports multiple RDBMSs (2)
- Robust migrations system (2)
Cons of Prisma
- Doesn't support downward/back migrations (2)
- Doesn't support JSONB (1)
- Mutation of JSON is really confusing (1)
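To illustrate the auto-generated query builder listed above, a minimal sketch assuming a schema.prisma with `User` and `Post` models (the field names are invented):

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function main(): Promise<void> {
  // The `user` property and its argument types are generated from the
  // (assumed) User model, so queries are checked at compile time.
  const users = await prisma.user.findMany({
    where: { email: { endsWith: "@example.com" } },
    include: { posts: true }, // assumed User -> Post relation
  });
  console.log(users.length);
}

main().finally(() => prisma.$disconnect());
```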
related Prisma posts
I just finished a web app meant for a business that offers training programs for certain professional courses. I chose this stack to test out my skills in GraphQL and React. I used Node.js, GraphQL, and MySQL for the #Backend, utilizing Prisma as a database interface for MySQL to provide CRUD APIs, and graphql-yoga as a server. For the #frontend I chose React, styled-components for styling, Next.js for routing and SSR, and Apollo for data management. I really liked the outcome and I will definitely use this stack in future projects.
We are starting to build one shirt's data logic and structure, and as an online clothing store we believe good UX and UI are key to driving a lot of click-throughs. The problem is: how do we fetch data, and how do we bridge the gap between the front-end devs and back-end devs, as we are just two in the technical unit? We decided to go for GraphQL as our application-layer tool and Prisma as our database-layer abstraction.
Reasons:
GraphQL:
GraphQL makes fetching data less painful and more organised.
GraphQL gives you 100% assurance on the data you're getting back, as opposed to the REST design.
GraphQL comes with a bunch of real-time functionality in the form of subscriptions. And finally, because we are using React (GraphQL is not tied to React; it doesn't require a specific framework, language, or tool, but it definitely makes React apps fly).
Prisma:
Writing resolvers can be fun, but imagine writing resolvers nested deep down, curly braces flying around. That is a sure welcome note to bugs, and as a small team we need to focus on what matters more. Prisma generates the necessary CRUD resolvers, mutations, and subscriptions out of the box.
We don't really have much budget at the moment, so we are going to run our logic in a scalable, cheap, and cost-effective cloud environment. Oh! It's AWS Lambda, and deploying our schema to Lambda is our best bet to minimize cost and scale at the same time.
We are still at the development stage, and I believe working on this startup will increase my dev knowledge. Off for lunch :)
MySQL
Pros of MySQL
- SQL (800)
- Free (679)
- Easy (562)
- Widely used (528)
- Open source (490)
- High availability (180)
- Cross-platform support (160)
- Great community (104)
- Secure (79)
- Full-text indexing and searching (75)
- Fast, open, available (26)
- Reliable (16)
- SSL support (16)
- Robust (15)
- Enterprise version (9)
- Easy to set up on all platforms (7)
- NoSQL access to JSON data type (3)
- Relational database (1)
- Easy, light, scalable (1)
- Sequel Pro (best SQL GUI) (1)
- Replica support (1)
Cons of MySQL
- Owned by a company with their own agenda (16)
- Can't roll back schema changes (3)
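A minimal sketch of talking to MySQL from Node.js with the `mysql2` driver; the credentials and the `users` table are placeholders. Prepared statements via `execute` bind the `?` placeholders safely:

```typescript
import mysql from "mysql2/promise";

async function main(): Promise<void> {
  // Placeholder credentials; a production setup would use a pool and secrets.
  const conn = await mysql.createConnection({
    host: "localhost",
    user: "app",
    password: "secret",
    database: "shop",
  });

  // `execute` uses a prepared statement, binding parameters server-side.
  const [rows] = await conn.execute(
    "SELECT id, name FROM users WHERE created_at > ?",
    ["2024-01-01"],
  );
  console.log(rows);

  await conn.end();
}

main().catch(console.error);
```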
related MySQL posts
When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) Stack and the front end framework, in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company when that was called MIT FIVE when it had launched. And been observing it from the outside, and I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?
So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.
React is now talking to GraphQL as a primary API. There's a Node.js back end, to the front end, which is mainly for server-side rendering, as well.
Behind there, the main repository for the GraphQL server is a big table repository, that we call Bodega because it's a convenience store. And that reads off of a Kafka pipeline.
We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans which required a lot of manual query tweaks.
We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).
And then after that, we've just gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us from adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
I can't recommend it highly enough.