MongoDB vs. Azure Cosmos DB


What is MongoDB?

MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
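As a minimal illustration of that flexible schema (field names are hypothetical, not from this page), two documents in the same collection can carry entirely different fields:

```python
# Hedged illustration: two documents in the same hypothetical collection
# with different fields -- MongoDB does not enforce a fixed schema
# across documents.
user_a = {"_id": 1, "name": "Ada", "email": "ada@example.com"}
user_b = {"_id": 2, "name": "Lin", "phones": ["+1-555-0100"], "plan": "pro"}

# Both can live side by side in one collection; fields are per-document.
collection = [user_a, user_b]
```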

What is Azure Cosmos DB?

Azure Cosmos DB (formerly Azure DocumentDB) is a fully managed NoSQL database service built for fast and predictable performance, high availability, elastic scaling, global distribution, and ease of development.

What companies use MongoDB?
2,743 companies on StackShare report using MongoDB.
What companies use Azure Cosmos DB?
33 companies on StackShare report using Azure Cosmos DB.
What tools integrate with MongoDB?
45 tools on StackShare integrate with MongoDB.
What tools integrate with Azure Cosmos DB?
16 tools on StackShare integrate with Azure Cosmos DB.

What are some alternatives to MongoDB and Azure Cosmos DB?

  • MySQL - The world's most popular open source database
  • PostgreSQL - A powerful, open source object-relational database system
  • MariaDB - An enhanced, drop-in replacement for MySQL
  • Microsoft SQL Server - A relational database management system developed by Microsoft


Related Stack Decisions
Jeyabalaji Subramanian, CTO at FundsCorner | 23 upvotes · 22,108 views
Tools: AWS Lambda, Amazon SQS, MongoDB Stitch

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool to do this job:

  • The data replication must be near real-time, yet it must NOT impact the production database
  • The data replication must be horizontally scalable (based on the load), asynchronous, and crash-resilient

Based on the above criteria, we selected the following tools to perform the end-to-end data replication:

We chose MongoDB Stitch, the serverless platform from MongoDB, for picking up the changes in the source database. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.
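The events such a trigger forwards follow MongoDB's change-event shape. A hedged Python sketch of one such payload being serialized for a queue message (the database, collection, and field names are hypothetical, not from the original system):

```python
import json

# Hedged sketch of a change event in MongoDB's change-event format
# (operationType, ns, documentKey, updateDescription). The database,
# collection, and field names below are illustrative only.
change_event = {
    "operationType": "update",  # insert / update / delete / replace
    "ns": {"db": "production", "coll": "loans"},
    "documentKey": {"_id": "abc123"},
    "updateDescription": {
        "updatedFields": {"status": "approved"},
        "removedFields": [],
    },
}

# SQS message bodies are plain text, so the event is serialized to JSON.
message_body = json.dumps(change_event)
```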

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload, and mirror the DB changes onto the target data warehouse. We implemented the source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your service as an event-driven, horizontally scalable Lambda is remarkably easy.
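The core of such a replication service can be pictured as a dispatch on the change's operation type. This is a hedged, simplified sketch with an in-memory dict standing in for a warehouse table; the real service used SQLAlchemy against PostgreSQL, and every name here is illustrative rather than taken from the original codebase:

```python
# Hedged sketch: mirror MongoDB change events onto a target store.
# A plain dict keyed by _id stands in for the warehouse table.
warehouse = {}

def apply_change(event):
    """Apply a single change event (insert/update/delete/replace)."""
    op = event["operationType"]
    key = event["documentKey"]["_id"]
    if op in ("insert", "replace"):
        warehouse[key] = dict(event["fullDocument"])
    elif op == "update":
        row = warehouse.setdefault(key, {"_id": key})
        row.update(event["updateDescription"]["updatedFields"])
        for field in event["updateDescription"]["removedFields"]:
            row.pop(field, None)
    elif op == "delete":
        warehouse.pop(key, None)

# Replay two events: an insert followed by an update of the same row.
apply_change({"operationType": "insert",
              "documentKey": {"_id": 1},
              "fullDocument": {"_id": 1, "status": "new"}})
apply_change({"operationType": "update",
              "documentKey": {"_id": 1},
              "updateDescription": {"updatedFields": {"status": "approved"},
                                    "removedFields": []}})
```

In the real pipeline, each SQS message body would be deserialized into such an event before being handed to the dispatcher.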

In the end, we implemented a highly scalable, near-real-time change data replication service that works, and deployed it to production in a matter of days!

Jeyabalaji Subramanian, CTO at FundsCorner | 10 upvotes · 10,539 views
Tools: MongoDB Atlas

The database is at the heart of any technology stack, so it is no wonder we spend a lot of time choosing the right one before we dive deep into product building.

When we were faced with the question of which database to choose, we set the following criteria. The database must:

  • Have a very high transaction throughput. We were willing to compromise on reads, but not on writes.
  • Be flexible, i.e. adaptive enough to take in data variations. Since we are an early-stage start-up, not everything is set in stone.
  • Be fast and easy to work with.
  • Be cloud-native. We did not want to spend any of our time on infrastructure management.

Based on the above, we picked PostgreSQL and MongoDB for evaluation. We tried a few iterations on hardening the data model with PostgreSQL, but realised that we could move much faster by loosely defining the schema (with just a few fundamental principles intact).

Thus we switched to MongoDB. Before diving in, we validated a few core principles, such as: (1) Transaction guarantees. Until 3.6, MongoDB supported transaction guarantees only at the document level. From 4.0 onwards, you can achieve transaction guarantees across multiple documents.

(2) Primary keys & indexing: Like any RDBMS, MongoDB supports unique keys and indexes to ensure data integrity and searchability.

(3) Ability to join data across data sets: MongoDB offers a rich aggregation framework that enables you to filter, group, and join data.

(4) Concurrency handling: MongoDB offers specific operations (such as findOneAndUpdate) which, when coupled with optimistic locking, can be used to achieve concurrency control.
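One way to picture findOneAndUpdate with optimistic locking is a compare-and-swap on a version field. The sketch below simulates it on a plain dict rather than a live collection; the helper name and fields are illustrative, not the actual driver API (which would be a find_one_and_update call filtering on both _id and the expected version):

```python
# Hedged sketch of optimistic locking with a version field,
# simulated on a plain dict instead of a MongoDB collection.
doc = {"_id": 1, "balance": 100, "version": 3}

def find_one_and_update(document, expected_version, updates):
    """Apply updates only if the version still matches, then bump it."""
    if document["version"] != expected_version:
        return None  # a concurrent writer won; caller should re-read and retry
    document.update(updates)
    document["version"] += 1
    return document

# The first writer succeeds; a stale second writer is rejected.
first = find_one_and_update(doc, 3, {"balance": 90})
stale = find_one_and_update(doc, 3, {"balance": 80})
```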

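The aggregation framework mentioned in (3) is typically expressed as a pipeline of stages. A hedged, PyMongo-style example with hypothetical collection and field names, filtering, joining via $lookup, and grouping:

```python
# Hedged sketch of an aggregation pipeline; collection and field
# names ("loans", "customers", etc.) are hypothetical.
pipeline = [
    {"$match": {"status": "approved"}},      # filter documents
    {"$lookup": {                            # join against another collection
        "from": "customers",
        "localField": "customer_id",
        "foreignField": "_id",
        "as": "customer",
    }},
    {"$group": {                             # group and aggregate
        "_id": "$customer_id",
        "total": {"$sum": "$amount"},
    }},
]
# With PyMongo this would run as: db.loans.aggregate(pipeline)
```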
Above all, MongoDB offers a complete no-frills Cloud Database as a service - MongoDB Atlas. This kind of sealed the deal for us.

Looking back, choosing MongoDB with MongoDB Atlas was one of the best decisions we made, and it is serving us well. My only gripe is that there should be a way to scale the Atlas configuration up or down at different times of the day with minimal downtime.

Ajit Parthan, CTO at Shaw Academy | 1 upvote · 2,693 views

Initial storage was traditional MySQL. The pace of changes during a startup mode made it very difficult to have a clean and consistent schema. Large portions ended up as unstructured data stuffed into CLOBs and BLOBs.

Moving to MongoDB definitely made this part much easier.

Accessing data for analysis is a bit of a challenge, especially for people coming from the world of SQL Workbench. But with tools like Exploratory, this is becoming less of a problem.


