Amazon S3 vs MongoDB

Amazon S3 vs MongoDB: What are the differences?

Developers describe Amazon S3 as "Store and retrieve any amount of data, at any time, from anywhere on the web". Amazon Simple Storage Service provides a fully redundant data storage infrastructure for storing and retrieving any amount of data, at any time, from anywhere on the web. On the other hand, MongoDB is detailed as "The database for giant ideas". MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.

Amazon S3 can be classified as a tool in the "Cloud Storage" category, while MongoDB is grouped under "Databases".

"Reliable", "Scalable" and "Cheap" are the key factors why developers consider Amazon S3; whereas "Document-oriented storage", "No sql" and "Ease of use" are the primary reasons why MongoDB is favored.

MongoDB is an open source tool with 16.3K GitHub stars and 4.1K GitHub forks; its open source repository is available on GitHub.

According to the StackShare community, Amazon S3 has broader approval, being mentioned in 3231 company stacks & 1611 developer stacks, compared to MongoDB, which is listed in 2189 company stacks and 2218 developer stacks.

Amazon S3, on the other hand, is a managed service with no public GitHub repository available.

What is Amazon S3?

Amazon Simple Storage Service provides a fully redundant data storage infrastructure for storing and retrieving any amount of data, at any time, from anywhere on the web.
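
For a sense of what that looks like in practice, here is a minimal sketch using the boto3 SDK for Python; the bucket and key names are made up for illustration, and AWS credentials are assumed to be configured in the environment.

    # Minimal sketch: store and retrieve an object with boto3.
    # Bucket and key names are hypothetical.
    import boto3

    s3 = boto3.client("s3")

    s3.put_object(
        Bucket="example-reports-bucket",
        Key="2019/summary.json",
        Body=b'{"status": "ok"}',
    )

    obj = s3.get_object(Bucket="example-reports-bucket", Key="2019/summary.json")
    print(obj["Body"].read())  # b'{"status": "ok"}'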

What is MongoDB?

MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
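
To make the flexible schema concrete, here is a small pymongo sketch; the connection string, database and collection names are illustrative only.

    # Minimal sketch: documents with different shapes live in the same collection.
    from pymongo import MongoClient

    products = MongoClient("mongodb://localhost:27017")["shop"]["products"]

    products.insert_one({"name": "T-shirt", "sizes": ["S", "M", "L"]})
    products.insert_one({"name": "Gift card", "value": 25, "currency": "EUR"})

    # Queries work regardless of which fields each document happens to have.
    print(products.find_one({"name": "Gift card"}))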

What are some alternatives to Amazon S3 and MongoDB?
Amazon Glacier
In order to keep costs low, Amazon Glacier is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable. With Amazon Glacier, customers can reliably store large or small amounts of data for as little as $0.01 per gigabyte per month, a significant savings compared to on-premises solutions.
Amazon EBS
Amazon EBS volumes are network-attached, and persist independently from the life of an instance. Amazon EBS provides highly available, highly reliable, predictable storage volumes that can be attached to a running Amazon EC2 instance and exposed as a device within the instance. Amazon EBS is particularly suited for applications that require a database, file system, or access to raw block level storage.
Amazon EC2
It is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.
Google Drive
Keep photos, stories, designs, drawings, recordings, videos, and more. Your first 15 GB of storage are free with a Google Account. Your files in Drive can be reached from any smartphone, tablet, or computer.
Microsoft Azure
Azure is an open and flexible cloud platform that enables you to quickly build, deploy and manage applications across a global network of Microsoft-managed datacenters. You can build applications using any language, tool or framework. And you can integrate your public cloud applications with your existing IT environment.
Decisions about Amazon S3 and MongoDB
MongoDB

I started using MongoDB because it was much easier to implement in production than hosted SQL, and I found that a lot of the limitations you expect from a document store vs. a relational database were overcome by connecting the application to a GraphQL API, making retrieval seamless. Mongo's latest upgrades, as well as Stitch and Mongo Mobile, make it a perfect fit, especially if your application will be cross-platform web and mobile.

Zach Coffin
Software Developer · 4 upvotes · 8.3K views
PostgreSQL · MongoDB

I started using PostgreSQL because I started a job at a company that was already using it as well as MongoDB. The main difference between the two, from my perspective, is that Postgres columns are a chore to add/remove/modify, whereas you can throw whatever you want into a Mongo collection. And personally I prefer the query language of Postgres over that of Mongo, but they both have their merits. Maybe someday I'll be a DBA and have more insight to share, but for now there's my 2 cents.
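
To make that contrast concrete, here is a hedged sketch (all table, field and connection names are hypothetical): in Postgres a new attribute typically means a schema migration, while in MongoDB you simply start writing the new field.

    # Hypothetical sketch of the difference described above.
    from sqlalchemy import create_engine, text
    from pymongo import MongoClient

    # Postgres: adding an attribute means a schema migration.
    engine = create_engine("postgresql://localhost/appdb")
    with engine.begin() as conn:
        conn.execute(text("ALTER TABLE users ADD COLUMN nickname TEXT"))

    # MongoDB: just start writing the new field on new documents.
    MongoClient()["appdb"]["users"].insert_one(
        {"email": "a@example.com", "nickname": "al"}
    )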

Antonio Sanchez
CEO at Kokoen GmbH · 13 upvotes · 217.2K views
PHP · Laravel · MySQL · Go · MongoDB · JavaScript · Node.js · ExpressJS

Back at the start of 2017, we decided to create a web-based tool for the SEO on-page analysis of our clients' websites. We had over 2,000 websites to analyze, so we had to perform thousands of requests to fetch every single page from those websites, process the information and store large amounts of data somewhere.

Very soon we realized that the initially chosen scripting language and stack, PHP with Laravel and MySQL, was not going to cope efficiently with such a task.

By that time, we were doing some experiments for other projects with a language we had recently gotten to know, Go, so we decided to give it a try and code the crawler in it. It was fantastic: we could process much more data with far less CPU power and in less time. By using the concurrency abilities the language offers, we could also do more HTTP requests in less time.

Unfortunately, I have no comparison numbers to show for the performance difference between Go and PHP, since the difference was so clear from the beginning that we didn't feel the need to run further comparison tests or document it. We just switched fully to Go.

There was still a problem: despite the large amount of data we were generating, MySQL was performing very well, but as we added more and more features, and with those features more and more different types of data to save, it became a nightmare for the database architects to structure everything correctly in the database. So it was clear what we had to do next: switch to a NoSQL database. We switched to MongoDB, and it was also fantastic: we spent almost zero time thinking about how to structure the database, and the performance also seemed better, but again, I have no comparison numbers to show due to lack of time.

We also decided to switch the website from PHP and Laravel to JavaScript with Node.js and ExpressJS, since working with the JSON data we were now saving in the database would be easier.

As of now, we don't only use the tool internally, we also opened it up for everyone to use for free: https://tool-seo.com

Jeyabalaji Subramanian
CTO at FundsCorner · 24 upvotes · 741.1K views
MongoDB · PostgreSQL · MongoDB Stitch · Node.js · Amazon SQS · Python · SQLAlchemy · AWS Lambda · Zappa

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:

  • The data replication must be near real-time, yet it should NOT impact the production database.
  • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient.

Based on the above criteria, we selected the following tools to perform the end-to-end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next, we wrote a minimal microservice in Python to listen to the message events on SQS, pick up the data payload & mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this microservice as an AWS Lambda with Zappa. With Zappa, deploying your services as event-driven & horizontally scalable Lambda services is dead easy.
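
As an illustration only (not FundsCorner's actual code), the Python consumer side of such a pipeline might look roughly like this; the queue URL, target table and message format are assumptions.

    # Hypothetical sketch: poll SQS for MongoDB change events and mirror them
    # into a Postgres table via SQLAlchemy. Names and payload layout are made up.
    import json
    import boto3
    from sqlalchemy import create_engine, text

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/mongo-changes"
    engine = create_engine("postgresql://localhost/warehouse")

    def apply_change(event):
        doc = event["doc"]
        with engine.begin() as conn:
            if event["op"] in ("insert", "update", "replace"):
                conn.execute(
                    text("INSERT INTO loans (id, payload) VALUES (:id, :payload) "
                         "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload"),
                    {"id": doc["_id"], "payload": json.dumps(doc)},
                )
            elif event["op"] == "delete":
                conn.execute(text("DELETE FROM loans WHERE id = :id"), {"id": doc["_id"]})

    def poll_once():
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            apply_change(json.loads(msg["Body"]))  # e.g. {"op": "insert", "doc": {...}}
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

    if __name__ == "__main__":
        poll_once()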

In the end, we got to implement a highly scalable, near real-time Change Data Replication service that "works", and we deployed it to production in a matter of a few days!

Khauth György
CTO at SalesAutopilot Kft. · 12 upvotes · 193.9K views
Amazon CloudWatch · Amazon SNS · Amazon CloudFront · Amazon Route 53 · MySQL · MongoDB · Redis · jQuery UI · Vue.js · Vuetify · vuex · Docker · Jenkins · AWS CodePipeline · GitHub

I'm the CTO of a marketing automation SaaS. Because of the continuously increasing load we moved to the AWS cloud. We are using more and more features of AWS: Amazon CloudWatch, Amazon SNS, Amazon CloudFront, Amazon Route 53 and so on.

Our main database is MySQL, but for the hundreds of GB of document data we use MongoDB more and more. We started to use Redis for caching and other time-sensitive operations.

On the front-end we use jQuery UI + Smarty, but we are now refactoring our app to use Vue.js with Vuetify. Because our app is relatively complex, we need to use vuex as well.

On the development side we use GitHub as our main repo, Docker for local and server environments, and Jenkins and AWS CodePipeline for Continuous Integration.

Jeyabalaji Subramanian
CTO at FundsCorner · 12 upvotes · 32.3K views
PostgreSQL · MongoDB · MongoDB Atlas

The database is at the heart of any technology stack. It is no wonder we spend a lot of time choosing the right database before we dive deep into product building.

When we were faced with the question of which database to choose, we set the following criteria. The database must: (1) have a very high transaction throughput; we wanted to err on the side of "reads" but not on the "writes". (2) Be flexible, i.e. adaptive enough to take in data variations; since we are an early-stage start-up, not everything is set in stone. (3) Be fast & easy to work with. (4) Be cloud native; we did not want to spend our time on "ANY" infrastructure management.

Based on the above, we picked PostgreSQL and MongoDB for evaluation. We tried a few iterations on hardening the data model with PostgreSQL, but realised that we could move much faster by loosely defining the schema (with just a few fundamental principles intact).

Thus we switched to MongoDB. Before diving in, we validated a few core principles, such as:

(1) Transaction guarantee: up to 3.6, MongoDB supports transaction guarantees at the document level. From 4.0 onwards, you can achieve transaction guarantees across multiple documents.

(2) Primary keys & indexing: like any RDBMS, MongoDB supports unique keys & indexes to ensure data integrity & searchability.

(3) Ability to join data across data sets: MongoDB offers a super-rich aggregation framework that enables one to filter and group data.

(4) Concurrency handling: MongoDB offers specific operations (such as findOneAndUpdate) which, when coupled with optimistic locking, can be used to achieve concurrency.
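
For points (2) and (4), a rough pymongo sketch might look like this; the collection, fields and versioning scheme are illustrative, not the author's actual code.

    # Hypothetical sketch: a unique index for integrity, plus findOneAndUpdate
    # guarded by a version field for optimistic locking.
    from pymongo import MongoClient, ASCENDING, ReturnDocument

    orders = MongoClient()["appdb"]["orders"]
    orders.create_index([("order_no", ASCENDING)], unique=True)

    def reserve_stock(order_no, qty):
        current = orders.find_one({"order_no": order_no})
        updated = orders.find_one_and_update(
            {"_id": current["_id"], "version": current["version"]},  # optimistic check
            {"$inc": {"reserved": qty, "version": 1}},
            return_document=ReturnDocument.AFTER,
        )
        return updated is not None  # None means a concurrent writer won; retry if needed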

Above all, MongoDB offers a complete no-frills Cloud Database as a service - MongoDB Atlas. This kind of sealed the deal for us.

Looking back, choosing MongoDB with MongoDB Atlas was one of the best decisions we took and it is serving us well. My only gripe is that there must be a way to scale-up or scale-down the Atlas configuration at different parts of the day with minimal downtime.

Ajit Parthan
CTO at Shaw Academy · 1 upvote · 5.5K views
MySQL · MongoDB · #NosqlDatabaseAsAService

Initial storage was traditional MySQL. The pace of change during startup mode made it very difficult to keep a clean and consistent schema. Large portions ended up as unstructured data stuffed into CLOBs and BLOBs.

Moving to MongoDB definitely made this part much easier.

Accessing data for analysis is a little bit of a challenge - especially for people coming from the world of SQL Workbench. But with tools like Exploratory this is becoming less of a problem.

#NosqlDatabaseAsAService

Tim Nolet
Founder, Engineer & Dishwasher at Checkly · 8 upvotes · 74.6K views
PostgreSQL · Heroku · Node.js · MongoDB · Amazon DynamoDB

When I started building Checkly, one of the first things on the agenda was how to actually structure our SaaS database model: think accounts, users, subscriptions, etc. Weirdly, there is not a lot of information on this in the "blogosphere" (cringe...). After research and some false starts with MongoDB and Amazon DynamoDB, we ended up with PostgreSQL and a schema consisting of just four tables that form the backbone of all the generic "SaaSy" stuff almost any B2B SaaS bumps into.

In a nutshell:

  • We use Postgres on Heroku.
  • We use a "one database, one schema" approach for partitioning customer data.
  • We use an accounts, memberships and users table to create a many-to-many relation between users and accounts.
  • We completely decouple prices, payments and the exact ingredients for a customer's plan.

All the details including a database schema diagram are in the linked blog post.
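
As a rough illustration of that backbone (not Checkly's actual schema; every name and column here is an assumption), the four tables might be modelled with SQLAlchemy along these lines:

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Plan(Base):
        __tablename__ = "plans"
        id = Column(Integer, primary_key=True)
        name = Column(String, nullable=False)
        price_cents = Column(Integer, nullable=False)  # pricing decoupled from accounts

    class Account(Base):
        __tablename__ = "accounts"
        id = Column(Integer, primary_key=True)
        name = Column(String, nullable=False)
        plan_id = Column(Integer, ForeignKey("plans.id"))

    class User(Base):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        email = Column(String, unique=True, nullable=False)

    class Membership(Base):  # many-to-many between users and accounts
        __tablename__ = "memberships"
        id = Column(Integer, primary_key=True)
        account_id = Column(Integer, ForeignKey("accounts.id"), nullable=False)
        user_id = Column(Integer, ForeignKey("users.id"), nullable=False)
        role = Column(String, default="member")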

Łukasz Korecki
CTO & Co-founder at EnjoyHQ · 12 upvotes · 112.2K views
RethinkDB · MongoDB · PostgreSQL

We initially chose RethinkDB because of its schema-less document store features and a better durability/resilience story than MongoDB. In the end, it didn't work out quite as we expected: there are plenty of scalability issues, it's near impossible to run analytical workloads, and the small community makes working with Rethink a challenge. We're in the process of migrating all our workloads to PostgreSQL, and hopefully we will be able to decommission our RethinkDB deployment soon.

Mauro Bennici
CTO at You Are My GUide · 7 upvotes · 26.9K views
PostgreSQL · TimescaleDB · MongoDB

PostgreSQL plus TimescaleDB allows us to concentrate the business effort on how to analyze valuable data instead of managing it on the IT side. We are now able to ingest thousands of social-share records without compromising the scalability of the system or query time. TimescaleDB is transparent to PostgreSQL, so we continue to use the same SQL syntax without any changes. At the same time, because we only need to manage a few document objects, we dismissed the MongoDB cluster.
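
As an illustration of that transparency (table and column names are made up), turning a plain PostgreSQL table into a TimescaleDB hypertable is a single SQL call, after which ordinary SQL keeps working unchanged:

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql://localhost/analytics")
    with engine.begin() as conn:
        conn.execute(text("""
            CREATE TABLE IF NOT EXISTS social_shares (
                time        TIMESTAMPTZ NOT NULL,
                network     TEXT        NOT NULL,
                share_count INTEGER     NOT NULL
            )"""))
        # TimescaleDB's only visible addition: promote the table to a hypertable.
        conn.execute(text(
            "SELECT create_hypertable('social_shares', 'time', if_not_exists => TRUE)"))
        # Ordinary SQL continues to work as before.
        conn.execute(text("INSERT INTO social_shares VALUES (now(), 'twitter', 42)"))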

Robert Zuber
CTO at CircleCI · 22 upvotes · 552.9K views
MongoDB · PostgreSQL · Redis · GitHub · Amazon S3

We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
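
As a sketch of the rate-limiting idea (not CircleCI's actual implementation; key names and limits are hypothetical), a fixed-window limiter on Redis can be as simple as an INCR plus an EXPIRE:

    import time
    import redis

    r = redis.Redis()

    def allow_call(partner, limit_per_minute=60):
        window = int(time.time() // 60)
        key = f"ratelimit:{partner}:{window}"
        count = r.incr(key)        # atomically count this window's calls
        if count == 1:
            r.expire(key, 120)     # let old windows expire on their own
        return count <= limit_per_minute

    if allow_call("github"):
        pass  # safe to call the partner API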

When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
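
One common way to handle such side-effects, sketched here with made-up bucket and key names rather than CircleCI's real code, is to write each immutable blob under a unique key and poll until the object is readable before recording the key anywhere else.

    import time
    import uuid
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BUCKET = "example-build-artifacts"

    def store_artifact(data: bytes) -> str:
        key = f"artifacts/{uuid.uuid4()}.log"   # immutable: written once, never overwritten
        s3.put_object(Bucket=BUCKET, Key=key, Body=data)
        for _ in range(10):                     # wait until the object is readable
            try:
                s3.head_object(Bucket=BUCKET, Key=key)
                return key
            except ClientError:                 # typically a 404 while not yet visible
                time.sleep(0.5)
        raise RuntimeError("object not readable yet; retry later")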

Martin Johannesson
Senior Software Developer at IT Minds · 11 upvotes · 31.7K views
TypeScript · Node.js · TypeORM · PostgreSQL · Apollo · GraphQL · Next.js · MongoDB · React · PWA · AMP · #B2B · #Backend · #Serverless

At IT Minds we create customized internal or #B2B web and mobile apps. I have a go-to stack that I pitch to our customers, consisting of 3 core areas: 1) a data core #backend, 2) a micro #serverless #backend, and 3) a user client #frontend.

For the data core I create a backend using TypeScript and Node.js, with TypeORM connecting to PostgreSQL, exposing an action-based API with Apollo GraphQL.

The micro serverless backend's purpose is verification for authentication, authorization, logins and the like. It is created with Next.js API pages, using MongoDB to store essential information, for caching, etc.

Finally, the frontend is built with React using Next.js, TypeScript and Apollo. We create the frontend as a PWA and have an AMP landing page by default.

C# · .NET Core · MySQL · MongoDB

Hi! I needed to choose a full stack of tools for a web drop-shipping site without the payment process, for a family startup project. It will feed from several web services (JSON); I'm looking at around 4,200 articles tops. It is for web use only and for a few clients at the beginning.

I'm considering C# with .NET Core 3.0, as it is the one language I'm starting to learn. For the database I haven't made up my mind yet, but it could be MySQL or MongoDB; any advice is welcome, as I'm getting back to programming after a year away from this awesome world. Thanks!

Nicolas Apx
CEO - FullStack Javascript at Apx Development Limited · 14 upvotes · 33.3K views
Python · Node.js · MongoDB · PostgreSQL

I am planning on building a microservice eCommerce back-end that is easy to reuse in any project as needed. I would like to use both Python and Node.js, and MongoDB & PostgreSQL. In your opinion, which would be best suited for each of the following services?

  • Users-service
  • Products-service
  • Auth-service
  • Inventory-service
  • Order-service
  • Payment-service
  • Sku-service
  • And more not yet defined....

Thanks

Nicolas

Bryam Rodriguez
JavaScript · Bit · Python · Next.js · React Native · Redis · MongoDB · PostgreSQL · RSpec · react-testing-library · Jest · Create React App · Redux