What is Amazon DynamoDB?

With Amazon DynamoDB, you can offload the administrative burden of operating and scaling a highly available distributed database cluster, while paying a low price for only what you use.
Amazon DynamoDB is a tool in the NoSQL Database as a Service category of a tech stack.
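
For a concrete sense of the service model, here is a minimal sketch in Python using boto3 (the table, key, and attribute names are hypothetical): reads and writes are plain API calls against a managed table, with no cluster to operate.

```python
import boto3

# Hypothetical table keyed on "user_id"; region and names are illustrative.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Users")

# Writes are API calls against the managed table; items are schemaless
# beyond the key attributes, so new fields need no migration.
table.put_item(Item={"user_id": "u-123", "name": "Ada", "plan": "pro"})

# Reads are key lookups with consistently low latency.
response = table.get_item(Key={"user_id": "u-123"})
print(response.get("Item"))
```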

Who uses Amazon DynamoDB?

Companies
614 companies reportedly use Amazon DynamoDB in their tech stacks, including Netflix, medium.com, and Lyft.

Developers
991 developers on StackShare have stated that they use Amazon DynamoDB.

Amazon DynamoDB Integrations

MySQL, PostgreSQL, SQLite, Datadog, and Amazon RDS for PostgreSQL are some of the popular tools that integrate with Amazon DynamoDB. Here's a list of all 24 tools that integrate with Amazon DynamoDB.

Why do developers like Amazon DynamoDB?

Here’s a list of reasons why companies and developers use Amazon DynamoDB.

Amazon DynamoDB Reviews

Here are some stack decisions, common use cases and reviews by companies and developers who chose Amazon DynamoDB in their tech stack.

Julien DeFrance
Principal Software Engineer at Tophatter · 16 upvotes · 520.7K views
at SmartZip
Rails, Rails API, AWS Elastic Beanstalk, Capistrano, Docker, Amazon S3, Amazon RDS, MySQL, Amazon RDS for Aurora, Amazon ElastiCache, Memcached, Amazon CloudFront, Segment, Zapier, Amazon Redshift, Amazon Quicksight, Superset, Elasticsearch, Amazon Elasticsearch Service, New Relic, AWS Lambda, Node.js, Ruby, Amazon DynamoDB, Algolia

Back in 2014, I was given an opportunity to re-architect SmartZip Analytics' platform and flagship product: SmartTargeting. This is a SaaS product that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's the most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt and I knew things had to change radically. The first enabler was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel and build around managed/scalable services.

For the SaaS product, we kept working with Rails as this was what my team had the most knowledge in. However, we broke up the monolith and decoupled the front-end application from the backend thanks to Rails API, so we'd get independently scalable micro-services from then on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker, so each application would run within its own container, independently of the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially. Ultimately migrated to Amazon RDS for Aurora / MySQL when it got released. Once again, here you need a managed service your cloud provider handles for you.

Future improvements / technology decisions included:

  • Caching: Amazon ElastiCache / Memcached
  • CDN: Amazon CloudFront
  • Systems Integration: Segment / Zapier
  • Data-warehousing: Amazon Redshift
  • BI: Amazon Quicksight / Superset
  • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
  • Monitoring: New Relic

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept learning and innovating while delivering on business value.

One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic. Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
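
As a rough sketch of that fully serverless chain (shown in Python with boto3 for consistency with the other examples on this page; Lambda added Python support after Node.js, and the function, table, and field names here are hypothetical rather than SmartZip's actual code), a Lambda handler can write straight to DynamoDB so every hop scales without provisioned servers:

```python
import json

import boto3

# Hypothetical events table; created outside the handler so the connection
# is reused across warm Lambda invocations.
table = boto3.resource("dynamodb").Table("Events")

def handler(event, context):
    # With an API Gateway proxy integration, the request body arrives
    # as a JSON string on the event.
    body = json.loads(event.get("body") or "{}")
    table.put_item(Item={"event_id": body["id"], "payload": body})
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```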

Dmitry Mukhin
at Uploadcare · 15 upvotes · 70.4K views
Google App Engine, Python, Redis, Amazon S3, Amazon DynamoDB, PostgreSQL

Uploadcare has built an infinitely scalable infrastructure by leveraging AWS. Building on top of AWS allows us to process 350M daily requests for file uploads, manipulations, and deliveries. When we started in 2011, the only cloud alternative to AWS was Google App Engine, which was a no-go for the rather complex solution we wanted to build. We also didn't want to buy any hardware or use co-locations.

Our stack handles receiving files, communicating with external file sources, managing file storage, managing user and file data, processing files, file caching and delivery, and managing user interface dashboards.

At its core, Uploadcare runs on Python. The EuroPython 2011 conference in Florence really inspired us, and that, coupled with the fact that Python was general enough to solve all of our challenges, informed this decision. Additionally, we had prior experience working in Python.

We chose to build the main application with Django because of its feature completeness and large footprint within the Python ecosystem.

All the communications within our ecosystem occur via several HTTP APIs, Redis, Amazon S3, and Amazon DynamoDB. We decided on this architecture so that our system could be scalable in terms of storage and database throughput. This way we only need Django running on top of our database cluster. We use PostgreSQL as our database because it is considered an industry standard when it comes to clustering and scaling.

Tim Specht
Co-Founder and CTO at Dubsmash · 13 upvotes · 64.4K views
at Dubsmash
PostgreSQL, Heroku, Amazon RDS, Amazon DynamoDB, Redis, Amazon RDS for Aurora

Over the years we have added a wide variety of storage systems to our stack, including PostgreSQL (some hosted by Heroku, some by Amazon RDS) for storing relational data, Amazon DynamoDB to store non-relational data like recommendations & user connections, and Redis to hold pre-aggregated data to speed up API endpoints.
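
As a hypothetical illustration of that kind of non-relational data (these table and attribute names are guesses, not Dubsmash's actual schema), a user-connection edge maps naturally onto a DynamoDB item with a composite key:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table keyed on (user_id, connection_id).
connections = boto3.resource("dynamodb").Table("UserConnections")

# One item per connection edge; no join tables or migrations needed.
connections.put_item(Item={
    "user_id": "u-1",
    "connection_id": "u-2",
    "connected_at": "2019-01-15T12:00:00Z",
})

# Fetch all of a user's connections with a single key-condition query.
response = connections.query(KeyConditionExpression=Key("user_id").eq("u-1"))
print(response["Items"])
```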

Since we started running Postgres ourselves on RDS instead of only using the managed offerings of Heroku, we've gained additional flexibility in scaling our application while reducing costs at the same time.

We are also heavily testing Amazon RDS for Aurora in its Postgres-compatible version and will also give the new release of Aurora Serverless a try!

#SqlDatabaseAsAService #NosqlDatabaseAsAService #Databases #PlatformAsAService

Praveen Mooli
Technical Leader at Taylor and Francis · 11 upvotes · 241.7K views
MongoDB Atlas, Java, Spring Boot, Node.js, ExpressJS, Python, Flask, Amazon Kinesis, Amazon Kinesis Firehose, Amazon SNS, Amazon SQS, AWS Lambda, Angular 2, RxJS, GitHub, Travis CI, Terraform, Docker, Serverless, Amazon RDS, Amazon DynamoDB, Amazon S3

We are in the process of building a modern content platform to deliver our content through various channels. We decided to go with a microservices architecture because we wanted scale. The microservice architecture style is an approach to developing an application as a suite of small, independently deployable services built around specific business capabilities. You gain modularity, extensive parallelism, and cost-effective scaling by deploying services across many distributed servers. Microservices' modularity facilitates independent updates/deployments and helps to avoid a single point of failure, which can help prevent large-scale outages. We also decided to use the Event-Driven Architecture pattern, a popular distributed asynchronous architecture pattern used to produce highly scalable applications. An event-driven architecture is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events.

To build our #Backend capabilities we decided to use the following:

1. #Microservices - Java with Spring Boot, Node.js with ExpressJS, and Python with Flask
2. #Eventsourcingframework - Amazon Kinesis, Amazon Kinesis Firehose, Amazon SNS, Amazon SQS, AWS Lambda
3. #Data - Amazon RDS, Amazon DynamoDB, Amazon S3, MongoDB Atlas
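
As a minimal, hedged sketch of the event-sourcing side (the stream name and event shape are assumptions, not the platform's actual configuration), publishing a domain event to Amazon Kinesis with boto3 looks like this:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

# Hypothetical domain event; downstream consumers such as AWS Lambda or
# Kinesis Firehose process the stream asynchronously, keeping the
# microservices decoupled from one another.
event = {"type": "ContentPublished", "content_id": "c-42"}
kinesis.put_record(
    StreamName="content-events",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["content_id"],  # related events land on the same shard
)
```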

To build #Webapps we decided to use Angular 2 with RxJS.

#Devops - GitHub, Travis CI, Terraform, Docker, Serverless

Tim Nolet
Founder, Engineer & Dishwasher at Checkly · 8 upvotes · 64.7K views
at ChecklyHQ
PostgreSQL, Heroku, Node.js, MongoDB, Amazon DynamoDB

When I started building Checkly, one of the first things on the agenda was how to actually structure our SaaS database model: think accounts, users, subscriptions, etc. Weirdly, there is not a lot of information on this in the "blogosphere" (cringe...). After research and some false starts with MongoDB and Amazon DynamoDB, we ended up with PostgreSQL and a schema consisting of just four tables that form the backbone of all the generic "Saasy" stuff almost any B2B SaaS bumps into.

In a nutshell:

  • We use Postgres on Heroku.
  • We use a "one database, one schema" approach for partitioning customer data.
  • We use an accounts, memberships and users table to create a many-to-many relation between users and accounts.
  • We completely decouple prices, payments and the exact ingredients for a customer's plan.

All the details including a database schema diagram are in the linked blog post.
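
As a rough sketch of that four-table backbone (these names are inferred from the description above; the authoritative schema is the one in the linked blog post), the many-to-many core could be modeled like this in SQLAlchemy:

```python
# Hypothetical sketch of the accounts/users/memberships backbone; the real
# schema (including the decoupled plans/payments table) is in the blog post.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Account(Base):
    __tablename__ = "accounts"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)

class Membership(Base):
    # Join table creating the many-to-many relation between users and accounts.
    __tablename__ = "memberships"
    account_id = Column(Integer, ForeignKey("accounts.id"), primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"), primary_key=True)
    role = Column(String, default="member")
```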

Chris McFadden
VP, Engineering at SparkPost · 8 upvotes · 45.3K views
at SparkPost
Amazon DynamoDB, Amazon ElastiCache, Amazon CloudSearch, Node.js, Amazon Elasticsearch Service

We send over 20 billion emails a month on behalf of our customers. As a result, we manage hundreds of millions of "suppression" records that track when an email address is invalid as well as when a user unsubscribes or flags an email as spam. This way we can help ensure our customers are only sending email that their recipients want, which boosts overall delivery rates and engagement. We need to support two primary use cases: (1) fast and reliable real-time lookup against the list when sending email, and (2) allowing customers to search, edit, and bulk upload/download their list via API and in the UI. A single enterprise customer's list can be well over 100 million records.

Over the years, as this data set grew from small to very large, we tried multiple approaches that didn't scale well. In the recent past we used Amazon DynamoDB as the system of record, a cache in Amazon ElastiCache (Redis) for the fast lookups, and Amazon CloudSearch for the search function. This architecture was overly complicated and expensive. We were able to eliminate the use of Redis, replacing it with direct lookups against DynamoDB, fronted by a stripped-down Node.js API that performs consistently around 10ms. The new dynamic bursting of DynamoDB has helped ensure reliable and consistent performance for real-time lookups.

We also moved off the clunky and expensive CloudSearch to Amazon Elasticsearch Service for the search functionality. Beyond the high price tag, CloudSearch also had severe limits streaming updates from DynamoDB, which forced us to batch them, adding extra complexity and CX challenges. We love the fact that DynamoDB can stream directly to Elasticsearch and believe using these two technologies together will handle our scaling needs in an economical way for the foreseeable future.
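
As a hedged sketch of the direct-lookup pattern described above (the table and attribute names are assumptions, not SparkPost's actual schema), a suppression check reduces to a single DynamoDB key read:

```python
import boto3

# Hypothetical suppression table keyed on the recipient address. A single
# key lookup replaces the former Redis caching layer.
suppressions = boto3.resource("dynamodb").Table("suppression-list")

def is_suppressed(recipient: str) -> bool:
    response = suppressions.get_item(Key={"recipient": recipient})
    return "Item" in response
```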


Amazon DynamoDB's Features

  • Automated Storage Scaling – There is no limit to the amount of data you can store in a DynamoDB table, and the service automatically allocates more storage, as you store more data using the DynamoDB write APIs
  • Provisioned Throughput – When creating a table, simply specify how much request capacity you require. DynamoDB allocates dedicated resources to your table to meet your performance requirements, and automatically partitions data over a sufficient number of servers to meet your request capacity (see the sketch after this list)
  • Fully Distributed, Shared Nothing Architecture
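
As a brief sketch of how the first two features surface in the API (the table and key names are hypothetical), creating a table means declaring its key schema and request capacity, while storage scaling needs no configuration at all:

```python
import boto3

client = boto3.client("dynamodb")

# Hypothetical table: request capacity is declared up front and DynamoDB
# partitions the data across enough servers to honor it, while storage
# itself grows automatically as items are written.
client.create_table(
    TableName="Users",
    AttributeDefinitions=[{"AttributeName": "user_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "user_id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```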

Amazon DynamoDB Alternatives & Comparisons

What are some alternatives to Amazon DynamoDB?
Google Cloud Datastore
Use a managed, NoSQL, schemaless database for storing non-relational data. Cloud Datastore automatically scales as you need it and supports transactions as well as robust, SQL-like queries.
MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
Amazon SimpleDB
Developers simply store and query data items via web services requests and Amazon SimpleDB does the rest. Behind the scenes, Amazon SimpleDB creates and manages multiple geographically distributed replicas of your data automatically to enable high availability and data durability. Amazon SimpleDB provides a simple web services interface to create and store multiple data sets, query your data easily, and return the results. Your data is automatically indexed, making it easy to quickly find the information that you need. There is no need to pre-define a schema or change a schema if new data is added later. And scale-out is as simple as creating new domains, rather than building out new servers.
MySQL
The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.
Amazon S3
Amazon Simple Storage Service provides a fully redundant data storage infrastructure for storing and retrieving any amount of data, at any time, from anywhere on the web.

Amazon DynamoDB's Followers
1000 developers follow Amazon DynamoDB to keep up with related blogs and decisions.