What is Amazon DynamoDB?

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. All data items are stored on Solid State Drives (SSDs) and replicated across three Availability Zones for high availability and durability. With DynamoDB, you can offload the administrative burden of operating and scaling a highly available distributed database cluster, while paying a low price for only what you use.
Amazon DynamoDB is a tool in the NoSQL Database as a Service category of a tech stack.
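Day-to-day access goes through the AWS SDKs. Here's a minimal sketch using Python's boto3; the table name and attributes are hypothetical, and it assumes AWS credentials are already configured:

```python
import boto3

# Hypothetical "Prospects" table with partition key "prospect_id".
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Prospects")

# Items are schemaless beyond the key attributes defined on the table.
table.put_item(Item={"prospect_id": "p-123", "city": "Oakland", "score": 87})

# Reads by primary key are DynamoDB's core access pattern.
response = table.get_item(Key={"prospect_id": "p-123"})
print(response.get("Item"))
```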

Who uses Amazon DynamoDB?

Companies
426 companies use Amazon DynamoDB in their tech stacks, including Intuit, Netflix, and AdRoll.

Developers
171 developers use Amazon DynamoDB.

Amazon DynamoDB Integrations

Datadog, Redash, AWS AppSync, Architect, and Cloudcraft are some of the popular tools that integrate with Amazon DynamoDB. Here's a list of all 12 tools that integrate with Amazon DynamoDB.

Why do developers like Amazon DynamoDB?

Here's a list of reasons why companies and developers use Amazon DynamoDB.

Amazon DynamoDB Reviews

Here are some stack decisions, common use cases and reviews by companies and developers who chose Amazon DynamoDB in their tech stack.

Julien DeFrance
Full Stack Engineering Manager at ValiMail (decision made at SmartZip)
Amazon DynamoDB
Ruby
Node.js
AWS Lambda
New Relic
Amazon Elasticsearch Service
Elasticsearch
Superset
Amazon Quicksight
Amazon Redshift
Zapier
Segment
Amazon CloudFront
Memcached
Amazon ElastiCache
Amazon RDS for Aurora
MySQL
Amazon RDS
Amazon S3
Docker
Capistrano
AWS Elastic Beanstalk
Rails API
Rails
Algolia

Back in 2014, I was given an opportunity to re-architect SmartZip Analytics' platform and flagship product: SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who is most likely to list or sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt and I knew things had to change radically. The first enabler was to embrace the cloud and go with AWS, so we would stop reinventing the wheel and build around managed, scalable services.

For the SaaS product, we kept working with Rails, as this was what my team had the most knowledge in. We did, however, break up the monolith and decouple the front-end application from the backend using Rails API, so that from then on we'd have independently scalable microservices.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker so each application would run within its own container, independently of the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched the pre-existing local and network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially, ultimately migrating to Amazon RDS for Aurora / MySQL when it was released. Once again, this is where you want a managed service your cloud provider handles for you.

Future improvements / technology decisions included:

  • Caching: Amazon ElastiCache / Memcached
  • CDN: Amazon CloudFront
  • Systems integration: Segment / Zapier
  • Data warehousing: Amazon Redshift
  • BI: Amazon Quicksight / Superset
  • Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
  • Monitoring: New Relic

As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept learning and innovating while delivering on business value.

One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic. Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
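For flavor, here is a minimal sketch of the kind of Lambda-plus-DynamoDB handler described above, written in Python (Lambda has since added Python and Ruby runtimes). The table and field names are hypothetical, not from SmartZip's actual stack:

```python
import json
import os

import boto3

# Hypothetical table name, supplied through a Lambda environment variable.
TABLE = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "Leads"))

def handler(event, context):
    """API-Gateway-style entry point: persist the posted JSON body to DynamoDB.
    With DynamoDB behind it, the whole call chain stays serverless."""
    item = json.loads(event["body"])
    TABLE.put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"saved": item.get("id")})}
```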

Dmitry Mukhin
at Uploadcare
PostgreSQL
Amazon DynamoDB
Amazon S3
Redis
Python
Google App Engine

Uploadcare has built an infinitely scalable infrastructure by leveraging AWS. Building on top of AWS allows us to process 350M daily requests for file uploads, manipulations, and deliveries. When we started in 2011, the only cloud alternative to AWS was Google App Engine, which was a no-go for the rather complex solution we wanted to build. We also didn't want to buy any hardware or use co-locations.

Our stack handles receiving files, communicating with external file sources, managing file storage, managing user and file data, processing files, file caching and delivery, and managing user interface dashboards.

At its core, Uploadcare runs on Python. The EuroPython 2011 conference in Florence really inspired us, and that, coupled with the fact that Python was general enough to solve all of our challenges, informed this decision. Additionally, we had prior experience working in Python.

We chose to build the main application with Django because of its feature completeness and large footprint within the Python ecosystem.

All the communications within our ecosystem occur via several HTTP APIs, Redis, Amazon S3, and Amazon DynamoDB. We decided on this architecture so that our system could be scalable in terms of storage and database throughput. This way we only need Django running on top of our database cluster. We use PostgreSQL as our database because it is considered an industry standard when it comes to clustering and scaling.

Tim Specht
Co-Founder and CTO at Dubsmash
Amazon RDS for Aurora
Redis
Amazon DynamoDB
Amazon RDS
Heroku
PostgreSQL
#PlatformAsAService
#Databases
#NosqlDatabaseAsAService
#SqlDatabaseAsAService

Over the years we have added a wide variety of different storages to our stack, including PostgreSQL (some hosted by Heroku, some on Amazon RDS) for storing relational data, Amazon DynamoDB to store non-relational data like recommendations & user connections, and Redis to hold pre-aggregated data to speed up API endpoints.

Since we started running Postgres ourselves on RDS instead of only using the managed offerings of Heroku, we've gained additional flexibility in scaling our application while reducing costs at the same time.

We are also heavily testing Amazon RDS for Aurora in its Postgres-compatible version and will also give the new release of Aurora Serverless a try!


Tim Nolet
Founder, Engineer & Dishwasher at Checkly
Amazon DynamoDB
MongoDB
Node.js
Heroku
PostgreSQL

When I started building Checkly, one of the first things on the agenda was how to actually structure our SaaS database model: think accounts, users, subscriptions, etc. Weirdly, there is not a lot of information on this in the "blogosphere" (cringe...). After research and some false starts with MongoDB and Amazon DynamoDB, we ended up with PostgreSQL and a schema consisting of just four tables that form the backbone of all the generic "Saasy" stuff almost any B2B SaaS bumps into.

In a nutshell:

  • We use Postgres on Heroku.
  • We use a "one database, one schema" approach for partitioning customer data.
  • We use an accounts, memberships and users table to create a many-to-many relation between users and accounts.
  • We completely decouple prices, payments and the exact ingredients for a customer's plan.

All the details including a database schema diagram are in the linked blog post.
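For illustration, here is a rough sketch of what that four-table backbone could look like; the table and column names are guesses, not Checkly's actual schema (the linked post has the real one):

```python
import psycopg2

# A sketch of the four-table shape described above: accounts, users, a
# memberships join table (the many-to-many relation), and a plans table
# that keeps pricing decoupled from what a customer actually gets.
DDL = """
CREATE TABLE accounts (
    id   serial PRIMARY KEY,
    name text NOT NULL
);
CREATE TABLE users (
    id    serial PRIMARY KEY,
    email text UNIQUE NOT NULL
);
CREATE TABLE memberships (
    account_id integer REFERENCES accounts (id),
    user_id    integer REFERENCES users (id),
    role       text NOT NULL DEFAULT 'member',
    PRIMARY KEY (account_id, user_id)
);
CREATE TABLE plans (
    id          serial PRIMARY KEY,
    account_id  integer REFERENCES accounts (id),
    price_cents integer NOT NULL,
    features    jsonb NOT NULL DEFAULT '{}'
);
"""

with psycopg2.connect("dbname=saas") as conn:  # hypothetical DSN
    with conn.cursor() as cur:
        cur.execute(DDL)
```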

Chris McFadden
VP, Engineering at SparkPost
Amazon Elasticsearch Service
Node.js
Amazon CloudSearch
Amazon ElastiCache
Amazon DynamoDB

We send over 20 billion emails a month on behalf of our customers. As a result, we manage hundreds of millions of "suppression" records that track when an email address is invalid, as well as when a user unsubscribes or flags an email as spam. This way we can help ensure our customers are only sending email that their recipients want, which boosts overall delivery rates and engagement. We need to support two primary use cases: (1) fast and reliable real-time lookup against the list when sending email, and (2) letting customers search, edit, and bulk upload/download their list via API and in the UI. A single enterprise customer's list can be well over 100 million records.

This data started small but has grown steadily over the years, and we tried multiple approaches that didn't scale very well. In the recent past we used Amazon DynamoDB as the system of record, a cache in Amazon ElastiCache (Redis) for the fast lookups, and Amazon CloudSearch for the search function. This architecture was overly complicated and expensive. We were able to eliminate the use of Redis, replacing it with direct lookups against DynamoDB, fronted by a stripped-down Node.js API that performs consistently around 10ms. The new dynamic bursting of DynamoDB has helped ensure reliable and consistent performance for real-time lookups.

We also moved off the clunky and expensive CloudSearch to Amazon Elasticsearch Service for the search functionality. Beyond the high price tag, CloudSearch also had severe limits streaming updates from DynamoDB, which forced us to batch them, adding extra complexity and CX challenges. We love the fact that DynamoDB can stream directly to Elasticsearch and believe using these two technologies together will handle our scaling needs in an economical way for the foreseeable future.
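As a minimal sketch of what that direct-lookup path might look like with boto3 (table and key names are assumptions, not SparkPost's actual schema):

```python
import boto3

# Hypothetical table keyed on the suppressed recipient address.
suppressions = boto3.resource("dynamodb").Table("suppression-list")

def is_suppressed(address: str) -> bool:
    """Real-time check before sending: a single GetItem by primary key,
    the access pattern that stays consistently in the ~10ms range."""
    response = suppressions.get_item(
        Key={"recipient": address.lower()},
        ConsistentRead=False,  # eventually consistent reads are cheaper and faster
    )
    return "Item" in response
```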

Glenn 'devalias' Grant
Hack. Dev. Transcend.
GitLab
Git
WebStorm
Amazon DynamoDB
AWS CloudFormation
AWS Lambda
Go
Bootstrap
redux-saga
Redux
React
#JetBrains
#Serverless

Working on a project recently, I wanted an easy, modern frontend to work with, decoupled from our backend. To get things going quickly, I decided to go with React, Redux.js, redux-saga, and Bootstrap.

On the backend side, Go is a personal favourite, and I wanted to minimize server overhead, so I went with a #serverless architecture leveraging AWS Lambda, AWS CloudFormation, Amazon DynamoDB, etc.

For IDE/tooling I tend to stick to the #JetBrains tools: WebStorm / GoLand.

Obviously we're using Git, with GitLab private repos for managing code, issues, etc.


Amazon DynamoDB's features

  • Automated Storage Scaling – There is no limit to the amount of data you can store in a DynamoDB table, and the service automatically allocates more storage as you store more data using the DynamoDB write APIs.
  • Provisioned Throughput – When creating a table, simply specify how much request capacity you require. DynamoDB allocates dedicated resources to your table to meet your performance requirements, and automatically partitions data over a sufficient number of servers to meet your request capacity. If your throughput requirements change, simply update your table's request capacity using the AWS Management Console or the Amazon DynamoDB APIs; you can still achieve your prior throughput levels while scaling is underway (see the sketch after this list).
  • Fully Distributed, Shared Nothing Architecture – Amazon DynamoDB scales horizontally and can seamlessly scale a single table over hundreds of servers.
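As a concrete illustration of the provisioned-throughput model, here is a boto3 sketch; the table definition and capacity numbers are placeholders, not recommendations:

```python
import boto3

client = boto3.client("dynamodb")

# Placeholder table: a single string partition key plus explicit capacity.
client.create_table(
    TableName="Prospects",
    AttributeDefinitions=[{"AttributeName": "prospect_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "prospect_id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 5},
)

# When throughput requirements change, update capacity in place; the table
# keeps serving requests while DynamoDB re-partitions behind the scenes.
client.update_table(
    TableName="Prospects",
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 25},
)
```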

Amazon DynamoDB Alternatives & Comparisons

What are some alternatives to Amazon DynamoDB?
Google Cloud Datastore
Use a managed, NoSQL, schemaless database for storing non-relational data. Cloud Datastore automatically scales as you need it and supports transactions as well as robust, SQL-like queries.
MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
Amazon SimpleDB
Developers simply store and query data items via web services requests and Amazon SimpleDB does the rest. Behind the scenes, Amazon SimpleDB creates and manages multiple geographically distributed replicas of your data automatically to enable high availability and data durability. Amazon SimpleDB provides a simple web services interface to create and store multiple data sets, query your data easily, and return the results. Your data is automatically indexed, making it easy to quickly find the information that you need. There is no need to pre-define a schema or change a schema if new data is added later. And scale-out is as simple as creating new domains, rather than building out new servers.
Amazon S3
Amazon Simple Storage Service provides a fully redundant data storage infrastructure for storing and retrieving any amount of data, at any time, from anywhere on the web.
MySQL
The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.
