
Alternatives to Amazon DynamoDB

Google Cloud Datastore, MongoDB, Amazon SimpleDB, MySQL, and Amazon S3 are the most popular alternatives and competitors to Amazon DynamoDB.

What is Amazon DynamoDB and what are its top alternatives?

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. With it, you can offload the administrative burden of operating and scaling a highly available distributed database cluster, while paying a low price for only what you use.
Amazon DynamoDB is a tool in the NoSQL Database as a Service category of a tech stack.

Amazon DynamoDB alternatives & related posts


MongoDB

The database for giant ideas

related MongoDB posts

Jeyabalaji Subramanian
CTO at FundsCorner · 24 upvotes · 533.6K views
MongoDB · PostgreSQL · MongoDB Stitch · Node.js · Amazon SQS · Python · SQLAlchemy · AWS Lambda · Zappa

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:
- The data replication must be near real-time, yet it should NOT impact the production database
- The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

Based on the above criteria, we selected the following tools to perform the end to end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using stitch triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next, we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload, and mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as an event-driven & horizontally scalable Lambda service is dumb-easy.
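
To make the shape of that micro-service concrete, here is a minimal sketch of an SQS consumer that mirrors change events into a warehouse table via SQLAlchemy. This is not FundsCorner's code: the queue URL, the change-event payload format, and the customers_mirror table are assumptions made purely for illustration.

```python
# Hedged sketch of the SQS -> data warehouse mirroring step described above.
# The queue URL, event shape, and "customers_mirror" table are hypothetical.
import json

import boto3
from sqlalchemy import JSON, Column, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class CustomerMirror(Base):
    """Hypothetical mirrored table modelled through SQLAlchemy."""
    __tablename__ = "customers_mirror"
    doc_id = Column(String, primary_key=True)   # MongoDB _id as text
    payload = Column(JSON)                      # full document payload

engine = create_engine("postgresql+psycopg2://user:pass@warehouse/db")
Base.metadata.create_all(engine)

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/mongo-changes"  # placeholder

def handler(event=None, context=None):
    """Poll SQS for change events and mirror them into the warehouse."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5)
    for msg in resp.get("Messages", []):
        change = json.loads(msg["Body"])  # e.g. {"op": "insert", "_id": "...", "doc": {...}}
        with Session(engine) as session:
            row = session.get(CustomerMirror, change["_id"])
            if change["op"] in ("insert", "update", "replace"):
                if row is None:
                    row = CustomerMirror(doc_id=change["_id"])
                    session.add(row)
                row.payload = change["doc"]
            elif change["op"] == "delete" and row is not None:
                session.delete(row)
            session.commit()
        # only acknowledge the message once the warehouse write is committed
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

With Zappa, a function like the handler above could be invoked on a schedule or wired to an event source, which is what makes the event-driven, horizontally scalable deployment described in the post so straightforward.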

In the end, we got to implement a highly scalable, near real-time Change Data Replication service that "works" and was deployed to production in a matter of a few days!

Robert Zuber
CTO at CircleCI · 22 upvotes · 385.5K views
MongoDB · PostgreSQL · Redis · GitHub · Amazon S3

We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
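
As a concrete illustration of the rate-limiting pattern mentioned above, here is a minimal fixed-window limiter built on Redis. It is only a sketch: the key naming, the per-partner limit, and the window size are assumptions, not CircleCI's actual implementation.

```python
# Hedged sketch of fixed-window rate limiting with Redis.
# Key format and the 5,000-requests-per-hour limit are illustrative assumptions.
import time

import redis

r = redis.Redis(host="localhost", port=6379)

def allow_request(partner: str, limit: int = 5000, window_seconds: int = 3600) -> bool:
    """Return True if another call to this partner's API is allowed in the current window."""
    window = int(time.time() // window_seconds)
    key = f"ratelimit:{partner}:{window}"
    count = r.incr(key)                      # atomically count this request
    if count == 1:
        r.expire(key, window_seconds)        # let the counter expire with the window
    return count <= limit

if allow_request("github"):
    pass  # safe to call the partner API
```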

When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
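
The post doesn't say how those side effects are handled, but one common way to cope with eventual consistency (shown here purely as a hedged sketch, not CircleCI's code) is to poll for a freshly written object until it becomes visible; the bucket, key, and retry counts below are placeholders.

```python
# Hedged sketch: wait for a freshly written S3 object to become readable.
# Bucket and key are placeholders; retry counts are arbitrary.
import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def wait_until_visible(bucket: str, key: str, attempts: int = 10, delay: float = 0.5) -> bool:
    """Poll HEAD Object until the write is visible or we give up."""
    for _ in range(attempts):
        try:
            s3.head_object(Bucket=bucket, Key=key)
            return True
        except ClientError as err:
            if err.response["Error"]["Code"] not in ("404", "NoSuchKey"):
                raise                     # a real error, not just "not there yet"
            time.sleep(delay)
    return False
```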


Amazon SimpleDB

Highly available and flexible non-relational data store

    MySQL

    The world's most popular open source database

    related MySQL posts

    Tim Abbott
    Founder at Zulip · 21 upvotes · 169K views
    PostgreSQL · MySQL · Elasticsearch

    We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans that required a lot of manual query tweaks.

    We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

    And then after that, we've just gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us from adding a lot of unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
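
To make the partial-index idea concrete, here is a minimal sketch of what such an index could look like. The user_message table and its columns are invented for illustration and are not Zulip's actual schema.

```python
# Hedged sketch of a partial index for an "unread messages" query path.
# Table and column names are hypothetical, not Zulip's real schema.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")
with conn, conn.cursor() as cur:
    # Index only the rows the hot query actually touches (unread flags),
    # keeping the index small and the lookup fast.
    cur.execute(
        """
        CREATE INDEX IF NOT EXISTS user_message_unread_idx
            ON user_message (user_id, message_id)
            WHERE NOT read
        """
    )
```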

    I can't recommend it highly enough.

    Julien DeFrance
    Principal Software Engineer at Tophatter · 16 upvotes · 839.2K views · at SmartZip
    Rails · Rails API · AWS Elastic Beanstalk · Capistrano · Docker · Amazon S3 · Amazon RDS · MySQL · Amazon RDS for Aurora · Amazon ElastiCache · Memcached · Amazon CloudFront · Segment · Zapier · Amazon Redshift · Amazon Quicksight · Superset · Elasticsearch · Amazon Elasticsearch Service · New Relic · AWS Lambda · Node.js · Ruby · Amazon DynamoDB · Algolia

    Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

    I had inherited years and years of technical debt and I knew things had to change radically. The first enabler to this was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel, and build around managed/scalable services.

    For the SaaS product, we kept on working with Rails as this was what my team had the most knowledge in. However, we broke up the monolith and decoupled the front-end application from the backend thanks to Rails API, so we'd get independently scalable micro-services from then on.

    Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker so each application would run within its own container, independently of the underlying host configuration.

    Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially, ultimately migrated to Amazon RDS for Aurora / MySQL when it got released. Once again, the idea is to use a managed service your cloud provider handles for you.

    Future improvements / technology decisions included:

    Caching: Amazon ElastiCache / Memcached
    CDN: Amazon CloudFront
    Systems Integration: Segment / Zapier
    Data-warehousing: Amazon Redshift
    BI: Amazon Quicksight / Superset
    Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
    Monitoring: New Relic

    As our usage grew, patterns changed, and our business needs evolved, my role as Engineering Manager and then Director of Engineering was also to ensure my team kept on learning and innovating while delivering on business value.

    One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it was a great way to handle cost efficiency, unpredictable traffic, and sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.
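
For a sense of what that DynamoDB usage might look like from a Lambda-style handler, here is a minimal hedged sketch using boto3. The prospects table, its key schema, and the event shape are assumptions for illustration, not SmartZip's code (their functions were written in Node.js at the time).

```python
# Hedged sketch: a Lambda-style handler reading and writing a DynamoDB table.
# The "prospects" table, its key schema, and the event shape are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("prospects")  # placeholder table name

def handler(event, context):
    """Upsert a prospect record and return the stored item."""
    prospect_id = event["prospect_id"]
    table.put_item(
        Item={
            "prospect_id": prospect_id,          # partition key (assumed)
            "territory": event.get("territory"),
            "score": event.get("score", 0),
        }
    )
    saved = table.get_item(Key={"prospect_id": prospect_id})
    return saved.get("Item")
```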


    Amazon S3

    Store and retrieve any amount of data, at any time, from anywhere on the web

    related Amazon S3 posts

    Ashish Singh
    Tech Lead, Big Data Platform at Pinterest · 26 upvotes · 84.6K views
    Apache Hive · Kubernetes · Kafka · Amazon S3 · Amazon EC2 · Presto
    #DataScience #DataEngineering #AWS #BigData

    To meet our employees' critical need for interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, auto-scaling clusters, graceful cluster shutdown, and impersonation support for the LDAP authenticator.

    Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.

    We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances. Together, the Presto clusters have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ total Pinterest employees) using Presto, who run about 400K queries on these clusters per month.

    Each query submitted to Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query submitted events without corresponding query finished events. These events enable us to capture the effect of cluster crashes over time.
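
As a rough illustration of that query-event logging (this is not Singer itself, whose internals aren't described in the post), the sketch below publishes "submitted" and "finished" events to a Kafka topic; the topic name and event fields are assumptions.

```python
# Hedged sketch: emit query lifecycle events to a Kafka topic.
# This is not Singer; topic name and event schema are illustrative assumptions.
import json
import time

from kafka import KafkaProducer  # kafka-python

producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def log_query_event(query_id: str, state: str) -> None:
    """Publish a query 'submitted' or 'finished' event."""
    producer.send(
        "presto_query_events",               # placeholder topic
        {"query_id": query_id, "state": state, "ts": time.time()},
    )

log_query_event("20240101_000000_00001_abcde", "submitted")
# ... query runs; a cluster crash here would leave no matching "finished" event ...
log_query_event("20240101_000000_00001_abcde", "finished")
producer.flush()
```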

    Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on the Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.

    #BigData #AWS #DataScience #DataEngineering

    John-Daniel Trask
    Co-founder & CEO at Raygun · 19 upvotes · 104.3K views
    Amazon S3 · Amazon RDS · nginx · Amazon EC2 · AWS Elastic Load Balancing (ELB)
    #CloudHosting #WebServers #CloudStorage #LoadBalancerReverseProxy

    We chose AWS because, at the time, it was really the only cloud provider to choose from.

    We tend to use their basic building blocks (EC2, ELB, Amazon S3, Amazon RDS) rather than vendor specific components like databases and queuing. We deliberately decided to do this to ensure we could provide multi-cloud support or potentially move to another cloud provider if the offering was better for our customers.

    We’ve utilized c3.large nodes for both the Node.js deployment and the .NET Core deployment. Both sit as backends behind an nginx instance and are managed using scaling groups in Amazon EC2, sitting behind a standard AWS Elastic Load Balancer (ELB).

    While we’re satisfied with AWS, we do review our decision each year and have looked at Azure and Google Cloud offerings.

    #CloudHosting #WebServers #CloudStorage #LoadBalancerReverseProxy


    Cassandra

    A partitioned row store. Rows are organized into tables with a required primary key.

    related Cassandra posts

    Thierry Schellenbach
    CEO at Stream · 17 upvotes · 168K views
    Redis · Cassandra · RocksDB
    #InMemoryDatabases #DataStores #Databases

    Stream 1.0 leveraged Cassandra for storing the feed. Cassandra is a common choice for building feeds. Instagram, for instance, started out with Redis but eventually switched to Cassandra to handle their rapid usage growth. Cassandra can handle write-heavy workloads very efficiently.

    Cassandra is a great tool that allows you to scale write capacity simply by adding more nodes, though it is also very complex. This complexity made it hard to diagnose performance fluctuations. Even though we had years of experience with running Cassandra, it still felt like a bit of a black box. When building Stream 2.0 we decided to go for a different approach and build Keevo. Keevo is our in-house key-value store built upon RocksDB, gRPC and Raft.

    RocksDB is a highly performant embeddable database library developed and maintained by Facebook’s data engineering team. RocksDB started as a fork of Google’s LevelDB that introduced several performance improvements for SSDs. Nowadays RocksDB is a project on its own and is under active development. It is written in C++ and it’s fast. Have a look at how this benchmark handles 7 million QPS. In terms of technology, it’s much simpler than Cassandra.
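
To show why an embedded store is operationally simpler than a distributed database, here is a tiny sketch using the python-rocksdb bindings. This is an illustration only; Keevo itself is built in-house on RocksDB, gRPC and Raft, and its API is not public, so the key layout below is an assumption.

```python
# Hedged sketch: RocksDB as an embedded key-value store, via python-rocksdb.
# No cluster to operate; the database is just files on local disk.
import rocksdb

opts = rocksdb.Options(create_if_missing=True)
db = rocksdb.DB("feed.db", opts)           # path is a placeholder

# Store a feed activity under a composite key (illustrative layout).
db.put(b"feed:user:42:activity:1001", b'{"verb": "post", "object": "photo:7"}')

# Point lookups and deletes are plain method calls, no query planner involved.
value = db.get(b"feed:user:42:activity:1001")
db.delete(b"feed:user:42:activity:1001")
```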

    This translates into reduced maintenance overhead, improved performance and, most importantly, more consistent performance. It’s interesting to note that LinkedIn also uses RocksDB for their feed.

    #InMemoryDatabases #DataStores #Databases

    React · AngularJS · jQuery
    Laravel · Zend Framework
    MySQL · MongoDB · Cassandra
    Docker
    Linux


    Amazon Redshift

    Fast, fully managed, petabyte-scale data warehouse service

    related Amazon Redshift posts

    Ankit Sobti
    CTO at Postman Inc · 11 upvotes · 104.7K views
    Looker · Stitch · Amazon Redshift · dbt


    We recently moved our Data Analytics and Business Intelligence tooling to Looker. It's already helping us create a solid process for reusable SQL-based data modeling, with consistent definitions across the entire organization. Looker allows us to collaboratively build these version-controlled models and push the limits of what we've traditionally been able to accomplish with analytics as a lean team.

    For Data Engineering, we're in the process of moving from maintaining our own ETL pipelines on AWS to a managed ELT system on Stitch. We're also evaluating the command-line tool dbt to manage data transformations. Our hope is that Stitch + dbt will streamline the ELT bit, allowing us to focus our energies on analyzing data rather than managing it.


    related Cloud Firestore posts

    fontumi
    Firebase · Node.js · FeathersJS · Vue.js · Google Compute Engine · Dialogflow · Cloud Firestore · Git · GitHub · Visual Studio Code

    Fontumi focuses on the development of telecommunications solutions. We have opted for technologies that allow agile development and great scalability.

    Firebase and Node.js + FeathersJS are technologies that we have used on the server side. Vue.js is our main framework for clients.

    Our latest product launches have focused on the integration of AI systems for enriched conversations. Google Compute Engine, along with Dialogflow and Cloud Firestore, has been an important tool for this work.

    Git + GitHub + Visual Studio Code is a killer stack.

    Pran B.
    Fullstack Developer at Growbox · 6 upvotes · 39.5K views
    Flutter · Cloud Firestore · SQLite

    Goal/Problem: A small mobile app (using Flutter) that keeps some data offline, while the rest needs to be synced with Cloud Firestore.
    Tools: Cloud Firestore, SQLite.
    Decision/Considering/Need suggestions: There is no state management in the app yet. There is a requirement to store some data offline so it is easily available when the phone is offline, and some data needs to be stored in the cloud. I am considering using sqflite for phone storage and Firestore to sync and manage the online database. I am using Flutter to build the app, and I couldn't find a reliable way to use the Firestore cache for reading data when the phone is offline, so I came up with the above solution. Please suggest whether this is a good approach.


    Google Cloud Bigtable

    The same database that powers Google Search, Gmail and Analytics

    Cloudant

    Distributed database-as-a-service (DBaaS) for web & mobile apps.

    related Cloudant posts

    Josh Dzielak
    Developer Advocate at DeveloperMode · 5 upvotes · 78.9K views
    Firebase · PouchDB · CouchDB · Cloudant

    As a side project, I was building a note taking app that needed to synchronize between the client and the server so that it would work offline. At first I used Firebase to store the data on the server and wrote my own code to cache Firebase data in local storage and synchronize it. This was brittle and not performant. I figured that someone else must have solved this in a better way so I went looking for a better solution.

    I needed a tool where I could write the data once and it would write to both client and server, and when clients came back online it would automatically catch them up. I also needed conflict resolution. I was thrilled to discover PouchDB and its server-side counterpart CouchDB. Together, they met nearly all of my requirements and were very easy to implement - I was able to remove a ton of custom code and have found the synchronization to be very robust. PouchDB 7 has improved mobile support too, so I can run the app on iOS or Android browsers.

    My CouchDB instance is actually a Cloudant instance running on IBM Bluemix. For my fairly low level of API usage, it's been totally free, and it has a decent GUI for managing users and replications.
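
For readers curious about the server-side counterpart of that sync, here is a hedged sketch that triggers continuous replication through CouchDB's /_replicate endpoint. It is not the author's setup, which relies on PouchDB syncing from the client; the hosts, credentials, and database names below are placeholders.

```python
# Hedged sketch: trigger continuous CouchDB replication via the /_replicate endpoint.
# URLs, credentials, and database names are placeholders; the author's actual
# sync runs client-side in PouchDB.
import requests

COUCH = "http://admin:password@localhost:5984"  # placeholder host/credentials

resp = requests.post(
    f"{COUCH}/_replicate",
    json={
        "source": "notes",                                          # local database
        "target": "https://user:pass@example.cloudant.com/notes",   # hypothetical Cloudant target
        "continuous": True,                                         # keep pushing changes as they happen
        "create_target": True,                                      # create the remote db if missing
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```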


    Firebase Realtime Database

    Store and sync data in real time

      restdb.io

      A plug and play database service for the web and beyond