Memcached vs MongoDB


Memcached vs MongoDB: What are the differences?

What is Memcached? High-performance, distributed memory object caching system. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.

What is MongoDB? The database for giant ideas. MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.

Memcached and MongoDB can be categorized as "Databases" tools.

"Fast object cache", "High-performance" and "Stable" are the key factors why developers consider Memcached; whereas "Document-oriented storage", "No sql" and "Ease of use" are the primary reasons why MongoDB is favored.

Memcached and MongoDB are both open source tools. MongoDB, with 16.3K GitHub stars and 4.1K forks, appears to have broader adoption than Memcached, with 8.99K stars and 2.6K forks.

Uber Technologies, Lyft, and Codecademy are some of the popular companies that use MongoDB, whereas Memcached is used by Facebook, Instagram, and Dropbox. MongoDB has broader approval, being mentioned in 2189 company stacks & 2218 developer stacks, compared to Memcached, which is listed in 755 company stacks and 267 developer stacks.

What is Memcached?

Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.
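As a concrete illustration, the classic cache-aside pattern looks like this (a minimal sketch using the pymemcache client; the key scheme and the fetch_user_from_db helper are hypothetical):

```python
# Minimal cache-aside sketch with pymemcache (pip install pymemcache).
# The key scheme and fetch_user_from_db() are hypothetical placeholders.
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def fetch_user_from_db(user_id):
    # Stand-in for a real database query.
    return {"id": user_id, "name": "Ada"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit: skip the database
    user = fetch_user_from_db(user_id)            # cache miss: query the database...
    cache.set(key, json.dumps(user), expire=300)  # ...and cache the result for 5 minutes
    return user
```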

What is MongoDB?

MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
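For instance, with the PyMongo driver two documents in the same collection can have different shapes, and you can still query on nested fields (a minimal sketch; database, collection and field names are illustrative):

```python
# Minimal PyMongo sketch (pip install pymongo) of MongoDB's flexible schema.
# Database, collection and field names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
products = client.shop.products

# Two documents with different structures in the same collection.
products.insert_one({"name": "T-shirt", "sizes": ["S", "M", "L"]})
products.insert_one({"name": "Phone", "specs": {"ram_gb": 8, "storage_gb": 128}})

# Query on a nested field that only some documents have.
for doc in products.find({"specs.ram_gb": {"$gte": 8}}):
    print(doc["name"])
```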


    What are some alternatives to Memcached and MongoDB?
    Redis
    Redis is an open source, BSD licensed, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets.
    Ehcache
    Ehcache is an open source, standards-based cache for boosting performance, offloading your database, and simplifying scalability. It's the most widely-used Java-based cache because it's robust, proven, and full-featured. Ehcache scales from in-process, with one or more nodes, all the way to mixed in-process/out-of-process configurations with terabyte-sized caches.
    Varnish
Varnish Cache is a web application accelerator, also known as a caching HTTP reverse proxy. You install it in front of any server that speaks HTTP and configure it to cache the contents. Varnish Cache is really, really fast. It typically speeds up delivery by a factor of 300–1000x, depending on your architecture.
    Hazelcast
With its various distributed data structures, distributed caching capabilities, elastic nature, memcache support, integration with Spring and Hibernate and, more importantly, so many happy users, Hazelcast is a feature-rich, enterprise-ready and developer-friendly in-memory data grid solution.
    Couchbase
    Developed as an alternative to traditionally inflexible SQL databases, the Couchbase NoSQL database is built on an open source foundation and architected to help developers solve real-world problems and meet high scalability demands.
    Decisions about Memcached and MongoDB
MongoDB

I started using MongoDB because it was much easier to implement in production than hosted SQL, and found that a lot of the limitations you would expect from a document store vs a relational database were overcome by connecting the application to a GraphQL API, making retrieval seamless. Mongo's latest upgrades, as well as Stitch and Mongo Mobile, make it a perfect fit, especially if your application will be cross-platform for web and mobile.

Jeyabalaji Subramanian
CTO at FundsCorner · 24 upvotes · 282K views
Zappa · AWS Lambda · SQLAlchemy · Python · Amazon SQS · Node.js · MongoDB Stitch · PostgreSQL · MongoDB

    Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:
• The data replication must be near real-time, yet it should NOT impact the production database
• The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

    Based on the above criteria, we selected the following tools to perform the end to end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

    In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next, we wrote a minimal micro-service in Python to listen for the message events on SQS, pick up the data payload & mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as event-driven & horizontally scalable Lambda services is dead easy.
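A stripped-down sketch of what such a listener could look like follows (illustrative only: the queue URL, table shape and change-event format are hypothetical, and the real service ran as a Zappa-deployed Lambda rather than this polling loop):

```python
# Hypothetical sketch of the replication micro-service: poll SQS for change
# events emitted by the Stitch trigger and mirror them into the warehouse.
import json

import boto3
from sqlalchemy import Column, MetaData, String, Table, create_engine
from sqlalchemy.dialects.postgresql import insert

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mongo-changes"  # placeholder
engine = create_engine("postgresql://user:pass@warehouse-host/warehouse")     # placeholder

metadata = MetaData()
orders = Table(
    "orders", metadata,
    Column("id", String, primary_key=True),
    Column("payload", String),
)

sqs = boto3.client("sqs")

def process_batch():
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        event = json.loads(msg["Body"])  # e.g. {"op": "update", "id": ..., "doc": ...}
        with engine.begin() as conn:
            if event["op"] == "delete":
                conn.execute(orders.delete().where(orders.c.id == event["id"]))
            else:  # insert / update / replace all become an upsert
                stmt = insert(orders).values(
                    id=event["id"], payload=json.dumps(event["doc"])
                )
                conn.execute(stmt.on_conflict_do_update(
                    index_elements=["id"],
                    set_={"payload": stmt.excluded.payload},
                ))
        # Only acknowledge the message once the mirror write has committed.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```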

In the end, we got to implement a highly scalable, near real-time Change Data Replication service that "just works", deployed to production in a matter of a few days!

Khauth György
CTO at SalesAutopilot Kft. · 11 upvotes · 97.3K views
AWS CodePipeline · Jenkins · Docker · vuex · Vuetify · Vue.js · jQuery UI · Redis · MongoDB · MySQL · Amazon Route 53 · Amazon CloudFront · Amazon SNS · Amazon CloudWatch · GitHub

I'm the CTO of a marketing automation SaaS. Because of the continuously increasing load, we moved to the AWS cloud. We are using more and more features of AWS: Amazon CloudWatch, Amazon SNS, Amazon CloudFront, Amazon Route 53 and so on.

Our main database is MySQL, but for the hundreds of GB of document data we use MongoDB more and more. We started to use Redis for caching and other time-sensitive operations.

On the front-end we use jQuery UI + Smarty, but we are now refactoring our app to use Vue.js with Vuetify. Because our app is relatively complex, we need to use vuex as well.

On the development side we use GitHub as our main repo, Docker for local and server environments, and Jenkins and AWS CodePipeline for continuous integration.

StackShare Editors
Apache Thrift · Kotlin · Presto · HHVM (HipHop Virtual Machine) · gRPC · Kubernetes · Apache Spark · Airflow · Terraform · Hadoop · Swift · Hack · Memcached · Consul · Chef · Prometheus

    Since the beginning, Cal Henderson has been the CTO of Slack. Earlier this year, he commented on a Quora question summarizing their current stack.

Apps
• Web: a mix of JavaScript/ES6 and React.
• Desktop: Electron to ship it as a desktop application.
• Android: a mix of Java and Kotlin.
• iOS: written in a mix of Objective-C and Swift.
Backend
• The core application and the API are written in PHP/Hack and run on HHVM.
• The data is stored in MySQL using Vitess.
• Caching is done using Memcached and MCRouter.
• The search service is backed by SolrCloud, with various Java services.
• The messaging system uses WebSockets, with many services in Java and Go.
• Load balancing is done using HAProxy, with Consul for configuration.
• Most services talk to each other over gRPC; some use Thrift or JSON-over-HTTP.
• The voice and video calling service was built in Elixir.
Data warehouse
• Built using open source tools including Presto, Spark, Airflow, Hadoop and Kafka.
Jeyabalaji Subramanian
CTO at FundsCorner · 12 upvotes · 21.4K views
MongoDB Atlas · MongoDB · PostgreSQL

    Database is at the heart of any technology stack. It is no wonder we spend a lot of time choosing the right database before we dive deep into product building.

When we were faced with the question of which database to choose, we set the following criteria. The database must:
• (1) Have a very high transaction throughput. We wanted to err on the side of "reads" but not on the "writes".
• (2) Be flexible, i.e. adaptive enough to take in data variations. Since we are an early-stage start-up, not everything is set in stone.
• (3) Be fast & easy to work with.
• (4) Be cloud native. We did not want to spend our time on "ANY" infrastructure management.

Based on the above, we picked PostgreSQL and MongoDB for evaluation. We tried a few iterations on hardening the data model with PostgreSQL, but realised that we could move much faster by loosely defining the schema (with just a few fundamental principles intact).

Thus we switched to MongoDB. Before diving in, we validated a few core principles, such as: (1) Transaction guarantee: up to 3.6, MongoDB guarantees transactions only at the single-document level; from 4.0 onwards, you can achieve transaction guarantees across multiple documents.

(2) Primary keys & indexing: like any RDBMS, MongoDB supports unique keys & indexes to ensure data integrity & searchability.

(3) Ability to join data across data sets: MongoDB offers a super-rich aggregation framework that enables one to filter and group data.

(4) Concurrency handling: MongoDB offers specific operations (such as findOneAndUpdate) which, when coupled with optimistic locking, can be used to achieve concurrency control. (Each of these four checks is sketched in the code example below.)
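As a rough illustration of points (1)–(4) with the PyMongo driver (a sketch under invented collection and field names, not FundsCorner's actual code):

```python
# Illustrative PyMongo sketches of the four checks (names are invented).
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.bank

# (2) A unique index to ensure data integrity.
db.accounts.create_index([("account_no", ASCENDING)], unique=True)

# (1) A multi-document transaction (MongoDB 4.0+, requires a replica set).
with client.start_session() as session:
    with session.start_transaction():
        db.accounts.update_one({"account_no": "A"},
                               {"$inc": {"balance": -100}}, session=session)
        db.accounts.update_one({"account_no": "B"},
                               {"$inc": {"balance": 100}}, session=session)

# (3) Join-like filtering and grouping with the aggregation framework.
pipeline = [
    {"$match": {"status": "active"}},
    {"$lookup": {"from": "transactions", "localField": "account_no",
                 "foreignField": "account_no", "as": "txns"}},
    {"$group": {"_id": "$branch", "accounts": {"$sum": 1}}},
]
results = list(db.accounts.aggregate(pipeline))

# (4) Optimistic locking: the update succeeds only if the version we read
# is still current; a concurrent writer makes it return None.
doc = db.accounts.find_one({"account_no": "A"})
updated = db.accounts.find_one_and_update(
    {"account_no": "A", "version": doc["version"]},
    {"$inc": {"balance": -50}, "$set": {"version": doc["version"] + 1}},
)
```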

    Above all, MongoDB offers a complete no-frills Cloud Database as a service - MongoDB Atlas. This kind of sealed the deal for us.

Looking back, choosing MongoDB with MongoDB Atlas was one of the best decisions we took, and it is serving us well. My only gripe is that there should be a way to scale the Atlas configuration up or down at different parts of the day with minimal downtime.

Julien DeFrance
Principal Software Engineer at Tophatter · 16 upvotes · 388.4K views
at SmartZip
Amazon DynamoDB · Ruby · Node.js · AWS Lambda · New Relic · Amazon Elasticsearch Service · Elasticsearch · Superset · Amazon Quicksight · Amazon Redshift · Zapier · Segment · Amazon CloudFront · Memcached · Amazon ElastiCache · Amazon RDS for Aurora · MySQL · Amazon RDS · Amazon S3 · Docker · Capistrano · AWS Elastic Beanstalk · Rails API · Rails · Algolia

Back in 2014, I was given an opportunity to re-architect the SmartZip Analytics platform and its flagship product, SmartTargeting. This is SaaS software that helps real estate professionals keep up with their prospects and leads in a given neighborhood/territory, find out (thanks to predictive analytics) who's most likely to list/sell their home, and run cross-channel marketing automation against them: direct mail, online ads, email... The company also provides Data APIs to Enterprise customers.

I had inherited years and years of technical debt, and I knew things had to change radically. The first enabler was to make use of the cloud and go with AWS, so we would stop re-inventing the wheel and build around managed/scalable services.

For the SaaS product, we kept working with Rails, as this was what my team had the most knowledge in. We have, however, broken up the monolith and decoupled the front-end application from the backend thanks to the use of Rails API, so we'd get independently scalable micro-services from then on.

Our various applications could now be deployed using AWS Elastic Beanstalk, so we wouldn't waste any more effort writing time-consuming Capistrano deployment scripts, for instance. We combined this with Docker, so our application would run within its own container, independently from the underlying host configuration.

Storage-wise, we went with Amazon S3 and ditched any pre-existing local or network storage people used to deal with in our legacy systems. On the database side: Amazon RDS / MySQL initially, ultimately migrated to Amazon RDS for Aurora / MySQL when it was released. Once again, you want a managed service your cloud provider handles for you.

    Future improvements / technology decisions included:

• Caching: Amazon ElastiCache / Memcached
• CDN: Amazon CloudFront
• Systems Integration: Segment / Zapier
• Data-warehousing: Amazon Redshift
• BI: Amazon Quicksight / Superset
• Search: Elasticsearch / Amazon Elasticsearch Service / Algolia
• Monitoring: New Relic

    As our usage grows, patterns changed, and/or our business needs evolved, my role as Engineering Manager then Director of Engineering was also to ensure my team kept on learning and innovating, while delivering on business value.

One of these innovations was to get ourselves into serverless: adopting AWS Lambda was a big step forward. At the time it was only available for Node.js (not Ruby), but it's a great way to handle cost efficiency, unpredictable traffic, sudden bursts of traffic... Ultimately you want the whole chain of services involved in a call to be serverless, and that's when we started leveraging Amazon DynamoDB on these projects so they'd be fully scalable.

Yonas Beshawred
CEO at StackShare · 9 upvotes · 26.4K views
Memcached · Heroku · Amazon ElastiCache · Rails · PostgreSQL · MemCachier · #RailsCaching · #Caching

We decided to use MemCachier as our Memcached provider because we were seeing some serious PostgreSQL performance issues with query-heavy pages on the site. We use MemCachier for all Rails caching, and pretty aggressively too for the logged-out experience (fully cached pages for the most part). We really need to move to Amazon ElastiCache as soon as possible so we can stop paying so much; the only reason we're not moving is that there are some restrictions on the network side, due to our main app being hosted on Heroku.


Ajit Parthan
CTO at Shaw Academy · 1 upvote · 5K views
MongoDB · MySQL · #NosqlDatabaseAsAService

Initial storage was traditional MySQL. The pace of change in startup mode made it very difficult to keep a clean and consistent schema, and large portions ended up as unstructured data stuffed into CLOBs and BLOBs.

    Moving to MongoDB definitely made this part much easier.

    Accessing data for analysis is a little bit of a challenge - especially for people coming from the world of SQL Workbench. But with tools like Exploratory this is becoming less of a problem.


Tim Nolet
Founder, Engineer & Dishwasher at Checkly · 8 upvotes · 61.3K views
Amazon DynamoDB · MongoDB · Node.js · Heroku · PostgreSQL

When I started building Checkly, one of the first things on the agenda was how to actually structure our SaaS database model: think accounts, users, subscriptions etc. Weirdly, there is not a lot of information on this in the "blogosphere" (cringe...). After research and some false starts with MongoDB and Amazon DynamoDB, we ended up with PostgreSQL and a schema consisting of just four tables that form the backbone of all the generic "SaaSy" stuff almost any B2B SaaS bumps into.

In a nutshell:

    • We use Postgres on Heroku.
    • We use a "one database, on schema" approach for partitioning customer data.
    • We use an accounts, memberships and users table to create a many-to-many relation between users and accounts.
    • We completely decouple prices, payments and the exact ingredients for a customer's plan.

    All the details including a database schema diagram are in the linked blog post.
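For illustration, the accounts/memberships/users backbone described above might be modelled like this with SQLAlchemy (the linked post has the authoritative schema; the column choices here are guesses):

```python
# Illustrative SQLAlchemy models for the accounts/memberships/users backbone.
# Column choices are guesses; the linked blog post has the real schema.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Account(Base):
    __tablename__ = "accounts"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    memberships = relationship("Membership", back_populates="account")

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)
    memberships = relationship("Membership", back_populates="user")

class Membership(Base):
    """Join table: a user can belong to many accounts, and vice versa."""
    __tablename__ = "memberships"
    id = Column(Integer, primary_key=True)
    account_id = Column(Integer, ForeignKey("accounts.id"), nullable=False)
    user_id = Column(Integer, ForeignKey("users.id"), nullable=False)
    role = Column(String, default="member")  # e.g. owner / admin / member

    account = relationship("Account", back_populates="memberships")
    user = relationship("User", back_populates="memberships")

engine = create_engine("postgresql://localhost/saas")  # placeholder DSN
Base.metadata.create_all(engine)
```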

Łukasz Korecki
CTO & Co-founder at EnjoyHQ · 12 upvotes · 38.8K views
PostgreSQL · MongoDB · RethinkDB

We initially chose RethinkDB because of its schema-less document store features and a better durability/resilience story than MongoDB. In the end, it didn't work out quite as we expected: there are plenty of scalability issues, it's near impossible to run analytical workloads, and the small community makes working with Rethink a challenge. We're in the process of migrating all our workloads to PostgreSQL, and hopefully we will be able to decommission our RethinkDB deployment soon.

Mauro Bennici
CTO at You Are My GUide · 7 upvotes · 10.4K views
MongoDB · TimescaleDB · PostgreSQL

PostgreSQL plus TimescaleDB allows us to concentrate the business effort on how to analyze valuable data instead of managing it on the IT side. We are now able to ingest thousands of social-share "managed" data points without compromising the scalability of the system or the query time. TimescaleDB is transparent to PostgreSQL, so we continue to use the same SQL syntax without any changes. At the same time, because we only need to manage a few document objects, we dismissed the MongoDB cluster.

Robert Zuber
CTO at CircleCI · 22 upvotes · 163.4K views
Amazon S3 · GitHub · Redis · PostgreSQL · MongoDB

    We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

    As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
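A fixed-window rate limiter of the kind described takes only a few lines with Redis (a hedged sketch using redis-py; the key scheme and limits are invented, not CircleCI's implementation):

```python
# Minimal fixed-window rate limiter with redis-py (pip install redis).
# Key scheme and limits are invented for illustration.
import time

import redis

r = redis.Redis(host="localhost", port=6379)

def allow_request(partner: str, limit: int = 100, window_s: int = 60) -> bool:
    """Allow at most `limit` calls per `window_s`-second window per partner."""
    window = int(time.time()) // window_s
    key = f"ratelimit:{partner}:{window}"
    count = r.incr(key)          # atomic increment; creates the key at 1
    if count == 1:
        r.expire(key, window_s)  # let the counter expire with its window
    return count <= limit

if allow_request("github"):
    pass  # safe to call the partner API
```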

    When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.

Martin Johannesson
Senior Software Developer at IT Minds · 10 upvotes · 15.2K views
AMP · PWA · React · MongoDB · Next.js · GraphQL · Apollo · PostgreSQL · TypeORM · Node.js · TypeScript · #Serverless · #Backend · #B2B

At IT Minds we create customized internal or #B2B web and mobile apps. I have a go-to stack that I pitch to our customers, consisting of 3 core areas: 1) a data core #backend, 2) a micro #serverless #backend, and 3) a user client #frontend.

For the data core, I create a backend using TypeScript and Node.js, with TypeORM connecting to a PostgreSQL database, exposing an action-based API with Apollo GraphQL.

For the micro serverless backend, whose purpose is verification for authentication, authorization, logins and the like, I use Next.js API pages, with MongoDB to store essential information, caching, etc.

Finally, the frontend is built with React using Next.js, TypeScript and Apollo. We create the frontend as a PWA and have an AMP landing page by default.

MongoDB · MySQL · .NET Core · C#

Hi! I need to choose a full stack of tools for a web drop-shipping site, without the payment process, for a family startup project. It will feed from several web services (JSON); I'm expecting 4,200 articles tops. It's for web use only and for a few clients at the beginning.

I'm considering C# with .NET Core 3.0, as it is the one language I'm starting to learn. For the database I haven't made up my mind yet, but it could be MySQL or MongoDB; any advice is welcome, as I'm getting back to programming after years away from this awesome world. Thanks!

Nicolas Apx
CEO - FullStack Javascript at Apx Development Limited · 14 upvotes · 17.4K views
PostgreSQL · MongoDB · Node.js · Python

I am planning on building a micro-service eCommerce backend that is easy to reuse in any project as we need. I would like to use both Python and Node.js, and MongoDB & PostgreSQL. In your opinion, which would be best suited for each of the following services:

    • Users-service
    • Products-service
    • Auth-service
    • Inventory-service
    • Order-service
    • Payment-service
    • Sku-service
    • And more not yet defined....

    Thanks

    Nicolas

    How developers use Memcached and MongoDB
Tarun Singh uses MongoDB

Used MongoDB as the primary database. It holds trip data for NYC taxis for the year 2013. It is a huge dataset, and its primary feature is geo coordinates with pickup and drop-off locations. Also used MongoDB's map-reduce to process this large dataset for aggregation; the aggregated result was then used to show visualizations.
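The shape of those queries might look like the following (an illustrative PyMongo sketch using the aggregation pipeline rather than the now-deprecated map-reduce; field names are guesses):

```python
# Illustrative PyMongo sketch for the taxi-trip workload (field names guessed).
from pymongo import GEOSPHERE, MongoClient

trips = MongoClient()["nyc"].trips
trips.create_index([("pickup_location", GEOSPHERE)])  # enable geo queries

# Trips picked up within 500 m of Times Square (GeoJSON point, lon/lat order).
near_times_square = trips.find({
    "pickup_location": {
        "$nearSphere": {
            "$geometry": {"type": "Point", "coordinates": [-73.9855, 40.7580]},
            "$maxDistance": 500,
        }
    }
})

# Aggregate trip counts and average fare per hour of day for visualization.
per_hour = trips.aggregate([
    {"$group": {
        "_id": {"$hour": "$pickup_datetime"},
        "trips": {"$sum": 1},
        "avg_fare": {"$avg": "$fare_amount"},
    }},
    {"$sort": {"_id": 1}},
])
```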

Trello uses MongoDB

    MongoDB fills our more traditional database needs. We knew we wanted Trello to be blisteringly fast. One of the coolest and most performance-obsessed teams we know is our next-door neighbor and sister company StackExchange. Talking to their dev lead David at lunch one day, I learned that even though they use SQL Server for data storage, they actually primarily store a lot of their data in a denormalized format for performance, and normalize only when they need to.

Foursquare uses MongoDB

Nearly all of our backend storage is on MongoDB. This has also worked out pretty well. It's enabled us to scale up faster/easier than if we had rolled our own solution on top of PostgreSQL (which we were using previously). There have been a few roadbumps along the way, but the team at 10gen has been a big help with things.

AngeloR uses MongoDB

We are testing out MongoDB at the moment. Currently we are only using a small EC2 setup for a delayed job queue backed by Agenda. If it works out well, we might look at whether it could become a primary document storage engine for us.

Matt Welke uses MongoDB

Used for proofs of concept and personal projects with a document data model, especially when there's a need for strong geographic queries. Often not chosen for long-term apps, due to the chance the data model ends up relational as needs develop.

Reactor Digital uses Memcached

As part of the caching system within Drupal.

    Memcached mainly took care of creating and rebuilding the REST API cache once changes had been made within Drupal.

Casey Smith uses Memcached

A distributed cache exposed through the Google App Engine APIs; used to stage fresh data (incoming and recently processed) for faster access in the data processing pipeline.

The Independent uses Memcached

    Memcache caches database results and articles, reducing overall DB load and allowing seamless DB maintenance during quiet periods.

eXon Technologies uses Memcached

Used to cache our clients' most-used files. Connected with the CloudFlare Railgun Optimizer.

ScholaNoctis uses Memcached

    Memcached is used as a simple page cache across the whole application.

How much do Memcached and MongoDB cost?
Pricing information is unavailable; both tools are open source.