Alternatives to MongoDB Atlas

MongoDB, MongoDB Compass, MongoDB Cloud Manager, Azure Cosmos DB, and Firebase are the most popular alternatives and competitors to MongoDB Atlas.
What is MongoDB Atlas and what are its top alternatives?

MongoDB Atlas is a fully managed cloud database service from MongoDB that lets users easily deploy, manage, and scale MongoDB databases. Key features include automated backups, monitoring and alerts, and automated scaling. However, MongoDB Atlas has some limitations: customization options are limited, and costs can climb as storage and performance scale up.

  1. Amazon DocumentDB: Amazon DocumentDB is a fully managed document database service that supports MongoDB workloads. Key features include automated backups, cross-region replication, and scalable storage. Pros include integration with AWS services, while cons include limited query capabilities compared to MongoDB Atlas.
  2. Google Cloud Firestore: Google Cloud Firestore is a flexible, scalable database for mobile, web, and server development. Key features include real-time updates, offline data access, and seamless integration with Google Cloud Platform. Pros include straightforward pricing, while cons include limited query options compared to MongoDB Atlas.
  3. Azure Cosmos DB: Azure Cosmos DB is a globally distributed, multi-model database service by Microsoft. Key features include automatic scaling, multiple data models, and high availability. Pros include global distribution, but cons include potentially higher cost compared to MongoDB Atlas.
  4. Couchbase Cloud: Couchbase Cloud is a fully managed NoSQL database service based on Couchbase Server. Key features include built-in caching, flexible data model, and high performance. Pros include powerful querying capabilities, while cons include potentially higher cost for larger datasets compared to MongoDB Atlas.
  5. FaunaDB: FaunaDB is a distributed, transactional database for modern applications. Key features include global distribution, ACID compliance, and flexible data modeling. Pros include powerful transactions, while cons include potential complexity in data modeling compared to MongoDB Atlas.
  6. DynamoDB: Amazon DynamoDB is a fully managed NoSQL database service by Amazon Web Services. Key features include automatic scaling, high performance, and low latency. Pros include seamless scalability, while cons include potentially higher cost for read and write operations compared to MongoDB Atlas.
  7. Redis Enterprise Cloud: Redis Enterprise Cloud is a fully managed Redis database service. Key features include high availability, low latency, and advanced caching capabilities. Pros include fast read and write operations, while cons include potential cost for large datasets compared to MongoDB Atlas.
  8. Aerospike Database: Aerospike Database is a NoSQL database optimized for performance at scale. Key features include high throughput, low latency, and strong consistency. Pros include fast data access, while cons include potentially higher cost for certain use cases compared to MongoDB Atlas.
  9. YugabyteDB: YugabyteDB is a distributed SQL database designed for cloud-native applications. Key features include geo-distribution, strong consistency, and horizontal scalability. Pros include support for multiple data models, while cons include potential complexity in deployment compared to MongoDB Atlas.
  10. Scylla Cloud: Scylla Cloud is a fully managed NoSQL database service based on the Scylla database engine. Key features include high throughput, low latency, and seamless scaling. Pros include compatibility with Apache Cassandra, while cons include potential learning curve compared to MongoDB Atlas.

Top Alternatives to MongoDB Atlas

  • MongoDB

    MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding. ...

  • MongoDB Compass

    Visually explore your data. Run ad hoc queries in seconds. Interact with your data with full CRUD functionality. View and optimize your query performance. ...

  • MongoDB Cloud Manager

    It is a hosted platform for managing MongoDB on the infrastructure of your choice. It saves you time, money, and helps you protect your customer experience by eliminating the guesswork from running MongoDB. ...

  • Azure Cosmos DB

    Azure Cosmos DB (formerly Azure DocumentDB) is a fully managed NoSQL database service built for fast and predictable performance, high availability, elastic scaling, global distribution, and ease of development. ...

  • Firebase

    Firebase is a cloud service designed to power real-time, collaborative applications. Simply add the Firebase library to your application to gain access to a shared data structure; any changes you make to that data are automatically synchronized with the Firebase cloud and with other clients within milliseconds. ...

  • Compass

    The compass core framework is a design-agnostic framework that provides common code that would otherwise be duplicated across other frameworks and extensions. ...

  • Stitch

    Stitch is a simple, powerful ETL service built for software developers. Stitch evolved out of RJMetrics, a widely used business intelligence platform. When RJMetrics was acquired by Magento in 2016, Stitch was launched as its own company. ...

  • Elasticsearch

    Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack). ...

MongoDB Atlas alternatives & related posts

MongoDB

The database for giant ideas

PROS OF MONGODB
  • 828
    Document-oriented storage
  • 593
    No sql
  • 553
    Ease of use
  • 464
    Fast
  • 410
    High performance
  • 255
    Free
  • 218
    Open source
  • 180
    Flexible
  • 145
    Replication & high availability
  • 112
    Easy to maintain
  • 42
    Querying
  • 39
    Easy scalability
  • 38
    Auto-sharding
  • 37
    High availability
  • 31
    Map/reduce
  • 27
    Document database
  • 25
    Easy setup
  • 25
    Full index support
  • 16
    Reliable
  • 15
    Fast in-place updates
  • 14
    Agile programming, flexible, fast
  • 12
    No database migrations
  • 8
    Easy integration with Node.Js
  • 8
    Enterprise
  • 6
    Enterprise Support
  • 5
    Great NoSQL DB
  • 4
    Support for many languages through different drivers
  • 3
    Schemaless
  • 3
    Aggregation Framework
  • 3
    Drivers support is good
  • 2
    Fast
  • 2
    Managed service
  • 2
    Easy to Scale
  • 2
    Awesome
  • 2
    Consistent
  • 1
    Good GUI
  • 1
    Acid Compliant
CONS OF MONGODB
  • 6
    Very slow for connected models that require joins
  • 3
    Not acid compliant
  • 2
    Proprietary query language

related MongoDB posts

Jeyabalaji Subramanian

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:

  - The data replication must be near real-time, yet it should NOT impact the production database
  - The data replication must be horizontally scalable (based on the load), asynchronous, and crash-resilient

Based on the above criteria, we selected the following tools to perform the end-to-end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.
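The capture side of such a pipeline can also be sketched without Stitch: PyMongo's change streams expose the same insert/update/delete/replace events that Stitch Triggers react to. The sketch below is illustrative, not the post's actual code, and the payload field names are assumptions:

```python
import json

def change_event_to_message(event):
    """Flatten a MongoDB change-stream event into a compact queue payload.

    Only the fields a downstream replicator needs are kept; the field
    names here are illustrative, not the post's actual schema.
    """
    payload = {
        "op": event["operationType"],  # insert / update / delete / replace
        "ns": event["ns"]["db"] + "." + event["ns"]["coll"],
        "key": str(event["documentKey"]["_id"]),
        # fullDocument is present for inserts/replaces (and for updates
        # when the stream is opened with full_document="updateLookup")
        "doc": event.get("fullDocument"),
    }
    return json.dumps(payload, default=str)

# In a real deployment (requires a replica set / Atlas cluster), you would
# tail the stream and forward each event to the queue, roughly:
#   for event in client.mydb.orders.watch(full_document="updateLookup"):
#       sqs.send_message(QueueUrl=QUEUE_URL,
#                        MessageBody=change_event_to_message(event))
```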

We chose Amazon SQS as the pipe/message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next, we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload, and mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as an event-driven, horizontally scalable Lambda service is dumb-easy.
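A minimal sketch of that consumer logic, with a plain dict standing in for the SQLAlchemy-backed warehouse table (the function names and message fields are hypothetical, not the post's code):

```python
import json

def apply_change(table, op, key, doc=None):
    """Mirror a single source-database change onto a target table.

    `table` is a dict keyed by primary key, standing in for the real
    warehouse table; a real service would issue an upsert/delete via
    SQLAlchemy instead.
    """
    if op in ("insert", "replace", "update"):
        table[key] = doc
    elif op == "delete":
        table.pop(key, None)
    else:
        raise ValueError(f"unknown operation: {op}")
    return table

def lambda_handler(event, context=None):
    """AWS Lambda entry point for an SQS-triggered replicator (sketch).

    Each SQS record body is assumed to carry a JSON payload shaped like
    {"op": ..., "key": ..., "doc": ...} from the change-capture side.
    """
    table = {}  # placeholder for the warehouse table
    for record in event.get("Records", []):
        change = json.loads(record["body"])
        apply_change(table, change["op"], change["key"], change.get("doc"))
    return table
```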

In the end, we got to implement a highly scalable, near-real-time change data replication service that "works", deployed to production in a matter of a few days!

Robert Zuber

We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).

When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
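One common way to handle that kind of eventual consistency is simply to poll until the write becomes visible. A generic sketch (an assumption on my part, not CircleCI's actual code):

```python
import time

def read_with_retry(fetch, attempts=5, delay_s=0.0):
    """Poll a possibly eventually-consistent store until the object shows up.

    `fetch` is any zero-argument callable returning the object or None;
    in practice it would wrap an S3 GET. Returns None if it never appears.
    """
    for _ in range(attempts):
        obj = fetch()
        if obj is not None:
            return obj
        time.sleep(delay_s)  # back off between attempts
    return None
```

The same shape works for any read-after-write race: the caller decides how many attempts and how much delay its user-facing latency budget allows.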

MongoDB Compass

A GUI for MongoDB

MongoDB Cloud Manager

A hosted platform for managing MongoDB

          Azure Cosmos DB

          A fully-managed, globally distributed NoSQL database service

          PROS OF AZURE COSMOS DB
          • 28
            Best-of-breed NoSQL features
          • 22
            High scalability
          • 15
            Globally distributed
          • 14
            Automatic indexing over flexible json data model
          • 10
            Tunable consistency
          • 10
            Always on with 99.99% availability sla
          • 7
            Javascript language integrated transactions and queries
          • 6
            Predictable performance
          • 5
            High performance
          • 5
            Analytics Store
          • 2
            Rapid Development
          • 2
            No Sql
          • 2
            Auto Indexing
          • 2
            Ease of use
          CONS OF AZURE COSMOS DB
          • 18
            Pricing
          • 4
            Poor No SQL query support

          related Azure Cosmos DB posts

          Stephen Gheysens
          Lead Solutions Engineer at Inscribe

          Google Maps lets "property owners and their authorized representatives" upload indoor maps, but this appears to lack navigation ("wayfinding").

          MappedIn is a platform and has SDKs for building indoor mapping experiences (https://www.mappedin.com/) and ESRI ArcGIS also offers some indoor mapping tools (https://www.esri.com/en-us/arcgis/indoor-gis/overview). Finally, there used to be a company called LocusLabs that is now a part of Atrius and they were often integrated into airlines' apps to provide airport maps with wayfinding (https://atrius.com/solutions/personal-experiences/personal-wayfinder/).

          I previously worked at Mapbox and while I believe that it's a great platform for building map-based experiences, they don't have any simple solutions for indoor wayfinding. If I were doing this for fun as a side-project and prioritized saving money over saving time, here is what I would do:

          • Create a graph-based dataset representing the walking paths around your university, where nodes/vertexes represent the intersections of paths, and edges represent paths (literally paths outside, hallways, short path segments that represent entering rooms). You could store this in a hosted graph-based database like Neo4j, Amazon Neptune , or Azure Cosmos DB (with its Gremlin API) and use built-in "shortest path" queries, or deploy a PostgreSQL service with pgRouting.

          • Add two properties to each edge: one property for the distance between its nodes (libraries like @turf/helpers will have a distance function if you have the latitude & longitude of each node), and another property estimating the walking time (based on the distance). Once you have these values saved in a graph-based format, you should be able to easily query and find the data representation of paths between two points.

          • At this point, you'd have the routing problem solved and it would come down to building a UI. Mapbox arguably leads the industry in developer tools for custom map experiences. You could convert your nodes/edges to GeoJSON, then either upload to Mapbox and create a Tileset to visualize the paths, or add the GeoJSON to the map on the fly.

          *You might be able to use open source routing tools like OSRM (https://github.com/Project-OSRM/osrm-backend/issues/6257) or Graphhopper (instead of a custom graph database implementation), but it would likely be more involved to maintain these services.
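For a rough sense of what the graph-database queries buy you, the routing core of the steps above fits in a few lines of plain code. This is a hedged sketch, not any of the mentioned products' APIs: a haversine helper for the distance property and a Dijkstra shortest-path search over an undirected edge list.

```python
import heapq
from math import radians, sin, cos, asin, sqrt

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))

def shortest_path(edges, start, goal):
    """Dijkstra over an edge list [(u, v, weight_m), ...]; returns (cost, path)."""
    graph = {}
    for u, v, w in edges:
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))  # walking paths are undirected
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []
```

An estimated walking-time property per edge is then just the distance divided by a walking speed (say 1.4 m/s), and the same search can be run on either weight.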


          We have an in-house-built experiment management system. We produce samples as input to the next step, which then could produce one sample (1-to-1) or many samples (1-to-many). There are many steps like this. So far, we are tracking genealogy (limited tracking) in the MySQL database, which is becoming hard to trace back to the original material or sample (I can give more details if required). So, we are considering a graph database. I am requesting advice from the experts.

          1. Is a graph database the right choice, or can we manage with RDBMS?
           2. If RDBMS, which RDBMS, which feature, or which approach could make this manageable or sustainable?
          3. If Graph database(Neo4j, OrientDB, Azure Cosmos DB, Amazon Neptune, ArangoDB), which one is good, and what are the best practices?

          I am sorry that this might be a loaded question.

          Firebase

          The Realtime App Platform

          PROS OF FIREBASE
          • 371
            Realtime backend made easy
          • 270
            Fast and responsive
          • 242
            Easy setup
          • 215
            Real-time
          • 191
            JSON
          • 134
            Free
          • 128
            Backed by google
          • 83
            Angular adaptor
          • 68
            Reliable
          • 36
            Great customer support
          • 32
            Great documentation
          • 25
            Real-time synchronization
          • 21
            Mobile friendly
          • 19
            Rapid prototyping
          • 14
            Great security
          • 12
            Automatic scaling
          • 11
            Freakingly awesome
          • 8
            Super fast development
          • 8
            Angularfire is an amazing addition!
          • 8
            Chat
          • 6
            Firebase hosting
          • 6
            Built in user auth/oauth
          • 6
            Awesome next-gen backend
          • 6
            Ios adaptor
          • 4
            Speed of light
          • 4
            Very easy to use
          • 3
            Great
          • 3
            It's made development super fast
          • 3
            Brilliant for startups
          • 2
            Free hosting
          • 2
            Cloud functions
          • 2
            JS Offline and Sync suport
          • 2
            Low battery consumption
          • 2
            .net
          • 2
            The concurrent updates create a great experience
          • 2
            Push notification
          • 2
            I can quickly create static web apps with no backend
          • 2
            Great all-round functionality
          • 2
            Free authentication solution
          • 1
            Easy Reactjs integration
          • 1
            Google's support
          • 1
            Free SSL
          • 1
            CDN & cache out of the box
          • 1
            Easy to use
          • 1
            Large
          • 1
            Faster workflow
          • 1
            Serverless
          • 1
            Good Free Limits
          • 1
            Simple and easy
          CONS OF FIREBASE
          • 31
            Can become expensive
          • 16
            No open source, you depend on external company
          • 15
            Scalability is not infinite
          • 9
            Not Flexible Enough
          • 7
            Can't filter queries
          • 3
            Very unstable server
          • 3
            No Relational Data
          • 2
            Too many errors
          • 2
            No offline sync

          related Firebase posts

          Stephen Gheysens
          Lead Solutions Engineer at Inscribe

          Hi Otensia! I'd definitely recommend using the skills you've already got and building with JavaScript is a smart way to go these days. Most platform services have JavaScript/Node SDKs or NPM packages, many serverless platforms support Node in case you need to write any backend logic, and JavaScript is incredibly popular - meaning it will be easy to hire for, should you ever need to.

          My advice would be "don't reinvent the wheel". If you already have a skill set that will work well to solve the problem at hand, and you don't need it for any other projects, don't spend the time jumping into a new language. If you're looking for an excuse to learn something new, it would be better to invest that time in learning a new platform/tool that complements your knowledge of JavaScript. For this project, I might recommend using Netlify, Vercel, or Google Firebase to quickly and easily deploy your web app. If you need to add user authentication, there are great examples out there for Firebase Authentication, Auth0, or even Magic (a newcomer on the Auth scene, but very user-friendly). All of these services work very well with a JavaScript-based application.

          Eugene Cheah

          For inboxkitten.com, an open-source disposable email service:

          We migrated our serverless workload from Cloud Functions for Firebase to Cloudflare Workers, taking advantage of the lower cost and faster-performing edge computing of the Cloudflare network. This was made possible by the extremely low CPU and RAM overhead of our serverless functions.

          If I were to summarize the limitations of Cloudflare (as opposed to Firebase/GCP functions), they would be ...

           1. <5ms CPU time limit
           2. Incompatible with express.js
           3. One-script-per-domain limitation

          These are limitations our workload is able to conform to (YMMV).

          For hosting of static files, we migrated from Firebase to CommonsHost.

          More details on the trade-offs between the two serverless providers are in the article.

          Compass

          A Stylesheet Authoring Environment that makes your website design simpler to implement and easier to maintain

          PROS OF COMPASS
          • 9
            No vendor prefix CSS pain
          • 1
            Mixins
          • 1
            Variables
          • 1
            Compass sprites

             Stitch

             All your data. In your data warehouse. In minutes.

            PROS OF STITCH
            • 8
              3 minutes to set up
            • 4
              Super simple, great support

              related Stitch posts

              Ankit Sobti

               Looker, Stitch, Amazon Redshift, dbt

               We recently moved our Data Analytics and Business Intelligence tooling to Looker. It's already helping us create a solid process for reusable SQL-based data modeling, with consistent definitions across the entire organization. Looker allows us to collaboratively build these version-controlled models and push the limits of what we've traditionally been able to accomplish with analytics with a lean team.

               For Data Engineering, we're in the process of moving from maintaining our own ETL pipelines on AWS to a managed ELT system on Stitch. We're also evaluating the command-line tool dbt to manage data transformations. Our hope is that Stitch + dbt will streamline the ELT bit, allowing us to focus our energies on analyzing data, rather than managing it.

              Cyril Duchon-Doris
               CTO at My Job Glasses

               Hello! For security and strategic reasons, we are migrating our apps from AWS/Google to Outscale, a cloud provider with more security certifications and fewer functionalities. So far we have been using Google BigQuery as our data warehouse with ELT workflows (using Stitch and dbt), and we need to migrate our data ecosystem to this new cloud provider.

               We are setting up a Kubernetes cluster in our new cloud provider for our apps. Regarding the data warehouse, it's not clear whether there are advantages or drawbacks to setting it up on Kubernetes (apart from having to create node groups and tolerations with more RAM/CPU). Also, we are not sure what the best open-source or on-premises tool to use is. The main requirement is that data must remain in the secure cluster, and no external entity (especially US) can have access to it. We have a dev cluster/environment and a production cluster/environment on this cloud.

               Regarding the actual DWH usage:
               - Today we have ~1.5TB in BigQuery in production. We're going to run our initial tests with ~50-100GB of data for our test cluster.
               - Most of our data comes from other databases, so in most cases we already have replicated sources somewhere, and there are only a handful of collections whose source is directly in the DWH (such as snapshots, some external data we've fetched at some point, Google Analytics, etc.) that need an appropriate level of replication.
               - We are a team of 30-ish people, we do not have critical needs regarding analytics speed, and we do not need real time. We rebuild our dbt models 2-3 times a day and this usually proves enough.

               Apart from PostgreSQL, I haven't really found open-source or on-premises alternatives for setting up a data warehouse and running transformations with dbt. There is also the question of data ingestion: I've selected Airbyte and @meltano, and I have trouble understanding whether one of the two is better, but Airbyte seems to have a bigger community.

               What do you suggest regarding the data warehouse and the ELT workflows?
               - Kubernetes or not Kubernetes?
               - PostgreSQL or something else? If PostgreSQL, what are the important configs you'd have in mind?
               - Airbyte/dbt or something else?

               Elasticsearch

               Open Source, Distributed, RESTful Search Engine

              PROS OF ELASTICSEARCH
              • 328
                Powerful api
              • 315
                Great search engine
              • 231
                Open source
              • 214
                Restful
              • 200
                Near real-time search
              • 98
                Free
              • 85
                Search everything
              • 54
                Easy to get started
              • 45
                Analytics
              • 26
                Distributed
              • 6
                Fast search
              • 5
                More than a search engine
              • 4
                Great docs
              • 4
                Awesome, great tool
              • 3
                Highly Available
              • 3
                Easy to scale
              • 2
                Potato
              • 2
                Document Store
              • 2
                Great customer support
              • 2
                Intuitive API
              • 2
                Nosql DB
              • 2
                Great piece of software
              • 2
                Reliable
              • 2
                Fast
              • 2
                Easy setup
              • 1
                Open
              • 1
                Easy to get hot data
              • 1
                Github
              • 1
                 Elasticsearch
              • 1
                Actively developing
              • 1
                Responsive maintainers on GitHub
              • 1
                Ecosystem
              • 1
                Not stable
              • 1
                Scalability
              • 0
                Community
              CONS OF ELASTICSEARCH
              • 7
                Resource hungry
              • 6
                 Difficult to get started
              • 5
                Expensive
              • 4
                Hard to keep stable at large scale

              related Elasticsearch posts

              Tim Abbott

              We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults, to bad query plans which required a lot of manual query tweaks.

               We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

              And then after that, we've just gotten a ton of value out of postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
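The partial-index pattern described above is easy to demo outside Postgres, too. The sketch below uses Python's bundled SQLite, which supports the same CREATE INDEX ... WHERE syntax for this simple case; the table and column names are made up for illustration.

```python
import sqlite3

# In-memory database standing in for the application's Postgres instance.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, user_id INTEGER, unread INTEGER)"
)
# A partial index covers only rows matching its WHERE clause, so the
# "unread messages" lookup stays fast without indexing the whole table.
conn.execute("CREATE INDEX idx_unread ON messages(user_id) WHERE unread = 1")
conn.executemany(
    "INSERT INTO messages (user_id, unread) VALUES (?, ?)",
    [(1, 1), (1, 0), (2, 1), (2, 1)],
)
row = conn.execute(
    "SELECT COUNT(*) FROM messages WHERE user_id = 2 AND unread = 1"
).fetchone()
```

In Postgres, the win is the same: the index stays tiny because read messages (the vast majority) never enter it.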

              I can't recommend it highly enough.

              Tymoteusz Paul
               Devops guy at X20X Development LTD

               Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, given that the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).

               It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over - convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.

               I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in open net, behind VPN - you name it.

               We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

               If we are happy with the state of the Ansible, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust, and unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built-in with TeamCity). It also comes with all the common handy plugins like Slack or Apache Maven integration.

               The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:
               1. Make build steps as small as possible. This way when something breaks, we know exactly where, without needing to dig and root around.
               2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management.
               3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
               4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).

               Speaking of deployments, I generally try to keep it simple but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I also constantly peek at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and onto bare-metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the hardware of Proxmox, in a similar way as you do for Amazon EC2 (Ansible supports both greatly), and you are good to go. One does not exclude the other - quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
