
Alternatives to TiDB

MySQL, CockroachDB, Cassandra, Vitess, and MongoDB are the most popular alternatives and competitors to TiDB.

What is TiDB and what are its top alternatives?

Inspired by the design of Google F1, TiDB supports the best features of both traditional RDBMS and NoSQL.
TiDB is a tool in the Databases category of a tech stack.
TiDB is an open source tool with 28.1K GitHub stars and 4.4K GitHub forks; its source code is hosted on GitHub.
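
Since TiDB speaks the MySQL wire protocol, existing MySQL clients and drivers can talk to it. A minimal sketch using the PyMySQL driver against a local TiDB instance (default port 4000); the `test` database and `users` table below are just for illustration:

```python
# Minimal sketch: connecting to TiDB with a standard MySQL driver (PyMySQL).
# Assumes a local TiDB server on the default port 4000 and a `test` database.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=4000, user="root", password="", database="test")
try:
    with conn.cursor() as cur:
        # Ordinary SQL works because TiDB is MySQL-compatible.
        cur.execute("CREATE TABLE IF NOT EXISTS users (id BIGINT PRIMARY KEY, name VARCHAR(64))")
        cur.execute("REPLACE INTO users (id, name) VALUES (%s, %s)", (1, "alice"))
        cur.execute("SELECT id, name FROM users")
        print(cur.fetchall())
    conn.commit()
finally:
    conn.close()
```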

Top Alternatives to TiDB

  • MySQL

    The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. ...

  • CockroachDB

    It allows you to deploy a database on-prem, in the cloud, or even across clouds, all as a single store. It is a simple and straightforward bridge to your future, cloud-based data architecture. ...

  • Cassandra

    Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added to and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL. ...

  • Vitess

    It is a database solution for deploying, scaling, and managing large clusters of MySQL instances. It is architected to run as effectively in a public or private cloud architecture as it does on dedicated hardware. It combines and extends many important MySQL features with the scalability of a NoSQL database. ...

  • MongoDB

    MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding. ...

  • Clickhouse

    It allows analysis of data that is updated in real time. It offers instant results in most cases: the data is processed faster than it takes to create a query. ...

  • PostgreSQL

    PostgreSQL is an advanced object-relational database management system that supports an extended subset of the SQL standard, including transactions, foreign keys, subqueries, triggers, and user-defined types and functions. ...

  • Apache Aurora

    Apache Aurora is a service scheduler that runs on top of Mesos, enabling you to run long-running services that take advantage of Mesos' scalability, fault-tolerance, and resource isolation. ...

TiDB alternatives & related posts


MySQL

The world's most popular open source database

PROS OF MYSQL
  • Sql (790)
  • Free (673)
  • Easy (557)
  • Widely used (527)
  • Open source (485)
  • High availability (180)
  • Cross-platform support (158)
  • Great community (103)
  • Secure (78)
  • Full-text indexing and searching (75)
  • Fast, open, available (25)
  • SSL support (14)
  • Reliable (13)
  • Robust (13)
  • Enterprise Version (8)
  • Easy to set up on all platforms (7)
  • NoSQL access to JSON data type (2)
  • Replica Support (1)
  • Easy, light, scalable (1)
  • Relational database (1)
  • Sequel Pro (best SQL GUI) (1)
CONS OF MYSQL
  • Owned by a company with their own agenda (14)
  • Can't roll back schema changes (1)

related MySQL posts

Tim Abbott

We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans that required a lot of manual query tweaks.

We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

And then after that, we've just gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.

I can't recommend it highly enough.
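
The partial indexes mentioned above are a plain-SQL feature; a minimal sketch of the idea via psycopg2, using a hypothetical `messages` table and column names rather than Zulip's actual schema:

```python
# Sketch of a PostgreSQL partial index of the kind described above.
# Table and column names are hypothetical, not Zulip's real schema.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # assumed connection settings
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS messages (
            user_id BIGINT, message_id BIGINT, read BOOLEAN NOT NULL DEFAULT FALSE,
            PRIMARY KEY (user_id, message_id)
        )
    """)
    # Index only the rows the hot query cares about (unread messages),
    # keeping the index small and lookups fast.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS messages_unread_idx
        ON messages (user_id, message_id)
        WHERE NOT read
    """)
    # The planner can use the partial index when a query's WHERE clause
    # implies the index predicate.
    cur.execute("SELECT message_id FROM messages WHERE user_id = %s AND NOT read", (42,))
    print(cur.fetchall())
```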

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber · 21 upvotes · 980.7K views

Our most popular (& controversial!) article to date on the Uber Engineering blog in 3+ yrs. Why we moved from PostgreSQL to MySQL. In essence, it was due to a variety of limitations of Postgres at the time. Fun fact -- earlier in Uber's history we'd actually moved from MySQL to Postgres before switching back for good, & though we published the article in Summer 2016 we haven't looked back since:

The early architecture of Uber consisted of a monolithic backend application written in Python that used Postgres for data persistence. Since that time, the architecture of Uber has changed significantly, to a model of microservices and new data platforms. Specifically, in many of the cases where we previously used Postgres, we now use Schemaless, a novel database sharding layer built on top of MySQL (https://eng.uber.com/schemaless-part-one/). In this article, we’ll explore some of the drawbacks we found with Postgres and explain the decision to build Schemaless and other backend services on top of MySQL:

https://eng.uber.com/mysql-migration/


CockroachDB

A cloud-native SQL database for building global, scalable cloud services that survive disasters.

PROS OF COCKROACHDB
  No pros listed yet.
CONS OF COCKROACHDB
  No cons listed yet.

related CockroachDB posts

Cassandra

A partitioned row store. Rows are organized into tables with a required primary key.

PROS OF CASSANDRA
  • Distributed (107)
  • High performance (90)
  • High availability (77)
  • Easy scalability (71)
  • Replication (50)
  • Reliable (25)
  • Multi datacenter deployments (24)
  • Schema optional (6)
  • OLTP (6)
  • Open source (5)
  • Workload separation (via MDC) (2)
CONS OF CASSANDRA
  • Reliability of replication (2)
  • Updates (1)

related Cassandra posts

Thierry Schellenbach
Shared insights on Redis, Cassandra, and RocksDB at Stream

1.0 of Stream leveraged Cassandra for storing the feed. Cassandra is a common choice for building feeds. Instagram, for instance, started out with Redis but eventually switched to Cassandra to handle their rapid usage growth. Cassandra can handle write-heavy workloads very efficiently.

Cassandra is a great tool that allows you to scale write capacity simply by adding more nodes, though it is also very complex. This complexity made it hard to diagnose performance fluctuations. Even though we had years of experience with running Cassandra, it still felt like a bit of a black box. When building Stream 2.0 we decided to go for a different approach and build Keevo. Keevo is our in-house key-value store built upon RocksDB, gRPC and Raft.

RocksDB is a highly performant embeddable database library developed and maintained by Facebook's data engineering team. RocksDB started as a fork of Google's LevelDB that introduced several performance improvements for SSD. Nowadays RocksDB is a project on its own and is under active development. It is written in C++ and it's fast. Have a look at how this benchmark handles 7 million QPS. In terms of technology it's much simpler than Cassandra.

This translates into reduced maintenance overhead, improved performance and, most importantly, more consistent performance. It's interesting to note that LinkedIn also uses RocksDB for their feed.

#InMemoryDatabases #DataStores #Databases
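
For a sense of what the Cassandra side of a feed can look like, here is a minimal sketch using the DataStax cassandra-driver; the `feeds.activities` keyspace and table are hypothetical, not Stream's actual schema:

```python
# Minimal sketch of writing and reading feed activities in Cassandra.
# Keyspace/table names are hypothetical, not Stream's actual schema.
import time
from cassandra.cluster import Cluster
from cassandra.util import uuid_from_time

cluster = Cluster(["127.0.0.1"])  # assumes a local Cassandra node
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS feeds
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS feeds.activities (
        feed_id text, activity_id timeuuid, verb text, object text,
        PRIMARY KEY (feed_id, activity_id)
    ) WITH CLUSTERING ORDER BY (activity_id DESC)
""")

# Writes go to the partition owned by feed_id; adding nodes adds write capacity.
session.execute(
    "INSERT INTO feeds.activities (feed_id, activity_id, verb, object) VALUES (%s, %s, %s, %s)",
    ("user:42", uuid_from_time(time.time()), "post", "photo:99"),
)

rows = session.execute(
    "SELECT verb, object FROM feeds.activities WHERE feed_id = %s LIMIT 10", ("user:42",)
)
print(list(rows))
cluster.shutdown()
```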

Umair Iftikhar
Technical Architect at Vappar · 3 upvotes · 65.3K views

Developing a solution that collects telemetry data from different devices: a minimum of roughly 1,000 devices and a maximum of 12,000, with each device sending 2 packets per second. This is time-series data, and the data definitions and various reports (building information, maintenance records, etc.) are saved in PostgreSQL. I want to know about the best solution. This data is required for math and ML to run different algorithms. Also, the data is raw, without the definitions and information stored in PostgreSQL. Initially, I went with TimescaleDB due to its PostgreSQL support, but as the number of sites increased, I started facing many issues with TimescaleDB in terms of flexibility of storing data.

My major requirement is also replication of the database for reporting and other purposes. You may also suggest options other than Druid and Cassandra, but an open source solution is appreciated.

Vitess

A database clustering system for horizontal scaling of MySQL

PROS OF VITESS
  No pros listed yet.
CONS OF VITESS
  No cons listed yet.

related Vitess posts

Shared insights on MySQL and Vitess at Slack

Slack's MySQL databases hold data critical to the business and are operated by an ecosystem of tools. But once the tools have been used, it is important to verify that the data remains as expected at all times. Even with the best efforts to prevent errors, inconsistencies are bound to creep in at some stage. In order to test the code in a comprehensive manner, Slack developed a structure known as a consistency check framework.

This is a responsive and personalized framework that can meaningfully analyze and report on your data, with a number of proactive and reactive benefits. The framework is important because it can help with repair and recovery from an outage or bug, help ensure effective data migration through scripts that test the code post-migration, and find bugs throughout the database. It also helped prevent duplication and identifies the canonical code in each case, running as reusable code.

The framework was built by writing generic versions of the scanning and reporting code plus an interface for the checking code. The checks could be run from the command line, and either a single team or the whole system could be scanned. The process was improved over time to further customize the checks and make them more specific. In order to make the framework accessible to everyone, a GUI was added and connected to the internal administrative system. The framework was also modified to include code that can fix certain problems, while others are left for manual intervention. For Slack, such a tool proved extremely beneficial in ensuring data integrity both internally and externally.
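
Slack's actual code isn't shown in the post; purely as an illustration, here is a loose sketch of what a check interface plus a generic scan-and-report runner could look like (all names hypothetical):

```python
# Loose sketch of a consistency-check framework: generic scan/report code plus a
# small interface for individual checks. All names are hypothetical, not Slack's code.
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Inconsistency:
    check: str
    row_id: int
    detail: str


class ConsistencyCheck:
    """One check = one invariant over one table or shard."""
    name = "base"

    def scan(self, team_id: int) -> Iterable[dict]:
        raise NotImplementedError

    def verify(self, row: dict) -> List[Inconsistency]:
        raise NotImplementedError

    def repair(self, issue: Inconsistency) -> bool:
        # Checks that can't auto-fix leave issues for manual intervention.
        return False


def run_checks(checks: List[ConsistencyCheck], team_ids: Iterable[int]) -> List[Inconsistency]:
    """Generic scanning/reporting loop shared by every check."""
    report: List[Inconsistency] = []
    for team_id in team_ids:
        for check in checks:
            for row in check.scan(team_id):
                for issue in check.verify(row):
                    if not check.repair(issue):
                        report.append(issue)
    return report
```

A real framework would add scheduling, result storage, and the GUI described above on top of a loop like this.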

MongoDB

The database for giant ideas

PROS OF MONGODB
  • Document-oriented storage (823)
  • No sql (589)
  • Ease of use (545)
  • Fast (463)
  • High performance (405)
  • Free (253)
  • Open source (214)
  • Flexible (178)
  • Replication & high availability (140)
  • Easy to maintain (108)
  • Querying (40)
  • Easy scalability (36)
  • Auto-sharding (35)
  • High availability (34)
  • Map/reduce (30)
  • Document database (26)
  • Easy setup (24)
  • Full index support (24)
  • Reliable (15)
  • Fast in-place updates (14)
  • Agile programming, flexible, fast (13)
  • No database migrations (11)
  • Enterprise (7)
  • Easy integration with Node.Js (7)
  • Enterprise Support (5)
  • Great NoSQL DB (4)
  • Aggregation Framework (3)
  • Support for many languages through different drivers (3)
  • Drivers support is good (3)
  • Schemaless (2)
  • Fast (2)
  • Awesome (2)
  • Managed service (2)
  • Easy to Scale (2)
  • Consistent (1)
CONS OF MONGODB
  • Very slow for connected models that require joins (5)
  • Not ACID compliant (3)
  • Proprietary query language (1)

related MongoDB posts

Jeyabalaji Subramanian

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:
  • The data replication must be near real-time, yet it should NOT impact the production database
  • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

Based on the above criteria, we selected the following tools to perform the end-to-end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload, and mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as event-driven, horizontally scalable Lambda services is dumb-easy.

In the end, we got to implement a highly scalable, near-realtime Change Data Replication service that "works" and was deployed to production in a matter of a few days!
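
The Python listener described above amounts to a long-polling SQS consumer. A rough sketch with boto3 is below; the queue URL, event field names, and the change-applying logic are placeholders (the real service mirrored changes into the warehouse via SQLAlchemy):

```python
# Sketch of an SQS-consuming replication micro-service like the one described above.
# Queue URL, event field names, and apply_change() are placeholders, not the real service.
import json
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mongo-changes"  # hypothetical

sqs = boto3.client("sqs")


def apply_change(change: dict) -> None:
    # The real service mirrored the insert/update/delete/replace into the target
    # warehouse via SQLAlchemy; field names here assume a MongoDB change-event shape.
    print(change.get("operationType"), change.get("documentKey"))


def run() -> None:
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            apply_change(json.loads(msg["Body"]))
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])


if __name__ == "__main__":
    run()
```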

Robert Zuber

We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).

When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
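
Rate limiting calls to partners' APIs with Redis is commonly done with a counter per key and time window. A small fixed-window sketch with redis-py follows; the key naming, limit, and window are made up for illustration, not the author's actual values:

```python
# Small fixed-window rate limiter of the kind described above (sketch only;
# key naming, limit, and window size are illustrative assumptions).
import time
import redis

r = redis.Redis(host="localhost", port=6379)


def allow_request(partner: str, limit: int = 100, window_s: int = 60) -> bool:
    """Return True if another call to `partner` is allowed in the current window."""
    bucket = int(time.time()) // window_s
    key = f"ratelimit:{partner}:{bucket}"
    count = r.incr(key)               # atomically bump the per-window counter
    if count == 1:
        r.expire(key, window_s * 2)   # let old buckets fall out of Redis
    return count <= limit


if allow_request("github"):
    print("call GitHub API")
else:
    print("back off and retry later")
```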

Clickhouse

A column-oriented database management system

PROS OF CLICKHOUSE
  • Fast, very very fast (13)
  • Good compression ratio (10)
  • Horizontally scalable (5)
  • Utilizes all CPU resources (4)
  • Great CLI (4)
  • RESTful (4)
  • Has no transactions (3)
  • Great number of SQL functions (3)
  • Open-source (2)
  • Buggy (2)
  • In IDEA data import via HTTP interface not working (1)
  • Server crashes, it's normal :( (1)
  • Highly available (1)
  • Flexible compression options (1)
  • Flexible connection options (1)
  • ODBC (1)
CONS OF CLICKHOUSE
  • Slow insert operations (2)

related Clickhouse posts

PostgreSQL

A powerful, open source object-relational database system

PROS OF POSTGRESQL
  • Relational database (753)
  • High availability (506)
  • Enterprise class database (436)
  • Sql (379)
  • Sql + nosql (298)
  • Great community (171)
  • Easy to setup (145)
  • Heroku (129)
  • Secure by default (128)
  • Postgis (111)
  • Supports Key-Value (48)
  • Great JSON support (46)
  • Cross platform (32)
  • Extensible (29)
  • Replication (26)
  • Triggers (24)
  • Rollback (22)
  • Multiversion concurrency control (21)
  • Open source (20)
  • Heroku Add-on (17)
  • Stable, Simple and Good Performance (14)
  • Powerful (13)
  • Lets be serious, what other SQL DB would you go for? (12)
  • Good documentation (9)
  • Intelligent optimizer (7)
  • Scalable (7)
  • Transactional DDL (6)
  • Modern (6)
  • Reliable (6)
  • Free (5)
  • One stop solution for all things sql no matter the os (5)
  • Relational database with MVCC (4)
  • Faster Development (3)
  • Full-Text Search (3)
  • Developer friendly (3)
  • Excellent source code (2)
  • Search (2)
  • Great DB for Transactional system or Application (2)
  • Full-text (1)
  • Free version (1)
  • Text (1)
  • Open-source (1)
CONS OF POSTGRESQL
  • Table/index bloating (9)

related PostgreSQL posts

Jeyabalaji Subramanian: see the MongoDB-to-PostgreSQL change data replication post under "related MongoDB posts" above.

Tim Abbott: see the Zulip MySQL-to-PostgreSQL migration post under "related MySQL posts" above.
Apache Aurora

An Apache Mesos framework for scheduling jobs, originally developed by Twitter

PROS OF APACHE AURORA
  No pros listed yet.
CONS OF APACHE AURORA
  No cons listed yet.

related Apache Aurora posts

Docker containers on Mesos run their microservices with consistent configurations at scale, along with Aurora for long-running services and cron jobs.
