InfluxDB vs PostgreSQL: What are the differences?
Introduction
In this article, we will be comparing InfluxDB and PostgreSQL, two popular database management systems, and highlighting their key differences.
Data Model: InfluxDB is a time series database designed specifically for handling time-stamped and sequential data. It excels at ingesting and analyzing large volumes of time series data with high write and query performance. On the other hand, PostgreSQL is a general-purpose relational database that can handle various types of data and complex relationships between them.
Scalability and Performance: InfluxDB is optimized for high write and query performance on time series data. It can be scaled horizontally (clustering is part of the commercial offering), letting you add more nodes to handle increasing data volumes. PostgreSQL, on the other hand, is known for its excellent transactional capabilities and has a proven track record of handling large datasets and complex queries.
Query Language: InfluxDB uses its own query language, InfluxQL, which is designed specifically for time series data and provides functionality such as aggregations, downsampling, and windowing. PostgreSQL uses SQL, the widely used standard query language for relational databases, which lets you perform complex operations on data stored across multiple tables (see the short example after the summary below).
Schema Flexibility: InfluxDB has a flexible schema, allowing you to add new fields to data points on the fly without altering the existing schema. This makes it easy to handle evolving time series data where new fields may be added over time. PostgreSQL, being a relational database, requires a predefined schema before inserting data, providing strict structure and enforcing data integrity.
Availability and Replication: InfluxDB supports high availability through its clustering feature (offered in the commercial/Enterprise version), allowing multiple nodes to provide failover and data replication. PostgreSQL also supports replication, but it requires additional configuration and setup to achieve high availability.
Use Cases: InfluxDB is commonly used in applications that deal with monitoring and analyzing large volumes of time series data, such as IoT sensor data, financial data, and application performance metrics. PostgreSQL, on the other hand, is suitable for a wide range of use cases including web applications, e-commerce platforms, and data warehousing.
In summary, InfluxDB is a specialized time series database optimized for handling time-stamped data with high write and query performance, while PostgreSQL is a general-purpose relational database that provides excellent transactional capabilities and supports various types of data and complex relationships.
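To make the query-language difference concrete, here is a minimal, hedged sketch of the same 5-minute aggregation in both systems. The measurement/table name `cpu`, the field `usage`, and the tag/column `host` are hypothetical.

```sql
-- InfluxQL (InfluxDB 1.x): mean usage per host in 5-minute windows over the last hour.
SELECT MEAN("usage") FROM "cpu"
WHERE time > now() - 1h
GROUP BY time(5m), "host"

-- Roughly equivalent PostgreSQL query against a table cpu(ts timestamptz, host text, usage double precision),
-- using date_bin() for the time bucketing (available in PostgreSQL 14+).
SELECT date_bin('5 minutes', ts, TIMESTAMPTZ '2000-01-01') AS bucket,
       host,
       avg(usage) AS mean_usage
FROM cpu
WHERE ts > now() - interval '1 hour'
GROUP BY bucket, host
ORDER BY bucket, host;
```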
Hello All, I'm building an app that will enable users to create documents using the CKEditor or TinyMCE editor. The data is then stored in a database and retrieved to display to the user; these docs can also contain image data. The number of pages generated for a single document can go up to 1000, so by design each page is stored in a separate JSON. I'm wondering which database is the right one to choose between ArangoDB and PostgreSQL. Your thoughts and advice, please. Thanks, Kashyap
Which Graph DB features are you planning to use?
I am building a fintech startup with a friend; we decided to use Go for its performance and friendly syntax. We want to know whether we should use a web framework or just the pure net/http lib. For the database, we have put PostgreSQL and MySQL on the table and want to know which one is better, from community support to the best open-source implementation.
Postgres is a better option to consider than MySQL; with respect to performance, Postgres has an edge. Don't use net/http with its defaults in production; read this: https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779 I prefer gorilla/mux as it is simple and provides all the basic features; other libraries seem like overhead if you just need basic routing.
MySQL and Postgres are both great, with strong community support. Either will serve you well, though Postgres has some small advantages.
I recommend Elixir. Even though I work at a fintech with Go, Elixir is a functional language, and in my opinion immutability is an important property when working with money.
I'm planning to build a freelance marketplace website using tools like Next.js, Firebase Authentication, and Node.js, but I need to know which type of database is suitable in terms of performance and features. I'm trying to figure out the best stack for this project. If anyone has advice, I'd love to hear more details. Thanks.
Postgres and MySQL are very similar, but Mongo differs in terms of storage type and the CAP theorem. For your requirement, I prefer Postgres (or MySQL) over MongoDB. Mongo gives you no schema, which is not always a good thing. On the other hand, it is more common in the Node.js community, so you may find more articles about Node-Mongo stuff. I suggest staying with an RDBMS if possible.
This is partly a matter of experience. PostgreSQL is fine. You can use either a relational table structure or a JSON column structure.
We have a ready-made engine for an online exchange and marketplace. To customize it, you only need to know SQL. Connecting any database is not a problem. https://falconspace.site/list/solutions
For learning purposes, I am trying to design a dashboard that displays the total revenue from all connected webshops/marketplaces, displaying incoming orders, total orders, etc.
So I will need to get the data (using a Node backend) from the Shopify and marketplace APIs, store it in the database, and then retrieve it from the back end.
My question is:
What kind of database should I use? Is MongoDB fine for storing this kind of data? Or should I go with a SQL database?
Postgres is a solid database with a promising background. On the relational side of database design, I see Postgres as an absolute; the arguments and conflicts come in when talking about NoSQL data types. The truth is that jsonb in Postgres is efficient and gives good performance and storage. Comparing it with MongoDB on the same resources (such as RAM and CPU), and given its better tools and community, I think you should go for Postgres and use jsonb for some of the data. All in all, don't adopt a NoSQL database just because your data happens to match that tech; you can have both SQL and NoSQL at the same time.
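As a hedged illustration of the "jsonb for some of the data" approach (table and field names here are hypothetical):

```sql
-- Relational columns where structure is stable, jsonb where it needs to evolve.
CREATE TABLE orders (
    id         bigserial PRIMARY KEY,
    shop       text NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    payload    jsonb NOT NULL           -- raw document; fields can vary per order
);

-- GIN index keeps lookups inside the JSON fast.
CREATE INDEX orders_payload_idx ON orders USING gin (payload);

-- Query nested values much like ordinary columns.
SELECT id, payload->>'status' AS status
FROM orders
WHERE payload @> '{"currency": "EUR"}'::jsonb;
```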
I have found MongoDB easier to work with. Postgres, and SQL in general, is in my experience harder to work with. While Postgres does provide data consistency, MongoDB provides flexibility. I've found the MongoDB ecosystem to be really great, with a good community. I've worked with MongoDB in production and it's been great. I really like the aggregation system and using query operators such as $in, $pull, $push.
While my opinion may be unpopular, I have found MongoDB really great for relational data, using aggregations from a code perspective. In general, data types are also more flexible with MongoDB.
I would use PostgreSQL because it has more powerful features for data aggregation and views. The raw data from Shopify and the other sources could be stored as-is, and you can then use views to produce different kinds of reports, unless you want to create those aggregations/views in Node.js code. HTH
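A minimal sketch of that pattern, under assumed table, view, and field names (the Shopify fields shown are illustrative, not the exact API schema):

```sql
-- Raw API responses stored as-is.
CREATE TABLE raw_orders (
    id         bigserial PRIMARY KEY,
    source     text NOT NULL,                 -- e.g. 'shopify', 'amazon'
    fetched_at timestamptz NOT NULL DEFAULT now(),
    doc        jsonb NOT NULL
);

-- A view turns the raw documents into a revenue-per-day report.
CREATE VIEW daily_revenue AS
SELECT source,
       (doc->>'created_at')::date          AS order_day,
       sum((doc->>'total_price')::numeric) AS revenue,
       count(*)                            AS orders
FROM raw_orders
GROUP BY source, (doc->>'created_at')::date;

-- The Node backend then only has to read the view.
SELECT * FROM daily_revenue WHERE order_day >= current_date - 30;
```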
I want to store the data retrieved from multiple APIs and perform some analytics on it. The data stored in the DB will rarely, if ever, change. At first I thought it would be better to retrieve the data and create table columns for it, but some responses might have different columns than others. So I thought about storing the JSON response from the API directly in the table and using it. Which database will be the better choice, PostgreSQL or MongoDB?
Hey Krunal, your requirement sounds pretty clear and specific to what you want to do with that data. My recommendation would be to use MongoDB. Since schema-less I/O is fast in MongoDB, your general speed of reading from and writing to the database would be quick. Additionally, the aggregation framework is very powerful with large data, so that is also something you can use when computing your analytics.
I suggest you go with MongoDB, because it is schema-less, i.e., it lets you easily change the structure of a collection: if you want to add a field, it can be done without much effort. Moreover, MongoDB can deal with more kinds of data, since everything is stored as key-value pairs. I do not know what kind of analysis you are going to do, but NoSQL is not the best choice if you are going to use complex queries. In addition, if you are working with huge amounts of data and are interested in optimising performance, I suggest PostgreSQL.
Since you mention APIs and JSON, I guess you may be using Node.js to fetch the APIs. I suggest you try Mongoose, which makes it easier to use MongoDB with Node.js.
Looks like the use case is to store JSON data. MongoDB and Postgres differ in many aspects, such as scaling and consistency. Postgres now has excellent JSON support with the power of SQL; MongoDB is good at handling schemaless data. In this case, however, it seems these differences don't matter that much. I'd recommend you go with whichever you are most comfortable with.
I don't have an unquestionable opinion regarding your use case. I only tend to pick MongoDB since it is schemaless, which avoids null columns whose use you don't always know in advance (it depends on the source of the data). The only drawback I would mention is query complexity in MongoDB: it is sometimes a bit tricky compared to traditional SQL queries.
This is largely a matter of opinion. I see that someone else responded and recommended MongoDB but since you are doing data analytics, I highly recommend you go with SQL. You're going to have a really hard time normalizing the data when you can't manipulate relationships and bulk edit with a nice update query.
I'm much more experienced with MySQL than any other database, and I'm having a hard time getting on board with NoSQL entirely because it's really hard to query complex data with relationships using NoSQL. I'm using Firestore with one of my apps and MongoDB with another, but they both use MySQL for the heavy lifting and then a document database for things like permissions, caching, etc.
It sounds like the type of problem you need to reverse engineer. I'm sure you can imagine what the data sets would look like if you use MongoDB or Postgres. I suspect that putting in a little bit more work up front will pay high dividends and productivity once the data is normalized.
Again - it's largely a matter of preference but I prefer SQL almost every time.
I need urgent advice from you all! I am making a web-based food ordering platform which includes 3 different ordering methods (dine-in using QR code scanning, takeaway, and home delivery) and a table reservation system. We are using React for the front end, and I need your advice on whether I should use NestJS or ExpressJS for the backend. And regarding the database, which should I use, MongoDB or PostgreSQL? Which combination will be better? PS: We want to follow a microservice architecture, as scalability, reliability, and usability are the most important non-functional requirements. Expert advice is needed, please. A load of thanks in advance. Kind Regards, Miqdad
I can't speak for the NestJS vs ExpressJS discussion, but I can give a viewpoint on databases.
The main thing to consider around database choice is what "shape" the data will be in, and the kind of read/write patterns you expect of that data. The blog example shows up so much for a DBMS like MongoDB because it showcases what NoSQL / document storage is very scalable and performant in: mostly isolated documents with a few views / ways to order and filter them. In your case, I can imagine a number of "relations" already, which suggests a more traditional SQL solution would work well: you have restaurants, each with maybe a few menus (regular, gluten-free, etc.), with menu items in them, which have different prices over time (25% discount on Christmas food just after Christmas, 50% off pizzas on Wednesdays). Then there's a whole different set of "relations" for people ordering, like showing them past orders, which need to refer to the restaurant, plus credit card transaction information for refunds and so on. That to me suggests PostgreSQL, which will scale quite well if your database design is okay.
PostgreSQL also offers extensions that are just amazing for your use case. https://postgis.net/ for example will let you query for restaurants based on location (see the sketch below), without the big cost that comes from constantly using something like the Google Maps API to work out which restaurants are near the person ordering. Partitioning and window functions will be great for your own internal use too, like answering questions such as "What types of takeaways perform the best for us, Italian, Mexican?" or, in combination with PostGIS, "What kinds of takeaways do we need to market to, to improve our selection?".
While these things can all be implemented in MongoDB, you tend to lose some of the convenience of ACID or have to deal with things like eventual consistency, which requires more thinking on the part of your engineers. PostgreSQL offers decent (if more complex) scalability and redundancy solutions, is honestly very well proven, and plenty of documentation exists on optimising queries.
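To illustrate the PostGIS point above, a minimal sketch with a hypothetical restaurants table (names and coordinates are illustrative only):

```sql
CREATE EXTENSION IF NOT EXISTS postgis;

CREATE TABLE restaurants (
    id       bigserial PRIMARY KEY,
    name     text NOT NULL,
    cuisine  text,
    location geography(Point, 4326) NOT NULL
);
CREATE INDEX restaurants_location_idx ON restaurants USING gist (location);

-- Restaurants within 3 km of the customer, nearest first.
SELECT name, cuisine,
       ST_Distance(location, ST_SetSRID(ST_MakePoint(-0.1276, 51.5072), 4326)::geography) AS metres
FROM restaurants
WHERE ST_DWithin(location, ST_SetSRID(ST_MakePoint(-0.1276, 51.5072), 4326)::geography, 3000)
ORDER BY metres;
```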
Hello, I build microservice systems using Angular and Spring (Java), so I can't help with your back-end choice, BUT I definitely advise you to use a NoSQL database: MongoDB, or even Cassandra if you're looking for extreme scalability with no single point of failure. Anyway, NoSQL is much faster than SQL (in your case a PostgreSQL DB). Everything you want to do with SQL can also be done with NoSQL (though not the opposite, of course). I also advise you to use Docker containers plus Kubernetes to orchestrate them if you need scalability and replication; that way your app can auto-scale in case your user numbers grow. Best of luck.
About PostgreSQL vs MongoDB: short answer, both are great. Choose what you like the most. Only if you expect millions of users would I lean toward MongoDB.
I need to add a DBMS to my stack, but I don't know which. I'm tempted to learn SQLite since it would be useful to me with its focus on local access without concurrency. However, doing so feels like I would be defeating the purpose of trying to expand my skill set since it seems like most enterprise applications have the opposite requirements.
To be able to apply what I learn to more projects, what should I try to learn? MySQL? PostgreSQL? Something else? Is there a comfortable middle ground between high applicability and ease of use?
You can easily start with SQLite. It's really easy to get going, since it doesn't require you to install any additional software because it is self-contained. It has interfaces in almost any language and also GUIs. Start learning SQL basics and simpler data models and structures; there are many tutorials, including on the official website. From there you will easily migrate to another database. MySQL could be next, since it's easier to learn at first and has more resources available. PostgreSQL is less widespread and more challenging, with fewer resources, but once you have some experience with MySQL it is really easy to learn as well. All these technologies are really widespread and used across the industry, so you won't make a wrong decision with any of them.
A question you might want to think about is: "What kind of experience do I want to gain by using a DBMS?" If your aim is to get experience with SQL and any related libraries and frameworks for your language of choice (Python, I think?), then it doesn't matter too much which you pick. As others have said, SQLite would let you get started very easily and would give you a reasonably standard (if a little basic) SQL dialect to work with.
If your aim is actually to get a bit of "operational" experience, in terms of things like what command-line tools are available as standard for the DBMS, how the DBMS handles multiple databases, when to use multiple schemas vs multiple databases, and some basic privilege management, then I would recommend PostgreSQL. SQLite's simplicity actually avoids most of these experiences, which is not helpful if that is what you hope to learn. MySQL has a few "quirks" in how it manages things like multiple databases, which may lead you to make poorer decisions if you try to carry your experience over to a different DBMS, especially in bigger enterprise roles. PostgreSQL is a happy middle ground here: the ability to start PostgreSQL servers via docker or docker-compose makes the actual day-to-day management pretty easy, while still giving you experience of the kinds of considerations listed above.
At Vital Beats we make use of PostgreSQL, largely because it offers us a happy balance between good management and backup of data, and good standard command line tools, which is essential for us where we are deploying our solutions within Kubernetes / docker, and so more graphical tools are not always appropriate for us. PostgreSQL is also pretty universally supported in terms of language libraries and frameworks, without having to make compromises on how we want to store and layout our data.
MySQL is very popular, easy to install, and also available as a managed service across most popular cloud offerings. The default tooling (such as MySQL Workbench) is certainly a little more polished than what you'll find for Postgres.
I have a lot of data that's currently sitting in a MariaDB database, a lot of tables that weigh 200gb with indexes. Most of the large tables have a date column which is always filtered, but there are usually 4-6 additional columns that are filtered and used for statistics. I'm trying to figure out the best tool for storing and analyzing large amounts of data. Preferably self-hosted or a cheap solution. The current problem I'm running into is speed. Even with pretty good indexes, if I'm trying to load a large dataset, it's pretty slow.
Druid could be an amazing solution for your use case. My understanding (and assumption) is that you are looking to export your data from MariaDB for an analytical workload. Druid can be used as a time series database as well as a data warehouse, and it can be scaled horizontally as your data grows. It's pretty easy to set up in any environment (cloud, Kubernetes, or a self-hosted *nix system). Some important features that make it a good fit for your use case:
1. It can do streaming ingestion (Kafka, Kinesis) as well as batch ingestion (files from local or cloud storage, or databases like MySQL and Postgres); in your case MariaDB, which uses the same drivers as MySQL.
2. It is a columnar database, so you query only the fields that are required, which automatically makes queries faster.
3. Druid intelligently partitions data by time, and time-based queries are significantly faster than in traditional databases.
4. Scale up or down by just adding or removing servers, and Druid automatically rebalances; its fault-tolerant architecture routes around server failures.
5. It provides an amazing centralized UI to manage data sources, queries, and tasks.
Hi all. I am an informatics student, and I need to build a simple website for a friend. I am planning to build it using Node.js and Mongoose, since I have already done a project using these technologies. I also know SQL, and I have used PostgreSQL and MySQL previously.
The website will show possible travel destinations and local transportation options. The database is used to store information about travel, so only the admin will manage the content (especially photos), while clients will see the content uploaded by the admin. I am planning to use Mongoose because it is very simple and efficient for this project. Please give me your opinion about this choice.
Your requirements seem nothing special. On the other hand, MongoDB is commonly used with Node. You could use Mongo without defining a schema, but does that actually give you any benefit? Also, note that development speed matters. In most cases an RDBMS is the best choice. Learn and use Postgres for life!
The use case you are describing would benefit from a headless CMS such as Contentful. You can also go for Strapi with a database of your choice, but there you would have to host Strapi and the underlying database (if not using SQLite) yourself. If you want to use Strapi, you can ease your work by using something like PlanetScale as the backing database.
SQL is not so good at querying lat/long out of the box; you might need additional tools for that, like UTM coordinates or Uber's H3.
If you use MongoDB, it supports 2d coordinate queries out of the box.
Any database will be a great choice for your app, which is less of a technical challenge and more about great content. Go for it; the geographical search features may actually be handy for you.
MongoDB and Mongoose are commonly used with Node.js, and the use case doesn't seem to require any special considerations as of now. However, using MongoDB will allow you to easily expand and modify your use case in the future.
If not MongoDB, then my second choice would be PostgreSQL. It's a general-purpose database with jsonb support (if you need it) and lots of resources online. Nobody ever got fired for choosing PostgreSQL.
Any database engine should work well but I vote for Postgres because of PostGIS extension that may be handy for travel related site. There's nothing special about your requirements.
Hi, Maxim! Most likely, the site is almost ready, but we would like to share our development with you: https://falcon.web-automation.ru/ This is a constructor for web applications. With it, you can create almost any site, with different roles that have different levels of access to information and different functionality. The platform is managed via SQL; knowing SQL, you will be able to change the business logic as necessary during further project maintenance. We will be glad to hear your feedback about the platform.
I am going to work on a real estate project and have to decide on a database. Now, SQL databases can be very efficient if appropriately designed. More relations between the data and less redundancy. But with a #NoSQL database, the development time is reduced, and it is easy to query. Since this is my first time working on the real estate domain, I would like to pick a database that would be efficient in the long run.
I recommend PostgreSQL as it’s the most powerful out of the 3 databases you mentioned. It supports JSON objects so you can mimic the MongoDB functionality, but I would also argue that SQL is actually quite powerful and in many cases significantly easier to work with than with NoSQL databases.
Stay away from foreign keys; keep it fast and simple. Define your data structures well in advance. Try to model your data structures based on your system's vision, based on where it's going and not solely on what you currently need it to do. This will help you avoid drastic changes to your database after your system is launched. Populate the database with fake data and run tests. PostgreSQL allows you to create views from multiple tables. Try to create those views and make sure you can easily build useful views from multiple tables. Run an EXPLAIN on those view queries to make sure you created your indexes correctly. Make sure it's fast!
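A minimal sketch of that advice, with hypothetical real-estate tables (and, matching the suggestion above, no foreign key constraints):

```sql
CREATE TABLE agents (
    id   bigserial PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE listings (
    id       bigserial PRIMARY KEY,
    agent_id bigint NOT NULL,          -- joined by convention, no FK constraint
    title    text NOT NULL,
    city     text NOT NULL,
    price    numeric NOT NULL,
    status   text NOT NULL DEFAULT 'active'
);
CREATE INDEX listings_city_price_idx ON listings (city, price);

-- A view over multiple tables, as suggested.
CREATE VIEW active_listings AS
SELECT l.id, l.title, l.price, l.city, a.name AS agent_name
FROM listings l
JOIN agents   a ON a.id = l.agent_id
WHERE l.status = 'active';

-- After loading fake data, confirm the planner uses your indexes.
EXPLAIN ANALYZE
SELECT * FROM active_listings WHERE city = 'Austin' ORDER BY price;
```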
Any of those three databases are going to be efficient, scalable, and reliable in the long term if you configure and use them correctly. They all also have solid hosting solutions.
All things being equal, I would agree with other posters that Postgres is my preference among the three, but there are caveats.
MongoDB and MySQL have better support for multi-region replication in your big three cloud environments. Microsoft recently bought Citus Data, which was a best-in-class Postgres replication solution, so Azure might be the only one I trust to provide cross-region replication at the moment.
If you have a single region deployment and are on AWS, I can't recommend Aurora Postgres highly enough. It's a very good implementation and extremely performant.
I'll second another piece of advice: PostgreSQL's JSON columns are a dream when it comes to productivity, and I use them frequently with our Rails application. In these cases, no migration is required to change the schema. We store payloads with dozens or hundreds of keys, and performance has not been an issue. We also have a lot of relational tables, so the joins we get with SQL are very important to us and hard to replicate with a NoSQL solution.
That really depends on where you see your application in the long run. For any application, any of those choices is excellent. You could argue about good support for JSON binaries, but even MySQL has excellent support for that in its latest versions.
In the long run, when your application gets hundreds of thousands of requests per second, you might start thinking about how many inputs (writes) you will have in the database compared to outputs (reads). PostgreSQL is excellent at giving you outputs, but table corruption can happen when you start receiving that massive number of inputs (which was the reason Uber switched from Postgres to MySQL).
On our Ops Platform at CTO.ai, we decided to use Postgres because we need a reliable and agile way to send output to our users, so that was our best choice in the long run for our product.
Fauna is a serverless database where you store data as JSON. It also has a built-in HTTP GraphQL interface with a full authentication & authorization layer, which means you can skip your backend and call it directly from the frontend. Because you can write data-transformation functions within Fauna in its own language, FQL, we're getting a blazing-fast application.
Fauna also takes care of scaling and backups (all data is sharded across three different locations on the globe). That means we can fully focus on writing business logic and don't have to worry about infrastructure anymore.
As an advanced user, I prefer Postgres over MySQL. MySQL was the first database I learned at my institute, and I always had to go through that infamous date-and-time dilemma many Java devs know. Both are adequate for a small project. When I worked on a project with date- and time-intensive data, I spent a lot of time dealing with conversions and transitions, which left me frustrated. I tried Postgres to see how well it could perform. To my surprise, everything became a breeze, and the transactions were faster too. I've been using Postgres ever since, and no more dilemma.
We started using PostgreSQL because there's no need to upgrade to an enterprise plan to access certain essential features. Postgres is essentially plug-and-play; you download it, install it, and there you go!
Another benefit of using Postgres is that you get to use SQL (Structured Query Language)—which isn't for everyone, but I enjoy how flexible and versatile it is.
Postgres also has point-in-time recovery, and you can export backups wherever you want. This means you can restore data from any given point in time: if you delete something accidentally, you can go back in time and grab that data without restoring the whole database.
Not to mention Postgres is remarkably fast with several thorough benchmarks comparing it to MongoDB, where Postgres mostly came out on top.
As a startup, managing my own database, backups, and even the schemas/migrations is all overhead. Next to that, I needed both backend and frontend ways to write to the database. With Firebase this is possible, which saved us some time: some API calls were not needed because I could fetch data directly in the front end.
Offline support and realtime data updates are also supported out of the box, so there is no need to write your own websockets.
Once the startup grows, moving to a relational database might make sense. But in a pre-product-market-fit startup, Firebase is a good, and cheaper, fit!
The pricing model of Firebase Firestore is a bit risky, but it saves a lot of time in getting to market quickly.
We will be getting data in the form of CSVs. Because the data in a CSV is highly structured, it will be easy to create schemas, and it works well in a SQL database as opposed to NoSQL. For a SQL database, both MySQL and Postgres are very viable options. Both are highly performant, definitely enough for our application, even if we needed to scale drastically. Postgres does include some extra features over MySQL, such as table inheritance and function overloading; however, these extra features are not advantageous for our database use case. Because both databases suit our use case perfectly, we chose MySQL simply because it is more familiar tech within our team.
One of our biggest technical pillars is to "let the pros manage it", so we settled on Heroku PostgreSQL to manage our SQL cluster. We can take advantage of the free tier, and requests will be fast since it is integrated into Heroku. PostgreSQL also supports full-text search, which can come in handy when manually searching through the tables.
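A minimal sketch of that full-text search feature (hypothetical table; the generated column requires PostgreSQL 12+):

```sql
CREATE TABLE articles (
    id    bigserial PRIMARY KEY,
    title text NOT NULL,
    body  text NOT NULL,
    -- tsvector kept up to date automatically by the database
    fts   tsvector GENERATED ALWAYS AS (to_tsvector('english', title || ' ' || body)) STORED
);
CREATE INDEX articles_fts_idx ON articles USING gin (fts);

-- Search, ranked by relevance.
SELECT id, title
FROM articles
WHERE fts @@ websearch_to_tsquery('english', 'full text search')
ORDER BY ts_rank(fts, websearch_to_tsquery('english', 'full text search')) DESC;
```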
All the benefits of relational joins and constraints, with JSON field types in Postgres to allow for flexibility like mongo. Objection ORM makes query building seamless and abstracts away a lot of complexity of SQL queries.
MongoDB tends to get slow with scale and requires a lot of code to maintain consistency across collections as foreign keys and other constraints are harder to implement. PostgreSQL also has a vibrant community with battle tested stability and horizontal scalability when needed.
I was looking into PostgreSQL for a database option solely for the reason that it was popular, had good community support, and was used by many companies planning to develop social media platforms similar to Calosmic.
However, I was very unfamiliar with relational databases and had only gotten acquainted with the basics of column-family database models with technologies like SQLite3.
Furthermore, I had already been using MongoDB, a document-based database, in a previous project so I was looking for options similar to the aforementioned technology.
Last but not least, I wasn't all too into having to manage my database; I wanted to have a place to store my data, and be able to effectively query, and mutate the data without the hassle of learning SQL or maintaining an entire database. I found out about FaunaDB a couple of weeks ago and was very excited about the native GraphQL support, a combination of both document-based and relational database models, and the low-maintenance structure of the database. I am currently experimenting with using FaunaDB in my stack :)
- One disadvantage I noticed while using FaunaDB and GraphQL is the lack of certain features that one expects when using the latter. Even though FaunaDB has native support for GraphQL it seems as if it's missing numerous features that are commonplace in the language such as unions and interfaces.
MongoDB's document-oriented paradigm is nicely suited to the results of our ML model. We felt that this compatibility offered some time savings over figuring out and implementing an extensive data formatting and processing system. MongoDB's flexible schemas (due to it being non-relational) were also attractive as a source of additional agility for our development process. The MongoDB ecosystem also has great GUI tools to simplify testing.
Backend:
- Considering that our main app functionality involves data processing, we chose Python as the programming language because it offers many powerful math libraries for data-related tasks. We will use Flask for the server due to its good integration with Python. We will use a relational database because it has good performance and we are mostly dealing with CSV files that have a fixed structure. We originally chose SQLite, but after realizing the limitations of file-based databases, we decided to switch to PostgreSQL, which has better compatibility with our hosting service, Heroku.
Pros of InfluxDB
- Time-series data analysis (59)
- Easy setup, no dependencies (30)
- Fast, scalable & open source (24)
- Open source (21)
- Real-time analytics (20)
- Continuous Query support (6)
- Easy Query Language (5)
- HTTP API (4)
- Out-of-the-box, automatic Retention Policy (4)
- Offers Enterprise version (1)
- Free Open Source version (1)
Pros of PostgreSQL
- Relational database (763)
- High availability (510)
- Enterprise class database (439)
- SQL (383)
- SQL + NoSQL (304)
- Great community (173)
- Easy to set up (147)
- Heroku (131)
- Secure by default (130)
- PostGIS (113)
- Supports Key-Value (50)
- Great JSON support (48)
- Cross platform (34)
- Extensible (33)
- Replication (28)
- Triggers (26)
- Multiversion concurrency control (23)
- Rollback (23)
- Open source (21)
- Heroku Add-on (18)
- Stable, Simple and Good Performance (17)
- Powerful (15)
- Let's be serious, what other SQL DB would you go for? (13)
- Good documentation (11)
- Scalable (9)
- Free (8)
- Reliable (8)
- Intelligent optimizer (8)
- Transactional DDL (7)
- Modern (7)
- One-stop solution for all things SQL no matter the OS (6)
- Relational database with MVCC (5)
- Faster Development (5)
- Full-Text Search (4)
- Developer friendly (4)
- Excellent source code (3)
- Free version (3)
- Great DB for Transactional system or Application (3)
- Relational database (3)
- Search (3)
- Open-source (3)
- Text (2)
- Full-text (2)
- Can handle up to petabytes worth of size (1)
- Composability (1)
- Multiple procedural languages supported (1)
- Native (0)
Cons of InfluxDB
- Instability (4)
- Proprietary query language (1)
- HA or Clustering is only in paid version (1)
Cons of PostgreSQL
- Table/index bloat (10)