PostgreSQL

Shared insights

PostgreSQL is responsible for nearly all data storage, validation and integrity. We leverage constraints, functions and custom extensions to ensure we have only one source of truth for our data access rules and that those rules live as close to the data as possible. Call us crazy, but ORMs only lead to ruin and despair.
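To make this concrete, here is a minimal sketch of keeping a rule inside the database itself, via a CHECK constraint and a trigger function, rather than in an ORM layer; the table, rule and connection string are illustrative assumptions, not the poster's actual schema.

```python
# Sketch: the validation rule lives with the data (CHECK constraint + trigger),
# so every client, ORM or not, is bound by it. Schema and DSN are assumptions.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS accounts (
    id            bigserial PRIMARY KEY,
    email         text NOT NULL UNIQUE,
    balance_cents bigint NOT NULL CHECK (balance_cents >= 0)
);

CREATE OR REPLACE FUNCTION forbid_email_change() RETURNS trigger AS $$
BEGIN
    IF NEW.email <> OLD.email THEN
        RAISE EXCEPTION 'email is immutable';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS accounts_email_guard ON accounts;
CREATE TRIGGER accounts_email_guard
    BEFORE UPDATE ON accounts
    FOR EACH ROW EXECUTE FUNCTION forbid_email_change();  -- PostgreSQL 11+
"""

with psycopg2.connect("dbname=app user=app") as conn, conn.cursor() as cur:
    cur.execute(DDL)
```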

1 upvote·24.3K views
Senior Architect at Rainforest QA·

We recently moved our main applications from Heroku to Kubernetes. The 3 main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).

We prefer using managed services, so we are using Google Kubernetes Engine with Google Cloud SQL for PostgreSQL for our PostgreSQL databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.

Read the blog post to go more in depth.

Why Rainforest QA Moved from Heroku to Google Kubernetes Engine (rainforestqa.com)
14 upvotes·1 comment·582.5K views
Dev Suryawanshi · January 19th 2020 at 10:11AM

Great information
CEO at Scrayos UG (haftungsbeschränkt)·

We primarily use MariaDB but use PostgreSQL as part of GitLab, Sentry and Nextcloud, which (initially) forced us to use it anyway. While this wasn't much of a decision, because we didn't really have one (ha ha), we learned to love the perks and advantages of PostgreSQL. PostgreSQL's extension system makes it even more flexible than a lot of other SQL-based DBs (which only offer stored procedures), and the additional JOIN options, the enhanced role management and the different authentication options came in really handy when doing manual maintenance on the databases.
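As one small illustration of those extra JOIN options, a LATERAL join lets the right-hand side of the join reference columns from the left-hand side; the tables and connection string below are made up for the example.

```python
# Sketch of a LATERAL join: fetch the 3 most recent builds per project.
# Table names, columns and the connection string are illustrative assumptions.
import psycopg2

QUERY = """
SELECT p.name, recent.created_at, recent.status
FROM projects p
LEFT JOIN LATERAL (
    SELECT b.created_at, b.status
    FROM builds b
    WHERE b.project_id = p.id
    ORDER BY b.created_at DESC
    LIMIT 3
) AS recent ON true;
"""

with psycopg2.connect("dbname=ops user=ops") as conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for row in cur.fetchall():
        print(row)
```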

11 upvotes·233.9K views
Software Engineer ·
Needs advice
on
Python
and
Tornado

Investigating Tortoise ORM and GINO ORM...

I need to introduce an async ORM into the current stack: Tornado with an asyncio loop, AIOHTTP, with PostgreSQL and MSSQL. This project revolves heavily around realtime, and due to the realtime requirements, blocking during database access is not acceptable.

Considering that these ORMs are both young projects, I wondered if anybody has experience with a similar stack and these async ORMs?

6 upvotes·141.5K views
Replies (8)
Founder at AlterEstate·
Recommends
Django

I honestly don't know if this will help, but I'll speak from my similar experience with Django. I had to build a dynamic real-time map for a client so that users could select their seats for a specific event. In the first MVP, tickets sold out in less than 10 minutes, with over 400 users online at the same time trying to buy seats. (I mention this to give context for why I used it.)

We achieved this with Django, Socket.IO, PostgreSQL and, for the frontend, React. With the power of Socket.IO we managed to achieve a real-time connection to the database with a really low response time.

The drawback of Django is that, as everyone knows, Django is not really good at real-time work on its own, but in combination with Socket.IO this can be achieved really well.
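A rough sketch of what such a Socket.IO layer next to Django could look like, using python-socketio over ASGI; the event names, payloads and module layout are invented for the example, not the poster's actual code.

```python
# Sketch: python-socketio served via ASGI alongside Django, pushing seat
# updates to every connected browser. Event names and payloads are assumptions.
import socketio

sio = socketio.AsyncServer(async_mode="asgi", cors_allowed_origins="*")
app = socketio.ASGIApp(sio)  # run with e.g.: uvicorn seats:app


@sio.event
async def connect(sid, environ):
    print("client connected:", sid)


@sio.event
async def reserve_seat(sid, data):
    # In the real app this would first persist the reservation in PostgreSQL,
    # then fan the confirmed state out to all clients watching the map.
    await sio.emit("seat_update", {"seat": data["seat"], "status": "taken"})
```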

I hope that in some way I helped. 👍🏼

7 upvotes·33K views
CTO at Refinery Labs·
Recommends
asyncio

I asked a friend who's a 10x Python engineer for some help getting you an answer here. He recommends:

I don't think there are any robust ORMs for asyncio. There's Gino but I haven't evaluated it. I would honestly use an asyncio SQL connector, manually write the request, and implement a mapping layer myself.
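A minimal sketch of that approach, using asyncpg as the asyncio connector and a hand-written mapping layer; the events table, columns and DSN are assumptions made up for the example.

```python
# Sketch: no ORM, just an asyncio-native driver (asyncpg) plus a small
# hand-rolled mapping layer. Table, columns and DSN are assumptions.
import asyncio
from dataclasses import dataclass

import asyncpg


@dataclass
class Event:
    id: int
    name: str
    starts_at: str


def to_event(record: asyncpg.Record) -> Event:
    # The whole "mapping layer": one converter function per entity.
    return Event(id=record["id"], name=record["name"], starts_at=str(record["starts_at"]))


async def recent_events(pool: asyncpg.Pool, limit: int = 10) -> list[Event]:
    # fetch() awaits on the connection, so the realtime handlers never block.
    rows = await pool.fetch(
        "SELECT id, name, starts_at FROM events ORDER BY starts_at DESC LIMIT $1",
        limit,
    )
    return [to_event(r) for r in rows]


async def main() -> None:
    pool = await asyncpg.create_pool(dsn="postgresql://app@localhost/app")
    print(await recent_events(pool))
    await pool.close()


if __name__ == "__main__":
    asyncio.run(main())
```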

Asyncio is still a bit weird; a lot of libraries hack compatibility layers onto it. It's probably better to use aiohttp alone instead of putting Tornado on top of it.

ORMs can be painful and newly created ones on new platforms are even more painful.

Hope that helps!

7 upvotes·610 views
Senior Software Developer at IT Minds·
Shared insights

At IT Minds we create customized internal or #B2B web and mobile apps. I have a go-to stack that I pitch to our customers, consisting of 3 core areas: 1) a data core #backend, 2) a micro #serverless #backend, and 3) a user client #frontend.

For the Data Core I create a backend using TypeScript and Node.js, with TypeORM connecting to a PostgreSQL database, exposing an action-based API with Apollo GraphQL.

The micro serverless backend, whose purpose is verification for authentication, authorization, logins and the like, is created with Next.js API pages, using MongoDB to store essential information, caching, etc.

Finally, the frontend is built with React using Next.js, TypeScript and Apollo. We create the frontend as a PWA and have an AMP landing page by default.

The Leading Web stack of an IT Minds Senior Developer. - DEV Community 👩‍💻👨‍💻 (dev.to)
12 upvotes·241.9K views
Shared insights

I'm working as one of the engineering leads at RunaHR. As our platform is a SaaS, we thought it'd be good to have an API (we chose Ruby and Rails for this) and a SPA (built with React and Redux) connected to it. We started the SPA with Create React App since it's pretty easy to get started with.

We use Jest as the testing framework and react-testing-library to test React components. In Rails we write tests using RSpec.

Our main database is PostgreSQL, but we also use MongoDB to store some types of data. We started to use Redis for caching and other time-sensitive operations.

We have a couple of extra projects: one is an Employee app built with React Native and the other is an internal back-office dashboard built with Next.js on the client and Python on the backend side.

Since we have different frontend apps, we have found it useful to have Bit to document visual components and utils in JavaScript.

19 upvotes·1.1M views
Needs advice
on
PostgreSQL
and
MongoDB

Hi everybody, I'm developing an application to be used in a gym setting where athletes fill out a health survey and coaches can analyze the results. However, due to the dynamic nature of some aspects of the app and the more static nature of others, I am wondering if/how I would integrate MongoDB with my existing PostgreSQL database. I would like to store things like registrations, license information, and club information in Postgres, while I am thinking about moving things like user surveys, logging, and user settings over to MongoDB. Some fields on the survey are integers, some are large blocks of text, and some are arrays. My thought is that moving that data to MongoDB would give us greater flexibility in adding and removing fields, and it would scale more easily than Postgres. Not to mention it would be easier to organize that kind of data. Is that overkill, or am I approaching this issue the right way? Thank you!

8 upvotes·60.7K views
Replies (4)
Recommends
MongoDB

With PostgreSQL you could easily integrate JSON or array type columns and develop a simple interface to add columns in your application. Handling all the data this way will require some intermediate skill with the PostgreSQL dialect and a mix and match of syntaxes for your analytical queries. You will also need a good design for your backend to handle all this. MongoDB will handle all of this in a more natural way, and I believe it will be more easily integrated with a Node.js backend.
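For reference, a small sketch of what that JSON/array-column route looks like in PostgreSQL; the survey table, columns and connection settings are invented for illustration.

```python
# Sketch: an integer[] column for scores plus a jsonb column for free-form
# answers, queried with a mix of array and JSON operators. Schema is assumed.
import psycopg2
from psycopg2.extras import Json

with psycopg2.connect("dbname=gym user=app") as conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS survey_responses (
            id         bigserial PRIMARY KEY,
            athlete_id bigint NOT NULL,
            scores     integer[] NOT NULL,
            answers    jsonb NOT NULL DEFAULT '{}'
        )
    """)
    cur.execute(
        "INSERT INTO survey_responses (athlete_id, scores, answers) VALUES (%s, %s, %s)",
        (42, [7, 8, 6], Json({"sleep": "poor", "notes": "sore calves"})),
    )
    # Mix of syntaxes: array membership plus a JSONB field lookup.
    cur.execute(
        "SELECT athlete_id FROM survey_responses "
        "WHERE 8 = ANY(scores) AND answers->>'sleep' = %s",
        ("poor",),
    )
    print(cur.fetchall())
```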

6 upvotes·1 comment·22.3K views
BrockHerion · May 7th 2020 at 3:52AM

Thanks for the response. Analytics is definitely a big part of this project and I would agree that Mongo would probably handle it more naturally than Postgres. My backend is a 3-tier Java Spring REST API, but I'm toying with the idea of using GraphQL for managing the more dynamic data coming from Mongo.
CTO at Merryfield·
Recommends
PostgreSQL

You can have your cake and eat it too. If you really need the flexibility of a document store, PostgreSQL's JSONB support allows you to mix and match relational data and document data within the same database/table. You can just as easily run analytical queries against JSONB data in PostgreSQL as you can against "normal" relational data. MongoDB comes with a significant operational overhead and cost (hello replica sets), so unless you really need MongoDB's sharding capabilities (which you shouldn't until you get to extreme scaling numbers), then just stick with PostgreSQL and use JSONB where you need it.
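A rough sketch of that mix-and-match setup: relational columns and a JSONB column in the same table, a GIN index on the document part, and an analytical query over it. The table, fields and connection string are assumptions, not from the original reply.

```python
# Sketch: relational and document data side by side, with a GIN index so the
# JSONB part stays queryable for analytics. Schema and DSN are assumptions.
import psycopg2

with psycopg2.connect("dbname=gym user=app") as conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS surveys (
            id         bigserial PRIMARY KEY,
            athlete_id bigint NOT NULL,      -- relational part
            taken_on   date   NOT NULL,
            answers    jsonb  NOT NULL       -- document part
        );
        CREATE INDEX IF NOT EXISTS surveys_answers_gin
            ON surveys USING gin (answers);
    """)
    # Analytical query over JSONB fields, grouped like any relational column.
    cur.execute("""
        SELECT answers->>'injury_site' AS injury_site, count(*) AS reports
        FROM surveys
        WHERE taken_on >= %s AND answers ? 'injury_site'
        GROUP BY 1
        ORDER BY reports DESC
    """, ("2020-01-01",))
    print(cur.fetchall())
```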

6 upvotes·22.3K views
Needs advice
on
PostgreSQL
and
MongoDB

I need urgent advice from you all! I am making a web-based food ordering platform which includes 3 different ordering methods (dine-in using QR code scanning, take-away, and home delivery) and a table reservation system. We are using React for the front-end, and I need your advice on whether I should use NestJS or ExpressJS for the backend. And regarding the database, which should I use, MongoDB or PostgreSQL? Which combination will be better? PS: we want to follow a microservice architecture, as scalability, reliability, and usability are the most important non-functional requirements. Expert advice is needed, please. A load of thanks in advance. Kind regards, Miqdad

7 upvotes·67.4K views
Replies (3)
Senior DevOps Engineer at Vital Beats·

I can't speak to the NestJS vs ExpressJS discussion, but I can give a viewpoint on databases.

The main thing to consider around database choice is what "shape" the data will be in, and the kind of read/write patterns you expect for that data. The blog example shows up so often for DBMSs like MongoDB because it showcases what NoSQL / document storage is very scalable and performant at: mostly isolated documents with a few views / ways to order and filter them. In your case, I can imagine a number of "relations" already, which suggests a more traditional SQL solution would work well: you have restaurants, they have maybe a few menus (regular, gluten-free etc.), with menu items in them, which have different prices over time (25% discount on Christmas food just after Christmas, 50% off pizzas on Wednesdays). Then there's a whole different set of "relations" for people ordering, like showing them past orders, which need to refer to the restaurant etc., and credit card transaction information for refunds. That to me suggests PostgreSQL, which will scale quite well if your database design is okay.

PostgreSQL also offers you some extensions which are just amazing for your use case. https://postgis.net/ for example will let you query for restaurants based on location, without the big cost that comes from constantly using something like the Google Maps API to work out which restaurants are near to someone ordering. Partitioning and window functions will be great for your own use internally too, for answering questions like "What types of takeaways perform the best for us, Italian or Mexican?" or, in combination with PostGIS, questions like "What kind of takeaways do we need to market to, to improve our selection?"
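A minimal sketch of the kind of PostGIS query described above; the restaurants table, coordinates and radius are invented for the example.

```python
# Sketch: find restaurants within 2 km of the customer, using PostGIS.
# Table name, columns, DSN and coordinates are illustrative assumptions.
import psycopg2

with psycopg2.connect("dbname=food user=app") as conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS postgis;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS restaurants (
            id       bigserial PRIMARY KEY,
            name     text NOT NULL,
            location geography(Point, 4326) NOT NULL
        );
        CREATE INDEX IF NOT EXISTS restaurants_location_gist
            ON restaurants USING gist (location);
    """)
    customer_lng, customer_lat = -0.1276, 51.5072  # example coordinates (London)
    cur.execute("""
        SELECT name
        FROM restaurants
        WHERE ST_DWithin(
            location,
            ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
            2000   -- metres, since we compare geography types
        )
        ORDER BY name
    """, (customer_lng, customer_lat))
    print([row[0] for row in cur.fetchall()])
```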

While these things can all be implemented in MongoDB, you tend to lose some of the convenience of ACID or have to deal with things like eventual consistency, which requires more thinking on the part of your engineers. PostgreSQL offers decent (if more complex) scalability and redundancy solutions, is very well proven, and plenty of documentation exists on optimising queries.

9 upvotes·43K views
Founder at Odix·
Recommends
MongoDB

Hello, I build microservice systems using Angular and Spring (Java), so I can't help with your backend choice, but I definitely advise you to use a NoSQL database: MongoDB, or even Cassandra if you're looking for extreme scalability with no single point of failure. Anyway, NoSQL is much faster than SQL (in your case a PostgreSQL DB). Everything you want to do with SQL can also be done with NoSQL (but not the opposite, of course). I also advise you to use Docker containers plus Kubernetes to orchestrate them if you need scalability and replication; that way your app can auto-scale in case your user numbers go high. Best of luck.

3 upvotes·35.6K views
Senior Fullstack Developer at QUANTUSflow Software GmbH·

Our whole DevOps stack consists of the following tools:

  • GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) for collaborative review and code management tool
  • Git as the underlying revision control system
  • SourceTree as Git GUI
  • Visual Studio Code as IDE
  • CircleCI for continuous integration (automating the development process)
  • Prettier / TSLint / ESLint as code linters
  • SonarQube as quality gate
  • Docker as container management (incl. Docker Compose for multi-container application management)
  • VirtualBox for operating system simulation tests
  • Kubernetes as cluster management for docker containers
  • Heroku for deploying in test environments
  • nginx as web server (preferably used as facade server in production environment)
  • SSLMate (using OpenSSL) for certificate management
  • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
  • PostgreSQL as preferred database system
  • Redis as preferred in-memory database/store (great for caching)

The main reasons we have chosen Kubernetes over Docker Swarm are the following:

  • Key features: easy and flexible installation, clear dashboard, great scaling operations, monitoring as an integral part, great load balancing concepts, and it monitors cluster health and compensates automatically in the event of failure.
  • Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
  • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
  • Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
  • Scalability: All-in-one framework for distributed systems.
  • Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.
28 upvotes·2 comments·2.6M views
Larry Gryziak · April 30th 2020 at 6:34PM

So why is your deployment different for your (Heroku) test/dev and your stage/production?

Simon Reymann · May 1st 2020 at 10:32AM

When it comes to testing our web app we do not demand great computational resources and need a very simple, convenient and fast PaaS solution for deploying the app to our testers. In production, though, the demand for computational resources can rise very fast. With Amazon we are able to control that in a better way.
Needs advice
on
PostgreSQL
MySQL
and
MongoDB

Hello,

I am trying to design an online ordering app similar to DoorDash or Uber Eats. I'm having a hard time trying to finalize which database (or mixture of databases) to use. I'm leaning towards a relational database like MySQL or PostgreSQL. But when the application grows, I don't want to have to join 20 tables to get the data I need. Any help would be greatly appreciated. Thank you for your time.

7 upvotes·164.7K views
Replies (2)
CTO at Voila Cab's·
Recommends
MySQL

Hello Suhas. We built our product www.voilacabs.com, which is along the same lines as yours, but we used a combination of MySQL and MongoDB. When using MySQL, I would recommend the following:

1. Use MySQL for storage only; for realtime updates we recommend MongoDB.
2. Don't try to join more than 3 tables; the moment you reach 3 joins, stop there and try to denormalize the database.
3. Never, or very rarely, use auto-increments; we recommend using UUIDs instead. If you are using PostgreSQL, then I would suggest you check https://instagram-engineering.com/sharding-ids-at-instagram-1cf5a71e5a5c. There is a stored procedure that generates unique keys instead of auto-increment keys, and that will help you shard or cluster the database without sync errors (a rough sketch of that ID layout follows below).
4. Also, for MongoDB, if you can put a layer of Redis cache in front, that will boost your API performance under large loads.
5. Use Node.js as the programming language, as it works asynchronously.
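For reference, a rough Python sketch of the 64-bit ID layout described in that Instagram post (a timestamp, a logical shard id and a per-shard sequence); the epoch and shard numbers here are assumptions, and the linked article implements this as a PL/pgSQL function rather than in application code.

```python
# Rough sketch of the Instagram-style 64-bit ID layout referenced above:
# 41 bits of milliseconds since a custom epoch, 13 bits of logical shard id,
# 10 bits of a per-shard sequence. Epoch and shard values are assumptions.
import itertools
import time

CUSTOM_EPOCH_MS = 1_546_300_800_000   # 2019-01-01T00:00:00Z, arbitrary choice
_sequence = itertools.count()


def next_id(shard_id: int) -> int:
    millis = int(time.time() * 1000) - CUSTOM_EPOCH_MS
    seq = next(_sequence) % 1024       # 10-bit rolling sequence
    return (millis << 23) | ((shard_id % 8192) << 10) | seq


print(next_id(shard_id=5))
```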

Let me know if you still need any suggestions. Thanks & regards, Rupen Makhecha, CTO @ Voila Cab's, www.voilacabs.com

4 upvotes·3 comments·135.9K views
Raunaq Gupta · June 5th 2020 at 7:34AM

If you use MySQL and want a better sharding solution on top of it, then I suggest looking into Vitess. You can also explore https://planetscale.com as a DBaaS alternative to managing those services yourself.

Rupen Makhecha · June 5th 2020 at 8:05AM

Great, I think Vitess sounds promising. Thank you for this nice recommendation.

Raunaq Gupta · June 5th 2020 at 8:37AM

Disclaimer: I work for PlanetScale, where we maintain Vitess as well. We also have a beta feature currently in the works where someone can try out Vitess on top of their existing MySQL infra without making any changes. Please feel free to contact me if you'd be interested in testing it out.
Cofounder at Wanderloop·
Recommends
MySQL

I would recommend a mixture of MySQL and MongoDB. Using MongoDB for the Content Distribution Network (CDN) will make it easy to store high-volume incoming data. MySQL is recommended for the business logic. PostgreSQL is not recommended, since you will be faced with inefficient database replication features and constant migration from one PostgreSQL version to another.

1 upvote·2 comments·136.1K views
aleyrizvi · May 26th 2020 at 4:07PM

How about MySQL 8 with the X DevAPI protocol? Basically it offers a document store right out of the box.

Rafey Iqbal Rahman · May 27th 2020 at 7:12AM

The X Protocol is good.