
PostgreSQL

A powerful, open source object-relational database system

What is PostgreSQL?

PostgreSQL is an advanced object-relational database management system that supports an extended subset of the SQL standard, including transactions, foreign keys, subqueries, triggers, user-defined types and functions.
PostgreSQL is a tool in the Databases category of a tech stack.
PostgreSQL is an open source tool with 6.5K GitHub stars and 2.2K GitHub forks. Here's a link to PostgreSQL's open source repository on GitHub.
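
As a quick, hedged illustration of a few of those features (transactions and foreign keys), here is a minimal Python sketch using the psycopg2 driver; the database name and tables are hypothetical:

```python
# Minimal sketch: a transaction and a foreign key in PostgreSQL via psycopg2.
# Connection string and table names are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # assumed local database
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS customers (
            id serial PRIMARY KEY,
            name text NOT NULL
        );
        CREATE TABLE IF NOT EXISTS orders (
            id serial PRIMARY KEY,
            customer_id integer NOT NULL REFERENCES customers (id),  -- foreign key
            total numeric NOT NULL CHECK (total >= 0)
        );
    """)
    cur.execute("INSERT INTO customers (name) VALUES (%s) RETURNING id", ("Ada",))
    customer_id = cur.fetchone()[0]
    cur.execute("INSERT INTO orders (customer_id, total) VALUES (%s, %s)",
                (customer_id, 42))
# Leaving the `with conn` block commits the transaction; an exception rolls it back.
conn.close()
```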

Who uses PostgreSQL?

Companies
4044 companies reportedly use PostgreSQL in their tech stacks, including Uber, Netflix, and Spotify.

Developers
20779 developers on StackShare have stated that they use PostgreSQL.

PostgreSQL Integrations

Slick, Datadog, Amazon DynamoDB, JSON, and Sequelize are some of the popular tools that integrate with PostgreSQL. Here's a list of all 201 tools that integrate with PostgreSQL.

Why do developers like PostgreSQL?

Here's a list of reasons why companies and developers use PostgreSQL.
Private Decisions about PostgreSQL
Private to your company

Here are some stack decisions, common use cases and reviews by members with PostgreSQL in their tech stack.

Mojolicious, Perl, Redmine, Redis, AWS CodeCommit, Amazon SES, PostgreSQL, Postman, Docker, jQuery, VirtualBox, Sublime Text, GitHub, Git, GitLab CI, @DBIx::Class, @metacpan, @TheBat

Rails, Sidekiq, PostgreSQL, Redis, MongoDB, Vue.js, vuex, jQuery, React, Redux, Yarn, #Bulma.io, #Font-awesome

I'm building a new process management tool. I decided to build with Rails as my backend, using Sidekiq for background jobs. I chose to work with these tools because I've worked with them before and know that they're able to get the job done. They may not be the sexiest tools, but they work and are reliable, which is what I was optimizing for. For data stores, I opted for PostgreSQL and Redis. Because I'm planning on offering dashboards, I wanted a SQL database instead of something like MongoDB, which might work early on but would become difficult to use as soon as I want to run aggregate queries.
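
For illustration only, here is the kind of dashboard-style aggregate query being described; the author's stack is Rails/ActiveRecord, and the table, column, and database names below are invented:

```python
# Hypothetical dashboard-style aggregate query; table and column names are invented.
import psycopg2

conn = psycopg2.connect("dbname=process_tool")  # assumed database name
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT assignee, status, count(*) AS tasks, avg(duration_hours) AS avg_hours
        FROM tasks
        WHERE created_at >= now() - interval '30 days'
        GROUP BY assignee, status
        ORDER BY tasks DESC
    """)
    for row in cur.fetchall():
        print(row)
conn.close()
```

Grouping, window functions, and joins like this are where a relational schema tends to pay off compared with aggregating over free-form documents.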

On the front-end I'm using Vue.js and vuex in combination with #Turbolinks. In effect, I want to render most pages on the server side, with key interactions being managed by Vue.js. This is the first project I'm working on where I've explicitly decided not to include jQuery. I have found React and Redux more confusing to set up. I appreciate the opinionated approach of the Vue.js community and that things just work together the way I'd expect. To manage my JavaScript dependencies, I'm using Yarn.

For CSS frameworks, I'm using #Bulma.io. I really appreciate its minimal nature and that there are no hard JavaScript dependencies. And to add a little spice, I'm using #Font-awesome.

Luke Hamilton
Sr. Engineer at StackShare · 6 upvotes · 2K views
Prisma, GraphQL, PostgreSQL, Prisma Cloud

I used Prisma for creating a ready-to-use GraphQL API in front of my PostgreSQL database. It allowed me to get up and running very quickly because I didn’t need to worry about writing the logic that interacts with the database. You simply define your data model using the GraphQL schema definition language and then use the Prisma CLI tool to deploy your changes. Based on your data model Prisma will generate a ready-to-use GraphQL API with CRUD functionality. Additionally the API includes filtering, sorting and pagination out of the box. You can then use Prisma Cloud to manage your data.
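
As a rough sketch of what consuming such a generated API can look like, the snippet below posts a GraphQL query from Python; the endpoint URL, type name, and the filter/ordering argument names are assumptions based on Prisma's documented conventions, not taken from the author's project:

```python
# Hedged sketch: querying a generated GraphQL CRUD API over HTTP.
# Endpoint, type name (posts), and argument names are illustrative assumptions.
import requests

query = """
query {
  posts(where: { published: true }, orderBy: createdAt_DESC, first: 10, skip: 0) {
    id
    title
  }
}
"""
resp = requests.post("http://localhost:4466/", json={"query": query})  # assumed endpoint
resp.raise_for_status()
print(resp.json()["data"]["posts"])
```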

Zach Coffin
Software Developer · 4 upvotes · 8.3K views
PostgreSQL, MongoDB

I started using PostgreSQL because I started a job at a company that was already using it as well as MongoDB. The main difference between the two, from my perspective, is that Postgres columns are a chore to add/remove/modify, whereas you can throw whatever you want into a Mongo collection. And personally I prefer the query language of Postgres over that of Mongo, but they both have their merits. Maybe someday I'll be a DBA and have more insight to share, but for now that's my 2 cents.
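
For context, this is the kind of explicit schema-change DDL being referred to; a minimal sketch using psycopg2, with invented table and column names:

```python
# Explicit DDL for schema changes in PostgreSQL; table/column names are invented.
import psycopg2

conn = psycopg2.connect("dbname=app")
with conn, conn.cursor() as cur:
    cur.execute("ALTER TABLE users ADD COLUMN nickname text")         # add a column
    cur.execute("ALTER TABLE users RENAME COLUMN nickname TO alias")  # modify (rename) it
    cur.execute("ALTER TABLE users DROP COLUMN alias")                # remove it
conn.close()
```

The trade-off is the usual one: the schema is a chore to change, but it also documents and enforces the shape of the data.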

James Fernandez
Full Stack Developer · 1 upvote · 62 views
PostgreSQL

I started using PostgreSQL because

Ramon Rodrigues
Developer at Runtime Revolution · 1 upvote · 220 views
PostgreSQL

I started using PostgreSQL because it's really simple to use with Rails.

Public Decisions about PostgreSQL

Here are some stack decisions, common use cases and reviews by companies and developers who chose PostgreSQL in their tech stack.

Jeyabalaji Subramanian
CTO at FundsCorner · 25 upvotes · 883.2K views
MongoDB, PostgreSQL, MongoDB Stitch, Node.js, Amazon SQS, Python, SQLAlchemy, AWS Lambda, Zappa

Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:

  • The data replication must be near real-time, yet it should NOT impact the production database.
  • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient.

Based on the above criteria, we selected the following tools to perform the end-to-end data replication:

We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.

In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload & mirror the DB changes onto the target data warehouse. We implemented the source-to-target data translation by modelling the target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as event-driven & horizontally scalable Lambda functions is dead easy.
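
The FundsCorner service itself isn't shown, but a hedged sketch of such a consumer might look like the following, assuming boto3 for SQS and SQLAlchemy Core for the warehouse tables; the queue URL, payload shape, and table layout are all invented:

```python
# Hedged sketch of an SQS -> PostgreSQL mirroring loop using boto3 and SQLAlchemy Core.
# Queue URL, payload format, and table layout are invented for illustration.
import json

import boto3
from sqlalchemy import Column, MetaData, String, Table, create_engine
from sqlalchemy.dialects.postgresql import JSONB, insert

engine = create_engine("postgresql+psycopg2://warehouse:secret@localhost/dw")
metadata = MetaData()
documents = Table(
    "documents", metadata,
    Column("doc_id", String, primary_key=True),
    Column("body", JSONB),
)
metadata.create_all(engine)

sqs = boto3.client("sqs", region_name="ap-south-1")
queue_url = "https://sqs.ap-south-1.amazonaws.com/123456789012/mongo-changes"  # placeholder

while True:
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        change = json.loads(msg["Body"])  # e.g. {"op": "insert", "id": "...", "doc": {...}}
        with engine.begin() as conn:
            if change["op"] in ("insert", "update", "replace"):
                stmt = insert(documents).values(doc_id=change["id"], body=change["doc"])
                stmt = stmt.on_conflict_do_update(
                    index_elements=["doc_id"], set_={"body": stmt.excluded.body}
                )
                conn.execute(stmt)  # idempotent upsert mirrors inserts/updates/replaces
            elif change["op"] == "delete":
                conn.execute(documents.delete().where(documents.c.doc_id == change["id"]))
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

Making the upsert idempotent matters here: SQS can deliver a message more than once, so replaying a change must not corrupt the warehouse.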

In the end, we got to implement a highly scalable, near real-time change data replication service that "just works", and we deployed it to production in a matter of a few days!

Tim Abbott
Founder at Zulip · 23 upvotes · 337.2K views
PostgreSQL, MySQL, Elasticsearch

We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans that required a lot of manual query tweaks.

We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

And then after that, we've just gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us from adding a lot of unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
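
As a hedged illustration (not Zulip's actual schema), the snippet below shows Postgres full-text search and a partial index of the kind described, using psycopg2 and invented table names:

```python
# Illustrative only -- not Zulip's schema. Shows built-in full-text search and a
# partial index, via psycopg2; table and column names are invented.
import psycopg2

conn = psycopg2.connect("dbname=chat")
with conn, conn.cursor() as cur:
    # Full-text search: a GIN index over a tsvector expression, then a ranked lookup.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS messages_search_idx
            ON messages USING gin (to_tsvector('english', body));
    """)
    cur.execute("""
        SELECT id, body
        FROM messages
        WHERE to_tsvector('english', body) @@ plainto_tsquery('english', %s)
        LIMIT 20;
    """, ("database migration",))
    print(cur.fetchall())

    # Partial index: only unread rows are indexed, keeping the index small for the
    # common "unread messages for a user" query.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS unread_by_user_idx
            ON user_messages (user_id) WHERE NOT is_read;
    """)
conn.close()
```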

I can't recommend it highly enough.

Robert Zuber
CTO at CircleCI · 22 upvotes · 737.7K views
MongoDB, PostgreSQL, Redis, GitHub, Amazon S3

We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
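
CircleCI's implementation isn't shown here, but one common fixed-window rate-limiting pattern with Redis looks roughly like this sketch using redis-py; the key names and limits are invented:

```python
# One common fixed-window rate-limit pattern with Redis (redis-py); not CircleCI's
# actual implementation. Key names and limits are invented.
import time

import redis

r = redis.Redis(host="localhost", port=6379)

def allow_request(partner: str, limit: int = 60, window_seconds: int = 60) -> bool:
    """Allow at most `limit` calls per partner per fixed time window."""
    key = f"ratelimit:{partner}:{int(time.time()) // window_seconds}"
    count = r.incr(key)                # atomically bump the counter for this window
    if count == 1:
        r.expire(key, window_seconds)  # let the key expire and clean itself up
    return count <= limit

if allow_request("github"):
    pass  # safe to call the partner API here
```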

When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.

James Cunningham
Operations Engineer at Sentry · 21 upvotes · 238.1K views
Django, Celery, PostgreSQL, Redis, #MessageQueue, #InMemoryDatabases

Sentry started as (and remains) an open-source project, growing out of an error logging tool built in 2008. That original build nine years ago was Django and Celery (Python's asynchronous task queue), with PostgreSQL as the database and Redis as the power behind Celery.
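
As a minimal sketch of that Celery + Redis pairing (not Sentry's actual configuration), this is the canonical way to point a Celery app at a Redis broker; the app, task, and payload are invented:

```python
# Minimal Celery app with Redis as the broker; not Sentry's configuration.
from celery import Celery

app = Celery("sentryish", broker="redis://localhost:6379/0")

@app.task
def record_event(payload: dict) -> None:
    # In a real worker this would persist the error event to PostgreSQL.
    print("processing event", payload.get("message"))

# Enqueue from web-process code; a separate `celery worker` process executes it.
record_event.delay({"message": "something broke"})
```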

We displayed a truly shrewd notion of branding even then, giving the project a catchy name that companies the world over remain jealous of to this day: django-db-log. For the longest time, Sentry’s subtitle on GitHub was “A simple Django app, built with love.” A slightly more accurate description probably would have included Starcraft and Soylent alongside love; regardless, this captured what Sentry was all about.

#MessageQueue #InMemoryDatabases

Tim Nolet
Founder, Engineer & Dishwasher at Checkly · 20 upvotes · 1.1M views
Heroku, Docker, GitHub, Node.js, hapi, Vue.js, AWS Lambda, Amazon S3, PostgreSQL, Knex.js, vuex

Checkly is a fairly young company and we're still working hard to find the correct mix of product features, price and audience.

We are focussed on tech B2B, but I always wanted to serve solo developers too. So I decided to make a $7 plan.

Why $7? Simply put, it seems to be a sweet spot for tech companies: Heroku, Docker, GitHub, Appoptics (Librato) all offer $7 plans. They must have done a ton of research into this, so why not piggyback on that and try it out.

Enough biz talk, onto tech. The challenges were:

  • Slice off a portion of the functionality so a $7 plan is still profitable. We call this the "plan limits".
  • Update API and back end services to handle and enforce plan limits.
  • Update the UI to kindly state plan limits are in effect on some part of the UI.
  • Update the pricing page to reflect all changes.
  • Keep the actual processing backend, storage and APIs as untouched as possible.

In essence, we went from strictly volume-based pricing to value-based pricing. Here are the technical steps & decisions we made to get there.

  1. We updated our PostgreSQL schema so plans now have an array of "features". These are string constants that represent feature toggles.
  2. The Vue.js frontend reads these from the vuex store on login.
  3. Based on these values, the UI has simple v-if statements to either just show the feature or show a friendly "please upgrade" button.
  4. The hapi API has a hook on each relevant API endpoint that checks whether a user's plan has the feature enabled, or not.

Side note: We offer 10 SMS messages per month on the developer plan. However, we were not actually counting how many messages people were sending. We had to update our alerting daemon (which runs on Heroku and triggers SMS messages via AWS SNS) to actually bump a counter.

What we built is basically feature toggling based on plan features. It is very extensible for future additions. Our scheduling and storage backend that actually runs users' monitoring requests (AWS Lambda) and stores the results (S3 and Postgres) has no knowledge of all of this and remained unchanged.
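
The Checkly backend is Node.js/hapi, so the following is only a Python sketch of the same idea: plan features stored as an array column in PostgreSQL and checked by an API hook before serving a request. Table, column, and feature names are invented:

```python
# Sketch of plan-based feature toggling backed by PostgreSQL; the real backend is
# Node.js/hapi. Table, column, and feature names are invented.
import psycopg2

conn = psycopg2.connect("dbname=saas")

def plan_has_feature(plan_id: int, feature: str) -> bool:
    with conn.cursor() as cur:
        # `features` is assumed to be a text[] column on the plans table.
        cur.execute("SELECT %s = ANY(features) FROM plans WHERE id = %s",
                    (feature, plan_id))
        row = cur.fetchone()
        return bool(row and row[0])

# An API hook (or UI guard) would branch on the toggle:
if not plan_has_feature(plan_id=7, feature="sms_alerts"):
    raise PermissionError("Please upgrade your plan to use SMS alerts")
```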

Hope this helps anyone who is building out their SaaS and is in a similar situation.

Eric Colson
Chief Algorithms Officer at Stitch Fix · 19 upvotes · 895.5K views
Kafka, PostgreSQL, Amazon S3, Apache Spark, Presto, Python, R Language, PyTorch, Docker, Amazon EC2 Container Service, #AWS, #Etl, #ML, #DataScience, #DataStack, #Data

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.
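
This is not Stitch Fix's actual pipeline, but as a rough sketch, a periodic PostgreSQL snapshot into an S3-backed warehouse could look like the PySpark job below; the JDBC URL, credentials, table, and bucket are placeholders:

```python
# Illustrative PySpark job: snapshot a PostgreSQL table over JDBC and land it in an
# S3-backed warehouse as Parquet. URL, table, and bucket are placeholders; the
# cluster is assumed to ship the Postgres JDBC driver and S3 (s3a) support.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pg-snapshot").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://prod-db:5432/shop")
    .option("dbtable", "public.orders")
    .option("user", "reporting")
    .option("password", "secret")
    .option("driver", "org.postgresql.Driver")
    .load()
)

orders.write.mode("overwrite").parquet("s3a://example-warehouse/orders/snapshot_date=2019-01-01/")
```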

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying them to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data


PostgreSQL Alternatives & Comparisons

What are some alternatives to PostgreSQL?
MySQL
The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.
MariaDB
Started by core members of the original MySQL team, MariaDB actively works with outside developers to deliver the most featureful, stable, and sanely licensed open SQL server in the industry. MariaDB is designed as a drop-in replacement of MySQL(R) with more features, new storage engines, fewer bugs, and better performance.
Oracle
Oracle Database is an RDBMS. An RDBMS that implements object-oriented features such as user-defined types, inheritance, and polymorphism is called an object-relational database management system (ORDBMS). Oracle Database has extended the relational model to an object-relational model, making it possible to store complex business models in a relational database.
MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
SQLite
SQLite is an embedded SQL database engine. Unlike most other SQL databases, SQLite does not have a separate server process. SQLite reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers, and views, is contained in a single disk file.

PostgreSQL's Followers
21182 developers follow PostgreSQL to keep up with related blogs and decisions.