PostgreSQL


Application and Data / Data Stores / Databases

Decision at Sentry about Redis, PostgreSQL, Celery, Django, InMemoryDatabases, MessageQueue

jtcunning · Operations Engineer at Sentry
Redis
PostgreSQL
Celery
Django
#InMemoryDatabases
#MessageQueue

Sentry started as (and remains) an open-source project, growing out of an error logging tool built in 2008. That original build nine years ago was Django and Celery (Python’s asynchronous task queue), with PostgreSQL as the database and Redis as the broker behind Celery.

We displayed a truly shrewd notion of branding even then, giving the project a catchy name that companies the world over remain jealous of to this day: django-db-log. For the longest time, Sentry’s subtitle on GitHub was “A simple Django app, built with love.” A slightly more accurate description probably would have included Starcraft and Soylent alongside love; regardless, this captured what Sentry was all about.

#MessageQueue #InMemoryDatabases

19 upvotes·13.5K views

Decision at Heap about Citus, PostgreSQL, Databases, DataStores

drob
Citus
PostgreSQL
#Databases
#DataStores

PostgreSQL was an easy early decision for the founding team. The relational data model fit the types of analyses they would be doing: filtering, grouping, joining, etc., and it was the database they knew best.

Shortly after adopting PG, they discovered Citus, a tool that makes it easy to distribute queries. Although it was a young project and a fork of Postgres at that point, Dan says the Citus team was very available and highly expert, and that it wouldn’t have been very difficult to move back to plain PG if they needed to.

What Citus forked was the query-execution layer; the worker nodes could still be treated like regular PG instances. Citus also gave them a ton of flexibility to make queries fast, and again, they felt the relational data model was the best fit for their application.

#DataStores #Databases

16 upvotes·251 views

Decision at Uploadcare about PostgreSQL, Amazon DynamoDB, Amazon S3, Redis, Python, Google App Engine

dmitry-mukhin
PostgreSQL
Amazon DynamoDB
Amazon S3
Redis
Python
Google App Engine

Uploadcare has built an infinitely scalable infrastructure by leveraging AWS. Building on top of AWS allows us to process 350M daily requests for file uploads, manipulations, and deliveries. When we started in 2011, the only cloud alternative to AWS was Google App Engine, which was a no-go for the rather complex solution we wanted to build. We also didn’t want to buy any hardware or use co-locations.

Our stack handles receiving files, communicating with external file sources, managing file storage, managing user and file data, processing files, file caching and delivery, and managing user interface dashboards.

At its core, Uploadcare runs on Python. The EuroPython 2011 conference in Florence really inspired us; that, coupled with the fact that Python was general enough to solve all of our challenges, informed this decision. Additionally, we had prior experience working in Python.

We chose to build the main application with Django because of its feature completeness and large footprint within the Python ecosystem.

All the communications within our ecosystem occur via several HTTP APIs, Redis, Amazon S3, and Amazon DynamoDB. We decided on this architecture so that our system could be scalable in terms of storage and database throughput. This way we only need Django running on top of our database cluster. We use PostgreSQL as our database because it is considered an industry standard when it comes to clustering and scaling.

15 upvotes·634 views

Decision at Stitch about PostgreSQL, MySQL, Clojure

jakestein · CEO at Stitch
PostgreSQL
MySQL
Clojure

The majority of our Clojure microservices are simple web services that wrap a transactional database with CRUD operations and a little bit of business logic. We use both MySQL and PostgreSQL for transactional data persistence, having transitioned from the former to the latter for newer services to take advantage of the new features coming out of the Postgres community.

Most of our Clojure best practices can be summed up by the phrase "keep it simple." We avoid more complex web frameworks in favor of using the Ring library to build web service routes, and we prefer sending SQL directly to the JDBC library rather than using a complicated ORM or SQL DSL.

15 upvotes·437 views

Decision at Heap about Heap, Node.js, Kafka, PostgreSQL, Citus, FrameworksFullStack, Databases, MessageQueue

drob
Heap
Node.js
Kafka
PostgreSQL
Citus
#FrameworksFullStack
#Databases
#MessageQueue

Heap searched for an existing tool that would allow them to express the full range of analyses they needed, index the event definitions that made up those analyses, and that was a mature, natively distributed system.

After coming up empty on this search, they decided to compromise on the “maturity” requirement and build their own distributed system around Citus and sharded PostgreSQL. It was at this point that they also introduced Kafka as a queueing layer between the Node.js application servers and Postgres.

If we could go back in time, we probably would have started using Kafka on day one. One of the biggest benefits of adopting Kafka has been the peace of mind that it brings. In an analytics infrastructure, it’s often possible to make data ingestion idempotent. In Heap’s case, that means that if anything downstream from Kafka goes down, they won’t lose any data; it’s just going to take a bit longer to get to its destination. Dan also learned that you want the path between data hitting your servers and your initial persistence layer (in this case, Kafka) to be as short and simple as possible, since that is the surface area where a failure means you can lose customer data. We learned that Kafka is a very good fit for an analytics tool, since you can handle a huge number of incoming writes with relatively low latency. Kafka also gives you the ability to “replay” the data flow: “It’s like a commit log for your whole infrastructure.”

#MessageQueue #Databases #FrameworksFullStack
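The idempotency point is the crux: if every event carries a deterministic ID, a replayed Kafka message becomes a duplicate-key no-op downstream instead of a double count. Here is a minimal sketch of that idea (not Heap’s actual scheme; the function name and payload fields are hypothetical):

```python
import hashlib
import json

def event_key(event: dict) -> str:
    """Derive a deterministic ID from an event payload so that replaying
    the same message from Kafka yields the same key downstream.

    Writes keyed on this ID (e.g. INSERT ... ON CONFLICT DO NOTHING)
    become idempotent: a duplicate delivery is a no-op.
    """
    # Canonicalize the payload so field order doesn't change the hash.
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = event_key({"user": 42, "action": "click", "ts": 1700000000})
b = event_key({"ts": 1700000000, "action": "click", "user": 42})
assert a == b  # same event, same key, regardless of field order
```

With keys like this, “replaying the data flow” after a downstream outage is safe by construction: the persistence layer simply skips rows it has already seen.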

14 upvotes·474 views

Decision at Dubsmash about Amazon RDS for Aurora, Redis, Amazon DynamoDB, Amazon RDS, Heroku, PostgreSQL, Databases, PlatformAsAService, NosqlDatabaseAsAService, SqlDatabaseAsAService

tspecht · Co-Founder and CTO at Dubsmash
Amazon RDS for Aurora
Redis
Amazon DynamoDB
Amazon RDS
Heroku
PostgreSQL
#Databases
#PlatformAsAService
#NosqlDatabaseAsAService
#SqlDatabaseAsAService

Over the years we have added a wide variety of different storage systems to our stack, including PostgreSQL (some hosted by Heroku, some on Amazon RDS) for storing relational data, Amazon DynamoDB to store non-relational data like recommendations & user connections, and Redis to hold pre-aggregated data to speed up API endpoints.

Since we started running Postgres ourselves on RDS instead of only using the managed offerings of Heroku, we've gained additional flexibility in scaling our application while reducing costs at the same time.

We are also heavily testing Amazon RDS for Aurora in its Postgres-compatible version and will also give the new release of Aurora Serverless a try!

#SqlDatabaseAsAService #NosqlDatabaseAsAService #Databases #PlatformAsAService

13 upvotes·412 views

Decision at Thumbtack about PostgreSQL

marcoalmeida
PostgreSQL

Running PostgreSQL on a single primary master node is simple and convenient. There is a single source of truth, one instance to handle all reads and writes, one target for all clients to connect to, and only a single configuration file to maintain. However, such a setup usually does not last forever. As traffic increases, so does the number of concurrent reads and writes, the read/write ratio may become too high, a fast and reliable recovery plan needs to exist, the list goes on…

No single approach solves all possible scaling challenges, but there are quite a few options for scaling PostgreSQL depending on the requirements. When the read/write ratio is high enough, there is a fairly straightforward scaling strategy: set up secondary PostgreSQL nodes (replicas) that stream data from the primary node (master) and split SQL traffic by sending all writes (INSERT, DELETE, UPDATE, UPSERT) to the single master node and all reads (SELECT) to the replicas. There can be many replicas, so this strategy scales better with a higher read/write ratio. Replicas are also valuable for implementing a disaster recovery plan, as it’s possible to promote one to master in the event of a failure.
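The traffic split described above can be sketched as a small router that classifies each statement and picks a connection target. This is an illustrative sketch, not Thumbtack’s setup; the DSNs and class name are hypothetical:

```python
import itertools

# Hypothetical connection strings for the primary and its streaming replicas.
PRIMARY_DSN = "postgresql://primary.db:5432/app"
REPLICA_DSNS = [
    "postgresql://replica1.db:5432/app",
    "postgresql://replica2.db:5432/app",
]

WRITE_VERBS = {"INSERT", "UPDATE", "DELETE", "UPSERT"}

class QueryRouter:
    """Send writes to the primary; round-robin reads across replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)

    def dsn_for(self, sql):
        # Classify by the leading SQL verb (case-insensitive).
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in WRITE_VERBS:
            return self.primary
        return next(self._replicas)  # SELECTs fan out across replicas

router = QueryRouter(PRIMARY_DSN, REPLICA_DSNS)
router.dsn_for("SELECT * FROM users")          # routed to a replica
router.dsn_for("UPDATE users SET name = 'x'")  # routed to the primary
```

In practice this job is usually done by a connection pooler or proxy layer rather than application code, but the routing rule is the same: writes to the master, reads spread over however many replicas the read/write ratio justifies.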

11 upvotes·170 views

Decision at ReadMe.io about Compose, MongoLab, MongoDB Atlas, PostgreSQL, MySQL, MongoDB

gkoberger
Compose
MongoLab
MongoDB Atlas
PostgreSQL
MySQL
MongoDB

We went with MongoDB, almost by mistake. I had never used it before, but I knew I wanted the *EAN part of the MEAN stack, so why not go all in. I come from a background of SQL (first MySQL, then PostgreSQL), so I definitely abused Mongo at first... by trying to turn it into something more relational than it should be. But hey, data is supposed to be relational, so there wasn't really any way to get around that.

There's a lot I love about MongoDB, and a lot I hate. I still don't know if we made the right decision. We've been able to build much quicker, but we also have had some growing pains. We host our databases on MongoDB Atlas, and I can't say enough good things about it. We had tried MongoLab and Compose before it, and with MongoDB Atlas I finally feel like things are in a good place. I don't know if I'd use it for a one-off small project, but for a large product Atlas has given us a ton more control, stability and trust.

8 upvotes·2.8K views

Decision at Healthchecks-io about jQuery, Bootstrap, PostgreSQL, Django, Python

cuu508
jQuery
Bootstrap
PostgreSQL
Django
Python

Healthchecks.io is a SaaS cron monitoring service. I needed a tool to monitor my cron jobs. I was not happy with the existing options, so I wrote one. The initial goal was to get to an MVP state and use it myself. The follow-up goals were to add functionality and polish the user interface, while keeping the UI and the under-the-hood stuff as simple and clean as possible.

Python and Django were obvious choices as I was already familiar with them, and knew that many of Django's built-in features would come in handy in this project: ORM, testing infrastructure, user authentication, templates, form handling.

On the UI side, instead of doing the trendy "React JS app talking to API endpoints" thing, I went with the traditional HTML forms, and full page reloads. I was aiming for the max simplicity. Paraphrasing Kevin from The Office, why waste time write lot JS when form submit do trick. The frontend does however use some JS, for example, to support live-updating dashboards.
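The form-and-full-page-reload flow is simple enough to sketch without any framework (in Django the same shape falls out of a form view). A framework-free illustration; the handler, field, and redirect target are hypothetical, not Healthchecks.io's actual code:

```python
from urllib.parse import parse_qs

FORM = '<form method="post"><input name="name"><button>Save</button></form>'

def handle_request(method, body=""):
    """Traditional server-rendered form handling:
      GET          -> render the form
      valid POST   -> 303 redirect, so refreshing doesn't resubmit
      invalid POST -> re-render the form with an error, full page reload
    """
    if method == "GET":
        return 200, FORM
    data = parse_qs(body)
    name = (data.get("name") or [""])[0].strip()
    if not name:
        # Validation failed: serve the whole page again with an error.
        return 200, '<p class="error">Name is required.</p>' + FORM
    return 303, "/checks/"  # POST/redirect/GET
```

The POST/redirect/GET step is what makes the no-JS approach feel solid: the browser's back button and refresh behave predictably, with no client-side state to manage.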

The backend is also aiming for max simplicity, and I've tried to keep the number of components to the minimum. For example, a message broker or a key-value store could be handy, but so far I'm getting away with storing everything in the Postgres database.

The deployment and hosting setup is also rather primitive by today's standards. uWSGI runs the Django app, with a nginx reverse proxy in front. uWSGI and nginx are run as systemd services on bare metal servers. Traffic is proxied through Cloudflare Load Balancer, which allows for relatively easy rolling code upgrades. I use Fabric for automating server maintenance. I did use Ansible for a while but moved back to Fabric: my Ansible playbooks were slower, and I could not get used to mixing YAML and Jinja templating.

Healthchecks.io tech decisions in one word: KISS. Use boring tools that get the job done.

6 upvotes·1K views

Decision about Rails, PostgreSQL, DataTypes, Lookups

jeromedalbert · Backend Engineer at StackShare
Rails
PostgreSQL
#DataTypes
#Lookups

Does your PostgreSQL-backed Rails app deal with slugs, emails or usernames? Do you find yourself littering your code with things like Model.where('lower(slug) = ?', slug.downcase)?

Postgres strings are case-sensitive, but you often want to look these fields up regardless of case. So you use downcase/lower everywhere... You may refactor this inconvenience into dedicated methods like find_by_slug, but all too often your team will forget about it and use find_by(slug: ...) directly, leading to inevitable bugs.

What if I told you that you could delegate all this dirty work to Postgres, thanks to the case-insensitive citext type? You can change your column type to citext like so:

class ChangeSlugsToCitext < ActiveRecord::Migration[5.2] # match your Rails version
  def change
    enable_extension('citext')          # one-time: makes the citext type available
    change_column :blah, :slug, :citext # :blah stands in for your table name
  end
end

Now, you can use find_by(slug: ...) as you are used to, and Postgres will internally call lower on the two compared values. Problem solved!

#Lookups #DataTypes

5 upvotes·3 comments·2.6K views