Redis

Decision at Sentry about Redis, PostgreSQL, Celery, Django, InMemoryDatabases, MessageQueue

jtcunning, Operations Engineer at Sentry

Sentry started as (and remains) an open-source project, growing out of an error logging tool built in 2008. That original build nine years ago was Django and Celery (Python's asynchronous task queue), with PostgreSQL as the database and Redis as the power behind Celery.
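
For context, a minimal sketch of that pairing, Celery with Redis as the broker, might look like the following; the app name and task are illustrative, not Sentry's actual code.

```python
# A minimal sketch of Celery using Redis as the broker, as in the original
# stack described above; names here are illustrative, not Sentry's code.
from celery import Celery

app = Celery("errorlog", broker="redis://localhost:6379/0")

@app.task
def record_event(payload):
    """Runs on a Celery worker; the queue itself lives in Redis."""
    ...  # e.g. persist the event to PostgreSQL
```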

We displayed a truly shrewd notion of branding even then, giving the project a catchy name that companies the world over remain jealous of to this day: django-db-log. For the longest time, Sentry’s subtitle on GitHub was “A simple Django app, built with love.” A slightly more accurate description probably would have included Starcraft and Soylent alongside love; regardless, this captured what Sentry was all about.

#MessageQueue #InMemoryDatabases


Decision at CircleCI about Amazon S3, GitHub, Redis, PostgreSQL, MongoDB

z00b, CTO at CircleCI

We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
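
As a rough illustration of the rate-limiting piece (a generic fixed-window counter, not CircleCI's actual implementation; the key naming and limits are assumptions):

```python
# A minimal sketch of per-partner rate limiting with Redis, assuming the
# redis-py client; key names and limits are hypothetical.
import time
import redis

r = redis.Redis(host="localhost", port=6379)

def allow_request(partner: str, limit: int = 5000, window_s: int = 3600) -> bool:
    """Fixed-window counter: at most `limit` calls per partner per window."""
    key = f"ratelimit:{partner}:{int(time.time()) // window_s}"
    count = r.incr(key)          # atomically bump this window's counter
    if count == 1:
        r.expire(key, window_s)  # let the key clean itself up
    return count <= limit

if allow_request("github"):
    pass  # safe to call the partner API
```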

When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.


Decision at Uploadcare about PostgreSQL, Amazon DynamoDB, Amazon S3, Redis, Python, Google App Engine

dmitry-mukhin

Uploadcare has built an infinitely scalable infrastructure by leveraging AWS. Building on top of AWS allows us to process 350M daily requests for file uploads, manipulations, and deliveries. When we started in 2011, the only cloud alternative to AWS was Google App Engine, which was a no-go for the rather complex solution we wanted to build. We also didn’t want to buy any hardware or use co-locations.

Our stack handles receiving files, communicating with external file sources, managing file storage, managing user and file data, processing files, file caching and delivery, and managing user interface dashboards.

At its core, Uploadcare runs on Python. The EuroPython 2011 conference in Florence really inspired us; that, coupled with the fact that Python was general enough to solve all of our challenges, informed this decision. Additionally, we had prior experience working in Python.

We chose to build the main application with Django because of its feature completeness and large footprint within the Python ecosystem.

All the communications within our ecosystem occur via several HTTP APIs, Redis, Amazon S3, and Amazon DynamoDB. We decided on this architecture so that our system could be scalable in terms of storage and database throughput. This way we only need Django running on top of our database cluster. We use PostgreSQL as our database because it is considered an industry standard when it comes to clustering and scaling.


Decision at LaunchDarkly about Kafka, Amazon Kinesis, Redis, Amazon EC2, Amazon ElastiCache, Consul, Patroni, TimescaleDB, PostgreSQL, Amazon RDS

jkodumal, CTO at LaunchDarkly

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, and we have shifted to Amazon Kinesis instead of Kafka.


Decision at Shopify about Memcached, Redis, MySQL, Google Kubernetes Engine, Kubernetes, Docker

kirs, Production Engineer at Shopify

At Shopify, over the years, we moved from shards to the concept of "pods". A pod is a fully isolated instance of Shopify with its own datastores like MySQL, Redis, Memcached. A pod can be spawned in any region. This approach has helped us eliminate global outages. As of today, we have more than a hundred pods, and since moving to this architecture we haven't had any major outages that affected all of Shopify. An outage today only affects a single pod or region.

As we grew into hundreds of shards and pods, it became clear that we needed a solution to orchestrate those deployments. Today, we use Docker, Kubernetes, and Google Kubernetes Engine to make it easy to bootstrap resources for new Shopify Pods.


Decision at Dubsmash about Amazon RDS for Aurora, Redis, Amazon DynamoDB, Amazon RDS, Heroku, PostgreSQL, PlatformAsAService, Databases, NosqlDatabaseAsAService, SqlDatabaseAsAService

tspecht, Co-Founder and CTO at Dubsmash

Over the years we have added a wide variety of different storages to our stack, including PostgreSQL (some hosted by Heroku, some by Amazon RDS) for storing relational data, Amazon DynamoDB to store non-relational data like recommendations & user connections, and Redis to hold pre-aggregated data to speed up API endpoints.
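
For illustration only, a read-through pattern for serving pre-aggregated data from Redis might look like this; the key layout, TTL, and aggregation step are hypothetical, not Dubsmash's code.

```python
# A minimal read-through sketch: serve a pre-aggregated payload from Redis,
# recomputing it on a cache miss. Assumes redis-py; names are hypothetical.
import json
import redis

r = redis.Redis()

def trending_snapshot(region: str, ttl_s: int = 300) -> dict:
    """Return the pre-aggregated payload for an API endpoint."""
    key = f"agg:trending:{region}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    payload = compute_trending(region)         # expensive aggregation (placeholder)
    r.set(key, json.dumps(payload), ex=ttl_s)  # cache with a TTL
    return payload

def compute_trending(region: str) -> dict:
    # Placeholder for the real aggregation over the primary datastores.
    return {"region": region, "items": []}
```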

Since we started running Postgres ourselves on RDS instead of only using the managed offerings of Heroku, we've gained additional flexibility in scaling our application while reducing costs at the same time.

We are also heavily testing Amazon RDS for Aurora in its Postgres-compatible version and will also give the new release of Aurora Serverless a try!

#SqlDatabaseAsAService #NosqlDatabaseAsAService #Databases #PlatformAsAService


Decision at StackShare about Redis, CircleCI, Webpack, Amazon CloudFront, Amazon S3, GitHub, Heroku, Rails, Node.js, Apollo, Glamorous, React, FrontEndRepoSplit, Microservices, SSR, StackDecisionsLaunch

ruswerner, Lead Engineer at StackShare

StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independently of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together and all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split out our front-end application into a separate repo in GitHub. The driving factor in this decision was mostly the limitations imposed by Heroku, specifically that processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in sync with changes.

Related to this, we needed a way to "deploy" our frontend changes to various server environments without building & releasing the entire Ruby application. We built a hybrid Amazon S3 and Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The results of these efforts were significant. Our frontend team now moves independently of our backend team, our build & release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!
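
A hedged sketch of that final rollout step, updating a Redis key so the Rails app knows which bundles to serve; the key name and manifest format are assumptions, and the real script runs in CircleCI:

```python
# A hypothetical version of the "update some keys in Redis" step: after the
# Webpack bundles land in S3, record which CDN-served bundles to emit.
import json
import redis

r = redis.Redis()

def publish_bundles(build_id: str, bundle_urls: dict) -> None:
    """Point the web app at the freshly uploaded bundles."""
    manifest = {"build": build_id, "bundles": bundle_urls}
    r.set("frontend:current_manifest", json.dumps(manifest))

publish_bundles("abc123", {
    "app.js": "https://cdn.example.com/bundles/app.abc123.js",  # made-up URL
})
```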

#StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit


Decision at Rainforest QA about Terraform, Helm, Google Cloud Build, CircleCI, Redis, Google Cloud Memorystore, PostgreSQL, Google Cloud SQL for PostgreSQL, Google Kubernetes Engine, Kubernetes, Heroku

shosti, Senior Architect at Rainforest QA

We recently moved our main applications from Heroku to Kubernetes. The 3 main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).

We prefer using managed services, so we are using Google Kubernetes Engine with Google Cloud SQL for PostgreSQL for our PostgreSQL databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.

Read the blog post to go more in depth.


Decision at Shopify about Redis, Memcached, MySQL, Rails

kirs, Production Engineer at Shopify

As is common in the Rails stack, since the very beginning, we've stayed with MySQL as a relational database, Memcached for key/value storage and Redis for queues and background jobs.
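
As a generic illustration of Redis-backed queues and background jobs (not Shopify's actual job system), a worker can block-pop jobs from a Redis list:

```python
# A generic LPUSH/BRPOP job-queue sketch; this illustrates the idea only and
# is not Shopify's actual background-job tooling.
import json
import redis

r = redis.Redis()

def enqueue(job_type: str, **args) -> None:
    r.lpush("jobs:default", json.dumps({"type": job_type, "args": args}))

def work_forever() -> None:
    while True:
        _, raw = r.brpop("jobs:default")  # blocks until a job is available
        job = json.loads(raw)
        print("processing", job["type"], job["args"])
```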

In 2014, we could no longer store all our data in a single MySQL instance - even by buying better hardware. We decided to use sharding and split all of Shopify into dozens of database partitions.

Sharding played nicely for us because Shopify merchants are isolated from each other and we were able to put a subset of merchants on a single shard. It would have been harder if our business assumed shared data between customers.
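
To make that isolation concrete, here is a deliberately simplified sketch of routing a merchant to a shard by ID; Shopify's real routing is more involved, and the connection strings are made up.

```python
# A deliberately simplified sketch of shard routing by merchant ID; the DSNs
# are hypothetical and this is not Shopify's actual routing logic.
SHARDS = [
    "mysql://shard-0.internal/shopify",
    "mysql://shard-1.internal/shopify",
]

def shard_for(shop_id: int) -> str:
    """All records belonging to one shop live on exactly one shard."""
    return SHARDS[shop_id % len(SHARDS)]
```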

The sharding project bought us some time regarding database capacity, but as we soon found out, there was a huge single point of failure in our infrastructure. All those shards were still using a single Redis. At one point, the outage of that Redis took down all of Shopify, causing a major disruption we later called “Redismageddon”. This taught us an important lesson to avoid any resources that are shared across all of Shopify.

Over the years, we moved from shards to the concept of "pods". A pod is a fully isolated instance of Shopify with its own datastores like MySQL, Redis, memcached. A pod can be spawned in any region. This approach has helped us eliminate global outages. As of today, we have more than a hundred pods, and since moving to this architecture we haven't had any major outages that affected all of Shopify. An outage today only affects a single pod or region.


Decision at Mixmax about Redis, Elasticsearch, MongoDB

ttacon

Originally, we had a single MongoDB replica set that we stored everything on. As we scaled, we realized two things:

  • A single Mongo replica set wasn’t going to cut it for our many quickly growing collections
  • Analytics and rich searching don’t scale well in Mongo.

To solve for the first item, we now run multiple large-scale Mongo deployments with a mix of replica sets and sharded replica sets (depending on the application activity for the given database). To solve for the second item, we now run multiple large Elasticsearch deployments to provide the majority of our rich searching functionality.

We also heavily use Redis across the entire platform for things like distributed locking, caching, and backing part of our job queuing layer. This has led to our most recent (and ongoing!) scaling challenge.
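
As a rough sketch of the distributed-locking piece, the single-instance SET NX PX pattern with a compare-and-delete release; key names are hypothetical and this is not Mixmax's actual implementation:

```python
# A minimal Redis lock sketch (SET NX PX plus a Lua compare-and-delete).
# A production setup would add retries and fencing; names are hypothetical.
import uuid
from typing import Optional

import redis

r = redis.Redis()

RELEASE_SCRIPT = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
end
return 0
"""

def acquire_lock(name: str, ttl_ms: int = 10000) -> Optional[str]:
    """Try to take the lock; return a release token, or None if already held."""
    token = uuid.uuid4().hex
    if r.set(f"lock:{name}", token, nx=True, px=ttl_ms):
        return token
    return None

def release_lock(name: str, token: str) -> None:
    """Release only if we still hold the lock (compare-and-delete via Lua)."""
    r.eval(RELEASE_SCRIPT, 1, f"lock:{name}", token)
```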
