Redis

Shared insights

Lightweight queueing and smart caching for our data collection processes.

1 upvote·64.7K views
CEO at Scrayos UG (haftungsbeschränkt)

We make extensive use of Redis for our caches and use it as a way to save "semi-permanent" stuff like user-submitted settings (that get refreshed on each login) or cooldowns that expire very fast. We also utilize the Pub/Sub capabilities that Redis has to offer.

We decided against using a dedicated message broker/streaming platform like RabbitMQ or Kafka, as we already had a packet-based, custom protocol for communication between servers and services, and we only needed some "tiny" Pub/Sub magic to fill in the gaps. An entire additional service just for this odd job would've been total overkill.
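For illustration, here is a minimal Python sketch of this pattern using redis-py; the key names, TTLs and channel are assumptions for the sketch, not the actual Scrayos setup:

```python
import redis

# Illustrative connection details, not a real deployment.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# "Semi-permanent" user settings: refreshed on each login, so a generous TTL works.
r.set("settings:user:42", '{"locale": "de_DE"}', ex=86400)  # expires after one day

# Fast-expiring cooldown: the key's existence is the cooldown itself.
r.set("cooldown:user:42:teleport", "1", ex=5, nx=True)  # 5 seconds, set only if absent

# Lightweight Pub/Sub to "fill in the gaps" between services.
r.publish("server-events", "server-3:started")

pubsub = r.pubsub()
pubsub.subscribe("server-events")
for message in pubsub.listen():  # blocks; would run in its own thread or service loop
    if message["type"] == "message":
        print("event:", message["data"])
```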

1 upvote·176.7K views
Managing Director at Bettison.org Limited
Shared insights

In 2012 we made the very difficult decision to entirely re-engineer our existing monolithic LAMP application from the ground up, in order to address some growing concerns about its long-term viability as a platform.

A full application rewrite is almost never the answer, because of the risks involved. However, the situation warranted drastic action, as it was clear that the existing product was going to face severe scaling issues. We felt it better to address these sooner rather than later, and to take the opportunity to improve the international architecture and refactor the database in order that it better matched the changes in core functionality.

PostgreSQL was chosen for its reputation as a solid, ACID-compliant database backend, and it was available as an AWS RDS offering, which reduced the management overhead of having to configure it ourselves. In order to reduce read load on the primary database we implemented an Elasticsearch layer for fast and scalable search operations. Synchronisation of these indexes was to be achieved through the use of Sidekiq's Redis-based background workers on Amazon ElastiCache. Again, the AWS solution here looked to be an easy way to keep our involvement in managing this part of the platform to a minimum, allowing us to focus on our core business.
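The workers described here are Sidekiq (Ruby) jobs; purely as an illustration of the pattern, a Redis-backed queue whose jobs mirror committed rows into Elasticsearch off the request path, here is a rough Python sketch with hypothetical names:

```python
import json

import redis
from elasticsearch import Elasticsearch

queue = redis.Redis(host="localhost", port=6379)  # stands in for ElastiCache
es = Elasticsearch("http://localhost:9200")

def enqueue_reindex(record_id: int) -> None:
    # Called from the web app after a write commits to PostgreSQL.
    queue.rpush("reindex-jobs", json.dumps({"id": record_id}))

def load_row_from_postgres(record_id: int) -> dict:
    # Stub: the real worker would SELECT the row from the primary database.
    return {"id": record_id, "title": f"record {record_id}"}

def worker_loop() -> None:
    # Background worker: pop jobs and mirror each row into the search index.
    while True:
        _, raw = queue.blpop("reindex-jobs")  # blocks until a job arrives
        job = json.loads(raw)
        es.index(index="records", id=job["id"],
                 document=load_row_from_postgres(job["id"]))
```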

Rails was chosen for its ability to quickly get core functionality up and running, its MVC architecture, and its focus on Test-Driven Development using RSpec and Selenium, with Travis CI providing continuous integration. We also liked Ruby for its terse, clean and elegant syntax. Though YMMV on that one!

Unicorn was chosen for its support for continuous deployment and its reputation as a reliable application server, and nginx for its reputation as a fast and stable reverse proxy. We also took advantage of the Amazon CloudFront CDN here to further improve performance by caching static assets globally.

We tried to strike a balance between having control over the management and configuration of our core application and the convenience of being able to leverage AWS hosted services for ancillary functions (Amazon SES, Amazon SQS and Amazon Route 53, all hosted securely inside Amazon VPC of course!).

Whilst there is some compromise here with potential vendor lock-in, the tasks being performed by these ancillary services are not particularly specialised, which should mitigate this risk. Furthermore, we have already containerised the stack in our development environment using Docker, and we are looking at how best to bring this into production, potentially using Amazon EC2 Container Service.

8 upvotes·750.1K views
Senior Architect at Rainforest QA

We recently moved our main applications from Heroku to Kubernetes. The three main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).

We prefer using managed services, so we are using Google Kubernetes Engine with Google Cloud SQL for PostgreSQL for our PostgreSQL databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.

Read the blog post to go into more depth.

Why Rainforest QA Moved from Heroku to Google Kubernetes Engine (rainforestqa.com)
20 upvotes·1 comment·1.5M views
Dev Suryawanshi
January 19th 2020 at 10:11AM

Great information

CTO at La Cupula Music SL
Shared insights

Our base infrastructure is composed of Debian-based servers running in Amazon EC2, asset storage with Amazon S3, and Amazon RDS for Aurora and Redis under Amazon ElastiCache for data storage.

We are starting to work on automated provisioning and management with Terraform.

8 upvotes·626.1K views
Shared insights

I'm working as one of the engineering leads at RunaHR. As our platform is a SaaS, we thought it'd be good to have an API (we chose Ruby and Rails for this) connected to a SPA (built with React and Redux). We started the SPA with Create React App since it's pretty easy to get started with.

We use Jest as the testing framework and react-testing-library to test React components. In Rails we write tests using RSpec.

Our main database is PostgreSQL, but we also use MongoDB to store some types of data. We started to use Redis for caching and other time-sensitive operations.

We have a couple of extra projects: one is an employee app built with React Native, and the other is an internal back-office dashboard built with Next.js on the client and Python on the backend.

Since we have different frontend apps, we have found it useful to use Bit to document visual components and JavaScript utils.

22 upvotes·2.9M views
CEO at Scrayos UG (haftungsbeschränkt)

We use GraphQL for the communication between our Minecraft-Proxies/Load-Balancers and our global Minecraft-Orchestration-Service JCOverseer.

This connection proved to be especially challenging, as there were so many available options and very specific requirements, and we tried our hardest to put as little complexity into this interface as possible.

Initially we considered designing our very own Netty-based packet protocol. While the performance of this approach probably would've been noteworthy, we would have had to write a lot of packets, as the individual payloads would differ a lot, and a new project would've been needed for the protocol specification, so we scrapped that idea.

Our second idea was to use a combination of the Redis key/value store (in particular the ability to write whole, complex sets as the values of keys) for existing data, Redis Pub/Sub for the synchronization of new/changed/deleted data, and a Vert.x-based REST API for the mutation requests of the clients. While this would certainly have been possible, we decided against it, as Redis offers no real data types other than strings, and typing was important to us.

So we finally settled on GraphQL, as it would allow us to define dynamic queries and mutations and additionally has subscriptions in store, so we would only need one component instead of three separate ones. The proxies register as subscribers to the server-changes channel and fetch the current data set in advance. If they need to request changes, this is done through a GraphQL mutation as well.

The status of the individual servers is fetched through Docker healthchecks and a Docker client in the orchestration service that subscribes to changed healthiness values in Docker. If a service becomes unhealthy, it is unregistered and the change is synchronized through GraphQL. The healthcheck is comparable to a ping packet that expects a response in a given time frame.
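As a rough sketch of that health subscription, here is the idea in Python with the Docker SDK (the actual service presumably uses a Java Docker client; the event-field names below follow the Docker events API):

```python
import docker

client = docker.from_env()

# Stream daemon events and react when a container's healthcheck flips state.
for event in client.events(decode=True, filters={"type": "container"}):
    status = event.get("status", "")
    if status.startswith("health_status"):  # e.g. "health_status: unhealthy"
        name = event["Actor"]["Attributes"].get("name", "<unknown>")
        if "unhealthy" in status:
            # Here the orchestration service would unregister the server and
            # broadcast the change to the proxies via a GraphQL subscription.
            print(f"{name} became unhealthy")
```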

6 upvotes·2.1M views
Needs advice on Amazon S3 and HBase

Hi, I'm building a machine learning pipeline to store image bytes and image vectors in the backend.

So, when users query for random-access image data (by key), we return the image bytes and perform machine learning model operations on them.

I'm currently considering going with Amazon S3 (in the future, maybe adding a Redis caching layer) as the backend system to store the information (S3 buckets with sharded prefixes).

The latency of S3 is 100-200 ms (get/put), and it has a high throughput of 3,500 puts/sec and 5,500 gets/sec for a given bucket/prefix. If I need to reduce latency in the future, I can add a Redis cache.
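A minimal sketch of that read path in Python (boto3 plus redis-py); the bucket name, key scheme and TTL are assumptions for illustration, not a verdict on the HBase question:

```python
import boto3
import redis

s3 = boto3.client("s3")
cache = redis.Redis(host="localhost", port=6379)

BUCKET = "image-store"  # hypothetical bucket with sharded prefixes

def get_image(key: str) -> bytes:
    # Read-through cache: serve hot images from Redis, fall back to S3.
    cached = cache.get(key)
    if cached is not None:
        return cached  # sub-millisecond path
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()  # ~100-200 ms path
    cache.set(key, body, ex=3600)  # keep hot objects for an hour
    return body
```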

Also, S3 costs are far lower than HBase (on Amazon EC2 instances with a 3x replication factor).

I have not personally used HBase before, so can someone help me decide if I'm making the right choice here? I'm not aware of HBase latencies, and I have learned that the MOB feature on HBase has to be turned on if we store image bytes in one of the column families, as the average image is 240 KB.

4 upvotes·138.8K views
Senior Fullstack Developer at QUANTUSflow Software GmbH

Our whole DevOps stack consists of the following tools:

  • GitHub (incl. GitHub Pages/Markdown for documentation, GettingStarted and HowTos) as our collaborative review and code management tool
  • Respectively Git as revision control system
  • SourceTree as Git GUI
  • Visual Studio Code as IDE
  • CircleCI for continuous integration (to automate the development process)
  • Prettier / TSLint / ESLint as code linters
  • SonarQube as quality gate
  • Docker as container management (incl. Docker Compose for multi-container application management)
  • VirtualBox for operating system simulation tests
  • Kubernetes as cluster management for Docker containers
  • Heroku for deploying in test environments
  • nginx as web server (preferably used as facade server in production environment)
  • SSLMate (using OpenSSL) for certificate management
  • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
  • PostgreSQL as preferred database system
  • Redis as preferred in-memory database/store (great for caching)

The main reason we have chosen Kubernetes over Docker Swarm is related to the following artifacts:

  • Key features: easy and flexible installation, clear dashboard, great scaling operations, monitoring as an integral part, great load balancing concepts, and it monitors container health and compensates in the event of failure.
  • Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
  • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
  • Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
  • Scalability: All-in-one framework for distributed systems.
  • Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.
30 upvotes·2 comments·8.9M views
Larry Gryziak
April 30th 2020 at 6:34PM

So why is your deployment different for your (Heroku) test/dev and your stage/production?

Simon Reymann
May 1st 2020 at 10:32AM

When it comes to testing our web app we do not demand great computational resources, and we need a very simple, convenient and fast PaaS solution for deploying the app to our testers. In production, though, the demand for computational resources can rise very fast. With Amazon we are able to control that in a better way.

Shared insights

Repost

Overview: To put it simply, we plan to use the MERN stack to build our web application. MongoDB will be used as our primary database. We will use ExpressJS alongside Node.js to set up our API endpoints. Additionally, we plan to use React to build our SPA on the client side and use Redis on the server side as our primary caching solution. Initially, while working on the project, we plan to deploy both our server and client on Heroku. However, Heroku is very limited, and we will need the benefits of an Infrastructure as a Service, so we will later use Amazon EC2 to deploy the final version of the application.

Server side: nodemon will allow us to automatically restart a running instance of our Node app when file changes take place. We decided to use MongoDB because it is a non-relational database that uses a document data model. This allows a lot of flexibility compared to an RDBMS, which requires a very structured model of data that does not change too much. Another strength of MongoDB is its ease of scalability. We will use Mongoose alongside MongoDB to model our application data. Additionally, we will host our MongoDB cluster remotely on MongoDB Atlas. Bcrypt will be used to hash user passwords before they are stored in the DB, to avoid the risks of storing plain-text passwords. Moreover, we will use Cloudinary to store images uploaded by users. We will also use the Twilio SendGrid API to enable automated emails sent by our application. To protect private API endpoints, we will use JSON Web Token and Passport. Also, PayPal will be used as a payment gateway to accept payments from users.

Client side: As mentioned earlier, we will use React to build our SPA. React uses a virtual DOM, which is very efficient for rendering pages, and it will allow us to reuse components. Furthermore, it is very popular, and the large community around React can be helpful if we run into issues. We also plan to make a cross-platform mobile application later, and using React will allow us to reuse a lot of our code with React Native. Redux will be used to manage state. Redux works great with React and will help us manage a global state in the app and avoid the complications of each component having its own state. Additionally, we will use Bootstrap components and custom CSS to style our app.

Other: Git will be used for version control. During the later stages of our project, we will use Google Analytics to collect useful data on user interactions. Moreover, Slack will be our primary communication tool. Also, we will use Visual Studio Code as our primary code editor because it is very lightweight and has a wide variety of extensions that boost productivity. Postman will be used to interact with and debug our API endpoints.

19 upvotes·2 comments·2M views
John Akhilomen
April 1st 2020 at 3:00PM

I like the tech stack you guys have selected. You guys seem to have it all figured out, and well planned. Good luck!

Ne Labs
March 9th 2020 at 12:34PM

An RDBMS like Postgres can also store, index and query schemaless data as JSON fields, while also supporting relations where they make sense. A document model is actually a downside, since the data will usually still have relations, and it makes modeling them inconvenient.
