Redis

Technical Operations Engineer
Needs advice on Node.js, RabbitMQ, and Redis

I am developing a microservice architecture that requires service-to-service communication. One of these needs is validating authorization using a token passed from the auth service to the other services. I'm considering the RPC communication strategy with Redis or RabbitMQ. Any suggestions?

6 upvotes · 8.7K views
Replies (1)
CTO/founder at Meet Kinksters

I think you may be mixing concerns a bit. Also, the authentication mechanisms for various tools may not necessarily play well with one another, and/or you may need different intra-service auth vs. public-facing customer sessions/tokens. If you need disparate tools to authenticate to one another with a bearer-type token mechanism, consider Vault. It has excellent support for at least one of the tools you mention (RabbitMQ).

For flexible token auth, JWTs.
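
A minimal sketch of that kind of bearer-token flow, assuming a shared secret distributed to the services (for example via Vault) and the jsonwebtoken package; AUTH_SHARED_SECRET and the function names are illustrative:

```typescript
// Minimal sketch of intra-service bearer-token auth with JWTs.
// Assumes a shared secret distributed to the services (e.g. via Vault);
// AUTH_SHARED_SECRET, issueServiceToken, and verifyServiceToken are illustrative.
import * as jwt from "jsonwebtoken";

const AUTH_SHARED_SECRET = process.env.AUTH_SHARED_SECRET ?? "dev-only-secret";

// Auth service: issue a short-lived token for service-to-service calls.
export function issueServiceToken(serviceName: string): string {
  return jwt.sign({ sub: serviceName, scope: "internal" }, AUTH_SHARED_SECRET, {
    expiresIn: "5m",
  });
}

// Any other service: validate the token locally; no broker round trip needed.
export function verifyServiceToken(token: string): jwt.JwtPayload {
  return jwt.verify(token, AUTH_SHARED_SECRET) as jwt.JwtPayload;
}
```

The point is that each service can verify the token locally, so no RPC round trip over Redis or RabbitMQ is needed just to check authorization.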

5 upvotes · 7K views

My UI brings data from multiple microservices. What is the best way to handle such a use case? The data does not change frequently. I can think of the following options:

  1. Use an aggregation layer in the API gateway
  2. Create a materialized view
  3. Add a caching layer using Redis
4 upvotes · 228 views
Replies (1)
CTO at Stukent
Recommends
API Gateway
Redis

If your data does not change, the most straightforward approach would be to use the Azure API gateway to cache the responses. Technically this uses Redis under the hood by default, but it doesn't require you to set up your own instance unless your data is very large.
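
If you do end up running your own Redis rather than relying on the gateway's built-in cache, the read-through pattern is roughly this. A sketch assuming the node-redis v4 client; the key name, TTL, and fetchFromMicroservices aggregator are illustrative:

```typescript
// Sketch of a read-through response cache, assuming the node-redis v4 client.
// The key name and TTL are illustrative; fetchFromMicroservices stands in for
// the downstream aggregation calls.
import { createClient } from "redis";

const cache = createClient({ url: process.env.REDIS_URL });

export async function getDashboardData(userId: string): Promise<unknown> {
  if (!cache.isOpen) await cache.connect();

  const key = `dashboard:${userId}`;
  const hit = await cache.get(key);
  if (hit) return JSON.parse(hit);

  // Cache miss: aggregate from the underlying services, then store with a TTL.
  const fresh = await fetchFromMicroservices(userId);
  await cache.set(key, JSON.stringify(fresh), { EX: 300 }); // 5-minute TTL
  return fresh;
}

// Placeholder for the calls to the individual services behind the gateway.
async function fetchFromMicroservices(userId: string): Promise<unknown> {
  return { userId, widgets: [] };
}
```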

3 upvotes · 34 views
Full Stack (Founder) at Peuconomia Int'l Pvt. Ltd.

We use Go, first off, due to our knowledge of it. Second, it's highly performant and optimized for scalability. We run our backend REST APIs in Docker containers.

For the frontend, we use React with Next.js, hosted on Vercel. We use Next.js here mostly for our need for server-side rendering and easier route management.

For the database, we use MySQL, as it is free and has always been in use with us. We use Google Cloud SQL on GCP, which manages its storage and versions along with HA.

The whole stack is free to use and gets the best juice out of the system. We also use Redis for caching in enterprise-grade apps where data retrieval latency matters the most.

10 upvotes · 26.1K views
Replies (1)
Recommends
PostgreSQL

I am a business analyst with many years of experience. Your stack looks good to me. Go is a great choice for backend speed, React is the most used JS front-end library/framework, and Next.js will help the SEO of your web apps thanks to being a server-side rendering framework built on top of React. My only advice is to start looking at Postgres as a relational database instead of MySQL. Postgres is 100% open source. MySQL is owned by Oracle, and the only way to go totally free is the community edition, which is limited. MySQL is a simpler database. Postgres can handle massive databases; it is an object-relational database with extensive features. Postgres has native NoSQL capabilities to deal with complex queries. There is the pgAdmin UI for designing Postgres databases, similar to what MySQL Workbench is for MySQL.

12 upvotes · 15.2K views
Needs advice on Kafka, RabbitMQ, and Redis

We are currently moving to a microservice architecture and are debating the different options there are to handle communication between services. We are currently considering Kafka, Redis, or RabbitMQ as a message broker. As RabbitMQ is a little bit older, we thought it might be outdated. Is that true? Can RabbitMQ hold up to more modern tools like Redis and Kafka?

4 upvotes · 103.6K views
Replies (4)
Senior Software Developer at Okta

We faced the same question some time ago. Before I begin: DO NOT use Redis as a message broker. It is fast and easy to set up in the beginning, but it does not scale. It is not made to be reliable at scale, and that is mentioned in the official docs. This analysis of our problems with Redis may help you.

We have used both Kafka and RabbitMQ at scale. We concluded that RabbitMQ is a really good general-purpose message broker (for our case) and Kafka is really fast but limited in features. That's the trade-off we understood from using them. In fact, I blogged about the trade-offs between Kafka and RabbitMQ to document them. I hope it helps you in choosing the best pub-sub layer for your use case.

Tag: message-queue (tarunbatra.com)
10 upvotes · 101.2K views

It depends on your requirements: the number of messages to be processed per second, real-time vs. delayed messages, the number of servers available for your cluster, whether you need streaming, etc. Kafka works for most use cases. Not related to the answer, but I would like to add: no matter which broker you choose, always connect to it with the library provided by the broker rather than Spring Kafka or Spring AMQP. If you use Spring, you will be stuck with specific Spring versions, and if you find bugs in Spring it becomes difficult, because you will have to upgrade the entire application to a later Spring core version. In general, use as few libraries as possible to avoid the nuisance of upgrading them when they are outdated or bugs are found in them.

5 upvotes · 3 comments · 102.1K views
kaffarell
December 1st 2021 at 12:16PM

Thanks for the insight! A fast message broker would be important; persistence isn't. We also plan to deploy the message broker as a Docker container to our cluster. I read somewhere online that Kafka is not meant to be deployed as a container... Is that true? (What also confused me is that there isn't an official Docker image for Kafka.)

Makarand Joshi
December 2nd 2021 at 11:03AM

Redis is a bit different compared to RabbitMQ and Kafka, so use Redis only for non-critical message flows. Between RabbitMQ and Kafka, our experience has been that for large message-processing applications RabbitMQ becomes really unstable, and we even encountered corrupt data, so we switched to Kafka, which is more reliable.

Bipul Agarwal
December 16th 2021 at 8:37PM

Hi Makarand, how easy was your journey from RabbitMQ to Kafka? Is it okay to ask whether you ran into any specific challenges, given that RabbitMQ pushes messages to consumers while in Kafka the consumer needs to read them? The message structure is a bit different too.

Also, it would be nice to know whether you migrated to a managed Kafka service or self-hosted. (I am trying to understand how tough it would be to manage our own Kafka, as we are close to finalising going with Kafka.) :)

Thanks

Software Developer
Needs advice on Django, Node.js, and Redis

Hey everyone, I am planning to start a personal project that would be yet another social media project with real-time communication features like one-to-one chat, group chat, and later voice and video chat using WebRTC. The thing I am concerned about is Django being able to handle all the real-time stuff using websockets. I could use Django Channels, but I don't think that would be a very scalable solution. Moreover, Django Channels requires a lot of configuration, and deployment is also a pain. My plan is to use a separate Node.js server to handle all the socket connections and have it talk to the main Django server through Redis. My question is whether the above-mentioned solution is a good choice, and if so, how it can be achieved, keeping in mind authentication and the other related problems. It might be simple, but I have never done this before, which might be the main reason I am concerned. Any suggestion will be appreciated.

Thanks in advance 😊

6 upvotes · 85.1K views
Replies (1)
Founder at AirLabs.Co

Try to do it with less: Node.js + Redis + Socket.IO. Optionally you can still communicate with Django, but you can do it all in Node.js; use PM2 and the cluster module too. With Redis you can also use Pub/Sub, which is a good combination for future scaling.
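
A rough sketch of that Node.js side, assuming socket.io and the node-redis v4 client; the "chat-events" channel name and message shape are illustrative:

```typescript
// Sketch of a Node.js socket server that relays events Django publishes to
// Redis. Assumes socket.io and the node-redis v4 client; the "chat-events"
// channel name and message shape are illustrative.
import { createServer } from "http";
import { Server } from "socket.io";
import { createClient } from "redis";

const httpServer = createServer();
const io = new Server(httpServer, { cors: { origin: "*" } });

async function main() {
  const subscriber = createClient({ url: process.env.REDIS_URL });
  await subscriber.connect();

  // Django side does: PUBLISH chat-events '{"room": "...", "payload": {...}}'
  await subscriber.subscribe("chat-events", (raw) => {
    const { room, payload } = JSON.parse(raw);
    io.to(room).emit("message", payload);
  });

  io.on("connection", (socket) => {
    // Validating the Django-issued session/JWT from the handshake would go
    // here before the client is allowed to join rooms.
    socket.on("join", (room: string) => socket.join(room));
  });

  httpServer.listen(4000);
}

main().catch(console.error);
```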

10 upvotes · 77.2K views

For our web application's backend, we have decided to create our server using Node.js with npm as our package manager, as this allows us to use a developer's JS skills and knowledge on both the frontend and backend. ExpressJS provides us an easy-to-learn framework that saves us effort and time and improves productivity in creating our server, while leaving us room to add complexity. Passport will be used to implement OAuth 2.0 authentication for our web application, allowing our users to sign in with their existing accounts (no one wants to create and remember the password for yet another account). Mongoose will be used to make calls to our backend; this framework will help make these calls more accessible and organized. We have decided to use Redis on our server for any caching we need. This will greatly speed up retrieval times and reduce calls to external sources for any data that could instead be cached on our server. Lastly, we will use Jest as our unit testing framework for our backend, as it is very popular and has support for Node.js. Furthermore, this is the same testing framework we will be using for our frontend, allowing us to quickly learn and implement testing on both frontend and backend.
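
As a sketch of the caching piece described above, per-route response caching could look roughly like this Express middleware backed by node-redis; the route, TTL, and loadProfileFromDb helper are illustrative:

```typescript
// Sketch of per-route response caching in Express, assuming the node-redis v4
// client. The route, TTL, and loadProfileFromDb helper are illustrative.
import express, { NextFunction, Request, Response } from "express";
import { createClient } from "redis";

const app = express();
const cache = createClient({ url: process.env.REDIS_URL });
cache.connect().catch(console.error);

// Middleware: serve from Redis if present, otherwise let the handler run and
// remember which key/TTL it should populate.
function cached(ttlSeconds: number) {
  return async (req: Request, res: Response, next: NextFunction) => {
    const hit = await cache.get(req.originalUrl);
    if (hit) return res.json(JSON.parse(hit));
    res.locals.cacheKey = req.originalUrl;
    res.locals.cacheTtl = ttlSeconds;
    next();
  };
}

app.get("/api/profile/:id", cached(60), async (req, res) => {
  const profile = await loadProfileFromDb(req.params.id); // e.g. a Mongoose query
  await cache.set(res.locals.cacheKey, JSON.stringify(profile), {
    EX: res.locals.cacheTtl,
  });
  res.json(profile);
});

// Stand-in for the real data access layer.
async function loadProfileFromDb(id: string) {
  return { id, name: "placeholder" };
}

app.listen(3000);
```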

We have decided to use Heroku as our hosting platform for our server. Heroku provides clear documentation and a quick and simple process for hosting Node.js applications, along with great support for our version control system, Git. Furthermore, Heroku also provides a free tier, which allows us to deploy and test our web application from the beginning of development.

MongoDB is our chosen database, as a NoSQL database will give us flexibility in storing different types of data and room for scaling our product. We have decided to use MongoDB Atlas to host our database, as it provides a quick and simple setup along with a free tier, allowing us to rapidly test our server's use of the database.

4 upvotes · 48.1K views
Senior Software Engineer at Rubika
Needs advice on JavaScript and Redis

We use Redis for caching, load balancing, and fault tolerance. We use a static hash function to select a cache node for setting and getting data. What is the best way to load-balance Redis nodes when nodes can be added or removed at runtime?
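
For reference, the static-hash selection described above usually looks something like the sketch below (node list and hash function are illustrative); with a plain modulo scheme, adding or removing a node remaps most keys, which is exactly the pain point in the question:

```typescript
// Sketch of static hash-based node selection (illustrative node list and hash).
// With a plain modulo scheme like this, adding or removing a node changes
// nodes.length and therefore remaps most keys to different nodes.
const nodes = ["redis-0:6379", "redis-1:6379", "redis-2:6379"];

function hash(key: string): number {
  let h = 0;
  for (const ch of key) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return h;
}

function pickNode(key: string): string {
  return nodes[hash(key) % nodes.length];
}

// pickNode("user:42") always maps to the same node while the node list is fixed.
```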

2 upvotes · 121 views
Needs advice on Amazon S3 and HBase

Hi, I'm building a machine learning pipeline that stores image bytes and image vectors in the backend.

So, when users query for random-access image data by key, we return the image bytes and perform machine learning model operations on them.

I'm currently considering Amazon S3 (with maybe a Redis caching layer added in the future) as the backend system to store the information (S3 buckets with sharded prefixes).

The latency of S3 is 100-200 ms (get/put) and it has high throughput: 3,500 PUTs/sec and 5,500 GETs/sec for a given bucket/prefix. If I need to reduce latency in the future, I can add a Redis cache.

Also, S3 costs are much lower than HBase (on Amazon EC2 instances with a 3x replication factor).

I have not personally used HBase before, so can someone help me decide if I'm making the right choice here? I'm not aware of HBase latencies, and I have learned that the MOB feature in HBase has to be turned on if we store image bytes in one of the column families, as the average image size is 240 KB.
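
A rough sketch of the S3-plus-optional-Redis layout described above, assuming the AWS SDK v3 and node-redis v4; the bucket name, key prefix, and TTL are illustrative, and the bytes are base64-encoded for the cache just to keep the example short:

```typescript
// Sketch of random-access image reads from S3 with an optional Redis cache in
// front, assuming the AWS SDK v3 and node-redis v4. Bucket name, key prefix,
// and TTL are illustrative; bytes are base64-encoded for the cache to keep the
// example short.
import { GetObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { createClient } from "redis";

const s3 = new S3Client({});
const cache = createClient({ url: process.env.REDIS_URL });

export async function getImageBytes(imageKey: string): Promise<Uint8Array> {
  if (!cache.isOpen) await cache.connect();

  const hit = await cache.get(`img:${imageKey}`);
  if (hit) return Buffer.from(hit, "base64");

  // Cache miss: read from S3 (sharded prefixes keep per-prefix throughput up).
  const obj = await s3.send(
    new GetObjectCommand({ Bucket: "image-store", Key: imageKey })
  );
  const bytes = await obj.Body!.transformToByteArray();

  await cache.set(`img:${imageKey}`, Buffer.from(bytes).toString("base64"), {
    EX: 3600, // 1-hour TTL
  });
  return bytes;
}
```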

4 upvotes · 102.1K views

Server side

We decided to use Python for our backend because it is one of the industry standard languages for data analysis and machine learning. It also has a lot of support due to its large user base.

  • Web Server: We chose Flask because we want to keep our machine learning / data analysis and the web server in the same language. Flask is easy to use and we all have experience with it. Postman will be used for creating and testing APIs due to its convenience.

  • Machine Learning: We decided to go with PyTorch for machine learning since it is one of the most popular libraries. It is also known to have an easier learning curve than other popular libraries such as Tensorflow. This is important because our team lacks ML experience and learning the tool as fast as possible would increase productivity.

  • Data Analysis: Some common Python libraries will be used to analyze our data. These include NumPy, Pandas, and matplotlib. These tools combined will help us learn the properties and characteristics of our data. Jupyter Notebook will be used to help organize the data analysis process and improve code readability.

Client side

  • UI: We decided to use React for the UI because it helps organize the data and variables of the application into components, making it very convenient to maintain our dashboard. Since React is one of the most popular front-end frameworks right now, there will be a lot of support for it as well as a lot of potential new hires who are familiar with the framework. CSS3 and HTML5 will be used for the basic styling and structure of the web app, as they are the most widely used front-end languages.

  • State Management: We decided to use Redux to manage the state of the application since it works naturally with React. Our team also already has experience working with Redux, which gave it a slight edge over the other state management libraries.

  • Data Visualization: We decided to use the React-based library Victory to visualize the data. It has very user-friendly documentation on its official website, which we find easy to learn from.

Cache

  • Caching: We decided between Redis and Memcached because they are two of the most popular open-source cache engines. We ultimately decided to use Redis to improve our web app's performance, mainly due to the extra functionality it provides, such as fine-tuning cache contents and durability.

Database

  • Database: We decided to use a NoSQL database over a relational database because of the flexibility of not having a predefined schema. The user behavior analytics have to be flexible, since the data we plan to store may change frequently. We decided on MongoDB because it is lightweight and we can easily host the database with MongoDB Atlas. Everyone on our team also has experience working with MongoDB.

Infrastructure

  • Deployment: We decided to use Heroku over AWS, Azure, and Google Cloud because it is free. Although there are advantages to the other cloud services, Heroku makes the most sense for our team because our primary goal is to build an MVP.

Other Tools

  • Communication: Slack will be used as the primary means of communication. It provides all the features needed for basic discussions. For more interactive meetings, Zoom will be used for its video calls and screen-sharing capabilities.

  • Source Control: The project will be stored on GitHub and all code changes will be done through pull requests. This will help us keep the codebase clean and make it easy to revert changes when we need to.

13 upvotes · 441.9K views
Integration Team Leader at Davut GÜRBÜZ
Needs advice on Java, Redis, and Spring Boot

We've decided to use Redis in our solution which consists of several microservices. We noticed that there are different Redis Java clients available in the market. https://redis.io/clients#java

Actually, we've ended up with Redisson because it offers different serialization codecs, async support, and several other features listed on Redisson.org. However, there is also a commercial edition, redisson.pro, which is said to provide far better performance.

A serious alternative, Lettuce.io, is also doing well, and we're not able to foresee which one will stay in the arena longer. We'll build an abstraction so we can switch strategies if required. We would like to hear from you, too. How has your experience been? We'd appreciate your recommendations.

6 upvotes · 6.5K views
Replies (2)
Development Manager at Logesta

Hi Davut. In case it helps you: in my case we use Redis as a cache, and we do not consume it directly; in front of it we have a microservice that specifically performs these tasks, so that the rest of the microservices communicate with it via an API. That microservice uses Jedis, and we have run into many limitations in serialization and deserialization with Jackson, especially around ZonedDateTime. Regards.

3 upvotes · 1 comment · 4.1K views
Davut GÜRBÜZ
September 24th 2020 at 10:46AM

Can you possibly elaborate? We're planning to have a Redis client in each microservice and receive a published message when a common shared source object is changed with a POST/PUT...

If you have a dedicated microservice communicating with Redis and the other services get the latest data from it, are you polling from time to time or do you have another publish/subscribe method? What's the benefit in your case exactly?

Lead Engineer at Chaayos

Hi Davut, I don't know much about Lettuce, but Redisson is a good choice. We have used it in our applications and it provides a nice abstraction layer over core Redis features. The commercial version is hosted by the Redisson team only, and hence they provide extra features like faster speeds, cluster deployments, etc. But I think the standard version is good enough; you just need to set it up yourself. In our case we have built another interface over the Redisson functions that allows us to easily switch to another caching solution like Hazelcast.

2 upvotes · 1 comment · 4.1K views
Davut GÜRBÜZ
October 7th 2020 at 10:46PM

Thanks for your comment.

Using the Cache-Aside pattern, I coded an abstraction layer and ran all the tests. Looks nice. Clients are getting Redis cache updates. Generics, and generics of generics like T<List<?>>, are tricky in Java compared to C#/.NET, but I managed somehow.

We will go to the prod environment soon. Topic publish/subscribe also works, but I haven't used it anywhere yet.

Are you using topic pub/sub? If so, in what sort of cases did you need it? Inter-service communication, an asynchronous message broker need, etc.?
