Kafka vs Redis: What are the differences?
Kafka: Distributed, fault-tolerant, high-throughput pub/sub messaging system. Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.

Redis: An in-memory database that persists on disk. Redis is an open source, BSD-licensed, advanced key-value store. It is often referred to as a data structure server, since keys can contain strings, hashes, lists, sets, and sorted sets.
Kafka can be classified as a tool in the "Message Queue" category, while Redis is grouped under "In-Memory Databases".
"High-throughput", "Distributed" and "Scalable" are the key factors why developers consider Kafka; whereas "Performance", "Super fast" and "Ease of use " are the primary reasons why Redis is favored.
Kafka and Redis are both open source tools. With 37.4K GitHub stars and 14.4K forks, Redis appears to be more popular than Kafka, which has 12.7K stars and 6.81K forks.
Airbnb, Uber Technologies, and Instagram are some of the popular companies that use Redis, whereas Kafka is used by Uber Technologies, Spotify, and Slack. Redis has broader approval, being mentioned in 3261 company stacks and 1781 developer stacks, compared to Kafka, which is listed in 509 company stacks and 470 developer stacks.
As of 2017, Slack was handling 1.4 billion jobs a day, with a peak of 33,000 jobs per second. Until recently, Slack had depended on its original Redis-based job queue implementation. While that system had allowed Slack to grow exponentially and diversify its services, they eventually outgrew it. Worse, dequeuing a job itself required free memory, so once Redis was full, jobs could no longer be dequeued, and scaling up job workers only further burdened Redis, slowing the entire system.
Slack decided to use Kafka to ease the pressure and allow them to scale up without discarding the existing architecture: they added Kafka in front of Redis, leaving the existing queuing interface in place. A stateless service called Kafkagate, written in Go, enqueues jobs to Kafka. It exposes an HTTP POST interface, with each request comprising a topic, partition, and content. Kafkagate's design reduces latency when writing jobs and allows greater flexibility in job queue design. JQRelay, another stateless service, relays jobs from a Kafka topic to Redis. It ensures that only one relay process is assigned to each topic, that failures are self-healing, and that job-specific errors are corrected by re-enqueuing the job to Kafka. The new system was rolled out by double-writing all jobs to both Redis and Kafka, with JQRelay operating in 'shadow mode', dropping each job after reading it from Kafka. Jobs were verified by tracking them through each part of the system over their lifetimes. With durable storage and JQRelay, the enqueuing rate can be paused or adjusted to give Redis the necessary breathing room, making Slack a much more resilient service.
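To make the relay idea concrete, here is a minimal sketch of that pattern in Go: consume jobs from a Kafka topic and push them onto the Redis list the existing workers already dequeue from. It uses the segmentio/kafka-go and go-redis client libraries; the broker address, topic, group ID, and list key are illustrative assumptions, not Slack's actual configuration.

```go
package main

import (
	"context"
	"log"

	"github.com/redis/go-redis/v9"
	"github.com/segmentio/kafka-go"
)

func main() {
	ctx := context.Background()

	// Consume from the job topic as part of a consumer group, so only
	// one relay instance processes each partition (hypothetical names).
	reader := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		GroupID: "jqrelay",
		Topic:   "jobs",
	})
	defer reader.Close()

	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	for {
		msg, err := reader.ReadMessage(ctx)
		if err != nil {
			log.Fatalf("kafka read: %v", err)
		}
		// Relay the job into the Redis list the existing workers
		// already consume, keeping the old queuing interface intact.
		if err := rdb.LPush(ctx, "jobqueue", msg.Value).Err(); err != nil {
			// In the system described above, a failed job would be
			// re-enqueued to Kafka rather than just logged.
			log.Printf("redis enqueue failed: %v", err)
		}
	}
}
```

Because Kafka retains the jobs durably, a relay like this can be paused or throttled without losing work, which is exactly what gives Redis its breathing room.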
Redis is a good caching tool for a cluster, but our application had performance issues while using AWS ElastiCache Redis: some pages had 3,000 cache hits per page load, and Redis just couldn't process them all at once; combined with network latency and object deserialization time, page loads took 8-9 seconds. We created a custom hybrid cache based on Redis and EhCache, which worked great for our goals. Check it out on GitHub; it's called HybriCache: https://github.com/batir-akhmerov/hybricache.
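The hybrid approach pairs a fast in-process tier with Redis as the shared tier, so hot keys skip the network round trip entirely. Below is a rough sketch of that two-tier read path in Go (HybriCache itself is a Java library built on EhCache; the types and names here are illustrative, and local eviction and cross-node invalidation are omitted for brevity):

```go
package cache

import (
	"context"
	"sync"
	"time"

	"github.com/redis/go-redis/v9"
)

// TwoTierCache checks a local in-process map before falling back to
// Redis, cutting network round trips for frequently read keys.
type TwoTierCache struct {
	mu    sync.RWMutex
	local map[string]string
	rdb   *redis.Client
	ttl   time.Duration
}

func New(rdb *redis.Client, ttl time.Duration) *TwoTierCache {
	return &TwoTierCache{local: make(map[string]string), rdb: rdb, ttl: ttl}
}

func (c *TwoTierCache) Get(ctx context.Context, key string) (string, error) {
	// Tier 1: in-process memory, no network or deserialization cost.
	c.mu.RLock()
	if v, ok := c.local[key]; ok {
		c.mu.RUnlock()
		return v, nil
	}
	c.mu.RUnlock()

	// Tier 2: the shared Redis cache.
	v, err := c.rdb.Get(ctx, key).Result()
	if err != nil {
		return "", err // redis.Nil signals a miss in both tiers
	}
	c.mu.Lock()
	c.local[key] = v // populate the local tier for subsequent reads
	c.mu.Unlock()
	return v, nil
}

func (c *TwoTierCache) Set(ctx context.Context, key, value string) error {
	c.mu.Lock()
	c.local[key] = value
	c.mu.Unlock()
	return c.rdb.Set(ctx, key, value, c.ttl).Err()
}
```

With 3,000 hits per page load, serving even most of them from the local tier removes thousands of network round trips per request.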
Front-end messages are logged to Kafka by our API and application servers. We have batch processing (on the middle-left) and real-time processing (on the middle-right) pipelines to process the experiment data. For batch processing, after the daily raw logs land in S3, we kick off our nightly experiment workflow to compute experiment user groups and experiment metrics. We use our in-house workflow management system, Pinball, to manage the dependencies of all these MapReduce jobs.
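The front-end logging step amounts to producing an event record to a Kafka topic from each app server. A minimal sketch using the segmentio/kafka-go writer; the broker address, topic name, and event payload are assumptions for illustration:

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// A writer that API/application servers could share to log
	// front-end events (hypothetical broker and topic names).
	w := &kafka.Writer{
		Addr:     kafka.TCP("localhost:9092"),
		Topic:    "frontend_events",
		Balancer: &kafka.LeastBytes{},
	}
	defer w.Close()

	err := w.WriteMessages(context.Background(),
		kafka.Message{
			// Keying by user keeps one user's events in one partition.
			Key:   []byte("user-123"),
			Value: []byte(`{"event":"impression","experiment":"homepage_v2"}`),
		},
	)
	if err != nil {
		log.Fatalf("kafka write: %v", err)
	}
}
```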
Redis is used for storing all ephemeral user data (data you don't necessarily want to store permanently), such as the mapping of session IDs (stored in cookies) to current session variables at Cloudcraft.co. The many data structures supported by Redis also make it an excellent caching and real-time statistics layer. It doesn't hurt that the author, antirez, is the nicest guy ever! These days, I would be hard pressed to find any situation where I would pick something like Memcached over Redis.
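Mapping a session ID to its session variables with an expiry is only a few calls with go-redis; a minimal sketch (the key naming scheme and 30-minute TTL are assumptions, not Cloudcraft's actual setup):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Store session variables in a hash keyed by the session ID,
	// with a TTL so abandoned sessions expire on their own.
	sessionID := "sess:4f2c9a"
	if err := rdb.HSet(ctx, sessionID, "userID", "42", "theme", "dark").Err(); err != nil {
		panic(err)
	}
	rdb.Expire(ctx, sessionID, 30*time.Minute)

	vars, _ := rdb.HGetAll(ctx, sessionID).Result()
	fmt.Println(vars) // map[theme:dark userID:42]
}
```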
Trello uses Redis for ephemeral data that needs to be shared between server processes but not persisted to disk. Things like the activity level of a session or a temporary OpenID key are stored in Redis, and the application is built to recover gracefully if any of these (or all of them) are lost. We run with allkeys-lru enabled and about five times as much space as its actual working set needs, so Redis automatically discards data that hasn’t been accessed lately, and reconstructs it when necessary.
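The eviction behavior Trello describes comes down to two Redis settings. A sketch of applying them at runtime with go-redis (the 512mb limit is an arbitrary example; in practice these usually live in redis.conf):

```go
package main

import (
	"context"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Equivalent redis.conf directives:
	//   maxmemory 512mb
	//   maxmemory-policy allkeys-lru
	// With allkeys-lru, Redis evicts the least recently used keys
	// once maxmemory is reached, regardless of any TTLs.
	rdb.ConfigSet(ctx, "maxmemory", "512mb")
	rdb.ConfigSet(ctx, "maxmemory-policy", "allkeys-lru")
}
```

Oversizing the instance relative to the working set, as Trello does, means only genuinely cold data gets discarded.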
The UI has a message inbox that is sent a message when you get a new badge, receive a message, a significant event occurs, etc. This is done using WebSockets and is powered by Redis. Redis has 2 replicas, SQL has 2 replicas, the tag engine has 3 nodes, and Elasticsearch has 3 nodes; every other service has high availability as well (and exists in both data centers).
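Redis pub/sub is a natural fit for fanning such events out to the WebSocket servers; a minimal sketch with go-redis, where the channel name and payload are hypothetical:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// A WebSocket server subscribes to a user's inbox channel...
	sub := rdb.Subscribe(ctx, "inbox:user:42")
	defer sub.Close()

	// Wait for the subscription to be confirmed before publishing.
	if _, err := sub.Receive(ctx); err != nil {
		panic(err)
	}

	// ...and any app server can publish a notification to it.
	rdb.Publish(ctx, "inbox:user:42", `{"type":"badge","name":"Nice Answer"}`)

	msg := <-sub.Channel() // forward this frame over the WebSocket
	fmt.Println(msg.Payload)
}
```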
Redis makes certain operations very easy. When I need a high-availability store, I typically look elsewhere, but for rapid development with the ability to land on your feet in prod, Redis is great. The available data types make it easy to build non-trivial indexes that would require complex queries in Postgres.
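For example, a sorted set gives you an ordered secondary index with range queries built in. A small go-redis sketch with hypothetical key and member names:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Index articles by score; the sorted set keeps members ordered,
	// which would take an ORDER BY over an indexed column in SQL.
	rdb.ZAdd(ctx, "articles:by_score",
		redis.Z{Score: 42, Member: "article:1"},
		redis.Z{Score: 97, Member: "article:2"},
		redis.Z{Score: 15, Member: "article:3"},
	)

	// Top two articles, highest score first.
	top, _ := rdb.ZRevRangeWithScores(ctx, "articles:by_score", 0, 1).Result()
	for _, z := range top {
		fmt.Println(z.Member, z.Score)
	}
}
```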
I use Redis for caching, data storage, mining and augmentation, a proprietary distributed event system for disparate apps and services to talk to each other, and more. Redis has some very useful native data types for tracking, slicing, and dicing information.
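Two of those native types in action for tracking, sketched with go-redis (key names are illustrative): a hash as a per-day event counter, and a HyperLogLog for approximate unique-user counts.

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Hash: one key per day, one field per event type, atomic increments.
	rdb.HIncrBy(ctx, "events:2017-01-01", "signup", 1)
	rdb.HIncrBy(ctx, "events:2017-01-01", "login", 1)

	// HyperLogLog: approximate distinct count in ~12 KB per key,
	// no matter how many users are added.
	rdb.PFAdd(ctx, "uniques:2017-01-01", "user:42", "user:7", "user:42")

	uniques, _ := rdb.PFCount(ctx, "uniques:2017-01-01").Result()
	fmt.Println("approx unique users:", uniques)
}
```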
Building out a real-time streaming server to present data insights to Coolfront Mobile customers and internal sales and marketing teams.