Here are some stack decisions, common use cases and reviews by companies and developers who chose Kafka in their tech stack.
I need to build an Alert & Notification framework driven by a scheduled program. We will analyze events from a database table, filter the events that fall within a one-day timespan, and send these event messages over email. Currently, we are using Kafka Pub/Sub for messaging. The customer wants us to move to Apache Flink, and I am trying to understand how Apache Flink could be a better fit for us.
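For context, the current Kafka-based step might look roughly like the following. This is only a minimal sketch, assuming the kafka-python client, a hypothetical `alert-events` topic, and JSON events carrying a timezone-aware ISO `occurred_at` timestamp:

```python
# Sketch: consume events from Kafka and keep only those within the last day.
# Topic name, event schema, and broker address are assumptions for illustration.
import json
from datetime import datetime, timedelta, timezone

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "alert-events",                       # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="alert-notifier",
)

cutoff = datetime.now(timezone.utc) - timedelta(days=1)

for record in consumer:
    event = record.value
    occurred_at = datetime.fromisoformat(event["occurred_at"])
    # Only events that fall within the one-day timespan are forwarded by email.
    if occurred_at >= cutoff:
        print(f"Would email alert for event {event.get('id')}")  # email hook goes here
```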
To provide employees with the interactive querying they critically need, we’ve worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest’s scale has involved resolving quite a few challenges, such as supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, cluster auto-scaling, graceful cluster shutdown, and impersonation support for the LDAP authenticator.
Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.
We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances. Together, the Presto clusters have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ total Pinterest employees) using Presto, who run about 400K queries on these clusters per month.
Each query submitted to a Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest, and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query-submitted events without corresponding query-finished events. These events enable us to capture the effect of cluster crashes over time.
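The submitted/finished event pattern described here could look roughly like the following with a plain Kafka producer. Singer is Pinterest's internal logging agent, so this is only an illustrative stand-in; the topic name and event fields are assumptions:

```python
# Sketch: emit a query lifecycle event at submission and at completion.
# A crashed cluster leaves "submitted" events with no matching "finished" event.
import json
import time

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def log_query_event(query_id: str, state: str) -> None:
    """Publish a query lifecycle event ("submitted" or "finished") to Kafka."""
    producer.send(
        "presto_query_events",  # hypothetical topic name
        {"query_id": query_id, "state": state, "timestamp": time.time()},
    )

log_query_event("20240101_000000_00000_abcde", "submitted")
log_query_event("20240101_000000_00000_abcde", "finished")
producer.flush()
```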
Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform gives us the ability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on the Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.
#BigData #AWS #DataScience #DataEngineering
We make extensive use of Redis for our caches and use it as a way to save "semi-permanent" stuff like user-submitted settings (which get refreshed on each login) or cooldowns that expire very fast. Additionally, we utilize the Pub-Sub capabilities that Redis has to offer.
We decided against using a dedicated message broker/streaming platform like RabbitMQ or Kafka, as we already had a packet-based, custom protocol for communication between servers and services, and we only needed some "tiny" Pub-Sub magic to fill in the gaps. An entire additional service just for this odd job would've been total overkill.
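The kind of "tiny" Pub-Sub usage described here can be sketched in a few lines with redis-py; the channel name and message are hypothetical:

```python
# Sketch: minimal Redis Pub-Sub, publisher and subscriber on one channel.
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)

# Publisher side: fan a message out to any interested service.
r.publish("service-events", "user:1234:settings-refreshed")

# Subscriber side: listen for messages on the same channel.
pubsub = r.pubsub()
pubsub.subscribe("service-events")
for message in pubsub.listen():
    if message["type"] == "message":
        print("received:", message["data"].decode("utf-8"))
        break  # stop after one message for this example
```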
I want to read data from Kafka. The file is in CSV, or whatever format comes from SAP, and it is read by another third-party application. So I need to create a message bus for this. Please suggest an approach. I can use a microservice as well.
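One common shape for such a bus is a small service that reads the CSV export and publishes each row to a Kafka topic, which the third-party application then consumes instead of reading the file directly. A minimal sketch, assuming the kafka-python client and hypothetical file and topic names:

```python
# Sketch: publish each CSV row from an SAP export as a JSON message on a topic.
import csv
import json

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

with open("export.csv", newline="") as f:          # hypothetical SAP export file
    for row in csv.DictReader(f):
        producer.send("sap-export", row)           # hypothetical topic name

producer.flush()
```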
I have to build a data processing application with an Apache Beam stack and the Apache Flink runner on an Amazon EMR cluster. I have seen some instability with the process, and the EMR clusters keep going down. Here, the Apache Beam application gets inputs from Kafka and sends the accumulated data streams to another Kafka topic. Any advice on how to make the process more stable?
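For reference, the Kafka-in/Kafka-out shape described above can be sketched with the Beam Python SDK as follows. The topic names, the windowed count-per-key accumulation, and the runner flags are assumptions, not the original pipeline, and running it also requires a Flink runner setup plus the Java expansion service that Beam's cross-language Kafka transforms depend on:

```python
# Sketch: read from one Kafka topic, accumulate counts per key in fixed
# windows, and write the results to another Kafka topic on the Flink runner.
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka, WriteToKafka
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

options = PipelineOptions(["--runner=FlinkRunner", "--streaming"])

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> ReadFromKafka(
            consumer_config={"bootstrap.servers": "broker:9092"},
            topics=["input-events"],                 # hypothetical topic
        )
        | "Window" >> beam.WindowInto(window.FixedWindows(60))   # 1-minute windows
        | "CountPerKey" >> beam.combiners.Count.PerKey()          # accumulate per key
        | "Encode" >> beam.Map(lambda kv: (kv[0] or b"", str(kv[1]).encode("utf-8")))
        | "Write" >> WriteToKafka(
            producer_config={"bootstrap.servers": "broker:9092"},
            topic="aggregated-events",               # hypothetical topic
        )
    )
```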
I'm building a website where users can participate in, like, and dislike any given challenge.
Problem: If 10k or 1 million users join a given challenge at the same time, it can cause race conditions in my MySQL database and also in Redis.
What I want: To aggregate joined/participating users, likes, and dislikes.
Solution: I'm thinking about using Kafka as a queue/message broker, then saving user events one by one into Redis and the database and aggregating them (see the sketch after this post).
One problem here is that saving and aggregating take time; how can I show users that they have successfully joined the challenge?
One solution is that when a user joins the challenge, I send a request to the Kafka queue, then update the current user's UI and show a success message (without pushing other users' join updates to the current user, because I am not using WebSockets).
Another app example: take https://stackshare.io posts, where users can like, dislike, and comment on posts.
Estimated users: 1 million. Stack: Django, MySQL, Redis, and Kafka.
Questions
- How can I manage these kinds of things?
- How do big tech companies handle this?
- Where am I right or wrong?
- Are there other tools that can help me in this situation?
- I am using locks in Redis when the totals for likes, dislikes, and joined users are incremented or decremented. Should I be doing this? Is it the same as using transactions in MySQL?
I need the best approach to handle this situation, one that is also scalable.
Thanks in advance for reading my post and giving me suggestions on this. ☺️
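A minimal sketch of the queue-then-aggregate idea from the post above, using kafka-python and redis-py (topic, consumer group, and key names are hypothetical): the request path only publishes an event and immediately returns a success message, while a background consumer updates counters with Redis's atomic HINCRBY, which avoids explicit locks for simple counters.

```python
# Sketch: web request enqueues a "join" event; a background worker aggregates.
import json

import redis                                      # pip install redis
from kafka import KafkaConsumer, KafkaProducer    # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def handle_join_request(user_id: int, challenge_id: int) -> dict:
    """Called from the web view: enqueue the event and reply immediately."""
    producer.send("challenge-events", {"type": "join",
                                       "user_id": user_id,
                                       "challenge_id": challenge_id})
    return {"status": "joined"}  # optimistic success message for the current user

def run_aggregator() -> None:
    """Background worker: consume events and update counters atomically."""
    r = redis.Redis()
    consumer = KafkaConsumer(
        "challenge-events",
        bootstrap_servers="localhost:9092",
        group_id="challenge-aggregator",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    for record in consumer:
        event = record.value
        key = f"challenge:{event['challenge_id']}"
        if event["type"] == "join":
            r.hincrby(key, "participants", 1)   # atomic, no explicit lock needed
        elif event["type"] == "like":
            r.hincrby(key, "likes", 1)
        elif event["type"] == "dislike":
            r.hincrby(key, "dislikes", 1)
```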
Kafka's Features
- Written at LinkedIn in Scala
- Used by LinkedIn to offload processing of all page views and other views
- Defaults to using persistence, uses OS disk cache for hot data (has higher throughput than any of the above with persistence enabled)
- Supports both online and offline processing