
Alonso Isidoro Roman

Software engineer, big data architect

So, you are using Apache Beam and Apache Flink to read from an input Kafka topic, apply some transformations, and write the results to an output Kafka topic? That sounds like exactly the kind of job the Kafka Streams framework was built for, doesn't it? If the process is unstable, it is probably because these processes don't have enough memory allocated or enough dedicated cores.
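If you stay on Flink, the memory and parallelism settings mentioned above live in `flink-conf.yaml`. A sketch of the relevant knobs follows; the actual sizes are illustrative assumptions and must be tuned to your workload:

```yaml
# flink-conf.yaml (fragment) -- example values only, tune for your cluster

# Total memory for each TaskManager process (heap + managed + network + JVM overhead)
taskmanager.memory.process.size: 4096m

# Number of task slots per TaskManager, roughly one per dedicated core
taskmanager.numberOfTaskSlots: 4

# Memory for the JobManager process
jobmanager.memory.process.size: 1600m

# Default parallelism for jobs that do not set it explicitly
parallelism.default: 4
```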

Investigate with the Confluent Platform's Control Center tool, look at the logs, and examine the process exceptions, paying special attention to the `Caused by:` sections of the stack traces.

Unless you have a strong need for Apache Flink's supposedly better real-time data streaming capabilities, stick with Kafka Streams for this task. Once that works, try building the same pipeline with Beam and Flink, and then measure whether you really get a significant performance improvement when reading from and writing to Kafka topics. I honestly doubt it.
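For comparison, here is a minimal sketch of the topic-to-topic pipeline written directly in Kafka Streams. The broker address, topic names (`input-topic`, `output-topic`), and the uppercase transform are illustrative assumptions, not details from the question:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class CopyTransformApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "copy-transform-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read from the input topic, apply a per-record transformation,
        // and write to the output topic.
        KStream<String, String> input = builder.stream("input-topic"); // assumed topic name
        input.mapValues(value -> value.toUpperCase())                  // example transformation
             .to("output-topic");                                      // assumed topic name

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Because Kafka Streams runs as a plain JVM library inside your own process, sizing it is just a matter of JVM heap flags and instance count, with no separate cluster to operate as with Flink.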
