For syslog, you can certainly use the TCP input. I'd be interested to know which syslog client you are using (the one that will ship logs to Logstash). In any case, check whether that client can be configured with multiple Logstash host:port pairs so that it acts as a load balancer; this will increase throughput. Also look at Logstash's pipeline-to-pipeline communication: https://www.elastic.co/guide/en/logstash/current/pipeline-to-pipeline.html This lets you implement the distributor pattern, where multiple types of data arrive at the same input and you route them to different filtering and processing pipelines based on type. It increases parallelism. About Elasticsearch: it's a native component and fits perfectly with Logstash, so you can use Elasticsearch for storage and search. It is also one of Grafana's supported data sources.
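To make the distributor pattern a bit more concrete, here is a rough pipelines.yml sketch along the lines of that doc page; the pipeline IDs, port, and type values are just placeholders for illustration, not something specific to your setup:

```yaml
# pipelines.yml -- minimal distributor-pattern sketch
# (pipeline IDs, port, and [type] values are illustrative placeholders)
- pipeline.id: syslog-intake
  config.string: |
    input { tcp { port => 5140 } }          # syslog clients ship here over TCP
    output {
      if [type] == "apache" {
        pipeline { send_to => ["apache-processing"] }
      } else {
        pipeline { send_to => ["generic-processing"] }
      }
    }
- pipeline.id: apache-processing
  config.string: |
    input  { pipeline { address => "apache-processing" } }
    filter { }                               # type-specific parsing goes here
    output { elasticsearch { hosts => ["http://localhost:9200"] } }
- pipeline.id: generic-processing
  config.string: |
    input  { pipeline { address => "generic-processing" } }
    output { elasticsearch { hosts => ["http://localhost:9200"] } }
```

Each downstream pipeline gets its own workers and queue, which is where the extra parallelism comes from.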

Sunil Chaudhari
Hi, We have a situation where we are using Prometheus to collect system metrics from the PCF (Pivotal Cloud Foundry) platform. We send that time-series data to Cortex via a Prometheus server and have built a dashboard in Grafana. There is another pipeline where we need to read metrics (CPU, memory, and disk) from a Linux server using Metricbeat. That data will be sent to Elasticsearch, and Grafana will pull it and show it in a dashboard.
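For reference, the Metricbeat side would be roughly this kind of minimal metricbeat.yml (the hosts and period here are just placeholders, not our actual values):

```yaml
# metricbeat.yml -- minimal sketch of the system module feeding Elasticsearch
# (output hosts and period are illustrative placeholders)
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "filesystem", "diskio"]
    period: 10s

output.elasticsearch:
  hosts: ["http://localhost:9200"]
```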
Is it OK to use Metricbeat for the Linux server, or can we use Prometheus instead?
What is the difference between the system metrics collected by Metricbeat and by the Prometheus node exporter?
Regards, Sunil.
If you're already using Prometheus for your system metrics, then it seems like standing up Elasticsearch just for Linux host monitoring is excessive. The node_exporter is probably sufficient if you're looking for standard system metrics.
Another thing to consider is that Metricbeat / ELK use a push model for metrics delivery, whereas Prometheus pulls metrics from each node it is monitoring. Depending on how you manage your network security, opting for one solution over two may make things simpler.
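For comparison, pulling those same host metrics with node_exporter is just a scrape job in prometheus.yml; a minimal sketch (the job name, targets, and interval are placeholders):

```yaml
# prometheus.yml (fragment) -- Prometheus pulls from node_exporter on each host
# (job name, targets, and interval are illustrative placeholders)
scrape_configs:
  - job_name: "linux-hosts"
    scrape_interval: 15s
    static_configs:
      - targets: ["host1.example.com:9100", "host2.example.com:9100"]
```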
Hi Sunil! Unfortunately, I don't have much experience with Metricbeat, so I can't advise on the differences with Prometheus. For the Linux server, I encourage you to use the Prometheus node exporter, and for PCF I would recommend using the Instana tile (https://www.instana.com/supported-technologies/pivotal-cloud-foundry/). Let me know if you have further questions! Regards Jose
Hi, First of all, understand the difference. All of them can work as message brokers, and all of them understand JSON. But Redis is not a queue or a topic; it's an in-memory cache. RabbitMQ and Kafka persist data on the file system, whereas Redis holds it only in memory. If Redis goes down and you can't use the cache dump wisely, your data is gone. Redis is a very short-lived broker, though it is fast because it avoids the I/O operations and commits that Kafka does. I don't have much experience with RabbitMQ so I can't comment on it in depth, but RabbitMQ is easier to administer than open-source Kafka, in case your manager doesn't want to pay money ….. 😂
Coming to a decision… If you are ready to risk or compromise the in-memory data inside the cache, go for Redis. If you are not concerned about horizontal scaling and can do your job with vertical scaling, go for Redis. If you want horizontal scaling and want to persist data on disk for fetching later, go for Kafka. But you won't achieve speed unless you fine-tune your consumers; that needs a good understanding of consumer threads, partitioning, poll intervals, etc. If you use the Confluent Platform, then don't even compare Kafka with Redis and RabbitMQ.
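As a rough illustration of that consumer tuning, assuming the consumer is a Logstash kafka input (as elsewhere in this thread), a pipelines.yml fragment might look like this; the brokers, topic, group ID, and numbers are placeholders, and consumer_threads should not exceed the topic's partition count:

```yaml
# pipelines.yml (fragment) -- consumer-side tuning knobs for a Kafka topic
# (brokers, topic, group id, and the numbers are illustrative placeholders)
- pipeline.id: kafka-intake
  config.string: |
    input {
      kafka {
        bootstrap_servers    => "broker1:9092,broker2:9092"
        topics               => ["app-logs"]
        group_id             => "logstash-consumers"
        consumer_threads     => 4           # keep <= number of partitions on the topic
        max_poll_records     => "500"       # records fetched per poll
        max_poll_interval_ms => "300000"    # max gap between polls before a rebalance
      }
    }
    output { elasticsearch { hosts => ["http://localhost:9200"] } }
```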
Cheers! Sunil.