So, I am working at a big company that runs a number of different microservices written in Golang. I am currently looking for a technology that can give me all the metric data from those microservices. What time-series databases would you recommend, or which databases would you recommend investigating further? I appreciate any input.
Each of these tools can help you with microservice workloads and works well. I will try to go through some of the good, the bad, and the ugly of each.
Datadog has an easy setup and a short time to getting something tangible out of it. The cost model is per host, so consider how that will affect your use case. Also, as a large organization, at some point you will probably want control over some or all of your telemetry data to run your own ML or AI processes. With Datadog this can be difficult, as you will need to build processes outside of its closed ecosystem to get at the raw metrics.
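To give a sense of what pulling data back out looks like, here is a rough Go sketch against Datadog's v1 metrics query endpoint; the metric query, time window, and env var names are just placeholders, and you would still have to parse and store the returned JSON yourself.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"time"
)

func main() {
	// Query the last hour of a placeholder metric through the v1 query API.
	now := time.Now().Unix()
	q := url.Values{}
	q.Set("from", fmt.Sprint(now-3600))
	q.Set("to", fmt.Sprint(now))
	q.Set("query", "avg:system.cpu.user{*}") // placeholder query

	req, err := http.NewRequest("GET", "https://api.datadoghq.com/api/v1/query?"+q.Encode(), nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("DD-API-KEY", os.Getenv("DD_API_KEY"))
	req.Header.Set("DD-APPLICATION-KEY", os.Getenv("DD_APP_KEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The raw JSON series would then be fed into your own ML/ETL pipeline.
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```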
Prometheus is a great tool. It also has a fairly straightforward setup, especially with Kubernetes. If you are running your microservices in k8s then it is going to get used one way or another; it is a first-class citizen there, with heavy utilization of the K8s API. I also like that its architecture is easy to understand and that it uses Grafana as the visualization engine. Prometheus at scale can be done, but it is a pain, especially with a distributed infrastructure across multiple workloads.
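For context, exposing metrics from one of your Go services is only a few lines with the official client_golang library; the metric name, label, and port below are just examples, not anything specific to your setup.

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal is a placeholder counter; the name and label are just examples.
var requestsTotal = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "myservice_http_requests_total",
		Help: "Total HTTP requests handled, by path.",
	},
	[]string{"path"},
)

func handler(w http.ResponseWriter, r *http.Request) {
	requestsTotal.WithLabelValues(r.URL.Path).Inc()
	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/", handler)
	// Expose the /metrics endpoint for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```

From there, Prometheus's Kubernetes service discovery picks up the /metrics endpoint and you wire dashboards on top in Grafana.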
InfluxDB (the TICK stack in v1) is known for its scalability and flexibility as a time-series database. Telegraf is the main input/data-forwarder of the architecture and is completely decoupled from the database, as are the other three components of the stack. Influx has made it very easy to use one component on its own. I have worked on stacks that just used Telegraf for ingestion into Kinesis or another data stream. I have also worked on stacks that used the Influx database but ran a different ETL process for analyzing the data in real time instead of the v1 architecture's Kapacitor query engine. The Influx database is a great-performing time-series database that, in version 2, runs within Kubernetes and uses Flux as the query language. Flux is a nice query language that is fairly easy to learn and has a lot of flexibility. As a last positive note, Telegraf is written in Go, so that would fit well with your current team.
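If you do go the Influx route, the official Go client (influxdb-client-go v2) is straightforward as well. Here is a minimal sketch of writing a point from a Go service, where the URL, token, org, bucket, and measurement are all placeholders:

```go
package main

import (
	"context"
	"time"

	influxdb2 "github.com/influxdata/influxdb-client-go/v2"
)

func main() {
	// URL, token, org, and bucket are placeholders for your own setup.
	client := influxdb2.NewClient("http://localhost:8086", "my-token")
	defer client.Close()

	writeAPI := client.WriteAPIBlocking("my-org", "my-bucket")

	// One point per latency sample, tagged by service name (example values).
	p := influxdb2.NewPoint(
		"request_latency",
		map[string]string{"service": "checkout"},
		map[string]interface{}{"ms": 42.0},
		time.Now(),
	)
	if err := writeAPI.WritePoint(context.Background(), p); err != nil {
		panic(err)
	}
}
```

In practice you would more often let Telegraf do the collection and forwarding rather than writing points directly, but the client is there when you need it.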
The difficulty with Influx is that it is hard to get something really tangible out of it. The initial time to see something is fast, but there is a lot of other work involved, and you also have to understand the architecture well. The management of Influx can be cumbersome, but it can scale up better than the other two once Datadog's cost is taken into consideration. They have a lot of API hooks in their v1 enterprise edition to wire and configure it, and they do offer a managed service to offload this cost until later.
My overall choice here would probably be to go with some form of the Influx stack, as you can rip out and/or add components to the flow as needed. Eventually you will probably want to run an ML process in there (it can be done within Kapacitor, but of course you can also use your cloud provider for this), and this gives you the flexibility to do it anywhere. I would still look at Prometheus, because you will most likely end up using it anyway, and it has forwarders to InfluxDB, so it still fits.
We're running Prometheus/Alertmanager/Grafana across our whole company for every monitoring and metrics requirement, from the infrastructure layer all the way up to Spring Boot endpoint services, and the Prometheus exporter/scraping approach works pretty well for us. It's really easy to set up and, more importantly, to maintain without much effort: all the Prometheus configs get automatically created through Terraform outputs and Ansible jobs. Combine it with Grafana and you're smiling.