What is Prometheus?
Prometheus is an open source systems monitoring and alerting toolkit, originally built at SoundCloud; its main distinguishing features are listed at the end of this section.
Who uses Prometheus?
Here are some stack decisions, common use cases, and reviews by companies and developers who chose Prometheus in their tech stack.
Why we spent several years building M3, an open source, large-scale metrics and alerting system for Prometheus:
By late 2014, all services, infrastructure, and servers at Uber emitted metrics to a Graphite stack that stored them using the Whisper file format in a sharded Carbon cluster. We used Grafana for dashboarding and Nagios for alerting, issuing Graphite threshold checks via source-controlled scripts. While this worked for a while, expanding the Carbon cluster required a manual resharding process and, due to lack of replication, any single node’s disk failure caused permanent loss of its associated metrics. In short, this solution was not able to meet our needs as the company continued to grow.
To ensure the scalability of Uber’s metrics backend, we decided to build out a system that provided fault tolerant metrics ingestion, storage, and querying as a managed platform...
(GitHub: https://github.com/m3db/m3)
We have Prometheus as the monitoring engine in our stack, which includes a Kubernetes cluster, container images, and other open source tools. I am aware that Sysdig can be integrated with Prometheus, but I would like to know whether Sysdig alone or Sysdig plus Prometheus makes the better monitoring solution.
We are looking for a centralised monitoring solution for our application deployed on Amazon EKS. We would like to monitor metrics from Kubernetes, from AWS services (NeptuneDB, AWS Elastic Load Balancing (ELB), Amazon EBS, Amazon S3, etc.), and custom metrics from our application microservices (a minimal sketch of exposing such metrics follows this question).
We expect around 80 microservices (not counting replicas); in total, the system will have roughly 200-250 microservices running on 10-12 slave nodes.
We tried Prometheus, but maintenance looks like a big issue: we would need to manage scaling, maintain the storage, and deal with multiple exporters and Grafana. I felt this alone needs a few dedicated resources (at least 2-3 people) to manage. Not sure if I am thinking in the correct direction. Please confirm.
You mentioned that Datadog and Sysdig charge per host. Do they charge per slave node?
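For the custom-metrics part of such a setup, the usual pattern is for each microservice to expose its own /metrics endpoint that Prometheus scrapes. Here is a minimal sketch using the official Go client library; the metric name, handler path, and port are illustrative assumptions, not details from the question:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// ordersProcessed is a hypothetical business metric; each microservice
// registers whatever counters, gauges, or histograms it needs.
var ordersProcessed = promauto.NewCounter(prometheus.CounterOpts{
	Name: "orders_processed_total",
	Help: "Total number of orders handled by this service.",
})

func main() {
	http.HandleFunc("/order", func(w http.ResponseWriter, r *http.Request) {
		// ... handle the request ...
		ordersProcessed.Inc()
		w.WriteHeader(http.StatusOK)
	})

	// Prometheus discovers this endpoint (e.g. via Kubernetes service
	// discovery) and scrapes it on its own schedule.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```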
We recently implemented Thanos alongside Prometheus in our Kubernetes clusters. We had previously used a variety of different metrics systems, and we wanted to make life simpler for everyone by just picking one.
Prometheus seemed like an obvious choice due to its powerful query language, native Kubernetes support, and great community. However, we found it somewhat lacking when it came to high availability, something that would be very important if we wanted it to be the single source of all our metrics.
Thanos came along and solved a lot of these problems. It allowed us to run multiple Prometheus instances without duplicating metrics, query multiple Prometheus clusters at once, and easily back up data and query it later. Now we have a single place to go to view metrics across all our clusters, with many layers of redundancy to make this monitoring solution as reliable and resilient as we could reasonably make it.
If you're interested in a bit more detail, feel free to check out the linked blog post I wrote on the subject.
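The "single place to go" works because the Thanos query layer speaks the same HTTP API as a vanilla Prometheus, so one client can fan a query out across every cluster. A minimal sketch with the official Go API client; the endpoint address and the query itself are assumptions for illustration:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// The Thanos query endpoint exposes the standard Prometheus HTTP API,
	// so the stock client works unchanged. The address is illustrative.
	client, err := api.NewClient(api.Config{Address: "http://thanos-query.example:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := v1.NewAPI(client)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// One query, answered with deduplicated data from every Prometheus
	// instance behind the query layer.
	result, warnings, err := promAPI.Query(ctx, `sum by (cluster) (up)`, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println(result)
}
```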
At Kong, while building an internal tool, we struggled to route metrics to Prometheus and logs to Logstash without incurring too much latency in our metrics collection.
We replaced NGINX with OpenResty on the edge of our tool, which allowed us to use the lua-nginx-module to run Lua code that captures metrics and records telemetry data during every request's log phase. That code pushes the metrics to a local aggregator process (written in Go), which in turn exposes them in the Prometheus exposition format for consumption by Prometheus. This solution reduced the number of components we needed to maintain and is fast thanks to NGINX and LuaJIT.
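Kong's aggregator is internal, but the shape of the pattern is easy to sketch: a small Go process accepts pushes from the Lua log-phase handler and serves everything to Prometheus via the official client library, which handles the exposition format. The push endpoint, port, and metric names below are illustrative assumptions, not Kong's actual code:

```go
package main

import (
	"net/http"
	"strconv"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestLatency aggregates per-request timings pushed by the Lua handler.
var requestLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "edge_request_duration_seconds",
		Help: "Request latency recorded in NGINX's log phase.",
	},
	[]string{"route", "status"},
)

func main() {
	prometheus.MustRegister(requestLatency)

	// Hypothetical local push endpoint the Lua code POSTs to.
	http.HandleFunc("/push", func(w http.ResponseWriter, r *http.Request) {
		q := r.URL.Query()
		seconds, err := strconv.ParseFloat(q.Get("seconds"), 64)
		if err != nil {
			http.Error(w, "bad seconds value", http.StatusBadRequest)
			return
		}
		requestLatency.WithLabelValues(q.Get("route"), q.Get("status")).Observe(seconds)
		w.WriteHeader(http.StatusNoContent)
	})

	// promhttp serializes all registered metrics in the Prometheus
	// exposition format for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe("127.0.0.1:9253", nil)
}
```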
Hi, we have a situation where we are using Prometheus to collect system metrics from the PCF (Pivotal Cloud Foundry) platform. We send these as time-series data to Cortex via a Prometheus server and have built a dashboard on top using Grafana. There is another pipeline where we need to read CPU, memory, and disk metrics from a Linux server using Metricbeat; these will be sent to Elasticsearch, and Grafana will pull and show the data in a dashboard.
Is it OK to use Metricbeat for the Linux server, or can we use Prometheus instead?
What is the difference between the system metrics sent by Metricbeat and those sent by the Prometheus Node Exporter?
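One way to compare is to look at the raw series the Node Exporter actually serves: scrape its endpoint and parse the exposition-format output. A small sketch using Prometheus's common Go library; localhost:9100 is the Node Exporter's default address, and the three family names are standard Node Exporter metrics chosen because they overlap with what Metricbeat's system module reports:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/prometheus/common/expfmt"
)

func main() {
	// The Node Exporter serves the plain-text exposition format
	// on :9100/metrics by default.
	resp, err := http.Get("http://localhost:9100/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var parser expfmt.TextParser
	families, err := parser.TextToMetricFamilies(resp.Body)
	if err != nil {
		panic(err)
	}

	// These families cover roughly the same ground as Metricbeat's
	// system module: CPU, memory, and disk.
	for _, name := range []string{
		"node_cpu_seconds_total",
		"node_memory_MemAvailable_bytes",
		"node_filesystem_avail_bytes",
	} {
		if mf, ok := families[name]; ok {
			fmt.Printf("%s: %d series\n", name, len(mf.GetMetric()))
		}
	}
}
```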
Prometheus's main distinguishing features:
- a multi-dimensional data model (time series defined by metric name and a set of key/value dimensions)
- a flexible query language to leverage this dimensionality
- no dependency on distributed storage
- single server nodes are autonomous
- time series collection happens via a pull model over HTTP
- pushing time series is supported via an intermediary gateway (see the sketch after this list)
- targets are discovered via service discovery or static configuration
- multiple modes of graphing and dashboarding support
- federation support coming soon
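To make the intermediary-gateway item concrete: short-lived jobs that cannot be scraped push their metrics to a Pushgateway, which Prometheus then scrapes like any other target. A minimal sketch with the official Go client; the gateway address and job name are assumptions:

```go
package main

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/push"
)

func main() {
	completionTime := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "batch_job_last_completion_timestamp_seconds",
		Help: "Unix timestamp of the last completed run.",
	})
	completionTime.SetToCurrentTime()

	// The job exits right after this, so it pushes instead of waiting
	// to be scraped; Prometheus pulls the value from the gateway.
	if err := push.New("http://pushgateway.example:9091", "batch_job").
		Collector(completionTime).
		Push(); err != nil {
		panic(err)
	}
}
```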