Alternatives to Kibana
Grafana, Loggly, Graylog, Splunk, and Prometheus are the most popular alternatives and competitors to Kibana.

What is Kibana?

Kibana is an open-source (Apache-licensed), browser-based analytics and search dashboard for Elasticsearch. Kibana is a snap to set up and start using: it strives to be easy to get started with while also being flexible and powerful, just like Elasticsearch.
Kibana is a tool in the Monitoring Tools category of a tech stack.
Kibana is an open source tool with 12.9K GitHub stars and 5K GitHub forks. Here's a link to Kibana's open source repository on GitHub.

Kibana alternatives & related posts

related Grafana posts

Conor Myhrvold, Tech Brand Mgr, Office of CTO at Uber Technologies
Nagios · Grafana · Graphite · Prometheus

Why we spent several years building an open source, large-scale metrics alerting system, M3, built for Prometheus:

By late 2014, all services, infrastructure, and servers at Uber emitted metrics to a Graphite stack that stored them using the Whisper file format in a sharded Carbon cluster. We used Grafana for dashboarding and Nagios for alerting, issuing Graphite threshold checks via source-controlled scripts. While this worked for a while, expanding the Carbon cluster required a manual resharding process and, due to lack of replication, any single node’s disk failure caused permanent loss of its associated metrics. In short, this solution was not able to meet our needs as the company continued to grow.

To ensure the scalability of Uber’s metrics backend, we decided to build out a system that provided fault tolerant metrics ingestion, storage, and querying as a managed platform...

https://eng.uber.com/m3/

(GitHub: https://github.com/m3db/m3)
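
For context, the "Graphite threshold checks via source-controlled scripts" mentioned above can be as simple as a script that pulls a series from Graphite's render API and alerts when it crosses a limit. A minimal sketch, assuming a hypothetical Graphite host, metric name, and threshold (not Uber's actual checks):

```python
import requests

GRAPHITE_URL = "http://graphite.example.com/render"  # hypothetical host
TARGET = "servers.api01.cpu.load"                    # illustrative metric name
THRESHOLD = 0.9

def latest_value(target):
    # Fetch the last 5 minutes of the series as JSON from Graphite's render API.
    resp = requests.get(GRAPHITE_URL, params={
        "target": target,
        "from": "-5min",
        "format": "json",
    })
    resp.raise_for_status()
    series = resp.json()
    if not series:
        return None
    datapoints = series[0]["datapoints"]  # list of [value, timestamp] pairs
    # Intervals with no data come back as None; use the most recent real value.
    values = [v for v, _ in datapoints if v is not None]
    return values[-1] if values else None

value = latest_value(TARGET)
if value is not None and value > THRESHOLD:
    print(f"ALERT: {TARGET} = {value} exceeds {THRESHOLD}")
```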

Gopalakrishna Palem
Kibana · Grafana

For our Predictive Analytics platform, we have used both Grafana and Kibana.

Kibana has predictions and ML algorithms support, so if you need them, you may be better off with Kibana. The multivariate analysis features it provides are unique (not available in Grafana).

For everything else, definitely Grafana. In particular, the number of supported data sources and plugins clearly makes Grafana the winner (in the visualization and reporting sense alone). Creating your own plugin is also very easy. The top pros of Grafana (which it does better than Kibana) are:

  • Creating and organizing visualization panels
  • Templating the panels on dashboards for repetitive tasks
  • Real-time monitoring, filtering of charts based on conditions and variables
  • Export/import in JSON format, which lets you version dashboards and keep them in Git (see the sketch below)
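
To illustrate the templating and export/import points, here is a skeletal dashboard in the JSON layout Grafana exports, built as a Python dict so it can be generated and versioned in Git. This is a minimal sketch: the "Prometheus" data source and the queries are placeholder assumptions, and a real exported dashboard carries many more fields.

```python
import json

# Skeletal Grafana dashboard: one template variable plus a panel that repeats
# per value of that variable. Data source and queries are placeholders.
dashboard = {
    "title": "Service overview",
    "schemaVersion": 16,
    "templating": {
        "list": [
            {
                "name": "instance",                      # referenced as $instance
                "type": "query",
                "datasource": "Prometheus",              # assumed data source
                "query": "label_values(up, instance)",
            }
        ]
    },
    "panels": [
        {
            "title": "Load on $instance",
            "type": "graph",
            "repeat": "instance",                        # one panel per instance
            "targets": [{"expr": 'node_load1{instance="$instance"}'}],
        }
    ],
}

# Write the dashboard out so it can be committed to Git and later imported
# through Grafana's UI or HTTP API.
with open("service-overview.json", "w") as f:
    json.dump(dashboard, f, indent=2)
```
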
Splunk
Search, monitor, analyze and visualize machine data

related Splunk posts

Grafana · Splunk · Kibana

I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk; however, it is light years above grepping through log files. We previously used Grafana but found it annoying to maintain a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.
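
To make the "light years above grepping" point concrete, here is a rough sketch of the kind of structured query Kibana sends to Elasticsearch under the hood: instead of grepping flat files, you search an index with JSON. The index name and field names are illustrative assumptions:

```python
import json
import requests

ES_URL = "http://localhost:9200"  # assumes a local Elasticsearch node
INDEX = "logs-2019.01.01"         # illustrative index name

# Full-text search for "error", newest first -- roughly what typing
# "message:error" into Kibana's search bar does.
query = {
    "query": {"match": {"message": "error"}},
    "sort": [{"@timestamp": {"order": "desc"}}],
    "size": 10,
}

resp = requests.get(f"{ES_URL}/{INDEX}/_search",
                    headers={"Content-Type": "application/json"},
                    data=json.dumps(query))
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"].get("@timestamp"), hit["_source"].get("message"))
```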

Prometheus
An open-source service monitoring system and time series database, developed by SoundCloud

related Prometheus posts

See the Uber M3 post under the related Grafana posts above.

Raja Subramaniam Mahali
Sysdig · Kubernetes · Prometheus

We have Prometheus as the monitoring engine in our stack, which includes a Kubernetes cluster, container images, and other open source tools. I am also aware that Sysdig can be integrated with Prometheus, but I would really like to know whether Sysdig alone or Sysdig plus Prometheus makes the better monitoring solution.

Tableau
Tableau helps people see and understand data.

New Relic
SaaS Application Performance Management for Ruby, PHP, .NET, Java, Python, and Node.js apps.

related New Relic posts

Sebastian Gębski, CTO at Shedul/Fresha (Fresha Engineering)
Logentries · Sentry · AppSignal · New Relic · GitHub · Git · Jenkins · CircleCI

Regarding Continuous Integration, we started with something very easy to set up, CircleCI, but over time we have been adding more and more complex pipelines, and we use Jenkins to configure and run those. It's much more effort, but at some point we had to pay for the flexibility we expected. Our source code version control is Git (which probably doesn't require a rationale these days) and we keep repos in GitHub; we have since the very beginning and never considered moving out. Our primary monitoring these days is New Relic (Ruby & SPA apps) and AppSignal (Elixir apps); we're considering unifying on New Relic, but this will require some improvements in Elixir app observability. For error reporting we use Sentry (a very popular choice in this class), and we collect our distributed logs using Logentries (to avoid semi-manual handling here).

Julien DeFrance, Full Stack Engineering Manager at ValiMail (Stessa)
Datadog · New Relic · #APM

Which #APM / #Infrastructure #Monitoring solution to use?

The two major players in that space are New Relic and Datadog. Both are very comparable in terms of pricing and capabilities (Datadog recently introduced APM as well).

In our use case, keeping the number of tools minimal was a major selection criterion.

As we were already using #NewRelic, my recommendation was to move to the pro tier so we would benefit from advanced APM features, synthetics, and mobile & infrastructure monitoring, and gain a 360-degree view of our infrastructure.

A few things I liked about New Relic:
  • Mobile app and push notifications
  • Ease of setting up new alerts
  • Being notified via email and push notifications without requiring another third-party alerting solution

I've certainly seen use cases where New Relic can also be used as an input data source for Datadog. Therefore, depending on your use case, it might also be worth evaluating joint usage of both solutions.

related Datadog posts

Robert Zuber, CTO at CircleCI
Looker · PostgreSQL · Amplitude · Segment · Rollbar · Honeycomb · PagerDuty · Datadog

Our primary source of monitoring and alerting is Datadog. We’ve got prebuilt dashboards for every scenario and integration with PagerDuty to manage routing any alerts. We’ve definitely scaled past the point where managing dashboards is easy, but we haven’t had time to invest in using features like Anomaly Detection. We’ve started using Honeycomb for some targeted debugging of complex production issues and we are liking what we’ve seen. We capture any unhandled exceptions with Rollbar and, if we realize one will keep happening, we quickly convert the metrics to point back to Datadog, to keep Rollbar as clean as possible.

We use Segment to consolidate all of our trackers, the most important of which goes to Amplitude to analyze user patterns. However, if we need a more consolidated view, we push all of our data to our own data warehouse running PostgreSQL; this is available for analytics and dashboard creation through Looker.

StackShare Editors
Flask · AWS EC2 · Celery · Datadog · PagerDuty · Airflow · StatsD · Grafana

Data science and engineering teams at Lyft maintain several big data pipelines that serve as the foundation for various types of analysis throughout the business.

Apache Airflow sits at the center of this big data infrastructure, allowing users to “programmatically author, schedule, and monitor data pipelines.” Airflow is an open source tool, and “Lyft is the very first Airflow adopter in production since the project was open sourced around three years ago.”

There are several key components of the architecture. A web UI allows users to view the status of their queries, along with an audit trail of any modifications to the query. A metadata database stores things like job status and task-instance status. A multi-process scheduler handles job requests and triggers the executor to execute those tasks.

Airflow supports several executors, though Lyft uses CeleryExecutor to scale task execution in production. Airflow is deployed to three Amazon Auto Scaling Groups, each associated with a Celery queue.

Audit logs supplied to the web UI are powered by the existing Airflow audit logs as well as Flask signals.

Datadog, StatsD, Grafana, and PagerDuty are all used to monitor the Airflow system.
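
For readers unfamiliar with Airflow's "programmatically author" model, a pipeline is just Python. A minimal sketch, assuming Airflow 2.x import paths and illustrative task and DAG names (Lyft's actual pipelines are of course far larger):

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator  # Airflow 2.x path

def extract():
    print("pull raw events")

def transform():
    print("aggregate into daily metrics")

# The scheduler reads this DAG definition and hands tasks to the executor;
# with CeleryExecutor (as at Lyft), they run on distributed workers.
with DAG(
    dag_id="daily_metrics",            # illustrative pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after extract succeeds
```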

Logstash
Collect, Parse, & Enrich Data

related Logstash posts

Tymoteusz Paul, Devops guy at X20X Development LTD
Amazon EC2 · LXC · CircleCI · Docker · Git · Vault · Apache Maven · Slack · Jenkins · TeamCity · Logstash · Kibana · Elasticsearch · Ansible · VirtualBox · Vagrant

Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of repeating the same thing every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it with a "live example" of how Rome got built, assuming the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).

It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to get over: convert all the instructions/scripts into Ansible playbook(s), and only stop when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing a dev environment into a proper, production-grade product.

I should probably digress here for a moment and explain why. I firmly believe that the way you deploy to production is the same way you should deploy to develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools it should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN - you name it.

We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things; and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

If we are happy with the state of the Ansible, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust, and, unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, and doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built in with TeamCity). It also comes with all the common handy plugins, like Slack or Apache Maven integration.

The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:
1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around.
2. All security credentials besides the development environment's must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing; because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management.
3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).

Speaking of deployments, I generally try to keep it simple, but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I am constantly peeking at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline, which could be migrated away from the cloud and onto bare-metal boxes. That is another part where this approach strongly triumphs over the common Docker-and-CircleCI setup, where you are very much tied in to cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, in a similar way as you do for Amazon EC2 (Ansible supports both greatly), and you are good to go. One does not exclude the other; quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.

Tanya Bragin, Product Lead, Observability at Elastic
Kibana · Logstash · Elasticsearch

ELK Stack (Elasticsearch, Logstash, Kibana) is widely known as the de facto way to centralize logs from operational systems. The assumption is that Elasticsearch (a "search engine") is a good place to put text-based logs for the purposes of free-text search. And indeed, simply searching text-based logs for the word "error" or filtering logs based on a set of well-known tags is extremely powerful, and is often where most users start.
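
As a concrete flavor of those two starting points, here is a hedged sketch of the query Kibana would build to combine a free-text search for "error" with a filter on a well-known tag; the index pattern and field names are assumptions:

```python
import json
import requests

ES_URL = "http://localhost:9200"  # assumes a local Elasticsearch node
INDEX = "filebeat-*"              # illustrative index pattern

# Free-text match on the message, narrowed by an exact tag filter.
# "tags" is assumed to be a keyword field.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"message": "error"}}],
            "filter": [{"term": {"tags": "production"}}],
        }
    }
}

resp = requests.get(f"{ES_URL}/{INDEX}/_search",
                    headers={"Content-Type": "application/json"},
                    data=json.dumps(query))
resp.raise_for_status()
print(resp.json()["hits"]["total"])
```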

Nagios
Complete monitoring and alerting for servers, switches, applications, and services

related Nagios posts

See the Uber M3 post under the related Grafana posts above.

related Graphite posts

See the Uber M3 post under the related Grafana posts above.

Amazon CloudWatch · PagerDuty · Grafana · Graphite · StatsD · Sentry

A huge part of our continuous deployment practices is to have granular alerting and monitoring across the platform. To do this, we run Sentry on-premise, inside our VPCs, for our event alerting, and we run an awesome observability and monitoring system consisting of StatsD, Graphite and Grafana. We have dashboards using this system to monitor our core subsystems so that we can know the health of any given subsystem at any moment. This system ties into our PagerDuty rotation, as well as alerts from some of our Amazon CloudWatch alarms (we’re looking to migrate all of these to our internal monitoring system soon).
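
For a flavor of what feeding a StatsD/Graphite/Grafana pipeline looks like from application code, here is a minimal sketch using the community statsd Python client; the host, port, prefix, and metric names are assumptions:

```python
import time

import statsd  # community client: pip install statsd

# StatsD receives metrics over UDP, so instrumentation is fire-and-forget
# and cheap enough for hot paths. Host/port/prefix are assumed values.
client = statsd.StatsClient("localhost", 8125, prefix="checkout")

client.incr("orders.created")    # counter
client.gauge("queue.depth", 42)  # point-in-time value

start = time.time()
# ... handle a request ...
client.timing("request.latency", (time.time() - start) * 1000)  # milliseconds
```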

related StatsD posts

Łukasz Korecki, CTO & Co-founder at EnjoyHQ
Stackdriver · Clojure · StatsD · Google Compute Engine · collectd

We use collectd because of its low footprint and great capabilities. We use it to monitor our Google Compute Engine machines. More interestingly, we set up collectd as a StatsD replacement: all our Clojure services push application-level metrics using our own metrics library, and collectd pushes them to Stackdriver.
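
Because collectd's statsd plugin speaks the plain StatsD wire protocol, services can push metrics with nothing more than a UDP socket. A minimal sketch; the metric names and the collectd host/port are assumptions:

```python
import socket

# collectd's statsd plugin listens for StatsD datagrams over UDP
# (port 8125 assumed here).
COLLECTD_ADDR = ("localhost", 8125)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# StatsD wire format: <name>:<value>|<type>  (c = counter, ms = timer, g = gauge)
sock.sendto(b"search.requests:1|c", COLLECTD_ADDR)
sock.sendto(b"search.latency:27|ms", COLLECTD_ADDR)
sock.sendto(b"cache.size:1024|g", COLLECTD_ADDR)
```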

Jaeger
Distributed tracing system released as open source by Uber

Supervisord
A client/server system that allows its users to monitor and control a number of processes

collectd
System and application metrics collector

related collectd posts

See the collectd post under the related StatsD posts above.

Munin
Plug-and-play networked resource monitoring tool that can help answer the question "What just happened to kill our performance?"

Icinga
A resilient, open source monitoring system

related Icinga posts

StackShare Editors
Icinga · Graphite · Logstash · Elasticsearch · Grafana · Kibana

One size definitely doesn't fit all when it comes to open source monitoring solutions, and executing generally understood best practices in the context of unique distributed systems presents all sorts of problems. Megan Anctil, a senior engineer on the Technical Operations team at Slack, gave a talk at an O'Reilly Velocity Conference sharing pain points and lessons learned wrangling known technologies such as Icinga, Graphite, Grafana, and the Elastic Stack to best fit the company's use cases.

At the time, Slack used a few well-known monitoring tools, since its Technical Operations team wasn't large enough to build an in-house solution for all of these. Nor did the team think it was sustainable to throw money at the problem, given the volume of information processed and the not-insignificant price and rigidity of many vendor solutions. With thousands of servers across multiple regions and millions of metrics and documents being processed and indexed per second, the team had to figure out how to scale these technologies to fit Slack's needs.

On the backend, they experimented with multiple clusters in both Graphite and ELK, distributed Icinga nodes, and more. At the same time, they've tried to build usability into Grafana that reflects the team's mental models of the system, and have found ways to make alerts from Icinga more insightful and actionable.