What is Kibana and what are its top alternatives?
Kibana is an open-source analytics and visualization front end for Elasticsearch: it provides a browser-based interface for searching, viewing, and charting the data stored in Elasticsearch indices.
Top Alternatives to Kibana
- Datadog
Datadog is the leading service for cloud-scale monitoring. It is used by IT, operations, and development teams who build and operate applications that run on dynamic or hybrid cloud infrastructure. Start monitoring in minutes with Datadog! ...
- Grafana
Grafana is a general-purpose dashboard and graph composer. It's focused on providing rich ways to visualize time series metrics, mainly through graphs, but supports other ways to visualize data through a pluggable panel architecture. It currently has rich support for Graphite, InfluxDB, and OpenTSDB, and supports additional data sources via plugins. ...
- Loggly
It is a SaaS solution to manage your log data. There is nothing to install and updates are automatically applied to your Loggly subdomain. ...
- Graylog
Centralize and aggregate all your log files for 100% visibility. Use our powerful query language to search through terabytes of log data to discover and analyze important information. ...
- Splunk
It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data. ...
- Prometheus
Prometheus is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true (a minimal instrumentation sketch follows this list). ...
- Tableau
Tableau can help anyone see and understand their data. Connect to almost any database, drag and drop to create visualizations, and share with a click. ...
- New Relic
The world’s best software and DevOps teams rely on New Relic to move faster, make better decisions and create best-in-class digital experiences. If you run software, you need to run New Relic. More than 50% of the Fortune 100 do too. ...
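As the Prometheus entry above notes, targets expose metrics over HTTP and Prometheus scrapes them at a configured interval. Below is a minimal sketch using the official prometheus_client Python library; the metric names and port 8000 are illustrative assumptions, not details taken from this page.

```python
# Minimal sketch: expose application metrics for Prometheus to scrape.
# Assumes `pip install prometheus_client`; metric names and the port are illustrative.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Jobs currently waiting")

if __name__ == "__main__":
    start_http_server(8000)  # serves plain-text metrics on http://localhost:8000/metrics
    while True:
        REQUESTS.inc()                          # count one unit of work
        QUEUE_DEPTH.set(random.randint(0, 10))  # stand-in for a real queue measurement
        time.sleep(1)
```

A scrape_configs entry pointing at this endpoint, plus an alerting rule over the resulting series, would complete the collect-evaluate-alert loop described above.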
Kibana alternatives & related posts
Datadog
Pros of Datadog
- Monitoring for many apps (databases, web servers, etc.) (136)
- Easy setup (106)
- Powerful UI (86)
- Powerful integrations (82)
- Great value (69)
- Great visualization (53)
- Events + metrics = clarity (45)
- Custom metrics (40)
- Notifications (40)
- Flexibility (38)
- Free & paid plans (18)
- Great customer support (15)
- Makes my life easier (14)
- Adapts automatically as I scale up (9)
- Easy setup and plugins (8)
- Super easy and powerful (7)
- AWS support (6)
- In-context collaboration (6)
- Rich in features (5)
- Docker support (4)
- Cost (4)
- Automation tools (3)
- Source control and bug tracking (3)
- Simple, powerful, great for infra (3)
- Cute logo (3)
- Expensive (3)
- Easy to analyze (3)
- Full visibility of applications (3)
- Monitor almost everything (3)
- Better than others (3)
- Good for startups (2)
- Free setup (2)
- Best in the field (2)
- APM (1)
Cons of Datadog
- Expensive (19)
- No errors/exceptions tracking (4)
- External network goes down, you won't be logging (2)
- Complicated (1)
related Datadog posts
Our primary source of monitoring and alerting is Datadog. We’ve got prebuilt dashboards for every scenario and integration with PagerDuty to manage routing any alerts. We’ve definitely scaled past the point where managing dashboards is easy, but we haven’t had time to invest in using features like Anomaly Detection. We’ve started using Honeycomb for some targeted debugging of complex production issues and we are liking what we’ve seen. We capture any unhandled exceptions with Rollbar and, if we realize one will keep happening, we quickly convert the metrics to point back to Datadog, to keep Rollbar as clean as possible.
We use Segment to consolidate all of our trackers, the most important of which goes to Amplitude to analyze user patterns. However, if we need a more consolidated view, we push all of our data to our own data warehouse running PostgreSQL; this is available for analytics and dashboard creation through Looker.
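The post above routes alerts through PagerDuty and converts recurring Rollbar exceptions back into Datadog metrics. A hedged sketch of submitting such a custom metric and event with the official datadog Python package (datadogpy) via a locally running Agent is shown below; the metric name, tags, and Agent address are illustrative assumptions rather than details from the post.

```python
# Hedged sketch: report a recurring exception to Datadog through DogStatsD.
# Assumes `pip install datadog` and a Datadog Agent listening on localhost:8125;
# the metric/event names and tags are invented for illustration.
from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)  # default DogStatsD address

def report_recurring_exception(error_class: str) -> None:
    # Count occurrences as a metric so dashboards and monitors can alert on the rate.
    statsd.increment("app.exceptions.recurring", tags=[f"error_class:{error_class}"])
    # Add an event for context in the Datadog event stream.
    statsd.event(
        "Recurring exception observed",
        f"{error_class} keeps happening; see the error tracker for the stack trace.",
        alert_type="warning",
    )

report_recurring_exception("TimeoutError")
```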
We are looking for a centralised monitoring solution for our application deployed on Amazon EKS. We would like to monitor using metrics from Kubernetes, AWS services (NeptuneDB, AWS Elastic Load Balancing (ELB), Amazon EBS, Amazon S3, etc.), and our application microservices' custom metrics.
We are expected to use around 80 microservices (not replicas). I think a total of 200-250 microservices will be there in the system with 10-12 slave nodes.
We tried Prometheus, but it looks like maintenance is a big issue: we would need to manage scaling, maintain the storage, and deal with multiple exporters and Grafana. I felt this alone needs a few dedicated resources (at least 2-3 people) to manage. Not sure if I am thinking in the correct direction. Please confirm.
You mentioned Datadog and Sysdig charge per host. Do they charge per slave node?
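On the AWS side of the question above, services such as ELB, EBS, S3, and Neptune publish their metrics to CloudWatch, and any centralized monitoring tool ultimately reads them from there. A hedged sketch of pulling one such metric with boto3 is below; the load balancer identifier, region, and period are placeholder assumptions.

```python
# Hedged sketch: read an Application Load Balancer metric from CloudWatch.
# Assumes `pip install boto3` and AWS credentials in the environment; the
# dimension value and region are placeholders.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApplicationELB",
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,            # 5-minute buckets
    Statistics=["Sum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```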
Grafana
Pros of Grafana
- Beautiful (88)
- Graphs are interactive (68)
- Free (57)
- Easy (56)
- Nicer than the Graphite web interface (34)
- Many integrations (25)
- Can build dashboards (18)
- Can collaborate on dashboards (10)
- Easy to specify time window (10)
- Dashboards contain number tiles (9)
- Click and drag to zoom in (5)
- Integration with InfluxDB (5)
- Open Source (5)
- Authentication and user management (4)
- Threshold limits in graphs (4)
- Simple and native support for Prometheus (3)
- Open to CloudWatch and many databases (3)
- Alerts (3)
- You can visualize real-time data to set alerts (2)
- You can use this for development to check memcache (2)
- Great community support (2)
- Plugin visualizations (0)
- Graphs as code (0)
Cons of Grafana
- No interactive query builder (1)
related Grafana posts
Why we spent several years building an open source, large-scale metrics alerting system, M3, built for Prometheus:
By late 2014, all services, infrastructure, and servers at Uber emitted metrics to a Graphite stack that stored them using the Whisper file format in a sharded Carbon cluster. We used Grafana for dashboarding and Nagios for alerting, issuing Graphite threshold checks via source-controlled scripts. While this worked for a while, expanding the Carbon cluster required a manual resharding process and, due to lack of replication, any single node’s disk failure caused permanent loss of its associated metrics. In short, this solution was not able to meet our needs as the company continued to grow.
To ensure the scalability of Uber’s metrics backend, we decided to build out a system that provided fault tolerant metrics ingestion, storage, and querying as a managed platform...
(GitHub: https://github.com/m3db/m3)
Grafana and Prometheus together, running on Kubernetes, is a powerful combination. These tools are cloud-native and offer a large community and easy integrations. At PayIt we're exporting Java application metrics using a Dropwizard metrics exporter, and our Node.js services now use the prom-client npm library to serve metrics.
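Both posts above lean heavily on Grafana dashboards. As a hedged sketch, a dashboard can be provisioned programmatically through Grafana's HTTP API; the URL, API token, panel JSON, and PromQL expression below are simplified illustrative assumptions rather than a full dashboard schema.

```python
# Hedged sketch: create a minimal Grafana dashboard via the HTTP API.
# GRAFANA_URL and API_TOKEN are placeholders; the panel definition is deliberately
# stripped down and assumes a Prometheus datasource is configured.
import requests

GRAFANA_URL = "http://localhost:3000"
API_TOKEN = "replace-with-a-grafana-api-token"

payload = {
    "dashboard": {
        "id": None,
        "title": "Service overview (example)",
        "panels": [
            {
                "type": "timeseries",
                "title": "HTTP requests per second",
                "targets": [{"expr": "rate(app_requests_total[5m])"}],  # PromQL
            }
        ],
    },
    "overwrite": True,
}

resp = requests.post(
    f"{GRAFANA_URL}/api/dashboards/db",
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # includes the UID and URL of the created dashboard
```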
Loggly
Pros of Loggly
- Centralized log management (37)
- Easy to set up (25)
- Great filtering (21)
- Live logging (16)
- JSON log support (15)
- Log management (10)
- Alerting (10)
- Great dashboards (7)
- Love the product (7)
- Heroku add-on (4)
- Easy to set up and use (2)
- Easy setup (2)
- Great UI (2)
- Good parsing (2)
- Powerful (2)
- Fast search (2)
- Backup to S3 (2)
Cons of Loggly
- No alerts in free plan (2)
- Pricey after free plan (3)
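Loggly's "nothing to install" pitch comes down to shipping events to its HTTPS inputs. A hedged sketch with requests is below; the customer token is a placeholder and the URL follows Loggly's documented single-event HTTP/S input format.

```python
# Hedged sketch: send one JSON log event to Loggly over HTTPS.
# CUSTOMER_TOKEN is a placeholder; tags after /tag/ are free-form labels.
import requests

CUSTOMER_TOKEN = "replace-with-your-loggly-customer-token"
URL = f"https://logs-01.loggly.com/inputs/{CUSTOMER_TOKEN}/tag/http/"

event = {"level": "error", "service": "checkout", "message": "payment gateway timeout"}

resp = requests.post(URL, json=event, timeout=5)
resp.raise_for_status()  # a 2xx response means Loggly accepted the event
```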
related Loggly posts
Graylog
Pros of Graylog
- Open source (19)
- Powerful (13)
- Well documented (8)
- Alerts (6)
- User authentication (5)
- Flexible query and parsing language (5)
- User management (3)
- Easy query language and English parsing (3)
- Alerts and dashboards (3)
- Easy to install (2)
- A large community (1)
- Manage users and permissions (1)
- Free version (1)
Cons of Graylog
- Does not handle frozen indices at all (1)
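Graylog typically ingests structured messages through a GELF input. A hedged sketch of posting one GELF message to an HTTP GELF input with requests is below; the hostname, port, and custom field are assumptions, and the input must already be configured in Graylog.

```python
# Hedged sketch: send one GELF message to a Graylog HTTP GELF input.
# Assumes an HTTP GELF input is listening on graylog.example.com:12201; the
# host, service field, and message text are placeholders.
import requests

gelf_message = {
    "version": "1.1",
    "host": "web-01",
    "short_message": "Upstream connection refused",
    "level": 3,                  # syslog severity: 3 = error
    "_service": "api-gateway",   # additional GELF fields are prefixed with "_"
}

resp = requests.post("http://graylog.example.com:12201/gelf", json=gelf_message, timeout=5)
resp.raise_for_status()  # Graylog acknowledges queued messages with 202 Accepted
```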
related Graylog posts
Splunk
Pros of Splunk
- Ability to style search results into reports (2)
- Alert system based on custom query results (2)
- API for searching logs, running reports (2)
- Query engine supports joining, aggregation, stats, etc. (2)
- Query any log as key-value pairs (1)
- Splunk language supports string and date manipulation, math, etc. (1)
- Granular scheduling and time window support (1)
- Custom log parsing as well as automatic parsing (1)
- Dashboarding on any log contents (1)
- Rich GUI for searching live logs (1)
Cons of Splunk
- Splunk query language is rich, so there is a lot to learn (1)
related Splunk posts
I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk; however, it is light years above grepping through log files. We previously used Grafana but found it annoying to maintain a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.
We are currently exploring Elasticsearch and Splunk for our centralized logging solution. I need some feedback about these two tools. We expect our logs to be upwards of 10 TB of logging data.
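Since the post above weighs Elasticsearch (fronted by Kibana) against Splunk for roughly 10 TB of logs, here is a hedged sketch of the kind of filtered log search Kibana issues under the hood, using the official Elasticsearch Python client (8.x-style API assumed); the index pattern, field names, and cluster URL are illustrative.

```python
# Hedged sketch: a Kibana-style filtered log search with the Elasticsearch
# Python client (8.x API assumed). Index pattern, fields, and URL are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="logs-*",
    query={
        "bool": {
            "must": [{"match": {"message": "timeout"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    sort=[{"@timestamp": {"order": "desc"}}],
    size=20,
)

for hit in resp["hits"]["hits"]:
    source = hit["_source"]
    print(source.get("@timestamp"), source.get("message"))
```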
Prometheus
Pros of Prometheus
- Powerful, easy-to-use monitoring (47)
- Flexible query language (38)
- Dimensional data model (32)
- Alerts (27)
- Active and responsive community (23)
- Extensive integrations (22)
- Easy to set up (19)
- Beautiful model and query language (12)
- Easy to extend (7)
- Nice (6)
- Written in Go (3)
- Good for experimentation (2)
- Easy for monitoring (1)
Cons of Prometheus
- Just for metrics (12)
- Bad UI (6)
- Needs monitoring to access metrics endpoints (6)
- Not easy to configure and use (4)
- Supports only active agents (3)
- Written in Go (2)
- TLS is quite difficult to understand (2)
- Requires multiple applications and tools (2)
- Single point of failure (1)
related Prometheus posts
Why we spent several years building an open source, large-scale metrics alerting system, M3, built for Prometheus:
By late 2014, all services, infrastructure, and servers at Uber emitted metrics to a Graphite stack that stored them using the Whisper file format in a sharded Carbon cluster. We used Grafana for dashboarding and Nagios for alerting, issuing Graphite threshold checks via source-controlled scripts. While this worked for a while, expanding the Carbon cluster required a manual resharding process and, due to lack of replication, any single node’s disk failure caused permanent loss of its associated metrics. In short, this solution was not able to meet our needs as the company continued to grow.
To ensure the scalability of Uber’s metrics backend, we decided to build out a system that provided fault tolerant metrics ingestion, storage, and querying as a managed platform...
(GitHub: https://github.com/m3db/m3)
Grafana and Prometheus together, running on Kubernetes, is a powerful combination. These tools are cloud-native and offer a large community and easy integrations. At PayIt we're exporting Java application metrics using a Dropwizard metrics exporter, and our Node.js services now use the prom-client npm library to serve metrics.
Tableau
Pros of Tableau
- Capable of visualising billions of rows (6)
- Responsive (1)
- Intuitive and easy to learn (1)
Cons of Tableau
- Very expensive for small companies (1)
related Tableau posts
Looking for the best analytics software for a medium-to-large-sized firm. We currently use a Microsoft SQL Server database that is analyzed in Tableau Desktop and published to Tableau Online for users to access dashboards. Is it worth the cost savings/time to switch over to using SSRS or Power BI? Does anyone have experience migrating from Tableau to SSRS or Power BI? Our other option is to consider using Tableau on-premises instead of online. Using custom SQL with over 3 million rows really decreases performance and results in processing times that greatly exceed our typical experience. Thanks.
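On the custom-SQL performance point above, one common mitigation (whichever BI tool ends up in front) is to push aggregation into SQL Server so the visualization layer receives thousands of pre-summarized rows instead of three million raw ones. The sketch below uses pyodbc; the connection string, table, and column names are invented for illustration.

```python
# Hedged sketch: pre-aggregate in SQL Server rather than pulling raw rows
# through custom SQL. Connection string, table, and columns are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql.example.internal;DATABASE=Sales;Trusted_Connection=yes;"
)

query = """
    SELECT CAST(order_date AS date) AS order_day,
           region,
           SUM(amount) AS total_amount,
           COUNT(*)    AS order_count
    FROM dbo.Orders
    GROUP BY CAST(order_date AS date), region
    ORDER BY order_day;
"""

with conn.cursor() as cursor:
    for order_day, region, total_amount, order_count in cursor.execute(query):
        print(order_day, region, total_amount, order_count)
```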
New Relic
Pros of New Relic
- Easy setup (415)
- Really powerful (344)
- Awesome visualization (244)
- Ease of use (194)
- Great UI (151)
- Free tier (107)
- Great tool for insights (80)
- Heroku integration (66)
- Market leader (55)
- Peace of mind (49)
- Push notifications (21)
- Email notifications (20)
- Heroku add-on (17)
- Error detection and alerting (16)
- Multiple language support (13)
- SQL analysis (11)
- Server resources monitoring (11)
- Transaction tracing (9)
- Apdex scores (8)
- Azure add-on (8)
- Analysis of CPU, disk, memory, and network (7)
- Performance of external services (6)
- Error analysis (6)
- Detailed reports (6)
- Application response times (6)
- Application availability monitoring and alerting (6)
- JVM performance analyzer (Java) (5)
- Most time-consuming transactions (5)
- Easy to use (4)
- Browser transaction tracing (4)
- Top database operations (4)
- Pagoda Box integration (3)
- Custom dashboards (3)
- Weekly performance email (3)
- Application map (3)
- Background jobs transaction analysis (2)
- App speed index (2)
- Easy visibility (2)
- Easy to set up (2)
- Free (1)
- Rails integration (1)
- Super expensive (1)
- Metric data resolution (1)
- Metric data retention (1)
- Team collaboration tools (1)
- Best of the best, what more can you ask for (1)
- Best monitoring on the market (1)
- Real user monitoring overview (1)
- Real user monitoring analysis and breakdown (1)
- Time comparisons (1)
- Access to performance data API (1)
- Worst transactions by user dissatisfaction (1)
- Incident detection and alerting (1)
- Exceptions (0)
Cons of New Relic
- Pricing model doesn't suit microservices (20)
- UI isn't great (10)
- Expensive (7)
- Visualizations aren't very helpful (7)
- Hard to understand why things in your app are breaking (5)
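Several of the pros above (transaction tracing, custom dashboards, background-job analysis) depend on a language agent reporting data. A hedged sketch with the New Relic Python agent is below; the config file path, task name, and metric name are illustrative assumptions, and the .NET, Ruby, and Node.js agents play the equivalent role in those stacks.

```python
# Hedged sketch: instrument a background job with the New Relic Python agent.
# Assumes `pip install newrelic` and a newrelic.ini generated for your account;
# the task and metric names are invented for illustration.
import newrelic.agent

newrelic.agent.initialize("newrelic.ini")              # license key / app name from config
application = newrelic.agent.register_application(timeout=10.0)

@newrelic.agent.background_task(name="nightly-report", group="Task")
def build_nightly_report() -> None:
    rows_processed = 1234                              # stand-in for real work
    # Custom metrics can then back custom dashboards and alert conditions.
    newrelic.agent.record_custom_metric("Custom/NightlyReport/Rows", rows_processed)

if __name__ == "__main__":
    build_nightly_report()
    newrelic.agent.shutdown_agent(timeout=10.0)        # flush data before exit
```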
related New Relic posts
Hey there! We are looking at Datadog, Dynatrace, AppDynamics, and New Relic as options for our web application monitoring.
Current Environment: .NET Core Web app hosted on Microsoft IIS
Future Environment: Web app will be hosted on Microsoft Azure
Tech Stacks: IIS, RabbitMQ, Redis, Microsoft SQL Server
Requirement: Infra Monitoring, APM, Real User Monitoring (user activity monitoring, i.e., time spent on a page, most active page, etc.), Service Tracing, Root Cause Analysis, and Centralized Log Management.
Please advise on the above. Thanks!
Regarding Continuous Integration - we've started with something very easy to set up, CircleCI, but with time we're adding more & more complex pipelines, so we use Jenkins to configure & run those. It's much more effort, but at some point we had to pay for the flexibility we expected. Our source code version control is Git (which probably doesn't require a rationale these days) and we keep repos in GitHub - since the very beginning, and we never considered moving out. Our primary monitoring these days is in New Relic (Ruby & SPA apps) and AppSignal (Elixir apps) - we're considering unifying it in New Relic, but this will require some improvements in Elixir app observability. For error reporting we use Sentry (a very popular choice in this class) and we collect our distributed logs using Logentries (to avoid semi-manual handling here).