Cacti vs Nagios: What are the differences?
Cacti: Cacti is a complete network graphing solution designed to harness the power of RRDTool's data storage and graphing functionality. It stores all of the information needed to create and populate graphs in a MySQL database, and it provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box.
Nagios: Nagios is a host, service, and network monitoring program written in C and released under the GNU General Public License. It provides complete monitoring and alerting for servers, switches, applications, and services.
Cacti and Nagios can both be classified primarily as "Monitoring" tools.
Some of the features offered by Cacti are:
- An unlimited number of graph items can be defined for each graph, optionally utilizing CDEFs or data sources from within Cacti.
- Automatic grouping of GPRINT graph items to AREA, STACK, and LINE[1-3] to allow for quick re-sequencing of graph items.
- Auto-Padding support to make sure graph legend text lines up.
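To make the "multiple data acquisition methods" point concrete: one of Cacti's acquisition methods is the script/command data input method, where Cacti runs an external program and parses space-separated field:value pairs from its standard output, one pair per data source. Below is a minimal sketch in Python; the field names load1/load5/load15 are illustrative and would have to match the fields defined in the corresponding data input method.

```python
#!/usr/bin/env python3
"""Minimal sketch of a Cacti data input script.

Cacti's script/command data acquisition method runs an external
program and reads space-separated "field:value" pairs from stdout,
one pair per data source defined in the data input method.
"""
import os

def main() -> None:
    # 1-, 5-, and 15-minute load averages; os.getloadavg() is
    # available on Unix-like systems.
    load1, load5, load15 = os.getloadavg()
    # Field names (load1/load5/load15) are illustrative only; they
    # must match the output fields configured in Cacti.
    print(f"load1:{load1:.2f} load5:{load5:.2f} load15:{load15:.2f}")

if __name__ == "__main__":
    main()
```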
On the other hand, Nagios provides the following key features:
- Monitor your entire IT infrastructure
- Spot problems before they occur
- Know immediately when problems arise
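Behind "know immediately when problems arise" sits a simple plugin contract: Nagios determines a check's state from the plugin's exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) and treats the first line of output as the status text, with optional performance data after a "|". Here is a minimal check sketch in Python; the load-average thresholds are hypothetical.

```python
#!/usr/bin/env python3
"""Minimal Nagios-style check sketch (hypothetical thresholds).

Nagios decides service state from the plugin's exit code:
0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN. The first line of
stdout becomes the status text; anything after '|' is perfdata.
"""
import os
import sys

WARN, CRIT = 4.0, 8.0  # illustrative 1-minute load thresholds

def main() -> int:
    try:
        load1 = os.getloadavg()[0]
    except OSError:
        print("LOAD UNKNOWN - load average unavailable")
        return 3
    perfdata = f"load1={load1:.2f};{WARN};{CRIT}"
    if load1 >= CRIT:
        print(f"LOAD CRITICAL - load1 is {load1:.2f} | {perfdata}")
        return 2
    if load1 >= WARN:
        print(f"LOAD WARNING - load1 is {load1:.2f} | {perfdata}")
        return 1
    print(f"LOAD OK - load1 is {load1:.2f} | {perfdata}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```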
"Free" is the top reason why over 2 developers like Cacti, while over 49 developers mention "It just works" as the leading cause for choosing Nagios.
Nagios is an open source tool with 60 GitHub stars and 36 GitHub forks; its source code is available on GitHub.
According to the StackShare community, Nagios has broader approval, being mentioned in 177 company stacks and 40 developer stacks, compared to Cacti, which is listed in 5 company stacks and 5 developer stacks.
Why we spent several years building an open source, large-scale metrics alerting system, M3, built for Prometheus:
By late 2014, all services, infrastructure, and servers at Uber emitted metrics to a Graphite stack that stored them using the Whisper file format in a sharded Carbon cluster. We used Grafana for dashboarding and Nagios for alerting, issuing Graphite threshold checks via source-controlled scripts. While this worked for a while, expanding the Carbon cluster required a manual resharding process and, due to lack of replication, any single node’s disk failure caused permanent loss of its associated metrics. In short, this solution was not able to meet our needs as the company continued to grow.
To ensure the scalability of Uber’s metrics backend, we decided to build out a system that provided fault tolerant metrics ingestion, storage, and querying as a managed platform...
(GitHub: https://github.com/m3db/m3)
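The "Graphite threshold checks" mentioned in that story are straightforward to picture: a script queries Graphite's render API for a metric, compares the latest value against thresholds, and exits with a Nagios status code. A minimal sketch follows, assuming a hypothetical Graphite host, metric name, and thresholds (none of these reflect Uber's actual configuration).

```python
#!/usr/bin/env python3
"""Sketch of a Graphite threshold check wired into Nagios.

Queries Graphite's render API for a metric and maps the latest
value to a Nagios exit code. Host, metric, and thresholds are
illustrative placeholders.
"""
import json
import sys
import urllib.parse
import urllib.request

GRAPHITE = "http://graphite.example.com"  # hypothetical host
TARGET = "servers.web01.cpu.user"         # hypothetical metric
WARN, CRIT = 70.0, 90.0                   # hypothetical thresholds

def latest_value(target: str) -> float | None:
    query = urllib.parse.urlencode(
        {"target": target, "from": "-5min", "format": "json"}
    )
    with urllib.request.urlopen(f"{GRAPHITE}/render?{query}") as resp:
        series = json.load(resp)
    if not series:
        return None
    # Each series is {"target": ..., "datapoints": [[value, ts], ...]};
    # take the most recent non-null datapoint.
    for value, _ts in reversed(series[0]["datapoints"]):
        if value is not None:
            return value
    return None

def main() -> int:
    value = latest_value(TARGET)
    if value is None:
        print(f"UNKNOWN - no recent data for {TARGET}")
        return 3
    if value >= CRIT:
        print(f"CRITICAL - {TARGET} = {value:.1f}")
        return 2
    if value >= WARN:
        print(f"WARNING - {TARGET} = {value:.1f}")
        return 1
    print(f"OK - {TARGET} = {value:.1f}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```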
We use Nagios to monitor our stack and alert us when problems arise. Nagios allows us to monitor every aspect of each of our servers, such as running processes, CPU usage, disk usage, and more. This means that as soon as problems arise, we can detect them and call out an engineer to resolve them quickly.
We use Nagios to monitor customer instances of Bridge and proactively alert us about issues like queue sizes, downed services, errors in logs, etc.
We use the Nagios-based Opsview to monitor our server farm and keep everything running smoothly.