Alternatives to osquery

Ossec, ELK, Prometheus, Wazuh, and Sysdig are the most popular alternatives and competitors to osquery.

What is osquery and what are its top alternatives?

osquery exposes an operating system as a high-performance relational database. This allows you to write SQL-based queries to explore operating system data. With osquery, SQL tables represent abstract concepts such as running processes, loaded kernel modules, open network connections, browser plugins, hardware events or file hashes.
osquery is a tool in the Desktop Querying Tools category of a tech stack.
osquery is an open source tool with 17.4K GitHub stars and 2.1K GitHub forks. Here's a link to osquery's open source repository on GitHub.
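
As a small illustration of that SQL interface, here is a minimal sketch that runs a one-off query through the osqueryi shell from Python and parses its JSON output. It assumes osqueryi is installed and on the PATH; the processes table and its columns come from osquery's standard schema.

    import json
    import subprocess

    # Run a one-off osquery SQL statement through the interactive shell
    # (osqueryi) and parse its JSON output.
    query = "SELECT pid, name, path FROM processes LIMIT 5;"
    result = subprocess.run(
        ["osqueryi", "--json", query],
        capture_output=True,
        text=True,
        check=True,
    )

    for row in json.loads(result.stdout):
        print(row["pid"], row["name"], row["path"])

The same SQL can be typed directly at the osqueryi prompt or scheduled with osqueryd.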

Top Alternatives to osquery

  • Ossec

    Ossec is a free, open-source, host-based intrusion detection system. It performs log analysis, integrity checking, registry monitoring, rootkit detection, time-based alerting, and active response. ...

  • ELK

    ELK is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize the data in Elasticsearch with charts and graphs. ...

  • Prometheus

    Prometheus is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when a specified condition is observed to be true. ...

  • Wazuh

    Wazuh is a free, open-source, enterprise-ready security monitoring solution for threat detection, integrity monitoring, incident response, and compliance. ...

  • Sysdig

    Sysdig is open-source, system-level exploration: capture system state and activity from a running Linux instance, then save, filter, and analyze it. Sysdig is scriptable in Lua and includes a command-line interface and a powerful interactive UI, csysdig, that runs in your terminal. Think of sysdig as strace + tcpdump + htop + iftop + lsof + awesome sauce, with state-of-the-art container visibility on top. ...

  • Ansible

    Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero-downtime rolling updates. Ansible's goals are foremost those of simplicity and maximum ease of use. ...

  • CrowdStrike

    CrowdStrike is a cloud-native endpoint security platform that combines next-gen antivirus, EDR, threat intelligence, threat hunting, and much more. ...

  • FSQL

    FSQL searches your file system with SQL-esque queries. Pass your query to fsql as a command-line argument. In general, each query requires a SELECT clause (to specify which attributes should be shown), a FROM clause (to specify the directories to search in), and a WHERE clause (to specify conditions for the files). ...
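
    A minimal sketch of calling fsql from Python, passing the query as a single command-line argument as described above; the attribute names and LIKE pattern in the query are illustrative and should be checked against fsql's documentation:

      import subprocess

      # Pass a SQL-esque query to fsql as one command-line argument.
      # The query (names and sizes of CSV files under ~/Documents) is
      # illustrative; verify the exact grammar against fsql's docs.
      query = "SELECT name, size FROM ~/Documents WHERE name LIKE %.csv"
      result = subprocess.run(["fsql", query], capture_output=True, text=True)
      print(result.stdout)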

osquery alternatives & related posts

Ossec

A Host-based Intrusion Detection System

related Ossec posts

ELK

The acronym for three open source projects: Elasticsearch, Logstash, and Kibana

related ELK posts

Wallace Alves
Cyber Security Analyst | 1 upvote · 502.2K views

Docker · Docker Compose · Portainer · ELK · Elasticsearch · Kibana · Logstash · nginx

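To make the Elasticsearch side of that stack concrete, here is a minimal sketch that indexes one log document and searches it back. It assumes a local Elasticsearch node on localhost:9200 and the official elasticsearch Python client (8.x); the app-logs index name and the document fields are illustrative.

    from datetime import datetime, timezone
    from elasticsearch import Elasticsearch

    # Connect to a local Elasticsearch node (default port 9200).
    es = Elasticsearch("http://localhost:9200")

    # Index one log document; refresh so it is immediately searchable.
    es.index(
        index="app-logs",
        document={
            "@timestamp": datetime.now(timezone.utc).isoformat(),
            "level": "ERROR",
            "message": "payment service timed out",
        },
        refresh=True,
    )

    # Search it back with a match query - the kind of query Kibana
    # visualizations run against Elasticsearch under the hood.
    hits = es.search(index="app-logs", query={"match": {"level": "ERROR"}})
    for hit in hits["hits"]["hits"]:
        print(hit["_source"]["message"])

In a full ELK setup, Logstash (or a Beats shipper) would be doing the indexing step, and Kibana would sit on top for dashboards.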

Prometheus

related Prometheus posts

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber | 13 upvotes · 2.6M views

Why we spent several years building an open source, large-scale metrics alerting system, M3, built for Prometheus:

By late 2014, all services, infrastructure, and servers at Uber emitted metrics to a Graphite stack that stored them using the Whisper file format in a sharded Carbon cluster. We used Grafana for dashboarding and Nagios for alerting, issuing Graphite threshold checks via source-controlled scripts. While this worked for a while, expanding the Carbon cluster required a manual resharding process and, due to lack of replication, any single node's disk failure caused permanent loss of its associated metrics. In short, this solution was not able to meet our needs as the company continued to grow.

To ensure the scalability of Uber's metrics backend, we decided to build out a system that provided fault-tolerant metrics ingestion, storage, and querying as a managed platform...

https://eng.uber.com/m3/

(GitHub: https://github.com/m3db/m3)


We have Prometheus as a monitoring engine as part of our stack, which contains a Kubernetes cluster, container images, and other open source tools. Also, I am aware that Sysdig can be integrated with Prometheus, but I really want to know whether Sysdig or Sysdig + Prometheus will make the better monitoring solution.

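For readers weighing Prometheus in a stack like the ones above, here is a minimal sketch that pulls the current value of the built-in up metric from the Prometheus HTTP API; it assumes a server on localhost:9090 (the default port).

    import requests

    # Ask the Prometheus HTTP API for the instant value of the "up" metric
    # (1 means the scrape target is reachable).
    resp = requests.get(
        "http://localhost:9090/api/v1/query",
        params={"query": "up"},
        timeout=5,
    )
    resp.raise_for_status()

    for sample in resp.json()["data"]["result"]:
        print(sample["metric"].get("instance"), sample["value"][1])

Alerting builds on the same expression language: a rule such as up == 0, evaluated by the server and routed through Alertmanager, is what delivers the "trigger alerts if some condition is observed to be true" part of the description above.
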
Wazuh

Open Source and enterprise-ready security monitoring solution

related Wazuh posts

Sysdig

Open source container monitoring for all Linux container technologies, including Docker, LXC, etc.

related Sysdig posts

We have Prometheus as a monitoring engine as part of our stack, which contains a Kubernetes cluster, container images, and other open source tools. Also, I am aware that Sysdig can be integrated with Prometheus, but I really want to know whether Sysdig or Sysdig + Prometheus will make the better monitoring solution.

We are looking for a centralised monitoring solution for our application deployed on Amazon EKS. We would like to monitor using metrics from Kubernetes, from AWS services (NeptuneDB, AWS Elastic Load Balancing (ELB), Amazon EBS, Amazon S3, etc.), and from our application microservices' custom metrics.

We expect to run around 80 microservices (not replicas). I think there will be a total of 200-250 microservices in the system, with 10-12 slave nodes.

We tried Prometheus, but it looks like maintenance is a big issue. We need to manage scaling, maintain the storage, and deal with multiple exporters and Grafana. I felt this alone needs a few dedicated resources (at least 2-3 people) to manage. Not sure if I am thinking in the correct direction. Please confirm.

You mentioned that Datadog and Sysdig charge per host. Do they charge per slave node?


related Ansible posts

Tymoteusz Paul
Devops guy at X20X Development LTD | 21 upvotes · 3.8M views

Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of saying the same thing every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it using a "live example" of how Rome got built, assuming the current methodology consists only of a readme.md and wishes of good luck (as it usually does ;)).

It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over - converting all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do things, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.

I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy development, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN - you name it.

We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

If we are happy with the state of the Ansible setup, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built in with TeamCity). It also comes with all the commonly handy plugins like Slack or Apache Maven integration.

The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it: 1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around. 2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible to a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management. 3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally. 4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).

Speaking of deployments, I generally try to keep it simple but also keep a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I am also constantly peeking at the load and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and onto bare-metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, in a similar way as you do for Amazon EC2 (Ansible supports both well), and you are good to go. One does not exclude the other, quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.

Pedro Arnal Puente
CTO at La Cupula Music SL | 8 upvotes · 413.2K views

Our base infrastructure is composed of Debian-based servers running in Amazon EC2, asset storage with Amazon S3, and Amazon RDS for Aurora and Redis under Amazon ElastiCache for data storage.

We are starting to work on automated provisioning and management with Terraform, Packer, and Ansible.

CrowdStrike

Cloud-Native Endpoint Protection Platform

related CrowdStrike posts

FSQL

Search through your file system with SQL-esque queries

related FSQL posts