Help developers discover the tools you use. Get visibility for your team's tech choices and contribute to the community's knowledge.
It is a robust static analysis framework for validating that LLM-generated structured output is safe. It currently supports SQL. | It is an open-source toolkit for monitoring Large Language Models (LLMs). It extracts signals from prompts & responses, ensuring safety & security. |
Validate LLM-generated output;
Makes sure that AI won't wreck your systems | Text quality;
Relevance metrics;
Sentiment analysis;
A comprehensive tool for LLM observability |
Statistics | |
GitHub Stars 113 | GitHub Stars 954 |
GitHub Forks 3 | GitHub Forks 70 |
Stacks 0 | Stacks 0 |
Followers 0 | Followers 1 |
Votes 0 | Votes 0 |
Integrations | |
| No integrations available | |

Waxell is the AI governance plane for agentic systems in production. It sits above agents, models, and integrations, enforcing constraints and defining what's allowed. Auto-instrumentation for 200+ libraries without code changes. Real-time tracing, token and cost tracking, and 11 categories of agentic governance policy enforcement.

The only self-service scanner with active adversarial probing for AI endpoints. 12 Parallel Security Checks get your results in less than a minute. No agents. No SDK. No credentials required. Paste a URL, get a security score with actionable findings.

AI security gateway for Apache APISIX. 100% air-gapped, Open Source core. CPU-capable, GPU-optional. Protect LLMs from prompt injection, PII leaks, and data exfiltration. GDPR, EU AI Act, SOC2, HIPAA compliant. Your data never leaves your VPC.
Clawsec is an open-source security plugin that blocks dangerous actions in under 5ms. One command: openclaw plugins install clawsec

Privacy-first AI assistant that protects sensitive information while preserving context.

AgentGuard is an SDK for AI agent developers that enforces budget limits, auth isolation, and MCP policy rules. Stop agents from overspending, leaking data, or exceeding their permissions. Works with any LLM stack.

Use any AI, safely. Sensitive data never leaves your device. Imagine using AI freely—without exposing who you are. Anonymize360 intercepts your sensitive data before it reaches an AI provider. The moment you send a message, it scans for names, addresses, SSNs, and medical records—replacing them with secure tokens and encrypting the originals locally with AES-256. Only the anonymized version travels to the cloud. When the response returns, your real information is seamlessly restored. Zero-knowledge architecture: even we can't access your data. No backdoors. Nothing stored outside your device. Works silently across Windows and macOS. For professionals, healthcare providers, or anyone who values privacy—powerful AI, zero compromise. Instant. Invisible. Secure.

Track your AI usage and secure sensitive data across Claude, ChatGPT, Gemini, and more. AIMetrical offers unified analytics and real-time security on a single dashboard.

TokenFence is an open-source SDK that lets developers set hard per-workflow token and cost limits for AI agents. Drop in 2 lines of code to prevent runaway API spend. Supports OpenAI, Anthropic, and more. Free and open source.
Protect MCP clients and services with a security gateway for safer launch, strict inspection, redaction, and operator visibility.