Help developers discover the tools you use. Get visibility for your team's tech choices and contribute to the community's knowledge.
It is an open-source Python package for specifying structure and type, validating and correcting the outputs of large language models (LLMs). | Floyo brings ComfyUI to your browser: find & launch open-source workflows in seconds, zero setup, free building, and creative freedom without limits. |
Enforces structure and type guarantees;
Validate and correct the outputs of large language models (LLMs);
Takes corrective actions (e.g. reasking LLM) when validation fails | Creator-First Design, Optimized GPUs + cached nodes/models, Pay only for execution time, Built for creators, not just users. |
Statistics | |
GitHub Stars 5.9K | GitHub Stars - |
GitHub Forks 471 | GitHub Forks - |
Stacks 0 | Stacks 0 |
Followers 0 | Followers 1 |
Votes 0 | Votes 1 |
Integrations | |
| No integrations available | |

Waxell is the AI governance plane for agentic systems in production. It sits above agents, models, and integrations, enforcing constraints and defining what's allowed. Auto-instrumentation for 200+ libraries without code changes. Real-time tracing, token and cost tracking, and 11 categories of agentic governance policy enforcement.

LangProtect is an AI security firewall that protects LLM and GenAI applications at runtime. It blocks prompt injection, jailbreaks, and sensitive data leakage while enforcing customizable security policies. Built for enterprise and regulated teams, it delivers real-time protection, visibility, and audit-ready governance.

AI security gateway for Apache APISIX. 100% air-gapped, Open Source core. CPU-capable, GPU-optional. Protect LLMs from prompt injection, PII leaks, and data exfiltration. GDPR, EU AI Act, SOC2, HIPAA compliant. Your data never leaves your VPC.
Clawsec is an open-source security plugin that blocks dangerous actions in under 5ms. One command: openclaw plugins install clawsec

Privacy-first AI assistant that protects sensitive information while preserving context.

A breakthrough approach to securing applications built with AI assistance. SecVibe complements your existing security stack with specialized controls.

Track your AI usage and secure sensitive data across Claude, ChatGPT, Gemini, and more. AIMetrical offers unified analytics and real-time security on a single dashboard.

Use any AI, safely. Sensitive data never leaves your device. Imagine using AI freely—without exposing who you are. Anonymize360 intercepts your sensitive data before it reaches an AI provider. The moment you send a message, it scans for names, addresses, SSNs, and medical records—replacing them with secure tokens and encrypting the originals locally with AES-256. Only the anonymized version travels to the cloud. When the response returns, your real information is seamlessly restored. Zero-knowledge architecture: even we can't access your data. No backdoors. Nothing stored outside your device. Works silently across Windows and macOS. For professionals, healthcare providers, or anyone who values privacy—powerful AI, zero compromise. Instant. Invisible. Secure.

It is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, it ensures that your interactions with LLMs remain safe and secure.

It is a robust static analysis framework for validating that LLM-generated structured output is safe. It currently supports SQL.