Help developers discover the tools you use. Get visibility for your team's tech choices and contribute to the community's knowledge.
It is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, it ensures that your interactions with LLMs remain safe and secure. | It is an open-source Python package for specifying structure and type, validating and correcting the outputs of large language models (LLMs). |
Fortify the security of Large Language Models;
Detection of harmful language;
Prevention of data leakage;
Resistance against prompt injection attacks | Enforces structure and type guarantees;
Validate and correct the outputs of large language models (LLMs);
Takes corrective actions (e.g. reasking LLM) when validation fails |
Statistics | |
GitHub Stars - | GitHub Stars 5.9K |
GitHub Forks - | GitHub Forks 471 |
Stacks 1 | Stacks 0 |
Followers 1 | Followers 0 |
Votes 0 | Votes 0 |
Integrations | |

LangProtect is an AI security firewall that protects LLM and GenAI applications at runtime. It blocks prompt injection, jailbreaks, and sensitive data leakage while enforcing customizable security policies. Built for enterprise and regulated teams, it delivers real-time protection, visibility, and audit-ready governance.

AI security gateway for Apache APISIX. 100% air-gapped, Open Source core. CPU-capable, GPU-optional. Protect LLMs from prompt injection, PII leaks, and data exfiltration. GDPR, EU AI Act, SOC2, HIPAA compliant. Your data never leaves your VPC.
Clawsec is an open-source security plugin that blocks dangerous actions in under 5ms. One command: openclaw plugins install clawsec

It is a robust static analysis framework for validating that LLM-generated structured output is safe. It currently supports SQL.

It is a self-hardening prompt injection detector. It is designed to protect AI applications from prompt injection (PI) attacks through a multi-stage defense.

It is an open-source toolkit for monitoring Large Language Models (LLMs). It extracts signals from prompts & responses, ensuring safety & security.