Help developers discover the tools you use. Get visibility for your team's tech choices and contribute to the community's knowledge.
It is a robust static analysis framework for validating that LLM-generated structured output is safe. It currently supports SQL. | It is a self-hardening prompt injection detector. It is designed to protect AI applications from prompt injection (PI) attacks through a multi-stage defense. |
Validate LLM-generated output;
Makes sure that AI won't wreck your systems | Filter out potentially malicious input before it reaches the LLM;
Use a dedicated LLM to analyze incoming prompts and identify potential attacks;
Store embeddings of previous attacks in a vector database;
Attack signature learning |
Statistics | |
GitHub Stars 113 | GitHub Stars 1.4K |
GitHub Forks 3 | GitHub Forks 117 |
Stacks 0 | Stacks 0 |
Followers 0 | Followers 2 |
Votes 0 | Votes 0 |
Integrations | |
| No integrations available | |

It is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, it ensures that your interactions with LLMs remain safe and secure.

It is an open-source Python package for specifying structure and type, validating and correcting the outputs of large language models (LLMs).

It is an open-source toolkit for monitoring Large Language Models (LLMs). It extracts signals from prompts & responses, ensuring safety & security.