| | This tool | Floyo |
|---|---|---|
| Description | An open-source Python package for specifying structure and type, and for validating and correcting the outputs of large language models (LLMs). | Brings ComfyUI to your browser: find and launch open-source workflows in seconds, with zero setup, free building, and creative freedom without limits. |
| Features | Enforces structure and type guarantees; validates and corrects LLM outputs; takes corrective actions (e.g. re-asking the LLM) when validation fails | Creator-first design; optimized GPUs plus cached nodes/models; pay only for execution time; built for creators, not just users |
| GitHub Stars | 5.9K | - |
| GitHub Forks | 471 | - |
| Stacks | 0 | 0 |
| Followers | 0 | 1 |
| Votes | 0 | 1 |
| Integrations | No integrations available | - |

It is a robust static analysis framework for validating that LLM-generated structured output is safe. It currently supports SQL.
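
To make the idea concrete, here is a heavily simplified sketch of static SQL safety checking under two assumptions of my own: only a single read-only `SELECT` is allowed, and a naive token scan stands in for a real parser (production frameworks build a proper AST).

```python
import re

# Keywords that should never appear in a read-only query (illustrative list).
FORBIDDEN = {"DROP", "DELETE", "UPDATE", "INSERT", "ALTER", "TRUNCATE", "GRANT"}

def is_safe_select(sql: str) -> bool:
    """Allow a single read-only SELECT statement and nothing else."""
    # Reject stacked statements such as "SELECT 1; DROP TABLE users".
    statements = [s for s in sql.split(";") if s.strip()]
    if len(statements) != 1:
        return False
    # Naive tokenization: uppercase word tokens only.
    tokens = re.findall(r"[A-Za-z_]+", statements[0].upper())
    if not tokens or tokens[0] != "SELECT":
        return False
    return not FORBIDDEN.intersection(tokens)
```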

It is a self-hardening prompt injection detector. It is designed to protect AI applications from prompt injection (PI) attacks through a multi-stage defense.
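
A multi-stage defense like the one described can be sketched as two independent checks; the stage names and patterns below are illustrative assumptions, not the tool's actual implementation. Stage 1 screens the incoming prompt against known injection phrasings; stage 2 embeds a secret canary token in the system prompt and flags any output that leaks it.

```python
import re
import secrets

# Stage 1: heuristic patterns for common injection phrasings (illustrative).
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"reveal (the |your )?system prompt",
]

def heuristic_stage(user_input: str) -> bool:
    """Flag inputs matching known injection phrasings."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def add_canary(system_prompt: str) -> tuple[str, str]:
    """Stage 2 setup: embed a secret token the model must never repeat."""
    canary = secrets.token_hex(8)
    return f"{system_prompt}\n[canary:{canary}]", canary

def canary_leaked(model_output: str, canary: str) -> bool:
    """Stage 2 check: a leaked canary means the prompt was exfiltrated."""
    return canary in model_output
```

The "self-hardening" aspect would come from feeding confirmed attacks (e.g. any input whose response leaked the canary) back into the pattern set or a learned classifier.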

It is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). It offers input/output sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, helping keep interactions with LLMs safe and secure.
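
The sanitization and data-leakage-prevention pieces can be illustrated with a small redaction pass. This is a toy sketch under my own assumptions (the pattern set and placeholder format are made up); real scanners combine regexes with ML-based entity recognition.

```python
import re

# Illustrative PII patterns; a production sanitizer would cover far more.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Replace each PII match with a typed placeholder such as [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Running the same pass on model *outputs* (not just inputs) is what turns this from input sanitization into leakage prevention.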

It is an open-source toolkit for monitoring Large Language Models (LLMs). It extracts safety and security signals from prompts and responses so that problems can be detected and tracked over time.
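
"Extracting signals" can be pictured as computing per-interaction features that a monitoring pipeline aggregates over time. The metric names below are invented for illustration and are not the toolkit's actual schema.

```python
import re

def extract_signals(prompt: str, response: str) -> dict:
    """Compute simple per-interaction features for downstream aggregation."""
    refusal_markers = ("i cannot", "i can't", "i'm unable", "as an ai")
    return {
        "prompt_length": len(prompt),
        "response_length": len(response),
        # URLs in responses are a common exfiltration / hallucination signal.
        "has_url": bool(re.search(r"https?://\S+", response)),
        "refusal": any(m in response.lower() for m in refusal_markers),
    }
```

A monitoring backend would log these dictionaries per request and alert on drift, e.g. a sudden spike in the refusal rate or in response lengths.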