WFGY is a verification-first reasoning engine for LLMs. It ships reproducible entry points and audit-friendly specifications designed to make failures visible and fixable. WFGY 1.0 through 3.0 are one set; each version is a different depth level, not a different product line. MIT licensed. Public demos and docs live in the repo. Start here:
Event Horizon (WFGY 3.0 public entry): https://github.com/onestardao/WFGY/blob/main/TensionUniverse/EventHorizon/README.md
Starter Village (fast onboarding): https://github.com/onestardao/WFGY/blob/main/StarterVillage/README.md
Verification, Reproducibility, Auditability, Failure analysis, RAG debugging, Open source
Statistics
Stacks 0
Followers 1
Votes 1

Small, fast, and scalable barebones state-management solution. It has a comfy API based on hooks that isn't boilerplatey or opinionated, yet is still just enough to be explicit and flux-like.
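As a rough illustration of the flux-like, hook-friendly store shape this describes, here is a minimal generic sketch: one state object, all updates funneled through set(), and subscribers notified on every change. The createStore name and its signature are illustrative assumptions for this sketch, not necessarily this tool's actual API; in a React app such a store would typically be wrapped in a hook.

```typescript
// Generic sketch of a minimal, flux-like store: one state object,
// all updates go through set(), and subscribers are notified on change.
// Names like createStore are illustrative, not this tool's real API.

type Listener<T> = (state: T) => void;
type Updater<T> = Partial<T> | ((state: T) => Partial<T>);

function createStore<T extends object>(
  init: (set: (update: Updater<T>) => void) => T
) {
  let state = {} as T;
  const listeners = new Set<Listener<T>>();

  // Merge a partial update (or the result of an updater function)
  // into the current state, then notify every subscriber.
  const set = (update: Updater<T>) => {
    const partial =
      typeof update === "function"
        ? (update as (s: T) => Partial<T>)(state)
        : update;
    state = Object.assign({}, state, partial);
    listeners.forEach((listener) => listener(state));
  };

  state = init(set);

  return {
    getState: () => state,
    subscribe: (listener: Listener<T>) => {
      listeners.add(listener);
      return () => listeners.delete(listener); // call to unsubscribe
    },
  };
}

// Usage: actions live next to the state they change, so there is no
// reducer, action-type, or dispatch boilerplate.
const counter = createStore<{ count: number; inc: () => void }>((set) => ({
  count: 0,
  inc: () => set((s) => ({ count: s.count + 1 })),
}));

counter.subscribe((s) => console.log("count:", s.count));
counter.getState().inc(); // logs "count: 1"
```

Keeping actions next to the state they update is what removes the reducer and dispatch ceremony while staying explicit about how state changes.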

Statsbot helps you take control of your raw data, providing an all-in-one analysis tool for engineers and non-technical folks alike.

A cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, it enables agents to work together seamlessly on complex tasks.

The game-changing modern European data quality platform that effortlessly uncovers anomalies and errors in your data with artificial intelligence.

Discover and install agent skills for Claude Code, Cursor, Windsurf, and more. Browse 10,000+ curated skills by category or author. Start building smarter today.

A platform for building production-grade LLM applications. It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework, and it seamlessly integrates with LangChain, the go-to open-source framework for building with LLMs.

Find out what is driving your AI bill. Opsmeter gives endpoint-, user-, model-, and prompt-level AI cost attribution in one view.

Transform basic prompts into expert-level AI instructions. Enhance, benchmark & optimize prompts for ChatGPT, Claude, Gemini & more.

YouWare is an all-in-one AI coding platform that lets you build apps and websites by chatting with AI. It enables full-stack code generation and instant deployment with a shareable URL. No code, no setup, no hassle.

The collaborative testing platform for LLM applications and agents. Your whole team defines quality requirements together; Rhesis then generates thousands of test scenarios covering edge cases, simulates realistic multi-turn conversations, and delivers actionable reviews. Testing infrastructure built for Gen AI.