| | Compared tool | ReputAgent |
| --- | --- | --- |
| Description | It helps AI teams rigorously test, validate, and improve GenAI applications throughout the entire development lifecycle. | ReputAgent provides A2A evaluation infrastructure. When agents work together, reputation emerges from real work, not benchmarks. We help you build that trust infrastructure. |
| Key features | Modular evaluation of complex systems; close-to-human evaluators; pinpoint where problems originate; support throughout the GenAI app development lifecycle | Continuous AI agent evaluation; reputation scoring from accumulated evidence; evaluation dimensions framework (accuracy, safety, reliability); failure modes library with mitigations; evaluation patterns library (LLM-as-judge, human-in-the-loop, red teaming, orchestration); Agent Playground for pre-production scrimmage testing; ecosystem tools tracker and comparisons; research index of agent evaluation papers; open dataset export (CC-BY-4.0); RepKit SDK (pre-release) for logging evaluations and querying reputation; pre-production agent testing; agent reliability QA before launch; ongoing evaluation in production; safety and red teaming workflows; routing and delegation based on trust signals; access control and governance decisions; comparing agent frameworks and tools; research and benchmarking of evaluation methods; shared vocabulary and taxonomy for agent teams |
| Statistics | | |
| GitHub Stars | 509 | - |
| GitHub Forks | 36 | - |
| Stacks | 0 | 0 |
| Followers | 1 | 1 |
| Votes | 0 | 1 |
| Integrations | No integrations available | |
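The evaluation patterns named above include LLM-as-judge, where one model scores another model's output against a rubric. The sketch below is a generic, hypothetical illustration of that pattern only; it is not ReputAgent's or RepKit's API, and the `call_model`, `judge`, and `RUBRIC` names are invented for demonstration, with the model call stubbed out so the snippet runs standalone.

```python
# Hypothetical sketch of the LLM-as-judge evaluation pattern.
# `call_model` stands in for any LLM client; here it is a stub
# so the example runs without external services.

RUBRIC = (
    "Score the answer from 1 (poor) to 5 (excellent) for factual "
    "accuracy. Reply with the number only."
)

def call_model(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return "4"

def judge(question: str, answer: str) -> int:
    # Build the judging prompt from the rubric and the item under test.
    prompt = f"{RUBRIC}\n\nQuestion: {question}\nAnswer: {answer}"
    reply = call_model(prompt).strip()
    score = int(reply)
    # Guard against out-of-rubric replies from the judge model.
    if not 1 <= score <= 5:
        raise ValueError(f"judge returned out-of-range score: {score}")
    return score

print(judge("What is 2 + 2?", "4"))  # prints 4 with the stubbed model
```

In practice the numeric reply would be parsed defensively (judge models sometimes add prose around the score), and multiple judge calls are often averaged to reduce variance.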

It is a cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, it empowers agents to work together seamlessly, tackling complex tasks.

Discover and install agent skills for Claude Code, Cursor, Windsurf and more. Browse 10000+ curated skills by category or author. Start building smarter today.

Transform basic prompts into expert-level AI instructions. Enhance, benchmark & optimize prompts for ChatGPT, Claude, Gemini & more.

Find out what's driving your AI bill. Opsmeter gives endpoint, user, model, and prompt-level AI cost attribution in one view.

It is a platform for building production-grade LLM applications. It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework, and it integrates seamlessly with LangChain, the go-to open-source framework for building with LLMs.

An Agentic Enterprise Platform where your knowledge base powers AI with full ownership, control, and business-friendly interfaces. Explore our product: https://aiquinta.ai/our-product/

Oculer is an end-to-end AI video engine that turns a simple text idea into a fully produced Instagram Reel or YouTube Short. Unlike basic motion-graphics tools, Oculer automatically creates complete storytelling videos, including script, storyboard, motion graphics, sound syncing, subtitles, and final editing.

YouWare is an all-in-one AI coding platform that lets you build apps and websites by chatting with AI. It enables full-stack code generation and deployment with an instantly shareable URL. No code, no setup, no hassle.

The collaborative testing platform for LLM applications and agents. Your whole team defines quality requirements together; Rhesis generates thousands of test scenarios covering edge cases, simulates realistic multi-turn conversations, and delivers actionable reviews. Testing infrastructure built for GenAI.

Practice your interview skills with AI-powered interviewers. Simulate real interview scenarios and improve your performance. Get instant feedback, a complete overview, and a plan with next steps for improvement.