| | Tool A | Tool B |
| --- | --- | --- |
| Description | The first platform built for prompt engineers. Visually manage prompts, log LLM requests, search usage history, collaborate as a team, and more. | Helps AI teams rigorously test, validate, and improve GenAI applications throughout the entire development lifecycle. |
| Key features | Visually create, edit, and deploy prompts; test prompts against usage history (see the replay sketch below); understand how your LLM application is being used; facilitate collaboration and tighter feedback loops between product and engineering. | Modular evaluation of complex systems; close-to-human evaluators; pinpoint where problems originate; support throughout the GenAI app development lifecycle. |

Statistics

| | Tool A | Tool B |
| --- | --- | --- |
| GitHub Stars | - | 509 |
| GitHub Forks | - | 36 |
| Stacks | 0 | 0 |
| Followers | 3 | 1 |
| Votes | 0 | 0 |
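Testing prompts against usage history usually means replaying logged production requests against a candidate prompt version and comparing outputs side by side. A minimal sketch of that replay loop in plain Python; the log entries, prompt templates, and model stub are illustrative, not this platform's actual schema or SDK:

```python
def stub_model(prompt: str) -> str:
    return f"[model output for: {prompt}]"  # stand-in for a real LLM call

# Hypothetical usage log: variables captured from past production requests.
usage_log = [
    {"vars": {"question": "What is our refund policy?"}},
    {"vars": {"question": "How do I reset my password?"}},
]

old_prompt = "Answer briefly: {question}"
new_prompt = "Answer briefly and cite the relevant help page: {question}"

# Replay each logged request against both prompt versions.
for entry in usage_log:
    old_out = stub_model(old_prompt.format(**entry["vars"]))
    new_out = stub_model(new_prompt.format(**entry["vars"]))
    print(entry["vars"])
    print("  old:", old_out)
    print("  new:", new_out)
```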

It is a platform for building production-grade LLM applications. It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework, and it integrates seamlessly with LangChain, the go-to open-source framework for building with LLMs.
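Under the hood, tracing over LangChain typically comes down to a callback handler that observes every LLM call inside a chain; hosted platforms register one of these for you. A minimal sketch using LangChain's real callback base class, where the handler name is hypothetical and the print statements stand in for shipping trace events to a backend:

```python
from langchain_core.callbacks import BaseCallbackHandler

class TraceHandler(BaseCallbackHandler):
    """Logs every LLM call made inside a chain or agent."""

    def on_llm_start(self, serialized, prompts, **kwargs):
        for prompt in prompts:
            print(f"LLM call started: {prompt!r}")

    def on_llm_end(self, response, **kwargs):
        # response is an LLMResult; grab the first generation's text
        print(f"LLM call finished: {response.generations[0][0].text!r}")

# Attach to any model or chain at invocation time, e.g.:
#   model.invoke("Hello", config={"callbacks": [TraceHandler()]})
```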

The collaborative testing platform for LLM applications and agents. Your whole team defines quality requirements together; Rhesis then generates thousands of test scenarios covering edge cases, simulates realistic multi-turn conversations, and delivers actionable reviews. Testing infrastructure built for GenAI.
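Simulated multi-turn testing generally means a scripted or generated "user" driving the application turn by turn while every reply is recorded for review. A minimal sketch of that loop in plain Python; the assistant stub and turn script are hypothetical, not Rhesis's API:

```python
def assistant(history: list[dict]) -> str:
    # Stand-in for the application under test.
    return f"You said: {history[-1]['content']}"

def simulate(user_turns: list[str]) -> list[dict]:
    """Drive the assistant with scripted user turns, recording the transcript."""
    history: list[dict] = []
    for turn in user_turns:
        history.append({"role": "user", "content": turn})
        history.append({"role": "assistant", "content": assistant(history)})
    return history

for msg in simulate(["Hi", "I want to cancel my order."]):
    print(f"{msg['role']}: {msg['content']}")
```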

Vivgrid is an AI agent infrastructure platform that helps developers and startups build, observe, evaluate, and deploy AI agents with safety guardrails and global low-latency inference. It supports GPT-5, Gemini 2.5 Pro, and DeepSeek-V3, and you can start free with $200 in monthly credits. Ship production-ready AI agents confidently.

Is this image AI-generated? A free AI detector with a claimed 99.7% accuracy that detects fake photos, deepfakes, and AI-generated images from DALL-E, Midjourney, and Stable Diffusion. No signup required.

CI failures are painful to debug. SentinelQA gives you run summaries, flaky-test detection, regression analysis, visual diffs, and AI-generated action items.
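Flaky-test detection boils down to spotting tests whose outcomes disagree across identical runs. A minimal sketch of that classification in plain Python; a production tool would presumably mine historical CI runs rather than rerun on the spot, but the logic is the same:

```python
import random
from typing import Callable

def classify(test: Callable[[], bool], reruns: int = 5) -> str:
    """Run a test several times and classify it by outcome consistency."""
    outcomes = {test() for _ in range(reruns)}
    if outcomes == {True}:
        return "pass"
    if outcomes == {False}:
        return "fail"
    return "flaky"  # both outcomes observed across identical runs

def timing_sensitive_test() -> bool:
    return random.random() > 0.3  # stand-in for a race-prone test

print(classify(timing_sensitive_test))
```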

It improves the cost, performance, and accuracy of GenAI apps. Integration takes under two minutes, after which it monitors all of your LLM requests while making your app more resilient, secure, performant, and accurate.
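A minutes-long integration that then sees every LLM request usually means a thin wrapper or hosted gateway sitting between your app and the model provider. A client-side sketch of the idea in plain Python; the decorator and stub model are illustrative, not this product's SDK:

```python
import functools
import json
import time

def monitor(llm_call):
    """Time and log every request made through the wrapped call."""
    @functools.wraps(llm_call)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        status = "error"
        try:
            result = llm_call(*args, **kwargs)
            status = "ok"
            return result
        finally:
            record = {
                "fn": llm_call.__name__,
                "latency_ms": round((time.perf_counter() - start) * 1000, 1),
                "status": status,
            }
            print(json.dumps(record))  # stand-in for shipping to the platform

    return wrapper

@monitor
def fake_llm(prompt: str) -> str:
    return f"echo: {prompt}"  # stand-in for a real model call

fake_llm("Hello")
```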

It is an AI observability and LLM evaluation platform designed to help ML and LLM engineers and data scientists surface model issues faster, resolve their root cause, and ultimately improve model performance.

It is the leading observability platform trusted by high-performing teams to help maintain the quality and performance of ML models, LLMs, and data pipelines.

It is a tool for testing and evaluating LLM output quality. With it, you can systematically test prompts, models, and RAG pipelines against predefined test cases, and it can be used as a CLI, as a library, or inside CI/CD pipelines.
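A predefined test case for this style of tool typically pairs template variables with an assertion on the model output. A minimal sketch in plain Python; the `TestCase` shape and the contains-style assertion are hypothetical, not this tool's actual schema:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    vars: dict            # values substituted into the prompt template
    expect_contains: str  # substring the output must include

def run_suite(template: str, model: Callable[[str], str],
              cases: list[TestCase]) -> None:
    """Render each case's prompt, run the model, and check the assertion."""
    for case in cases:
        output = model(template.format(**case.vars))
        verdict = "PASS" if case.expect_contains in output else "FAIL"
        print(f"{verdict} vars={case.vars}")

def stub_model(prompt: str) -> str:
    return "Paris is the capital of France."  # stand-in for a real LLM

run_suite(
    "What is the capital of {country}?",
    stub_model,
    [TestCase(vars={"country": "France"}, expect_contains="Paris")],
)
```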

It is the toolkit for evaluating and developing robust and reliable AI agents. Build compliant virtual employees with observability, evals, and replay analytics. No more black boxes or prompt guessing.