| Airtrain | Gaffer |
| --- | --- |
| A no-code compute platform for language models, aimed at AI developers and product builders. Vibe-check and compare quality, performance, and cost at once across a wide selection of open-source and proprietary LLMs. | Easily host and share test reports. Gaffer saves developers time and improves test visibility. |
| Query and compare a large selection of open-source and proprietary models at once; replace costly APIs with cheap custom AI models; LLM-assisted scoring simplifies model grading using your task descriptions; cut your AI costs by up to 90%. | Report hosting, report AI analysis. |
| Statistics | |
| Stacks: 0 | Stacks: 0 |
| Followers: 2 | Followers: 1 |
| Votes: 0 | Votes: 1 |
| Integrations | |
| No integrations available | |

BrowserStack is the leading test platform built for developers and QA teams to expand test coverage and scale and optimize testing, with cross-browser testing, a real device cloud, accessibility testing, visual testing, test management, and test observability.

TestRail helps you manage and track your software testing efforts and organize your QA department. Its intuitive web-based user interface makes it easy to create test cases, manage test runs and coordinate your entire testing process.

Waxell is the AI governance plane for agentic systems in production. It sits above agents, models, and integrations, enforcing constraints and defining what's allowed. It offers auto-instrumentation for 200+ libraries without code changes, real-time tracing, token and cost tracking, and enforcement of 11 categories of agentic governance policy.

Manage all aspects of software quality: integrate with JIRA and various test tools, foster collaboration, and gain real-time visibility.

Find out what is driving your AI bill. Opsmeter gives endpoint-, user-, model-, and prompt-level AI cost attribution in one view.

A platform for building production-grade LLM applications. It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework, and it integrates seamlessly with LangChain, the go-to open-source framework for building with LLMs.

Transform basic prompts into expert-level AI instructions. Enhance, benchmark & optimize prompts for ChatGPT, Claude, Gemini & more.

The collaborative testing platform for LLM applications and agents. Your whole team defines quality requirements together; Rhesis generates thousands of test scenarios covering edge cases, simulates realistic multi-turn conversations, and delivers actionable reviews. Testing infrastructure built for Gen AI.

The most advanced, consistent, and effective AI humanizer on the market. Instantly transform AI-generated text into undetectable, human-like writing in one click.

Practice your interview skills with AI-powered interviewers. Simulate real interview scenarios and improve your performance. Get instant feedback, a complete overview, and a plan with next steps for improvement.