| | Gaffer | Compared tool |
| --- | --- | --- |
| Description | Easily host and share test reports. Gaffer saves developers time and improves test visibility. | Track the environmental impact of your AI queries and choose the most energy-efficient models. |
| Features | Report Hosting, Report AI Analysis | Coming Soon |
| Stacks | 0 | 0 |
| Followers | 1 | 1 |
| Votes | 1 | 1 |

BrowserStack is a leading test platform built for developers and QA teams to expand test coverage and to scale and optimize testing, with cross-browser testing, a real device cloud, accessibility testing, visual testing, test management, and test observability.
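As a rough illustration of the cross-browser cloud in practice, the sketch below points a standard Selenium 4 Remote WebDriver session at BrowserStack's public hub endpoint; the credentials and capability values are placeholders you would replace with your own, and real test suites would normally set these via config rather than inline.

```python
# Minimal sketch: run a browser session on BrowserStack's cloud via Selenium 4.
# The userName/accessKey values are placeholders (assumptions), not real credentials.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
# BrowserStack reads platform and account details from the "bstack:options" capability.
options.set_capability("bstack:options", {
    "os": "Windows",
    "osVersion": "11",
    "userName": "YOUR_USERNAME",    # placeholder
    "accessKey": "YOUR_ACCESS_KEY", # placeholder
})

# Sessions are created against BrowserStack's remote hub instead of a local driver.
driver = webdriver.Remote(
    command_executor="https://hub-cloud.browserstack.com/wd/hub",
    options=options,
)
driver.get("https://example.com")
print(driver.title)
driver.quit()
```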

TestRail helps you manage and track your software testing efforts and organize your QA department. Its intuitive web-based user interface makes it easy to create test cases, manage test runs, and coordinate your entire testing process.

All-in-one content studio: easily create any photo, video, or audio clip with AI. Affordable, easy to use, and built on the latest AI models.

Manage all aspects of software quality: integrate with JIRA and various test tools, foster collaboration, and gain real-time visibility.
Never lose your best AI prompts again. SpacePrompts provides seamless prompt management to help you save, organize, and access your ChatGPT, Claude, Gemini, and other AI assistant prompts instantly across all your devices.

Transform basic prompts into expert-level AI instructions. Enhance, benchmark & optimize prompts for ChatGPT, Claude, Gemini & more.

A platform for building production-grade LLM applications. It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework, and it integrates seamlessly with LangChain, the go-to open-source framework for building with LLMs.
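This blurb reads like a description of LangSmith, the tracing and evaluation platform that pairs with LangChain; assuming that, a minimal sketch with the `langsmith` Python SDK might look like the following. The environment variable setup and placeholder logic are assumptions, not the platform's only entry point.

```python
# Minimal sketch, assuming the platform is LangSmith, the `langsmith` package is
# installed, and LANGSMITH_TRACING=true plus LANGSMITH_API_KEY are set in the env.
from langsmith import traceable

@traceable  # records this call's inputs, outputs, and latency as a traced run
def summarize(text: str) -> str:
    # Placeholder logic: a real application would call an LLM here through
    # LangChain or any other framework, and nested calls would be traced too.
    return text.split(".")[0]

print(summarize("Traced runs appear in the platform's debugging and monitoring UI."))
```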

The Hugging Face Hub is a platform (a centralized web service) for hosting machine learning models, datasets, and demo apps. Build, train, and deploy state-of-the-art models powered by the reference open-source libraries in machine learning.
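For a concrete sense of how Hub-hosted models are consumed from code, here is a minimal sketch that pulls a publicly hosted checkpoint with the `transformers` library; the model id below is just an illustrative public checkpoint, not a recommendation.

```python
# Minimal sketch: download a model hosted on the Hugging Face Hub and run it locally.
# Assumes the `transformers` library (and a backend such as PyTorch) is installed.
from transformers import pipeline

# The pipeline fetches the model weights and tokenizer from the Hub, caches them
# locally, and wraps them in a ready-to-use inference object.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert/distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Hosting models on the Hub makes them easy to share."))
```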

The collaborative testing platform for LLM applications and agents. Your whole team defines quality requirements together, and Rhesis generates thousands of test scenarios covering edge cases, simulates realistic multi-turn conversations, and delivers actionable reviews. Testing infrastructure built for Gen AI.

The most advanced, consistent, and effective AI humanizer on the market. Instantly transform AI-generated text into undetectable, human-like writing in one click.