TestDino is an AI-native, Playwright-focused test reporting and management platform with MCP support. It lets Claude Code, Cursor, and other LLM-based tools query Playwright reports, analyze flaky-test trends, compare environments, and sync complete run context into Jira or Asana.
- Flaky test analysis: finds the top flaky tests across CI runs and branches. Solves: random failures, rerun waste, flaky noise.
- Error analysis: groups failures and highlights the real failing file/method/line. Solves: noisy stack traces, hard triage, slow debugging.
- Evidence collection: trace, screenshots, video, and console logs attached to failures (see the configuration sketch below). Solves: “works locally”, missing logs, unreproducible CI failures.
- Environment analysis: compares failures by OS/browser/runner/environment. Solves: CI-only failures, Linux headless issues, infra-based flakes.
- Test failure classification: bug vs. flaky vs. infrastructure vs. UI change. Solves: wrong prioritization, dev/QA blame game, time wasted fixing the wrong issues.
- Smart rerun grouping: attempts 1/2/3 grouped together. Solves: proving flaky vs. real bug, tracking rerun outcomes, retry confusion.
- AI insights: detects regressions, repeated failures, and new failure patterns. Solves: hidden instability trends, late discovery of regressions.
- AI summaries: a one-line reason plus the next action. Solves: long debugging notes, slow understanding for non-authors.
- Test run management: centralized history with commit/branch/duration. Solves: hunting for CI artifacts, no single source of truth.
- GitHub integration: PR checks and commit summaries. Solves: low PR confidence, unstable merges, unclear test status.
- Slack app: real-time failure and flaky alerts. Solves: delayed awareness, silent CI failures, missed regressions.
- Jira/Linear/Asana/Monday: auto-creates issues with full context. Solves: manual ticket creation, missing reproduction details, slow handoff.
- MCP server: query test runs/errors/flakes via AI tools. Solves: slow investigation, manual searching, lack of an AI-assisted debugging workflow.
Statistics
Stacks: 0
Followers: 1
Votes: 1
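The evidence collection and rerun grouping described above rely on Playwright being configured to emit traces, screenshots, videos, and retry attempts. A minimal sketch of such a configuration follows; the options shown are standard Playwright settings, while the TestDino reporter package name and token variable are assumptions rather than details taken from this page.

```ts
// playwright.config.ts — minimal sketch of a config that produces the
// artifacts referenced above (traces, screenshots, video, retry attempts).
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Retries on CI enable attempt 1/2/3 grouping and flaky detection.
  retries: process.env.CI ? 2 : 0,
  reporter: [
    ['list'],
    // JSON output that a reporting platform or upload CLI can ingest.
    ['json', { outputFile: 'test-results/results.json' }],
    // Hypothetical TestDino reporter wiring; the package name and option
    // are assumptions — check TestDino's docs for the real integration.
    // ['@testdino/playwright-reporter', { token: process.env.TESTDINO_TOKEN }],
  ],
  use: {
    trace: 'on-first-retry',       // trace recorded when a test is retried
    screenshot: 'only-on-failure', // screenshots attached to failures
    video: 'retain-on-failure',    // video kept only for failing tests
  },
});
```

With retries enabled, Playwright records each attempt separately, which is what makes attempt-level grouping and the flaky-versus-real-bug distinction possible.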

BrowserStack is a leading testing platform built for developers and QA teams to expand test coverage and to scale and optimize testing, with cross-browser testing, a real-device cloud, accessibility testing, visual testing, test management, and test observability.

Small, fast, and scalable bearbones state-management solution. It has a comfy API based on hooks that isn't boilerplatey or opinionated, but is still just enough to be explicit and flux-like.
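This wording matches Zustand's README; assuming that is the library being described, a minimal sketch of the hooks-based store it refers to looks like this (the counter state and names are illustrative only):

```ts
import { create } from 'zustand';

type CounterState = {
  count: number;
  increment: () => void;
  reset: () => void;
};

// The store itself is a hook — no providers, reducers, or boilerplate.
const useCounterStore = create<CounterState>((set) => ({
  count: 0,
  increment: () => set((state) => ({ count: state.count + 1 })),
  reset: () => set({ count: 0 }),
}));

// Inside a React component, select only the slice you need:
//   const count = useCounterStore((state) => state.count);
//   const increment = useCounterStore((state) => state.increment);
```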

TestRail helps you manage and track your software testing efforts and organize your QA department. Its intuitive web-based user interface makes it easy to create test cases, manage test runs and coordinate your entire testing process.

Statsbot helps you take control of your raw data, providing an all-in-one analysis tool for engineers and non-tech folks alike.

A modern European data quality platform that uses artificial intelligence to uncover anomalies and errors in your data.

Manage all aspects of software quality: integrate with JIRA and various test tools, foster collaboration, and gain real-time visibility.

A platform for building production-grade LLM applications. It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework, and it integrates seamlessly with LangChain, the go-to open-source framework for building with LLMs.
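This description matches LangSmith's; assuming that is the platform being described, tracing for a LangChain.js application is commonly enabled through environment variables rather than code changes. A minimal sketch, with a placeholder model name:

```ts
// Minimal tracing sketch, assuming the entry above describes LangSmith.
// With these environment variables set, LangChain.js invocations are
// traced without further code changes:
//   LANGCHAIN_TRACING_V2=true
//   LANGCHAIN_API_KEY=<your API key>
import { ChatOpenAI } from "@langchain/openai";

// Any model or chain call made while tracing is enabled becomes a run
// that can be debugged, evaluated, and monitored on the platform.
const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const reply = await model.invoke("Summarize why flaky tests are costly.");
console.log(reply.content);
```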

Find out what is driving your AI bill: Opsmeter gives endpoint-, user-, model-, and prompt-level AI cost attribution in one view.

Transform basic prompts into expert-level AI instructions. Enhance, benchmark & optimize prompts for ChatGPT, Claude, Gemini & more.

The collaborative testing platform for LLM applications and agents. Your whole team defines quality requirements together; Rhesis then generates thousands of test scenarios covering edge cases, simulates realistic multi-turn conversations, and delivers actionable reviews. Testing infrastructure built for Gen AI.