|  |  | SentinelQA |
| --- | --- | --- |
| Description | Continuously validate your LLM-based application throughout the entire lifecycle, from pre-deployment and internal experimentation to production. Simplify compliance with AI-related policies, regulations, and soft laws. | CI failures are painful to debug. SentinelQA gives you run summaries, flaky-test detection, regression analysis, visual diffs, and AI-generated action items. |
| Categories | LLM evaluation, Real-time monitoring | QA, DevOps, Test Intelligence, AI, Analytics, Test Debugging |
| Stacks | 0 | 0 |
| Followers | 0 | 1 |
| Votes | 0 | 1 |
| Integrations | No integrations available |  |

BrowserStack is a leading test platform built for developers and QA teams to expand test coverage and to scale and optimize testing, with cross-browser testing, a real-device cloud, accessibility testing, visual testing, test management, and test observability.

TestRail helps you manage and track your software testing efforts and organize your QA department. Its intuitive web-based user interface makes it easy to create test cases, manage test runs and coordinate your entire testing process.

A tool that transforms AI-generated content into natural, human-like writing, designed to bypass AI detection systems with intelligent text-humanization technology.

It is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
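
This description matches LangChain. As a minimal sketch of the "chaining" idea, assuming the langchain-core and langchain-openai packages and an OpenAI API key in the environment (the model name and prompt are placeholders), a prompt, model, and output parser can be piped together:

```python
# Minimal sketch of "chaining" components in LangChain (LCEL).
# Assumes langchain-core, langchain-openai, and OPENAI_API_KEY are set up.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model name
parser = StrOutputParser()

# The | operator pipes one component's output into the next.
chain = prompt | llm | parser
print(chain.invoke({"text": "LangChain composes prompts, models, and parsers."}))
```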

It allows you to run open-source large language models, such as Llama 2, locally.
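
This reads like Ollama's description. Assuming so, a minimal sketch with its official Python client; it requires a running local server (`ollama serve`) and an already-pulled model:

```python
# Minimal sketch: chat with a locally running model via the ollama Python client.
# Assumes `ollama serve` is running and `ollama pull llama2` has been done.
import ollama

response = ollama.chat(
    model="llama2",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```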

Manage all aspects of software quality: integrate with JIRA and various test tools, foster collaboration, and gain real-time visibility.

It is a project that provides a central interface to connect your LLMs with external data, and it offers a comprehensive toolset for trading off cost and performance.
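
This description matches LlamaIndex. A minimal sketch of pointing an LLM at external data, assuming the llama-index package, an OpenAI API key in the environment, and a placeholder `./data` folder of documents:

```python
# Minimal sketch: index local documents and query them through an LLM.
# Assumes the llama-index package and OPENAI_API_KEY in the environment.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()  # placeholder folder
index = VectorStoreIndex.from_documents(documents)       # embeds and stores chunks

query_engine = index.as_query_engine()
print(query_engine.query("What do these documents say about testing?"))
```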

It is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
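
A minimal sketch of the cyclic coordination the description refers to, assuming the langgraph package; the counter node is a stand-in for real LLM-calling steps:

```python
# Minimal sketch: a stateful LangGraph that loops until a counter reaches a limit.
# Assumes the langgraph package; node logic is a placeholder for LLM calls.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    count: int

def step(state: State) -> State:
    return {"count": state["count"] + 1}

def should_continue(state: State) -> str:
    # Cycle back to "step" until the limit, then stop.
    return "step" if state["count"] < 3 else END

graph = StateGraph(State)
graph.add_node("step", step)
graph.set_entry_point("step")
graph.add_conditional_edges("step", should_continue)

app = graph.compile()
print(app.invoke({"count": 0}))  # {'count': 3}
```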

It is a platform for building production-grade LLM applications. It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework and seamlessly integrates with LangChain, the go-to open source framework for building with LLMs.
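
This description matches LangSmith. Tracing is usually switched on through environment variables rather than code changes; a minimal sketch with placeholder values:

```python
# Minimal sketch: enable LangSmith tracing for a LangChain app via env vars.
# The API key and project name below are placeholders.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "my-llm-app"  # optional; groups runs by project

# Any LangChain chain or agent invoked after this point is traced automatically,
# so runs appear in the LangSmith UI for debugging, testing, and evaluation.
```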

The collaborative testing platform for LLM applications and agents. Your whole team defines quality requirements together; Rhesis generates thousands of test scenarios covering edge cases, simulates realistic multi-turn conversations, and delivers actionable reviews. Testing infrastructure built for Gen AI.