It is a platform for building production-grade LLM applications. It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework, and it seamlessly integrates with LangChain, the go-to open source framework for building with LLMs. | Track the environmental impact of your AI queries and choose the most energy-efficient models.
Collaborate with teammates to get app behavior just right; a unified DevOps platform for your LLM applications; the platform for your LLM development lifecycle; develop with greater visibility | Coming Soon
Statistics
Stacks 6 | Stacks 0 |
Followers 5 | Followers 1 |
Votes 1 | Votes 1 |
Integrations
No integrations available

It transforms AI-generated content into natural, undetectable, human-like writing, bypassing AI detection systems with intelligent text humanization technology.

It is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
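A minimal sketch of that chaining idea, assuming the langchain-core and langchain-openai packages and an OpenAI API key in the environment (the model name and prompt are illustrative): a prompt, a model, and an output parser composed into one pipeline.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Each component is a Runnable; the | operator chains them into a pipeline.
prompt = ChatPromptTemplate.from_template("Summarize this in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")  # assumes OPENAI_API_KEY is set
parser = StrOutputParser()             # extracts the plain string reply

chain = prompt | llm | parser
print(chain.invoke({"text": "LangChain lets you compose LLM components."}))
```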

It allows you to run open-source large language models, such as Llama 2, locally.
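A minimal sketch, assuming the Ollama server is running locally and the official ollama Python client is installed (the model name is just an example):

```python
import ollama  # pip install ollama; talks to the local server on localhost:11434

# Pull the model once (equivalent to `ollama pull llama2` on the CLI)...
ollama.pull("llama2")

# ...then chat with it entirely on your own machine.
response = ollama.chat(
    model="llama2",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```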

It is a project that provides a central interface to connect your LLMs with external data, offering a comprehensive toolset for trading off cost and performance.
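A minimal sketch of that central interface, assuming a recent llama-index release, an OpenAI API key for the default models, and an illustrative ./data folder of documents:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load your own files and index them so the LLM can answer over that data.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Query the external data through a single query engine interface.
query_engine = index.as_query_engine()
print(query_engine.query("What do these documents say about pricing?"))
```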

It is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
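A minimal sketch of that cyclic, stateful coordination, assuming the langgraph package (the node name, state shape, and stop condition are illustrative):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    count: int

def work(state: State) -> State:
    # One step of computation; a real graph would call a chain or tool here.
    return {"count": state["count"] + 1}

def route(state: State) -> str:
    # Loop back to the same node until the stop condition is met (a cycle).
    return "work" if state["count"] < 3 else END

graph = StateGraph(State)
graph.add_node("work", work)
graph.set_entry_point("work")
graph.add_conditional_edges("work", route)

app = graph.compile()
print(app.invoke({"count": 0}))  # {'count': 3}
```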
Never lose your best AI prompts again. SpacePrompts provides seamless prompt management to help you save, organize, and access your ChatGPT, Claude, Gemini, and other AI assistant prompts instantly across all your devices.

The Hugging Face Hub is a platform (centralized web service) for hosting machine learning models, datasets, and demo apps. It lets you build, train, and deploy state-of-the-art models powered by the reference open source libraries in machine learning.
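For instance, a sketch assuming the huggingface_hub client library (the search query is illustrative; gpt2 is a public model repository on the Hub):

```python
from huggingface_hub import hf_hub_download, list_models

# Search the Hub for hosted models matching a query.
for model in list_models(search="sentiment", limit=3):
    print(model.id)

# Download a file from a hosted model repository (cached locally).
path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(path)
```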

The collaborative testing platform for LLM applications and agents. Your whole team defines quality requirements together, Rhesis generates thousands of test scenarios covering edge cases, simulates realistic multi-turn conversations, and delivers actionable reviews. Testing infrastructure built for Gen AI.

Practice your interview skills with AI-powered interviewers. Simulate real interview scenarios and improve your performance. Get instant feedback, a complete overview, and a plan with next steps to improve.

Vivgrid is an AI agent infrastructure platform that helps developers and startups build, observe, evaluate, and deploy AI agents with safety guardrails and global low-latency inference. Support for GPT-5, Gemini 2.5 Pro, and DeepSeek-V3. Start free with $200 monthly credits. Ship production-ready AI agents confidently.