First tool:
A framework for building and running Generative AI (Gen AI) applications that process data in real time.
Features:
- Build Q&A chat over unstructured text in minutes
- Use the latest Gen AI technologies
- Manage embeddings in your vector database
- Code and deploy in Visual Studio Code
- Declare an app and deploy it to dev or prod
- Bring your existing data to the LLM
- Fix your day-2 problems with an event-driven architecture
Statistics: GitHub Stars 427, GitHub Forks 34, Stacks 1, Followers 3, Votes 0.
Integrations: no integrations available.

Second tool:
Transforms basic prompts into expert-level AI instructions. Enhance, benchmark, and optimize prompts for ChatGPT, Claude, Gemini, and more.
Features: Prompt Enhancement, Real-Time Scoring, A/B Testing, Prompt Compare, Image-to-Prompt, Presentation Builder, Smart Templates, Analytics Dashboard, Version Control, Collections & Folders, Chrome Extension, Expert Prompt Library.
Statistics: GitHub Stars and Forks not listed, Stacks 10, Followers 2, Votes 1.

A tool that transforms AI-generated content into natural, human-like writing, designed to bypass AI-detection systems through intelligent text humanization.

Waxell is the AI governance plane for agentic systems in production. It sits above agents, models, and integrations, enforcing constraints and defining what's allowed. It offers auto-instrumentation for 200+ libraries without code changes, real-time tracing, token and cost tracking, and policy enforcement across 11 categories of agentic governance.
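Waxell's actual policy engine isn't documented here, but the "governance plane" pattern it describes — rules evaluated above the agent, before an action is allowed — can be sketched in a few lines. The rule names and `Action` fields below are hypothetical, purely for illustration:

```python
# Sketch of agent-governance policy enforcement: every action an agent
# proposes is checked against a rule set before it runs. Rule names and
# the Action shape are hypothetical, not Waxell's actual API.

from dataclasses import dataclass

@dataclass
class Action:
    tool: str          # which integration the agent wants to call
    cost_usd: float    # estimated spend for this one call

POLICIES = [
    ("allowed_tools", lambda a: a.tool in {"search", "calculator"}),
    ("cost_ceiling",  lambda a: a.cost_usd <= 0.50),
]

def evaluate(action: Action) -> list[str]:
    """Return the names of every policy the action violates."""
    return [name for name, ok in POLICIES if not ok(action)]

print(evaluate(Action(tool="shell", cost_usd=0.10)))   # violates allowed_tools
print(evaluate(Action(tool="search", cost_usd=0.05)))  # passes all policies
```

A real governance layer would enforce many more rule categories and intercept calls automatically rather than requiring an explicit `evaluate`, but the shape — deny-by-rule before execution — is the same.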

It is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
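The "chaining" idea can be shown without any framework at all: small components composed into a pipeline, each one's output feeding the next. The component names below are illustrative, not a specific library's API:

```python
# Minimal sketch of chaining: template -> model -> parser.
# fake_llm stands in for a real model call; names are illustrative.

def prompt_template(question: str) -> str:
    return f"Answer concisely: {question}"

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call.
    return f"[model output for: {prompt}]"

def parse(text: str) -> str:
    return text.strip("[]")

def chain(*steps):
    """Compose steps left to right into a single callable."""
    def run(x):
        for step in steps:
            x = step(x)
        return x
    return run

qa = chain(prompt_template, fake_llm, parse)
print(qa("What is an LLM?"))
```

Swapping any one component (a different template, a real model client, a structured-output parser) changes the behavior without touching the rest of the chain, which is what makes the composition pattern useful.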

It allows you to run open-source large language models, such as Llama 2, locally.

It is a project that provides a central interface for connecting your LLMs to external data, offering a comprehensive toolset for trading off cost and performance.

It is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
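Coordinating multiple actors "in a cyclic manner" means the computation is a graph with loops: each node reads and updates shared state, and a condition decides whether to loop back or stop. A minimal sketch of that shape, with illustrative node names (not the library's actual API):

```python
# Sketch of a cyclic, stateful multi-actor loop: draft and review nodes
# share one state dict, cycling until a condition says to stop.
# Node names and the dict-based state are illustrative.

def draft(state: dict) -> dict:
    state["text"] = state.get("text", "") + "draft "
    return state

def review(state: dict) -> dict:
    state["revisions"] = state.get("revisions", 0) + 1
    return state

def should_continue(state: dict) -> bool:
    return state["revisions"] < 3   # cycle until three review passes

def run(state: dict) -> dict:
    while True:
        state = draft(state)
        state = review(state)
        if not should_continue(state):
            return state

final = run({})
print(final["revisions"])  # 3
```

A plain chain runs each step once, left to right; the conditional edge here is what turns it into a cycle, which is exactly the capability the description says is added on top of chaining.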

It is a platform for building production-grade LLM applications. It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework, and it integrates seamlessly with LangChain, the go-to open-source framework for building with LLMs.
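The debugging and monitoring the platform provides rests on tracing: each step of a chain is wrapped so its inputs, outputs, and timing are recorded for later inspection. A hypothetical sketch of that mechanism (not the platform's actual API):

```python
# Sketch of step-level tracing: a decorator records each wrapped call's
# input, output, and latency into a trace log. Purely illustrative.

import time

TRACE: list[dict] = []

def traced(name):
    def wrap(fn):
        def inner(x):
            t0 = time.perf_counter()
            out = fn(x)
            TRACE.append({"step": name, "input": x, "output": out,
                          "seconds": time.perf_counter() - t0})
            return out
        return inner
    return wrap

@traced("uppercase")
def uppercase(s: str) -> str:
    return s.upper()

uppercase("hello")
print([t["step"] for t in TRACE])  # ['uppercase']
```

In a hosted platform the trace log would be shipped to a server and rendered as a per-run timeline, rather than accumulated in a list, but the instrumentation pattern is the same.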

Find out what is driving your AI bill. Opsmeter gives endpoint-, user-, model-, and prompt-level AI cost attribution in one view.
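Multi-dimensional cost attribution boils down to logging each model call with its metadata and grouping spend by whichever dimension you ask about. A sketch of the idea, with illustrative field names and prices (not Opsmeter's actual data model):

```python
# Sketch of per-dimension AI cost attribution: each call is logged with
# metadata, then spend is grouped by endpoint, user, or model. Costs are
# in integer cents to avoid float rounding; all values are illustrative.

from collections import defaultdict

calls = [
    {"endpoint": "/chat", "user": "alice", "model": "gpt-4o", "cost_cents": 4},
    {"endpoint": "/chat", "user": "bob", "model": "gpt-4o", "cost_cents": 3},
    {"endpoint": "/summarize", "user": "alice", "model": "gpt-4o-mini", "cost_cents": 1},
]

def attribute(calls: list[dict], dimension: str) -> dict:
    """Total cost grouped by the given metadata field."""
    totals = defaultdict(int)
    for c in calls:
        totals[c[dimension]] += c["cost_cents"]
    return dict(totals)

print(attribute(calls, "endpoint"))  # {'/chat': 7, '/summarize': 1}
print(attribute(calls, "user"))      # {'alice': 5, 'bob': 3}
```

The same log answers all four questions (endpoint, user, model, prompt) just by changing the grouping key, which is why a single view can cover them all.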

The collaborative testing platform for LLM applications and agents. Your whole team defines quality requirements together; Rhesis then generates thousands of test scenarios covering edge cases, simulates realistic multi-turn conversations, and delivers actionable reviews. Testing infrastructure built for Gen AI.
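Generating thousands of scenarios from a few team-defined requirements typically works by combining coverage dimensions. The dimensions below (persona, intent, edge case) are illustrative, not Rhesis's actual schema:

```python
# Sketch of combinatorial test-scenario generation: take the cross
# product of coverage dimensions so every combination gets a scenario.
# The dimensions and their values are illustrative.

from itertools import product

personas = ["new user", "expert"]
intents = ["refund request", "account deletion"]
edge_cases = ["empty input", "non-English input"]

scenarios = [
    {"persona": p, "intent": i, "edge_case": e}
    for p, i, e in product(personas, intents, edge_cases)
]

print(len(scenarios))  # 8 scenarios from 2 x 2 x 2 dimensions
```

With a handful of values per dimension the cross product grows into thousands of scenarios quickly, which is how a small set of quality requirements turns into broad edge-case coverage.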

Practice your interview skills with AI-powered interviewers. Simulate real interview scenarios, improve your performance, and get instant feedback, plus a complete overview and a plan with next steps for improvement.