| | Chroma | Compared tool |
| --- | --- | --- |
| Description | Chroma is an open-source embedding database. It makes it easy to build LLM apps by making knowledge, facts, and skills pluggable for LLMs. | Getting bad AI outputs? Find what's missing in your prompt and fix it in 30 seconds. Free AI-powered diagnosis that transforms your prompts to expert level. |
| Features | Fully typed, fully tested, fully documented; dev, test, prod: the same API that runs in your Python notebook scales to your cluster; feature-rich; free & open source | 3 prompt analyses per week; first analysis instant; 4-core-element diagnosis; AI auto-completed prompts; unlimited analyses |
| GitHub stars | 24.2K | - |
| GitHub forks | 1.9K | - |
| Stacks | 53 | 0 |
| Followers | 15 | 1 |
| Votes | 0 | 1 |
| Integrations | No integrations available | - |
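The embedding-database workflow Chroma describes (add documents, then query by semantic similarity) can be sketched in pure Python. The bag-of-words "embedding" below is a toy stand-in for a real embedding model, and none of the names (`ToyEmbeddingDB`, `embed`, `query`) are Chroma's actual API:

```python
from collections import Counter
import math

def embed(text):
    """Toy embedding: bag-of-words term counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class ToyEmbeddingDB:
    """Minimal in-memory embedding store: add documents, query by similarity."""
    def __init__(self):
        self.docs = {}  # id -> (text, vector)

    def add(self, doc_id, text):
        self.docs[doc_id] = (text, embed(text))

    def query(self, text, n_results=1):
        qv = embed(text)
        ranked = sorted(self.docs.items(),
                        key=lambda kv: cosine(qv, kv[1][1]), reverse=True)
        return [doc_id for doc_id, _ in ranked[:n_results]]

db = ToyEmbeddingDB()
db.add("d1", "llms store facts as embeddings")
db.add("d2", "bananas are yellow fruit")
print(db.query("embeddings for llms"))  # -> ['d1']
```

Real embedding databases replace the toy vectors with dense model embeddings and the linear scan with an approximate-nearest-neighbor index, but the add/query shape is the same.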

Milvus is an open-source vector database built on a heterogeneous computing architecture for cost efficiency. Searches over billion-scale vectors take only milliseconds with minimal computing resources.

The most advanced, consistent, and effective AI humanizer on the market. It transforms AI-generated content into natural, undetectable, human-like writing in one click, bypassing AI detection systems with intelligent text-humanization technology.

Waxell is the AI governance plane for agentic systems in production. It sits above agents, models, and integrations, enforcing constraints and defining what is allowed. It provides auto-instrumentation for 200+ libraries with no code changes, real-time tracing, token and cost tracking, and policy enforcement across 11 categories of agentic governance.

LangChain is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can "chain" together different components to create more advanced use cases around LLMs.
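The "chain components together" idea can be illustrated with plain callables: a prompt template feeds a model, whose completion feeds a parser. The component names and the canned `fake_llm` below are illustrative, not the library's real API:

```python
def prompt_template(question):
    """Format a user question into a model prompt."""
    return f"Answer concisely: {question}"

def fake_llm(prompt):
    """Stand-in for a real model call; returns a canned completion."""
    return "ANSWER: 4" if "2 + 2" in prompt else "ANSWER: unknown"

def output_parser(completion):
    """Strip the completion down to the bare answer."""
    return completion.removeprefix("ANSWER: ").strip()

def chain(*steps):
    """Compose steps left-to-right into one callable."""
    def run(x):
        for step in steps:
            x = step(x)
        return x
    return run

qa = chain(prompt_template, fake_llm, output_parser)
print(qa("What is 2 + 2?"))  # -> 4
```

Because each component has the same call shape, swapping the fake model for a real one, or inserting a retrieval step, changes only the `chain(...)` line.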

Ollama allows you to run open-source large language models, such as Llama 2, locally.

LlamaIndex is a project that provides a central interface to connect your LLMs with external data. It offers a comprehensive toolset for trading off cost and performance.
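Connecting an LLM to external data usually means retrieving the most relevant document for a query and splicing it into the prompt. A minimal sketch of that pattern, with naive keyword-overlap retrieval and made-up document text (none of this is the library's API):

```python
import re

DOCS = [
    "The 2023 report shows revenue grew 12 percent.",
    "Our refund policy allows returns within 30 days.",
]

def tokens(text):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, docs):
    """Return the document sharing the most words with the query."""
    q = tokens(query)
    return max(docs, key=lambda d: len(q & tokens(d)))

def build_prompt(query, docs):
    """Stuff the retrieved context into the LLM prompt."""
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

print(build_prompt("What is the refund policy?", DOCS))
```

Production data frameworks replace the keyword overlap with embedding-based retrieval over chunked, indexed sources, but the retrieve-then-prompt flow is the same.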

LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
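The key difference from a linear chain is the cycle: nodes read and write shared state, and an edge can route execution back to an earlier node until a condition holds. A minimal state-machine sketch of that idea (node names, state keys, and the graph encoding are all illustrative, not the library's API):

```python
def draft(state):
    """Actor 1: extend the draft text."""
    state["text"] = state.get("text", "") + "x"
    return state

def review(state):
    """Actor 2: decide whether the draft is long enough."""
    state["done"] = len(state["text"]) >= 3
    return state

# Edges: a plain string is an unconditional edge; a callable inspects the
# state and may route back to "draft", forming the cycle.
GRAPH = {"draft": "review",
         "review": lambda s: "end" if s["done"] else "draft"}
NODES = {"draft": draft, "review": review}

def run(state, node="draft"):
    """Step through the graph until we reach the terminal node."""
    while node != "end":
        state = NODES[node](state)
        nxt = GRAPH[node]
        node = nxt(state) if callable(nxt) else nxt
    return state

print(run({})["text"])  # -> xxx  (draft ran three times before review passed)
```

The draft/review loop here is the classic agent pattern cyclic coordination enables: act, critique, and repeat until the critique passes.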

Transform basic prompts into expert-level AI instructions. Enhance, benchmark & optimize prompts for ChatGPT, Claude, Gemini & more.

Find what caused your AI bill. Opsmeter gives endpoint, user, model, and prompt-level AI cost attribution in one view.
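Per-dimension cost attribution of the kind described above boils down to rolling raw usage events up by endpoint, user, or model. A sketch under assumed event fields and made-up prices (nothing here is Opsmeter's actual data model):

```python
from collections import defaultdict

# Hypothetical per-1K-token prices and usage events.
PRICE_PER_1K_TOKENS = {"gpt-small": 0.0005, "gpt-large": 0.01}

events = [
    {"endpoint": "/chat",   "user": "alice", "model": "gpt-large", "tokens": 2000},
    {"endpoint": "/chat",   "user": "bob",   "model": "gpt-small", "tokens": 4000},
    {"endpoint": "/search", "user": "alice", "model": "gpt-small", "tokens": 1000},
]

def attribute(events, dimension):
    """Sum the cost of each event under the chosen attribution dimension."""
    totals = defaultdict(float)
    for e in events:
        cost = e["tokens"] / 1000 * PRICE_PER_1K_TOKENS[e["model"]]
        totals[e[dimension]] += cost
    return dict(totals)

print(attribute(events, "endpoint"))  # cost per endpoint
print(attribute(events, "user"))      # cost per user
print(attribute(events, "model"))     # cost per model
```

The same event log supports every view; only the group-by key changes, which is why a single tool can show endpoint-, user-, model-, and prompt-level breakdowns at once.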