Help developers discover the tools you use. Get visibility for your team's tech choices and contribute to the community's knowledge.
| | | APIXO |
|---|---|---|
| Description | It is the interface between your app and hosted LLMs. It streamlines API requests to OpenAI, Anthropic, Mistral, Llama 2, Anyscale, Google Gemini, and more with a unified API. | APIXO is an affordable, performance‑first AI API platform that provides access to image, video, audio, and text models through one unified interface, delivering enterprise‑grade stability at a lower cost with unified routing, automatic failover, and transparent usage reporting. |
| Key features | Blazing fast (9.9x faster) with a tiny footprint (~45kb installed); load balancing across multiple models, providers, and keys; fallbacks that keep your app resilient; automatic retries with exponential backoff by default; plug-in middleware as needed; battle-tested over 100B tokens | Pay-as-you-use with no monthly fees or lock-in; credits never expire; low minimum top-up; free trial credits to ship demos/POCs; single balance for every model and provider; per-project API keys with environment isolation |
| GitHub Stars | 9.8K | - |
| GitHub Forks | 775 | - |
| Stacks | 2 | 0 |
| Followers | 4 | 1 |
| Votes | 0 | 1 |
| Integrations | No integrations available | - |
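The retry-and-fallback behaviour listed among the gateway's features can be sketched as a small loop: retry each provider with exponential backoff, then fall back to the next one. The provider callables below are hypothetical stand-ins, not any gateway's real client API:

```python
import random
import time

def call_with_retries(providers, prompt, max_retries=3, base_delay=1.0):
    """Try each provider in order; retry transient failures with
    exponential backoff, then fall back to the next provider."""
    for provider in providers:
        for attempt in range(max_retries):
            try:
                return provider(prompt)
            except ConnectionError:
                # Exponential backoff with a little jitter: 1s, 2s, 4s, ...
                time.sleep(base_delay * 2 ** attempt + random.random() * 0.01)
    raise RuntimeError("All providers exhausted")
```

A real gateway does this routing server-side, so application code only ever makes one call.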

Qonversion allows fast in-app subscription implementation. It provides the back-end infrastructure to validate user receipts and manage cross-platform user access to paid content in your app, so you do not need to build your own server.

It transforms AI-generated content into natural, undetectable, human-like writing, bypassing AI detection systems with intelligent text humanization technology.

Unleash your creativity with letsmkvideo, the leading AI video generator. Effortlessly generate professional videos from text, animate photos, and apply stunning AI video effects. Get started for free: no watermarks, just high-quality results in minutes.

It is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
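The "chaining" idea can be illustrated with plain function composition; the template, model, and parser below are toy stand-ins, not the framework's actual classes:

```python
def chain(*steps):
    """Compose steps left to right: each step's output feeds the next."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Hypothetical components standing in for a prompt template,
# a model call, and an output parser.
template = lambda topic: f"Summarize: {topic}"
fake_llm = lambda prompt: prompt.upper()   # stub model, no API call
parser = lambda text: text.strip()

pipeline = chain(template, fake_llm, parser)
```

More advanced use cases swap in real components (retrievers, memory, tools) while keeping the same composition pattern.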

It allows you to run open-source large language models, such as Llama 2, locally.

It is a project that provides a central interface to connect your LLMs with external data. It offers a comprehensive toolset for trading off cost and performance.
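A minimal sketch of the connect-LLMs-to-external-data idea: a retrieval step selects the most relevant documents and prepends them to the prompt. The keyword-overlap scoring and the `llm` callable are simplifications for illustration, not the library's real API (which uses embeddings and indexes):

```python
def retrieve(docs, query, k=2):
    """Rank documents by query-term overlap and return the top k.
    A real system would use embeddings and a vector index."""
    terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def answer(docs, query, llm):
    # Stuff the retrieved context into the prompt before calling the model.
    context = "\n".join(retrieve(docs, query))
    return llm(f"Context:\n{context}\n\nQuestion: {query}")
```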

It is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
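The cyclic multi-actor idea can be sketched as a tiny node/edge loop: unlike a linear chain, an edge may route execution back to an earlier node. The draft/critic nodes below are hypothetical and this is not the library's actual interface:

```python
def run_graph(nodes, edges, state, start, max_steps=10):
    """Run nodes until an edge returns None (end) or the step budget runs out.
    Cycles are allowed: an edge may point back to an earlier node."""
    current = start
    for _ in range(max_steps):
        state = nodes[current](state)
        current = edges[current](state)
        if current is None:
            break
    return state

# Hypothetical draft/critic loop: keep revising until the critic accepts.
nodes = {
    "draft":  lambda s: {**s, "text": s["text"] + "!", "rounds": s["rounds"] + 1},
    "critic": lambda s: s,
}
edges = {
    "draft":  lambda s: "critic",
    "critic": lambda s: None if s["rounds"] >= 3 else "draft",
}
result = run_graph(nodes, edges, {"text": "hi", "rounds": 0}, "draft")
```

The step budget matters: with cycles, termination depends on the edge conditions rather than on reaching the end of a pipeline.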

Transform basic prompts into expert-level AI instructions. Enhance, benchmark & optimize prompts for ChatGPT, Claude, Gemini & more.

It is a platform for building production-grade LLM applications. It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework and seamlessly integrates with LangChain, the go-to open source framework for building with LLMs.

Try Grok 4 on GPT Proto. Access xAI’s most advanced 1.7T LLM with 130K context, multimodal support, and real-time data integration for dynamic analysis.