It is the interface between your app and hosted LLMs. It streamlines API requests to OpenAI, Anthropic, Mistral, Llama 2, Anyscale, Google Gemini, and more with a unified API (see the request sketch after the feature list below). | On the web: create Sora videos from text and images. Try Sora 2 on the web to generate videos online, or integrate with the Sora 2 API. |
Blazing fast (9.9x faster) with a tiny footprint (~45kb installed);
Load balance across multiple models, providers, and keys;
Fallbacks ensure your app stays resilient;
Automatic retries with exponential backoff come by default;
Plug-in middleware as needed;
Battle-tested over 100B tokens.
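
A minimal sketch of what a unified request might look like, assuming the gateway runs locally and exposes an OpenAI-compatible /v1/chat/completions endpoint. The URL, the x-gateway-config header, and the fallback/retry config fields are illustrative assumptions, not the gateway's documented schema.

```python
import json
import requests

# Hypothetical local gateway endpoint (OpenAI-compatible); adjust to your deployment.
GATEWAY_URL = "http://localhost:8787/v1/chat/completions"

# Illustrative routing config: try the primary provider, fall back to a second
# one, and retry transient failures. Field names are assumptions, not the
# gateway's actual schema.
routing_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "openai", "api_key": "OPENAI_KEY"},
        {"provider": "anthropic", "api_key": "ANTHROPIC_KEY"},
    ],
    "retry": {"attempts": 3},
}

response = requests.post(
    GATEWAY_URL,
    headers={
        "Content-Type": "application/json",
        # Hypothetical header carrying the routing config.
        "x-gateway-config": json.dumps(routing_config),
    },
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Summarize what an LLM gateway does."}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The point of the pattern is that application code stays provider-agnostic: swapping providers, adding a fallback, or changing retry behavior only touches the routing config, not the request itself.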
Statistics | |
GitHub Stars | 9.8K | -
GitHub Forks | 775 | -
Stacks | 2 | 0
Followers | 4 | 1
Votes | 0 | 1
Integrations | |
No integrations available | |

Create stunning images with Google's Gemini 3 Pro physics engine: editing with Gemini, character consistency, and native 2K with 4K upscaling. Professional results in 10-30 seconds.

It transforms AI-generated content into natural, human-like writing that is designed to be undetectable, bypassing AI-detection systems with intelligent text-humanization technology.

Try Nano Banana Pro for free: Gemini's AI image generator and photo editor lets you create high-quality images and turn photos into endless new creations.

It is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
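
To illustrate the chaining idea, here is a minimal sketch using the LangChain Expression Language; it assumes the langchain-core and langchain-openai packages and an OPENAI_API_KEY in the environment, and import paths differ between LangChain versions.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Chain three components: a prompt template, a chat model, and an output parser.
prompt = ChatPromptTemplate.from_template("Give a one-sentence summary of {topic}.")
llm = ChatOpenAI(model="gpt-4o-mini")  # assumes OPENAI_API_KEY is set
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"topic": "retrieval-augmented generation"}))
```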

It allows you to run open-source large language models, such as Llama 2, locally.
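
For example, after installing Ollama and pulling a model (e.g. `ollama pull llama2`), the local HTTP API it serves can be called from Python; the port below is Ollama's default, and the request fields shown are a minimal sketch.

```python
import requests

# Ollama serves a local HTTP API on port 11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",   # any locally pulled model tag
        "prompt": "Explain quantization in one sentence.",
        "stream": False,     # return a single JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```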

It is a project that provides a central interface to connect your LLMs with external data. It offers a comprehensive toolset for trading off cost and performance.
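
A minimal sketch of the connect-LLMs-to-external-data workflow, assuming a recent llama-index release (where imports live under llama_index.core), a local ./docs folder, and an OpenAI key in the environment:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load local documents, index them, and query the index with an LLM.
documents = SimpleDirectoryReader("./docs").load_data()   # assumes a ./docs folder exists
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What do these documents say about pricing?"))
```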

It is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
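
To make the cyclic coordination concrete, here is a small sketch with the langgraph package: one node that updates shared state and a conditional edge that loops back until a stopping condition is met. Exact APIs may differ between langgraph releases.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END


class State(TypedDict):
    count: int


def step(state: State) -> State:
    # Each pass through the node updates the shared state.
    return {"count": state["count"] + 1}


def should_continue(state: State) -> str:
    # Loop back to the same node until three steps have been taken.
    return "step" if state["count"] < 3 else END


graph = StateGraph(State)
graph.add_node("step", step)
graph.set_entry_point("step")
graph.add_conditional_edges("step", should_continue)
app = graph.compile()

print(app.invoke({"count": 0}))  # {'count': 3}
```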

It generates stunning images from simple text prompts in seconds. It works directly in Discord and there is no specialized hardware or software required.

It is a platform for building production-grade LLM applications. It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework and seamlessly integrates with LangChain, the go-to open source framework for building with LLMs.
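
As a sketch of how the monitoring side typically hooks in: LangSmith tracing is driven by environment variables, and the langsmith package's traceable decorator can record runs from arbitrary Python code. The API key placeholder and project name below are made-up examples.

```python
import os

# Tracing is configured through environment variables (set these before any
# traced call runs); the key and project name here are placeholders.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "YOUR_LANGSMITH_API_KEY"
os.environ["LANGCHAIN_PROJECT"] = "my-first-project"  # hypothetical project name

from langsmith import traceable


@traceable  # records inputs, outputs, and latency for this call as a run
def answer(question: str) -> str:
    # Any Python function (an agent step, a chain call, ...) can be traced.
    return f"Echoing: {question}"


print(answer("Is tracing working?"))
```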

Try Grok 4 on GPT Proto: access xAI's most advanced 1.7T LLM with a 130K-token context window, multimodal support, and real-time data integration for dynamic analysis.