It is the interface between your app and hosted LLMs. It streamlines API requests to OpenAI, Anthropic, Mistral, Llama 2, Anyscale, Google Gemini, and more with a unified API (a usage sketch follows the feature list below).
Blazing fast (9.9x faster) with a tiny footprint (~45kb installed);
Load balance across multiple models, providers, and keys;
Fallbacks make sure your app stays resilient;
Automatic retries with exponential backoff come by default;
Plug-in middleware as needed;
Battle-tested over 100B tokens.
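To make the unified-API idea concrete, here is a minimal sketch, not the project's documented interface: it assumes the gateway runs locally and exposes an OpenAI-compatible endpoint at a hypothetical address, so the standard OpenAI client can reach any provider through it.

```python
# A minimal sketch, assuming the gateway runs locally and exposes an
# OpenAI-compatible endpoint; the URL, port, and model name are hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8787/v1",  # hypothetical local gateway address
    api_key="PROVIDER_API_KEY",           # forwarded to the underlying provider
)

# One call shape for every provider; retries, fallbacks, and load
# balancing happen inside the gateway, invisible to this code.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model name
    messages=[{"role": "user", "content": "What does an LLM gateway do?"}],
)
print(response.choices[0].message.content)
```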
Try Grok 4 on GPT Proto. Access xAI’s most advanced 1.7-trillion-parameter LLM with a 130K-token context window, multimodal support, and real-time data integration for dynamic analysis.

Statistics

| Statistic | AI Gateway | GPT Proto |
| --- | --- | --- |
| GitHub Stars | 9.8K | - |
| GitHub Forks | 775 | - |
| Stacks | 2 | 3 |
| Followers | 4 | 3 |
| Votes | 0 | 1 |

Integrations: no integrations available.

It is the open release of the base model weights and network architecture of Grok-1. Grok-1 is a 314-billion-parameter Mixture-of-Experts large language model trained from scratch by xAI.

It is a tool that transforms AI-generated content into natural, human-like writing, bypassing AI-detection systems with intelligent text-humanization technology.

Unleash your creativity with letsmkvideo, the leading AI video generator. Effortlessly create professional videos from text, animate photos, and produce stunning AI video effects. Get started for free: no watermarks, just high-quality results in minutes.

It is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
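As an illustration of chaining, here is a minimal sketch using LangChain's expression language; it assumes the langchain-core and langchain-openai packages are installed, and the model choice and prompt are placeholders.

```python
# A minimal sketch of "chaining" components, assuming langchain-core and
# langchain-openai are installed and OPENAI_API_KEY is set.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Each component is declared separately...
prompt = ChatPromptTemplate.from_template("Summarize this in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")  # hypothetical model choice
parser = StrOutputParser()

# ...then piped together into a single runnable chain.
chain = prompt | llm | parser
print(chain.invoke({"text": "LangChain chains prompts, models, and parsers."}))
```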

It is Google’s largest and most capable AI model. Built to be multimodal, it can generalize, understand, operate across, and combine different types of information, like text, images, audio, video, and code.

It allows you to run open-source large language models, such as Llama 2, locally.
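For example, once a local Ollama server is running (it listens on port 11434 by default), a model can be queried over its HTTP API; the sketch below assumes Llama 2 has already been pulled.

```python
# A minimal sketch of querying a locally running Ollama server, assuming
# the default port (11434) and that the "llama2" model was pulled beforehand.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
)
print(response.json()["response"])
```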

It is a state-of-the-art foundational large language model designed to help researchers advance their work on large language models, a subfield of AI.

It is a project that provides a central interface to connect your LLMs with external data. It offers a comprehensive toolset for trading off cost and performance.
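A minimal sketch of that central interface, assuming the llama-index package is installed, an OpenAI key is configured, and local documents live in a hypothetical ./data directory:

```python
# A minimal sketch of connecting an LLM to external data with LlamaIndex,
# assuming llama-index is installed and OPENAI_API_KEY is set; ./data is
# a hypothetical folder of local documents.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()  # ingest external data
index = VectorStoreIndex.from_documents(documents)       # embed and index it
query_engine = index.as_query_engine()                   # expose one interface
print(query_engine.query("What do these documents say about pricing?"))
```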

It is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
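A sketch of both tasks using the open-source whisper package; the input filename is hypothetical, and ffmpeg must be available on the system.

```python
# A minimal sketch using the open-source whisper package; "audio.mp3" is
# a hypothetical input file.
import whisper

model = whisper.load_model("base")

# Multilingual transcription: the spoken language is detected automatically.
result = model.transcribe("audio.mp3")
print(result["language"], result["text"])

# Speech translation: transcribe any supported language into English.
translated = model.transcribe("audio.mp3", task="translate")
print(translated["text"])
```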

It is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
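To show the cyclic coordination it adds, here is a minimal sketch that loops a single node until a condition is met; the counter state and "work" node are purely illustrative, not part of any real application.

```python
# A minimal sketch of a cyclic LangGraph, assuming the langgraph package
# is installed; the state shape and node are illustrative only.
from typing import TypedDict
from langgraph.graph import StateGraph, END


class State(TypedDict):
    count: int


def work(state: State) -> State:
    # A stand-in for a chain or actor doing one step of computation.
    return {"count": state["count"] + 1}


def should_continue(state: State) -> str:
    # Route back into the same node (a cycle) until three steps have run.
    return "work" if state["count"] < 3 else END


graph = StateGraph(State)
graph.add_node("work", work)
graph.set_entry_point("work")
graph.add_conditional_edges("work", should_continue)

app = graph.compile()
print(app.invoke({"count": 0}))  # {'count': 3}
```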