| | Tool 1 | Tool 2 |
| --- | --- | --- |
| Description | It is a state-of-the-art automatic speech recognition toolkit, intended for use by speech recognition researchers and professionals. | It is a general-purpose speech recognition model, trained on a large dataset of diverse audio; it is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification. |
| Pros | - | Automatic speech recognition; trained on a large dataset of diverse audio; multi-task model; multilingual speech recognition; speech translation and language identification |
| GitHub Stars | 15.2K | 90.3K |
| GitHub Forks | 5.4K | 11.3K |
| Stacks | 24 | 24 |
| Followers | 25 | 28 |
| Votes | 0 | 1 |
| Integrations | No integrations available | |
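The second tool's description matches OpenAI's open-source Whisper; assuming that is the model in question and the `openai-whisper` Python package is installed, a minimal transcription sketch might look like this (the checkpoint size and audio path are placeholders):

```python
# Minimal sketch: assumes the model described above is OpenAI's open-source
# Whisper (pip install openai-whisper); "base" and "audio.mp3" are placeholders.
import whisper

model = whisper.load_model("base")          # load a pretrained checkpoint

result = model.transcribe("audio.mp3")      # multilingual transcription
print(result["text"])

# The same multi-task model can translate non-English speech into English.
translated = model.transcribe("audio.mp3", task="translate")
print(translated["text"])
```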

It can be used to complement any regular touch user interface with a real-time voice user interface. It offers real-time feedback for a faster, more intuitive experience that enables the end user to recover from possible errors quickly and without interruptions.

It consists of the base model weights and network architecture of Grok-1, a large language model. Grok-1 is a 314-billion-parameter Mixture-of-Experts model trained from scratch by xAI.

It is Google’s largest and most capable AI model. It is built to be multimodal: it can generalize, understand, operate across, and combine different types of information, such as text, images, audio, video, and code.

It is a state-of-the-art foundational large language model designed to help researchers advance their work in this area of AI.

Try Grok 4 on GPT Proto. Access xAI’s most advanced LLM (1.7T parameters) with a 130K-token context window, multimodal support, and real-time data integration for dynamic analysis.

Creating safe artificial general intelligence that benefits all of humanity. Our work to create safe and beneficial AI requires a deep understanding of the potential risks and benefits, as well as careful consideration of the impact.

It is a next-generation AI assistant, accessible through a chat interface and an API. It is capable of a wide variety of conversational and text-processing tasks while maintaining a high degree of reliability and predictability.
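As a rough illustration of the API access mentioned above, here is a minimal sketch assuming the assistant is Anthropic's Claude and the official `anthropic` Python SDK is used; the model id and prompt are placeholders:

```python
# Minimal sketch: assumes Anthropic's `anthropic` SDK (pip install anthropic)
# and an ANTHROPIC_API_KEY in the environment; the model id is illustrative.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model id
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}],
)
print(message.content[0].text)
```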

It is a large multimodal model (accepting text inputs and emitting text outputs today, with image inputs coming in the future) that can solve difficult problems with greater accuracy than any of our previous models, thanks to its broader general knowledge and advanced reasoning capabilities.
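For comparison, calling such a model over an API typically looks like the sketch below, which assumes OpenAI's official `openai` Python SDK and uses an illustrative model name:

```python
# Minimal sketch: assumes the `openai` Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4",  # placeholder model id
    messages=[{"role": "user", "content": "Explain why the sky is blue in one paragraph."}],
)
print(completion.choices[0].message.content)
```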

It offers an API for adding cutting-edge language processing to any system. Through custom training, users can create massive models tailored to their use case and trained on their own data.
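As a sketch of what adding language processing through such an API can look like, here is a minimal example assuming the provider is Cohere and its `cohere` Python SDK; the API key and prompt are placeholders:

```python
# Minimal sketch: assumes Cohere's `cohere` SDK (pip install cohere);
# the API key and message are placeholders.
import cohere

co = cohere.Client("YOUR_API_KEY")

response = co.chat(message="Write a one-sentence product description for a smart kettle.")
print(response.text)

# A custom (fine-tuned) model can be selected by passing its model id, e.g.
# co.chat(message="...", model="your-custom-model-id").
```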

It is a small yet powerful model adaptable to many use cases. It outperforms Llama 2 13B on all benchmarks, has natural coding abilities and an 8k sequence length, and is easy to deploy on any cloud.
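One common way to deploy a 7B-parameter open model like this is through Hugging Face Transformers; the sketch below assumes the weights meant here are the publicly released `mistralai/Mistral-7B-v0.1` checkpoint and that a suitable GPU is available:

```python
# Minimal sketch: assumes the weights meant here are mistralai/Mistral-7B-v0.1
# on the Hugging Face Hub, with `transformers` and `accelerate` installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```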