LLM Engine is the easiest way to customize and serve LLMs. Models can be accessed via Scale's hosted version or by using the Helm charts in the repository to run model inference and fine-tuning in your own infrastructure (a minimal client sketch follows the feature lists below).

The second tool is a powerful agent-first search engine that lets you run a web-scale search engine locally or connect to it via a remote API. It is ideal for both Large Language Models (LLMs) and human users.
LLM Engine features:
Ready-to-use APIs for your favorite models;
Fine-tune foundation models;
Optimized inference;
Open-source integrations

Second tool features:
Allows uploading of local data or tailoring of provided datasets to meet specific needs;
Facilitates operation in a completely offline environment;
Offers fully managed access through a dedicated API for seamless integration into various workflows
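For LLM Engine's hosted option, a minimal sketch with the llmengine Python client might look like the following. The package install name, the SCALE_API_KEY environment variable, the model name, and the parameters shown are assumptions; treat this as illustrative rather than definitive.

```python
# A minimal sketch against the hosted API, assuming the llmengine Python
# client is installed (e.g. pip install scale-llm-engine) and that a
# SCALE_API_KEY environment variable is set. Model name and parameters
# are illustrative, not prescriptive.
from llmengine import Completion

response = Completion.create(
    model="llama-2-7b",
    prompt="Explain model fine-tuning in one sentence.",
    max_new_tokens=80,
    temperature=0.2,
)
print(response.output.text)
```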
Statistics

|               | LLM Engine | Second tool |
| GitHub Stars  | 814        | 510         |
| GitHub Forks  | 67         | 49          |
| Stacks        | 2          | 0           |
| Followers     | 4          | 0           |
| Votes         | 0          | 0           |
Integrations

It lets you either batch index and search data stored in an SQL database, NoSQL storage, or just files quickly and easily — or index and search data on the fly, working with it pretty much as with a database server.

It builds completely static HTML sites that you can host on GitHub Pages, Amazon S3, or anywhere else you choose. There's a stack of good-looking themes available. The built-in dev server allows you to preview your documentation as you're writing it. It will even auto-reload and refresh your browser whenever you save your changes.

Lucene Core, our flagship sub-project, provides Java-based indexing and search technology, as well as spellchecking, hit highlighting and advanced analysis/tokenization capabilities.

It transforms AI-generated content into natural, undetectable, human-like writing, bypassing AI detection systems with intelligent text-humanization technology.

It is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
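The "chaining" idea can be illustrated without any particular library. Below is a generic Python sketch in which make_prompt, call_llm, and extract_first_sentence are hypothetical components composed into a single pipeline; it is not this integration's actual API.

```python
# Generic illustration of "chaining" components around an LLM:
# prompt construction -> model call -> post-processing, composed into one callable.
from typing import Callable, List

Step = Callable[[str], str]

def make_prompt(question: str) -> str:
    # Wrap the user's question in a simple instruction template.
    return f"Answer concisely: {question}"

def call_llm(prompt: str) -> str:
    # Hypothetical model call; swap in a real client here.
    return f"[model output for: {prompt}]"

def extract_first_sentence(text: str) -> str:
    # Toy post-processing step.
    return text.split(".")[0]

def chain(steps: List[Step]) -> Step:
    # Compose the steps left to right into a single callable.
    def run(value: str) -> str:
        for step in steps:
            value = step(value)
        return value
    return run

qa_chain = chain([make_prompt, call_llm, extract_first_sentence])
print(qa_chain("What is a vector database?"))
```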

Search the world's information, including webpages, images, videos and more. Google has many special features to help you find exactly what you're looking for.

It allows you to run open-source large language models, such as Llama 2, locally.

It is a project that provides a central interface to connect your LLMs with external data. It offers a comprehensive toolset for trading off cost and performance.
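As a rough, library-agnostic sketch of connecting an LLM with external data, the snippet below retrieves the document with the most keyword overlap and feeds it to a hypothetical call_llm stand-in; it does not show this integration's real interface.

```python
from typing import List

# Tiny in-memory stand-in for an "external data" source.
DOCUMENTS = [
    "LLM Engine serves and fine-tunes open-source language models.",
    "Helm charts let you deploy model inference into your own cluster.",
    "A search index can supply fresh documents to a language model.",
]

def retrieve(query: str, docs: List[str]) -> str:
    # Pick the document sharing the most words with the query.
    query_terms = set(query.lower().split())
    return max(docs, key=lambda d: len(query_terms & set(d.lower().split())))

def call_llm(prompt: str) -> str:
    # Hypothetical model call; replace with a real client.
    return f"[model output for: {prompt}]"

question = "How do I deploy inference with Helm?"
context = retrieve(question, DOCUMENTS)
print(call_llm(f"Context: {context}\nQuestion: {question}"))
```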

An open-source, high-performance, distributed SQL database built for resilience and scale. It reuses the upper half of PostgreSQL to offer advanced RDBMS features and is architected to be fully distributed, like Google Spanner.
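Because the description says it reuses the upper half of PostgreSQL, a standard PostgreSQL driver should be able to connect. The sketch below uses psycopg2 with placeholder host, port, and credentials; adjust them for your deployment.

```python
# Minimal connection sketch, assuming the database speaks the PostgreSQL
# wire protocol as described above. Host, port, and credentials are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="localhost",       # placeholder host
    port=5432,              # placeholder port; distributed deployments may differ
    user="app_user",        # placeholder credentials
    password="app_password",
    dbname="app_db",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])

conn.close()
```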

Searchkick learns what your users are looking for. As more people search, it gets smarter and the results get better. It's friendly for developers and magical for your users.