| | | |
| --- | --- | --- |
| Description | A powerful agent-first search engine that lets you run a web-scale search engine locally or connect via a remote API. Ideal for both Large Language Models (LLMs) and human users. | Transforms basic prompts into expert-level AI instructions. Enhance, benchmark, and optimize prompts for ChatGPT, Claude, Gemini, and more. |
| Features | Upload local data or tailor the provided datasets to specific needs; operate in a completely offline environment; fully managed access through a dedicated API for seamless integration into various workflows | Prompt Enhancement, Real-Time Scoring, A/B Testing, Prompt Compare, Image-to-Prompt, Presentation Builder, Smart Templates, Analytics Dashboard, Version Control, Collections & Folders, Chrome Extension, Expert Prompt Library |
| **Statistics** | | |
| GitHub Stars | 510 | - |
| GitHub Forks | 49 | - |
| Stacks | 0 | 10 |
| Followers | 0 | 2 |
| Votes | 0 | 1 |
| **Integrations** | | |
| | No integrations available | |

It lets you either batch index and search data stored in an SQL database, NoSQL storage, or just files quickly and easily — or index and search data on the fly, working with it pretty much as with a database server.
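To make the batch-index-then-search idea concrete, here is a toy inverted index in plain Python. This is a conceptual sketch only, not the tool's actual API; the function names and document set are invented for illustration.

```python
from collections import defaultdict

def build_index(docs):
    """Map each token to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query token."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    results = index.get(tokens[0], set()).copy()
    for token in tokens[1:]:
        results &= index.get(token, set())
    return results

docs = {
    1: "fast full text search",
    2: "search data stored in a database",
    3: "index files quickly",
}
index = build_index(docs)
print(search(index, "search database"))  # {2}
```

A real engine adds tokenization rules, ranking, and on-disk storage on top of this core structure, but the index-once, query-many-times shape is the same.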

It builds completely static HTML sites that you can host on GitHub Pages, Amazon S3, or anywhere else you choose. There's a stack of good-looking themes available. The built-in dev server lets you preview your documentation as you write it, and it will even auto-reload and refresh your browser whenever you save your changes.
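This description matches a MkDocs-style generator; assuming that is the tool, a minimal configuration might look like the following (the site name and theme are placeholder values):

```yaml
# mkdocs.yml — minimal configuration (hypothetical project values)
site_name: My Docs        # placeholder project name
theme:
  name: readthedocs       # one of the built-in themes
```

With a config like this, `mkdocs serve` previews the site locally with auto-reload, and `mkdocs build` emits the static HTML for hosting.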

Lucene Core, our flagship sub-project, provides Java-based indexing and search technology, as well as spellchecking, hit highlighting and advanced analysis/tokenization capabilities.
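To illustrate the hit-highlighting idea, here is a language-agnostic sketch in Python. This is not Lucene's Java API (Lucene's actual highlighter operates on analyzed token streams); it only shows the concept of wrapping matched query terms in markup.

```python
import re

def highlight(text, terms, tag="em"):
    """Wrap each occurrence of a query term in an HTML tag,
    matching case-insensitively but preserving original casing."""
    pattern = re.compile(
        "|".join(re.escape(t) for t in terms), re.IGNORECASE
    )
    return pattern.sub(lambda m: f"<{tag}>{m.group(0)}</{tag}>", text)

print(highlight("Lucene provides indexing and search.", ["indexing", "search"]))
# Lucene provides <em>indexing</em> and <em>search</em>.
```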

The most advanced, consistent, and effective AI humanizer on the market. Instantly transform AI-generated text into undetectable, human-like writing in one click.

Waxell is the AI governance plane for agentic systems in production. It sits above agents, models, and integrations, enforcing constraints and defining what's allowed. Auto-instrumentation for 200+ libraries without code changes. Real-time tracing, token and cost tracking, and 11 categories of agentic governance policy enforcement.

Transforms AI-generated content into natural, undetectable, human-like writing, bypassing AI detection systems with intelligent text-humanization technology.

It is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
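The "chaining" idea can be sketched in plain Python. This is conceptual only, not the library's actual API; the component names below are invented stand-ins for a prompt template, a model call, and an output parser.

```python
def chain(*steps):
    """Compose components so each step's output feeds the next step."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Toy components (illustrative only, no real model is called).
format_prompt = lambda q: f"Answer briefly: {q}"
fake_llm = lambda prompt: f"[model reply to '{prompt}']"
parse = lambda reply: reply.strip("[]")

qa = chain(format_prompt, fake_llm, parse)
print(qa("What is chaining?"))
# model reply to 'Answer briefly: What is chaining?'
```

The design point is that each component has a uniform in/out interface, so templates, models, retrievers, and parsers can be composed freely into larger pipelines.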

Search the world's information, including webpages, images, videos and more. Google has many special features to help you find exactly what you're looking for.

It allows you to run open-source large language models, such as Llama 2, locally.

It is a project that provides a central interface to connect your LLMs with external data. It offers a comprehensive toolset for trading off cost and performance.
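Connecting an LLM to external data typically means retrieval: find the most relevant documents, then pass them to the model as context. Here is a minimal keyword-overlap retriever as a conceptual sketch, not the project's API; real systems use embeddings and vector indexes instead of word overlap.

```python
def score(query, doc):
    """Relevance as the number of words the query and document share."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs, k=1):
    """Return the top-k documents by keyword overlap with the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "The index stores embeddings of your private documents.",
    "Billing is handled monthly.",
    "Query engines combine retrieval with LLM synthesis.",
]
context = retrieve("how does retrieval work with an LLM", docs, k=1)
print(context[0])  # the retrieval/LLM document scores highest
```

The cost/performance trade-off shows up here as the choice of `k` and of scoring method: cheaper scoring and smaller context reduce token spend, richer retrieval improves answer quality.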