| | Computer vision platform | Agent-first search engine |
| --- | --- | --- |
| Description | It provides all you need to build and deploy computer vision models, from data annotation and organization tools to scalable deployment solutions that work across devices. | It is a powerful agent-first search engine that enables you to run a web-scale search engine locally or to connect via remote API. It's ideal for both Large Language Models (LLMs) and human users. |
| Key features | Search, curate, and manage visual data; designed for ultra-fast labeling in the browser; tools to build accurate models; deploy custom and foundation models in minutes; manage annotation projects across multiple work streams | Allows uploading of local data or tailoring of provided datasets to meet specific needs; facilitates operation in a completely offline environment; offers fully managed access through a dedicated API for seamless integration into various workflows |
| GitHub Stars | - | 510 |
| GitHub Forks | - | 49 |
| Stacks | 5 | 0 |
| Followers | 6 | 0 |
| Votes | 0 | 0 |
| Integrations | - | - |

It lets you either batch index and search data stored in an SQL database, NoSQL storage, or plain files quickly and easily, or index and search data on the fly, working with it much as you would with a database server.
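
A minimal sketch of the "like a database server" mode, assuming an engine in the Sphinx/Manticore family that exposes a SQL-style listener over the MySQL wire protocol; the port 9306, the index name `docs_index`, and the query text below are assumptions, so adjust them to your setup:

```python
import pymysql  # a plain MySQL client works because the search daemon speaks the MySQL wire protocol

# Assumed defaults: SQL-style listener on localhost:9306 and a hypothetical
# full-text index named "docs_index".
conn = pymysql.connect(host="127.0.0.1", port=9306, user="")
try:
    with conn.cursor() as cur:
        # Full-text query issued exactly as if talking to a database server.
        cur.execute("SELECT id FROM docs_index WHERE MATCH(%s) LIMIT 10", ("hello world",))
        for (doc_id,) in cur.fetchall():
            print(doc_id)
finally:
    conn.close()
```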

Build a custom machine learning model without expertise or a large amount of data. Just go to Nanonets, upload images, wait a few minutes, and integrate the Nanonets API into your application.
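
A hedged sketch of that last integration step, assuming a simple HTTPS upload with the API key as the basic-auth user; the endpoint path, model ID, file name, and field names below are placeholders, so check your model's page in the Nanonets dashboard for the exact URL:

```python
import requests

API_KEY = "YOUR_NANONETS_API_KEY"   # from your Nanonets account
MODEL_ID = "YOUR_MODEL_ID"          # shown once the model finishes training

# Placeholder endpoint: the exact path depends on the model type you created.
url = "https://app.nanonets.com/api/v2/ImageCategorization/LabelFile/"

with open("example.jpg", "rb") as image:
    response = requests.post(
        url,
        auth=(API_KEY, ""),         # API key sent as the basic-auth username
        files={"file": image},
        data={"modelId": MODEL_ID},
    )

print(response.json())              # predicted labels and confidence scores
```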

It builds completely static HTML sites that you can host on GitHub Pages, Amazon S3, or anywhere else you choose. There's a stack of good-looking themes available. The built-in dev server allows you to preview your documentation as you're writing it, and it will even auto-reload and refresh your browser whenever you save your changes.

It is the easiest way to deploy machine learning models. Start deploying TensorFlow, scikit-learn, Keras, and spaCy models straight from your notebook with just one extra line.

Building an intelligent, predictive application involves iterating over multiple steps: cleaning the data, developing features, training a model, and creating and maintaining a predictive service. GraphLab Create does all of this in one platform. It is easy to use, fast, and powerful.
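
A minimal sketch of that loop in classic GraphLab Create (the library is Python 2-era and has since been discontinued in favor of Turi Create; the CSV file name and the "label" and "text" columns below are assumptions):

```python
import graphlab as gl

# Load tabular data into an SFrame (file name and column names are assumptions).
data = gl.SFrame.read_csv("train.csv")

# Basic cleaning and feature development: drop rows with a missing target
# and derive one simple extra feature.
data = data.dropna(columns=["label"])
data["text_length"] = data["text"].apply(lambda s: len(s))

# Split the data, then let the toolkit pick and train a classifier automatically.
train, valid = data.random_split(0.8)
model = gl.classifier.create(train, target="label")

# Evaluate and predict with the same object you would later serve as a predictive service.
print(model.evaluate(valid))
print(model.predict(valid[:5]))
```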

Lucene Core, our flagship sub-project, provides Java-based indexing and search technology, as well as spellchecking, hit highlighting and advanced analysis/tokenization capabilities.

It transforms AI-generated content into natural, undetectable, human-like writing, bypassing AI detection systems with intelligent text-humanization technology.

It is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
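
A minimal chaining sketch using the classic `LLMChain` API; import paths have moved around between LangChain versions, and the `OpenAI` wrapper assumes an `OPENAI_API_KEY` in the environment:

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Component 1: the LLM (assumes OPENAI_API_KEY is set in the environment).
llm = OpenAI(temperature=0)

# Component 2: a prompt template with a single input variable.
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a one-paragraph, plain-English summary of {topic}.",
)

# "Chain" the two components together, then run the chain on an input.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("retrieval-augmented generation"))
```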

Search the world's information, including webpages, images, videos and more. Google has many special features to help you find exactly what you're looking for.

It allows you to run open-source large language models, such as Llama 2, locally.
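
Assuming this refers to an Ollama-style local model server, a minimal sketch of calling it over its local HTTP API; the default port 11434 and the `llama2` model name are assumptions, and the model must already be pulled (for example with `ollama pull llama2`):

```python
import requests

# The server listens on localhost:11434 by default; "llama2" must be pulled beforehand.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",
        "prompt": "Explain in two sentences why the sky is blue.",
        "stream": False,  # request a single JSON object instead of a token stream
    },
    timeout=120,
)
print(response.json()["response"])  # the generated completion text
```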