| It provides all you need to build and deploy computer vision models, from data annotation and organization tools to scalable deployment solutions that work across devices. | It is a platform where doctors and statisticians meet to build innovative and reliable predictive models for clinical medicine. |
| --- | --- |
| Search, curate, and manage visual data | Build more impactful clinical models |
| Designed for ultra-fast labeling in the browser | Collaborate and evaluate |
| Tools to build accurate models | Create interactive visualizations in minutes |
| Deploy custom and foundation models in minutes | Never let a good model go to waste |
| Manage annotation projects across multiple work streams | |
| Statistics | |
| Stacks: 5 | Stacks: 1 |
| Followers: 6 | Followers: 1 |
| Votes: 0 | Votes: 0 |
| Integrations | |

Build a custom machine learning model without expertise or a large amount of data. Just go to Nanonets, upload images, wait a few minutes, and integrate the Nanonets API into your application.
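
Integration then amounts to an HTTP call from your application. Here is a minimal sketch using the requests library; the endpoint URL below is only a placeholder, so copy the real prediction URL and API key for your trained model from the Nanonets dashboard.

```python
import requests

API_KEY = "YOUR_NANONETS_API_KEY"                   # issued in the Nanonets dashboard
MODEL_URL = "https://app.nanonets.com/api/v2/..."   # placeholder: use your model's prediction URL

# Upload an image and read back the model's prediction
with open("sample.jpg", "rb") as image:
    response = requests.post(
        MODEL_URL,
        auth=(API_KEY, ""),        # assumption: API key as basic-auth username, empty password
        files={"file": image},
    )

print(response.json())
```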

It is the easiest way to deploy machine learning models. Start deploying TensorFlow, scikit-learn, Keras, and spaCy models straight from your notebook with just one extra line.

It is an app that helps remote workers work without stress. A UN survey found that 41% of remote workers suffer from depression, and those figures don't account for the economic downturn, global health pandemic, and social isolation people are experiencing today. It uses scientifically proven methods to reduce stress.

Building an intelligent, predictive application involves iterating over multiple steps: cleaning the data, developing features, training a model, and creating and maintaining a predictive service. GraphLab Create does all of this in one platform. It is easy to use, fast, and powerful.
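
As a sketch of that single-platform loop, here is a minimal example assuming GraphLab Create's Python API (an SFrame plus a classifier toolkit); the CSV file and column names are hypothetical.

```python
import math
import graphlab as gl

# Clean/load: read tabular data into an SFrame, GraphLab's scalable dataframe
data = gl.SFrame.read_csv("transactions.csv")

# Feature development: add a derived column
data["log_amount"] = data["amount"].apply(lambda x: math.log(1 + x))

# Model training: one call with sensible defaults
model = gl.logistic_classifier.create(
    data, target="fraud", features=["log_amount", "merchant"]
)

# Scoring: the same model object can back a predictive service
print(model.predict(data))
```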

It transforms AI-generated content into natural, undetectable, human-like writing, bypassing AI detection systems with intelligent text humanization technology.

Create AI videos at 60¢ each - 50% cheaper than Veo3, faster than HeyGen. Get 200 free credits, no subscription required. PayPal supported. Start in under 2 minutes.

It is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
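
As a quick illustration of that chaining idea, here is a minimal sketch using LangChain's expression language, assuming the langchain-core and langchain-openai packages and an OPENAI_API_KEY in the environment; the prompt text and model name are just examples.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Three components: a prompt template, a chat model, and an output parser
prompt = ChatPromptTemplate.from_template("Summarize the following text in one sentence:\n\n{text}")
llm = ChatOpenAI(model="gpt-4o-mini")  # example model name
parser = StrOutputParser()

# The | operator "chains" the components into a single runnable pipeline
chain = prompt | llm | parser

print(chain.invoke({"text": "LangChain composes prompts, models, and parsers into pipelines."}))
```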

It allows you to run open-source large language models, such as Llama 2, locally.
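
For example, here is a minimal sketch with the official ollama Python client, assuming the Ollama server is running locally and the model has already been pulled with `ollama pull llama2`.

```python
import ollama

# Send a chat request to the locally running Ollama server
response = ollama.chat(
    model="llama2",
    messages=[{"role": "user", "content": "In one sentence, what does it mean to run an LLM locally?"}],
)

print(response["message"]["content"])
```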

It is a project that provides a central interface to connect your LLMs with external data. It offers a comprehensive toolset for trading off cost and performance.
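
Here is a minimal sketch of that "LLM plus your own data" workflow, assuming the llama-index package (llama_index.core namespace) with its default OpenAI-backed LLM and embeddings; the ./data directory is a hypothetical folder of local documents.

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load local files and build a vector index over them
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Query the indexed data through an LLM
query_engine = index.as_query_engine()
print(query_engine.query("What do these documents say about pricing?"))
```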

It is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
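
As a sketch of that cyclic coordination, here is a minimal example assuming the langgraph package; the state fields and the two-revision stopping rule are hypothetical, and the node stub stands in for a real LLM call.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    draft: str
    revisions: int

def draft_node(state: State) -> dict:
    # A real application would call an LLM chain here; this stub just rewrites the draft
    return {"draft": f"Draft #{state['revisions'] + 1} for: {state['question']}",
            "revisions": state["revisions"] + 1}

def should_continue(state: State) -> str:
    # Looping back to the same node is the cyclic behaviour plain chains cannot express
    return "draft" if state["revisions"] < 2 else END

graph = StateGraph(State)
graph.add_node("draft", draft_node)
graph.set_entry_point("draft")
graph.add_conditional_edges("draft", should_continue)

app = graph.compile()
print(app.invoke({"question": "What is LangGraph?", "draft": "", "revisions": 0}))
```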