| | NLP annotation platform | AWS natural-language CLI |
| --- | --- | --- |
| Description | A low-code platform for rapidly annotating data, then training and deploying custom Natural Language Processing (NLP) models. It handles model training, data selection, and deployment: you upload your data, label it in the provided annotation interface to teach a classifier, and as you label it trains a model, identifies the most valuable data to label next, and deploys the resulting model for you. | A CLI that lets you talk to AWS in natural language and get intelligent responses from generative AI, helping you analyze your costs, secure your resources, and troubleshoot and fix issues. |
| Features | 10x fewer labels needed; add your own heuristics and rules to speed up labelling; powerful search to focus your labelling efforts; powerful pretrained models from the forefront of NLP research | Cost analysis; security analysis; troubleshooting |
| Stacks | 2 | 1 |
| Followers | 4 | 0 |
| Votes | 0 | 0 |
| Integrations | No integrations available | No integrations available |

LocalStack provides an easy-to-use test/mocking framework for developing cloud applications.
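
For illustration, here is a minimal sketch of pointing the AWS SDK at a running LocalStack instance instead of real AWS; it assumes LocalStack's default edge endpoint on port 4566 (e.g. started with `localstack start`), and the bucket name and region are arbitrary examples.

```python
# Route boto3 calls to a local LocalStack instance rather than real AWS.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",  # LocalStack's default edge endpoint
    region_name="us-east-1",
    aws_access_key_id="test",              # LocalStack accepts dummy credentials
    aws_secret_access_key="test",
)

s3.create_bucket(Bucket="demo-bucket")
print(s3.list_buckets()["Buckets"])        # the bucket exists only locally
```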

A JavaScript library for frontend and mobile developers building cloud-enabled applications. The library provides a declarative interface across different categories of operations to make common tasks easier to add to your application. The default implementation works with Amazon Web Services (AWS) resources, but it is designed to be open and pluggable so that other cloud services or custom backends can provide their own implementations.

awless is a fast, powerful and easy-to-use command line interface (CLI) to manage Amazon Web Services.

It transforms AI-generated content into natural, human-like writing, using intelligent text-humanization technology designed to bypass AI-detection systems.

It is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
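
As a rough illustration of the chaining idea, the sketch below pipes a prompt template into a chat model and an output parser using LangChain's expression language; it assumes the `langchain-core` and `langchain-openai` packages and an `OPENAI_API_KEY` in the environment, and the prompt and model name are arbitrary examples.

```python
# Compose three components into one chain with the | operator.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize this in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Each component's output feeds into the next: prompt -> model -> parser.
chain = prompt | llm | parser

print(chain.invoke({"text": "LangChain lets you compose LLM calls with other components."}))
```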

It allows you to run open-source large language models, such as Llama 2, locally.
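
A minimal sketch of talking to a locally running Ollama server over its REST API; it assumes the server is up on the default port 11434 and that the model has been pulled beforehand (for example with `ollama pull llama2`).

```python
# Query a local Ollama server; no cloud API or key involved.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",
        "prompt": "Explain what a local LLM runtime is in one sentence.",
        "stream": False,  # return one JSON object instead of streamed chunks
    },
    timeout=120,
)
print(resp.json()["response"])
```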

It is a project that provides a central interface to connect your LLMs with external data. It offers a comprehensive toolset that lets you trade off cost against performance.
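
As a sketch of that central interface, the example below ingests local files, indexes them, and queries them with LlamaIndex; it assumes the `llama-index` package, the default OpenAI-backed LLM and embeddings (so an `OPENAI_API_KEY`), and a hypothetical `./data` directory of text files.

```python
# Connect an LLM to external data: ingest, index, then query.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # read external files
index = VectorStoreIndex.from_documents(documents)     # embed and index them

query_engine = index.as_query_engine()                 # LLM answers over the index
print(query_engine.query("What do these documents say about deployment?"))
```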

It is an AI-powered generative user interface system from Vercel. It generates copy-and-paste-friendly React code based on shadcn/ui and Tailwind CSS that you can use in your projects.

It is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
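
The sketch below illustrates the cyclic coordination LangGraph adds, with plain Python functions standing in for LLM-backed chains so it runs standalone; it assumes only the `langgraph` package, and the state fields and node names are invented for the example.

```python
# A write/review loop: the graph cycles back to the writer until done,
# something a purely linear chain cannot express.
from typing import TypedDict

from langgraph.graph import END, StateGraph

class State(TypedDict):
    draft: str
    revisions: int

def write(state: State) -> dict:
    # A real application would call an LLM chain here.
    return {"draft": state["draft"] + " (revised)", "revisions": state["revisions"] + 1}

def should_continue(state: State) -> str:
    # Loop back until we have revised twice -- this edge creates the cycle.
    return "revise" if state["revisions"] < 2 else "done"

graph = StateGraph(State)
graph.add_node("write", write)
graph.set_entry_point("write")
graph.add_conditional_edges("write", should_continue, {"revise": "write", "done": END})

app = graph.compile()
print(app.invoke({"draft": "First draft", "revisions": 0}))
```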

It is a platform for building production-grade LLM applications. It lets you debug, test, evaluate, and monitor chains and intelligent agents built on any LLM framework, and it integrates seamlessly with LangChain, the go-to open-source framework for building with LLMs.
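
As a rough sketch of the monitoring side, the example below traces a single function with LangSmith's `traceable` decorator; it assumes the `langsmith` package and a `LANGCHAIN_API_KEY` in the environment (plus `LANGCHAIN_TRACING_V2=true` for automatic LangChain tracing), and the traced function is a stand-in for a real LLM call.

```python
# Record a function's inputs, outputs, latency, and errors as a LangSmith run.
from langsmith import traceable

@traceable(name="summarize")  # each call is logged as a run in LangSmith
def summarize(text: str) -> str:
    # A real application would call an LLM here; whatever the decorated
    # function receives and returns is captured for debugging and evaluation.
    return text[:40] + "..."

print(summarize("LangSmith records inputs, outputs, latency, and errors per run."))
```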