Quansloth
Based on the implementation of Google's TurboQuant (ICLR 2026), Quansloth brings elite KV cache compression to local LLM inference. Quansloth is a fully private, air-gapped AI server that runs massive-context models natively on consumer hardware. Source is on GitHub (Apache 2.0 License): https://github.com/PacifAIst/Quansloth
Features: TurboQuant KV cache compression for 75% VRAM savings, native long-context support for consumer GPUs, real-time CUDA backend hardware monitoring, and fully air-gapped, privacy-focused local execution. (A sketch of what 4-bit KV cache quantization looks like follows the statistics below.)

JarvisLabs
JarvisLabs is a GPU cloud platform used by AI teams, research labs, and universities to train, fine-tune, and deploy deep learning models. Access NVIDIA H100, H200, A100, and L4 GPUs on demand with per-minute billing, persistent storage, and pre-configured ML environments. Trusted by 1000+ teams across startups, enterprises, and academic institutions. Multi-region availability with dedicated datacenter infrastructure and a 99.5% uptime SLA.
Keywords: GPU Cloud, AI Infrastructure, Machine Learning, Deep Learning, Model Training, LLM Fine-Tuning, Cloud GPU, MLOps, AI/ML Platform, NVIDIA H100

Statistics | Quansloth | JarvisLabs
Stacks     | 0         | 0
Followers  | 0         | 1
Votes      | 1         | 1
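To make the 75% VRAM figure concrete: storing each cached value in 4 bits instead of FP16 cuts the cache footprint by exactly 75% (ignoring the small per-channel scale/offset overhead). The NumPy sketch below is a generic illustration of per-channel 4-bit KV cache quantization, not Quansloth's or TurboQuant's actual algorithm; the shapes and function names are assumptions.

```python
# Generic 4-bit KV cache quantization sketch (illustrative only; NOT the
# TurboQuant algorithm). FP16 -> 4-bit is a 4x, i.e. 75%, cache reduction.
import numpy as np

def quantize_4bit(kv):
    """Per-channel asymmetric 4-bit quantization of a (seq, heads, dim) cache."""
    lo = kv.min(axis=0, keepdims=True).astype(np.float32)   # per-channel min
    hi = kv.max(axis=0, keepdims=True).astype(np.float32)   # per-channel max
    scale = np.maximum(hi - lo, 1e-8) / 15.0                # 16 levels for 4 bits
    codes = np.clip(np.round((kv.astype(np.float32) - lo) / scale), 0, 15)
    return codes.astype(np.uint8), scale, lo                # codes packable 2-per-byte

def dequantize_4bit(codes, scale, lo):
    return (codes.astype(np.float32) * scale + lo).astype(np.float16)

kv = np.random.randn(4096, 8, 128).astype(np.float16)      # a long-context cache
codes, scale, lo = quantize_4bit(kv)
recon = dequantize_4bit(codes, scale, lo)

packed_bytes = codes.size // 2                              # two 4-bit codes per byte
print(f"savings vs FP16: {1 - packed_bytes / kv.nbytes:.0%}")             # ~75%
print(f"max abs error: {np.abs(kv - recon).astype(np.float32).max():.4f}")
```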

The Docker Platform is the industry-leading container platform for continuous, high-velocity innovation, enabling organizations to seamlessly build and share any application, from legacy to what comes next, and securely run it anywhere.
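The platform can also be driven programmatically. The sketch below uses the Docker SDK for Python (docker-py) as one illustration; it assumes a local Docker daemon is reachable and pulls the alpine image on first use.

```python
# Minimal sketch: run and inspect containers through the Docker SDK for
# Python. Assumes `pip install docker` and a running local Docker daemon.
import docker

client = docker.from_env()  # connect using the standard environment defaults

# Run a throwaway container and capture its stdout (image pulled if missing).
output = client.containers.run("alpine", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())

# List containers currently running on this host.
for c in client.containers.list():
    print(c.short_id, c.image.tags, c.status)
```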

LXD isn't a rewrite of LXC; in fact, it builds on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage containers. It's essentially an alternative to LXC's tools and distribution template system, with the added features that come from being controllable over the network.
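Being controllable over the network means LXD exposes a REST API that clients can script against. A minimal sketch using the third-party pylxd client (API details vary by version; the container name and image source here are illustrative assumptions):

```python
# Minimal sketch: drive LXD's REST API via the third-party pylxd client.
# Assumes `pip install pylxd` and a local LXD daemon on its default socket.
from pylxd import Client

client = Client()  # connects to the local LXD unix socket by default

# Create and start a container from an image (hypothetical example values).
config = {
    "name": "demo-container",
    "source": {
        "type": "image",
        "protocol": "simplestreams",
        "server": "https://images.linuxcontainers.org",
        "alias": "ubuntu/22.04",
    },
}
container = client.containers.create(config, wait=True)
container.start(wait=True)
print(container.name, container.status)
container.stop(wait=True)
```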

LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.
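Those same containment features are scriptable as well as reachable from the lxc-* command-line tools. A minimal sketch using the python3-lxc binding (assumed installed; the container name and download-template arguments are illustrative):

```python
# Minimal sketch: create and manage a system container through the
# python3-lxc binding (assumes LXC and python3-lxc are installed, run as
# a user allowed to manage containers).
import lxc

c = lxc.Container("demo")  # hypothetical container name

if not c.defined:
    # The "download" template fetches a prebuilt image for the given distro.
    c.create("download", 0, {"dist": "ubuntu", "release": "jammy", "arch": "amd64"})

c.start()
c.wait("RUNNING", 30)  # block until the container reports RUNNING
print(c.name, c.state)
c.stop()
```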

Rocket is a CLI for running App Containers. The goal of Rocket is to be composable, secure, and fast.

Vagrant Cloud pairs with Vagrant to enable access, insight and collaboration across teams, as well as to bring exposure to community contributions and development environments.

The most advanced, consistent, and effective AI humanizer on the market. Instantly transform AI-generated text into undetectable, human-like writing in one click.

Waxell is the AI governance plane for agentic systems in production. It sits above agents, models, and integrations, enforcing constraints and defining what's allowed. It offers auto-instrumentation for 200+ libraries without code changes, real-time tracing, token and cost tracking, and policy enforcement across 11 categories of agentic governance.

Transforms AI-generated content into natural, undetectable, human-like writing. Bypass AI detection systems with intelligent text humanization technology.

It is a framework built around LLMs. It can be used for chatbots, generative question-answering, summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
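This description matches LangChain's; assuming that, the sketch below shows the "chain" idea by piping a prompt template into a chat model and an output parser. The model class, model name, and prompt text are illustrative assumptions, and package layout differs across LangChain versions.

```python
# Minimal sketch of "chaining" LLM components (assumes langchain-core and
# langchain-openai are installed and OPENAI_API_KEY is set; layout varies
# across LangChain versions).
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
model = ChatOpenAI(model="gpt-4o-mini")  # any chat model slots in here
parser = StrOutputParser()

# The | operator composes the pieces into one runnable chain.
chain = prompt | model | parser
print(chain.invoke({"text": "Chains compose prompts, models, and parsers."}))
```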

It allows you to run open-source large language models, such as Llama 2, locally.
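This matches Ollama's description. Assuming Ollama, a local server listens on http://localhost:11434 by default, and models can be queried over its HTTP API once pulled (e.g. with `ollama pull llama2`):

```python
# Minimal sketch: query a locally running Ollama server over its HTTP API.
# Assumes the server is up on the default port and llama2 has been pulled.
import json
import urllib.request

payload = {
    "model": "llama2",
    "prompt": "Why is the sky blue? Answer in one sentence.",
    "stream": False,  # return one JSON object instead of a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```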