StackShare

Discover and share technology stacks from companies around the world.

© 2025 StackShare. All rights reserved.


HeimdaLLM vs Rebuff


Overview

HeimdaLLM: Stacks 0, Followers 0, Votes 0, GitHub Stars 113, Forks 3
Rebuff: Stacks 0, Followers 2, Votes 0, GitHub Stars 1.4K, Forks 117


Detailed Comparison

HeimdaLLM
A robust static analysis framework for validating that LLM-generated structured output is safe to execute. It currently supports SQL. Its focus is on validating the model's output, so the AI can't wreck your systems.

Rebuff
A self-hardening prompt injection detector, designed to protect AI applications from prompt injection (PI) attacks through a multi-stage defense: it filters out potentially malicious input before it reaches the LLM, uses a dedicated LLM to analyze incoming prompts and identify potential attacks, stores embeddings of previous attacks in a vector database, and learns attack signatures over time.
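The multi-stage defense described for Rebuff can be sketched in plain Python. Everything below is illustrative, not Rebuff's actual API: the heuristic patterns, the toy hashing "embedding", and the `InjectionDetector` class are stand-ins for Rebuff's real heuristics, LLM-based analysis stage, and Pinecone-backed vector store.

```python
import hashlib
import math

# Stage-1 heuristics: cheap substring checks for well-known injection phrasings.
HEURISTIC_PATTERNS = [
    "ignore previous instructions",
    "disregard the system prompt",
    "bypass all safety rules",
]

def _embed(text: str, dims: int = 64) -> list[float]:
    # Toy hashing embedding; a real system would call an embedding model.
    vec = [0.0] * dims
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def _cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

class InjectionDetector:
    """Stage 1: heuristics. Stage 2: similarity to stored attack embeddings.
    (Rebuff adds an LLM-based check and canary words on top of stages like these.)"""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.known_attacks: list[list[float]] = []

    def record_attack(self, prompt: str) -> None:
        # "Attack signature learning": remember embeddings of past attacks.
        self.known_attacks.append(_embed(prompt))

    def is_suspicious(self, prompt: str) -> bool:
        lowered = prompt.lower()
        if any(p in lowered for p in HEURISTIC_PATTERNS):
            return True
        vec = _embed(prompt)
        return any(_cosine(vec, k) >= self.threshold for k in self.known_attacks)
```

The "self-hardening" property comes from `record_attack`: every confirmed attack makes the similarity stage stricter for future prompts.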
Statistics

HeimdaLLM: GitHub Stars 113, Forks 3, Stacks 0, Followers 0, Votes 0
Rebuff: GitHub Stars 1.4K, Forks 117, Stacks 0, Followers 2, Votes 0

Integrations

HeimdaLLM: no integrations available
Rebuff: Python, Supabase, Pinecone, OpenAI
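HeimdaLLM's core idea, statically validating LLM-generated SQL before it runs, can be illustrated with a minimal allowlist check. The function and allowlist below are hypothetical and far cruder than HeimdaLLM's grammar-based validation; they only convey the shape of the approach.

```python
import re

# Illustrative allowlist, not HeimdaLLM's API: only these tables may be read.
ALLOWED_TABLES = {"orders", "customers"}

# Any write/DDL keyword disqualifies the query outright.
FORBIDDEN = re.compile(r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|ATTACH|PRAGMA)\b", re.I)

def validate_sql(query: str) -> bool:
    """Reject anything that is not a single read-only SELECT over allowed tables."""
    stripped = query.strip().rstrip(";")
    if ";" in stripped:  # no stacked statements
        return False
    if FORBIDDEN.search(stripped):
        return False
    if not re.match(r"(?is)^\s*SELECT\b", stripped):
        return False
    tables = re.findall(r"(?i)\b(?:FROM|JOIN)\s+([A-Za-z_]\w*)", stripped)
    return bool(tables) and all(t.lower() in ALLOWED_TABLES for t in tables)
```

The real framework parses the query with a per-dialect grammar instead of regexes, which lets it also constrain columns, WHERE clauses, and row limits, but the gatekeeping pattern, validate before execute, is the same.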

What are some alternatives to HeimdaLLM and Rebuff?

Waxell

Waxell is the AI governance plane for agentic systems in production. It sits above agents, models, and integrations, enforcing constraints and defining what's allowed. Auto-instrumentation for 200+ libraries without code changes. Real-time tracing, token and cost tracking, and 11 categories of agentic governance policy enforcement.

LangProtect

LangProtect is an AI security firewall that protects LLM and GenAI applications at runtime. It blocks prompt injection, jailbreaks, and sensitive data leakage while enforcing customizable security policies. Built for enterprise and regulated teams, it delivers real-time protection, visibility, and audit-ready governance.

SafeLLM — AI Security for Apache APISIX

AI security gateway for Apache APISIX. 100% air-gapped, Open Source core. CPU-capable, GPU-optional. Protect LLMs from prompt injection, PII leaks, and data exfiltration. GDPR, EU AI Act, SOC2, HIPAA compliant. Your data never leaves your VPC.

Clawsec

Clawsec is an open-source security plugin for AI agents that blocks dangerous actions in under 5ms. Install it with one command: openclaw plugins install clawsec

SecuredAI

Privacy-first AI assistant that protects sensitive information while preserving context.

LLM Guard

It is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, it ensures that your interactions with LLMs remain safe and secure.
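The sanitization and data-leakage-prevention side of a tool like LLM Guard can be sketched as a redaction pass over the text before it reaches the model. The patterns and `sanitize` function below are illustrative stand-ins, not LLM Guard's actual scanners or API.

```python
import re

# Hypothetical PII patterns; a real scanner covers many more categories.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def sanitize(text: str) -> str:
    """Replace detected PII with typed placeholders before the text reaches an LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Typed placeholders (rather than blanket deletion) preserve enough context for the model to answer sensibly while keeping the sensitive values out of the prompt and any logs.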

Guardrails AI

It is an open-source Python package for specifying structure and type, validating and correcting the outputs of large language models (LLMs).
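The validate-and-correct loop that Guardrails AI popularized can be sketched in a few lines. The schema, `validate`, and `validated_completion` below are hypothetical names, not the guardrails package's actual API; the real library specifies structure declaratively and has much richer validators.

```python
import json

# Hypothetical expected schema: the model must return this exact shape.
EXPECTED = {"name": str, "age": int}

def validate(raw: str):
    """Return the parsed dict if it matches the expected schema, else None."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or set(data) != set(EXPECTED):
        return None
    if not all(isinstance(data[k], t) for k, t in EXPECTED.items()):
        return None
    return data

def validated_completion(llm, prompt, retries=2):
    """Re-ask the model (the 'correcting' step) until its output passes validation."""
    for _ in range(retries + 1):
        data = validate(llm(prompt))
        if data is not None:
            return data
        prompt += '\nReturn ONLY JSON like {"name": "...", "age": 0}.'
    raise ValueError("model never produced valid output")
```

The key design point is that validation failures feed back into the prompt, so the model gets a concrete chance to self-correct instead of the caller silently receiving malformed output.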

LangKit

It is an open-source toolkit for monitoring Large Language Models (LLMs). It extracts signals from prompts & responses, ensuring safety & security.

Related Comparisons

  • Postman vs Swagger UI
  • Google Maps vs Mapbox
  • Leaflet vs Mapbox vs OpenLayers
  • Mailgun vs Mandrill vs SendGrid
  • Paw vs Postman vs Runscope