
> The open-source LLM red teaming framework

Delivered by Confident AI
Detect 40+ LLM Vulnerabilities

Automatically scan for vulnerabilities such as bias, PII leakage, toxicity, and more (see the usage sketch below).

SOTA Adversarial Attacks

Prompt injection, gray-box attacks, and more to jailbreak your LLM.

OWASP Top 10, NIST AI, etc.

Covers the OWASP Top 10 for LLMs, NIST AI, and much more out of the box.
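
Below is a minimal usage sketch of what a vulnerability scan with an adversarial attack can look like. It assumes the framework is installed as a `deepteam` Python package and follows the quickstart from its docs; the import paths, the `red_team` function, and the `Bias` and `PromptInjection` classes should all be treated as assumptions, and `model_callback` is a hypothetical stand-in for your own LLM application.

```python
# Hypothetical quickstart sketch: the package name, import paths, and class
# names below are assumptions based on the documented quickstart, not guarantees.
from deepteam import red_team
from deepteam.vulnerabilities import Bias
from deepteam.attacks.single_turn import PromptInjection


async def model_callback(input: str) -> str:
    # Stand-in for your LLM application; replace with a real call to your model.
    return f"I'm sorry but I can't answer this: {input}"


# Scan the callback for bias vulnerabilities, enhancing baseline probes
# with prompt-injection attacks before they are sent to the model.
red_team(
    model_callback=model_callback,
    vulnerabilities=[Bias(types=["race"])],
    attacks=[PromptInjection()],
)
```

Under these assumptions, swapping entries in the `vulnerabilities` and `attacks` lists changes what the scan probes for.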

Documentation
  • Introduction
Articles You Must Read
  • How to jailbreak LLMs
  • OWASP Top 10 for LLMs
  • The comprehensive LLM safety guide
  • LLM evaluation metrics
Red Teaming Community
  • GitHub
  • Discord
  • Newsletter
Copyright © 2025 Confident AI Inc. Built with ❤️ and confidence.