Documentation Index

Fetch the complete documentation index at: https://docs.redsentinel.xyz/llms.txt

Use this file to discover all available pages before exploring further.

Live on Sui Mainnet — Start securing your AI agents today at app.redsentinel.xyz
Red Sentinel Overview

Red Sentinel is a crowdsourced AI red teaming platform for product teams shipping AI features. It turns safety testing into a live marketplace where real attackers probe your system and verifiable results settle instantly on-chain.

The Problem

AI systems combine model intelligence, sensitive data access, and autonomous decisions. That combination creates a unique, language-based attack surface where vulnerabilities can be discovered by anyone, often long after deployment.

Why Current Testing Fails

  • Cost and scale: Managed red teaming is expensive, slow, and limited by who can be hired.
  • Point-in-time audits: One-off assessments miss fast-moving attack techniques.
  • Unverifiable claims: Security assurances are hard to prove or validate independently.

The Red Sentinel Solution

Product teams deploy their AI systems as Sentinels, set a bounty pool and message fee, and let a global community of attackers compete to break them. Every result is verified inside a Trusted Execution Environment and settled on-chain, so payouts are instant and disputes are eliminated.

How It Works

  • Defenders: Deploy their system, define instructions, fund a reward pool, and receive a full attack dataset plus a resilience score.
  • Attackers: Pay a small message fee to attempt jailbreaks, data extraction, and adversarial attacks. Break a Sentinel, earn the bounty.
  • The Protocol: A DSPy-powered jury model evaluates each attack. Results are cryptographically attested and settled on-chain automatically.
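The lifecycle above can be sketched as a simple state machine. This is a hypothetical illustration, not the protocol's actual implementation: the class and method names (`Sentinel`, `submit_attack`) are invented here, and in the real system the jury verdict is produced inside a TEE and settlement happens on-chain.

```python
from dataclasses import dataclass

@dataclass
class Sentinel:
    reward_pool: float   # bounty funded by the defender
    message_fee: float   # fee each attack message costs

    def submit_attack(self, jury_says_broken: bool) -> float:
        """Process one attack attempt; return the attacker's payout."""
        # 50% of each message fee accrues to the reward pool
        # (see the Economics section for the full split).
        self.reward_pool += 0.5 * self.message_fee
        if jury_says_broken:
            # Successful break: attacker drains the bounty.
            payout, self.reward_pool = self.reward_pool, 0.0
            return payout
        return 0.0

s = Sentinel(reward_pool=100.0, message_fee=2.0)
assert s.submit_attack(False) == 0.0   # failed attack grows the pool
assert s.reward_pool == 101.0
assert s.submit_attack(True) == 102.0  # successful attack claims the bounty
```

Note how a failed attack still changes state: the pool grows, which is the flywheel described in the Economics section below.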

Economics

Each message fee splits three ways: 50% → reward pool, 40% → defender, 10% → protocol. Failed attacks grow the bounty, attracting stronger attackers over time. Defenders also earn continuous Sentinel token rewards proportional to their pool size.
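The fee split is plain arithmetic and can be sketched directly. The function names below are illustrative, not part of any Red Sentinel API:

```python
# Split per the Economics section: 50% pool / 40% defender / 10% protocol.
FEE_SPLIT = {"reward_pool": 0.50, "defender": 0.40, "protocol": 0.10}

def split_fee(fee: float) -> dict:
    """Divide one message fee among the three recipients."""
    return {recipient: share * fee for recipient, share in FEE_SPLIT.items()}

def pool_after(initial_pool: float, fee: float, failed_attacks: int) -> float:
    """Bounty size after a run of failed attacks, each adding its pool share."""
    return initial_pool + split_fee(fee)["reward_pool"] * failed_attacks

assert split_fee(10.0) == {"reward_pool": 5.0, "defender": 4.0, "protocol": 1.0}
assert pool_after(100.0, fee=2.0, failed_attacks=30) == 130.0
```

The second assertion shows the flywheel: thirty failed attempts at a 2-unit fee grow a 100-unit bounty to 130, making the Sentinel a progressively more attractive target.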

See Red Sentinel in Action

Watch how attackers break AI systems, how defenses evolve, and how rewards flow instantly on-chain. The future of AI security, explained in 2 minutes.

Get in Touch

Are you planning to integrate generative AI models? The Red Sentinel team can audit and secure them against adversarial threats.