Guardrails are built-in safeguards for your Agents and Teams. Use them to make sure the input you send to the LLM is safe and doesn't contain anything undesired. Some of the most popular use cases are:
- PII detection and redaction
- Prompt injection defense
- Jailbreak defense
- Data leakage prevention
- NSFW content filtering
Built-in Guardrails
Agno provides some built-in guardrails you can use out of the box with your Agents and Teams:
- PII Detection Guardrail: detect PII (Personally Identifiable Information).
- Prompt Injection Guardrail: detect and stop prompt injection attempts.
- OpenAI Moderation Guardrail: detect content that violates OpenAI’s content policy.
Guardrails are implemented as pre-hooks, which execute before your Agent processes input. To activate one, pass it to your Agent or Team via the pre_hooks parameter.
For example, to use the PII Detection Guardrail:
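A minimal sketch of what this looks like. The import paths and class names in the comments are assumptions based on this page (check the Agno API reference for the exact names); the runnable portion illustrates the kind of check such a guardrail performs:

```python
import re

# Assumed usage (import paths are assumptions, not verified against Agno):
#
#   from agno.agent import Agent
#   from agno.guardrails import PIIDetectionGuardrail
#
#   agent = Agent(
#       pre_hooks=[PIIDetectionGuardrail()],  # runs before the Agent sees input
#   )
#
# Conceptually, the guardrail's check is a function like the one below, which
# raises when the input contains PII (here: email addresses and US SSNs only).

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.\w+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


class InputCheckError(Exception):
    """Stand-in for the error a guardrail raises on undesired input."""


def check_for_pii(text: str) -> None:
    """Raise InputCheckError if the input contains an email or US SSN."""
    if EMAIL_RE.search(text) or SSN_RE.search(text):
        raise InputCheckError("PII detected in input")


check_for_pii("What's the weather today?")  # passes silently
```

When the check raises, the run is stopped before the input ever reaches the model.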
Custom Guardrails
You can create custom guardrails by extending the BaseGuardrail class. See the BaseGuardrail Reference for more details.
This is useful if you need to perform any check or transformation not handled by the built-in guardrails, or just to implement your own validation logic.
You will need to implement the check and async_check methods to perform your validation and raise exceptions when detecting undesired content.
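A self-contained sketch of that shape. BaseGuardrail below is a local stand-in mirroring the interface described above (check and async_check raising on bad input); in real code you would import Agno's BaseGuardrail instead, and the input argument would be Agno's run-input type rather than a plain string:

```python
import asyncio


class InputCheckError(Exception):
    """Raised when input fails validation (stand-in for Agno's error type)."""


class BaseGuardrail:
    """Local stand-in; see the BaseGuardrail Reference for the real class."""

    def check(self, run_input: str) -> None:
        raise NotImplementedError

    async def async_check(self, run_input: str) -> None:
        raise NotImplementedError


class KeywordGuardrail(BaseGuardrail):
    """Custom guardrail that rejects input containing banned keywords."""

    def __init__(self, banned: list[str]):
        self.banned = [word.lower() for word in banned]

    def check(self, run_input: str) -> None:
        if any(word in run_input.lower() for word in self.banned):
            raise InputCheckError("Input contains a banned keyword")

    async def async_check(self, run_input: str) -> None:
        # Same validation; Agno would call this when running via .arun()
        self.check(run_input)


guard = KeywordGuardrail(banned=["secret"])
guard.check("Tell me a joke")                      # passes
asyncio.run(guard.async_check("Tell me a joke"))   # passes
```

Implementing async_check as a thin wrapper over check keeps the two code paths consistent; only split them if your async validation genuinely needs to await something (e.g. an external moderation API).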
Agno automatically uses the sync or async version of the guardrail based on whether you run the agent with .run() or .arun().
Learn More
- PII Detection: detect and redact personally identifiable information.
- Prompt Injection Defense: stop prompt injection and jailbreak attempts.
- OpenAI Moderation: detect content that violates OpenAI's content policy.