This example demonstrates using Agent-as-Judge evaluation as a post-hook, so every agent response is evaluated automatically after it is produced.
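Conceptually, a post-hook is just a callable that runs after the agent finishes and receives the completed response. The sketch below shows that pattern in plain Python; the `JudgeHook` class, `run_agent` helper, and length-based score are illustrative stand-ins, not Agno's actual API:

```python
# Minimal sketch of the post-hook pattern (illustrative names, not Agno's API)

class JudgeHook:
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.results = []

    def __call__(self, response: str) -> None:
        # A real judge would call an LLM with the criteria;
        # here we fake a 0-10 score from the response length.
        score = min(10, len(response.split()))
        self.results.append({"score": score, "passed": score >= self.threshold})


def run_agent(prompt: str, post_hooks: list) -> str:
    response = f"Answer to: {prompt}"
    for hook in post_hooks:  # each hook sees the finished response
        hook(response)
    return response


judge = JudgeHook(threshold=3)
run_agent("What are the benefits of renewable energy?", [judge])
print(judge.results[0]["passed"])
```

In the real example below, `AgentAsJudgeEval` plays the role of the hook: the agent calls it after each run and it persists its verdict to the database.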
Add the following code to a Python file named `agent_as_judge_post_hook.py`:

```python
from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.eval.agent_as_judge import AgentAsJudgeEval
from agno.models.openai import OpenAIResponses

# Setup database to persist eval results
db = SqliteDb(db_file="tmp/agent_as_judge_post_hook.db")

# Eval runs as a post-hook; results are saved to the database
agent_as_judge_eval = AgentAsJudgeEval(
    name="Response Quality Check",
    model=OpenAIResponses(id="gpt-5.2"),
    criteria="Response should be professional, well-structured, and provide balanced perspectives",
    scoring_strategy="numeric",
    threshold=7,
    db=db,
)

agent = Agent(
    model=OpenAIResponses(id="gpt-5.2"),
    instructions="Provide professional and well-reasoned answers.",
    post_hooks=[agent_as_judge_eval],
    db=db,
)

response = agent.run("What are the benefits of renewable energy?")
print(response.content)

# Query the database for eval results
print("Evaluation Results:")
eval_runs = db.get_eval_runs()
if eval_runs:
    latest = eval_runs[-1]
    if latest.eval_data and "results" in latest.eval_data:
        result = latest.eval_data["results"][0]
        print(f"Score: {result.get('score', 'N/A')}/10")
        print(f"Status: {'PASSED' if result.get('passed') else 'FAILED'}")
        print(f"Reason: {result.get('reason', 'N/A')[:200]}...")
```
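Because results persist across runs, you can aggregate them instead of only printing the latest. A minimal sketch, assuming each eval run yields result dictionaries shaped like the ones read above (the `summarize` helper and the sample data are illustrative, not part of Agno):

```python
# Hedged sketch: summarizing eval result dicts like {"score": ..., "passed": ...}
def summarize(results: list, threshold: int = 7) -> dict:
    scores = [r["score"] for r in results if r.get("score") is not None]
    passed = sum(1 for r in results if r.get("passed"))
    return {
        "runs": len(results),
        "pass_rate": passed / len(results) if results else 0.0,
        "avg_score": sum(scores) / len(scores) if scores else None,
    }


# Sample data standing in for results collected over several runs
sample = [
    {"score": 8, "passed": True},
    {"score": 6, "passed": False},
    {"score": 9, "passed": True},
]
print(summarize(sample))
```

A rolling pass rate like this is useful for spotting quality regressions as you change instructions or models.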
Set up your virtual environment:

```shell
uv venv --python 3.12
source .venv/bin/activate
```

Install dependencies:

```shell
uv pip install -U agno openai
```

Export your OpenAI API key:

```shell
export OPENAI_API_KEY="your_openai_api_key_here"
```

Run the example:

```shell
python agent_as_judge_post_hook.py
```