This example demonstrates using a custom evaluator agent with specific instructions for evaluation.
Add the following code to a Python file named `agent_as_judge_custom_evaluator.py`:
```python
from agno.agent import Agent
from agno.eval.agent_as_judge import AgentAsJudgeEval
from agno.models.openai import OpenAIResponses

agent = Agent(
    model=OpenAIResponses(id="gpt-5.2"),
    instructions="Explain technical concepts simply.",
)

response = agent.run("What is machine learning?")

# Create a custom evaluator with specific instructions
custom_evaluator = Agent(
    model=OpenAIResponses(id="gpt-5.2"),
    description="Strict technical evaluator",
    instructions=(
        "You are a strict evaluator. Only give high scores to "
        "exceptionally clear and accurate explanations."
    ),
)

evaluation = AgentAsJudgeEval(
    name="Technical Accuracy",
    criteria="Explanation must be technically accurate and comprehensive",
    scoring_strategy="numeric",
    threshold=8,
    evaluator_agent=custom_evaluator,
)

result = evaluation.run(
    input="What is machine learning?",
    output=str(response.content),
)

print(f"Score: {result.results[0].score}/10")
print(f"Passed: {result.results[0].passed}")
```
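Conceptually, the numeric scoring strategy reduces to comparing the judge's score against the threshold. The sketch below illustrates that pass/fail logic in plain Python; the names (`JudgeResult`, `judge`) are illustrative only, not Agno's internals, and it assumes an inclusive threshold on a 1-10 scale:

```python
from dataclasses import dataclass


@dataclass
class JudgeResult:
    score: int    # numeric score, assumed on a 1-10 scale
    passed: bool  # whether the score meets the threshold


def judge(score: int, threshold: int) -> JudgeResult:
    """Apply a numeric scoring strategy: pass iff score >= threshold."""
    return JudgeResult(score=score, passed=score >= threshold)


print(judge(9, threshold=8))  # JudgeResult(score=9, passed=True)
print(judge(7, threshold=8))  # JudgeResult(score=7, passed=False)
```

With `threshold=8` as in the example above, an output must score 8 or higher to pass.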
Set up your virtual environment:

```shell
uv venv --python 3.12
source .venv/bin/activate
```

Install dependencies:

```shell
uv pip install -U agno openai
```

Export your OpenAI API key:

```shell
export OPENAI_API_KEY="your_openai_api_key_here"
```
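If you want to fail fast with a clear message when the key is missing, you can add an optional check like the one below at the top of your script. The helper name `require_env` is illustrative, not part of Agno:

```python
import os


def require_env(name: str) -> str:
    """Return the value of an environment variable, or exit with a clear error."""
    value = os.environ.get(name)
    if not value:
        raise SystemExit(f"{name} is not set; export it before running the example.")
    return value


# Example usage, matching the variable the example expects:
# require_env("OPENAI_API_KEY")
```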
Run the example:

```shell
python agent_as_judge_custom_evaluator.py
```