The problem: Reasoning Agents force systematic thinking on every request. Reasoning Models require specialized models. What if you want reasoning only when needed, tailored to specific contexts?
The solution: Reasoning Tools give your agent explicit think() and analyze() tools, and let the agent decide when to use them. The agent chooses when to reason, when to act, and when it has enough information to respond.
Agno provides four specialized reasoning toolkits, each optimized for different domains:
| Toolkit | Purpose | Core Tools |
|---|---|---|
| ReasoningTools | General-purpose thinking and analysis | think(), analyze() |
| KnowledgeTools | Reasoning with knowledge base searches | think(), search_knowledge(), analyze() |
| MemoryTools | Reasoning about user memory operations | think(), get/add/update/delete_memory(), analyze() |
| WorkflowTools | Reasoning about workflow execution | think(), run_workflow(), analyze() |
Note: All reasoning toolkits register their think()/analyze() functions under the same names. When you combine toolkits, the agent keeps only the first implementation of each function name and silently drops duplicates. Disable enable_think/enable_analyze (or rename/customize functions) on the later toolkits if you still want them to expose their domain-specific actions without conflicting with the scratchpad tools.
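To make the first-wins behavior concrete, here is a framework-free sketch of duplicate-dropping registration. This is illustrative only, not Agno's internals; the dicts stand in for toolkits:

```python
# Illustrative sketch of first-wins tool registration (NOT Agno internals):
# when two toolkits register a function under the same name, only the
# first registration survives and later ones are silently dropped.
def register_tools(*toolkits):
    registry = {}
    for toolkit in toolkits:
        for name, fn in toolkit.items():
            registry.setdefault(name, fn)  # keep the first, drop duplicates
    return registry

reasoning = {"think": lambda: "general scratchpad"}
knowledge = {
    "think": lambda: "knowledge scratchpad",  # duplicate name: silently dropped
    "search_knowledge": lambda q: f"results for {q!r}",
}

tools = register_tools(reasoning, knowledge)
```

After registration, `tools["think"]` is the general-purpose implementation, while `search_knowledge` (a unique name) is still available.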
All four toolkits follow the same Think → Act → Analyze pattern but provide domain-specific actions tailored to their use case.
This approach was first popularized by Anthropic in their “Extended Thinking” blog post, though many AI engineers (including our team) were using similar patterns long before.
Reasoning Tools give you the best of both worlds:
- Works with any model - Even models without native reasoning capabilities
- Explicit control - The agent decides when to think vs. when to act
- Full transparency - You see exactly what the agent is thinking
- Flexible workflow - The agent can interleave thinking with tool calls
- Domain-optimized - Each toolkit is specialized for its specific use case
- Natural reasoning - Feels more like human problem-solving (think, act, analyze, repeat)
The key difference: With Reasoning Agents, the reasoning happens automatically in a structured loop. With Reasoning Tools, the agent explicitly chooses when to use the think() and analyze() tools, giving you more control and visibility.
ReasoningTools
For general problem-solving without domain-specific tools.
What it provides:
- think() - Plan and reason about the problem
- analyze() - Evaluate results and determine next steps
When to use:
- Mathematical or logical problems
- Strategic planning
- Analysis tasks that don’t require external data
- Any scenario where you want structured reasoning
Example:
```python
from agno.agent import Agent
from agno.models.openai import OpenAIResponses
from agno.tools.reasoning import ReasoningTools

agent = Agent(
    model=OpenAIResponses(id="gpt-5.2"),
    tools=[ReasoningTools(add_instructions=True)],
)

agent.print_response(
    "Which is bigger: 9.11 or 9.9? Explain your reasoning.",
    stream=True,
)
```
KnowledgeTools
For searching and analyzing information from knowledge bases (RAG).
What it provides:
- think() - Plan search strategy and refine approach
- search_knowledge() - Query the knowledge base
- analyze() - Evaluate search results for relevance and completeness
When to use:
- Document retrieval and analysis
- RAG (Retrieval-Augmented Generation) workflows
- Research tasks requiring multiple search iterations
- When you need to verify information from knowledge bases
Example:
```python
from agno.agent import Agent
from agno.knowledge.pdf import PDFKnowledgeBase
from agno.models.openai import OpenAIResponses
from agno.tools.knowledge import KnowledgeTools
from agno.vectordb.pgvector import PgVector

# Create knowledge base
knowledge = PDFKnowledgeBase(
    path="data/research_papers/",
    vector_db=PgVector(
        table_name="research_papers",
        db_url="postgresql+psycopg://ai:ai@localhost:5532/ai",
    ),
)

agent = Agent(
    model=OpenAIResponses(id="gpt-5.2"),
    tools=[KnowledgeTools(knowledge=knowledge, add_instructions=True)],
    instructions="Search thoroughly and cite your sources",
)

agent.print_response(
    "What are the latest findings on quantum entanglement in our research papers?",
    stream=True,
)
```
How it works:
- Agent calls think(): “I need to search for quantum entanglement. Let me try multiple search terms.”
- Agent calls search_knowledge("quantum entanglement")
- Agent calls analyze(): “Results are too broad. Need more specific search.”
- Agent calls search_knowledge("quantum entanglement recent findings")
- Agent calls analyze(): “Now I have sufficient, relevant results.”
- Agent provides final answer
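The search-then-refine loop above can be sketched without the framework. In the sketch below, `search_fn` and `sufficient` are illustrative placeholders for the real knowledge search and for the agent's analyze() judgment:

```python
# Framework-free sketch of the search -> analyze -> refine loop above.
# `search_fn` and `sufficient` stand in for the real knowledge search
# and the agent's analyze() judgment; both are illustrative placeholders.
def search_until_sufficient(search_fn, query, sufficient, refinements):
    results = search_fn(query)
    for extra in refinements:
        if sufficient(results):          # ANALYZE: good enough to answer?
            break
        query = f"{query} {extra}"       # THINK: narrow the query and retry
        results = search_fn(query)       # ACT: search again
    return query, results

query, results = search_until_sufficient(
    search_fn=lambda q: len(q.split()),  # toy "search": counts query terms
    query="quantum entanglement",
    sufficient=lambda r: r >= 4,         # toy "analyze": enough specificity?
    refinements=["recent findings", "experimental"],
)
```

With these toy stand-ins, the loop refines the query once ("quantum entanglement recent findings") and then stops, mirroring the two-search trace above.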
MemoryTools
For managing and reasoning about user memories with CRUD operations.
What it provides:
- think() - Plan memory operations
- get_memories() - Retrieve user memories
- add_memory() - Store new memories
- update_memory() - Modify existing memories
- delete_memory() - Remove memories
- analyze() - Evaluate memory operations
When to use:
- Personalized agent interactions
- User preference management
- Maintaining conversation context across sessions
- Building user profiles over time
Example:
```python
from agno.agent import Agent
from agno.db.postgres import PostgresDb
from agno.models.openai import OpenAIResponses
from agno.tools.memory import MemoryTools

db = PostgresDb(
    db_url="postgresql+psycopg://ai:ai@localhost:5532/ai",
)

agent = Agent(
    model=OpenAIResponses(id="gpt-5.2"),
    tools=[MemoryTools(db=db, add_instructions=True)],
    db=db,
)

agent.print_response(
    "I prefer vegetarian recipes and I'm allergic to nuts.",
    user_id="user_123",
)
```
How it works:
- Agent calls think(): “User is sharing dietary preferences. I should store this.”
- Agent calls add_memory(memory="User prefers vegetarian recipes and is allergic to nuts", topics=["dietary_preferences", "allergies"])
- Agent calls analyze(): “Memory successfully stored with appropriate topics.”
- Agent responds to user confirming the information was saved
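The CRUD surface above can be modeled with a toy in-memory store. This is illustrative only; Agno's MemoryTools persists memories to a real database:

```python
# Toy in-memory stand-in for the memory CRUD surface above
# (illustrative only; Agno's MemoryTools persists to a real database).
class MemoryStore:
    def __init__(self):
        self._memories = {}
        self._next_id = 0

    def add_memory(self, memory, topics):
        self._next_id += 1
        self._memories[self._next_id] = {"memory": memory, "topics": topics}
        return self._next_id                      # id of the stored memory

    def get_memories(self):
        return list(self._memories.values())

    def update_memory(self, memory_id, memory):
        self._memories[memory_id]["memory"] = memory

    def delete_memory(self, memory_id):
        self._memories.pop(memory_id, None)

store = MemoryStore()
mid = store.add_memory(
    "User prefers vegetarian recipes and is allergic to nuts",
    topics=["dietary_preferences", "allergies"],
)
```

The agent's think()/analyze() calls wrap these operations: plan which operation to run, run it, then check the result before responding.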
WorkflowTools
For executing and analyzing complex workflows.
What it provides:
- think() - Plan workflow inputs and strategy
- run_workflow() - Execute a workflow with specific inputs
- analyze() - Evaluate workflow results
When to use:
- Multi-step automated processes
- Complex task orchestration
- When workflows need different inputs based on context
- A/B testing different workflow configurations
Example:
```python
from agno.agent import Agent
from agno.models.openai import OpenAIResponses
from agno.tools.workflow import WorkflowTools
from agno.workflow import Workflow
from agno.workflow.step import Step

# Define a research workflow
# (search_agent, summary_agent, and fact_check_agent are defined elsewhere)
research_workflow = Workflow(
    name="research-workflow",
    steps=[
        Step(name="search", agent=search_agent),
        Step(name="summarize", agent=summary_agent),
        Step(name="fact-check", agent=fact_check_agent),
    ],
)

# Create agent with workflow tools
orchestrator = Agent(
    model=OpenAIResponses(id="gpt-5.2"),
    tools=[WorkflowTools(workflow=research_workflow, add_instructions=True)],
)

orchestrator.print_response(
    "Research climate change impacts on agriculture",
    stream=True,
)
```
How it works:
- Agent calls think(): “I need to run the research workflow with ‘climate change agriculture’ as input.”
- Agent calls run_workflow(input_data="climate change impacts on agriculture")
- Workflow executes all steps (search → summarize → fact-check)
- Agent calls analyze(): “Workflow completed successfully. All fact-checks passed.”
- Agent provides final synthesized answer
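Conceptually, a sequential workflow like this is just named steps run in order, with each step's output feeding the next step's input. A framework-free sketch, where the lambdas are toy stand-ins for the search/summarize/fact-check agents:

```python
# Framework-free sketch of sequential workflow execution: each step's
# output becomes the next step's input. The lambdas below are toy
# stand-ins for the search/summarize/fact-check agents above.
def run_workflow(steps, input_data):
    data = input_data
    for _name, step_fn in steps:
        data = step_fn(data)             # pipe each step's output onward
    return data

result = run_workflow(
    steps=[
        ("search", lambda q: f"sources for {q}"),
        ("summarize", lambda s: f"summary of {s}"),
        ("fact-check", lambda s: f"verified {s}"),
    ],
    input_data="climate change impacts on agriculture",
)
```

WorkflowTools wraps this kind of execution in a single run_workflow() tool so the agent can choose the inputs, run the pipeline, and then analyze the combined result.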
Common Pattern: Think → Act → Analyze
All four toolkits follow the same reasoning cycle:
- THINK - Plan what to do, refine approach, brainstorm
- ACT (domain-specific)
  - ReasoningTools: direct reasoning
  - KnowledgeTools: search_knowledge()
  - MemoryTools: get/add/update/delete_memory()
  - WorkflowTools: run_workflow()
- ANALYZE - Evaluate results, decide next action
- REPEAT - Loop back to THINK if needed, or provide answer
This mirrors how humans solve complex problems: we think before acting, evaluate results, and adjust our approach based on what we learn.
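The cycle can be written down as a small control loop. In this sketch, `think_fn`, `act_fn`, and `analyze_fn` are illustrative placeholders for a toolkit's think(), domain action, and analyze() tools, not Agno's actual implementation:

```python
# The Think -> Act -> Analyze cycle as a plain control loop.
# `think_fn`, `act_fn`, and `analyze_fn` are illustrative placeholders
# for a toolkit's think(), domain action, and analyze() tools.
def reasoning_cycle(think_fn, act_fn, analyze_fn, task, max_iters=5):
    trace = []
    for _ in range(max_iters):
        plan = think_fn(task)              # THINK: plan the next action
        result = act_fn(plan)              # ACT: domain-specific action
        done, task = analyze_fn(result)    # ANALYZE: evaluate, maybe refine
        trace.append((plan, result, done))
        if done:                           # enough information to answer
            return result, trace
    return None, trace

answer, trace = reasoning_cycle(
    think_fn=lambda t: f"plan:{t}",
    act_fn=lambda p: f"result:{p}",
    analyze_fn=lambda r: (True, r),        # toy analyze: accept first result
    task="compare 9.11 and 9.9",
)
```

The `trace` list plays the role of the visible reasoning steps: every think/act/analyze round is recorded, so you can inspect how the answer was reached.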
| If you need to… | Use | Example |
|---|---|---|
| Solve logic puzzles or math problems | ReasoningTools | “Solve: If x² + 5x + 6 = 0, what is x?” |
| Search through documents | KnowledgeTools | “Find all mentions of user authentication in our docs” |
| Remember user preferences | MemoryTools | “Remember that I’m allergic to shellfish” |
| Orchestrate complex multi-step tasks | WorkflowTools | “Research, write, and fact-check an article” |
| Combine multiple domains | Use multiple toolkits | See examples for more patterns |
Combining Multiple Toolkits
You can use multiple reasoning toolkits together for powerful multi-domain reasoning. Just remember that tool names must stay unique, so disable overlapping think/analyze entries (or rename the later ones) to prevent silent overrides:
```python
from agno.agent import Agent
from agno.knowledge.pdf import PDFKnowledgeBase
from agno.models.openai import OpenAIResponses
from agno.tools.knowledge import KnowledgeTools
from agno.tools.memory import MemoryTools
from agno.tools.reasoning import ReasoningTools

# my_knowledge and my_db are defined elsewhere
agent = Agent(
    model=OpenAIResponses(id="gpt-5.2"),
    tools=[
        ReasoningTools(add_instructions=True),
        KnowledgeTools(
            knowledge=my_knowledge,
            enable_think=False,
            enable_analyze=False,
            add_instructions=False,
        ),
        MemoryTools(
            db=my_db,
            enable_think=False,
            enable_analyze=False,
            add_instructions=False,
        ),
    ],
    instructions="Use reasoning for planning, knowledge for facts, and memory for personalization",
)
```
With this setup:
- ReasoningTools supplies the shared think/analyze scratchpad.
- KnowledgeTools still exposes search_knowledge() (and any other unique methods) without trying to register duplicate scratchpad functions.
- MemoryTools contributes the CRUD memory tools while inheriting the same central thinking loop.
If you need separate scratchpads per domain, create custom wrappers around think()/analyze() so each toolkit registers uniquely named functions (e.g., knowledge_think, memory_analyze).
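If you do want per-domain scratchpads, the wrappers can be ordinary functions with unique names. The names below (knowledge_think, memory_analyze) are hypothetical, not part of Agno's API, and the bodies are minimal sketches of what a domain-tagged scratchpad might record:

```python
# Hypothetical uniquely named scratchpad wrappers (these names are
# illustrative; they are NOT part of Agno's API). Plain functions like
# these avoid colliding on the shared think/analyze names.
def knowledge_think(thought: str) -> str:
    """Scratchpad for planning knowledge-base searches."""
    return f"[knowledge/think] {thought}"

def memory_analyze(result: str) -> str:
    """Scratchpad for evaluating memory operations."""
    return f"[memory/analyze] {result}"
```

Because each function has a distinct name, registering both alongside ReasoningTools leaves every scratchpad intact instead of silently dropping duplicates.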
Configuration Options
You can control which reasoning tools are available:
```python
# Only thinking, no analysis
ReasoningTools(enable_think=True, enable_analyze=False)

# Only analysis, no thinking
ReasoningTools(enable_think=False, enable_analyze=True)

# Both (default)
ReasoningTools(enable_think=True, enable_analyze=True)

# Shorthand for both
ReasoningTools()
```
Add Instructions Automatically
Many toolkits ship with pre-written guidance that explains how to use their tools. Setting add_instructions=True injects those instructions into the agent prompt (when the toolkit actually has any):
```python
ReasoningTools(add_instructions=True)
```
- ReasoningTools, KnowledgeTools, MemoryTools, and WorkflowTools all include Agno-authored instructions (and optional few-shot examples) describing their Think → Act → Analyze workflow.
- Other toolkits may not define default instructions; in that case add_instructions=True is a no-op unless you supply your own instructions=....
The built-in instructions cover when to use think() vs analyze(), how to iterate, and best practices for each domain. Turn them on unless you plan to provide custom guidance.
Add Few-Shot Examples
Want to show your agent some examples of good reasoning? Some toolkits come with pre-written few-shot examples that demonstrate the workflow in action. Turn them on with add_few_shot=True:
```python
ReasoningTools(add_instructions=True, add_few_shot=True)
```
Right now, ReasoningTools, KnowledgeTools, and MemoryTools ship with built-in examples; for other toolkits, add_few_shot=True has no effect unless you provide your own examples.
These examples show the agent how to iterate through problems, decide on next actions, and mix thinking with actual tool calls.
When should you use them?
- You’re using a smaller or cheaper model that needs extra guidance
- Your reasoning workflow has multiple stages or is complex
- You want more consistent behavior across different runs
Custom Instructions
Provide your own custom instructions for specialized reasoning:
```python
custom_instructions = """
Use the think and analyze tools for rigorous scientific reasoning:
- Always think before making claims
- Cite evidence in your analysis
- Acknowledge uncertainty
- Consider alternative hypotheses
"""

ReasoningTools(
    instructions=custom_instructions,
    add_instructions=False,  # Don't include default instructions
)
```
Custom Few-Shot Examples
You can also write your own examples tailored to your domain:
```python
medical_examples = """
Example: Medical Diagnosis
User: Patient has fever and cough for 3 days.

Agent thinks:
think(
    title="Gather Symptoms",
    thought="Need to collect all symptoms and their duration. Fever and cough suggest respiratory infection. Should check for other symptoms.",
    action="Ask about additional symptoms",
    confidence=0.9
)
"""

ReasoningTools(
    add_instructions=True,
    add_few_shot=True,
    few_shot_examples=medical_examples,  # Your custom examples
)
```
Monitoring Your Agent’s Thinking
Use show_full_reasoning=True and stream_events=True to display reasoning steps in real-time. See Display Options in Reasoning Agents for details and Reasoning Reference for programmatic access to reasoning steps.
Reasoning Tools vs. Reasoning Agents
Both approaches add reasoning to any model, but they differ in control and automation:
| Aspect | Reasoning Tools | Reasoning Agents |
|---|---|---|
| Activation | Agent decides when to use think() | Automatic on every request |
| Control | Explicit tool calls | Automated loop |
| Transparency | See every think() and analyze() call | See structured reasoning steps |
| Workflow | Agent-driven (flexible) | Framework-driven (structured) |
| Best for | Research, analysis, exploratory tasks | Complex multi-step problems with defined structure |
Rule of thumb:
- Use Reasoning Tools when you want the agent to control its own reasoning process
- Use Reasoning Agents when you want guaranteed systematic thinking for every request