# Tool Debugging
Debug and test your agent tools effectively.
## Local Testing

Test tools locally before deployment:
```python
from swiftclaw import Agent

agent = Agent(name="test-agent")

@agent.tool
async def search_database(query: str):
    """Search the database"""
    return await db.search(query)

# Test the tool
result = await agent.test_tool(
    "search_database",
    query="test query",
)
print(result)
```

## Debug Mode
Enable debug mode for detailed logging:
```bash
swiftclaw dev --debug
```

Or in code:
```python
agent = Agent(
    name="my-agent",
    debug=True,
)
```
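Debug mode raises the framework's own verbosity; diagnostics from your tool code can go through the standard `logging` module alongside it. A minimal, framework-agnostic sketch (the logger name `my-agent.tools` is illustrative):

```python
import io
import logging

# Capture DEBUG-level output in memory so we can inspect it;
# in practice you would log to the console or a file.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
log = logging.getLogger("my-agent.tools")
log.addHandler(handler)
log.setLevel(logging.DEBUG)

log.debug("search_database called with query=%r", "test query")
print(buffer.getvalue().strip())  # search_database called with query='test query'
```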
## Tool Execution Logs

View tool execution logs:
```bash
swiftclaw logs my-agent --filter tool-execution
```

## Tracing
Trace tool execution:
```python
@agent.tool(trace=True)
async def complex_tool(param: str):
    """Complex tool with tracing"""
    with agent.trace("step1"):
        result1 = await step1(param)
    with agent.trace("step2"):
        result2 = await step2(result1)
    return result2
```
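At its core, a tracing hook like this is a timing context manager around each named step. A framework-free sketch of the same idea (the `trace` helper below is illustrative, not swiftclaw's API):

```python
import time
from contextlib import contextmanager

@contextmanager
def trace(step, spans):
    """Record the wall-clock duration of a named step into `spans`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        spans[step] = time.perf_counter() - start

spans = {}
with trace("step1", spans):
    subtotal = sum(range(1_000))
with trace("step2", spans):
    time.sleep(0.01)

print(sorted(spans))  # ['step1', 'step2']
```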
## Error Handling

Handle tool errors gracefully:
```python
@agent.tool
async def risky_tool(param: str):
    try:
        return await external_api.call(param)
    except APIError as e:
        agent.log_error(f"API error: {e}")
        return {"error": str(e)}
    except Exception as e:
        agent.log_error(f"Unexpected error: {e}")
        raise
```
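Beyond logging and re-raising, transient failures such as timeouts or rate limits are often worth retrying before surfacing an error. A minimal retry-with-backoff sketch using only the standard library (`call_with_retries` is a hypothetical helper, not part of swiftclaw):

```python
import asyncio
import random

async def call_with_retries(make_call, attempts=3, base_delay=0.5):
    """Retry a coroutine-producing callable with exponential backoff.

    `make_call` must return a fresh coroutine on each invocation,
    since a coroutine object can only be awaited once.
    """
    for attempt in range(attempts):
        try:
            return await make_call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the agent
            # exponential backoff with jitter before the next attempt
            await asyncio.sleep(base_delay * (2 ** attempt) * random.random())

# Demo: a call that fails twice, then succeeds
state = {"calls": 0}

async def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("transient failure")
    return {"status": "success"}

result = asyncio.run(call_with_retries(flaky, base_delay=0.01))
print(result["status"], state["calls"])  # success 3
```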
## Tool Validation

Validate tool inputs:
```python
from pydantic import BaseModel, validator

class SearchParams(BaseModel):
    query: str
    limit: int = 10

    @validator("limit")
    def validate_limit(cls, v):
        if v < 1 or v > 100:
            raise ValueError("Limit must be between 1 and 100")
        return v

@agent.tool
async def search(params: SearchParams):
    return await db.search(params.query, params.limit)
```
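You can exercise the model directly to confirm that bad inputs fail fast, before the tool body ever runs. A self-contained sketch re-declaring the same model:

```python
from pydantic import BaseModel, ValidationError, validator

class SearchParams(BaseModel):
    query: str
    limit: int = 10

    @validator("limit")
    def validate_limit(cls, v):
        if v < 1 or v > 100:
            raise ValueError("Limit must be between 1 and 100")
        return v

# Valid input: the default limit is applied
print(SearchParams(query="agents").limit)  # 10

# Out-of-range input is rejected before the tool body runs
try:
    SearchParams(query="agents", limit=500)
except ValidationError as exc:
    print("rejected:", len(exc.errors()), "error")
```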
## Performance Profiling

Profile tool performance:
```python
@agent.tool(profile=True)
async def slow_tool(param: str):
    """Tool with performance profiling"""
    result = await expensive_operation(param)
    return result
```

View profiling data:
```bash
swiftclaw tools profile my-agent slow_tool
```
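If you want similar numbers without the CLI, the core of a `profile=True` flag is a timing wrapper around each call. A stand-alone sketch (`profiled` is a hypothetical decorator, not swiftclaw's API):

```python
import asyncio
import functools
import time

def profiled(fn):
    """Record the duration of every call to an async tool function."""
    timings = []

    @functools.wraps(fn)
    async def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return await fn(*args, **kwargs)
        finally:
            timings.append(time.perf_counter() - start)

    wrapper.timings = timings  # expose collected durations for inspection
    return wrapper

@profiled
async def slow_tool(param: str):
    await asyncio.sleep(0.01)  # stand-in for an expensive operation
    return param.upper()

print(asyncio.run(slow_tool("ok")))  # OK
print(len(slow_tool.timings))        # 1
```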
## Mock Tools

Mock tools for testing:
```python
# Mock external API
@agent.tool(mock=True)
async def external_api_call(param: str):
    """Mocked API call"""
    return {"status": "success", "data": "mock data"}
```
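Outside the framework, the standard library's `unittest.mock.AsyncMock` gives the same effect in plain unit tests. A sketch, with `external_api` standing in for the real client your tool would import:

```python
import asyncio
from unittest.mock import AsyncMock

# Stand-in for the real external client
external_api = AsyncMock()
external_api.call.return_value = {"status": "success", "data": "mock data"}

async def external_api_call(param: str):
    return await external_api.call(param)

result = asyncio.run(external_api_call("hello"))
print(result["data"])  # mock data
external_api.call.assert_awaited_once_with("hello")
```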
## Tool Testing Framework

Use the testing framework:
```python
import pytest
from swiftclaw.testing import AgentTest

@pytest.mark.asyncio
async def test_search_tool():
    agent = AgentTest("my-agent")
    result = await agent.call_tool(
        "search_database",
        query="test",
    )
    assert result["status"] == "success"
    assert len(result["results"]) > 0
```

## Debugging Tips
### Check Tool Registration
Verify tools are registered:
```bash
swiftclaw tools list my-agent
```

### Inspect Tool Calls
See how the agent calls tools:
```bash
swiftclaw logs my-agent --filter tool-calls --tail 100
```

### Test Tool Isolation
Test tools in isolation:
```python
result = await agent.test_tool(
    "my_tool",
    param1="value1",
    param2="value2",
)
```

### Monitor Tool Performance
Track tool execution time:
```bash
swiftclaw metrics my-agent --metric tool-execution-time
```

Proper debugging helps identify and fix tool issues before they affect production.