Agentic workflows enable AI models to autonomously execute multi-step tasks by intelligently calling tools, processing results, and making decisions—all without requiring manual orchestration from your application.
Traditional function calling requires you to manually manage a loop:
- Send request to model
- Receive tool call
- Execute tool in your code
- Send results back
- Repeat until done
Agentic workflows automate this entirely. The Aitronos backend handles the full execution loop, allowing the AI to act as an autonomous agent that:
- 🤖 Decides which tools to use based on context
- 🔄 Chains multiple tool calls intelligently
- 📊 Analyzes results and determines next steps
- ✅ Completes complex tasks end-to-end
- 💬 Responds with final answer after all necessary actions
Phase 1: Planning
- Model analyzes user request
- Determines which tools are needed
- Plans execution strategy
Phase 2: Execution
- Model calls first tool with appropriate parameters
- Backend executes tool automatically
- Results are fed back to model
Phase 3: Reasoning
- Model analyzes tool results
- Determines if more information is needed
- Decides on next tool call or final response
Phase 4: Iteration
- Process repeats until model has all necessary information
- Model synthesizes all data into coherent response
Phase 5: Completion
- Final response generated incorporating all tool results
- Complete answer returned to user
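Conceptually, the five phases amount to a loop that the backend runs on your behalf. A minimal sketch in Python, where `call_model` and `execute_tool` are hypothetical stand-ins for the real model and tool layer (not actual SDK calls):

```python
def call_model(messages):
    # Stub model: asks for one tool call, then answers once a tool result exists.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_calls": [{"name": "get_cities", "arguments": {"region": "europe"}}]}
    return {"content": "Weather report ready for Paris, London, Berlin, Rome, Madrid"}

def execute_tool(name, arguments):
    # Stub tool registry; the real backend executes built-in tools server-side.
    tools = {"get_cities": lambda region: ["Paris", "London", "Berlin", "Rome", "Madrid"]}
    return tools[name](**arguments)

def agentic_loop(user_text, max_tool_calls=20):
    messages = [{"role": "user", "content": user_text}]
    for _ in range(max_tool_calls):
        response = call_model(messages)          # Phases 1 and 3: plan / reason
        if "tool_calls" not in response:
            return response["content"]           # Phase 5: completion
        for call in response["tool_calls"]:      # Phases 2 and 4: execute, iterate
            result = execute_tool(call["name"], call["arguments"])
            messages.append({"role": "tool", "name": call["name"], "content": result})
    raise RuntimeError("maxToolCalls exceeded")
```

The point of the agentic API is that this loop, including tool execution and message bookkeeping, happens server-side.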
Before (Manual Loop):
```python
# 50+ lines of code managing the loop
while not done:
    response = call_api(messages)
    if has_tool_calls(response):
        results = execute_tools(response.tool_calls)
        messages.append(results)
    else:
        done = True
```

After (Agentic):
```python
# Single API call!
response = requests.post(
    "https://api.freddy.aitronos.com/v1/model/response",
    json={
        "model": "gpt-5",
        "inputs": [{"role": "user", "texts": [{"text": "Your request"}]}],
        "functions": [tool1, tool2]
    }
)
```

The model automatically:
- Chooses the right tools for the task
- Determines optimal execution order
- Handles dependencies between tools
- Adapts based on intermediate results
Common tools are executed server-side:
- `get_cities` - Retrieve city lists by region
- `get_temperature` - Fetch temperature data
- `web_search` - Real-time internet search
- `file_search` - Search uploaded documents
- `code_interpreter` - Execute Python code
- Full conversation history maintained automatically
- Tool calls and results saved to thread
- Multi-turn workflows supported seamlessly
- Context available for follow-up questions
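Because the thread already stores the earlier conversation and tool results, a follow-up turn only needs the new user message plus the `threadId`. A sketch of the payload (the field names follow the examples in this guide; the helper function itself is hypothetical):

```python
def follow_up_payload(thread_id, text, model="gpt-5"):
    # The thread supplies prior messages and tool results server-side,
    # so only the new user message travels with this request.
    return {
        "model": model,
        "threadId": thread_id,  # preserve context from the earlier workflow
        "inputs": [{"role": "user", "texts": [{"text": text}]}],
    }
```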
| Model | Strategy | Speed | Best For |
|---|---|---|---|
| GPT-5 | Sequential | Moderate | Complex reasoning, step-by-step analysis |
| GPT-4o | Sequential | Fast | Balanced performance, general tasks |
| Claude Sonnet 4 | Parallel | Very Fast | Multi-tool execution, batch operations |
| Claude 3.5 Sonnet | Parallel | Fast | Efficient workflows, concurrent tasks |
Sequential (GPT Models):
- Executes tools one at a time
- Each tool can use results from previous tools
- Ideal for dependent operations
- Set `parallelToolCalls: false`
Parallel (Claude Models):
- Executes multiple tools simultaneously
- Faster for independent operations
- Ideal for batch processing
- Set `parallelToolCalls: true`
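One way to encode the table above is a small helper that flips `parallelToolCalls` by model family. The model ID strings here are assumptions for illustration; substitute whatever IDs your account exposes:

```python
# Assumed model IDs; check your account's model list for the real values.
PARALLEL_MODELS = {"claude-sonnet-4", "claude-3.5-sonnet"}

def build_request(model, user_text, functions):
    return {
        "model": model,
        "inputs": [{"role": "user", "texts": [{"text": user_text}]}],
        "functions": functions,
        # Claude models run independent tools in parallel; GPT models run sequentially.
        "parallelToolCalls": model in PARALLEL_MODELS,
    }
```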
"Get a comprehensive weather report for major European cities"
```python
import os
import requests

response = requests.post(
    "https://api.freddy.aitronos.com/v1/model/response",
    headers={"Authorization": f"Bearer {os.environ['FREDDY_API_KEY']}"},
    json={
        "organizationId": "org_abc123",
        "model": "gpt-5",
        "inputs": [
            {
                "role": "user",
                "texts": [{
                    "text": "Get a comprehensive weather report for major European cities"
                }]
            }
        ],
        "functions": [
            {
                "type": "function",
                "name": "get_cities",
                "description": "Get a list of 5 cities for a specific region. Returns exactly 5 cities.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "region": {
                            "type": "string",
                            "enum": ["asia", "europe", "americas", "global"],
                            "description": "Geographic region"
                        }
                    },
                    "required": ["region"]
                }
            },
            {
                "type": "function",
                "name": "get_temperature",
                "description": "Get current temperature for a specific city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {
                            "type": "string",
                            "description": "City name"
                        }
                    },
                    "required": ["city"]
                }
            }
        ],
        "parallelToolCalls": False  # Sequential execution for GPT-5
    }
)

result = response.json()
print(result['response'])
```

Iteration 1: Get Cities
- Model Decision: "I need to get European cities first"
- Tool Call: `get_cities(region="europe")`
- Backend Execution: Returns `["Paris", "London", "Berlin", "Rome", "Madrid"]`

Iteration 2-6: Get Temperatures
- Model Decision: "Now I need the temperature for each city"
- Iteration 2: `get_temperature(city="Paris")` → 18°C
- Iteration 3: `get_temperature(city="London")` → 15°C
- Iteration 4: `get_temperature(city="Berlin")` → 10°C
- Iteration 5: `get_temperature(city="Rome")` → 20°C
- Iteration 6: `get_temperature(city="Madrid")` → 22°C

Iteration 7: Generate Report
- Model Decision: "I have all the data, time to generate the report"
- Final Response:

"Here's a comprehensive weather report for major European cities:

🌍 **European Weather Overview**

📍 Paris, France: 18°C - Mild and pleasant
📍 London, UK: 15°C - Cool conditions
📍 Berlin, Germany: 10°C - Chilly weather
📍 Rome, Italy: 20°C - Warm and comfortable
📍 Madrid, Spain: 22°C - Warm and sunny

The warmest city is Madrid at 22°C, while Berlin is the coolest at 10°C.
Overall, Southern Europe is experiencing warmer temperatures compared to
Northern regions."

Total: 7 iterations, 6 tool calls, 1 final response
The model adapts based on intermediate results:
```python
# User: "Check inventory and reorder if stock is low"

# Step 1: Check inventory
result = check_inventory(product_id="WIDGET-100")
# Returns: {"stock": 5, "reorder_threshold": 10}

# Step 2: Model decides stock is low
# Automatically calls reorder
reorder_product(product_id="WIDGET-100", quantity=20)

# Step 3: Confirmation
# "I've checked the inventory for WIDGET-100 and found only 5 units
# remaining (below the threshold of 10). I've automatically placed
# a reorder for 20 units."
```

Collect data from multiple sources:
```python
# User: "Compare prices across all our suppliers"

# Automatic execution:
suppliers = get_suppliers()  # ["SupplierA", "SupplierB", "SupplierC"]
for supplier in suppliers:
    price = get_supplier_price(
        supplier=supplier,
        product="WIDGET-100"
    )

# Model generates comparison report
```

Model handles failures gracefully:
```python
# Step 1: Try primary data source
result = get_stock_data(source="primary")
# Returns: {"error": "Service unavailable"}

# Step 2: Model switches to backup
result = get_stock_data(source="backup")
# Success!

# Model continues without user intervention
```

Chain operations across different systems:
```python
# User: "Book a flight to Paris and add it to my calendar"

# Step 1: Search flights
flights = search_flights(destination="Paris", date="2025-06-01")

# Step 2: Book selected flight
booking = book_flight(flight_id=flights[0]['id'])

# Step 3: Add to calendar
create_calendar_event(
    title="Flight to Paris",
    date="2025-06-01",
    details=booking['confirmation']
)

# Complete response with all confirmation details
```

The model uses descriptions to decide when to call tools:
❌ Bad:

```json
{
  "name": "get_data",
  "description": "Gets data"
}
```

✅ Good:

```json
{
  "name": "get_customer_lifetime_value",
  "description": "Calculate the total revenue generated by a specific customer across all their orders. Use this when you need to understand customer value or segment high-value customers."
}
```

Prevent runaway execution:
```json
{
  "maxToolCalls": 20  // Limit total tool calls per workflow
}
```

When tools depend on each other:
```json
{
  "parallelToolCalls": false  // Ensure proper ordering
}
```

Leverage both for powerful workflows:
```json
{
  "tools": [
    {"type": "webSearch"},  // Built-in: Search internet
    {                       // Custom: Save to your database
      "type": "function",
      "name": "save_research",
      "parameters": {...}
    }
  ]
}
```

Enable follow-up conversations:
```json
{
  "threadId": "thread_abc123",  // Preserve context
  "inputs": [{"role": "user", "texts": [{"text": "Now check Asia"}]}]
}
```

Agentic workflows involve multiple model inferences:
| Workflow Complexity | Typical Duration | Tool Calls |
|---|---|---|
| Simple (1-2 tools) | 5-10 seconds | 1-2 |
| Medium (3-5 tools) | 15-30 seconds | 3-5 |
| Complex (6-10 tools) | 30-60 seconds | 6-10 |
Tips for Faster Execution:
- Use Claude models for parallel execution
- Minimize tool call dependencies
- Set appropriate `maxToolCalls` limits
- Use built-in tools when available (faster server-side execution)
Each tool call adds tokens:
Input Tokens:
- Tool definitions (sent with each request)
- Conversation history
- Previous tool results
Output Tokens:
- Tool call arguments
- Final response
Optimization Tips:
- Use concise tool descriptions
- Limit conversation history with `threadContextMode: "recent"`
- Set `maxToolCalls` to prevent excessive iterations
- Use `parallelToolCalls` to reduce sequential back-and-forth
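As a quick cost signal, you can divide total tokens by the number of tool calls reported in the response metadata. A sketch, using the field names from the response example in this guide:

```python
def tokens_per_tool_call(result):
    # Average token cost per tool call; falls back to the raw total
    # when the workflow made no tool calls at all.
    usage = result["usage"]
    calls = result.get("metadata", {}).get("tool_calls", 0)
    return usage["total_tokens"] / calls if calls else usage["total_tokens"]
```

For the weather example (2,950 total tokens across 6 tool calls), this works out to roughly 492 tokens per call.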
```json
{
  "include": [
    "function_calls.logs",  // Execution timing and details
    "request.logs",         // Full request processing log
    "usage.detailed"        // Token usage breakdown
  ]
}
```

```json
{
  "response": "Final answer text",
  "usage": {
    "input_tokens": 2500,
    "output_tokens": 450,
    "total_tokens": 2950
  },
  "metadata": {
    "iterations": 7,
    "tool_calls": 6,
    "execution_time_ms": 12400
  },
  "function_calls": [
    {
      "name": "get_cities",
      "arguments": {"region": "europe"},
      "result": ["Paris", "London", "Berlin", "Rome", "Madrid"],
      "execution_time_ms": 234
    },
    {
      "name": "get_temperature",
      "arguments": {"city": "Paris"},
      "result": {"temperature": 18},
      "execution_time_ms": 156
    }
    // ... more calls
  ]
}
```

Issue: Model Not Calling Tools
- Check tool descriptions are clear and relevant
- Verify user request actually requires tools
- Try `toolChoice: "required"` to force tool usage
Issue: Too Many Iterations
- Set a lower `maxToolCalls` limit
- Review tool descriptions for ambiguity
- Ensure tools return complete, structured data
Issue: Incomplete Results
- Check tool execution isn't timing out
- Verify tool results are in expected format
- Review error handling in tool execution
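When diagnosing these issues, the `function_calls` array from the debug output can be summarized programmatically. A sketch (field names taken from the response example in this guide):

```python
def summarize_tool_calls(result):
    # Tally call counts per tool and total execution time from the debug output.
    calls = result.get("function_calls", [])
    by_name = {}
    for call in calls:
        by_name[call["name"]] = by_name.get(call["name"], 0) + 1
    total_ms = sum(call["execution_time_ms"] for call in calls)
    return {"count": len(calls), "total_ms": total_ms, "by_name": by_name}
```

A tool appearing far more often than expected usually points at an ambiguous description or incomplete results forcing retries.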
Pros:
- Full control over execution
- Can implement custom logic between calls
- Easier to debug individual steps
Cons:
- Requires loop management code
- Must handle tool result formatting
- Manual conversation history tracking
- More complex error handling
Best For:
- Client-side tool execution
- Custom business logic between steps
- Need for human-in-the-loop approval
Pros:
- Single API call simplicity
- Automatic orchestration
- Built-in tools executed server-side
- Conversation history managed automatically
Cons:
- Less control over execution flow
- Built-in tools have fixed implementations
- Harder to debug individual iterations
Best For:
- Server-side built-in tools
- Complex multi-step workflows
- Autonomous agent behavior
- Rapid prototyping
Before (Manual Loop):
```python
def manual_workflow():
    messages = [{"role": "user", "content": "Get weather for Europe"}]
    while True:
        response = call_api(messages)
        if response.tool_calls:
            # Execute tools manually
            for call in response.tool_calls:
                result = execute_tool(call.name, call.arguments)
                messages.append({
                    "role": "tool",
                    "name": call.name,
                    "content": result
                })
        else:
            return response.content
```

After (Agentic):
```python
def agentic_workflow():
    response = requests.post(
        "https://api.freddy.aitronos.com/v1/model/response",
        json={
            "model": "gpt-5",
            "inputs": [{"role": "user", "texts": [{"text": "Get weather for Europe"}]}],
            "functions": [get_cities_tool, get_temperature_tool]
        }
    )
    return response.json()['response']
```

Migration Steps:
- ✅ Convert manual tool execution to built-in tools where possible
- ✅ Define all tools in single request
- ✅ Remove manual loop management code
- ✅ Update to expect final response directly
- ✅ Add appropriate `maxToolCalls` and `parallelToolCalls` settings
Task: "Research AI trends and create a summary report"
Workflow:
- `web_search("latest AI trends 2025")`
- `web_search("AI industry reports")`
- Analyze and synthesize results
- Generate comprehensive summary
Task: "Check order status and update customer"
Workflow:
- `get_customer_info(email)`
- `get_order_status(order_id)`
- `get_shipping_tracking(tracking_number)`
- Generate status update message
Task: "Find and compare products, then add best one to cart"
Workflow:
- `search_products(query, filters)`
- `get_product_details(product_id)` for each
- `check_inventory(product_id)`
- `compare_prices([product_ids])`
- `add_to_cart(best_product_id)`
- Generate purchase recommendation
Task: "Analyze sales data and create visualizations"
Workflow:
- `code_interpreter.execute("import pandas as pd; df = pd.read_csv('sales.csv')")`
- `code_interpreter.execute("df.describe()")`
- `code_interpreter.execute("plt.plot(df['month'], df['revenue'])")`
- Generate insights report with charts
- Function Calling Guide - Comprehensive function calling documentation
- Web Search Tool - Built-in search capabilities
- Code Interpreter - Python execution
- Threads Overview - Multi-turn conversations
- API Reference: Create Response - Full API docs
- Try the Quick Start - Weather Report Example
- Explore Built-In Tools - System Tools Documentation
- Build Custom Workflows - Function Calling Guide
- See More Examples - Examples Gallery