Reasoning models perform best when given clear, well-structured problems. These practices help you get accurate, efficient results.
Reasoning models work through a problem from start to finish, so introducing ambiguous or incomplete information partway through degrades results. Provide everything the model needs in the initial message.
```json
{
  "inputs": [
    {
      "role": "user",
      "content": "A train leaves City A at 9:00 AM traveling at 80 km/h. Another train leaves City B (200 km away) at 10:00 AM traveling at 120 km/h toward City A. At what time do they meet, and how far from City A?"
    }
  ]
}
```

Avoid: "Solve this problem" followed by the problem in a second message.
Reasoning models are most valuable for problems with a definitive correct answer. Broad or open-ended questions don't benefit much from extended thinking.
Good:
- "Is this code correct? If not, what is the bug and how do I fix it?"
- "Prove that this algorithm runs in O(n log n) time."
- "What is wrong with this SQL query?"
Less effective for reasoning models:
- "What are some ways to improve my codebase?"
- "Write a blog post about AI."
Let the model reason freely before formatting its response. Asking it to respond in a specific structured format can interfere with the reasoning process.
Avoid: "Think through this step by step and format your answer as a numbered list with exactly 5 points."
Better: "Explain why this algorithm fails on this edge case."
If you need a specific output format, use Structured Output instead.
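As a sketch, a Structured Output request constrains only the final answer, not the reasoning itself. The `response_format` field and its schema layout below are assumptions for illustration, not confirmed parameter names:

```json
{
  "model": "o3",
  "inputs": [{"role": "user", "content": "Is this SQL query correct?"}],
  "response_format": {
    "type": "json_schema",
    "schema": {
      "type": "object",
      "properties": {
        "is_correct": {"type": "boolean"},
        "explanation": {"type": "string"}
      },
      "required": ["is_correct", "explanation"]
    }
  }
}
```

The model reasons freely, and only the final response is coerced into the schema.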
Reasoning models consume more synapses and take longer to respond. Route requests intelligently:
```python
def select_model(task_type: str) -> str:
    """Route reasoning-heavy tasks to a reasoning model; default otherwise."""
    reasoning_tasks = {"proof", "debug", "audit", "verify", "analyze"}
    if task_type in reasoning_tasks:
        return "o3"
    return "gpt-4o"  # Default for most tasks
```

Reasoning models can produce very long responses when working through complex problems. Set `max_output_synapses` to control length and cost:
```json
{
  "model": "o3",
  "max_output_synapses": 2048,
  "inputs": [{"role": "user", "content": "Debug this function."}]
}
```

When you access reasoning content via the streaming API, remember:
- Reasoning is the model's scratchpad — it may contain false starts, discarded ideas, and corrections
- The final response is what the model committed to after reasoning
- Don't use reasoning content as authoritative; use it to understand how the model reached its conclusion
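A minimal sketch of keeping the two kinds of content separate when consuming a stream. The event shape `{"type": ..., "text": ...}` is an assumption, not the documented wire format, so adapt the field names to the real streaming API:

```python
def split_stream(events):
    """Collect reasoning (scratchpad) text and final-response text separately."""
    reasoning, final = [], []
    for event in events:
        if event["type"] == "reasoning":
            reasoning.append(event["text"])  # scratchpad: may contain false starts
        elif event["type"] == "response":
            final.append(event["text"])      # what the model committed to
    return "".join(reasoning), "".join(final)

# Simulated stream for illustration:
stream = [
    {"type": "reasoning", "text": "Try a loop... no, "},
    {"type": "reasoning", "text": "binary search is O(log n)."},
    {"type": "response", "text": "Use binary search."},
]
scratchpad, answer = split_stream(stream)
```

Logging the scratchpad for debugging is fine; just never surface it to users as if it were the answer.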
If you're using function calling alongside a reasoning model, design your tools so they can be called after the reasoning phase, not during. Interrupting reasoning with external tool calls can degrade reasoning quality.
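The "tools after reasoning" pattern can be sketched as follows: let the model finish its reasoning pass, then dispatch any tool calls it queued in the final response. The response shape, tool registry, and `run_sql` helper are hypothetical, for illustration only:

```python
def run_sql(query: str) -> str:
    """Hypothetical tool: pretend to execute a SQL statement."""
    return f"ran: {query}"

TOOLS = {"run_sql": run_sql}

def dispatch_tool_calls(response: dict) -> list:
    """Execute tool calls only after the reasoning phase has completed."""
    results = []
    for call in response.get("tool_calls", []):
        fn = TOOLS[call["name"]]
        results.append(fn(**call["arguments"]))
    return results

# Final (post-reasoning) response with one queued tool call:
final_response = {
    "content": "The query is missing a GROUP BY; verifying with a dry run.",
    "tool_calls": [{"name": "run_sql", "arguments": {"query": "EXPLAIN SELECT 1"}}],
}
results = dispatch_tool_calls(final_response)
```

The key design choice is that the model describes the calls it wants; your code executes them only once reasoning is done.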
Reasoning models work best at low temperature (the default). Avoid setting high temperature values — they introduce randomness that undermines careful logical reasoning.
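In practice that means simply omitting the `temperature` field, as in the earlier examples:

```json
{
  "model": "o3",
  "inputs": [{"role": "user", "content": "Prove that this loop terminates."}]
}
```

Leaving the field out is safer than setting it explicitly, since the default may change per model.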
| Aspect | Standard Model | Reasoning Model |
|---|---|---|
| Speed | Fast | Slower (thinking time) |
| Cost | Lower | Higher (reasoning synapses) |
| Accuracy on hard problems | Good | Better |
| Creative tasks | Excellent | No advantage |
| Simple Q&A | Excellent | No advantage |
- Reasoning Models — Model list and technical details
- Available Models — Choose the right model
- Structured Output — Enforce response format
- Pricing — Cost comparison