Freddy measures AI model computation with two units: neurons for input processing and synapses for output generation.
Synapses are Aitronos' unit of measurement for AI model output generation. They represent the computational units consumed during response creation, including:
- Output synapses - The visible text, images, or structured data in the response
- Reasoning synapses - Internal thinking and reasoning computations (when using reasoning models)
- Tool execution synapses - Processing for function calls and tool use
Neurons measure the input processing capacity consumed by your requests, including:
- Input neurons - Processing your text, images, audio, and file inputs
- Context neurons - Thread history and conversation context
- System neurons - Instructions and system prompts
Unified measurement: Track both input processing (neurons) and output generation (synapses) separately.
Reasoning transparency: Accounts for the model's internal reasoning process, not just visible output.
Fairer pricing: You're charged for the actual computational work performed by the AI.
Granular control: Set limits on both input context and output length independently.
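The pricing model above can be sketched as a simple cost function. The per-unit rates below are hypothetical placeholders, not Aitronos' actual prices; substitute the rates from your plan:

```python
def estimate_cost(usage: dict, neuron_rate: float, synapse_rate: float) -> float:
    """Estimate the cost of one response from its usage breakdown.

    neuron_rate and synapse_rate are hypothetical per-unit prices --
    replace them with the real rates from your Aitronos plan.
    """
    return (usage["totalNeurons"] * neuron_rate
            + usage["totalSynapses"] * synapse_rate)

# With invented rates of 0.001 per neuron and 0.002 per synapse:
usage = {"totalNeurons": 150, "totalSynapses": 512}
cost = estimate_cost(usage, neuron_rate=0.001, synapse_rate=0.002)
# 150 * 0.001 + 512 * 0.002 = 1.174
```

Because input and output are billed from separate counters, you can see at a glance whether spend is driven by large prompts or by long responses.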
Control the maximum length and cost of responses:

```json
{
  "maxOutputSynapses": 2048,
  "inputs": [...]
}
```

Common limits:
- 512 synapses - Short answers, simple queries
- 2048 synapses - Standard responses (default)
- 4096 synapses - Detailed explanations, complex reasoning
- 8192 synapses - Long-form content, multi-step tasks
The model stops generating when:
- The response is naturally complete, or
- `maxOutputSynapses` is reached
You'll receive a `finish_reason` indicating why generation stopped:
- `stop` - Natural completion
- `length` - Hit synapse limit
- `tool_call` - Model called a function
Every response includes a synapse and neuron usage breakdown:

```json
{
  "usage": {
    "inputNeurons": 150,
    "outputSynapses": 423,
    "reasoningSynapses": 89,
    "totalNeurons": 150,
    "totalSynapses": 512
  }
}
```

Use this data to:
- Optimize prompts - Reduce input neurons
- Control costs - Monitor total consumption
- Debug responses - See reasoning overhead
- Improve performance - Identify expensive operations
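For the monitoring and optimization steps above, you can aggregate the `usage` block across responses to find cost hotspots. The field names follow the example payload; the sample records are made up:

```python
from collections import Counter

def aggregate_usage(responses: list[dict]) -> Counter:
    """Sum the usage fields across many responses to spot cost hotspots."""
    totals: Counter = Counter()
    for r in responses:
        totals.update(r["usage"])
    return totals

# Hypothetical responses shaped like the usage payload above:
responses = [
    {"usage": {"inputNeurons": 150, "outputSynapses": 423, "reasoningSynapses": 89}},
    {"usage": {"inputNeurons": 90, "outputSynapses": 200, "reasoningSynapses": 0}},
]
totals = aggregate_usage(responses)
# totals["reasoningSynapses"] shows how much spend is reasoning overhead
```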
Synapses:
- Each piece of generated content consumes synapses
- Reasoning models use additional synapses for internal thinking
- Tool calls and function executions count toward synapses
- Limit with the `maxOutputSynapses` parameter

Neurons:
- Processing text, images, audio, and files consumes neurons
- Conversation history and thread context count as neurons
- System instructions and prompts use neurons
- Automatically managed based on model context limits
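Context trimming is managed automatically, but if you want to reason about neuron usage yourself, a rough sketch might keep only the most recent messages that fit a budget. The one-neuron-per-four-characters estimate is an invented heuristic, not Aitronos' actual accounting:

```python
def trim_history(messages: list[str], neuron_budget: int) -> list[str]:
    """Keep the most recent messages whose estimated neuron cost fits the budget.

    Assumes ~1 neuron per 4 characters -- a made-up heuristic; real
    neuron counts come from the usage block in each response.
    """
    kept: list[str] = []
    spent = 0
    for msg in reversed(messages):      # walk newest-first
        cost = max(1, len(msg) // 4)
        if spent + cost > neuron_budget:
            break
        kept.append(msg)
        spent += cost
    return list(reversed(kept))         # restore chronological order

history = ["a" * 40, "b" * 40, "c" * 40]   # ~10 estimated neurons each
# trim_history(history, 25) keeps only the two newest messages
```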
✅ Start with defaults - 2048 synapses works for most cases
✅ Monitor usage - Check usage in responses to optimize
✅ Set limits - Use `maxOutputSynapses` to prevent runaway costs
✅ Optimize inputs - Reduce neuron consumption with concise prompts
❌ Don't over-limit - Limits set too low may cut off important responses
❌ Don't ignore usage data - Track patterns to optimize costs
Related:
- Reasoning Models
- Thread Context Modes - Managing neuron usage in conversations