id string
Unique identifier for the response. Format: resp_ followed by alphanumeric characters.
object string
Object type identifier. Always response.
createdAt integer
Unix timestamp (seconds) when the response was created.
model string
The model used to generate the response. Examples: gpt-4o, claude-3-5-sonnet, o3-preview.
choices array
Array of completion choices generated by the model. Typically contains a single choice unless the n parameter is used.
Each choice object contains:
index integer
The index of this choice in the array.
message object
The generated message content.
The message object contains:
role string
The role of the message. Always assistant for model responses.
content string or array
The generated text content or structured content array.
toolCalls array optional
Array of tool calls made by the model during generation.
Each tool call object contains:
id string
Unique identifier for the tool call.
type string
Type of tool. One of function, fileSearch, webSearch, codeInterpreter, image_generation, computer_use_preview, mcp.
function object optional
Function call details (when type is function).
name string
Name of the function called.
arguments string
JSON string of function arguments.
refusal string optional
If the model refused to generate content, this contains the refusal explanation.
finishReason string
Reason why the model stopped generating. Values:
- stop - Natural completion
- length - Reached the max output synapses limit
- toolCalls - Model called tools
- content_filter - Content filtered by moderation
- function_call - Deprecated; use toolCalls instead
logprobs object optional
Log probability information for generated tokens (when logprobs: true).
The logprobs object contains:
content array
Array of token log probability objects.
refusal array optional
Log probabilities for refusal content.
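For orientation, the choice structure above can be sketched as a set of TypeScript interfaces. This is an illustrative sketch, not an official SDK type definition: the interface names are invented here, and the fields simply mirror the reference entries above.

// Illustrative type names; fields mirror the choice structure documented above.
interface FunctionCall {
  name: string;       // name of the function called
  arguments: string;  // JSON-encoded argument string
}

interface ToolCall {
  id: string;
  type: "function" | "fileSearch" | "webSearch" | "codeInterpreter"
      | "image_generation" | "computer_use_preview" | "mcp";
  function?: FunctionCall; // present when type is "function"
}

interface Message {
  role: "assistant";
  content: string | unknown[]; // plain text or structured content parts
  toolCalls?: ToolCall[];
  refusal?: string;
}

interface Choice {
  index: number;
  message: Message;
  finishReason: "stop" | "length" | "toolCalls" | "content_filter" | "function_call";
  logprobs?: { content: unknown[]; refusal?: unknown[] } | null;
}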
usage object
Neuron and synapse usage statistics for this request.
The usage object contains:
inputNeurons integer
Number of neurons consumed by input context (prompt, conversation history, system instructions).
outputSynapses integer
Number of synapses generated in the response output.
totalSynapses integer
Total synapses consumed (input neurons + output synapses + reasoning synapses).
reasoningSynapses integer optional
Synapses used for internal reasoning (reasoning models only).
cacheReadNeurons integer optional
Neurons read from prompt cache (when caching is enabled).
cacheCreationNeurons integer optional
Neurons written to prompt cache for future requests.
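As a hedged sketch of how these counters relate, the helper below assumes the relationship stated above (total synapses = input neurons + output synapses + reasoning synapses). The Usage interface and function name are illustrative, not part of any official SDK.

// Illustrative sketch: reconcile the documented usage counters.
interface Usage {
  inputNeurons: number;
  outputSynapses: number;
  totalSynapses: number;
  reasoningSynapses?: number;
  cacheReadNeurons?: number;
  cacheCreationNeurons?: number;
}

function usageAddsUp(u: Usage): boolean {
  // Assumes totalSynapses = inputNeurons + outputSynapses + reasoningSynapses (when present).
  const expected = u.inputNeurons + u.outputSynapses + (u.reasoningSynapses ?? 0);
  return expected === u.totalSynapses;
}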
organizationId string
The organization ID that owns this response.
threadId string optional
The thread ID this response belongs to (if part of a conversation thread).
metadata object optional
Custom metadata attached to the response request.
systemFingerprint string optional
Fingerprint representing the backend configuration used for this response. Useful for debugging and tracking model versions.
serviceTier string optional
The service tier used for processing. Values: auto, default.
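Putting the top-level fields together, the sketch below shows one way a client might read a response of this shape. The Response interface and handleResponse function are illustrative assumptions (the Choice and Usage types are the sketches shown earlier in this section), not an official client library.

// Illustrative Response shape assembled from the fields documented above.
interface Response {
  id: string;
  object: "response";
  createdAt: number;
  model: string;
  choices: Choice[];
  usage: Usage;
  organizationId: string;
  threadId?: string | null;
  metadata?: Record<string, unknown>;
  systemFingerprint?: string;
  serviceTier?: "auto" | "default";
}

// Read the first choice and branch on why generation stopped.
function handleResponse(res: Response): void {
  const choice = res.choices[0];
  switch (choice.finishReason) {
    case "toolCalls":
      for (const call of choice.message.toolCalls ?? []) {
        if (call.type === "function" && call.function) {
          const args = JSON.parse(call.function.arguments); // arguments arrive as a JSON string
          console.log(`model requested ${call.function.name}`, args);
        }
      }
      break;
    case "length":
      console.warn("output truncated: max output synapses limit reached");
      break;
    default:
      console.log(choice.message.refusal ?? choice.message.content);
  }
}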
A Response object containing the API response data.
{
  "id": "resp_abc123xyz789",
  "object": "response",
  "createdAt": 1728057600,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris. It has been the country's capital since 508 CE and is known for its art, culture, and iconic landmarks like the Eiffel Tower."
      },
      "finishReason": "stop",
      "logprobs": null
    }
  ],
  "usage": {
    "inputNeurons": 24,
    "outputSynapses": 38,
    "totalSynapses": 62,
    "cacheReadNeurons": 0,
    "cacheCreationNeurons": 0
  },
  "organizationId": "org_xyz789",
  "threadId": null,
  "metadata": {},
  "systemFingerprint": "fp_2024_10_04",
  "serviceTier": "default"
}

- Create response - Generate a new model response
- Streaming events - Real-time response streaming
- Input message - Input message format