GET /v1/organizations/{organization_id}/models - Retrieve all AI models available to a specific organization, including active models with their configurations, pricing, and capabilities.
Returns a list of active AI models available to the specified organization. Models are returned in display order and include information about capabilities, token limits, and cost.
organization_id string required
The unique identifier of the organization. Find your organization ID in Freddy Hub → Settings → Organization.
- ✅ Organization-Scoped - Only models available to your organization
- ✅ Active Models Only - Returns only currently available models
- ✅ Ordered Results - Models sorted by display order for UI consumption
- ✅ UI Compatible - Formatted specifically for frontend applications
- ✅ Complete Information - Provider, capabilities, token limits, and pricing
- Requires valid authentication (JWT token or API key)
- The user must be a member of the organization
- Returns 403 if the user does not have access to the organization
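As a sketch of the access rules above, a client can branch on the status code before using the body (the helper name is illustrative; the URL and error handling follow the behavior documented here):

```python
import requests

def fetch_org_models(org_id: str, token: str) -> dict:
    """Fetch models for an organization, surfacing auth/access errors."""
    response = requests.get(
        f"https://api.freddy.aitronos.com/v1/organizations/{org_id}/models",
        headers={"Authorization": f"Bearer {token}"},
    )
    if response.status_code == 401:
        raise PermissionError("Missing or invalid credentials (JWT token or API key)")
    if response.status_code == 403:
        raise PermissionError(f"Not a member of organization {org_id}")
    response.raise_for_status()
    return response.json()
```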
Returns an OrganizationModelsResponse object.
organizationId string
The organization ID from the request path.
models array
Array of Organization Model objects representing available models.
totalCount integer
Total number of models available to the organization.
id string - Internal model ID
title string - Human-readable model name displayed in UI
description string or null - Detailed description of model capabilities
key string - Internal model key for API requests (e.g., gpt-4.1, claude-3-5-sonnet)
external string - External provider identifier
provider string - Model provider (openai, anthropic, aitronos, freddy)
isActive boolean - Whether the model is currently active
isVisibleInUI boolean - Whether the model should be displayed in user interfaces
order integer - Display order for UI sorting (lower numbers appear first)
maxTokens integer or null - Maximum context window size in tokens
costPer1kTokens number or null - Cost per 1,000 tokens (if available)
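The field list above can be mirrored in a typed container on the client side. This is a non-normative sketch (the class and helper names are illustrative); the attribute names match the JSON payload exactly:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OrganizationModel:
    """Mirrors the Organization Model object returned by the endpoint."""
    id: str
    title: str
    description: Optional[str]
    key: str                          # internal key used in API requests
    external: str                     # external provider identifier
    provider: str                     # "openai", "anthropic", "aitronos", "freddy"
    isActive: bool
    isVisibleInUI: bool
    order: int                        # lower numbers appear first
    maxTokens: Optional[int]          # context window size, may be null
    costPer1kTokens: Optional[float]  # may be null

def parse_models(payload: dict) -> list[OrganizationModel]:
    """Build typed instances directly from the response JSON."""
    return [OrganizationModel(**m) for m in payload["models"]]
```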
```bash
curl https://api.freddy.aitronos.com/v1/organizations/org_abc123/models \
  -H "Authorization: Bearer $FREDDY_API_KEY"
```

```json
{
  "organizationId": "org_abc123",
  "models": [
    {
      "id": "1",
      "title": "GPT-4.1",
      "description": "Most capable model for complex tasks",
      "key": "gpt-4.1",
      "external": "gpt-4-turbo-preview",
      "provider": "openai",
      "isActive": true,
      "isVisibleInUI": true,
      "order": 1,
      "maxTokens": 128000,
      "costPer1kTokens": null
    },
    {
      "id": "2",
      "title": "Claude 3.5 Sonnet",
      "description": "Excellent for analysis and long-form content",
      "key": "claude-3-5-sonnet",
      "external": "claude-3-5-sonnet-20241022",
      "provider": "anthropic",
      "isActive": true,
      "isVisibleInUI": true,
      "order": 2,
      "maxTokens": 200000,
      "costPer1kTokens": null
    },
    {
      "id": "3",
      "title": "FTG 3.0",
      "description": "Aitronos flagship model optimized for speed and quality",
      "key": "ftg-3.0",
      "external": "ftg-3.0",
      "provider": "aitronos",
      "isActive": true,
      "isVisibleInUI": true,
      "order": 3,
      "maxTokens": 32768,
      "costPer1kTokens": null
    },
    {
      "id": "4",
      "title": "GPT-3.5 Turbo",
      "description": "Fast and cost-effective for simple tasks",
      "key": "gpt-3.5-turbo",
      "external": "gpt-3.5-turbo",
      "provider": "openai",
      "isActive": true,
      "isVisibleInUI": true,
      "order": 4,
      "maxTokens": 16385,
      "costPer1kTokens": null
    }
  ],
  "totalCount": 4
}
```

```javascript
// Fetch models and build dropdown
async function loadModelSelector() {
  const response = await fetch(
    `https://api.freddy.aitronos.com/v1/organizations/${orgId}/models`,
    { headers: { 'Authorization': `Bearer ${token}` } }
  );
  const data = await response.json();

  // Create dropdown options
  const select = document.getElementById('model-selector');
  data.models.forEach(model => {
    const option = document.createElement('option');
    option.value = model.key;
    option.textContent = `${model.title} - ${model.provider}`;
    if (model.maxTokens) {
      option.textContent += ` (${model.maxTokens.toLocaleString()} tokens)`;
    }
    select.appendChild(option);
  });
}
```

```python
import requests

response = requests.get(
    f"https://api.freddy.aitronos.com/v1/organizations/{org_id}/models",
    headers={"Authorization": f"Bearer {token}"}
)
models = response.json()

# Get only OpenAI models
openai_models = [m for m in models['models'] if m['provider'] == 'openai']
print(f"OpenAI models: {len(openai_models)}")

# Get only Anthropic models
anthropic_models = [m for m in models['models'] if m['provider'] == 'anthropic']
print(f"Anthropic models: {len(anthropic_models)}")

# Get Aitronos models
aitronos_models = [m for m in models['models'] if m['provider'] == 'aitronos']
print(f"Aitronos models: {len(aitronos_models)}")
```

```python
# Find model with largest context window
largest_context = max(
    models['models'],
    key=lambda m: m['maxTokens'] if m['maxTokens'] else 0
)
print(f"Largest context: {largest_context['title']} ({largest_context['maxTokens']} tokens)")

# Find preferred model (lower order = higher priority)
fastest = min(models['models'], key=lambda m: m['order'])
print(f"Preferred model: {fastest['title']}")
```

```python
import json
import requests
from datetime import datetime, timedelta

CACHE_FILE = "models_cache.json"
CACHE_DURATION = timedelta(hours=1)

def get_organization_models(org_id, token, force_refresh=False):
    """Get models with caching"""
    # Check cache
    if not force_refresh:
        try:
            with open(CACHE_FILE, 'r') as f:
                cache = json.load(f)
            cache_time = datetime.fromisoformat(cache['timestamp'])
            if datetime.now() - cache_time < CACHE_DURATION:
                return cache['data']
        except (FileNotFoundError, KeyError, ValueError):
            pass

    # Fetch fresh data
    response = requests.get(
        f"https://api.freddy.aitronos.com/v1/organizations/{org_id}/models",
        headers={"Authorization": f"Bearer {token}"}
    )
    data = response.json()

    # Update cache
    with open(CACHE_FILE, 'w') as f:
        json.dump({
            'timestamp': datetime.now().isoformat(),
            'data': data
        }, f)
    return data
```

Models are returned sorted by the `order` field (ascending), then alphabetically by title. This order is optimized for UI display:
- Lower order numbers = Higher priority/recommended models
- Equal order numbers = Sorted alphabetically
- Use the order as-is for consistent UI presentation across your application
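When you need to re-sort locally (for example after merging cached and fresh data), the documented ordering can be reproduced with a compound sort key. A minimal sketch, using the two fields named above:

```python
def sort_for_display(models: list[dict]) -> list[dict]:
    """Sort models the way the API does: by order ascending, then title."""
    return sorted(models, key=lambda m: (m["order"], m["title"]))

models = [
    {"title": "Claude 3.5 Sonnet", "order": 2},
    {"title": "GPT-4.1", "order": 1},
    {"title": "GPT-4o", "order": 1},
]
# order 1 first; the tie at order 1 is broken alphabetically by title
print([m["title"] for m in sort_for_display(models)])
# → ['GPT-4.1', 'GPT-4o', 'Claude 3.5 Sonnet']
```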
- Model availability can change based on organization tier
- New models may be added without notice
- The `isActive` flag indicates current availability
- `isVisibleInUI` determines whether the model should be shown to users
- Always check `isActive` before presenting models to users
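Combining the two flags above, a UI layer would typically keep only models that are both active and visible. A minimal sketch (the helper name and sample data are illustrative):

```python
def selectable_models(models: list[dict]) -> list[dict]:
    """Keep only models that are active and meant to be shown in the UI."""
    return [m for m in models if m["isActive"] and m["isVisibleInUI"]]

models = [
    {"key": "gpt-4.1", "isActive": True, "isVisibleInUI": True},
    {"key": "legacy-model", "isActive": False, "isVisibleInUI": True},
    {"key": "internal-model", "isActive": True, "isVisibleInUI": False},
]
print([m["key"] for m in selectable_models(models)])  # → ['gpt-4.1']
```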
- List all models - Global model list with full details
- Model object - Field reference for model payloads
- List organization tools - Available tools for the organization
- List organization users - Members associated with the organization
- Cache the Response - Models don't change frequently, cache for at least 1 hour
- Respect the Order - Use the `order` field for consistent UI presentation
- Filter by Provider - Allow users to filter models by provider if needed
- Show Context Limits - Display `maxTokens` to help users choose appropriate models
- Handle Missing Data - `maxTokens` and `costPer1kTokens` may be `null`
- Check isVisibleInUI - Only show models where `isVisibleInUI` is `true`
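The "Handle Missing Data" practice matters most when selecting a model by context size: a `null` `maxTokens` means the limit is unknown, so a conservative client excludes such models rather than assuming they fit. A sketch (the helper name and sample data are illustrative):

```python
def fits_context(model: dict, needed_tokens: int) -> bool:
    """Treat a null maxTokens as unknown and exclude it conservatively."""
    return model["maxTokens"] is not None and model["maxTokens"] >= needed_tokens

models = [
    {"key": "gpt-4.1", "maxTokens": 128000},
    {"key": "gpt-3.5-turbo", "maxTokens": 16385},
    {"key": "experimental", "maxTokens": None},
]
print([m["key"] for m in models if fits_context(m, 50000)])  # → ['gpt-4.1']
```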
This endpoint returns organization-scoped models. For global model information with full details, use the Models API.