🔨 In Development — This endpoint is under active development and not yet available in production. Response fields and availability may change.

GET /v1/organizations/{organization_id}/models - Retrieve all AI models available to a specific organization, including active models with their configurations, pricing, and capabilities.

GET https://api.freddy.aitronos.com/v1/organizations/{organization_id}/models

Returns a list of active AI models available to the specified organization. Models are returned in display order and include information about capabilities, token limits, and cost.

Path Parameters

organization_id string required

The unique identifier of the organization. Find your organization ID in Freddy Hub → Settings → Organization.

Features

  • Organization-Scoped - Only models available to your organization
  • Active Models Only - Returns only currently available models
  • Ordered Results - Models sorted by display order for UI consumption
  • UI Compatible - Formatted specifically for frontend applications
  • Complete Information - Provider, capabilities, token limits, and pricing

Security

  • Requires valid authentication (JWT token or API key)
  • User must be a member of the organization
  • Returns 403 if the user doesn't have access to the organization (see the error-handling sketch below)
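
For example, a request made by a user who is not a member of the organization fails with 403. A minimal error-handling sketch in Python using the requests library (the error response body is not documented here, so only the status code is checked):

import requests

API_BASE = "https://api.freddy.aitronos.com"

def fetch_organization_models(org_id, token):
    """Fetch models for an organization, surfacing auth errors explicitly."""
    response = requests.get(
        f"{API_BASE}/v1/organizations/{org_id}/models",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    if response.status_code == 401:
        raise PermissionError("Invalid or expired credentials (401)")
    if response.status_code == 403:
        raise PermissionError("You are not a member of this organization (403)")
    response.raise_for_status()
    return response.json()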

Response

Returns an OrganizationModelsResponse object.

organizationId string

The organization ID from the request path.

models array

Array of Organization Model objects representing available models.

totalCount integer

Total number of models available to the organization.

Organization Model Object

id string - Internal model ID

title string - Human-readable model name displayed in UI

description string or null - Detailed description of model capabilities

key string - Internal model key for API requests (e.g., gpt-4.1, claude-3-5-sonnet)

external string - External provider identifier

provider string - Model provider (openai, anthropic, aitronos, freddy)

isActive boolean - Whether the model is currently active

isVisibleInUI boolean - Whether the model should be displayed in user interfaces

order integer - Display order for UI sorting (lower numbers appear first)

maxTokens integer or null - Maximum context window size in tokens

costPer1kTokens number or null - Cost per 1,000 tokens (if available)

Bash
curl https://api.freddy.aitronos.com/v1/organizations/org_abc123/models \
  -H "Authorization: Bearer $FREDDY_API_KEY"

Response

{
  "organizationId": "org_abc123",
  "models": [
    {
      "id": "1",
      "title": "GPT-4.1",
      "description": "Most capable model for complex tasks",
      "key": "gpt-4.1",
      "external": "gpt-4-turbo-preview",
      "provider": "openai",
      "isActive": true,
      "isVisibleInUI": true,
      "order": 1,
      "maxTokens": 128000,
      "costPer1kTokens": null
    },
    {
      "id": "2",
      "title": "Claude 3.5 Sonnet",
      "description": "Excellent for analysis and long-form content",
      "key": "claude-3-5-sonnet",
      "external": "claude-3-5-sonnet-20241022",
      "provider": "anthropic",
      "isActive": true,
      "isVisibleInUI": true,
      "order": 2,
      "maxTokens": 200000,
      "costPer1kTokens": null
    },
    {
      "id": "3",
      "title": "FTG 3.0",
      "description": "Aitronos flagship model optimized for speed and quality",
      "key": "ftg-3.0",
      "external": "ftg-3.0",
      "provider": "aitronos",
      "isActive": true,
      "isVisibleInUI": true,
      "order": 3,
      "maxTokens": 32768,
      "costPer1kTokens": null
    },
    {
      "id": "4",
      "title": "GPT-3.5 Turbo",
      "description": "Fast and cost-effective for simple tasks",
      "key": "gpt-3.5-turbo",
      "external": "gpt-3.5-turbo",
      "provider": "openai",
      "isActive": true,
      "isVisibleInUI": true,
      "order": 4,
      "maxTokens": 16385,
      "costPer1kTokens": null
    }
  ],
  "totalCount": 4
}
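
If you want typed access to these fields in client code, they map directly onto a small data class. This is only a sketch derived from the field list above; it is not part of an official SDK:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class OrganizationModel:
    # Field names mirror the Organization Model object documented above.
    id: str
    title: str
    description: Optional[str]
    key: str
    external: str
    provider: str
    isActive: bool
    isVisibleInUI: bool
    order: int
    maxTokens: Optional[int]
    costPer1kTokens: Optional[float]

def parse_models(payload: dict) -> List[OrganizationModel]:
    """Turn the raw OrganizationModelsResponse dict into typed objects."""
    return [OrganizationModel(**m) for m in payload["models"]]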

Use Cases

Build Model Selector UI

// Fetch models and build a dropdown (orgId and token are supplied by the caller)
async function loadModelSelector(orgId, token) {
  const response = await fetch(
    `https://api.freddy.aitronos.com/v1/organizations/${orgId}/models`,
    { headers: { 'Authorization': `Bearer ${token}` } }
  );
  if (!response.ok) {
    throw new Error(`Failed to load models: ${response.status}`);
  }

  const data = await response.json();

  // Create dropdown options
  const select = document.getElementById('model-selector');
  data.models.forEach(model => {
    const option = document.createElement('option');
    option.value = model.key;
    option.textContent = `${model.title} - ${model.provider}`;
    if (model.maxTokens) {
      option.textContent += ` (${model.maxTokens.toLocaleString()} tokens)`;
    }
    select.appendChild(option);
  });
}

Filter by Provider

import requests

response = requests.get(
    f"https://api.freddy.aitronos.com/v1/organizations/{org_id}/models",
    headers={"Authorization": f"Bearer {token}"}
)

models = response.json()

# Get only OpenAI models
openai_models = [m for m in models['models'] if m['provider'] == 'openai']
print(f"OpenAI models: {len(openai_models)}")

# Get only Anthropic models
anthropic_models = [m for m in models['models'] if m['provider'] == 'anthropic']
print(f"Anthropic models: {len(anthropic_models)}")

# Get Aitronos models
aitronos_models = [m for m in models['models'] if m['provider'] == 'aitronos']
print(f"Aitronos models: {len(aitronos_models)}")

Find Best Model for Task

# Find model with largest context window
largest_context = max(
    models['models'],
    key=lambda m: m['maxTokens'] if m['maxTokens'] else 0
)
print(f"Largest context: {largest_context['title']} ({largest_context['maxTokens']} tokens)")

# Find fastest model (assuming lower order = faster/preferred)
fastest = min(models['models'], key=lambda m: m['order'])
print(f"Preferred model: {fastest['title']}")

Cache Models List

import json
import requests
from datetime import datetime, timedelta

CACHE_FILE = "models_cache.json"
CACHE_DURATION = timedelta(hours=1)

def get_organization_models(org_id, token, force_refresh=False):
    """Get models with caching"""
    # Check cache
    if not force_refresh:
        try:
            with open(CACHE_FILE, 'r') as f:
                cache = json.load(f)
                cache_time = datetime.fromisoformat(cache['timestamp'])
                if datetime.now() - cache_time < CACHE_DURATION:
                    return cache['data']
        except (FileNotFoundError, KeyError, ValueError):
            pass
    
    # Fetch fresh data
    response = requests.get(
        f"https://api.freddy.aitronos.com/v1/organizations/{org_id}/models",
        headers={"Authorization": f"Bearer {token}"}
    )
    data = response.json()
    
    # Update cache
    with open(CACHE_FILE, 'w') as f:
        json.dump({
            'timestamp': datetime.now().isoformat(),
            'data': data
        }, f)
    
    return data
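
A quick usage sketch for the helper above; the organization ID is the example value from earlier, and the token is read from an environment variable you set yourself:

import os

# First call hits the API; later calls within an hour are served from the local cache.
models = get_organization_models("org_abc123", os.environ["FREDDY_API_KEY"])
print(f"{models['totalCount']} models available")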

Display Order

Models are returned sorted by the order field (ascending), then alphabetically by title. This order is optimized for UI display:

  1. Lower order numbers = Higher priority/recommended models
  2. Equal order numbers = Sorted alphabetically
  3. Use the order as-is for consistent UI presentation across your application (see the sorting sketch below)
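
If you ever merge or re-render model lists client-side, the documented ordering can be reproduced with a simple sort. A sketch assuming a `models` payload as returned above:

# Re-apply the documented ordering: ascending order value, then title alphabetically.
sorted_models = sorted(models["models"], key=lambda m: (m["order"], m["title"]))
for m in sorted_models:
    print(f"{m['order']:>2}  {m['title']}")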

Model Availability

  • Model availability can change based on organization tier
  • New models may be added without notice
  • isActive flag indicates current availability
  • isVisibleInUI determines if model should be shown to users
  • Always check isActive before presenting models to users (see the filtering sketch below)
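
A minimal filtering sketch that applies both flags before building any UI (assumes a fetched `models` payload as in the examples above):

# Keep only models that are active and intended to be shown to end users.
visible_models = [
    m for m in models["models"]
    if m["isActive"] and m["isVisibleInUI"]
]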

Best Practices

  1. Cache the Response - Models don't change frequently; cache for at least 1 hour
  2. Respect the Order - Use the order field for consistent UI presentation
  3. Filter by Provider - Allow users to filter models by provider if needed
  4. Show Context Limits - Display maxTokens to help users choose appropriate models
  5. Handle Missing Data - maxTokens and costPer1kTokens may be null (see the label sketch below)
  6. Check isVisibleInUI - Only show models where isVisibleInUI is true
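
As a sketch for practice 5, here is a display label that tolerates missing token limits and pricing; the formatting is just an example, not a required convention:

def model_label(m):
    """Build a display label, tolerating null maxTokens and costPer1kTokens."""
    label = f"{m['title']} ({m['provider']})"
    if m.get("maxTokens") is not None:
        label += f", {m['maxTokens']:,} tokens"
    if m.get("costPer1kTokens") is not None:
        label += f", ${m['costPer1kTokens']}/1k tokens"
    return label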

This endpoint returns organization-scoped models. For global model information with full details, use the Models API.