# Create transcription

Submit an audio file for transcription by providing a publicly accessible HTTPS URL. The service validates the URL, calculates the synapse cost from the audio duration and any applicable multipliers, pre-charges the organization's balance, submits the job for processing, and returns an initial response with a transcription ID for polling.

The transcription starts in `queued` status and transitions through `processing` to `completed` or `failed`. Billing uses consumption-based pricing with the FVT-1 model. Priority levels (`normal`, `high`, `urgent`) apply multipliers to the base cost.

#### Request Body

**`organization_id`** string *(required)*

The organization ID (an `org_`-prefixed string) to bill for this transcription.

**`transcription_options`** object *(required)*

Core transcription configuration.

- `audio_url` string *(required)* -- HTTPS URL of the audio file to transcribe.
- `language` string *(required)* -- Language code (e.g., `en`, `es`, `fr`, `de`). If the language is unknown, pass `en` and auto-detection will be applied.

**`speaker_analysis`** object *(optional)*

Speaker diarization configuration.

- `diarization_enabled` boolean -- Enable speaker identification. Defaults to `false`.
- `speakers_expected_count` integer -- Expected number of speakers (1-10).

**`intelligence_features`** object *(optional)*

Advanced AI analysis features.

- `sentiment_analysis_enabled` boolean -- Enable per-sentence sentiment analysis. Defaults to `false`.
- `entity_detection_enabled` boolean -- Enable named entity detection. Defaults to `false`.
- `auto_highlights_enabled` boolean -- Enable automatic key phrase highlights. Defaults to `false`.
- `summarization_enabled` boolean -- Enable automatic summarization. Defaults to `false`.

**`privacy_settings`** object *(optional)*

Privacy and PII redaction settings.

- `pii_redaction_enabled` boolean -- Enable PII redaction in transcripts. Defaults to `false`.
- `pii_policies` string[] -- PII policies to apply (e.g., `["email", "phone_number", "ssn"]`).

**`webhook`** object *(optional)*

Webhook notification configuration.

- `url` string -- HTTPS URL to receive webhook notifications.
- `events` string[] -- Events that trigger the webhook (e.g., `["transcription.completed", "transcription.failed"]`).

## Returns

Returns a success envelope with the following top-level fields:

- **`success`** -- `true` on successful creation.
- **`data`** -- Transcription data including `transcription_id` (`trans_`-prefixed), an initial `status` of `queued`, `model_key`, `audio_metadata`, and `timestamps`.
- **`billing`** -- Billing details including `synapses_consumed`, `synapses_refunded`, `transaction_id`, `currency`, and the applied `multipliers` (base, organization, priority).
- **`metadata`** -- Request metadata including `request_id`, `timestamp`, and `processing_time_ms`.
- **`actions`** -- HATEOAS-style links to related endpoints such as `check_status`.
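Because the job is created in `queued` status, a typical client polls the `check_status` action URL until the status becomes `completed` or `failed`. A minimal standard-library sketch, assuming the status endpoint returns the same envelope shape as the creation response (the interval and timeout values are illustrative, not documented limits):

```python
import json
import time
import urllib.request

TERMINAL_STATUSES = {"completed", "failed"}


def is_terminal(status: str) -> bool:
    """A transcription is done once it leaves `queued`/`processing`."""
    return status in TERMINAL_STATUSES


def poll_transcription(check_status_url, token, interval=5.0, timeout=600.0):
    """Poll the `check_status` action URL until the job reaches a
    terminal status, then return the full response envelope."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            check_status_url,
            headers={"Authorization": f"Bearer {token}"},
        )
        with urllib.request.urlopen(req) as resp:
            envelope = json.load(resp)
        if is_terminal(envelope["data"]["status"]):
            return envelope
        time.sleep(interval)
    raise TimeoutError("transcription did not finish within the timeout")
```

For completed jobs with webhooks configured, polling is usually unnecessary; it is most useful as a fallback or for short clips.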
## Request

```bash cURL
curl -X POST https://api.aitronos.com/v1/audio/transcribe \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "organization_id": "org_abc123def456",
    "transcription_options": {
      "language": "en",
      "audio_url": "https://storage.example.com/meeting-recording.mp3"
    },
    "speaker_analysis": {
      "diarization_enabled": true,
      "speakers_expected_count": 2
    },
    "intelligence_features": {
      "sentiment_analysis_enabled": true,
      "entity_detection_enabled": true,
      "summarization_enabled": true
    },
    "privacy_settings": {
      "pii_redaction_enabled": true,
      "pii_policies": ["email", "phone_number", "ssn"]
    }
  }'
```

```python Python SDK
from aitronos import Aitronos

client = Aitronos(api_key="your-api-key")

result = client.audio.create_transcription(
    organization_id="org_abc123def456",
    transcription_options={
        "language": "en",
        "audio_url": "https://storage.example.com/meeting-recording.mp3",
    },
    speaker_analysis={
        "diarization_enabled": True,
        "speakers_expected_count": 2,
    },
    intelligence_features={
        "sentiment_analysis_enabled": True,
        "entity_detection_enabled": True,
        "summarization_enabled": True,
    },
    privacy_settings={
        "pii_redaction_enabled": True,
        "pii_policies": ["email", "phone_number", "ssn"],
    },
)

print(result.data.transcription_id)
```

```python Python
import requests

url = "https://api.aitronos.com/v1/audio/transcribe"
headers = {
    "Authorization": "Bearer YOUR_ACCESS_TOKEN",
    "Content-Type": "application/json",
}
payload = {
    "organization_id": "org_abc123def456",
    "transcription_options": {
        "language": "en",
        "audio_url": "https://storage.example.com/meeting-recording.mp3",
    },
    "speaker_analysis": {
        "diarization_enabled": True,
        "speakers_expected_count": 2,
    },
    "intelligence_features": {
        "sentiment_analysis_enabled": True,
        "entity_detection_enabled": True,
        "summarization_enabled": True,
    },
}

response = requests.post(url, headers=headers, json=payload)
data = response.json()
print(f"Transcription ID: {data['data']['transcription_id']}")
print(f"Status: {data['data']['status']}")
```

```javascript JavaScript
const accessToken = process.env.ACCESS_TOKEN;

const response = await fetch('https://api.aitronos.com/v1/audio/transcribe', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${accessToken}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    organization_id: 'org_abc123def456',
    transcription_options: {
      language: 'en',
      audio_url: 'https://storage.example.com/meeting-recording.mp3',
    },
    speaker_analysis: {
      diarization_enabled: true,
      speakers_expected_count: 2,
    },
    intelligence_features: {
      sentiment_analysis_enabled: true,
      entity_detection_enabled: true,
      summarization_enabled: true,
    },
  }),
});

const data = await response.json();
console.log('Transcription ID:', data.data.transcription_id);
```

## Response

```json 201 Created
{
  "success": true,
  "data": {
    "transcription_id": "trans_a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6",
    "status": "queued",
    "model_key": "fvt-1",
    "audio_metadata": {
      "url": "https://storage.example.com/meeting-recording.mp3",
      "duration_seconds": 125.5,
      "language_code": "en"
    },
    "timestamps": {
      "created_at": "2026-03-02T10:30:00Z"
    }
  },
  "billing": {
    "synapses_consumed": 126,
    "synapses_refunded": 0,
    "transaction_id": "txn_f1e2d3c4b5a6f7e8d9c0b1a2f3e4d5c6",
    "currency": "synapses",
    "multipliers": {
      "base": 1.0,
      "organization": 1.0,
      "priority": 1.0
    }
  },
  "metadata": {
    "request_id": "req_1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d",
    "timestamp": "2026-03-02T10:30:00Z",
    "processing_time_ms": 245
  },
  "actions": {
    "check_status": "https://api.aitronos.com/v1/audio/transcribe/trans_a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6"
  }
}
```

```json 402 Insufficient Balance
{
  "success": false,
  "error": {
    "code": "INSUFFICIENT_SYNAPSES",
    "message": "You don't have enough synapses to complete this request.",
    "system_message": "Insufficient synapse balance: required 126, available 50",
    "type": "client_error",
    "status": 402,
    "details": {
      "required_synapses": 126,
      "available_synapses": 50,
      "organization_id": "org_abc123def456"
    },
    "trace_id": "abc-123-def",
    "timestamp": "2026-03-02T10:30:00Z"
  }
}
```

```json 422 Validation Error
{
  "success": false,
  "error": {
    "code": "INVALID_AUDIO_URL",
    "message": "The provided audio URL is invalid or inaccessible.",
    "system_message": "Audio URL validation failed",
    "type": "validation_error",
    "status": 422,
    "details": {
      "audio_url": "http://example.com/audio.mp3",
      "reason": "Audio URL must use HTTPS protocol"
    },
    "trace_id": "abc-123-def",
    "timestamp": "2026-03-02T10:30:00Z"
  }
}
```

## Related Resources

- [Get Transcription](/docs/api-reference/audio/get-transcription)
- [List Transcriptions](/docs/api-reference/audio/list-transcriptions)
- [Delete Transcription](/docs/api-reference/audio/delete-transcription)
- [Get Paragraphs](/docs/api-reference/audio/get-paragraphs)
- [Get Sentences](/docs/api-reference/audio/get-sentences)
- [Get Subtitles](/docs/api-reference/audio/get-subtitles)
- [Search Transcript](/docs/api-reference/audio/search-transcript)
- [Get Redacted Audio](/docs/api-reference/audio/get-redacted-audio)
- [Upload Audio File](/docs/api-reference/audio/upload-audio)
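If the request includes a `webhook` object, the service delivers the subscribed events (e.g., `transcription.completed`, `transcription.failed`) to the configured HTTPS URL. A minimal standard-library receiver sketch -- the delivery payload shape (an `event` field plus a `data` object carrying the `transcription_id`) is an assumption for illustration, not documented above:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

HANDLED_EVENTS = {"transcription.completed", "transcription.failed"}


def handle_event(payload):
    """Return the transcription_id for subscribed events, else None.
    NOTE: the `event`/`data` payload shape is assumed, not documented."""
    if payload.get("event") not in HANDLED_EVENTS:
        return None
    return payload.get("data", {}).get("transcription_id")


class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        transcription_id = handle_event(payload)
        if transcription_id:
            # Fetch results via the Get Transcription endpoint here.
            print(f"finished: {transcription_id}")
        # Acknowledge quickly so the sender does not treat it as failed.
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Acknowledge with a `200` before doing any heavy work, and fetch the full transcript through the Get Transcription endpoint rather than relying on the webhook body.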