Get detailed progress information for a scraping job, including completion status and reason.

## Headers

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `X-API-Key` | string | Yes | API key authentication |

## Path Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `job_id` | string | Yes | Job identifier |

## Response

**Status**: `200 OK`

| Field | Type | Description |
| --- | --- | --- |
| `job_id` | string | Unique job identifier |
| `status` | string | `"pending"`, `"processing"`, `"completed"`, `"failed"`, or `"cancelled"` |
| `url` | string | The scraped URL |
| `current_page` | integer | Current page being processed |
| `total_pages_scraped` | integer | Total pages scraped so far |
| `total_properties_found` | integer | Total items extracted |
| `progress_percentage` | number | Completion percentage (0-100) |
| `pagination_complete` | boolean | Whether pagination finished |
| `completion_reason` | string \| null | Why the job completed (see below) |
| `created_at` | string | Job creation timestamp |
| `completed_at` | string \| null | Job completion timestamp |
| `processing_time` | number \| null | Total processing time in seconds |
| `running_time` | number \| null | Current running time (while processing) |
| `error_message` | string \| null | Error details if failed |

### Completion Reason Values

| Value | Description |
| --- | --- |
| `done` | Job completed successfully; all requested pages/items were scraped |
| `timeout` | Job stopped after reaching the configured timeout limit; partial results may be available |
| `error` | Job failed due to an error (check `error_message` for details) |
| `null` | Job is still processing (status is `"pending"` or `"processing"`) |

## Usage Notes

- When `completion_reason` is `"timeout"`, check `total_properties_found` to see how many items were scraped before the timeout
- The timeout is configurable via `options.timeout` when creating a job (default: 300s, max: 1800s)
- For depth scraping jobs, a timeout means some detail pages may not have been scraped

### Get Progress

```bash
curl -X GET https://api.aitronos.com/v1/scrape/jobs/job_abc123/progress \
  -H "X-API-Key: $FREDDY_API_KEY"
```

```python
import os

import requests

api_key = os.environ["FREDDY_API_KEY"]
job_id = "job_abc123"

response = requests.get(
    f"https://api.aitronos.com/v1/scrape/jobs/{job_id}/progress",
    headers={"X-API-Key": api_key},
)
data = response.json()["data"]

print(f"Progress: {data['progress_percentage']}%")
print(f"Items found: {data['total_properties_found']}")
if data["completion_reason"]:
    print(f"Completed: {data['completion_reason']}")
```

```javascript
const axios = require('axios');

const apiKey = process.env.FREDDY_API_KEY;
const jobId = 'job_abc123';

const response = await axios.get(
  `https://api.aitronos.com/v1/scrape/jobs/${jobId}/progress`,
  { headers: { 'X-API-Key': apiKey } }
);
const data = response.data.data;

console.log(`Progress: ${data.progress_percentage}%`);
console.log(`Items found: ${data.total_properties_found}`);
if (data.completion_reason) {
  console.log(`Completed: ${data.completion_reason}`);
}
```

**Response** `200 OK`

```json
{
  "success": true,
  "data": {
    "job_id": "job_abc123",
    "status": "completed",
    "url": "https://example.com/listings",
    "current_page": 1,
    "total_pages_scraped": 1,
    "total_properties_found": 13,
    "progress_percentage": 100.0,
    "pagination_complete": true,
    "completion_reason": "done",
    "created_at": "2026-01-12T07:19:20.123456+00:00",
    "completed_at": "2026-01-12T07:20:26.789012+00:00",
    "processing_time": 60.03,
    "running_time": null,
    "error_message": null
  }
}
```
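In practice, clients poll this endpoint until `status` becomes terminal, then branch on `completion_reason` as the usage notes describe. A minimal Python sketch of that loop; the endpoint URL, header, and field names come from the examples above, while `wait_for_job`, `summarize_progress`, and the 5-second polling interval are illustrative choices, not part of the API:

```python
import os
import time


def summarize_progress(data: dict) -> str:
    """Summarize the `data` object returned by the progress endpoint."""
    reason = data.get("completion_reason")
    if reason == "timeout":
        # Partial results: total_properties_found items were scraped before the timeout
        return f"timed out after {data['total_properties_found']} items"
    if reason == "error":
        return f"failed: {data['error_message']}"
    if reason == "done":
        return f"done: {data['total_properties_found']} items"
    # reason is null: job is still pending or processing
    return f"{data['status']}: {data['progress_percentage']}% complete"


def wait_for_job(job_id: str, poll_interval: float = 5.0) -> dict:
    """Poll the progress endpoint until the job reaches a terminal status."""
    import requests  # imported here so summarize_progress stays dependency-free

    terminal = {"completed", "failed", "cancelled"}
    while True:
        response = requests.get(
            f"https://api.aitronos.com/v1/scrape/jobs/{job_id}/progress",
            headers={"X-API-Key": os.environ["FREDDY_API_KEY"]},
        )
        response.raise_for_status()
        data = response.json()["data"]
        if data["status"] in terminal:
            return data
        print(summarize_progress(data))
        time.sleep(poll_interval)
```

For example, `print(summarize_progress(wait_for_job("job_abc123")))` blocks until the job finishes, then reports whether it completed fully, timed out with partial results, or failed.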