GET https://api.aitronos.com/v1/scrape/jobs
List all scraping jobs for the authenticated user.
Request headers:

| Name | Type | Required | Description |
|---|---|---|---|
| Authorization | string | Yes | Bearer token authentication |
Query parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| status | string | null | Filter by status: "pending", "processing", "completed", "failed", or "cancelled" |
| limit | integer | 50 | Number of jobs to return (1-100) |
| offset | integer | 0 | Pagination offset |
Status: 200 OK
| Field | Type | Description |
|---|---|---|
| jobs | array | Array of job objects |
| total | integer | Total number of jobs matching the filter |
| limit | integer | Limit used in the request |
| offset | integer | Offset used in the request |
Each job in the array contains:
| Field | Type | Description |
|---|---|---|
| job_id | string | Unique job identifier |
| status | string | Job status |
| url | string | Target URL |
| extracted_data | array \| null | Extracted items (null if not completed) |
| error_message | string \| null | Error message if failed |
| metadata | object \| null | Processing metadata |
| created_at | string | Creation timestamp |
| completed_at | string \| null | Completion timestamp |
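
Note that `extracted_data`, `error_message`, and `completed_at` stay null until a job reaches a terminal state, so client code should branch on `status` before reading them. A minimal sketch in Python (the `handle_job` helper is hypothetical; field names follow the table above):

```python
def handle_job(job: dict) -> None:
    """Branch on job status; result fields are only set in terminal states."""
    status = job["status"]
    if status == "completed":
        # extracted_data holds the extracted items once the job finished
        for item in job["extracted_data"] or []:
            print(f"{job['job_id']}: extracted {item!r}")
    elif status == "failed":
        # error_message is populated only for failed jobs
        print(f"{job['job_id']} failed: {job['error_message']}")
    else:
        # pending / processing / cancelled: result fields are still null
        print(f"{job['job_id']} is {status}")
```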
Example request:

```bash
curl -X GET "https://api.aitronos.com/v1/scrape/jobs?limit=20&offset=0" \
  -H "Authorization: Bearer $FREDDY_API_KEY"
```
Response 200 OK:

```json
{
  "jobs": [
    {
      "job_id": "job_abc123",
      "status": "completed",
      "url": "https://example.com/page1",
      "extracted_data": [...],
      "metadata": {...},
      "created_at": "2024-12-16T10:30:00Z",
      "completed_at": "2024-12-16T10:30:02Z"
    },
    {
      "job_id": "job_def456",
      "status": "failed",
      "url": "https://example.com/page2",
      "extracted_data": null,
      "error_message": "Site blocked by anti-bot measures",
      "metadata": {...},
      "created_at": "2024-12-16T10:31:00Z",
      "completed_at": "2024-12-16T10:31:05Z"
    }
  ],
  "total": 45,
  "limit": 20,
  "offset": 0
}
```
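
Because `total` counts every job matching the filter while a single response returns at most `limit` jobs, retrieving a complete listing means advancing `offset` page by page. A pagination sketch under the same assumptions as the Python example above (`requests`, Bearer auth via `FREDDY_API_KEY`); the `list_all_jobs` helper is hypothetical:

```python
import os

import requests

BASE_URL = "https://api.aitronos.com/v1/scrape/jobs"
HEADERS = {"Authorization": f"Bearer {os.environ['FREDDY_API_KEY']}"}
PAGE_SIZE = 100  # documented maximum for the limit parameter

def list_all_jobs(status: str | None = None) -> list[dict]:
    """Advance offset until every job matching the filter has been fetched."""
    jobs: list[dict] = []
    offset = 0
    while True:
        params = {"limit": PAGE_SIZE, "offset": offset}
        if status is not None:
            params["status"] = status
        resp = requests.get(BASE_URL, headers=HEADERS, params=params)
        resp.raise_for_status()
        body = resp.json()
        jobs.extend(body["jobs"])
        offset += PAGE_SIZE
        if offset >= body["total"]:
            break
    return jobs

failed_jobs = list_all_jobs(status="failed")
```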