The Pipeline Connector API enables large-scale data ingestion from external data pipeline platforms and custom connectors. It is designed for batch processing of unstructured data with high throughput.
The Pipeline Connector API provides:
- Batch Upsert: Process up to 1,000 records per request
- Flexible Schema: Accept free-form data structures from any source system
- File Attachments: Support for file metadata and staged file references
- Multi-Source Support: Works with multiple data pipeline platforms and custom connectors
- Rate Limiting: 100 requests per minute per organization
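Given the 1,000-record batch limit and the 100 requests/minute rate limit above, a client typically chunks its records before sending. A minimal sketch in Python (the chunking helper is illustrative, not part of the API):

```python
# The 1,000-record cap mirrors the documented batch upsert limit.
MAX_BATCH_SIZE = 1000

def chunk_records(records, batch_size=MAX_BATCH_SIZE):
    """Yield successive batches of at most batch_size records."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

# 2,500 records split into batches of 1,000, 1,000, and 500.
batches = list(chunk_records([{"id": i} for i in range(2500)]))
```

Each batch would then be sent as one upsert request, pacing requests to stay under the per-organization rate limit.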
Pipeline endpoints require a dedicated Pipeline Connector API Key passed via the X-API-Key header.
```shell
curl https://api.aitronos.com/v1/connectors/pipeline/upsert \
  -H "X-API-Key: $PIPELINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{...}'
```

Common use cases:

- Data Warehouse Integration: Sync data from Snowflake, BigQuery, or Redshift
- SaaS Application Sync: Ingest data from ClickUp, Jira, Notion, etc.
- Custom ETL Pipelines: Build custom data ingestion workflows
- File Processing: Process documents, spreadsheets, and other files
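The `-d '{...}'` body in the curl example above is elided, and its exact schema is not shown here. Since the API accepts free-form data structures, a hypothetical payload might be assembled like this; all field names (`source`, `records`, `external_id`, `data`) are illustrative assumptions, not the documented schema:

```python
import json

# Hypothetical upsert payload -- field names are illustrative only;
# consult the actual request schema before relying on them.
payload = {
    "source": "custom-etl",
    "records": [
        {"external_id": "row-1", "data": {"title": "Q3 report", "status": "done"}},
        {"external_id": "row-2", "data": {"title": "Q4 plan", "status": "open"}},
    ],
}
body = json.dumps(payload)
```

The serialized `body` would be passed as the request payload in place of `'{...}'`.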
Endpoints:

- POST /v1/connectors/pipeline/upsert - Batch upsert data records
- GET /v1/connectors/pipeline/health - Health check endpoint
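The health endpoint needs no payload, only the API key header. A sketch using Python's standard library that builds (but does not send) the request; the URL and header name are as documented above, while the placeholder key value is illustrative:

```python
from urllib.request import Request

# Build a GET request for the health endpoint with the required key header.
# urllib normalizes stored header names (e.g. "X-API-Key" -> "X-api-key").
req = Request(
    "https://api.aitronos.com/v1/connectors/pipeline/health",
    headers={"X-API-Key": "your-pipeline-api-key"},
    method="GET",
)
```

Passing `req` to `urllib.request.urlopen` would perform the actual health check.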
Related resources:

- Get Tool Connections — List available connector tools
- Toggle Connection — Enable or disable a connection
- Initiate OAuth — Start OAuth flow for a connector
- Personal Connectors Guide — How connectors work