Learn how to configure parameters for your Streamline automations and pass values during execution.
Parameters allow you to:
- Make automations reusable with different inputs
- Configure behavior without changing code
- Pass runtime values during execution
- Set default values for optional parameters
Parameters are defined in `streamline.yaml`:

```yaml
name: my-automation
description: Process data with configurable parameters
execution_file: main.py
parameters:
  - name: api_key
    type: string
    required: true
    description: "API key for authentication"
  - name: batch_size
    type: integer
    required: false
    default: 100
    description: "Number of records to process per batch"
  - name: enable_notifications
    type: boolean
    required: false
    default: false
    description: "Send notifications on completion"
```
Text values:

```yaml
- name: message
  type: string
  required: true
  description: "Message to process"
```

Usage in Python:

```python
def main(message: str):
    print(f"Processing: {message}")
```
Whole numbers:

```yaml
- name: count
  type: integer
  required: false
  default: 10
  description: "Number of items to process"
```

Usage in Python:

```python
def main(count: int = 10):
    for i in range(count):
        process_item(i)
```
Decimal numbers:

```yaml
- name: threshold
  type: float
  required: false
  default: 0.75
  description: "Confidence threshold"
```

Usage in Python:

```python
def main(threshold: float = 0.75):
    confidence = compute_confidence()  # placeholder for your scoring logic
    if confidence > threshold:
        accept_result()
```
True/false values:

```yaml
- name: debug_mode
  type: boolean
  required: false
  default: false
  description: "Enable debug logging"
```

Usage in Python:

```python
def main(debug_mode: bool = False):
    if debug_mode:
        print("Debug mode enabled")
```
List of values:

```yaml
- name: tags
  type: array
  required: false
  default: []
  description: "List of tags to filter"
```

Usage in Python:

```python
def main(tags: list = None):
    tags = tags or []
    for tag in tags:
        process_tag(tag)
```
JSON objects:

```yaml
- name: config
  type: object
  required: false
  default: {}
  description: "Configuration object"
```

Usage in Python:

```python
def main(config: dict = None):
    config = config or {}
    timeout = config.get("timeout", 30)
    retries = config.get("retries", 3)
```
Required parameters must be provided during execution:

```yaml
- name: api_key
  type: string
  required: true
  description: "API key (required)"
```

Execution without the required parameter fails:

```bash
# ❌ Error: Missing required parameter 'api_key'
aitronos streamline execute <automation-id>

# ✅ Success
aitronos streamline execute <automation-id> \
  --param api_key="your-api-key"
```
Optional parameters use their default value if not provided:

```yaml
- name: batch_size
  type: integer
  required: false
  default: 100
  description: "Batch size (optional)"
```

Execution uses the default if not specified:

```bash
# Uses default: batch_size=100
aitronos streamline execute <automation-id>

# Override default
aitronos streamline execute <automation-id> \
  --param batch_size=50
```

Pass parameters on the command line with repeated `--param` flags:

```bash
# Single parameter
aitronos streamline execute <automation-id> \
  --param api_key="your-key"

# Multiple parameters
aitronos streamline execute <automation-id> \
  --param api_key="your-key" \
  --param batch_size=50 \
  --param debug_mode=true

# Array parameter
aitronos streamline execute <automation-id> \
  --param tags='["tag1","tag2","tag3"]'

# Object parameter
aitronos streamline execute <automation-id> \
  --param config='{"timeout":60,"retries":5}'
```

To pass parameters via the API, include them in the request body:

```bash
curl -X POST "https://api.aitronos.com/v1/streamline/automations/{automation_id}/execute" \
-H "Authorization: Bearer $FREDDY_SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"parameters": {
"api_key": "your-key",
"batch_size": 50,
"debug_mode": true,
"tags": ["tag1", "tag2", "tag3"],
"config": {
"timeout": 60,
"retries": 5
}
},
"return_mode": "wait"
}'Parameters for scheduled executions are set when creating the schedule:
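The same call from Python, as a minimal sketch using the `requests` library. The endpoint, token variable, and `return_mode` come from the curl example above; `AUTOMATION_ID` and the parameter values are illustrative placeholders:

```python
import os

import requests

AUTOMATION_ID = "your-automation-id"  # replace with a real automation ID
url = f"https://api.aitronos.com/v1/streamline/automations/{AUTOMATION_ID}/execute"

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {os.environ['FREDDY_SESSION_TOKEN']}"},
    json={
        "parameters": {"api_key": "your-key", "batch_size": 50},
        "return_mode": "wait",  # wait for the run to finish, as in the curl example
    },
    timeout=120,
)
response.raise_for_status()
print(response.json())
```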
Parameters for scheduled executions are set when creating the schedule:

```bash
# Schedule with parameters
curl -X POST "https://api.aitronos.com/v1/streamline/automations/{automation_id}/schedule" \
  -H "Authorization: Bearer $FREDDY_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "cron_expression": "0 9 * * *",
    "timezone": "UTC",
    "parameters": {
      "api_key": "your-key",
      "batch_size": 100
    }
  }'
```
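The schedule request from Python follows the same pattern; as a sketch, with `schedule_automation` as an illustrative wrapper (not an official client) and fields mirroring the curl example:

```python
import os

import requests

def schedule_automation(automation_id: str, cron: str, parameters: dict) -> dict:
    """Create a schedule for an automation. Illustrative wrapper, not an official client."""
    url = f"https://api.aitronos.com/v1/streamline/automations/{automation_id}/schedule"
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {os.environ['FREDDY_SESSION_TOKEN']}"},
        json={"cron_expression": cron, "timezone": "UTC", "parameters": parameters},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Run daily at 09:00 UTC with a fixed batch size
schedule_automation("your-automation-id", "0 9 * * *", {"api_key": "your-key", "batch_size": 100})
```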
Parameters are passed to your `main` function as keyword arguments:

```python
def main(api_key: str, batch_size: int = 100, debug_mode: bool = False):
    """
    Main automation function.

    Args:
        api_key: API key for authentication (required)
        batch_size: Number of records per batch (optional, default: 100)
        debug_mode: Enable debug logging (optional, default: False)
    """
    if debug_mode:
        # Avoid printing the raw api_key; it's a secret
        print(f"API key provided: {bool(api_key)}")
        print(f"Batch Size: {batch_size}")

    # Your automation logic
    process_data(api_key, batch_size)
```
For flexible parameter handling:

```python
def main(**kwargs):
    """Main automation with flexible parameters."""
    api_key = kwargs.get("api_key")
    batch_size = kwargs.get("batch_size", 100)
    debug_mode = kwargs.get("debug_mode", False)

    if not api_key:
        raise ValueError("api_key is required")

    # Your automation logic
    process_data(api_key, batch_size)
```
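A variation on the same idea is to merge incoming values over a single defaults dict, so every fallback is declared in one place. A sketch, reusing the parameter names from the example above:

```python
DEFAULTS = {"batch_size": 100, "debug_mode": False}

def main(**kwargs):
    """Merge caller-supplied parameters over declared defaults."""
    params = {**DEFAULTS, **kwargs}  # caller-supplied values win

    if not params.get("api_key"):
        raise ValueError("api_key is required")

    process_data(params["api_key"], params["batch_size"])
```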
Add type checking for robustness:

```python
def main(api_key: str, batch_size: int = 100):
    """Main automation with type validation."""
    # Validate types
    if not isinstance(api_key, str):
        raise TypeError("api_key must be a string")
    if not isinstance(batch_size, int):
        raise TypeError("batch_size must be an integer")
    if batch_size <= 0:
        raise ValueError("batch_size must be positive")

    # Your automation logic
    process_data(api_key, batch_size)
```
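If a schema library such as `pydantic` happens to be available in your execution environment (an assumption, not a Streamline requirement), the same checks can be written declaratively:

```python
from pydantic import BaseModel, Field, ValidationError

class Params(BaseModel):
    api_key: str
    batch_size: int = Field(default=100, gt=0)  # must be a positive integer

def main(**kwargs):
    """Validate incoming parameters against a declared schema."""
    try:
        params = Params(**kwargs)
    except ValidationError as exc:
        raise ValueError(f"Invalid parameters: {exc}") from exc

    process_data(params.api_key, params.batch_size)
```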
For sensitive values, use environment variables instead of parameters. Don't include secrets as defaults:

```yaml
# ❌ Bad: Hardcoded secret
parameters:
  - name: api_key
    type: string
    default: "sk_live_abc123"  # Don't do this!

# ✅ Good: No default for sensitive values
parameters:
  - name: api_key
    type: string
    required: true
    description: "API key (pass securely)"
```
Use environment variables for secrets:

```python
import os

def main(api_key: str = None):
    """Main automation using environment variables."""
    # Prefer the environment variable over the parameter
    api_key = api_key or os.getenv("API_KEY")
    if not api_key:
        raise ValueError("API key not provided")

    # Your automation logic
    process_data(api_key)
```

Environment variables can be set in the execution environment (future feature).
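Where several secrets are involved, a tiny helper keeps the failure message uniform. A sketch; `API_KEY` and `DB_PASSWORD` are example variable names:

```python
import os

def require_env(name: str) -> str:
    """Return the named environment variable or fail with a clear error."""
    value = os.getenv(name)
    if not value:
        raise ValueError(f"Missing required environment variable: {name}")
    return value

def main(**kwargs):
    """Secrets come from the environment; no sensitive parameters needed."""
    api_key = require_env("API_KEY")
    db_password = require_env("DB_PASSWORD")
    process_data(api_key, db_password)
```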
Objects can carry nested configuration:

```yaml
- name: database_config
  type: object
  required: true
  description: "Database configuration"
```

Usage:

```python
def main(database_config: dict):
    """Main automation with nested config."""
    host = database_config.get("host", "localhost")
    port = database_config.get("port", 5432)
    credentials = database_config.get("credentials", {})
    username = credentials.get("username")
    password = credentials.get("password")

    connect_to_database(host, port, username, password)
```

Execution:

```bash
aitronos streamline execute <automation-id> \
  --param database_config='{
    "host": "db.example.com",
    "port": 5432,
    "credentials": {
      "username": "user",
      "password": "pass"
    }
  }'
```
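Nested objects are easy to mistype, so it can be worth checking their shape before connecting. A sketch; the required keys follow the example above:

```python
def validate_database_config(config: dict) -> None:
    """Fail fast if the nested config is missing required keys."""
    for key in ("host", "port"):
        if key not in config:
            raise ValueError(f"database_config missing required key: {key}")
    credentials = config.get("credentials", {})
    for key in ("username", "password"):
        if key not in credentials:
            raise ValueError(f"database_config.credentials missing key: {key}")

def main(database_config: dict):
    validate_database_config(database_config)
    connect_to_database(
        database_config["host"],
        database_config["port"],
        database_config["credentials"]["username"],
        database_config["credentials"]["password"],
    )
```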
Arrays can hold objects, for example a list of tasks:

```yaml
- name: tasks
  type: array
  required: true
  description: "List of tasks to process"
```

Usage:

```python
def main(tasks: list):
    """Process list of tasks."""
    for task in tasks:
        task_id = task.get("id")
        task_type = task.get("type")
        task_data = task.get("data", {})
        process_task(task_id, task_type, task_data)
```

Execution:

```bash
aitronos streamline execute <automation-id> \
  --param tasks='[
    {"id": 1, "type": "email", "data": {"to": "user@example.com"}},
    {"id": 2, "type": "sms", "data": {"phone": "+1234567890"}}
  ]'
```

Use descriptive parameter names:

```yaml
# ❌ Bad: Unclear names
- name: x
  type: integer

# ✅ Good: Descriptive names
- name: max_retries
  type: integer
  description: "Maximum number of retry attempts"
```

Document defaults in the description:

```yaml
- name: timeout
  type: integer
  required: false
  default: 30
  description: "Request timeout in seconds (default: 30)"
```

Provide sensible defaults and validate input:

```yaml
- name: batch_size
  type: integer
  required: false
  default: 100  # Reasonable default
  description: "Records per batch"
```

```python
def main(batch_size: int = 100):
    """Validate parameters."""
    if batch_size <= 0:
        raise ValueError("batch_size must be positive")
    if batch_size > 1000:
        raise ValueError("batch_size cannot exceed 1000")

    # Continue with validated input
    process_data(batch_size)
```
"""
Fetch data from external API.
Args:
api_key: API authentication key (required)
endpoint: API endpoint URL (optional, default: https://api.example.com)
Returns:
dict: Processed data
Raises:
ValueError: If api_key is empty
ConnectionError: If API is unreachable
"""
passdef main(**kwargs):
"""Handle missing parameters gracefully."""
required_params = ["api_key", "data_source"]
for param in required_params:
if param not in kwargs:
raise ValueError(f"Missing required parameter: {param}")
# Continue with validated parameters
process_data(kwargs)# streamline.yaml
A complete notification automation:

```yaml
# streamline.yaml
name: send-notification
description: Send notification with custom message
execution_file: main.py
parameters:
  - name: message
    type: string
    required: true
    description: "Notification message"
  - name: recipient
    type: string
    required: true
    description: "Recipient email"
```

```python
# main.py
def main(message: str, recipient: str):
    """Send notification."""
    send_email(recipient, "Notification", message)
    return {"success": True, "recipient": recipient}
```
A data-processing automation with optional parameters:

```yaml
# streamline.yaml
name: process-data
description: Process data with configurable options
execution_file: main.py
parameters:
  - name: data_source
    type: string
    required: true
    description: "Data source identifier"
  - name: batch_size
    type: integer
    required: false
    default: 100
    description: "Records per batch"
  - name: filters
    type: object
    required: false
    default: {}
    description: "Data filters"
```

```python
# main.py
def main(data_source: str, batch_size: int = 100, filters: dict = None):
    """Process data with filters."""
    filters = filters or {}
    data = fetch_data(data_source, filters)
    results = process_in_batches(data, batch_size)
    return {
        "success": True,
        "records_processed": len(results),
        "batch_size": batch_size
    }
```
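Executed with the optional parameters overridden (example values; the required `data_source` is always passed):

```bash
aitronos streamline execute <automation-id> \
  --param data_source="orders" \
  --param batch_size=250 \
  --param filters='{"status":"active"}'
```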
Next steps:

- Scheduling - Schedule automations with parameters
- Best Practices - Follow recommended patterns
- API Reference - Execution API documentation

Need help with parameters?

- Email: support@aitronos.com
- Documentation: Streamline Overview