Learn how to configure parameters for your Streamline automations and pass values during execution.

Overview

Parameters allow you to:

  • Make automations reusable with different inputs
  • Configure behavior without changing code
  • Pass runtime values during execution
  • Set default values for optional parameters
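As a minimal sketch of these ideas (the function and parameter names here are illustrative, not from a real automation), one entry point can serve different inputs without any code changes:

```python
# Illustrative entry point: one function, reusable with different runtime inputs.
def main(data_source: str, batch_size: int = 100, dry_run: bool = False) -> dict:
    """Process records from data_source; behavior is driven entirely by parameters."""
    records = list(range(batch_size))  # stand-in for real data fetching
    if dry_run:
        return {"source": data_source, "would_process": len(records)}
    return {"source": data_source, "processed": len(records)}

# The same automation, two configurations:
print(main("orders"))                               # uses the defaults
print(main("orders", batch_size=10, dry_run=True))  # overridden at execution time
```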

Defining Parameters

Parameters are defined in streamline.yaml:

name: my-automation
description: Process data with configurable parameters
execution_file: main.py
parameters:
  - name: api_key
    type: string
    required: true
    description: "API key for authentication"

  - name: batch_size
    type: integer
    required: false
    default: 100
    description: "Number of records to process per batch"

  - name: enable_notifications
    type: boolean
    required: false
    default: false
    description: "Send notifications on completion"

Parameter Types

String

Text values:

- name: message
  type: string
  required: true
  description: "Message to process"

Usage in Python:

def main(message: str):
    print(f"Processing: {message}")

Integer

Whole numbers:

- name: count
  type: integer
  required: false
  default: 10
  description: "Number of items to process"

Usage in Python:

def main(count: int = 10):
    for i in range(count):
        process_item(i)

Float

Decimal numbers:

- name: threshold
  type: float
  required: false
  default: 0.75
  description: "Confidence threshold"

Usage in Python:

def main(threshold: float = 0.75):
    # 'confidence' is computed by your automation logic
    if confidence > threshold:
        accept_result()

Boolean

True/false values:

- name: debug_mode
  type: boolean
  required: false
  default: false
  description: "Enable debug logging"

Usage in Python:

def main(debug_mode: bool = False):
    if debug_mode:
        print("Debug mode enabled")

Array

List of values:

- name: tags
  type: array
  required: false
  default: []
  description: "List of tags to filter"

Usage in Python:

def main(tags: list | None = None):
    tags = tags or []
    for tag in tags:
        process_tag(tag)

Object

JSON objects:

- name: config
  type: object
  required: false
  default: {}
  description: "Configuration object"

Usage in Python:

def main(config: dict | None = None):
    config = config or {}
    timeout = config.get("timeout", 30)
    retries = config.get("retries", 3)

Required vs Optional

Required Parameters

Must be provided during execution:

- name: api_key
  type: string
  required: true
  description: "API key (required)"

Execution fails if a required parameter is missing:

# Error: Missing required parameter 'api_key'
aitronos streamline execute <automation-id>

# Success
aitronos streamline execute <automation-id> \
 --param api_key="your-api-key"

Optional Parameters

The default value is used when the parameter is not provided:

- name: batch_size
  type: integer
  required: false
  default: 100
  description: "Batch size (optional)"

Execution uses default if not specified:

# Uses default: batch_size=100
aitronos streamline execute <automation-id>

# Override default
aitronos streamline execute <automation-id> \
 --param batch_size=50

Passing Parameters

Via CLI

# Single parameter
aitronos streamline execute <automation-id> \
 --param api_key="your-key"

# Multiple parameters
aitronos streamline execute <automation-id> \
 --param api_key="your-key" \
 --param batch_size=50 \
 --param debug_mode=true

# Array parameter
aitronos streamline execute <automation-id> \
 --param tags='["tag1","tag2","tag3"]'

# Object parameter
aitronos streamline execute <automation-id> \
 --param config='{"timeout":60,"retries":5}'

Via API

curl -X POST "https://api.aitronos.com/v1/streamline/automations/{automation_id}/execute" \
  -H "Authorization: Bearer $FREDDY_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "parameters": {
      "api_key": "your-key",
      "batch_size": 50,
      "debug_mode": true,
      "tags": ["tag1", "tag2", "tag3"],
      "config": {
        "timeout": 60,
        "retries": 5
      }
    },
    "return_mode": "wait"
  }'

Scheduled Executions

Parameters for scheduled executions are set when creating the schedule:

# Schedule with parameters
curl -X POST "https://api.aitronos.com/v1/streamline/automations/{automation_id}/schedule" \
  -H "Authorization: Bearer $FREDDY_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "cron_expression": "0 9 * * *",
    "timezone": "UTC",
    "parameters": {
      "api_key": "your-key",
      "batch_size": 100
    }
  }'

Accessing Parameters in Code

Function Arguments

Parameters are passed as keyword arguments:

def main(api_key: str, batch_size: int = 100, debug_mode: bool = False):
    """
    Main automation function.

    Args:
        api_key: API key for authentication (required)
        batch_size: Number of records per batch (optional, default: 100)
        debug_mode: Enable debug logging (optional, default: False)
    """
    if debug_mode:
        print(f"API key provided: {bool(api_key)}")  # avoid logging the secret itself
        print(f"Batch Size: {batch_size}")

    # Your automation logic
    process_data(api_key, batch_size)

Using **kwargs

For flexible parameter handling:

def main(**kwargs):
    """Main automation with flexible parameters."""
    api_key = kwargs.get("api_key")
    batch_size = kwargs.get("batch_size", 100)
    debug_mode = kwargs.get("debug_mode", False)

    if not api_key:
        raise ValueError("api_key is required")

    # Your automation logic
    process_data(api_key, batch_size)

Type Validation

Add type checking for robustness:

def main(api_key: str, batch_size: int = 100):
    """Main automation with type validation."""
    # Validate types
    if not isinstance(api_key, str):
        raise TypeError("api_key must be a string")

    if not isinstance(batch_size, int):
        raise TypeError("batch_size must be an integer")

    if batch_size <= 0:
        raise ValueError("batch_size must be positive")

    # Your automation logic
    process_data(api_key, batch_size)
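Depending on how values reach your code, they may arrive as strings (e.g. from `--param batch_size=50` on the command line). The helpers below are a defensive sketch, not part of the platform, for normalizing such inputs before validation:

```python
def coerce_int(value, name: str) -> int:
    """Accept an int, or a string that cleanly parses as one; reject everything else."""
    if isinstance(value, bool):  # bool is a subclass of int; treat it as invalid here
        raise TypeError(f"{name} must be an integer, got boolean")
    if isinstance(value, int):
        return value
    if isinstance(value, str):
        try:
            return int(value)
        except ValueError:
            pass
    raise TypeError(f"{name} must be an integer")

def coerce_bool(value, name: str) -> bool:
    """Accept a bool, or the strings 'true'/'false' (case-insensitive)."""
    if isinstance(value, bool):
        return value
    if isinstance(value, str) and value.lower() in ("true", "false"):
        return value.lower() == "true"
    raise TypeError(f"{name} must be a boolean")

print(coerce_int("50", "batch_size"))     # 50
print(coerce_bool("true", "debug_mode"))  # True
```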

Environment Variables

For sensitive values, use environment variables instead of parameters:

In streamline.yaml

Don't include sensitive values:

# Bad: Hardcoded secret
parameters:
  - name: api_key
    type: string
    default: "sk_live_abc123"  # Don't do this!

# Good: No default for sensitive values
parameters:
  - name: api_key
    type: string
    required: true
    description: "API key (pass securely)"

In Python Code

Use environment variables for secrets:

import os

def main(api_key: str | None = None):
    """Main automation using environment variables."""
    # Fall back to the environment variable when no parameter is given
    api_key = api_key or os.getenv("API_KEY")

    if not api_key:
        raise ValueError("API key not provided")

    # Your automation logic
    process_data(api_key)

Setting Environment Variables

Environment variables can be set in the execution environment (future feature).

Complex Parameters

Nested Objects

- name: database_config
  type: object
  required: true
  description: "Database configuration"

Usage:

def main(database_config: dict):
    """Main automation with nested config."""
    host = database_config.get("host", "localhost")
    port = database_config.get("port", 5432)
    credentials = database_config.get("credentials", {})
    username = credentials.get("username")
    password = credentials.get("password")

    connect_to_database(host, port, username, password)

Execution:

aitronos streamline execute <automation-id> \
  --param database_config='{
    "host": "db.example.com",
    "port": 5432,
    "credentials": {
      "username": "user",
      "password": "pass"
    }
  }'

Array of Objects

- name: tasks
  type: array
  required: true
  description: "List of tasks to process"

Usage:

def main(tasks: list):
    """Process list of tasks."""
    for task in tasks:
        task_id = task.get("id")
        task_type = task.get("type")
        task_data = task.get("data", {})

        process_task(task_id, task_type, task_data)

Execution:

aitronos streamline execute <automation-id> \
  --param tasks='[
    {"id": 1, "type": "email", "data": {"to": "user@example.com"}},
    {"id": 2, "type": "sms", "data": {"phone": "+1234567890"}}
  ]'

Best Practices

1. Use Descriptive Names

# Bad: Unclear names
- name: x
  type: integer

# Good: Descriptive names
- name: max_retries
  type: integer
  description: "Maximum number of retry attempts"

2. Provide Descriptions

- name: timeout
  type: integer
  required: false
  default: 30
  description: "Request timeout in seconds (default: 30)"

3. Set Sensible Defaults

- name: batch_size
  type: integer
  required: false
  default: 100  # Reasonable default
  description: "Records per batch"

4. Validate Input

def main(batch_size: int = 100):
    """Validate parameters."""
    if batch_size <= 0:
        raise ValueError("batch_size must be positive")

    if batch_size > 1000:
        raise ValueError("batch_size cannot exceed 1000")

    # Continue with validated input
    process_data(batch_size)

5. Document Parameter Usage

def main(api_key: str, endpoint: str = "https://api.example.com"):
    """
    Fetch data from external API.

    Args:
        api_key: API authentication key (required)
        endpoint: API endpoint URL (optional, default: https://api.example.com)

    Returns:
        dict: Processed data

    Raises:
        ValueError: If api_key is empty
        ConnectionError: If API is unreachable
    """
    pass

6. Handle Missing Parameters

def main(**kwargs):
    """Handle missing parameters gracefully."""
    required_params = ["api_key", "data_source"]

    for param in required_params:
        if param not in kwargs:
            raise ValueError(f"Missing required parameter: {param}")

    # Continue with validated parameters
    process_data(kwargs)

Examples

Simple Automation

# streamline.yaml
name: send-notification
description: Send notification with custom message
execution_file: main.py
parameters:
  - name: message
    type: string
    required: true
    description: "Notification message"

  - name: recipient
    type: string
    required: true
    description: "Recipient email"

# main.py
def main(message: str, recipient: str):
    """Send notification."""
    send_email(recipient, "Notification", message)
    return {"success": True, "recipient": recipient}

Data Processing Automation

# streamline.yaml
name: process-data
description: Process data with configurable options
execution_file: main.py
parameters:
  - name: data_source
    type: string
    required: true
    description: "Data source identifier"

  - name: batch_size
    type: integer
    required: false
    default: 100
    description: "Records per batch"

  - name: filters
    type: object
    required: false
    default: {}
    description: "Data filters"

# main.py
def main(data_source: str, batch_size: int = 100, filters: dict | None = None):
    """Process data with filters."""
    filters = filters or {}

    data = fetch_data(data_source, filters)
    results = process_in_batches(data, batch_size)

    return {
        "success": True,
        "records_processed": len(results),
        "batch_size": batch_size
    }
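Before deploying, it can help to exercise main() locally with the same parameter shapes the platform would pass. The harness below stubs out the data layer (fetch_data and process_in_batches are placeholders here, just as they are in the example above):

```python
def fetch_data(data_source: str, filters: dict) -> list:
    """Stub: pretend the source yields 250 records matching the filters."""
    return [{"id": i, "source": data_source} for i in range(250)]

def process_in_batches(data: list, batch_size: int) -> list:
    """Stub: 'process' every record, batch_size at a time."""
    return [record["id"] for record in data]

def main(data_source: str, batch_size: int = 100, filters: dict = None):
    """Same signature as the data-processing example above."""
    filters = filters or {}
    data = fetch_data(data_source, filters)
    results = process_in_batches(data, batch_size)
    return {"success": True, "records_processed": len(results), "batch_size": batch_size}

# Simulate an execution with overridden parameters:
result = main("crm-export", batch_size=50, filters={"status": "active"})
print(result)  # {'success': True, 'records_processed': 250, 'batch_size': 50}
```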
