🔨 In Development — This section is still being developed and may change.
MCP is an open standard for connecting AI models to external data sources, enabling seamless integration between language models and third-party services.

What is MCP?

Model Context Protocol (MCP) is a universal interface that allows AI models to query and interact with external systems—databases, APIs, file storage, and more—without custom integration code for each service.

Traditional Approach:        MCP Approach:
--------------------        --------------
AI ↔ Custom Code ↔ API     AI ↔ MCP Server ↔ API
      ↕                           ↕
   Custom Code ↔ DB          MCP Server ↔ DB

Why MCP?

Standardization

Instead of building unique integrations for every data source, MCP provides a single protocol:

  • One interface - Works with any MCP-compatible server
  • Consistent behavior - Predictable query/response patterns
  • Interoperable - Mix and match servers freely

Flexibility

  • Pre-built servers - Google Drive, Slack, GitHub, etc.
  • Custom servers - Build your own for proprietary systems
  • Composable - Combine multiple servers in one request

Security

  • Scoped access - Fine-grained permissions
  • User-level auth - Each user authorizes their own data
  • Zero storage - Real-time queries, no data retention

How MCP Works

Architecture

┌─────────────────────────────────────────┐
│           AI Model (Freddy)             │
│  Generates queries, processes responses │
└──────────────────┬──────────────────────┘
                   │ MCP Protocol

┌─────────────────────────────────────────┐
│           MCP Server Layer              │
│  Routes requests, handles auth, caching │
└──────────────────┬──────────────────────┘
                   │ API/Database Calls

┌─────────────────────────────────────────┐
│        External Data Sources            │
│  Google Drive, GitHub, SQL, REST APIs   │
└─────────────────────────────────────────┘

Request Flow

  1. User sends prompt with MCP tool enabled
  2. AI determines if external data is needed
  3. AI generates MCP query (search, filter, retrieve)
  4. MCP server executes query against data source
  5. Results return to AI for processing
  6. AI synthesizes final response with context
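The six steps above can be sketched end to end. This is an illustrative stand-in, not SDK code: `needs_external_data`, `build_query`, and `execute_query` are hypothetical names for decisions the model and MCP server make internally.

```python
def needs_external_data(prompt: str) -> bool:
    # Step 2: a trivial heuristic standing in for the model's decision
    return "report" in prompt.lower()

def build_query(prompt: str) -> dict:
    # Step 3: the model turns the prompt into a structured MCP query
    return {"resource": "documents", "operation": "search",
            "parameters": {"query": prompt, "limit": 10}}

def execute_query(query: dict) -> list:
    # Steps 4-5: the MCP server would run this against the data source;
    # here we return canned results
    return [{"id": "doc_123", "title": "Q4 Sales Report"}]

def handle_prompt(prompt: str) -> str:
    # Steps 1-6 end to end
    if not needs_external_data(prompt):
        return "Answered from model knowledge alone."
    results = execute_query(build_query(prompt))
    titles = ", ".join(r["title"] for r in results)
    return f"Found {len(results)} document(s): {titles}"  # step 6

print(handle_prompt("Find the Q4 sales report"))
```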

Pre-built MCP Servers

Official Servers

Aitronos maintains servers for popular services:

Server         Description                          Config
------         -----------                          ------
google_drive   Access Google Docs, Sheets, Slides   folderId, mimeTypes
github         Query repositories, issues, PRs      repository, branch
slack          Search messages, channels            channels, dateRange
notion         Access databases, pages              databaseId, filters
confluence     Search documentation                 spaceKey, labels
sharepoint     Access SharePoint sites              siteId, driveId

Community Servers

Browse the MCP Registry for community-built servers:

  • PostgreSQL MCP - Query databases directly
  • Stripe MCP - Access payment data
  • Salesforce MCP - CRM integration
  • Zendesk MCP - Support ticket access

Building Custom MCP Servers

When to Build Custom

  • Proprietary systems - Internal APIs, databases
  • Specialized workflows - Custom business logic
  • Data transformation - Format conversions
  • Security requirements - Custom auth flows

MCP Server Spec

A valid MCP server must implement:

  1. Authentication - OAuth 2.0, API keys, or custom
  2. Query endpoint - Accept structured queries
  3. Response format - Return MCP-compatible JSON
  4. Error handling - Standard error codes

Example: Simple REST API Server

import requests

from mcp import MCPServer, Query, Response

class CustomAPIMCP(MCPServer):
    def __init__(self, api_key: str, base_url: str):
        self.api_key = api_key
        self.base_url = base_url

    def authenticate(self, user_id: str) -> bool:
        # Verify the user has access (validate_user is your own helper)
        return validate_user(user_id, self.api_key)

    def query(self, query: Query) -> Response:
        # Parse the MCP query
        endpoint = query.resource
        params = query.parameters

        # Call your API; always set a timeout so a slow upstream
        # can't block the request indefinitely
        url = f"{self.base_url}/{endpoint}"
        resp = requests.get(
            url,
            params=params,
            headers={"Authorization": f"Bearer {self.api_key}"},
            timeout=10,
        )
        resp.raise_for_status()

        # Return an MCP-compatible response
        return Response(
            data=resp.json(),
            metadata={"source": "custom_api", "timestamp": now()},
        )

    def capabilities(self) -> dict:
        return {
            "search": True,
            "filter": True,
            "pagination": True
        }

# Register with Freddy
server = CustomAPIMCP(api_key="sk_...", base_url="https://api.example.com")
server.register(connector_id="custom_api")

MCP Query Schema

{
  "resource": "documents",
  "operation": "search",
  "parameters": {
    "query": "sales report Q4",
    "filters": {
      "date_range": "2024-10-01,2024-12-31",
      "type": "pdf"
    },
    "limit": 10
  },
  "context": {
    "user_id": "user_abc123",
    "session_id": "sess_xyz789"
  }
}
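A server should validate incoming queries against this shape before executing them. The sketch below checks the fields shown in the schema; which fields are required versus optional is an assumption here, not part of the published spec.

```python
REQUIRED = {"resource", "operation", "parameters"}

def validate_query(query: dict) -> list:
    """Return a list of problems; an empty list means the query looks valid."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED - query.keys())]
    params = query.get("parameters")
    if params is not None and not isinstance(params, dict):
        errors.append("parameters must be an object")
    if isinstance(params, dict):
        limit = params.get("limit")
        if limit is not None and (not isinstance(limit, int) or limit < 1):
            errors.append("limit must be a positive integer")
    return errors

query = {
    "resource": "documents",
    "operation": "search",
    "parameters": {"query": "sales report Q4", "limit": 10},
}
print(validate_query(query))  # → []
```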

MCP Response Schema

{
  "status": "success",
  "data": [
    {
      "id": "doc_123",
      "title": "Q4 Sales Report",
      "url": "https://example.com/docs/q4-sales",
      "excerpt": "Total revenue: $2.5M...",
      "metadata": {
        "author": "Alice",
        "created": "2024-12-15",
        "type": "pdf"
      }
    }
  ],
  "pagination": {
    "total": 42,
    "page": 1,
    "per_page": 10,
    "has_more": true
  },
  "metadata": {
    "query_time_ms": 245,
    "source": "custom_api"
  }
}
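A client consuming this response walks the `pagination` block: keep requesting pages until `has_more` is false. In this sketch, `fetch_page` is a stand-in for the real server call, faking a backend with 25 items shaped like the response schema above.

```python
def fetch_page(page: int, per_page: int = 10) -> dict:
    # Fake backend with 25 items, shaped like the MCP response schema
    total = 25
    start = (page - 1) * per_page
    items = [{"id": f"doc_{i}"} for i in range(start, min(start + per_page, total))]
    return {
        "status": "success",
        "data": items,
        "pagination": {"total": total, "page": page,
                       "per_page": per_page,
                       "has_more": start + per_page < total},
    }

def fetch_all() -> list:
    # Accumulate pages until the server reports no more results
    results, page = [], 1
    while True:
        resp = fetch_page(page)
        results.extend(resp["data"])
        if not resp["pagination"]["has_more"]:
            return results
        page += 1

print(len(fetch_all()))  # → 25
```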

Using MCP in API Requests

Basic Usage

{
  "model": "gpt-4.1",
  "tools": [
    {
      "type": "mcp",
      "connectorId": "google_drive",
      "configuration": {
        "folderId": "1a2b3c4d5e"
      }
    }
  ],
  "inputs": [
    {
      "role": "user",
      "texts": [{"text": "Find the latest product roadmap"}]
    }
  ]
}

Advanced Configuration

{
  "tools": [
    {
      "type": "mcp",
      "connectorId": "github",
      "configuration": {
        "repository": "acme/backend",
        "branch": "main",
        "paths": ["src/", "docs/"],
        "fileTypes": [".py", ".md"],
        "maxResults": 20
      }
    }
  ]
}

Multiple MCP Sources

{
  "tools": [
    {
      "type": "mcp",
      "connectorId": "google_drive",
      "configuration": {"folderId": "abc123"}
    },
    {
      "type": "mcp",
      "connectorId": "notion",
      "configuration": {"databaseId": "xyz789"}
    },
    {
      "type": "mcp",
      "connectorId": "confluence",
      "configuration": {"spaceKey": "DOCS"}
    }
  ],
  "toolChoice": "auto"
}

Best Practices

✅ DO

  • Implement caching - Reduce redundant API calls
  • Handle rate limits - Respect third-party quotas
  • Validate inputs - Sanitize queries before execution
  • Log queries - Track usage and debug issues
  • Return metadata - Include source, timestamp, confidence
  • Use pagination - Don't return massive datasets
  • Implement timeouts - Don't block indefinitely
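One way to apply the caching advice above is a small TTL cache keyed on the query, wrapping the backend call. `query_backend` is a stand-in; real code would also pass a timeout to the HTTP client (e.g. `requests.get(url, timeout=5)`).

```python
import time

class TTLCache:
    """Tiny in-memory cache whose entries expire after a fixed TTL."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        self._store.pop(key, None)  # drop expired or missing entries
        return None

    def put(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl_seconds=30)
calls = 0

def query_backend(q: str) -> dict:
    global calls
    calls += 1  # count real backend hits
    return {"query": q, "data": ["result"]}

def cached_query(q: str) -> dict:
    hit = cache.get(q)
    if hit is not None:
        return hit
    result = query_backend(q)
    cache.put(q, result)
    return result

cached_query("sales report")
cached_query("sales report")
print(calls)  # → 1 (second call served from cache)
```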

❌ DON'T

  • Store credentials - Use token references only
  • Expose raw errors - Return user-friendly messages
  • Ignore security - Validate all user inputs
  • Skip authentication - Always verify user access
  • Over-fetch data - Return only needed fields
  • Hardcode limits - Make configurable

Deployment

Hosting Options

1. Freddy Hub

Deploy to Freddy Hub:

freddy-cli mcp deploy --name custom_api --file server.py

2. Self-Hosted

Run on your infrastructure:

docker run -p 8080:8080 \
  -e MCP_API_KEY=$API_KEY \
  custom-mcp-server:latest

Register with Freddy:

freddy-cli mcp register \
  --url https://mcp.example.com \
  --id custom_api

3. Serverless

Deploy as AWS Lambda, Google Cloud Function, or similar:

# serverless.yml
service: custom-mcp
provider:
  name: aws
  runtime: python3.11
functions:
  query:
    handler: handler.query
    events:
      - http:
          path: query
          method: post
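A minimal `handler.py` to match the `handler.query` reference above might look like this. The event shape assumes API Gateway's HTTP proxy integration; the body shapes follow the MCP query/response schemas shown earlier, and the canned result is illustrative only.

```python
import json

def query(event, context):
    # API Gateway delivers the POST body as a string
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {
            "statusCode": 400,
            "body": json.dumps({"status": "error", "code": "invalid_query"}),
        }

    # Real code would dispatch on body["resource"] / body["operation"];
    # here we return a canned result in the MCP response shape
    results = [{"id": "doc_123", "title": "Q4 Sales Report"}]
    return {
        "statusCode": 200,
        "body": json.dumps({"status": "success", "data": results}),
    }
```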

Monitoring & Debugging

Metrics to Track

  • Query latency - Time to fetch data
  • Error rate - Failed queries
  • Cache hit rate - Efficiency
  • API quota usage - Third-party limits

Debug Logging

Enable verbose logging:

{
  "tools": [
    {
      "type": "mcp",
      "connectorId": "custom_api",
      "configuration": {
        "debugMode": true,
        "logLevel": "verbose"
      }
    }
  ]
}

Response includes debug info:

{
  "data": [...],
  "debug": {
    "query_sent": "...",
    "api_calls": 3,
    "cache_hits": 2,
    "total_time_ms": 450
  }
}

Security Considerations

Authentication

  • User-level auth - Each user authorizes independently
  • Scoped permissions - Request minimal access
  • Token refresh - Auto-refresh before expiry
  • Revocation - Allow users to disconnect anytime

Data Privacy

  • No retention - Query results not stored
  • Encryption - TLS for all data in transit
  • Audit logs - Track all data access
  • Compliance - GDPR, SOC 2, HIPAA available

Rate Limiting

  • Per-user limits - Prevent abuse
  • Exponential backoff - Handle third-party throttling
  • Quota management - Track API usage
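The exponential-backoff pattern above can be sketched as follows. `RateLimited` and `flaky_call` are illustrative stand-ins for an HTTP 429 and a throttled API call; production code would also add random jitter to the delays.

```python
import time

class RateLimited(Exception):
    """Stand-in for an HTTP 429 from a third-party API."""

attempts = 0

def flaky_call() -> str:
    global attempts
    attempts += 1
    if attempts < 3:  # throttled twice, then succeeds
        raise RateLimited()
    return "ok"

def with_backoff(fn, max_retries: int = 5, base_delay: float = 0.01):
    # Retry with doubling delays: base, 2*base, 4*base, ...
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimited:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))

print(with_backoff(flaky_call))  # → ok
```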

Troubleshooting

Server Not Responding

{
  "error": {
    "type": "mcp_error",
    "message": "MCP server 'custom_api' unreachable",
    "code": "server_timeout"
  }
}

Solution: Check server health, logs, network connectivity

Authentication Failed

{
  "error": {
    "type": "auth_error",
    "message": "Failed to authenticate with custom_api",
    "code": "invalid_credentials"
  }
}

Solution: Re-authorize in Freddy Hub, verify API keys

Query Parsing Error

{
  "error": {
    "type": "mcp_error",
    "message": "Invalid query format",
    "code": "invalid_query"
  }
}

Solution: Validate query schema against MCP spec

FAQ

Q: Can I use multiple MCP servers in one request?
A: Yes, combine as many as needed.

Q: How do I handle server downtime?
A: Implement fallback logic and retry mechanisms.

Q: Are MCP queries cached?
A: Yes, based on connector configuration and query parameters.

Q: Can I build an MCP server in any language?
A: Yes. Any language works, as long as the server implements the MCP protocol (exposed as a REST API).

Q: How are credentials managed?
A: Users authorize via OAuth in Freddy Hub; servers receive scoped tokens.
