Model Context Protocol (MCP) is a universal interface that allows AI models to query and interact with external systems—databases, APIs, file storage, and more—without custom integration code for each service.
```
Traditional Approach:            MCP Approach:
--------------------             --------------
AI ↔ Custom Code ↔ API           AI ↔ MCP Server ↔ API
        ↕                                ↕
  Custom Code ↔ DB                 MCP Server ↔ DB
```

Instead of building a unique integration for every data source, MCP provides a single protocol:
- One interface - Works with any MCP-compatible server
- Consistent behavior - Predictable query/response patterns
- Interoperable - Mix and match servers freely
- Pre-built servers - Google Drive, Slack, GitHub, etc.
- Custom servers - Build your own for proprietary systems
- Composable - Combine multiple servers in one request
- Scoped access - Fine-grained permissions
- User-level auth - Each user authorizes their own data
- Zero storage - Real-time queries, no data retention
```
┌─────────────────────────────────────────┐
│          AI Model (Freddy)              │
│  Generates queries, processes responses │
└──────────────────┬──────────────────────┘
                   │ MCP Protocol
                   ↓
┌─────────────────────────────────────────┐
│            MCP Server Layer             │
│  Routes requests, handles auth, caching │
└──────────────────┬──────────────────────┘
                   │ API/Database Calls
                   ↓
┌─────────────────────────────────────────┐
│         External Data Sources           │
│   Google Drive, GitHub, SQL, REST APIs  │
└─────────────────────────────────────────┘
```

A request flows through these steps:
1. User sends a prompt with an MCP tool enabled
2. The AI determines whether external data is needed
3. The AI generates an MCP query (search, filter, retrieve)
4. The MCP server executes the query against the data source
5. Results return to the AI for processing
6. The AI synthesizes the final response with the retrieved context
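The flow above can be sketched as a simple orchestration loop. This is a hedged illustration only: `needs_external_data`, `build_query`, and `execute` are hypothetical helpers standing in for the model's tool-use decision and the MCP server call, not part of any published Freddy API.

```python
# Illustrative sketch of the MCP request flow; all names are hypothetical.

def needs_external_data(prompt: str) -> bool:
    # Step 2: decide whether outside context is required (stubbed heuristic).
    return "report" in prompt.lower()

def build_query(prompt: str) -> dict:
    # Step 3: generate a structured MCP query from the prompt.
    return {"resource": "documents", "operation": "search",
            "parameters": {"query": prompt, "limit": 10}}

def execute(query: dict) -> list:
    # Step 4: the MCP server would run this against the data source.
    # Stubbed here with canned results.
    return [{"title": "Q4 Sales Report", "excerpt": "Total revenue: $2.5M"}]

def respond(prompt: str) -> str:
    # Steps 1, 5, 6: accept the prompt, collect results, synthesize a reply.
    if not needs_external_data(prompt):
        return "Answered from model knowledge alone."
    results = execute(build_query(prompt))
    context = "; ".join(r["title"] for r in results)
    return f"Based on {context}: ..."

print(respond("Find the latest sales report"))
```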
Aitronos maintains servers for popular services:
| Server | Description | Config |
|---|---|---|
| google_drive | Access Google Docs, Sheets, Slides | folderId, mimeTypes |
| github | Query repositories, issues, PRs | repository, branch |
| slack | Search messages, channels | channels, dateRange |
| notion | Access databases, pages | databaseId, filters |
| confluence | Search documentation | spaceKey, labels |
| sharepoint | Access SharePoint sites | siteId, driveId |
Browse the MCP Registry for community-built servers:
- PostgreSQL MCP - Query databases directly
- Stripe MCP - Access payment data
- Salesforce MCP - CRM integration
- Zendesk MCP - Support ticket access
Build a custom server for:
- Proprietary systems - Internal APIs, databases
- Specialized workflows - Custom business logic
- Data transformation - Format conversions
- Security requirements - Custom auth flows
A valid MCP server must implement:
- Authentication - OAuth 2.0, API keys, or custom
- Query endpoint - Accept structured queries
- Response format - Return MCP-compatible JSON
- Error handling - Standard error codes
A minimal implementation:

```python
from datetime import datetime, timezone

import requests
from mcp import MCPServer, Query, Response


class CustomAPIMCP(MCPServer):
    def __init__(self, api_key: str, base_url: str):
        self.api_key = api_key
        self.base_url = base_url

    def authenticate(self, user_id: str) -> bool:
        # Verify the user has access (validate_user is your own access check)
        return validate_user(user_id, self.api_key)

    def query(self, query: Query) -> Response:
        # Parse the MCP query
        endpoint = query.resource
        params = query.parameters

        # Call your API
        url = f"{self.base_url}/{endpoint}"
        resp = requests.get(
            url,
            params=params,
            headers={"Authorization": f"Bearer {self.api_key}"},
        )

        # Return an MCP response
        return Response(
            data=resp.json(),
            metadata={
                "source": "custom_api",
                "timestamp": datetime.now(timezone.utc).isoformat(),
            },
        )

    def capabilities(self) -> dict:
        return {
            "search": True,
            "filter": True,
            "pagination": True,
        }


# Register with Freddy
server = CustomAPIMCP(api_key="sk_...", base_url="https://api.example.com")
server.register(connector_id="custom_api")
```

An example query sent to the server:

```json
{
  "resource": "documents",
  "operation": "search",
  "parameters": {
    "query": "sales report Q4",
    "filters": {
      "date_range": "2024-10-01,2024-12-31",
      "type": "pdf"
    },
    "limit": 10
  },
  "context": {
    "user_id": "user_abc123",
    "session_id": "sess_xyz789"
  }
}
```

And a matching response:

```json
{
  "status": "success",
  "data": [
    {
      "id": "doc_123",
      "title": "Q4 Sales Report",
      "url": "https://example.com/docs/q4-sales",
      "excerpt": "Total revenue: $2.5M...",
      "metadata": {
        "author": "Alice",
        "created": "2024-12-15",
        "type": "pdf"
      }
    }
  ],
  "pagination": {
    "total": 42,
    "page": 1,
    "per_page": 10,
    "has_more": true
  },
  "metadata": {
    "query_time_ms": 245,
    "source": "custom_api"
  }
}
```

Enable a connector in a model request:

```json
{
  "model": "gpt-4.1",
  "tools": [
    {
      "type": "mcp",
      "connectorId": "google_drive",
      "configuration": {
        "folderId": "1a2b3c4d5e"
      }
    }
  ],
  "inputs": [
    {
      "role": "user",
      "texts": [{"text": "Find the latest product roadmap"}]
    }
  ]
}
```

Scope a connector with configuration options:

```json
{
  "tools": [
    {
      "type": "mcp",
      "connectorId": "github",
      "configuration": {
        "repository": "acme/backend",
        "branch": "main",
        "paths": ["src/", "docs/"],
        "fileTypes": [".py", ".md"],
        "maxResults": 20
      }
    }
  ]
}
```

Combine multiple servers in one request:

```json
{
  "tools": [
    {
      "type": "mcp",
      "connectorId": "google_drive",
      "configuration": {"folderId": "abc123"}
    },
    {
      "type": "mcp",
      "connectorId": "notion",
      "configuration": {"databaseId": "xyz789"}
    },
    {
      "type": "mcp",
      "connectorId": "confluence",
      "configuration": {"spaceKey": "DOCS"}
    }
  ],
  "toolChoice": "auto"
}
```

Do:
- Implement caching - Reduce redundant API calls
- Handle rate limits - Respect third-party quotas
- Validate inputs - Sanitize queries before execution
- Log queries - Track usage and debug issues
- Return metadata - Include source, timestamp, confidence
- Use pagination - Don't return massive datasets
- Implement timeouts - Don't block indefinitely
Don't:
- Store credentials - Use token references only
- Expose raw errors - Return user-friendly messages
- Ignore security - Validate all user inputs
- Skip authentication - Always verify user access
- Over-fetch data - Return only needed fields
- Hardcode limits - Make them configurable
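Several of these practices (caching, input validation, configurable limits) can be combined in a thin wrapper around the query handler. A minimal sketch, assuming a dict-based query format like the examples above; the helper names and TTL value are illustrative, not part of the MCP spec:

```python
import time

CACHE: dict = {}            # cache key -> (timestamp, result)
CACHE_TTL_SECONDS = 60      # illustrative TTL
MAX_LIMIT = 100             # configurable cap, not hardcoded per call site

def validate(query: dict) -> dict:
    # Sanitize inputs before execution.
    if not isinstance(query.get("resource"), str) or not query["resource"]:
        raise ValueError("invalid_query: 'resource' must be a non-empty string")
    limit = int(query.get("parameters", {}).get("limit", 10))
    query.setdefault("parameters", {})["limit"] = min(limit, MAX_LIMIT)
    return query

def cached_query(query: dict, fetch) -> dict:
    query = validate(query)
    # repr() keeps the key hashable even when parameters contain nested dicts.
    key = (query["resource"], repr(sorted(query["parameters"].items())))
    hit = CACHE.get(key)
    if hit and time.time() - hit[0] < CACHE_TTL_SECONDS:
        return hit[1]                  # cache hit: skip the API call
    result = fetch(query)              # a real server would call the API here
    CACHE[key] = (time.time(), result)
    return result
```

Identical queries within the TTL are served from the cache, so `fetch` (the actual API call) runs only once; oversized `limit` values are clamped rather than rejected.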
Deploy to Freddy Hub:
```shell
freddy-cli mcp deploy --name custom_api --file server.py
```

Run on your infrastructure:

```shell
docker run -p 8080:8080 \
  -e MCP_API_KEY=$API_KEY \
  custom-mcp-server:latest
```

Register with Freddy:

```shell
freddy-cli mcp register \
  --url https://mcp.example.com \
  --id custom_api
```

Deploy as an AWS Lambda, Google Cloud Function, or similar:

```yaml
# serverless.yml
service: custom-mcp
provider:
  name: aws
  runtime: python3.11
functions:
  query:
    handler: handler.query
    events:
      - http:
          path: query
          method: post
```

Monitor these metrics:
- Query latency - Time to fetch data
- Error rate - Failed queries
- Cache hit rate - Efficiency
- API quota usage - Third-party limits
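These metrics can be collected with a small decorator around the query handler. A sketch under stated assumptions: the metric names mirror the list above and are illustrative, not a fixed Freddy schema.

```python
import time
from collections import defaultdict

METRICS = defaultdict(float)

def track(fn):
    # Wrap a handler to record call counts, errors, and cumulative latency.
    def wrapper(*args, **kwargs):
        METRICS["queries"] += 1
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        except Exception:
            METRICS["errors"] += 1
            raise
        finally:
            METRICS["latency_ms_total"] += (time.perf_counter() - start) * 1000
    return wrapper

@track
def query(resource: str) -> dict:
    # Stand-in for the real query handler.
    return {"resource": resource, "data": []}

query("documents")
error_rate = METRICS["errors"] / METRICS["queries"]
```

From these counters you can derive the error rate and average latency; cache hit rate would be tracked the same way inside the caching layer.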
Enable verbose logging:

```json
{
  "tools": [
    {
      "type": "mcp",
      "connectorId": "custom_api",
      "configuration": {
        "debugMode": true,
        "logLevel": "verbose"
      }
    }
  ]
}
```

The response then includes debug info:

```json
{
  "data": [...],
  "debug": {
    "query_sent": "...",
    "api_calls": 3,
    "cache_hits": 2,
    "total_time_ms": 450
  }
}
```

Authentication:
- User-level auth - Each user authorizes independently
- Scoped permissions - Request minimal access
- Token refresh - Auto-refresh before expiry
- Revocation - Allow users to disconnect anytime
Data protection:
- No retention - Query results not stored
- Encryption - TLS for all data in transit
- Audit logs - Track all data access
- Compliance - GDPR, SOC 2, HIPAA available
Rate limiting:
- Per-user limits - Prevent abuse
- Exponential backoff - Handle third-party throttling
- Quota management - Track API usage
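Exponential backoff on a throttled (HTTP 429) response can be sketched as follows; `call_api` is a stand-in for the real request, and the delay values are illustrative defaults:

```python
import random
import time

def with_backoff(call_api, max_retries: int = 5, base_delay: float = 0.5):
    # Retry throttled calls with exponentially growing, jittered delays.
    delay = base_delay
    for _ in range(max_retries):
        status, body = call_api()
        if status != 429:               # not throttled: return immediately
            return status, body
        time.sleep(delay + random.uniform(0, delay / 2))
        delay *= 2                      # e.g. 0.5s, 1s, 2s, 4s, ...
    raise RuntimeError("rate limit: retries exhausted")
```

The jitter spreads retries from concurrent users apart so they do not all hit the third-party API at the same instant.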
Server unreachable:

```json
{
  "error": {
    "type": "mcp_error",
    "message": "MCP server 'custom_api' unreachable",
    "code": "server_timeout"
  }
}
```

Solution: Check server health, logs, and network connectivity.
Authentication failure:

```json
{
  "error": {
    "type": "auth_error",
    "message": "Failed to authenticate with custom_api",
    "code": "invalid_credentials"
  }
}
```

Solution: Re-authorize in Freddy Hub and verify API keys.
Invalid query:

```json
{
  "error": {
    "type": "mcp_error",
    "message": "Invalid query format",
    "code": "invalid_query"
  }
}
```

Solution: Validate the query schema against the MCP spec.
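On the server side, raw exceptions should be mapped into the standard error shape shown above before they leave the process (per the "don't expose raw errors" rule). A sketch, assuming the `type`/`message`/`code` fields from the examples; the exception-to-code mapping is illustrative:

```python
def to_mcp_error(exc: Exception) -> dict:
    # Map internal exceptions to standard, user-friendly MCP error codes.
    if isinstance(exc, TimeoutError):
        return {"error": {"type": "mcp_error",
                          "message": "MCP server unreachable",
                          "code": "server_timeout"}}
    if isinstance(exc, PermissionError):
        return {"error": {"type": "auth_error",
                          "message": "Failed to authenticate",
                          "code": "invalid_credentials"}}
    if isinstance(exc, ValueError):
        return {"error": {"type": "mcp_error",
                          "message": "Invalid query format",
                          "code": "invalid_query"}}
    # Unknown errors: never leak internals; return a generic message.
    return {"error": {"type": "mcp_error",
                      "message": "Internal server error",
                      "code": "internal_error"}}
```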
Q: Can I use multiple MCP servers in one request?
A: Yes, combine as many as needed.
Q: How do I handle server downtime?
A: Implement fallback logic and retry mechanisms.
Q: Are MCP queries cached?
A: Yes, based on connector configuration and query parameters.
Q: Can I build an MCP server in any language?
A: Yes, as long as it implements the MCP protocol (REST API).
Q: How are credentials managed?
A: Users authorize via OAuth in Freddy Hub; servers receive scoped tokens.
Related:
- Personal Connectors Overview - General guide
- Create Model Response - API reference
- Function Calling - Custom tools