MCP Transport Types¶
Code Scalpel supports three MCP transport types, enabling deployment in any environment, from local development to distributed cloud infrastructure.
Transport Comparison¶
| Transport | Best For | Latency | Setup | Network Required |
|---|---|---|---|---|
| stdio | Local AI clients (Claude Desktop, Cursor, VS Code) | Microseconds | Simplest | No |
| SSE | Network deployments, remote teams, cloud | Milliseconds | Moderate | Yes |
| streamable-http | Production systems, load balancers, proxies | Milliseconds | Moderate | Yes |
stdio Transport (Recommended)¶
Default and recommended for local AI assistant integration.
When to Use¶
- Claude Desktop on your local machine
- VS Code with Copilot on localhost
- Cursor on your development machine
- Any local AI client supporting MCP
Advantages¶
- Zero network overhead - direct process communication
- Microsecond latency (fastest possible)
- No SSL/TLS configuration needed
- Works offline
- Simplest setup
Configuration¶
Claude Desktop¶
Edit your Claude Desktop config file (claude_desktop_config.json, under ~/Library/Application Support/Claude on macOS or %APPDATA%\Claude on Windows) and add a server entry. With a pip installation, invoke the module entry point:
{
  "mcpServers": {
    "code-scalpel": {
      "command": "python",
      "args": ["-m", "code_scalpel.cli", "mcp"]
    }
  }
}
VS Code / Cursor¶
Create .vscode/mcp.json in your workspace:
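A hedged sketch of that workspace file, assuming the `servers` and `type` field names used by recent VS Code MCP support (check your editor's MCP documentation for the exact schema):

```json
{
  "servers": {
    "code-scalpel": {
      "type": "stdio",
      "command": "python",
      "args": ["-m", "code_scalpel.cli", "mcp"]
    }
  }
}
```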
Command Line¶
# Start stdio server (used by AI clients)
codescalpel mcp
# Or explicitly specify stdio transport
codescalpel mcp --transport stdio
SSE Transport (Server-Sent Events)¶
HTTP-based transport using Server-Sent Events for server-to-client streaming.
When to Use¶
- Remote team deployments (Code Scalpel on shared server)
- Docker/Kubernetes deployments
- Cloud-hosted development environments
- When firewall/network requires HTTP
- Multi-tenant scenarios
Advantages¶
- HTTP-based (firewall-friendly)
- Efficient server-to-client streaming
- Works through most proxies
- Standard HTTP/HTTPS security
- Good for real-time progress updates
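On the wire, each SSE event is a block of `field: value` lines terminated by a blank line. The following is a minimal sketch of that framing only; it is illustrative, not Code Scalpel's actual client code:

```python
# Minimal parser for the SSE wire format: each event is a block of
# "field: value" lines, and a blank line ends the event.

def parse_sse(stream: str) -> list[dict]:
    """Split a decoded SSE stream into event dicts with 'event'/'data' keys."""
    events = []
    for block in stream.split("\n\n"):
        event = {}
        data_lines = []
        for line in block.splitlines():
            if line.startswith("data:"):
                data_lines.append(line[5:].lstrip())
            elif line.startswith("event:"):
                event["event"] = line[6:].strip()
        if data_lines:
            event["data"] = "\n".join(data_lines)
            events.append(event)
    return events

raw = "event: message\ndata: {\"jsonrpc\": \"2.0\", \"id\": 1}\n\n"
print(parse_sse(raw))
```

This framing is why SSE suits server-to-client streaming: the server can emit progress events incrementally over one long-lived HTTP response.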
Starting the Server¶
# Basic SSE server on localhost
codescalpel mcp --transport sse --host 127.0.0.1 --port 8080
# SSE server accessible on LAN
codescalpel mcp --transport sse --host 0.0.0.0 --port 8080 --allow-lan
# SSE with HTTPS (production)
codescalpel mcp --transport sse \
--host 0.0.0.0 \
--port 8443 \
--ssl-cert /path/to/cert.pem \
--ssl-key /path/to/key.pem
Client Configuration¶
Configure your AI client to connect to the SSE endpoint. For HTTPS:
{
  "mcpServers": {
    "code-scalpel": {
      "url": "https://code-scalpel.example.com:8443/sse",
      "transport": "sse"
    }
  }
}
Docker Deployment¶
# docker-compose.yml
version: '3.8'
services:
  code-scalpel:
    image: ghcr.io/your-org/code-scalpel:latest
    command: ["mcp", "--transport", "sse", "--host", "0.0.0.0", "--port", "8080", "--allow-lan"]
    ports:
      - "8080:8080"
    volumes:
      - ./workspace:/workspace:ro
    environment:
      - CODE_SCALPEL_LICENSE_PATH=/config/license.jwt
streamable-http Transport¶
HTTP POST-based transport with response streaming.
When to Use¶
- Load-balanced production deployments
- Behind reverse proxies (nginx, Apache)
- Enterprise infrastructure with HTTP requirements
- API gateway integration
- When you need standard HTTP POST semantics
Advantages¶
- Standard HTTP POST requests
- Works with all HTTP infrastructure
- Load balancer compatible
- Reverse proxy friendly
- Standard REST-like semantics
Starting the Server¶
# Basic HTTP server
codescalpel mcp --transport streamable-http --host 127.0.0.1 --port 8080
# HTTP server on LAN
codescalpel mcp --transport streamable-http \
--host 0.0.0.0 \
--port 8080 \
--allow-lan
# HTTPS with certificates
codescalpel mcp --transport streamable-http \
--host 0.0.0.0 \
--port 8443 \
--ssl-cert /path/to/cert.pem \
--ssl-key /path/to/key.pem
Client Configuration¶
Configure your AI client to connect to the /mcp endpoint. For HTTPS:
{
  "mcpServers": {
    "code-scalpel": {
      "url": "https://code-scalpel.example.com:8443/mcp",
      "transport": "http"
    }
  }
}
Nginx Reverse Proxy¶
upstream code_scalpel {
    server 127.0.0.1:8080;
}

server {
    listen 443 ssl http2;
    server_name code-scalpel.example.com;

    ssl_certificate /etc/ssl/certs/code-scalpel.crt;
    ssl_certificate_key /etc/ssl/private/code-scalpel.key;

    location /mcp {
        proxy_pass http://code_scalpel;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Allow streaming responses
        proxy_buffering off;
        proxy_cache off;
    }
}
Preferred HTTP Invocation¶
Prefer explicit transport selection with the --transport flag over the older shorthand examples:
# Preferred MCP HTTP transport
codescalpel mcp --transport streamable-http --port 9000
# With explicit host binding
codescalpel mcp --transport streamable-http --host 0.0.0.0 --port 9000
# With HTTPS
codescalpel mcp --transport streamable-http --port 9000 --ssl-cert cert.pem --ssl-key key.pem
The MCP endpoint for this transport is /mcp on the configured port.
Security Configuration¶
HTTPS/TLS Setup¶
Generate Self-Signed Certificate (Development)¶
# Generate certificate and key
openssl req -x509 -newkey rsa:4096 -nodes \
-keyout key.pem \
-out cert.pem \
-days 365 \
-subj "/CN=localhost"
# Start server with HTTPS
codescalpel mcp --transport sse \
--ssl-cert cert.pem \
--ssl-key key.pem
Production Certificates¶
Use certificates from a trusted CA (Let's Encrypt, DigiCert, etc.):
codescalpel mcp --transport sse \
--host 0.0.0.0 \
--port 8443 \
--ssl-cert /etc/letsencrypt/live/example.com/fullchain.pem \
--ssl-key /etc/letsencrypt/live/example.com/privkey.pem
Network Security¶
LAN Access Warning¶
The --allow-lan flag disables host validation:
# ⚠️ WARNING: Only use on trusted networks
codescalpel mcp --transport sse \
--host 0.0.0.0 \
--allow-lan
Only use --allow-lan when:

- You are on a private, trusted network
- The server is behind a firewall
- You are developing or testing
- Proper authentication is in place
Firewall Configuration¶
For production deployments, configure your firewall:
# Allow HTTPS traffic (port 8443)
sudo ufw allow 8443/tcp
# Allow from specific IP range only
sudo ufw allow from 10.0.0.0/8 to any port 8443
Environment Variables¶
All transports support these environment variables:
# License configuration
export CODE_SCALPEL_LICENSE_PATH=/path/to/license.jwt
# Or legacy variable
export SCALPEL_LICENSE=/path/to/license.jwt
# Tier override (for testing)
export CODE_SCALPEL_TIER=enterprise
For production with process managers:
# systemd unit file
[Service]
Environment="CODE_SCALPEL_LICENSE_PATH=/etc/code-scalpel/license.jwt"
ExecStart=/usr/local/bin/codescalpel mcp --transport sse --host 0.0.0.0 --port 8443 --ssl-cert /etc/ssl/code-scalpel.crt --ssl-key /etc/ssl/code-scalpel.key
Production Deployment Examples¶
Kubernetes Deployment¶
apiVersion: apps/v1
kind: Deployment
metadata:
  name: code-scalpel
spec:
  replicas: 3
  selector:
    matchLabels:
      app: code-scalpel
  template:
    metadata:
      labels:
        app: code-scalpel
    spec:
      containers:
        - name: code-scalpel
          image: ghcr.io/your-org/code-scalpel:1.4.0
          command: ["codescalpel", "mcp"]
          args:
            - "--transport=streamable-http"
            - "--host=0.0.0.0"
            - "--port=8080"
            - "--allow-lan"
          ports:
            - containerPort: 8080
          env:
            - name: CODE_SCALPEL_LICENSE_PATH
              value: /etc/code-scalpel/license.jwt
          volumeMounts:
            - name: license
              mountPath: /etc/code-scalpel
              readOnly: true
      volumes:
        - name: license
          secret:
            secretName: code-scalpel-license
---
apiVersion: v1
kind: Service
metadata:
  name: code-scalpel
spec:
  selector:
    app: code-scalpel
  ports:
    - port: 8080
      targetPort: 8080
  type: LoadBalancer
Systemd Service¶
# /etc/systemd/system/code-scalpel.service
[Unit]
Description=Code Scalpel MCP Server
After=network.target
[Service]
Type=simple
User=codescalpel
Group=codescalpel
WorkingDirectory=/opt/code-scalpel
Environment="CODE_SCALPEL_LICENSE_PATH=/etc/code-scalpel/license.jwt"
ExecStart=/usr/local/bin/codescalpel mcp --transport sse --host 127.0.0.1 --port 8080
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
Enable and start:
sudo systemctl daemon-reload
sudo systemctl enable code-scalpel
sudo systemctl start code-scalpel
sudo systemctl status code-scalpel
Troubleshooting¶
Connection Issues¶
stdio Transport¶
Problem: "Process exited with code 1"
Solutions:

1. Verify installation: codescalpel --version
2. Test manually: codescalpel mcp (should show "Starting Code Scalpel MCP Server")
3. Check the Python path in your config (use absolute paths)
HTTP Transports (SSE/streamable-http)¶
Problem: "Connection refused"
Solutions:

1. Verify the server is running: curl http://localhost:8080/sse (or /mcp for streamable-http)
2. Check the firewall: sudo ufw status
3. Verify the port is not already in use: lsof -i :8080
4. Check the server logs for errors
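The lsof port check can also be done portably from Python; a small sketch:

```python
import socket

# Portable alternative to `lsof -i :8080`: attempt a TCP connection to see
# whether anything is listening on the port. connect_ex returns 0 on success.

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

print(port_open("127.0.0.1", 8080))  # True if the MCP server is listening
```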
Problem: "SSL handshake failed"
Solutions:

1. Verify the certificate files exist and are readable
2. Check certificate validity: openssl x509 -in cert.pem -text -noout
3. Ensure the client trusts the certificate
4. For self-signed certificates, the client may need to disable verification (development only)
Performance Issues¶
High Latency¶
- stdio: Should be < 1ms. If not, check process overhead.
- SSE/HTTP: Should be < 50ms on LAN, < 200ms over internet.
- Check network latency: ping your-server
- Use HTTPS for production (the overhead is acceptable)
- Consider geographic proximity (use a CDN or regional servers)
Memory Issues¶
For large codebases, increase server resources:
# Cap process virtual memory with ulimit (value in KB; here 4 GB)
ulimit -v 4194304
# Or configure in systemd (MemoryMax= replaces the legacy MemoryLimit=)
[Service]
MemoryMax=4G
Choosing the Right Transport¶
Decision Tree¶
Are you using a local AI client (Claude Desktop, Cursor)?
├─ YES → Use stdio (default, fastest)
└─ NO → Continue...
Do you need network access (remote team, Docker, cloud)?
├─ YES → Continue...
│ └─ Do you have HTTP infrastructure (load balancers, proxies)?
│ ├─ YES → Use streamable-http
│ └─ NO → Use SSE
└─ NO → Use stdio
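The same logic as a small helper (the parameter names are illustrative, not Code Scalpel options):

```python
# Encodes the decision tree above: local clients and offline use get stdio;
# networked deployments pick streamable-http when HTTP infrastructure
# (load balancers, proxies) is involved, otherwise SSE.

def choose_transport(local_client: bool, needs_network: bool,
                     has_http_infra: bool) -> str:
    if local_client or not needs_network:
        return "stdio"
    return "streamable-http" if has_http_infra else "sse"

print(choose_transport(local_client=True, needs_network=False, has_http_infra=False))   # stdio
print(choose_transport(local_client=False, needs_network=True, has_http_infra=True))    # streamable-http
print(choose_transport(local_client=False, needs_network=True, has_http_infra=False))   # sse
```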
Quick Reference¶
- Local development: stdio
- Remote team: SSE or streamable-http
- Docker/Kubernetes: streamable-http
- Behind nginx/Apache: streamable-http
- Simple network deployment: SSE
- Claude API integration: SSE or streamable-http with HTTPS
Next Steps¶
- Docker Installation - Container deployment
- Claude Desktop Installation - Local setup
- Enterprise Guide - Large-scale deployments
- MCP Protocol Reference - Protocol details