Project 18: MCP Server Chain - Composing Multiple Servers
Project Overview
| Attribute | Value |
|---|---|
| File | P18-mcp-server-chain.md |
| Main Programming Language | TypeScript |
| Alternative Programming Languages | Python, Go |
| Coolness Level | Level 4: Hardcore Tech Flex |
| Business Potential | 4. The “Open Core” Infrastructure |
| Difficulty | Level 4: Expert |
| Knowledge Area | MCP / Service Composition / Microservices |
| Software or Tool | Multiple MCP Servers, Message Routing |
| Main Book | “Building Microservices” by Sam Newman |
| Time Estimate | 2-3 weeks |
| Prerequisites | Projects 15-17 completed, microservices understanding |
What You Will Build
An MCP “gateway” server that composes multiple MCP servers behind a single interface. It routes tool calls to the appropriate backend, aggregates resources, and handles cross-server workflows (e.g., a database query followed by GitHub issue creation).
Real-world MCP usage involves multiple servers. This project teaches server composition, routing, and building complex workflows that span multiple services.
Real World Outcome
You: Find slow queries in the database and create a GitHub issue for each
Claude: [Invokes mcp__gateway__compose_workflow]
Executing cross-server workflow...
Step 1: Query database for slow queries
[Routing to: sqlite server]
Found 3 queries slower than 1000ms
Step 2: Create GitHub issues
[Routing to: github server]
Created issues:
- #143: Optimize users query (avg: 2.3s)
- #144: Optimize orders join (avg: 1.8s)
- #145: Add index to products (avg: 1.2s)
Workflow complete:
- Database analysis: 3 slow queries found
- Issues created: 3
- Total time: 4.2s
The Core Question You Are Answering
“How do I compose multiple MCP servers into a unified interface that can handle complex, cross-service workflows?”
Individual MCP servers are powerful, but real workflows often span multiple systems. This project teaches you to build a composition layer that orchestrates across servers.
Gateway Architecture
+------------------------------------------------------------------+
| MCP GATEWAY SERVER |
+------------------------------------------------------------------+
| |
| Incoming Request: "Find slow queries and create issues" |
| | |
| v |
| +------------------------------------------------------------+ |
| | ROUTER | |
| | | |
| | Tool Prefix Mapping: | |
| | db_* --> SQLite Server | |
| | github_* --> GitHub Server | |
| | compose_* --> Workflow Orchestrator | |
| +------------------------------------------------------------+ |
| | | | |
| v v v |
| +--------------+ +--------------+ +--------------+ |
| | ORCHESTRATOR | | SQLite | | GitHub | |
| | | | Server | | Server | |
| | (workflows) | | (db tools) | | (PR tools) | |
| +--------------+ +--------------+ +--------------+ |
| | | |
| v v |
| +----------------+ +----------------+ |
| | SQLite DB | | GitHub API | |
| +----------------+ +----------------+ |
| |
+------------------------------------------------------------------+
Tool Namespacing Strategy
+------------------------------------------------------------------+
| TOOL NAMESPACE COLLISION PROBLEM |
+------------------------------------------------------------------+
Without namespacing:
SQLite Server: query, list, create
GitHub Server: query, list, create <-- COLLISION!
With namespacing:
SQLite Server: db_query, db_list, db_create
GitHub Server: github_query, github_list, github_create
+------------------------------------------------------------------+
| NAMESPACE APPROACHES |
+------------------------------------------------------------------+
1. PREFIX CONVENTION (Recommended)
Format: {server}_{tool}
Examples:
- db_query
- db_list_tables
- github_create_pr
- github_list_prs
2. HIERARCHICAL
Format: {category}/{server}/{tool}
Examples:
- data/sqlite/query
- devops/github/create_pr
3. FULLY QUALIFIED
Format: mcp__{server}__{tool}
Examples:
- mcp__sqlite__query
- mcp__github__create_pr
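The prefix convention can be sketched as a small helper that renames every tool during discovery and fails loudly if two backends would still collide. The `Tool` shape and server names below are illustrative, not taken from any real server:

```typescript
// Sketch: prefix-based namespacing with collision detection.
interface Tool {
  name: string;
  description?: string;
}

// Prefix every tool with its server name; throw if two backends
// would still produce the same fully-qualified name.
function namespaceTools(byServer: Record<string, Tool[]>): Tool[] {
  const seen = new Set<string>();
  const result: Tool[] = [];
  for (const [server, tools] of Object.entries(byServer)) {
    for (const tool of tools) {
      const qualified = `${server}_${tool.name}`;
      if (seen.has(qualified)) {
        throw new Error(`Tool name collision: ${qualified}`);
      }
      seen.add(qualified);
      result.push({ ...tool, name: qualified });
    }
  }
  return result;
}

// Both servers expose "query", but the prefixed names no longer collide.
const tools = namespaceTools({
  sqlite: [{ name: "query" }, { name: "list" }],
  github: [{ name: "query" }, { name: "create_pr" }],
});
console.log(tools.map(t => t.name));
```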
Workflow Orchestration Patterns
+------------------------------------------------------------------+
| ORCHESTRATION PATTERNS |
+------------------------------------------------------------------+
1. SEQUENTIAL WORKFLOW
A --> B --> C
Step 1: db_query (get slow queries)
|
v
Step 2: github_create_issue (for each result)
|
v
Step 3: notify_slack (summary)
2. PARALLEL WORKFLOW
A, B, C (simultaneously)
+----> db_query
|
Start ----> github_list_prs
|
+----> fetch_metrics
3. CONDITIONAL WORKFLOW
if A then B else C
check_status
|
+--(success)--> deploy_production
|
+--(failure)--> notify_oncall
4. FAN-OUT / FAN-IN
Split, process in parallel, aggregate
get_list
|
v
+---+---+---+
| | | | (parallel processing)
v v v v
P1 P2 P3 P4
| | | |
+---+---+---+
|
v
aggregate_results
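The patterns above map naturally onto plain promises. In this sketch, `callTool` is a hypothetical stand-in for a routed backend call (here it just echoes its input):

```typescript
// Sketch of the four orchestration patterns using plain promises.
type ToolCall = (name: string, args: unknown) => Promise<unknown>;

// Stand-in for a real routed backend call.
const callTool: ToolCall = async (name, args) => ({ name, args });

// 1. Sequential: each step may consume the previous step's result.
async function sequential(steps: string[], call: ToolCall): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const step of steps) {
    results.push(await call(step, { prev: results[results.length - 1] }));
  }
  return results;
}

// 2. Parallel: independent steps run simultaneously.
async function parallel(steps: string[], call: ToolCall): Promise<unknown[]> {
  return Promise.all(steps.map(step => call(step, {})));
}

// 3. Conditional: branch on the outcome of a check step.
async function conditional(
  check: () => Promise<boolean>,
  onSuccess: string,
  onFailure: string,
  call: ToolCall
): Promise<unknown> {
  return (await check()) ? call(onSuccess, {}) : call(onFailure, {});
}

// 4. Fan-out / fan-in: one call per item, then aggregate the results.
async function fanOut<T>(items: T[], call: ToolCall): Promise<unknown[]> {
  const partials = await Promise.all(items.map(item => call("process", { item })));
  return partials; // fan-in point: aggregate here
}
```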
Concepts You Must Understand First
Stop and research these before coding:
1. API Gateway Pattern
| Aspect | Description | Reference |
|---|---|---|
| Purpose | Single entry point for multiple services | “Building Microservices” Ch. 4 |
| Responsibilities | Routing, aggregation, authentication | API Gateway patterns |
| Trade-offs | Simplicity vs. single point of failure | Service mesh comparisons |
2. Tool Namespacing
- How do you avoid name collisions when multiple servers have similar tools?
- Prefix conventions: {server}_{tool} vs {category}/{server}/{tool}
- Tool discovery across multiple servers
- Reference: MCP specification on tool naming
3. Distributed Workflows
+------------------------------------------------------------------+
| SAGA PATTERN |
+------------------------------------------------------------------+
| |
| Happy Path: |
| T1 --> T2 --> T3 --> SUCCESS |
| |
| Compensation on Failure: |
| T1 --> T2 --> T3 (FAIL!) |
| | |
| v |
| C3 --> C2 --> C1 --> ROLLBACK COMPLETE |
| |
| Example: |
| T1: Reserve inventory |
| T2: Charge payment |
| T3: Ship order |
| |
| If T3 fails: |
| C3: Cancel shipment (nothing to do) |
| C2: Refund payment |
| C1: Release inventory |
| |
+------------------------------------------------------------------+
Reference: “Designing Data-Intensive Applications” Ch. 9
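The saga diagram above can be sketched as a loop that records completed steps and, on failure, runs their compensations in reverse. The step shape and names are illustrative:

```typescript
// Sketch of the saga pattern: forward actions with reverse compensation.
interface SagaStep {
  name: string;
  action: () => Promise<void>;
  compensate: () => Promise<void>;
}

async function runSaga(
  steps: SagaStep[]
): Promise<{ ok: boolean; compensated: string[] }> {
  const done: SagaStep[] = [];
  for (const step of steps) {
    try {
      await step.action();
      done.push(step);
    } catch {
      // Roll back completed steps in reverse order: C3 -> C2 -> C1.
      const compensated: string[] = [];
      for (const prev of done.reverse()) {
        await prev.compensate();
        compensated.push(prev.name);
      }
      return { ok: false, compensated };
    }
  }
  return { ok: true, compensated: [] };
}
```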
Questions to Guide Your Design
Before implementing, think through these:
1. Routing Strategy
| Strategy | Pros | Cons | Use When |
|---|---|---|---|
| By tool prefix | Simple, predictable | Tight coupling | Small number of servers |
| By configuration | Flexible, decoupled | Extra config file | Many servers |
| Dynamic discovery | Automatic, extensible | Complexity | Plugin architecture |
2. Composition Patterns
- Sequential: A then B then C (order matters)
- Parallel: A, B, C simultaneously (independent)
- Conditional: if A succeeds then B else C
3. Error Handling
| Scenario | Response |
|---|---|
| Backend server unavailable | Return error, skip or retry |
| Partial workflow failure | Report progress, offer rollback |
| Timeout | Cancel remaining steps |
| Validation error | Fail fast, clear message |
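The "Timeout" row above can be enforced by racing each backend call against a deadline. `withTimeout` is a hypothetical helper, not part of the MCP SDK:

```typescript
// Sketch: reject a backend call that exceeds its deadline, so the
// orchestrator can cancel remaining workflow steps.
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms
    );
    promise.then(
      value => { clearTimeout(timer); resolve(value); },
      err => { clearTimeout(timer); reject(err); }
    );
  });
}
```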
Thinking Exercise
Design the Gateway Architecture
+------------------------------------------------------------------+
| GATEWAY DESIGN QUESTIONS |
+------------------------------------------------------------------+
1. SERVER REGISTRY
How does the gateway know what backend servers exist?
Option A: Static configuration
{
"backends": {
"sqlite": {"command": "python", "args": ["sqlite_server.py"]},
"github": {"command": "node", "args": ["github_server.js"]}
}
}
Option B: Dynamic discovery
- Scan for .mcp.json files
- Query service registry
- Listen for announcements
2. DATA PASSING
How do you pass data between workflow steps?
Option A: Variable substitution
{
"steps": [
{"tool": "db_query", "result_as": "$queries"},
{"tool": "github_create_issue", "input": {"items": "$queries"}}
]
}
Option B: Pipeline (stdout -> stdin)
db_query | github_create_issue
Option C: Shared context
{
"context": {}, // Accumulates results
"steps": [...]
}
3. WORKFLOW DEFINITION
What is the interface for defining workflows?
Option A: JSON DSL
{
"name": "slow_query_issues",
"steps": [
{"server": "sqlite", "tool": "query", "args": {...}},
{"server": "github", "tool": "create_issue", "foreach": "$prev"}
]
}
Option B: Code (TypeScript/Python)
async function slowQueryIssues() {
const queries = await sqlite.query(...);
for (const q of queries) {
await github.createIssue(q);
}
}
Option C: Natural language (let Claude orchestrate)
User: "Find slow queries and create issues"
Claude: [Uses compose tool with natural description]
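Option C of the data-passing question above (shared context) is the simplest to prototype: each step writes its result into a context object under its own name, and later steps read whatever they need. The step shape below is a hypothetical sketch:

```typescript
// Sketch of shared-context data passing: a context object accumulates
// each step's result so later steps can reference earlier ones by name.
type StepFn = (ctx: Record<string, unknown>) => unknown;

function runPipeline(
  steps: Array<{ name: string; run: StepFn }>
): Record<string, unknown> {
  const context: Record<string, unknown> = {};
  for (const step of steps) {
    // Store each result under the step's name for later steps to read.
    context[step.name] = step.run(context);
  }
  return context;
}

const ctx = runPipeline([
  { name: "queries", run: () => ["q1", "q2"] },
  { name: "issues", run: c => (c.queries as string[]).map(q => `issue for ${q}`) },
]);
```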
The Interview Questions They Will Ask
- “How would you design an API gateway for AI tool servers?”
- Discuss routing, aggregation, and the single-responsibility principle.
- “What patterns exist for composing microservices?”
- Explain orchestration vs choreography, saga pattern, and event-driven.
- “How do you handle failures in distributed workflows?”
- Discuss compensation, partial success, and idempotency.
- “What is the saga pattern and when would you use it?”
- Explain long-running transactions, compensation actions, and eventual consistency.
- “How do you namespace tools from multiple servers?”
- Discuss prefix conventions, collision detection, and discovery.
Hints in Layers
Hint 1: Start with Static Routing
Hardcode server-to-tool mappings first. Dynamic discovery can come later:
const toolRoutes: Record<string, string> = {
"db_query": "sqlite",
"db_list_tables": "sqlite",
"github_create_pr": "github",
"github_list_prs": "github",
};
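A lookup against a table like the one above can also fall back to the tool's prefix, so newly added `db_*` or `github_*` tools still route without editing the table. The fallback map is an illustrative assumption (the table is repeated here so the sketch is self-contained):

```typescript
// Sketch: exact-match routing with a prefix-based fallback.
const toolRoutes: Record<string, string> = {
  db_query: "sqlite",
  db_list_tables: "sqlite",
  github_create_pr: "github",
  github_list_prs: "github",
};

function resolveBackend(tool: string): string {
  // Exact match first.
  if (toolRoutes[tool]) return toolRoutes[tool];
  // Fall back to routing by prefix, e.g. "db_*" -> sqlite.
  const prefix = tool.split("_")[0];
  const fallback: Record<string, string> = { db: "sqlite", github: "github" };
  if (fallback[prefix]) return fallback[prefix];
  throw new Error(`No backend for tool: ${tool}`);
}
```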
Hint 2: Use Tool Prefixes
Prefix all tools with their source server name:
// During discovery, rename tools
const prefixedTools = tools.map(tool => ({
...tool,
name: `${serverName}_${tool.name}`
}));
Hint 3: Simple Workflow DSL
Define workflows as JSON arrays of steps:
{
"workflow": "slow_query_issues",
"steps": [
{
"server": "sqlite",
"tool": "query",
"args": {"sql": "SELECT * FROM slow_queries"}
},
{
"server": "github",
"tool": "create_issue",
"foreach": "$step_0",
"args": {"title": "Slow query: ${item.name}"}
}
]
}
Hint 4: Spawn Sub-processes
Each backend server runs as a separate process. Gateway communicates via stdio:
import { spawn } from "child_process";
function spawnBackend(config: BackendConfig): ChildProcess {
return spawn(config.command, config.args, {
stdio: ["pipe", "pipe", "inherit"]
});
}
Books That Will Help
| Topic | Book | Chapter | Why It Helps |
|---|---|---|---|
| Service Composition | “Building Microservices” by Newman | Ch. 4, 6 | Integration patterns |
| Distributed Workflows | “Designing Data-Intensive Applications” | Ch. 9 | Transaction patterns |
| API Gateways | “Microservices Patterns” by Richardson | Ch. 8 | Gateway architectures |
| Event-Driven | “Designing Event-Driven Systems” | Ch. 1-4 | Choreography patterns |
Implementation Skeleton
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { spawn, ChildProcess } from "child_process";
import { createInterface } from "readline";
// Backend server configuration
interface BackendConfig {
command: string;
args: string[];
}
const BACKENDS: Record<string, BackendConfig> = {
sqlite: { command: "python", args: ["servers/sqlite_server.py"] },
github: { command: "node", args: ["servers/github_server.js"] }
};
// Active backend connections
const connections: Map<string, {
process: ChildProcess;
pending: Map<number, { resolve: Function; reject: Function }>;
nextId: number;
}> = new Map();
// Tool routing table: tool_name -> backend_name
const toolRoutes: Map<string, string> = new Map();
// All available tools (aggregated from backends)
let allTools: any[] = [];
async function connectBackend(name: string, config: BackendConfig): Promise<void> {
return new Promise((resolve, reject) => {
const proc = spawn(config.command, config.args, {
stdio: ["pipe", "pipe", "inherit"]
});
const conn = {
process: proc,
pending: new Map(),
nextId: 1
};
connections.set(name, conn);
// Handle responses from backend
const rl = createInterface({ input: proc.stdout! });
rl.on("line", (line) => {
try {
const response = JSON.parse(line);
const pending = conn.pending.get(response.id);
if (pending) {
conn.pending.delete(response.id);
if (response.error) {
pending.reject(new Error(response.error.message));
} else {
pending.resolve(response.result);
}
}
} catch (e) {
console.error(`Parse error from ${name}:`, e);
}
});
proc.on("error", reject);
proc.on("spawn", resolve);
});
}
async function sendToBackend(
backendName: string,
method: string,
params: any
): Promise<any> {
const conn = connections.get(backendName);
if (!conn) throw new Error(`Backend not connected: ${backendName}`);
const id = conn.nextId++;
const request = JSON.stringify({
jsonrpc: "2.0",
id,
method,
params
});
return new Promise((resolve, reject) => {
conn.pending.set(id, { resolve, reject });
conn.process.stdin!.write(request + "\n");
});
}
async function discoverTools(): Promise<void> {
for (const [name, config] of Object.entries(BACKENDS)) {
await connectBackend(name, config);
    // Initialize backend (MCP handshake: the client must send an
    // "initialized" notification after the initialize response,
    // before issuing further requests)
    await sendToBackend(name, "initialize", {
      protocolVersion: "2024-11-05",
      capabilities: {},
      clientInfo: { name: "gateway", version: "1.0.0" }
    });
    connections.get(name)!.process.stdin!.write(
      JSON.stringify({ jsonrpc: "2.0", method: "notifications/initialized" }) + "\n"
    );
// List tools from backend
const result = await sendToBackend(name, "tools/list", {});
for (const tool of result.tools) {
const prefixedName = `${name}_${tool.name}`;
toolRoutes.set(prefixedName, name);
allTools.push({
...tool,
name: prefixedName,
description: `[${name}] ${tool.description}`
});
}
}
console.error(`Discovered ${allTools.length} tools from ${connections.size} backends`);
}
// Gateway server
const gateway = new Server(
{ name: "mcp-gateway", version: "1.0.0" },
{ capabilities: { tools: {} } }
);
gateway.setRequestHandler("tools/list", async () => {
return { tools: allTools };
});
gateway.setRequestHandler("tools/call", async (request) => {
  const { name, arguments: args } = request.params;

  // Workflow composition tool: handled before regular routing.
  // NOTE: registering a second handler for "tools/call" would replace
  // the first, so composition and routing must share one handler.
  if (name === "compose_workflow") {
    const { steps } = args;
    const results: any[] = [];
    for (const step of steps) {
      const { server, tool, args: stepArgs, foreach } = step;
      if (foreach && results.length > 0) {
        // Fan-out: execute once per item in the previous step's result.
        // Assumes that result is an array; the fan-out results are
        // collected into a single array so the next step still sees
        // "one step back" as a single entry.
        const items = results[results.length - 1];
        const fanOut: any[] = [];
        for (const item of items) {
          const interpolated = interpolateArgs(stepArgs, item);
          fanOut.push(await sendToBackend(server, "tools/call", {
            name: tool,
            arguments: interpolated
          }));
        }
        results.push(fanOut);
      } else {
        // Single execution
        results.push(await sendToBackend(server, "tools/call", {
          name: tool,
          arguments: stepArgs
        }));
      }
    }
    return {
      content: [{
        type: "text",
        text: JSON.stringify({ steps: steps.length, results }, null, 2)
      }]
    };
  }

  // Regular tool call: route to the appropriate backend.
  const backendName = toolRoutes.get(name);
  if (!backendName) {
    throw new Error(`Unknown tool: ${name}. Available: ${[...toolRoutes.keys()].join(", ")}`);
  }

  // Strip the "{server}_" prefix before calling the backend. slice() is
  // safer than replace(), which would rewrite the first occurrence of the
  // prefix anywhere in the name.
  const originalName = name.slice(backendName.length + 1);
  return sendToBackend(backendName, "tools/call", {
    name: originalName,
    arguments: args
  });
});
function interpolateArgs(args: any, context: any): any {
  const str = JSON.stringify(args);
  // Substitute ${item.key} placeholders. JSON-encoding the value and
  // trimming its surrounding quotes keeps quotes and backslashes in the
  // value from corrupting the JSON we re-parse below.
  const interpolated = str.replace(/\$\{(\w+)\.(\w+)\}/g, (_, obj, key) => {
    if (obj === "item" && context[key] !== undefined) {
      return JSON.stringify(String(context[key])).slice(1, -1);
    }
    return "";
  });
  return JSON.parse(interpolated);
}
async function main() {
await discoverTools();
const transport = new StdioServerTransport();
await gateway.connect(transport);
console.error("Gateway server running on stdio");
}
main().catch(console.error);
Learning Milestones
| Milestone | What It Proves | Verification |
|---|---|---|
| Tools route to correct servers | You understand the gateway pattern | Call db_query and github_list_prs |
| Namespacing prevents collisions | You understand tool discovery | Both servers can have “query” tools |
| Cross-server workflows work | You can compose operations | Execute the slow-query workflow |
| Errors are handled gracefully | You have built a robust system | Backend failure returns clear error |
Core Challenges Mapped to Concepts
| Challenge | Concept | Book Reference |
|---|---|---|
| Tool namespace management | Avoiding collisions | MCP Specification |
| Request routing | Service mesh patterns | “Building Microservices” Ch. 4 |
| Cross-server workflows | Orchestration | “Microservices Patterns” Ch. 4 |
| Error handling across servers | Distributed error handling | “Designing Data-Intensive Applications” |
Extension Ideas
Once the basic gateway works, consider these enhancements:
- Add dynamic server discovery from configuration files
- Implement circuit breakers for failing backends
- Add workflow persistence for long-running operations
- Implement parallel execution for independent steps
- Add retry logic with exponential backoff
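The circuit-breaker idea above can be sketched in a few lines: after a threshold of consecutive failures the circuit opens and calls fail fast until a cooldown elapses, after which one retry is allowed through. Thresholds and the class shape are illustrative:

```typescript
// Sketch: minimal circuit breaker for a failing backend.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private threshold = 3, private cooldownMs = 30_000) {}

  // Open while the failure count is at threshold and the cooldown has
  // not elapsed; after cooldown, callers may probe the backend again.
  isOpen(now = Date.now()): boolean {
    return this.failures >= this.threshold && now - this.openedAt < this.cooldownMs;
  }

  recordSuccess(): void {
    this.failures = 0;
  }

  recordFailure(now = Date.now()): void {
    this.failures++;
    if (this.failures === this.threshold) this.openedAt = now;
  }
}
```

The gateway would check `isOpen()` before `sendToBackend` and return a fast, clear error instead of waiting on a dead process.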
Common Pitfalls
- Not handling backend startup delays - Wait for initialization
- Forgetting to namespace tools - Collisions cause routing failures
- Synchronous workflow execution - Use async for parallelism
- Not propagating errors - Backend errors should reach Claude
- Memory leaks from orphan processes - Clean up on gateway shutdown
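The orphan-process pitfall above comes down to signalling every backend child on shutdown. The structural `Killable` interface below matches the relevant part of Node's `ChildProcess`; the signal wiring is illustrative:

```typescript
// Sketch: terminate live backend processes on gateway shutdown.
interface Killable {
  killed: boolean;
  kill: (signal?: string) => boolean;
}

// Send SIGTERM to every backend still alive; returns how many were signalled.
function killAll(procs: Killable[]): number {
  let signalled = 0;
  for (const proc of procs) {
    if (!proc.killed) {
      proc.kill("SIGTERM");
      signalled++;
    }
  }
  return signalled;
}

// Wiring (illustrative): run cleanup on exit and on termination signals.
// const backends: Killable[] = [];
// process.on("exit", () => killAll(backends));
// process.on("SIGINT", () => { killAll(backends); process.exit(130); });
// process.on("SIGTERM", () => { killAll(backends); process.exit(143); });
```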
Success Criteria
You have completed this project when:
- Your gateway discovers tools from multiple backend servers
- Tools are namespaced to prevent collisions
- Tool calls route to the correct backend
- Cross-server workflows execute successfully
- Backend failures return helpful error messages
- The gateway handles backend restarts gracefully
- Workflow steps can reference previous results
- The gateway cleans up resources on shutdown