feat(core): host-side MCP server support for Docker agents #197

Open
oheckmann74 wants to merge 3 commits into edspencer:main from oheckmann74:feat/mcp-host-bridge

Conversation

oheckmann74 (Contributor) commented Mar 5, 2026

Summary

  • host: true MCP servers: Agents running in Docker can now use MCP servers that must run on the host (e.g., PrusaSlicer GUI automation, Bear notes). The server spawns on the host via createMcpHostBridge, and tools are bridged into the container via HTTP.
  • host.docker.internal routing: MCP HTTP bridges use host.docker.internal for Docker agents instead of 127.0.0.1, so containerized agents can reach host-side bridges.
  • partitionMcpServers utility: Separates host: true servers from container-side servers at config time.
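The partitioning logic can be pictured roughly as follows. This is an illustrative sketch only; the actual `partitionMcpServers` in `packages/core/src/runner/runtime/mcp-host-bridge.ts` may differ in its exact types, and `McpServerConfig` here is a stand-in for the real schema type:

```typescript
// Illustrative stand-in for the real config schema type.
interface McpServerConfig {
  command?: string;
  args?: string[];
  host?: boolean; // true => spawn on the host and bridge into the container
}

// Split a server map into [hostServers, containerServers] based on `host: true`.
function partitionMcpServers(
  servers: Record<string, McpServerConfig>,
): [Record<string, McpServerConfig>, Record<string, McpServerConfig>] {
  const hostServers: Record<string, McpServerConfig> = {};
  const containerServers: Record<string, McpServerConfig> = {};
  for (const [name, config] of Object.entries(servers)) {
    if (config.host === true) {
      hostServers[name] = config;
    } else {
      containerServers[name] = config;
    }
  }
  return [hostServers, containerServers];
}
```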

Config example

mcp_servers:
  prusaslicer:
    command: /path/to/prusaslicer-mcp/server.py
    args: ["/path/to/server.py"]
    host: true  # runs on host, bridged into Docker container
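Conceptually, once bridged, the containerized agent sees such a host server as an ordinary HTTP MCP server in its `--mcp-config`. A sketch of the resulting entry (the port is assigned dynamically at runtime; the value here is made up for illustration):

```typescript
// Illustrative shape of the container-side MCP config entry after bridging.
const bridgeHost = "host.docker.internal"; // used so the container can reach the host
const bridgePort = 45873; // hypothetical; the HTTP bridge picks a port at runtime
const mcpConfig = {
  mcpServers: {
    prusaslicer: {
      type: "http",
      url: `http://${bridgeHost}:${bridgePort}/mcp`,
    },
  },
};
```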

Test plan

  • Unit tests for createMcpHostBridge and partitionMcpServers
  • Typecheck clean
  • 3077 core tests pass

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Optional host-mode for MCP servers so selected servers run on the host and are bridged into containerized agents
    • HTTP bridge support for injected MCP servers with configurable bridge host for improved container access
  • Tests

    • Added unit tests validating host vs container server partitioning, bridge behavior, tool invocation, and lifecycle cleanup
  • Chores

    • Release note added for the host-bridge feature

oheckmann74 and others added 2 commits March 5, 2026 11:40
When an agent runs in Docker and an MCP server has `host: true` in its
config, the server is spawned on the host instead of inside the container.
Tools are discovered via MCP protocol and bridged into the container
through the existing HTTP bridge infrastructure.

This enables MCP servers that need host access (e.g., Bear notes which
requires macOS `open` command and Full Disk Access) to work with
containerized agents.

- Add `host` field to McpServerSchema
- New mcp-host-bridge module: spawns host-side MCP process via
  StdioClientTransport, discovers tools, wraps as InjectedMcpServerDef
- partitionMcpServers utility splits host vs container servers
- ContainerRunner.execute() partitions before dispatch so both CLI and
  SDK runtime paths benefit

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… agents

CLIRuntime MCP HTTP bridges used 127.0.0.1 which is unreachable from
inside Docker containers. Add mcpBridgeHost option (defaults to 127.0.0.1)
and set it to host.docker.internal in ContainerRunner's CLI path.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
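The host-selection behavior described in this commit can be sketched as below. The option name `mcpBridgeHost` comes from the commit message; the surrounding types and helper function are illustrative, not the actual implementation:

```typescript
// Illustrative sketch: resolving which host to put in bridge URLs.
interface BridgeHostOptions {
  mcpBridgeHost?: string;
}

function resolveBridgeHost(options: BridgeHostOptions): string {
  // Default matches the commit message: 127.0.0.1 unless overridden.
  // ContainerRunner's CLI path would pass "host.docker.internal" here,
  // since 127.0.0.1 inside a container refers to the container itself.
  return options.mcpBridgeHost ?? "127.0.0.1";
}
```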
oheckmann74 requested a review from edspencer as a code owner, March 5, 2026 19:42
coderabbitai bot (Contributor) commented Mar 5, 2026

📝 Walkthrough


Adds host-side MCP server bridging: schema host flag, new mcp-host-bridge module with partitioning and host-to-container HTTP bridges, runtime integrations for CLI and container flows, unit tests, and environment adjustments for MCP file uploads.

Changes

  • Configuration Schema (packages/core/src/config/schema.ts): Added an optional `host` boolean field to McpServerSchema to mark servers for host-side execution when Docker is enabled.
  • MCP Host Bridge Module (packages/core/src/runner/runtime/mcp-host-bridge.ts): New module implementing createMcpHostBridge() to spawn host MCP servers via stdio transport, connect via the MCP SDK Client, discover tools, expose handlers, and export partitionMcpServers() and McpHostBridgeHandle.
  • MCP Host Bridge Tests (packages/core/src/runner/runtime/__tests__/mcp-host-bridge.test.ts): New test suite validating partitioning, bridge creation, tool discovery and invocation, handler delegation to client.callTool, and lifecycle cleanup, using a mocked SDK client and transport.
  • CLI Runtime Integration (packages/core/src/runner/runtime/cli-runtime.ts): Starts MCP HTTP bridges for injected servers, adds CLIRuntimeOptions.mcpBridgeHost and CLIRuntime.mcpBridgeHost, extends allowedTools patterns for injected MCP tools, and temporarily adjusts the stream-close timeout for certain file-sender tools.
  • Container Runtime Integration (packages/core/src/runner/runtime/container-runner.ts): Partitions MCP servers into host and container sets, creates host bridges and injects their HTTP definitions, configures the bridge host (host.docker.internal) for container accessibility, and manages bridge lifecycle with cleanup on errors.
  • Changeset (.changeset/mcp-host-bridge.md): Adds a minor release note describing host-side MCP server support and HTTP bridge URL behavior.
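The handle returned by createMcpHostBridge can be pictured roughly as follows, inferred from how container-runner.ts uses it (serverDef is injected into injectedMcpServers; close() runs in cleanup). Everything beyond `serverDef.name` and `close()` is an assumption for illustration:

```typescript
// Rough sketch of the bridge handle; field details are assumptions.
interface McpToolCallResultSketch {
  content: Array<{ type: string; text: string }>;
  isError: boolean;
}

interface McpHostBridgeHandleSketch {
  serverDef: {
    name: string;
    tools: Array<{
      name: string;
      description: string;
      inputSchema: Record<string, unknown>;
      handler: (args: Record<string, unknown>) => Promise<McpToolCallResultSketch>;
    }>;
  };
  close: () => Promise<void>; // shut down the MCP client and host subprocess
}

// A stub handle, roughly how the unit tests might mock one:
const stubHandle: McpHostBridgeHandleSketch = {
  serverDef: {
    name: "prusaslicer",
    tools: [
      {
        name: "slice_model", // hypothetical tool name
        description: "Illustrative tool",
        inputSchema: {},
        handler: async () => ({
          content: [{ type: "text", text: "ok" }],
          isError: false,
        }),
      },
    ],
  },
  close: async () => {},
};
```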

Sequence Diagram(s)

sequenceDiagram
    participant Agent as Agent / Runtime
    participant Partition as partitionMcpServers()
    participant HostBridge as createMcpHostBridge()
    participant StdioTransport as Stdio Transport
    participant HostProcess as Host MCP Server
    participant SDKClient as MCP SDK Client
    participant ToolHandler as Tool Handler

    Agent->>Partition: provide mcpServers config
    Partition-->>Agent: return [hostServers, containerServers]

    Agent->>HostBridge: create bridge for each hostServer
    HostBridge->>StdioTransport: spawn host process
    StdioTransport->>HostProcess: launch MCP server
    HostProcess-->>StdioTransport: stdio ready

    HostBridge->>SDKClient: new Client(transport)
    SDKClient->>HostProcess: list_tools request
    HostProcess-->>SDKClient: tools + schemas
    SDKClient-->>HostBridge: tools list

    HostBridge->>Agent: return serverDef + handlers

    Agent->>ToolHandler: invoke tool with args
    ToolHandler->>SDKClient: callTool(toolName, args)
    SDKClient->>HostProcess: call_tool request
    HostProcess-->>SDKClient: result (content, isError)
    SDKClient-->>ToolHandler: tool result
    ToolHandler-->>Agent: return result

    Agent->>HostBridge: cleanup
    HostBridge->>SDKClient: close()
    SDKClient->>StdioTransport: close transport
    StdioTransport->>HostProcess: terminate

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 A tiny bridge I built with care,
For MCP tools to skip and share,
Host and container now hand in paw,
Tools called through tunnels, tidy and raw,
Hops and handlers—cleanup done fair. 🥕

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed (check skipped; CodeRabbit's high-level summary is enabled)
  • Title Check: ✅ Passed (the title 'feat(core): host-side MCP server support for Docker agents' accurately and clearly summarizes the main change: adding support for MCP servers to run on the host and be bridged into Docker containers)
  • Docstring Coverage: ✅ Passed (coverage is 100.00%; the required threshold is 80.00%)


coderabbitai bot left a comment
Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/core/src/runner/runtime/cli-runtime.ts`:
- Around line 190-265: The code starts MCP HTTP bridges (bridges array via
startMcpHttpBridge) and mutates process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT
(with savedStreamCloseTimeout) but never restores or closes them on exit; add a
try/finally around the logic that uses injectedMcpServers so that in the finally
you iterate bridges and await b.close() (best-effort catch per bridge) and
restore process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT to savedStreamCloseTimeout
(use undefined/null marker semantics already set), and apply the same
finally-style cleanup pattern to the other similar block referenced (around the
code at the other occurrence).

In `@packages/core/src/runner/runtime/container-runner.ts`:
- Around line 91-126: Move the host bridge startup into the same lifecycle
try/finally that manages container cleanup so partial failures get cleaned up:
stop creating hostBridges and calling createMcpHostBridge(...) before the try;
instead, build the list of host server configs (using
partitionMcpServers(agent.mcp_servers)) before the try, then inside the try call
createMcpHostBridge for each host server and push into hostBridges, populate
injectedMcpServers and set effectiveOptions there (updating agent.mcp_servers to
containerServers as currently done), and keep the existing finally block to
close hostBridges and tear down the container; ensure references to hostBridges,
effectiveOptions, injectedMcpServers, and createMcpHostBridge are used as shown
so cleanup covers bridge startup failures.

In `@packages/core/src/runner/runtime/mcp-host-bridge.ts`:
- Around line 53-55: Replace the raw throw in the MCP host config check with a
FleetManagerError: import FleetManagerError from 'fleet-manager/errors.ts' and
throw new FleetManagerError(...) instead of Error inside the branch that checks
if (!config.command) in the function/method in mcp-host-bridge.ts; ensure the
FleetManagerError message is actionable (e.g., include the MCP name and guidance
like "provide a command when host:true") and preserve any surrounding context or
logging.
- Around line 67-85: After calling client.connect(transport) in the bridge
initialization, ensure the MCP client is closed if any subsequent
discovery/mapping (client.listTools() or the mapping to InjectedMcpToolDef and
its handler creation) throws; wrap the discovery/mapping logic (the
client.listTools() call and the map that builds InjectedMcpToolDef with handler
that calls client.callTool) in a try/catch or finally block and call
client.close() (or the appropriate shutdown method) on error before rethrowing
so the subprocess isn't left running and resources are cleaned up.
- Around line 15-16: Add the missing `@modelcontextprotocol/sdk` dependency to
packages/core/package.json so imports of Client and StdioClientTransport
resolve; replace the raw Error at the MCP host validation with a
FleetManagerError by importing FleetManagerError from
"../../fleet-manager/errors.js" and throwing new FleetManagerError(`MCP server
'${name}' has host: true but no command specified`); and wrap the
client.connect(transport) call in a try-catch that on failure cleans up the
spawned subprocess/transport (e.g., dispose/kill the StdioClientTransport or
process) before rethrowing so the subprocess isn’t orphaned.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7bc60ac4-a738-4511-a35e-4e17654689f0

📥 Commits

Reviewing files that changed from the base of the PR and between 9b19d30 and 653186a.

📒 Files selected for processing (5)
  • packages/core/src/config/schema.ts
  • packages/core/src/runner/runtime/__tests__/mcp-host-bridge.test.ts
  • packages/core/src/runner/runtime/cli-runtime.ts
  • packages/core/src/runner/runtime/container-runner.ts
  • packages/core/src/runner/runtime/mcp-host-bridge.ts

Comment on lines +190 to +265
// Track env mutation so we can restore it (see CLAUDE_CODE_STREAM_CLOSE_TIMEOUT below)
let savedStreamCloseTimeout: string | undefined | null = null; // null = not mutated

// Start HTTP bridges for injected MCP servers (e.g., file sender)
// Same pattern as container-runner: expose in-process handlers via HTTP,
// then pass as HTTP-type MCP servers in --mcp-config
const bridges: McpHttpBridge[] = [];
if (options.injectedMcpServers && Object.keys(options.injectedMcpServers).length > 0) {
  for (const [name, def] of Object.entries(options.injectedMcpServers)) {
    let bridge: McpHttpBridge;
    try {
      bridge = await startMcpHttpBridge(def);
    } catch (bridgeError) {
      // Clean up any bridges that started successfully before this failure
      for (const b of bridges) {
        try {
          await b.close();
        } catch {
          // best-effort cleanup
        }
      }
      bridges.length = 0;
      throw bridgeError;
    }
    bridges.push(bridge);

    // Build or extend the --mcp-config to include this HTTP server
    // Find existing --mcp-config arg index to merge with it
    const mcpConfigIdx = args.indexOf("--mcp-config");
    let mcpConfig: { mcpServers: Record<string, unknown> };

    if (mcpConfigIdx !== -1 && mcpConfigIdx + 1 < args.length) {
      // Parse existing config and add the bridge
      mcpConfig = JSON.parse(args[mcpConfigIdx + 1]);
    } else {
      mcpConfig = { mcpServers: {} };
    }

    mcpConfig.mcpServers[name] = {
      type: "http",
      url: `http://${this.mcpBridgeHost}:${bridge.port}/mcp`,
    };

    const configJson = JSON.stringify(mcpConfig);
    if (mcpConfigIdx !== -1) {
      args[mcpConfigIdx + 1] = configJson;
    } else {
      args.push("--mcp-config", configJson);
    }

    logger.debug(`Started MCP HTTP bridge for '${name}' on port ${bridge.port}`);
  }

  // Auto-add injected MCP tool patterns to allowedTools.
  // Only needed when the agent has an explicit allowlist — without one, all tools
  // (including injected MCP tools) are allowed by default.
  const allowedToolsIdx = args.indexOf("--allowedTools");
  if (allowedToolsIdx !== -1 && allowedToolsIdx + 1 < args.length) {
    const existing = args[allowedToolsIdx + 1];
    const injectedPatterns = Object.keys(options.injectedMcpServers).map(
      (name) => `mcp__${name}__*`,
    );
    args[allowedToolsIdx + 1] = [existing, ...injectedPatterns].join(",");
  }

  // File uploads via MCP tools can take longer than the default 60s timeout.
  // Save the original value so we can restore it in `finally` to avoid leaking
  // state across concurrent jobs.
  if (options.injectedMcpServers["herdctl-file-sender"]) {
    if (!process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT) {
      savedStreamCloseTimeout = undefined; // marker: was not set
      process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT = "120000";
    }
  }
}


⚠️ Potential issue | 🔴 Critical

Close started MCP bridges and restore env mutation in finally.

Bridges started at Line 196 are never closed, and CLAUDE_CODE_STREAM_CLOSE_TIMEOUT set at Lines 259-262 is never restored. This leaks ports and process-global state across executions.

🐛 Suggested fix
-    } finally {
-      // Cleanup
-      watcher?.stop();
-    }
+    } finally {
+      // Cleanup watcher
+      watcher?.stop();
+
+      // Always close MCP HTTP bridges
+      for (const bridge of bridges) {
+        try {
+          await bridge.close();
+        } catch (err) {
+          logger.error(`Failed to close MCP HTTP bridge: ${err}`);
+        }
+      }
+
+      // Restore env mutation if we changed it
+      if (savedStreamCloseTimeout !== null) {
+        if (savedStreamCloseTimeout === undefined) {
+          delete process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT;
+        } else {
+          process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT = savedStreamCloseTimeout;
+        }
+      }
+    }

Also applies to: 495-498


Comment on lines +91 to +126
// Partition host: true MCP servers — these run on the host and get bridged
const hostBridges: McpHostBridgeHandle[] = [];
let effectiveOptions = options;

if (agent.mcp_servers) {
  const [hostServers, containerServers] = partitionMcpServers(agent.mcp_servers);

  if (Object.keys(hostServers).length > 0) {
    // Start host-side MCP bridges
    for (const [name, config] of Object.entries(hostServers)) {
      const handle = await createMcpHostBridge(name, config);
      hostBridges.push(handle);
    }

    // Build effective options with host servers moved to injectedMcpServers
    const injected = { ...(options.injectedMcpServers ?? {}) };
    for (const handle of hostBridges) {
      injected[handle.serverDef.name] = handle.serverDef;
    }

    effectiveOptions = {
      ...options,
      agent: {
        ...agent,
        mcp_servers:
          Object.keys(containerServers).length > 0 ? containerServers : undefined,
      },
      injectedMcpServers: injected,
    };

    logger.info(
      `Partitioned MCP servers: ${Object.keys(hostServers).length} host, ` +
        `${Object.keys(containerServers).length} container`,
    );
  }
}

⚠️ Potential issue | 🔴 Critical

Move host-bridge setup inside the guarded try/finally scope.

At Line 89 the container is already created, but host bridge startup (Lines 95-103) runs before the Line 128 try. A failure here skips cleanup for both partially started bridges and container lifecycle.

🐛 Suggested fix
-    if (agent.mcp_servers) {
-      const [hostServers, containerServers] = partitionMcpServers(agent.mcp_servers);
+    try {
+      if (agent.mcp_servers) {
+        const [hostServers, containerServers] = partitionMcpServers(agent.mcp_servers);
@@
-      }
-    }
-
-    try {
+      }
+
       // Get container ID for docker exec
       const containerInfo = await container.inspect();
       const containerId = containerInfo.Id;
@@
-    } catch (error) {
+    } catch (error) {
       const errorMessage = error instanceof Error ? error.message : String(error);
@@
-    } finally {
+    } finally {
       // Close host MCP bridges (kill subprocess + MCP client)
       for (const handle of hostBridges) {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (host-bridge setup wrapped in the lifecycle try; this replaces the block quoted above and continues into the container-ID lookup):

// Partition host: true MCP servers — these run on the host and get bridged
const hostBridges: McpHostBridgeHandle[] = [];
let effectiveOptions = options;

try {
  if (agent.mcp_servers) {
    const [hostServers, containerServers] = partitionMcpServers(agent.mcp_servers);

    if (Object.keys(hostServers).length > 0) {
      // Start host-side MCP bridges
      for (const [name, config] of Object.entries(hostServers)) {
        const handle = await createMcpHostBridge(name, config);
        hostBridges.push(handle);
      }

      // Build effective options with host servers moved to injectedMcpServers
      const injected = { ...(options.injectedMcpServers ?? {}) };
      for (const handle of hostBridges) {
        injected[handle.serverDef.name] = handle.serverDef;
      }

      effectiveOptions = {
        ...options,
        agent: {
          ...agent,
          mcp_servers:
            Object.keys(containerServers).length > 0 ? containerServers : undefined,
        },
        injectedMcpServers: injected,
      };

      logger.info(
        `Partitioned MCP servers: ${Object.keys(hostServers).length} host, ` +
          `${Object.keys(containerServers).length} container`,
      );
    }
  }

  // Get container ID for docker exec
  const containerInfo = await container.inspect();
  const containerId = containerInfo.Id;

Comment on lines +15 to +16
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "== MCP SDK dependency declarations =="
fd package.json --hidden --exclude node_modules | while read -r f; do
  v=$(jq -r '.dependencies["@modelcontextprotocol/sdk"] // .devDependencies["@modelcontextprotocol/sdk"] // empty' "$f")
  if [ -n "$v" ]; then
    echo "$f -> @modelcontextprotocol/sdk@$v"
  fi
done

echo
echo "== MCP SDK client imports in TS files =="
rg -n --type=ts '@modelcontextprotocol/sdk/client' | head -20

echo
echo "== TypeScript module resolution settings =="
fd 'tsconfig*.json' --hidden | while read -r f; do
  echo "--- $f"
  jq -r '.compilerOptions.moduleResolution // "default", .compilerOptions.module // "default"' "$f"
done

Repository: edspencer/herdctl

Length of output: 1184


🏁 Script executed:

# Read the actual file to examine content
wc -l packages/core/src/runner/runtime/mcp-host-bridge.ts



🏁 Script executed:

cat -n packages/core/src/runner/runtime/mcp-host-bridge.ts



🏁 Script executed:

# Find FleetManagerError definition and usage patterns
rg -n 'FleetManagerError' packages/core/src --type=ts -B 2 -A 2 | head -40



🏁 Script executed:

# Check the error handling imports in core package
rg -n "import.*Error.*from.*fleet-manager" packages/core/src --type=ts | head -10



Add @modelcontextprotocol/sdk dependency to packages/core/package.json — currently blocking build.

The TS2307 error at line 15 is caused by a missing dependency declaration. @modelcontextprotocol/sdk is not declared in any package.json file and must be added.

Additionally, line 54 throws a raw Error instead of extending FleetManagerError. Per guidelines, all errors in packages/core/src/**/*.{ts,tsx} must extend FleetManagerError and include actionable messages. Refactor:

import { FleetManagerError } from "../../fleet-manager/errors.js";

then at line 54:

throw new FleetManagerError(`MCP server '${name}' has host: true but no command specified`);

Also wrap the client.connect(transport) call (line 67) in a try-catch to ensure the subprocess is cleaned up if connection fails; otherwise the spawned process may be orphaned.

🧰 Tools
🪛 GitHub Actions: CI

[error] 15-15: Cannot find module '@modelcontextprotocol/sdk/client/index.js' or its corresponding type declarations. (TS2307)


Comment on lines +53 to +55
if (!config.command) {
  throw new Error(`MCP server '${name}' has host: true but no command specified`);
}

🛠️ Refactor suggestion | 🟠 Major

Use a FleetManagerError-derived error for invalid host MCP config.

This new throw path currently uses raw Error; in core, this should be a typed, actionable error.

As per coding guidelines, "Extend FleetManagerError from fleet-manager/errors.ts for all errors and always include actionable error messages."


Comment on lines +67 to +85
await client.connect(transport);

// Discover tools from the MCP server
const toolsResult = await client.listTools();
const tools: InjectedMcpToolDef[] = (toolsResult.tools ?? []).map((tool) => ({
name: tool.name,
description: tool.description ?? "",
inputSchema: (tool.inputSchema ?? {}) as Record<string, unknown>,
handler: async (args: Record<string, unknown>): Promise<McpToolCallResult> => {
const result = await client.callTool({ name: tool.name, arguments: args });
const content = Array.isArray(result.content)
? (result.content as Array<{ type: string; text: string }>)
: [{ type: "text", text: JSON.stringify(result) }];
return {
content,
isError: result.isError === true,
};
},
}));

⚠️ Potential issue | 🟠 Major

Close the MCP client when bridge initialization fails post-connect.

If discovery/mapping throws after Line 67, the subprocess can be left running because no handle is returned.

♻️ Suggested fix
-  await client.connect(transport);
-
-  // Discover tools from the MCP server
-  const toolsResult = await client.listTools();
-  const tools: InjectedMcpToolDef[] = (toolsResult.tools ?? []).map((tool) => ({
+  let tools: InjectedMcpToolDef[] = [];
+  try {
+    await client.connect(transport);
+    const toolsResult = await client.listTools();
+    tools = (toolsResult.tools ?? []).map((tool) => ({
       name: tool.name,
       description: tool.description ?? "",
       inputSchema: (tool.inputSchema ?? {}) as Record<string, unknown>,
       handler: async (args: Record<string, unknown>): Promise<McpToolCallResult> => {
         const result = await client.callTool({ name: tool.name, arguments: args });
         const content = Array.isArray(result.content)
           ? (result.content as Array<{ type: string; text: string }>)
           : [{ type: "text", text: JSON.stringify(result) }];
         return {
           content,
           isError: result.isError === true,
         };
       },
-  }));
+    }));
+  } catch (error) {
+    try {
+      await client.close();
+    } catch {
+      // best-effort cleanup
+    }
+    throw error;
+  }

coderabbitai bot left a comment

🧹 Nitpick comments (1)
.changeset/mcp-host-bridge.md (1)

5-5: Consider breaking the description into shorter sentences for better readability.

The changelog entry is clear and accurate but reads as a long run-on sentence. Breaking it into 2-3 shorter sentences would improve scannability for users reading release notes.

📝 Optional refactor for improved readability
-Add host-side MCP server support for Docker agents. Servers with `host: true` are spawned on the host and bridged into the container via HTTP, enabling MCP servers that need host resources (filesystem, credentials) while the agent runs in Docker. Also fix MCP HTTP bridge URLs to use `host.docker.internal` so containers can reach host-side bridges.
+Add host-side MCP server support for Docker agents. Servers marked with `host: true` are spawned on the host and bridged into the container via HTTP. This enables MCP servers that need host resources (filesystem, credentials) while the agent runs in Docker. MCP HTTP bridge URLs now use `host.docker.internal` so containers can reach host-side bridges.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.changeset/mcp-host-bridge.md at line 5, Split the long changelog sentence
into 2–3 shorter sentences to improve readability: describe that host-side MCP
server support for Docker agents was added (mention servers with `host: true`
are spawned on the host and bridged into the container via HTTP), then explain
the purpose (enable MCP servers that need host resources like filesystem and
credentials while agents run in Docker), and finally state the fix to MCP HTTP
bridge URLs to use `host.docker.internal` so containers can reach host-side
bridges; keep the same technical terms (`host: true`, `MCP server`,
`host.docker.internal`) but break the content into concise, scannable sentences.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 45577ff8-07a7-43a3-8ecc-624e3533c7bc

📥 Commits

Reviewing files that changed from the base of the PR and between 653186a and 98c5ed6.

📒 Files selected for processing (1)
  • .changeset/mcp-host-bridge.md
