feat(core): host-side MCP server support for Docker agents #197
oheckmann74 wants to merge 3 commits into edspencer:main from
Conversation
When an agent runs in Docker and an MCP server has `host: true` in its config, the server is spawned on the host instead of inside the container. Tools are discovered via the MCP protocol and bridged into the container through the existing HTTP bridge infrastructure. This enables MCP servers that need host access (e.g., Bear notes, which requires the macOS `open` command and Full Disk Access) to work with containerized agents.

- Add `host` field to `McpServerSchema`
- New `mcp-host-bridge` module: spawns the host-side MCP process via `StdioClientTransport`, discovers tools, and wraps them as an `InjectedMcpServerDef`
- `partitionMcpServers` utility splits host vs. container servers
- `ContainerRunner.execute()` partitions before dispatch so both the CLI and SDK runtime paths benefit

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
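For illustration, a config using the new field might look like the following sketch. The server names, packages, and surrounding keys are hypothetical — only the `host: true` flag is the field this PR adds:

```yaml
mcp_servers:
  bear:                      # hypothetical server name
    command: npx
    args: ["-y", "bear-mcp"] # hypothetical package
    host: true               # new: spawn on the host, bridge into the container
  filesystem:
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
    # no host flag: runs inside the container as before
```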
… agents

CLIRuntime MCP HTTP bridges used 127.0.0.1, which is unreachable from inside Docker containers. Add an `mcpBridgeHost` option (defaults to 127.0.0.1) and set it to `host.docker.internal` in ContainerRunner's CLI path.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
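The routing decision in that commit can be sketched as below. The helper names are hypothetical (the commit only names the `mcpBridgeHost` option), but the logic — loopback on the host, Docker's host alias from inside a container — follows the commit message:

```typescript
// Hypothetical sketch of the mcpBridgeHost decision; not herdctl's actual API.
function resolveMcpBridgeHost(agentRunsInDocker: boolean): string {
  // Inside a container, 127.0.0.1 is the container's own loopback, so the
  // host-side bridge must be reached via Docker's special host alias.
  return agentRunsInDocker ? "host.docker.internal" : "127.0.0.1";
}

// Bridge URL in the shape the CLI runtime passes via --mcp-config.
function bridgeUrl(host: string, port: number): string {
  return `http://${host}:${port}/mcp`;
}
```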
📝 Walkthrough
Adds host-side MCP server bridging: schema
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Agent as Agent / Runtime
    participant Partition as partitionMcpServers()
    participant HostBridge as createMcpHostBridge()
    participant StdioTransport as Stdio Transport
    participant HostProcess as Host MCP Server
    participant SDKClient as MCP SDK Client
    participant ToolHandler as Tool Handler
    Agent->>Partition: provide mcpServers config
    Partition-->>Agent: return [hostServers, containerServers]
    Agent->>HostBridge: create bridge for each hostServer
    HostBridge->>StdioTransport: spawn host process
    StdioTransport->>HostProcess: launch MCP server
    HostProcess-->>StdioTransport: stdio ready
    HostBridge->>SDKClient: new Client(transport)
    SDKClient->>HostProcess: list_tools request
    HostProcess-->>SDKClient: tools + schemas
    SDKClient-->>HostBridge: tools list
    HostBridge->>Agent: return serverDef + handlers
    Agent->>ToolHandler: invoke tool with args
    ToolHandler->>SDKClient: callTool(toolName, args)
    SDKClient->>HostProcess: call_tool request
    HostProcess-->>SDKClient: result (content, isError)
    SDKClient-->>ToolHandler: tool result
    ToolHandler-->>Agent: return result
    Agent->>HostBridge: cleanup
    HostBridge->>SDKClient: close()
    SDKClient->>StdioTransport: close transport
    StdioTransport->>HostProcess: terminate
```

Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/runner/runtime/cli-runtime.ts`:
- Around line 190-265: The code starts MCP HTTP bridges (bridges array via
startMcpHttpBridge) and mutates process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT
(with savedStreamCloseTimeout) but never restores or closes them on exit; add a
try/finally around the logic that uses injectedMcpServers so that in the finally
you iterate bridges and await b.close() (best-effort catch per bridge) and
restore process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT to savedStreamCloseTimeout
(use undefined/null marker semantics already set), and apply the same
finally-style cleanup pattern to the other similar block referenced (around the
code at the other occurrence).
In `@packages/core/src/runner/runtime/container-runner.ts`:
- Around line 91-126: Move the host bridge startup into the same lifecycle
try/finally that manages container cleanup so partial failures get cleaned up:
stop creating hostBridges and calling createMcpHostBridge(...) before the try;
instead, build the list of host server configs (using
partitionMcpServers(agent.mcp_servers)) before the try, then inside the try call
createMcpHostBridge for each host server and push into hostBridges, populate
injectedMcpServers and set effectiveOptions there (updating agent.mcp_servers to
containerServers as currently done), and keep the existing finally block to
close hostBridges and tear down the container; ensure references to hostBridges,
effectiveOptions, injectedMcpServers, and createMcpHostBridge are used as shown
so cleanup covers bridge startup failures.
In `@packages/core/src/runner/runtime/mcp-host-bridge.ts`:
- Around line 53-55: Replace the raw throw in the MCP host config check with a
FleetManagerError: import FleetManagerError from 'fleet-manager/errors.ts' and
throw new FleetManagerError(...) instead of Error inside the branch that checks
if (!config.command) in the function/method in mcp-host-bridge.ts; ensure the
FleetManagerError message is actionable (e.g., include the MCP name and guidance
like "provide a command when host:true") and preserve any surrounding context or
logging.
- Around line 67-85: After calling client.connect(transport) in the bridge
initialization, ensure the MCP client is closed if any subsequent
discovery/mapping (client.listTools() or the mapping to InjectedMcpToolDef and
its handler creation) throws; wrap the discovery/mapping logic (the
client.listTools() call and the map that builds InjectedMcpToolDef with handler
that calls client.callTool) in a try/catch or finally block and call
client.close() (or the appropriate shutdown method) on error before rethrowing
so the subprocess isn't left running and resources are cleaned up.
- Around line 15-16: Add the missing `@modelcontextprotocol/sdk` dependency to
packages/core/package.json so imports of Client and StdioClientTransport
resolve; replace the raw Error at the MCP host validation with a
FleetManagerError by importing FleetManagerError from
"../../fleet-manager/errors.js" and throwing new FleetManagerError(`MCP server
'${name}' has host: true but no command specified`); and wrap the
client.connect(transport) call in a try-catch that on failure cleans up the
spawned subprocess/transport (e.g., dispose/kill the StdioClientTransport or
process) before rethrowing so the subprocess isn’t orphaned.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7bc60ac4-a738-4511-a35e-4e17654689f0
📒 Files selected for processing (5)
packages/core/src/config/schema.ts
packages/core/src/runner/runtime/__tests__/mcp-host-bridge.test.ts
packages/core/src/runner/runtime/cli-runtime.ts
packages/core/src/runner/runtime/container-runner.ts
packages/core/src/runner/runtime/mcp-host-bridge.ts
```ts
// Track env mutation so we can restore it (see CLAUDE_CODE_STREAM_CLOSE_TIMEOUT below)
let savedStreamCloseTimeout: string | undefined | null = null; // null = not mutated

// Start HTTP bridges for injected MCP servers (e.g., file sender)
// Same pattern as container-runner: expose in-process handlers via HTTP,
// then pass as HTTP-type MCP servers in --mcp-config
const bridges: McpHttpBridge[] = [];
if (options.injectedMcpServers && Object.keys(options.injectedMcpServers).length > 0) {
  for (const [name, def] of Object.entries(options.injectedMcpServers)) {
    let bridge: McpHttpBridge;
    try {
      bridge = await startMcpHttpBridge(def);
    } catch (bridgeError) {
      // Clean up any bridges that started successfully before this failure
      for (const b of bridges) {
        try {
          await b.close();
        } catch {
          // best-effort cleanup
        }
      }
      bridges.length = 0;
      throw bridgeError;
    }
    bridges.push(bridge);

    // Build or extend the --mcp-config to include this HTTP server
    // Find existing --mcp-config arg index to merge with it
    const mcpConfigIdx = args.indexOf("--mcp-config");
    let mcpConfig: { mcpServers: Record<string, unknown> };

    if (mcpConfigIdx !== -1 && mcpConfigIdx + 1 < args.length) {
      // Parse existing config and add the bridge
      mcpConfig = JSON.parse(args[mcpConfigIdx + 1]);
    } else {
      mcpConfig = { mcpServers: {} };
    }

    mcpConfig.mcpServers[name] = {
      type: "http",
      url: `http://${this.mcpBridgeHost}:${bridge.port}/mcp`,
    };

    const configJson = JSON.stringify(mcpConfig);
    if (mcpConfigIdx !== -1) {
      args[mcpConfigIdx + 1] = configJson;
    } else {
      args.push("--mcp-config", configJson);
    }

    logger.debug(`Started MCP HTTP bridge for '${name}' on port ${bridge.port}`);
  }

  // Auto-add injected MCP tool patterns to allowedTools.
  // Only needed when the agent has an explicit allowlist — without one, all tools
  // (including injected MCP tools) are allowed by default.
  const allowedToolsIdx = args.indexOf("--allowedTools");
  if (allowedToolsIdx !== -1 && allowedToolsIdx + 1 < args.length) {
    const existing = args[allowedToolsIdx + 1];
    const injectedPatterns = Object.keys(options.injectedMcpServers).map(
      (name) => `mcp__${name}__*`,
    );
    args[allowedToolsIdx + 1] = [existing, ...injectedPatterns].join(",");
  }

  // File uploads via MCP tools can take longer than the default 60s timeout.
  // Save the original value so we can restore it in `finally` to avoid leaking
  // state across concurrent jobs.
  if (options.injectedMcpServers["herdctl-file-sender"]) {
    if (!process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT) {
      savedStreamCloseTimeout = undefined; // marker: was not set
      process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT = "120000";
    }
  }
}
```
Close started MCP bridges and restore env mutation in finally.
Bridges started at Line 196 are never closed, and CLAUDE_CODE_STREAM_CLOSE_TIMEOUT set at Lines 259-262 is never restored. This leaks ports and process-global state across executions.
🐛 Suggested fix

```diff
-  } finally {
-    // Cleanup
-    watcher?.stop();
-  }
+  } finally {
+    // Cleanup watcher
+    watcher?.stop();
+
+    // Always close MCP HTTP bridges
+    for (const bridge of bridges) {
+      try {
+        await bridge.close();
+      } catch (err) {
+        logger.error(`Failed to close MCP HTTP bridge: ${err}`);
+      }
+    }
+
+    // Restore env mutation if we changed it
+    if (savedStreamCloseTimeout !== null) {
+      if (savedStreamCloseTimeout === undefined) {
+        delete process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT;
+      } else {
+        process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT = savedStreamCloseTimeout;
+      }
+    }
+  }
```

Also applies to: 495-498
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/runner/runtime/cli-runtime.ts` around lines 190 - 265, The
code starts MCP HTTP bridges (bridges array via startMcpHttpBridge) and mutates
process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT (with savedStreamCloseTimeout) but
never restores or closes them on exit; add a try/finally around the logic that
uses injectedMcpServers so that in the finally you iterate bridges and await
b.close() (best-effort catch per bridge) and restore
process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT to savedStreamCloseTimeout (use
undefined/null marker semantics already set), and apply the same finally-style
cleanup pattern to the other similar block referenced (around the code at the
other occurrence).
```ts
// Partition host: true MCP servers — these run on the host and get bridged
const hostBridges: McpHostBridgeHandle[] = [];
let effectiveOptions = options;

if (agent.mcp_servers) {
  const [hostServers, containerServers] = partitionMcpServers(agent.mcp_servers);

  if (Object.keys(hostServers).length > 0) {
    // Start host-side MCP bridges
    for (const [name, config] of Object.entries(hostServers)) {
      const handle = await createMcpHostBridge(name, config);
      hostBridges.push(handle);
    }

    // Build effective options with host servers moved to injectedMcpServers
    const injected = { ...(options.injectedMcpServers ?? {}) };
    for (const handle of hostBridges) {
      injected[handle.serverDef.name] = handle.serverDef;
    }

    effectiveOptions = {
      ...options,
      agent: {
        ...agent,
        mcp_servers:
          Object.keys(containerServers).length > 0 ? containerServers : undefined,
      },
      injectedMcpServers: injected,
    };

    logger.info(
      `Partitioned MCP servers: ${Object.keys(hostServers).length} host, ` +
        `${Object.keys(containerServers).length} container`,
    );
  }
}
```
Move host-bridge setup inside the guarded try/finally scope.
At Line 89 the container is already created, but host bridge startup (Lines 95-103) runs before the Line 128 try. A failure here skips cleanup for both partially started bridges and container lifecycle.
🐛 Suggested fix

```diff
-    if (agent.mcp_servers) {
-      const [hostServers, containerServers] = partitionMcpServers(agent.mcp_servers);
+    try {
+      if (agent.mcp_servers) {
+        const [hostServers, containerServers] = partitionMcpServers(agent.mcp_servers);
@@
-      }
-    }
-
-    try {
+      }
+
       // Get container ID for docker exec
       const containerInfo = await container.inspect();
       const containerId = containerInfo.Id;
@@
-    } catch (error) {
+    } catch (error) {
       const errorMessage = error instanceof Error ? error.message : String(error);
@@
-    } finally {
+    } finally {
       // Close host MCP bridges (kill subprocess + MCP client)
       for (const handle of hostBridges) {
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// Partition host: true MCP servers — these run on the host and get bridged
const hostBridges: McpHostBridgeHandle[] = [];
let effectiveOptions = options;

try {
  if (agent.mcp_servers) {
    const [hostServers, containerServers] = partitionMcpServers(agent.mcp_servers);

    if (Object.keys(hostServers).length > 0) {
      // Start host-side MCP bridges
      for (const [name, config] of Object.entries(hostServers)) {
        const handle = await createMcpHostBridge(name, config);
        hostBridges.push(handle);
      }

      // Build effective options with host servers moved to injectedMcpServers
      const injected = { ...(options.injectedMcpServers ?? {}) };
      for (const handle of hostBridges) {
        injected[handle.serverDef.name] = handle.serverDef;
      }

      effectiveOptions = {
        ...options,
        agent: {
          ...agent,
          mcp_servers:
            Object.keys(containerServers).length > 0 ? containerServers : undefined,
        },
        injectedMcpServers: injected,
      };

      logger.info(
        `Partitioned MCP servers: ${Object.keys(hostServers).length} host, ` +
          `${Object.keys(containerServers).length} container`,
      );
    }
  }

  // Get container ID for docker exec
  const containerInfo = await container.inspect();
  const containerId = containerInfo.Id;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/runner/runtime/container-runner.ts` around lines 91 - 126,
Move the host bridge startup into the same lifecycle try/finally that manages
container cleanup so partial failures get cleaned up: stop creating hostBridges
and calling createMcpHostBridge(...) before the try; instead, build the list of
host server configs (using partitionMcpServers(agent.mcp_servers)) before the
try, then inside the try call createMcpHostBridge for each host server and push
into hostBridges, populate injectedMcpServers and set effectiveOptions there
(updating agent.mcp_servers to containerServers as currently done), and keep the
existing finally block to close hostBridges and tear down the container; ensure
references to hostBridges, effectiveOptions, injectedMcpServers, and
createMcpHostBridge are used as shown so cleanup covers bridge startup failures.
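The partition step the review keeps referring to can be sketched as follows. This is a minimal illustration, assuming the server config carries an optional `host` flag as the PR's schema change describes; the actual `partitionMcpServers` in the repo may differ:

```typescript
// Hypothetical shape of an MCP server config entry (only `host` is the new field).
type McpServerConfig = { command?: string; host?: boolean; [key: string]: unknown };

// Split a config map into [hostServers, containerServers] by the host flag.
function partitionMcpServers(
  servers: Record<string, McpServerConfig>,
): [Record<string, McpServerConfig>, Record<string, McpServerConfig>] {
  const hostServers: Record<string, McpServerConfig> = {};
  const containerServers: Record<string, McpServerConfig> = {};
  for (const [name, config] of Object.entries(servers)) {
    // host: true routes the server to the host-side bridge; everything
    // else stays inside the container as before.
    if (config.host === true) {
      hostServers[name] = config;
    } else {
      containerServers[name] = config;
    }
  }
  return [hostServers, containerServers];
}
```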
```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "== MCP SDK dependency declarations =="
fd package.json --hidden --exclude node_modules | while read -r f; do
  v=$(jq -r '.dependencies["@modelcontextprotocol/sdk"] // .devDependencies["@modelcontextprotocol/sdk"] // empty' "$f")
  if [ -n "$v" ]; then
    echo "$f -> @modelcontextprotocol/sdk@$v"
  fi
done
echo
echo "== MCP SDK client imports in TS files =="
rg -n --type=ts '@modelcontextprotocol/sdk/client' | head -20
echo
echo "== TypeScript module resolution settings =="
fd 'tsconfig*.json' --hidden | while read -r f; do
  echo "--- $f"
  jq -r '.compilerOptions.moduleResolution // "default", .compilerOptions.module // "default"' "$f"
done
```

Repository: edspencer/herdctl
Length of output: 1184
🏁 Script executed:
```bash
# Read the actual file to examine content
wc -l packages/core/src/runner/runtime/mcp-host-bridge.ts
```

Repository: edspencer/herdctl
Length of output: 116
🏁 Script executed:
```bash
cat -n packages/core/src/runner/runtime/mcp-host-bridge.ts
```

Repository: edspencer/herdctl
Length of output: 5826
🏁 Script executed:
```bash
# Find FleetManagerError definition and usage patterns
rg -n 'FleetManagerError' packages/core/src --type=ts -B 2 -A 2 | head -40
```

Repository: edspencer/herdctl
Length of output: 2992
🏁 Script executed:
```bash
# Check the error handling imports in core package
rg -n "import.*Error.*from.*fleet-manager" packages/core/src --type=ts | head -10
```

Repository: edspencer/herdctl
Length of output: 1127
Add @modelcontextprotocol/sdk dependency to packages/core/package.json — currently blocking build.
The TS2307 error at line 15 is caused by a missing dependency declaration. @modelcontextprotocol/sdk is not declared in any package.json file and must be added.
Additionally, line 54 throws a raw Error instead of extending FleetManagerError. Per guidelines, all errors in packages/core/src/**/*.{ts,tsx} must extend FleetManagerError and include actionable messages. Refactor:
```ts
import { FleetManagerError } from "../../fleet-manager/errors.js";
```

then at line 54:

```ts
throw new FleetManagerError(`MCP server '${name}' has host: true but no command specified`);
```

Also wrap the `client.connect(transport)` call (line 67) in a try-catch to ensure the subprocess is cleaned up if connection fails; otherwise the spawned process may be orphaned.
🧰 Tools
🪛 GitHub Actions: CI
[error] 15-15: Cannot find module '@modelcontextprotocol/sdk/client/index.js' or its corresponding type declarations. (TS2307)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/runner/runtime/mcp-host-bridge.ts` around lines 15 - 16,
Add the missing `@modelcontextprotocol/sdk` dependency to
packages/core/package.json so imports of Client and StdioClientTransport
resolve; replace the raw Error at the MCP host validation with a
FleetManagerError by importing FleetManagerError from
"../../fleet-manager/errors.js" and throwing new FleetManagerError(`MCP server
'${name}' has host: true but no command specified`); and wrap the
client.connect(transport) call in a try-catch that on failure cleans up the
spawned subprocess/transport (e.g., dispose/kill the StdioClientTransport or
process) before rethrowing so the subprocess isn’t orphaned.
```ts
if (!config.command) {
  throw new Error(`MCP server '${name}' has host: true but no command specified`);
}
```
🛠️ Refactor suggestion | 🟠 Major
Use a FleetManagerError-derived error for invalid host MCP config.
This new throw path currently uses raw Error; in core, this should be a typed, actionable error.
As per coding guidelines, "Extend FleetManagerError from fleet-manager/errors.ts for all errors and always include actionable error messages."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/runner/runtime/mcp-host-bridge.ts` around lines 53 - 55,
Replace the raw throw in the MCP host config check with a FleetManagerError:
import FleetManagerError from 'fleet-manager/errors.ts' and throw new
FleetManagerError(...) instead of Error inside the branch that checks if
(!config.command) in the function/method in mcp-host-bridge.ts; ensure the
FleetManagerError message is actionable (e.g., include the MCP name and guidance
like "provide a command when host:true") and preserve any surrounding context or
logging.
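The requested refactor can be sketched as below. The `FleetManagerError` class here is a stand-in assumption (the real one lives in the repo's `fleet-manager/errors.ts` and may carry extra fields); the validation helper is hypothetical and only illustrates the typed, actionable throw:

```typescript
// Stand-in for the repo's FleetManagerError base class (assumed shape).
class FleetManagerError extends Error {
  constructor(message: string) {
    super(message);
    this.name = "FleetManagerError";
  }
}

// Hypothetical validation helper matching the check in mcp-host-bridge.ts:
// a host: true server without a command is a config error the user must fix.
function validateHostServer(name: string, config: { command?: string }): void {
  if (!config.command) {
    throw new FleetManagerError(
      `MCP server '${name}' has host: true but no command specified — ` +
        `provide a command when host: true`,
    );
  }
}
```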
```ts
await client.connect(transport);

// Discover tools from the MCP server
const toolsResult = await client.listTools();
const tools: InjectedMcpToolDef[] = (toolsResult.tools ?? []).map((tool) => ({
  name: tool.name,
  description: tool.description ?? "",
  inputSchema: (tool.inputSchema ?? {}) as Record<string, unknown>,
  handler: async (args: Record<string, unknown>): Promise<McpToolCallResult> => {
    const result = await client.callTool({ name: tool.name, arguments: args });
    const content = Array.isArray(result.content)
      ? (result.content as Array<{ type: string; text: string }>)
      : [{ type: "text", text: JSON.stringify(result) }];
    return {
      content,
      isError: result.isError === true,
    };
  },
}));
```
Close the MCP client when bridge initialization fails post-connect.
If discovery/mapping throws after Line 67, the subprocess can be left running because no handle is returned.
♻️ Suggested fix

```diff
- await client.connect(transport);
-
- // Discover tools from the MCP server
- const toolsResult = await client.listTools();
- const tools: InjectedMcpToolDef[] = (toolsResult.tools ?? []).map((tool) => ({
+ let tools: InjectedMcpToolDef[] = [];
+ try {
+   await client.connect(transport);
+   const toolsResult = await client.listTools();
+   tools = (toolsResult.tools ?? []).map((tool) => ({
      name: tool.name,
      description: tool.description ?? "",
      inputSchema: (tool.inputSchema ?? {}) as Record<string, unknown>,
      handler: async (args: Record<string, unknown>): Promise<McpToolCallResult> => {
        const result = await client.callTool({ name: tool.name, arguments: args });
        const content = Array.isArray(result.content)
          ? (result.content as Array<{ type: string; text: string }>)
          : [{ type: "text", text: JSON.stringify(result) }];
        return {
          content,
          isError: result.isError === true,
        };
      },
-  }));
+  }));
+ } catch (error) {
+   try {
+     await client.close();
+   } catch {
+     // best-effort cleanup
+   }
+   throw error;
+ }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/runner/runtime/mcp-host-bridge.ts` around lines 67 - 85,
After calling client.connect(transport) in the bridge initialization, ensure the
MCP client is closed if any subsequent discovery/mapping (client.listTools() or
the mapping to InjectedMcpToolDef and its handler creation) throws; wrap the
discovery/mapping logic (the client.listTools() call and the map that builds
InjectedMcpToolDef with handler that calls client.callTool) in a try/catch or
finally block and call client.close() (or the appropriate shutdown method) on
error before rethrowing so the subprocess isn't left running and resources are
cleaned up.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🧹 Nitpick comments (1)
.changeset/mcp-host-bridge.md (1)
5-5: Consider breaking the description into shorter sentences for better readability.

The changelog entry is clear and accurate but reads as a long run-on sentence. Breaking it into 2-3 shorter sentences would improve scannability for users reading release notes.
📝 Optional refactor for improved readability
```diff
-Add host-side MCP server support for Docker agents. Servers with `host: true` are spawned on the host and bridged into the container via HTTP, enabling MCP servers that need host resources (filesystem, credentials) while the agent runs in Docker. Also fix MCP HTTP bridge URLs to use `host.docker.internal` so containers can reach host-side bridges.
+Add host-side MCP server support for Docker agents. Servers marked with `host: true` are spawned on the host and bridged into the container via HTTP. This enables MCP servers that need host resources (filesystem, credentials) while the agent runs in Docker. MCP HTTP bridge URLs now use `host.docker.internal` so containers can reach host-side bridges.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.changeset/mcp-host-bridge.md at line 5, Split the long changelog sentence into 2–3 shorter sentences to improve readability: describe that host-side MCP server support for Docker agents was added (mention servers with `host: true` are spawned on the host and bridged into the container via HTTP), then explain the purpose (enable MCP servers that need host resources like filesystem and credentials while agents run in Docker), and finally state the fix to MCP HTTP bridge URLs to use `host.docker.internal` so containers can reach host-side bridges; keep the same technical terms (`host: true`, `MCP server`, `host.docker.internal`) but break the content into concise, scannable sentences.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In @.changeset/mcp-host-bridge.md:
- Line 5: Split the long changelog sentence into 2–3 shorter sentences to
improve readability: describe that host-side MCP server support for Docker
agents was added (mention servers with `host: true` are spawned on the host and
bridged into the container via HTTP), then explain the purpose (enable MCP
servers that need host resources like filesystem and credentials while agents
run in Docker), and finally state the fix to MCP HTTP bridge URLs to use
`host.docker.internal` so containers can reach host-side bridges; keep the same
technical terms (`host: true`, `MCP server`, `host.docker.internal`) but break
the content into concise, scannable sentences.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 45577ff8-07a7-43a3-8ecc-624e3533c7bc
📒 Files selected for processing (1)
.changeset/mcp-host-bridge.md
Summary
- `host: true` MCP servers: Agents running in Docker can now use MCP servers that must run on the host (e.g., PrusaSlicer GUI automation, Bear notes). The server spawns on the host via `createMcpHostBridge`, and tools are bridged into the container via HTTP.
- `host.docker.internal` routing: MCP HTTP bridges use `host.docker.internal` for Docker agents instead of `127.0.0.1`, so containerized agents can reach host-side bridges.
- `partitionMcpServers` utility: Separates `host: true` servers from container-side servers at config time.

Config example

Test plan

- `createMcpHostBridge` and `partitionMcpServers`

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Tests
Chores