fix(discord): PDF uploads, result_summary default, attachment diagnostics #200
oheckmann74 wants to merge 50 commits into edspencer:main from
Conversation
Allow agents to create, update, and delete their own schedules at runtime through a capability-gated MCP server. This solves the problem where agents couldn't reliably set up dynamic schedules (e.g., FedEx tracking) because they didn't know the correct config file path.

- Add `self_scheduling` config field (enabled, max_schedules, min_interval)
- Create per-agent YAML store at .herdctl/dynamic-schedules/<agent>.yaml
- Implement herdctl-scheduler stdio MCP server with 4 tools
- Auto-inject MCP server into agents with self_scheduling.enabled
- Merge dynamic schedules with static ones in scheduler (static wins)
- Add source: "static" | "dynamic" tag to ScheduleInfo for visibility
- Safety: namespace isolation, count limits, min interval, TTL support

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix qualified names with dots (e.g., "fleet.agent") rejected by file path validation — allow dots but block ".." path traversal sequences
- Fix getSchedule/enableSchedule/disableSchedule not finding dynamic schedules — fall back to dynamic schedule lookup from disk
- Fix validateMinInterval passing interval value as schedule name in MinIntervalViolationError
- Add prompt max length (10,000 chars) to MCP tool schema to match storage schema validation
- Add console.warn for corrupt dynamic schedule files instead of silent catch

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix TTL on update: expires_at now computed from Date.now() instead of created_at — setting ttl_hours: 24 during an update expires 24h from now, not 24h from original creation
- Fix read/write race on TTL cleanup: reads no longer write back expired entries. Expired schedules are filtered in-memory during reads and cleaned up during mutations (create/update/delete), which already perform a read-modify-write cycle
- Fix dynamic schedules missing from getAgentInfo(): buildAgentInfo now accepts and merges dynamic schedules so they appear in CLI status and web dashboard views

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Validate min_interval at config parse time via Zod refine
- Validate MCP env vars via Zod schema (prevents NaN from parseInt)
- Validate schedule payloads via DynamicScheduleSchema.parse() before writing
- Replace console.warn with createLogger in dynamic-schedules
- Log warnings instead of silently swallowing dynamic schedule load errors
- Include dynamic schedule names in ScheduleNotFoundError context
- Force immediate dynamic schedule load on scheduler start (reset cache)
- Update pnpm-lock.yaml with @modelcontextprotocol/sdk dependency

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Log unexpected errors in cron gap estimation instead of silencing
- Use DynamicSchedule type directly for scheduler cache (remove fragile cast)
- Use listDynamicSchedules for single-agent lookup in getAgentInfoByName

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
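The "static wins" merge described in these commits can be sketched as follows. This is a minimal illustration, not the actual scheduler code: the `ScheduleInfo` shape and `mergeSchedules` name are assumptions based on the commit messages, which mention a `source: "static" | "dynamic"` tag and name-based merging.

```typescript
// Hypothetical sketch of the "static wins" schedule merge.
// ScheduleInfo and mergeSchedules are illustrative names, not the real API.
type ScheduleInfo = {
  name: string;
  cron: string;
  source: "static" | "dynamic";
};

function mergeSchedules(
  staticSchedules: ScheduleInfo[],
  dynamicSchedules: ScheduleInfo[],
): ScheduleInfo[] {
  const byName = new Map<string, ScheduleInfo>();
  // Dynamic schedules go in first...
  for (const s of dynamicSchedules) {
    byName.set(s.name, { ...s, source: "dynamic" });
  }
  // ...then static schedules overwrite on name collision ("static wins").
  for (const s of staticSchedules) {
    byName.set(s.name, { ...s, source: "static" });
  }
  return [...byName.values()];
}
```

Keying by name keeps the operation O(n) while guaranteeing an operator-declared static schedule always shadows an agent-created dynamic one of the same name.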
Adds a `typing_indicator` boolean to `DiscordOutputSchema` (default: true)
that controls whether the bot shows a typing indicator while processing.
The typing indicator can cause "An unknown error occurred" messages in
Discord on long-running jobs, because Discord's client shows an error
when the sendTyping() API call fails or races with rate limits during
extended processing.
Setting `typing_indicator: false` in the agent's Discord output config
disables the indicator entirely:
```yaml
chat:
  discord:
    output:
      typing_indicator: false
```
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
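The failure mode above — an error surfacing when `sendTyping()` races with rate limits — suggests the guard sketched below. This is an illustrative sketch only; `TypingChannel` and `startTypingIfEnabled` are hypothetical names mimicking discord.js's `sendTyping()`, and the real connector code may differ.

```typescript
// Illustrative sketch (hypothetical names): wrap sendTyping() so a failed
// or rate-limited call cannot surface as a user-visible error.
interface TypingChannel {
  sendTyping(): Promise<void>;
}

async function startTypingIfEnabled(
  channel: TypingChannel,
  typingIndicator: boolean,
): Promise<void> {
  if (!typingIndicator) return; // typing_indicator: false disables entirely
  try {
    await channel.sendTyping();
  } catch {
    // Swallow failures: a missing typing indicator is cosmetic, while a
    // surfaced error is what produced "An unknown error occurred".
  }
}
```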
…ile upload
Four Discord/Slack integration improvements:
1. Message deduplication (Discord + Slack)
- Skip intermediate JSONL snapshots (stop_reason: null) that lack
complete text content
- Deliver only finalized snapshots, deduplicate by message.id
- Fixes "completed the task but no response" fallback message
2. Acknowledgement emoji reaction (Discord)
- React with configurable emoji (default: eyes) on message receipt
- Remove reaction after response is sent
- New output config: acknowledge_emoji
3. Voice message transcription (Discord)
- Detect Discord voice messages via MessageFlags.IsVoiceMessage
- Download audio, transcribe via OpenAI Whisper API
- New config: chat.discord.voice (enabled, provider, api_key_env,
model, language)
- New module: voice-transcriber.ts
4. File upload / file sender MCP (Discord)
- Add uploadFile() to DiscordConnector using AttachmentBuilder
- Wire FileSenderContext + createFileSenderDef in Discord manager
- Pass injectedMcpServers to trigger() so agents can send files
back to the originating channel
- Mirrors existing Slack file upload support
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
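The deduplication rule in item 1 can be sketched as a simple filter: skip snapshots whose `stop_reason` is null and deliver each finalized `message.id` once. The field names mirror the description above; the actual types in the codebase may differ.

```typescript
// Hedged sketch of item 1: drop intermediate JSONL snapshots
// (stop_reason: null) and deduplicate finalized ones by message.id.
type Snapshot = {
  message: { id: string };
  stop_reason: string | null;
};

function finalizedUnique(snapshots: Snapshot[]): Snapshot[] {
  const seen = new Set<string>();
  const out: Snapshot[] = [];
  for (const snap of snapshots) {
    if (snap.stop_reason === null) continue; // intermediate snapshot, skip
    if (seen.has(snap.message.id)) continue; // already delivered
    seen.add(snap.message.id);
    out.push(snap);
  }
  return out;
}
```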
…prompts, and progress indicators

Discord conversations with Claude Code agents are verbose — they narrate tool usage, emit intermediate reasoning, and produce full diffs. This adds three features that work together to make the Discord experience feel like texting a coworker:

- **concise_mode** (default: true): Appends a system prompt telling the agent to skip narration and give only its final answer. Threaded through a new `systemPromptAppend` field on TriggerOptions → RunnerOptions → RuntimeExecuteOptions, applied in both SDK (preset append) and CLI (--system-prompt) runtimes.
- **final_answer_only** (default: true): Buffers all assistant messages during execution and sends only the last one (the actual answer). Suppresses tool result embeds and system status embeds in this mode.
- **progress_indicator** (default: true): Shows a single "Working..." embed that updates in-place with tool names as they run, throttled to 2s intervals. Deleted on job completion. Powered by a new `replyWithRef` function on the Discord connector that returns edit/delete handles.

All three options default to the new quiet behavior. Set to `false` in agent config to restore verbose output. Slack, Web, and existing configs are unaffected.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add resultText fallback from SDK result message when all assistant turns are tool-only (no text blocks). Update concise mode system prompt to instruct Claude to always end with a text summary. Add config merge support for new discord output fields. Add 10 handleMessage pipeline tests covering dedup, buffering, fallback, tool result embeds, concise prompt injection, and session storage. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…nded footer
Redesign all Discord embeds for a calm, premium feel:
- Remove titles from all embeds (progress, tool results, errors, status, summary)
- Add signature footer ("herdctl · agent-name") to all embeds
- Refine color palette: soft violet for progress, emerald for success, cool gray for system, sky blue for commands
- Compact tool result embeds: collapse title + fields into single description line
- Horizontal result summary with centered-dot separators instead of inline fields
- Convert /help, /status, /reset slash commands from plain text to styled embeds
- Add syntax highlighting (ansi) for Bash tool output
- Make DiscordReplyEmbed.title optional to support title-less embeds
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add processing for non-voice file attachments (images, PDFs, text/code files) uploaded alongside Discord messages. Text files are inlined into the prompt; images and PDFs are saved to disk so the agent can view them via its Read tool. Configurable via chat.discord.attachments with size limits, allowed types, and automatic cleanup. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…stant_messages enum
Replace two blunt output controls with a single `assistant_messages` enum
("answers" | "all") that classifies turns by whether they contain tool_use
blocks. Answer turns (no tool_use) are sent immediately; reasoning turns
(text + tool_use) are suppressed in "answers" mode. This eliminates message
buffering, removes the concise-mode system prompt injection that degraded
answer quality, and fixes the common "no additional output to share" fallback.
Also flips `result_summary` default to `true` so task stats are always shown.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
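The classification above can be sketched in a few lines: a turn is an "answer" iff it contains no tool_use content blocks. The block shapes below are assumptions based on the Anthropic message format, not the project's actual types.

```typescript
// Minimal sketch of the "answers" | "all" classification.
// Block is an assumed shape, not the real herdctl type.
type Block =
  | { type: "text"; text: string }
  | { type: "tool_use"; name: string };

function isAnswerTurn(content: Block[]): boolean {
  // Answer turns carry no tool_use blocks; reasoning turns mix text + tool_use.
  return !content.some((b) => b.type === "tool_use");
}

function shouldSend(content: Block[], mode: "answers" | "all"): boolean {
  return mode === "all" || isAnswerTurn(content);
}
```

Because each turn is classified independently, answer turns can be sent immediately with no buffering, which is what removes the "no additional output to share" fallback.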
Agents with multiple MCP servers (e.g. gmail, calendar, bear, gdrive, google-contacts) can exceed 15s during initialization, causing the session file watcher to time out before Claude CLI creates the session. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CLI runtime silently ignored injectedMcpServers (file sender for Discord/Slack uploads). Reuse existing mcp-http-bridge infrastructure: start HTTP bridges on localhost for each injected server and pass them via --mcp-config as HTTP-type MCP servers.

Also includes:
- Fix --mcp-config wrapper (mcpServers key was missing, caused CLI hang)
- Synthetic result message for CLI runtime usage tracking
- Discord embed content extraction in conversation context builder
- discord-changes.md documenting all fork changes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Files uploaded by agents via MCP are now buffered and attached to the next answer message instead of sent as standalone messages above it. Also updates discord-changes.md with corrected descriptions. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Critical:
- Restore process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT in finally block to prevent leaking state across concurrent jobs

Important:
- Clean up partial HTTP bridges if a bridge fails to start midway
- Use wrapper object for progressEmbedHandle to fix TypeScript narrowing
- Add voice transcription prefix "[Voice message transcription]:" as documented
- Cap toolNamesRun to last 50 entries to prevent unbounded memory growth
- Wrap finally block async operations in try/catch to prevent error masking
- Document allowedTools behavior for injected MCP tools

Minor:
- Add 30s timeout to attachment downloads and voice message fetch
- Add 60s timeout to Whisper API transcription call
- Cap embed text extraction to 4000 chars in mention handler
- Use broader channel type cast (TextChannel | DMChannel) for uploadFile
- Remove discord-changes.md from repo (belongs in PR description, not tree)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Wrap ack reaction in try/catch so failure doesn't abort message handling
- Add path traversal validation on download_dir schema (reject '..' and absolute paths)
- Validate Whisper API response payload before returning
- Move progress embed cleanup to finally block (cleanup on error path too)
- Use path.dirname() instead of manual lastIndexOf/substring
- Add vi.unstubAllGlobals() safety net in attachments test afterEach

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When self_scheduling is enabled, append a system prompt snippet that teaches agents how to use the herdctl scheduling tools. This prevents agents from trying to edit config files directly (as happened with juliusagent) and provides clear guidelines on schedule creation, frequency limits, TTL usage, and prompt authoring. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When an operator explicitly declares a custom herdctl-scheduler MCP server, the system prompt guidance should still be appended. Previously the `continue` statement skipped both MCP injection and prompt injection. Addresses CodeRabbit review feedback on PR edspencer#192. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Replace local formatDuration in embeds.ts with formatDurationMs from @herdctl/chat (identical implementation)
- Extract SyntheticTextChannel type and createSyntheticMessageEvent() factory from the 80-line inline retry event construction
- Document why message-normalizer.ts uses casts (SDKMessage is intentionally loose to support both SDK and CLI runtimes)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When an agent runs in Docker and an MCP server has `host: true` in its config, the server is spawned on the host instead of inside the container. Tools are discovered via MCP protocol and bridged into the container through the existing HTTP bridge infrastructure. This enables MCP servers that need host access (e.g., Bear notes, which requires the macOS `open` command and Full Disk Access) to work with containerized agents.

- Add `host` field to McpServerSchema
- New mcp-host-bridge module: spawns host-side MCP process via StdioClientTransport, discovers tools, wraps as InjectedMcpServerDef
- partitionMcpServers utility splits host vs container servers
- ContainerRunner.execute() partitions before dispatch so both CLI and SDK runtime paths benefit

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… agents

CLIRuntime MCP HTTP bridges used 127.0.0.1, which is unreachable from inside Docker containers. Add mcpBridgeHost option (defaults to 127.0.0.1) and set it to host.docker.internal in ContainerRunner's CLI path.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add optional default_channel_mode field to DiscordGuildSchema. When set, the bot responds in any channel in that guild (not just explicitly listed ones). Enables agents like Vector to respond to @mentions server-wide while keeping auto mode in their home channel. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
/usage now shows two embeds: last run stats (per-channel) and session totals (per-agent lifetime) including total runs, cost, tokens, and duration. Extracts ChannelRunUsage and CumulativeUsage types to replace inline shapes. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… override

- Add .claude/skills/ to skill discovery paths (was only scanning .codex/skills/ and skills/, missing all Claude Code skills)
- When skills: [] is set in config, disable skill discovery entirely
- When skills: [...] is set, use exactly those and skip auto-discovery
- When skills is omitted, auto-discover from filesystem

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… blocks

Images attached in Discord or Slack messages are now downloaded as base64 and sent directly to the agent as native image content blocks, eliminating the extra Read tool round-trip. Non-image attachments and CLI runtimes gracefully fall back to text-only prompts.

- Add ContentBlock, ImageBlock, TextBlock, PromptContent types to core
- Widen prompt type from string to PromptContent through the pipeline
- SDK runtime wraps ContentBlock[] as AsyncIterable<UserMessage> for query()
- CLI runtime extracts text via extractPromptText() (stdin is text-only)
- Docker SDK wrapper handles multimodal arrays in the container
- Job executor serializes prompt to string for job metadata storage
- Discord connector extracts attachment info from Message.attachments
- Discord manager downloads images as base64, builds ContentBlock[]
- Slack connector passes event.files through to SlackMessageEvent
- Slack manager downloads images via authenticated url_private

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ements, MCP host bridge, file attachments

Includes: agent self-scheduling, Discord slash commands, voice transcription, file attachment processing, MCP host bridge for Docker agents, CLI injected MCP, typing indicators, ack emoji reactions, and various polish fixes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ocks

Images (JPEG, PNG, GIF, WebP) are now downloaded as base64 and sent to the agent as ContentBlock[] instead of being saved to disk with file path references. This eliminates the extra Read tool round-trip. PDFs and text files continue using their existing handling (disk save / inline text).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Image attachments (JPEG, PNG, GIF, WebP) are now processed as multimodal content blocks even without `attachments.enabled` in the agent config. The full attachment pipeline (text inlining, PDF disk saves) still requires the config flag, but images work out of the box for all Discord agents. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… Docker

When a Docker container agent uses CLI runtime but receives a ContentBlock[] prompt (multimodal images), automatically route through the SDK wrapper instead of the CLI path. CLI runtime can only pipe plain text via stdin, so content blocks would be lost. The SDK wrapper handles them natively.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CLI session IDs are incompatible with SDK sessions. When a CLI-runtime Docker agent is auto-promoted to SDK for multimodal content blocks, the resume session ID must be cleared to avoid the SDK wrapper hanging on an unrecognized session. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…all attachments

All agents use CLI runtime, which pipes text via stdin — ContentBlock[] gets flattened and images are lost. Reverts the auto-promote-to-SDK approach, which also broke session continuity. Images now go through the same file-to-disk pipeline as PDFs: saved to the agent's working directory and referenced in the prompt. The agent views them via the Read tool (which is multimodal). This preserves conversation context across all message types. The core PromptContent/ContentBlock types remain available for future SDK-runtime agents.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
After transcribing a voice message, post the transcription as a subtle grey embed so all channel members can read what was said without needing to listen to the audio. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When an agent runs in Docker, the working_directory is bind-mounted at /workspace inside the container. Attachment download paths in the prompt were using the host path, making them inaccessible to the agent. Now rewrites host paths to /workspace paths when docker is enabled. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
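The host-to-container path rewrite described above amounts to prefix substitution against the bind mount. A minimal sketch, assuming the function name and the fixed `/workspace` mount point from the commit message:

```typescript
// Hypothetical sketch: rewrite attachment paths under the host
// working_directory to the container's /workspace bind mount.
function toContainerPath(hostPath: string, workingDirectory: string): string {
  // Paths outside the mounted working directory cannot be rewritten.
  if (!hostPath.startsWith(workingDirectory)) return hostPath;
  const rest = hostPath.slice(workingDirectory.length).replace(/^\/+/, "");
  return rest ? `/workspace/${rest}` : "/workspace";
}
```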
Non-ephemeral containers were orphaned on every herdctl restart because getOrCreateContainer only checked an in-memory map. Now queries Docker for the most recent stopped container and restarts it, preserving tools agents installed. Also reduce DEFAULT_MAX_CONTAINERS from 5 to 1 as a safety net against future accumulation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…tachment diagnostics

- Guess MIME type from file extension when Discord returns null contentType, fixing PDF uploads that silently fail in DMs
- Revert result_summary default from true back to false (unintentional flip)
- Sync manager fallback defaults with schema defaults
- Upgrade attachment skip logging from debug to warn
- Warn when message has attachments but attachments.enabled is false
- Add _guessContentType tests

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
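The extension-based MIME fallback can be sketched as a lookup table. The function name follows the `_guessContentType` mentioned in the commit message, but the map below is illustrative; the real implementation may cover more types.

```typescript
// Sketch of the MIME fallback used when Discord returns a null contentType.
// The table is an assumption; the real map likely covers more extensions.
const EXTENSION_MIME: Record<string, string> = {
  ".pdf": "application/pdf",
  ".png": "image/png",
  ".jpg": "image/jpeg",
  ".jpeg": "image/jpeg",
  ".gif": "image/gif",
  ".txt": "text/plain",
  ".md": "text/markdown",
};

function guessContentType(filename: string): string | null {
  const dot = filename.lastIndexOf(".");
  if (dot === -1) return null; // no extension to guess from
  return EXTENSION_MIME[filename.slice(dot).toLowerCase()] ?? null;
}
```

Returning null for unknown extensions lets the caller fall through to the existing skip-with-warning path rather than attaching a wrong type.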
📝 Walkthrough

Adds agent self-scheduling with per-agent dynamic schedules and a scheduler MCP; multimodal prompt support across runtimes and host-side MCP bridging; extensive Discord enhancements (attachments, voice transcription, new slash commands, embeds, command registration); Slack attachment propagation; many new tests, schema/type expansions, and backup metadata files.

Changes
Sequence Diagrams

```mermaid
sequenceDiagram
    actor Agent
    participant FleetMgr as Fleet Manager
    participant Scheduler as Scheduler
    participant Dyn as DynamicSchedules
    participant MCP as herdctl-scheduler MCP
    Agent->>FleetMgr: start (self_scheduling enabled)
    FleetMgr->>Scheduler: injectSchedulerMcpServers(agents, stateDir)
    Scheduler->>Dyn: loadAllDynamicSchedules(stateDir)
    Scheduler->>MCP: ensure herdctl-scheduler injected per-agent
    Dyn-->>Scheduler: return dynamic schedules
    Scheduler->>Scheduler: merge static + dynamic schedules
    Scheduler->>Scheduler: evaluate triggers -> create jobs
```
```mermaid
sequenceDiagram
    participant CLI as CLI Runtime
    participant Container as Container Runner
    participant Bridge as MCP HTTP Bridge
    participant HostMCP as Host stdio MCP
    participant Agent as Agent process
    CLI->>Container: start execution with injectedMcpServers
    Container->>Bridge: createMcpHostBridge for host servers
    Bridge->>HostMCP: spawn stdio server & connect via SDK client
    HostMCP->>Bridge: listTools / callTool
    Bridge-->>Container: expose HTTP endpoints for tools
    Container->>CLI: pass --mcp-config to CLI
    CLI->>Agent: run prompt (multimodal)
    Agent-->>CLI: return result (tool calls delegated via bridge)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Actionable comments posted: 14
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/slack/src/slack-connector.ts (1)
330-355: ⚠️ Potential issue | 🟠 Major

Attachment-only Slack messages still get dropped.

These `event.files` arguments are only reached after the `empty_prompt` early return. A DM or mention that contains just a file and no caption is still discarded before `buildMessageEvent()`, so the new attachment path never runs for that case. Allow attachments to bypass the empty-prompt guard in both handlers.

Possible fix

```diff
- const prompt = processMessage(event.text ?? "", this.botUserId);
- if (!prompt) {
+ const prompt = processMessage(event.text ?? "", this.botUserId);
+ const hasAttachments = (event.files?.length ?? 0) > 0;
+ if (!prompt && !hasAttachments) {
    this.messagesIgnored++;
    this.emit("messageIgnored", {
      agentName: this.agentName,
      reason: "empty_prompt",
      channelId: event.channel,
      messageTs: event.ts,
    });
    return;
  }
```

Apply the same guard change in both the `app_mention` and `message` paths.

Also applies to: 461-487
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/slack/src/slack-connector.ts` around lines 330-355: the empty-prompt early return currently drops attachment-only messages; modify the check around processMessage in both the app_mention and message handlers so that if prompt is falsy but event.files exists and has length > 0 you do NOT increment messagesIgnored or emit the "empty_prompt" event and instead continue to call tryExecuteCommand and buildMessageEvent (so attachment-only DMs/mentions reach buildMessageEvent with event.files); keep the existing behavior when prompt is falsy and there are no files. Ensure you update the logic that references processMessage, tryExecuteCommand, buildMessageEvent, event.files, and the messagesIgnored/emit("messageIgnored", { reason: "empty_prompt" }) path in both handlers.

packages/discord/src/__tests__/discord-connector.test.ts (1)
123-133: ⚠️ Potential issue | 🟠 Major

Keep the shared Discord test fixture aligned with the reverted default.

`createMockDiscordConfig()` now hardcodes `result_summary: true`, but this PR's intended default is `false`. Because this helper backs most connector tests, it makes them exercise the non-default path and can hide regressions in the default behavior. Override it in the specific tests that need summaries instead of baking it into the base fixture.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/discord/src/__tests__/discord-connector.test.ts` around lines 123-133: the shared test fixture createMockDiscordConfig currently hardcodes output.result_summary: true, which forces tests to use the non-default behavior; change the fixture to set result_summary to false (or remove the explicit setting so it defaults to false) and update only the specific tests that require summaries to override result_summary: true in their test setup. Locate the createMockDiscordConfig function (and the output object in the test fixture) and revert result_summary to the default false, then add explicit overrides in tests that need summaries rather than baking it into the base fixture.
🟡 Minor comments (15)
.changeset/discord-file-attachments.md-2-11 (1)
2-11: ⚠️ Potential issue | 🟡 Minor

Align this changeset with the actual PR scope.

This reads like a brand-new attachments feature, but the PR is described as a fix release: PDF MIME detection in DMs, `result_summary` default reverting to `false`, and warn-level attachment diagnostics. As written, the changelog will mislead users and may also justify the wrong bump level (`minor` vs likely `patch`). Please rewrite the entry to describe the fixes shipped here, and include the default/logging changes if they are intended to be user-visible.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In .changeset/discord-file-attachments.md around lines 2-11: the changeset currently describes a new attachments feature and marks "@herdctl/core" and "@herdctl/discord" as minor, but the PR actually fixes PDF MIME detection in DMs, reverts the default of result_summary to false, and changes attachment diagnostics to warn-level; update the changeset summary to accurately list those fixes (PDF MIME detection in DMs, result_summary default=false, attachment diagnostics now warn), remove the new-feature wording and any references implying full attachments support (e.g., remove or revise mentions of chat.discord.attachments behavior), and change the version bump to patch if no new public API was added.

.gitignore-50-52 (1)
50-52: ⚠️ Potential issue | 🟡 Minor

Scope the database ignore more narrowly.

`*.db` is much broader than the Dolt-specific comment here implies. It will ignore any SQLite/test fixture DB anywhere in the repo, which can make legitimate files disappear from `git status`. Prefer keeping `.dolt/` and scoping any DB ignore to a known path or filename instead of a repo-wide glob.

Suggested change

```diff
 # Dolt database files (added by bd init)
 .dolt/
-*.db
+# Ignore only known local DB artifacts if needed, e.g.
+# path/to/local-dev.db
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In .gitignore around lines 50-52: replace the broad repo-wide glob "*.db" with a scoped ignore so only Dolt DB files are excluded: keep the existing ".dolt/" entry and remove "*.db", or replace it with a Dolt-scoped pattern such as ".dolt/*.db" or the explicit Dolt filename used by your tooling (e.g., "dolt.db") to avoid accidentally ignoring unrelated SQLite/test DB files.

.beads/backup/issues.jsonl-1-10 (1)
1-10: ⚠️ Potential issue | 🟡 Minor

Issue data file contains PII - verify if appropriate for repository.

Each issue record contains an email address in the `owner` field (oliver.heckmann@gmail.com). If this repository is public, consider whether storing personal email addresses in committed data files is appropriate from a privacy perspective.

Additionally, like the other backup files, this doesn't implement the PR's stated objectives.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In .beads/backup/issues.jsonl around lines 1-10: the JSONL backup contains personal email addresses in the "owner" field (e.g., owner:"oliver.heckmann@gmail.com" in records like id:"herdctl-3ee", "herdctl-60a", etc.); remove or anonymize these PII values across all records (replace with a neutral identifier such as owner:null, owner:"redacted@example.com", or a team handle) and update the commit to avoid exposing personal emails in the public repo; ensure the change is applied to all entries in ".beads/backup/issues.jsonl" and add a short commit message indicating PII sanitization.

packages/core/src/runner/runtime/container-runner.ts-98-103 (1)
98-103: ⚠️ Potential issue | 🟡 Minor

Partial bridge creation failure leaves earlier bridges orphaned.

If `createMcpHostBridge` throws for a server after previous bridges were successfully created, the earlier bridges in the `hostBridges` array won't be closed, because the error will propagate before reaching the `finally` block that populates `hostBridges`.

🛡️ Proposed fix: Wrap bridge creation in try-catch for cleanup

```diff
 if (Object.keys(hostServers).length > 0) {
   // Start host-side MCP bridges
-  for (const [name, config] of Object.entries(hostServers)) {
-    const handle = await createMcpHostBridge(name, config);
-    hostBridges.push(handle);
+  try {
+    for (const [name, config] of Object.entries(hostServers)) {
+      const handle = await createMcpHostBridge(name, config);
+      hostBridges.push(handle);
+    }
+  } catch (err) {
+    // Clean up any bridges created before the failure
+    for (const handle of hostBridges) {
+      try {
+        await handle.close();
+      } catch {
+        // Ignore cleanup errors
+      }
+    }
+    throw err;
   }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/core/src/runner/runtime/container-runner.ts` around lines 98-103: the loop that creates MCP host bridges using createMcpHostBridge can throw partway and leave earlier handles in hostBridges orphaned; wrap each bridge creation in a try/catch so that on any failure you clean up previously created handles by iterating hostBridges and calling their shutdown/close method (e.g., await handle.close() or the existing cleanup function used elsewhere), then rethrow the original error; ensure hostBridges is only pushed to after a successful creation and that the catch/finally path does the same cleanup to avoid leaking bridges.

AGENTS.md-127-151 (1)
127-151: ⚠️ Potential issue | 🟡 Minor

Remove or rename the duplicate `Landing the Plane` section.

This repeats the heading and workflow already defined at Lines 15-39, and markdownlint is already flagging MD024 here. Keeping both copies creates two sources of truth.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `AGENTS.md` around lines 127-151: the "Landing the Plane (Session Completion)" section is duplicated and triggers MD024; remove the redundant block (the heading and the entire workflow currently titled "Landing the Plane (Session Completion)") or rename and consolidate it into the original canonical section so there's a single source of truth; update any internal references to use the remaining section title and ensure the mandatory workflow steps (File issues, Run quality gates, Update issue status, PUSH TO REMOTE, Clean up, Verify, Hand off) are preserved in the canonical section.

docs/src/content/docs/integrations/discord.mdx-274-275 (1)
274-275: ⚠️ Potential issue | 🟡 Minor

Fix the documented `result_summary` default.

The PR reverts `output.result_summary` back to `false`, but both the YAML reference and the settings table still say `true`. That leaves the Discord docs out of sync with the actual default behavior.

📝 Suggested doc fix

```diff
-  result_summary: true # Show completion summary embed (default: true)
+  result_summary: false # Show completion summary embed (default: false)
```

```diff
-| `output.result_summary` | boolean | `true` | Show a summary embed when the agent finishes (duration, cost, tokens) |
+| `output.result_summary` | boolean | `false` | Show a summary embed when the agent finishes (duration, cost, tokens) |
```

Also applies to: 372-373

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `docs/src/content/docs/integrations/discord.mdx` around lines 274-275: update the Discord docs to reflect the actual default of output.result_summary=false: change the YAML reference entry (the `result_summary: true` line) and the settings table entries (both occurrences) to indicate the default is false, and ensure any descriptive text referencing `output.result_summary` matches the reverted default behavior.

packages/discord/src/__tests__/message-normalizer.test.ts-72-90 (1)
72-90: ⚠️ Potential issue | 🟡 Minor

Use distinct SDK and CLI fixtures in the parity test.

`sdkResult` and `cliSyntheticResult` are built from the exact same payload, so this only proves the function is deterministic. If the CLI-specific normalization path diverges, this test still passes.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/discord/src/__tests__/message-normalizer.test.ts` around lines 72-90: the test currently constructs sdkResult and cliSyntheticResult from the identical payload, so it only verifies determinism; update the "produces equivalent result events for SDK and CLI-style result payloads" test to use two distinct fixtures that represent the SDK and CLI payload shapes (keep the sdkResult and cliSyntheticResult variables and call normalizeDiscordMessage for both), ensuring the CLI-style payload exercises the CLI-specific normalization path (e.g., use the CLI field naming/structure that the normalizer expects). Verify the outputs still equal so the test actually asserts parity between SDK and CLI inputs for normalizeDiscordMessage.

packages/discord/src/commands/__tests__/command-manager.test.ts-214-217 (1)
214-217: ⚠️ Potential issue | 🟡 Minor

Assert that guild scope replaces global registration.

This only proves that the guild route was hit at least once. If `registerCommands()` accidentally PUTs both global and guild commands, the test still passes. Add a call-count or negative global-route assertion here.

🧪 Suggested tightening

```diff
+  expect(mockRestPut).toHaveBeenCalledTimes(1);
   expect(mockRestPut).toHaveBeenCalledWith(
     expect.stringContaining("/guilds/guild-123/commands"),
     expect.any(Object),
   );
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/commands/__tests__/command-manager.test.ts` around lines 214 - 217, The test currently only verifies the guild route was hit once; tighten it by asserting registerCommands() did not also PUT the global route: add an assertion on mockRestPut call count (e.g., to ensure only one call) or a negative assertion that mockRestPut was not called with the global registration route (string containing the applications/<app-id>/commands/global path), so you confirm only the guild-specific registration occurred when registerCommands() is invoked.

packages/core/src/config/__tests__/self-scheduling.test.ts-194-199 (1)
194-199: ⚠️ Potential issue | 🟡 Minor

Make this test actually cover `enabled: false`.

The test name says “disabled”, but the fixture never sets `self_scheduling.enabled` to `false`. That leaves the false branch untested for system prompt injection.

🧪 Proposed fix

```diff
 it("does not inject system prompt when self_scheduling is disabled", () => {
-  const agents = [makeAgent()];
+  const agents = [
+    makeAgent({
+      self_scheduling: { enabled: false, max_schedules: 10, min_interval: "5m" },
+    }),
+  ];
   injectSchedulerMcpServers(agents, "/tmp/.herdctl");
   expect(agents[0].system_prompt).toBeUndefined();
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/core/src/config/__tests__/self-scheduling.test.ts` around lines 194 - 199, The test "does not inject system prompt when self_scheduling is disabled" doesn't set self_scheduling.enabled to false, so update the fixture to exercise the disabled branch: modify the agent produced by makeAgent (or the agent object passed to injectSchedulerMcpServers) to include self_scheduling: { enabled: false } before calling injectSchedulerMcpServers; then assert agents[0].system_prompt is undefined as before. Ensure you reference the makeAgent call and injectSchedulerMcpServers invocation so the change is applied to the same agent passed into injectSchedulerMcpServers.

packages/discord/src/message-normalizer.ts-175-182 (1)
175-182: ⚠️ Potential issue | 🟡 Minor

Fall back when the first error string is empty.

If the runtime sends `content: ""`, lines 176-181 return an empty Discord error even when `message` has the real text. That leaves the UI with no actionable error details.

🩹 Proposed fix

```diff
 if (message.type === "error") {
-  const errorText =
-    typeof (message as { content?: unknown }).content === "string"
-      ? ((message as { content: string }).content ?? "An unknown error occurred")
-      : typeof (message as { message?: unknown }).message === "string"
-        ? ((message as { message: string }).message ?? "An unknown error occurred")
-        : "An unknown error occurred";
+  const contentText =
+    typeof (message as { content?: unknown }).content === "string"
+      ? (message as { content: string }).content.trim()
+      : "";
+  const messageText =
+    typeof (message as { message?: unknown }).message === "string"
+      ? (message as { message: string }).message.trim()
+      : "";
+  const errorText = contentText || messageText || "An unknown error occurred";
   return [{ kind: "error", message: errorText }];
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/message-normalizer.ts` around lines 175 - 182, In the message.type === "error" branch in message-normalizer.ts, the current errorText logic treats an empty content string as valid and returns it; change the checks so you only accept content if it's a non-empty string (e.g., trim() !== ""), otherwise fall back to message.message if that is a non-empty string, and finally to the "An unknown error occurred" default; update the variable errorText calculation accordingly so the UI receives the real error text when content is empty but message.message contains the error.

packages/discord/src/commands/skills.ts-8-16 (1)
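The empty-string fallback chain the finding proposes can be distilled into a standalone helper. This is a sketch, not the package's actual code: `RuntimeErrorMessage` and `resolveErrorText` are stand-in names modeling only the `content`/`message` fields the normalizer inspects.

```typescript
// Stand-in for the runtime message shape; only the fields the
// normalizer cares about are modeled here.
type RuntimeErrorMessage = { type: "error"; content?: unknown; message?: unknown };

function resolveErrorText(msg: RuntimeErrorMessage): string {
  const contentText = typeof msg.content === "string" ? msg.content.trim() : "";
  const messageText = typeof msg.message === "string" ? msg.message.trim() : "";
  // Empty strings are falsy, so `||` skips blank fields and
  // lands on the default only when both are missing or blank.
  return contentText || messageText || "An unknown error occurred";
}
```

The key design point is that `||` (rather than `??`) treats `""` as absent, which is exactly the behavior the fix needs.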
8-16: ⚠️ Potential issue | 🟡 Minor

Differentiate “unsupported” from “no skills found.”

If `commandActions.listSkills` is absent, line 9 turns that into an empty list and the command replies “No skills were discovered…”. That hides a missing integration behind a false negative.

🩹 Proposed fix

```diff
 async execute(context: CommandContext): Promise<void> {
   const { interaction, commandActions, agentName } = context;
-  const skills = commandActions?.listSkills ? await commandActions.listSkills() : [];
+  if (!commandActions?.listSkills) {
+    await interaction.reply({
+      content: "Skill discovery isn't available for this agent.",
+      ephemeral: true,
+    });
+    return;
+  }
+
+  const skills = await commandActions.listSkills();
   if (skills.length === 0) {
     await interaction.reply({
       content: "No skills were discovered for this agent.",
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/commands/skills.ts` around lines 8 - 16, The command currently treats a missing listSkills integration as an empty result, causing a misleading "No skills were discovered" reply; update the handler to first check for the presence of commandActions and commandActions.listSkills (symbols: commandActions, listSkills) and if absent reply with a clear message that skill listing is unsupported or the integration is not configured (use interaction.reply), otherwise call await commandActions.listSkills() and only then, if the returned skills array is empty, reply with "No skills were discovered for this agent."

packages/core/src/scheduler/__tests__/dynamic-schedules.test.ts-150-154 (1)
150-154: ⚠️ Potential issue | 🟡 Minor

Read the YAML file here instead of calling `readDynamicSchedules()` again.

The second read re-applies TTL filtering in memory, so this assertion still passes even if the cleanup was never written back to disk.

🧪 Proposed fix

```diff
-  const ondisk = stringifyYaml(
-    JSON.parse(JSON.stringify(await readDynamicSchedules(stateDir, "vector"))),
-  );
+  const ondisk = await readFile(join(dir, "vector.yaml"), "utf-8");
   expect(ondisk).not.toContain("expired");
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/core/src/scheduler/__tests__/dynamic-schedules.test.ts` around lines 150 - 154, The test re-reads schedules via readDynamicSchedules(), which applies in-memory TTL filtering again; instead, read the actual YAML file from disk for the "vector" tenant and assert its contents do not contain "expired". Replace the second call to readDynamicSchedules(stateDir, "vector") with a direct file read (fs.readFile or fs.promises.readFile) of the dynamic schedules YAML file created for the "vector" tenant, then parse (if needed) and pass the raw file content into stringifyYaml (or simply use the file string) before running expect(...).not.toContain("expired"); keep references to stringifyYaml and the original assertion so the test still verifies on-disk data rather than in-memory filtering.

packages/discord/src/commands/command-manager.ts-9-9 (1)
9-9: ⚠️ Potential issue | 🟡 Minor

Logger import violates coding guidelines.

The coding guidelines specify that Discord package files should use `DiscordLogger` from this package for logging, not `createLogger` from core. The `DiscordLogger` supports three logging levels (`minimal`, `standard`, `verbose`) with auto-redaction of sensitive fields.

As per coding guidelines: "Use `DiscordLogger` from this package for logging, not `createLogger` from core."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/commands/command-manager.ts` at line 9, The file currently imports createLogger which violates guidelines; replace the import of createLogger with DiscordLogger from the same package and update usages in command-manager.ts to instantiate and use DiscordLogger (respecting its logging levels: minimal, standard, verbose) so logging uses the package's auto-redaction and level controls; search for createLogger and swap to new DiscordLogger construction and method calls accordingly.

packages/discord/src/manager.ts-1017-1023 (1)
1017-1023: ⚠️ Potential issue | 🟡 Minor

Mark standalone file replies as user-visible output.

After sending `pendingFiles` here, the fallback branch still sees `sentAnswer === false`, `hasSentMessages() === false`, and `embedsSent === 0`, so it will also post “no additional output to share.” Set a reply-sent flag when files are delivered, or exclude file-only runs from that fallback.

Also applies to: 1049-1059

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/manager.ts` around lines 1017 - 1023, The fallback posts “no additional output” after standalone file replies because sending pendingFiles via event.reply does not flip the reply-sent state; after the await event.reply({ files: ... }) in the pendingFiles handling block (and the similar block at 1049-1059) set the same flag the rest of the flow uses to indicate output was sent (e.g., set sentAnswer = true or call the existing hasSentMessages/mark-as-sent helper) so the fallback sees that a reply occurred; alternatively update the condition that triggers the fallback to treat file-only sends (files.length > 0) as visible output.

packages/core/src/config/schema.ts-636-637 (1)
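The second variant of the fix (excluding file-only runs from the fallback) reduces to a pure predicate over the run's output state. A minimal sketch, with hypothetical names — the real `manager.ts` tracks these flags across a larger flow, and `filesSent` is the new piece of state the finding asks for:

```typescript
// Hypothetical snapshot of the output state the fallback branch inspects.
interface RunOutputState {
  sentAnswer: boolean; // a textual answer was already posted
  embedsSent: number;  // status/result embeds posted so far
  filesSent: number;   // standalone file attachments delivered
}

// Post the "no additional output to share" message only when the run
// produced no visible output of ANY kind, including file-only replies.
function shouldPostNoOutputFallback(state: RunOutputState): boolean {
  return !state.sentAnswer && state.embedsSent === 0 && state.filesSent === 0;
}
```

Either incrementing `filesSent` after `event.reply({ files })` or simply setting `sentAnswer = true` there would close the gap; the predicate form just makes the condition testable in isolation.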
636-637: ⚠️ Potential issue | 🟡 Minor

Fix the `result_summary` doc default.

The schema now defaults `result_summary` to `false`, but this comment still says `true`. That will mislead operators reading the config surface.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/config/schema.ts` around lines 636 - 637, Update the doc comment for the config field result_summary to reflect the actual default (false) used in the zod schema: change the comment text that currently reads "(default: true)" to "(default: false)" so the inline documentation matches the z.boolean().optional().default(false) declaration in the schema.
🧹 Nitpick comments (15)
packages/core/src/runner/runtime/__tests__/mcp-host-bridge.test.ts (1)
5-31: Consider resetting mocks between tests to prevent test pollution.

The mock functions (`mockConnect`, `mockClose`, `mockListTools`, `mockCallTool`) are shared across all tests without being reset. This can cause tests to fail or pass incorrectly if run in different orders, since call counts and mock states accumulate.

♻️ Proposed fix: Add mock reset in beforeEach

```diff
+import { beforeEach, describe, expect, it, vi } from "vitest";
-import { describe, expect, it, vi } from "vitest";
 import type { McpServer } from "../../../config/index.js";

 // Mock the MCP SDK before importing the module under test
 const mockConnect = vi.fn().mockResolvedValue(undefined);
 const mockClose = vi.fn().mockResolvedValue(undefined);
 const mockListTools = vi.fn().mockResolvedValue({
   // ... existing mock implementation
 });
 const mockCallTool = vi.fn().mockResolvedValue({
   content: [{ type: "text", text: "Note found: Hello World" }],
   isError: false,
 });

+beforeEach(() => {
+  mockConnect.mockClear();
+  mockClose.mockClear();
+  mockListTools.mockClear();
+  mockCallTool.mockClear();
+});
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/core/src/runner/runtime/__tests__/mcp-host-bridge.test.ts` around lines 5 - 31, The shared mock functions (mockConnect, mockClose, mockListTools, mockCallTool) accumulate call counts/state across tests; add a beforeEach that resets their state (e.g., vi.resetAllMocks() or individually call mockClear/mockReset on mockConnect, mockClose, mockListTools, mockCallTool) so each test starts with fresh mocks and avoids test pollution when running in different orders.

packages/discord/src/commands/__tests__/extended-commands.test.ts (1)
94-133: Consider adding more specific assertions for command behavior.

Tests like `/new`, `/stop`, and `/retry` only verify that `reply` was called once, but don't assert on the response content. While these are integration tests, they could provide more value by checking response structure (e.g., embeds exist, ephemeral flag).

📝 Optional: Add content assertions for basic smoke tests

```diff
 it("executes /new", async () => {
   const ctx = makeContext();
   await newCommand.execute(ctx);
-  expect(ctx.interaction.reply).toHaveBeenCalledOnce();
+  expect(ctx.interaction.reply).toHaveBeenCalledOnce();
+  const call = (ctx.interaction.reply as ReturnType<typeof vi.fn>).mock.calls[0][0];
+  expect(call.ephemeral).toBe(true);
 });
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/commands/__tests__/extended-commands.test.ts` around lines 94 - 133, The tests currently only assert that ctx.interaction.reply was called for commands like newCommand.execute, stopCommand.execute, retryCommand.execute, skillsCommand.execute, usageCommand.execute, toolsCommand.execute, configCommand.execute, pingCommand.execute and should be strengthened to assert reply payload shape and key fields; update each test to inspect the arguments passed to ctx.interaction.reply (using toHaveBeenCalledWith/toHaveBeenCalledTimes and argument matchers) and verify expected properties such as an embed array or content string, presence of ephemeral: true for ephemeral responses, and any expected fields in embeds (title/description) for commands like newCommand, skillsCommand and usageCommand so tests validate response structure rather than only call counts.

packages/core/src/runner/types.ts (1)
43-59: Consider handling empty prompt arrays.

The `extractPromptText` function handles strings and content block arrays correctly, but an empty array would return an empty string. This may be intentional, but worth documenting if prompts can be empty.

📝 Optional: Add JSDoc note about empty arrays

```diff
 /**
  * Extract the text portion of a PromptContent value.
  *
  * When `prompt` is a string, returns it as-is.
  * When `prompt` is ContentBlock[], joins all text blocks and adds
  * "[N image(s) attached]" placeholders for image blocks.
+ *
+ * Note: Returns an empty string for empty ContentBlock arrays.
  */
 export function extractPromptText(prompt: PromptContent): string {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/core/src/runner/types.ts` around lines 43 - 59, Add a JSDoc comment to the extractPromptText function that documents how it handles inputs: state that when prompt is a string it returns the string, when it's a PromptContent array it concatenates text blocks and appends a “[N image(s) attached]” marker for images, and explicitly note the behavior for an empty array (it currently returns an empty string); update the JSDoc above extractPromptText to make this contract clear to callers.

packages/discord/src/__tests__/manager.test.ts (2)
176-186: Extract the default Discord `output` fixture into a helper.

This expanded `output` object is now copied throughout the file. Every schema/default tweak will require mass edits, and it makes fixture drift easy. A small `createDiscordOutput(overrides)` helper would keep these tests much easier to maintain.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/__tests__/manager.test.ts` around lines 176 - 186, Extract the repeated default Discord output object into a single helper named createDiscordOutput(overrides) that returns the full default object (tool_results, tool_result_max_length, system_status, result_summary, typing_indicator, errors, acknowledge_emoji, assistant_messages, progress_indicator) merged with any overrides; update all uses in manager.test.ts to call createDiscordOutput({...}) instead of inlining the object so tests share one authoritative fixture and can override fields as needed. Ensure the helper is exported/defined near other test fixtures and used anywhere the inline output literal currently appears (search for assistant_messages: "answers" and tool_result_max_length) so future schema tweaks only require changing the helper.
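The proposed helper is a standard spread-based fixture factory. A minimal sketch — the field list and default values here are illustrative stand-ins, since the authoritative defaults live in the package schema:

```typescript
// Illustrative subset of the Discord output settings the fixture covers.
interface DiscordOutput {
  tool_results: boolean;
  tool_result_max_length: number;
  system_status: boolean;
  result_summary: boolean;
  typing_indicator: boolean;
  errors: boolean;
}

// One authoritative default object; tests override only what they test.
function createDiscordOutput(overrides: Partial<DiscordOutput> = {}): DiscordOutput {
  return {
    tool_results: true,
    tool_result_max_length: 1000, // assumed value for illustration
    system_status: true,
    result_summary: false, // matches the reverted default in this PR
    typing_indicator: true,
    errors: true,
    ...overrides, // caller-specified fields win
  };
}
```

With this shape, a schema default change is a one-line edit to the helper, and each test reads as "defaults plus the one field under test", e.g. `createDiscordOutput({ result_summary: true })`.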
1125-1128: Avoid wall-clock sleeps in these pipeline tests.

These new cases wait 2.5s–5s just to let timing behavior elapse. That will add a lot of runtime to the suite and make CI failures timing-sensitive. Stubbing the delay/rate-limiter would keep the assertions deterministic without burning real time.
Also applies to: 1604-1607, 2374-2377
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/__tests__/manager.test.ts` around lines 1125 - 1128, The test uses a real setTimeout to wait for rate-limiting after calling manager.start() and connector.emit("message", event); instead of sleeping, stub or mock the rate-limiting/delay so the test runs deterministically: either use Jest fake timers (jest.useFakeTimers(), then advance timers with jest.advanceTimersByTime(2500) and restore) around the section that waits, or mock the rate-limiter/delay implementation used by manager (the module/function that enforces delays) to resolve immediately; ensure you target the symbols manager.start, connector.emit("message", ...), and the delay/rate-limiter function used by the code, and restore real timers/mocks after the test.

packages/discord/src/__tests__/embeds.test.ts (1)
49-57: Potential snapshot flakiness with `Date.now()` without fake timers.

This test uses `Date.now()` for `startTime` (line 51) but doesn't enable fake timers like the other tests. If `buildToolResultEmbed` calculates duration from `startTime`, the snapshot may vary slightly between runs due to real time passage during test execution.

♻️ Suggested fix: use fake timers for consistency

```diff
 it("builds tool result embed with truncated output", () => {
+  vi.useFakeTimers();
+  vi.setSystemTime(new Date("2026-03-04T12:00:00.000Z"));
   const embed = buildToolResultEmbed({
     toolUse: { name: "Bash", input: { command: "echo hi" }, startTime: Date.now() - 2000 },
     toolResult: { output: "line\n".repeat(200), isError: false },
     agentName: "herdctl.test-agent",
     maxOutputChars: 120,
   });
   expect(embed).toMatchSnapshot();
+  vi.useRealTimers();
 });
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/__tests__/embeds.test.ts` around lines 49 - 57, The test for buildToolResultEmbed uses Date.now() for startTime which can make snapshots flaky; update the test to use Jest fake timers (or otherwise freeze time) around the call so startTime is deterministic, e.g., call jest.useFakeTimers()/jest.setSystemTime(...) before creating the embed and restore timers after; ensure you target the test case that constructs toolUse.startTime and the buildToolResultEmbed invocation so the duration-dependent snapshot is stable.

packages/core/src/mcp/scheduler-mcp.ts (1)
49-51: Route startup diagnostics through the core logger.

These `console.error` calls bypass the shared log-level/filtering behavior used elsewhere in `packages/core`. This entrypoint should follow the same logging contract as the rest of the core runtime.

♻️ Suggested change

```diff
+import { createLogger } from "../utils/logger.js";
+
+const logger = createLogger("scheduler-mcp");
+
 const envResult = EnvSchema.safeParse(process.env);
 if (!envResult.success) {
-  console.error(`Invalid environment: ${envResult.error.issues.map((i) => i.message).join(", ")}`);
+  logger.error(`Invalid environment: ${envResult.error.issues.map((i) => i.message).join(", ")}`);
   process.exit(1);
 }
 ...
 main().catch((error) => {
-  console.error("Failed to start herdctl-scheduler MCP server:", error);
+  logger.error(
+    `Failed to start herdctl-scheduler MCP server: ${error instanceof Error ? error.message : String(error)}`,
+  );
   process.exit(1);
 });
```

As per coding guidelines, "NEVER use raw console.log/warn/error/debug for runtime logging - use createLogger(prefix) from `@herdctl/core` instead" and "Use relative imports for the logger within `@herdctl/core`: import { createLogger } from '../utils/logger.js'."

Also applies to: 318-320

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/core/src/mcp/scheduler-mcp.ts` around lines 49 - 51, Replace raw console.error usage with the core logger: add "import { createLogger } from '../utils/logger.js'" at the top, create a logger instance (e.g., const logger = createLogger('mcp:scheduler') ) and swap console.error(...) calls (the one checking envResult.success and the other at lines ~318-320) to logger.error(...) preserving the same message interpolation; keep the process.exit(1) behavior unchanged. Ensure you reference envResult.error.issues.map(...) in the logger.error call so startup diagnostics go through the shared createLogger pipeline.

packages/discord/src/commands/index.ts (1)
36-36: Consider moving `usageCommand` export for alphabetical consistency.

Other command exports follow alphabetical order. Moving `usageCommand` between `toolsCommand` (line 23) and the types block would maintain consistency.

Suggested reordering

```diff
 export { stopCommand } from "./stop.js";
 export { toolsCommand } from "./tools.js";
+export { usageCommand } from "./usage.js";

 // Types
 export type {
   ChannelRunUsage,
   CommandActionResult,
   CommandActions,
   CommandContext,
   CommandManagerLogger,
   CommandManagerOptions,
   CumulativeUsage,
   ICommandManager,
   SlashCommand,
 } from "./types.js";
-export { usageCommand } from "./usage.js";
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/commands/index.ts` at line 36, Exports are out of alphabetical order: move the export for usageCommand so it sits after toolsCommand and before the types block to match the project's alphabetical export ordering; locate the export statement "export { usageCommand } from \"./usage.js\";" and reposition it accordingly alongside other command exports to maintain consistent ordering.

packages/core/src/fleet-manager/schedule-management.ts (2)
56-61: Type assertion loses some type safety.

The cast to `typeof dynamicSchedules` discards the full `DynamicSchedule` type (which includes `ttl_hours`, `created_at`, etc.). Since only the common fields are used for building `ScheduleInfo`, this works but is fragile.

Consider using a mapped type or extracting the schedule-building logic to handle both types uniformly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/fleet-manager/schedule-management.ts` around lines 56 - 61, The cast to typeof dynamicSchedules when assigning dynamicSchedules from loadAllDynamicSchedules loses the full DynamicSchedule type (e.g., ttl_hours, created_at) and is fragile; instead, update the code to preserve types by either (a) defining a mapped/union type (e.g., PartialDynamicSchedule | DynamicSchedule) or (b) extract the schedule conversion into a helper like buildScheduleInfo(schedule: DynamicSchedule | PartialDynamicSchedule) that accepts both shapes and returns ScheduleInfo, then call loadAllDynamicSchedules() without the unsafe assertion and map results through that helper (referencing dynamicSchedules, loadAllDynamicSchedules, DynamicSchedule, ScheduleInfo, ttl_hours, created_at).
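The suggested extraction can be sketched as a helper that accepts a union of both schedule shapes and maps only the shared fields, so no cast is needed. All type and field names here beyond the finding's own (`DynamicSchedule`, `ScheduleInfo`, `ttl_hours`, `created_at`) are hypothetical stand-ins for the real definitions:

```typescript
// Trimmed stand-ins for the real types; only the shape matters here.
interface StaticScheduleLike {
  name: string;
  prompt: string;
}

interface DynamicScheduleLike extends StaticScheduleLike {
  ttl_hours?: number;
  created_at?: string;
}

interface ScheduleInfoLike {
  name: string;
  prompt: string;
  source: "static" | "dynamic";
}

// Accepts either shape; the union type makes the "common fields only"
// contract explicit instead of hiding it behind a cast.
function buildScheduleInfo(
  schedule: StaticScheduleLike | DynamicScheduleLike,
  source: "static" | "dynamic",
): ScheduleInfoLike {
  return { name: schedule.name, prompt: schedule.prompt, source };
}
```

Because the function constructs a fresh object, dynamic-only extras like `ttl_hours` never leak into `ScheduleInfo`, and the compiler flags any new common field that the mapping forgets.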
188-221: Redundant agent validation in enableSchedule.

The agent lookup and validation on lines 194-202 duplicates logic that `getSchedule()` already performs on line 205. The agent name is available from the `getSchedule` result.

Simplified approach

```diff
 async enableSchedule(agentName: string, scheduleName: string): Promise<ScheduleInfo> {
-  const config = this.ctx.getConfig();
   const logger = this.ctx.getLogger();
   const stateDir = this.ctx.getStateDir();

-  // Validate the agent exists
-  const agents = config?.agents ?? [];
-  const agent =
-    agents.find((a) => a.qualifiedName === agentName) ?? agents.find((a) => a.name === agentName);
-
-  if (!agent) {
-    throw new AgentNotFoundError(agentName, {
-      availableAgents: agents.map((a) => a.qualifiedName),
-    });
-  }

   // Verify the schedule exists (static or dynamic)
-  await this.getSchedule(agent.qualifiedName, scheduleName);
+  const scheduleInfo = await this.getSchedule(agentName, scheduleName);

   // Update schedule state to enabled (idle) — use qualifiedName as the state key
   const { updateScheduleState } = await import("../scheduler/schedule-state.js");
   await updateScheduleState(
     stateDir,
-    agent.qualifiedName,
+    scheduleInfo.agentName,
     scheduleName,
     { status: "idle" },
     { logger: { warn: logger.warn } },
   );

-  logger.info(`Enabled schedule ${agent.qualifiedName}/${scheduleName}`);
+  logger.info(`Enabled schedule ${scheduleInfo.agentName}/${scheduleName}`);

   // Return the updated schedule info
-  return this.getSchedule(agent.qualifiedName, scheduleName);
+  return this.getSchedule(scheduleInfo.agentName, scheduleName);
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/core/src/fleet-manager/schedule-management.ts` around lines 188 - 221, Remove the duplicated agent lookup in enableSchedule and instead call getSchedule(...) first to obtain the schedule info (use its agent-qualified name from the returned ScheduleInfo); replace references to the local agent variable with the qualifiedName from the getSchedule result when calling updateScheduleState and when logging, and return that getSchedule result directly; delete the manual agents array/AgentNotFoundError validation block since getSchedule already validates the agent. Ensure you still import and call updateScheduleState with stateDir, the schedule's qualified agent name, scheduleName, and the same status/logger arguments.

packages/discord/src/discord-connector.ts (1)
325-351: uploadFile implementation looks correct.

The method properly validates client readiness, fetches and validates the channel, and uses `AttachmentBuilder` for the upload. The fallback to `sent.id` if no attachment ID exists is a reasonable safety measure.

However, there's no size validation before upload. Discord has file size limits (8MB for regular users, 50MB for Nitro). Consider adding validation to provide a clear error message.

💡 Optional: Add file size validation

```diff
 async uploadFile(params: DiscordFileUploadParams): Promise<{ fileId: string }> {
   if (!this._client?.isReady()) {
     throw new Error("Cannot upload file: not connected to Discord");
   }
+
+  // Discord file size limit (8MB for regular, could be higher for Nitro)
+  const MAX_FILE_SIZE = 8 * 1024 * 1024;
+  if (params.fileBuffer.length > MAX_FILE_SIZE) {
+    throw new Error(`File size ${params.fileBuffer.length} exceeds Discord limit of ${MAX_FILE_SIZE} bytes`);
+  }

   const channel = await this._client.channels.fetch(params.channelId);
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/discord-connector.ts` around lines 325 - 351, Add a pre-upload size check in uploadFile (accepting DiscordFileUploadParams) before creating the AttachmentBuilder: inspect params.fileBuffer.length and compare it to a configurable MAX_FILE_SIZE (or service-specific constants for 8MB/50MB), and if it exceeds the limit throw a clear Error like "File exceeds Discord upload limit" (and log via this._logger.error with filename/channelId/size) so we fail fast rather than attempting the send; keep the existing channel validation, AttachmentBuilder creation, and return behavior unchanged.

packages/core/src/scheduler/dynamic-schedules.ts (3)
447-447: Dynamic import of readdir could be moved to module-level.

The `readdir` function is dynamically imported inside `loadAllDynamicSchedules`. Since `fs/promises` is already imported at the module level (line 14), consider adding `readdir` to that import for consistency and marginal performance improvement.

♻️ Move readdir to module-level import

```diff
-import { mkdir, readFile, unlink } from "node:fs/promises";
+import { mkdir, readdir, readFile, unlink } from "node:fs/promises";

 // ... later in loadAllDynamicSchedules ...
-  const { readdir } = await import("node:fs/promises");
   const dir = getDynamicSchedulesDir(stateDir);
```
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/scheduler/dynamic-schedules.ts` at line 447, The dynamic import of readdir inside loadAllDynamicSchedules should be moved to the module-level import to match the existing fs/promises usage and avoid a redundant dynamic import; update the top-of-file import to include readdir alongside any other fs/promises imports and remove the inline "const { readdir } = await import('node:fs/promises')" from loadAllDynamicSchedules so the function uses the module-scoped readdir.
71-97: Error classes don't extend FleetManagerError.

Per coding guidelines for core package: "Extend `FleetManagerError` from `fleet-manager/errors.ts` for all errors." However, these are scheduler-specific errors rather than fleet-manager errors, so a base `DynamicScheduleError` extending `Error` may be intentional for separation of concerns.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/scheduler/dynamic-schedules.ts` around lines 71 - 97, Replace the plain Error base with the shared FleetManagerError: have DynamicScheduleError extend FleetManagerError (import FleetManagerError from fleet-manager/errors.ts) and keep the specialized errors (ScheduleLimitExceededError, MinIntervalViolationError, ScheduleNameConflictError) extending DynamicScheduleError; ensure constructors still call super(message) and set this.name appropriately so existing behavior/identifiers are preserved.
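The hierarchy the prompt describes can be sketched with stand-in classes. This shows the `FleetManagerError` route; whether `DynamicScheduleError` should extend `FleetManagerError` or plain `Error` is the open design question the comment raises, and the constructors and messages here are illustrative, not the package's real ones:

```typescript
// Stand-in for the base class in fleet-manager/errors.ts.
class FleetManagerError extends Error {
  constructor(message: string) {
    super(message);
    this.name = "FleetManagerError";
  }
}

// Scheduler-specific base: still catchable as FleetManagerError,
// but gives `catch (e) { if (e instanceof DynamicScheduleError) ... }`
// a narrower hook for scheduler callers.
class DynamicScheduleError extends FleetManagerError {
  constructor(message: string) {
    super(message);
    this.name = "DynamicScheduleError";
  }
}

class ScheduleLimitExceededError extends DynamicScheduleError {
  constructor(max: number) {
    super(`Schedule limit of ${max} exceeded`);
    this.name = "ScheduleLimitExceededError";
  }
}
```

The practical win of the `FleetManagerError` route is that top-level error handling keyed on the fleet-manager base catches scheduler errors for free, while `instanceof DynamicScheduleError` still isolates the scheduler family.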
506-525: Cron interval validation could fail silently.

The cron gap estimation catches all errors except `MinIntervalViolationError`, logging failures at debug level. If `calculateNextCronTrigger` fails consistently (e.g., for edge-case cron expressions), the minimum interval validation is bypassed.

Consider logging at warn level for unexpected failures to aid debugging.

💡 Optional: Log cron estimation failures at warn level

```diff
 } catch (error) {
   // Re-throw our own errors, ignore cron parsing failures
   if (error instanceof MinIntervalViolationError) throw error;
-  logger.debug(`Unexpected error estimating cron gap for "${schedule.cron}": ${error}`);
+  logger.warn(`Could not estimate cron gap for "${schedule.cron}": ${error}`);
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/core/src/scheduler/dynamic-schedules.ts` around lines 506 - 525, The cron gap estimation currently swallows non-MinIntervalViolationError exceptions by logging them at debug level (in the try/catch around calculateNextCronTrigger) which can silently bypass min-interval validation; update the catch to rethrow MinIntervalViolationError as before but change logger.debug to logger.warn and include contextual info (e.g., scheduleName and schedule.cron plus the error) so failures in calculateNextCronTrigger are visible (refer to calculateNextCronTrigger, MinIntervalViolationError, and logger in the catch block).

packages/core/src/runner/runtime/cli-runtime.ts (1)
304-351: Usage tracking relies on unchecked type assertions. The
`trackAssistantUsage` function casts `message` to inline object types without runtime validation. If the message structure differs from expectations (e.g., SDK changes), this could silently produce incorrect usage statistics. Consider adding defensive checks or using Zod validation for the expected message shape, consistent with the coding guidelines for external input validation.
♻️ Suggested defensive approach
  const trackAssistantUsage = (message: SDKMessage) => {
    if (message.type !== "assistant") {
      return;
    }
-   const assistantMeta = message as {
-     message?: {
-       id?: string;
-       stop_reason?: unknown;
-     };
-   };
-   const stopReason = assistantMeta.message?.stop_reason;
+   // Defensively access nested properties
+   const messageObj = "message" in message && typeof message.message === "object"
+     ? message.message as Record<string, unknown>
+     : null;
+   if (!messageObj) return;
+
+   const stopReason = messageObj.stop_reason;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/runner/runtime/cli-runtime.ts` around lines 304 - 351, The trackAssistantUsage function currently uses unchecked casts of SDKMessage and assumes fields exist; replace these assertions with runtime validation (either manual guards or a zod schema) for the expected shape before reading fields: validate message.type === "assistant", ensure message.message is an object, message.message.id is a non-empty string, message.message.stop_reason is handled explicitly, message.message.usage fields are numbers before adding to totalInputTokens/totalOutputTokens, and message.message.content is an array of objects with string type/text before constructing lastAssistantText; update uses of seenAssistantMessageIds, numTurns, totalInputTokens, totalOutputTokens, and lastAssistantText only after validation and handle missing/invalid values safely (e.g., treat missing usage as zeros and ignore malformed content).
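The defensive approach above can be sketched as a standalone guard. The shapes and field names here mirror the review's description of the message (`type`, `message.usage.input_tokens`, etc.), not the actual SDK types, so treat this as an assumption-laden sketch rather than the real implementation:

```typescript
// Hypothetical usage extractor: validates each nested level at runtime
// before reading it, and treats missing/malformed counters as zero.
type UsageTotals = { inputTokens: number; outputTokens: number };

function extractUsage(message: unknown): UsageTotals | null {
  if (typeof message !== "object" || message === null) return null;
  const m = message as Record<string, unknown>;
  if (m.type !== "assistant") return null;

  const inner = m.message;
  if (typeof inner !== "object" || inner === null) return null;

  const usage = (inner as Record<string, unknown>).usage;
  if (typeof usage !== "object" || usage === null) return null;
  const u = usage as Record<string, unknown>;

  // Missing or non-numeric counters become 0 instead of NaN.
  const inputTokens = typeof u.input_tokens === "number" ? u.input_tokens : 0;
  const outputTokens = typeof u.output_tokens === "number" ? u.output_tokens : 0;
  return { inputTokens, outputTokens };
}
```

Callers then add the returned totals to the running counters only when the result is non-null, so a changed SDK shape degrades to "no usage recorded" rather than silently wrong statistics.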
| "@herdctl/core": minor | ||
| --- | ||
|
|
||
| Add agent self-scheduling via MCP server | ||
|
|
||
| Agents can now create, update, and delete their own schedules at runtime through a capability-gated MCP server. Enable with `self_scheduling.enabled: true` in agent config. | ||
|
|
||
| - Per-agent YAML store at `.herdctl/dynamic-schedules/<agent>.yaml` | ||
| - Standalone `herdctl-scheduler` stdio MCP server with 4 tools (create, list, update, delete) | ||
| - Auto-injected when `self_scheduling.enabled` is true — no manual MCP config needed | ||
| - Safety: namespace isolation, configurable max schedules, minimum interval enforcement, optional TTL | ||
| - Dynamic schedules merged with static in scheduler (static wins on name collision) | ||
| - `source: "static" | "dynamic"` tag on ScheduleInfo for fleet operator visibility |
Update this changeset to match the actual release surface of the PR.
This entry announces a new @herdctl/core minor for self-scheduling, but the PR objectives here are Discord PDF/default/logging fixes. Merging this as-is will publish the wrong changelog and likely the wrong version bump.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.changeset/agent-self-scheduling.md around lines 2 - 14, The changeset
currently advertises a new "@herdctl/core" minor for agent self-scheduling which
is incorrect for this PR; edit the changeset heading and body so the package
bump reflects the actual release surface (adjust release type to patch/minor as
appropriate), remove the self_scheduling content, and replace it with a concise
summary of the real changes (Discord PDF/default/logging fixes and any affected
packages), ensuring the changed package token "@herdctl/core" and the release
type in the frontmatter match the intended version bump.
| --- | ||
| "@herdctl/core": minor | ||
| --- | ||
|
|
||
| Add injected MCP server support to CLI runtime via HTTP bridges | ||
|
|
||
| CLI runtime now supports `injectedMcpServers` (e.g., file sender for Discord/Slack uploads). | ||
| Previously only SDK and Docker runtimes handled injected MCP servers — CLI silently ignored them. | ||
|
|
||
| The fix reuses existing `mcp-http-bridge.ts` infrastructure: starts HTTP bridges on localhost | ||
| for each injected server and passes them via `--mcp-config` as HTTP-type MCP servers. |
Remove the unrelated CLI runtime changeset from this PR.
This entry publishes an @herdctl/core minor for injected MCP server support, which does not match the Discord PDF / result_summary / attachment-diagnostics scope described for PR #200. Leaving it here will generate the wrong release notes and version bump.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.changeset/cli-runtime-injected-mcp.md around lines 1 - 11, The changeset
that lifts "@herdctl/core" to a minor (the injectedMcpServers/CLI runtime entry
referencing mcp-http-bridge and --mcp-config) is unrelated and should be removed
from this PR: delete the changeset file that contains the `"@herdctl/core":
minor` entry (the injected MCP CLI runtime changelog) so the PR only contains
the Discord PDF / result_summary/attachment-diagnostics changes, then re-run the
changeset generator/check and verify no other files (e.g., references to
injectedMcpServers or mcp-http-bridge.ts) were accidentally staged.
| // Merge mcp_servers (deep merge — agent servers override defaults with same name, | ||
| // default servers not present in agent config are inherited) | ||
| if (defaults.mcp_servers || agent.mcp_servers) { | ||
| result.mcp_servers = deepMerge( | ||
| defaults.mcp_servers as Record<string, unknown> | undefined, | ||
| agent.mcp_servers as Record<string, unknown> | undefined, | ||
| ) as AgentConfig["mcp_servers"]; |
Replace same-name MCP servers instead of deep-merging them.
`deepMerge()` recurses into the server definitions themselves, so an agent cannot actually override a default server with the same name. For example, `env: {}` still preserves inherited env keys, and a transport switch can leave stale fields behind.
♻️ Proposed fix
- if (defaults.mcp_servers || agent.mcp_servers) {
- result.mcp_servers = deepMerge(
- defaults.mcp_servers as Record<string, unknown> | undefined,
- agent.mcp_servers as Record<string, unknown> | undefined,
- ) as AgentConfig["mcp_servers"];
- }
+ if (defaults.mcp_servers || agent.mcp_servers) {
+ result.mcp_servers = {
+ ...(defaults.mcp_servers ?? {}),
+ ...(agent.mcp_servers ?? {}),
+ };
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Merge mcp_servers (deep merge — agent servers override defaults with same name, | |
| // default servers not present in agent config are inherited) | |
| if (defaults.mcp_servers || agent.mcp_servers) { | |
| result.mcp_servers = deepMerge( | |
| defaults.mcp_servers as Record<string, unknown> | undefined, | |
| agent.mcp_servers as Record<string, unknown> | undefined, | |
| ) as AgentConfig["mcp_servers"]; | |
| if (defaults.mcp_servers || agent.mcp_servers) { | |
| result.mcp_servers = { | |
| ...(defaults.mcp_servers ?? {}), | |
| ...(agent.mcp_servers ?? {}), | |
| }; | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/config/merge.ts` around lines 240 - 246, The current merge
in merge.ts uses deepMerge on defaults.mcp_servers and agent.mcp_servers which
recurses into server definitions and prevents an agent from fully replacing a
same-named server; replace that behavior by doing a shallow replace-at-key merge
so any server key present in agent.mcp_servers completely overrides the default
for that key. Edit the block referencing deepMerge, result.mcp_servers,
defaults.mcp_servers, and agent.mcp_servers to build result.mcp_servers by
spreading/assigning defaults first and then agent entries (or iterating keys and
using agent entry when present), ensuring undefined inputs are handled and
removing the deepMerge call for this map-level merge.
| if (options.injectedMcpServers["herdctl-file-sender"]) { | ||
| if (!process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT) { | ||
| savedStreamCloseTimeout = undefined; // marker: was not set | ||
| process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT = "120000"; | ||
| } | ||
| } |
Environment variable mutation is not thread-safe.
Modifying process.env directly can cause race conditions if multiple CLI runtime instances execute concurrently in the same process. One job could restore the timeout while another job is still using the extended timeout.
Consider passing the timeout via CLI arguments or environment variables scoped to the subprocess instead of mutating the parent process's environment.
🔧 Suggested approach: Pass timeout to subprocess environment
- // File uploads via MCP tools can take longer than the default 60s timeout.
- // Save the original value so we can restore it in `finally` to avoid leaking
- // state across concurrent jobs.
- if (options.injectedMcpServers["herdctl-file-sender"]) {
- if (!process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT) {
- savedStreamCloseTimeout = undefined; // marker: was not set
- process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT = "120000";
- }
- }
+ // Note: If CLAUDE_CODE_STREAM_CLOSE_TIMEOUT needs to be set for file uploads,
+ // consider passing it through the subprocess spawner's env option instead
+ // of mutating process.env, to avoid race conditions with concurrent jobs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/runner/runtime/cli-runtime.ts` around lines 267 - 272, The
code mutates process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT inside the CLI runtime
when options.injectedMcpServers["herdctl-file-sender"] is present, which is not
thread-safe; instead, stop changing the global process.env and pass the timeout
only to the subprocess/child invocation or via a CLI argument. Update the logic
in cli-runtime.ts (the block that references
options.injectedMcpServers["herdctl-file-sender"], savedStreamCloseTimeout and
CLAUDE_CODE_STREAM_CLOSE_TIMEOUT) to remove direct global mutation and inject
the "120000" value into the child process spawn/env argument or append a CLI
flag when launching herdctl-file-sender, preserving any savedStreamCloseTimeout
handling locally for that subprocess.
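A sketch of the subprocess-scoped alternative, assuming a Node `child_process` spawn; `buildChildEnv` and the wrapper name are hypothetical, and only the env-var name comes from the review:

```typescript
import { spawn } from "node:child_process";

type Env = Record<string, string | undefined>;

// Build a per-child environment instead of mutating process.env, so
// concurrent jobs in the same parent process cannot observe (or clobber)
// each other's timeout.
function buildChildEnv(base: Env, timeoutMs: number): Env {
  return {
    ...base,
    // Respect an explicit operator override; otherwise extend the timeout.
    CLAUDE_CODE_STREAM_CLOSE_TIMEOUT:
      base.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT ?? String(timeoutMs),
  };
}

function spawnCliWithUploadTimeout(command: string, args: string[]) {
  // The parent's process.env is copied, never modified, so no
  // save/restore bookkeeping (savedStreamCloseTimeout) is needed.
  return spawn(command, args, { env: buildChildEnv(process.env, 120_000) });
}
```

This also removes the `finally`-based restore path entirely, since there is no global state to leak.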
| // Query Docker for the most recent stopped container for this agent | ||
| // This handles herdctl restarts where the in-memory map is empty | ||
| const existing = await this.docker.listContainers({ | ||
| all: true, | ||
| limit: 1, | ||
| filters: { | ||
| name: [`herdctl-${agentName}-`], | ||
| status: ["exited", "created"], | ||
| }, | ||
| }); | ||
|
|
||
| if (existing.length > 0) { | ||
| const container = this.docker.getContainer(existing[0].Id); | ||
| try { | ||
| await container.start(); | ||
| this.runningContainers.set(agentName, container); | ||
| logger.info(`Restarted existing container for ${agentName}: ${existing[0].Names[0]}`); | ||
| return container; | ||
| } catch (err) { | ||
| logger.warn(`Failed to restart existing container for ${agentName}, creating new: ${err}`); | ||
| // Fall through to create a new container | ||
| } | ||
| } |
Reclaim running persistent containers after a manager restart.
This recovery query only looks for exited/created containers. If herdctl restarts while the previous persistent container is still running, it won't be found here and a second container gets created for the same agent. Include running in the search and short-circuit reuse before calling start().
Suggested direction
const existing = await this.docker.listContainers({
all: true,
limit: 1,
filters: {
name: [`herdctl-${agentName}-`],
- status: ["exited", "created"],
+ status: ["running", "exited", "created"],
},
});
if (existing.length > 0) {
const container = this.docker.getContainer(existing[0].Id);
try {
- await container.start();
+ const info = await container.inspect();
+ if (!info.State.Running) {
+ await container.start();
+ }
this.runningContainers.set(agentName, container);
- logger.info(`Restarted existing container for ${agentName}: ${existing[0].Names[0]}`);
+ logger.info(`Reused existing container for ${agentName}: ${existing[0].Names[0]}`);
return container;
} catch (err) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/runner/runtime/container-manager.ts` around lines 63 - 85,
The recovery query in container-manager.ts (this.docker.listContainers) only
filters for "exited" and "created" so a still-running persistent container is
missed; update the filters to include "running" and, when you find an existing
container whose status is "running", short-circuit: call
this.runningContainers.set(agentName, container) and return it without calling
container.start(); for non-running matches keep the existing start()/catch flow.
Ensure you reference the same agentName, container, this.docker.listContainers,
container.start, and this.runningContainers.set symbols when making the change.
| onJobCreated: async (jobId) => { | ||
| this.activeJobsByChannel.set( | ||
| this.getChannelKey(qualifiedName, event.metadata.channelId), | ||
| jobId, | ||
| ); | ||
| }, |
Reserve the channel before kicking off the background run.
`activeJobsByChannel` is only populated later in `onJobCreated()`. Two slash-triggered runs can both pass the guard and start separate jobs in the same channel before that callback fires. Put a pending sentinel/lock in place before `handleMessage()` starts and replace it with the real job id once it exists.
Also applies to: 1456-1464, 1499-1512
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/discord/src/manager.ts` around lines 764 - 769, Race can occur
because activeJobsByChannel is only set in onJobCreated; before starting
handleMessage we must reserve the channel with a pending sentinel so concurrent
handlers fail the guard. Update the flow where handleMessage is invoked to set
this.activeJobsByChannel.set(this.getChannelKey(qualifiedName,
event.metadata.channelId), PENDING_SENTINEL) immediately before launching the
background job, and then in onJobCreated replace the PENDING_SENTINEL with the
real jobId (or clear it on error). Use a unique constant like PENDING_SENTINEL
and ensure the guard that prevents new runs treats the sentinel as an active
lock; apply the same change at the other occurrences referenced (around the
blocks for lines ~1456-1464 and ~1499-1512).
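The reserve-then-confirm pattern can be sketched as follows (`PENDING` and the `ChannelLocks` class are illustrative names, not from the codebase):

```typescript
// Sentinel stored in the map while a job is being created but has no id yet.
const PENDING = "__pending__";

class ChannelLocks {
  private active = new Map<string, string>();

  /** Reserve the channel synchronously; returns false if a run is in flight.
   *  The sentinel counts as an active lock, closing the race window between
   *  the guard check and onJobCreated firing. */
  tryReserve(channelKey: string): boolean {
    if (this.active.has(channelKey)) return false;
    this.active.set(channelKey, PENDING);
    return true;
  }

  /** Swap the sentinel for the real job id once onJobCreated fires. */
  confirm(channelKey: string, jobId: string): void {
    this.active.set(channelKey, jobId);
  }

  /** Clear the reservation on job completion — or on error before a job id
   *  exists, so a failed launch doesn't wedge the channel forever. */
  release(channelKey: string): void {
    this.active.delete(channelKey);
  }
}
```

Because `tryReserve` checks and sets in one synchronous step, two handlers racing on the same channel key cannot both pass the guard, even though the job id arrives asynchronously.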
| if (normalized.kind === "tool_results" && showToolResults) { | ||
| for (const toolResult of normalized.results) { | ||
| const toolUse = toolResult.toolUseId | ||
| ? pendingToolUses.get(toolResult.toolUseId) | ||
| : undefined; | ||
| if (toolResult.toolUseId) { | ||
| pendingToolUses.delete(toolResult.toolUseId); | ||
| } | ||
| const toolName = toolUse?.name ?? "Tool"; | ||
| const output = toolResult.output.trim(); | ||
| const preview = output.length > 0 ? output.replace(/\s+/g, " ").slice(0, 90) : ""; | ||
| pushTraceLine( | ||
| `${toolResult.isError ? "✖" : "✓"} ${toolName}${preview ? ` · ${preview}` : ""}`, | ||
| ); | ||
| latestStatusText = `${toolResult.isError ? "Error from" : "Completed"} ${toolName}`; | ||
| await refreshRunCard("running"); | ||
|
|
||
| // Oversized output is attached as a file instead of flooding chat. | ||
| const maxOutputChars = outputConfig.tool_result_max_length ?? 900; | ||
| if (output.length > maxOutputChars) { | ||
| await streamer.flush(); | ||
| const filename = `${toolName.toLowerCase().replace(/[^a-z0-9_-]+/g, "-") || "tool"}-output.txt`; | ||
| const previewEmbed = buildToolResultEmbed({ | ||
| toolUse: toolUse ?? null, | ||
| toolResult: { | ||
| output: output.slice(0, Math.min(300, maxOutputChars)), | ||
| isError: toolResult.isError, | ||
| }, | ||
| agentName: qualifiedName, | ||
| maxOutputChars: Math.min(300, maxOutputChars), | ||
| }); | ||
| await event.reply({ | ||
| embeds: [previewEmbed], | ||
| files: [{ attachment: Buffer.from(output, "utf8"), name: filename }], | ||
| }); | ||
| embedsSent++; | ||
| } |
`tool_result_max_length` is ignored for normal tool outputs.
This branch only emits a tool-result reply when the output is larger than the limit. For anything under the threshold, users only get the hard-coded 90-character trace preview, so `tool_results: true` no longer shows the configured amount of output. Emit a normal `buildToolResultEmbed(...)` in the non-oversize path too.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/discord/src/manager.ts` around lines 869 - 905, The code only sends
a full tool result embed when output exceeds
outputConfig.tool_result_max_length, leaving small outputs as a 90-char trace
preview; modify the branch handling normalized.kind === "tool_results" in
manager.ts so that for non-oversized outputs you also build and send the full
embed via buildToolResultEmbed (using toolUse, toolResult, agentName:
qualifiedName and maxOutputChars from outputConfig.tool_result_max_length) and
call event.reply with embeds (no file), while preserving the existing
oversized-path behavior (flush streamer, attach file, increment embedsSent) and
still updating pushTraceLine/refreshRunCard/latestStatusText as currently done.
| // Extract text from embeds so rich content (hook notifications, link | ||
| // previews, other bot output) is visible in conversation context | ||
| if (message.embeds?.length > 0) { | ||
| const embedTexts: string[] = []; | ||
| for (const embed of message.embeds) { | ||
| const parts: string[] = []; | ||
| if (embed.title) parts.push(embed.title); | ||
| if (embed.description) parts.push(embed.description); | ||
| if (embed.fields?.length) { | ||
| for (const field of embed.fields) { | ||
| parts.push(`${field.name}: ${field.value}`); | ||
| } | ||
| } | ||
| if (parts.length > 0) embedTexts.push(parts.join("\n")); | ||
| } | ||
| if (embedTexts.length > 0) { | ||
| let embedContent = embedTexts.join("\n\n"); | ||
| // Cap embed text to avoid extremely long context from rich messages | ||
| if (embedContent.length > 4000) { | ||
| embedContent = embedContent.substring(0, 4000) + "\n[embed text truncated]"; | ||
| } | ||
| content = content ? `${content}\n\n${embedContent}` : embedContent; | ||
| } | ||
| } |
Apply the same embed-text extraction to the trigger message.
This only enriches `processMessage()`. `buildConversationContext()` still builds the prompt from `triggerMessage.content` alone, so an embed-only mention/link preview will arrive with an empty prompt even though history messages now get the richer text. Please pull this into a shared helper and use it for both history messages and the trigger prompt path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/discord/src/mention-handler.ts` around lines 234 - 257, The embed
extraction logic used in processMessage() should be refactored into a shared
helper (e.g., extractEmbedTextFromMessage(message): string) and then used in
buildConversationContext() when building the trigger prompt so triggerMessage
isn't limited to triggerMessage.content; update buildConversationContext() to
call the new helper and append/embed its result into the prompt the same way
processMessage() does (including the 4000-char cap and truncation marker) so
both history messages and the trigger message receive identical rich-text
extraction.
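A sketch of the shared helper, with hypothetical `EmbedLike` shapes standing in for the discord.js types; the 4000-character cap and truncation marker mirror the existing `processMessage()` logic quoted above:

```typescript
interface EmbedField { name: string; value: string }
interface EmbedLike { title?: string; description?: string; fields?: EmbedField[] }

const MAX_EMBED_TEXT = 4000;

// Flatten embed titles, descriptions, and fields into plain text, capped so
// rich messages cannot blow up the conversation context. Callable from both
// the history path and the trigger-prompt path.
function extractEmbedText(embeds: EmbedLike[] | undefined): string {
  if (!embeds || embeds.length === 0) return "";
  const texts: string[] = [];
  for (const embed of embeds) {
    const parts: string[] = [];
    if (embed.title) parts.push(embed.title);
    if (embed.description) parts.push(embed.description);
    for (const field of embed.fields ?? []) {
      parts.push(`${field.name}: ${field.value}`);
    }
    if (parts.length > 0) texts.push(parts.join("\n"));
  }
  let combined = texts.join("\n\n");
  if (combined.length > MAX_EMBED_TEXT) {
    combined = combined.substring(0, MAX_EMBED_TEXT) + "\n[embed text truncated]";
  }
  return combined;
}
```

Both call sites can then append the helper's result to their existing text (`content` or the trigger prompt) with the same `text ? `${text}\n\n${embedText}` : embedText` join, keeping the two paths identical.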
| const result = (await response.json()) as Record<string, unknown>; | ||
| if (typeof result.text !== "string") { | ||
| throw new Error("OpenAI Whisper API returned an unexpected response (missing 'text' field)"); | ||
| } | ||
| return { text: result.text }; |
Use a Zod schema for OpenAI Whisper API response validation.
This endpoint returns external JSON that must be validated. Currently the code uses a manual typeof check, but a Zod schema aligns with project standards and provides cleaner error handling if the API response shape changes. Zod is available via @herdctl/core, which is already a dependency.
Suggested schema-based parse
+import { z } from "zod";
+
+const TranscriptionResponseSchema = z.object({
+ text: z.string(),
+});
+
export async function transcribeAudio(
audioBuffer: Buffer,
filename: string,
options: TranscribeOptions,
): Promise<TranscribeResult> {
// ... API call setup ...
if (!response.ok) {
const errorBody = await response.text().catch(() => "unknown error");
throw new Error(`OpenAI Whisper API error (${response.status}): ${errorBody}`);
}
- const result = (await response.json()) as Record<string, unknown>;
- if (typeof result.text !== "string") {
- throw new Error("OpenAI Whisper API returned an unexpected response (missing 'text' field)");
- }
- return { text: result.text };
+ const result = TranscriptionResponseSchema.parse(await response.json());
+ return { text: result.text };
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const result = (await response.json()) as Record<string, unknown>; | |
| if (typeof result.text !== "string") { | |
| throw new Error("OpenAI Whisper API returned an unexpected response (missing 'text' field)"); | |
| } | |
| return { text: result.text }; | |
| import { z } from "zod"; | |
| const TranscriptionResponseSchema = z.object({ | |
| text: z.string(), | |
| }); | |
| export async function transcribeAudio( | |
| audioBuffer: Buffer, | |
| filename: string, | |
| options: TranscribeOptions, | |
| ): Promise<TranscribeResult> { | |
| // ... API call setup ... | |
| if (!response.ok) { | |
| const errorBody = await response.text().catch(() => "unknown error"); | |
| throw new Error(`OpenAI Whisper API error (${response.status}): ${errorBody}`); | |
| } | |
| const result = TranscriptionResponseSchema.parse(await response.json()); | |
| return { text: result.text }; | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/discord/src/voice-transcriber.ts` around lines 61 - 65, Replace the
manual typeof check on the Whisper response with a Zod schema parse: import z
from '@herdctl/core', define a schema like z.object({ text: z.string() }) and
use schema.parse(result) (or safeParse and throw a clear error) where the
current `result` is validated; update the function in voice-transcriber.ts that
reads `const result = (await response.json()) as Record<string, unknown>;` to
parse via Zod and return the parsed `.text` value (preserving the same return
shape) so any schema mismatch produces a descriptive validation error.
```ts
  private static async buildPromptWithAttachments(
    textPrompt: string,
    attachments: SlackAttachmentInfo[] | undefined,
    botToken: string | undefined,
    logger: ChatConnectorLogger,
  ): Promise<PromptContent> {
    if (!attachments || attachments.length === 0 || !botToken) {
      return textPrompt;
    }

    // Filter to supported image types
    const imageAttachments = attachments.filter((a) =>
      SlackManager.IMAGE_MIME_TYPES.has(a.mimetype),
    );

    if (imageAttachments.length === 0) {
      return textPrompt;
    }

    // Download images and build content blocks
    const blocks: ContentBlock[] = [];

    // Add text block first
    if (textPrompt.trim()) {
      blocks.push({ type: "text", text: textPrompt });
    }

    for (const attachment of imageAttachments) {
      try {
        // Slack url_private requires Bearer token authentication
        const response = await fetch(attachment.urlPrivate, {
          headers: { Authorization: `Bearer ${botToken}` },
          signal: AbortSignal.timeout(30_000),
        });
        if (!response.ok) {
          logger.warn(`Failed to download Slack file ${attachment.name}: HTTP ${response.status}`);
          continue;
        }
        const buffer = Buffer.from(await response.arrayBuffer());
        const base64 = buffer.toString("base64");
        blocks.push({
          type: "image",
          source: {
            type: "base64",
            media_type: attachment.mimetype as
              | "image/jpeg"
              | "image/png"
              | "image/gif"
              | "image/webp",
            data: base64,
          },
        });
```
Add a hard size cap before buffering Slack images.
This path downloads every image fully into memory and then base64-encodes it, but attachment.size is never checked. A large Slack upload can spike memory on this path and still fail downstream on prompt-size limits. Skip or downsize attachments over a defined byte budget before calling arrayBuffer().
Possible fix

```diff
+  private static readonly MAX_IMAGE_BYTES = 5 * 1024 * 1024;
+
     for (const attachment of imageAttachments) {
+      if (attachment.size > SlackManager.MAX_IMAGE_BYTES) {
+        logger.warn(
+          `Skipping Slack image ${attachment.name}: ${attachment.size} bytes exceeds ${SlackManager.MAX_IMAGE_BYTES} byte limit`,
+        );
+        continue;
+      }
+
       try {
         // Slack url_private requires Bearer token authentication
         const response = await fetch(attachment.urlPrivate, {
           headers: { Authorization: `Bearer ${botToken}` },
           signal: AbortSignal.timeout(30_000),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/slack/src/manager.ts` around lines 794 - 845, In
buildPromptWithAttachments (SlackManager) add a hard byte-size check before
downloading/buffering: define a MAX_ATTACHMENT_BYTES constant, inspect
attachment.size (or use response.headers.get('content-length') / response.body
stream to enforce the cap) and if the file exceeds the cap, log a warning via
logger.warn and skip the attachment; for files slightly over the limit consider
streaming/downsizing or rejecting earlier rather than calling
response.arrayBuffer() so you never allocate large buffers in memory and
downstream prompt-size limits are respected.
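A sketch of the pre-download guard under stated assumptions: `MAX_ATTACHMENT_BYTES` and `shouldBufferAttachment` are illustrative names, and the fallback to the Content-Length header is optional hardening, not something the PR implements:

```typescript
const MAX_ATTACHMENT_BYTES = 5 * 1024 * 1024; // illustrative 5 MiB budget

// Returns true only when neither the Slack-reported size nor the
// Content-Length header (when present) exceeds the byte budget.
// A missing or unparseable Content-Length does not block the download.
function shouldBufferAttachment(
  reportedSize: number | undefined,
  contentLength: string | null,
  maxBytes: number = MAX_ATTACHMENT_BYTES,
): boolean {
  if (reportedSize !== undefined && reportedSize > maxBytes) return false;
  if (contentLength !== null) {
    const parsed = Number(contentLength);
    if (Number.isFinite(parsed) && parsed > maxBytes) return false;
  }
  return true;
}
```

In `buildPromptWithAttachments` this check would run before `response.arrayBuffer()`, with a `logger.warn` and `continue` on the skip path so no large buffer is ever allocated.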
…mode Claude Code 2.1.71+ no longer writes .jsonl session files in -p (pipe) mode. This caused a 60s timeout on every new Docker agent session. Now races waitForNewSessionFile against process completion. If the process exits first, extracts session_id from stdout JSON and creates a stub .jsonl file so the session watcher has something to work with. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds headers field to McpServerSchema and SDKMcpServerConfig, allowing Authorization and other custom headers on HTTP-based MCP servers. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The scheduler MCP server needs host filesystem access to the state directory. Set host: true so it gets HTTP-bridged into containers automatically via the MCP host bridge. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove stub file creation when CLI exits without writing a session file. Empty stubs cause "No conversation found" errors when herdctl tries to resume them. Instead, use a synthetic session ID so the current run completes but won't be resumed. - Validate Docker session files have content (size > 0) before allowing resume. Empty files are treated as missing. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 5
♻️ Duplicate comments (1)
packages/core/src/runner/runtime/cli-runtime.ts (1)
267-271: ⚠️ Potential issue | 🟠 Major

Per-run timeout should not be written into shared `process.env`.

Concurrent `CLIRuntime` executions can clobber each other here and restore the wrong timeout value afterward. This still needs to move into the spawned subprocess environment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/runner/runtime/cli-runtime.ts` around lines 267 - 271, The code writes per-run timeout into the global process.env (CLAUDE_CODE_STREAM_CLOSE_TIMEOUT) inside CLIRuntime, which can be clobbered by concurrent runs; instead avoid mutating process.env and pass the timeout only into the spawned subprocess environment: when options.injectedMcpServers["herdctl-file-sender"] is true compute the desired timeout value locally (use savedStreamCloseTimeout as an internal marker variable but do not set process.env), then when creating the child process (where spawn/exec is performed) merge process.env with { CLAUDE_CODE_STREAM_CLOSE_TIMEOUT: "120000" } in the child options.env so only the child sees the override; remove any logic that restores process.env from savedStreamCloseTimeout.
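The suggested child-only override can be sketched with `spawnSync` for illustration (the real runtime uses an async spawn); the point is only the env-merge pattern, which leaves the parent's `process.env` untouched:

```typescript
import { spawnSync } from "node:child_process";

// Sketch of the fix the comment describes: the override is merged into the
// child's env only, so concurrent runs never mutate the shared process.env.
// The variable name matches the review; everything else is illustrative.
function runWithStreamCloseTimeout(timeoutMs: number): string {
  const result = spawnSync(
    process.execPath,
    ["-e", "console.log(process.env.CLAUDE_CODE_STREAM_CLOSE_TIMEOUT)"],
    {
      // Merge, don't mutate: only the child sees the override.
      env: { ...process.env, CLAUDE_CODE_STREAM_CLOSE_TIMEOUT: String(timeoutMs) },
      encoding: "utf8",
    },
  );
  return result.stdout.trim();
}
```

With this shape there is nothing to "restore" afterward, so the `savedStreamCloseTimeout` bookkeeping can be deleted entirely.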
🧹 Nitpick comments (3)
packages/core/src/config/self-scheduling.ts (2)
64-68: Consider adding debug logging for injection operations.

For operational visibility, debug-level logging when MCP servers are injected would help operators troubleshoot self-scheduling configuration. Per coding guidelines, use `createLogger` from the appropriate relative path.

♻️ Suggested addition

```diff
+import { createLogger } from "../utils/logger.js";
+
+const logger = createLogger("self-scheduling");
+
 export function injectSchedulerMcpServers(agents: ResolvedAgent[], stateDir: string): void {
   const mcpPath = getSchedulerMcpPath();
   for (const agent of agents) {
     if (!agent.self_scheduling?.enabled) continue;
+    logger.debug(`Injecting scheduler MCP server for agent: ${agent.qualifiedName}`);
+
     const selfScheduling = agent.self_scheduling;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/config/self-scheduling.ts` around lines 64 - 68, Add debug-level logging to injectSchedulerMcpServers to record when MCP servers are injected and which agent is affected: import and instantiate createLogger, call it (e.g., logger.debug) before and after injection steps inside injectSchedulerMcpServers and include the mcpPath and agent.id or agent.name plus whether agent.self_scheduling?.enabled was true; use getSchedulerMcpPath to show the path in the log and ensure messages are concise and only logged at debug level.
34-35: Consider extracting default values as shared constants.

The defaults `10` and `"5m"` are duplicated here and at lines 88-89 when constructing env vars. They also match the defaults in `scheduler-mcp.ts` `EnvSchema`. Extracting these as module-level constants would prevent drift if limits are adjusted later.

♻️ Suggested refactor

```diff
+const DEFAULT_MAX_SCHEDULES = 10;
+const DEFAULT_MIN_INTERVAL = "5m";
+
 function buildSchedulingPrompt(
   selfScheduling: NonNullable<ResolvedAgent["self_scheduling"]>,
 ): string {
-  const maxSchedules = selfScheduling.max_schedules ?? 10;
-  const minInterval = selfScheduling.min_interval ?? "5m";
+  const maxSchedules = selfScheduling.max_schedules ?? DEFAULT_MAX_SCHEDULES;
+  const minInterval = selfScheduling.min_interval ?? DEFAULT_MIN_INTERVAL;
```

Then use the same constants at lines 88-89.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/config/self-scheduling.ts` around lines 34 - 35, Extract the duplicated defaults into module-level constants (e.g., DEFAULT_MAX_SCHEDULES = 10 and DEFAULT_MIN_INTERVAL = "5m") and replace the inline literals used when reading selfScheduling (maxSchedules = selfScheduling.max_schedules ?? ...) and when constructing env vars later in this module with those constants; ensure the same constants align with the EnvSchema defaults in scheduler-mcp.ts so updates are centralized and avoid drift.

packages/core/src/runner/types.ts (1)
15-28: Use `interface` for the block object shapes.

`TextBlock` and `ImageBlock` are plain object shapes, so they should follow the repo convention and be declared as interfaces. The union aliases can stay as types.

♻️ Proposed refactor

```diff
-export type TextBlock = { type: "text"; text: string };
+export interface TextBlock {
+  type: "text";
+  text: string;
+}

-export type ImageBlock = {
+export interface ImageBlock {
   type: "image";
   source: {
     type: "base64";
     media_type: "image/jpeg" | "image/png" | "image/gif" | "image/webp";
     data: string; // base64-encoded
   };
-};
+}
```

As per coding guidelines, "Prefer `interface` over `type` for defining object shapes in TypeScript".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/runner/types.ts` around lines 15 - 28, Convert the object-shaped aliases TextBlock and ImageBlock from type declarations to exported interfaces keeping their exact property shapes and JSDoc comments (interface TextBlock { type: "text"; text: string } and interface ImageBlock { type: "image"; source: { type: "base64"; media_type: "image/jpeg" | "image/png" | "image/gif" | "image/webp"; data: string; }; };), and leave the union alias ContentBlock as a type alias (export type ContentBlock = TextBlock | ImageBlock) so external usage and discriminated unions remain unchanged; ensure all exports and names (TextBlock, ImageBlock, ContentBlock) match the originals.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/runner/runtime/cli-runtime.ts`:
- Around line 439-455: The race between sessionFilePromise and
processExitPromise can falsely choose process_done before a late .jsonl appears;
update the fallback in the Promise.race handling so that when raceResult.type
=== "process_done" you wait a short grace period (e.g., 50–200ms) or attempt a
final non-blocking check for the session file before generating a synthetic ID.
Concretely, after detecting process_done (in the same block that currently
computes syntheticId and sets sessionFilePath), re-check the session directory
for the expected file path (or re-await/inspect sessionFilePromise with a short
timeout) and only create the synthetic path if the file truly does not exist;
use the existing symbols sessionFilePromise, processExitPromise, raceResult,
sessionFilePath, sessionDir, and capturedSessionId to locate and modify the
logic.
- Around line 384-387: The code is currently logging raw Claude stdout at info
level inside the subprocess.stdout?.on("data", ...) handler (variable text and
logger.info), which can leak model/user data; change this to not emit the raw
chunk at info—either remove logger.info(text) or demote it to logger.debug(text)
— then parse the stream to extract session_id and log only the extracted
session_id (or other minimal metadata) at info level; ensure the parsing and
session_id extraction logic remains in the same handler so no raw content is
emitted to default logs.
- Around line 161-170: The fallback branch currently pushes
options.systemPromptAppend using "--system-prompt", which replaces the default
Claude Code prompt; change that branch to push it with "--append-system-prompt"
(i.e., update the args.push call where only options.systemPromptAppend is
present so it uses "--append-system-prompt" and options.systemPromptAppend
instead of "--system-prompt"). This ensures that when
options.agent.system_prompt is absent but options.systemPromptAppend exists, the
code appends to the default prompt rather than replacing it; locate the
conditional handling options.agent.system_prompt and options.systemPromptAppend
and modify the final args.push accordingly.
In `@packages/core/src/runner/types.ts`:
- Around line 48-57: The loop that builds parts currently collects image blocks
by incrementing imageCount and appending a combined "[{n} image(s) attached]" at
the end, which reorders images; change the loop so when block.type === "image"
you push the image placeholder into parts at that position (e.g. push the same
placeholder string used today, such as "[1 image(s) attached]" or a single-image
variant) instead of incrementing imageCount, and remove the final if (imageCount
> 0) append; update any tests that expect the old suffix behavior. Ensure you
edit the code handling prompt, parts and imageCount in this file
(packages/core/src/runner/types.ts) to push placeholders inline rather than
aggregating them.
In `@packages/core/src/state/session-validation.ts`:
- Around line 144-146: The current existence helper returns stats.size > 0 which
treats 0-byte Docker session files as missing; change it to return true on a
successful stat (i.e., treat any present file, including size === 0, as
existing) so validateSessionWithFileCheck() doesn't mark transient, just-created
session files as file_not_found; if you need to reject permanently empty files,
implement a separate "resume-ready" check (e.g., isSessionResumeReady or
similar) that explicitly verifies stats.size > 0 after the writer/append phase.
---
Duplicate comments:
In `@packages/core/src/runner/runtime/cli-runtime.ts`:
- Around line 267-271: The code writes per-run timeout into the global
process.env (CLAUDE_CODE_STREAM_CLOSE_TIMEOUT) inside CLIRuntime, which can be
clobbered by concurrent runs; instead avoid mutating process.env and pass the
timeout only into the spawned subprocess environment: when
options.injectedMcpServers["herdctl-file-sender"] is true compute the desired
timeout value locally (use savedStreamCloseTimeout as an internal marker
variable but do not set process.env), then when creating the child process
(where spawn/exec is performed) merge process.env with {
CLAUDE_CODE_STREAM_CLOSE_TIMEOUT: "120000" } in the child options.env so only
the child sees the override; remove any logic that restores process.env from
savedStreamCloseTimeout.
---
Nitpick comments:
In `@packages/core/src/config/self-scheduling.ts`:
- Around line 64-68: Add debug-level logging to injectSchedulerMcpServers to
record when MCP servers are injected and which agent is affected: import and
instantiate createLogger, call it (e.g., logger.debug) before and after
injection steps inside injectSchedulerMcpServers and include the mcpPath and
agent.id or agent.name plus whether agent.self_scheduling?.enabled was true; use
getSchedulerMcpPath to show the path in the log and ensure messages are concise
and only logged at debug level.
- Around line 34-35: Extract the duplicated defaults into module-level constants
(e.g., DEFAULT_MAX_SCHEDULES = 10 and DEFAULT_MIN_INTERVAL = "5m") and replace
the inline literals used when reading selfScheduling (maxSchedules =
selfScheduling.max_schedules ?? ...) and when constructing env vars later in
this module with those constants; ensure the same constants align with the
EnvSchema defaults in scheduler-mcp.ts so updates are centralized and avoid
drift.
In `@packages/core/src/runner/types.ts`:
- Around line 15-28: Convert the object-shaped aliases TextBlock and ImageBlock
from type declarations to exported interfaces keeping their exact property
shapes and JSDoc comments (interface TextBlock { type: "text"; text: string }
and interface ImageBlock { type: "image"; source: { type: "base64"; media_type:
"image/jpeg" | "image/png" | "image/gif" | "image/webp"; data: string; }; };),
and leave the union alias ContentBlock as a type alias (export type ContentBlock
= TextBlock | ImageBlock) so external usage and discriminated unions remain
unchanged; ensure all exports and names (TextBlock, ImageBlock, ContentBlock)
match the originals.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 84ecc738-207f-4368-8071-09b3a1d8ddfe
📒 Files selected for processing (6)
- packages/core/src/config/__tests__/self-scheduling.test.ts
- packages/core/src/config/self-scheduling.ts
- packages/core/src/runner/runtime/cli-runtime.ts
- packages/core/src/runner/sdk-adapter.ts
- packages/core/src/runner/types.ts
- packages/core/src/state/session-validation.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/core/src/config/__tests__/self-scheduling.test.ts
```ts
      // Add system prompt if specified, with optional append for chat platforms
      if (options.agent.system_prompt && options.systemPromptAppend) {
        args.push(
          "--system-prompt",
          options.agent.system_prompt + "\n\n" + options.systemPromptAppend,
        );
      } else if (options.agent.system_prompt) {
        args.push("--system-prompt", options.agent.system_prompt);
      } else if (options.systemPromptAppend) {
        args.push("--system-prompt", options.systemPromptAppend);
```
🧩 Analysis chain
🌐 Web query:
Does the Claude CLI --system-prompt option replace the default Claude Code system prompt, or is there an official way to append text to the default/preset system prompt? Please check official Anthropic/Claude Code documentation.
💡 Result:
Yes—per the official Claude Code CLI reference, --system-prompt replaces the entire default Claude Code system prompt (“removes all default Claude Code instructions”). [1]
To append text while keeping the built-in/default prompt, the official mechanism is to use --append-system-prompt (or --append-system-prompt-file). [1] Anthropic also notes in the official settings docs that Claude Code’s internal system prompt isn’t published, and directs users to CLAUDE.md or --append-system-prompt to add custom instructions. [2]
Sources: [1] CLI reference (Claude Code) • [2] Claude Code settings docs
Use --append-system-prompt to append text to the default Claude Code system prompt.
The Claude Code CLI option --system-prompt replaces the entire default Claude Code system prompt; to append text while preserving the default, use --append-system-prompt or --append-system-prompt-file.
The fallback branch (lines 165–167) treats systemPromptAppend as the sole system prompt, passing it via --system-prompt. This replaces the default Claude Code prompt instead of appending to it, breaking the append semantics specified in RunnerOptions.
Update the fallback case to pass systemPromptAppend via --append-system-prompt instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/runner/runtime/cli-runtime.ts` around lines 161 - 170, The
fallback branch currently pushes options.systemPromptAppend using
"--system-prompt", which replaces the default Claude Code prompt; change that
branch to push it with "--append-system-prompt" (i.e., update the args.push call
where only options.systemPromptAppend is present so it uses
"--append-system-prompt" and options.systemPromptAppend instead of
"--system-prompt"). This ensures that when options.agent.system_prompt is absent
but options.systemPromptAppend exists, the code appends to the default prompt
rather than replacing it; locate the conditional handling
options.agent.system_prompt and options.systemPromptAppend and modify the final
args.push accordingly.
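The corrected branching can be sketched as a pure helper; `buildSystemPromptArgs` is a hypothetical name for illustration (the real code pushes onto an existing `args` array):

```typescript
// Sketch of the fixed fallback: an explicit system_prompt still replaces the
// default prompt, but an append-only value now uses --append-system-prompt
// so Claude Code's built-in instructions are preserved.
function buildSystemPromptArgs(
  systemPrompt: string | undefined,
  systemPromptAppend: string | undefined,
): string[] {
  if (systemPrompt && systemPromptAppend) {
    // Explicit prompt already replaces the default; append to it directly.
    return ["--system-prompt", systemPrompt + "\n\n" + systemPromptAppend];
  }
  if (systemPrompt) {
    return ["--system-prompt", systemPrompt];
  }
  if (systemPromptAppend) {
    // No explicit prompt: keep the default and append to it.
    return ["--append-system-prompt", systemPromptAppend];
  }
  return [];
}
```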
```diff
 subprocess.stdout?.on("data", (data) => {
-  logger.info(data.toString());
+  const text = data.toString();
+  logger.info(text);
+  // Extract session_id from stream-json result lines
```
Don’t log raw Claude stdout at info.
These chunks can contain full model output and user data, so this now leaks normal-run content into default logs. Parse the session_id without emitting the raw chunk, or move any dump behind debug.
As per coding guidelines, "Choose appropriate log levels: debug for internal details, info for significant events, warn for recoverable issues, error for failures".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/runner/runtime/cli-runtime.ts` around lines 384 - 387, The
code is currently logging raw Claude stdout at info level inside the
subprocess.stdout?.on("data", ...) handler (variable text and logger.info),
which can leak model/user data; change this to not emit the raw chunk at
info—either remove logger.info(text) or demote it to logger.debug(text) — then
parse the stream to extract session_id and log only the extracted session_id (or
other minimal metadata) at info level; ensure the parsing and session_id
extraction logic remains in the same handler so no raw content is emitted to
default logs.
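The suggested parse-without-logging step might look like this sketch; `extractSessionId` is an illustrative helper and assumes stream-json output with one JSON object per line, where result lines may carry a `session_id` field:

```typescript
// Scans a stdout chunk for a session_id without ever emitting the raw
// content; the caller can then log only the extracted ID at info level
// and demote any full-chunk dump to debug.
function extractSessionId(chunk: string): string | undefined {
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("{")) continue;
    try {
      const parsed = JSON.parse(trimmed) as { session_id?: unknown };
      if (typeof parsed.session_id === "string") return parsed.session_id;
    } catch {
      // Partial or non-JSON line; ignore and keep scanning.
    }
  }
  return undefined;
}
```

In the data handler this becomes `logger.debug(text)` (if a dump is wanted at all) plus an info-level line containing only the extracted session_id.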
```ts
      // Race: session file appears OR process exits (whichever comes first)
      const raceResult = await Promise.race([
        sessionFilePromise.then((path) => ({ type: "file" as const, path })),
        processExitPromise.then(() => ({ type: "process_done" as const, path: undefined })),
      ]);
      if (raceResult.type === "file") {
        sessionFilePath = raceResult.path;
      } else {
        // Process finished without writing a session file.
        // This can happen if the CLI exits quickly or if the session directory
        // isn't writable from inside the container. Don't create stub files —
        // they break resume. Instead, generate a synthetic session ID so the
        // current run can complete, but the session won't be resumable.
        const syntheticId = capturedSessionId ?? crypto.randomUUID();
        logger.warn(`No session file written by CLI (session ${syntheticId}). Run will complete but session cannot be resumed.`);
        sessionFilePath = `${sessionDir}/${syntheticId}.jsonl`;
      }
```
Wait briefly after process exit before falling back to a synthetic session path.
A real .jsonl can appear a moment after the CLI exits, especially on Docker or mounted filesystems. Once process_done wins this race, the runtime never watches the real file, so late final messages are lost and resume is disabled even though the session file may arrive milliseconds later.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/runner/runtime/cli-runtime.ts` around lines 439 - 455, The
race between sessionFilePromise and processExitPromise can falsely choose
process_done before a late .jsonl appears; update the fallback in the
Promise.race handling so that when raceResult.type === "process_done" you wait a
short grace period (e.g., 50–200ms) or attempt a final non-blocking check for
the session file before generating a synthetic ID. Concretely, after detecting
process_done (in the same block that currently computes syntheticId and sets
sessionFilePath), re-check the session directory for the expected file path (or
re-await/inspect sessionFilePromise with a short timeout) and only create the
synthetic path if the file truly does not exist; use the existing symbols
sessionFilePromise, processExitPromise, raceResult, sessionFilePath, sessionDir,
and capturedSessionId to locate and modify the logic.
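The grace-period fallback can be sketched as a small helper; `awaitWithGrace` is an illustrative name, and 200ms is an arbitrary value within the 50-200ms range the comment suggests:

```typescript
// Gives a pending promise a short extra window to settle; resolves to
// undefined if the grace period elapses first. The timer is always cleared
// so it cannot keep the process alive.
async function awaitWithGrace<T>(
  pending: Promise<T>,
  graceMs: number,
): Promise<T | undefined> {
  let timer: NodeJS.Timeout;
  const timeout = new Promise<undefined>((resolve) => {
    timer = setTimeout(() => resolve(undefined), graceMs);
  });
  try {
    return await Promise.race([pending, timeout]);
  } finally {
    clearTimeout(timer!);
  }
}
```

On the `process_done` branch this would read roughly: `const latePath = await awaitWithGrace(sessionFilePromise, 200);` and only fall back to the synthetic ID when `latePath` is undefined.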
```ts
    for (const block of prompt) {
      if (block.type === "text") {
        parts.push(block.text);
      } else if (block.type === "image") {
        imageCount++;
      }
    }
    if (imageCount > 0) {
      parts.push(`[${imageCount} image(s) attached]`);
    }
```
Preserve image placeholders in the original block order.
This currently moves all image markers to the end, so an interleaved prompt like text -> image -> text becomes text -> text -> [1 image(s) attached]. That changes the CLI fallback prompt semantics for references like “in the image above/below”.
🛠️ Proposed fix

```diff
 const parts: string[] = [];
-let imageCount = 0;
 for (const block of prompt) {
   if (block.type === "text") {
     parts.push(block.text);
   } else if (block.type === "image") {
-    imageCount++;
+    parts.push("[image attached]");
   }
 }
-if (imageCount > 0) {
-  parts.push(`[${imageCount} image(s) attached]`);
-}
 return parts.join("\n");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/runner/types.ts` around lines 48 - 57, The loop that builds
parts currently collects image blocks by incrementing imageCount and appending a
combined "[{n} image(s) attached]" at the end, which reorders images; change the
loop so when block.type === "image" you push the image placeholder into parts at
that position (e.g. push the same placeholder string used today, such as "[1
image(s) attached]" or a single-image variant) instead of incrementing
imageCount, and remove the final if (imageCount > 0) append; update any tests
that expect the old suffix behavior. Ensure you edit the code handling prompt,
parts and imageCount in this file (packages/core/src/runner/types.ts) to push
placeholders inline rather than aggregating them.
```ts
    const stats = await stat(sessionFile);
    // Empty files (0 bytes) are invalid — they can't be resumed
    return stats.size > 0;
```
Don't treat a newly created 0-byte Docker session file as missing.
This helper is named and documented as an existence check, but stats.size > 0 turns it into a content-readiness check. packages/core/src/runner/runtime/cli-runtime.ts:454-485 already documents that the Docker session path can be observed before the first append, so size === 0 is a valid transient state. Returning false here makes validateSessionWithFileCheck() mark an initializing session as file_not_found, which can trigger duplicate session creation.
Suggested fix

```diff
-  const stats = await stat(sessionFile);
-  // Empty files (0 bytes) are invalid — they can't be resumed
-  return stats.size > 0;
+  const stats = await stat(sessionFile);
+  return stats.isFile();
```

If you still want to reject permanently empty files, do that in a separate "resume-ready" check after the writer has had a chance to populate the file.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-    const stats = await stat(sessionFile);
-    // Empty files (0 bytes) are invalid — they can't be resumed
-    return stats.size > 0;
+    const stats = await stat(sessionFile);
+    return stats.isFile();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/state/session-validation.ts` around lines 144 - 146, The
current existence helper returns stats.size > 0 which treats 0-byte Docker
session files as missing; change it to return true on a successful stat (i.e.,
treat any present file, including size === 0, as existing) so
validateSessionWithFileCheck() doesn't mark transient, just-created session
files as file_not_found; if you need to reject permanently empty files,
implement a separate "resume-ready" check (e.g., isSessionResumeReady or
similar) that explicitly verifies stats.size > 0 after the writer/append phase.
Self-scheduling is now opt-out instead of opt-in. Agents that omit self_scheduling config get it enabled with default limits (max 10 schedules, 5m min interval). Set enabled: false to disable. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 4
🧹 Nitpick comments (4)
packages/slack/src/__tests__/manager.test.ts (1)
73-73: Optional: deduplicate the repeated `self_scheduling` fixture literal.

Same object is defined twice; a shared constant would make future default changes safer.

Suggested refactor

```diff
+const defaultSelfScheduling = { enabled: true, max_schedules: 10, min_interval: "5m" } as const;
+
 function createSlackAgent(name: string, slackConfig: AgentChatSlack): ResolvedAgent {
   return {
@@
-    self_scheduling: { enabled: true, max_schedules: 10, min_interval: "5m" },
+    self_scheduling: { ...defaultSelfScheduling },
   } as ResolvedAgent;
 }
@@
 function createNonSlackAgent(name: string): ResolvedAgent {
   return {
@@
-    self_scheduling: { enabled: true, max_schedules: 10, min_interval: "5m" },
+    self_scheduling: { ...defaultSelfScheduling },
   } as ResolvedAgent;
 }
```

Also applies to: 85-85
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/slack/src/__tests__/manager.test.ts` at line 73, Extract the repeated self_scheduling literal into a single shared constant (e.g., const SELF_SCHEDULING = { enabled: true, max_schedules: 10, min_interval: "5m" }) and replace the duplicate inline objects in the test file with references to that constant; update occurrences used in the tests that currently define self_scheduling inline (the fixtures around the manager tests) to use SELF_SCHEDULING so future changes only require editing one place.

packages/discord/src/__tests__/manager.test.ts (2)
580-582: Remove `console.log` from mock function.

The mock `reply` function logs to console, which can clutter test output. While this is test code, it's better practice to use a no-op or track calls via the mock itself.

♻️ Suggested fix

```diff
-      reply: async (content) => {
-        console.log("Reply:", content);
-      },
+      reply: vi.fn().mockResolvedValue(undefined),
```
Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/__tests__/manager.test.ts` around lines 580 - 582, The test mock for the reply function currently prints to stdout; replace the inline console.log in the mock reply implementation with a no-op or a jest mock so tests don't clutter output—e.g., change the reply mock (reply: async (content) => { console.log("Reply:", content); }) to a silent implementation or use jest.fn()/sinon stub to record calls (reply: jest.fn().mockResolvedValue(undefined)) so assertions can still inspect calls without printing.
1086-1086: Consider reducing hardcoded timeout delays.

Several tests use `setTimeout` with 2500ms delays to "wait for rate limiting". These long delays can make tests slow and potentially flaky in CI environments.

Consider using shorter delays (e.g., 50-100ms) for tests where rate limiting isn't the focus, or use `vi.useFakeTimers()` to control time advancement deterministically:

```ts
// Example using fake timers
beforeEach(() => {
  vi.useFakeTimers();
});
afterEach(() => {
  vi.useRealTimers();
});

// In test:
connector.emit("message", event);
await vi.runAllTimersAsync();
```

Also applies to: 1130-1130, 1609-1609, 1682-1682
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/discord/src/__tests__/manager.test.ts` at line 1086, Replace hardcoded long setTimeout waits in the tests with deterministic timing: either shorten the delay to a small value (50–100ms) where rate-limiting behavior is not under test, or switch the test suite to use vi.useFakeTimers()/vi.runAllTimersAsync() and vi.useRealTimers() in beforeEach/afterEach so you can emit events (e.g., via connector.emit("message", event)) and then advance timers deterministically instead of awaiting new Promise(resolve => setTimeout(resolve, 2000+)). Update the occurrences of the Promise+setTimeout pattern in the manager.test.ts tests (the places that emit connector events and then wait for rate limiting) to use the fake timer approach or reduced delays.

packages/cli/src/commands/__tests__/agent.test.ts (1)
315-316: Optional: deduplicate repeated `self_scheduling` fixture payload.

The same object is repeated several times; extracting one constant will make future updates less error-prone.

Small DRY refactor

```diff
+const DEFAULT_SELF_SCHEDULING = { enabled: true, max_schedules: 10, min_interval: "5m" };
 ...
-      agentConfig: { name: agentName, permission_mode: "default", runtime: "sdk", self_scheduling: { enabled: true, max_schedules: 10, min_interval: "5m" } },
+      agentConfig: { name: agentName, permission_mode: "default", runtime: "sdk", self_scheduling: DEFAULT_SELF_SCHEDULING },
```

Also applies to: 371-372, 569-570, 632-633
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/__tests__/agent.test.ts` around lines 315 - 316, Extract the repeated self_scheduling object into a single constant (e.g., SELF_SCHEDULING_FIXTURE) and reuse it inside the various agentConfig fixtures instead of inlining the object; update the test cases that construct agentConfig (the entries containing permission_mode, runtime, and self_scheduling) to reference SELF_SCHEDULING_FIXTURE, and replace the repeated literals at the other occurrences noted in the comment so edits to scheduling params only need to be made in one place.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/config/schema.ts`:
- Line 629: Update the stale inline docs/comments for the Discord output
configuration so the documented default for the result_summary option matches
the actual schema default (false). Locate the occurrences of "result_summary" in
packages/core/src/config/schema.ts (the comment block around the Discord output
example and the related comments at the other two occurrences) and change the
example/comment that currently says "result_summary: true" to "result_summary:
false" (or otherwise note the default is false) so documentation and the schema
(result_summary default false) are consistent.
- Line 991: The self_scheduling schema currently defaults enabled to true which
turns the feature on for all agents; update the schema property
(self_scheduling.enabled) to default to false instead of true to make it opt-in,
and apply the same change to the other occurrence referenced in the file (the
second self_scheduling.enabled block around the 1043-1045 area) so both schema
instances are consistent. Ensure you only change the default value for the
enabled key and leave surrounding validation (z.boolean()) intact.
- Around line 736-742: The current Zod schema for download_dir (the download_dir
field in packages/core/src/config/schema.ts) only rejects POSIX
absolute/traversal patterns and still permits Windows absolute paths (e.g.,
C:\...) and backslash traversal (..\). Update the refine predicate to also
reject Windows drive-letter absolute paths and UNC/leading-backslash paths by
checking for patterns like /^[A-Za-z]:\\/ and /^\\\\/ and also reject
backslash-based parent segments (e.g., contains "..\\" or startsWith("\\"));
alternatively use Node's path utilities (path.isAbsolute and path.normalize)
inside the refine to ensure the value is a relative path without any ".."
segments after normalization. Ensure the error message remains descriptive and
keep the validation attached to the same download_dir Zod chain.
In `@packages/discord/src/__tests__/discord-connector.test.ts`:
- Line 128: The shared test fixture currently hardcodes result_summary: true;
change that default to result_summary: false in the shared config so tests
exercise the reverted default, and update any tests that need a true behavior to
set result_summary: true explicitly in their individual fixtures or test setup;
look for the shared fixture object that contains the "result_summary" property
and flip its value and adjust dependent tests accordingly.
---
Nitpick comments:
In `@packages/cli/src/commands/__tests__/agent.test.ts`:
- Around line 315-316: Extract the repeated self_scheduling object into a single
constant (e.g., SELF_SCHEDULING_FIXTURE) and reuse it inside the various
agentConfig fixtures instead of inlining the object; update the test cases that
construct agentConfig (the entries containing permission_mode, runtime, and
self_scheduling) to reference SELF_SCHEDULING_FIXTURE, and replace the repeated
literals at the other occurrences noted in the comment so edits to scheduling
params only need to be made in one place.
In `@packages/discord/src/__tests__/manager.test.ts`:
- Around line 580-582: The test mock for the reply function currently prints to
stdout; replace the inline console.log in the mock reply implementation with a
no-op or a jest mock so tests don't clutter output—e.g., change the reply mock
(reply: async (content) => { console.log("Reply:", content); }) to a silent
implementation or use jest.fn()/sinon stub to record calls (reply:
jest.fn().mockResolvedValue(undefined)) so assertions can still inspect calls
without printing.
- Line 1086: Replace hardcoded long setTimeout waits in the tests with
deterministic timing: either shorten the delay to a small value (50–100ms) where
rate-limiting behavior is not under test, or switch the test suite to use
vi.useFakeTimers()/vi.runAllTimersAsync() and vi.useRealTimers() in
beforeEach/afterEach so you can emit events (e.g., via connector.emit("message",
event)) and then advance timers deterministically instead of awaiting new
Promise(resolve => setTimeout(resolve, 2000+)). Update the occurrences of the
Promise+setTimeout pattern in the manager.test.ts tests (the places that emit
connector events and then wait for rate limiting) to use the fake timer approach
or reduced delays.
In `@packages/slack/src/__tests__/manager.test.ts`:
- Line 73: Extract the repeated self_scheduling literal into a single shared
constant (e.g., const SELF_SCHEDULING = { enabled: true, max_schedules: 10,
min_interval: "5m" }) and replace the duplicate inline objects in the test file
with references to that constant; update occurrences used in the tests that
currently define self_scheduling inline (the fixtures around the manager tests)
to use SELF_SCHEDULING so future changes only require editing one place.
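The deterministic-timing advice for manager.test.ts rests on one idea: replace real timers with a clock the test controls, so a "wait 2000ms" path completes instantly. The toy sketch below illustrates the principle only — it is not vitest's implementation, and the `FakeClock` class is invented for illustration; in the real suite you would use vitest's own `vi.useFakeTimers()` / `vi.advanceTimersByTimeAsync()` / `vi.useRealTimers()`.

```typescript
// Toy fake clock illustrating what vi.useFakeTimers() provides: callbacks
// queued with setTimeout fire when the test advances virtual time, not in
// real time, so a 2000ms rate-limit window elapses immediately.
type PendingTimer = { due: number; cb: () => void };

class FakeClock {
  private now = 0;
  private timers: PendingTimer[] = [];

  setTimeout(cb: () => void, ms: number): void {
    this.timers.push({ due: this.now + ms, cb });
  }

  // Advance virtual time and run every callback that has come due, in order.
  advance(ms: number): void {
    this.now += ms;
    const due = this.timers
      .filter((t) => t.due <= this.now)
      .sort((a, b) => a.due - b.due);
    this.timers = this.timers.filter((t) => t.due > this.now);
    due.forEach((t) => t.cb());
  }
}

// Instead of `await new Promise((r) => setTimeout(r, 2000))`, the test
// advances the clock and the waiting callback runs deterministically.
const clock = new FakeClock();
let rateLimitExpired = false;
clock.setTimeout(() => { rateLimitExpired = true; }, 2000);
clock.advance(2000);
```

In vitest the same shape is `vi.useFakeTimers()` in `beforeEach`, `await vi.advanceTimersByTimeAsync(2000)` in the test body after emitting the connector event, and `vi.useRealTimers()` in `afterEach`.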
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4011defe-725d-4118-912e-4aeac354e398
📒 Files selected for processing (11)
packages/cli/src/commands/__tests__/agent.test.ts
packages/core/src/config/__tests__/merge.test.ts
packages/core/src/config/schema.ts
packages/core/src/runner/__tests__/job-executor.test.ts
packages/core/src/runner/__tests__/sdk-adapter.test.ts
packages/core/src/scheduler/__tests__/scheduler.test.ts
packages/core/src/work-sources/__tests__/manager.test.ts
packages/discord/src/__tests__/discord-connector.test.ts
packages/discord/src/__tests__/manager.test.ts
packages/slack/src/__tests__/manager.test.ts
packages/web/src/server/chat/web-chat-manager.ts
```diff
 * tool_result_max_length: 900
 * system_status: true
- * result_summary: false
+ * result_summary: true
```
Fix stale default value in Discord output docs/comments.
The comments say result_summary defaults to true, but schema default is false. This will confuse operators reading config examples.
Suggested doc fix
```diff
- * result_summary: true
+ * result_summary: false
 ...
- /** Show a summary embed when the agent finishes a turn (cost, tokens, turns) (default: true) */
+ /** Show a summary embed when the agent finishes a turn (cost, tokens, turns) (default: false) */
```

Also applies to: 636-637
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/config/schema.ts` at line 629, Update the stale inline
docs/comments for the Discord output configuration so the documented default for
the result_summary option matches the actual schema default (false). Locate the
occurrences of "result_summary" in packages/core/src/config/schema.ts (the
comment block around the Discord output example and the related comments at the
other two occurrences) and change the example/comment that currently says
"result_summary: true" to "result_summary: false" (or otherwise note the default
is false) so documentation and the schema (result_summary default false) are
consistent.
```ts
download_dir: z
  .string()
  .optional()
  .default(".discord-attachments")
  .refine((v) => !v.includes("..") && !v.startsWith("/"), {
    message: "download_dir must be a relative path without '..' segments",
  }),
```
Harden download_dir validation against Windows absolute/traversal paths.
Current refine allows values like C:\temp or ..\outside on Windows-style inputs. That weakens path-boundary guarantees for attachment writes.
Suggested validation hardening
```diff
 download_dir: z
   .string()
   .optional()
   .default(".discord-attachments")
-  .refine((v) => !v.includes("..") && !v.startsWith("/"), {
+  .refine((v) => {
+    // Reject absolute paths (POSIX, UNC, Windows drive letters)
+    if (/^(\/|\\\\|[a-zA-Z]:[\\/])/.test(v)) return false;
+    // Reject parent traversal in either separator style
+    if (/(^|[\\/])\.\.([\\/]|$)/.test(v)) return false;
+    return true;
+  }, {
     message: "download_dir must be a relative path without '..' segments",
   }),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/config/schema.ts` around lines 736 - 742, The current Zod
schema for download_dir (the download_dir field in
packages/core/src/config/schema.ts) only rejects POSIX absolute/traversal
patterns and still permits Windows absolute paths (e.g., C:\...) and backslash
traversal (..\). Update the refine predicate to also reject Windows drive-letter
absolute paths and UNC/leading-backslash paths by checking for patterns like
/^[A-Za-z]:\\/ and /^\\\\/ and also reject backslash-based parent segments
(e.g., contains "..\\" or startsWith("\\")); alternatively use Node's path
utilities (path.isAbsolute and path.normalize) inside the refine to ensure the
value is a relative path without any ".." segments after normalization. Ensure
the error message remains descriptive and keep the validation attached to the
same download_dir Zod chain.
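The path-utilities route mentioned above can be sketched as a standalone predicate. This is an illustration only: the helper name `isSafeRelativeDir` and the exact rule set are assumptions, not the project's code, and in the real schema it would be passed to the same `.refine(...)` chain.

```typescript
import path from "node:path";

// Illustrative helper (not the project's actual code): accept only relative
// paths that stay inside the base directory, rejecting POSIX absolutes
// ("/etc"), Windows drive absolutes ("C:\temp"), UNC/leading-backslash paths
// ("\\srv\x"), and any ".." parent segment in either separator style.
function isSafeRelativeDir(v: string): boolean {
  // path.isAbsolute only knows the host platform's rules, so also check
  // Windows-style absolute prefixes explicitly.
  if (path.isAbsolute(v) || /^(\/|\\|[A-Za-z]:[\\/])/.test(v)) return false;
  // Unify separators, collapse "a/b/.." segments; any ".." that survives
  // normalization would escape the base directory.
  const normalized = path.posix.normalize(v.replace(/\\/g, "/"));
  return !normalized.split("/").includes("..");
}
```

In the Zod chain this would become `.refine(isSafeRelativeDir, { message: "download_dir must be a relative path without '..' segments" })`, keeping the validation attached to the same `download_dir` field.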
```ts
 */
export const SelfSchedulingSchema = z.object({
  /** Whether self-scheduling is enabled for this agent */
  enabled: z.boolean().default(true),
```
Defaulting self_scheduling.enabled to true is an aggressive global behavior change.
With self_scheduling auto-defaulted in agent config, this enables capability by default for all agents unless explicitly disabled. Consider making it opt-in (enabled: false) to preserve least-privilege behavior.
Safer default
```diff
 export const SelfSchedulingSchema = z.object({
   /** Whether self-scheduling is enabled for this agent */
-  enabled: z.boolean().default(true),
+  enabled: z.boolean().default(false),
```

Also applies to: 1043-1045
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/config/schema.ts` at line 991, The self_scheduling schema
currently defaults enabled to true which turns the feature on for all agents;
update the schema property (self_scheduling.enabled) to default to false instead
of true to make it opt-in, and apply the same change to the other occurrence
referenced in the file (the second self_scheduling.enabled block around the
1043-1045 area) so both schema instances are consistent. Ensure you only change
the default value for the enabled key and leave surrounding validation
(z.boolean()) intact.
```diff
  tool_result_max_length: 900,
  system_status: true,
- result_summary: false,
+ result_summary: true,
```
Align shared fixture default with result_summary: false.
At Line 128, the shared config hardcodes result_summary: true. Since the default was reverted to false, this baseline can mask fallback/default regressions in tests that rely on fixture defaults.
Suggested tweak
```diff
- result_summary: true,
+ result_summary: false,
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
- result_summary: true,
+ result_summary: false,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/discord/src/__tests__/discord-connector.test.ts` at line 128, The
shared test fixture currently hardcodes result_summary: true; change that
default to result_summary: false in the shared config so tests exercise the
reverted default, and update any tests that need a true behavior to set
result_summary: true explicitly in their individual fixtures or test setup; look
for the shared fixture object that contains the "result_summary" property and
flip its value and adjust dependent tests accordingly.
Summary
Addresses Ed's feedback on PRs 194 and 197:
- **PDF uploads failing in DMs**: Discord can return `null` for `contentType` on attachments (especially in DMs). The code fell back to `application/octet-stream`, which was categorized as "unsupported" and silently dropped. Now guesses the MIME type from the file extension (`.pdf` → `application/pdf`) before falling back.
- **`result_summary` default flipped**: PR 194 intentionally changed `result_summary` from `false` to `true`, but this is a behavior change that makes all agents show cost/token summary embeds by default. Reverted to `false` and synced the manager's hardcoded fallback defaults to match.
- **Silent attachment failures**: When a user sends attachments but `attachments.enabled` is `false`, or when attachments are skipped (wrong type, too large, no working_directory), the logs were at `debug` level and thus invisible in normal operation. Upgraded to `warn` so operators can diagnose issues.

Note on `acknowledge_emoji`

Ed noted it appeared unimplemented, but it is fully wired: schema → manager → connector → Discord API. The implementation reads the emoji from config and calls `message.react()`. No changes needed.

Test plan

- `pnpm typecheck` passes
- `_guessContentType` tests
- `result_summary: false` default
- `attachments.enabled: true`

🤖 Generated with Claude Code
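The extension-based content-type fallback described in the summary can be sketched roughly as below. The map contents and the helper shape are illustrative assumptions; the PR's actual `_guessContentType` may differ.

```typescript
import path from "node:path";

// Minimal extension -> MIME map; the real implementation may cover more types.
const MIME_BY_EXTENSION: Record<string, string> = {
  ".pdf": "application/pdf",
  ".png": "image/png",
  ".jpg": "image/jpeg",
  ".jpeg": "image/jpeg",
  ".txt": "text/plain",
};

// Discord may report null (common in DMs) or the generic
// application/octet-stream for an attachment's content type. Rather than
// dropping the attachment as "unsupported", guess from the file extension
// first and only then fall back to octet-stream.
function guessContentType(filename: string, reported: string | null): string {
  if (reported && reported !== "application/octet-stream") return reported;
  const ext = path.extname(filename).toLowerCase();
  return MIME_BY_EXTENSION[ext] ?? "application/octet-stream";
}
```

With this shape, a `report.pdf` attachment whose `contentType` arrives as `null` resolves to `application/pdf` and is categorized as a supported document instead of being silently dropped.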
Summary by CodeRabbit
New Features
Bug Fixes
Documentation