Summary
The commands/review.md and skills/code-review/SKILL.md prompts are structured as reference manuals rather than action directives. When a Claude Code subagent receives these prompts (especially via context: fork), it narrates the review process ("First I would check the version, then I would run…") instead of executing coderabbit review --agent. Users experience this as "boilerplate text" or the skill "not working."
Root cause
Both files were expanded (review.md to ~184 lines, SKILL.md to ~261 lines) with procedural phases, error classifier tables, CLI reference tables, auth option tables, and verbose guard logic. LLM-as-agent prompts have a well-documented failure mode: when a prompt reads more like documentation than a command, the model describes the steps instead of executing them. The contrast with commit-commands/commit.md (~17 lines, pure imperative) is stark — that command works reliably because it tells the agent what to do, not everything it could theoretically know.
Affected files
commands/review.md — the /coderabbit:review command prompt
skills/code-review/SKILL.md — the coderabbit:code-review skill prompt
What works (local fix)
Rewriting both files to concise imperative style fixes the problem completely:
review.md: 184 → 51 lines. Three numbered steps with an explicit "Execute the steps below — do not describe them" directive. Context checks preserved. Reference tables removed.
SKILL.md: 261 → 58 lines. Same imperative structure. CLI reference, auth options, and error classifier tables removed (the CLI handles all of that internally). Autonomous fix-review cycle and security notes kept.
After the rewrite, coderabbit review --agent runs correctly and returns real findings with severity, filenames, and codegenInstructions.
Suggested fix
The key principles that make agent prompts reliable:
- Lead with the action. The first non-frontmatter content should be "do X" — not "X is a tool that can…"
- Strip reference material the agent doesn't need. If the CLI handles auth validation, error messages, and flag parsing internally, the prompt doesn't need tables documenting those. The agent just needs to know which flags to forward.
- Add an explicit execution directive. "Execute the steps below — do not describe them" is surprisingly effective at preventing narration.
- Keep total length under ~60 lines. Past that threshold, forked subagents increasingly drift into narration.
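Applied together, these principles produce a prompt skeleton along these lines (a hypothetical sketch, not the actual rewritten review.md — the step wording and the context check are illustrative; only `coderabbit review --agent` comes from the real files):

```markdown
Execute the steps below — do not describe them.

1. Confirm the working directory is a git repository with changes to review;
   if not, tell the user there is nothing to review and stop.
2. Run `coderabbit review --agent`, forwarding any user-supplied flags verbatim.
3. Report each finding with its severity and filename, and apply any
   codegenInstructions the CLI returns.
```

The structural point is that the imperative directive comes first and the steps are things to do, not things to know — everything the CLI already handles internally (auth, error messages, flag parsing) stays out of the prompt.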
Happy to submit a PR with the rewritten files if that's useful.
Environment
- Claude Code (CLI)
- CodeRabbit plugin v1.1.0
- CodeRabbit CLI v0.4.1
- macOS (darwin), zsh