Conversation
Document how to use custom providers via GitHub repos, NPM packages, or local paths. Covers the ProviderInterface, supported source formats, caching behavior, and a full example implementation. Also updates the API reference to mention custom providers under providerStrategy. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Restructure providers into dedicated section with overview, custom providers, community providers (with GitHub edit link), and GitHub/GitLab integration pages
- Rewrite API reference with proper tables, all missing parameters (orchestratorRepoName, githubOwner, allowDirtyBuild, postBuildSteps, preBuildSteps, customHookFiles, customCommandHooks, useCleanupCron), and environment variables (AWS_FORCE_PROVIDER, PURGE_REMOTE_BUILDER_CACHE, ORCHESTRATOR_AWS_STACK_WAIT_TIME, GIT_PRIVATE_TOKEN)
- Document premade rclone hooks and Steam deployment hooks
- Add S3/rclone workspace locking documentation
- Tighten language across all pages for clarity
- Add ASCII diagrams to introduction, caching, logging, and config override
- Add tasteful emoji to section headers
- Rename "Game-CI vs Orchestrator" to "Standard Game-CI vs Orchestrator Mode"
- Remove outdated Deno section from command line docs
- Improve examples with proper tables, workflow snippets, and cross-links
Content already merged into the providers section at 07-providers/05-gitlab-integration.mdx
…e cross-linking
- Promote Providers from Advanced Topics to top-level section (05-providers/)
with dedicated pages for AWS, Kubernetes, Local Docker, Local, Custom, Community,
GitHub Integration, and GitLab Integration
- Move Secrets out of Advanced Topics to top-level (06-secrets.mdx)
- Rename custom-hooks to hooks throughout
- Remove all WIP/preview/release-status notices (project is stable)
- Fix floating {/* */} comment symbols in community-providers (use code block template)
- Update ASCII diagram in Game-CI vs Orchestrator to show CLI/any CI dispatch
- Add sidebar_label frontmatter for Game-CI vs Orchestrator page
- Add comprehensive cross-linking across all orchestrator docs:
- Introduction links to providers, hooks, getting started, platforms
- API Reference links to caching, hooks, providers, configuration override
- Provider pages link to caching, hooks, API Reference sections
- Getting Started links to provider setup guides and secrets
- GitHub Integration links to API Reference for parameters and modes
- Advanced Topics pages cross-reference each other and API Reference
- Fix all broken links from old directory structure
- Delete old directories (examples/github-examples, advanced-topics/providers)
- Run Prettier on all files
- Merge configuration-override.mdx content into secrets.mdx as a section
- Delete standalone configuration-override page
- Rename "Configuration Override" to "Pull Secrets" in API reference
- Update all cross-links (command-line, GitLab integration, API reference)
- Fix logging: "Orchestrator job (Fargate task)" instead of "Fargate tasks"
Link to container-hooks page instead of the hooks directory.
Complete workflow examples for every provider and common patterns:

- Minimal workflow, AWS Fargate, Kubernetes, Local Docker
- Async mode with GitHub Checks
- Scheduled garbage collection
- Multi-platform matrix builds
- Retained workspaces for faster rebuilds
- Container hooks (S3 upload + Steam deploy)
- Required secrets tables and cross-links to all relevant docs
Equalize box widths and arrow spacing for consistent rendering.
…ned workspaces

- Custom Providers: plugin loading flow (source → fetch → ProviderInterface)
- GitHub Integration: async mode lifecycle (dispatch → return → Check updates)
- Retained Workspaces: workspace locking across concurrent builds
…roviders overview

- Container Hooks: build pipeline with pre/post hook execution points
- Garbage Collection: resource lifecycle (normal cleanup vs stale → GC)
- AWS: CloudFormation resource stack (ECS, S3, CloudWatch, Kinesis)
- Providers Overview: decision flowchart for choosing a provider
Shows each provider side-by-side with its key trait instead of a decision flowchart.
Set Advanced Topics position to 7.0 so it renders after Secrets (position 6 from filename).
Update title and all cross-references across container hooks, command hooks, and GitHub Actions examples.
- Rename premade-container-jobs.mdx to built-in-hooks.mdx (fixes URL slug)
- Update all links from premade-container-jobs to built-in-hooks
- Rename "Pre-built Hooks" section header to "Built-In Hooks"
- Move Custom Job from hooks/ to advanced-topics/ (it's not a hook)
- Rename "Custom Jobs" to "Custom Job" (singular)
- Update API reference link to advanced-topics/custom-job
- Fix numbering conflicts in advanced-topics
- Fix retained workspace diagram alignment (remove emoji, align box walls)
Remove emoji characters from diagrams (variable width across platforms makes alignment impossible). Fix box wall alignment, arrow connections, and consistent spacing in all 11 diagrams:

- Introduction (architecture overview)
- Caching (standard vs retained)
- Providers overview (4-across comparison)
- Container hooks (build pipeline)
- GitHub integration (async mode lifecycle)
- AWS (CloudFormation resource stack)
- Secrets (pull flow)
- Logging (log pipeline)
- Garbage collection (resource lifecycle)
- Custom providers (plugin loading)
- Retained workspaces (already fixed)
Documents how to route builds across multiple providers using GitHub Actions scripting: platform-based routing, branch-based routing, runner availability fallback, weighted distribution, and async mode integration.
- Storage page: documents project files, build output, caches, S3 and rclone backends, LZ4 compression, workspace locking, large packages, and container file system layout
- Architecture page: describes build lifecycle, core components, provider system, workflow composition, hook system, configuration resolution, remote client, CLI modes, and source code map
- Caching page: add Build Caching section explaining automatic build output caching based on cache key
The box previously said "GitHub Actions" which contradicted the "Your Machine / CI" header. Now lists GitHub Actions, GitLab CI, CLI, etc. to reflect that Orchestrator works from any entry point.
…, hooks

Adds a new advanced topics page documenting orchestrator build services:

- Submodule profiles (YAML, glob patterns, variant overlays)
- Local build caching (Library + LFS filesystem cache)
- Custom LFS transfer agents (elastic-git-storage, etc.)
- Git hooks (lefthook/husky detection, skip lists)

Related: game-ci/unity-builder#777
Covers storage type comparison tables, inputs, examples, and cross-links to related providers. Both marked as experimental.
Adds a new page documenting the CLI provider protocol that lets users write orchestrator providers in any language (Go, Python, Rust, shell). Covers: invocation model, JSON stdin/stdout protocol, streaming output, subcommands with timeouts, shell example, CLI vs TypeScript comparison. Related: game-ci/unity-builder#777
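The shape of such a protocol can be sketched as a tiny shell provider: read one JSON request from stdin, dispatch on the subcommand name, emit one JSON response on stdout, and log to stderr. The specific field names (`success`, `result`, `error`) are illustrative assumptions, not the documented schema:

```shell
# Minimal sketch of a CLI provider handler. Field names are assumptions.
handle() {
  subcommand="$1"
  read -r request   # one JSON request object per invocation (unused in this sketch)
  case "$subcommand" in
    list-resources)
      echo '{"success": true, "result": []}'
      ;;
    run-task)
      # A real provider would launch the build here and stream logs to stderr.
      echo "starting task" >&2
      echo '{"success": true, "result": "done"}'
      ;;
    *)
      # Structured errors also go to stdout so the caller can parse them.
      echo '{"success": false, "error": "Unknown command: '"$subcommand"'"}'
      return 1
      ;;
  esac
}

# Example invocation: the orchestrator would pipe the request JSON in.
echo '{}' | handle list-resources
```

The key design point is the stream split: stdout carries exactly one parseable JSON response, while everything on stderr is free-form logging.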
Documents the new fallbackProviderStrategy, runnerCheckEnabled, runnerCheckLabels, and runnerCheckMinAvailable inputs. Adds comparison table for built-in vs manual fallback approaches.
Reframes page around intelligent provider routing with built-in API. Adds retry-on-alternate, provider init timeout, async mode integration, and decision table. Restructures manual scripting as secondary option.
…ting examples Add two new script-based routing patterns: dispatching to an alternate workflow when self-hosted runners are busy, and using reusable workflows for shared build config with dynamic provider routing. Updated the comparison table with the new patterns.
Expand the Secrets page with premade source documentation (AWS Secrets Manager, AWS Parameter Store, GCP Secret Manager, Azure Key Vault, env), custom commands, YAML definitions, and migration from legacy inputPullCommand. Covers all five cloud providers and the env source.
Documents hashicorp-vault (KV v2), hashicorp-vault-kv1 (KV v1), and vault (shorthand alias). Covers VAULT_ADDR, VAULT_TOKEN, and VAULT_MOUNT configuration with examples for both KV versions.
New Jobs page explains the build lifecycle, job types (build, test, custom editor method, custom job, async), pre/post build phases, and execution by provider. Gives users a conceptual overview before diving into advanced topics.

New LFS Agents page documents elastic-git-storage built-in support with auto-install, version pinning, multiple storage backends, and custom agent configuration.

Renamed api-reference from 04 to 05 to accommodate the new Jobs page.
Add documentation for two new orchestrator features:

- Test Workflow Engine: YAML-based test suite definitions, taxonomy filters, structured results
- Hot Runner Protocol: extensible runner registration, persistent editor providers
Remove npm/npx installation references. Add PowerShell install script for Windows. Fix install.sh URL to point to unity-builder repo where the scripts live. Add environment variable options and manual download section.
Actionable comments posted: 2
♻️ Duplicate comments (1)
docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx (1)
105-115: ⚠️ Potential issue | 🟠 Major — Document the enforced timeout limits.

Lines 108 and 113 still describe `run-task` and `watch-workflow` as unbounded, but provider authors need the real limits here or they will implement the wrong heartbeat/cancellation behavior.

Suggested doc fix:
```diff
 | Subcommand          | Purpose                           | Timeout  |
 | ------------------- | --------------------------------- | -------- |
 | `setup-workflow`    | Initialize infrastructure         | 300s     |
-| `run-task`          | Execute the build                 | No limit |
+| `run-task`          | Execute the build                 | 2h       |
 | `cleanup-workflow`  | Tear down infrastructure          | 300s     |
 | `garbage-collect`   | Remove old resources              | 300s     |
 | `list-resources`    | List active resources             | 300s     |
 | `list-workflow`     | List active workflows             | 300s     |
-| `watch-workflow`    | Watch a workflow until completion | No limit |
+| `watch-workflow`    | Watch a workflow until completion | 1h       |

-`run-task` and `watch-workflow` have no timeout because builds can run for hours.
+`run-task` is limited to 2 hours and `watch-workflow` to 1 hour.
```
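If limits like these exist, a provider author can also enforce a deadline on their own side. One generic way in shell is the coreutils `timeout` wrapper, which exits with status 124 on expiry; the 1-second deadline below is just for demonstration, and the JSON shape is an assumption:

```shell
# Sketch: bound a long-running subcommand with coreutils `timeout`
# (available on Linux; not installed by default on macOS).
run_with_deadline() {
  deadline="$1"; shift
  if timeout "$deadline" "$@"; then
    echo '{"success": true}'
  elif [ "$?" -eq 124 ]; then
    # 124 is timeout's reserved exit status for an expired deadline.
    echo '{"success": false, "error": "timed out after '"$deadline"'"}'
  else
    echo '{"success": false, "error": "command failed"}'
  fi
}

run_with_deadline 1 sleep 5   # hits the deadline
run_with_deadline 5 true      # completes in time
```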
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c6feafb0-15c5-4375-92ac-dd2e37e5a54f
📒 Files selected for processing (5)
- docs/03-github-orchestrator/04-jobs.mdx
- docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx
- docs/03-github-orchestrator/07-advanced-topics/10-build-services.mdx
- docs/03-github-orchestrator/08-cli/01-getting-started.mdx
- docs/03-github-orchestrator/08-cli/03-orchestrate-command.mdx
✅ Files skipped from review due to trivial changes (3)
- docs/03-github-orchestrator/07-advanced-topics/10-build-services.mdx
- docs/03-github-orchestrator/08-cli/03-orchestrate-command.mdx
- docs/03-github-orchestrator/04-jobs.mdx
🚧 Files skipped from review as they are similar to previous changes (1)
- docs/03-github-orchestrator/08-cli/01-getting-started.mdx
```shell
run-task)
  read request
  image=$(echo "$request" | jq -r '.params.image')
  commands=$(echo "$request" | jq -r '.params.commands')
  # Stream build output, then send JSON result
  docker run --rm "$image" /bin/sh -c "$commands" 2>&1
  echo '{"success": true, "result": "done"}'
  ;;
```
The run-task example reports failed builds as success.
If `docker run` exits non-zero, this snippet still prints `{"success": true}`. Copy-pasted providers would silently mark broken builds as successful. The exit code must be checked before reporting the result.
Suggested fix
```diff
 run-task)
   read request
   image=$(echo "$request" | jq -r '.params.image')
   commands=$(echo "$request" | jq -r '.params.commands')
   # Stream build output, then send JSON result
-  docker run --rm "$image" /bin/sh -c "$commands" 2>&1
-  echo '{"success": true, "result": "done"}'
+  if docker run --rm "$image" /bin/sh -c "$commands" 2>&1; then
+    echo '{"success": true, "result": "done"}'
+  else
+    echo '{"success": false, "error": "Container exited with a non-zero status"}'
+    exit 1
+  fi
   ;;
```
```shell
*)
  echo '{"success": false, "error": "Unknown command: '"$1"'"}' >&2
  exit 1
```
Send the terminal error payload to stdout.
According to the CLI Provider Protocol, all final JSON responses—including error responses—must be written to stdout. The protocol specifies that stderr is forwarded only to logs, not parsed for structured responses. The current implementation redirects the error JSON to stderr, which prevents the orchestrator from receiving the structured error response. All other command handlers in this file correctly send JSON responses to stdout.
Suggested fix
```diff
 *)
-  echo '{"success": false, "error": "Unknown command: '"$1"'"}' >&2
+  echo '{"success": false, "error": "Unknown command: '"$1"'"}'
   exit 1
```
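Why this matters is easiest to see from the caller's side of such a protocol: the caller captures stdout as the structured response and merely forwards stderr to logs. The two stand-in provider functions below are illustrations, not the real orchestrator code:

```shell
# The caller parses stdout only; stderr is forwarded to logs (discarded here).
# A provider that writes its JSON to stderr yields an empty response.
provider_ok()  { echo "some log line" >&2; echo '{"success": true}'; }
provider_bad() { echo '{"success": false, "error": "oops"}' >&2; }

response=$(provider_ok 2>/dev/null)
echo "parsed: $response"

response=$(provider_bad 2>/dev/null)   # JSON went to stderr: nothing to parse
echo "parsed: '$response'"
```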
Install scripts and releases live in game-ci/orchestrator, not unity-builder. Updated all install URLs accordingly.
Replace all 336 em-dash characters with regular dashes across 37 docs files. Convert remote-powershell ASCII box diagram to mermaid sequence diagram.
Actionable comments posted: 13
♻️ Duplicate comments (8)
docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx (1)
241-251: ⚠️ Potential issue | 🟠 Major — Count idle runners, not just online runners.

Line 246 counts any online runner as available, so this example will route to `local-docker` even when all self-hosted runners are busy, breaking the fallback behavior. Note that other examples in this same file (lines 315-316 and 441-442) correctly filter for idle runners using `.busy == false`. This inconsistency should be resolved by applying the same pattern here.

🔧 Proposed fix to match the correct pattern used elsewhere in this file:
```diff
-RUNNERS=$(gh api repos/${{ github.repository }}/actions/runners --jq '[.runners[] | select(.status == "online")] | length')
+RUNNERS=$(gh api repos/${{ github.repository }}/actions/runners --jq '[.runners[] | select(.status == "online" and .busy == false)] | length')
```
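The corrected filter can be checked locally against a canned payload without calling `gh` at all. The runner objects below are made up for the test, but the `status`/`busy` fields match the GitHub self-hosted runners API response shape:

```shell
# Verify the jq filter against sample data: only runners that are both
# online and not busy should be counted as available.
payload='{"runners":[
  {"name":"a","status":"online","busy":true},
  {"name":"b","status":"online","busy":false},
  {"name":"c","status":"offline","busy":false}
]}'

idle=$(echo "$payload" | jq '[.runners[] | select(.status == "online" and .busy == false)] | length')
echo "idle runners: $idle"   # counts only runner "b", so 1
```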
docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx (3)
105-115: ⚠️ Potential issue | 🟠 Major — Update the timeout table to match the implementation.

This still documents `run-task` and `watch-workflow` as unlimited. Provider authors will design the wrong heartbeat and retry behavior if the reference promises "No limit" for long-running commands.

Suggested change:
```diff
 | `setup-workflow`    | Initialize infrastructure         | 300s     |
-| `run-task`          | Execute the build                 | No limit |
+| `run-task`          | Execute the build                 | 2h       |
 | `cleanup-workflow`  | Tear down infrastructure          | 300s     |
 | `garbage-collect`   | Remove old resources              | 300s     |
 | `list-resources`    | List active resources             | 300s     |
 | `list-workflow`     | List active workflows             | 300s     |
-| `watch-workflow`    | Watch a workflow until completion | No limit |
+| `watch-workflow`    | Watch a workflow until completion | 1h       |

-`run-task` and `watch-workflow` have no timeout because builds can run for hours.
+`run-task` is limited to 2 hours and `watch-workflow` to 1 hour.
```
128-135: ⚠️ Potential issue | 🟠 Major — Propagate `docker run` failures in the result JSON.

The `run-task` branch always emits `success: true`, so a failed container still looks like a successful build to anyone copying this example.

Suggested change:
```diff
 run-task)
   read request
   image=$(echo "$request" | jq -r '.params.image')
   commands=$(echo "$request" | jq -r '.params.commands')
   # Stream build output, then send JSON result
-  docker run --rm "$image" /bin/sh -c "$commands" 2>&1
-  echo '{"success": true, "result": "done"}'
+  if docker run --rm "$image" /bin/sh -c "$commands" 2>&1; then
+    echo '{"success": true, "result": "done"}'
+  else
+    echo '{"success": false, "error": "Container exited with a non-zero status"}'
+    exit 1
+  fi
   ;;
```
157-159: ⚠️ Potential issue | 🟠 Major — Write the structured error response to stdout.
The protocol says stdout carries the final JSON response; sending the unknown-command payload to stderr makes it unavailable to the orchestrator parser.
Suggested change
```diff
 *)
-  echo '{"success": false, "error": "Unknown command: '"$1"'"}' >&2
+  echo '{"success": false, "error": "Unknown command: '"$1"'"}'
   exit 1
```
docs/03-github-orchestrator/03-examples/02-github-actions.mdx (2)
135-146: ⚠️ Potential issue | 🟠 Major — Either remove `aws-s3-upload-build` here or document AWS auth for it.

This Kubernetes example enables the S3 upload hook but never shows any AWS credential setup, so the copied workflow is incomplete.
Suggested change
```diff
 - uses: game-ci/unity-builder@v4
   id: build
   with:
     providerStrategy: k8s
     targetPlatform: ${{ matrix.targetPlatform }}
     gitPrivateToken: ${{ secrets.GITHUB_TOKEN }}
     unityVersion: 2022.3.0f1
     kubeVolumeSize: 10Gi
     containerCpu: 1024
     containerMemory: 4096
-    containerHookFiles: aws-s3-upload-build
     githubCheck: true
```

If S3 upload is intentional here, add the AWS auth step explicitly instead of removing the hook.
33-38: ⚠️ Potential issue | 🟠 Major — Prefer GitHub OIDC in the canonical AWS workflows.

These examples still normalize long-lived `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` secrets. The official action recommends OIDC with `role-to-assume`, which avoids storing static AWS keys in repository secrets and is the safer baseline for GitHub Actions docs. (github.com)

Suggested change:

```diff
+permissions:
+  id-token: write
+  contents: read
+
 jobs:
   build:
@@
       - name: Configure AWS Credentials
         uses: aws-actions/configure-aws-credentials@v4
         with:
-          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
-          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+          role-to-assume: arn:aws:iam::<account-id>:role/<github-actions-role>
           aws-region: eu-west-2
```

The required-secrets table should then describe the IAM role/OIDC setup instead of static key material.
Also applies to: 76-81, 98-104, 228-233, 275-280
docs/03-github-orchestrator/05-providers/10-gcp-cloud-run.mdx (1)
68-70: ⚠️ Potential issue | 🟠 Major — Make the documented default valid for `in-memory`.

`gcpDiskSizeGb` is documented as the in-memory volume size with a max of 32, but the default shown here is `100`. A reader copying the defaults starts from an impossible value.

Suggested change:
```diff
-| `gcpDiskSizeGb` | `100` | In-memory volume size (for `in-memory`, max 32) |
+| `gcpDiskSizeGb` | `32`  | In-memory volume size (for `in-memory`, max 32) |
```
docs/03-github-orchestrator/08-cli/01-getting-started.mdx (1)
80-101: ⚠️ Potential issue | 🟠 Major — Re-verify the "first build" flow against current CLI behavior.

This section still assumes `game-ci build` performs a local Docker/native build, but the standalone CLI behavior previously flagged in this PR rejected `provider-strategy=local`. If that implementation is still current, this onboarding flow is materially wrong and should either show `--provider-strategy aws|k8s` or explain that local builds are not supported yet.

Suggested change:
game-ci build \ + --provider-strategy aws \ --target-platform StandaloneLinux64-The CLI will: - -1. Pull the matching Unity Docker image -2. Mount your project into a container -3. Run the Unity build -4. Output build artifacts to the `build/` directory +The CLI will: + +1. Resolve your local project settings +2. Dispatch the build through the selected orchestrator provider +3. Stream workflow status back to the terminal +4. Write artifacts when the remote workflow completes🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/08-cli/01-getting-started.mdx` around lines 80 - 101, Re-check current CLI behavior for "game-ci build" and update this onboarding section to reflect whether local Docker/native builds are supported; if local/provider-strategy=local is unsupported, change the example and text to show a supported provider (e.g., add a --provider-strategy aws or --provider-strategy k8s example) or add a clear note that local builds are not yet supported; ensure references to auto-detection of Unity version and the example commands (the `game-ci build` invocation and the `--unity-version`/`--target-platform` flags) are adjusted accordingly so the documented "first build" flow matches actual behavior.
🧹 Nitpick comments (1)
docs/03-github-orchestrator/05-providers/06-remote-powershell.mdx (1)
86-107: Workflow description is WinRM-specific but SSH transport is also supported.

The mermaid diagram (step 1: "Test connection (Test-WSMan)") and the explanation at line 99 describe WinRM-only behavior. For SSH transport, `Test-WSMan` is not applicable. Consider noting that this diagram represents the WinRM flow, or adding a brief note about how SSH connectivity verification differs.

📝 Suggested clarification

```diff
 1. **Setup** - The orchestrator tests connectivity to the remote host using `Test-WSMan`.
+   (For SSH transport, standard SSH connectivity is verified instead.)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/05-providers/06-remote-powershell.mdx` around lines 86 - 107, The diagram and description currently present a WinRM-only flow (uses Test-WSMan and PS-based credential creation) but claim SSH is supported; update the text around the mermaid block to clarify that the diagram illustrates the WinRM flow specifically and add a short note describing SSH differences: state that Test-WSMan is not used for SSH, Invoke-Command should be called with -HostName for SSH transport, and that credential handling differs from the PScredential created from remotePowershellCredential; mention that SSH connectivity verification (e.g., SSH key or password-based host reachability) must occur instead of Test-WSMan.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/03-github-orchestrator/01-introduction.mdx`:
- Around line 29-31: The admonition starting with the literal string ":::info
Standalone Package" is malformed because its content and the closing ":::"
are on the same line; fix the Docusaurus admonition by placing the content on
the following line(s) and ensuring the closing ":::"
is on its own line so the block looks like an opening ":::info ..." line, the
content lines, then a standalone ":::". Update the block that begins with
":::info Standalone Package" accordingly.
In `@docs/03-github-orchestrator/05-providers/10-gcp-cloud-run.mdx`:
- Around line 6-7: Replace the inline admonition syntax ":::caution Experimental
This provider is experimental. APIs and behavior may change between releases.
:::" with a proper block admonition: put the label ":::caution" on its own
opening line, place the title "Experimental" on the next line (or as first line
of content), then the explanatory text on following lines, and finally close
with a standalone ":::". Locate the current ":::caution ... :::" occurrence and
ensure the closing delimiter is on its own line and the content is separated
from the label to conform to Docusaurus/MDX block admonition rules.
In `@docs/03-github-orchestrator/05-providers/11-azure-aci.mdx`:
- Around line 7-8: The inline single-line admonition ":::caution Experimental
This provider is experimental. APIs and behavior may change between releases.
:::" is invalid Docusaurus syntax; replace it with a block-level admonition
using opening and closing fences and blank lines: use an opening fence like
":::caution" on its own line, a blank line, the body text "Experimental This
provider is experimental. APIs and behavior may change between releases." on its
own line(s), another blank line, and the closing fence ":::". Ensure there are
empty lines between the fences and the body so Docusaurus will render the
admonition correctly.
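For reference, a sketch of the corrected block shape both of these comments describe (using the azure-aci wording): the opening line may carry a title, but the body and the closing `:::` each need their own lines.

```md
:::caution Experimental

This provider is experimental. APIs and behavior may change between releases.

:::
```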
In `@docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx`:
- Around line 121-155: The example handler uses `read request` in each case
branch (setup-workflow, run-task, cleanup-workflow, garbage-collect,
list-resources, list-workflow, watch-workflow) which only reads up to the first
newline and will truncate multiline JSON; replace each `read request` with
`request=$(cat)` so the full stdin payload (potentially pretty-printed JSON) is
captured before using jq or docker run, ensuring jq -r '.params.*' and other
JSON parsing operate on the complete object.
In `@docs/03-github-orchestrator/06-secrets.mdx`:
- Around line 43-53: Update the example under secretSource: aws-secrets-manager
to show short-lived OIDC/assume-role auth as the primary pattern (e.g.,
reference GitHub OIDC/configure-aws-credentials + role-to-assume and
AWS_DEFAULT_REGION) and remove the long-lived
AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY as the main copy-paste; instead move
those two variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) into a clearly
labeled fallback note explaining static keys are unsupported by default and
should be avoided. Keep secretSource: aws-secrets-manager and AWS_DEFAULT_REGION
in the example so readers know which fields remain required.
- Around line 204-205: Replace the ambiguous sentence "Secrets are created as
native Kubernetes Secret objects and mounted to job containers as environment
variables." with precise wording that explains both mechanisms: state that
secrets are created as Kubernetes Secret objects and are exposed to job
containers either via environment variable references (envFrom/env) or mounted
as files in a volume, and keep the note that the orchestrator handles creation
and cleanup automatically; update the exact sentence in
docs/03-github-orchestrator/06-secrets.mdx where the current phrasing appears.
- Around line 186-195: Call out that the legacy shortcut names (used with
inputPullCommand, e.g., aws-secret-manager and gcp-secret-manager) are different
from the new secretSource identifiers (e.g., aws-secrets-manager); update the
docs around the YAML example to add a clear note warning that inputPullCommand:
aws-secret-manager is NOT the same as secretSource: aws-secrets-manager and that
copy-pasting between legacy and new configs will fail unless the identifier is
converted, referencing the exact symbols inputPullCommand, aws-secret-manager,
secretSource, and aws-secrets-manager so readers can spot the mismatch quickly.
In `@docs/03-github-orchestrator/07-advanced-topics/01-caching.mdx`:
- Line 66: Typo: add the missing period after "etc" in the table cell string
"Flexible cloud storage via [rclone](https://rclone.org). Supports 70+ backends
(Google Cloud, Azure Blob, Backblaze, SFTP, etc)"—update that string (the row
starting with "Rclone | `rclone` | ...") to end the parenthetical with "etc." so
it reads "(Google Cloud, Azure Blob, Backblaze, SFTP, etc.)."
In `@docs/03-github-orchestrator/07-advanced-topics/11-test-workflow-engine.mdx`:
- Around line 60-68: The Run Fields table is missing the builtClientPath field
used in examples; add a new table row documenting `builtClientPath`
(description: Path to the built client used when `builtClient: true`) alongside
the other fields in the Run Fields table so readers see the example's
`builtClientPath: ./Builds/StandaloneLinux64` is supported; update the table
near the existing `builtClient`/`filters`/`timeout` entries and ensure the field
name is formatted as `builtClientPath` to match the example.
In `@docs/03-github-orchestrator/07-advanced-topics/17-build-reliability.mdx`:
- Around line 51-56: The sentence "The checkout action completes the clone on
the next step" is ambiguous; update the wording to explicitly state that the
orchestrator's internal checkout/sync phase (not a later GitHub Actions workflow
step) will re-populate the repository immediately after re-initialization so
readers using the unity-builder inputs understand no extra post-build checkout
is required—replace the phrase with something like "the orchestrator's internal
checkout/sync phase will re-populate the repository immediately after
re-initialization" and ensure references to "checkout action" are clarified to
mean the orchestrator's own checkout step.
- Around line 7-10: The intro currently states "All features are opt-in via
action inputs" but the "GIT_CONFIG_NOSYSTEM=1" section describes automatic Git
hardening; update the intro paragraph to say that most reliability features are
opt-in via action inputs while explicitly calling out that Git hardening
(referenced by the GIT_CONFIG_NOSYSTEM=1 section) is applied automatically as an
exception, and apply the same clarification to the later paragraph covering
lines 169-174 so the document no longer contradicts itself.
- Around line 135-142: Add a note stating that the archival step may move (not
just copy) the original build output, which can remove the usual build output
path on same-device runners; explicitly tell readers that downstream steps
should read artifacts from the archive location (use the
{archivePath}/{platform}/build-{timestamp} pattern) instead of the original
build output path when archival is enabled, and mention that cross-device moves
fall back to copying so behavior differs by runner.
In `@docs/03-github-orchestrator/08-cli/02-build-command.mdx`:
- Around line 116-126: Update the Provider Strategy table in the build command
docs to include the missing `local-docker` option: modify the description for
`--provider-strategy` (used by the `build` command) to list allowed values as
`local`, `local-docker`, `k8s`, or `aws` and add a short note that
`local-docker` is a built-in strategy, keeping the existing reference to
[Orchestrate Command] for cloud-specific options to ensure parity with the
orchestrate command docs.
---
Duplicate comments:
In `@docs/03-github-orchestrator/03-examples/02-github-actions.mdx`:
- Around line 135-146: The workflow enables the S3 upload hook via
containerHookFiles: aws-s3-upload-build but provides no AWS credentials; either
remove the aws-s3-upload-build entry or add an explicit AWS authentication step
before the build step (e.g., configure AWS credentials/role using an actions/aws
or custom step) so the aws-s3-upload-build hook can access S3; update the docs
to show the added AWS auth step and reference containerHookFiles and
aws-s3-upload-build so readers know where the credentials are required.
- Around line 33-38: Replace the long‑lived AWS secret usage in the "Configure
AWS Credentials" step that uses aws-actions/configure-aws-credentials@v4 and the
keys aws-access-key-id/aws-secret-access-key with the recommended GitHub OIDC
settings: remove the static secret inputs and add role-to-assume (ARN of the IAM
role), role-session-name (optional), and aws-region; update the required-secrets
table/output docs to describe the IAM role and GitHub OIDC trust configuration
(instead of static key material). Also apply the same change to the other
occurrences of the "Configure AWS Credentials" step in this document so all
examples use OIDC/role-to-assume.
In `@docs/03-github-orchestrator/05-providers/10-gcp-cloud-run.mdx`:
- Around line 68-70: The table's default for gcpDiskSizeGb is invalid for
in-memory (currently `100` while max is 32); update the default value shown for
`gcpDiskSizeGb` to a valid in-memory size (e.g., `32`) in the
docs/03-github-orchestrator/05-providers/10-gcp-cloud-run.mdx table so the
documented default is achievable when using `in-memory` volumes and ensure the
cell still describes it as the in-memory volume size with max 32.
In `@docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx`:
- Around line 105-115: The timeout table incorrectly documents `run-task` and
`watch-workflow` as "No limit"; update the table so the Timeout column matches
the implementation (replace "No limit" for `run-task` and `watch-workflow` with
the actual timeout values used by the codebase), and ensure the entries for
`run-task`, `watch-workflow`, and any related subcommands like `setup-workflow`,
`cleanup-workflow`, `garbage-collect`, `list-resources`, and `list-workflow`
precisely reflect the implementation to avoid misleading provider authors about
heartbeat/retry behavior.
- Around line 128-135: In the run-task branch, the script always echoes
'{"success": true ...}' regardless of the docker run outcome; modify the block
in the run-task handler to capture the docker run exit code and output (e.g.,
run the docker run and save its stdout/stderr and $? into variables), then emit
a JSON result where "success" is true only when the exit code is zero and
include the exit code and output in the JSON result; update the code around the
docker run invocation and the final echo that currently emits '{"success": true,
"result": "done"}' so it conditionally reflects the actual docker run result.
- Around line 157-159: The catch-all case (*) currently writes the
unknown-command JSON to stderr using the echo ... >&2 redirection, but the
protocol requires the final JSON be emitted on stdout; change the fallback in
the "*" case so the JSON error is echoed to stdout (remove the stderr
redirection) and then exit 1, ensuring the orchestrator receives the structured
response on stdout.
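A minimal sketch of the corrected fallback: the structured error goes to stdout (no stderr redirection) while the failure is signalled via the exit status. It is wrapped in a function here so it can be exercised without killing the shell:

```shell
#!/bin/sh
handle() {
  case "$1" in
    setup-workflow) echo '{"success": true, "result": "ready"}' ;;
    # Fallback: JSON on stdout (not >&2), then a nonzero status.
    *) echo "{\"success\": false, \"error\": \"unknown command: $1\"}"; return 1 ;;
  esac
}

result=$(handle bogus-command)
status=$?
echo "$result"
echo "exit status: $status"
```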
In `@docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx`:
- Around line 241-251: The RUNNERS count logic uses the gh api call assigned to
the RUNNERS variable and incorrectly counts any online runners; update the jq
filter in the RUNNERS assignment (the gh api repos/${{ github.repository
}}/actions/runners --jq '...') to select idle runners by using select(.status ==
"online" and .busy == false) or the equivalent pattern used elsewhere
(select(.busy == false)) so the script only treats truly idle self-hosted
runners as available before writing provider=local-docker to GITHUB_OUTPUT.
In `@docs/03-github-orchestrator/08-cli/01-getting-started.mdx`:
- Around line 80-101: Re-check current CLI behavior for "game-ci build" and
update this onboarding section to reflect whether local Docker/native builds are
supported; if local/provider-strategy=local is unsupported, change the example
and text to show a supported provider (e.g., add a --provider-strategy aws or
--provider-strategy k8s example) or add a clear note that local builds are not
yet supported; ensure references to auto-detection of Unity version and the
example commands (the `game-ci build` invocation and the
`--unity-version`/`--target-platform` flags) are adjusted accordingly so the
documented "first build" flow matches actual behavior.
---
Nitpick comments:
In `@docs/03-github-orchestrator/05-providers/06-remote-powershell.mdx`:
- Around line 86-107: The diagram and description currently present a WinRM-only
flow (uses Test-WSMan and PS-based credential creation) but claim SSH is
supported; update the text around the mermaid block to clarify that the diagram
illustrates the WinRM flow specifically and add a short note describing SSH
differences: state that Test-WSMan is not used for SSH, Invoke-Command should be
called with -HostName for SSH transport, and that credential handling differs
from the PScredential created from remotePowershellCredential; mention that SSH
connectivity verification (e.g., SSH key or password-based host reachability)
must occur instead of Test-WSMan.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e06775f6-35ff-4b40-921d-a64ebff9edd8
📒 Files selected for processing (37)
- docs/03-github-orchestrator/01-introduction.mdx
- docs/03-github-orchestrator/03-examples/02-github-actions.mdx
- docs/03-github-orchestrator/03-examples/03-aws.mdx
- docs/03-github-orchestrator/03-examples/04-kubernetes.mdx
- docs/03-github-orchestrator/04-jobs.mdx
- docs/03-github-orchestrator/05-api-reference.mdx
- docs/03-github-orchestrator/05-providers/02-aws.mdx
- docs/03-github-orchestrator/05-providers/03-kubernetes.mdx
- docs/03-github-orchestrator/05-providers/04-github-actions-dispatch.mdx
- docs/03-github-orchestrator/05-providers/05-gitlab-ci-dispatch.mdx
- docs/03-github-orchestrator/05-providers/05-local.mdx
- docs/03-github-orchestrator/05-providers/06-custom-providers.mdx
- docs/03-github-orchestrator/05-providers/06-remote-powershell.mdx
- docs/03-github-orchestrator/05-providers/07-ansible.mdx
- docs/03-github-orchestrator/05-providers/09-gitlab-integration.mdx
- docs/03-github-orchestrator/05-providers/10-gcp-cloud-run.mdx
- docs/03-github-orchestrator/05-providers/11-azure-aci.mdx
- docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx
- docs/03-github-orchestrator/06-secrets.mdx
- docs/03-github-orchestrator/07-advanced-topics/01-caching.mdx
- docs/03-github-orchestrator/07-advanced-topics/02-retained-workspace.mdx
- docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx
- docs/03-github-orchestrator/07-advanced-topics/08-storage.mdx
- docs/03-github-orchestrator/07-advanced-topics/09-architecture.mdx
- docs/03-github-orchestrator/07-advanced-topics/10-build-services.mdx
- docs/03-github-orchestrator/07-advanced-topics/10-lfs-agents.mdx
- docs/03-github-orchestrator/07-advanced-topics/11-test-workflow-engine.mdx
- docs/03-github-orchestrator/07-advanced-topics/12-hot-runner-protocol.mdx
- docs/03-github-orchestrator/07-advanced-topics/13-build-output-system.mdx
- docs/03-github-orchestrator/07-advanced-topics/14-incremental-sync-protocol.mdx
- docs/03-github-orchestrator/07-advanced-topics/15-massive-projects.mdx
- docs/03-github-orchestrator/07-advanced-topics/16-monorepo-support.mdx
- docs/03-github-orchestrator/07-advanced-topics/17-build-reliability.mdx
- docs/03-github-orchestrator/08-cli/01-getting-started.mdx
- docs/03-github-orchestrator/08-cli/02-build-command.mdx
- docs/03-github-orchestrator/08-cli/03-orchestrate-command.mdx
- docs/03-github-orchestrator/08-cli/04-other-commands.mdx
✅ Files skipped from review due to trivial changes (6)
- docs/03-github-orchestrator/07-advanced-topics/15-massive-projects.mdx
- docs/03-github-orchestrator/07-advanced-topics/10-build-services.mdx
- docs/03-github-orchestrator/05-providers/09-gitlab-integration.mdx
- docs/03-github-orchestrator/03-examples/04-kubernetes.mdx
- docs/03-github-orchestrator/04-jobs.mdx
- docs/03-github-orchestrator/07-advanced-topics/14-incremental-sync-protocol.mdx
🚧 Files skipped from review as they are similar to previous changes (15)
- docs/03-github-orchestrator/05-providers/05-gitlab-ci-dispatch.mdx
- docs/03-github-orchestrator/07-advanced-topics/09-architecture.mdx
- docs/03-github-orchestrator/05-providers/05-local.mdx
- docs/03-github-orchestrator/05-providers/02-aws.mdx
- docs/03-github-orchestrator/05-api-reference.mdx
- docs/03-github-orchestrator/07-advanced-topics/08-storage.mdx
- docs/03-github-orchestrator/05-providers/03-kubernetes.mdx
- docs/03-github-orchestrator/05-providers/04-github-actions-dispatch.mdx
- docs/03-github-orchestrator/08-cli/03-orchestrate-command.mdx
- docs/03-github-orchestrator/08-cli/04-other-commands.mdx
- docs/03-github-orchestrator/03-examples/03-aws.mdx
- docs/03-github-orchestrator/05-providers/07-ansible.mdx
- docs/03-github-orchestrator/05-providers/06-custom-providers.mdx
- docs/03-github-orchestrator/07-advanced-topics/16-monorepo-support.mdx
- docs/03-github-orchestrator/07-advanced-topics/10-lfs-agents.mdx
```bash
#!/bin/bash
case "$1" in
  setup-workflow)
    read request
    echo '{"success": true, "result": "ready"}'
    ;;
  run-task)
    read request
    image=$(echo "$request" | jq -r '.params.image')
    commands=$(echo "$request" | jq -r '.params.commands')
    # Stream build output, then send JSON result
    docker run --rm "$image" /bin/sh -c "$commands" 2>&1
    echo '{"success": true, "result": "done"}'
    ;;
  cleanup-workflow)
    read request
    echo '{"success": true, "result": "cleaned"}'
    ;;
  garbage-collect)
    read request
    docker system prune -f >&2
    echo '{"success": true, "result": "pruned"}'
    ;;
  list-resources)
    read request
    echo '{"success": true, "result": []}'
    ;;
  list-workflow)
    read request
    echo '{"success": true, "result": []}'
    ;;
  watch-workflow)
    read request
    echo '{"success": true, "result": ""}'
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, locate and check the file exists
ls -la docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx 2>/dev/null && echo "File found" || echo "File not found"

# Check the file size to understand scope
wc -l docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx 2>/dev/null
```

Repository: game-ci/documentation
Length of output: 268

🏁 Script executed:

```shell
# Read the full file to understand the protocol documentation
cat -n docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx
```

Repository: game-ci/documentation
Length of output: 7203
Replace `read request` with `request=$(cat)` to handle multiline JSON payloads.

The protocol specifies a single JSON object on stdin with no requirement for single-line formatting. The documentation example shows pretty-printed JSON spanning multiple lines. The builtin `read` command stops at the first newline, so multiline JSON input would be truncated to just `{`, breaking subsequent `jq` parsing. Use `request=$(cat)` in all seven case branches to read the complete stdin payload.
Suggested change

```diff
 setup-workflow)
-  read request
+  request=$(cat)
   echo '{"success": true, "result": "ready"}'
   ;;
 run-task)
-  read request
+  request=$(cat)
   image=$(echo "$request" | jq -r '.params.image')
   commands=$(echo "$request" | jq -r '.params.commands')
   # Stream build output, then send JSON result
   docker run --rm "$image" /bin/sh -c "$commands" 2>&1
   echo '{"success": true, "result": "done"}'
   ;;
 cleanup-workflow)
-  read request
+  request=$(cat)
   echo '{"success": true, "result": "cleaned"}'
   ;;
 garbage-collect)
-  read request
+  request=$(cat)
   docker system prune -f >&2
   echo '{"success": true, "result": "pruned"}'
   ;;
 list-resources)
-  read request
+  request=$(cat)
   echo '{"success": true, "result": []}'
   ;;
 list-workflow)
-  read request
+  request=$(cat)
   echo '{"success": true, "result": []}'
   ;;
 watch-workflow)
-  read request
+  request=$(cat)
   echo '{"success": true, "result": ""}'
   ;;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx` around
lines 121 - 155, The example handler uses `read request` in each case branch
(setup-workflow, run-task, cleanup-workflow, garbage-collect, list-resources,
list-workflow, watch-workflow) which only reads up to the first newline and will
truncate multiline JSON; replace each `read request` with `request=$(cat)` so
the full stdin payload (potentially pretty-printed JSON) is captured before
using jq or docker run, ensuring jq -r '.params.*' and other JSON parsing
operate on the complete object.
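The truncation is easy to reproduce: `read` consumes only up to the first newline, while `$(cat)` drains all of stdin:

```shell
#!/bin/sh
payload='{
  "params": { "image": "unityci/editor" }
}'

# `read` sees only the first line of the pretty-printed JSON.
first=$(printf '%s\n' "$payload" | { read -r line; printf '%s' "$line"; })

# `$(cat)` captures the whole payload.
full=$(printf '%s\n' "$payload" | cat)

printf 'read captured: %s\n' "$first"
printf 'cat captured %s characters\n' "${#full}"
```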
Fetches secrets using `aws secretsmanager get-secret-value`. Your build environment needs AWS credentials configured (e.g., via `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, or an IAM role).

```yaml
env:
  pullInputList: UNITY_LICENSE,UNITY_SERIAL
  secretSource: aws-secrets-manager
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: us-east-1
```
Prefer short-lived AWS auth in the primary example.
The paragraph mentions IAM roles, but the only copy-paste example still normalizes long-lived access keys in GitHub secrets. Please make OIDC / assumed-role auth the primary path here and relegate static keys to a fallback note.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/06-secrets.mdx` around lines 43 - 53, Update the
example under secretSource: aws-secrets-manager to show short-lived
OIDC/assume-role auth as the primary pattern (e.g., reference GitHub
OIDC/configure-aws-credentials + role-to-assume and AWS_DEFAULT_REGION) and
remove the long-lived AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY as the main
copy-paste; instead move those two variables (AWS_ACCESS_KEY_ID,
AWS_SECRET_ACCESS_KEY) into a clearly labeled fallback note explaining static
keys are unsupported by default and should be avoided. Keep secretSource:
aws-secrets-manager and AWS_DEFAULT_REGION in the example so readers know which
fields remain required.
| Field         | Description                                       |
| ------------- | ------------------------------------------------- |
| `name`        | Run identifier                                    |
| `needs`       | List of run names that must complete first        |
| `editMode`    | Run Unity EditMode tests (default: false)         |
| `playMode`    | Run Unity PlayMode tests (default: false)         |
| `builtClient` | Run tests against a built client (default: false) |
| `filters`     | Taxonomy filters to select tests                  |
| `timeout`     | Maximum run duration in seconds                   |
Missing `builtClientPath` field in Run Fields table.

The example at lines 141-148 uses `builtClientPath: ./Builds/StandaloneLinux64`, but this field is not documented in the Run Fields table.
📝 Proposed fix

```diff
 | `builtClient` | Run tests against a built client (default: false) |
+| `builtClientPath` | Path to the built client for `builtClient` runs |
 | `filters` | Taxonomy filters to select tests |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/07-advanced-topics/11-test-workflow-engine.mdx`
around lines 60 - 68, The Run Fields table is missing the builtClientPath field
used in examples; add a new table row documenting `builtClientPath`
(description: Path to the built client used when `builtClient: true`) alongside
the other fields in the Run Fields table so readers see the example's
`builtClientPath: ./Builds/StandaloneLinux64` is supported; update the table
near the existing `builtClient`/`filters`/`timeout` entries and ensure the field
name is formatted as `builtClientPath` to match the example.
Build reliability features harden CI builds against common failure modes: git corruption on persistent runners, Windows filesystem issues with cross-platform repositories, and build output management. All features are opt-in via action inputs and fail gracefully - a reliability check that encounters an error logs a warning rather than failing the build.
The page contradicts itself on whether these features are opt-in.

The intro says all reliability features are enabled via inputs, but the `GIT_CONFIG_NOSYSTEM=1` section says that behavior is always applied automatically. Please narrow the intro to the configurable features, or call out the automatic Git hardening there as an exception.
Suggested wording

```diff
-Build reliability features harden CI builds against common failure modes: git corruption on
-persistent runners, Windows filesystem issues with cross-platform repositories, and build output
-management. All features are opt-in via action inputs and fail gracefully - a reliability check that
-encounters an error logs a warning rather than failing the build.
+Build reliability features harden CI builds against common failure modes: git corruption on
+persistent runners, Windows filesystem issues with cross-platform repositories, and build output
+management. Most features are opt-in via action inputs and fail gracefully - a reliability check
+that encounters an error logs a warning rather than failing the build. The Git environment hardening
+described below is applied automatically.
```

Also applies to: 169-174
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/07-advanced-topics/17-build-reliability.mdx`
around lines 7 - 10, The intro currently states "All features are opt-in via
action inputs" but the "GIT_CONFIG_NOSYSTEM=1" section describes automatic Git
hardening; update the intro paragraph to say that most reliability features are
opt-in via action inputs while explicitly calling out that Git hardening
(referenced by the GIT_CONFIG_NOSYSTEM=1 section) is applied automatically as an
exception, and apply the same clarification to the later paragraph covering
lines 169-174 so the document no longer contradicts itself.
1. Remove the corrupted `.git` directory entirely
2. Re-initialize the repository with `git init`
3. The checkout action completes the clone on the next step

This is a last-resort recovery. It works because the orchestrator's checkout step will re-populate the repository from the remote after re-initialization.
Clarify that recovery depends on an internal checkout phase, not a later workflow step.
"the checkout action completes the clone on the next step" reads like a later GitHub Actions step, which is confusing in a page that only shows unity-builder inputs. Reword this to point at the orchestrator's own checkout/sync phase so readers do not assume they need an extra post-build checkout.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/07-advanced-topics/17-build-reliability.mdx`
around lines 51 - 56, The sentence "The checkout action completes the clone on
the next step" is ambiguous; update the wording to explicitly state that the
orchestrator's internal checkout/sync phase (not a later GitHub Actions workflow
step) will re-populate the repository immediately after re-initialization so
readers using the unity-builder inputs understand no extra post-build checkout
is required—replace the phrase with something like "the orchestrator's internal
checkout/sync phase will re-populate the repository immediately after
re-initialization" and ensure references to "checkout action" are clarified to
mean the orchestrator's own checkout step.
1. After a successful build, the build output directory is moved (or copied, if a cross-device move is not possible) to `{archivePath}/{platform}/build-{timestamp}`.
2. Archives are organized by platform - each target platform gets its own subdirectory.
3. The retention policy keeps the N most recent builds per platform. Older builds are automatically removed.

The archive path must be set when archival is enabled. This can be a local directory on the runner or a mounted network volume.
Document that archiving can remove the original build output path.
Because the archive step moves the build directory when it can, later workflow steps that upload artifacts from the usual build path will fail on same-device runners. Please explicitly say that enabling archival may relocate the only copy of the output and which path downstream steps should read from.
Suggested wording

 1. After a successful build, the build output directory is moved (or copied, if a cross-device move
    is not possible) to `{archivePath}/{platform}/build-{timestamp}`.
 2. Archives are organized by platform - each target platform gets its own subdirectory.
 3. The retention policy keeps the N most recent builds per platform. Older builds are automatically
    removed.
+
+If the archive is created via a same-filesystem move, the original build output path no longer
+exists afterward. Any later workflow step that consumes the build output should read from the
+archived path instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/07-advanced-topics/17-build-reliability.mdx`
around lines 135 - 142, Add a note stating that the archival step may move (not
just copy) the original build output, which can remove the usual build output
path on same-device runners; explicitly tell readers that downstream steps
should read artifacts from the archive location (use the
{archivePath}/{platform}/build-{timestamp} pattern) instead of the original
build output path when archival is enabled, and mention that cross-device moves
fall back to copying so behavior differs by runner.
> ## Provider Strategy
>
> By default, the `build` command runs locally. You can redirect execution to a remote orchestrator
> provider:
>
> | Flag                  | Default | Description                                  |
> | --------------------- | ------- | -------------------------------------------- |
> | `--provider-strategy` | `local` | Execution strategy: `local`, `k8s`, or `aws` |
>
> When set to anything other than `local`, the build is handed off to the orchestrator. See
> [Orchestrate Command](orchestrate-command) for cloud-specific options.
🧩 Analysis chain
🏁 Script executed:

# Search for "local-docker" mentions in the codebase
rg "local-docker" --type md --type mdx --type json --type js --type ts -i

Repository: game-ci/documentation
Length of output: 92

🏁 Script executed:

# Search for provider strategy definitions and documentation
rg "providerStrategy|provider-strategy" --type md --type mdx -A 3 -B 1

Repository: game-ci/documentation
Length of output: 92

🏁 Script executed:

# Check the orchestrate-command documentation referenced in the file
fd "orchestrate-command" --type f

Repository: game-ci/documentation
Length of output: 126

🏁 Script executed:

# Look for provider-related documentation files
fd "provider" --type f -path "*/docs/*"

Repository: game-ci/documentation
Length of output: 233

🏁 Script executed:

# Search for "local-docker" without file type restrictions
rg "local-docker" -i

Repository: game-ci/documentation
Length of output: 3245

🏁 Script executed:

# Search for provider strategy documentation without file type
rg "providerStrategy|provider.strategy" -A 3 -B 1

Repository: game-ci/documentation
Length of output: 50379

🏁 Script executed:

# Look at the orchestrate-command file that was found
cat "docs/03-github-orchestrator/08-cli/03-orchestrate-command.mdx"

Repository: game-ci/documentation
Length of output: 5959

🏁 Script executed:

# Find all files in the docs directory related to providers
fd "" "docs/03-github-orchestrator" -type f | head -20

Repository: game-ci/documentation
Length of output: 235
Add local-docker to the --provider-strategy description for consistency.
The local-docker provider is documented as a built-in strategy in the API reference and provider documentation, but this table omits it while the parallel orchestrate command includes it. This creates confusion about available options.
Suggested change

-| `--provider-strategy` | `local` | Execution strategy: `local`, `k8s`, or `aws` |
+| `--provider-strategy` | `local` | Execution strategy: `local`, `local-docker`, `k8s`, or `aws` |

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
## Provider Strategy

By default, the `build` command runs locally. You can redirect execution to a remote orchestrator
provider:

| Flag                  | Default | Description                                                  |
| --------------------- | ------- | ------------------------------------------------------------ |
| `--provider-strategy` | `local` | Execution strategy: `local`, `local-docker`, `k8s`, or `aws` |

When set to anything other than `local`, the build is handed off to the orchestrator. See
[Orchestrate Command](orchestrate-command) for cloud-specific options.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/08-cli/02-build-command.mdx` around lines 116 -
126, Update the Provider Strategy table in the build command docs to include the
missing `local-docker` option: modify the description for `--provider-strategy`
(used by the `build` command) to list allowed values as `local`, `local-docker`,
`k8s`, or `aws` and add a short note that `local-docker` is a built-in strategy,
keeping the existing reference to [Orchestrate Command] for cloud-specific
options to ensure parity with the orchestrate command docs.
Fix formatting issues after em-dash removal. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 9
♻️ Duplicate comments (9)
docs/03-github-orchestrator/05-providers/09-gitlab-integration.mdx (1)
18-22: ⚠️ Potential issue | 🟠 Major

Replace the legacy `unity-builder` CLI flow.

This example still sends readers through the old `game-ci/unity-builder` entrypoint and deprecated camelCase flags, which is out of sync with the standalone `@game-ci/orchestrator` CLI this PR is documenting. Please update the snippet to use the current orchestrator command surface.

Suggested doc update

   script:
-    - git clone https://github.com/game-ci/unity-builder.git /tmp/game-ci
-    - cd /tmp/game-ci && yarn install
-    - >
-      yarn run cli -m cli-build --projectPath $CI_PROJECT_DIR --providerStrategy aws
-      --gitPrivateToken $GIT_TOKEN
+    - game-ci orchestrate --project-path "$CI_PROJECT_DIR" --provider-strategy aws
+      --git-private-token "$GIT_TOKEN"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/05-providers/09-gitlab-integration.mdx` around lines 18 - 22, Replace the legacy unity-builder snippet: stop calling the old "game-ci/unity-builder" entrypoint and the camelCase flags used in "yarn run cli -m cli-build" and instead invoke the current `@game-ci/orchestrator` CLI (e.g., via npx or yarn dlx) and switch all flags to kebab-case; update the example to call the orchestrator build command (reference symbol: `@game-ci/orchestrator` and the build action) and use flags like --project-path, --provider-strategy, and --git-private-token in place of --projectPath, --providerStrategy, and --gitPrivateToken so the docs reflect the new CLI surface.

docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx (4)
125-155: ⚠️ Potential issue | 🟠 Major

Use `cat` here so multiline JSON payloads are not truncated.

Bash `read` consumes a single line by default, but the protocol above explicitly allows pretty-printed JSON on stdin. A provider copied from this example would pass only the first line to `jq` for multiline requests. (gnu.org)

Suggested fix

-  read request
+  request=$(cat)

Apply the same change in each subcommand branch that currently uses `read request`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx` around lines 125 - 155, The subcommands (e.g., run-task, readied branch, cleanup-workflow, garbage-collect, list-resources, list-workflow, watch-workflow) currently use Bash read request which only reads a single line and will truncate pretty-printed JSON; replace the single-line read usage with reading the full stdin into the request variable (e.g., use cat to capture multiline JSON) in each branch that currently calls read request so jq sees the entire payload and not just the first line.
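The truncation is easy to demonstrate outside the provider script. A minimal sketch (not the provider code itself) comparing what `read` and `cat` capture from a pretty-printed payload:

```shell
#!/bin/sh
# Sketch: `read` stops at the first newline; `cat` drains all of stdin.
payload='{
  "image": "unityci/editor",
  "commands": "echo hello"
}'

# Piping into `read` captures only the opening brace.
first=$(printf '%s\n' "$payload" | { read -r line; printf '%s' "$line"; })

# Capturing with `cat` preserves the whole document.
full=$(printf '%s\n' "$payload" | cat)

echo "read captured: $first"
echo "cat captured the full payload: ${#full} characters"
```

Feeding `$first` to `jq` would fail on the unterminated object, which is exactly the failure mode this comment describes.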
128-135: ⚠️ Potential issue | 🟠 Major

Only report success when `docker run` actually succeeds.

This example always emits `{"success": true}` even if the container exits non-zero. The CLI provider trusts that final JSON response to decide whether the task succeeded, so copied providers would silently mark failed builds as successful. (raw.githubusercontent.com)

Suggested fix

-    docker run --rm "$image" /bin/sh -c "$commands" 2>&1
-    echo '{"success": true, "result": "done"}'
+    if docker run --rm "$image" /bin/sh -c "$commands" 2>&1; then
+      echo '{"success": true, "result": "done"}'
+    else
+      exit_code=$?
+      echo "{\"success\": false, \"error\": \"Container exited with code ${exit_code}\"}"
+      exit "$exit_code"
+    fi

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx` around lines 128 - 135, In the run-task case, the script always emits '{"success": true}' regardless of the container exit status; change it so the exit code of the docker run command is checked and the final JSON reflects success only when that exit code is 0. Specifically, after resolving image and commands in the run-task block, execute docker run and capture its exit code (from the docker run invocation), then emit '{"success": true, "result": ...}' only when the exit code is 0 and emit '{"success": false, "result": ...}' (including the exit code or stderr message) when non-zero; update the code around the docker run invocation and the final echo in the run-task case accordingly.
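The exit-status handling in the suggested fix can be exercised without Docker by substituting a failing command; here `sh -c 'exit 7'` stands in for `docker run`, so this is a sketch of the pattern rather than the provider script:

```shell
#!/bin/sh
# Sketch: propagate a container-like exit code into the JSON response.
run_task() {
  # `sh -c 'exit 7'` stands in for `docker run --rm "$image" ...`.
  if sh -c 'exit 7'; then
    echo '{"success": true, "result": "done"}'
  else
    exit_code=$?   # exit status of the command tested by `if`
    echo "{\"success\": false, \"error\": \"Container exited with code ${exit_code}\"}"
    return "$exit_code"
  fi
}

# Capture the JSON the way a caller would, and keep the exit status too.
response=$(run_task) || status=$?
echo "$response"
echo "provider exit status: ${status:-0}"
```

Both signals matter: the JSON `success` field is what the orchestrator parses, while the non-zero exit status keeps shell-level error handling (`set -e`, `||`) working.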
157-159: ⚠️ Potential issue | 🟠 Major

Write the structured error response to stdout, not stderr.

The implementation parses response JSON from stdout and forwards stderr to logs, so redirecting the unknown-command payload to stderr prevents the orchestrator from reading the structured failure response. (raw.githubusercontent.com)

Suggested fix

-    echo '{"success": false, "error": "Unknown command: '"$1"'"}' >&2
+    echo '{"success": false, "error": "Unknown command: '"$1"'"}'
     exit 1

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx` around lines 157 - 159, The unknown-command branch currently writes the structured JSON error to stderr; change it to write to stdout so the orchestrator can parse the JSON response — i.e., modify the echo in that case (the line producing '{"success": false, "error": "Unknown command: '"$1"'"}') to write to stdout (remove the >&2 redirection) and keep the exit 1 so the process still returns a failure code.
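The stdout/stderr split can be sketched in isolation: a caller that parses responses sees only stdout, so a structured error sent to stderr is invisible to it. An illustrative sketch, not the orchestrator's actual parser:

```shell
#!/bin/sh
# Sketch: emit the structured error on stdout so a JSON-parsing caller sees it,
# while stderr stays free for human-readable log forwarding.
fail_unknown() {
  echo "{\"success\": false, \"error\": \"Unknown command: $1\"}"  # stdout: parsed
  echo "unknown command received: $1" >&2                          # stderr: logged
  return 1
}

# Capture only stdout, the way a response parser would.
response=$(fail_unknown bogus 2>/dev/null) || status=$?
echo "$response"
echo "exit status: ${status:-0}"
```

With the original `>&2` redirection, `response` would be empty here and the caller would have nothing to parse, even though the process exits non-zero.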
105-115: ⚠️ Potential issue | 🟠 Major

Document the enforced timeouts instead of "No limit".

The CLI provider currently hardcodes `run-task` to 7,200,000 ms and `watch-workflow` to 3,600,000 ms, so this table and the sentence below it describe behavior the implementation does not allow. (raw.githubusercontent.com)

Suggested fix

 | `setup-workflow`   | Initialize infrastructure         | 300s |
-| `run-task`         | Execute the build                 | No limit |
+| `run-task`         | Execute the build                 | 2h |
 | `cleanup-workflow` | Tear down infrastructure          | 300s |
 | `garbage-collect`  | Remove old resources              | 300s |
 | `list-resources`   | List active resources             | 300s |
 | `list-workflow`    | List active workflows             | 300s |
-| `watch-workflow`   | Watch a workflow until completion | No limit |
+| `watch-workflow`   | Watch a workflow until completion | 1h |

-`run-task` and `watch-workflow` have no timeout because builds can run for hours.
+`run-task` is limited to 2 hours and `watch-workflow` to 1 hour in the CLI provider implementation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx` around lines 105 - 115, Update the docs to reflect the actual enforced timeouts for the CLI provider: replace "No limit" for `run-task` with "7,200,000 ms (2h)" and for `watch-workflow` with "3,600,000 ms (1h)" in the table and change the following sentence so it notes these enforced limits (use the millisecond values and human-readable durations); reference the CLI subcommands `run-task` and `watch-workflow` so readers can correlate the table to the implementation that enforces those timeouts.

docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx (1)
241-247: ⚠️ Potential issue | 🟠 Major

Count idle runners here, not just online runners.

`status == "online"` still includes busy runners; GitHub's runner payload exposes `busy` separately, so this example can route to `local-docker` even when no self-hosted runner is actually free. (docs.github.com)

Suggested fix

-          RUNNERS=$(gh api repos/${{ github.repository }}/actions/runners --jq '[.runners[] | select(.status == "online")] | length')
+          RUNNERS=$(gh api repos/${{ github.repository }}/actions/runners --jq '[.runners[] | select(.status == "online" and .busy == false)] | length')

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx` around lines 241 - 247, The RUNNERS count in the step with id "pick" currently counts any runner with status == "online" and therefore includes busy runners; update the GH API jq filter used in the RUNNERS assignment so it only counts runners where status is "online" AND busy is false (i.e., idle), ensuring the workflow routes to local-docker only when a self-hosted runner is actually free; modify the RUNNERS assignment in that step accordingly while keeping GH_TOKEN and the rest of the step intact.

docs/03-github-orchestrator/05-providers/06-remote-powershell.mdx (2)
114-116: ⚠️ Potential issue | 🟠 Major

Narrow the HTTP warning to the auth methods that actually expose credentials.

This currently reads as if WinRM over HTTP always sends credentials in clear text. That is true for Basic auth, but not for Negotiate/Kerberos/NTLM or CredSSP, which provide message-level protection over HTTP. The note should distinguish those cases so readers do not get the wrong transport guidance. (Microsoft Learn: WinRM HTTP Basic authentication cleartext; Negotiate/Kerberos/NTLM and CredSSP message encryption)

Suggested doc tweak

-- **WinRM HTTPS** - For production use, configure WinRM over HTTPS (port 5986) with a valid TLS
-  certificate to encrypt traffic. The default HTTP transport (port 5985) sends credentials in clear
-  text over the network.
+- **WinRM transport security** - Prefer WinRM over HTTPS (port 5986) with a valid TLS certificate,
+  especially when using Basic authentication or when you need certificate-based host validation.
+  Over HTTP (5985), Basic authentication exposes credentials, while
+  Negotiate/Kerberos/NTLM and CredSSP provide message-level protection.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/05-providers/06-remote-powershell.mdx` around lines 114 - 116, Update the WinRM HTTP warning paragraph to clarify that only Basic authentication sends credentials in clear text over HTTP; change the sentence under the "WinRM HTTPS" bullet (the paragraph describing port 5985 vs 5986) to explicitly call out Basic auth as the insecure case and note that Negotiate/Kerberos/NTLM and CredSSP provide message-level protection even over HTTP, while still recommending WinRM over HTTPS (5986) with a valid TLS certificate for production. Mention the relevant auth methods by name (Basic, Negotiate/Kerberos/NTLM, CredSSP) so readers can distinguish which transports expose credentials.
35-41: ⚠️ Potential issue | 🟠 Major

Move `TrustedHosts` guidance off the target and avoid `*`.

Line 40 configures the WinRM client trust list, so it does not belong in a "Target" setup snippet. Using `TrustedHosts="*"` here also disables host validation for every endpoint. Keep the target instructions to listener setup, then document client-side trust separately with explicit hosts or HTTPS. (Microsoft Learn: WinRM TrustedHosts client setting, wildcard security risk)

Suggested doc tweak

 ### Enabling WinRM on the Target

 ```powershell
 # Run as Administrator on the target machine
 Enable-PSRemoting -Force
-Set-Item WSMan:\localhost\Client\TrustedHosts -Value "*" -Force  # Or specific host
 ```

+If the orchestrator runner connects over WinRM HTTP to a non-domain host, configure
+`WSMan:\localhost\Client\TrustedHosts` on the runner/client with explicit hostnames only,
+or prefer WinRM over HTTPS (5986).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/05-providers/06-remote-powershell.mdx` around lines 35 - 41, The snippet configures the client-side TrustedHosts (Set-Item WSMan:\localhost\Client\TrustedHosts) inside the "Target" setup; remove that line from the target snippet and instead add a separate client/runner note explaining to configure WSMan:\localhost\Client\TrustedHosts on the orchestrator/runner (not the target) with explicit hostnames only (do not use "*") or, preferably, use WinRM over HTTPS (port 5986) for non-domain hosts; keep the target snippet limited to listener setup (e.g., Enable-PSRemoting) and place the TrustedHosts guidance in a distinct client-side section.

docs/03-github-orchestrator/07-advanced-topics/11-test-workflow-engine.mdx (1)

58-68: ⚠️ Potential issue | 🟡 Minor

Add `builtClientPath` to the Run Fields table.

The example below uses `builtClientPath`, but the Run Fields table does not document it, so readers cannot tell that it is a supported field.

Suggested doc tweak

 | `editMode`    | Run Unity EditMode tests (default: false)         |
 | `playMode`    | Run Unity PlayMode tests (default: false)         |
 | `builtClient` | Run tests against a built client (default: false) |
+| `builtClientPath` | Path to the built client when `builtClient: true` |
 | `filters`     | Taxonomy filters to select tests                  |
 | `timeout`     | Maximum run duration in seconds                   |

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/07-advanced-topics/11-test-workflow-engine.mdx` around lines 58 - 68, The Run Fields table is missing the builtClientPath entry used in examples; add a new row in the “Run Fields” table with the field name `builtClientPath` and a short description such as “Filesystem path to the built client to run tests against (used when builtClient is true)”, keeping the same table formatting and column style as the existing rows so examples and docs match the supported fields.
🧹 Nitpick comments (1)
docs/03-github-orchestrator/05-providers/07-ansible.mdx (1)
227-229: Consider documenting security implications of writing secrets to disk.

Writing the vault password to `/tmp/vault-pass` on a shared runner means the secret is briefly accessible as a plaintext file. While the cleanup step mitigates this, you may want to add a note about this trade-off, especially for users on GitHub-hosted runners.

Optional: Consider suggesting `mktemp` for a unique filename to reduce collision risk:

      - name: Write vault password
        run: |
          VAULT_PASS_FILE=$(mktemp)
          echo "${{ secrets.ANSIBLE_VAULT_PASSWORD }}" > "$VAULT_PASS_FILE"
          echo "VAULT_PASS_FILE=$VAULT_PASS_FILE" >> $GITHUB_ENV

🤖 Prompt for AI Agents
mktempfor a unique filename to reduce collision risk:- name: Write vault password run: | VAULT_PASS_FILE=$(mktemp) echo "${{ secrets.ANSIBLE_VAULT_PASSWORD }}" > "$VAULT_PASS_FILE" echo "VAULT_PASS_FILE=$VAULT_PASS_FILE" >> $GITHUB_ENV🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/05-providers/07-ansible.mdx` around lines 227 - 229, The "Write vault password" step writes the secret to /tmp/vault-pass which leaves plaintext on disk on shared runners; update the docs to explicitly call out this security implication and recommend safer alternatives: use mktemp to create a unique temp file, restrict file permissions (chmod 600) before writing, export the temp filename to GITHUB_ENV if needed, and ensure an immediate cleanup step to shred/remove the file; mention that GitHub-hosted runners are shared and advise using in-memory approaches or GitHub Actions built-in secrets masking where possible.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/03-github-orchestrator/04-jobs.mdx`:
- Around line 143-150: The docs show inconsistent YAML keys between the API
reference and examples: normalize on one key name (prefer the API reference's
singular "command" if that is the implemented schema) by updating all examples
(e.g., the preBuildSteps and postBuildSteps examples that currently use
"commands") to use "command" and ensure "image" remains present; then add a
short example demonstrating the "secrets" key in a postBuildSteps item (showing
how to list/reference secrets) and update the API reference text to match the
example so preBuildSteps, postBuildSteps, image, command, and secrets are
consistent across docs.
In `@docs/03-github-orchestrator/05-providers/02-aws.mdx`:
- Around line 28-50: Update the AWS credentials section to distinguish two
flows: Preferred OIDC role assumption (no long-lived secrets) and Fallback
long-lived access keys; explicitly mention that IAM roles assumed via OIDC do
not use AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY. Modify the GitHub Actions
example using aws-actions/configure-aws-credentials@v4 to show a role-to-assume
variant (include inputs like role-to-assume and optionally
role-session-name/role-duration-seconds) and keep a separate fallback example
showing aws-access-key-id and aws-secret-access-key for legacy users; ensure the
docs mention which environment variables are required for each path and label
the OIDC approach as recommended.
In `@docs/03-github-orchestrator/05-providers/03-kubernetes.mdx`:
- Around line 17-20: Update the "CPU and Memory" section to clarify that "1024 =
1 vCPU" is a provider-level convention rather than a native Kubernetes unit;
explicitly mention that Kubernetes native CPU units are millicores (e.g., 100m)
and memory uses binary SI suffixes (e.g., Mi, Gi), and note that the inputs
containerCpu and containerMemory follow the provider abstraction and are
translated to Kubernetes resource quantities. Reference the "CPU and Memory"
heading and the containerCpu/containerMemory inputs when making the text change
so readers understand it's an abstraction layer, not the upstream Kubernetes
format.
In `@docs/03-github-orchestrator/05-providers/07-ansible.mdx`:
- Around line 54-58: The example is passing the secret value directly to
ansibleVaultPassword even though that input expects a file path; update the
example so ansibleVaultPassword points to a filesystem path containing the vault
password (matching the "Full Workflow Example" approach). Concretely, change the
snippet around ansibleVaultPassword so it demonstrates writing the secret from
secrets.ANSIBLE_VAULT_PASSWORD (or secrets.ANSIBLE_VAULT_PASSWORD_FILE) into a
temporary file and then setting ansibleVaultPassword to that temp file path;
keep ansibleInventory and ansiblePlaybook as-is and reference the temp file path
in place of the current direct secret usage.
In `@docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx`:
- Around line 243-245: The example uses GH_TOKEN (secrets.GITHUB_TOKEN) to call
the GET /repos/{owner}/{repo}/actions/runners endpoint which requires the
fine-grained Administration: Read permission; update the example job to either
add a job-level permissions block (permissions: administration: read) so
secrets.GITHUB_TOKEN will work, or explicitly document that a PAT or GitHub App
token with Administration (read) rights must be supplied instead of
secrets.GITHUB_TOKEN.
- Line 246: Update the RUNNERS lookup to paginate and count only idle online
runners: call gh api with --paginate when invoking the /actions/runners endpoint
and change the jq filter to select runners where .status == "online" and .busy
== false before taking the length (the existing RUNNERS=$(gh api ... --jq
'[.runners[] | select(.status == "online")] | length') should be adjusted to use
--paginate and include the .busy check).
In `@docs/03-github-orchestrator/07-advanced-topics/16-monorepo-support.mdx`:
- Around line 110-111: Update the sentence that mentions submoduleProfilePath to
narrow its scope to submodule git operations: replace "before any git
operations" with "before any submodule git operations" and keep the rest of the
sentence about initializing only the listed submodules and skipped submodules
never being cloned, fetched, or touched; refer to the symbol
submoduleProfilePath and the orchestrator behavior when making this wording
change.
- Around line 237-243: The PowerShell example in the "Validate submodule
profiles" run block uses bash-style backslashes and an OS-specific shell; change
the continuation characters to PowerShell backticks (`) for the multiline
command and switch the step's shell from "powershell" to "pwsh" for
cross-platform PowerShell Core compatibility while keeping the same parameters
(-ProfileDir config/submodule-profiles and -GitmodulesPath .gitmodules) and the
same script invocation name ("Validate submodule profiles").
In `@docs/03-github-orchestrator/08-cli/04-other-commands.mdx`:
- Around line 129-133: Update the CLI example output that currently reads
"game-ci (`@game-ci/orchestrator`) v3.0.0" to "game-ci (`@game-ci/orchestrator`)
v2.0.0" so the documentation example matches the LTS 2.0.0 release; locate the
code block showing the Node/Platform output (the triple-backtick example with
game-ci and Node.js) and replace the version string only.
---
Duplicate comments:
In `@docs/03-github-orchestrator/05-providers/06-remote-powershell.mdx`:
- Around line 114-116: Update the WinRM HTTP warning paragraph to clarify that
only Basic authentication sends credentials in clear text over HTTP; change the
sentence under the "WinRM HTTPS" bullet (the paragraph describing port 5985 vs
5986) to explicitly call out Basic auth as the insecure case and note that
Negotiate/Kerberos/NTLM and CredSSP provide message-level protection even over
HTTP, while still recommending WinRM over HTTPS (5986) with a valid TLS
certificate for production. Mention the relevant auth methods by name (Basic,
Negotiate/Kerberos/NTLM, CredSSP) so readers can distinguish which transports
expose credentials.
- Around line 35-41: The snippet configures the client-side TrustedHosts
(Set-Item WSMan:\localhost\Client\TrustedHosts) inside the "Target" setup;
remove that line from the target snippet and instead add a separate
client/runner note explaining to configure WSMan:\localhost\Client\TrustedHosts
on the orchestrator/runner (not the target) with explicit hostnames only (do not
use "*") or, preferably, use WinRM over HTTPS (port 5986) for non-domain hosts;
keep the target snippet limited to listener setup (e.g., Enable-PSRemoting) and
place the TrustedHosts guidance in a distinct client-side section.
In `@docs/03-github-orchestrator/05-providers/09-gitlab-integration.mdx`:
- Around line 18-22: Replace the legacy unity-builder snippet: stop calling the
old "game-ci/unity-builder" entrypoint and the camelCase flags used in "yarn run
cli -m cli-build" and instead invoke the current `@game-ci/orchestrator` CLI
(e.g., via npx or yarn dlx) and switch all flags to kebab-case; update the
example to call the orchestrator build command (reference symbol:
`@game-ci/orchestrator` and the build action) and use flags like --project-path,
--provider-strategy, and --git-private-token in place of --projectPath,
--providerStrategy, and --gitPrivateToken so the docs reflect the new CLI
surface.
In `@docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx`:
- Around line 125-155: The subcommands (e.g., run-task, readied branch,
cleanup-workflow, garbage-collect, list-resources, list-workflow,
watch-workflow) currently use Bash read request which only reads a single line
and will truncate pretty-printed JSON; replace the single-line read usage with
reading the full stdin into the request variable (e.g., use cat to capture
multiline JSON) in each branch that currently calls read request so jq sees the
entire payload and not just the first line.
- Around line 128-135: In the run-task case, the script always emits
'{"success": true}' regardless of the container exit status; change it so the
exit code of the docker run command is checked and the final JSON reflects
success only when that exit code is 0. Specifically, after resolving image and
commands in the run-task block, execute docker run and capture its exit code
(from the docker run invocation), then emit '{"success": true, "result": ...}'
only when the exit code is 0 and emit '{"success": false, "result": ...}'
(including the exit code or stderr message) when non-zero; update the code
around the docker run invocation and the final echo in the run-task case
accordingly.
- Around line 157-159: The unknown-command branch currently writes the
structured JSON error to stderr; change it to write to stdout so the
orchestrator can parse the JSON response — i.e., modify the echo in that case
(the line producing '{"success": false, "error": "Unknown command: '"$1"'"}') to
write to stdout (remove the >&2 redirection) and keep the exit 1 so the process
still returns a failure code.
- Around line 105-115: Update the docs to reflect the actual enforced timeouts
for the CLI provider: replace "No limit" for `run-task` with "7,200,000 ms (2h)"
and for `watch-workflow` with "3,600,000 ms (1h)" in the table and change the
following sentence so it notes these enforced limits (use the millisecond values
and human-readable durations); reference the CLI subcommands `run-task` and
`watch-workflow` so readers can correlate the table to the implementation that
enforces those timeouts.
In `@docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx`:
- Around line 241-247: The RUNNERS count in the step with id "pick" currently
counts any runner with status == "online" and therefore includes busy runners;
update the GH API jq filter used in the RUNNERS assignment so it only counts
runners where status is "online" AND busy is false (i.e., idle), ensuring the
workflow routes to local-docker only when a self-hosted runner is actually free;
modify the RUNNERS assignment in that step accordingly while keeping GH_TOKEN
and the rest of the step intact.
In `@docs/03-github-orchestrator/07-advanced-topics/11-test-workflow-engine.mdx`:
- Around line 58-68: The Run Fields table is missing the builtClientPath entry
used in examples; add a new row in the “Run Fields” table with the field name
`builtClientPath` and a short description such as “Filesystem path to the built
client to run tests against (used when builtClient is true)”, keeping the same
table formatting and column style as the existing rows so examples and docs
match the supported fields.
---
Nitpick comments:
In `@docs/03-github-orchestrator/05-providers/07-ansible.mdx`:
- Around line 227-229: The "Write vault password" step writes the secret to
/tmp/vault-pass which leaves plaintext on disk on shared runners; update the
docs to explicitly call out this security implication and recommend safer
alternatives: use mktemp to create a unique temp file, restrict file permissions
(chmod 600) before writing, export the temp filename to GITHUB_ENV if needed,
and ensure an immediate cleanup step to shred/remove the file; mention that
GitHub-hosted runners are shared and advise using in-memory approaches or GitHub
Actions built-in secrets masking where possible.
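A minimal sketch of that safer pattern (the environment variable name and the placeholder secret are illustrative, not the docs' actual names):

```shell
# Placeholder so the sketch is runnable; in CI this comes from a masked secret.
ANSIBLE_VAULT_PASSWORD="${ANSIBLE_VAULT_PASSWORD:-example-password}"

# Unique temp file, restricted permissions BEFORE the secret is written.
VAULT_PASS_FILE="$(mktemp)"
chmod 600 "$VAULT_PASS_FILE"
printf '%s' "$ANSIBLE_VAULT_PASSWORD" > "$VAULT_PASS_FILE"

# If later steps need the path (GitHub Actions convention):
#   echo "VAULT_PASS_FILE=$VAULT_PASS_FILE" >> "$GITHUB_ENV"

# ... run e.g. ansible-playbook --vault-password-file "$VAULT_PASS_FILE" ...

# Clean up immediately afterwards: shred if available, plain rm otherwise.
shred -u "$VAULT_PASS_FILE" 2>/dev/null || rm -f "$VAULT_PASS_FILE"
```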
ℹ️ Review info

⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 540a9bec-c429-451b-8b43-5fecc06530fa
📒 Files selected for processing (37)
- docs/03-github-orchestrator/01-introduction.mdx
- docs/03-github-orchestrator/03-examples/02-github-actions.mdx
- docs/03-github-orchestrator/03-examples/03-aws.mdx
- docs/03-github-orchestrator/03-examples/04-kubernetes.mdx
- docs/03-github-orchestrator/04-jobs.mdx
- docs/03-github-orchestrator/05-api-reference.mdx
- docs/03-github-orchestrator/05-providers/02-aws.mdx
- docs/03-github-orchestrator/05-providers/03-kubernetes.mdx
- docs/03-github-orchestrator/05-providers/04-github-actions-dispatch.mdx
- docs/03-github-orchestrator/05-providers/05-gitlab-ci-dispatch.mdx
- docs/03-github-orchestrator/05-providers/05-local.mdx
- docs/03-github-orchestrator/05-providers/06-custom-providers.mdx
- docs/03-github-orchestrator/05-providers/06-remote-powershell.mdx
- docs/03-github-orchestrator/05-providers/07-ansible.mdx
- docs/03-github-orchestrator/05-providers/09-gitlab-integration.mdx
- docs/03-github-orchestrator/05-providers/10-gcp-cloud-run.mdx
- docs/03-github-orchestrator/05-providers/11-azure-aci.mdx
- docs/03-github-orchestrator/05-providers/12-cli-provider-protocol.mdx
- docs/03-github-orchestrator/06-secrets.mdx
- docs/03-github-orchestrator/07-advanced-topics/01-caching.mdx
- docs/03-github-orchestrator/07-advanced-topics/02-retained-workspace.mdx
- docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx
- docs/03-github-orchestrator/07-advanced-topics/08-storage.mdx
- docs/03-github-orchestrator/07-advanced-topics/09-architecture.mdx
- docs/03-github-orchestrator/07-advanced-topics/10-build-services.mdx
- docs/03-github-orchestrator/07-advanced-topics/10-lfs-agents.mdx
- docs/03-github-orchestrator/07-advanced-topics/11-test-workflow-engine.mdx
- docs/03-github-orchestrator/07-advanced-topics/12-hot-runner-protocol.mdx
- docs/03-github-orchestrator/07-advanced-topics/13-build-output-system.mdx
- docs/03-github-orchestrator/07-advanced-topics/14-incremental-sync-protocol.mdx
- docs/03-github-orchestrator/07-advanced-topics/15-massive-projects.mdx
- docs/03-github-orchestrator/07-advanced-topics/16-monorepo-support.mdx
- docs/03-github-orchestrator/07-advanced-topics/17-build-reliability.mdx
- docs/03-github-orchestrator/08-cli/01-getting-started.mdx
- docs/03-github-orchestrator/08-cli/02-build-command.mdx
- docs/03-github-orchestrator/08-cli/03-orchestrate-command.mdx
- docs/03-github-orchestrator/08-cli/04-other-commands.mdx
✅ Files skipped from review due to trivial changes (2)
- docs/03-github-orchestrator/03-examples/04-kubernetes.mdx
- docs/03-github-orchestrator/05-providers/06-custom-providers.mdx
🚧 Files skipped from review as they are similar to previous changes (13)
- docs/03-github-orchestrator/07-advanced-topics/10-build-services.mdx
- docs/03-github-orchestrator/05-providers/05-gitlab-ci-dispatch.mdx
- docs/03-github-orchestrator/05-providers/11-azure-aci.mdx
- docs/03-github-orchestrator/05-providers/04-github-actions-dispatch.mdx
- docs/03-github-orchestrator/07-advanced-topics/09-architecture.mdx
- docs/03-github-orchestrator/05-api-reference.mdx
- docs/03-github-orchestrator/06-secrets.mdx
- docs/03-github-orchestrator/08-cli/01-getting-started.mdx
- docs/03-github-orchestrator/08-cli/03-orchestrate-command.mdx
- docs/03-github-orchestrator/07-advanced-topics/14-incremental-sync-protocol.mdx
- docs/03-github-orchestrator/07-advanced-topics/15-massive-projects.mdx
- docs/03-github-orchestrator/07-advanced-topics/13-build-output-system.mdx
- docs/03-github-orchestrator/03-examples/03-aws.mdx
```yaml
- uses: game-ci/unity-builder@v4
  with:
    preBuildSteps: |
      - name: install-dependencies
        image: node:18
        commands: npm install
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Search for preBuildSteps/postBuildSteps schema definitions in the codebase
rg -n "preBuildSteps|postBuildSteps" --type yaml --type json -A 5
rg -n "commands|command" docs/03-github-orchestrator/ -B 2 -A 2 | head -80
```

Repository: game-ci/documentation
Length of output: 7501
🏁 Script executed:
```bash
#!/bin/bash
# Read the API reference file around line 60
echo "=== API Reference (lines 55-75) ==="
sed -n '55,75p' docs/03-github-orchestrator/05-api-reference.mdx

# Read the jobs.mdx file to see the preBuildSteps example
echo ""
echo "=== Jobs.mdx preBuildSteps example (lines 143-170) ==="
sed -n '143,170p' docs/03-github-orchestrator/04-jobs.mdx
```

Repository: game-ci/documentation
Length of output: 3238
Fix YAML key inconsistency in documentation.
The API reference (line 60) documents postBuildSteps with keys image, secrets, and command (singular), but the examples in this file use commands (plural) for both preBuildSteps and postBuildSteps. The architecture documentation also refers to commands (plural). Clarify which key name is correct and update the API reference and/or examples accordingly to ensure consistency.
Additionally, the API reference mentions secrets as a key for postBuildSteps, but neither example demonstrates its usage. Add documentation or an example showing how to use the secrets parameter if it is supported.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/04-jobs.mdx` around lines 143 - 150, The docs
show inconsistent YAML keys between the API reference and examples: normalize on
one key name (prefer the API reference's singular "command" if that is the
implemented schema) by updating all examples (e.g., the preBuildSteps and
postBuildSteps examples that currently use "commands") to use "command" and
ensure "image" remains present; then add a short example demonstrating the
"secrets" key in a postBuildSteps item (showing how to list/reference secrets)
and update the API reference text to match the example so preBuildSteps,
postBuildSteps, image, command, and secrets are consistent across docs.
- An AWS account with permission to create resources (ECS, CloudFormation, S3, Kinesis, CloudWatch).
- An IAM user or role with an access key and secret key.

## AWS Credentials

Set the following as `env` variables in your workflow:

| Variable                | Description                                             |
| ----------------------- | ------------------------------------------------------- |
| `AWS_ACCESS_KEY_ID`     | IAM access key ID.                                      |
| `AWS_SECRET_ACCESS_KEY` | IAM secret access key.                                  |
| `AWS_DEFAULT_REGION`    | AWS region matching your base stack (e.g. `eu-west-2`). |

If you're using GitHub Actions, configure credentials with:

```yaml
- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: eu-west-2
```
🧩 Analysis chain
🌐 Web query:
aws-actions configure-aws-credentials v4 GitHub Actions OIDC assume role IAM role access keys
💡 Result:
Recommended: GitHub Actions → AWS via OIDC (no long‑lived access keys)
1) In AWS IAM
- Create an OIDC identity provider for `https://token.actions.githubusercontent.com` (audience typically `sts.amazonaws.com`). [1]
- Create an IAM role whose trust policy allows `sts:AssumeRoleWithWebIdentity` from that provider and scopes access via OIDC claims (example below). [1]
Example trust policy (scope to a branch):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
        "token.actions.githubusercontent.com:sub": "repo:<ORG>/<REPO>:ref:refs/heads/<BRANCH>"
      }
    }
  }]
}
```

[1]
- Attach the **permissions policy** to the role (least privilege for what your workflow must do). [1][2]
**2) In your GitHub workflow**
- Grant permission to request an OIDC token: `permissions: id-token: write`. [1]
- Use the action (v4) and provide `role-to-assume` + `aws-region`:
```yaml
permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials (OIDC)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ROLE_NAME>
          aws-region: us-east-1
      - run: aws sts get-caller-identity
```

This action exports standard AWS env vars (e.g., `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION`) for later steps. [3]
Alternative: Use IAM access keys (not recommended for CI)
If you must use long-lived keys, you can pass repository secrets like aws-access-key-id / aws-secret-access-key (and optionally still role-to-assume to assume a role). [1]
Best practice is OIDC to avoid storing access keys in GitHub. [1][2]
Sources: [1] aws-actions/configure-aws-credentials README, [2] AWS Security Blog (GitHub Actions ↔ IAM roles), [3] action overview describing exported env vars.
Fix IAM authentication guidance: distinguish between OIDC role assumption and long-lived access keys.
Lines 28–29 state "an IAM user or role" needs an access key and secret key, but IAM roles assumed via OIDC (the recommended approach) do not use long-lived credentials. The workflow example only demonstrates static secrets, which misleads GitHub Actions users away from the secure OIDC flow that configure-aws-credentials@v4 supports.
Separate the guidance into two paths:
- Preferred: IAM role assumable from GitHub Actions via OIDC (no stored secrets).
- Fallback: IAM user with long-lived access keys (for legacy workflows).
Update the example to show role-to-assume with OIDC instead of (or in addition to) the static-secret approach.
Suggested changes

```diff
-- An IAM user or role with an access key and secret key.
+- Either:
+  - an IAM role assumable from GitHub Actions via OIDC (preferred), or
+  - an IAM user with an access key and secret key.
@@
-If you're using GitHub Actions, configure credentials with:
+If you're using GitHub Actions, prefer OIDC + `role-to-assume`:
@@
 - name: Configure AWS Credentials
   uses: aws-actions/configure-aws-credentials@v4
   with:
-    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
-    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+    role-to-assume: arn:aws:iam::123456789012:role/game-ci-orchestrator
     aws-region: eu-west-2
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/05-providers/02-aws.mdx` around lines 28 - 50,
Update the AWS credentials section to distinguish two flows: Preferred OIDC role
assumption (no long-lived secrets) and Fallback long-lived access keys;
explicitly mention that IAM roles assumed via OIDC do not use
AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY. Modify the GitHub Actions example using
aws-actions/configure-aws-credentials@v4 to show a role-to-assume variant
(include inputs like role-to-assume and optionally
role-session-name/role-duration-seconds) and keep a separate fallback example
showing aws-access-key-id and aws-secret-access-key for legacy users; ensure the
docs mention which environment variables are required for each path and label
the OIDC approach as recommended.
## CPU and Memory

Kubernetes accepts the same unit format as AWS - `1024 = 1 vCPU`, memory in MB. Do not include the
vCPU or GB suffix.
🧩 Analysis chain
🌐 Web query:
Kubernetes official docs CPU memory resource quantity format millicores Mi
💡 Result:
In Kubernetes, CPU and memory fields (for example resources.requests/limits) use the resource.Quantity format:
<quantity> ::= <signedNumber><suffix> where suffix can be decimal SI (m, k, M, G, T, P, E), binary SI (Ki, Mi, Gi, Ti, Pi, Ei), or a decimal exponent (e / E). Case matters. [2]
CPU (cores, millicores / millicpu)

- 1 CPU = 1 core (or 1 vCPU). [1]
- You can specify fractions as decimals or in millicpu using `m`: 0.1 CPU = `100m` (often said as "100 millicores"). [1]
- Precision finer than `1m` is not allowed. [1]

Memory (bytes, Mi / Gi, etc.)

- Memory is measured in bytes, with optional suffixes:
  - Decimal: `k`, `M`, `G`, `T`, `P`, `E`
  - Binary: `Ki`, `Mi`, `Gi`, `Ti`, `Pi`, `Ei` [1]
- Example equivalence Kubernetes gives: `129M` ≈ `123Mi` (different bases: 10³ vs 2¹⁰). [1]
- Suffix case matters: `400m` memory is 0.4 bytes (not 400Mi). [1]
Sources: Kubernetes “Resource Management for Pods and Containers” doc (CPU millicores, memory units, case sensitivity) [1]; Kubernetes API “Quantity” definition (formal grammar and suffix sets) [2].
Clarify that 1024 = 1 vCPU is a provider convention, not a native Kubernetes unit.
The current wording misleads readers by implying Kubernetes itself accepts this format. Native Kubernetes resource quantities use millicores (m for CPU) and binary SI suffixes (Mi, Gi for memory), as documented in the Kubernetes official specification. The containerCpu / containerMemory inputs follow a provider-level abstraction.
Suggested edit

```diff
-Kubernetes accepts the same unit format as AWS - `1024 = 1 vCPU`, memory in MB. Do not include the
-vCPU or GB suffix.
+The Kubernetes provider uses the same `containerCpu` / `containerMemory` input convention as AWS:
+`1024 = 1 vCPU`, memory in MB. These are provider inputs, not raw Kubernetes resource quantities.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/05-providers/03-kubernetes.mdx` around lines 17 -
20, Update the "CPU and Memory" section to clarify that "1024 = 1 vCPU" is a
provider-level convention rather than a native Kubernetes unit; explicitly
mention that Kubernetes native CPU units are millicores (e.g., 100m) and memory
uses binary SI suffixes (e.g., Mi, Gi), and note that the inputs containerCpu
and containerMemory follow the provider abstraction and are translated to
Kubernetes resource quantities. Reference the "CPU and Memory" heading and the
containerCpu/containerMemory inputs when making the text change so readers
understand it's an abstraction layer, not the upstream Kubernetes format.
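To make the abstraction concrete, here is a hedged sketch of how provider-style inputs could map to native Kubernetes quantities; the rounding, function names, and the decision to emit `Mi` for the MB input are assumptions for illustration, not the provider's actual code:

```shell
# Convert provider-style inputs (1024 = 1 vCPU, memory in MB) into
# native Kubernetes quantities (millicores and binary-SI mebibytes).
to_k8s_cpu() {
  # containerCpu 1024 -> "1000m" (1 vCPU = 1000 millicores)
  echo "$(( $1 * 1000 / 1024 ))m"
}

to_k8s_memory() {
  # containerMemory is a plain number; treated as mebibytes here for simplicity,
  # even though MB (10^6) and Mi (2^20) differ slightly.
  echo "${1}Mi"
}

to_k8s_cpu 1024    # -> 1000m
to_k8s_cpu 512     # -> 500m
to_k8s_memory 4096 # -> 4096Mi
```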
```yaml
ansibleInventory: ./infrastructure/inventory.yml
ansiblePlaybook: ./infrastructure/unity-build.yml
ansibleExtraVars: '{"unity_version": "2022.3.0f1"}'
ansibleVaultPassword: ${{ secrets.ANSIBLE_VAULT_PASSWORD_FILE }}
```
Inconsistent vault password handling in example.
The configuration example passes secrets.ANSIBLE_VAULT_PASSWORD_FILE directly to ansibleVaultPassword, but this input expects a file path (as documented in the Inputs Reference table). The "Full Workflow Example" later correctly demonstrates writing the secret to a temporary file first.
Consider updating this example to be consistent with the recommended pattern:
📝 Suggested fix

```diff
 ansiblePlaybook: ./infrastructure/unity-build.yml
 ansibleExtraVars: '{"unity_version": "2022.3.0f1"}'
-ansibleVaultPassword: ${{ secrets.ANSIBLE_VAULT_PASSWORD_FILE }}
+# Note: ansibleVaultPassword expects a file path.
+# See the "Full Workflow Example" section for the recommended
+# pattern of writing the secret to a temporary file first.
+ansibleVaultPassword: /tmp/vault-pass
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
ansibleInventory: ./infrastructure/inventory.yml
ansiblePlaybook: ./infrastructure/unity-build.yml
ansibleExtraVars: '{"unity_version": "2022.3.0f1"}'
# Note: ansibleVaultPassword expects a file path.
# See the "Full Workflow Example" section for the recommended
# pattern of writing the secret to a temporary file first.
ansibleVaultPassword: /tmp/vault-pass
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/05-providers/07-ansible.mdx` around lines 54 -
58, The example is passing the secret value directly to ansibleVaultPassword
even though that input expects a file path; update the example so
ansibleVaultPassword points to a filesystem path containing the vault password
(matching the "Full Workflow Example" approach). Concretely, change the snippet
around ansibleVaultPassword so it demonstrates writing the secret from
secrets.ANSIBLE_VAULT_PASSWORD (or secrets.ANSIBLE_VAULT_PASSWORD_FILE) into a
temporary file and then setting ansibleVaultPassword to that temp file path;
keep ansibleInventory and ansiblePlaybook as-is and reference the temp file path
in place of the current direct secret usage.
```yaml
env:
  GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# First, find and read the file to understand the context
cat -n docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx | sed -n '230,260p'
```

Repository: game-ci/documentation
Length of output: 1233
🏁 Script executed:
```bash
#!/bin/bash
# Get more context around the lines in question
cat -n docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx | sed -n '200,280p'
```

Repository: game-ci/documentation
Length of output: 3110
🏁 Script executed:
```bash
#!/bin/bash
# Search for references to runners, self-hosted, or GH_TOKEN in the docs
rg -i "self-hosted.*runner|gh_token|github_token" docs/03-github-orchestrator/ -A 3 -B 3
```

Repository: game-ci/documentation
Length of output: 50379
🌐 Web query:
GitHub REST API self-hosted runners endpoint permissions fine-grained token requirements
💡 Result:
Fine‑grained PAT (FGPAT) permissions for self‑hosted runners REST endpoints (GitHub Actions)
Organization-level runner endpoints (/orgs/{org}/actions/runners/...)
- Read operations (e.g., get runner, list labels): Organization permission → “Self-hosted runners” = Read. [1]
- Write operations (e.g., create registration/remove token, delete runner, manage labels): Organization permission → “Self-hosted runners” = Write. [2]
Repository-level runner endpoints (/repos/{owner}/{repo}/actions/runners/...)
- Read operations (e.g., list runners, get runner, list runner downloads/labels): Repository permission → “Administration” = Read. [1]
- Write operations (e.g., create registration/remove token, delete runner, manage labels): Repository permission → “Administration” = Write. [1]
Quick check for the two most common endpoints

- `POST /orgs/{org}/actions/runners/registration-token` → Org: Self-hosted runners (Write). [2]
- `POST /repos/{owner}/{repo}/actions/runners/registration-token` → Repo: Administration (Write). [1]

Classic PAT comparison (often the source of confusion)

- Org runner endpoints typically require `admin:org` (and `repo` too if the relevant repo is private). [3]
- Repo runner endpoints typically require the `repo` scope. [3]
Sources:
[1] GitHub Docs – “Permissions required for fine-grained personal access tokens” (lists runner endpoints and required repo permission/access). (docs.github.com)
[2] GitHub Docs – Self-hosted runners REST API (“Create a registration token for an organization” → FGPAT needs Self-hosted runners org permission (write)). (docs.github.com)
[3] GitHub Docs – Self-hosted runners REST API (classic PAT scope notes like admin:org / repo). (docs.github.com)
Citations:
- 1: https://docs.github.com/en/rest/overview/permissions-required-for-fine-grained-personal-access-tokens
- 2: https://docs.github.com/en/rest/actions/self-hosted-runners
- 3: https://docs.github.com/en/rest/actions/self-hosted-runners
Update the example to include required permissions or call out the PAT/App token requirement.
The endpoint GET /repos/{owner}/{repo}/actions/runners requires the fine-grained permission Administration: Read. The secrets.GITHUB_TOKEN provided by GitHub Actions has only contents: read and metadata: read by default and will fail without explicit configuration. Either add permissions: administration: read to the job, or update the example to document that users need a PAT or GitHub App token with Administration permissions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx` around
lines 243 - 245, The example uses GH_TOKEN (secrets.GITHUB_TOKEN) to call the
GET /repos/{owner}/{repo}/actions/runners endpoint which requires the
fine-grained Administration: Read permission; update the example job to either
add a job-level permissions block (permissions: administration: read) so
secrets.GITHUB_TOKEN will work, or explicitly document that a PAT or GitHub App
token with Administration (read) rights must be supplied instead of
secrets.GITHUB_TOKEN.
```yaml
env:
  GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
  RUNNERS=$(gh api repos/${{ github.repository }}/actions/runners --jq '[.runners[] | select(.status == "online")] | length')
```
🧩 Analysis chain
🏁 Script executed:
```bash
# First, let's check if the file exists and view the context around line 246
if [ -f "docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx" ]; then
  wc -l "docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx"
fi
```

Repository: game-ci/documentation
Length of output: 137
🏁 Script executed:
```bash
# View the context around line 246
sed -n '240,260p' "docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx"
```

Repository: game-ci/documentation
Length of output: 696
🌐 Web query:
GitHub API GET /repos/{owner}/{repo}/actions/runners pagination default limit
💡 Result:
For GET /repos/{owner}/{repo}/actions/runners (List self-hosted runners for a repository), pagination defaults are:
- `per_page`: 30 by default (max 100)
- `page`: 1 by default
[1]
Source(s):
[1] (docs.github.com)
🌐 Web query:
gh cli --paginate --slurp flags documentation actions runners
💡 Result:
--paginate and --slurp are flags for gh api (REST) and gh api graphql:
- `--paginate`: "Make additional HTTP requests to fetch all pages of results." [1]
- `--slurp`: "Use with `--paginate` to return an array of all pages of either JSON arrays or objects" (i.e., wraps each page's JSON response into one outer JSON array). [1]
Using them for GitHub Actions self-hosted runners
GitHub’s REST API endpoints include:
- List runners for an org:
GET /orgs/{org}/actions/runners[2] - List runners for a repo:
GET /repos/{owner}/{repo}/actions/runners[2]
Examples:

```bash
# Org runners: fetch all pages, then flatten to one JSON array of runners
gh api /orgs/ORG/actions/runners --paginate --slurp \
  | jq '[.[].runners[]]'

# Repo runners
gh api /repos/OWNER/REPO/actions/runners --paginate --slurp \
  | jq '[.[].runners[]]'
```

If you don't need `--slurp`, you can omit it and process each page as it streams. [1]
Sources: GitHub CLI gh api manual (flags + pagination behavior) [1]; GitHub REST API “self-hosted runners” endpoints [2].
Paginate the runner lookup for larger fleets.
GET /repos/{owner}/{repo}/actions/runners defaults to 30 results per page, and gh api only fetches later pages when --paginate is set. On repos with more than 30 runners, this sample can undercount available capacity and misroute builds. Additionally, the current query counts all online runners, including busy ones—you should filter for idle runners only. (docs.github.com)
Suggested fix

```diff
- RUNNERS=$(gh api repos/${{ github.repository }}/actions/runners --jq '[.runners[] | select(.status == "online")] | length')
+ RUNNERS=$(gh api --paginate --slurp repos/${{ github.repository }}/actions/runners \
+   --jq '[.[].runners[] | select(.status == "online" and .busy == false)] | length')
```
+ --jq '[.[].runners[] | select(.status == "online" and .busy == false)] | length')📝 Committable suggestion
```bash
RUNNERS=$(gh api --paginate --slurp repos/${{ github.repository }}/actions/runners \
  --jq '[.[].runners[] | select(.status == "online" and .busy == false)] | length')
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/07-advanced-topics/07-load-balancing.mdx` at line
246, Update the RUNNERS lookup to paginate and count only idle online runners:
call gh api with --paginate when invoking the /actions/runners endpoint and
change the jq filter to select runners where .status == "online" and .busy ==
false before taking the length (the existing RUNNERS=$(gh api ... --jq
'[.runners[] | select(.status == "online")] | length') should be adjusted to use
--paginate and include the .busy check).
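The idle-runner filter can be sanity-checked locally against a sample payload (assumes `jq` is installed; runner names are made up). Note that with `--paginate --slurp` the pages are wrapped in an outer array, hence `.[].runners[]` in the suggestion; the single-page sample below uses plain `.runners[]`:

```shell
# Sample of the API response shape: two runners online, one of them busy.
SAMPLE='{"runners":[
  {"name":"runner-a","status":"online","busy":false},
  {"name":"runner-b","status":"online","busy":true},
  {"name":"runner-c","status":"offline","busy":false}
]}'

# Count only idle online runners, mirroring the suggested --jq filter.
IDLE=$(echo "$SAMPLE" | jq '[.runners[] | select(.status == "online" and .busy == false)] | length')
echo "$IDLE"  # -> 1
```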
> When `submoduleProfilePath` is set, orchestrator reads the profile before any git operations and
> initializes only the listed submodules. Skipped submodules are never cloned, fetched, or touched.
Narrow this to submodule git operations.
The profile cannot be read before any git activity if it lives in the repository. “Before any submodule git operations” matches the behavior described here without overstating it.
Suggested wording

```diff
-When `submoduleProfilePath` is set, orchestrator reads the profile before any git operations and
+When `submoduleProfilePath` is set, orchestrator reads the profile before any submodule git operations and
```

📝 Committable suggestion
> When `submoduleProfilePath` is set, orchestrator reads the profile before any submodule git operations and
> initializes only the listed submodules. Skipped submodules are never cloned, fetched, or touched.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/07-advanced-topics/16-monorepo-support.mdx`
around lines 110 - 111, Update the sentence that mentions submoduleProfilePath
to narrow its scope to submodule git operations: replace "before any git
operations" with "before any submodule git operations" and keep the rest of the
sentence about initializing only the listed submodules and skipped submodules
never being cloned, fetched, or touched; refer to the symbol
submoduleProfilePath and the orchestrator behavior when making this wording
change.
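A hedged sketch of that selective-initialization behavior (the profile format of one submodule path per line is an assumption, and the dry-run `echo` stands in for the real `git submodule update --init` call):

```shell
# Illustrative profile file listing only the submodules to initialize.
PROFILE="$(mktemp)"  # stand-in for the real submoduleProfilePath
printf '%s\n' "Plugins/Analytics" "Art/Characters" > "$PROFILE"

# Initialize only the listed submodules; everything else is never touched.
while IFS= read -r submodule; do
  [ -z "$submodule" ] && continue
  # Dry run: the real implementation would run
  #   git submodule update --init -- "$submodule"
  echo "init $submodule"
done < "$PROFILE"
```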
```yaml
- name: Validate submodule profiles
  run: |
    .\automation\ValidateSubmoduleProfiles.ps1 \
      -ProfileDir config/submodule-profiles \
      -GitmodulesPath .gitmodules
  shell: powershell
```
🧩 Analysis chain
🌐 Web query:
Does GitHub Actions `shell: powershell` accept `\` as a line continuation in run blocks, and is `pwsh` the recommended cross-platform PowerShell shell name?
💡 Result:
No. In a GitHub Actions run: | block, the content is executed by the selected shell, so with shell: powershell (Windows PowerShell) a trailing \ is not a line-continuation character—PowerShell’s explicit line continuation is the backtick (`), and PowerShell also supports implicit continuation in syntactically “unfinished” constructs (e.g., inside (...), [...], {...}, after |, etc.). [2]
Yes. pwsh is the recommended cross-platform PowerShell shell name in GitHub Actions (PowerShell Core / PowerShell 7+), while powershell refers to Windows PowerShell (Desktop) and is Windows-only. GitHub’s shell table distinguishes pwsh (PowerShell Core) from powershell (PowerShell Desktop). [1]
Sources:
[1] GitHub Docs – workflow shell definitions (pwsh vs powershell) https://docs.github.com/fr/enterprise-server%403.16/actions/reference/github_token-reference
[2] Microsoft Learn – PowerShell parsing / backtick line continuation https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_parsing?view=powershell-7.4&viewFallbackFrom=powershell-7.2
Fix the PowerShell example before publishing.
This run block uses bash-style \ continuations, but PowerShell requires backticks (`) for line continuation. With shell: powershell, the example will fail. Additionally, use shell: pwsh for cross-platform compatibility (PowerShell Core) instead of powershell (Windows-only).
Suggested fix

```diff
 - name: Validate submodule profiles
   run: |
-    .\automation\ValidateSubmoduleProfiles.ps1 \
-      -ProfileDir config/submodule-profiles \
-      -GitmodulesPath .gitmodules
-  shell: powershell
+    ./automation/ValidateSubmoduleProfiles.ps1 `
+      -ProfileDir config/submodule-profiles `
+      -GitmodulesPath .gitmodules
+  shell: pwsh
```

📝 Committable suggestion
```yaml
- name: Validate submodule profiles
  run: |
    ./automation/ValidateSubmoduleProfiles.ps1 `
      -ProfileDir config/submodule-profiles `
      -GitmodulesPath .gitmodules
  shell: pwsh
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/07-advanced-topics/16-monorepo-support.mdx`
around lines 237 - 243, The PowerShell example in the "Validate submodule
profiles" run block uses bash-style backslashes and an OS-specific shell; change
the continuation characters to PowerShell backticks (`) for the multiline
command and switch the step's shell from "powershell" to "pwsh" for
cross-platform PowerShell Core compatibility while keeping the same parameters
(-ProfileDir config/submodule-profiles and -GitmodulesPath .gitmodules) and the
same script invocation name ("Validate submodule profiles").
```
game-ci (@game-ci/orchestrator) v3.0.0
Node.js v20.5.1
Platform: win32 x64
```
🧩 Analysis chain
🌐 Web query:
What is the latest version of @game-ci/orchestrator npm package?
💡 Result:
I couldn’t find any public npm package published under the name @game-ci/orchestrator, so there’s no “latest version” I can reliably report from the npm registry.
GameCI’s Orchestrator appears to be provided/used via GitHub Actions workflows (e.g., uses: game-ci/unity-builder@main) in their documentation rather than as an npm package. [1]
Sources:
[1] (game.ci)
🏁 Script executed (repository: game-ci/documentation):

```shell
# Read the specific file to see the full context
cat -n "docs/03-github-orchestrator/08-cli/04-other-commands.mdx" | sed -n '125,140p'
```

🏁 Script executed:

```shell
# Search for version definitions in the repository
find . -type f \( -name "package.json" -o -name "version.json" -o -name "_version*" -o -name ".version*" \) | head -20
```

🏁 Script executed:

```shell
# Check if there's a version or changelog that indicates what version this PR is for
find . -type f \( -name "CHANGELOG*" -o -name "HISTORY*" -o -name "VERSION*" \) | head -20
```

🏁 Script executed:

```shell
# Check the documentation repository's package.json
cat package.json | grep -A 2 -B 2 '"version"'
```

🏁 Script executed:

```shell
# Search for version references in the documentation
rg "v3\.0\.0|v2\.0\.0|3\.0\.0|2\.0\.0" --type mdx --type md docs/ | head -30
```

🏁 Script executed:

```shell
# Check if there's a README or version file in the orchestrator references
find docs -type f -name "*.mdx" -o -name "*.md" | xargs rg "@game-ci/orchestrator" -l
```

🏁 Script executed:

```shell
# Search for version numbers in orchestrator docs
rg "v3|v2|2\.0|3\.0" docs/03-github-orchestrator/ -A 2 -B 2
```

🏁 Script executed:

```shell
# Check orchestrator introduction/getting-started for version context
cat -n docs/03-github-orchestrator/01-introduction.mdx | head -50
```

🏁 Script executed:

```shell
# Search for any LTS or version release information in docs
rg "LTS|release|version" docs/03-github-orchestrator/ | head -20
```
Update the example CLI version output to match the LTS 2.0.0 release.

The example at line 130 shows v3.0.0, but other documentation examples and this PR target LTS 2.0.0. Change the version output example to v2.0.0 for consistency.

Reference: other version examples in the documentation:

- `GAME_CI_VERSION=v2.0.0` in the CLI getting-started guide
- The update command example shows v2.1.0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/03-github-orchestrator/08-cli/04-other-commands.mdx` around lines 129-133: update the CLI example output that currently reads "game-ci (`@game-ci/orchestrator`) v3.0.0" to "game-ci (`@game-ci/orchestrator`) v2.0.0" so the documentation example matches the LTS 2.0.0 release; locate the code block showing the Node/Platform output (the triple-backtick example with game-ci and Node.js) and replace the version string only.
Replace old "git clone unity-builder / yarn run cli" instructions with the proper game-ci CLI install and usage from the orchestrator package. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
♻️ Duplicate comments (1)
docs/03-github-orchestrator/03-examples/01-command-line.mdx (1)
Lines 70-78: ⚠️ Potential issue | 🟠 Major

Point this example at the current secrets flow.

Line 71 still links to a fragment that does not exist in docs/03-github-orchestrator/06-secrets.mdx, and lines 76-78 still demonstrate the legacy `inputPullCommand` path instead of the recommended `secretSource` flow. As written, this section will both fail the internal-link check and send readers to an outdated example.

Suggested update:

```diff
-Avoid long CLI flags for credentials by using environment variables or the
-[Pull Secrets](../secrets#-pulling-secrets-from-external-sources) feature:
+Avoid long CLI flags for credentials by using environment variables or the
+[Secrets](../secrets) feature:

 game-ci build \
   --target-platform StandaloneLinux64 \
-  --populate-override true \
+  --secret-source gcp-secret-manager \
   --pull-input-list UNITY_EMAIL,UNITY_SERIAL,UNITY_PASSWORD \
-  --input-pull-command='gcloud secrets versions access 1 --secret="{0}"'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/03-github-orchestrator/03-examples/01-command-line.mdx` around lines 70-78: update the "Pull Secrets" link to point at the current secrets documentation fragment (the canonical Pull Secrets section), and replace the legacy `inputPullCommand` example with the recommended `secretSource` flow: remove the deprecated `inputPullCommand` pattern and instead show `--pull-input-list` together with the `secretSource` configuration, so the example matches the current secrets flow and passes the internal-link check.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0ab57570-8447-4507-9741-8f04160ee986
📒 Files selected for processing (1)
docs/03-github-orchestrator/03-examples/01-command-line.mdx
…tor docs

- Enable @docusaurus/theme-mermaid for native diagram rendering
- Convert all ASCII box-drawing diagrams to mermaid flowcharts (22 files)
- Fix broken admonition syntax (:::info/:::caution blocks)
- Rewrite getting-started page with clear GitHub Actions and CLI sections

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Reposition orchestrator as advanced layer on top of unity-builder
- Emphasize benefits for projects of any size, not just large ones
- Add self-hosted runner complementarity (failover, load balancing)
- Expand "What Orchestrator Handles" with full lifecycle details
- Add "Choosing Your Setup" decision matrix to comparison page

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add webpack alias to redirect cytoscape UMD import to CJS bundle
- Fix broken markdown link to unity-builder (use GitHub URL)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove bogus npm install step from getting-started (orchestrator is built into unity-builder, no separate install needed)
- Add dispatch, experimental, and additional providers to overview page
- Clarify orchestrator is built-in and activates via providerStrategy

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace "Adolescent" maturity label with "Stable" for clearer terminology. Rename "Built-in Dimensions" to "Example Dimensions" and add extensibility note to emphasize the taxonomy is a starting point that projects can fully customize. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ation

Add documentation for the new EnginePlugin interface that allows the orchestrator to support non-Unity engines (Godot, Unreal, custom).

- New page: Advanced Topics > Engine Plugins, a full guide covering the interface, plugin sources (npm, CLI, Docker), and authoring plugins
- Updated introduction to mention engine agnosticism
- Updated caching page to reference engine-aware cache folders
- Added engine/enginePlugin to API reference parameters
- Added --engine and --engine-plugin to CLI build command docs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Engine plugin documentation added via #544 (now merged into this branch). Covers the new EnginePlugin interface, plugin sources (npm, CLI, Docker), and updates to introduction, caching, API reference, and CLI docs. Related orchestrator PR: game-ci/orchestrator#4
Sections removed during README restructure, to be migrated to the documentation site (game-ci/documentation#541). Delete this file once that PR is merged. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update introduction, getting-started, and comparison pages to describe the orchestrator as taking whatever hardware you give it, rather than framing it as three distinct types (cloud, self-hosted, local). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update all instances across docs and versioned docs to use "machines" instead of "hardware" for clearer, more approachable language. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Revert em dash changes outside orchestrator subfolder. Fix remaining "hardware" references in orchestrator docs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…pulling workaround

Add a troubleshooting entry for the Container Overrides 8192-byte limit that AWS ECS/Fargate users can hit with complex workflows. Document the connection between using secretSource/pullInputList and reducing the override payload size. Cross-link from AWS provider docs and secrets docs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
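The size math behind that workaround can be sketched as follows. This is an illustrative calculation only, not orchestrator code; the `overrideSize` helper and the field values are hypothetical, modeled on the JSON shape of an ECS `containerOverrides` payload:

```typescript
// Illustrative only: approximate the serialized size of an ECS containerOverrides
// payload. AWS rejects overrides whose JSON exceeds 8192 bytes, so passing a large
// secret inline can fail, while passing only the *names* of secrets to pull at
// runtime (the secretSource/pullInputList approach) keeps the payload small.
function overrideSize(env: Record<string, string>): number {
  const overrides = {
    containerOverrides: [
      {
        name: 'unity-builder',
        environment: Object.entries(env).map(([name, value]) => ({ name, value })),
      },
    ],
  };
  return JSON.stringify(overrides).length;
}

// A large inline secret easily blows past the 8192-byte limit...
const inline = overrideSize({ UNITY_LICENSE: 'x'.repeat(9000) });
// ...while pulling it inside the container keeps the override tiny.
const pulled = overrideSize({ PULL_INPUT_LIST: 'UNITY_LICENSE' });
```

The same reasoning explains why the workaround scales: every secret moved out of the inline environment removes its full value from the payload, leaving only a short name behind.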
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Show failure count (e.g., "15/15") next to failed build status icons. Add a reset button (admin-only) that calls the new resetFailedBuilds backend endpoint to clear inflated failure counts so the Ingeminator can retry them automatically. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…#548

The 8192 troubleshooting entry and build reset button UI changes are now in a separate PR (#548) targeting main. The orchestrator-specific docs (AWS provider troubleshooting section and secrets tip callout) remain here, since those files only exist on this branch.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This PR now only contains the reset button UI changes. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add 8192 troubleshooting entry and reset button for maxed-out builds

- Add troubleshooting entry for the AWS ECS containerOverrides 8192-byte limit with explanation and secret-pulling workaround
- Show failure count (e.g. "15/15") next to failed build status icons
- Add admin-only reset button that calls resetFailedBuilds endpoint to clear inflated failure counts so the Ingeminator retries them

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* remove troubleshooting entry (moved to orchestrator LTS 2.0.0 PR #541)

This PR now only contains the reset button UI changes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
# Preview
Summary
Combined documentation update for the orchestrator LTS 2.0.0 release. Updated to reflect the standalone `@game-ci/orchestrator` package architecture.

Key changes

- `@game-ci/orchestrator` package (not unity-builder)

Consolidates
Also incorporates content from closed unity-builder PRs:
Standalone architecture
The orchestrator is now a standalone package at `game-ci/orchestrator`. When installed, unity-builder automatically loads it via the plugin interface.

Cross-references
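A minimal sketch of what such an optional plugin handshake could look like. All names here (`loadOrchestrator`, `OrchestratorPlugin`) are hypothetical illustrations under the stated assumption of dynamic import, not the actual unity-builder API:

```typescript
// Hypothetical sketch: resolve an optional orchestrator package at runtime
// and fall back to the standard build flow when it is not installed.
type OrchestratorPlugin = { run: (targetPlatform: string) => Promise<void> };

async function loadOrchestrator(
  pkg: string = '@game-ci/orchestrator',
): Promise<OrchestratorPlugin | null> {
  try {
    const mod = await import(pkg); // rejects when the package is absent
    return (mod.default ?? mod) as OrchestratorPlugin;
  } catch {
    return null; // not installed: caller uses the built-in unity-builder path
  }
}
```

The design choice this pattern captures is that installing the package is the only opt-in step: no config flag is required for discovery, and the absence of the package degrades cleanly to the standard flow.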
Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit