LTS Infrastructure -- Combined Integration #816
Conversation
…ule profiles, caching, LFS, hooks

Add generic enterprise-grade features to the orchestrator, enabling Unity projects with complex CI/CD pipelines to adopt game-ci/unity-builder with built-in support for:

- CLI provider protocol: JSON-over-stdin/stdout bridge enabling providers in any language (Go, Python, Rust, shell) via the `providerExecutable` input
- Submodule profiles: YAML-based selective submodule initialization with glob patterns and variant overlays (`submoduleProfilePath`, `submoduleVariantPath`)
- Local build caching: Filesystem-based Library and LFS caching for local builds without external cache actions (`localCacheEnabled`, `localCacheRoot`)
- Custom LFS transfer agents: Register external transfer agents like elastic-git-storage (`lfsTransferAgent`, `lfsTransferAgentArgs`, `lfsStoragePaths`)
- Git hooks support: Detect and install lefthook/husky with configurable skip lists (`gitHooksEnabled`, `gitHooksSkipList`)

Also removes all `orchestrator-develop` branch references, replacing them with `main`. 13 new action inputs, 13 new files, 14 new CLI provider tests, 17 submodule tests, plus cache/LFS/hooks unit tests. All 452 tests pass.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
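The commit above names a JSON-over-stdin/stdout bridge but not its wire format. The envelope below (`command`/`payload` request fields, `ok`/`output`/`error` response fields) is a hypothetical sketch of how such a bridge can frame newline-delimited messages, not the actual protocol behind `providerExecutable`:

```typescript
// Hypothetical sketch of a JSON-over-stdin/stdout provider bridge.
// The envelope fields below are illustrative assumptions; the real
// protocol used by `providerExecutable` may differ.

interface ProviderRequest {
  command: string; // e.g. 'setupWorkflow', 'runTaskInWorkflow' (assumed names)
  payload: Record<string, unknown>; // build parameters serialized as JSON
}

interface ProviderResponse {
  ok: boolean;
  output?: string;
  error?: string;
}

// Serialize one request per line (newline-delimited JSON keeps the stream parseable).
function encodeRequest(request: ProviderRequest): string {
  return `${JSON.stringify(request)}\n`;
}

// Parse a single response line from the provider's stdout.
function decodeResponse(line: string): ProviderResponse {
  const parsed = JSON.parse(line.trim());
  if (typeof parsed.ok !== 'boolean') {
    throw new Error('Provider response is missing the boolean "ok" field');
  }
  return parsed as ProviderResponse;
}
```

Under this framing, a provider written in Go, Python, Rust, or shell only needs to read one JSON object per line from stdin and write one per line to stdout.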
…iders

Add two new cloud provider implementations for the orchestrator, both marked as experimental:

- **GCP Cloud Run Jobs** (`providerStrategy: gcp-cloud-run`): Executes Unity builds as Cloud Run Jobs with GCS FUSE for large artifact storage. Supports configurable machine types, service accounts, and VPC connectors. 7 new inputs (gcpProject, gcpRegion, gcpBucket, gcpMachineType, gcpDiskSizeGb, gcpServiceAccount, gcpVpcConnector).
- **Azure Container Instances** (`providerStrategy: azure-aci`): Executes Unity builds as ACI containers with Azure File Shares (Premium FileStorage) for large artifact storage up to 100 TiB. Supports configurable CPU/memory, VNet integration, and subscription targeting. 9 new inputs (azureResourceGroup, azureLocation, azureStorageAccount, azureFileShareName, azureSubscriptionId, azureCpu, azureMemoryGb, azureDiskSizeGb, azureSubnetId).

Both providers use their respective CLIs (gcloud, az) for infrastructure management and support garbage collection of old build resources. No tests are included, as these require real cloud infrastructure to validate.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Both providers now support four storage backends via gcpStorageType / azureStorageType:

GCP Cloud Run:
- gcs-fuse: Mount GCS bucket as POSIX filesystem (unlimited, best for large sequential I/O)
- gcs-copy: Copy artifacts in/out via gsutil (simpler, no FUSE overhead)
- nfs: Filestore NFS mount (true POSIX, good random I/O, up to 100 TiB)
- in-memory: tmpfs (fastest, volatile, up to 32 GiB)

Azure ACI:
- azure-files: SMB file share mount (up to 100 TiB, premium throughput)
- blob-copy: Copy artifacts in/out via az storage blob (no mount overhead)
- azure-files-nfs: NFS 4.1 file share mount (true POSIX, no SMB lock overhead)
- in-memory: emptyDir tmpfs (fastest, volatile, limited by container memory)

New inputs: gcpStorageType, gcpFilestoreIp, gcpFilestoreShare, azureStorageType, azureBlobContainer. The constructor validates the storage config and warns on missing prerequisites (e.g. NFS requires a VPC connector/subnet).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ity check

Adds built-in load balancing: check GitHub runner availability before builds start and auto-route to a fallback provider when runners are busy or offline. This eliminates the need for a separate check-runner job. New inputs: fallbackProviderStrategy, runnerCheckEnabled, runnerCheckLabels, runnerCheckMinAvailable. New outputs providerFallbackUsed and providerFallbackReason give workflows visibility into fallback decisions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
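The availability check described above can be sketched as a pure decision function. The field names (`status`, `busy`, `labels`) mirror GitHub's self-hosted runner API, but the exact semantics of the real RunnerAvailabilityService are assumptions:

```typescript
// Hypothetical sketch of the fallback decision; the real service's field
// names, thresholds, and edge-case handling may differ.

interface RunnerInfo {
  status: string; // 'online' | 'offline'
  busy: boolean;
  labels: string[];
}

interface FallbackDecision {
  fallback: boolean;
  reason: string;
}

// A runner counts as available when it is online, idle, and carries every
// required label (compared case-insensitively).
function shouldFallback(runners: RunnerInfo[], requiredLabels: string[], minAvailable: number): FallbackDecision {
  const wanted = requiredLabels.map((label) => label.toLowerCase());
  const available = runners.filter(
    (runner) =>
      runner.status === 'online' &&
      !runner.busy &&
      wanted.every((label) => runner.labels.some((runnerLabel) => runnerLabel.toLowerCase() === label)),
  );
  if (available.length >= minAvailable) {
    return { fallback: false, reason: `${available.length} matching runner(s) available` };
  }
  return { fallback: true, reason: `only ${available.length} of ${minAvailable} required runner(s) available` };
}
```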
Adds retryOnFallback (retry failed builds on an alternate provider) and providerInitTimeout (swap providers if init takes too long). Refactors run() into run()/runWithProvider() to support the retry loop.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds tests for cache hit restore (picks latest tar), LFS cache restore/save, garbage collection age filtering, and edge cases like permission errors and empty directories. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Covers: no token skip, no runners fallback, busy/offline runners, label filtering (case-insensitive), minAvailable threshold, fail-open on API error, mixed runner states. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds 64 new mock-based unit tests covering orchestrator services that previously had zero test coverage:

- TaskParameterSerializer: env var format conversion, round-trip, uniqBy deduplication, blocked params, default secrets
- FollowLogStreamService: build output message parsing — end of transmission, build success/failure detection, error accumulation, Library rebuild detection
- OrchestratorNamespace (guid): GUID generation format, platform name normalization, nanoid uniqueness
- OrchestratorFolders: path computation for all folder getters, ToLinuxFolder conversion, repo URL generation, purge flag detection

All tests are pure mock-based and run without any external infrastructure (no LocalStack, K8s, Docker, or AWS).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds a fast-fail unit test step at the top of orchestrator-integrity, right after yarn install and before any infrastructure setup (k3d, LocalStack). Runs 113 mock-based orchestrator tests in ~5 seconds. If serialization, path computation, log parsing, or provider loading is broken, the workflow fails immediately instead of spending 30+ minutes setting up LocalStack and k3d clusters. Tests included: orchestrator-guid, orchestrator-folders, task-parameter-serializer, follow-log-stream-service, runner-availability-service, provider-url-parser, provider-loader, provider-git-manager, orchestrator-image, orchestrator-hooks, orchestrator-github-checks. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add comprehensive tests for CLI provider (cleanupWorkflow, garbageCollect, listWorkflow, watchWorkflow, stderr forwarding, timeout handling), local cache service (saveLfsCache full path and error handling), git hooks service (husky install, failure logging, edge cases), and LFS agent service (empty storagePaths, validate logging). 73 tests across 4 test files. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace token-in-URL pattern with http.extraHeader for git clone and LFS operations. The token no longer appears in clone URLs, git remote config, or process command lines. Add gitAuthMode input (default: 'header', legacy: 'url') so users can fall back to the old behavior if needed. Closes #785 Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
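A minimal sketch of the two auth modes, assuming an `x-access-token` basic-auth scheme; the real implementation wires this into the checkout and LFS commands and may differ in detail:

```typescript
// Sketch of gitAuthMode: 'header' vs legacy 'url'. The helper name and the
// x-access-token username are illustrative assumptions.

function buildCloneArgs(repoUrl: string, token: string, mode: 'header' | 'url'): string[] {
  if (mode === 'url') {
    // Legacy behavior: token embedded in the clone URL, so it leaks into
    // process command lines and the saved remote config.
    const urlWithToken = repoUrl.replace('https://', `https://x-access-token:${token}@`);
    return ['clone', urlWithToken];
  }
  // Default 'header' mode: pass the token via http.extraHeader so it never
  // appears in the URL or the remote config.
  const basicCredential = Buffer.from(`x-access-token:${token}`).toString('base64');
  return ['-c', `http.extraHeader=Authorization: Basic ${basicCredential}`, 'clone', repoUrl];
}
```

With `-c`, the extra header applies only to this invocation and is never written to the repository's config.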
Add SecretSourceService with premade secret source integrations:
- aws-secrets-manager (with --query SecretString for direct value)
- aws-parameter-store (with --with-decryption)
- gcp-secret-manager (latest version)
- azure-key-vault (via $AZURE_VAULT_NAME env var)
- env (environment variables, no shell command needed)
- Custom commands (any string with {0} placeholder)
- YAML file definitions for custom sources
Add secretSource input that takes precedence over inputPullCommand.
Backward compatible — existing inputPullCommand behavior unchanged.
Closes #776
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
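The premade sources above can be pictured as command templates keyed by name, with `{0}` as the secret-key placeholder. The exact strings baked into SecretSourceService are not shown here, so the templates below are illustrative (though the CLI flags match the ones the commit mentions):

```typescript
// Illustrative command templates; the strings actually shipped in
// SecretSourceService may differ. '{0}' marks the secret key placeholder.
const SECRET_SOURCES: Record<string, string> = {
  'aws-secrets-manager': 'aws secretsmanager get-secret-value --secret-id {0} --query SecretString --output text',
  'aws-parameter-store': 'aws ssm get-parameter --name {0} --with-decryption --query Parameter.Value --output text',
  'gcp-secret-manager': 'gcloud secrets versions access latest --secret={0}',
  'azure-key-vault': 'az keyvault secret show --vault-name "$AZURE_VAULT_NAME" --name {0} --query value --output tsv',
};

// Resolve a source name (or a raw custom command containing '{0}') to the
// final shell command for a given secret key.
function buildPullCommand(source: string, secretKey: string): string {
  const template = SECRET_SOURCES[source] ?? source;
  if (!template.includes('{0}')) {
    throw new Error(`Secret source '${source}' has no {0} placeholder`);
  }
  return template.replace('{0}', secretKey);
}
```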
Adds three Vault entries: hashicorp-vault (KV v2), hashicorp-vault-kv1 (KV v1), and vault (short alias). Uses VAULT_ADDR for server address and VAULT_MOUNT env var for configurable mount path (defaults to 'secret'). Refs #776 Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
First-class support for elastic-git-storage as a custom LFS transfer agent. When lfsTransferAgent is set to "elastic-git-storage" (or "elastic-git-storage@v1.0.0" for a specific version), the service automatically finds or installs the agent from GitHub releases, then configures it via git config. Version pinning uses the @version suffix in the agent value, eliminating the need for a separate version parameter. Platform and architecture detection handles linux/darwin/windows on amd64/arm64. 37 unit tests cover detection, PATH lookup, installation, version parsing, and configuration delegation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
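Version pinning via the @version suffix can be sketched as a small parser; the helper name and return shape are illustrative, not the service's actual API:

```typescript
// Sketch of '@version' pinning, e.g. "elastic-git-storage@v1.0.0".
// A missing suffix means "use the latest release".
function parseAgentSpec(value: string): { name: string; version: string | null } {
  const atIndex = value.indexOf('@');
  if (atIndex <= 0) {
    return { name: value, version: null }; // no pin -> latest release
  }
  return { name: value.slice(0, atIndex), version: value.slice(atIndex + 1) };
}
```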
Built-in support for Unity Git Hooks (com.frostebite.unitygithooks):

- Auto-detect UPM package in Packages/manifest.json
- Run init-unity-lefthook.js before hook installation
- Set CI-friendly env vars (disable background project mode)

New gitHooksRunBeforeBuild input runs specific lefthook groups before the Unity build, allowing CI to trigger pre-commit or pre-push checks that normally only fire on git events. 35 unit tests covering detection, init, CI env, group execution, and failure handling.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… Actions, GitLab CI, Ansible

Add four new providers that delegate builds to external CI platforms:

- remote-powershell: Execute on remote machines via WinRM/SSH
- github-actions: Dispatch workflow_dispatch on target repository
- gitlab-ci: Trigger pipeline via GitLab API
- ansible: Run playbooks against managed inventory

Each follows the CI-as-a-provider pattern: trigger the remote job, pass build parameters, stream logs, report status.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ved filename cleanup, archival

Add three optional reliability features for hardening CI pipelines:

- Git corruption detection & recovery (fsck, stale lock cleanup, submodule backing store validation, auto-recovery)
- Reserved filename cleanup (removes Windows device names that cause Unity asset importer infinite loops)
- Build output archival with configurable retention policy

All features are opt-in and fail gracefully with warnings only.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…rity, reserved filename cleanup, and build archival

Adds BuildReliabilityService with the following capabilities:

- checkGitIntegrity(): runs git fsck --no-dangling and parses output for corruption
- cleanStaleLockFiles(): removes stale .lock files older than 10 minutes
- validateSubmoduleBackingStores(): validates .git files point to valid backing stores
- recoverCorruptedRepo(): orchestrates fsck, lock cleanup, re-fetch, retry fsck
- cleanReservedFilenames(): removes Windows reserved filenames (con, prn, aux, nul, com1-9, lpt1-9)
- archiveBuildOutput(): creates tar.gz archive of build output
- enforceRetention(): deletes archives older than retention period
- configureGitEnvironment(): sets GIT_TERMINAL_PROMPT=0, http.postBuffer, core.longpaths

Wired into action.yml as opt-in inputs, with pre-build integrity checks and post-build archival in the main entry point. Includes 29 unit tests covering success and failure cases for all methods.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
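The reserved-filename check amounts to matching base names against the Windows device-name list, ignoring case and any extension, which is how Windows itself treats device names. A sketch (the helper name is assumed, not the service's actual method):

```typescript
// Sketch of the Windows reserved-name check: con, prn, aux, nul, com1-9, lpt1-9.
// Windows reserves these regardless of case or extension (e.g. "AUX.txt").
function isReservedFilename(fileName: string): boolean {
  const baseName = fileName.split('.')[0].toLowerCase();
  return /^(con|prn|aux|nul|com[1-9]|lpt[1-9])$/.test(baseName);
}
```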
…Lab CI, PowerShell, and Ansible providers (#806)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
#777) Implement two-level workspace isolation pattern for enterprise-scale CI:

- Atomic O(1) workspace restore via filesystem move (no tar/download/extract)
- Separate Library caching for independent restore
- .git preservation for delta operations
- Stale workspace cleanup with configurable retention policies
- 5 new action inputs: childWorkspacesEnabled, childWorkspaceName, childWorkspaceCacheRoot, childWorkspacePreserveGit, childWorkspaceSeparateLibrary
- 28 unit tests covering all service methods

This enables enterprise CI where workspaces are 50GB+ and traditional caching via actions/cache is impractical. On NTFS, workspace restore is O(1) via atomic rename when source and destination are on the same volume.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Prevent builds from hanging indefinitely when the CLI provider subprocess is unresponsive. Default 2h for runTaskInWorkflow, 1h for watchWorkflow. Graceful SIGTERM with 10s grace before SIGKILL.

- Added RUN_TASK_TIMEOUT_MS (2 hours) and WATCH_WORKFLOW_TIMEOUT_MS (1 hour)
- Added gracefulKill helper: SIGTERM first, SIGKILL after 10s grace period
- runTaskInWorkflow and watchWorkflow now have timeout protection
- Existing execute() method upgraded to use gracefulKill
- core.error() called with clear human-readable timeout message
- Added comprehensive tests: timeout triggers, SIGKILL escalation, grace period cancellation on voluntary exit, normal completion

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
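The gracefulKill helper described above can be sketched as follows; the `KillableProcess` shape is an assumption made so the function accepts either a real ChildProcess or a test double:

```typescript
// Hypothetical sketch of SIGTERM-then-SIGKILL escalation. The real helper's
// signature may differ; 'proc' only needs kill() and an exit listener.

interface KillableProcess {
  kill(signal: 'SIGTERM' | 'SIGKILL'): void;
  once(event: 'exit', listener: () => void): void;
}

function gracefulKill(proc: KillableProcess, graceMs = 10_000): void {
  proc.kill('SIGTERM'); // ask politely first
  const escalation = setTimeout(() => proc.kill('SIGKILL'), graceMs);
  // Cancel the SIGKILL if the process exits voluntarily within the grace period.
  proc.once('exit', () => clearTimeout(escalation));
}
```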
…lues

- Validate secret key names against an alphanumeric allowlist before shell interpolation
- Apply validation in both SecretSourceService.fetchSecret() and legacy queryOverride()
- Mask fetched secret values with core.setSecret() to prevent log exposure
- Add 20 new tests for validation and masking

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
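A sketch of the allowlist validation; the exact character set SecretSourceService accepts is an assumption (word characters plus `.`, `/`, `-`):

```typescript
// Sketch of key-name validation before shell interpolation. The allowed
// character set here is an assumption; the real allowlist may be narrower.
function validateSecretKeyName(key: string): void {
  if (!/^[\w./-]+$/.test(key)) {
    throw new Error(`Refusing to interpolate secret key '${key}' into a shell command`);
  }
}
```

Rejecting anything outside the allowlist means shell metacharacters like `$(…)`, backticks, and `;` never reach the interpolated pull command.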
Cap pagination at 100 pages (10,000 runners max), detect GitHub API rate limiting (403/429) with reset time reporting, add 30-second total timeout for pagination loop. Log clear diagnostic when no runners found suggesting possible causes (token permissions, runner registration). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Check available disk space (cross-platform: wmic/df) before archive operations to prevent data loss on full disks. Skip archival with warning if insufficient space (10% safety margin). Clean up partial archives on tar failure. Proceed with warning when space check fails. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
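The 10% safety margin reduces to a one-line comparison; the helper name and the MB units are illustrative:

```typescript
// Sketch of the pre-archive space check: require the estimated archive size
// plus a safety margin (default 10%) to fit in the available space.
function hasEnoughSpaceForArchive(availableMb: number, estimatedArchiveMb: number, safetyMargin = 0.1): boolean {
  return availableMb >= estimatedArchiveMb * (1 + safetyMargin);
}
```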
…e dependencies

- GitHub Actions: max 4-hour polling with a clear timeout error including the run URL
- GitLab CI: max 4-hour polling with a clear timeout error including the pipeline URL
- Remote PowerShell: fix credential split to preserve passwords with colons (split on first colon only instead of all colons)
- Remote PowerShell: throw a clear error when the credential format is invalid
- Ansible: validate the ansible-playbook binary exists in setupWorkflow (separate from the ansible --version check)
- All timeout errors use core.error() for GitHub Actions annotation visibility

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
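The first-colon-only credential split can be sketched as below; the helper name and the exact invalid-format rules (empty username or password) are assumptions:

```typescript
// Sketch of the 'user:pass' split fix: cut on the FIRST colon only, so
// passwords containing colons survive intact.
function parseCredential(credential: string): { username: string; password: string } {
  const separatorIndex = credential.indexOf(':');
  if (separatorIndex <= 0 || separatorIndex === credential.length - 1) {
    throw new Error("Credential must be in 'username:password' format");
  }
  return {
    username: credential.slice(0, separatorIndex),
    password: credential.slice(separatorIndex + 1),
  };
}
```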
The monolithic orchestrator-integrity workflow runs 25+ tests sequentially in a single job, consistently hitting the 60-minute timeout on PR runs. Split into 4 parallel jobs (k8s, aws-provider, local-docker, rclone) each on its own runner, cutting wall-clock time from 3+ hours to ~1 hour and eliminating disk space exhaustion from shared runner contention. Adopts the parallel architecture from PR #809. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… release/lts-infrastructure # Conflicts: # dist/index.js.map
… into release/lts-infrastructure # Conflicts: # .github/workflows/orchestrator-integrity.yml # dist/index.js.map
…release/lts-infrastructure # Conflicts: # action.yml # dist/index.js # dist/index.js.map # src/model/build-parameters.ts # src/model/input.ts
… into release/lts-infrastructure # Conflicts: # dist/index.js.map # src/model/build-parameters.ts # src/model/orchestrator/options/orchestrator-options.ts
…into release/lts-infrastructure # Conflicts: # action.yml # dist/index.js # dist/index.js.map
…s' into release/lts-infrastructure # Conflicts: # action.yml # dist/index.js.map # src/model/input.ts # src/model/orchestrator/providers/provider-loader.ts
…nto release/lts-infrastructure # Conflicts: # action.yml # dist/index.js # dist/index.js.map # src/model/build-parameters.ts # src/model/input.ts # src/model/orchestrator/orchestrator.ts
…p' into release/lts-infrastructure # Conflicts: # .github/workflows/orchestrator-integrity.yml # dist/index.js.map
📝 Walkthrough

Large feature addition: many new providers (CLI, Remote PowerShell, Ansible, GitHub Actions, GitLab CI, GCP Cloud Run, Azure ACI), extensive enterprise inputs and wiring, new cache/child-workspace/LFS/hooks/reliability/secret services, provider selection/fallback and runner-availability checks, many tests, CLI commands, and CI/workflow bootstrap changes (default branch -> main).

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant GitHub as GitHub Actions
    participant Index as index.ts
    participant Reliability as BuildReliabilityService
    participant ChildWS as ChildWorkspaceService
    participant Submodule as SubmoduleProfileService
    participant LFS as LfsAgentService
    participant Hooks as GitHooksService
    participant Cache as LocalCacheService
    participant Orch as Orchestrator
    participant Provider as Selected Provider
    GitHub->>Index: trigger run with inputs
    Index->>Reliability: checkGitIntegrity()
    Reliability-->>Index: integrity status
    Index->>ChildWS: initializeWorkspace()
    ChildWS-->>Index: restored?
    Index->>Submodule: init submodules (optional)
    Submodule-->>Index: plan executed
    Index->>LFS: configure LFS agent (optional)
    LFS-->>Index: configured
    Index->>Hooks: installHooks (optional)
    Hooks-->>Index: done
    Index->>Cache: restoreLibraryCache()
    Cache-->>Index: done
    Index->>Orch: run(buildParameters)
    Orch->>Provider: setupWorkflow / runTaskInWorkflow
    Provider-->>Orch: result
    Orch-->>Index: build finished
    Index->>Cache: saveLibraryCache()
    Index->>ChildWS: saveWorkspace()
    Index->>Reliability: archiveBuildOutput() (optional)
```

```mermaid
sequenceDiagram
    participant Orch as Orchestrator
    participant RunnerCheck as RunnerAvailabilityService
    participant Loader as ProviderLoader
    participant Primary as Primary Provider
    participant Fallback as Fallback Provider
    Orch->>RunnerCheck: checkAvailability(owner, repo, token, labels, min)
    RunnerCheck-->>Orch: {shouldFallback, reason}
    alt shouldFallback == true
        Orch->>Loader: load(fallbackStrategy)
        Loader-->>Orch: Fallback
        Orch->>Fallback: setupWorkflow / runTaskInWorkflow
        Fallback-->>Orch: result
    else
        Orch->>Loader: load(primaryStrategy)
        Loader-->>Orch: Primary
        Orch->>Primary: setupWorkflow / runTaskInWorkflow
        Primary-->>Orch: result
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Actionable comments posted: 24
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/model/orchestrator/tests/e2e/orchestrator-end2end-caching.test.ts (1)
24-35: ⚠️ Potential issue | 🟠 Major

This e2e case now executes `main`, not the branch under test.

Hard-coding `orchestratorBranch: 'main'` makes the remote build pull the default branch even when this suite is running on a release/PR branch. That can let the test pass while the current branch's dist/workflow changes are broken. Please source this from the checked-out ref or test inputs instead of a constant.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/tests/e2e/orchestrator-end2end-caching.test.ts` around lines 24 - 35, The test hard-codes orchestratorBranch: 'main' in the overrides object which forces remote builds to use the default branch; change the assignment of overrides.orchestratorBranch to derive the current branch dynamically (e.g., read a CI env var like process.env.GITHUB_REF_NAME / process.env.CI_COMMIT_REF_NAME or call a helper getCurrentGitBranch() that runs git rev-parse --abbrev-ref HEAD) or accept the branch via the test input, and keep 'main' only as a safe fallback; update the overrides object where orchestratorBranch is set so it uses this dynamic value instead of the literal 'main'.
🟠 Major comments (24)
src/model/orchestrator/services/core/task-parameter-serializer.test.ts-12-27 (1)
12-27: ⚠️ Potential issue | 🟠 Major

Testing anti-pattern: the mock duplicates production logic.

The `ToEnvVarFormat` mock (lines 16-25) contains an inline implementation of the conversion logic. This defeats the purpose of unit testing because:

- Tests verify the mock, not the actual production code
- If the real implementation diverges, tests may pass while production code is broken
- Maintenance burden: the logic exists in two places

Instead, either:

- Import and use the real `ToEnvVarFormat` from the module being tested
- Use a spy that calls through to the real implementation
- If mocking is required for isolation, use a simple stub without duplicating logic
♻️ Suggested refactor

```diff
 jest.mock('../../options/orchestrator-options', () => ({
   __esModule: true,
   default: {
     getInput: jest.fn().mockReturnValue(undefined),
-    ToEnvVarFormat: (input: string) => {
-      if (input.toUpperCase() === input) {
-        return input;
-      }
-      return input
-        .replace(/([A-Z])/g, ' $1')
-        .trim()
-        .toUpperCase()
-        .replace(/ /g, '_');
-    },
   },
 }));
```

Then import the actual implementation:

```ts
import { TaskParameterSerializer } from './task-parameter-serializer';
```

The tests will then verify the real implementation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/core/task-parameter-serializer.test.ts` around lines 12 - 27, The test currently mocks ToEnvVarFormat with an inline implementation inside the jest.mock for '../../options/orchestrator-options', which duplicates production logic and invalidates the test; replace this by either importing the real ToEnvVarFormat from the actual module or using a spy that calls through (e.g., jest.spyOn(require('../../options/orchestrator-options'), 'ToEnvVarFormat').mockImplementationOnce(original) / .mockImplementation to callThrough), or if isolation is required replace the mock with a simple stub that returns fixed values (no conversion logic). Locate the mock in task-parameter-serializer.test.ts and update the jest.mock block for orchestrator-options to stop embedding the conversion algorithm and instead delegate to or spy on the real ToEnvVarFormat function.

src/model/orchestrator/services/core/follow-log-stream-service.test.ts-45-49 (1)
45-49: ⚠️ Potential issue | 🟠 Major

Don't mask stale `errors` state in test setup.

Line 48 is hiding a likely cross-run bug: `Reset()` is treated as the per-run reset hook, but this suite patches `errors` manually instead of asserting that `Reset()` clears it. Since later tests show `errors` is folded into the "Build fail" check text, stale data from a previous run can leak into the next build summary.

Suggested test change

```diff
 beforeEach(() => {
   jest.clearAllMocks();
   FollowLogStreamService.Reset();
-  FollowLogStreamService.errors = '';
 });

 describe('Reset', () => {
-  it('resets DidReceiveEndOfTransmission to false', () => {
+  it('resets per-run state', () => {
     FollowLogStreamService.DidReceiveEndOfTransmission = true;
+    FollowLogStreamService.errors = '\nprevious error';
     FollowLogStreamService.Reset();
     expect(FollowLogStreamService.DidReceiveEndOfTransmission).toBe(false);
+    expect(FollowLogStreamService.errors).toBe('');
   });
 });
```

Also applies to: 51-57
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/core/follow-log-stream-service.test.ts` around lines 45 - 49, Tests are masking a stale state by directly assigning FollowLogStreamService.errors = '' in beforeEach; remove that manual assignment and ensure Reset() is the canonical reset by updating tests to call FollowLogStreamService.Reset() only and add an assertion that FollowLogStreamService.Reset() clears FollowLogStreamService.errors (e.g., expect(FollowLogStreamService.errors).toBe('')) so future runs can't leak prior errors—apply the same change to the other setup blocks around the 51-57 range that currently patch errors directly.

src/model/orchestrator/services/reliability/build-reliability-service.ts-415-424 (1)
415-424: ⚠️ Potential issue | 🟠 Major

Shell injection in `tar` command with path interpolation.

Similar to the git commands, the `archiveFile` and `sourcePath` paths are directly interpolated into a shell command. Use `execFileSync` for safer execution:

🛡️ Proposed safer implementation

```diff
-    execSync(`tar -czf "${archiveFile}" -C "${path.dirname(sourcePath)}" "${path.basename(sourcePath)}"`, {
-      encoding: 'utf8',
-      timeout: 600_000,
-      stdio: ['pipe', 'pipe', 'pipe'],
-    });
+    execFileSync('tar', ['-czf', archiveFile, '-C', path.dirname(sourcePath), path.basename(sourcePath)], {
+      encoding: 'utf8',
+      timeout: 600_000,
+      stdio: ['pipe', 'pipe', 'pipe'],
+    });
```
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/reliability/build-reliability-service.ts` around lines 415 - 424, The tar invocation interpolates untrusted paths into a shell string (archiveFile/sourcePath) causing shell injection; replace the execSync call that runs `tar -czf ...` with a non-shell exec variant (e.g. import and use child_process.execFileSync or spawnSync) and pass 'tar' as the command and each argument as separate array elements: ['-czf', archiveFile, '-C', path.dirname(sourcePath), path.basename(sourcePath)], preserving options (encoding, timeout, stdio) so the call in build-reliability-service (the execSync line that uses archiveFile and sourcePath) becomes a safe execFileSync/ spawnSync invocation without shell interpolation.

src/model/orchestrator/services/reliability/build-reliability-service.ts-49-83 (1)
49-83: ⚠️ Potential issue | 🟠 Major

Shell command injection risk with unescaped path interpolation.

The `repoPath` parameter is directly interpolated into a shell command string. If `repoPath` contains shell metacharacters (e.g., `"; rm -rf /;`), this could lead to command injection. While CI contexts typically control these paths, this pattern is fragile.

Consider using `execFileSync` with array arguments to avoid shell interpretation:

🛡️ Proposed safer implementation

```diff
-      const output = execSync(`git -C "${repoPath}" fsck --no-dangling`, {
-        encoding: 'utf8',
-        timeout: 120_000,
-        stdio: ['pipe', 'pipe', 'pipe'],
-      });
+      const output = execFileSync('git', ['-C', repoPath, 'fsck', '--no-dangling'], {
+        encoding: 'utf8',
+        timeout: 120_000,
+        stdio: ['pipe', 'pipe', 'pipe'],
+      });
```

The same pattern applies to other `execSync` calls with path interpolation in `recoverCorruptedRepo` (line 236) and `archiveBuildOutput` (line 419).
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/reliability/build-reliability-service.ts` around lines 49 - 83, The code interpolates repoPath into shell command strings (e.g., in checkGitIntegrity) causing shell injection risk; update calls that use execSync with interpolated paths (checkGitIntegrity, recoverCorruptedRepo, archiveBuildOutput) to invoke Git without a shell by using execFileSync/execFile or spawnSync with argv arrays (e.g., command "git" and args ["-C", repoPath, "fsck", "--no-dangling"]) or by passing repoPath via the cwd option instead of embedding it in a string; ensure you stop using string interpolation for paths, pass args as an array, and retain the existing timeout/encoding/stio options when replacing execSync calls.

src/model/orchestrator/services/reliability/build-reliability-service.ts-331-339 (1)
331-339: ⚠️ Potential issue | 🟠 Major

Cross-platform incompatibility: `df -BM` will fail on macOS CI runners.

The `-BM` flag is GNU-specific; macOS/BSD `df` doesn't support it and will error with "illegal option". Since the project runs macOS builds (build-tests-mac.yml uses `macos-latest`), the `getAvailableSpaceMB()` function will fail when called during the `archiveBuildOutput()` step. The tests only cover Linux behavior—there's no macOS test to catch this.

macOS requires `-m` instead. Update the code to detect the platform and use the appropriate flags and parsing logic for each OS.
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/reliability/build-reliability-service.ts` around lines 331 - 339, getAvailableSpaceMB() currently calls GNU-specific df flags and will fail on macOS; update it to detect the platform (process.platform) and call df with OS-appropriate flags and parsing: for Linux run execFileSync('df', ['-BM','--output=avail', directoryPath], ...) and parse the last line (strip trailing 'M' then parseInt); for macOS (darwin) run execFileSync('df', ['-m', directoryPath], ...) and parse the last output line by splitting on whitespace and selecting the last token that is a numeric value (or strip non-digits from the last token) then parseInt; include a sensible fallback branch for other platforms that either uses the darwin approach or throws a clear error. Ensure this change is applied in the getAvailableSpaceMB (or the function surrounding the shown diff) so archiveBuildOutput() works on macOS CI.

src/model/orchestrator/services/cache/local-cache-service.ts-65-73 (1)
65-73: ⚠️ Potential issue | 🟠 Major

Clear the target directory before restoring cached state.

Both restore paths untar on top of the existing `Library` / `.git/lfs` contents. If the workspace already has files that were deleted in the cached snapshot, those stale files survive the restore and can poison later builds.

🧹 Suggested fix

```diff
     const tarPath = path.join(cachePath, latestFile);
     const libraryDest = path.join(projectPath, 'Library');
-    // Ensure destination exists
+    fs.rmSync(libraryDest, { recursive: true, force: true });
     fs.mkdirSync(libraryDest, { recursive: true });
@@
     const tarPath = path.join(cachePath, latestFile);
     const lfsDest = path.join(repoPath, '.git', 'lfs');
-    // Ensure destination exists
+    fs.rmSync(lfsDest, { recursive: true, force: true });
     fs.mkdirSync(lfsDest, { recursive: true });
```

Also applies to: 154-162
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/cache/local-cache-service.ts` around lines 65 - 73, Before extracting the cached tarball, remove or empty the target restore directory to prevent stale files from surviving (e.g., delete contents of libraryDest and the equivalent .git/lfs restore target). Locate the restore logic that uses tarPath, libraryDest, projectPath and OrchestratorSystem.Run (and the similar block referenced at the other restore site) and add a preparatory step that recursively removes all files/subdirectories inside the target directory (or recreates it via fs.rmSync(target, { recursive: true, force: true }) followed by fs.mkdirSync(target, { recursive: true })) before calling OrchestratorSystem.Run to extract the archive.

delete-me-update-all-integration-branches.ps1-83-87 (1)
83-87: ⚠️ Potential issue | 🟠 Major

Skip this integration branch if the fallback pull conflicts.

If `git pull origin $name --no-edit` also fails, the script keeps going with a merge already in progress. The subsequent component merges and push for that branch become unreliable.

🛠️ Suggested guard

```diff
 } else {
   git checkout $name
   git pull origin $name --ff-only 2>$null
   if ($LASTEXITCODE -ne 0) {
-    git pull origin $name --no-edit
+    git pull origin $name --no-edit 2>$null
+    if ($LASTEXITCODE -ne 0) {
+      Write-Host "Failed to update $name from origin - skipping" -ForegroundColor Red
+      git merge --abort 2>$null
+      continue
+    }
   }
 }
```
Verify each finding against the current code and only fix it if needed. In `@delete-me-update-all-integration-branches.ps1` around lines 83 - 87, The fallback pull can also fail leaving a merge in progress; after running git pull origin $name --no-edit check $LASTEXITCODE and if it is non‑zero abort the in‑progress merge (git merge --abort), log/skipping the problematic branch ($name) and continue to the next branch instead of proceeding with merges/pushes; update the block around git checkout $name / git pull origin $name --ff-only / git pull origin $name --no-edit to perform this check, abort the merge on failure and skip further actions for that $name.

src/model/orchestrator/providers/ansible/index.ts-111-119 (1)
111-119: ⚠️ Potential issue | 🟠 Major

Don't silently drop invalid `ansibleExtraVars`.

The catch block says "using as-is", but the raw value is never appended to the playbook command. A malformed extra-vars string is ignored entirely, so builds run with a different input set than the caller requested.

🧾 Suggested fix

```diff
 if (this.extraVariables) {
   try {
     const userVariables = JSON.parse(this.extraVariables);
     Object.assign(playbookVariables, userVariables);
   } catch {
-    OrchestratorLogger.logWarning(`[Ansible] Failed to parse ansibleExtraVars as JSON, using as-is`);
+    throw new Error('ansibleExtraVars must be valid JSON.');
   }
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/providers/ansible/index.ts` around lines 111 - 119: the catch currently swallows a malformed this.extraVariables and never applies it to the playbook. Update the catch in the block that merges into playbookVariables to preserve and forward the raw value: when JSON.parse(this.extraVariables) fails, assign the raw string into playbookVariables (e.g., playbookVariables.ansibleExtraVarsRaw = this.extraVariables) so the playbook builder can pass it through as an extra-vars string, and update the OrchestratorLogger.logWarning to include the raw this.extraVariables for visibility; keep the try branch behavior unchanged.

.github/workflows/orchestrator-integrity.yml-223-225 (1)
223-225: ⚠️ Potential issue | 🟠 Major

Split this into two steps; the first `run` command is being overwritten.

Although the YAML parses, the duplicate `run` key causes the first step (`yarn install --frozen-lockfile`) to be lost—only the second `run` block executes. Additionally, `timeout-minutes: 5` is stuck in a comment and not applied. Move the k3d cluster bootstrap to its own step with `timeout-minutes` as a proper key.

🧩 Suggested fix

```diff
   - run: yarn install --frozen-lockfile
-    # ===================================  timeout-minutes: 5
-    run: |
+
+  - name: Create k3d cluster
+    timeout-minutes: 5
+    run: |
       LOCALSTACK_IP=$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' localstack-main 2>/dev/null || echo "")
       echo "LocalStack container IP: $LOCALSTACK_IP"
       k3d cluster create unity-builder \
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.github/workflows/orchestrator-integrity.yml around lines 223 - 225: the workflow step currently has two `run` entries, so the first (`run: yarn install --frozen-lockfile`) is overwritten. Split it into two separate steps: keep a standalone step that runs `yarn install --frozen-lockfile`, and create a new separate step for the k3d cluster bootstrap commands (the multi-line `run: |` block) with `timeout-minutes: 5` as a top-level key on that k3d step. Ensure the step names and their `run` keys (the yarn install step and the k3d bootstrap step) are unique so both execute.

src/model/orchestrator/workflows/async-workflow.ts-33-38 (1)
33-38: ⚠️ Potential issue | 🟠 Major

Don't silently swap to `main` when the requested builder branch is missing.

If `orchestratorBranch` points to a release/PR ref that is not present on the remote yet, this path clones `main` and runs a different builder than the caller requested. That makes async builds appear green against stale code. Fail fast or fetch by exact ref/SHA instead of changing branches implicitly.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/workflows/async-workflow.ts` around lines 33 - 38: the current shell fragment silently falls back to cloning main when the requested branch (BRANCH) is absent, which can run the wrong builder. Change the logic around the git ls-remote / git clone block so it fails fast or fetches the exact ref instead of switching to main: if git ls-remote --heads "$REPO" "$BRANCH" fails, either exit with a non-zero error and a clear message referencing REPO and BRANCH, or perform a clone followed by git fetch origin "refs/heads/$BRANCH:refs/remotes/origin/$BRANCH" (or fetch by exact ref/SHA) and then check out that ref; remove the implicit git clone -b main fallback and ensure the script returns a failure if the requested ref cannot be obtained.

src/model/orchestrator/services/core/runner-availability-service.ts-129-160 (1)
129-160: ⚠️ Potential issue | 🟠 Major

Implement per-request timeout and distinguish rate-limit from permission 403 errors.

The 30s budget is only checked before `octokit.request()` (line 131), so a single slow request can still exceed it. Octokit v5 supports per-request timeouts via `request: { timeout: ms }` or `AbortSignal`; consider using them.

Also, every HTTP 403 is currently treated as rate limiting, but GitHub distinguishes them in response headers: rate-limit 403s have `X-RateLimit-Remaining: 0` or `Retry-After`, while permission errors have the `X-Accepted-GitHub-Permissions` header or messages like "Resource not accessible by integration". In both cases the loop exits with partial/empty data, which `checkAvailability()` then treats as runner capacity exhaustion instead of an API error.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/core/runner-availability-service.ts` around lines 129 - 160: the pagination loop in RunnerAvailability can hang because the overall timeout (PAGINATION_TIMEOUT_MS) is only checked between requests, and it treats all 403s as rate limits. Update the octokit.request call to include a per-request timeout (use octokit.request('GET /repos/{owner}/{repo}/actions/runners', { ..., request: { timeout: <ms> } }) or an AbortSignal) and handle request errors in the catch by distinguishing rate-limit 403s from permission 403s: inspect requestError.response?.headers for 'x-ratelimit-remaining' === '0', 'retry-after' or 'x-ratelimit-reset' to treat as rate-limit (log and break as now), but if headers or the error message indicate a permissions issue (e.g. 'x-accepted-github-permissions' present or message contains "Resource not accessible by integration"), log a clear error via OrchestratorLogger and rethrow or return an explicit API error so checkAvailability can surface an API failure instead of treating it as empty runner capacity; keep uses of octokit.request, requestError, OrchestratorLogger, PAGINATION_TIMEOUT_MS and checkAvailability to locate the changes.

src/model/orchestrator/services/lfs/lfs-agent-service.ts-34-36 (1)
34-36: ⚠️ Potential issue | 🟠 Major

Shell command injection vulnerability due to use of `exec()` with unescaped string interpolation.

Lines 34-36 pass command strings to `OrchestratorSystem.Run()`, which uses Node.js `exec()` and spawns a shell interpreter. The interpolated variables—particularly `repoPath`, `agentPath`, `agentArgs`, and `agentName`—are vulnerable to shell injection if they contain metacharacters like backticks, `$()`, `$` variables, or `;`.

While line 36 derives `agentName` safely from `path.basename()`, it remains unquoted. More critically, `agentArgs` could contain user-controlled injection payloads and lacks escaping.

Recommendation: Replace `exec()` with `execFile()` (which does not invoke a shell) or properly escape all interpolated variables. Example:
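A minimal sketch of the argument-array approach (the helper name and the assumption that `agentArgs` arrives pre-split are illustrative, not the project's API; the config keys are the standard git-lfs `lfs.customtransfer.<name>.*` and `lfs.standalonetransferagent` keys):

```typescript
import path from 'node:path';

// Build argument arrays for the three `git config` calls instead of
// interpolating a shell string. Each array would be passed to
// execFile('git', args, ...) so no shell ever re-parses the values.
export function buildLfsAgentConfigArgs(
  repoPath: string,
  agentPath: string,
  agentArgs: string[],
): string[][] {
  // Deriving the name with path.basename keeps it free of path separators.
  const agentName = path.basename(agentPath);

  return [
    ['-C', repoPath, 'config', `lfs.customtransfer.${agentName}.path`, agentPath],
    ['-C', repoPath, 'config', `lfs.customtransfer.${agentName}.args`, agentArgs.join(' ')],
    ['-C', repoPath, 'config', 'lfs.standalonetransferagent', agentName],
  ];
}
```

Because `execFile()` receives the array verbatim, a hostile `repoPath` like `/repo; rm -rf /` is just an odd directory name, never a second command.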
- Use `execFile()` to execute commands with argument arrays instead of shell strings
- Or use a shell escaping library (e.g., `shell-quote`) for any string-based approach

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/lfs/lfs-agent-service.ts` around lines 34 - 36: the three git config invocations in lfs-agent-service.ts use OrchestratorSystem.Run() with interpolated shell strings (variables repoPath, agentPath, agentArgs, agentName), causing shell injection risk. Change these calls to run the git binary without a shell by using an execFile-style invocation (pass "git" as the command and argument arrays like ["-C", repoPath, "config", "lfs.customtransfer."+agentName+".path", agentPath], etc.) or update OrchestratorSystem.Run to accept a command+args signature that uses child_process.execFile/spawn with args arrays, and ensure agentName is derived via path.basename() and agentArgs is split/validated rather than concatenated into a shell string (or fully escaped with a vetted shell-escape library if execFile cannot be used).

src/model/orchestrator/services/submodule/submodule-profile-service.test.ts-294-302 (1)
294-302: ⚠️ Potential issue | 🟠 Major

SubmoduleProfileService should use header-based auth like the rest of the system.

The `SubmoduleProfileService.execute` method uses legacy URL-embedded token auth (`git config url."https://${token}@github.com/".insteadOf`), but the system now defaults to secure header-based auth via `http.extraHeader` (see `OrchestratorFolders.useHeaderAuth` and `gitAuthMode`).

Update `SubmoduleProfileService.execute` to respect the `gitAuthMode` setting and use header-based auth when configured, rather than always embedding the token in the URL.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/submodule/submodule-profile-service.test.ts` around lines 294 - 302: SubmoduleProfileService.execute currently always sets URL-embedded auth. Change it to respect the configured git auth mode by checking OrchestratorFolders.useHeaderAuth() or the gitAuthMode setting inside SubmoduleProfileService.execute and, when header-based auth is enabled, configure git to use an http.extraHeader, e.g. 'git config --local http.extraHeader "Authorization: Bearer <token>"' (and avoid the url."https://<token>@github.com/" insteadOf config); otherwise fall back to the legacy URL-embedded configuration. Update the code paths that call mockedSystem.Run / System.Run inside SubmoduleProfileService.execute to conditionally run the appropriate git config command based on the gitAuthMode.

src/model/orchestrator/services/hooks/git-hooks-service.ts-223-228 (1)
223-228: ⚠️ Potential issue | 🟠 Major

`disableHooks()` permanently rewrites the repo config.

`git config core.hooksPath ...` writes to `.git/config`, and nothing in this flow restores the previous value. Running a local build with hooks disabled will leave that repository with hooks turned off after the action exits.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/hooks/git-hooks-service.ts` around lines 223 - 228: the disableHooks function permanently rewrites core.hooksPath. Update disableHooks (and/or add complementary restore logic) to first read and save the repo's existing hooks path using git -C "<repoPath>" config --get core.hooksPath (or detect if unset), then set core.hooksPath to the temp emptyDir via OrchestratorSystem.Run as you do now, and ensure the original value is restored in a finally/cleanup step (using git -C "<repoPath>" config core.hooksPath "<original>" or git -C "<repoPath>" config --unset core.hooksPath when the original was unset). Reference the disableHooks method and the OrchestratorSystem.Run calls so the saved value is written back reliably even on error.

src/model/orchestrator/orchestrator.ts-381-403 (1)
381-403: ⚠️ Potential issue | 🟠 Major

Timeouts need a cancellation or teardown path.

`Promise.race()` only changes which promise wins; it does not stop `setupPromise`. After a timeout, the original provider init can keep running and creating remote state while the fallback provider starts, which is a bad failure mode for these cloud/CI providers.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/orchestrator.ts` around lines 381 - 403: the current Promise.race between setupPromise (from Orchestrator.Provider.setupWorkflow) and timeoutPromise leaves the provider init running after a timeout. Change the flow to create an AbortController (or equivalent cancellation token), pass its signal into Orchestrator.Provider.setupWorkflow (or change setupWorkflow to accept an AbortSignal/Context), and on timeout call controller.abort() (or invoke a returned teardown/cleanup function) before resolving the timeout path; also ensure Provider.setupWorkflow listens for the signal and performs deterministic cleanup/rollback of any remote state it started so that only one provider init can proceed.

src/model/orchestrator/orchestrator.ts-281-291 (1)
281-291: ⚠️ Potential issue | 🟠 Major

Keep provider initialization inside the error-handling path.

If `setup()`, template validation, or `setupWorkflowWithTimeout()` throws, execution never reaches the `catch` below, so the GitHub status update, grouped-log cleanup, and orchestrator exception handling are skipped.

🛠️ Suggested reshape

```diff
 private static async runWithProvider(buildParameters: BuildParameters, baseImage: string) {
-  await Orchestrator.setup(buildParameters);
-
-  // When aws-local mode is enabled, validate AWS CloudFormation templates
-  // This ensures AWS templates are correct even when executing via local-docker
-  if (Orchestrator.validateAwsTemplates) {
-    await Orchestrator.validateAwsCloudFormationTemplates();
-  }
-
-  // Setup workflow with optional init timeout
-  await Orchestrator.setupWorkflowWithTimeout();
-
   try {
+    await Orchestrator.setup(buildParameters);
+    if (Orchestrator.validateAwsTemplates) {
+      await Orchestrator.validateAwsCloudFormationTemplates();
+    }
+    await Orchestrator.setupWorkflowWithTimeout();
+
     if (buildParameters.maxRetainedWorkspaces > 0) {
       Orchestrator.lockedWorkspace = SharedWorkspaceLocking.NewWorkspaceName();
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/orchestrator.ts` around lines 281 - 291: the calls to Orchestrator.setup, Orchestrator.validateAwsCloudFormationTemplates, and Orchestrator.setupWorkflowWithTimeout inside runWithProvider currently run outside the try/catch, so any throw bypasses the existing catch that performs GitHub status updates, grouped-log cleanup, and orchestrator exception handling. Move those initialization calls into the protected/error-handling path (inside the same try block that contains the rest of provider execution) so that if setup(), validateAwsCloudFormationTemplates(), or setupWorkflowWithTimeout() throws, control flows to the existing catch where the GitHub status update, grouped-log cleanup, and Orchestrator exception handling are executed. Keep the same function names (runWithProvider, setup, validateAwsCloudFormationTemplates, setupWorkflowWithTimeout) and do not duplicate cleanup logic; use the existing catch/finally handlers.

src/model/orchestrator/services/secrets/secret-source-service.ts-210-217 (1)
210-217: ⚠️ Potential issue | 🟠 Major

Preserve falsy JSON-field values.

`parsed[source.jsonField] || ''` turns valid values like `0` and `false` into the empty string, so those secrets silently disappear. Use a nullish check and stringify the extracted field instead.

💡 Possible fix

```diff
 if (source.parseOutput === 'json-field' && source.jsonField) {
   try {
     const parsed = JSON.parse(output);
-    value = parsed[source.jsonField] || '';
+    const rawValue = parsed[source.jsonField];
+    value = rawValue == null ? '' : String(rawValue);
   } catch {
     OrchestratorLogger.logWarning(`Failed to parse JSON output from ${source.name} for key ${key}`);
     value = output.trim();
   }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/secrets/secret-source-service.ts` around lines 210 - 217: the JSON-field extraction currently uses parsed[source.jsonField] || '', which drops falsy but valid values (0, false). Update the assignment in the try block (where source.parseOutput === 'json-field' and source.jsonField are checked) to test for null/undefined (nullish) instead of falsiness, and set value to String(parsed[source.jsonField]) when the field is not nullish; keep the existing catch behavior (OrchestratorLogger.logWarning(...), value = output.trim()) unchanged.

src/model/orchestrator/services/submodule/submodule-profile-service.ts-25-35 (1)
25-35: ⚠️ Potential issue | 🟠 Major

Fail fast on malformed submodule entries.

`String(entry.name)` and `String(entry.branch)` turn missing fields into the literal `"undefined"`, so a bad profile gets all the way to `git checkout undefined` instead of failing at parse time.

💡 Possible fix

```diff
-  return {
-    primary_submodule: parsed.primary_submodule,
-    product_name: parsed.product_name,
-    submodules: parsed.submodules.map((entry: any) => ({
-      name: String(entry.name),
-      branch: String(entry.branch),
-    })),
-  };
+  return {
+    primary_submodule: parsed.primary_submodule,
+    product_name: parsed.product_name,
+    submodules: parsed.submodules.map((entry: any, index: number) => {
+      if (!entry || typeof entry.name !== 'string' || typeof entry.branch !== 'string') {
+        throw new Error(`Invalid submodule entry at index ${index} in ${profilePath}`);
+      }
+
+      return {
+        name: entry.name,
+        branch: entry.branch,
+      };
+    }),
+  };
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/submodule/submodule-profile-service.ts` around lines 25 - 35: the parser currently converts missing fields to the string "undefined", which can later cause invalid git operations. Update the parsing in submodule-profile-service (the code that reads parsed.submodules and returns primary_submodule/product_name/submodules) to validate each parsed.submodules entry before mapping: for each entry ensure entry.name and entry.branch are present and non-empty strings (or throw a descriptive Error including profilePath and the failing index/field) instead of blindly using String(...); perform validation prior to or inside the mapping step so malformed entries fail fast at parse time.

src/model/orchestrator/providers/remote-powershell/index.ts-45-52 (1)
45-52: ⚠️ Potential issue | 🟠 Major

SSH transport fails at setup due to WinRM-specific connectivity test.

When `remotePowershellTransport === 'ssh'`, the `setupWorkflow()` method fails because `Test-WSMan` only validates WinRM endpoints, not SSH. The test file confirms this gap—there is no test coverage for SSH setup, only for SSH task execution.

Suggested fix

```diff
-    const testCommand = this.buildPwshCommand(`Test-WSMan -ComputerName "${this.host}" -ErrorAction Stop`);
+    const testCommand =
+      this.transport === 'ssh'
+        ? this.buildPwshCommand('Write-Output ok')
+        : this.buildPwshCommand(`Test-WSMan -ComputerName "${this.host}" -ErrorAction Stop`);
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/providers/remote-powershell/index.ts` around lines 45 - 52: setupWorkflow() currently always runs a WinRM-specific connectivity check by calling buildPwshCommand(`Test-WSMan ...`) and OrchestratorSystem.Run, which fails when remotePowershellTransport === 'ssh'. Update setupWorkflow() to branch on the remotePowershellTransport value: skip the Test-WSMan connectivity test (or run an SSH-appropriate check) when remotePowershellTransport === 'ssh', leaving the WinRM Test-WSMan path for non-SSH transports; locate the Test-WSMan usage in the call sequence around buildPwshCommand and OrchestratorSystem.Run to implement the conditional behavior.

src/model/orchestrator/providers/cli/cli-provider.ts-127-148 (1)
127-148: ⚠️ Potential issue | 🟠 Major

Keep a remainder buffer between `stdout` chunks to capture JSON responses spanning multiple data events.

The `data` events from child process streams are arbitrary chunks, not guaranteed to align with line boundaries. If a CLI JSON response like `{"success": true, "output": "..."}` spans two chunks—say `{"success": true, "outp` and `ut": "..."}`—the current code splits and parses each chunk independently, both fail JSON parsing, and `lastJsonResponse` never gets set. The promise then resolves with just `outputLines` instead of the parsed response. The `execute()` method in this same file (lines 330-406) already uses the correct pattern: accumulate all stdout data, then parse once at process close. `runTaskInWorkflow()` and `watchWorkflow()` should follow the same approach.

💡 Suggested fix

```diff
+let stdoutRemainder = '';
+
 child.stdout.on('data', (data: Buffer) => {
-  const lines = data.toString().split('\n');
+  stdoutRemainder += data.toString();
+  const lines = stdoutRemainder.split('\n');
+  stdoutRemainder = lines.pop() ?? '';
+
   for (const line of lines) {
     const trimmed = line.trim();
     if (!trimmed) continue;
@@
   }
 });
+
+child.on('close', (code: number | null) => {
+  if (stdoutRemainder.trim()) {
+    // parse the final buffered line before resolving/rejecting
+  }
+  // existing close logic...
+});
```
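The chunk/remainder split in the suggestion above can be factored into a small pure helper, which also makes the behavior easy to unit-test (the function name is illustrative, not part of the provider's API):

```typescript
// Feed each stdout chunk in, keep the returned remainder for the next call,
// and only process the complete lines. A trailing partial line stays buffered
// until a newline (or the process close) completes it.
export function splitChunk(
  remainder: string,
  chunk: string,
): { lines: string[]; remainder: string } {
  const pieces = (remainder + chunk).split('\n');
  // The last element is either '' (chunk ended on a newline) or a partial line.
  const nextRemainder = pieces.pop() ?? '';

  return { lines: pieces, remainder: nextRemainder };
}
```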
watchWorkflow()🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/providers/cli/cli-provider.ts` around lines 127 - 148, The stdout 'data' handler in runTaskInWorkflow (and similarly in watchWorkflow) can receive JSON that spans multiple chunks so you must buffer partial data between events: introduce a local remainder string inside the child.stdout.on('data') scope, on each data event prepend remainder to data.toString(), split into lines, keep the last partial line in remainder instead of discarding it, parse complete lines for JSON and update lastJsonResponse (symbol names: child.stdout.on('data', ...), remainder, lastJsonResponse, OrchestratorLogger.log, outputLines), and on the child process 'close'/'end' event flush any remaining remainder and/or attempt to parse the accumulated full stdout (as execute() does) to set lastJsonResponse before resolving; apply the same buffering change to watchWorkflow.src/model/orchestrator/services/cache/child-workspace-service.ts-112-139 (1)
112-139: ⚠️ Potential issue | 🟠 Major

Stage the new cache before removing `.git`, `Library/`, or the previous cache.

`saveLibraryCache()`, the `.git` deletion, and the old-cache `rmSync()` all happen before the final `renameSync()`. If that rename fails, you can end up with no old cache and a mutated live workspace. The destructive steps need to happen only after the replacement cache is safely staged.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/cache/child-workspace-service.ts` around lines 112 - 139: stage the workspace before performing destructive cleanup. First move projectPath to a temporary staging path (e.g., `${cachedWorkspacePath}.staging`) using fs.renameSync to atomically stage the new cache, then perform the destructive operations (remove .git when config.preserveGitDirectory is false, call ChildWorkspaceService.saveLibraryCache on the staging path when config.separateLibraryCache is true) against the staged path; after that, safely remove the old cachedWorkspacePath (if it exists) and finally rename the staging path to cachedWorkspacePath (replacing the old cache). Reference projectPath, cachedWorkspacePath, the staging path variable, config.preserveGitDirectory, config.separateLibraryCache, ChildWorkspaceService.saveLibraryCache, and fs.renameSync in the changes.

src/model/orchestrator/providers/gcp-cloud-run/index.ts-221-224 (1)
221-224: ⚠️ Potential issue | 🟠 Major

`workingdir` is never applied before the build command runs.

This provider accepts a working directory, but the generated job never `cd`s into it or sets a container workdir. Any relative path inside the build command will resolve against the image default instead of the orchestrator workspace.

Also applies to: 247-253
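One way to honor the input is to prefix the container command, sketched here under the assumption that the build command is a single shell string (the helper is illustrative, not the provider's API):

```typescript
// Prefix a shell command with a change-directory step when a working
// directory is supplied; an unset workingdir leaves the command untouched.
export function withWorkingDir(command: string, workingdir?: string): string {
  if (!workingdir) return command;

  // Single-quote the directory (escaping embedded quotes) so spaces survive,
  // and let the `&&` fail the step early if the directory is missing rather
  // than running the build against the image's default workdir.
  const quoted = `'${workingdir.replace(/'/g, `'\\''`)}'`;

  return `cd ${quoted} && ${command}`;
}
```

Setting the container spec's working-directory field would achieve the same without touching the command string; the prefix variant just keeps the change local to the command builder.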
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/providers/gcp-cloud-run/index.ts` around lines 221 - 224: the provided workingdir parameter is accepted but not applied when building the Cloud Run job. Update the job/container creation logic (the function that uses the parameters image, commands, mountdir, workingdir) to ensure the container executes in the supplied working directory: either set the container's workingDir/working_dir field in the generated job spec or prefix the commands with a safe change-directory invocation like `cd ${workingdir} && ` (and handle empty/undefined workingdir). Apply the same change to the other job-generation block noted (the second use around the referenced 247-253 area) so both build and run jobs respect workingdir.

src/model/orchestrator/providers/gcp-cloud-run/index.ts-358-364 (1)
358-364: ⚠️ Potential issue | 🟠 Major

Cleanup currently leaks one Cloud Run Job per build.

`jobName` is derived from `buildGuid`, so each run creates a distinct resource. Because `cleanupWorkflow()` is empty, successful builds accumulate jobs until a separate GC pass happens. That will steadily clutter the project and can trip quotas.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/providers/gcp-cloud-run/index.ts` around lines 358 - 364: cleanupWorkflow currently does nothing, causing one Cloud Run Job per build (jobName derived from buildGuid) to accumulate. Implement cleanupWorkflow to derive the created job's name from buildParameters.buildGuid (the same naming logic used when creating the job), instantiate the Cloud Run Jobs client used elsewhere in this module, call the delete method for that job in the correct region/project, await completion, and log success or errors via OrchestratorLogger; ensure the deletion is idempotent (ignore NotFound) and surface errors for retries.

src/model/orchestrator/providers/gcp-cloud-run/index.ts-61-62 (1)
61-62: ⚠️ Potential issue | 🟠 Major

The advertised GCP sizing input is currently a no-op.

`gcpMachineType` is captured on the provider, but job creation always hard-codes `--cpu=4` and `--memory=16Gi`. Users can set the input and get no change in the actual Cloud Run Job shape.

Also applies to: 259-269

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/providers/gcp-cloud-run/index.ts` around lines 61 - 62: the GCP sizing inputs are ignored because the Cloud Run job creation still uses hard-coded flags like '--cpu=4' and '--memory=16Gi'. Update the job creation logic to use the provider properties (this.machineType and this.diskSizeGb) instead of literals: derive cpu and memory from this.machineType (or add a small mapping from machine type names to cpu/memory) and use this.diskSizeGb for disk-related flags, then replace occurrences of '--cpu=4' and '--memory=16Gi' in the job command builder (the Cloud Run job creation method) with the computed values so user-supplied gcpMachineType and gcpDiskSizeGb take effect.
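A sketch of the "small mapping" option the prompt mentions, assuming a hand-maintained map from machine-type names to shapes (the specific type names and sizes here are illustrative, not an official gcloud mapping):

```typescript
// Translate a machine-type input into --cpu/--memory flags instead of
// hard-coding them; unknown or unset types fall back to the current defaults.
const machineShapes: Record<string, { cpu: number; memoryGi: number }> = {
  'e2-standard-4': { cpu: 4, memoryGi: 16 },
  'e2-standard-8': { cpu: 8, memoryGi: 32 },
  'e2-highmem-4': { cpu: 4, memoryGi: 32 },
};

export function sizingFlags(machineType?: string): string[] {
  // Falling back keeps today's behavior for callers who never set the input.
  const shape = (machineType && machineShapes[machineType]) || { cpu: 4, memoryGi: 16 };

  return [`--cpu=${shape.cpu}`, `--memory=${shape.memoryGi}Gi`];
}
```

The job command builder would then splice `...sizingFlags(this.machineType)` into the gcloud invocation in place of the two literals.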
🟡 Minor comments (5)
src/model/orchestrator/services/lfs/lfs-agent-service.test.ts-23-26 (1)
23-26: ⚠️ Potential issue | 🟡 Minor

Restore `LFS_STORAGE_PATHS` after each test.

`configure()` mutates `process.env`, but this suite only clears mocks. After the storage-path case runs, later tests inherit that value, so the suite becomes order-dependent.

🧪 Suggested fix

```diff
 describe('LfsAgentService', () => {
+  let originalLfsStoragePaths: string | undefined;
+
   beforeEach(() => {
     jest.clearAllMocks();
+    originalLfsStoragePaths = process.env.LFS_STORAGE_PATHS;
   });
+
+  afterEach(() => {
+    if (originalLfsStoragePaths === undefined) {
+      delete process.env.LFS_STORAGE_PATHS;
+    } else {
+      process.env.LFS_STORAGE_PATHS = originalLfsStoragePaths;
+    }
+  });
@@
   it('should not set LFS_STORAGE_PATHS when storagePaths is empty', async () => {
     (mockFs.existsSync as jest.Mock).mockReturnValue(true);
-    const originalValue = process.env.LFS_STORAGE_PATHS;
     delete process.env.LFS_STORAGE_PATHS;
@@
     await LfsAgentService.configure('/usr/local/bin/agent', '', [], '/repo');
     expect(process.env.LFS_STORAGE_PATHS).toBeUndefined();
-
-    if (originalValue !== undefined) {
-      process.env.LFS_STORAGE_PATHS = originalValue;
-    }
   });
```

Also applies to: 52-58, 83-97
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/lfs/lfs-agent-service.test.ts` around lines 23 - 26: the test suite mutates process.env (LFS_STORAGE_PATHS) via configure(), so make the tests order-independent by saving the original process.env.LFS_STORAGE_PATHS at the start of the suite and restoring it after each test; e.g., in the LfsAgentService describe block capture const originalLfsPaths = process.env.LFS_STORAGE_PATHS in a beforeAll or beforeEach, keep the existing jest.clearAllMocks() in beforeEach, and add an afterEach that resets process.env.LFS_STORAGE_PATHS = originalLfsPaths (or deletes it if originally undefined) so tests that call configure() don't leak state to later tests.

src/model/orchestrator/services/lfs/lfs-agent-service.test.ts-2-2 (1)
2-2: ⚠️ Potential issue | 🟡 Minor

Remove the unused `path` import.

It currently trips `no-unused-vars` in this new test file and can fail lint-only CI.

✂️ Suggested cleanup

```diff
-import path from 'node:path';
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/lfs/lfs-agent-service.test.ts` at line 2: the test file currently imports "path" via the statement importing path from 'node:path', which is unused and triggers no-unused-vars. Remove the unused import (the import line referencing the symbol "path") from lfs-agent-service.test.ts so the file no longer includes that unused symbol and lint passes.

src/model/orchestrator/services/core/runner-availability-service.test.ts-283-315 (1)
283-315: ⚠️ Potential issue | 🟡 Minor

Restore `Date.now` in a `finally` block.

If this test throws before line 315, the mocked clock leaks into later suites and makes unrelated failures hard to reason about. Please restore it with `try/finally` (or an `afterEach`) instead of relying on the happy path.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/core/runner-availability-service.test.ts` around lines 283 - 315: the test "should handle pagination timeout gracefully" currently replaces Date.now and only restores it on the happy path. Change the test to restore Date.now in a finally block so the mocked clock is always reset even if assertions fail: wrap the setup/call/assert sequence where Date.now is mocked (the originalDateNow assignment, the Date.now override, MockedOctokit.mockImplementation and the await RunnerAvailabilityService.checkAvailability(...)) in try { ... } finally { Date.now = originalDateNow; } so the mockRequest and Date.now cleanup always runs.

src/model/orchestrator/services/hooks/git-hooks-service.test.ts-2-2 (1)
2-2: ⚠️ Potential issue | 🟡 Minor

Remove unused import.

The `path` import is never used in this test file.

🧹 Proposed fix

```diff
 import fs from 'node:fs';
-import path from 'node:path';
 import { GitHooksService } from './git-hooks-service';
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/hooks/git-hooks-service.test.ts` at line 2: remove the unused Node path import in the test file by deleting the import statement "import path from 'node:path';" from src/model/orchestrator/services/hooks/git-hooks-service.test.ts so the file no longer contains an unused symbol.

src/model/orchestrator/services/secrets/secret-source-service.ts-68-116 (1)
68-116: ⚠️ Potential issue | 🟡 Minor

Register `env` in the public source registry.

`fetchAll()` supports `'env'`, but `isPremadeSource()`, `resolveSource()`, and `getAvailableSources()` still treat it as unknown. Any upstream validation or UI built on those public methods will reject a documented source.

Also applies to: 121-129

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/secrets/secret-source-service.ts` around lines 68 - 116: the premade source registry (SecretSourceService.premadeSources) is missing the 'env' entry, so public methods like isPremadeSource, resolveSource, and getAvailableSources treat it as unknown. Add an 'env' definition to the premadeSources map (matching the expected SecretSourceDefinition shape with name: 'env', command and parseOutput as appropriate for environment variable reads) and also add the same alias/registration where similar premade entries are duplicated (the block referenced around lines 121-129) so fetchAll() and the public registry methods consistently recognize 'env' as a valid premade source.
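A minimal sketch of the single-registry idea, with the `SecretSourceDefinition` shape assumed for illustration: when both the fetch path and the public helpers read the same map, `'env'` cannot be supported in one place and rejected in another.

```typescript
// Illustrative shape; the real SecretSourceDefinition in the service differs.
interface SecretSourceDefinition {
  name: string;
  resolve: (key: string) => string | undefined;
}

// One registry consulted by fetchAll() and the public helpers alike.
const premadeSources: Record<string, SecretSourceDefinition> = {
  env: { name: 'env', resolve: (key) => process.env[key] },
  // ...other premade sources registered the same way
};

export const isPremadeSource = (name: string): boolean => name in premadeSources;
export const getAvailableSources = (): string[] => Object.keys(premadeSources);
export const resolveSource = (name: string): SecretSourceDefinition | undefined =>
  premadeSources[name];
```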
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: ebc87e18-1258-4f1b-b8fe-c8853915a7ec
⛔ Files ignored due to path filters (2)
- `dist/index.js` is excluded by `!**/dist/**`
- `dist/index.js.map` is excluded by `!**/dist/**`, `!**/*.map`
📒 Files selected for processing (58)
- .github/workflows/orchestrator-async-checks.yml
- .github/workflows/orchestrator-integrity.yml
- action.yml
- delete-me-update-all-integration-branches.ps1
- delete-me-update-this-integration-branch.ps1
- src/index.ts
- src/model/build-parameters.ts
- src/model/cli/cli.ts
- src/model/input.ts
- src/model/orchestrator/options/orchestrator-folders-auth.test.ts
- src/model/orchestrator/options/orchestrator-folders.test.ts
- src/model/orchestrator/options/orchestrator-folders.ts
- src/model/orchestrator/options/orchestrator-guid.test.ts
- src/model/orchestrator/options/orchestrator-options.ts
- src/model/orchestrator/options/orchestrator-query-override.ts
- src/model/orchestrator/orchestrator.ts
- src/model/orchestrator/providers/ansible/ansible-provider.test.ts
- src/model/orchestrator/providers/ansible/index.ts
- src/model/orchestrator/providers/azure-aci/index.ts
- src/model/orchestrator/providers/cli/cli-provider-protocol.ts
- src/model/orchestrator/providers/cli/cli-provider.test.ts
- src/model/orchestrator/providers/cli/cli-provider.ts
- src/model/orchestrator/providers/cli/index.ts
- src/model/orchestrator/providers/gcp-cloud-run/index.ts
- src/model/orchestrator/providers/github-actions/github-actions-provider.test.ts
- src/model/orchestrator/providers/github-actions/index.ts
- src/model/orchestrator/providers/gitlab-ci/gitlab-ci-provider.test.ts
- src/model/orchestrator/providers/gitlab-ci/index.ts
- src/model/orchestrator/providers/provider-loader.ts
- src/model/orchestrator/providers/provider-selection.test.ts
- src/model/orchestrator/providers/remote-powershell/index.ts
- src/model/orchestrator/providers/remote-powershell/remote-powershell-provider.test.ts
- src/model/orchestrator/remote-client/index.ts
- src/model/orchestrator/services/cache/child-workspace-service.test.ts
- src/model/orchestrator/services/cache/child-workspace-service.ts
- src/model/orchestrator/services/cache/local-cache-service.test.ts
- src/model/orchestrator/services/cache/local-cache-service.ts
- src/model/orchestrator/services/core/follow-log-stream-service.test.ts
- src/model/orchestrator/services/core/runner-availability-service.test.ts
- src/model/orchestrator/services/core/runner-availability-service.ts
- src/model/orchestrator/services/core/task-parameter-serializer.test.ts
- src/model/orchestrator/services/hooks/git-hooks-service.test.ts
- src/model/orchestrator/services/hooks/git-hooks-service.ts
- src/model/orchestrator/services/lfs/elastic-git-storage-service.test.ts
- src/model/orchestrator/services/lfs/elastic-git-storage-service.ts
- src/model/orchestrator/services/lfs/lfs-agent-service.test.ts
- src/model/orchestrator/services/lfs/lfs-agent-service.ts
- src/model/orchestrator/services/reliability/build-reliability-service.test.ts
- src/model/orchestrator/services/reliability/build-reliability-service.ts
- src/model/orchestrator/services/reliability/index.ts
- src/model/orchestrator/services/secrets/secret-source-service.test.ts
- src/model/orchestrator/services/secrets/secret-source-service.ts
- src/model/orchestrator/services/submodule/submodule-profile-service.test.ts
- src/model/orchestrator/services/submodule/submodule-profile-service.ts
- src/model/orchestrator/services/submodule/submodule-profile-types.ts
- src/model/orchestrator/tests/e2e/orchestrator-end2end-caching.test.ts
- src/model/orchestrator/workflows/async-workflow.ts
- src/model/orchestrator/workflows/build-automation-workflow.ts
```ts
      case 'azure-aci':
        OrchestratorLogger.log('⚠ EXPERIMENTAL: Azure Container Instances provider');
        Orchestrator.Provider = new AzureAciProvider(Orchestrator.buildParameters);
      case 'remote-powershell':
        Orchestrator.Provider = new RemotePowershellProvider(Orchestrator.buildParameters);
        break;
```
Add the missing break after azure-aci.
Without it, the azure-aci case falls through into remote-powershell and overwrites the provider instance.
🐛 Minimal fix
```diff
       case 'azure-aci':
         OrchestratorLogger.log('⚠ EXPERIMENTAL: Azure Container Instances provider');
         Orchestrator.Provider = new AzureAciProvider(Orchestrator.buildParameters);
+        break;
       case 'remote-powershell':
         Orchestrator.Provider = new RemotePowershellProvider(Orchestrator.buildParameters);
         break;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
      case 'azure-aci':
        OrchestratorLogger.log('⚠ EXPERIMENTAL: Azure Container Instances provider');
        Orchestrator.Provider = new AzureAciProvider(Orchestrator.buildParameters);
        break;
      case 'remote-powershell':
        Orchestrator.Provider = new RemotePowershellProvider(Orchestrator.buildParameters);
        break;
```
🧰 Tools
🪛 Biome (2.4.4)
[error] 219-221: This case is falling through to the next case.
(lint/suspicious/noFallthroughSwitchClause)
🪛 ESLint
[error] 222-224: Expected a 'break' statement before 'case'.
(no-fallthrough)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/orchestrator.ts` around lines 219 - 224, The switch
case for 'azure-aci' is missing a break so execution falls through and
overwrites Orchestrator.Provider with RemotePowershellProvider; add a break
statement immediately after the line that sets Orchestrator.Provider = new
AzureAciProvider(Orchestrator.buildParameters) (in the switch handling
'azure-aci') so the AzureAciProvider instance remains assigned and does not fall
through to the RemotePowershellProvider case.
```ts
it('prefixes secrets as environment variables in the command', async () => {
  mockRun.mockResolvedValueOnce('ok');

  const secrets = [
    { ParameterKey: 'key1', EnvironmentVariable: 'SECRET_TOKEN', ParameterValue: 'tok-abc' },
    { ParameterKey: 'key2', EnvironmentVariable: 'DEPLOY_KEY', ParameterValue: 'dk-xyz' },
  ];

  await provider.runTaskInWorkflow('guid-secrets', 'img', 'cmd', '/m', '/w', [], secrets as any);

  const command = mockRun.mock.calls[0][0];
  expect(command).toMatch(/^SECRET_TOKEN='tok-abc'/);
  expect(command).toContain("DEPLOY_KEY='dk-xyz'");
  expect(command).toContain('ansible-playbook');
});
```
Don't lock in shell-prefixed secrets here.
This expectation requires `SECRET_TOKEN='...' ansible-playbook ...`, which exposes secret values in process args and any command logging. Pass secrets via the spawned process environment or a temporary vars file instead, then update the test to assert that safer path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/providers/ansible/ansible-provider.test.ts` around
lines 219 - 233, The test currently asserts that secrets are embedded in the
shell command string (e.g., SECRET_TOKEN='...') which leaks values; update
provider.runTaskInWorkflow and the test to pass secrets safely (either via the
spawned process environment or by writing them to a temporary vars file and
pointing ansible-playbook at that file) instead of interpolating them into the
command string. Modify the implementation path used by runTaskInWorkflow to set
env variables on the child process or create a temp vars file and pass its path
as --extra-vars/@file, and change the test (which references mockRun and
mockRun.mock.calls[0][0]) to assert that mockRun was invoked without inline
secret literals and that either process.env contains the secret keys when the
process is spawned or that the command contains a reference to a temp vars file
(not the secret values themselves). Ensure changes touch
provider.runTaskInWorkflow and the test assertions around mockRun only.
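The env-based path the comment recommends can be sketched as follows. This is a minimal illustration, not the provider's actual implementation — `buildEnvironment` and the `Secret` shape are assumptions modeled on the secrets array in the test above:

```typescript
// Hedged sketch: deliver secrets through the child process environment so the
// command line (and any logging of it) never contains secret literals.
import { spawnSync } from 'node:child_process';

interface Secret {
  EnvironmentVariable: string;
  ParameterValue: string;
}

function buildEnvironment(secrets: Secret[]): Record<string, string | undefined> {
  const environment: Record<string, string | undefined> = { ...process.env };
  for (const secret of secrets) {
    environment[secret.EnvironmentVariable] = secret.ParameterValue;
  }
  return environment;
}

const environment = buildEnvironment([
  { EnvironmentVariable: 'SECRET_TOKEN', ParameterValue: 'tok-abc' },
]);

// The argv contains no secret values; the child reads them from its env.
const result = spawnSync(
  process.execPath,
  ['-e', 'console.log(process.env.SECRET_TOKEN)'],
  { env: environment, encoding: 'utf8' },
);
```

The test would then assert that the spawned command contains no inline secret literals and that the env object passed to the runner carries the expected keys.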
```ts
// Build ansible-playbook command
const commandParts = [
  'ansible-playbook',
  `-i "${this.inventory}"`,
  `"${this.playbook}"`,
  `-e '${extraVariablesJson}'`,
  '--no-color',
];

if (this.vaultPassword) {
  commandParts.push(`--vault-password-file "${this.vaultPassword}"`);
}

// Add secret variables as extra environment
const environmentPrefix = secrets
  .map((secret) => `${secret.EnvironmentVariable}='${secret.ParameterValue}'`)
  .join(' ');

const fullCommand = environmentPrefix ? `${environmentPrefix} ${commandParts.join(' ')}` : commandParts.join(' ');
```
Confirm shell injection vulnerability in command building and fix unescaped parameter interpolation.
OrchestratorSystem.Run uses Node.js exec() which invokes a shell by default. The fullCommand concatenates multiple unescaped values into a shell string:
- `this.inventory`, `this.playbook`, and `this.vaultPassword` are wrapped in double quotes but not escaped; quotes or backticks in these values will break the command or enable injection
- `secret.ParameterValue` and `secret.EnvironmentVariable` are unescaped; a single quote in `ParameterValue` breaks the surrounding quotes, and metacharacters enable injection

Only `extraVariablesJson` receives escaping (single-quote escaping). Either escape all interpolated tokens using shell-safe methods (e.g., the shellescape package) or switch to execFile or spawn with argv-based argument passing to eliminate shell interpretation entirely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/providers/ansible/index.ts` around lines 123 - 141,
The command string built in commandParts/fullCommand is vulnerable to shell
injection because this.inventory, this.playbook, this.vaultPassword and secret
fields are interpolated into a shell string unescaped; fix by changing
OrchestratorSystem.Run to invoke the playbook without a shell (use
child_process.spawn or execFile and pass commandParts as argv) or, if keeping a
shell, properly shell-escape every token (inventory, playbook, vaultPassword,
each secret.EnvironmentVariable and secret.ParameterValue) using a vetted
escaping utility (e.g., shell-escape/shellescape) before joining; ensure
extraVariablesJson remains safely quoted and stop concatenating environment
assignments into a single shell string—pass env via the spawn/execFile
options.env instead.
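The argv-based approach the prompt describes can be demonstrated in isolation. A minimal sketch — `hostileInventory` is a deliberately malicious stand-in, and the node child process stands in for `ansible-playbook`:

```typescript
// Hedged sketch: argv-based execution removes shell parsing entirely, so no
// escaping is needed. The hostile value below reaches the child as literal
// data instead of being interpreted as shell syntax.
import { execFileSync } from 'node:child_process';

const hostileInventory = `hosts"; echo injected; "`; // would break naive double-quoting

// Each array element becomes exactly one argument to the child process.
const stdout = execFileSync(
  process.execPath,
  ['-e', 'console.log(process.argv[1])', hostileInventory],
  { encoding: 'utf8' },
);
```

With `exec()` and string concatenation, the same value would terminate the quoted argument and run `echo injected` as a separate command; with `execFileSync` it is just data.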
```ts
// If the target path already exists, remove it to make way for the move
if (fs.existsSync(projectPath)) {
  OrchestratorLogger.log(`[ChildWorkspace] Removing existing target path: ${projectPath}`);
  fs.rmSync(projectPath, { recursive: true, force: true });
}

// Ensure parent directory of project path exists
const parentDirectory = path.dirname(projectPath);
if (!fs.existsSync(parentDirectory)) {
  fs.mkdirSync(parentDirectory, { recursive: true });
}

// Atomic move (rename) - O(1) on NTFS when on same volume
OrchestratorLogger.log(`[ChildWorkspace] Restoring workspace: ${cachedWorkspacePath} -> ${projectPath}`);
fs.renameSync(cachedWorkspacePath, projectPath);
```
Avoid deleting the target workspace before the restore move succeeds.
If fs.renameSync(cachedWorkspacePath, projectPath) throws, the catch returns false after projectPath has already been removed. That turns a restore failure into data loss. Keep a rollback path or stage the move somewhere safe before deleting the destination.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/services/cache/child-workspace-service.ts` around
lines 65 - 79, The current sequence deletes projectPath before doing
fs.renameSync(cachedWorkspacePath, projectPath), which can cause data loss if
the rename fails; change the flow in the function handling workspace restore in
child-workspace-service.ts to first move/rename the existing projectPath to a
safe backup location (e.g., projectPath + '.bak' or a temp dir) instead of
removing it, then perform fs.renameSync(cachedWorkspacePath, projectPath); on
success remove the backup, and on failure attempt to restore the backup by
renaming it back to projectPath, wrapping these steps in try/catch to ensure
rollback; reference the variables projectPath, cachedWorkspacePath, and the
calls fs.rmSync/fs.renameSync and OrchestratorLogger.log when applying this
change.
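The stage-aside-then-rename flow can be sketched as a standalone function. This is an illustrative shape, not the service's actual code — the `.bak` suffix and the `restoreWorkspace` name are assumptions:

```typescript
// Hedged sketch: rename the existing target aside instead of deleting it, and
// roll the backup back into place if the restore rename fails.
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

function restoreWorkspace(cachedWorkspacePath: string, projectPath: string): boolean {
  const backupPath = `${projectPath}.bak`;
  const hadExisting = fs.existsSync(projectPath);
  if (hadExisting) fs.renameSync(projectPath, backupPath); // stage aside, not delete
  try {
    fs.mkdirSync(path.dirname(projectPath), { recursive: true });
    fs.renameSync(cachedWorkspacePath, projectPath);
    if (hadExisting) fs.rmSync(backupPath, { recursive: true, force: true }); // success: drop backup
    return true;
  } catch {
    if (hadExisting) fs.renameSync(backupPath, projectPath); // failure: roll back
    return false;
  }
}

// Exercise the happy path in a temp directory.
const root = fs.mkdtempSync(path.join(os.tmpdir(), 'ws-'));
const cached = path.join(root, 'cached');
const target = path.join(root, 'project');
fs.mkdirSync(cached);
fs.writeFileSync(path.join(cached, 'marker.txt'), 'restored');
const ok = restoreWorkspace(cached, target);
```

A failed rename now leaves the original workspace recoverable instead of turning the failure into data loss.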
```ts
static buildConfig(parameters: {
  childWorkspacesEnabled: boolean;
  childWorkspaceName: string;
  childWorkspaceCacheRoot: string;
  childWorkspacePreserveGit: boolean;
  childWorkspaceSeparateLibrary: boolean;
}): ChildWorkspaceConfig {
  return {
    enabled: parameters.childWorkspacesEnabled,
    workspaceName: parameters.childWorkspaceName,
    parentCacheRoot: parameters.childWorkspaceCacheRoot,
    preserveGitDirectory: parameters.childWorkspacePreserveGit,
    separateLibraryCache: parameters.childWorkspaceSeparateLibrary,
  };
```
Validate childWorkspaceName before building the config.
When child workspaces are enabled and `workspaceName === ''`, `path.join(parentCacheRoot, workspaceName)` collapses to `parentCacheRoot`. `saveWorkspace()` then deletes that directory at Line 134. Path-like values such as `../foo` have the same blast radius. Reject empty/non-segment names up front.
🛡️ Proposed fix
```diff
   static buildConfig(parameters: {
     childWorkspacesEnabled: boolean;
     childWorkspaceName: string;
     childWorkspaceCacheRoot: string;
     childWorkspacePreserveGit: boolean;
     childWorkspaceSeparateLibrary: boolean;
   }): ChildWorkspaceConfig {
+    const workspaceName = parameters.childWorkspaceName.trim();
+    if (
+      parameters.childWorkspacesEnabled &&
+      (!workspaceName || workspaceName === '.' || workspaceName === '..' || /[\\/]/.test(workspaceName))
+    ) {
+      throw new Error('[ChildWorkspace] childWorkspaceName must be a non-empty single path segment');
+    }
+
     return {
       enabled: parameters.childWorkspacesEnabled,
-      workspaceName: parameters.childWorkspaceName,
+      workspaceName,
       parentCacheRoot: parameters.childWorkspaceCacheRoot,
       preserveGitDirectory: parameters.childWorkspacePreserveGit,
       separateLibraryCache: parameters.childWorkspaceSeparateLibrary,
     };
   }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
static buildConfig(parameters: {
  childWorkspacesEnabled: boolean;
  childWorkspaceName: string;
  childWorkspaceCacheRoot: string;
  childWorkspacePreserveGit: boolean;
  childWorkspaceSeparateLibrary: boolean;
}): ChildWorkspaceConfig {
  const workspaceName = parameters.childWorkspaceName.trim();
  if (
    parameters.childWorkspacesEnabled &&
    (!workspaceName || workspaceName === '.' || workspaceName === '..' || /[\\/]/.test(workspaceName))
  ) {
    throw new Error('[ChildWorkspace] childWorkspaceName must be a non-empty single path segment');
  }

  return {
    enabled: parameters.childWorkspacesEnabled,
    workspaceName,
    parentCacheRoot: parameters.childWorkspaceCacheRoot,
    preserveGitDirectory: parameters.childWorkspacePreserveGit,
    separateLibraryCache: parameters.childWorkspaceSeparateLibrary,
  };
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/services/cache/child-workspace-service.ts` around
lines 310 - 323, The buildConfig function should validate childWorkspaceName
when childWorkspacesEnabled is true: in ChildWorkspaceService.buildConfig check
parameters.childWorkspacesEnabled and then ensure parameters.childWorkspaceName
is non-empty and is a single path segment (reject '', '.' , '..', or any value
containing path separators like '/' or '\\'); if invalid throw a clear Error
(e.g., "invalid childWorkspaceName") to prevent parentCacheRoot being used or
dangerous paths being created (this validation protects saveWorkspace which
deletes the resolved path).
```ts
for (const group of hookGroups) {
  try {
    await OrchestratorSystem.Run(`cd "${repoPath}" && npx lefthook run ${group}`, true);
```
Validate hook group names before invoking lefthook.
group is interpolated directly into a shell command. Since these names come from inputs, a crafted value can break out of npx lefthook run ... and execute arbitrary commands on the runner.
🔒 Minimal hardening
```diff
   for (const group of hookGroups) {
     try {
-      await OrchestratorSystem.Run(`cd "${repoPath}" && npx lefthook run ${group}`, true);
+      if (!/^[A-Za-z0-9:_-]+$/.test(group)) {
+        throw new Error(`Invalid hook group: ${group}`);
+      }
+      await OrchestratorSystem.Run(`cd "${repoPath}" && npx lefthook run "${group}"`, true);
       OrchestratorLogger.log(`[GitHooks] Hook group '${group}' passed`);
       results[group] = true;
     } catch (error: any) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/services/hooks/git-hooks-service.ts` around lines 179
- 181, The loop invoking OrchestratorSystem.Run interpolates untrusted hook
group names (hookGroups) into a shell string; validate and sanitize those names
before use: implement an allowlist or regex check (e.g., only alphanumeric,
dashes/underscores) for each group and reject or skip invalid entries, and
prefer invoking lefthook without shell interpolation by switching
OrchestratorSystem.Run usage to a safe exec form that accepts argv (or create a
wrapper that calls npx with arguments rather than building a single shell
command). Update the code around OrchestratorSystem.Run, hookGroups, and
repoPath to enforce the validation and use argument-based execution to prevent
command injection.
```ts
if (token) {
  OrchestratorLogger.log('Configuring git authentication for submodule initialization...');
  await OrchestratorSystem.Run(`git config url."https://${token}@github.com/".insteadOf "https://github.com/"`);
```
Replace url.insteadOf with http.extraHeader for secure token handling.
The token is exposed in plain text in git config with `url.insteadOf`. The rest of the codebase has migrated to `http.extraHeader`, which keeps the token out of URLs and config output. This method should use the same secure authentication pattern already established in remote-client and orchestrator-folders.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/services/submodule/submodule-profile-service.ts`
around lines 198 - 200, The current code stores the token in git config via
url.insteadOf which exposes it in plain text; update the block that checks token
(the token variable in submodule-profile-service and the OrchestratorSystem.Run
call) to instead configure git to use http.extraHeader for github (matching the
pattern used in remote-client and orchestrator-folders), i.e., remove the
url."https://...".insteadOf command and run a git config command that sets
http.https://github.com/.extraHeader to an Authorization header containing the
token (and use the same header format and --replace-all/local semantics used
elsewhere), and ensure you also clear/remove that extraHeader after submodule
init so the token is not left in git config.
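A sketch of the extraHeader-style configuration the prompt describes. The `x-access-token` user prefix and the exact flag set are assumptions modeled on GitHub's basic-auth convention; the real pattern should mirror whatever remote-client and orchestrator-folders actually do:

```typescript
// Hedged sketch: build git config argv for setting and clearing an
// Authorization extraHeader, instead of embedding the token in a URL.
function buildExtraHeaderConfig(token: string): { set: string[]; unset: string[] } {
  // GitHub accepts basic auth of the form x-access-token:<token>, base64-encoded.
  const basicAuth = Buffer.from(`x-access-token:${token}`).toString('base64');
  return {
    set: [
      'config',
      '--local',
      '--replace-all',
      'http.https://github.com/.extraHeader',
      `AUTHORIZATION: basic ${basicAuth}`,
    ],
    // Run after submodule init so the token is not left in git config.
    unset: ['config', '--local', '--unset-all', 'http.https://github.com/.extraHeader'],
  };
}

const { set } = buildExtraHeaderConfig('ghs_example');
```

Unlike `url.insteadOf`, the token never appears in a remote URL, and the unset step removes it from config once the submodules are initialized.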
```ts
for (const action of plan) {
  const fullPath = path.posix.join(repoPath, action.path).replace(/\\/g, '/');

  if (action.action === 'init') {
    OrchestratorLogger.log(`Initializing submodule: ${action.name} (branch: ${action.branch})`);
    await OrchestratorSystem.Run(`git submodule update --init ${action.path}`);

    if (action.branch !== 'main') {
      OrchestratorLogger.log(`Checking out branch '${action.branch}' for submodule: ${action.name}`);
      await OrchestratorSystem.Run(`git -C ${action.path} checkout ${action.branch}`);
    }
  } else {
    OrchestratorLogger.log(`Skipping submodule: ${action.name}`);
    await OrchestratorSystem.Run(`git submodule deinit -f ${action.path} 2>/dev/null || true`);
```
Fix shell injection in git submodule commands by properly escaping or using array-based command execution.
Commands on lines 208, 212, and 216 inject `action.path` and `action.branch` unquoted into shell strings passed to `exec()`, which spawns a shell. Since `action.path` is parsed directly from `.gitmodules` and `action.branch` from the YAML profile, an attacker controlling either file can inject arbitrary shell metacharacters (e.g., `; rm -rf /`) to execute unintended commands. Additionally, the `token` parameter on line 200 is similarly unescaped. Use a shell-escaping library or, preferably, replace `exec()` with `execFile()` or `spawn()` using argument arrays to avoid shell parsing entirely.
🧰 Tools
🪛 ESLint
[error] 204-204: 'fullPath' is assigned a value but never used.
(no-unused-vars)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/services/submodule/submodule-profile-service.ts`
around lines 203 - 216, The git invocations in the for-loop (calls to
OrchestratorSystem.Run that build shell strings with action.path and
action.branch) and the earlier use of the token parameter are vulnerable to
shell injection; change these to use argument-array execution (e.g.,
spawn/execFile style) so values are passed as separate args rather than
interpolated into a shell string. Update the calls in the loop that execute "git
submodule update --init", "git -C <path> checkout <branch>", and "git submodule
deinit -f <path>" to pass the command and each component (e.g., "git",
["submodule","update","--init", action.path]) or equivalent via
OrchestratorSystem.Run so action.path and action.branch are never concatenated
into a shell command, and similarly ensure the token is passed as an argument
(or safely escaped) where it's used; modify OrchestratorSystem.Run or its
callers as needed to accept and forward argument arrays securely.
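An argv-accepting wrapper of the kind the prompt suggests might look like this. The `runArgv` name is hypothetical — the real `OrchestratorSystem.Run` signature may differ — and the node child process stands in for `git` so the sketch is self-contained:

```typescript
// Hedged sketch: a Run variant that takes an argument array, so values parsed
// from .gitmodules or a YAML profile are never concatenated into a shell string.
import { execFileSync } from 'node:child_process';

function runArgv(command: string, args: string[]): string {
  return execFileSync(command, args, { encoding: 'utf8' });
}

// Intended call sites (illustrative):
//   runArgv('git', ['submodule', 'update', '--init', action.path]);
//   runArgv('git', ['-C', action.path, 'checkout', action.branch]);

// A path containing shell metacharacters stays inert:
const echoed = runArgv(process.execPath, [
  '-e',
  'console.log(process.argv[1])',
  'Assets/$(rm -rf /)',
]);
```

Because each value is its own argv entry, `$(...)`, `;`, and quotes in submodule paths or branch names are passed through verbatim rather than interpreted.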
```ts
}

private buildPwshCommand(script: string): string {
  return `pwsh -NoProfile -NonInteractive -Command "${script.replace(/"/g, '\\"')}"`;
```
Check failure: Code scanning / CodeQL: Incomplete string escaping or encoding (High)

Copilot Autofix (AI, 2 days ago)
In general, when building command strings that embed user-controlled or variable content inside quoted literals, you must escape all special characters relevant to that quoting context, including backslashes and quotation marks, and do so globally. In JavaScript/TypeScript, String.prototype.replace with a plain string argument only affects the first occurrence and doesn’t handle backslashes; you should use regular expressions with the g flag and perform backslash escaping before escaping quotes.
In this file, the immediate issue is in buildPwshCommand (line 139–141). It currently returns:
```ts
return `pwsh -NoProfile -NonInteractive -Command "${script.replace(/"/g, '\\"')}"`;
```

This escapes only double quotes. To fix the problem without changing the intended functionality, we should first escape backslashes (`\` → `\\`) and then escape double quotes (`"` → `\"`) within `script`. This preserves existing behavior for inputs without backslashes, while correctly handling inputs that contain them. A straightforward change is:

```ts
const escapedScript = script.replace(/\\/g, '\\\\').replace(/"/g, '\\"');
return `pwsh -NoProfile -NonInteractive -Command "${escapedScript}"`;
```
return `pwsh -NoProfile -NonInteractive -Command "${escapedScript}"`;No new methods or imports are required; the change is entirely local to the buildPwshCommand method in src/model/orchestrator/providers/remote-powershell/index.ts. We do not modify buildInvokeCommand because the CodeQL report only targets line 140, and we must avoid unrequested behavioral changes elsewhere.
```diff
@@ -137,7 +137,8 @@
   }

   private buildPwshCommand(script: string): string {
-    return `pwsh -NoProfile -NonInteractive -Command "${script.replace(/"/g, '\\"')}"`;
+    const escapedScript = script.replace(/\\/g, '\\\\').replace(/"/g, '\\"');
+    return `pwsh -NoProfile -NonInteractive -Command "${escapedScript}"`;
   }

   private buildInvokeCommand(remoteScript: string): string {
```
```ts
}

private buildInvokeCommand(remoteScript: string): string {
  const escapedScript = remoteScript.replace(/"/g, '\\"').replace(/'/g, "''");
```
Check failure: Code scanning / CodeQL: Incomplete string escaping or encoding (High)

Copilot Autofix (AI, 2 days ago)
In general, this should be fixed by performing consistent escaping of all characters that are significant for the layers involved (the outer shell, PowerShell -Command, and the PowerShell script block), instead of only escaping quotes. At minimum, backslashes must be escaped before escaping quotes, so that sequences like \" do not become inconsistently interpreted. A better long-term design would be to avoid manual string concatenation by using argument arrays or a helper library for PowerShell command construction, but that would alter more behavior.
For this codebase, the minimal change that preserves existing functionality is to update the escapedScript computation in buildInvokeCommand so that it first doubles backslashes, then escapes double quotes, then escapes single quotes for PowerShell’s single-quoted strings. This keeps the rest of the command format unchanged. Concretely, in src/model/orchestrator/providers/remote-powershell/index.ts around line 144, change:
```ts
const escapedScript = remoteScript.replace(/"/g, '\\"').replace(/'/g, "''");
```

to:

```ts
const escapedScript = remoteScript
  .replace(/\\/g, '\\\\')
  .replace(/"/g, '\\"')
  .replace(/'/g, "''");
```

No new imports or helpers are required; this uses only built-in string replacement with global regular expressions.
```diff
@@ -141,7 +141,10 @@
   }

   private buildInvokeCommand(remoteScript: string): string {
-    const escapedScript = remoteScript.replace(/"/g, '\\"').replace(/'/g, "''");
+    const escapedScript = remoteScript
+      .replace(/\\/g, '\\\\')
+      .replace(/"/g, '\\"')
+      .replace(/'/g, "''");

     if (this.transport === 'ssh') {
       return `pwsh -NoProfile -NonInteractive -Command "Invoke-Command -HostName '${this.host}' -ScriptBlock { ${escapedScript} }"`;
```
…support' into release/lts-infrastructure

Merges latest commits from PR #777 (git hooks fix + 129 new tests). Resolved merge conflicts:

- dist/index.js, dist/index.js.map: rebuilt from merged source
- src/model/input.ts: fixed 3 missing closing braces from merge
- src/model/orchestrator/options/orchestrator-options.ts: fixed 1 missing closing brace from merge
Actionable comments posted: 1
🧹 Nitpick comments (4)
src/index.ts (2)
57-57: Avoid `any` type for `childWorkspaceConfig`.

Using `any` loses type safety. Consider importing the `ChildWorkspaceConfig` type from the service module or defining an interface locally.

🔧 Proposed fix

```diff
- let childWorkspaceConfig: any;
+ let childWorkspaceConfig: { enabled: boolean; workspaceName: string; parentCacheRoot: string; preserveGitDirectory: boolean; separateLibraryCache: boolean } | undefined;
```

Alternatively, import the type:

```typescript
import type { ChildWorkspaceConfig } from './model/orchestrator/services/cache/child-workspace-service';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` at line 57, Replace the loose any type on the variable childWorkspaceConfig with a concrete type: import the ChildWorkspaceConfig type from the child-workspace service module (e.g., import type { ChildWorkspaceConfig } from the service) or define a local interface matching that service's shape, then change the declaration to use ChildWorkspaceConfig so the variable reads like: let childWorkspaceConfig: ChildWorkspaceConfig; ensuring any usages conform to the imported/defined type.
60-62: Cache root fallback may produce a relative path.

If `childWorkspaceCacheRoot`, `runnerTempPath`, and `RUNNER_TEMP` are all empty/undefined, this evaluates to `path.join('', 'game-ci-workspaces')`, which yields the relative path `game-ci-workspaces`. This could cause workspace operations to occur in unexpected locations.

Consider adding validation or a more robust fallback:

🛡️ Proposed fix

```diff
 const cacheRoot =
   buildParameters.childWorkspaceCacheRoot ||
-  path.join(buildParameters.runnerTempPath || process.env.RUNNER_TEMP || '', 'game-ci-workspaces');
+  path.join(buildParameters.runnerTempPath || process.env.RUNNER_TEMP || process.cwd(), 'game-ci-workspaces');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` around lines 60 - 62, The current cacheRoot assignment (variable cacheRoot using buildParameters.childWorkspaceCacheRoot, buildParameters.runnerTempPath, and process.env.RUNNER_TEMP with path.join('', 'game-ci-workspaces')) can produce a relative path; update the logic so the fallback is an absolute safe directory (e.g., use os.tmpdir() or another validated absolute path) and normalize to an absolute path via path.resolve; specifically, change how cacheRoot is derived so that if buildParameters.childWorkspaceCacheRoot, buildParameters.runnerTempPath, and process.env.RUNNER_TEMP are falsy you fall back to os.tmpdir() (or throw) and then call path.resolve(...) to ensure cacheRoot is absolute before use.

src/index-enterprise-features.test.ts (1)
190-204: Timing-dependent test helper may cause flaky tests.

The `runIndex` helper uses `setTimeout(resolve, 100)` to wait for async operations to complete. This fixed-delay approach can lead to flaky tests if:
- The operations take longer than 100ms (test passes incorrectly or assertions run too early)
- CI environments have variable performance
Consider using `setImmediate` with proper promise chaining or `jest.runAllTimers()` with fake timers for more deterministic behavior.

🔧 Proposed improvement
```diff
 async function runIndex(overrides: EnterpriseBuildParametersOverrides = {}): Promise<void> {
   mockedBuildParametersCreate.mockResolvedValue(createMockBuildParameters(overrides));
   return new Promise<void>((resolve) => {
     jest.isolateModules(() => {
       require('./index');
-
-      // runMain() is async; give it a tick to complete
-      // We use setImmediate to ensure all microtasks from the dynamic imports resolve
     });
-    // Allow all promises and microtasks to settle
-    setTimeout(resolve, 100);
+    // Flush all pending promises and microtasks
+    setImmediate(() => {
+      process.nextTick(resolve);
+    });
   });
 }
```

Or use fake timers:
```typescript
jest.useFakeTimers();
// ... in runIndex:
jest.runAllTimers();
jest.useRealTimers();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index-enterprise-features.test.ts` around lines 190 - 204, The helper runIndex is flaky because it uses setTimeout(resolve, 100) to wait for async work after require('./index'); replace the fixed-delay with a deterministic wait: either await the actual startup promise exported by the module (e.g., call and await an exported runMain or similar from require('./index')) or, if you must wait for microtasks, replace the setTimeout with a Promise that resolves on setImmediate (return new Promise(r => setImmediate(r))) or switch to fake timers around runIndex using jest.useFakeTimers() + jest.runAllTimers() and then jest.useRealTimers(); update runIndex to use one of these deterministic strategies and keep references to mockedBuildParametersCreate/createMockBuildParameters/jest.isolateModules/require('./index') to locate the change.

src/model/enterprise-inputs.test.ts (1)
486-508: Consider adding coverage for additional enterprise properties.

The defaults test verifies 19 enterprise properties, but the PR also adds:

- `gitIntegrityCheck`, `gitAutoRecover`, `cleanReservedFilenames`
- `buildArchiveEnabled`, `buildArchivePath`, `buildArchiveRetention`

These properties are used in `src/index.ts` for pre-build reliability checks and post-build archiving.

📝 Suggested additions to defaults test

```diff
   expect(parameters.providerExecutable).toBe('');
+  expect(parameters.gitIntegrityCheck).toBe(false);
+  expect(parameters.gitAutoRecover).toBe(false);
+  expect(parameters.cleanReservedFilenames).toBe(false);
+  expect(parameters.buildArchiveEnabled).toBe(false);
+  expect(parameters.buildArchivePath).toBe('./build-archives');
+  expect(parameters.buildArchiveRetention).toBe(30);
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/enterprise-inputs.test.ts` around lines 486 - 508, The test at BuildParameters.create currently misses assertions for the new enterprise properties; update the 'has correct defaults for all enterprise properties' test to include expectations for gitIntegrityCheck, gitAutoRecover, cleanReservedFilenames, buildArchiveEnabled, buildArchivePath, and buildArchiveRetention by calling BuildParameters.create() and asserting each new property equals its actual default from the implementation (use the same boolean/string/number default values defined on BuildParameters for gitIntegrityCheck, gitAutoRecover, cleanReservedFilenames, buildArchiveEnabled, buildArchivePath, and buildArchiveRetention).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/build-tests-mac.yml:
- Line 15: Move the job-level continue-on-error: true to a per-matrix-entry
expression so only opt-in rows are non-blocking: remove or set continue-on-error
at the job level and add continue-on-error: ${{ matrix.allowFailure }} (or
similar) alongside your matrix definition; keep strategy.fail-fast: false as-is
and ensure the matrix includes an allowFailure boolean for experimental
macOS/iOS rows to preserve diagnostics while preventing regressions from masking
CI failures.
---
Nitpick comments:
In `@src/index-enterprise-features.test.ts`:
- Around line 190-204: The helper runIndex is flaky because it uses
setTimeout(resolve, 100) to wait for async work after require('./index');
replace the fixed-delay with a deterministic wait: either await the actual
startup promise exported by the module (e.g., call and await an exported runMain
or similar from require('./index')) or, if you must wait for microtasks, replace
the setTimeout with a Promise that resolves on setImmediate (return new
Promise(r => setImmediate(r))) or switch to fake timers around runIndex using
jest.useFakeTimers() + jest.runAllTimers() and then jest.useRealTimers(); update
runIndex to use one of these deterministic strategies and keep references to
mockedBuildParametersCreate/createMockBuildParameters/jest.isolateModules/require('./index')
to locate the change.
In `@src/index.ts`:
- Line 57: Replace the loose any type on the variable childWorkspaceConfig with
a concrete type: import the ChildWorkspaceConfig type from the child-workspace
service module (e.g., import type { ChildWorkspaceConfig } from the service) or
define a local interface matching that service's shape, then change the
declaration to use ChildWorkspaceConfig so the variable reads like: let
childWorkspaceConfig: ChildWorkspaceConfig; ensuring any usages conform to the
imported/defined type.
- Around line 60-62: The current cacheRoot assignment (variable cacheRoot using
buildParameters.childWorkspaceCacheRoot, buildParameters.runnerTempPath, and
process.env.RUNNER_TEMP with path.join('', 'game-ci-workspaces')) can produce a
relative path; update the logic so the fallback is an absolute safe directory
(e.g., use os.tmpdir() or another validated absolute path) and normalize to an
absolute path via path.resolve; specifically, change how cacheRoot is derived so
that if buildParameters.childWorkspaceCacheRoot, buildParameters.runnerTempPath,
and process.env.RUNNER_TEMP are falsy you fall back to os.tmpdir() (or throw)
and then call path.resolve(...) to ensure cacheRoot is absolute before use.
In `@src/model/enterprise-inputs.test.ts`:
- Around line 486-508: The test at BuildParameters.create currently misses
assertions for the new enterprise properties; update the 'has correct defaults
for all enterprise properties' test to include expectations for
gitIntegrityCheck, gitAutoRecover, cleanReservedFilenames, buildArchiveEnabled,
buildArchivePath, and buildArchiveRetention by calling BuildParameters.create()
and asserting each new property equals its actual default from the
implementation (use the same boolean/string/number default values defined on
BuildParameters for gitIntegrityCheck, gitAutoRecover, cleanReservedFilenames,
buildArchiveEnabled, buildArchivePath, and buildArchiveRetention).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 3cbff279-839a-4240-89ba-c2281f287447
⛔ Files ignored due to path filters (2)
- `dist/index.js` is excluded by `!**/dist/**`
- `dist/index.js.map` is excluded by `!**/dist/**`, `!**/*.map`
📒 Files selected for processing (7)
- .github/workflows/build-tests-mac.yml
- src/index-enterprise-features.test.ts
- src/index.ts
- src/model/enterprise-inputs.test.ts
- src/model/input.ts
- src/model/orchestrator/options/orchestrator-options.ts
- src/model/orchestrator/services/submodule/submodule-profile-service.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/model/orchestrator/options/orchestrator-options.ts
```yaml
buildForAllPlatformsMacOS:
  name: ${{ matrix.targetPlatform }} on ${{ matrix.unityVersion }}
  runs-on: macos-latest
  continue-on-error: true
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n .github/workflows/build-tests-mac.yml | head -100
```

Repository: game-ci/unity-builder
Length of output: 3751
🌐 Web query:
GitHub Actions matrix fail-fast continue-on-error per entry scoping documentation
💡 Result:
- `jobs.<job_id>.strategy.fail-fast` is matrix-wide: when `true` (default), if any matrix job fails, GitHub cancels the other in-progress or queued matrix jobs. This setting lives under the job's `strategy`. [1]
- `jobs.<job_id>.continue-on-error` is per job run (per matrix entry): it's a job-level key (sibling to `runs-on`, `steps`, etc.), and it can be an expression that references `matrix.*`, so each matrix leg can decide whether its own failure should be tolerated. [2] [3]
- Per-entry scoping pattern: add a boolean (or similar) field in `strategy.matrix.include`, then read it from `jobs.<job_id>.continue-on-error`. This is explicitly supported/documented and commonly shown as "experimental" legs. [2] [4]
- Context availability ("scoping"): the `matrix` context is available when evaluating `jobs.<job_id>.continue-on-error`. [5]
Example:
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    continue-on-error: ${{ matrix.allow_failure }}
    strategy:
      fail-fast: true
      matrix:
        python: [ "3.11", "3.12" ]
        allow_failure: [ false ]
        include:
          - python: "3.13"
            allow_failure: true
```

Docs to cite/quote in your own notes
- Matrix "Handling failures" section (shows `continue-on-error: ${{ matrix.experimental }}` + `strategy.fail-fast`). [2]
- Workflow syntax reference entries for `jobs.<job_id>.strategy.fail-fast` and `jobs.<job_id>.continue-on-error`. [1]
- Contexts table confirming `matrix` is allowed for `jobs.<job_id>.continue-on-error`. [5]
References:
[1] GitHub Docs — Workflow syntax for GitHub Actions (docs.github.com)
[2] GitHub Docs — Run job variations (matrix) (“Handling failures” section) (docs.github.com)
[3] GitHub Blog changelog — expressions in job continue-on-error with matrix.* (github.blog)
[4] GitHub Docs — matrix example using matrix.experimental to drive continue-on-error (docs.github.com)
[5] GitHub Docs — Contexts reference (shows jobs.<job_id>.continue-on-error supports matrix) (docs.github.com)
Citations:
- 1: https://docs.github.com/actions/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions?utm_source=openai
- 2: https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs
- 3: https://github.blog/changelog/2020-04-15-github-actions-new-workflow-features/
- 4: https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs?utm_source=openai
- 5: https://docs.github.com/en/actions/reference/workflows-and-actions/contexts
Scope continue-on-error to opt-in matrix rows, not the entire job.
strategy.fail-fast: false already keeps the rest of the matrix running. Setting continue-on-error: true at the job level makes every failing macOS/iOS build non-blocking, allowing the workflow to pass even when a platform build regresses. GitHub's matrix documentation recommends scoping continue-on-error per matrix entry using an expression like matrix.allowFailure to keep diagnostic collection for experimental versions without weakening CI signal for stable ones.
♻️ Suggested change
- continue-on-error: true
+ continue-on-error: ${{ matrix.allowFailure == true }}
strategy:
fail-fast: false
matrix:
projectPath:
- test-project
unityVersion:
- 2021.3.45f1
- 2022.3.13f1
- 2023.2.2f1
targetPlatform:
- StandaloneOSX
- iOS
include:
- unityVersion: 6000.0.36f1
targetPlatform: StandaloneOSX
+ allowFailure: true
- unityVersion: 6000.0.36f1
targetPlatform: StandaloneOSX
buildProfile: 'Assets/Settings/Build Profiles/Sample macOS Build Profile.asset'
+ allowFailure: true🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build-tests-mac.yml at line 15, Move the job-level
continue-on-error: true to a per-matrix-entry expression so only opt-in rows are
non-blocking: remove or set continue-on-error at the job level and add
continue-on-error: ${{ matrix.allowFailure }} (or similar) alongside your matrix
definition; keep strategy.fail-fast: false as-is and ensure the matrix includes
an allowFailure boolean for experimental macOS/iOS rows to preserve diagnostics
while preventing regressions from masking CI failures.
Use the Checks API to flip failed macOS build conclusions to neutral (gray dash) so unstable builds don't show red X marks on PRs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🧹 Nitpick comments (1)
.github/workflows/build-tests-mac.yml (1)
11-12: Scope the new token permission to the job that needs it.

This top-level `permissions` block applies across the workflow, and once any permission is specified the unspecified ones default to `none`. Since `buildForAllPlatformsMacOS` uses `actions/checkout`, keeping `checks: write` workflow-wide unnecessarily strips the repo-read permission that checkout recommends. (docs.github.com)

♻️ Suggested change

```diff
-permissions:
-  checks: write
+permissions:
+  contents: read

 jobs:
   buildForAllPlatformsMacOS:
     name: ${{ matrix.targetPlatform }} on ${{ matrix.unityVersion }}
     runs-on: macos-latest
     continue-on-error: true
@@
   markUnstableBuilds:
     name: Mark unstable macOS builds
     needs: buildForAllPlatformsMacOS
     if: always()
     runs-on: ubuntu-latest
+    permissions:
+      checks: write
     steps:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build-tests-mac.yml around lines 11 - 12, The workflow currently sets a top-level permissions block ("permissions: checks: write") which applies to all jobs and removes other default permissions; move that permission into the specific job that needs it by deleting the top-level permissions entry and adding a job-level "permissions: checks: write" under the buildForAllPlatformsMacOS job (the job that calls actions/checkout), so other jobs keep the default repo read permissions required by actions/checkout and only buildForAllPlatformsMacOS gets checks: write.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/build-tests-mac.yml:
- Around line 95-123: Replace the global checks.listForRef approach with the
Workflow Jobs API scoped to this run: call
github.rest.actions.listJobsForWorkflowRun with run_id: context.runId to get
only jobs for the current workflow run, iterate the returned jobs (not
check_runs), filter by job.name including "StandaloneOSX" or "iOS", extract the
check_run id from each job.check_run_url (parse the final path segment) and call
github.rest.checks.update with that id to set conclusion: 'neutral' and the
unstable output; also add a top-level permissions entry to the workflow YAML:
permissions: { actions: read, checks: write } so the API calls are permitted.
---
Nitpick comments:
In @.github/workflows/build-tests-mac.yml:
- Around line 11-12: The workflow currently sets a top-level permissions block
("permissions: checks: write") which applies to all jobs and removes other
default permissions; move that permission into the specific job that needs it by
deleting the top-level permissions entry and adding a job-level "permissions:
checks: write" under the buildForAllPlatformsMacOS job (the job that calls
actions/checkout), so other jobs keep the default repo read permissions required
by actions/checkout and only buildForAllPlatformsMacOS gets checks: write.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: cfc80615-c500-4e69-9111-3f7a418528e7
📒 Files selected for processing (1)
.github/workflows/build-tests-mac.yml
```yaml
markUnstableBuilds:
  name: Mark unstable macOS builds
  needs: buildForAllPlatformsMacOS
  if: always()
  runs-on: ubuntu-latest
  steps:
    - uses: actions/github-script@v7
      with:
        script: |
          const { data: checkRuns } = await github.rest.checks.listForRef({
            owner: context.repo.owner,
            repo: context.repo.repo,
            ref: context.sha,
            per_page: 100,
          });

          for (const run of checkRuns.check_runs) {
            if (run.conclusion !== 'failure' || run.app?.slug !== 'github-actions') continue;
            if (!run.name.includes('StandaloneOSX') && !run.name.includes('iOS')) continue;

            await github.rest.checks.update({
              owner: context.repo.owner,
              repo: context.repo.repo,
              check_run_id: run.id,
              conclusion: 'neutral',
              output: {
                title: `${run.name} (unstable)`,
                summary: 'This macOS build is unstable and its failure does not block merging.',
              },
```
🧩 Analysis chain
🏁 Script executed:
```shell
find . -name "build-tests-ubuntu.yml" -type f
```

Repository: game-ci/unity-builder
Length of output: 107
🏁 Script executed:
```shell
fd -t f "build-tests-ubuntu.yml"
```

Repository: game-ci/unity-builder
Length of output: 47
🏁 Script executed:
```shell
git ls-files | grep -i "build.*ubuntu"
```

Repository: game-ci/unity-builder
Length of output: 105
🏁 Script executed:
```shell
cat -n .github/workflows/build-tests-ubuntu.yml | sed -n '39,62p'
```

Repository: game-ci/unity-builder
Length of output: 1118
🏁 Script executed:
```shell
cat -n .github/workflows/build-tests-mac.yml | sed -n '95,123p'
```

Repository: game-ci/unity-builder
Length of output: 1376
🌐 Web query:
GitHub API checks.listForRef scope all workflows commit
💡 Result:
octokit.checks.listForRef() calls the REST endpoint “List check runs for a Git reference”:
- Endpoint:
`GET /repos/{owner}/{repo}/commits/{ref}/check-runs`, where `ref` can be a commit SHA, branch, or tag. It lists check runs associated with that commit (this includes check runs produced by GitHub Actions workflows, plus other GitHub Apps that create checks). [1]
`filter=latest` (most recent check runs). To return everything available for that commit, use `filter=all`. [1]
- PAT classic / OAuth: needs
`repo` scope for private repos. [1]
- Public repo: can be called without auth if you’re only requesting public resources. [1]
- PAT classic / OAuth: needs
- Gotchas: if the commit has >1000 check suites, results are limited to the 1000 most recent suites; to iterate beyond that you must list check suites for the ref and then list runs per suite. Also, checks endpoints only look for pushes in the repo where the suite/run was created (fork pushes won’t be detected). [1]
Sources: GitHub REST API “Check runs” docs (List check runs for a Git reference). [1]
🌐 Web query:
GitHub API actions.listJobsForWorkflowRun permissions required
💡 Result:
For the REST endpoint behind actions.listJobsForWorkflowRun (GET /repos/{owner}/{repo}/actions/runs/{run_id}/jobs):
- Repository access: anyone with read access to the repo can use it. [1]
- Fine-grained PAT / GitHub App tokens: require Repository permissions → Actions: Read. [1]
- Classic PAT / OAuth app token (private repos): requires the
`repo` scope. [1]
- Public repos: can be called without authentication (subject to unauthenticated rate limits). [1]
Sources:
[1] GitHub Docs – “REST API endpoints for workflow jobs” (List jobs for a workflow run)
Scope the check run updates to only this workflow run.
checks.listForRef returns check runs from all workflows for the commit. The current filter—checking if job names include StandaloneOSX or iOS—will match Ubuntu workflow jobs (which use the same matrix values at .github/workflows/build-tests-ubuntu.yml:54-58). This causes unrelated Ubuntu job failures to be neutralized, weakening branch protection for real regressions.
Use the Workflow Jobs API instead, which scopes to the current workflow run via context.runId and directly exposes each job's check_run_url. Add permissions: { actions: read, checks: write } to support the API calls.
Suggested change
markUnstableBuilds:
name: Mark unstable macOS builds
needs: buildForAllPlatformsMacOS
if: always()
runs-on: ubuntu-latest
+ permissions:
+ actions: read
+ checks: write
steps:
- uses: actions/github-script@v7
with:
script: |
- const { data: checkRuns } = await github.rest.checks.listForRef({
- owner: context.repo.owner,
- repo: context.repo.repo,
- ref: context.sha,
- per_page: 100,
- });
-
- for (const run of checkRuns.check_runs) {
- if (run.conclusion !== 'failure' || run.app?.slug !== 'github-actions') continue;
- if (!run.name.includes('StandaloneOSX') && !run.name.includes('iOS')) continue;
-
- await github.rest.checks.update({
+ const jobs = await github.paginate(
+ github.rest.actions.listJobsForWorkflowRun,
+ {
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ run_id: context.runId,
+ per_page: 100,
+ },
+ );
+
+ for (const job of jobs) {
+ if (job.conclusion !== 'failure') continue;
+ if (!job.name.includes('StandaloneOSX') && !job.name.includes('iOS')) continue;
+
+ const checkRunId = Number(job.check_run_url.split('/').pop());
+
+ await github.rest.checks.update({
owner: context.repo.owner,
repo: context.repo.repo,
- check_run_id: run.id,
+ check_run_id: checkRunId,
conclusion: 'neutral',
output: {
- title: `${run.name} (unstable)`,
+ title: `${job.name} (unstable)`,
summary: 'This macOS build is unstable and its failure does not block merging.',
},
});
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build-tests-mac.yml around lines 95 - 123, Replace the
global checks.listForRef approach with the Workflow Jobs API scoped to this run:
call github.rest.actions.listJobsForWorkflowRun with run_id: context.runId to
get only jobs for the current workflow run, iterate the returned jobs (not
check_runs), filter by job.name including "StandaloneOSX" or "iOS", extract the
check_run id from each job.check_run_url (parse the final path segment) and call
github.rest.checks.update with that id to set conclusion: 'neutral' and the
unstable output; also add a top-level permissions entry to the workflow YAML:
permissions: { actions: read, checks: write } so the API calls are permitted.
A mangled comment merged the "Create k3d cluster" step header with the previous yarn install step, causing a duplicate `run` key that prevented the entire Integrity workflow from loading. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🧹 Nitpick comments (5)
.github/workflows/orchestrator-integrity.yml (5)
6-10: Unused workflow input `runGithubIntegrationTests`.

The input `runGithubIntegrationTests` is declared but never referenced in any job or step condition. Consider removing it if it's no longer needed, or wire it up to conditionally run the GitHub integration tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/orchestrator-integrity.yml around lines 6 - 10, The workflow input runGithubIntegrationTests is declared but unused; either remove the runGithubIntegrationTests input block or wire it into the job that runs the GitHub Checks integration tests (the job described as "Run GitHub Checks integration tests") by adding a conditional that checks inputs.runGithubIntegrationTests (e.g., an if: that compares to 'true') so the job/steps only run when the input is enabled, or delete the input entirely if it's no longer needed.
1074-1107: Missing per-test cleanup after `orchestrator-end2end-caching` (local-docker).

Unlike other local-docker tests, which have individual cleanup steps after each test, `orchestrator-end2end-caching` (test 9) flows directly into final cleanup. While functional, adding a cleanup step here would maintain consistency with the established pattern and ensure resources are freed before final cleanup runs (useful if final cleanup fails for some reason).

📋 Suggested addition for consistency

```diff
         GITHUB_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
+    - name: Cleanup after orchestrator-end2end-caching (local-docker)
+      if: always()
+      run: |
+        source /tmp/cleanup-functions.sh
+        light_cleanup
     # --- Final cleanup ---
     - name: Final cleanup
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/orchestrator-integrity.yml around lines 1074 - 1107, Add a per-test cleanup step immediately after the "Run orchestrator-end2end-caching test (local-docker with S3)" job to mirror the other local-docker tests: run the same small cleanup commands used elsewhere (e.g., rm -rf ./orchestrator-cache/* || true, docker stop localstack-main || true, docker rm localstack-main || true, docker system prune -af --volumes || true) so resources are freed before the final cleanup; target the step name "Run orchestrator-end2end-caching test (local-docker with S3)" as the insertion point and ensure the new step uses if: always() so it runs even on failure.
471-488: Consider extracting cleanup functions to a shared script or composite action.

The cleanup function definitions (lines 71-112, 471-488, 799-816) are duplicated across jobs with minor variations. Consider:

- Checking in a shared `scripts/cleanup-functions.sh` to the repository
- Creating a composite action for common setup/cleanup patterns
This would reduce maintenance burden and ensure consistency across jobs.
Also applies to: 799-816
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/orchestrator-integrity.yml around lines 471 - 488, Duplicate shell functions light_cleanup and heavy_cleanup are defined in multiple workflow jobs; extract them into a single shared script (e.g., scripts/cleanup-functions.sh) or implement a composite action, commit that file, then replace the inline cat heredoc blocks in the workflow with a simple step that either sources the shared script or invokes the composite action; ensure the shared script contains the same rm -rf and docker prune commands and that workflow steps call light_cleanup/heavy_cleanup as before so behavior remains identical.
66-69: Consider pinning versions for reproducible builds.

Several external tools use unpinned versions that could cause unexpected CI failures:

- Line 68: k3d install script pulls from the `main` branch
- Lines 140, 506, 834, 1139: `localstack/localstack:latest` image
- Line 1183: rclone install script

Pinning specific versions (e.g., `localstack/localstack:3.0.0`) improves CI reproducibility and makes failures easier to diagnose.

📌 Example: Pin LocalStack version

```diff
-  localstack/localstack:latest || true
+  localstack/localstack:3.0.0 || true
```

Also applies to: 140-140, 1183-1183
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/orchestrator-integrity.yml around lines 66 - 69, The workflow uses unpinned external resources causing non-reproducible CI; update the "Install k3d" step to fetch a specific k3d release (pin the installer or download a specific k3d version instead of pulling from main), replace all occurrences of the Docker image tag "localstack/localstack:latest" with a fixed version like "localstack/localstack:<desired-version>" (check each occurrence mentioned), and change the rclone installation step to reference a specific rclone release URL or version tag rather than the unpinned install script; ensure each replacement references the exact symbol names from the diff ("Install k3d" step, "localstack/localstack:latest", and the rclone install script) so the CI will use pinned, reproducible versions.
498-532: Duplicated LocalStack startup logic across jobs.

The LocalStack startup and health-check pattern is repeated in all four jobs (lines 127-169, 498-532, 826-859, 1131-1157). A composite action with a `services` input parameter would consolidate this to a single maintainable location.

Also applies to: 826-859, 1131-1157
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/orchestrator-integrity.yml around lines 498 - 532, The LocalStack startup + health-check shell block (creating container named localstack-main, exposing port 4566, checking http://localhost:4566/_localstack/health for "services" and specifically '"s3"', using READY/ MAX_ATTEMPTS and docker logs on failure) is duplicated across jobs; extract that logic into a reusable composite GitHub Action (e.g., actions/localstack-start) that accepts a services input string (services: "s3,cloudformation,ecs,...") and implements the same container name, health endpoint check and retry logic, then replace each duplicated inline block with a step that uses the composite action and passes the desired services list; ensure the composite action preserves the health-check behavior (checking for required service like "s3") and error/logging behavior (docker logs and exit 1) so callers behave identically.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/orchestrator-integrity.yml:
- Around line 568-582: The AWS provider test steps are missing the LocalStack/S3
endpoint env vars required by orchestrator-s3-steps.test.ts and
orchestrator-environment.test.ts; update each test step in the
aws-provider-tests job (e.g., the "Run orchestrator-image test (AWS)" step and
any other orchestrator-* AWS steps) to export AWS_S3_ENDPOINT,
INPUT_AWSS3ENDPOINT, and INPUT_AWSENDPOINT (pointing to the job-level
AWS_ENDPOINT/AWS_ENDPOINT_URL) so tests can locate the LocalStack S3 endpoint;
ensure the variable names exactly match what orchestrator-s3-steps.test.ts and
orchestrator-environment.test.ts expect.
---
Nitpick comments:
In @.github/workflows/orchestrator-integrity.yml:
- Around line 6-10: The workflow input runGithubIntegrationTests is declared but
unused; either remove the runGithubIntegrationTests input block or wire it into
the job that runs the GitHub Checks integration tests (the job described as "Run
GitHub Checks integration tests") by adding a conditional that checks
inputs.runGithubIntegrationTests (e.g., an if: that compares to 'true') so the
job/steps only run when the input is enabled, or delete the input entirely if
it's no longer needed.
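If the input is kept rather than deleted, the wiring could be as small as a job-level conditional. The job id below is assumed for illustration; only the input name comes from the review.

```yaml
jobs:
  github-checks-tests:   # hypothetical job id for the GitHub Checks integration tests
    if: ${{ inputs.runGithubIntegrationTests == 'true' }}
    runs-on: ubuntu-latest
```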
- Around line 1074-1107: Add a per-test cleanup step immediately after the "Run
orchestrator-end2end-caching test (local-docker with S3)" job to mirror the
other local-docker tests: run the same small cleanup commands used elsewhere
(e.g., rm -rf ./orchestrator-cache/* || true, docker stop localstack-main ||
true, docker rm localstack-main || true, docker system prune -af --volumes ||
true) so resources are freed before the final cleanup; target the step name "Run
orchestrator-end2end-caching test (local-docker with S3)" as the insertion point
and ensure the new step uses if: always() so it runs even on failure.
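The suggested per-test cleanup step, inserted immediately after the named test step, might look like this sketch (the step name is assumed; the commands are the ones listed in the comment):

```yaml
- name: Cleanup after end2end-caching test
  if: always()
  run: |
    rm -rf ./orchestrator-cache/* || true
    docker stop localstack-main || true
    docker rm localstack-main || true
    docker system prune -af --volumes || true
```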
- Around line 471-488: Duplicate shell functions light_cleanup and heavy_cleanup
are defined in multiple workflow jobs; extract them into a single shared script
(e.g., scripts/cleanup-functions.sh) or implement a composite action, commit
that file, then replace the inline cat heredoc blocks in the workflow with a
simple step that either sources the shared script or invokes the composite
action; ensure the shared script contains the same rm -rf and docker prune
commands and that workflow steps call light_cleanup/heavy_cleanup as before so
behavior remains identical.
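A shared script of the shape the comment proposes could look like the sketch below. The exact `rm`/prune targets are placeholders standing in for the duplicated heredoc contents, which are not reproduced in this diff excerpt.

```shell
#!/usr/bin/env bash
# scripts/cleanup-functions.sh -- hypothetical shared cleanup script.
# The bodies below are illustrative; the real ones would be copied verbatim
# from the existing inline heredoc blocks so behavior stays identical.

light_cleanup() {
  # Remove per-test scratch data
  rm -rf ./orchestrator-cache/* 2>/dev/null || true
}

heavy_cleanup() {
  # Everything light_cleanup does, plus reclaim Docker disk space
  light_cleanup
  docker system prune -af --volumes || true
}
```

A workflow step would then `source scripts/cleanup-functions.sh` and call `light_cleanup` or `heavy_cleanup` exactly as before.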
- Around line 66-69: The workflow uses unpinned external resources causing
non-reproducible CI; update the "Install k3d" step to fetch a specific k3d
release (pin the installer or download a specific k3d version instead of pulling
from main), replace all occurrences of the Docker image tag
"localstack/localstack:latest" with a fixed version like
"localstack/localstack:<desired-version>" (check each occurrence mentioned), and
change the rclone installation step to reference a specific rclone release URL
or version tag rather than the unpinned install script; ensure each replacement
references the exact symbol names from the diff ("Install k3d" step,
"localstack/localstack:latest", and the rclone install script) so the CI will
use pinned, reproducible versions.
- Around line 498-532: The LocalStack startup + health-check shell block
(creating container named localstack-main, exposing port 4566, checking
http://localhost:4566/_localstack/health for "services" and specifically '"s3"',
using READY/ MAX_ATTEMPTS and docker logs on failure) is duplicated across jobs;
extract that logic into a reusable composite GitHub Action (e.g.,
actions/localstack-start) that accepts a services input string (services:
"s3,cloudformation,ecs,...") and implements the same container name, health
endpoint check and retry logic, then replace each duplicated inline block with a
step that uses the composite action and passes the desired services list; ensure
the composite action preserves the health-check behavior (checking for required
service like "s3") and error/logging behavior (docker logs and exit 1) so
callers behave identically.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 3b49f1e1-f600-42b5-baf9-8b3acc77c8bc
📒 Files selected for processing (1)
.github/workflows/orchestrator-integrity.yml
# --- AWS Test 1: orchestrator-image ---
- name: Run orchestrator-image test (AWS)
  timeout-minutes: 10
  run: yarn run test "orchestrator-image" --detectOpenHandles --forceExit --runInBand
  env:
    UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
    UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
    UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
    PROJECT_PATH: test-project
    TARGET_PLATFORM: StandaloneWindows64
    orchestratorTests: true
    versioning: None
    KUBE_STORAGE_CLASS: local-path
    PROVIDER_STRATEGY: aws
    AWS_ACCESS_KEY_ID: test
    AWS_SECRET_ACCESS_KEY: test
    AWS_ENDPOINT: http://localhost:4566
    AWS_ENDPOINT_URL: http://localhost:4566
    GIT_PRIVATE_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
    GITHUB_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if the orchestrator tests reference these environment variables
echo "=== Searching for AWS_S3_ENDPOINT usage ==="
rg -n "AWS_S3_ENDPOINT|INPUT_AWSS3ENDPOINT|INPUT_AWSENDPOINT" --type ts --type js -C 2 | head -50
echo ""
echo "=== Checking test files for S3 endpoint configuration ==="
fd -e test.ts -e spec.ts | xargs rg -l "S3|s3" | head -10
Repository: game-ci/unity-builder
Length of output: 4984
🏁 Script executed:
# Get the full context of the workflow file around lines 568-582
sed -n '450,600p' .github/workflows/orchestrator-integrity.yml | cat -n
# Also find the aws-provider-tests job definition
rg -n "aws-provider-tests" .github/workflows/orchestrator-integrity.yml -A 20 | head -60
Repository: game-ci/unity-builder
Length of output: 8228
🏁 Script executed:
# Find the orchestrator-image test definition
fd -e test.ts | xargs rg -l "orchestrator-image"
# Also check if orchestrator-s3-steps is imported/used by orchestrator-image test
rg -n "orchestrator-s3-steps|orchestrator-image" src/model/orchestrator/tests/ -A 5 -B 5 | head -100
# Check the full aws-provider-tests job for all test steps
sed -n '453,800p' .github/workflows/orchestrator-integrity.yml | grep -E "name:|PROVIDER_STRATEGY|AWS_S3|INPUT_AWS" | head -30
Repository: game-ci/unity-builder
Length of output: 1433
🏁 Script executed:
# Get the orchestrator-s3-steps test step with full env block
sed -n '453,900p' .github/workflows/orchestrator-integrity.yml | grep -A 20 "orchestrator-s3-steps test"
# Also get exact lines for orchestrator-s3-steps step to see env vars
sed -n '650,750p' .github/workflows/orchestrator-integrity.yml
Repository: game-ci/unity-builder
Length of output: 5168
🏁 Script executed:
# Find and examine the orchestrator-image test file
find . -name "*orchestrator-image*test*" -type f
# Look at the content if it exists
cat src/model/orchestrator/tests/orchestrator-image.test.ts 2>/dev/null | head -100
# Also check what tests use AWS_S3_ENDPOINT
rg -l "AWS_S3_ENDPOINT" src/model/orchestrator/tests/ --type ts
Repository: game-ci/unity-builder
Length of output: 1999
🏁 Script executed:
# Check if orchestrator-environment.test.ts actually uses AWS_S3_ENDPOINT
rg -n "AWS_S3_ENDPOINT|process.env" src/model/orchestrator/tests/orchestrator-environment.test.ts -B 3 -A 3 | head -80
# Check how aws-task-runner.ts uses these endpoint variables
cat src/model/orchestrator/providers/aws/aws-task-runner.ts | head -100
Repository: game-ci/unity-builder
Length of output: 4818
🏁 Script executed:
# Check the orchestrator-environment.test.ts test to understand what it validates
sed -n '60,120p' src/model/orchestrator/tests/orchestrator-environment.test.ts
Repository: game-ci/unity-builder
Length of output: 2575
🏁 Script executed:
# Check what variables k8s-tests sets for comparison
sed -n '45,350p' .github/workflows/orchestrator-integrity.yml | grep -A 25 "orchestrator-environment test (K8s)" | head -30
# Check local-docker-tests for comparison
sed -n '850,1050p' .github/workflows/orchestrator-integrity.yml | grep -A 25 "orchestrator-s3-steps test (local-docker)" | head -30
# Check if OrchestratorOptions.awsS3Endpoint gets set somewhere in the test setup
rg -n "awsS3Endpoint\|AWS_S3_ENDPOINT" src/model/orchestrator/ --type ts | grep -v node_modules
Repository: game-ci/unity-builder
Length of output: 47
🏁 Script executed:
# Check k8s-tests environment setup
sed -n '45,450p' .github/workflows/orchestrator-integrity.yml | grep -E "AWS_S3_ENDPOINT|INPUT_AWS|orchestrator-environment"
# Check local-docker-tests environment setup
sed -n '800,1100p' .github/workflows/orchestrator-integrity.yml | grep -E "AWS_S3_ENDPOINT|INPUT_AWS|orchestrator"
# Search the entire workflow for AWS_S3_ENDPOINT
rg -n "AWS_S3_ENDPOINT" .github/workflows/orchestrator-integrity.yml
Repository: game-ci/unity-builder
Length of output: 4689
AWS endpoint variables are missing from all aws-provider-tests steps.
The aws-provider-tests job sets AWS_ENDPOINT and AWS_ENDPOINT_URL at the job level (lines 459-460), but no test step defines AWS_S3_ENDPOINT, INPUT_AWSS3ENDPOINT, or INPUT_AWSENDPOINT. This is inconsistent with k8s-tests and local-docker-tests, which both set these variables for tests that interact with S3 (orchestrator-s3-steps, orchestrator-environment, orchestrator-caching, etc.). The code in orchestrator-s3-steps.test.ts and orchestrator-environment.test.ts expects these variables to configure LocalStack endpoints; without them, S3-dependent tests may fail to connect.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/orchestrator-integrity.yml around lines 568 - 582, The AWS
provider test steps are missing the LocalStack/S3 endpoint env vars required by
orchestrator-s3-steps.test.ts and orchestrator-environment.test.ts; update each
test step in the aws-provider-tests job (e.g., the "Run orchestrator-image test
(AWS)" step and any other orchestrator-* AWS steps) to export AWS_S3_ENDPOINT,
INPUT_AWSS3ENDPOINT, and INPUT_AWSENDPOINT (pointing to the job-level
AWS_ENDPOINT/AWS_ENDPOINT_URL) so tests can locate the LocalStack S3 endpoint;
ensure the variable names exactly match what orchestrator-s3-steps.test.ts and
orchestrator-environment.test.ts expect.
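One way the missing variables could be added to each test step, mirroring the job-level endpoint rather than hard-coding it twice (a sketch, assuming the job-level `AWS_ENDPOINT` remains `http://localhost:4566`):

```yaml
env:
  AWS_S3_ENDPOINT: ${{ env.AWS_ENDPOINT }}
  INPUT_AWSS3ENDPOINT: ${{ env.AWS_ENDPOINT }}
  INPUT_AWSENDPOINT: ${{ env.AWS_ENDPOINT }}
```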

LTS Infrastructure -- Combined Integration
Integration branch combining the 9 infrastructure/LTS PRs without the next-gen features. This provides a reviewable and testable subset of the LTS 2.0.0 release containing only the core infrastructure improvements.
Component PRs
Related Issues
Tracks #794 and #810
Scope
This is the infrastructure subset of the full LTS 2.0.0 release (#815). It excludes the 6 next-gen feature PRs (#790, #791, #798, #799, #804, #813), which are available separately on release/next-gen (#814). Three integration branches exist for different review/test scopes:
- release/lts-infrastructure
- release/next-gen
- release/lts-2.0.0
Merge Conflict Notes
All conflicts were additive -- multiple branches adding properties/inputs to shared files (action.yml, src/index.ts, src/model/build-parameters.ts, src/model/input.ts, src/model/orchestrator/options/orchestrator-options.ts, src/model/orchestrator/providers/provider-loader.ts). Resolved by keeping all additions from each branch. The dist/ build artifacts were accepted from the latest merge -- a yarn build will be needed to produce the final combined dist.
Changelog
New Providers: GCP Cloud Run, Azure ACI, Remote PowerShell, GitHub Actions dispatch, GitLab CI dispatch, Ansible
Provider Features: Load balancing, fallback strategies, runner availability checks, init timeouts
Security: Secure git authentication via http.extraHeader (no more plaintext URL tokens)
Secrets: Premade secret sources (AWS, GCP, Azure, HashiCorp Vault, env, custom YAML)
Caching: Local filesystem cache (Library, LFS), child workspace isolation
LFS: Custom transfer agent support (elastic-git-storage), configurable storage paths
Git: Submodule profile initialization, git hooks management, integrity checks
Build: Reserved filename detection, archival support, reliability improvements
Testing: Further unit test coverage, CI integrity workflow speedup
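The http.extraHeader approach from the Security item can be illustrated with a small sketch. The token value and repository URL below are placeholders; the point is that the credential lives in a config-scoped header rather than in the remote URL, so `git remote -v` and stored config never show a plaintext URL token.

```shell
# Placeholder token -- a real pipeline would take this from a secret.
TOKEN="example-token"
AUTH_HEADER="AUTHORIZATION: basic $(printf 'x-access-token:%s' "$TOKEN" | base64)"

# Set up a throwaway repo with a token-free remote URL.
git init -q demo-repo
git -C demo-repo remote add origin https://github.com/owner/repo.git

# Attach the credential as an extra HTTP header instead of embedding it
# in the URL (https://x-access-token:TOKEN@github.com/... would leak).
git -C demo-repo config http.extraHeader "$AUTH_HEADER"

git -C demo-repo remote get-url origin       # URL stays clean
git -C demo-repo config --get http.extraHeader
```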
Update Script
Run .\delete-me-update-this-integration-branch.ps1 to pull latest changes from all component branches.
Run .\delete-me-update-all-integration-branches.ps1 to update all three integration branches at once.
Summary by CodeRabbit
New Features
Improvements