Conversation
…ule profiles, caching, LFS, hooks

Add generic enterprise-grade features to the orchestrator, enabling Unity projects with complex CI/CD pipelines to adopt game-ci/unity-builder with built-in support for:

- CLI provider protocol: JSON-over-stdin/stdout bridge enabling providers in any language (Go, Python, Rust, shell) via the `providerExecutable` input
- Submodule profiles: YAML-based selective submodule initialization with glob patterns and variant overlays (`submoduleProfilePath`, `submoduleVariantPath`)
- Local build caching: Filesystem-based Library and LFS caching for local builds without external cache actions (`localCacheEnabled`, `localCacheRoot`)
- Custom LFS transfer agents: Register external transfer agents like elastic-git-storage (`lfsTransferAgent`, `lfsTransferAgentArgs`, `lfsStoragePaths`)
- Git hooks support: Detect and install lefthook/husky with configurable skip lists (`gitHooksEnabled`, `gitHooksSkipList`)

Also removes all `orchestrator-develop` branch references, replacing them with `main`. 13 new action inputs, 13 new files, 14 new CLI provider tests, 17 submodule tests, plus cache/LFS/hooks unit tests. All 452 tests pass.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
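To make the CLI provider protocol concrete, here is a minimal sketch of a provider executable in TypeScript. It assumes the JSON-over-stdin/stdout shape used elsewhere in this PR (`CliProviderRequest`/`CliProviderResponse`); the dispatch logic is illustrative, not the shipped implementation.

```ts
// Minimal sketch of a CLI provider executable speaking the JSON protocol.
// Any language works; the contract is: read one JSON request from stdin,
// write one JSON response to stdout.
import process from 'node:process';

interface CliProviderRequest {
  command: string; // e.g. 'setupWorkflow' | 'runTaskInWorkflow' | 'cleanupWorkflow'
  params: Record<string, unknown>;
}

interface CliProviderResponse {
  success: boolean;
  result?: unknown;
  error?: string;
  output?: string;
}

let raw = '';
process.stdin.setEncoding('utf8');
process.stdin.on('data', (chunk) => {
  raw += chunk;
});
process.stdin.on('end', () => {
  const request: CliProviderRequest = JSON.parse(raw);
  // A real provider would dispatch on request.command here.
  const response: CliProviderResponse = { success: true, result: { handled: request.command } };
  process.stdout.write(JSON.stringify(response));
});
```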
…iders

Add two new cloud provider implementations for the orchestrator, both marked as experimental:

- **GCP Cloud Run Jobs** (`providerStrategy: gcp-cloud-run`): Executes Unity builds as Cloud Run Jobs with GCS FUSE for large artifact storage. Supports configurable machine types, service accounts, and VPC connectors. 7 new inputs (gcpProject, gcpRegion, gcpBucket, gcpMachineType, gcpDiskSizeGb, gcpServiceAccount, gcpVpcConnector).
- **Azure Container Instances** (`providerStrategy: azure-aci`): Executes Unity builds as ACI containers with Azure File Shares (Premium FileStorage) for large artifact storage up to 100 TiB. Supports configurable CPU/memory, VNet integration, and subscription targeting. 9 new inputs (azureResourceGroup, azureLocation, azureStorageAccount, azureFileShareName, azureSubscriptionId, azureCpu, azureMemoryGb, azureDiskSizeGb, azureSubnetId).

Both providers use their respective CLIs (gcloud, az) for infrastructure management and support garbage collection of old build resources. No tests included, as these require real cloud infrastructure to validate.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Both providers now support four storage backends via gcpStorageType / azureStorageType:

GCP Cloud Run:
- gcs-fuse: Mount GCS bucket as POSIX filesystem (unlimited, best for large sequential I/O)
- gcs-copy: Copy artifacts in/out via gsutil (simpler, no FUSE overhead)
- nfs: Filestore NFS mount (true POSIX, good random I/O, up to 100 TiB)
- in-memory: tmpfs (fastest, volatile, up to 32 GiB)

Azure ACI:
- azure-files: SMB file share mount (up to 100 TiB, premium throughput)
- blob-copy: Copy artifacts in/out via az storage blob (no mount overhead)
- azure-files-nfs: NFS 4.1 file share mount (true POSIX, no SMB lock overhead)
- in-memory: emptyDir tmpfs (fastest, volatile, limited by container memory)

New inputs: gcpStorageType, gcpFilestoreIp, gcpFilestoreShare, azureStorageType, azureBlobContainer. Constructor validates storage config and warns on missing prerequisites (e.g. NFS requires VPC connector/subnet).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
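As a rough illustration of the prerequisite validation described above, a constructor-side check might look like the sketch below. The function name, warning texts, and exact rules are assumptions; only the input names and constraints come from the commit message.

```ts
// Hypothetical sketch of storage-config validation for the GCP provider.
type GcpStorageType = 'gcs-fuse' | 'gcs-copy' | 'nfs' | 'in-memory';

function validateGcpStorage(storageType: GcpStorageType, vpcConnector?: string, filestoreIp?: string): string[] {
  const warnings: string[] = [];
  if (storageType === 'nfs') {
    // Filestore is only reachable over a private network.
    if (!vpcConnector) warnings.push('nfs storage requires a VPC connector (gcpVpcConnector)');
    if (!filestoreIp) warnings.push('nfs storage requires gcpFilestoreIp to mount the share');
  }
  if (storageType === 'in-memory') {
    warnings.push('in-memory storage is volatile; artifacts are lost if the job fails');
  }
  return warnings;
}
```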
…ity check

Adds built-in load balancing: check GitHub runner availability before builds start, auto-route to a fallback provider when runners are busy or offline. Eliminates the need for a separate check-runner job.

New inputs: fallbackProviderStrategy, runnerCheckEnabled, runnerCheckLabels, runnerCheckMinAvailable. Outputs providerFallbackUsed and providerFallbackReason for workflow visibility.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
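The availability check itself reduces to one GitHub API call. The sketch below, assuming Octokit (the wiring is illustrative), counts online, non-busy runners matching the requested labels, in line with the case-insensitive label filtering described in the tests later in this PR.

```ts
// Sketch: count self-hosted runners that could take the build right now.
// Requires a token with permission to list the repository's runners.
import { Octokit } from '@octokit/rest';

async function availableRunnerCount(octokit: Octokit, owner: string, repo: string, labels: string[]): Promise<number> {
  const { data } = await octokit.rest.actions.listSelfHostedRunnersForRepo({ owner, repo });
  return data.runners.filter(
    (runner) =>
      runner.status === 'online' &&
      !runner.busy &&
      labels.every((label) => runner.labels.some((runnerLabel) => runnerLabel.name.toLowerCase() === label.toLowerCase())),
  ).length;
}
```

If the count falls below runnerCheckMinAvailable, the orchestrator would route to fallbackProviderStrategy; failing open on API errors (as the tests below describe) means treating an error as "runners available".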
Adds retryOnFallback (retry failed builds on alternate provider) and providerInitTimeout (swap provider if init takes too long). Refactors run() into run()/runWithProvider() to support retry loop. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds tests for cache hit restore (picks latest tar), LFS cache restore/save, garbage collection age filtering, and edge cases like permission errors and empty directories. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Covers: no token skip, no runners fallback, busy/offline runners, label filtering (case-insensitive), minAvailable threshold, fail-open on API error, mixed runner states. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds 64 new mock-based unit tests covering orchestrator services that previously had zero test coverage:

- TaskParameterSerializer: env var format conversion, round-trip, uniqBy deduplication, blocked params, default secrets
- FollowLogStreamService: build output message parsing — end of transmission, build success/failure detection, error accumulation, Library rebuild detection
- OrchestratorNamespace (guid): GUID generation format, platform name normalization, nanoid uniqueness
- OrchestratorFolders: path computation for all folder getters, ToLinuxFolder conversion, repo URL generation, purge flag detection

All tests are pure mock-based and run without any external infrastructure (no LocalStack, K8s, Docker, or AWS).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds a fast-fail unit test step at the top of orchestrator-integrity, right after yarn install and before any infrastructure setup (k3d, LocalStack). Runs 113 mock-based orchestrator tests in ~5 seconds. If serialization, path computation, log parsing, or provider loading is broken, the workflow fails immediately instead of spending 30+ minutes setting up LocalStack and k3d clusters. Tests included: orchestrator-guid, orchestrator-folders, task-parameter-serializer, follow-log-stream-service, runner-availability-service, provider-url-parser, provider-loader, provider-git-manager, orchestrator-image, orchestrator-hooks, orchestrator-github-checks. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add comprehensive tests for CLI provider (cleanupWorkflow, garbageCollect, listWorkflow, watchWorkflow, stderr forwarding, timeout handling), local cache service (saveLfsCache full path and error handling), git hooks service (husky install, failure logging, edge cases), and LFS agent service (empty storagePaths, validate logging). 73 tests across 4 test files. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace token-in-URL pattern with http.extraHeader for git clone and LFS operations. The token no longer appears in clone URLs, git remote config, or process command lines. Add gitAuthMode input (default: 'header', legacy: 'url') so users can fall back to the old behavior if needed. Closes #785 Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
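For reference, the header-based pattern (the same approach actions/checkout uses) looks roughly like the sketch below; this helper is illustrative, not the PR's actual code. Note that passing the header via `-c` still exposes it to the current process's argument list, so the real implementation likely writes it into git config instead.

```ts
// Sketch: clone using http.extraHeader so the token never appears in the URL.
import { execFileSync } from 'node:child_process';

function cloneWithHeaderAuth(repoUrl: string, token: string, destination: string): void {
  // GitHub accepts basic auth with "x-access-token" as the username.
  const basicCredential = Buffer.from(`x-access-token:${token}`).toString('base64');
  execFileSync('git', [
    '-c',
    `http.extraHeader=AUTHORIZATION: basic ${basicCredential}`,
    'clone',
    repoUrl, // plain URL; no embedded credentials
    destination,
  ]);
}
```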
Add SecretSourceService with premade secret source integrations:
- aws-secrets-manager (with --query SecretString for direct value)
- aws-parameter-store (with --with-decryption)
- gcp-secret-manager (latest version)
- azure-key-vault (via $AZURE_VAULT_NAME env var)
- env (environment variables, no shell command needed)
- Custom commands (any string with {0} placeholder)
- YAML file definitions for custom sources
Add secretSource input that takes precedence over inputPullCommand.
Backward compatible — existing inputPullCommand behavior unchanged.
Closes #776
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
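To illustrate how the premade sources can reduce to command templates with a {0} placeholder, here is a sketch; the exact table in the PR may differ, but the CLI flags match those listed above.

```ts
// Illustrative {0}-placeholder substitution for premade secret sources.
const premadeSources: Record<string, string> = {
  'aws-secrets-manager': 'aws secretsmanager get-secret-value --secret-id {0} --query SecretString --output text',
  'aws-parameter-store': 'aws ssm get-parameter --name {0} --with-decryption --query Parameter.Value --output text',
  'gcp-secret-manager': 'gcloud secrets versions access latest --secret={0}',
};

function buildPullCommand(source: string, secretName: string): string {
  const template = premadeSources[source];
  if (!template) throw new Error(`Unknown secret source: ${source}`);
  return template.replace('{0}', secretName);
}

// buildPullCommand('gcp-secret-manager', 'UNITY_LICENSE')
// => 'gcloud secrets versions access latest --secret=UNITY_LICENSE'
```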
Adds three Vault entries: hashicorp-vault (KV v2), hashicorp-vault-kv1 (KV v1), and vault (short alias). Uses VAULT_ADDR for server address and VAULT_MOUNT env var for configurable mount path (defaults to 'secret'). Refs #776 Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
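For reference, the corresponding Vault CLI forms would be roughly the templates below (the exact strings in the PR may differ, and the 'value' field name is an assumption about how secrets are stored):

```ts
// Illustrative Vault command templates; VAULT_ADDR is read by the vault CLI itself,
// and the mount path defaults to 'secret' as described above.
const vaultMount = process.env.VAULT_MOUNT ?? 'secret';

const vaultSources: Record<string, string> = {
  'hashicorp-vault': `vault kv get -mount=${vaultMount} -field=value {0}`, // KV v2
  'hashicorp-vault-kv1': `vault read -field=value ${vaultMount}/{0}`, // KV v1
  vault: `vault kv get -mount=${vaultMount} -field=value {0}`, // short alias
};
```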
First-class support for elastic-git-storage as a custom LFS transfer agent. When lfsTransferAgent is set to "elastic-git-storage" (or "elastic-git-storage@v1.0.0" for a specific version), the service automatically finds or installs the agent from GitHub releases, then configures it via git config. Supports version pinning via @Version suffix in the agent value, eliminating the need for a separate version parameter. Platform and architecture detection handles linux/darwin/windows on amd64/arm64. 37 unit tests covering detection, PATH lookup, installation, version parsing, and configuration delegation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
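Registering a custom transfer agent uses git-lfs's standard `lfs.customtransfer.<name>.*` config keys; a sketch of the configuration step is below. Whether the PR also routes all transfers through `lfs.standalonetransferagent` is an assumption here.

```ts
// Sketch: register elastic-git-storage as a custom LFS transfer agent.
import { execFileSync } from 'node:child_process';

function registerLfsAgent(repoPath: string, agentPath: string, agentArguments: string): void {
  const git = (...gitArguments: string[]) => execFileSync('git', ['-C', repoPath, ...gitArguments]);
  git('config', 'lfs.customtransfer.elastic-git-storage.path', agentPath);
  if (agentArguments) {
    git('config', 'lfs.customtransfer.elastic-git-storage.args', agentArguments);
  }
  // Force LFS to use the custom agent for all transfers (assumed behavior).
  git('config', 'lfs.standalonetransferagent', 'elastic-git-storage');
}
```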
Built-in support for Unity Git Hooks (com.frostebite.unitygithooks):

- Auto-detect UPM package in Packages/manifest.json
- Run init-unity-lefthook.js before hook installation
- Set CI-friendly env vars (disable background project mode)

New gitHooksRunBeforeBuild input runs specific lefthook groups before the Unity build, allowing CI to trigger pre-commit or pre-push checks that normally only fire on git events. 35 unit tests covering detection, init, CI env, group execution, and failure handling.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
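`lefthook run <group>` is the standard CLI form for triggering a hook group outside its git event; running the configured groups before the build could look like this sketch (the surrounding wiring is assumed):

```ts
// Sketch: run configured lefthook groups (e.g. 'pre-commit', 'pre-push') before the build.
import { execFileSync } from 'node:child_process';

function runHookGroupsBeforeBuild(repoPath: string, groups: string[]): void {
  for (const group of groups) {
    execFileSync('lefthook', ['run', group], { cwd: repoPath, stdio: 'inherit' });
  }
}
```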
Initial scaffold for the test workflow engine service directory. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Initial scaffold for the runner registration and hot editor provider module. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…, and collection service Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ut, and storage-backed sync Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add scheduled workflow that validates community Unity packages compile and build correctly using unity-builder. Runs weekly on Sunday.

Includes:
- YAML plugin registry (community-plugins.yml) for package listings
- Matrix expansion across plugins and platforms
- Automatic failure reporting via GitHub issues
- Manual trigger with plugin filter and Unity version override

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… Actions, GitLab CI, Ansible

Add four new providers that delegate builds to external CI platforms:

- remote-powershell: Execute on remote machines via WinRM/SSH
- github-actions: Dispatch workflow_dispatch on target repository
- gitlab-ci: Trigger pipeline via GitLab API
- ansible: Run playbooks against managed inventory

Each follows the CI-as-a-provider pattern: trigger remote job, pass build parameters, stream logs, report status.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
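For the github-actions provider, the trigger step maps onto the workflow dispatch API; a hedged sketch using Octokit (parameter wiring is illustrative):

```ts
// Sketch: dispatch a build workflow on a target repository.
import { Octokit } from '@octokit/rest';

async function dispatchRemoteBuild(
  token: string,
  owner: string,
  repo: string,
  workflowFile: string, // e.g. 'unity-build.yml'
  buildGuid: string,
): Promise<void> {
  const octokit = new Octokit({ auth: token });
  await octokit.rest.actions.createWorkflowDispatch({
    owner,
    repo,
    workflow_id: workflowFile,
    ref: 'main',
    inputs: { buildGuid },
  });
  // The provider would then poll the run for this guid, stream logs, and report status.
}
```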
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ved filename cleanup, archival

Add three optional reliability features for hardening CI pipelines:

- Git corruption detection & recovery (fsck, stale lock cleanup, submodule backing store validation, auto-recovery)
- Reserved filename cleanup (removes Windows device names that cause Unity asset importer infinite loops)
- Build output archival with configurable retention policy

All features are opt-in and fail gracefully with warnings only.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…rity, reserved filename cleanup, and build archival

Adds BuildReliabilityService with the following capabilities:

- checkGitIntegrity(): runs git fsck --no-dangling and parses output for corruption
- cleanStaleLockFiles(): removes stale .lock files older than 10 minutes
- validateSubmoduleBackingStores(): validates .git files point to valid backing stores
- recoverCorruptedRepo(): orchestrates fsck, lock cleanup, re-fetch, retry fsck
- cleanReservedFilenames(): removes Windows reserved filenames (con, prn, aux, nul, com1-9, lpt1-9)
- archiveBuildOutput(): creates tar.gz archive of build output
- enforceRetention(): deletes archives older than retention period
- configureGitEnvironment(): sets GIT_TERMINAL_PROMPT=0, http.postBuffer, core.longpaths

Wired into action.yml as opt-in inputs, with pre-build integrity checks and post-build archival in the main entry point. Includes 29 unit tests covering success and failure cases for all methods.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
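A minimal sketch of the fsck-based integrity check is below; the shipped service presumably parses output more carefully and feeds the result into recoverCorruptedRepo().

```ts
// Sketch: git fsck --no-dangling exits non-zero and prints diagnostics on corruption.
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const run = promisify(execFile);

async function checkGitIntegrity(repoPath: string): Promise<{ healthy: boolean; details: string }> {
  try {
    const { stdout, stderr } = await run('git', ['-C', repoPath, 'fsck', '--no-dangling']);
    return { healthy: true, details: `${stdout}${stderr}`.trim() };
  } catch (error: any) {
    // execFile rejects on non-zero exit; fsck's findings are on stdout/stderr.
    return { healthy: false, details: `${error.stdout ?? ''}${error.stderr ?? ''}`.trim() };
  }
}
```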
…Lab CI, PowerShell, and Ansible providers (#806) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… monitoring, and job dispatch (#791)

Adds persistent Unity editor instance support to reduce build iteration time by eliminating cold-start overhead. Includes:

- HotRunnerTypes: interfaces for config, status, job request/result, transport
- HotRunnerRegistry: in-memory runner management with file-based persistence
- HotRunnerHealthMonitor: periodic health checks, idle recycling, job-count recycling
- HotRunnerDispatcher: job routing with wait-for-runner, timeout, and output streaming
- HotRunnerService: high-level API integrating registry, health, and dispatch
- 34 unit tests covering registration, filtering, health, dispatch, timeout, fallback
- action.yml inputs for hot runner configuration (7 new inputs)
- Input/BuildParameters integration for hot runner settings
- index.ts wiring with cold-build fallback when hot runner unavailable

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
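The dispatcher's wait-for-runner behavior amounts to polling the registry with a deadline; a hedged sketch (type and function names mirror the commit message but are assumptions):

```ts
// Sketch: wait for an idle hot runner, or give up and fall back to a cold build.
interface HotRunner {
  id: string;
  status: 'idle' | 'busy' | 'unhealthy';
}

async function waitForIdleRunner(
  findIdle: () => HotRunner | undefined,
  timeoutMs: number,
  pollMs = 1000,
): Promise<HotRunner | undefined> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const runner = findIdle();
    if (runner) return runner;
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
  return undefined; // caller falls back to a cold build
}
```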
…s, tests, and action integration (#798)

- Add ArtifactUploadHandler with support for github-artifacts, storage (rclone), and local copy upload targets, including large file chunking for GitHub Artifacts
- Add 44 unit tests covering OutputTypeRegistry, OutputService, and ArtifactUploadHandler (config parsing, upload coordination, file collection)
- Add 6 new action.yml inputs for artifact configuration
- Add artifactManifestPath action output
- Wire artifact collection and upload into index.ts post-build flow

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…omy filtering, and structured results (#790) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…te management, and tests (#799)

- Add storage-pull strategy: rclone-based sync from remote storage with overlay and clean modes, URI parsing (storage://remote:bucket/path), transfer parallelism, and automatic rclone availability checking
- Add SyncStateManager: persistent state load/save with configurable paths, workspace hash calculation via SHA-256 of key project files, and drift detection for external modification awareness
- Add action.yml inputs: syncStrategy, syncInputRef, syncStorageRemote, syncRevertAfter, syncStatePath with sensible defaults
- Wire sync into Input (5 getters), BuildParameters (5 fields), index.ts (local build path), and RemoteClient (orchestrator path) with post-job overlay revert when syncRevertAfter is true
- Add 42 unit tests covering all strategies, URI parsing, state management, hash calculation, drift detection, error handling, and edge cases (missing rclone, invalid URIs, absent state, empty diffs)
- Add root:true to eslintrc to prevent plugin resolution conflicts

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
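The storage://remote:bucket/path format parses with a single regular expression; an illustrative sketch (the real parser may validate more):

```ts
// Sketch: parse storage://remote:bucket/path into its rclone components.
interface StorageUri {
  remote: string;
  bucket: string;
  path: string;
}

function parseStorageUri(uri: string): StorageUri {
  const match = uri.match(/^storage:\/\/([^:]+):([^/]+)\/?(.*)$/);
  if (!match) throw new Error(`Invalid storage URI: ${uri}`);
  const [, remote, bucket, path] = match;
  return { remote, bucket, path };
}

// parseStorageUri('storage://r2:game-builds/sync/main')
// => { remote: 'r2', bucket: 'game-builds', path: 'sync/main' }
```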
…p' into release/lts-2.0.0
# Conflicts:
#	.github/workflows/orchestrator-integrity.yml
#	dist/index.js.map

…to release/lts-2.0.0
# Conflicts:
#	action.yml
#	dist/index.js
#	dist/index.js.map
#	src/index.ts
#	src/model/build-parameters.ts
#	src/model/input.ts

…o release/lts-2.0.0
# Conflicts:
#	dist/index.js
#	dist/index.js.map
#	src/index.ts
#	src/model/build-parameters.ts
#	src/model/input.ts

… into release/lts-2.0.0
# Conflicts:
#	dist/index.js
#	dist/index.js.map
#	src/index.ts
#	src/model/build-parameters.ts

…l' into release/lts-2.0.0
# Conflicts:
#	action.yml
#	dist/index.js
#	dist/index.js.map
#	src/index.ts
#	src/model/build-parameters.ts

…ion' into release/lts-2.0.0
# Conflicts:
#	dist/index.js.map

…e/lts-2.0.0
# Conflicts:
#	dist/index.js.map
📝 Walkthrough

Large feature addition: full CLI, multi-provider orchestrator (new providers, CLI provider protocol), hot-runner subsystem, caching/LFS/git-hooks/artifact systems, extensive CI workflows and release tooling, many new inputs/outputs, install scripts, and a broad test suite.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant CLI as CLI (game-ci)
participant Orch as Orchestrator
participant Loader as ProviderLoader
participant Prov as Provider (e.g., gcp-cloud-run / github-actions / cli)
participant Storage as CloudStorage / Local
participant Art as ArtifactHandler
CLI->>Orch: parse args → mapCliArgumentsToInput
Orch->>Loader: select provider (strategy, runner checks)
Loader->>Prov: instantiate provider
Orch->>Prov: setupWorkflow(buildGuid, buildParameters)
Prov->>Storage: provision buckets/fileshares / prepare volumes
Prov->>Orch: runTaskInWorkflow => logs & result
Orch->>Art: collectOutputs(projectPath, buildGuid, types)
Art->>Storage: upload artifacts (rclone / GitHub Artifacts / local)
Art-->>Orch: upload summary
Orch-->>CLI: return exit code, logs, artifact manifest
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
```ts
  }

  private buildPwshCommand(script: string): string {
    return `pwsh -NoProfile -NonInteractive -Command "${script.replace(/"/g, '\\"')}"`;
```
Check failure: Code scanning / CodeQL - Incomplete string escaping or encoding (High)
Copilot Autofix:
In general, the safe fix is to avoid ad‑hoc manual escaping and instead either: (1) use a well-tested library for shell/PowerShell argument construction, or (2) at minimum, ensure that all metacharacters relevant to the quoting context are handled consistently, including backslashes and double quotes when embedding a value inside double quotes.
For this specific code, the minimal change without altering external behavior is to extend the escaping used in buildPwshCommand so that it also escapes backslashes before the string is wrapped in " for the -Command argument. That way, any backslash that might otherwise interact with the surrounding quoting is preserved as a literal backslash in the PowerShell command text. We will modify buildPwshCommand in src/model/orchestrator/providers/remote-powershell/index.ts to first escape backslashes (\ → \\), then escape double quotes (" → \"), and then interpolate the result into the template literal. No new imports are required; we can use standard JavaScript string replace with a global regular expression.
Concretely, replace line 140 with logic that computes an escapedScript using two chained .replace operations, and then uses escapedScript inside the command string.
```diff
@@ -137,7 +137,8 @@
   }

   private buildPwshCommand(script: string): string {
-    return `pwsh -NoProfile -NonInteractive -Command "${script.replace(/"/g, '\\"')}"`;
+    const escapedScript = script.replace(/\\/g, '\\\\').replace(/"/g, '\\"');
+    return `pwsh -NoProfile -NonInteractive -Command "${escapedScript}"`;
   }

   private buildInvokeCommand(remoteScript: string): string {
```
```ts
  }

  private buildInvokeCommand(remoteScript: string): string {
    const escapedScript = remoteScript.replace(/"/g, '\\"').replace(/'/g, "''");
```
Check failure: Code scanning / CodeQL - Incomplete string escaping or encoding (High)
Copilot Autofix:
In general, the problem should be fixed by replacing ad‑hoc string replacement with either a robust, well-tested escaping routine or a dedicated library for safely constructing PowerShell command lines. At a minimum, when backslash-escaping or quote-escaping, the backslash itself must be escaped before other characters, and a global (/g) regular expression must be used so all occurrences are handled.
In this specific code, the minimal and safest fix that preserves functionality is to extend the escaping performed on remoteScript so that backslashes are escaped first, followed by double quotes and single quotes. This avoids sequences like \" being turned into \\" (escaped backslash, but unescaped quote) and ensures that any existing backslashes don’t interfere with the intended quoting of the PowerShell command. We can implement this by chaining an additional .replace(/\\/g, '\\\\') as the first step in the escapedScript expression on line 144. No new imports or functions are needed; we only adjust the escapedScript assignment.
Concretely, in src/model/orchestrator/providers/remote-powershell/index.ts, update the buildInvokeCommand method so that escapedScript is computed as:
```ts
const escapedScript = remoteScript
  .replace(/\\/g, '\\\\')
  .replace(/"/g, '\\"')
  .replace(/'/g, "''");
```

This preserves the existing behavior for quotes while addressing the missing backslash escaping that CodeQL reports.
```diff
@@ -141,7 +141,10 @@
   }

   private buildInvokeCommand(remoteScript: string): string {
-    const escapedScript = remoteScript.replace(/"/g, '\\"').replace(/'/g, "''");
+    const escapedScript = remoteScript
+      .replace(/\\/g, '\\\\')
+      .replace(/"/g, '\\"')
+      .replace(/'/g, "''");

     if (this.transport === 'ssh') {
       return `pwsh -NoProfile -NonInteractive -Command "Invoke-Command -HostName '${this.host}' -ScriptBlock { ${escapedScript} }"`;
```
```yaml
    name: Build ${{ matrix.target }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        include:
          - target: linux-x64
            os: ubuntu-latest
            pkg-target: node20-linux-x64
            binary-name: game-ci-linux-x64
          - target: linux-arm64
            os: ubuntu-latest
            pkg-target: node20-linux-arm64
            binary-name: game-ci-linux-arm64
          - target: macos-x64
            os: macos-latest
            pkg-target: node20-macos-x64
            binary-name: game-ci-macos-x64
          - target: macos-arm64
            os: macos-latest
            pkg-target: node20-macos-arm64
            binary-name: game-ci-macos-arm64
          - target: windows-x64
            os: windows-latest
            pkg-target: node20-win-x64
            binary-name: game-ci-windows-x64.exe
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.release.tag_name || inputs.tag || github.ref }}

      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: yarn install --frozen-lockfile

      - name: Build TypeScript
        run: yarn build

      - name: Verify CLI before packaging
        run: node lib/cli.js version

      - name: Build standalone binary
        run: npx pkg lib/cli.js --target ${{ matrix.pkg-target }} --output ${{ matrix.binary-name }} --compress GZip

      - name: Verify standalone binary (non-cross-compiled)
        if: |
          (matrix.target == 'linux-x64' && runner.os == 'Linux') ||
          (matrix.target == 'macos-arm64' && runner.os == 'macOS' && runner.arch == 'ARM64') ||
          (matrix.target == 'macos-x64' && runner.os == 'macOS' && runner.arch == 'X64') ||
          (matrix.target == 'windows-x64' && runner.os == 'Windows')
        run: ./${{ matrix.binary-name }} version
        shell: bash

      - uses: actions/upload-artifact@v4
        with:
          name: binary-${{ matrix.target }}
          path: ${{ matrix.binary-name }}
          retention-days: 5

  create-checksums-and-upload:
```
Check warning: Code scanning / CodeQL - Workflow does not contain permissions (Medium)
Copilot Autofix:
In general, the fix is to explicitly declare a permissions block restricting the GITHUB_TOKEN to the least privileges needed. For this specific workflow, the build-binaries job only needs to read the repository contents (for actions/checkout) and upload artifacts; upload-artifact uses workflow-scoped storage and does not require additional repository write permissions. Therefore, we can safely add permissions: contents: read to that job. The other jobs (create-checksums-and-upload and publish-npm) already have explicit permissions and don’t need changes.
Concretely, in .github/workflows/release-cli.yml, under jobs: build-binaries:, add a permissions: section (aligned with runs-on) with contents: read. No imports or additional methods are needed, as this is YAML configuration for GitHub Actions only. This change keeps existing behavior but ensures the GITHUB_TOKEN for build-binaries cannot be used to modify repository contents or other resources.
```diff
@@ -23,6 +23,8 @@
   build-binaries:
     name: Build ${{ matrix.target }}
     runs-on: ${{ matrix.os }}
+    permissions:
+      contents: read
     strategy:
       fail-fast: false
       matrix:
```
Actionable comments posted: 18
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/orchestrator-integrity.yml (1)
278-297: ⚠️ Potential issue | 🟡 Minor

Use `read -r` to prevent backslash mangling in resource names.

Lines 285, 288, 291, 292: The `while read` loops should use `read -r` to prevent backslash interpretation. Resource names containing backslashes (unlikely but possible) would be corrupted.

Proposed fix

```diff
-  kubectl get pods -n default -o name 2>/dev/null | grep -E "(unity-builder-job-|helper-pod-)" | while read pod; do
+  kubectl get pods -n default -o name 2>/dev/null | grep -E "(unity-builder-job-|helper-pod-)" | while read -r pod; do
     kubectl delete "$pod" --ignore-not-found=true || true
   done || true
-  kubectl get pvc -n default -o name 2>/dev/null | grep "unity-builder-pvc-" | while read pvc; do
+  kubectl get pvc -n default -o name 2>/dev/null | grep "unity-builder-pvc-" | while read -r pvc; do
     kubectl delete "$pvc" --ignore-not-found=true || true
   done || true
-  kubectl get secrets -n default -o name 2>/dev/null | grep "build-credentials-" | while read secret; do
+  kubectl get secrets -n default -o name 2>/dev/null | grep "build-credentials-" | while read -r secret; do
     kubectl delete "$secret" --ignore-not-found=true || true
   done || true
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/orchestrator-integrity.yml around lines 278 - 297, The while-read loops in the cleanup step (e.g., the loop that processes PV names produced by kubectl in the block containing k8s_resource_cleanup) use plain read and thus may mangle backslashes; change those occurrences (e.g., the "while read pv;" invocation) to use read -r to prevent backslash interpretation, and update any other similar "while read" loops in this workflow to "while read -r" so resource names are preserved when iterating and passed to kubectl delete.
🟡 Minor comments (23)
src/model/orchestrator/runners/README.md-5-5 (1)
5-5: ⚠️ Potential issue | 🟡 Minor

Add the specific GitHub issue link.
The reference to "GitHub Issue" is incomplete without a link or issue number, making it impossible for users to locate the full specification for the Hot Runner Protocol.
📝 Suggested fix
```diff
-See GitHub Issue for full specification.
+See [`#810`](https://github.com/your-org/your-repo/issues/810) for full specification.
```

Note: Replace with the actual issue number/URL. The PR objectives mention issue `#810` is tracked, which may be relevant.

Alternatively, if the specification is ready, I can help draft inline documentation to make this README more self-contained rather than relying on an external issue reference.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/runners/README.md` at line 5, Replace the placeholder line "See GitHub Issue for full specification." in the README with a direct link or issue number (e.g., "See GitHub Issue `#810`: <full-URL>") so readers can find the Hot Runner Protocol spec; if the spec is already finalized, instead insert a short inline summary of the specification and then link the full issue for details to make the README self-contained and discoverable..github/workflows/release-cli.yml-9-9 (1)
9-9: ⚠️ Potential issue | 🟡 Minor

`tag` input description does not match runtime behavior.

Line 9 says empty input uses the latest release, but Lines 117-118 explicitly skip upload when no tag is available. This should be aligned (either implement latest-tag lookup or update the description).
Also applies to: 117-118
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release-cli.yml at line 9, The workflow's `tag` input description is inaccurate: it claims an empty value uses the latest release, but the workflow logic (the step that skips upload when no tag is available around lines 117-118) actually skips the upload; update the `tag` input description to reflect runtime behavior (e.g., "Release tag to build (e.g., v2.0.0). If empty, upload will be skipped.") so the `tag` input and the skip-upload logic are aligned; reference the `tag` input and the upload-skip step when making the change.src/model/orchestrator/options/orchestrator-guid.test.ts-7-49 (1)
7-49: ⚠️ Potential issue | 🟡 Minor

Resolve ESLint errors in this new test file before merge.
This block currently violates configured rules (Line 7/31 comment spacing, Line 28/46 variable naming, and regex form on Lines 8/13/18/23/37/42/49). Since they are ESLint errors, they can block CI.
💡 Minimal lint-fix pattern
- // Format: {runNumber}-{platform}-{nanoid4} + + // Format: {runNumber}-{platform}-{nanoid4} - expect(guid).toMatch(/^42-linux64-[a-z0-9]{4}$/); + expect(guid).toMatch(/^42-linux64-[\da-z]{4}$/); - for (let i = 0; i < 20; i++) { - guids.add(OrchestratorNamespace.generateGuid('1', 'StandaloneLinux64')); + for (let index = 0; index < 20; index++) { + guids.add(OrchestratorNamespace.generateGuid('1', 'StandaloneLinux64')); } - for (let i = 0; i < 10; i++) { + for (let index = 0; index < 10; index++) { const guid = OrchestratorNamespace.generateGuid('1', 'test');🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/options/orchestrator-guid.test.ts` around lines 7 - 49, Tests in orchestrator-guid.test.ts are failing ESLint: add a space after the inline comment punctuation, fix variable naming to comply with camelCase, and convert the bare regex literals to the form your linter expects. Specifically, in the tests that call OrchestratorNamespace.generateGuid rename the Set variable (currently "guids") to a camelCase name like "guidSet", ensure the inline comment before the first expect has a space after "//", and replace the offending regex literals (the /^...$/ patterns used in multiple expects) with the form required by the project linter (e.g., use new RegExp('^...$') or the sanctioned literal style) so all regex uses match the configured rule.delete-me-update-all-integration-branches.ps1-14-15 (1)
14-15: ⚠️ Potential issue | 🟡 Minor

Include untracked files in stash before switching branches.

The `git stash push` command without the `-u` flag stashes only tracked files. Any untracked files remain in the working directory and can block checkout or merge operations. Add the `-u`/`--include-untracked` flag to ensure a clean state during branch updates.

🔧 Small reliability fix

```diff
-    git stash push -m "auto-stash before integration branch update"
+    git stash push -u -m "auto-stash before integration branch update"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@delete-me-update-all-integration-branches.ps1` around lines 14 - 15, The stash uses `git stash push` which only saves tracked changes; update the call in the script (the `git stash push -m "auto-stash before integration branch update"` invocation that sets `$stashed = $true`) to include untracked files by adding the `-u` / `--include-untracked` flag so the working tree is fully cleaned before switching branches.src/model/orchestrator/providers/ansible/index.ts-107-109 (1)
107-109: ⚠️ Potential issue | 🟡 Minor

Environment variable name lowercasing may cause collisions.

Converting environment variable names to lowercase (line 108) could cause collisions if both `UNITY_VERSION` and `unity_version` are passed. Consider preserving original casing or detecting conflicts.
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/providers/ansible/index.ts` around lines 107 - 109, The loop that assigns playbookVariables[element.name.toLowerCase()] risks key collisions (e.g., UNITY_VERSION vs unity_version); update the assignment in the for (const element of environment) loop to preserve the original element.name as the key (use element.name instead of element.name.toLowerCase()) and add a conflict check: if playbookVariables already has a key with a different-cased name, emit a clear error or warning (or throw) so callers know there’s a duplicate variable; reference playbookVariables, environment, element.name.toLowerCase(), and element.name when making this change..github/workflows/orchestrator-integrity.yml-82-94 (1)
82-94: ⚠️ Potential issue | 🟡 Minor

Same `read -r` issue in the cleanup function definition.

The cleanup functions at lines 85, 88, 91 also need
read -r.Proposed fix
- kubectl get pods -n default -o name 2>/dev/null | grep -E "(unity-builder-job-|helper-pod-)" | while read pod; do + kubectl get pods -n default -o name 2>/dev/null | grep -E "(unity-builder-job-|helper-pod-)" | while read -r pod; do kubectl delete "$pod" --ignore-not-found=true || true done || true - kubectl get pvc -n default -o name 2>/dev/null | grep "unity-builder-pvc-" | while read pvc; do + kubectl get pvc -n default -o name 2>/dev/null | grep "unity-builder-pvc-" | while read -r pvc; do kubectl delete "$pvc" --ignore-not-found=true || true done || true - kubectl get secrets -n default -o name 2>/dev/null | grep "build-credentials-" | while read secret; do + kubectl get secrets -n default -o name 2>/dev/null | grep "build-credentials-" | while read -r secret; do kubectl delete "$secret" --ignore-not-found=true || true done || true🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/orchestrator-integrity.yml around lines 82 - 94, In k8s_resource_cleanup(), the while loops use plain read which can mis-handle backslashes; update each loop that reads names (the pods loop reading variable pod, the pvc loop reading variable pvc, and the secrets loop reading variable secret) to use read -r so fields are read raw (e.g. change "while read pod" to "while read -r pod", likewise for pvc and secret) to prevent backslash-escape interpretation during deletion iteration.install.ps1-72-84 (1)
72-84: ⚠️ Potential issue | 🟡 Minor

Missing checksum entry not reported to user.

If `$ExpectedLine` is empty (the asset name isn't found in `checksums.txt`), the script silently skips verification without notifying the user that the checksum couldn't be validated.

Proposed fix

```diff
 if ($ExpectedLine) {
     $ExpectedHash = ($ExpectedLine -split '\s+')[0]
     $ActualHash = (Get-FileHash -Path $BinaryPath -Algorithm SHA256).Hash.ToLower()
     if ($ExpectedHash -eq $ActualHash) {
         Write-Info "Checksum verified (SHA256)"
     } else {
         Write-Host "error: Checksum verification failed!" -ForegroundColor Red
         Write-Host "  Expected: $ExpectedHash" -ForegroundColor Red
         Write-Host "  Got:      $ActualHash" -ForegroundColor Red
         Remove-Item $BinaryPath -Force
         exit 1
     }
+} else {
+    Write-Warn "Checksum entry not found for $AssetName; skipping verification"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@install.ps1` around lines 72 - 84, The script currently skips verification when $ExpectedLine is empty; update the block that uses $ExpectedLine (the section computing $ExpectedHash/$ActualHash and calling Get-FileHash on $BinaryPath) to explicitly handle a missing checksum entry: if $ExpectedLine is null/empty, Write-Host a clear error (e.g., "error: No checksum entry found for <asset> in checksums.txt") to stderr/Red and exit 1 before attempting hash comparison, otherwise proceed with the existing ExpectedHash vs ActualHash check and removal/exit behavior on mismatch.src/model/orchestrator/services/hooks/git-hooks-service.test.ts-2-2 (1)
2-2: ⚠️ Potential issue | 🟡 Minor

Remove unused `path` import to keep lint passing.

Line 2 is unused and currently triggers
no-unused-vars.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/hooks/git-hooks-service.test.ts` at line 2, Remove the unused import by deleting the "import path from 'node:path';" statement in the test file (git-hooks-service.test.ts) so the unused symbol `path` no longer triggers the no-unused-vars lint rule; alternatively, if the test should use it, reference `path` in the test (e.g., use path.join/resolve) — but the quick fix is to remove the import line.src/model/orchestrator/services/hooks/git-hooks-service.test.ts-239-247 (1)
239-247: ⚠️ Potential issue | 🟡 Minor

Restore `process.env` after this test case.

Lines 239-247 assert global env mutation but do not reset values, which can leak into later tests.
Proposed fix
it('should set CI env vars when Unity Git Hooks detected', async () => { + const previousBackground = process.env.UNITY_GITHOOKS_BACKGROUND_PROJECT_ENABLED; + const previousCi = process.env.CI; + (mockFs.existsSync as jest.Mock).mockReturnValue(true); (mockFs.readFileSync as jest.Mock).mockReturnValue(`{"dependencies":{"com.frostebite.unitygithooks":"1.0.5"}}`); (mockFs.readdirSync as jest.Mock).mockReturnValue(['com.frostebite.unitygithooks@1.0.5']); - - await GitHooksService.installHooks('/repo'); - - expect(process.env.UNITY_GITHOOKS_BACKGROUND_PROJECT_ENABLED).toBe('false'); + try { + await GitHooksService.installHooks('/repo'); + expect(process.env.UNITY_GITHOOKS_BACKGROUND_PROJECT_ENABLED).toBe('false'); + } finally { + if (previousBackground === undefined) delete process.env.UNITY_GITHOOKS_BACKGROUND_PROJECT_ENABLED; + else process.env.UNITY_GITHOOKS_BACKGROUND_PROJECT_ENABLED = previousBackground; + if (previousCi === undefined) delete process.env.CI; + else process.env.CI = previousCi; + } });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/hooks/git-hooks-service.test.ts` around lines 239 - 247, The test "should set CI env vars when Unity Git Hooks detected" mutates process.env.UNITY_GITHOOKS_BACKGROUND_PROJECT_ENABLED and doesn't restore it; update the test around GitHooksService.installHooks to save the original value (e.g., const original = process.env.UNITY_GITHOOKS_BACKGROUND_PROJECT_ENABLED) before invoking installHooks and restore it in a finally or after block (or use jest's afterEach) so process.env is reset; reference the test in git-hooks-service.test.ts and the call to GitHooksService.installHooks as the change point, and ensure mocks (mockFs.*) behavior remains the same while cleaning up the environment variable.src/cli.ts-4-5 (1)
4-5: ⚠️ Potential issue | 🟡 Minor

Remove stale eslint disable to avoid lint failure.

Line 4 disables
import/no-unresolved, but static analysis reports it as unused. That should be removed (or justified by a real unresolved import).🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli.ts` around lines 4 - 5, Remove the stale eslint disable before the import of hideBin in src/cli.ts: delete the comment "// eslint-disable-next-line import/no-unresolved" that precedes "import { hideBin } from 'yargs/helpers'"; if the import truly is unresolved on CI, either fix the import/resolution (ensure yargs is installed and types configured) or replace the disable with a targeted justification comment referencing why import/no-unresolved is expected (and only then keep it).src/model/cli/cli.ts-200-202 (1)
200-202: ⚠️ Potential issue | 🟡 Minor

Rename `agentArgs` to a non-abbreviated identifier to satisfy lint.

Line 200 is currently flagged by
unicorn/prevent-abbreviations.🔧 Proposed fix
- const agentArgs = Cli.options!['agentArgs'] || ''; + const agentArguments = Cli.options!['agentArgs'] || ''; const storagePaths = (Cli.options!['storagePaths'] || '').split(';').filter(Boolean); - await LfsAgentService.configure(agentPath, agentArgs, storagePaths, process.cwd()); + await LfsAgentService.configure(agentPath, agentArguments, storagePaths, process.cwd());🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/cli/cli.ts` around lines 200 - 202, Rename the abbreviated local variable agentArgs to a non-abbreviated identifier (e.g., agentArguments) and use that name when calling LfsAgentService.configure; update any other references in this file that use agentArgs. If LfsAgentService.configure's parameter name is also abbreviated, rename that parameter to match (agentArguments) and update its callers accordingly to satisfy unicorn/prevent-abbreviations.src/cli/__tests__/input-mapper.test.ts-5-8 (1)
5-8: ⚠️ Potential issue | 🟡 Minor

Reset `GitHub.githubInputEnabled` in test teardown to avoid global state leakage.

This suite mutates a global flag, but teardown currently restores only mocks and
Cli.options.🔧 Proposed fix
afterEach(() => { jest.restoreAllMocks(); Cli.options = undefined; + GitHub.githubInputEnabled = true; });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli/__tests__/input-mapper.test.ts` around lines 5 - 8, The test teardown currently restores mocks and resets Cli.options but fails to reset the mutated global flag GitHub.githubInputEnabled; update the afterEach block (where jest.restoreAllMocks() and Cli.options = undefined are set) to also set GitHub.githubInputEnabled back to its default (false or the original state used in your test suite) so the global state does not leak between tests.src/model/orchestrator/options/orchestrator-folders-auth.test.ts-130-138 (1)
130-138: ⚠️ Potential issue | 🟡 Minor

Use a descriptive env backup name and always restore via `finally`.

Line 130 naming is lint-flagged, and restoration should run even when the test fails mid-way.
🔧 Proposed fix
- const originalEnv = process.env.GIT_PRIVATE_TOKEN; - delete process.env.GIT_PRIVATE_TOKEN; + const originalEnvironmentVariable = process.env.GIT_PRIVATE_TOKEN; + delete process.env.GIT_PRIVATE_TOKEN; const { OrchestratorSystem } = require('../services/core/orchestrator-system'); - await OrchestratorFolders.configureGitAuth(); - - expect(OrchestratorSystem.Run).not.toHaveBeenCalled(); - if (originalEnv !== undefined) process.env.GIT_PRIVATE_TOKEN = originalEnv; + try { + await OrchestratorFolders.configureGitAuth(); + expect(OrchestratorSystem.Run).not.toHaveBeenCalled(); + } finally { + if (originalEnvironmentVariable !== undefined) { + process.env.GIT_PRIVATE_TOKEN = originalEnvironmentVariable; + } else { + delete process.env.GIT_PRIVATE_TOKEN; + } + }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/options/orchestrator-folders-auth.test.ts` around lines 130 - 138, Rename the env backup variable to something descriptive (e.g., originalGitPrivateToken) and ensure environment restoration always happens by wrapping the test body in a try/finally where the finally block restores process.env.GIT_PRIVATE_TOKEN; keep the call to OrchestratorFolders.configureGitAuth() and the assertion on OrchestratorSystem.Run unchanged, but move the restore logic into finally so it executes even if the test throws.src/cli/commands/orchestrate.ts-22-26 (1)
22-26: ⚠️ Potential issue | 🟡 Minor

Provider strategy help text is outdated.
The description only lists older providers; CLI help will be misleading with the newly added providers in this release.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli/commands/orchestrate.ts` around lines 22 - 26, Update the CLI help for the 'provider-strategy' option in orchestrate.ts: the current description only lists older providers. Locate the .option call for 'provider-strategy' (alias 'providerStrategy') and replace the description string with an accurate, up-to-date list of supported providers (including the newly added ones) or reference a centralized constant/enum if available so the help text stays correct in future releases.src/model/orchestrator/services/lfs/lfs-agent-service.test.ts-2-2 (1)
2-2: ⚠️ Potential issue | 🟡 Minor

Remove unused `path` import to satisfy linting.

`path` is imported but never used.

Proposed fix
```diff
-import path from 'node:path';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/lfs/lfs-agent-service.test.ts` at line 2, The test file imports an unused symbol `path` (`import path from 'node:path';`) which triggers lint errors; remove that import statement from src/model/orchestrator/services/lfs/lfs-agent-service.test.ts (delete the `import path from 'node:path';` line) so the file no longer references an unused symbol.src/model/orchestrator/services/lfs/lfs-agent-service.test.ts-52-58 (1)
52-58: ⚠️ Potential issue | 🟡 Minor

Restore `LFS_STORAGE_PATHS` after mutation in this test.

This test sets a process-level env var and leaves it modified, which can impact later tests.
Proposed fix
it('should set LFS_STORAGE_PATHS environment variable when storagePaths provided', async () => { (mockFs.existsSync as jest.Mock).mockReturnValue(true); + const originalValue = process.env.LFS_STORAGE_PATHS; - await LfsAgentService.configure('/usr/local/bin/elastic-git-storage', '', ['/path/a', '/path/b'], '/repo'); + try { + await LfsAgentService.configure('/usr/local/bin/elastic-git-storage', '', ['/path/a', '/path/b'], '/repo'); + expect(process.env.LFS_STORAGE_PATHS).toBe('/path/a;/path/b'); + } finally { + if (originalValue === undefined) { + delete process.env.LFS_STORAGE_PATHS; + } else { + process.env.LFS_STORAGE_PATHS = originalValue; + } + } - - expect(process.env.LFS_STORAGE_PATHS).toBe('/path/a;/path/b'); });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/lfs/lfs-agent-service.test.ts` around lines 52 - 58, The test in lfs-agent-service.test.ts mutates process.env.LFS_STORAGE_PATHS via calling LfsAgentService.configure and does not restore it, risking cross-test contamination; update the test to save the original value (e.g., const original = process.env.LFS_STORAGE_PATHS) before calling LfsAgentService.configure and restore it after the assertion (or use an afterEach/afterAll hook to restore process.env.LFS_STORAGE_PATHS to its original value) so the environment is unchanged for subsequent tests.src/model/orchestrator/services/core/task-parameter-serializer.test.ts-178-205 (1)
178-205: ⚠️ Potential issue | 🟡 Minor

Guard environment restoration with `finally` for test isolation.

These tests mutate `process.env.UNITY_SERIAL` and restore it only at the end. If an assertion fails early, state can leak to later tests.

Proposed fix
it('includes secrets from environment when present', () => { const originalSerial = process.env.UNITY_SERIAL; - process.env.UNITY_SERIAL = 'test-serial'; - - const secrets = TaskParameterSerializer.readDefaultSecrets(); - const serialSecret = secrets.find((s) => s.ParameterKey === 'UNITY_SERIAL'); - expect(serialSecret).toBeDefined(); - expect(serialSecret?.ParameterValue).toBe('test-serial'); - - if (originalSerial !== undefined) { - process.env.UNITY_SERIAL = originalSerial; - } else { - delete process.env.UNITY_SERIAL; - } + try { + process.env.UNITY_SERIAL = 'test-serial'; + const secrets = TaskParameterSerializer.readDefaultSecrets(); + const serialSecret = secrets.find((s) => s.ParameterKey === 'UNITY_SERIAL'); + expect(serialSecret).toBeDefined(); + expect(serialSecret?.ParameterValue).toBe('test-serial'); + } finally { + if (originalSerial !== undefined) { + process.env.UNITY_SERIAL = originalSerial; + } else { + delete process.env.UNITY_SERIAL; + } + } });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/core/task-parameter-serializer.test.ts` around lines 178 - 205, Both tests mutate process.env.UNITY_SERIAL without guaranteeing restoration on failure; wrap the env modification and assertions in a try/finally so the original value is always restored. Specifically, in the tests that call TaskParameterSerializer.readDefaultSecrets() (the "includes secrets from environment when present" and "excludes secrets not in environment" cases), capture the originalSerial, set or delete process.env.UNITY_SERIAL as needed, run the assertions inside a try block, and perform the restore/delete logic in the finally block to ensure test isolation.src/model/orchestrator/providers/ansible/ansible-provider.test.ts-41-41 (1)
41-41: ⚠️ Potential issue | 🟡 Minor

Address lint blockers in test locals/fixture keys.

`params`/`env` abbreviations and unquoted `custom_var` currently violate enabled lint rules; this can block merge even if behavior is correct.
- const params = createBuildParameters({ + const parameters = createBuildParameters({ ansibleInventory: '/custom/inventory', }); - const p = new AnsibleProvider(params); + const p = new AnsibleProvider(parameters);- const params = createBuildParameters({ - ansibleExtraVars: JSON.stringify({ custom_var: 'custom_value', another: '42' }), + const parameters = createBuildParameters({ + ansibleExtraVars: JSON.stringify({ 'custom_var': 'custom_value', another: '42' }), }); - provider = new AnsibleProvider(params); + provider = new AnsibleProvider(parameters);Also applies to: 52-52, 79-79, 148-148, 159-159, 175-175, 176-176, 190-190, 200-200, 270-270
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/providers/ansible/ansible-provider.test.ts` at line 41, Test fixture object keys use lint-forbidden abbreviations and unquoted identifiers; update all occurrences where createBuildParameters is called (e.g., variables named params, env and keys like custom_var) to use lint-safe names and properly quoted keys: replace abbreviated locals (params -> parameters or buildParameters, env -> environment) and ensure object literal keys that are not valid identifiers are quoted (e.g., "custom_var") across the listed test lines (41, 52, 79, 148, 159, 175, 176, 190, 200, 270) so the tests comply with lint rules while preserving the same values passed to createBuildParameters.src/model/orchestrator/services/lfs/lfs-agent-service.ts-39-43 (1)
39-43: ⚠️ Potential issue | 🟡 Minor

Clear `LFS_STORAGE_PATHS` when no storage paths are provided.

This method only sets the env var when paths exist; on later calls with an empty list, stale values remain and can misconfigure subsequent jobs.
♻️ Proposed fix
if (storagePaths.length > 0) { const storagePathsValue = storagePaths.join(';'); process.env.LFS_STORAGE_PATHS = storagePathsValue; OrchestratorLogger.log(`[LfsAgent] Storage paths: ${storagePathsValue}`); + } else { + delete process.env.LFS_STORAGE_PATHS; }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/lfs/lfs-agent-service.ts` around lines 39 - 43, When storagePaths is empty the code currently leaves a stale process.env.LFS_STORAGE_PATHS value; update the logic around storagePaths (the block using storagePaths.join and process.env.LFS_STORAGE_PATHS) to explicitly clear the env var when storagePaths.length === 0 (either delete process.env.LFS_STORAGE_PATHS or set it to an empty string) and log the change via OrchestratorLogger.log (e.g., log that LFS storage paths were cleared) so subsequent calls to the function won't use stale configuration.src/model/orchestrator/services/lfs/elastic-git-storage-service.test.ts-285-285 (1)
285-285: ⚠️ Potential issue | 🟡 Minor

Remove the unused `OrchestratorSystem` binding.

This is a lint error (`no-unused-vars`) and can block CI.

✅ Minimal fix
- const { OrchestratorSystem } = require('../core/orchestrator-system'); mockOs.platform.mockReturnValue('linux');🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/lfs/elastic-git-storage-service.test.ts` at line 285, The test imports an unused binding "OrchestratorSystem" which triggers a no-unused-vars lint error; remove the unused require binding (the line that declares const { OrchestratorSystem } = require(...)) or, if it was intended to be used, replace it with actual usage—most simply delete that const binding so the test no longer imports OrchestratorSystem unnecessarily.src/model/orchestrator/services/lfs/elastic-git-storage-service.test.ts-168-180 (1)
168-180: ⚠️ Potential issue | 🟡 Minor

Wrap temporary `process.env` mutations in `try/finally`.

If this test fails before restoration, global env pollution can cascade into unrelated tests.
🧪 Proposed stabilization
- const originalEnv = { ...process.env }; - process.env.LOCALAPPDATA = 'C:\\Users\\runner\\AppData\\Local'; - process.env.RUNNER_TOOL_CACHE = ''; + const originalEnv = { ...process.env }; + try { + process.env.LOCALAPPDATA = 'C:\\Users\\runner\\AppData\\Local'; + process.env.RUNNER_TOOL_CACHE = ''; - (mockFs.existsSync as jest.Mock).mockImplementation((p: string) => { - return p.includes('AppData'); - }); + (mockFs.existsSync as jest.Mock).mockImplementation((p: string) => { + return p.includes('AppData'); + }); - const result = await ElasticGitStorageService.findInstalled(); - expect(result).toContain('elastic-git-storage.exe'); - - process.env = originalEnv; + const result = await ElasticGitStorageService.findInstalled(); + expect(result).toContain('elastic-git-storage.exe'); + } finally { + process.env = originalEnv; + }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/lfs/elastic-git-storage-service.test.ts` around lines 168 - 180, The test mutates process.env directly; wrap the mutation and invocation of ElasticGitStorageService.findInstalled inside a try/finally so originalEnv is always restored: save originalEnv as you do, set LOCALAPPDATA and RUNNER_TOOL_CACHE, run the mocked existsSync and await ElasticGitStorageService.findInstalled inside the try block, and restore process.env = originalEnv in the finally block to prevent env leakage across tests.src/cli/commands/update.ts-210-235 (1)
210-235: ⚠️ Potential issue | 🟡 Minor

Version parsing may fail on pre-release or non-standard versions.

`parseVersionParts` uses `Number()`, which returns `NaN` for non-numeric segments like "beta" in "2.0.0-beta.1". This causes `compareSemver` to behave unexpectedly, since `NaN` comparisons always return false.
🛡️ Proposed fix to handle pre-release versions

```diff
 function parseVersionParts(version: string): number[] {
   return version
     .replace(/^v/, '')
+    .split('-')[0] // Strip pre-release suffix before parsing
     .split('.')
-    .map((part) => Number(part));
+    .map((part) => {
+      const num = Number.parseInt(part, 10);
+      return Number.isNaN(num) ? 0 : num;
+    });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli/commands/update.ts` around lines 210 - 235, parseVersionParts currently uses Number(...) which yields NaN for pre-release identifiers (e.g. "2.0.0-beta.1") and breaks compareSemver; update parseVersionParts and compareSemver so they first split off and preserve the pre-release metadata (after '-') and parse only the numeric core using parseInt with fallback 0 for each segment, and then in compareSemver, after comparing the three numeric parts, add logic to handle pre-release precedence: treat a version without pre-release as greater than one with pre-release, and if both have pre-release compare their dot-separated identifiers with numeric-aware comparison (numeric identifiers compared by numeric value, non-numeric by lexicographic order; shorter numeric/identifier list is lower if all equal up to shorter length). Ensure you reference parseVersionParts and compareSemver when making these changes.src/model/orchestrator/services/output/artifact-upload-handler.ts-369-381 (1)
369-381: ⚠️ Potential issue | 🟡 Minor

Explicitly configure rclone timeout and retry parameters for clarity and consistent behavior.

While rclone has built-in defaults (`--retries 3`, `--timeout 5m`, `--contimeout 1m`), making these bounds explicit in the code documents intent and ensures artifact uploads are protected against transient failures. Add explicit timeout and retry parameters to match your expectations for this workload.

Suggested fix
```diff
-    const args = ['copy', resolvedPath, destination, '--progress'];
+    const args = [
+      'copy',
+      resolvedPath,
+      destination,
+      '--progress',
+      '--retries',
+      '3',
+      '--retries-sleep',
+      '10s',
+      '--timeout',
+      '30m',
+      '--contimeout',
+      '30s',
+    ];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/output/artifact-upload-handler.ts` around lines 369 - 381, The rclone invocation builds args and calls exec('rclone', args) but doesn't set explicit timeout/retry flags; append explicit flags (e.g., '--retries', '3', '--timeout', '5m', '--contimeout', '1m') to the args array before calling exec to make retry/timeouts deterministic and document intent. Locate the args declaration and the exec call in artifact-upload-handler.ts (the args variable and exec('rclone', args)), add the flags to args (or derive values from config if desired), and keep the existing informational OrchestratorLogger.log for compression unchanged.
🧹 Nitpick comments (12)
src/model/orchestrator/providers/ansible/index.ts (1)
165-178: Use `number` instead of `Number` for primitive type.

Line 171 uses the boxed `Number` type instead of the primitive `number`. TypeScript best practice is to use lowercase primitive types.

Proposed fix

 async garbageCollect(
   // eslint-disable-next-line no-unused-vars
   filter: string,
   // eslint-disable-next-line no-unused-vars
   previewOnly: boolean,
   // eslint-disable-next-line no-unused-vars
-  olderThan: Number,
+  olderThan: number,
   // eslint-disable-next-line no-unused-vars
   fullCache: boolean,
   // eslint-disable-next-line no-unused-vars
   baseDependencies: boolean,
 ): Promise<string> {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/providers/ansible/index.ts` around lines 165 - 178, The parameter type for olderThan in the async garbageCollect method is declared as the boxed Number; change it to the primitive number in the garbageCollect signature (async garbageCollect(filter: string, previewOnly: boolean, olderThan: number, fullCache: boolean, baseDependencies: boolean): Promise<string>) and update any usages or tests that expect Number to use the primitive type instead so TypeScript treats it as a numeric primitive rather than the boxed object.

src/cli/commands/status.ts (1)
18-19: Redundant fallback for projectPath.

The `|| '.'` fallback is unnecessary since yargs already provides `default: '.'` at line 15.

Proposed simplification

 handler: async (cliArguments) => {
-  const projectPath = (cliArguments.projectPath as string) || '.';
+  const projectPath = cliArguments.projectPath as string;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli/commands/status.ts` around lines 18 - 19, The handler's projectPath assignment in the async handler still uses a redundant fallback "|| '.'"; remove the fallback and simply read the yargs-provided default by changing the line in the handler to use the value directly (e.g., const projectPath = cliArguments.projectPath as string) so the default configured in yargs is relied upon; update any dependent code if necessary to assume projectPath is always a string.

src/model/orchestrator/providers/cli/cli-provider-protocol.ts (1)
1-20: Clean protocol definition; consider stronger typing for improved safety.

The protocol interfaces are well-structured. For future maintainability, consider replacing `any` with more specific types or generics to improve type safety and IDE support.

💡 Optional: Typed result example

-export interface CliProviderRequest {
+export interface CliProviderRequest<T = Record<string, unknown>> {
   command: CliProviderSubcommand;
-  params: Record<string, any>;
+  params: T;
 }

-export interface CliProviderResponse {
+export interface CliProviderResponse<T = unknown> {
   success: boolean;
-  result?: any;
+  result?: T;
   error?: string;
   output?: string;
 }
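For a stricter variant along the lines of the AI prompt below, a subcommand-keyed map yields a discriminated union rather than open generics. A minimal sketch; the two subcommands and their param/result shapes are hypothetical stand-ins, not the real protocol:

// Hypothetical subcommand maps; the real ones would cover every CliProviderSubcommand.
interface CliProviderParamsMap {
  setup: { buildGuid: string; branch: string };
  run: { image: string; commands: string[] };
}

interface CliProviderResultsMap {
  setup: { workspaceReady: boolean };
  run: { exitCode: number };
}

// Discriminated union: narrowing on `command` narrows `params` with it.
type CliProviderRequestStrict = {
  [K in keyof CliProviderParamsMap]: { command: K; params: CliProviderParamsMap[K] };
}[keyof CliProviderParamsMap];

interface CliProviderResponseStrict<K extends keyof CliProviderResultsMap = keyof CliProviderResultsMap> {
  success: boolean;
  result?: CliProviderResultsMap[K];
  error?: string;
  output?: string;
}

function handleRequest(request: CliProviderRequestStrict): void {
  if (request.command === 'run') {
    // `request.params` is narrowed to { image: string; commands: string[] } here
    console.log(request.params.commands.join(' && '));
  }
}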
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/providers/cli/cli-provider-protocol.ts` around lines 1 - 20, Replace the loose any types with stronger typing by making CliProviderRequest and CliProviderResponse generic or by introducing discriminated unions: create a CliProviderParamsMap and CliProviderResultsMap keyed by CliProviderSubcommand, then type CliProviderRequest as { command: K; params: CliProviderParamsMap[K] } and CliProviderResponse as { success: boolean; result?: CliProviderResultsMap[K]; error?: string; output?: string } (or add generics <P,R> to CliProviderRequest/CliProviderResponse) so callers of command-specific handlers get precise param/result types; update usages of CliProviderRequest, CliProviderResponse, and CliProviderSubcommand accordingly.

src/model/orchestrator/providers/provider-selection.test.ts (1)
85-127: Routing test does not exercise actual provider selection logic.

This test validates a local hardcoded map rather than the production routing mechanism, so it can pass even when real strategy-to-provider wiring regresses.

Consider routing through the real selector/loader path (or a thin exported selector helper) and asserting the resolved class type per `providerStrategy`.
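A rough sketch of that shape; `selectProvider` and `baseParameters` are hypothetical stand-ins for the production selector and this file's fixture, and the strategy strings here are illustrative:

// Stand-ins for the real exported selector and a valid BuildParameters fixture.
declare function selectProvider(parameters: unknown): Promise<object>;
declare const baseParameters: Record<string, unknown>;

it.each([
  ['github-actions', 'GitHubActionsProvider'],
  ['ansible', 'AnsibleProvider'],
])('resolves strategy %s to %s', async (providerStrategy, expectedClass) => {
  const provider = await selectProvider({ ...baseParameters, providerStrategy });
  expect(provider.constructor.name).toBe(expectedClass);
});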
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/providers/provider-selection.test.ts` around lines 85 - 127, The test currently constructs a local hardcoded strategies map instead of exercising the real provider routing; change it to call the production selector/loader with BuildParameters (set providerStrategy on the params) and assert the returned instance is the correct provider class (RemotePowershellProvider, GitHubActionsProvider, GitLabCIProvider, AnsibleProvider). Locate and import the exported selector function used in production (e.g., the module that resolves provider by providerStrategy — look for names like selectProviderForParameters, getProviderForStrategy, or createProviderFromParams), replace the Object.entries(strategies) loop with calls like selector(paramsWithProviderStrategy) for each strategy value, and assert instance.constructor.name (or use instanceof) matches the expected provider class.

src/model/orchestrator/orchestrator.ts (1)
373-404: Timeout promise is never cancelled on success.

In `setupWorkflowWithTimeout`, if `setupPromise` resolves before the timeout, the timeout's `setTimeout` remains scheduled and will eventually reject (though the promise is already settled). While this doesn't cause functional issues, it's a minor resource leak.

♻️ Suggested fix to properly cancel timeout

 private static async setupWorkflowWithTimeout() {
   const timeoutSeconds = Orchestrator.buildParameters.providerInitTimeout;
   const setupPromise = Orchestrator.Provider.setupWorkflow(
     Orchestrator.buildParameters.buildGuid,
     Orchestrator.buildParameters,
     Orchestrator.buildParameters.branch,
     Orchestrator.defaultSecrets,
   );
   if (timeoutSeconds <= 0) {
     await setupPromise;
     return;
   }
   OrchestratorLogger.log(`Provider init timeout: ${timeoutSeconds}s`);
-  const timeoutPromise = new Promise<never>((_, reject) => {
-    setTimeout(
-      () => reject(new Error(`Provider initialization timed out after ${timeoutSeconds}s`)),
-      timeoutSeconds * 1000,
-    );
-  });
-
-  await Promise.race([setupPromise, timeoutPromise]);
+  let timeoutId: NodeJS.Timeout;
+  const timeoutPromise = new Promise<never>((_, reject) => {
+    timeoutId = setTimeout(
+      () => reject(new Error(`Provider initialization timed out after ${timeoutSeconds}s`)),
+      timeoutSeconds * 1000,
+    );
+  });
+
+  try {
+    await Promise.race([setupPromise, timeoutPromise]);
+  } finally {
+    clearTimeout(timeoutId!);
+  }
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/orchestrator.ts` around lines 373 - 404, The timeout created in setupWorkflowWithTimeout never gets cleared when setupPromise resolves; modify setupWorkflowWithTimeout to capture the timer id returned by setTimeout when building timeoutPromise (use a let timerId = setTimeout(...)), race setupPromise against the timeout, and after the race settles (both on success and error) call clearTimeout(timerId) to cancel the pending timeout; ensure this logic wraps around Orchestrator.Provider.setupWorkflow and the timeoutPromise so the timer is always cleared (also keep the existing early return when providerInitTimeout <= 0).

src/model/orchestrator/services/core/runner-availability-service.test.ts (1)
283-316: Potential test isolation issue with Date.now mock.

The `Date.now` mock restoration at line 315 may not execute if the test fails before reaching that point, which could affect subsequent tests.

♻️ Suggested fix using afterEach or try/finally

 it('should handle pagination timeout gracefully', async () => {
-  // Mock Date.now to simulate timeout
   const originalDateNow = Date.now;
   let callCount = 0;
-  const mockRequest = jest.fn().mockImplementation(() => {
-    callCount++;
-    // After first call, advance time past the timeout
-    if (callCount >= 2) {
-      Date.now = jest.fn(() => originalDateNow() + 31_000);
-    }
-    const runners = createMockRunners(
-      Array.from({ length: 100 }, (_, i) => ({
-        name: `runner-${callCount}-${i}`,
-        status: 'online' as const,
-        busy: false,
-        labels: ['self-hosted'],
-      })),
-    );
-
-    return Promise.resolve({ status: 200, data: { runners } });
-  });
-  MockedOctokit.mockImplementation(() => ({ request: mockRequest } as any));
-
-  const result = await RunnerAvailabilityService.checkAvailability('owner', 'repo', 'token', [], 1);
-
-  // Should have stopped after timeout was detected (2 pages: first succeeds, second triggers timeout check)
-  expect(mockRequest.mock.calls.length).toBeLessThanOrEqual(3);
-  // Should have runners from pages fetched before timeout
-  expect(result.totalRunners).toBeGreaterThan(0);
-
-  // Restore
-  Date.now = originalDateNow;
+  try {
+    const mockRequest = jest.fn().mockImplementation(() => {
+      callCount++;
+      if (callCount >= 2) {
+        Date.now = jest.fn(() => originalDateNow() + 31_000);
+      }
+      const runners = createMockRunners(
+        Array.from({ length: 100 }, (_, index) => ({
+          name: `runner-${callCount}-${index}`,
+          status: 'online' as const,
+          busy: false,
+          labels: ['self-hosted'],
+        })),
+      );
+      return Promise.resolve({ status: 200, data: { runners } });
+    });
+    MockedOctokit.mockImplementation(() => ({ request: mockRequest } as any));
+
+    const result = await RunnerAvailabilityService.checkAvailability('owner', 'repo', 'token', [], 1);
+
+    expect(mockRequest.mock.calls.length).toBeLessThanOrEqual(3);
+    expect(result.totalRunners).toBeGreaterThan(0);
+  } finally {
+    Date.now = originalDateNow;
+  }
 });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/core/runner-availability-service.test.ts` around lines 283 - 316, The test temporarily overrides Date.now which may not be restored if the test throws; wrap the mocking and assertions in a try/finally (or move the Date.now restore to an afterEach) so Date.now is always reset — modify the test around the mockRequest/RunnerAvailabilityService.checkAvailability call to set Date.now = jest.fn(...) and ensure Date.now = originalDateNow is executed in a finally block (or add an afterEach that restores Date.now) so other tests are not affected; reference the Date.now mock in this test and the RunnerAvailabilityService.checkAvailability invocation when making the change.

src/model/orchestrator/services/output/output-service.ts (2)
99-117: Recursive directory traversal may be slow for large output directories.
`getDirectorySize` uses synchronous `fs.statSync` calls recursively, which blocks the event loop. For Unity builds with many output files (e.g., thousands of assets), this could cause noticeable delays.

Additionally, silently ignoring all errors (lines 112-114) may hide permission issues or broken symlinks that users should know about.

♻️ Suggested improvements

-private static getDirectorySize(dirPath: string): number {
+private static getDirectorySize(directoryPath: string): number {
   let totalSize = 0;
   try {
-    const entries = fs.readdirSync(dirPath, { withFileTypes: true });
+    const entries = fs.readdirSync(directoryPath, { withFileTypes: true });
     for (const entry of entries) {
-      const fullPath = path.join(dirPath, entry.name);
-      if (entry.isDirectory()) {
-        totalSize += OutputService.getDirectorySize(fullPath);
-      } else {
-        totalSize += fs.statSync(fullPath).size;
+      const fullPath = path.join(directoryPath, entry.name);
+      try {
+        totalSize += entry.isDirectory()
+          ? OutputService.getDirectorySize(fullPath)
+          : fs.statSync(fullPath).size;
+      } catch {
+        // Skip inaccessible files but continue enumeration
       }
     }
-  } catch {
-    // Ignore errors in size calculation
+  } catch (error) {
+    OrchestratorLogger.logWarning(`[Output] Could not enumerate directory: ${directoryPath}`);
   }
   return totalSize;
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/output/output-service.ts` around lines 99 - 117, getDirectorySize currently performs synchronous recursive fs.statSync calls inside OutputService.getDirectorySize which blocks the event loop and swallows all errors; change this to a non-blocking async implementation (e.g., getDirectorySizeAsync using fs.promises.readdir and fs.promises.stat with Promise.all or an explicit async queue/stack to avoid deep recursion) and return a Promise<number>, update calling code to await it (or provide both sync/async variants); also stop silently ignoring errors—catch and log/propagate permission/broken-symlink errors from the new getDirectorySizeAsync (or return partial size with a warning via the existing logging facility) so issues aren’t hidden while keeping traversal resilient.
23-94: Async method uses blocking synchronous I/O.

`collectOutputs` is declared `async` but uses synchronous fs methods (`existsSync`, `statSync`, `readdirSync`, `mkdirSync`, `writeFileSync`). Consider using async equivalents (`fs.promises.*`) to avoid blocking the event loop during manifest generation.
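A sketch of the non-blocking variant the prompt below asks for, using `fs.promises` and an explicit stack instead of recursion; the naming and error handling here are illustrative, not the file's actual API:

import { promises as fsp } from 'node:fs';
import * as path from 'node:path';

// Walk the tree iteratively, summing file sizes without blocking the event loop.
async function getDirectorySizeAsync(root: string): Promise<number> {
  let totalSize = 0;
  const stack: string[] = [root];
  while (stack.length > 0) {
    const current = stack.pop()!;
    const entries = await fsp.readdir(current, { withFileTypes: true });
    for (const entry of entries) {
      const fullPath = path.join(current, entry.name);
      if (entry.isDirectory()) {
        stack.push(fullPath);
      } else if (entry.isFile()) {
        totalSize += (await fsp.stat(fullPath)).size;
      }
    }
  }
  return totalSize;
}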
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/output/output-service.ts` around lines 23 - 94, collectOutputs currently blocks the event loop by using synchronous fs calls; replace all sync calls with their async/promises equivalents (use fs.promises.access or try fs.promises.stat instead of existsSync/statSync, fs.promises.readdir instead of readdirSync, and fs.promises.mkdir/fs.promises.writeFile instead of mkdirSync/writeFileSync), await each operation and propagate errors to the existing try/catch blocks; update the directory-size computation used by OutputService.getDirectorySize to an async variant (or add getDirectorySizeAsync) and call that from collectOutputs; keep existing logging calls (OrchestratorLogger.log / logWarning) and preserve behavior for missing outputs and manifestPath handling while converting all file I/O to async/await.

src/cli/commands/update.ts (1)
31-76: Network requests lack timeout protection.

Both `fetchJson` and `downloadFile` have no timeout mechanism. A slow or unresponsive server could cause the update command to hang indefinitely.

♻️ Suggested fix to add request timeout

 function fetchJson(url: string): Promise<any> {
   return new Promise((resolve, reject) => {
+    const REQUEST_TIMEOUT = 30_000; // 30 seconds
     const get = (targetUrl: string, redirectCount: number) => {
       if (redirectCount > 5) {
         reject(new Error('Too many redirects'));
         return;
       }
-      https
+      const request = https
         .get(
           targetUrl,
           {
             headers: { 'User-Agent': 'game-ci-cli', Accept: 'application/json' },
+            timeout: REQUEST_TIMEOUT,
           },
           (response) => {
             // ... existing handler code
           },
         )
         .on('error', reject);
+      request.on('timeout', () => {
+        request.destroy();
+        reject(new Error(`Request timed out after ${REQUEST_TIMEOUT}ms`));
+      });
     };
     get(url, 0);
   });
 }
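As a self-contained reference for the same idle-timeout pattern (the elided handler code above stays as-is), a bare Node https request; the function name and the 30-second default are illustrative:

import * as https from 'node:https';

// Resolve with the response body, or reject if the socket idles past timeoutMs.
function getWithTimeout(url: string, timeoutMs = 30_000): Promise<string> {
  return new Promise((resolve, reject) => {
    const request = https.get(url, { timeout: timeoutMs }, (response) => {
      const chunks: Buffer[] = [];
      response.on('data', (chunk) => chunks.push(chunk));
      response.on('end', () => resolve(Buffer.concat(chunks).toString('utf8')));
      response.on('error', reject);
    });
    // 'timeout' fires on socket inactivity but does not abort by itself;
    // destroying with an error surfaces it through the 'error' handler below.
    request.on('timeout', () => {
      request.destroy(new Error(`Request timed out after ${timeoutMs}ms`));
    });
    request.on('error', reject);
  });
}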
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/cli/commands/update.ts` around lines 31 - 76, fetchJson and downloadFile lack request timeouts so a slow server can hang the update command; add a per-request timeout (e.g., a configurable constant like REQUEST_TIMEOUT_MS) to each HTTPS request: when creating the request (inside fetchJson's https.get and the request used in downloadFile) attach a timeout handler (request.setTimeout or an AbortController equivalent) that aborts/destroys the request and rejects with a descriptive timeout Error, ensure you clear the timeout on 'response' end/error and when following redirects (preserve/reset the timer for each redirected request), and make sure downloadFile cleans up any open streams when timing out.

src/index.ts (1)
276-331: Artifact upload errors are silently swallowed.

The entire artifact collection/upload block (lines 276-331) catches errors and only logs warnings. If artifact upload is critical for CI pipelines, this could cause silent failures where builds appear successful but artifacts are missing.
Consider making this configurable or at minimum ensuring the warning is prominent.
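A minimal sketch of the configurable escalation, assuming a hypothetical `failOnArtifactFailure` parameter; the service signatures are declared as stand-ins, not the real APIs:

import * as core from '@actions/core';

// Stand-in declarations; the real services live under src/model/orchestrator/services/output.
declare const OutputService: { collectOutputs(parameters: unknown): Promise<unknown> };
declare const ArtifactUploadHandler: { uploadArtifacts(outputs: unknown, parameters: unknown): Promise<void> };

async function handleArtifacts(buildParameters: { failOnArtifactFailure: boolean }): Promise<void> {
  try {
    const outputs = await OutputService.collectOutputs(buildParameters);
    await ArtifactUploadHandler.uploadArtifacts(outputs, buildParameters);
  } catch (error) {
    const message = `Artifact handling failed: ${(error as Error).message}`;
    if (buildParameters.failOnArtifactFailure) {
      core.setFailed(message); // escalate: mark the action run as failed
    } else {
      core.warning(message); // current behavior: warn and continue
    }
  }
}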
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` around lines 276 - 331, The artifact collection/upload block currently swallows errors and only calls core.warning, so change it to respect a configurable flag (e.g., buildParameters.failOnArtifactFailure) and on any failure from OutputService.collectOutputs or ArtifactUploadHandler.uploadArtifacts either call core.setFailed with a clear message (including the error details) or rethrow the error when the flag is true; keep the existing warning behavior when the flag is false, but ensure uploadResult errors include full error details in the message and parse errors from artifactCustomTypes are logged with full stack/message via core.error or included in the final failure. Ensure references: OutputService.collectOutputs, ArtifactUploadHandler.uploadArtifacts, artifactCustomTypes, buildParameters, and core.warning/core.setFailed are updated accordingly.

src/model/orchestrator/services/hot-runner/hot-runner-dispatcher.ts (1)
132-139: Clear timeout handle after `Promise.race` settles.

Line 132 starts a timer that is never cleared on success. At scale, this creates avoidable pending timers and churn.
💡 Proposed fix
-const timeoutPromise = new Promise<typeof TIMEOUT_SENTINEL>((resolve) => {
-  setTimeout(() => {
+let timeoutHandle: NodeJS.Timeout | undefined;
+const timeoutPromise = new Promise<typeof TIMEOUT_SENTINEL>((resolve) => {
+  timeoutHandle = setTimeout(() => {
     resolve(TIMEOUT_SENTINEL);
   }, request.timeout);
 });

-const result = await Promise.race([transport.sendJob(request), timeoutPromise]);
+const result = await Promise.race([transport.sendJob(request), timeoutPromise]);
+if (timeoutHandle) clearTimeout(timeoutHandle);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/hot-runner/hot-runner-dispatcher.ts` around lines 132 - 139, The timeout timer created for the timeoutPromise (using TIMEOUT_SENTINEL and request.timeout) is never cleared, causing lingering timers; modify the block around the Promise.race call to capture the setTimeout id (e.g., const timeoutId = setTimeout(...)) and ensure you call clearTimeout(timeoutId) after Promise.race settles (in both success and error paths, e.g., in a finally block) so the timer is always cleared regardless of whether transport.sendJob(request) wins the race.

src/model/orchestrator/services/output/artifact-service.test.ts (1)
546-574: Use `jest.spyOn` instead of direct `execFileSync` reassignment.

These tests mutate `childProcess.execFileSync` directly. If an assertion throws before the manual restore, downstream tests can inherit the patch.

💡 Proposed fix

- const childProcess = require('node:child_process');
- const originalExecFileSync = childProcess.execFileSync;
- childProcess.execFileSync = jest.fn(() => {
-   throw new Error('ENOENT');
- });
+ const childProcess = require('node:child_process');
+ const execFileSyncSpy = jest.spyOn(childProcess, 'execFileSync').mockImplementation(() => {
+   throw new Error('ENOENT');
+ });
@@
- // Restore
- childProcess.execFileSync = originalExecFileSync;
+ execFileSyncSpy.mockRestore();

Also applies to: 578-605
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/services/output/artifact-service.test.ts` around lines 546 - 574, Replace the direct reassignment of childProcess.execFileSync in the test with a jest.spyOn so the mock is automatically restorable: create a spy via const execSpy = jest.spyOn(childProcess, 'execFileSync').mockImplementation(() => { throw new Error('ENOENT'); }); call ArtifactUploadHandler.uploadArtifacts(...) and assert as before, then restore the spy with execSpy.mockRestore() (or register restoration in an afterEach/finally) instead of reassigning originalExecFileSync; apply the same change for the other similar block around lines 578-605.
          manifest['dependencies']['${{ matrix.name }}'] = '${{ matrix.package }}'
          json.dump(manifest, sys.stdout, indent=2)
          " > Packages/manifest.tmp && mv Packages/manifest.tmp Packages/manifest.json
🧩 Analysis chain
🏁 Script executed:
find . -name "validate-community-plugins.yml" -o -name "community-plugins.yml" | head -20
🏁 Script executed:
cat .github/workflows/validate-community-plugins.yml
🏁 Script executed:
cat .github/workflows/community-plugins.yml
🌐 Web query:
Unity UPM Packages/manifest.json dependency keys format requirements package name or display name
💡 Result:
In Unity UPM:
- In `Packages/manifest.json` (project manifest), the `dependencies` object is a map of package name → version (or other locator). The key must be the package's registered name (the same value as the package's `"name"` in its `package.json`), not its `"displayName"`. [1]
- In a package's `package.json` (package manifest):
  - `"name"` is the official/unique identifier and must follow Unity's naming convention (reverse-domain style). Unity explicitly notes this is not the user-friendly name. [2]
  - `"displayName"` is only the UI-friendly label shown in the Editor. [2]
  - `"dependencies"` keys are package names. [2]

Package name format rules (for `"name"` and thus dependency keys):

- Uses reverse-domain notation, starts with `com.<company-name>`.
- Lowercase; allowed chars: lowercase letters, digits, `-`, `_`, `.` (period).
- Periods can indicate nested namespaces. [2]

Dependency value formats (project manifest):

- Most commonly a SemVer version string (Unity doesn't support version ranges in dependencies). [2]
- Can also be a local disk path reference for local packages (Unity docs and vendor instructions commonly show `"<package-name>": "file:..."` / local mappings). [3]
Example (correct):
"dependencies": {
"com.unity.timeline": "1.8.7",
"com.mycompany.mypackage": "2.3.1"
}

Sources: Unity Manual on project manifest dependencies [1], Unity Manual on package manifest (name, displayName, dependency keys, naming rules) [2], Google's Unity UPM install instructions (local/path-style dependency mapping examples) [3].
[1] (docs.unity3d.com)
[2] (docs.unity3d.com)
[3] (developers.google.com)
Citations:
- 1: https://docs.unity3d.com/es/2019.3/Manual/upm-dependencies.html
- 2: https://docs.unity3d.com/es/2019.3/Manual/upm-manifestPkg.html
- 3: https://developers.google.com/unity/instructions?utm_source=openai
🏁 Script executed:
cat community-plugins.yml | head -100
🏁 Script executed:
wc -l community-plugins.yml
Use package identifiers for dependency keys, not plugin display names.
Line 104 writes dependencies as "<display name>": "<git url>" (e.g., "UniTask", "NaughtyAttributes", "Unity Atoms"). However, Unity's Packages/manifest.json requires dependency keys to be valid package identifiers in reverse-domain format (e.g., com.company.package). The current implementation will produce invalid manifests that fail validation in any Unity project.
To fix, add a packageName field to each plugin entry in community-plugins.yml containing the actual package identifier, then use it in line 104:
🔧 Suggested fix
In community-plugins.yml, add the package identifier for each plugin:
- name: UniTask
+ packageName: com.cysharp.unitask
package: https://github.com/Cysharp/UniTask.git?path=src/UniTask/Assets/Plugins/UniTaskIn the matrix construction step, include packageName:
matrix.push({
name: plugin.name,
packageName: plugin.packageName,
package: plugin.package,
...
});
Then update line 104:
- manifest['dependencies']['${{ matrix.name }}'] = '${{ matrix.package }}'
+ manifest['dependencies']['${{ matrix.packageName }}'] = '${{ matrix.package }}'🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/validate-community-plugins.yml around lines 104 - 106, The
manifest is being written using display names as dependency keys
(manifest['dependencies']['${{ matrix.name }}'] = '${{ matrix.package }}'),
which produces invalid Unity package IDs; update the matrix construction to
include a packageName (e.g., matrix.push({ name: plugin.name, packageName:
plugin.packageName, package: plugin.package, ... })) and change the manifest
write to use matrix.packageName as the dependency key instead of matrix.name so
manifest['dependencies'][ '${{ matrix.packageName }}' ] = '${{ matrix.package
}}'.
        continue-on-error: true

      - name: Record result
        if: always()
        run: |
          STATUS="${{ steps.build.outcome }}"
          echo "## ${{ matrix.name }} — ${{ matrix.platform }}" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          if [ "$STATUS" = "success" ]; then
            echo "✅ **PASSED** — Compiled and built successfully" >> $GITHUB_STEP_SUMMARY
          else
            echo "❌ **FAILED** — Build or compilation failed" >> $GITHUB_STEP_SUMMARY
          fi
🧩 Analysis chain
🏁 Script executed:
# First, let's check the workflow file exists and view the relevant section
find . -name "validate-community-plugins.yml" -type f
🌐 Web query:
GitHub Actions continue-on-error behavior job conclusion success
💡 Result:
- `continue-on-error: true` does not change what actually happened (the command still exits non-zero), but it changes the recorded "conclusion" used for overall success.
- For a step with `continue-on-error: true` that fails:
  - `steps.<id>.outcome == 'failure'` (real result, before applying `continue-on-error`)
  - `steps.<id>.conclusion == 'success'` (final result, after applying `continue-on-error`) [1]
- For a job with `jobs.<job_id>.continue-on-error: true` (often used with matrix "experimental" jobs), the job is treated as allowed-to-fail so it won't make the workflow fail in the normal way [2]. As a result, downstream `needs.<job_id>.result` can show `success` even when something inside that job failed (because the allowed failure is effectively "masked" for dependency/result purposes) [3].

How to fail the workflow/job based on a "continue-on-error" failure (common pattern):

- Put `continue-on-error: true` on the step, then later add a step that checks `steps.<id>.outcome` and explicitly `exit 1` if it was `failure` (so you keep running later steps, but still control final failure based on your own logic) [1].

Sources

- GitHub Docs (steps context: `outcome` vs `conclusion`, and `continue-on-error` behavior) [1]
- GitHub Docs (job-level `continue-on-error`, incl. matrix/experimental usage) [2]
- actions/toolkit issue discussing `needs.<job_id>.result` showing `success` when the job uses `continue-on-error` [3]
🏁 Script executed:
# Let's read the workflow file to verify the code and check for a report job
cat -n ./.github/workflows/validate-community-plugins.yml | sed -n '110,150p'
🏁 Script executed:
# Also search for any report job or downstream jobs
grep -n "job\|report\|Report" ./.github/workflows/validate-community-plugins.yml
🏁 Script executed:
# Check if there are more steps in the validate job after line 139
cat -n ./.github/workflows/validate-community-plugins.yml | sed -n '139,145p'
🏁 Script executed:
# Also get the full validate job structure to see all steps
sed -n '22,141p' ./.github/workflows/validate-community-plugins.yml | tail -50
🏁 Script executed:
# Let's see the full report job implementation to understand how it reads job conclusions
sed -n '141,180p' ./.github/workflows/validate-community-plugins.yml
Failed builds are masked in the validation report due to continue-on-error: true preventing job failure.
With continue-on-error: true on the Build step (line 122), a failed build doesn't fail the job. Although the "Record result" step correctly checks steps.build.outcome and reports the failure to the step summary, the report job reads job.conclusion from the GitHub API (line 169), which remains success because no subsequent step failed. This causes failed validations to appear as ✅ passed in the final report.
Preserve summary + fail job when build fails
- name: Build with unity-builder
uses: ./
id: build
with:
projectPath: test-project
targetPlatform: ${{ matrix.platform }}
unityVersion: ${{ matrix.unity }}
continue-on-error: true
- name: Record result
if: always()
run: |
STATUS="${{ steps.build.outcome }}"
echo "## ${{ matrix.name }} — ${{ matrix.platform }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
if [ "$STATUS" = "success" ]; then
echo "✅ **PASSED** — Compiled and built successfully" >> $GITHUB_STEP_SUMMARY
else
echo "❌ **FAILED** — Build or compilation failed" >> $GITHUB_STEP_SUMMARY
fi
echo "" >> $GITHUB_STEP_SUMMARY
echo "- Unity: ${{ matrix.unity }}" >> $GITHUB_STEP_SUMMARY
echo "- Platform: ${{ matrix.platform }}" >> $GITHUB_STEP_SUMMARY
echo "- Source: ${{ matrix.source }}" >> $GITHUB_STEP_SUMMARY
echo "- Package: \`${{ matrix.package }}\`" >> $GITHUB_STEP_SUMMARY
+
+ - name: Fail validation job when build failed
+ if: steps.build.outcome != 'success'
+        run: exit 1

🧰 Tools
🪛 actionlint (1.7.11)
[error] 126-126: shellcheck reported issue in this script: SC2086:info:10:40: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 126-126: shellcheck reported issue in this script: SC2086:info:11:46: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 126-126: shellcheck reported issue in this script: SC2086:info:12:42: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 126-126: shellcheck reported issue in this script: SC2086:info:13:48: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 126-126: shellcheck reported issue in this script: SC2086:info:2:58: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 126-126: shellcheck reported issue in this script: SC2086:info:3:12: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 126-126: shellcheck reported issue in this script: SC2086:info:5:60: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 126-126: shellcheck reported issue in this script: SC2086:info:7:56: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 126-126: shellcheck reported issue in this script: SC2086:info:9:12: Double quote to prevent globbing and word splitting
(shellcheck)
[error] 126-126: shellcheck reported issue in this script: SC2129:style:9:1: Consider using { cmd1; cmd2; } >> file instead of individual redirects
(shellcheck)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/validate-community-plugins.yml around lines 122 - 134, The
Build step currently uses continue-on-error: true which masks failures from
job.conclusion; remove continue-on-error: true from the Build step so a failing
build actually fails the job (job.conclusion) while keeping the "Record result"
step (if: always()) to append the summary using steps.build.outcome;
alternatively, if you must not fail the job immediately, add a short follow-up
step that runs when always() and exits non-zero when steps.build.outcome !=
"success" to force the job to fail so job.conclusion reflects real failures
(references: continue-on-error, "Record result" step, steps.build.outcome,
job.conclusion).
  # Verify the binary works
  if "${INSTALL_DIR}/${BINARY_NAME}" version > /dev/null 2>&1; then
    INSTALLED_VERSION=$("${INSTALL_DIR}/${BINARY_NAME}" version 2>&1 | head -1)
    info "Verified: ${INSTALLED_VERSION}"
  else
    warn "Binary downloaded but could not verify. It may still work."
  fi
Do not execute the downloaded binary before checksum verification.
Line 122 runs untrusted downloaded code before integrity is validated. This is a release-security blocker.
🔒 Safe ordering fix
- # Verify the binary works
- if "${INSTALL_DIR}/${BINARY_NAME}" version > /dev/null 2>&1; then
+ # Verify checksum before execution
+ verify_checksum
+
+ # Verify the binary works
+ if "${INSTALL_DIR}/${BINARY_NAME}" version > /dev/null 2>&1; then
INSTALLED_VERSION=$("${INSTALL_DIR}/${BINARY_NAME}" version 2>&1 | head -1)
info "Verified: ${INSTALLED_VERSION}"
else
warn "Binary downloaded but could not verify. It may still work."
fi
@@
detect_platform
get_latest_version
install
-verify_checksum

Also applies to: 193-196
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@install.sh` around lines 121 - 127, The script currently executes the
downloaded binary via "${INSTALL_DIR}/${BINARY_NAME}" to get its version before
any checksum is verified; change the flow so no downloaded binary is executed
until integrity is confirmed: perform checksum verification first (using your
existing checksum logic) and only after a successful checksum run the version
check or any execution. Update both occurrences that call
"${INSTALL_DIR}/${BINARY_NAME}" (the block using INSTALLED_VERSION and the
similar block around lines referenced 193-196) so they are moved to or gated by
the post-checksum-success path and fail fast if checksum validation fails.
| "yargs": "^18.0.0" | ||
| }, | ||
| "devDependencies": { | ||
| "@types/base-64": "^1.0.0", | ||
| "@types/jest": "^27.4.1", | ||
| "@types/node": "^17.0.23", | ||
| "@types/semver": "^7.3.9", | ||
| "@types/uuid": "^9.0.0", | ||
| "@types/yargs": "^17.0.35", | ||
| "@typescript-eslint/parser": "4.8.1", |
🧩 Analysis chain
🌐 Web query:
yargs 18.0.0 Node.js version support requirements
💡 Result:
yargs@18.0.0 declares the following Node.js version requirement in its engines.node field:
- Node.js: `^20.19.0 || ^22.12.0 || >=23` (i.e., Node 20.19+, Node 22.12+, or any 23+ release) [1][2]
Node 18.x and earlier are not supported by yargs@18.0.0 per this requirement [1][2].
🌐 Web query:
yargs 18.0.0 TypeScript types included built-in declarations
💡 Result:
yargs@18.0.0 does not ship full built-in TypeScript declarations for the main yargs import.
- Its published `package.json` has no top-level `"types"`/`"typings"` entry, and the package's `"files"` list explicitly excludes `*.d.ts`/`**/*.d.ts`. It only includes a tiny `browser.d.ts`, and only the `./browser` export maps a `"types"` entry. [1]
- As a result, TypeScript users typically still need the external typings package `@types/yargs`. [2]
So: for import yargs from "yargs" / import { Argv } from "yargs", you should expect to install @types/yargs (unless your setup is using some other source of typings). [1][2]
Sources
[1] UNPKG – yargs@18.0.0 package.json (exports/files/types info) (app.unpkg.com)
[2] jsDocs.io – yargs@18.0.0 (notes typings via @types/yargs) (jsdocs.io)
Citations:
- 1: https://app.unpkg.com/yargs%4018.0.0/files/package.json
- 2: https://www.jsdocs.io/package/yargs?utm_source=openai
🌐 Web query:
@types/yargs 17.0.35 compatible yargs 18.x
💡 Result:
No—@types/yargs@17.0.35 is the DefinitelyTyped package for the yargs v17 API line, and it’s not meant to track/guarantee compatibility with yargs 18.x (which has breaking changes) [1], [3].
For yargs 18.x, you generally should not install @types/yargs at all, because yargs ships its own .d.ts types (it publishes declaration files and exposes them via exports, e.g. browser.d.ts) [2]. If you have both installed, TypeScript can pick up the wrong types.
What to do
- Remove DT types: `npm uninstall @types/yargs` (or `pnpm remove @types/yargs`) [2]
- Keep only: `yargs` (and let it provide the types) [2]
- If you hit TS/ESM import/type issues on yargs 18, follow yargs-18 type import guidance (e.g., import types from `yargs` rather than relying on namespace patterns) [4].
Sources
- `@types/yargs` package description (DefinitelyTyped; types for yargs v17 line) [1]
- `yargs@18.0.0` package manifest showing bundled `.d.ts` / type exports [2]
- yargs release notes indicating breaking changes across major versions [3]
- Notes on yargs 18 TypeScript/type-usage pitfalls [4]
Remove @types/yargs and fix Node engine requirement for yargs 18.x compatibility.
yargs@18.0.0 requires Node.js ^20.19.0 || ^22.12.0 || >=23, which directly conflicts with the declared engine >=18.x. The package will fail to install or run on Node 18. Additionally, yargs@18 ships its own TypeScript declarations, so @types/yargs@17.0.35 is incompatible and must be removed—having both packages can cause type conflicts.
Either pin yargs to v17 and update @types/yargs to v17, or bump the engine requirement to >=20.19 || >=22.12 || >=23 and remove @types/yargs entirely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` around lines 77 - 86, The package.json currently depends on
"yargs": "^18.0.0" while still listing "@types/yargs": "^17.0.35" and an engines
field requiring Node >=18.x; update package.json to resolve this mismatch by
either (A) pinning yargs to a 17.x release (e.g., change "yargs" to a 17.x
version) and keep or update "@types/yargs" accordingly, or (B) remove the
"@types/yargs" entry entirely and bump the engines field to satisfy yargs@18
(e.g., require Node >=20.19.0 || >=22.12.0 || >=23) so the built-in TypeScript
declarations are used; target the "yargs" dependency, the "@types/yargs"
devDependency, and the "engines" field in package.json when making the change.
      // Local build caching - save
      if (buildParameters.localCacheEnabled) {
        const { LocalCacheService } = await import('./model/orchestrator/services/cache/local-cache-service');
        if (buildParameters.localCacheLibrary) {
          const projectFullPath = path.join(workspace, buildParameters.projectPath);
          await LocalCacheService.saveLibraryCache(projectFullPath, cacheRoot, cacheKey);
        }
        if (buildParameters.localCacheLfs) {
          await LocalCacheService.saveLfsCache(workspace, cacheRoot, cacheKey);
        }
      }

      // Child workspace isolation - save workspace for next run
      if (childWorkspaceConfig && childWorkspaceConfig.enabled) {
        const { ChildWorkspaceService } = await import('./model/orchestrator/services/cache/child-workspace-service');
        const projectFullPath = path.join(workspace, buildParameters.projectPath);
        const preSaveSize = ChildWorkspaceService.getWorkspaceSize(projectFullPath);
        core.info(`Child workspace size before save: ${preSaveSize}`);

        ChildWorkspaceService.saveWorkspace(projectFullPath, childWorkspaceConfig);
        core.info(`Child workspace "${buildParameters.childWorkspaceName}" saved to cache`);
      }

      // Revert overlays after job completion if configured
      if (buildParameters.syncRevertAfter && syncStrategy !== 'full') {
        core.info('[Sync] Reverting overlay changes after job completion');
        try {
          await IncrementalSyncService.revertOverlays(workspace, buildParameters.syncStatePath);
        } catch (revertError) {
          core.warning(`[Sync] Overlay revert failed: ${(revertError as Error).message}`);
        }
      }
      exitCode = await runColdBuild(buildParameters, baseImage, workspace, actionFolder);
    } else {
Critical: Build exit code is overwritten, breaking local build results.
After the local build completes and sets exitCode (lines 216-223), line 257 unconditionally calls runColdBuild() again, overwriting the actual build result. This means:
- Local builds run twice
- The original exit code from Docker/MacBuilder is discarded
- Post-build steps and failure detection use the wrong exit code
This appears to be a merge artifact where line 257 should not exist within the providerStrategy === 'local' branch.
🐛 Proposed fix to remove duplicate build call
// Revert overlays after job completion if configured
if (buildParameters.syncRevertAfter && syncStrategy !== 'full') {
core.info('[Sync] Reverting overlay changes after job completion');
try {
await IncrementalSyncService.revertOverlays(workspace, buildParameters.syncStatePath);
} catch (revertError) {
core.warning(`[Sync] Overlay revert failed: ${(revertError as Error).message}`);
}
}
- exitCode = await runColdBuild(buildParameters, baseImage, workspace, actionFolder);
 } else {

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
      // Local build caching - save
      if (buildParameters.localCacheEnabled) {
        const { LocalCacheService } = await import('./model/orchestrator/services/cache/local-cache-service');
        if (buildParameters.localCacheLibrary) {
          const projectFullPath = path.join(workspace, buildParameters.projectPath);
          await LocalCacheService.saveLibraryCache(projectFullPath, cacheRoot, cacheKey);
        }
        if (buildParameters.localCacheLfs) {
          await LocalCacheService.saveLfsCache(workspace, cacheRoot, cacheKey);
        }
      }

      // Child workspace isolation - save workspace for next run
      if (childWorkspaceConfig && childWorkspaceConfig.enabled) {
        const { ChildWorkspaceService } = await import('./model/orchestrator/services/cache/child-workspace-service');
        const projectFullPath = path.join(workspace, buildParameters.projectPath);
        const preSaveSize = ChildWorkspaceService.getWorkspaceSize(projectFullPath);
        core.info(`Child workspace size before save: ${preSaveSize}`);

        ChildWorkspaceService.saveWorkspace(projectFullPath, childWorkspaceConfig);
        core.info(`Child workspace "${buildParameters.childWorkspaceName}" saved to cache`);
      }

      // Revert overlays after job completion if configured
      if (buildParameters.syncRevertAfter && syncStrategy !== 'full') {
        core.info('[Sync] Reverting overlay changes after job completion');
        try {
          await IncrementalSyncService.revertOverlays(workspace, buildParameters.syncStatePath);
        } catch (revertError) {
          core.warning(`[Sync] Overlay revert failed: ${(revertError as Error).message}`);
        }
      }
    } else {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/index.ts` around lines 224 - 258, The local-provider branch currently
overwrites the real local build exit code by unconditionally calling
runColdBuild() after the local build flow; remove the duplicate runColdBuild()
invocation (and its assignment to exitCode) from inside the providerStrategy ===
'local' branch so that exitCode preserves the value set by the local build path
(the earlier Docker/MacBuilder code), leaving runColdBuild(runColdBuild(...))
only in the non-local branch/else where cold builds are intended; confirm
references to exitCode, runColdBuild, and the providerStrategy conditional are
updated accordingly.
    const cachedWorkspacePath = path.join(config.parentCacheRoot, config.workspaceName);

    try {
      if (!fs.existsSync(cachedWorkspacePath)) {
        OrchestratorLogger.log(`[ChildWorkspace] No cached workspace found at ${cachedWorkspacePath}, starting fresh`);

        return false;
      }

      // Verify the cached workspace has content
      const entries = fs.readdirSync(cachedWorkspacePath);
      if (entries.length === 0) {
        OrchestratorLogger.log(`[ChildWorkspace] Cached workspace at ${cachedWorkspacePath} is empty, starting fresh`);
        fs.rmSync(cachedWorkspacePath, { recursive: true, force: true });

        return false;
      }

      // If the target path already exists, remove it to make way for the move
      if (fs.existsSync(projectPath)) {
        OrchestratorLogger.log(`[ChildWorkspace] Removing existing target path: ${projectPath}`);
        fs.rmSync(projectPath, { recursive: true, force: true });
Destructive filesystem operations are performed on unsanitized paths.
workspaceName, parentCacheRoot, and projectPath are used to build paths that are recursively deleted/moved. Without normalization/containment checks, a bad value (for example .. segments or root-like targets) can delete outside the intended cache scope.
Suggested hardening
@@
static buildConfig(parameters: {
@@
}): ChildWorkspaceConfig {
+ const workspaceName = parameters.childWorkspaceName.trim();
+ if (
+ workspaceName === '' ||
+ workspaceName.includes('..') ||
+ path.isAbsolute(workspaceName) ||
+ workspaceName.includes(path.sep)
+ ) {
+ throw new Error(`Invalid childWorkspaceName: "${parameters.childWorkspaceName}"`);
+ }
+
return {
enabled: parameters.childWorkspacesEnabled,
- workspaceName: parameters.childWorkspaceName,
+ workspaceName,
parentCacheRoot: parameters.childWorkspaceCacheRoot,
preserveGitDirectory: parameters.childWorkspacePreserveGit,
separateLibraryCache: parameters.childWorkspaceSeparateLibrary,
};
 }

Also applies to: 103-135, 310-324
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/services/cache/child-workspace-service.ts` around
lines 47 - 68, The code performs destructive fs operations using unsanitized
paths (cachedWorkspacePath, projectPath) built from config.parentCacheRoot and
config.workspaceName; fix by normalizing and validating before any rmSync/move:
resolve parentRoot = path.resolve(config.parentCacheRoot), resolve target =
path.resolve(parentRoot, path.normalize(config.workspaceName)), reject
workspaceName containing path separators or traversal (e.g., ..) and ensure
path.relative(parentRoot, target) does not start with '..' and
target.startsWith(parentRoot); apply the same resolve+containment checks for
projectPath, and only call fs.rmSync or move when validation passes, logging and
throwing otherwise (use the same symbols cachedWorkspacePath, projectPath,
config.parentCacheRoot, config.workspaceName in your checks).
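A containment-check sketch along the lines the prompt above describes; the helper name and error wording are illustrative:

import * as path from 'node:path';

// Resolve workspaceName under parentCacheRoot and refuse anything that escapes it.
function resolveContainedPath(parentCacheRoot: string, workspaceName: string): string {
  const root = path.resolve(parentCacheRoot);
  const target = path.resolve(root, path.normalize(workspaceName));
  const relative = path.relative(root, target);
  if (relative === '' || relative.startsWith('..') || path.isAbsolute(relative)) {
    throw new Error(`Refusing to operate outside cache root: ${workspaceName}`);
  }
  return target; // only now safe to pass to fs.rmSync or a move
}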
      for (const group of hookGroups) {
        try {
          await OrchestratorSystem.Run(`cd "${repoPath}" && npx lefthook run ${group}`, true);
          OrchestratorLogger.log(`[GitHooks] Hook group '${group}' passed`);
Sanitize hook group names before shell execution.
Line 181 interpolates group directly into a shell command. Since hook groups are user-configurable, this is a command-injection risk.
Proposed fix
for (const group of hookGroups) {
+ if (!/^[A-Za-z0-9._-]+$/.test(group)) {
+ OrchestratorLogger.logWarning(`[GitHooks] Invalid hook group '${group}', skipping`);
+ results[group] = false;
+ continue;
+ }
try {
- await OrchestratorSystem.Run(`cd "${repoPath}" && npx lefthook run ${group}`, true);
+ await OrchestratorSystem.Run(`cd "${repoPath}" && npx lefthook run "${group}"`, true);
OrchestratorLogger.log(`[GitHooks] Hook group '${group}' passed`);
   results[group] = true

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
      for (const group of hookGroups) {
        if (!/^[A-Za-z0-9._-]+$/.test(group)) {
          OrchestratorLogger.logWarning(`[GitHooks] Invalid hook group '${group}', skipping`);
          results[group] = false;
          continue;
        }
        try {
          await OrchestratorSystem.Run(`cd "${repoPath}" && npx lefthook run "${group}"`, true);
          OrchestratorLogger.log(`[GitHooks] Hook group '${group}' passed`);
          results[group] = true
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/services/hooks/git-hooks-service.ts` around lines 179
- 182, The loop interpolates user-controlled group into a shell command, so
validate/sanitize each group before calling OrchestratorSystem.Run: ensure the
group string (from hookGroups) only contains a safe, narrow whitelist of
characters (e.g., alphanumerics, dot, underscore, dash) and reject or skip+log
any group that fails validation; do not pass raw user data into the template
string—either escape shell metacharacters or (preferably) change the call to
invoke lefthook with args instead of string interpolation; update the block
around OrchestratorSystem.Run, the group variable handling, and
OrchestratorLogger.log to reflect validation and a clear error/log when a group
is rejected.
    let runner = registry.findAvailableRunner({
      unityVersion,
      platform: request.buildTarget,
    });

    if (!runner) {
      OrchestratorLogger.log(
        `[HotRunner] No idle runner available for ${unityVersion}/${request.buildTarget}, waiting...`,
      );
      runner = await this.waitForRunner({ unityVersion, platform: request.buildTarget }, request.timeout, registry);
    }

    // Mark runner as busy
    registry.updateRunner(runner.id, {
      state: 'busy',
      currentJob: request.jobId,
    });
Make runner reservation atomic to prevent double-dispatch.
Line 31 (findAvailableRunner) and Line 44 (updateRunner) are a TOCTOU window. Concurrent dispatches can select the same idle runner before either marks it busy.
💡 Proposed fix
- let runner = registry.findAvailableRunner({
- unityVersion,
- platform: request.buildTarget,
- });
+ let runner = registry.reserveAvailableRunner(
+ { unityVersion, platform: request.buildTarget },
+ request.jobId,
+ );
@@
- registry.updateRunner(runner.id, {
- state: 'busy',
- currentJob: request.jobId,
- });
+  // reservation already marks runner busy/currentJob atomically

// In HotRunnerRegistry (new method)
reserveAvailableRunner(
requirements: { unityVersion: string; platform: string },
jobId: string,
): HotRunnerStatus | undefined {
const runner = this.findAvailableRunner(requirements);
if (!runner) return undefined;
if (runner.state !== 'idle') return undefined;
this.updateRunner(runner.id, { state: 'busy', currentJob: jobId });
return this.getRunner(runner.id);
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/services/hot-runner/hot-runner-dispatcher.ts` around
lines 31 - 47, The findAvailableRunner -> updateRunner sequence in
hot-runner-dispatcher creates a TOCTOU race; add an atomic reservation method on
HotRunnerRegistry (e.g., reserveAvailableRunner(requirements: {unityVersion,
platform}, jobId): HotRunnerStatus|undefined) that internally checks state ===
'idle', sets state:'busy' and currentJob:jobId, and returns the reserved runner;
call this new reserveAvailableRunner from hot-runner-dispatcher (including
inside the waitForRunner fallback) instead of calling findAvailableRunner then
updateRunner so no two dispatches can claim the same runner.
    const releaseTag = version === 'latest' ? 'latest' : version;
    const assetName = `elastic-git-storage_${osName}_${archName}${ext}`;

    let downloadUrl: string;
    if (releaseTag === 'latest') {
      downloadUrl = `https://github.com/${ElasticGitStorageService.REPO_OWNER}/${ElasticGitStorageService.REPO_NAME}/releases/latest/download/${assetName}`;
    } else {
      downloadUrl = `https://github.com/${ElasticGitStorageService.REPO_OWNER}/${ElasticGitStorageService.REPO_NAME}/releases/download/${releaseTag}/${assetName}`;
    }

    OrchestratorLogger.log(`[ElasticGitStorage] Downloading from ${downloadUrl}`);

    try {
      await OrchestratorSystem.Run(`curl -fsSL -o "${installPath}" "${downloadUrl}"`);
Validate version before composing shell download command.
version flows into downloadUrl and then into a shell command at Line 152. Without strict validation, malformed input can break quoting and execute unintended shell tokens.
💡 Proposed fix
static async install(version: string = 'latest'): Promise<string> {
+ if (!/^(latest|v\d+\.\d+\.\d+)$/.test(version)) {
+ throw new Error(`[ElasticGitStorage] Invalid version format: ${version}`);
+ }
@@
- const releaseTag = version === 'latest' ? 'latest' : version;
+ const releaseTag = version;🧰 Tools
🪛 ESLint
[error] 143-147: This if statement can be replaced by a ternary expression.
(unicorn/prefer-ternary)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/services/lfs/elastic-git-storage-service.ts` around
lines 139 - 153, Summary: version currently flows unchecked into downloadUrl and
then into a shell command (OrchestratorSystem.Run), allowing injection;
validate/sanitize it first. Fix: in ElasticGitStorageService, validate the
incoming version value (accept only "latest" or a strict semver/PRERELEASE
pattern via a regex, e.g. /^[0-9]+\.[0-9]+\.[0-9]+(?:[-+][A-Za-z0-9.]+)?$/ or
the literal "latest"), reject or throw for anything else; derive releaseTag only
from the validated value, then URL-encode releaseTag (encodeURIComponent) when
building downloadUrl; finally avoid embedding untrusted strings raw into a shell
by passing the properly quoted/encoded downloadUrl to OrchestratorSystem.Run or
switch to a safe exec variant that accepts args (use OrchestratorSystem.Run with
an array/arg-escaping helper if available). Reference symbols:
ElasticGitStorageService, version, releaseTag, downloadUrl,
OrchestratorSystem.Run.
    await OrchestratorSystem.Run(`git -C "${repoPath}" config lfs.customtransfer.${agentName}.path "${agentPath}"`);
    await OrchestratorSystem.Run(`git -C "${repoPath}" config lfs.customtransfer.${agentName}.args "${agentArgs}"`);
    await OrchestratorSystem.Run(`git -C "${repoPath}" config lfs.standalonetransferagent ${agentName}`);
Harden command construction before invoking Git config.
These commands interpolate untrusted string values directly into a shell command. If agentPath, agentArgs, or repoPath contains shell metacharacters, command execution can be altered.
🔐 Proposed hardening (shell-quote inputs)
+const shellQuote = (value: string): string => `'${value.replaceAll("'", `'\\''`)}'`;
+
- await OrchestratorSystem.Run(`git -C "${repoPath}" config lfs.customtransfer.${agentName}.path "${agentPath}"`);
- await OrchestratorSystem.Run(`git -C "${repoPath}" config lfs.customtransfer.${agentName}.args "${agentArgs}"`);
- await OrchestratorSystem.Run(`git -C "${repoPath}" config lfs.standalonetransferagent ${agentName}`);
+ const quotedRepoPath = shellQuote(repoPath);
+ const quotedAgentPath = shellQuote(agentPath);
+ const quotedAgentArgs = shellQuote(agentArgs);
+ const quotedAgentName = shellQuote(agentName);
+ await OrchestratorSystem.Run(
+ `git -C ${quotedRepoPath} config lfs.customtransfer.${agentName}.path ${quotedAgentPath}`,
+ );
+ await OrchestratorSystem.Run(
+ `git -C ${quotedRepoPath} config lfs.customtransfer.${agentName}.args ${quotedAgentArgs}`,
+ );
+ await OrchestratorSystem.Run(
+ `git -C ${quotedRepoPath} config lfs.standalonetransferagent ${quotedAgentName}`,
+ );

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const shellQuote = (value: string): string => `'${value.replaceAll("'", `'\\''`)}'`;

const quotedRepoPath = shellQuote(repoPath);
const quotedAgentPath = shellQuote(agentPath);
const quotedAgentArgs = shellQuote(agentArgs);
const quotedAgentName = shellQuote(agentName);
await OrchestratorSystem.Run(
  `git -C ${quotedRepoPath} config lfs.customtransfer.${agentName}.path ${quotedAgentPath}`,
);
await OrchestratorSystem.Run(
  `git -C ${quotedRepoPath} config lfs.customtransfer.${agentName}.args ${quotedAgentArgs}`,
);
await OrchestratorSystem.Run(
  `git -C ${quotedRepoPath} config lfs.standalonetransferagent ${quotedAgentName}`,
);
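As a quick sanity check of the quoting helper above, a standalone snippet showing hostile input rendered inert (the comment shows what Node prints):

// Same POSIX single-quote escaping as in the suggestion above.
const shellQuote = (value: string): string => `'${value.replaceAll("'", `'\\''`)}'`;

// Input that would break out of double quotes and inject a command...
const hostile = `"; rm -rf / #'`;

// ...becomes a single inert shell word: '"; rm -rf / #'\'''
console.log(shellQuote(hostile));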
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/services/lfs/lfs-agent-service.ts` around lines 34 -
36, Commands passed to OrchestratorSystem.Run interpolate untrusted values
(agentPath, agentArgs, repoPath, agentName) into a shell string and are
vulnerable to shell injection; update the code to construct these git config
invocations safely by either (a) modifying OrchestratorSystem.Run to accept an
argv array and call the underlying child process without a shell, then call
OrchestratorSystem.Run with explicit args (e.g. ['git','-C', repoPath,'config',
...]) using the identifiers agentPath, agentArgs, repoPath, agentName, or (b) if
keeping a single-string API, properly shell-quote/escape agentPath, agentArgs,
repoPath, and agentName using a vetted quoting utility before interpolation;
ensure you change all three calls that reference OrchestratorSystem.Run so they
no longer pass unescaped, unsanitized variables to a shell.
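If option (a) is chosen, a minimal sketch of a shell-free runner built on Node's child_process — whether OrchestratorSystem.Run can be extended this way is an assumption, since its implementation isn't part of this diff:

import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

// Runs git with an argv array and no shell, so metacharacters in repoPath,
// agentPath, or agentArgs are passed through verbatim instead of interpreted.
async function gitConfig(repoPath: string, key: string, value: string): Promise<void> {
  await execFileAsync('git', ['-C', repoPath, 'config', key, value]);
}

// Hypothetical usage mirroring the three calls above:
// await gitConfig(repoPath, `lfs.customtransfer.${agentName}.path`, agentPath);
// await gitConfig(repoPath, `lfs.customtransfer.${agentName}.args`, agentArgs);
// await gitConfig(repoPath, 'lfs.standalonetransferagent', agentName);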
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove the else branch that actively called GitHooksService.disableHooks() for every user where gitHooksEnabled was false (the default). This was a breaking change that silently modified core.hooksPath to point at an empty directory, disabling any existing git hooks (husky, lefthook, pre-commit, etc.).

When gitHooksEnabled is false (default), the action now does nothing regarding hooks — exactly matching the behavior on main before the hooks feature was added. The hooks feature only activates when users explicitly set gitHooksEnabled: true.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…eatures

Add three test files covering the two highest-priority gaps in PR #777:

1. src/index-enterprise-features.test.ts (21 tests) - Integration wiring tests for index.ts that verify conditional gating of all enterprise services (GitHooks, LocalCache, ChildWorkspace, SubmoduleProfile, LfsAgent). Tests that disabled features (default) are never invoked, enabled features call the correct service methods, and the order of operations is correct (restore before build, save after build). Also tests non-local provider strategy skips all enterprise features.
2. src/model/enterprise-inputs.test.ts (103 tests) - Input/BuildParameters wiring tests for all 20 new enterprise properties. Covers defaults, explicit values, and boolean string parsing edge cases (the #1 source of bugs: 'false' as truthy, 'TRUE' case sensitivity, '1', 'yes'). Verifies BuildParameters.create() correctly maps all Input getters.
3. src/model/orchestrator/services/submodule/submodule-profile-service.test.ts (5 new tests) - Command construction safety tests for execute(), documenting how paths, branches, and tokens are passed into git commands and verifying the expected command strings.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…support' into release/lts-2.0.0

Merges latest commits from PR #777 (git hooks fix + 129 new tests).

Resolved merge conflicts:
- dist/index.js, dist/index.js.map: rebuilt from merged source
- src/model/input.ts: fixed 5 missing closing braces and return statements from merge
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/index.ts (1)
254-254: ⚠️ Potential issue | 🔴 Critical

Critical: Build exit code is overwritten, breaking local build results.

Line 254 calls runColdBuild() after the local build has already completed at lines 213-220. This overwrites the actual build result, causing:
- The build to run twice
- The original exit code from Docker/MacBuilder to be discarded
- Post-build failure detection to use the wrong exit code
This appears to be a merge artifact that should be removed.
🐛 Proposed fix
  // Revert overlays after job completion if configured
  if (buildParameters.syncRevertAfter && syncStrategy !== 'full') {
    core.info('[Sync] Reverting overlay changes after job completion');
    try {
      await IncrementalSyncService.revertOverlays(workspace, buildParameters.syncStatePath);
    } catch (revertError) {
      core.warning(`[Sync] Overlay revert failed: ${(revertError as Error).message}`);
    }
  }

- exitCode = await runColdBuild(buildParameters, baseImage, workspace, actionFolder);
} else {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` at line 254, The code erroneously calls runColdBuild() and assigns its result to exitCode after the local build already completed, overwriting the original Docker/MacBuilder result; remove the duplicated call (or guard it) so exitCode retains the original local build return value (from the prior Docker/MacBuilder/runLocalBuild call) and only call runColdBuild() when a true cold build is intended (e.g., behind an explicit condition), ensuring symbols runColdBuild, exitCode, and the prior local build function (runDockerBuild/runMacBuilder/runLocalBuild) are used to locate and fix the overwrite.
🧹 Nitpick comments (8)
delete-me-update-all-integration-branches.ps1 (2)
109-109: Unused variable $result.

Same pattern as above — capturing merge output only to check exit code.
♻️ Suggested fix
- $result = git merge $remoteBranch --no-edit 2>&1
+ $null = git merge $remoteBranch --no-edit 2>&1

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@delete-me-update-all-integration-branches.ps1` at line 109, The code assigns the merge output to an unused variable ($result) when running "git merge $remoteBranch --no-edit 2>&1"; remove the unused assignment and either call git merge directly and check $LASTEXITCODE, or pipe the output to Out-Null if you want to discard it. Update the line that creates $result so it no longer defines an unused variable (or rename/use it if you actually need the output), referencing the git merge invocation and the $remoteBranch variable.
94-94: Unused variable $refExists.

The variable captures git output only to check $LASTEXITCODE. Use a $null assignment to suppress the warning and make intent clear.

♻️ Suggested fix
- $refExists = git rev-parse --verify $remoteBranch 2>$null
+ $null = git rev-parse --verify $remoteBranch 2>$null

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@delete-me-update-all-integration-branches.ps1` at line 94, The variable $refExists is assigned the output of git rev-parse only to inspect $LASTEXITCODE and is never used; replace the assignment with $null (i.e., use $null = git rev-parse --verify $remoteBranch 2>$null) so the command output is discarded clearly while preserving the subsequent $LASTEXITCODE check in the script.

src/model/enterprise-inputs.test.ts (1)
1-14: Minor documentation inconsistency: 20 vs 19 properties.

The header states "20 new input properties" but the enumerated list contains 7 boolean + 12 string = 19 properties. Consider updating either the count or the list for accuracy.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/enterprise-inputs.test.ts` around lines 1 - 14, The file header comment in enterprise-inputs.test.ts incorrectly states "20 new input properties" while the enumerated list contains 19 items; either update the numeric count to "19" or add the missing property to the list so the header and list match; locate the top-of-file comment block (the "Tests for enterprise input properties" header) and make the count consistent with the actual properties enumerated (or update the enumerated list to include the intended 20th property).

src/model/input.ts (1)
587-589: Consider validating transport value before casting.

The type assertion accepts any string without validation. An invalid value like 'http' would silently pass through as a union member.

Optional: Add validation
  static get hotRunnerTransport(): 'websocket' | 'grpc' | 'named-pipe' {
-   return (Input.getInput('hotRunnerTransport') ?? 'websocket') as 'websocket' | 'grpc' | 'named-pipe';
+   const input = Input.getInput('hotRunnerTransport') ?? 'websocket';
+   const validTransports = ['websocket', 'grpc', 'named-pipe'] as const;
+   if (!validTransports.includes(input as typeof validTransports[number])) {
+     return 'websocket';
+   }
+   return input as 'websocket' | 'grpc' | 'named-pipe';
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/input.ts` around lines 587 - 589, The getter hotRunnerTransport currently casts Input.getInput('hotRunnerTransport') directly to the transport union, allowing invalid strings to slip through; update Input.hotRunnerTransport to validate the retrieved string against the allowed set ('websocket','grpc','named-pipe') before returning, e.g., read value = Input.getInput('hotRunnerTransport'), check if value is one of the three allowed values, return that value if valid, otherwise return the default 'websocket' (and optionally log or throw); this ensures the union type is enforced at runtime without unsafe casting.

src/index-enterprise-features.test.ts (2)
36-43: Test mock signature mismatch: LocalCacheService.generateCacheKey arguments.

The mock at lines 36-43 doesn't validate the method signature, but the test assertions at lines 269-321 only check that methods were called without verifying arguments.

Looking at index.ts (lines 181-185), generateCacheKey is called with (targetPlatform, editorVersion, branch). However, the context snippet from the older code shows it was previously called with (localCacheEnabled, localCacheLibrary, localCacheLfs). If the service interface changed, the test should verify the new argument signature to catch regressions.

Consider adding argument assertions to validate the correct parameters are passed:
✨ Proposed enhancement to verify arguments
 it('should call restore and save operations when localCacheEnabled is true', async () => {
   await runIndex({
     localCacheEnabled: true,
     localCacheLibrary: true,
     localCacheLfs: true,
   });

   expect(mockLocalCacheService.resolveCacheRoot).toHaveBeenCalled();
-  expect(mockLocalCacheService.generateCacheKey).toHaveBeenCalled();
+  expect(mockLocalCacheService.generateCacheKey).toHaveBeenCalledWith(
+    'StandaloneLinux64', // targetPlatform from createMockBuildParameters
+    '2021.3.1f1', // editorVersion from createMockBuildParameters
+    'main', // branch from createMockBuildParameters
+  );
   expect(mockLocalCacheService.restoreLibraryCache).toHaveBeenCalled();

Also applies to: 269-321
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index-enterprise-features.test.ts` around lines 36 - 43, The test mock for LocalCacheService.generateCacheKey doesn't match the current production call signature and the test lacks argument assertions; update the mock in src/index-enterprise-features.test.ts so generateCacheKey is defined as a jest.fn() that accepts (targetPlatform, editorVersion, branch) and then add assertions (e.g., expect(mockLocalCacheService.generateCacheKey).toHaveBeenCalledWith(...)) in the existing assertion block (around the previous lines 269-321) to verify the exact parameters passed by the code path in index.ts where generateCacheKey is invoked; reference the LocalCacheService.generateCacheKey symbol and the code path in index.ts that calls it to ensure the test will catch signature regressions.
190-204: Flaky test pattern: hardcoded timeout for async settling.

Using setTimeout(resolve, 100) to wait for async operations is inherently flaky. On slower CI machines or under load, 100ms may not be sufficient, leading to intermittent test failures. Conversely, it unnecessarily slows down tests when operations complete faster.

Consider using setImmediate with proper promise chaining, or leverage Jest's flushPromises pattern:

♻️ Proposed fix using flushPromises
+// Add at the top of the file or in a test utility
+const flushPromises = () => new Promise(setImmediate);
+
 async function runIndex(overrides: EnterpriseBuildParametersOverrides = {}): Promise<void> {
   mockedBuildParametersCreate.mockResolvedValue(createMockBuildParameters(overrides));

-  return new Promise<void>((resolve) => {
-    jest.isolateModules(() => {
-      require('./index');
-
-      // runMain() is async; give it a tick to complete
-      // We use setImmediate to ensure all microtasks from the dynamic imports resolve
-    });
-
-    // Allow all promises and microtasks to settle
-    setTimeout(resolve, 100);
-  });
+  jest.isolateModules(() => {
+    require('./index');
+  });
+
+  // Allow all promises and microtasks to settle
+  await flushPromises();
+  await flushPromises(); // Double flush for nested async
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index-enterprise-features.test.ts` around lines 190 - 204, The test uses a flaky fixed delay (setTimeout(resolve, 100)) inside runIndex; replace that wait with a deterministic microtask/macro-task flush so async work from jest.isolateModules()/require('./index') settles reliably. Update runIndex to await a Promise-based flush (e.g., await new Promise(res => setImmediate(res)) or use a flushPromises helper that returns Promise.resolve().then(...)) instead of setTimeout; keep mockedBuildParametersCreate and createMockBuildParameters setup and call jest.isolateModules/require('./index') as-is so the test no longer depends on a hardcoded 100ms delay.

src/index.ts (2)
318-325: ESLint: Rename variable e to something more descriptive.

The unicorn/prevent-abbreviations rule flags the single-letter variable. Consider using entry for clarity:

♻️ Proposed fix
 if (!uploadResult.success) {
   core.warning(
     `Artifact upload completed with errors: ${uploadResult.entries
-      .filter((e) => !e.success)
-      .map((e) => `${e.type}: ${e.error}`)
+      .filter((entry) => !entry.success)
+      .map((entry) => `${entry.type}: ${entry.error}`)
       .join('; ')}`,
   );
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` around lines 318 - 325, The loop uses a single-letter lambda parameter `e` in the uploadResult.entries filter/map chain which triggers the unicorn/prevent-abbreviations rule; rename `e` to a descriptive name like `entry` in both the .filter and .map callbacks (the block around the `if (!uploadResult.success)` check that constructs the warning message), and update all references inside the template string to use `entry.type` and `entry.error` so the code remains functionally identical but passes linting.
40-42: ESLint: Replace Array#reduce() with a loop or alternative.

The unicorn/no-array-reduce rule flags this pattern. Consider using a simple loop or filter().length for clarity:

♻️ Proposed fix
- const totalFailed = results.reduce((sum, r) => sum + r.failed, 0);
+ const totalFailed = results.filter((r) => r.failed > 0).reduce((sum, r) => sum + r.failed, 0);

Or using a loop:
- const totalFailed = results.reduce((sum, r) => sum + r.failed, 0);
+ let totalFailed = 0;
+ for (const r of results) {
+   totalFailed += r.failed;
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` around lines 40 - 42, Replace the Array.prototype.reduce usage that computes totalFailed with a clearer alternative to satisfy unicorn/no-array-reduce: iterate over results with a simple for/of loop accumulating r.failed into totalFailed or compute totalFailed as results.filter(r => r.failed > 0).length (or sum r.failed via a for loop) and then use core.setFailed as before; update the variable name totalFailed and the surrounding logic in the same block that references results so behavior is unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/build-tests-mac.yml:
- Line 15: The job-level continue-on-error: true makes every matrix variant
non-blocking; change it to scope tolerance to only the experimental matrix
entries by replacing the job-level boolean with an expression like
continue-on-error: ${{ matrix.experimental }} (or remove the job-level key and
add continue-on-error under the matrix configuration) so only matrix items
marked experimental are allowed to fail; update the workflow so the
"continue-on-error" YAML key is bound to the matrix field rather than applied
globally.
In `@src/model/enterprise-inputs.test.ts`:
- Around line 361-369: The afterEach block currently clears mocks but does not
undo environment changes, so add cleanup for process.env.UNITY_LICENSE there;
update the afterEach (which calls jest.clearAllMocks and jest.restoreAllMocks)
to also delete or unset process.env.UNITY_LICENSE (or restore it to its previous
value captured in beforeEach) so the environment variable set in beforeEach no
longer leaks to other tests.
---
Duplicate comments:
In `@src/index.ts`:
- Line 254: The code erroneously calls runColdBuild() and assigns its result to
exitCode after the local build already completed, overwriting the original
Docker/MacBuilder result; remove the duplicated call (or guard it) so exitCode
retains the original local build return value (from the prior
Docker/MacBuilder/runLocalBuild call) and only call runColdBuild() when a true
cold build is intended (e.g., behind an explicit condition), ensuring symbols
runColdBuild, exitCode, and the prior local build function
(runDockerBuild/runMacBuilder/runLocalBuild) are used to locate and fix the
overwrite.
---
Nitpick comments:
In `@delete-me-update-all-integration-branches.ps1`:
- Line 109: The code assigns the merge output to an unused variable ($result)
when running "git merge $remoteBranch --no-edit 2>&1"; remove the unused
assignment and either call git merge directly and check $LASTEXITCODE, or pipe
the output to Out-Null if you want to discard it. Update the line that creates
$result so it no longer defines an unused variable (or rename/use it if you
actually need the output), referencing the git merge invocation and the
$remoteBranch variable.
- Line 94: The variable $refExists is assigned the output of git rev-parse only
to inspect $LASTEXITCODE and is never used; replace the assignment with $null
(i.e., use $null = git rev-parse --verify $remoteBranch 2>$null) so the command
output is discarded clearly while preserving the subsequent $LASTEXITCODE check
in the script.
In `@src/index-enterprise-features.test.ts`:
- Around line 36-43: The test mock for LocalCacheService.generateCacheKey
doesn't match the current production call signature and the test lacks argument
assertions; update the mock in src/index-enterprise-features.test.ts so
generateCacheKey is defined as a jest.fn() that accepts (targetPlatform,
editorVersion, branch) and then add assertions (e.g.,
expect(mockLocalCacheService.generateCacheKey).toHaveBeenCalledWith(...)) in the
existing assertion block (around the previous lines 269-321) to verify the exact
parameters passed by the code path in index.ts where generateCacheKey is
invoked; reference the LocalCacheService.generateCacheKey symbol and the code
path in index.ts that calls it to ensure the test will catch signature
regressions.
- Around line 190-204: The test uses a flaky fixed delay (setTimeout(resolve,
100)) inside runIndex; replace that wait with a deterministic
microtask/macro-task flush so async work from
jest.isolateModules()/require('./index') settles reliably. Update runIndex to
await a Promise-based flush (e.g., await new Promise(res => setImmediate(res))
or use a flushPromises helper that returns Promise.resolve().then(...)) instead
of setTimeout; keep mockedBuildParametersCreate and createMockBuildParameters
setup and call jest.isolateModules/require('./index') as-is so the test no
longer depends on a hardcoded 100ms delay.
In `@src/index.ts`:
- Around line 318-325: The loop uses a single-letter lambda parameter `e` in the
uploadResult.entries filter/map chain which triggers the
unicorn/prevent-abbreviations rule; rename `e` to a descriptive name like
`entry` in both the .filter and .map callbacks (the block around the `if
(!uploadResult.success)` check that constructs the warning message), and update
all references inside the template string to use `entry.type` and `entry.error`
so the code remains functionally identical but passes linting.
- Around line 40-42: Replace the Array.prototype.reduce usage that computes
totalFailed with a clearer alternative to satisfy unicorn/no-array-reduce:
iterate over results with a simple for/of loop accumulating r.failed into
totalFailed or compute totalFailed as results.filter(r => r.failed > 0).length
(or sum r.failed via a for loop) and then use core.setFailed as before; update
the variable name totalFailed and the surrounding logic in the same block that
references results so behavior is unchanged.
In `@src/model/enterprise-inputs.test.ts`:
- Around line 1-14: The file header comment in enterprise-inputs.test.ts
incorrectly states "20 new input properties" while the enumerated list contains
19 items; either update the numeric count to "19" or add the missing property to
the list so the header and list match; locate the top-of-file comment block (the
"Tests for enterprise input properties" header) and make the count consistent
with the actual properties enumerated (or update the enumerated list to include
the intended 20th property).
In `@src/model/input.ts`:
- Around line 587-589: The getter hotRunnerTransport currently casts
Input.getInput('hotRunnerTransport') directly to the transport union, allowing
invalid strings to slip through; update Input.hotRunnerTransport to validate the
retrieved string against the allowed set ('websocket','grpc','named-pipe')
before returning, e.g., read value = Input.getInput('hotRunnerTransport'), check
if value is one of the three allowed values, return that value if valid,
otherwise return the default 'websocket' (and optionally log or throw); this
ensures the union type is enforced at runtime without unsafe casting.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: a3e3a918-6bc7-42ff-a6ca-52b081df8303
⛔ Files ignored due to path filters (2)
- dist/index.js is excluded by !**/dist/**
- dist/index.js.map is excluded by !**/dist/**, !**/*.map
📒 Files selected for processing (7)
- .github/workflows/build-tests-mac.yml
- delete-me-update-all-integration-branches.ps1
- src/index-enterprise-features.test.ts
- src/index.ts
- src/model/enterprise-inputs.test.ts
- src/model/input.ts
- src/model/orchestrator/services/submodule/submodule-profile-service.test.ts
buildForAllPlatformsMacOS:
  name: ${{ matrix.targetPlatform }} on ${{ matrix.unityVersion }}
  runs-on: macos-latest
  continue-on-error: true
🧩 Analysis chain
🌐 Web query:
GitHub Actions matrix job continue-on-error behavior official documentation
💡 Result:
In matrix jobs, jobs.<job_id>.continue-on-error is evaluated per matrix-generated job, so you can allow only certain matrix combinations (for example, “experimental”) to fail without failing the overall workflow run by setting it to an expression like continue-on-error: ${{ matrix.experimental }}. [1]
Related behavior to be aware of:
- jobs.<job_id>.strategy.fail-fast applies to the entire matrix (defaults to true): if any matrix job fails, GitHub cancels the other in-progress/queued matrix jobs. [1]
- continue-on-error applies to a single job: if that specific matrix job fails with continue-on-error: true, other matrix jobs keep running, and the workflow run can still pass despite that job's failure. [1]
- You can use fail-fast and continue-on-error together to get "stop the matrix on real failures, but tolerate experimental ones." [1], [2]
Sources: GitHub Actions docs on workflow syntax (fail-fast, continue-on-error) and matrix failure handling. [1], [2]
Don't make the entire macOS matrix non-blocking.
Setting continue-on-error: true at the job level applies to every matrix-generated job, allowing the workflow to pass even when supported macOS/iOS build variants fail. GitHub Actions evaluates this setting per matrix instance, and the official documentation recommends scoping tolerance to specific matrix combinations using a matrix field expression like continue-on-error: ${{ matrix.experimental }}.
Suggested scope reduction
- continue-on-error: true
+ continue-on-error: ${{ matrix.allowFailure || false }}
strategy:
fail-fast: false
matrix:
projectPath:
- test-project
@@
include:
- unityVersion: 6000.0.36f1
targetPlatform: StandaloneOSX
+ allowFailure: true
- unityVersion: 6000.0.36f1
targetPlatform: StandaloneOSX
buildProfile: 'Assets/Settings/Build Profiles/Sample macOS Build Profile.asset'
+ allowFailure: true🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build-tests-mac.yml at line 15, The job-level
continue-on-error: true makes every matrix variant non-blocking; change it to
scope tolerance to only the experimental matrix entries by replacing the
job-level boolean with an expression like continue-on-error: ${{
matrix.experimental }} (or remove the job-level key and add continue-on-error
under the matrix configuration) so only matrix items marked experimental are
allowed to fail; update the workflow so the "continue-on-error" YAML key is
bound to the matrix field rather than applied globally.
beforeEach(() => {
  jest.spyOn(Versioning, 'determineBuildVersion').mockImplementation(async () => '1.3.37');
  process.env.UNITY_LICENSE = testLicense;
});

afterEach(() => {
  jest.clearAllMocks();
  jest.restoreAllMocks();
});
Clean up process.env.UNITY_LICENSE in afterEach to avoid test pollution.
The beforeEach sets process.env.UNITY_LICENSE but the afterEach doesn't remove it. This can leak state to subsequent tests running in the same Jest worker.
🧹 Proposed fix
afterEach(() => {
+ delete process.env.UNITY_LICENSE;
jest.clearAllMocks();
jest.restoreAllMocks();
});

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
beforeEach(() => {
  jest.spyOn(Versioning, 'determineBuildVersion').mockImplementation(async () => '1.3.37');
  process.env.UNITY_LICENSE = testLicense;
});

afterEach(() => {
  delete process.env.UNITY_LICENSE;
  jest.clearAllMocks();
  jest.restoreAllMocks();
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/model/enterprise-inputs.test.ts` around lines 361 - 369, The afterEach
block currently clears mocks but does not undo environment changes, so add
cleanup for process.env.UNITY_LICENSE there; update the afterEach (which calls
jest.clearAllMocks and jest.restoreAllMocks) to also delete or unset
process.env.UNITY_LICENSE (or restore it to its previous value captured in
beforeEach) so the environment variable set in beforeEach no longer leaks to
other tests.
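A sketch of the capture-and-restore variant the prompt mentions, for suites where UNITY_LICENSE may already be set by the environment; testLicense is assumed to come from the surrounding test file:

// Capture the pre-test value so an externally set UNITY_LICENSE survives the suite.
let previousUnityLicense: string | undefined;

beforeEach(() => {
  previousUnityLicense = process.env.UNITY_LICENSE;
  process.env.UNITY_LICENSE = testLicense;
});

afterEach(() => {
  // Restore the captured value, or remove the variable if it wasn't set before.
  if (previousUnityLicense === undefined) {
    delete process.env.UNITY_LICENSE;
  } else {
    process.env.UNITY_LICENSE = previousUnityLicense;
  }
  jest.clearAllMocks();
  jest.restoreAllMocks();
});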
Use the Checks API to flip failed macOS build conclusions to neutral (gray dash) so unstable builds don't show red X marks on PRs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
♻️ Duplicate comments (1)
.github/workflows/build-tests-mac.yml (1)
18-18: ⚠️ Potential issue | 🟠 Major

Don't make the entire macOS matrix non-blocking.

Line 18 still applies continue-on-error: true to every matrix-expanded job. That lets supported StandaloneOSX and iOS variants fail without failing the workflow, not just the unstable Unity 6000 rows.

🎯 Scope tolerance to the intended matrix rows
- continue-on-error: true
+ continue-on-error: ${{ matrix.allowFailure || false }}
  strategy:
    fail-fast: false
    matrix:
@@
      include:
        - unityVersion: 6000.0.36f1
          targetPlatform: StandaloneOSX
+         allowFailure: true
        - unityVersion: 6000.0.36f1
          targetPlatform: StandaloneOSX
          buildProfile: 'Assets/Settings/Build Profiles/Sample macOS Build Profile.asset'
+         allowFailure: true

GitHub Actions matrix job continue-on-error behavior official documentation

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build-tests-mac.yml at line 18, The workflow currently sets continue-on-error: true globally for the macOS matrix, which makes all matrix rows non-blocking; change this so only the unstable Unity 6000-rows row is non-blocking by removing the top-level continue-on-error and instead wire continue-on-error to a per-matrix flag: update the job to use continue-on-error: ${{ matrix.continue_on_error }} and add continue_on_error: true only to the specific matrix include entry for the unstable Unity/6000-rows row inside strategy.matrix (leave other matrix rows without that flag or with continue_on_error: false).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/build-tests-mac.yml:
- Around line 11-12: The workflow-level permissions only set "checks: write"
which leaves repo contents inaccessible; update the permissions block in the
workflow to include "contents: read" alongside "checks: write" so
actions/checkout@v4 can read the repository (i.e., change the permissions stanza
to include contents: read).
- Around line 102-117: The current loop neutralizes any matching check run for
the SHA across all workflows; restrict it to only this workflow by adding a
guard that compares the check run's workflow to the current workflow (use
context.workflow) before neutralizing. In the loop that iterates over
checkRuns.check_runs (and before calling github.rest.checks.update), add a
condition that skips runs where run.check_suite?.workflow (or
run.check_suite?.name if workflow is not present) !== context.workflow so only
check runs belonging to this workflow are updated.
---
Duplicate comments:
In @.github/workflows/build-tests-mac.yml:
- Line 18: The workflow currently sets continue-on-error: true globally for the
macOS matrix, which makes all matrix rows non-blocking; change this so only the
unstable Unity 6000-rows row is non-blocking by removing the top-level
continue-on-error and instead wire continue-on-error to a per-matrix flag:
update the job to use continue-on-error: ${{ matrix.continue_on_error }} and add
continue_on_error: true only to the specific matrix include entry for the
unstable Unity/6000-rows row inside strategy.matrix (leave other matrix rows
without that flag or with continue_on_error: false).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: e69c99a1-af28-41b6-9ce6-08d36990a379
📒 Files selected for processing (1)
.github/workflows/build-tests-mac.yml
permissions:
  checks: write
🧩 Analysis chain
🌐 Web query:
GitHub Actions workflow permissions unspecified permissions set to none and actions/checkout v4 required contents read permission
💡 Result:
That error happens when your workflow (or a reusable-workflow caller) has a permissions: block that does not include contents: read. In GitHub Actions, any permission omitted from permissions: is set to none, so actions/checkout@v4 can’t read the repo contents to clone it. [1], [2]
Fix (minimum needed for actions/checkout@v4)
Set contents: read at the workflow or job level:
permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

GitHub explicitly recommends contents: read for actions/checkout. [3]
Common gotcha
If you have something like:
permissions:
  id-token: write

then contents becomes none (because it's unspecified), and checkout fails. Add contents: read alongside your other scopes. [1], [2]
If you need to push back to the repo
Use:
permissions:
  contents: write

(Otherwise keep read for least privilege.)
Sources:
[1] GitHub Changelog – “Control permissions for GITHUB_TOKEN” (unspecified scopes become none)
[2] DEV Community – permissions key makes unspecified permissions “no access”
[3] actions/checkout README – recommended permissions: contents: read for checkout
Add contents: read to workflow permissions.
The workflow-level permissions block (lines 11-12) sets only checks: write, which implicitly sets all other permissions to none. This prevents actions/checkout@v4 (lines 43-45) from reading repository contents, causing checkout and LFS operations to fail.
Fix
  permissions:
+   contents: read
    checks: write

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build-tests-mac.yml around lines 11 - 12, The
workflow-level permissions only set "checks: write" which leaves repo contents
inaccessible; update the permissions block in the workflow to include "contents:
read" alongside "checks: write" so actions/checkout@v4 can read the repository
(i.e., change the permissions stanza to include contents: read).
const { data: checkRuns } = await github.rest.checks.listForRef({
  owner: context.repo.owner,
  repo: context.repo.repo,
  ref: context.sha,
  per_page: 100,
});

for (const run of checkRuns.check_runs) {
  if (run.conclusion !== 'failure' || run.app?.slug !== 'github-actions') continue;
  if (!run.name.includes('StandaloneOSX') && !run.name.includes('iOS')) continue;

  await github.rest.checks.update({
    owner: context.repo.owner,
    repo: context.repo.repo,
    check_run_id: run.id,
    conclusion: 'neutral',
🧩 Analysis chain
🌐 Web query:
GitHub REST API checks listForRef returns all check runs for a commit ref and GitHub Actions check run name is the job name
💡 Result:
- checks.listForRef / "List check runs for a Git reference" (GET /repos/{owner}/{repo}/commits/{ref}/check-runs) lists check runs for the commit ref (where ref can be a SHA, branch, or tag). It supports:
  - check_name to return only check runs whose name matches
  - filter=latest (default) vs filter=all (to include older runs)
  - a practical cap: if there are >1000 check suites on the ref, the response is limited to check runs from the 1000 most recent check suites (and you must page/iterate via check suites to go beyond that). [1]
- For GitHub Actions, each workflow job publishes a check run. [2] The check run "name" corresponds to the job's displayed name (jobs.<job_id>.name). [3][4]
  - So if you set jobs.build.name: Build (ubuntu), you should expect the check run name to be Build (ubuntu) and you can filter it via check_name=Build%20(ubuntu). [1][3][4]
Sources:
[1] GitHub Docs — REST API: Check runs (“List check runs for a Git reference”) https://docs.github.com/en/rest/checks/runs
[2] GitHub Marketplace — “GitHub Checks” action (notes that each Actions job already creates a check run) https://github.com/marketplace/actions/github-checks
[3] GitHub Docs — Workflow syntax: jobs.<job_id>.name (job name displayed in the UI) https://docs.github.com/en/actions/reference/workflows-and-actions/workflow-syntax
[4] GitHub Marketplace — “Wait on check” action (states check name follows jobs.<job_id>.name) https://github.com/marketplace/actions/wait-on-check
🏁 Script executed:
#!/bin/bash
# Find and examine both workflow files
git ls-files | grep -E "(build-tests-mac|build-tests-ubuntu)"

Repository: game-ci/unity-builder
Length of output: 143
🏁 Script executed:
#!/bin/bash
# Examine the ubuntu workflow to see if it has StandaloneOSX or iOS in matrix names
head -40 .github/workflows/build-tests-ubuntu.yml

Repository: game-ci/unity-builder
Length of output: 3068
🏁 Script executed:
#!/bin/bash
# Get the matrix configuration from ubuntu workflow
sed -n '14,50p' .github/workflows/build-tests-ubuntu.yml

Repository: game-ci/unity-builder
Length of output: 3011
🏁 Script executed:
#!/bin/bash
# Use grep to find the matrix targetPlatform values
rg -A 20 "targetPlatform:" .github/workflows/build-tests-ubuntu.yml

Repository: game-ci/unity-builder
Length of output: 4925
🏁 Script executed:
#!/bin/bash
# Now check the mac workflow to see its job naming and the check update logic
rg -B 5 -A 15 "targetPlatform:" .github/workflows/build-tests-mac.yml | head -60

Repository: game-ci/unity-builder
Length of output: 1889
🏁 Script executed:
#!/bin/bash
# Find the job name in the mac workflow
rg -B 5 "buildForAllPlatformsMac|name:" .github/workflows/build-tests-mac.yml | head -40

Repository: game-ci/unity-builder
Length of output: 851
🏁 Script executed:
#!/bin/bash
# Now let's see the exact check update logic at lines 102-117
sed -n '100,120p' .github/workflows/build-tests-mac.yml

Repository: game-ci/unity-builder
Length of output: 964
Scope the neutralization to this workflow's check runs.
checks.listForRef returns every check run for the SHA across all workflows. The filter at lines 110-111 checks only for StandaloneOSX / iOS in the check run name, so it will also match failures from .github/workflows/build-tests-ubuntu.yml, which has both StandaloneOSX and iOS in its matrix targetPlatform values. This will cause unrelated Ubuntu job failures to be marked as neutral.
🔎 One low-friction way to make the filter safe
- name: ${{ matrix.targetPlatform }} on ${{ matrix.unityVersion }}
+ name: ${{ matrix.targetPlatform }} on MacOS (${{ matrix.unityVersion }})
@@
- if (!run.name.includes('StandaloneOSX') && !run.name.includes('iOS')) continue;
+ if (!run.name.includes(' on MacOS ')) continue;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build-tests-mac.yml around lines 102 - 117, The current
loop neutralizes any matching check run for the SHA across all workflows;
restrict it to only this workflow by adding a guard that compares the check
run's workflow to the current workflow (use context.workflow) before
neutralizing. In the loop that iterates over checkRuns.check_runs (and before
calling github.rest.checks.update), add a condition that skips runs where
run.check_suite?.workflow (or run.check_suite?.name if workflow is not present)
!== context.workflow so only check runs belonging to this workflow are updated.
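For illustration, the name-based scoping from the earlier diff folded into the full github-script step body — a sketch assuming the job rename to "... on MacOS (...)" is applied; github and context are the objects actions/github-script injects:

// Neutralize failed macOS check runs from THIS workflow only, identified by
// the ' on MacOS ' marker added to the job name above.
const { data: checkRuns } = await github.rest.checks.listForRef({
  owner: context.repo.owner,
  repo: context.repo.repo,
  ref: context.sha,
  per_page: 100,
});

for (const run of checkRuns.check_runs) {
  if (run.conclusion !== 'failure' || run.app?.slug !== 'github-actions') continue;
  if (!run.name.includes(' on MacOS ')) continue; // skip other workflows' jobs

  await github.rest.checks.update({
    owner: context.repo.owner,
    repo: context.repo.repo,
    check_run_id: run.id,
    conclusion: 'neutral',
  });
}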

LTS 2.0.0 — Combined Release
Integration branch combining ALL open feature PRs (infrastructure + next-gen) for the LTS 2.0.0 release.
Infrastructure PRs
Next-Gen PRs
Related Issues
Tracks #794 and #810
Merge Conflict Notes
All conflicts were additive — multiple branches adding properties/inputs to shared files (action.yml, src/index.ts, src/model/build-parameters.ts, src/model/input.ts, src/model/orchestrator/options/orchestrator-options.ts, src/model/orchestrator/providers/provider-loader.ts). Resolved by keeping all additions from each branch. The dist/ build artifacts were accepted from the latest merge — will need a yarn build to produce the final combined dist.

Changelog
New Providers: GCP Cloud Run, Azure ACI, Remote PowerShell, GitHub Actions dispatch, GitLab CI dispatch, Ansible, CLI protocol
Provider Features: Load balancing, fallback strategies, runner availability checks, init timeouts
Security: Secure git authentication via http.extraHeader (no more plaintext URL tokens)
Secrets: Premade secret sources (AWS, GCP, Azure, HashiCorp Vault, env, custom YAML)
Caching: Local filesystem cache (Library, LFS), child workspace isolation
LFS: Custom transfer agent support (elastic-git-storage), configurable storage paths
Git: Submodule profile initialization, git hooks management, integrity checks
Build: Reserved filename detection, archival support, reliability improvements
Testing: Test workflow engine with taxonomy filters, further unit test coverage
Next-Gen: Hot runner protocol, generic artifact system, incremental sync, community plugin validation, official CLI
Update Script
Run .\delete-me-update-this-integration-branch.ps1 to pull latest changes from all component branches.
Run .\delete-me-update-all-integration-branches.ps1 to update both integration branches at once.

Summary by CodeRabbit
New Features
Documentation
Chores