Add stdio MCP tool support to python-interpreter #10
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
Walkthrough

Adds stdio-based MCP transport support alongside HTTP, refactors transport models into a discriminated union (HttpTransport / StdioTransport), updates MCPClient and connection-building to accept transport objects, expands tests and fixtures for stdio, introduces full and slim Docker image variants and CI inputs, and bumps package versions.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant AFM as AFM/MCP Definition
    participant Factory as MCPClient.from_mcp_server
    participant MCP as MCPClient
    participant Builder as _build_connection_config
    participant HTTP as StreamableHttpConnection
    participant STDIO as StdioConnection
    AFM->>Factory: provide server with transport (http|stdio)
    Factory->>MCP: MCPClient(name, transport, tool_filter)
    MCP->>MCP: inspect transport.type
    alt transport == "http"
        MCP->>Builder: build http config (url, authentication)
        Builder->>HTTP: create StreamableHttpConnection
        Builder-->>MCP: return HTTP connection
    else transport == "stdio"
        MCP->>Builder: build stdio config (command, args, env)
        Builder->>STDIO: create StdioConnection
        Builder-->>MCP: return STDIO connection
    end
    MCP-->>AFM: client ready with transport-backed connection
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Force-pushed from e1706e7 to 2b6f075
Force-pushed from 2b6f075 to ef6fcc3
Actionable comments posted: 1
🧹 Nitpick comments (4)
python-interpreter/packages/afm-core/src/afm/cli.py (1)
236-241: `else` branch implicitly assumes `StdioTransport` — fragile if a third transport type is added.

`transport.command` is accessed without an explicit `isinstance(transport, StdioTransport)` check. Static type-checkers won't narrow the type in the `else` branch, and adding any future transport type without a `command` attribute will cause an `AttributeError` at runtime. Consider making the dispatch exhaustive:
♻️ Proposed fix — explicit isinstance narrowing
```diff
-        transport = server.transport
-        if isinstance(transport, HttpTransport):
-            transport_info = transport.url
-        else:
-            transport_info = transport.command
+        from .models import StdioTransport  # already imported if added to the import block above
+        transport = server.transport
+        if isinstance(transport, HttpTransport):
+            transport_info = transport.url
+        elif isinstance(transport, StdioTransport):
+            transport_info = transport.command
+        else:
+            transport_info = repr(transport)
```

Or import `StdioTransport` at the top of the file alongside `HttpTransport`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python-interpreter/packages/afm-core/src/afm/cli.py` around lines 236-241: the code assumes server.transport is either HttpTransport or StdioTransport but only checks for HttpTransport; update the dispatch in the block handling server.transport (where transport = server.transport and lines.append is called) to explicitly isinstance-check for StdioTransport as well (import StdioTransport alongside HttpTransport), use transport.command only when isinstance(transport, StdioTransport), and add an explicit fallback (raise TypeError or log an unexpected transport type) so the branch is exhaustive and won't access .command on unknown transport types.

python-interpreter/Dockerfile (2)
36-39: Debug `echo` in a production image layer.

Line 38's `echo "Full variant: nodejs, npm, git installed"` adds noise to build logs and has no operational value. Consider removing it.

🔧 Proposed fix
```diff
 RUN if [ "$VARIANT" = "full" ]; then \
-        apk add --no-cache nodejs npm git && \
-        echo "Full variant: nodejs, npm, git installed"; \
+        apk add --no-cache nodejs npm git; \
     fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python-interpreter/Dockerfile` around lines 36 - 39, Remove the noisy build-time echo in the conditional RUN that checks VARIANT; locate the RUN block that tests if [ "$VARIANT" = "full" ] and performs apk add nodejs npm git and remove the echo "Full variant: nodejs, npm, git installed" so the layer only installs packages (nodejs, npm, git) without printing a message to build logs.
42-46: Slim variant retains an extra image layer containing the uv/uvx binaries.
`COPY --from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/` always executes and creates a layer, even for `VARIANT=slim`. The subsequent `rm -rf /uv-bins` runs in a separate `RUN` layer, so the binaries are baked into the slim image's layer history and inflate its size. To avoid this, gate the entire `COPY` on `VARIANT`:

♻️ Proposed fix — conditional COPY for uv/uvx
```diff
-# Install uv and uvx for Python-based MCP server support (full variant only)
-COPY --from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/
-RUN if [ "$VARIANT" = "full" ]; then \
-        mv /uv-bins/uv /uv-bins/uvx /bin/; \
-    fi && \
-    rm -rf /uv-bins
+# Install uv and uvx for Python-based MCP server support (full variant only)
+RUN --mount=from=ghcr.io/astral-sh/uv:0.10.0,source=/uv,target=/tmp/uv \
+    --mount=from=ghcr.io/astral-sh/uv:0.10.0,source=/uvx,target=/tmp/uvx \
+    if [ "$VARIANT" = "full" ]; then \
+        cp /tmp/uv /tmp/uvx /bin/; \
+    fi
```

Alternatively, split into two separate stages, or use a BuildKit `--mount=type=bind` workaround, keeping the slim image free of those layers entirely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python-interpreter/Dockerfile` around lines 42-46: The COPY from ghcr.io/astral-sh/uv:0.10.0 is always executed and leaves /uv /uvx /uv-bins in an image layer even for VARIANT=slim; fix this by moving the uv copy into a separate build flow and only include it in the final image when VARIANT=full — e.g., create an intermediate stage that performs COPY --from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/ and the mv /uv-bins/uv /uv-bins/uvx /bin/ steps, then split the Dockerfile into two final targets (full vs slim) or otherwise gate the COPY so the main final stage never COPYs /uv-bins when ARG VARIANT is slim, ensuring no /uv-bins layer exists in the slim image.

python-interpreter/packages/afm-core/src/afm/variables.py (1)
178-191: `StdioTransport.env` values are not validated for `http:` variables.

`StdioTransport` exposes an `env` dict whose values could technically contain `${http:...}` references (same as any other string field). They are currently left unchecked. This is a pre-existing gap, not a regression introduced by this PR, but worth tracking. Do you want me to open a follow-up issue or draft the check for `StdioTransport.env` values?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python-interpreter/packages/afm-core/src/afm/variables.py` around lines 178 - 191, The loop over metadata.tools.mcp misses validating StdioTransport.env values for `${http:...}`; add a branch when isinstance(server.transport, StdioTransport) that iterates server.transport.env values and uses contains_http_variable(...) to detect http variables and append an appropriate field identifier (e.g. "tools.mcp.transport.env" or "tools.mcp.transport.env.<key>") to errored_fields; keep existing checks for HttpTransport (contains_http_variable on url and _authentication_contains_http_variable on authentication) and reuse the same helper contains_http_variable to perform the validation.
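The suggested env check can be sketched as follows. This is a hedged sketch: the `contains_http_variable` regex and the `check_stdio_env` helper are assumptions based on the review comment, not the actual afm-core implementations.

```python
import re

# Assumed pattern for ${http:...} template references.
_HTTP_VAR = re.compile(r"\$\{http:[^}]+\}")


def contains_http_variable(value: str) -> bool:
    """Return True if the string embeds a ${http:...} reference."""
    return bool(_HTTP_VAR.search(value))


def check_stdio_env(env: dict[str, str]) -> list[str]:
    """Return field identifiers for env values that embed ${http:...},
    using the per-key naming suggested in the review comment."""
    errored = []
    for key, val in env.items():
        if contains_http_variable(val):
            errored.append(f"tools.mcp.transport.env.{key}")
    return errored
```

The per-key identifiers make validation errors actionable, since a user can see exactly which env entry carries the disallowed reference.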
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/release-docker.yml:
- Around line 87-104: The slim image (built in the "Build and push slim image"
step using docker/build-push-action@v5) is pushed but not sent to the
vulnerability scanning/upload step; add a follow-up scan/upload step that
consumes the slim image tags (use the existing
steps.docker-tags.outputs.TAGS_SLIM output) and runs the same scanner/upload
action you already use for the full image so the slim variant is analyzed and
uploaded to Security; ensure the new step runs after the slim push and
references the TAGS_SLIM output and the same platforms/build args as the builder
to keep parity.
---
Nitpick comments:
In `@python-interpreter/Dockerfile`:
- Around line 36-39: Remove the noisy build-time echo in the conditional RUN
that checks VARIANT; locate the RUN block that tests if [ "$VARIANT" = "full" ]
and performs apk add nodejs npm git and remove the echo "Full variant: nodejs,
npm, git installed" so the layer only installs packages (nodejs, npm, git)
without printing a message to build logs.
- Around line 42-46: The COPY from ghcr.io/astral-sh/uv:0.10.0 is always
executed and leaves /uv /uvx /uv-bins in an image layer even for VARIANT=slim;
fix this by moving the uv copy into a separate build flow and only include it in
the final image when VARIANT=full — e.g., create an intermediate stage that
performs COPY --from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/ and the mv
/uv-bins/uv /uv-bins/uvx /bin/ steps, then split the Dockerfile into two final
targets (full vs slim) or otherwise gate the COPY so the main final stage never
COPYs /uv-bins when ARG VARIANT is slim, ensuring no /uv-bins layer exists in
the slim image.
In `@python-interpreter/packages/afm-core/src/afm/cli.py`:
- Around line 236-241: The code assumes server.transport is either HttpTransport
or StdioTransport but only checks for HttpTransport; update the dispatch in the
block handling server.transport (where transport = server.transport and
lines.append is called) to explicitly isinstance-check for StdioTransport as
well (import StdioTransport alongside HttpTransport), use transport.command only
when isinstance(transport, StdioTransport), and add an explicit fallback (raise
TypeError or log an unexpected transport type) so the branch is exhaustive and
won’t access .command on unknown transport types.
In `@python-interpreter/packages/afm-core/src/afm/variables.py`:
- Around line 178-191: The loop over metadata.tools.mcp misses validating
StdioTransport.env values for `${http:...}`; add a branch when
isinstance(server.transport, StdioTransport) that iterates server.transport.env
values and uses contains_http_variable(...) to detect http variables and append
an appropriate field identifier (e.g. "tools.mcp.transport.env" or
"tools.mcp.transport.env.<key>") to errored_fields; keep existing checks for
HttpTransport (contains_http_variable on url and
_authentication_contains_http_variable on authentication) and reuse the same
helper contains_http_variable to perform the validation.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`python-interpreter/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (14)
- .github/workflows/python-interpreter.yml
- .github/workflows/release-docker.yml
- python-interpreter/Dockerfile
- python-interpreter/packages/afm-cli/pyproject.toml
- python-interpreter/packages/afm-core/pyproject.toml
- python-interpreter/packages/afm-core/src/afm/cli.py
- python-interpreter/packages/afm-core/src/afm/models.py
- python-interpreter/packages/afm-core/src/afm/variables.py
- python-interpreter/packages/afm-core/tests/conftest.py
- python-interpreter/packages/afm-core/tests/fixtures/sample_stdio_mcp_agent.afm.md
- python-interpreter/packages/afm-core/tests/test_parser.py
- python-interpreter/packages/afm-langchain/pyproject.toml
- python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py
- python-interpreter/packages/afm-langchain/tests/test_mcp.py
Pull request overview
This PR adds support for stdio-based MCP server connections to the python interpreter, enabling tools to be executed locally via command-line utilities in addition to HTTP-based servers.
Changes:
- Added `StdioTransport` model alongside existing `HttpTransport` for MCP server connections
- Enhanced MCP client implementation to handle both HTTP and stdio connection types
- Introduced Docker image variants (full and slim) to support optional stdio dependencies
Reviewed changes
Copilot reviewed 14 out of 15 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| python-interpreter/packages/afm-core/src/afm/models.py | Defines StdioTransport and updates Transport type as a discriminated union |
| python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py | Refactors MCPClient to support both HTTP and stdio transports |
| python-interpreter/packages/afm-langchain/tests/test_mcp.py | Adds comprehensive tests for stdio transport functionality |
| python-interpreter/packages/afm-core/tests/test_parser.py | Tests parsing of stdio MCP configurations from AFM files |
| python-interpreter/packages/afm-core/tests/fixtures/sample_stdio_mcp_agent.afm.md | Sample AFM file demonstrating stdio MCP server configuration |
| python-interpreter/packages/afm-core/src/afm/variables.py | Updates HTTP variable validation to handle transport type discrimination |
| python-interpreter/packages/afm-core/src/afm/cli.py | Updates validation output formatting for different transport types |
| python-interpreter/Dockerfile | Adds support for full and slim variants with optional stdio dependencies |
| .github/workflows/release-docker.yml | Publishes both full and slim Docker image variants |
| .github/workflows/python-interpreter.yml | Builds both variants in CI workflow |
| python-interpreter/packages/afm-core/pyproject.toml | Version bump to 0.2.0 |
| python-interpreter/packages/afm-langchain/pyproject.toml | Version bump to 0.2.0 with updated afm-core dependency |
| python-interpreter/packages/afm-cli/pyproject.toml | Version bump to 0.3.0 with updated dependencies |
python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py: outdated comment (resolved)
This makes it so that the Ballerina implementation does not fail at the parser level when trying to run an AFM file with a stdio-transport MCP server.
🧹 Nitpick comments (1)
ballerina-interpreter/parser.bal (1)
272-280: `env` map keys are not checked for `${http:...}` variables.

`foreach string val in env` iterates over map values only; the corresponding env variable names (keys) are never inspected. If a user embeds an `${http:...}` expression as an env key, it would pass validation silently. This is a low-risk gap — env variable names are OS-level identifiers and not a natural template site — but for completeness the check could be extended to entries.
♻️ Extend the env check to include keys
```diff
-    foreach string val in env {
-        if containsHttpVariable(val) {
-            erroredKeys.push("tools.mcp.transport.env");
-            break;
-        }
-    }
+    foreach [string, string] [envKey, envVal] in env.entries() {
+        if containsHttpVariable(envKey) || containsHttpVariable(envVal) {
+            erroredKeys.push("tools.mcp.transport.env");
+            break;
+        }
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ballerina-interpreter/parser.bal` around lines 272 - 280, The current loop over transport.env only inspects map values so keys containing `${http:...}` can be missed; update the check to iterate entries (or iterate keys and values) for transport.env and call containsHttpVariable on both the key and the value, and if either returns true push "tools.mcp.transport.env" into erroredKeys and break; reference transport.env, containsHttpVariable, and erroredKeys to locate and update the logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@ballerina-interpreter/parser.bal`:
- Around line 272-280: The current loop over transport.env only inspects map
values so keys containing `${http:...}` can be missed; update the check to
iterate entries (or iterate keys and values) for transport.env and call
containsHttpVariable on both the key and the value, and if either returns true
push "tools.mcp.transport.env" into erroredKeys and break; reference
transport.env, containsHttpVariable, and erroredKeys to locate and update the
logic.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- ballerina-interpreter/agent.bal
- ballerina-interpreter/parser.bal
- ballerina-interpreter/types.bal
Fix release-python workflow
Due to branch protection rules, the current version bump step in both the Ballerina and Python release workflows fails. Disabled them for now. If a contributor forgets to bump the version, the validate step will still fail.
Force-pushed from c9976e3 to 67b823a
Actionable comments posted: 3
🧹 Nitpick comments (2)
python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py (1)
142-145: Harden unsupported-transport fallbacks.

Using `transport.type` and a generic `else` branch can mask invalid runtime inputs with `AttributeError` instead of returning a clear `MCPError`.

Suggested defensive patch
```diff
     if not isinstance(transport, (HttpTransport, StdioTransport)):
         raise MCPError(
-            f"Unsupported transport type: {transport.type}",
+            f"Unsupported transport type: {type(transport).__name__}",
             server_name=server.name,
         )

     def _build_connection_config(self) -> StreamableHttpConnection | StdioConnection:
         if isinstance(self.transport, HttpTransport):
             config: StreamableHttpConnection = {
                 "transport": "streamable_http",
                 "url": self.transport.url,
             }
             auth = build_httpx_auth(self.transport.authentication)
             if auth is not None:
                 config["auth"] = auth
             return config
-        else:
+        elif isinstance(self.transport, StdioTransport):
             config: StdioConnection = {
                 "transport": "stdio",
                 "command": self.transport.command,
                 "args": self.transport.args or [],
             }
             if self.transport.env is not None:
                 config["env"] = self.transport.env
             return config
+
+        raise MCPError(
+            f"Unsupported transport type: {type(self.transport).__name__}",
+            server_name=self.name,
+        )
```

Also applies to: 154-173
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py` around lines 142-145: The current unsupported-transport checks access transport.type and rely on an else branch, which can raise AttributeError for invalid runtime inputs; update the checks around the MCPError raises (references: HttpTransport, StdioTransport, MCPError, transport.type, server.name) to defensively read the transport kind using getattr(transport, "type", None) or falling back to type(transport).__name__ and include that safe representation in the MCPError message, and apply the same defensive change to the similar block covering lines 154-173 so no AttributeError is possible when transport is malformed.

.github/workflows/release-docker.yml (1)
65-71: Use the derived slim tag output as the slim scan target.

Line 132 rebuilds the tag string manually. Reusing a primary slim tag output from the tag step avoids drift if tag formatting changes.
♻️ Suggested patch
```diff
   TAGS_SLIM="$FULL_IMAGE:v$VERSION-slim"
   [ "$UPDATE_LATEST" = "true" ] && TAGS_SLIM="$TAGS_SLIM,$FULL_IMAGE:slim"
   echo "TAGS_SLIM=$TAGS_SLIM" >> $GITHUB_OUTPUT
+  echo "TAG_SLIM_PRIMARY=$FULL_IMAGE:v$VERSION-slim" >> $GITHUB_OUTPUT
   echo "FULL_IMAGE=$FULL_IMAGE" >> $GITHUB_OUTPUT
@@
       - name: Scan slim Docker image for vulnerabilities
         if: ${{ always() && inputs.build_slim }}
         uses: aquasecurity/trivy-action@0.34.0
         with:
-          image-ref: ${{ steps.docker-tags.outputs.FULL_IMAGE }}:v${{ inputs.version }}-slim
+          image-ref: ${{ steps.docker-tags.outputs.TAG_SLIM_PRIMARY }}
           format: "sarif"
           output: "trivy-results-slim.sarif"
```

Also applies to: 132-132
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release-docker.yml around lines 65 - 71, The slim scan target should use the previously derived TAGS_SLIM output instead of reconstructing the slim tag string; update the step that currently rebuilds the slim tag (where the scan target is set) to consume the TAGS_SLIM GitHub output value written with echo "TAGS_SLIM=$TAGS_SLIM" >> $GITHUB_OUTPUT (same pattern used for TAGS_FULL), ensuring the scan step reads the TAGS_SLIM variable (not manually concatenating FULL_IMAGE/v$VERSION-slim) so tag formatting stays consistent if TAGS_SLIM logic changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/release-python.yml:
- Around line 196-203: The current finalize job condition allows
needs.pypi-publish.result == 'skipped' unconditionally which can let finalize
(and bump-version) run after unintended skips; update the guard so finalize only
proceeds when pypi-publish succeeded OR when pypi-publish explicitly signals an
intentional skip (e.g. expose an output like
needs.pypi-publish.outputs.intentional_skip == 'true' from the pypi-publish job
and change the finalize if to require that output instead of a raw 'skipped'
result); ensure the pypi-publish job sets that outputs.intentional_skip flag in
the intentional-skip paths so finalize and bump-version are gated correctly.
In `@python-interpreter/Dockerfile`:
- Line 34: Validate the build ARG VARIANT right after it's declared by checking
its value and failing fast or emitting a clear error when it is not exactly
"full"; update the Dockerfile to test the VARIANT variable (e.g., using a shell
conditional that compares VARIANT = "full") and call echo + exit 1 or use an
explicit warning if the value differs (this prevents silent fallback to slim
when callers pass "Full", "FULL", "slim" or typos); reference the ARG VARIANT
declaration and the subsequent conditional logic in the Dockerfile to implement
this guard.
- Around line 41-45: Replace the unconditional "COPY
--from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/" layer with a single RUN
that uses BuildKit's --mount=from=ghcr.io/astral-sh/uv:0.10.0 to access /uv-bins
at build time so files are never persisted in an image layer; inside that same
RUN, check VARIANT ("full") and mv the needed binaries (/uv-bins/uv and
/uv-bins/uvx) into /bin/ when required and remove /uv-bins, ensuring the mv and
cleanup happen in the same RUN that mounts the uv image (refer to the existing
RUN, VARIANT, /uv-bins, uv, uvx and --mount=from= usage).
---
Nitpick comments:
In @.github/workflows/release-docker.yml:
- Around line 65-71: The slim scan target should use the previously derived
TAGS_SLIM output instead of reconstructing the slim tag string; update the step
that currently rebuilds the slim tag (where the scan target is set) to consume
the TAGS_SLIM GitHub output value written with echo "TAGS_SLIM=$TAGS_SLIM" >>
$GITHUB_OUTPUT (same pattern used for TAGS_FULL), ensuring the scan step reads
the TAGS_SLIM variable (not manually concatenating FULL_IMAGE/v$VERSION-slim) so
tag formatting stays consistent if TAGS_SLIM logic changes.
In `@python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py`:
- Around line 142-145: The current unsupported-transport checks access
transport.type and rely on an else branch, which can raise AttributeError for
invalid runtime inputs; update the checks around the MCPError raises
(references: HttpTransport, StdioTransport, MCPError, transport.type,
server.name) to defensively read the transport kind using getattr(transport,
"type", None) or falling back to type(transport).__name__ and include that safe
representation in the MCPError message, and apply the same defensive change to
the similar block covering lines 154-173 so no AttributeError is possible when
transport is malformed.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`python-interpreter/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (4)
- .github/workflows/release-docker.yml
- .github/workflows/release-python.yml
- python-interpreter/Dockerfile
- python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py
Actionable comments posted: 1
♻️ Duplicate comments (3)
python-interpreter/Dockerfile (2)
34-34: ⚠️ Potential issue | 🟡 Minor

Fail fast for invalid `VARIANT` values.

At Line 34 and Line 36, any value other than `"full"` silently falls through to slim behavior. Add an explicit guard so typos don't produce unexpected images.

Suggested fix
```diff
 ARG VARIANT=full
+RUN case "$VARIANT" in \
+        full|slim) ;; \
+        *) echo "ERROR: VARIANT must be 'full' or 'slim' (got '$VARIANT')" >&2; exit 1 ;; \
+    esac
 RUN if [ "$VARIANT" = "full" ]; then \
     apk add --no-cache nodejs npm git; \
     fi
```

Also applies to: 36-38
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@python-interpreter/Dockerfile` at line 34, The Dockerfile currently sets ARG VARIANT=full but silently treats any non-"full" value as the slim path; add a fail-fast guard that validates ARG VARIANT is one of the allowed values (e.g., "full" or "slim") and aborts the build with a clear error message for typos/invalid values. Implement this by inserting a RUN validation step after ARG VARIANT that checks VARIANT and exits non‑zero with a descriptive message when VARIANT is not "full" or "slim", so the subsequent branching logic that relies on VARIANT behaves deterministically.
41-45: ⚠️ Potential issue | 🟠 Major

Unconditional uv/uvx `COPY` still bloats slim image layers.

Line 41 always adds uv binaries to an image layer. Removing `/uv-bins` at Line 45 does not reclaim that layer size. For slim builds, this defeats the footprint goal.

Suggested fix (BuildKit mount, no persisted uv layer)
```diff
-# Install uv and uvx for Python-based MCP server support (full variant only)
-COPY --from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/
-RUN if [ "$VARIANT" = "full" ]; then \
-        mv /uv-bins/uv /uv-bins/uvx /bin/; \
-    fi && \
-    rm -rf /uv-bins
+# Install uv and uvx for Python-based MCP server support (full variant only)
+RUN --mount=from=ghcr.io/astral-sh/uv:0.10.0,source=/uv,target=/tmp/uv \
+    --mount=from=ghcr.io/astral-sh/uv:0.10.0,source=/uvx,target=/tmp/uvx \
+    if [ "$VARIANT" = "full" ]; then \
+        cp /tmp/uv /tmp/uvx /bin/; \
+    fi
```
Verify each finding against the current code and only fix it if needed. In `@python-interpreter/Dockerfile` around lines 41-45: The Dockerfile currently unconditionally COPYs uv binaries which creates a persisted layer; instead remove the unconditional COPY --from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/ and use a BuildKit mount in the RUN that conditionally moves binaries when VARIANT=full (e.g. replace the RUN that references VARIANT with a buildkit-aware RUN --mount=from=ghcr.io/astral-sh/uv:0.10.0 to access /uv and /uvx at build time), so the uv files are never committed into an image layer for slim builds; update the RUN that uses VARIANT to reference the buildkit mount and remove the final rm -rf /uv-bins cleanup since nothing is persisted when using the mount.

.github/workflows/release-python.yml (1)
196-203: `finalize` still triggers on unintentional `pypi-publish` skips.

The condition at line 201 allows `needs.pypi-publish.result == 'skipped'` unconditionally. If `test` fails (or any other upstream dependency of `pypi-publish` fails), `pypi-publish` is skipped — and because `finalize` only checks `validate.result`, it will still proceed to tag and finalize a release that never actually published.

The fix is to add `needs.test.result == 'success'` to `finalize`'s needs/condition, and to guard the skip allowance with `inputs.skip_pypi` so only an intentional skip is accepted.
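The intended gate can be expressed as a small predicate. This is an illustrative sketch of the boolean logic only; the function and parameter names are assumptions, not actual GitHub Actions expressions:

```python
def should_finalize(test_result: str, pypi_result: str, skip_pypi: bool) -> bool:
    """finalize runs only when tests passed AND PyPI either published
    or was intentionally skipped via the skip_pypi input."""
    if test_result != "success":
        return False
    if pypi_result == "success":
        return True
    # A 'skipped' pypi-publish is acceptable only when explicitly requested.
    return pypi_result == "skipped" and skip_pypi
```

Note how an upstream failure (test_result != "success") blocks finalization even though pypi-publish would report "skipped" in that case.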
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release-python.yml around lines 196 - 203, The finalize job currently allows needs.pypi-publish.result == 'skipped' unconditionally and lacks a dependency on the test job; update the finalize job (the job named finalize in the release workflow) to add needs: test and require needs.test.result == 'success' in its if condition, and change the pypi skip allowance to only permit needs.pypi-publish.result == 'skipped' when the workflow input inputs.skip_pypi is true (e.g., include a clause checking inputs.skip_pypi == 'true' before accepting the skipped result); this ensures finalize only runs when tests passed and PyPI was intentionally skipped.
🧹 Nitpick comments (1)
.github/workflows/release-python.yml (1)
217-217: Remove the permanently disabled `bump-version` job to satisfy `actionlint`.

`if: false` is a constant expression that `actionlint` (and the CI pipeline) flags as an error. Since direct pushes are blocked by branch protection and the job is commented as disabled indefinitely, the cleanest resolution is to delete the entire job. If it's meant to be re-enabled later, track it via an issue rather than dead workflow YAML.

✂️ Proposed removal of the dead job
```diff
-  bump-version:
-    needs: [validate, finalize]
-    if: false  # Disabled: direct push blocked by branch protection
-    runs-on: ubuntu-latest
-    permissions:
-      contents: write
-    steps:
-      - name: Checkout repository
-        ...
-      - name: Bump version to next
-        ...
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release-python.yml at line 217, Remove the permanently disabled GitHub Actions job named "bump-version" from the workflow YAML (the job block that contains "if: false") — delete the entire job definition including its name, steps, and any references to it so the workflow no longer contains a constant false conditional that triggers actionlint; if you need to preserve the intent, record it in an issue or other documentation rather than leaving the disabled job in the workflow.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/release-ballerina.yml:
- Line 142: The release workflow has the "bump-version" job permanently disabled
by the literal condition "if: false", which prevents automatic post-release
version increments; restore version progression by re-enabling or relocating the
bump logic: either remove/replace the "if: false" guard on the bump-version job
in .github/workflows/release-ballerina.yml and add appropriate safeguards (e.g.,
require release-finalize success, check actor or tag, or use a dispatch-input
flag), or move the version bump steps into release-finalize.yml (invoke the
existing bump-version steps or call the same script) and wire a boolean input
(e.g., bump: true) and condition to control execution; ensure the job still runs
with proper permissions (GITHUB_TOKEN/push rights) and that the job name
"bump-version" and release-finalize workflow references are updated
consistently.
---
Duplicate comments:
In @.github/workflows/release-python.yml:
- Around line 196-203: The finalize job currently allows
needs.pypi-publish.result == 'skipped' unconditionally and lacks a dependency on
the test job; update the finalize job (the job named finalize in the release
workflow) to add needs: test and require needs.test.result == 'success' in its
if condition, and change the pypi skip allowance to only permit
needs.pypi-publish.result == 'skipped' when the workflow input inputs.skip_pypi
is true (e.g., include a clause checking inputs.skip_pypi == 'true' before
accepting the skipped result); this ensures finalize only runs when tests passed
and PyPI was intentionally skipped.
In `@python-interpreter/Dockerfile`:
- Line 34: The Dockerfile currently sets ARG VARIANT=full but silently treats
any non-"full" value as the slim path; add a fail-fast guard that validates ARG
VARIANT is one of the allowed values (e.g., "full" or "slim") and aborts the
build with a clear error message for typos/invalid values. Implement this by
inserting a RUN validation step after ARG VARIANT that checks VARIANT and exits
non‑zero with a descriptive message when VARIANT is not "full" or "slim", so the
subsequent branching logic that relies on VARIANT behaves deterministically.
- Around line 41-45: The Dockerfile currently unconditionally COPYs uv binaries
which creates a persisted layer; instead remove the unconditional COPY
--from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/ and use a BuildKit mount
in the RUN that conditionally moves binaries when VARIANT=full (e.g. replace the
RUN that references VARIANT with a buildkit-aware RUN
--mount=type=cache/--mount=from=ghcr.io/astral-sh/uv:0.10.0 to access /uv and
/uvx at build time), so the uv files are never committed into an image layer for
slim builds; update the RUN that uses VARIANT to reference the buildkit mount
and remove the final rm -rf /uv-bins cleanup since nothing is persisted when
using the mount.
---
Nitpick comments:
In @.github/workflows/release-python.yml:
- Line 217: Remove the permanently disabled GitHub Actions job named
"bump-version" from the workflow YAML (the job block that contains "if: false")
— delete the entire job definition including its name, steps, and any references
to it so the workflow no longer contains a constant false conditional that
triggers actionlint; if you need to preserve the intent, record it in an issue
or other documentation rather than leaving the disabled job in the workflow.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
python-interpreter/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
.github/workflows/release-ballerina.yml
.github/workflows/release-python.yml
python-interpreter/Dockerfile
- Bind mount uv to avoid unconditionally copying the uv/uvx binaries into the slim image
- Add variant validation
- Guard finalize against unintended pypi-publish skips
♻️ Duplicate comments (1)
.github/workflows/release-python.yml (1)
196-203: ⚠️ Potential issue | 🟠 Major

`finalize` can still run after failed tests when `skip_pypi=true`. Lines 196 and 199-203 do not require `test` success. With `skip_pypi=true`, both `pypi-publish` and `docker` can be skipped, yet `finalize` still passes.

🔧 Proposed fix

```diff
 finalize:
-  needs: [validate, pypi-publish, docker]
+  needs: [validate, test, pypi-publish, docker]
   if: >-
     !cancelled()
     && needs.validate.result == 'success'
+    && needs.test.result == 'success'
     && (needs.pypi-publish.result == 'success'
         || (needs.pypi-publish.result == 'skipped' && inputs.skip_pypi))
     && (needs.docker.result == 'success' || needs.docker.result == 'skipped')
```

```bash
#!/bin/bash
set -euo pipefail
file=".github/workflows/release-python.yml"
echo "Checking finalize needs/guard..."
nl -ba "$file" | sed -n '195,206p'
echo
echo "Searching for explicit finalize test gate:"
rg -n "needs\.test\.result\s*==\s*'success'" "$file" || true
# Expected after fix:
# - finalize needs includes test
# - finalize if includes needs.test.result == 'success'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release-python.yml around lines 196 - 203, The finalize job’s gating lacks a requirement that tests passed, so add the test job to the finalize job’s needs and include a check for needs.test.result == 'success' in the finalize job’s if condition (update the finalize job declaration named "finalize" and its existing needs/if block that currently references needs.validate, needs.pypi-publish, and needs.docker) so finalize cannot run when tests failed even if pypi-publish or docker are skipped; ensure both the needs array and the if expression include the test requirement.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In @.github/workflows/release-python.yml:
- Around line 196-203: The finalize job’s gating lacks a requirement that tests
passed, so add the test job to the finalize job’s needs and include a check for
needs.test.result == 'success' in the finalize job’s if condition (update the
finalize job declaration named "finalize" and its existing needs/if block that
currently references needs.validate, needs.pypi-publish, and needs.docker) so
finalize cannot run when tests failed even if pypi-publish or docker are
skipped; ensure both the needs array and the if expression include the test
requirement.
📒 Files selected for processing (2)
.github/workflows/release-python.yml
python-interpreter/Dockerfile
🚧 Files skipped from review as they are similar to previous changes (1)
- python-interpreter/Dockerfile
- Make the default port of the python-interpreter 8085
- Serve the chat UI under /chat/ui regardless of path setting
Package managers (uvx, npx) emit install output to stdout on first run, which the MCP client logs as ERROR. Add MCPStdioNoiseFilter to silence this noise in normal mode while keeping it visible at DEBUG level.
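A minimal sketch of what such a filter might look like — the marker strings and the DEBUG passthrough heuristic below are illustrative assumptions, not the PR's actual implementation:

```python
import logging


class MCPStdioNoiseFilter(logging.Filter):
    """Drop package-manager install chatter (uvx, npx) that stdio MCP
    servers write to stdout and the MCP client would log as ERROR."""

    # Illustrative markers only; the real filter's matching rules may differ.
    NOISE_MARKERS = ("Downloading", "Installed", "Resolved", "npm warn")

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        if any(marker in message for marker in self.NOISE_MARKERS):
            # Keep the noise visible only when the logger runs at DEBUG.
            return logging.getLogger(record.name).isEnabledFor(logging.DEBUG)
        return True  # everything else passes through unchanged
```

Attached via `addFilter()` on the MCP client's logger, this silences first-run install output in normal mode while a DEBUG-level logger still shows it.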
Pull request overview
Copilot reviewed 26 out of 27 changed files in this pull request and generated 3 comments.
python-interpreter/packages/afm-core/src/afm/interfaces/webhook.py
Outdated
c767db4 to 4dc533a
.github/workflows/release-docker.yml
Outdated
```diff
@@ -57,20 +62,23 @@ jobs:
       # GHCR requires lowercase repository names
       OWNER_LOWER=$(echo "$OWNER" | tr '[:upper:]' '[:lower:]')
       FULL_IMAGE="ghcr.io/$OWNER_LOWER/$IMAGE_NAME"
```
Can we change this variable name now that we have two variants of the Docker image, including a "full" one :)
ballerina-interpreter/parser.bal
Outdated
```ballerina
if args is string[] {
    foreach string arg in args {
        if containsHttpVariable(arg) {
            erroredKeys.push("tools.mcp.transport.args");
```
I guess we could go one level down and include the arg?
ballerina-interpreter/parser.bal
Outdated
```ballerina
if env is map<string> {
    foreach string val in env {
        if containsHttpVariable(val) {
            erroredKeys.push("tools.mcp.transport.env");
```
```python
if _authentication_contains_http_variable(
    server.transport.authentication
):
```
Can this go on one line for readability? Or is it too long?
Yes, it was too long, but I renamed the function and now it's one line: 49c15ed
```python
errored_fields.append("tools.mcp.transport.url")
if _authentication_contains_http_variable(server.transport.authentication):
    errored_fields.append("tools.mcp.transport.authentication")
# HTTP transport fields: url and authentication
```
Use array indices (args[i]) and map keys (env.KEY) instead of generic field paths when reporting http: variable violations in StdioTransport. Added matching validation to the Python implementation and tests for both interpreters.
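A minimal sketch of the indexed reporting described above — the helper name and the `http:`-detection stand-in are hypothetical, not the PR's code:

```python
def collect_http_variable_fields(transport: dict) -> list:
    """Report each offending stdio-transport location individually:
    array indices for args (args[i]), map keys for env (env.KEY)."""
    prefix = "tools.mcp.transport"
    errored = []

    def contains_http_variable(value: str) -> bool:
        # Stand-in for the interpreters' real http: variable check.
        return "http:" in value

    for i, arg in enumerate(transport.get("args", [])):
        if contains_http_variable(arg):
            errored.append(f"{prefix}.args[{i}]")
    for key, value in transport.get("env", {}).items():
        if contains_http_variable(value):
            errored.append(f"{prefix}.env.{key}")
    return errored
```

Reporting `args[1]` or `env.API_KEY` rather than a generic `args`/`env` path points the user at the exact offending value.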
Purpose
This PR adds support for MCP tools accessed via stdio transport to the python interpreter.
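As a rough sketch of the discriminated http/stdio transport union and the connection dispatch this PR introduces — field and function names below are assumptions based on the PR summary, not the actual afm-core models:

```python
from dataclasses import dataclass, field
from typing import Literal, Optional, Union


@dataclass
class HttpTransport:
    url: str
    authentication: Optional[dict] = None
    type: Literal["http"] = "http"


@dataclass
class StdioTransport:
    command: str                       # e.g. "uvx" or "npx"
    args: list = field(default_factory=list)
    env: dict = field(default_factory=dict)
    type: Literal["stdio"] = "stdio"


Transport = Union[HttpTransport, StdioTransport]


def build_connection_config(transport: Transport) -> dict:
    """Dispatch on the `type` discriminator to build a connection config."""
    if transport.type == "http":
        return {"transport": "streamable_http", "url": transport.url}
    return {
        "transport": "stdio",
        "command": transport.command,
        "args": transport.args,
        "env": transport.env,
    }
```

The discriminator makes each branch self-describing, so adding a third transport later only requires a new variant and one more dispatch case.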
Changes
- Add stdio transport support to `afm-core` and `afm-langchain`
- Support launching stdio servers via package managers such as `npx` and `uvx`

Related
Summary by CodeRabbit
New Features
Refactor
Chores
Tests
Bug Fixes