
Add stdio mcp tool support to python-interpreter#10

Merged
RadCod3 merged 22 commits into wso2:main from RadCod3:feat/langchain-stdio-support
Feb 27, 2026
Conversation

@RadCod3
Contributor

@RadCod3 RadCod3 commented Feb 19, 2026

Purpose

This PR adds support for MCP tools accessed via stdio transport to the Python interpreter.

Changes

  • Add stdio implementation to afm-core and afm-langchain
  • Add the additional dependencies needed for stdio support (npx, uvx) to the Docker image
  • Introduce a slim Docker image variant without the additional stdio dependencies
  • Bump the minor version of all 3 packages to reflect the new change.

Related

Summary by CodeRabbit

  • New Features

    • CI/release can build/publish two image variants (full and slim) with separate tags, scans, and a slim toggle; PyPI publish can be skipped.
  • Refactor

    • Transport model replaced by concrete HTTP and stdio transports; clients and parsers now operate on transport objects.
  • Chores

    • Package versions bumped and dependency ranges tightened.
  • Tests

    • New fixtures and tests for HTTP and stdio transport parsing and validation.
  • Bug Fixes

    • Stdio transport now fails fast instead of being skipped.

@coderabbitai
Contributor

coderabbitai bot commented Feb 19, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds stdio-based MCP transport support alongside HTTP, refactors transport models to a discriminated union (HttpTransport / StdioTransport), updates MCPClient and connection-building to accept transport objects, expands tests/fixtures for stdio, introduces full/slim Docker image variants and CI inputs, and bumps package versions.

Changes

Cohort / File(s) Summary
CI workflows
.github/workflows/python-interpreter.yml, .github/workflows/release-docker.yml, .github/workflows/release-python.yml, .github/workflows/release-ballerina.yml
Adds full/slim Docker build steps and VARIANT build-arg; adds build_slim and skip_pypi inputs; splits tag outputs (TAGS_FULL/TAGS_SLIM); adjusts workflow conditions and scan/upload steps.
Interpreter Dockerfile
python-interpreter/Dockerfile
Adds ARG VARIANT=full, validates VARIANT, conditionally installs node/npm/git and conditionally copies uv/uvx binaries only for the full variant.
Package metadata
python-interpreter/packages/afm-core/pyproject.toml, python-interpreter/packages/afm-cli/pyproject.toml, python-interpreter/packages/afm-langchain/pyproject.toml
Version bumps: afm-core 0.1.7→0.2.0, afm-cli 0.2.10→0.3.0, afm-langchain 0.1.4→0.2.0; dependency constraints updated to require afm-core >=0.2.0.
Transport models
python-interpreter/packages/afm-core/src/afm/models.py
Removes old Transport/TransportType; adds HttpTransport and StdioTransport models and a discriminated-union Transport alias (Field discriminator on type).
Core CLI & validation
python-interpreter/packages/afm-core/src/afm/cli.py, python-interpreter/packages/afm-core/src/afm/variables.py
Import HttpTransport; format/validation logic now branches on concrete transport types, guarding HTTP-specific checks to HttpTransport instances.
MCP client & langchain integration
python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py
MCPClient API changed to accept `transport: HttpTransport | StdioTransport`; connection configs are built from the concrete transport object.
Tests & fixtures
python-interpreter/packages/afm-core/tests/conftest.py, .../fixtures/sample_stdio_mcp_agent.afm.md, python-interpreter/packages/afm-core/tests/test_parser.py, python-interpreter/packages/afm-langchain/tests/test_mcp.py
Adds stdio fixture and AFM sample; expands parser and MCP tests to cover stdio transports and mixed scenarios; updates imports and assertions to use HttpTransport/StdioTransport and StdioConnection.
Ballerina interpreter
ballerina-interpreter/agent.bal, ballerina-interpreter/parser.bal, ballerina-interpreter/types.bal
Introduces StdioTransport and Transport union; validation scans command/args/env for http: usage on non-http transports; agent errors on stdio transport instead of warning.
Small public API shifts
python-interpreter/packages/afm-core/src/afm/models.py, python-interpreter/packages/afm-core/src/afm/cli.py, python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py
Adds public symbols HttpTransport, StdioTransport, and Transport; changes MCPClient constructor signature to accept transport. Review call sites for updated signatures.

Sequence Diagram(s)

sequenceDiagram
    participant AFM as AFM/MCP Definition
    participant Factory as MCPClient.from_mcp_server
    participant MCP as MCPClient
    participant Builder as _build_connection_config
    participant HTTP as StreamableHttpConnection
    participant STDIO as StdioConnection

    AFM->>Factory: provide server with transport (http|stdio)
    Factory->>MCP: MCPClient(name, transport, tool_filter)
    MCP->>MCP: inspect transport.type
    alt transport == "http"
        MCP->>Builder: build http config (url, authentication)
        Builder->>HTTP: create StreamableHttpConnection
        Builder-->>MCP: return HTTP connection
    else transport == "stdio"
        MCP->>Builder: build stdio config (command, args, env)
        Builder->>STDIO: create StdioConnection
        Builder-->>MCP: return STDIO connection
    end
    MCP-->>AFM: client ready with transport-backed connection

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐇 I nibbled specs and swapped a bit of code,

HTTP and stdio now share the road.
Two Docker flavors hop into the sun,
Tests clap paws — the migration's done.
A joyful rabbit basks when builds run.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 5.88% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Title check ✅ Passed The title 'Add stdio mcp tool support to python-interpreter' accurately summarizes the main change: adding stdio transport support for MCP tools to the Python interpreter.
Description check ✅ Passed The PR description provides purpose, high-level changes, related issues, and version bumps, though it does not follow the full template structure with all prescribed sections.
Linked Issues check ✅ Passed The PR addresses issue #11 by implementing stdio-based MCP tool support across afm-core, afm-langchain, and Python interpreter components with corresponding test coverage.
Out of Scope Changes check ✅ Passed All changes align with the objective of adding stdio MCP tool support. Docker variant introduction and workflow optimizations support the primary feature.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@RadCod3 RadCod3 force-pushed the feat/langchain-stdio-support branch 2 times, most recently from e1706e7 to 2b6f075 Compare February 25, 2026 05:39
@RadCod3 RadCod3 force-pushed the feat/langchain-stdio-support branch from 2b6f075 to ef6fcc3 Compare February 25, 2026 07:46
@RadCod3 RadCod3 marked this pull request as ready for review February 25, 2026 08:59
@RadCod3 RadCod3 requested a review from MaryamZi as a code owner February 25, 2026 08:59
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
python-interpreter/packages/afm-core/src/afm/cli.py (1)

236-241: else branch implicitly assumes StdioTransport — fragile if a third transport type is added.

transport.command is accessed without an explicit isinstance(transport, StdioTransport) check. Static type-checkers won't narrow the type in the else branch, and adding any future transport type without a command attribute will cause an AttributeError at runtime.

Consider making the dispatch exhaustive:

♻️ Proposed fix — explicit isinstance narrowing
-            transport = server.transport
-            if isinstance(transport, HttpTransport):
-                transport_info = transport.url
-            else:
-                transport_info = transport.command
+            from .models import StdioTransport  # already imported if added to the import block above
+            transport = server.transport
+            if isinstance(transport, HttpTransport):
+                transport_info = transport.url
+            elif isinstance(transport, StdioTransport):
+                transport_info = transport.command
+            else:
+                transport_info = repr(transport)

Or import StdioTransport at the top of the file alongside HttpTransport.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python-interpreter/packages/afm-core/src/afm/cli.py` around lines 236 - 241,
The code assumes server.transport is either HttpTransport or StdioTransport but
only checks for HttpTransport; update the dispatch in the block handling
server.transport (where transport = server.transport and lines.append is called)
to explicitly isinstance-check for StdioTransport as well (import StdioTransport
alongside HttpTransport), use transport.command only when isinstance(transport,
StdioTransport), and add an explicit fallback (raise TypeError or log an
unexpected transport type) so the branch is exhaustive and won’t access .command
on unknown transport types.
python-interpreter/Dockerfile (2)

36-39: Debug echo in a production image layer.

Line 38's echo "Full variant: nodejs, npm, git installed" adds noise to build logs and has no operational value. Consider removing it.

🔧 Proposed fix
 RUN if [ "$VARIANT" = "full" ]; then \
-        apk add --no-cache nodejs npm git && \
-        echo "Full variant: nodejs, npm, git installed"; \
+        apk add --no-cache nodejs npm git; \
     fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python-interpreter/Dockerfile` around lines 36 - 39, Remove the noisy
build-time echo in the conditional RUN that checks VARIANT; locate the RUN block
that tests if [ "$VARIANT" = "full" ] and performs apk add nodejs npm git and
remove the echo "Full variant: nodejs, npm, git installed" so the layer only
installs packages (nodejs, npm, git) without printing a message to build logs.

42-46: Slim variant retains an extra image layer containing the uv/uvx binaries.

COPY --from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/ always executes and creates a layer, even for VARIANT=slim. The subsequent rm -rf /uv-bins runs in a separate RUN layer, so the binaries are baked into the slim image's layer history and inflate its size. To avoid this, gate the entire COPY on VARIANT:

♻️ Proposed fix — conditional COPY for uv/uvx
-# Install uv and uvx for Python-based MCP server support (full variant only)
-COPY --from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/
-RUN if [ "$VARIANT" = "full" ]; then \
-        mv /uv-bins/uv /uv-bins/uvx /bin/; \
-    fi && \
-    rm -rf /uv-bins
+# Install uv and uvx for Python-based MCP server support (full variant only)
+RUN --mount=from=ghcr.io/astral-sh/uv:0.10.0,source=/uv,target=/tmp/uv \
+    --mount=from=ghcr.io/astral-sh/uv:0.10.0,source=/uvx,target=/tmp/uvx \
+    if [ "$VARIANT" = "full" ]; then \
+        cp /tmp/uv /tmp/uvx /bin/; \
+    fi

Alternatively, split into two separate stages or use a COPY --if workaround pattern (BuildKit --mount=type=bind), keeping the slim image free of those layers entirely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python-interpreter/Dockerfile` around lines 42 - 46, The COPY from
ghcr.io/astral-sh/uv:0.10.0 is always executed and leaves /uv /uvx /uv-bins in
an image layer even for VARIANT=slim; fix this by moving the uv copy into a
separate build flow and only include it in the final image when VARIANT=full —
e.g., create an intermediate stage that performs COPY
--from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/ and the mv /uv-bins/uv
/uv-bins/uvx /bin/ steps, then split the Dockerfile into two final targets (full
vs slim) or otherwise gate the COPY so the main final stage never COPYs /uv-bins
when ARG VARIANT is slim, ensuring no /uv-bins layer exists in the slim image.
python-interpreter/packages/afm-core/src/afm/variables.py (1)

178-191: StdioTransport.env values are not validated for http: variables.

StdioTransport exposes an env dict whose values could technically contain ${http:...} references (same as any other string field). They are currently left unchecked. This is a pre-existing gap, not a regression introduced by this PR, but worth tracking.

Do you want me to open a follow-up issue or draft the check for StdioTransport.env values?
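The gap described above could be closed with a check along these lines. This is a hedged sketch: `contains_http_variable` and the `${http:...}` syntax are reimplemented here from context, and the field-identifier format is an assumption:

```python
# Hypothetical sketch of validating StdioTransport.env values for ${http:...} references.
import re

# Assumed variable syntax, reconstructed from the review comments in this thread.
_HTTP_VAR = re.compile(r"\$\{http:[^}]*\}")


def contains_http_variable(value: str) -> bool:
    """Return True if the string embeds a ${http:...} variable reference."""
    return bool(_HTTP_VAR.search(value))


def check_stdio_env(env: dict[str, str]) -> list[str]:
    """Collect field identifiers for env values that embed http variables."""
    errored_fields = []
    for key, value in env.items():
        if contains_http_variable(value):
            errored_fields.append(f"tools.mcp.transport.env.{key}")
    return errored_fields


env = {"API_URL": "${http:base_url}/v1", "MODE": "prod"}
print(check_stdio_env(env))  # → ['tools.mcp.transport.env.API_URL']
```

Keying the error on the offending env entry (rather than a bare `tools.mcp.transport.env`) makes the validation message actionable without scanning the whole map by hand.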

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python-interpreter/packages/afm-core/src/afm/variables.py` around lines 178 -
191, The loop over metadata.tools.mcp misses validating StdioTransport.env
values for `${http:...}`; add a branch when isinstance(server.transport,
StdioTransport) that iterates server.transport.env values and uses
contains_http_variable(...) to detect http variables and append an appropriate
field identifier (e.g. "tools.mcp.transport.env" or
"tools.mcp.transport.env.<key>") to errored_fields; keep existing checks for
HttpTransport (contains_http_variable on url and
_authentication_contains_http_variable on authentication) and reuse the same
helper contains_http_variable to perform the validation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/release-docker.yml:
- Around line 87-104: The slim image (built in the "Build and push slim image"
step using docker/build-push-action@v5) is pushed but not sent to the
vulnerability scanning/upload step; add a follow-up scan/upload step that
consumes the slim image tags (use the existing
steps.docker-tags.outputs.TAGS_SLIM output) and runs the same scanner/upload
action you already use for the full image so the slim variant is analyzed and
uploaded to Security; ensure the new step runs after the slim push and
references the TAGS_SLIM output and the same platforms/build args as the builder
to keep parity.


ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 815af5e and ef6fcc3.

⛔ Files ignored due to path filters (1)
  • python-interpreter/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (14)
  • .github/workflows/python-interpreter.yml
  • .github/workflows/release-docker.yml
  • python-interpreter/Dockerfile
  • python-interpreter/packages/afm-cli/pyproject.toml
  • python-interpreter/packages/afm-core/pyproject.toml
  • python-interpreter/packages/afm-core/src/afm/cli.py
  • python-interpreter/packages/afm-core/src/afm/models.py
  • python-interpreter/packages/afm-core/src/afm/variables.py
  • python-interpreter/packages/afm-core/tests/conftest.py
  • python-interpreter/packages/afm-core/tests/fixtures/sample_stdio_mcp_agent.afm.md
  • python-interpreter/packages/afm-core/tests/test_parser.py
  • python-interpreter/packages/afm-langchain/pyproject.toml
  • python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py
  • python-interpreter/packages/afm-langchain/tests/test_mcp.py

@MaryamZi MaryamZi requested a review from Copilot February 25, 2026 09:23

Copilot AI left a comment


Pull request overview

This PR adds support for stdio-based MCP server connections to the python interpreter, enabling tools to be executed locally via command-line utilities in addition to HTTP-based servers.

Changes:

  • Added StdioTransport model alongside existing HttpTransport for MCP server connections
  • Enhanced MCP client implementation to handle both HTTP and stdio connection types
  • Introduced Docker image variants (full and slim) to support optional stdio dependencies

Reviewed changes

Copilot reviewed 14 out of 15 changed files in this pull request and generated 2 comments.

Show a summary per file
File Description
python-interpreter/packages/afm-core/src/afm/models.py Defines StdioTransport and updates Transport type as a discriminated union
python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py Refactors MCPClient to support both HTTP and stdio transports
python-interpreter/packages/afm-langchain/tests/test_mcp.py Adds comprehensive tests for stdio transport functionality
python-interpreter/packages/afm-core/tests/test_parser.py Tests parsing of stdio MCP configurations from AFM files
python-interpreter/packages/afm-core/tests/fixtures/sample_stdio_mcp_agent.afm.md Sample AFM file demonstrating stdio MCP server configuration
python-interpreter/packages/afm-core/src/afm/variables.py Updates HTTP variable validation to handle transport type discrimination
python-interpreter/packages/afm-core/src/afm/cli.py Updates validation output formatting for different transport types
python-interpreter/Dockerfile Adds support for full and slim variants with optional stdio dependencies
.github/workflows/release-docker.yml Publishes both full and slim Docker image variants
.github/workflows/python-interpreter.yml Builds both variants in CI workflow
python-interpreter/packages/afm-core/pyproject.toml Version bump to 0.2.0
python-interpreter/packages/afm-langchain/pyproject.toml Version bump to 0.2.0 with updated afm-core dependency
python-interpreter/packages/afm-cli/pyproject.toml Version bump to 0.3.0 with updated dependencies


This makes it so that the Ballerina implementation does not fail at the parser level when trying to run an AFM file with a stdio transport MCP server.
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
ballerina-interpreter/parser.bal (1)

272-280: env map keys are not checked for ${http:...} variables.

foreach string val in env iterates over map values only; the corresponding env variable names (keys) are never inspected. If a user embeds an ${http:...} expression as an env key, it would pass validation silently.

This is a low-risk gap — env variable names are OS-level identifiers and not a natural template site — but for completeness the check could be extended to entries.

♻️ Extend the env check to include keys
-                    foreach string val in env {
-                        if containsHttpVariable(val) {
-                            erroredKeys.push("tools.mcp.transport.env");
-                            break;
-                        }
-                    }
+                    foreach [string, string] [envKey, envVal] in env.entries() {
+                        if containsHttpVariable(envKey) || containsHttpVariable(envVal) {
+                            erroredKeys.push("tools.mcp.transport.env");
+                            break;
+                        }
+                    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ballerina-interpreter/parser.bal` around lines 272 - 280, The current loop
over transport.env only inspects map values so keys containing `${http:...}` can
be missed; update the check to iterate entries (or iterate keys and values) for
transport.env and call containsHttpVariable on both the key and the value, and
if either returns true push "tools.mcp.transport.env" into erroredKeys and
break; reference transport.env, containsHttpVariable, and erroredKeys to locate
and update the logic.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ef6fcc3 and e4d9f98.

📒 Files selected for processing (3)
  • ballerina-interpreter/agent.bal
  • ballerina-interpreter/parser.bal
  • ballerina-interpreter/types.bal

Due to branch protection rules, the current version bump step in both the
Ballerina and Python release workflows fails. Disabled them for now. If a
contributor forgets to bump the version, the validate step will still
fail.
@RadCod3 RadCod3 force-pushed the feat/langchain-stdio-support branch from c9976e3 to 67b823a Compare February 25, 2026 14:04
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (2)
python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py (1)

142-145: Harden unsupported-transport fallbacks.

Using transport.type and a generic else branch can mask invalid runtime inputs with AttributeError instead of returning a clear MCPError.

Suggested defensive patch
-        if not isinstance(transport, (HttpTransport, StdioTransport)):
+        if not isinstance(transport, (HttpTransport, StdioTransport)):
             raise MCPError(
-                f"Unsupported transport type: {transport.type}",
+                f"Unsupported transport type: {type(transport).__name__}",
                 server_name=server.name,
             )

     def _build_connection_config(self) -> StreamableHttpConnection | StdioConnection:
         if isinstance(self.transport, HttpTransport):
             config: StreamableHttpConnection = {
                 "transport": "streamable_http",
                 "url": self.transport.url,
             }
             auth = build_httpx_auth(self.transport.authentication)
             if auth is not None:
                 config["auth"] = auth
             return config

-        else:
+        elif isinstance(self.transport, StdioTransport):
             config: StdioConnection = {
                 "transport": "stdio",
                 "command": self.transport.command,
                 "args": self.transport.args or [],
             }
             if self.transport.env is not None:
                 config["env"] = self.transport.env
             return config
+
+        raise MCPError(
+            f"Unsupported transport type: {type(self.transport).__name__}",
+            server_name=self.name,
+        )

Also applies to: 154-173

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py`
around lines 142 - 145, The current unsupported-transport checks access
transport.type and rely on an else branch, which can raise AttributeError for
invalid runtime inputs; update the checks around the MCPError raises
(references: HttpTransport, StdioTransport, MCPError, transport.type,
server.name) to defensively read the transport kind using getattr(transport,
"type", None) or falling back to type(transport).__name__ and include that safe
representation in the MCPError message, and apply the same defensive change to
the similar block covering lines 154-173 so no AttributeError is possible when
transport is malformed.
.github/workflows/release-docker.yml (1)

65-71: Use the derived slim tag output as the slim scan target.

Line 132 rebuilds the tag string manually. Reusing a primary slim tag output from the tag step avoids drift if tag formatting changes.

♻️ Suggested patch
           TAGS_SLIM="$FULL_IMAGE:v$VERSION-slim"
           [ "$UPDATE_LATEST" = "true" ] && TAGS_SLIM="$TAGS_SLIM,$FULL_IMAGE:slim"
           echo "TAGS_SLIM=$TAGS_SLIM" >> $GITHUB_OUTPUT
+          echo "TAG_SLIM_PRIMARY=$FULL_IMAGE:v$VERSION-slim" >> $GITHUB_OUTPUT
           echo "FULL_IMAGE=$FULL_IMAGE" >> $GITHUB_OUTPUT
@@
       - name: Scan slim Docker image for vulnerabilities
         if: ${{ always() && inputs.build_slim }}
         uses: aquasecurity/trivy-action@0.34.0
         with:
-          image-ref: ${{ steps.docker-tags.outputs.FULL_IMAGE }}:v${{ inputs.version }}-slim
+          image-ref: ${{ steps.docker-tags.outputs.TAG_SLIM_PRIMARY }}
           format: "sarif"
           output: "trivy-results-slim.sarif"

Also applies to: 132-132

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release-docker.yml around lines 65 - 71, The slim scan
target should use the previously derived TAGS_SLIM output instead of
reconstructing the slim tag string; update the step that currently rebuilds the
slim tag (where the scan target is set) to consume the TAGS_SLIM GitHub output
value written with echo "TAGS_SLIM=$TAGS_SLIM" >> $GITHUB_OUTPUT (same pattern
used for TAGS_FULL), ensuring the scan step reads the TAGS_SLIM variable (not
manually concatenating FULL_IMAGE/v$VERSION-slim) so tag formatting stays
consistent if TAGS_SLIM logic changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/release-python.yml:
- Around line 196-203: The current finalize job condition allows
needs.pypi-publish.result == 'skipped' unconditionally which can let finalize
(and bump-version) run after unintended skips; update the guard so finalize only
proceeds when pypi-publish succeeded OR when pypi-publish explicitly signals an
intentional skip (e.g. expose an output like
needs.pypi-publish.outputs.intentional_skip == 'true' from the pypi-publish job
and change the finalize if to require that output instead of a raw 'skipped'
result); ensure the pypi-publish job sets that outputs.intentional_skip flag in
the intentional-skip paths so finalize and bump-version are gated correctly.

In `@python-interpreter/Dockerfile`:
- Line 34: Validate the build ARG VARIANT right after it's declared by checking
its value and failing fast or emitting a clear error when it is not exactly
"full"; update the Dockerfile to test the VARIANT variable (e.g., using a shell
conditional that compares VARIANT = "full") and call echo + exit 1 or use an
explicit warning if the value differs (this prevents silent fallback to slim
when callers pass "Full", "FULL", "slim" or typos); reference the ARG VARIANT
declaration and the subsequent conditional logic in the Dockerfile to implement
this guard.
- Around line 41-45: Replace the unconditional "COPY
--from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/" layer with a single RUN
that uses BuildKit's --mount=from=ghcr.io/astral-sh/uv:0.10.0 to access /uv-bins
at build time so files are never persisted in an image layer; inside that same
RUN, check VARIANT ("full") and mv the needed binaries (/uv-bins/uv and
/uv-bins/uvx) into /bin/ when required and remove /uv-bins, ensuring the mv and
cleanup happen in the same RUN that mounts the uv image (refer to the existing
RUN, VARIANT, /uv-bins, uv, uvx and --mount=from= usage).

---

Nitpick comments:
In @.github/workflows/release-docker.yml:
- Around line 65-71: The slim scan target should use the previously derived
TAGS_SLIM output instead of reconstructing the slim tag string; update the step
that currently rebuilds the slim tag (where the scan target is set) to consume
the TAGS_SLIM GitHub output value written with echo "TAGS_SLIM=$TAGS_SLIM" >>
$GITHUB_OUTPUT (same pattern used for TAGS_FULL), ensuring the scan step reads
the TAGS_SLIM variable (not manually concatenating FULL_IMAGE/v$VERSION-slim) so
tag formatting stays consistent if TAGS_SLIM logic changes.

In `@python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py`:
- Around line 142-145: The current unsupported-transport checks access
transport.type and rely on an else branch, which can raise AttributeError for
invalid runtime inputs; update the checks around the MCPError raises
(references: HttpTransport, StdioTransport, MCPError, transport.type,
server.name) to defensively read the transport kind using getattr(transport,
"type", None) or falling back to type(transport).__name__ and include that safe
representation in the MCPError message, and apply the same defensive change to
the similar block covering lines 154-173 so no AttributeError is possible when
transport is malformed.
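The defensive read described above can be sketched as follows; `MCPError`, `HttpTransport`, and `StdioTransport` here are minimal stand-ins for the real classes in `afm_langchain.tools.mcp`, and the helper names are illustrative, not the actual implementation:

```python
# Minimal stand-ins for the real afm_langchain.tools.mcp classes.
class MCPError(Exception):
    pass

class HttpTransport:
    type = "http"

class StdioTransport:
    type = "stdio"

def transport_kind(transport) -> str:
    """Read the transport kind without risking an AttributeError.

    Falls back to the class name when the object has no `type` attribute,
    so even malformed runtime inputs produce a useful error message.
    """
    kind = getattr(transport, "type", None)
    return kind if kind is not None else type(transport).__name__

def check_supported(transport, server_name: str) -> None:
    """Raise MCPError with a safe representation for unsupported transports."""
    if not isinstance(transport, (HttpTransport, StdioTransport)):
        raise MCPError(
            f"Unsupported transport '{transport_kind(transport)}' "
            f"for server '{server_name}'"
        )
```

With this shape, passing a bare `object()` (or any malformed value) raises a descriptive `MCPError` instead of an `AttributeError` from `transport.type`.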

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e4d9f98 and c9976e3.

⛔ Files ignored due to path filters (1)
  • python-interpreter/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • .github/workflows/release-docker.yml
  • .github/workflows/release-python.yml
  • python-interpreter/Dockerfile
  • python-interpreter/packages/afm-langchain/src/afm_langchain/tools/mcp.py

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
python-interpreter/Dockerfile (2)

34-34: ⚠️ Potential issue | 🟡 Minor

Fail fast for invalid VARIANT values.

At Line 34 and Line 36, any value other than "full" silently falls through to slim behavior. Add an explicit guard so typos don’t produce unexpected images.

Suggested fix
 ARG VARIANT=full
+RUN case "$VARIANT" in \
+      full|slim) ;; \
+      *) echo "ERROR: VARIANT must be 'full' or 'slim' (got '$VARIANT')" >&2; exit 1 ;; \
+    esac
 
 RUN if [ "$VARIANT" = "full" ]; then \
         apk add --no-cache nodejs npm git; \
     fi

Also applies to: 36-38

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python-interpreter/Dockerfile` at line 34, The Dockerfile currently sets ARG
VARIANT=full but silently treats any non-"full" value as the slim path; add a
fail-fast guard that validates ARG VARIANT is one of the allowed values (e.g.,
"full" or "slim") and aborts the build with a clear error message for
typos/invalid values. Implement this by inserting a RUN validation step after
ARG VARIANT that checks VARIANT and exits non‑zero with a descriptive message
when VARIANT is not "full" or "slim", so the subsequent branching logic that
relies on VARIANT behaves deterministically.

41-45: ⚠️ Potential issue | 🟠 Major

Unconditional uv/uvx COPY still bloats slim image layers.

Line 41 always adds uv binaries to an image layer. Removing /uv-bins at Line 45 does not reclaim that layer size. For slim builds, this defeats the footprint goal.

Suggested fix (BuildKit mount, no persisted uv layer)
-# Install uv and uvx for Python-based MCP server support (full variant only)
-COPY --from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/
-RUN if [ "$VARIANT" = "full" ]; then \
-        mv /uv-bins/uv /uv-bins/uvx /bin/; \
-    fi && \
-    rm -rf /uv-bins
+# Install uv and uvx for Python-based MCP server support (full variant only)
+RUN --mount=from=ghcr.io/astral-sh/uv:0.10.0,source=/uv,target=/tmp/uv \
+    --mount=from=ghcr.io/astral-sh/uv:0.10.0,source=/uvx,target=/tmp/uvx \
+    if [ "$VARIANT" = "full" ]; then \
+        cp /tmp/uv /tmp/uvx /bin/; \
+    fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python-interpreter/Dockerfile` around lines 41 - 45, The Dockerfile currently
unconditionally COPYs uv binaries which creates a persisted layer; instead
remove the unconditional COPY --from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx
/uv-bins/ and use a BuildKit mount in the RUN that conditionally moves binaries
when VARIANT=full (e.g. replace the RUN that references VARIANT with a
buildkit-aware RUN --mount=type=cache/--mount=from=ghcr.io/astral-sh/uv:0.10.0
to access /uv and /uvx at build time), so the uv files are never committed into
an image layer for slim builds; update the RUN that uses VARIANT to reference
the buildkit mount and remove the final rm -rf /uv-bins cleanup since nothing is
persisted when using the mount.
.github/workflows/release-python.yml (1)

196-203: finalize still triggers on unintentional pypi-publish skips.

The condition at line 201 allows needs.pypi-publish.result == 'skipped' unconditionally. If test fails (or any other upstream dependency of pypi-publish fails), pypi-publish is skipped—and because finalize only checks validate.result, it will still proceed to tag and finalize a release that never actually published.

The fix is to add needs.test.result == 'success' to finalize's needs/condition, and to guard the skip allowance with inputs.skip_pypi so only an intentional skip is accepted.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release-python.yml around lines 196 - 203, The finalize
job currently allows needs.pypi-publish.result == 'skipped' unconditionally and
lacks a dependency on the test job; update the finalize job (the job named
finalize in the release workflow) to add needs: test and require
needs.test.result == 'success' in its if condition, and change the pypi skip
allowance to only permit needs.pypi-publish.result == 'skipped' when the
workflow input inputs.skip_pypi is true (e.g., include a clause checking
inputs.skip_pypi == 'true' before accepting the skipped result); this ensures
finalize only runs when tests passed and PyPI was intentionally skipped.
🧹 Nitpick comments (1)
.github/workflows/release-python.yml (1)

217-217: Remove the permanently disabled bump-version job to satisfy actionlint.

if: false is a constant expression that actionlint (and the CI pipeline) flags as an error. Since direct pushes are blocked by branch protection and the job is commented as disabled indefinitely, the cleanest resolution is to delete the entire job. If it's meant to be re-enabled later, track it via an issue rather than dead workflow YAML.

✂️ Proposed removal of the dead job
-  bump-version:
-    needs: [validate, finalize]
-    if: false  # Disabled: direct push blocked by branch protection
-    runs-on: ubuntu-latest
-    permissions:
-      contents: write
-    steps:
-      - name: Checkout repository
-        ...
-      - name: Bump version to next
-        ...
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release-python.yml at line 217, Remove the permanently
disabled GitHub Actions job named "bump-version" from the workflow YAML (the job
block that contains "if: false") — delete the entire job definition including
its name, steps, and any references to it so the workflow no longer contains a
constant false conditional that triggers actionlint; if you need to preserve the
intent, record it in an issue or other documentation rather than leaving the
disabled job in the workflow.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/release-ballerina.yml:
- Line 142: The release workflow has the "bump-version" job permanently disabled
by the literal condition "if: false", which prevents automatic post-release
version increments; restore version progression by re-enabling or relocating the
bump logic: either remove/replace the "if: false" guard on the bump-version job
in .github/workflows/release-ballerina.yml and add appropriate safeguards (e.g.,
require release-finalize success, check actor or tag, or use a dispatch-input
flag), or move the version bump steps into release-finalize.yml (invoke the
existing bump-version steps or call the same script) and wire a boolean input
(e.g., bump: true) and condition to control execution; ensure the job still runs
with proper permissions (GITHUB_TOKEN/push rights) and that the job name
"bump-version" and release-finalize workflow references are updated
consistently.

---

Duplicate comments:
In @.github/workflows/release-python.yml:
- Around line 196-203: The finalize job currently allows
needs.pypi-publish.result == 'skipped' unconditionally and lacks a dependency on
the test job; update the finalize job (the job named finalize in the release
workflow) to add needs: test and require needs.test.result == 'success' in its
if condition, and change the pypi skip allowance to only permit
needs.pypi-publish.result == 'skipped' when the workflow input inputs.skip_pypi
is true (e.g., include a clause checking inputs.skip_pypi == 'true' before
accepting the skipped result); this ensures finalize only runs when tests passed
and PyPI was intentionally skipped.

In `@python-interpreter/Dockerfile`:
- Line 34: The Dockerfile currently sets ARG VARIANT=full but silently treats
any non-"full" value as the slim path; add a fail-fast guard that validates ARG
VARIANT is one of the allowed values (e.g., "full" or "slim") and aborts the
build with a clear error message for typos/invalid values. Implement this by
inserting a RUN validation step after ARG VARIANT that checks VARIANT and exits
non‑zero with a descriptive message when VARIANT is not "full" or "slim", so the
subsequent branching logic that relies on VARIANT behaves deterministically.
- Around line 41-45: The Dockerfile currently unconditionally COPYs uv binaries
which creates a persisted layer; instead remove the unconditional COPY
--from=ghcr.io/astral-sh/uv:0.10.0 /uv /uvx /uv-bins/ and use a BuildKit mount
in the RUN that conditionally moves binaries when VARIANT=full (e.g. replace the
RUN that references VARIANT with a buildkit-aware RUN
--mount=type=cache/--mount=from=ghcr.io/astral-sh/uv:0.10.0 to access /uv and
/uvx at build time), so the uv files are never committed into an image layer for
slim builds; update the RUN that uses VARIANT to reference the buildkit mount
and remove the final rm -rf /uv-bins cleanup since nothing is persisted when
using the mount.

---

Nitpick comments:
In @.github/workflows/release-python.yml:
- Line 217: Remove the permanently disabled GitHub Actions job named
"bump-version" from the workflow YAML (the job block that contains "if: false")
— delete the entire job definition including its name, steps, and any references
to it so the workflow no longer contains a constant false conditional that
triggers actionlint; if you need to preserve the intent, record it in an issue
or other documentation rather than leaving the disabled job in the workflow.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c9976e3 and 67b823a.

⛔ Files ignored due to path filters (1)
  • python-interpreter/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
  • .github/workflows/release-ballerina.yml
  • .github/workflows/release-python.yml
  • python-interpreter/Dockerfile

- Bind mount uv to fix unconditional copying of uv/uvx binaries into the slim image
- Add variant validation
- Guard finalize against unintended pypi-publish skips
Contributor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
.github/workflows/release-python.yml (1)

196-203: ⚠️ Potential issue | 🟠 Major

finalize can still run after failed tests when skip_pypi=true.

Line 196 and Line 199-203 do not require test success. With skip_pypi=true, both pypi-publish and docker can be skipped, yet finalize still passes.

🔧 Proposed fix
  finalize:
-   needs: [validate, pypi-publish, docker]
+   needs: [validate, test, pypi-publish, docker]
    if: >-
      !cancelled()
      && needs.validate.result == 'success'
+     && needs.test.result == 'success'
      && (needs.pypi-publish.result == 'success'
        || (needs.pypi-publish.result == 'skipped' && inputs.skip_pypi))
      && (needs.docker.result == 'success'
        || needs.docker.result == 'skipped')
#!/bin/bash
set -euo pipefail
file=".github/workflows/release-python.yml"

echo "Checking finalize needs/guard..."
nl -ba "$file" | sed -n '195,206p'
echo
echo "Searching for explicit finalize test gate:"
rg -n "needs\.test\.result\s*==\s*'success'" "$file" || true

# Expected after fix:
# - finalize needs includes test
# - finalize if includes needs.test.result == 'success'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release-python.yml around lines 196 - 203, The finalize
job’s gating lacks a requirement that tests passed, so add the test job to the
finalize job’s needs and include a check for needs.test.result == 'success' in
the finalize job’s if condition (update the finalize job declaration named
"finalize" and its existing needs/if block that currently references
needs.validate, needs.pypi-publish, and needs.docker) so finalize cannot run
when tests failed even if pypi-publish or docker are skipped; ensure both the
needs array and the if expression include the test requirement.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In @.github/workflows/release-python.yml:
- Around line 196-203: The finalize job’s gating lacks a requirement that tests
passed, so add the test job to the finalize job’s needs and include a check for
needs.test.result == 'success' in the finalize job’s if condition (update the
finalize job declaration named "finalize" and its existing needs/if block that
currently references needs.validate, needs.pypi-publish, and needs.docker) so
finalize cannot run when tests failed even if pypi-publish or docker are
skipped; ensure both the needs array and the if expression include the test
requirement.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 67b823a and 4ad4e7b.

📒 Files selected for processing (2)
  • .github/workflows/release-python.yml
  • python-interpreter/Dockerfile
🚧 Files skipped from review as they are similar to previous changes (1)
  • python-interpreter/Dockerfile

- Make the default port of the python-interpreter 8085
- Serve the chat UI under /chat/ui regardless of path setting
Package managers (uvx, npx) emit install output to stdout on first run,
which the MCP client logs as ERROR. Add MCPStdioNoiseFilter to silence
this noise in normal mode while keeping it visible at DEBUG level.
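The filter described in this commit could look roughly like the sketch below; the class name `MCPStdioNoiseFilter` comes from the commit message, but the noise markers and logger wiring are assumptions for illustration:

```python
import logging

class MCPStdioNoiseFilter(logging.Filter):
    """Demote package-manager install chatter (npx/uvx first-run output)
    that the MCP client would otherwise surface as ERROR records.
    """

    # Assumed markers of installer noise; the real filter's criteria may differ.
    NOISE_MARKERS = ("npm warn", "Installed", "Downloading")

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        if record.levelno >= logging.ERROR and any(
            marker in message for marker in self.NOISE_MARKERS
        ):
            # Suppress in normal mode; keep visible when DEBUG is enabled.
            return logging.getLogger(record.name).isEnabledFor(logging.DEBUG)
        return True
```

Attached via `logging.getLogger("mcp.client").addFilter(MCPStdioNoiseFilter())` (logger name assumed), installer noise is dropped at the default level but reappears once the logger is set to DEBUG.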
@RadCod3 RadCod3 requested a review from Copilot February 26, 2026 08:34

Copilot AI left a comment


Pull request overview

Copilot reviewed 26 out of 27 changed files in this pull request and generated 3 comments.



@RadCod3 RadCod3 force-pushed the feat/langchain-stdio-support branch from c767db4 to 4dc533a on February 26, 2026 10:51
@@ -57,20 +62,23 @@ jobs:
# GHCR requires lowercase repository names
OWNER_LOWER=$(echo "$OWNER" | tr '[:upper:]' '[:lower:]')
FULL_IMAGE="ghcr.io/$OWNER_LOWER/$IMAGE_NAME"
Member


Can we change this variable name now that we have two variants of the Docker image, including a "full" one? :)

Contributor Author


Addressed in 49c15ed

if args is string[] {
    foreach string arg in args {
        if containsHttpVariable(arg) {
            erroredKeys.push("tools.mcp.transport.args");
Member


I guess we could go one level down and include the arg?

Contributor Author


Addressed in 7f3763b

if env is map<string> {
    foreach string val in env {
        if containsHttpVariable(val) {
            erroredKeys.push("tools.mcp.transport.env");
Member


ditto.

Contributor Author


Addressed in 7f3763b

Comment on lines +186 to +188
if _authentication_contains_http_variable(
    server.transport.authentication
):
Member


Can this go on one line for readability? Or is it too long?

Contributor Author


Yes, it was too long, but I renamed the function and now it's one line: 49c15ed

    errored_fields.append("tools.mcp.transport.url")
if _authentication_contains_http_variable(server.transport.authentication):
    errored_fields.append("tools.mcp.transport.authentication")
# HTTP transport fields: url and authentication
Member


Self-explanatory, noh?

Contributor Author


Addressed in 49c15ed

Use array indices (args[i]) and map keys (env.KEY) instead of generic
field paths when reporting http: variable violations in StdioTransport.
Added matching validation to the Python implementation and tests for
both interpreters.
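The indexed-path reporting this commit describes can be sketched as below; the field-path strings follow the review thread (`tools.mcp.transport.args` / `.env`), while the helper name and the `http:` variable syntax are assumptions for illustration:

```python
def _contains_http_variable(value: str) -> bool:
    # Assumed placeholder syntax for deferred HTTP-resolved variables.
    return "http:" in value

def collect_errored_fields(args: list[str], env: dict[str, str]) -> list[str]:
    """Report each offending arg by index and each env value by its key,
    rather than the generic tools.mcp.transport.args / .env paths."""
    errored: list[str] = []
    for i, arg in enumerate(args):
        if _contains_http_variable(arg):
            errored.append(f"tools.mcp.transport.args[{i}]")
    for key, val in env.items():
        if _contains_http_variable(val):
            errored.append(f"tools.mcp.transport.env.{key}")
    return errored
```

This points the user at the exact argument or environment entry that carried the disallowed variable instead of the whole collection.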
@RadCod3 RadCod3 merged commit 87cbbd1 into wso2:main Feb 27, 2026
6 checks passed
@RadCod3 RadCod3 deleted the feat/langchain-stdio-support branch February 27, 2026 14:58
@RadCod3 RadCod3 restored the feat/langchain-stdio-support branch February 27, 2026 15:01
Development

Successfully merging this pull request may close these issues.

Add stdio mcp tool support to python interpreter

3 participants