Skip to content
Open
Show file tree
Hide file tree
Changes from 1 commit
Commits
File filter

Filter by extension

Filter by extension


Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
85 changes: 85 additions & 0 deletions samples/python/agents/a2a_multi_agent_on_single_port/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,85 @@
# Example: Using a2a-python SDK Without an LLM Framework

This repository demonstrates how to set up and use the [a2a-python SDK](https://github.com/google/a2a-python) to create a simple server and client, without relying on any agent framework.

## Overview

- **A2A (Agent-to-Agent):** A protocol and SDK for communication with AI Agents.
- **This Example:** Shows how to support multiple A2AStarletteApplication instances or AgentExecutor implementations on a single port.
Comment on lines +1 to +8
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The title and overview of this README seem to be copied from another example and don't accurately reflect the purpose of this sample. The title says "Using a2a-python SDK Without an LLM Framework", but the example is about "support multiple A2AStarletteApplication instances or AgentExecutor implementations on a single port". This is confusing for users. Please update the title and description to match the example's content.

For example:
Title: Example: Supporting Multiple A2A Agents on a Single Port

Also, the file structure references no_llm_framework which is also confusing. Consider renaming the directory to better reflect the example.


## Prerequisites

- Python 3.13+
- [uv](https://github.com/astral-sh/uv) (for fast dependency management and running)
- An API key for Gemini (set as `GEMINI_API_KEY`)

## Installation

1. **Clone the repository:**

```bash
git clone <this-repo-url>
cd <repo-directory>
```

2. **Install dependencies:**

```bash
uv pip install -e .
```

3. **Set environment variables:**

```bash
export GEMINI_API_KEY=your-gemini-api-key
```

Or create a `.env` file with:

```sh
GEMINI_API_KEY=your-gemini-api-key
```

## Running the Example

### 1. Start the Server

```bash
uv run --env-file .env python -m src.no_llm_framework.server.__main__
```

- The server will start on port `9999`.

### 2. Run the Client

In a new terminal:

```bash
uv run --env-file .env python -m src.no_llm_framework.client --question "What is A2A protocol?"
```

- The client will connect to the server and send a request.

### 3. View the Response

- The response from the client will be saved to [`response.xml`](./response.xml).

## File Structure

- `src/no_llm_framework/server/`: Server implementation.
- `src/no_llm_framework/client/`: Client implementation.
- `response.xml`: Example response from the client.

## Troubleshooting

- **Missing dependencies:** Make sure you have `uv` installed.
- **API key errors:** Ensure `GEMINI_API_KEY` is set correctly.
- **Port conflicts:** Make sure port 9999 is free.

## Disclaimer

Important: The sample code provided is for demonstration purposes and illustrates the mechanics of the Agent-to-Agent (A2A) protocol. When building production applications, it is critical to treat any agent operating outside of your direct control as a potentially untrusted entity.

All data received from an external agent—including but not limited to its AgentCard, messages, artifacts, and task statuses—should be handled as untrusted input. For example, a malicious agent could provide an AgentCard containing crafted data in its fields (e.g., description, name, skills.description). If this data is used without sanitization to construct prompts for a Large Language Model (LLM), it could expose your application to prompt injection attacks. Failure to properly validate and sanitize this data before use can introduce security vulnerabilities into your application.

Developers are responsible for implementing appropriate security measures, such as input validation and secure handling of credentials to protect their systems and users.
Original file line number Diff line number Diff line change
@@ -0,0 +1,24 @@
[project]
name = "no-llm-framework"
version = "0.1.0"
description = "Use A2A without any agent framework"
Comment on lines +2 to +4
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The project name and description in this file are misleading. They seem to be from a different example. The name is "no-llm-framework" and description is "Use A2A without any agent framework", while this example is about supporting multiple agents on a single port. This can cause confusion. Please update them to be more descriptive of this specific example.

name = "a2a-multi-agent-on-single-port"
version = "0.1.0"
description = "An example of how to support multiple A2A agents on a single port"

readme = "README.md"
authors = [{ name = "prem", email = "[email protected]" }]
requires-python = ">=3.13"
dependencies = [
"a2a-sdk>=0.3.0",
"asyncclick>=8.1.8",
"colorama>=0.4.6",
"fastmcp>=2.3.4",
"google-genai",
"jinja2>=3.1.6",
"rich>=14.0.0",
"starlette>=0.47.2"
]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project.scripts]
a2a-server = "no_llm_framework.server.__main__:main"
a2a-client = "no_llm_framework.client.__main__:main"
486 changes: 486 additions & 0 deletions samples/python/agents/a2a_multi_agent_on_single_port/response.xml

Large diffs are not rendered by default.

Original file line number Diff line number Diff line change
@@ -0,0 +1,50 @@
import asyncio
from typing import Literal


import asyncclick as click
import colorama
from no_llm_framework.client.agent import Agent


@click.command()
@click.option('--host', 'host', default='localhost')
@click.option('--port', 'port', default=9999)
@click.option('--mode', 'mode', default='streaming')
@click.option('--question', 'question', required=True)
async def a_main(
host: str,
port: int,
mode: Literal['completion', 'streaming'],
Comment on lines +13 to +18
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

high

The mode command-line option is defined but its value is never used; the Agent is always initialized with mode='stream' on line 31. Additionally, the choices for the mode option ('completion', 'streaming') do not match the values expected by the Agent class ('complete', 'stream').

To fix this, you should pass the mode to the Agent and align the option choices.

question: str,
):
"""Main function to run the A2A Repo Agent client.

Args:
host (str): The host address to run the server on.
port (int): The port number to run the server on.
mode (Literal['completion', 'streaming']): The mode to run the server on.
question (str): The question to ask the Agent.
""" # noqa: E501
agent_index = 1
agent = Agent(
mode='stream',
token_stream_callback=None,
agent_urls=[f'http://{host}:{port}/{agent_index}'],
)
async for chunk in agent.stream(question):
if chunk.startswith('<Agent name="'):
print(colorama.Fore.CYAN + chunk, end='', flush=True)
elif chunk.startswith('</Agent>'):
print(colorama.Fore.RESET + chunk, end='', flush=True)
else:
print(chunk, end='', flush=True)


def main() -> None:
"""Main function to run the A2A Repo Agent client."""
asyncio.run(a_main())


if __name__ == '__main__':
main()
Loading