# feat: support multiple A2AStarletteApplication on a single port #323
@@ -0,0 +1,85 @@
# Example: Using a2a-python SDK Without an LLM Framework

This repository demonstrates how to set up and use the [a2a-python SDK](https://github.com/google/a2a-python) to create a simple server and client, without relying on any agent framework.

## Overview
- **A2A (Agent-to-Agent):** A protocol and SDK for communication between AI agents.
- **This example:** Shows how to support multiple A2AStarletteApplication instances or AgentExecutor implementations on a single port.
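The single-port setup boils down to mounting several sub-applications under one parent ASGI app and dispatching by path prefix. Below is a minimal, dependency-free sketch of that dispatch idea; the real example presumably uses Starlette's `Mount` with each `A2AStarletteApplication`'s built app, so `make_agent_app` and `mount` here are hypothetical stand-ins, not the SDK's API.

```python
import asyncio


def make_agent_app(name: str):
    """Return a minimal ASGI app standing in for one mounted agent application."""

    async def app(scope, receive, send):
        body = f'agent {name} handled {scope["path"]}'.encode()
        await send({'type': 'http.response.start', 'status': 200,
                    'headers': [(b'content-type', b'text/plain')]})
        await send({'type': 'http.response.body', 'body': body})

    return app


def mount(apps: dict):
    """Dispatch each request to a sub-app by path prefix, like Starlette's Mount."""

    async def parent(scope, receive, send):
        for prefix, sub in apps.items():
            if scope['path'].startswith(prefix):
                # Strip the prefix so the sub-app sees its own root paths.
                sub_scope = dict(scope, path=scope['path'][len(prefix):] or '/')
                await sub(sub_scope, receive, send)
                return
        await send({'type': 'http.response.start', 'status': 404, 'headers': []})
        await send({'type': 'http.response.body', 'body': b'not found'})

    return parent


# Two agent apps share one parent app, hence one port.
app = mount({'/1': make_agent_app('one'), '/2': make_agent_app('two')})
# A real server would run this with e.g.: uvicorn.run(app, port=9999)
```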
## Prerequisites

- Python 3.13+
- [uv](https://github.com/astral-sh/uv) (for fast dependency management and running)
- An API key for Gemini (set as `GEMINI_API_KEY`)
## Installation

1. **Clone the repository:**

   ```bash
   git clone <this-repo-url>
   cd <repo-directory>
   ```

2. **Install dependencies:**

   ```bash
   uv pip install -e .
   ```

3. **Set environment variables:**

   ```bash
   export GEMINI_API_KEY=your-gemini-api-key
   ```

   Or create a `.env` file with:

   ```sh
   GEMINI_API_KEY=your-gemini-api-key
   ```
## Running the Example

### 1. Start the Server

```bash
uv run --env-file .env python -m src.no_llm_framework.server.__main__
```

- The server will start on port `9999`.
### 2. Run the Client

In a new terminal:

```bash
uv run --env-file .env python -m src.no_llm_framework.client --question "What is A2A protocol?"
```

- The client will connect to the server and send a request.
### 3. View the Response

- The response from the client will be saved to [`response.xml`](./response.xml).
## File Structure

- `src/no_llm_framework/server/`: Server implementation.
- `src/no_llm_framework/client/`: Client implementation.
- `response.xml`: Example response from the client.
## Troubleshooting

- **Missing dependencies:** Make sure you have `uv` installed.
- **API key errors:** Ensure `GEMINI_API_KEY` is set correctly.
- **Port conflicts:** Make sure port `9999` is free.
## Disclaimer

Important: The sample code provided is for demonstration purposes and illustrates the mechanics of the Agent-to-Agent (A2A) protocol. When building production applications, it is critical to treat any agent operating outside of your direct control as a potentially untrusted entity.

All data received from an external agent, including but not limited to its AgentCard, messages, artifacts, and task statuses, should be handled as untrusted input. For example, a malicious agent could provide an AgentCard containing crafted data in its fields (e.g., `description`, `name`, `skills.description`). If this data is used without sanitization to construct prompts for a Large Language Model (LLM), it could expose your application to prompt-injection attacks. Failure to properly validate and sanitize this data before use can introduce security vulnerabilities into your application.

Developers are responsible for implementing appropriate security measures, such as input validation and secure handling of credentials, to protect their systems and users.
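As a concrete illustration of the sanitization advice above, here is a minimal sketch of reducing an untrusted AgentCard field to plain, bounded text before prompt interpolation. The function name and the specific filtering rules are illustrative assumptions, not part of this example or the a2a-sdk; real policies depend on your prompt format and threat model.

```python
import re


def sanitize_agent_field(value: str, max_len: int = 256) -> str:
    """Reduce an untrusted AgentCard field to plain, bounded text.

    Illustrative only: adapt the rules to your own prompt format.
    """
    # Drop ASCII control characters that can smuggle formatting into prompts.
    value = re.sub(r'[\x00-\x1f\x7f]', ' ', value)
    # Remove markup that mimics common chat-prompt role delimiters.
    value = re.sub(r'</?\s*(system|assistant|user|tool)[^>]*>', '', value,
                   flags=re.IGNORECASE)
    # Bound the length so one field cannot dominate the prompt.
    return value[:max_len].strip()
```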
@@ -0,0 +1,24 @@
```toml
[project]
name = "no-llm-framework"
version = "0.1.0"
description = "Use A2A without any agent framework"
```

> **Review comment on lines +2 to +4:** The project
```toml
readme = "README.md"
authors = [{ name = "prem", email = "[email protected]" }]
requires-python = ">=3.13"
dependencies = [
    "a2a-sdk>=0.3.0",
    "asyncclick>=8.1.8",
    "colorama>=0.4.6",
    "fastmcp>=2.3.4",
    "google-genai",
    "jinja2>=3.1.6",
    "rich>=14.0.0",
    "starlette>=0.47.2",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project.scripts]
a2a-server = "no_llm_framework.server.__main__:main"
a2a-client = "no_llm_framework.client.__main__:main"
```
Large diffs are not rendered by default.
@@ -0,0 +1,50 @@
```python
import asyncio
from typing import Literal

import asyncclick as click
import colorama

from no_llm_framework.client.agent import Agent


@click.command()
@click.option('--host', 'host', default='localhost')
@click.option('--port', 'port', default=9999)
@click.option('--mode', 'mode', default='streaming')
@click.option('--question', 'question', required=True)
async def a_main(
    host: str,
    port: int,
    mode: Literal['completion', 'streaming'],
    question: str,
):
    """Run the A2A Repo Agent client.

    Args:
        host (str): The host address of the server to connect to.
        port (int): The port number of the server to connect to.
        mode (Literal['completion', 'streaming']): The mode to run the client in.
        question (str): The question to ask the Agent.
    """  # noqa: E501
    agent_index = 1
    agent = Agent(
        mode='stream',
        token_stream_callback=None,
        agent_urls=[f'http://{host}:{port}/{agent_index}'],
    )
    async for chunk in agent.stream(question):
        if chunk.startswith('<Agent name="'):
            print(colorama.Fore.CYAN + chunk, end='', flush=True)
        elif chunk.startswith('</Agent>'):
            print(colorama.Fore.RESET + chunk, end='', flush=True)
        else:
            print(chunk, end='', flush=True)


def main() -> None:
    """Run the A2A Repo Agent client."""
    asyncio.run(a_main())


if __name__ == '__main__':
    main()
```

> **Review comment on lines +13 to +18:** The `mode` option is defined but not used; the `Agent` is constructed with a hardcoded `mode='stream'`. To fix this, you should pass the `mode` value through to the `Agent` constructor.
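The streaming loop in `a_main` can be exercised without a running server by swapping in a stub agent. A quick sketch follows; `FakeAgent` is hypothetical and only mimics the `stream` interface the client code above assumes.

```python
import asyncio


class FakeAgent:
    """Stub standing in for no_llm_framework.client.agent.Agent."""

    async def stream(self, question: str):
        # Yield chunks shaped like the ones a_main colorizes.
        for chunk in ('<Agent name="repo">', f'answer to: {question}', '</Agent>'):
            yield chunk


async def collect(agent, question: str) -> list[str]:
    # Same consumption pattern as a_main's `async for chunk in agent.stream(...)`.
    return [chunk async for chunk in agent.stream(question)]


chunks = asyncio.run(collect(FakeAgent(), 'What is A2A protocol?'))
```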
> **Review comment:** The title and overview of this README seem to be copied from another example and don't accurately reflect the purpose of this sample. The title says "Using a2a-python SDK Without an LLM Framework", but the example is about supporting multiple A2AStarletteApplication instances or AgentExecutor implementations on a single port. This is confusing for users. Please update the title and description to match the example's content. For example, the title could be "Example: Supporting Multiple A2A Agents on a Single Port". Also, the file structure references `no_llm_framework`, which is also confusing. Consider renaming the directory to better reflect the example.