A2A Roadmap #11672
-
LiteLLM's proxy would operate purely as a registry/catalog, and the SDK would include an A2A client. The proxy would include the same core functionality that the MCP server support was implemented with.
Reasoning: it is the A2A client's responsibility (external clients and the LiteLLM SDK alike) to identify the agent it needs to interact with via the agent card, which every A2A server is required to publish. This means the LiteLLM proxy only needs to hold this small amount of info and expose it so clients can discover the agents. For other discovery options, the UI can be a bit friendlier and include a catalog view of the registered agents.
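Since every A2A server must publish an agent card, the proxy's catalog can be populated straight from those cards. A minimal card following the AgentCard fields in the A2A specification (values mirror the v0 example below; the skill id and tags are illustrative):

```json
{
  "name": "Hello World Agent",
  "description": "Just a hello world agent",
  "url": "http://localhost:9999/",
  "version": "1.0.0",
  "capabilities": { "streaming": true },
  "defaultInputModes": ["text"],
  "defaultOutputModes": ["text"],
  "skills": [
    {
      "id": "hello_world",
      "name": "Hello World Skill",
      "description": "Just a hello world skill",
      "tags": ["greeting"]
    }
  ]
}
```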
Happy to help in this area as well!
-
"I don't want to store keys on with my agents. We can't centralize agents until LiteLLM supports it. LiteLLM can offer a central gateway for everything AI by supporting A2A, MCP, LLMs" |
-
A2A Protocol Support - v0 scope
1. Register agent card on LiteLLM
```yaml
agent_list:
  - agent_name: my_custom_agent
    agent_card_params:
      name: Hello World Agent
      description: Just a hello world agent
      url: http://localhost:9999/
      version: 1.0.0
      default_input_modes: ["text"]
      default_output_modes: ["text"]
      capabilities:
        streaming: true
      skills:
        - skill:
            name: Hello World Skill
            description: Just a hello world skill
            url: http://localhost:9999/
            version: 1.0.0
            input_modes: ["text"]
            output_modes: ["text"]
      supports_authenticated_extended_card: true
```
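With this saved as, say, config.yaml, the proxy would be started the same way model configs are loaded today (a sketch assuming the existing --config flow simply gains an agent_list section):

```shell
litellm --config config.yaml --port 4000
```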
Keys can then be scoped to specific agents:

```shell
curl -L -X POST 'http://0.0.0.0:4000/key/generate' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "agents": ["my_custom_agent"]
  }'
```
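/key/generate returns the newly minted key; a sketch of the relevant fields, assuming the proposed agents param would be echoed back in the response (not confirmed anywhere in this thread):

```json
{
  "key": "sk-...",
  "agents": ["my_custom_agent"]
}
```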
Users can then list the agents visible to their key:

```shell
curl http://0.0.0.0:4000/v1/agents \
  -H "Authorization: Bearer $LITELLM_API_KEY"
```

Response (the shape mirrors the OpenAI /v1/models list):

```json
{
  "object": "list",
  "data": [
    {
      "id": "agent-id-0",
      "object": "agent",
      "created": 1686935002,
      "owned_by": "organization-owner",
      "metadata": {}
    },
    {
      "id": "agent-id-1",
      "object": "agent",
      "created": 1686935002,
      "owned_by": "organization-owner"
    },
    {
      "id": "agent-id-2",
      "object": "agent",
      "created": 1686935002,
      "owned_by": "openai"
    }
  ]
}
```

metadata includes the non-sensitive agent card params only; the underlying API base is never revealed.
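The same discovery call from Python, a minimal sketch assuming nothing beyond a bearer-authenticated GET (httpx is used because the A2A client example below already depends on it):

```python
import os

import httpx

# List the agents this key has access to on the LiteLLM proxy.
resp = httpx.get(
    "http://0.0.0.0:4000/v1/agents",
    headers={"Authorization": f"Bearer {os.environ['LITELLM_API_KEY']}"},
)
resp.raise_for_status()
for agent in resp.json()["data"]:
    print(agent["id"], agent.get("owned_by"))
```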
2. Call Specific Agent
Users then point the A2A SDK client at the proxy's per-agent base URL:
```python
import logging
from typing import Any
from uuid import uuid4

import httpx

# Imports assume the a2a-sdk Python package; adjust to your installed version.
from a2a.client import A2ACardResolver, A2AClient
from a2a.types import AgentCard, MessageSendParams, SendMessageRequest

# Path for the authenticated extended card, per the A2A spec.
EXTENDED_AGENT_CARD_PATH = '/agent/authenticatedExtendedCard'

logger = logging.getLogger(__name__)

# The following runs inside an async function with an open httpx.AsyncClient.
httpx_client = httpx.AsyncClient()

base_url = 'http://0.0.0.0:4000/agents/{agent_id}'

resolver = A2ACardResolver(
    httpx_client=httpx_client,
    base_url=base_url,
    # agent_card_path uses default, extended_agent_card_path also uses default
)

# Fetch Public Agent Card and Initialize Client
final_agent_card_to_use: AgentCard | None = None

_public_card = (
    await resolver.get_agent_card()
)  # Fetches from the default public path under `/agents/{agent_id}/`
final_agent_card_to_use = _public_card

if _public_card.supports_authenticated_extended_card:
    try:
        auth_headers_dict = {
            'Authorization': 'Bearer dummy-token-for-extended-card'
        }
        _extended_card = await resolver.get_agent_card(
            relative_card_path=EXTENDED_AGENT_CARD_PATH,
            http_kwargs={'headers': auth_headers_dict},
        )
        final_agent_card_to_use = (
            _extended_card  # Update to use the extended card
        )
    except Exception as e_extended:
        logger.warning(
            f'Failed to fetch extended agent card: {e_extended}. '
            'Will proceed with public card.',
            exc_info=True,
        )

client = A2AClient(
    httpx_client=httpx_client, agent_card=final_agent_card_to_use
)

send_message_payload: dict[str, Any] = {
    'message': {
        'role': 'user',
        'parts': [
            {'kind': 'text', 'text': 'how much is 10 USD in INR?'}
        ],
        'messageId': uuid4().hex,
    },
}
request = SendMessageRequest(
    id=str(uuid4()), params=MessageSendParams(**send_message_payload)
)

response = await client.send_message(request)
print(response.model_dump(mode='json', exclude_none=True))
```

3. Agents can see MCPs on LiteLLM

Just register the MCP as a tool on the A2A agent for it to work. Note: set the base URL of the MCP server to the LiteLLM proxy base.

```python
import logging
import os

# Import paths assume the google-adk package; adjust to your installed version.
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, SseServerParams
from google.genai import types as genai_types

logger = logging.getLogger(__name__)


# Method on the agent wrapper class; get_mcp_server_config() and AgentRunner()
# are helpers local to the sample this snippet comes from.
async def init_agent(self):
    logger.info(f'Initializing {self.agent_name} metadata')
    config = get_mcp_server_config()
    logger.info(f'MCP Server url={config.url}')

    tools = await MCPToolset(
        connection_params=SseServerParams(url=config.url)
    ).get_tools()
    for tool in tools:
        logger.info(f'Loaded tool {tool.name}')

    generate_content_config = genai_types.GenerateContentConfig(
        temperature=0.0
    )
    LITELLM_MODEL = os.getenv('LITELLM_MODEL', 'gemini/gemini-2.0-flash')

    self.agent = Agent(
        name=self.agent_name,
        instruction=self.instructions,
        model=LiteLlm(model=LITELLM_MODEL),
        disallow_transfer_to_parent=True,
        disallow_transfer_to_peers=True,
        generate_content_config=generate_content_config,
        tools=tools,
    )
    self.runner = AgentRunner()
```

Open Questions
-
Starting a discussion on how LiteLLM can support A2A.
How would you expect A2A to work with LiteLLM?
Common Asks:
Relevant docs: https://google-a2a.github.io/A2A/latest/specification