Agent does not pass the tool result to LLM for inference #7015
Hi, I am learning to create a chat bot using AssistantAgent with a FunctionTool to get weather information.
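Here is a minimal version of my setup (the tool body, model name, and task string are simplified placeholders):

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def get_weather_response(city: str) -> str:
    """Return weather information for a city as a JSON string."""
    # Placeholder payload; a real implementation would call a weather API.
    return f'{{"city": "{city}", "temperature_c": 21, "condition": "sunny"}}'


weather_tool = FunctionTool(get_weather_response, description="Get current weather for a city.")


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(
        name="weather_agent",
        model_client=model_client,
        tools=[weather_tool],
    )
    result = await agent.run(task="What is the weather in Seattle?")
    # Prints the raw JSON string from the tool, not a natural-language answer.
    print(result.messages[-1].content)


asyncio.run(main())
```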
The above code prints the JSON returned from the get_weather_response tool as is.
It looks like the agent is not sending the tool result to the LLM for inference; it just returns the tool result directly. I am also curious whether this is a conscious design choice by Microsoft. Other agent frameworks such as LangGraph and Amazon Strands do not have this limitation.
Replies: 1 comment
Yes. In one-shot scenarios, the tool itself should provide the final result rather than adding another reflection step, which only increases token usage. The tool should return well-formatted output that the application can consume directly.

Use `reflect_on_tool_use=True` to enable reflection: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/agents.html#using-tools-and-workbench
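For example, a minimal sketch (reusing the `model_client` and `weather_tool` from the snippet in the question):

```python
agent = AssistantAgent(
    name="weather_agent",
    model_client=model_client,
    tools=[weather_tool],
    # Send the raw tool output back to the model so it produces a
    # natural-language answer instead of returning the JSON as is.
    reflect_on_tool_use=True,
)
```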
If you are iterating on tools, you should also set `max_tool_iterations` to enable iterative tool use.
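A sketch of iterative tool use, building on the same agent (the value 5 is an arbitrary example):

```python
agent = AssistantAgent(
    name="weather_agent",
    model_client=model_client,
    tools=[weather_tool],
    reflect_on_tool_use=True,
    # Allow up to 5 model -> tool -> model round trips before the agent
    # must produce a final answer; the default of 1 gives the one-shot
    # behavior described above.
    max_tool_iterations=5,
)
```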