[Core] [Tool Call] adjust conversable agent to support tool_calls #974

Merged Jan 6, 2024 · 69 commits
Changes from 25 commits
1bd860a
adjust conversable and compressible agents to support tool_calls
yenif Dec 16, 2023
352ce45
split out tools into their own reply def
yenif Dec 16, 2023
9327c53
copilot typo
yenif Dec 16, 2023
d5fc25b
address review comments
yenif Dec 16, 2023
dba691f
revert compressible_agent and token_count_utils calls
yenif Dec 16, 2023
baa41be
cleanup terminate check and remove unnecessary code
yenif Dec 16, 2023
4eb9c63
doc search and update
yenif Dec 16, 2023
8212429
return function/tool calls as interrupted when user provides a reply …
yenif Dec 20, 2023
6e28f16
fix tool name reference
yenif Dec 22, 2023
5051013
Merge branch 'main' into agent_tool_calls
yiranwu0 Dec 22, 2023
e72e6a7
fix formatting
yenif Dec 22, 2023
86185dd
fix initiate receiving a dict
yenif Dec 22, 2023
2a891fd
Merge branch 'main' into agent_tool_calls
yenif Dec 24, 2023
84c0b3c
missed changed roled
yenif Dec 26, 2023
226e1d9
ignore incoming role, more similiar to existing code
yenif Dec 26, 2023
6e97f76
consistency
yenif Dec 26, 2023
dfedd52
redundant to_dict
yenif Dec 26, 2023
4526607
fix todo comment
yenif Dec 26, 2023
719c87b
uneeded change
yenif Dec 26, 2023
886a6d3
Merge branch 'main' into agent_tool_calls
yenif Dec 26, 2023
2982f64
handle dict reply in groupchat
yenif Dec 26, 2023
7f277b4
Fix generate_tool_call_calls_reply_comment
yenif Dec 27, 2023
cd68128
change method annotation for register_for_llm from functions to tools
yenif Dec 27, 2023
adab4c3
Merge branch 'main' into agent_tool_calls
yenif Dec 27, 2023
6f698ba
Merge branch 'main' into agent_tool_calls
sonichi Dec 27, 2023
9ba8276
typo autogen/agentchat/conversable_agent.py
yenif Dec 27, 2023
623207f
add deprecation comments for function_call
yenif Dec 27, 2023
ea65ab3
Merge branch 'main' into agent_tool_calls
yenif Dec 27, 2023
a5e85f8
tweak doc strings
yenif Dec 27, 2023
973a1f0
switch to ToolFunction type
yenif Dec 28, 2023
f5670ef
update the return to
yenif Dec 28, 2023
0ae4051
Merge branch 'main' into agent_tool_calls
sonichi Dec 28, 2023
645ba8b
fix generate_init_message return type
yenif Dec 28, 2023
8398841
Merge branch 'main' into agent_tool_calls
yenif Dec 28, 2023
c7a6b08
Revert "fix generate_init_message return type"
yenif Dec 28, 2023
212d438
undo force init to dict
yenif Dec 28, 2023
6cf997a
fix notebooks and groupchat tool handling
yenif Dec 29, 2023
cac6e85
fix type
yenif Dec 29, 2023
f36b40b
Merge branch 'main' into agent_tool_calls
yenif Dec 29, 2023
14f7235
use get for key error
yenif Dec 29, 2023
2f69698
Merge branch 'main' into agent_tool_calls
yenif Dec 29, 2023
4e38a5f
fix teachable to pull content from dict
yenif Dec 30, 2023
e9d0a35
Merge branch 'main' into agent_tool_calls
yenif Dec 30, 2023
f1454f8
Merge branch 'main' into agent_tool_calls
yenif Dec 30, 2023
1e86bf0
Merge branch 'main' into agent_tool_calls
yenif Jan 2, 2024
0d1b894
Merge branch 'main' into agent_tool_calls
yenif Jan 3, 2024
8e54ff7
change single message tool response
yenif Jan 3, 2024
7352f09
cleanup unnessary changes
yenif Jan 3, 2024
baf0f87
little better tool response concatenation
yenif Jan 3, 2024
e4bf5c1
update tools tests
yenif Jan 3, 2024
c4fcde5
Merge branch 'main' into agent_tool_calls
yenif Jan 3, 2024
23587e7
add skip openai check to tools tests
yenif Jan 3, 2024
05a2f9c
Merge branch 'main' into agent_tool_calls
yenif Jan 3, 2024
185e116
fix nits
ekzhu Jan 4, 2024
5dd11b2
move func name normalization to oai_reply and assert configured names
yenif Jan 4, 2024
275f986
fix whitespace
yenif Jan 4, 2024
ba56020
remove extra normalize
yenif Jan 4, 2024
69d9de6
Merge branch 'main' into agent_tool_calls
yenif Jan 4, 2024
18cd1d5
tool name is now normalized in the generate_reply function, so will n…
yenif Jan 4, 2024
8a636dc
Merge branch 'main' into agent_tool_calls
sonichi Jan 4, 2024
2a1923b
validate function names in init and expand comments for validation me…
yenif Jan 5, 2024
7efbee9
Merge branch 'main' into agent_tool_calls
yenif Jan 5, 2024
331cd5b
fix dict comprehension
yenif Jan 5, 2024
9ced5ee
Merge branch 'main' into agent_tool_calls
yenif Jan 5, 2024
081df36
Dummy llm config for unit tests
ekzhu Jan 5, 2024
2d82fe3
handle tool_calls set to None
yenif Jan 6, 2024
cfa740b
fix tool name reference
yenif Jan 6, 2024
a6c4db6
method operates on responses not calls
yenif Jan 6, 2024
e4ff66c
Merge branch 'main' into agent_tool_calls
sonichi Jan 6, 2024
260 changes: 225 additions & 35 deletions autogen/agentchat/conversable_agent.py

Large diffs are not rendered by default.
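Since the large conversable_agent.py diff is not rendered, here is a minimal sketch (not the PR's actual code) of the message shapes the agent must now handle under the OpenAI v1 chat format: an assistant message carrying a `tool_calls` list, answered by one `role: "tool"` message per call, matched by `tool_call_id`. The `execute_tool_calls` helper and the `echo` registry entry are hypothetical illustrations.

```python
import json

# Hypothetical assistant turn requesting two tool invocations
# (shape follows the OpenAI v1 chat format, not code from the PR).
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "tool_1",
            "type": "function",
            "function": {"name": "echo", "arguments": json.dumps({"str": "hello"})},
        },
        {
            "id": "tool_2",
            "type": "function",
            "function": {"name": "echo", "arguments": json.dumps({"str": "world"})},
        },
    ],
}


def execute_tool_calls(message, registry):
    """Run every requested call and build one role='tool' reply per call."""
    replies = []
    for call in message.get("tool_calls") or []:  # tolerate tool_calls set to None
        fn = call["function"]
        func = registry[fn["name"]]
        result = func(**json.loads(fn["arguments"]))
        replies.append(
            {"tool_call_id": call["id"], "role": "tool", "name": fn["name"], "content": result}
        )
    return replies


replies = execute_tool_calls(assistant_message, {"echo": lambda str: str})
```

The per-call `tool_call_id` is what lets several calls in one assistant turn be answered independently, which is the shape `test_multi_tool_call` below asserts against.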

2 changes: 2 additions & 0 deletions autogen/agentchat/groupchat.py
@@ -275,6 +275,8 @@ def _mentioned_agents(self, message_content: Union[str, List], agents: List[Agen
Dict: a counter for mentioned agents.
"""
# Cast message content to str
if isinstance(message_content, dict):
message_content = message_content["content"]
message_content = content_str(message_content)

mentions = dict()
2 changes: 1 addition & 1 deletion autogen/function_utils.py
@@ -266,7 +266,7 @@ def f(a: Annotated[str, "Parameter a"], b: int = 2, c: Annotated[float, "Paramet
parameters=parameters,
)

return model_dump(function)
return {"type": "function", "function": model_dump(function)}


def get_load_param_if_needed_function(t: Any) -> Optional[Callable[[T, Type], BaseModel]]:
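The one-line change above switches `get_function_schema` from returning a bare function schema to the OpenAI `tools` envelope. A minimal sketch of the new shape (`to_tool_schema` is a hypothetical helper, not the PR's code):

```python
def to_tool_schema(name, description, parameters):
    """Hypothetical helper mirroring the change: wrap the bare function
    schema in the {"type": "function", "function": {...}} envelope that
    the OpenAI tools API expects."""
    return {
        "type": "function",
        "function": {"name": name, "description": description, "parameters": parameters},
    }


tool = to_tool_schema(
    "exec_python",
    "run cell in ipython and return the execution result.",
    {
        "type": "object",
        "properties": {"cell": {"type": "string", "description": "Valid Python cell to execute."}},
        "required": ["cell"],
    },
)
```

This envelope is exactly what the updated `test_conversable_agent.py` expectations below assert for `llm_config["tools"]`.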
8 changes: 4 additions & 4 deletions autogen/oai/client.py
@@ -287,9 +287,9 @@ def yes_or_no_filter(context, response):

def _completions_create(self, client, params):
completions = client.chat.completions if "messages" in params else client.completions
# If streaming is enabled, has messages, and does not have functions, then
# If streaming is enabled, has messages, and does not have functions or tools, then
# iterate over the chunks of the response
if params.get("stream", False) and "messages" in params and "functions" not in params:
if params.get("stream", False) and "messages" in params and "functions" not in params and "tools" not in params:
response_contents = [""] * params.get("n", 1)
finish_reasons = [""] * params.get("n", 1)
completion_tokens = 0
@@ -352,8 +352,8 @@ def _completions_create(self, client, params):

response.choices.append(choice)
else:
# If streaming is not enabled or using functions, send a regular chat completion request
# Functions are not supported, so ensure streaming is disabled
# If streaming is not enabled, using functions, or tools, send a regular chat completion request
# Functions and Tools are not supported, so ensure streaming is disabled
params = params.copy()
params["stream"] = False
response = completions.create(**params)
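The client change extends the streaming guard so that `tools` requests, like `functions` requests, fall back to a regular completion call. The predicate can be sketched in isolation (`can_stream` is a hypothetical name for the condition, not the PR's code):

```python
def can_stream(params):
    """Sketch of the updated guard: chunked streaming is only used for plain
    chat requests; any 'functions' or 'tools' entry forces a regular
    (non-streaming) chat completion request."""
    return bool(
        params.get("stream", False)
        and "messages" in params
        and "functions" not in params
        and "tools" not in params
    )
```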
60 changes: 33 additions & 27 deletions test/agentchat/test_conversable_agent.py
@@ -492,27 +492,30 @@ def exec_python(cell: Annotated[str, "Valid Python cell to execute."]) -> str:

expected1 = [
{
"description": "run cell in ipython and return the execution result.",
"name": "exec_python",
"parameters": {
"type": "object",
"properties": {
"cell": {
"type": "string",
"description": "Valid Python cell to execute.",
}
"type": "function",
"function": {
"description": "run cell in ipython and return the execution result.",
"name": "exec_python",
"parameters": {
"type": "object",
"properties": {
"cell": {
"type": "string",
"description": "Valid Python cell to execute.",
}
},
"required": ["cell"],
},
"required": ["cell"],
},
}
]
expected2 = copy.deepcopy(expected1)
expected2[0]["name"] = "python"
expected2[0]["function"]["name"] = "python"
expected3 = expected2

assert agent1.llm_config["functions"] == expected1
assert agent2.llm_config["functions"] == expected2
assert agent3.llm_config["functions"] == expected3
assert agent1.llm_config["tools"] == expected1
assert agent2.llm_config["tools"] == expected2
assert agent3.llm_config["tools"] == expected3

@agent3.register_for_llm()
@agent2.register_for_llm()
@@ -522,26 +525,29 @@ async def exec_sh(script: Annotated[str, "Valid shell script to execute."]) -> s

expected1 = expected1 + [
{
"name": "sh",
"description": "run a shell script and return the execution result.",
"parameters": {
"type": "object",
"properties": {
"script": {
"type": "string",
"description": "Valid shell script to execute.",
}
"type": "function",
"function": {
"name": "sh",
"description": "run a shell script and return the execution result.",
"parameters": {
"type": "object",
"properties": {
"script": {
"type": "string",
"description": "Valid shell script to execute.",
}
},
"required": ["script"],
},
"required": ["script"],
},
}
]
expected2 = expected2 + [expected1[1]]
expected3 = expected3 + [expected1[1]]

assert agent1.llm_config["functions"] == expected1
assert agent2.llm_config["functions"] == expected2
assert agent3.llm_config["functions"] == expected3
assert agent1.llm_config["tools"] == expected1
assert agent2.llm_config["tools"] == expected2
assert agent3.llm_config["tools"] == expected3


def test_register_for_llm_without_description():
201 changes: 201 additions & 0 deletions test/agentchat/test_tool_calls.py
@@ -0,0 +1,201 @@
try:
from openai import OpenAI
except ImportError:
OpenAI = None
import pytest
import json
import autogen
from autogen.math_utils import eval_math_responses
from test_assistant_agent import KEY_LOC, OAI_CONFIG_LIST
import sys
from autogen.oai.client import TOOL_ENABLED


@pytest.mark.skipif(not TOOL_ENABLED, reason="openai>=1.1.0 not installed")
def test_eval_math_responses():
config_list = autogen.config_list_from_models(
KEY_LOC, exclude="aoai", model_list=["gpt-4-0613", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k"]
)
tools = [
{
"type": "function",
"function": {
"name": "eval_math_responses",
"description": "Select a response for a math problem using voting, and check if the response is correct if the solution is provided",
"parameters": {
"type": "object",
"properties": {
"responses": {
"type": "array",
"items": {"type": "string"},
"description": "The responses in a list",
},
"solution": {
"type": "string",
"description": "The canonical solution",
},
},
"required": ["responses"],
},
},
},
]
client = autogen.OpenAIWrapper(config_list=config_list)
response = client.create(
messages=[
{
"role": "user",
"content": 'evaluate the math responses ["1", "5/2", "5/2"] against the true answer \\frac{5}{2}',
},
],
tools=tools,
)
print(response)
responses = client.extract_text_or_completion_object(response)
print(responses[0])
tool_calls = responses[0].tool_calls
function_call = tool_calls[0].function
name, arguments = function_call.name, json.loads(function_call.arguments)
assert name == "eval_math_responses"
print(arguments["responses"])
# if isinstance(arguments["responses"], str):
# arguments["responses"] = json.loads(arguments["responses"])
arguments["responses"] = [f"\\boxed{{{x}}}" for x in arguments["responses"]]
print(arguments["responses"])
arguments["solution"] = f"\\boxed{{{arguments['solution']}}}"
print(eval_math_responses(**arguments))


@pytest.mark.skipif(
not TOOL_ENABLED or not sys.version.startswith("3.10"),
reason="do not run if openai is <1.1.0 or py!=3.10",
)
def test_update_tool():
config_list_gpt4 = autogen.config_list_from_json(
OAI_CONFIG_LIST,
filter_dict={
"model": ["gpt-4", "gpt-4-0314", "gpt4", "gpt-4-32k", "gpt-4-32k-0314", "gpt-4-32k-v0314"],
},
file_location=KEY_LOC,
)
llm_config = {
"config_list": config_list_gpt4,
"seed": 42,
"tools": [],
}

user_proxy = autogen.UserProxyAgent(
name="user_proxy",
human_input_mode="NEVER",
is_termination_msg=lambda x: True if "TERMINATE" in x.get("content") else False,
)
assistant = autogen.AssistantAgent(name="test", llm_config=llm_config)

# Define a new function *after* the assistant has been created
assistant.update_tool_signature(
{
"type": "function",
"function": {
"name": "greet_user",
"description": "Greets the user.",
"parameters": {
"type": "object",
"properties": {},
"required": [],
},
},
},
is_remove=False,
)
user_proxy.initiate_chat(
assistant,
message="What functions do you know about in the context of this conversation? End your response with 'TERMINATE'.",
)
messages1 = assistant.chat_messages[user_proxy][-1]["content"]
print(messages1)

assistant.update_tool_signature("greet_user", is_remove=True)
user_proxy.initiate_chat(
assistant,
message="What functions do you know about in the context of this conversation? End your response with 'TERMINATE'.",
)
messages2 = assistant.chat_messages[user_proxy][-1]["content"]
print(messages2)
# The model should know about the function in the context of the conversation
assert "greet_user" in messages1
assert "greet_user" not in messages2


@pytest.mark.skipif(not TOOL_ENABLED, reason="openai>=1.1.0 not installed")
def test_multi_tool_call():
class FakeAgent(autogen.Agent):
def __init__(self, name):
super().__init__(name)
self.received = []

def receive(
self,
message,
sender,
request_reply=None,
silent=False,
):
message = message if isinstance(message, list) else [message]
self.received.extend(message)

user_proxy = autogen.UserProxyAgent(
name="user_proxy",
human_input_mode="NEVER",
is_termination_msg=lambda x: True if "TERMINATE" in x.get("content") else False,
)
user_proxy.register_function({"echo": lambda str: str})

fake_agent = FakeAgent("fake_agent")

user_proxy.receive(
message={
"content": "test multi tool call",
"tool_calls": [
{
"id": "tool_1",
"type": "function",
"function": {"name": "echo", "arguments": json.JSONEncoder().encode({"str": "hello world"})},
},
{
"id": "tool_2",
"type": "function",
"function": {
"name": "echo",
"arguments": json.JSONEncoder().encode({"str": "goodbye and thanks for all the fish"}),
},
},
{
"id": "tool_3",
"type": "function",
"function": {
"name": "multi_tool_call.echo",
"arguments": json.JSONEncoder().encode({"str": "goodbye and thanks for all the fish"}),
},
},
],
},
sender=fake_agent,
request_reply=True,
)

assert fake_agent.received == [
{"tool_call_id": "tool_1", "role": "tool", "name": "echo", "content": "hello world"},
{"tool_call_id": "tool_2", "role": "tool", "name": "echo", "content": "goodbye and thanks for all the fish"},
{
"tool_call_id": "tool_3",
"role": "tool",
"name": "multi_tool_call_echo",
"content": "Error: Function multi_tool_call_echo not found.",
},
]


if __name__ == "__main__":
test_update_tool()
test_eval_math_responses()
test_multi_tool_call()
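The `tool_3` case above exercises tool-name normalization (see the "move func name normalization to oai_reply" commit): `multi_tool_call.echo` is looked up as `multi_tool_call_echo`. A sketch of that normalization, assuming OpenAI's documented name constraint of at most 64 characters from `a-zA-Z0-9_-` (the function and the 64-character cap are my assumptions, not the PR's exact code):

```python
import re


def normalize_tool_name(name, cap=64):
    """Hypothetical sketch: replace every character outside [a-zA-Z0-9_-]
    with '_' and truncate to the assumed 64-character limit, so names the
    API would reject are mapped to acceptable ones."""
    return re.sub(r"[^a-zA-Z0-9_-]", "_", name)[:cap]
```

Under this scheme a dotted name like `multi_tool_call.echo` normalizes to `multi_tool_call_echo`, which is why the test expects the error to mention the normalized name.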