
[Bug]: Retrieve Agents not working with function calls #1469

Closed
Tracked by #1657
lucascampodonico opened this issue Jan 30, 2024 · 8 comments
Labels
rag retrieve-augmented generative agents

Comments


lucascampodonico commented Jan 30, 2024

Describe the bug

Hello, I get an error when trying to use group chat agents with RAG and function calls.
Without RAG it works perfectly, but with RAG it throws an error related to the context.

I modified the code in autogen/agentchat/contrib/retrieve_user_proxy_agent.py to make it work.
I'm not sure this is the right fix, but it is working well for me at the moment.

Steps to reproduce

def generate_llm_config(tool):
    # Define the function schema based on the tool's args_schema
    if tool.name == "appointment_scheduler":
        function_schema = {
            "name": tool.name.lower().replace(" ", "_"),
            "description": tool.description,
            "parameters": {
                "type": "object",
                "properties": {},
                "required": ["date", "hour"],
            },
        }
    else:
        function_schema = {
            "name": tool.name.lower().replace(" ", "_"),
            "description": tool.description,
            "parameters": {
                "type": "object",
                "properties": {},
                "required": [],
            },
        }
    if tool.args is not None:
        function_schema["parameters"]["properties"] = tool.args

    return function_schema
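As an aside, the branch logic above can be sanity-checked with a stub tool object. This is an editorial sketch, not part of the original report: `SimpleNamespace` merely stands in for the real tool class, and `build_schema` restates the snippet's logic in condensed form.

```python
from types import SimpleNamespace

def build_schema(tool):
    # Condensed restatement of generate_llm_config above:
    # only appointment_scheduler declares required parameters.
    required = ["date", "hour"] if tool.name == "appointment_scheduler" else []
    return {
        "name": tool.name.lower().replace(" ", "_"),
        "description": tool.description,
        "parameters": {
            "type": "object",
            "properties": tool.args or {},
            "required": required,
        },
    }

# Stub tool standing in for the real LangChain-style tool object.
stub = SimpleNamespace(
    name="appointment_scheduler",
    description="Schedule an appointment.",
    args={"date": {"type": "string"}, "hour": {"type": "string"}},
)
schema = build_schema(stub)
```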


llm_config_agent = {
    "functions": [
        generate_llm_config(custom_tool),
        generate_llm_config(google_search_tool),
        generate_llm_config(google_places_tool),
        generate_llm_config(appointment_scheduler_tool),
    ],
    "config_list": _config_list,
    "timeout": 60,
    "cache_seed": 42,
}

appointment_scheduler = autogen.AssistantAgent(
    name="appointment_scheduler",
    is_termination_msg=termination_msg,
    system_message="You are a helpful assistant for schedule appointment in ¡spanish language!, The date format to send to the function is YYYY-DD-MM hh:mm:ss. If you not have the date and hour in context, you ask for it with 'TERMINATE' in the end of answer!!. You answer that you do not have data to answer that question. Reply `TERMINATE` in the end when everything is done.",
    llm_config=llm_config_agent,
)

assistant = RetrieveAssistantAgent(
    name="assistant",
    is_termination_msg=termination_msg,
    system_message="You are a useful assistant to answer any questions in ¡spanish language! that are not related to the other agents. You have access to internet with google_search_tool for answer completly. If you not have context and you need context. You answer that you do not have data to answer that question. Reply `TERMINATE` in the end when everything is done.",
    llm_config=llm_config_agent,
)

property_informer = autogen.AssistantAgent(
    name="property_informer",
    is_termination_msg=termination_msg,
    system_message="""You are a helpful assistant for properties information in ¡spanish language!\n 
    You only answer for the information for each ROLE
    If role is agent: full access to answer.
    If role not is agent: You not answer about commission of property.
    You execute the functions to resolve answers.\n
    If you not have context and you need context.\n
    You answer that you do not have data to answer that question.\n
    Reply `TERMINATE` in the end when everything is done.""",
    llm_config=llm_config_agent,
)

ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    is_termination_msg=termination_msg,
    human_input_mode="NEVER",
    max_consecutive_auto_reply=5,
    retrieve_config={
        "task": "code",
        "docs_path": docs_path,
        "chunk_token_size": 2000,
        "model": _config_list[0]["model"],
        "client": chromadb.PersistentClient(path="/tmp/chromadb"),
        "embedding_model": "all-mpnet-base-v2",
        "customized_prompt": PROMPT_CODE,
        "get_or_create": True,
        "collection_name": "agents_rag",
    },
    code_execution_config={"work_dir": "coding"},
)

# Register the tool and start the conversation
ragproxyagent.register_function(
    function_map={
        google_search_tool.name: google_search_tool._run,
        custom_tool.name: custom_tool._run,
        google_places_tool.name: google_places_tool._run,
        appointment_scheduler_tool.name: appointment_scheduler_tool._run,
    }
)

groupchat = autogen.GroupChat(
    agents=[ragproxyagent, appointment_scheduler, assistant, property_informer],
    messages=[],
    max_round=12,
    speaker_selection_method="auto",
    allow_repeat_speaker=False,
)

manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

ragproxyagent.initiate_chat(manager, problem=problem, n_results=n_results)

Expected Behavior

Functions suggested by an agent should work.

Screenshots and logs

Before: (screenshot)

After: (screenshot)

Additional Information

No response

@ekzhu ekzhu added the rag retrieve-augmented generative agents label Jan 30, 2024

ekzhu commented Jan 30, 2024

Thanks for the issue. This could be a PR. @thinkall for awareness

@davorrunje

@lucascampodonico which version of autogen are you using? What model are you using? If OpenAI on Azure, which api_version? I see you are using a deprecated llm_config format that declares functions rather than tools. That might cause a clash.

@lucascampodonico

@davorrunje

I am using pyautogen==0.2.9

config_list = [
    {
        "model": "gpt-4",
        "api_key": "sk-.........",
    },
]

What is the new llm_config format for using tools?

@davorrunje

@lucascampodonico you can use the function decorators @register_for_llm and @register_for_execution to automatically generate function specifications and add them to your llm_config. OpenAI recently changed their API: functions are now declared wrapped in a tools JSON. You are using the old style without tools, but the decorators will create the correct version of the JSON.
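To illustrate the "wrapped in tools" shape mentioned here, below is a minimal stdlib sketch. The schema content mirrors the appointment_scheduler entry from the report; the wrapping helper is illustrative, not the actual autogen implementation.

```python
def wrap_as_tools(functions):
    # Newer OpenAI API: each legacy "functions" entry is wrapped in a
    # {"type": "function", "function": ...} envelope under "tools".
    return [{"type": "function", "function": f} for f in functions]

# Legacy-style entry, as generated by the snippet in the report.
legacy_fn = {
    "name": "appointment_scheduler",
    "description": "Schedule an appointment.",
    "parameters": {
        "type": "object",
        "properties": {"date": {"type": "string"}, "hour": {"type": "string"}},
        "required": ["date", "hour"],
    },
}

tools = wrap_as_tools([legacy_fn])
```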


ekzhu commented Feb 2, 2024


sonichi commented Feb 11, 2024

@thinkall please take a note of this issue and make sure you include @lucascampodonico in your RAG refactor issue/PR.

@thinkall thinkall mentioned this issue Feb 13, 2024
@thinkall

Hi @lucascampodonico, have you tried the new APIs that @ekzhu and @davorrunje suggested?
In #1661 you can also find an updated example of using RAG with functions.

@thinkall

Closing, as there has been no response for a long time. Please reopen it or create a new issue if needed.
