I've been trying to run `test.py` and `multi_test.py`, and neither works properly. Everything runs fine up to the final answer, where langchain throws an `OutputParserException: Could not parse LLM output: 'I now know the final answer.'`
To replicate:

1. Run `host_local_tools.py` with the weather tool enabled
2. Run `test.py`
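For reference, here is roughly what `test.py` runs, reconstructed from the traceback and the printed tool config in the full output below; the `load_single_tools` line and the exact local tool URL are my best guess from that log, since the traceback only shows the `STQuestionAnswerer` part:

```python
from bmtools.agent.singletool import load_single_tools, STQuestionAnswerer

# Assumed local tool URL, matching the server_url printed in the log below
tool_name, tool_config = load_single_tools("weather", "http://127.0.0.1:8079/tools/weather/")

stqa = STQuestionAnswerer()
agent = stqa.load_tools(tool_name, tool_config, prompt_type="react-with-tool-description")
agent("write a weather report for SF today")  # this is the call that dies in the output parser
```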
Full output:

```
[INFO|(BMTools)singletool:79]2023-07-11 17:33:42,949 >> Using ChatGPT
[INFO|(BMTools)singletool:25]2023-07-11 17:33:42,953 >> Doc string URL: http://127.0.0.1:8079/tools/weather/openapi.json
[INFO|(BMTools)singletool:34]2023-07-11 17:33:42,954 >> server_url http://127.0.0.1:8079/tools/weather
[INFO|(BMTools)apitool:146]2023-07-11 17:33:42,955 >> API Name: get_weather_today
[INFO|(BMTools)apitool:147]2023-07-11 17:33:42,956 >> API Description: Get today's the weather. Your input should be a json (args json schema): {{"location" : string, }} The Action to trigger this API should be get_weather_today and the input parameters should be a json dict string. Pay attention to the type of parameters.
[INFO|(BMTools)apitool:146]2023-07-11 17:33:42,957 >> API Name: forecast_weather
[INFO|(BMTools)apitool:147]2023-07-11 17:33:42,958 >> API Description: Forecast weather in the upcoming days.. Your input should be a json (args json schema): {{"location" : string, "days" : integer, }} The Action to trigger this API should be forecast_weather and the input parameters should be a json dict string. Pay attention to the type of parameters.
[INFO|(BMTools)singletool:93]2023-07-11 17:33:42,959 >> Tool [weather] has the following apis: [RequestTool(name='get_weather_today', description='Get today\'s the weather. Your input should be a json (args json schema): {{"location" : string, }} The Action to trigger this API should be get_weather_today and the input parameters should be a json dict string. Pay attention to the type of parameters.', args_schema=None, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x000001D15E393730>, func=<function RequestTool.__init__.<locals>.func at 0x000001D14CE678B0>, afunc=<function RequestTool.__init__.<locals>.afunc at 0x000001D1211E19D0>, coroutine=None, tool_logo_md='', max_output_len=4000), RequestTool(name='forecast_weather', description='Forecast weather in the upcoming days.. Your input should be a json (args json schema): {{"location" : string, "days" : integer, }} The Action to trigger this API should be forecast_weather and the input parameters should be a json dict string. Pay attention to the type of parameters.', args_schema=None, return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x000001D15E393730>, func=<function RequestTool.__init__.<locals>.func at 0x000001D14CF0E550>, afunc=<function RequestTool.__init__.<locals>.afunc at 0x000001D1211E1790>, coroutine=None, tool_logo_md='', max_output_len=4000)]
[INFO|(BMTools)singletool:113]2023-07-11 17:33:42,960 >> Full prompt template: Answer the following questions as best you can. General instructions are: Plugin for look up weather information. Specifically, you have access to the following APIs:
get_weather_today: Get today's the weather. Your input should be a json (args json schema): {{"location" : string, }} The Action to trigger this API should be get_weather_today and the input parameters should be a json dict string. Pay attention to the type of parameters.
forecast_weather: Forecast weather in the upcoming days.. Your input should be a json (args json schema): {{"location" : string, "days" : integer, }} The Action to trigger this API should be forecast_weather and the input parameters should be a json dict string. Pay attention to the type of parameters.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [get_weather_today, forecast_weather]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin! Remember: (1) Follow the format, i.e,
Thought:
Action:
Action Input:
Observation:
Final Answer:
. The action you generate must be exact one of the given API names instead of a sentence or any other redundant text. The action input is one json format dict without any redundant text or bracket descriptions . (2) Provide as much as useful information (such as useful values/file paths in your observation) in your Final Answer. Do not describe the process you achieve the goal, but only provide the detailed answer or response to the task goal. (3) Do not make up anything. DO NOT generate observation content by yourself. (4) Read the observation carefully, and pay attention to the messages even if an error occurs. (5) Once you have enough information, please immediately use
Thought: I have got enough information
Final Answer:
Task: {input}
{agent_scratchpad}
weather {'schema_version': 'v1', 'name_for_human': 'Weather Info', 'name_for_model': 'Weather', 'description_for_human': 'Look up weather information', 'description_for_model': 'Plugin for look up weather information', 'auth': {'type': 'none'}, 'api': {'type': 'openapi', 'url': 'http://127.0.0.1:8079/tools/weather/openapi.json', 'is_user_authenticated': False}, 'author_github': None, 'logo_url': 'https://cdn.weatherapi.com/v4/images/weatherapi_logo.png', 'contact_email': '[email protected]', 'legal_info_url': '[email protected]'}
> Entering new AgentExecutorWithTranslation chain...
Thought: I need to get the weather information for San Francisco today.
Action: get_weather_today
Action Input: {"location": "San Francisco"}
Observation: "Today's weather report for San Francisco is:\noverall: Partly cloudy,\nname: San Francisco,\nregion: California,\ncountry: United States of America,\nlocaltime: 2023-07-11 2:33,\ntemperature: 11.1(C), 52.0(F),\npercipitation: 0.0(mm), 0.0(inch),\npressure: 1015.0(milibar),\nhumidity: 89,\ncloud: 25,\nbody temperature: 10.0(C), 50.0(F),\nwind speed: 15.5(kph), 9.6(mph),\nvisibility: 16.0(km), 9.0(miles),\nUV index: 1.0,\n"
Thought:
---------------------------------------------------------------------------
OutputParserException Traceback (most recent call last)
Cell In[9], line 8
5 stqa = STQuestionAnswerer()
7 agent = stqa.load_tools(tool_name, tool_config, prompt_type="react-with-tool-description")
----> 8 agent("write a weather report for SF today")
10 """
11 from bmtools.agent.singletool import load_single_tools, STQuestionAnswerer
12
(...)
20 agent("Where is Yaoming Born?")
21 """
File ~\anaconda3\envs\ToolBench\lib\site-packages\langchain\chains\base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116 raise e
117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
118 return self.prep_outputs(inputs, outputs, return_only_outputs)
File ~\anaconda3\envs\ToolBench\lib\site-packages\langchain\chains\base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
107 self.callback_manager.on_chain_start(
108 {"name": self.__class__.__name__},
109 inputs,
110 verbose=self.verbose,
111 )
112 try:
--> 113 outputs = self._call(inputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
File ~\anaconda3\envs\ToolBench\lib\site-packages\langchain\agents\agent.py:792, in AgentExecutor._call(self, inputs)
790 # We now enter the agent loop (until it returns something).
791 while self._should_continue(iterations, time_elapsed):
--> 792 next_step_output = self._take_next_step(
793 name_to_tool_map, color_mapping, inputs, intermediate_steps
794 )
795 if isinstance(next_step_output, AgentFinish):
796 return self._return(next_step_output, intermediate_steps)
File ~\anaconda3\envs\ToolBench\lib\site-packages\langchain\agents\agent.py:672, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps)
667 """Take a single step in the thought-action-observation loop.
668
669 Override this to take control of how the agent makes and acts on choices.
670 """
671 # Call the LLM to see what to do.
--> 672 output = self.agent.plan(intermediate_steps, **inputs)
673 # If the tool chosen is the finishing tool, then we end and return.
674 if isinstance(output, AgentFinish):
File ~\anaconda3\envs\ToolBench\lib\site-packages\langchain\agents\agent.py:385, in Agent.plan(self, intermediate_steps, **kwargs)
383 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
384 full_output = self.llm_chain.predict(**full_inputs)
--> 385 return self.output_parser.parse(full_output)
File ~\anaconda3\envs\ToolBench\lib\site-packages\langchain\agents\mrkl\output_parser.py:24, in MRKLOutputParser.parse(self, text)
22 match = re.search(regex, text, re.DOTALL)
23 if not match:
---> 24 raise OutputParserException(f"Could not parse LLM output: `{text}`")
25 action = match.group(1).strip()
26 action_input = match.group(2)
OutputParserException: Could not parse LLM output: `I now know the final answer.`
```
I looked in the langchain code, and from the regex pattern in `langchain/agents/mrkl/output_parser.py` it seems langchain was expecting an Action/Action Input pair but could not find one, because the LLM only returned "I now know the final answer." as its output. I don't think modifying the langchain prompt is the solution, because I know langchain works with the default prompt, as evidenced by the SFT data generation I was able to do with the code you provided for ToolBench.
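If it helps anyone hitting the same thing, the workaround I'd try (instead of touching the prompt) is to wrap the stock parser so a thought-only reply is treated as a finish rather than an error. This is only a sketch: `LenientMRKLOutputParser` is a name I made up, and whether the BMTools singletool agent lets you swap its output parser in depends on how `singletool.py` builds the langchain agent, which I haven't verified.

```python
from typing import Union

from langchain.agents.mrkl.output_parser import MRKLOutputParser
from langchain.schema import AgentAction, AgentFinish, OutputParserException


class LenientMRKLOutputParser(MRKLOutputParser):
    """Fall back to AgentFinish when the LLM emits neither an Action nor 'Final Answer:'."""

    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        try:
            # Stock behaviour: requires "Final Answer:" or an Action / Action Input pair.
            return super().parse(text)
        except OutputParserException:
            # e.g. "I now know the final answer." -- hand it back as the final answer
            # instead of killing the chain.
            return AgentFinish({"output": text.strip()}, text)
```

This only stops the crash; whether the dangling "I now know the final answer." actually contains the answer is a separate problem, so the underlying fix probably still has to come from the model's output format.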
Additionally, I cannot get `web_demo.py` to run properly, and am getting the same error as #51.