What is the bug?
If we create an AgentTool wrapping a flow agent with several tools, and some of those tools have their own `input` parameter, then when ChatAgentRunner selects this AgentTool to execute, it generates an input parameter intended for the first tool of the flow. However, this generated input also overrides the manually configured `input` parameters of the following tools. It should not override them, since it is only meant as the input for the first tool.
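The suspected merge can be illustrated with a minimal Python sketch (hypothetical helper names, not the actual ml-commons code; the real logic lives in ChatAgentRunner and the flow agent runner):

```python
# Sketch of the suspected parameter handling (hypothetical, simplified).

def execute(tool_type, params):
    # Stub standing in for real tool execution: echo what the tool received.
    return f"{tool_type} received input {params.get('input')!r}"

def run_flow_buggy(tools, generated_input):
    """Observed behavior: the runner-generated input is merged into every
    tool's parameters, clobbering a manually configured 'input'."""
    outputs = {}
    for tool in tools:
        params = dict(tool["parameters"])
        params["input"] = generated_input  # overrides later tools' own input
        outputs[tool["type"]] = execute(tool["type"], params)
    return outputs

def run_flow_expected(tools, generated_input):
    """Expected behavior: only the first tool receives the generated input;
    later tools keep the input they were registered with."""
    outputs = {}
    for i, tool in enumerate(tools):
        params = dict(tool["parameters"])
        if i == 0:
            params["input"] = generated_input
        outputs[tool["type"]] = execute(tool["type"], params)
    return outputs

tools = [
    {"type": "MLModelTool", "parameters": {}},
    {"type": "PPLTool",
     "parameters": {"input": '{"question": "${parameters.MLModelTool.output}"}'}},
]
buggy = run_flow_buggy(tools, "generated question")
expected = run_flow_expected(tools, "generated question")
```

In the buggy path, PPLTool receives the runner-generated input; in the expected path it keeps its registered `${parameters.MLModelTool.output}` template.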
How can one reproduce the bug?
Steps to reproduce the behavior:
Register a flow agent with two tools:
POST /_plugins/_ml/agents/_register
{
"name": "Test_Agent_For_PPL",
"type": "flow",
"description": "this is a test agent for invoking PPLTool",
"tools": [
{
"type": "MLModelTool",
"parameters": {
"model_id": "xxxxxxxx",
"prompt": """You are given a context of alert definition and a nature language question, summarize a final question as the input to generate PPL based that."""
}
},
{
"type": "PPLTool",
"parameters": {
"model_id": "xxxxxxxx",
"model_type": "FINETUNE",
"execute": true,
"input": {
"question": "${parameters.MLModelTool.output}"
}
}
}
]
}
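For reference, the `${parameters.MLModelTool.output}` placeholder is meant to be resolved at runtime against the outputs of earlier tools in the flow. A rough sketch of that substitution (assumed semantics, not the actual ml-commons implementation):

```python
import re

def substitute(template, parameters):
    """Replace ${parameters.X} placeholders with values from `parameters`
    (assumed semantics of flow-agent placeholder resolution)."""
    def repl(match):
        key = match.group(1)
        # Leave the placeholder untouched if no value is available yet.
        return str(parameters.get(key, match.group(0)))
    return re.sub(r"\$\{parameters\.([\w.]+)\}", repl, template)

params = {"MLModelTool.output": "Summarize alerts from the last hour"}
resolved = substitute('{"question": "${parameters.MLModelTool.output}"}', params)
```

With the fix, PPLTool's registered input template should pass through this substitution instead of being replaced wholesale by the runner-generated input.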
Register a chat agent with an AgentTool:
POST /_plugins/_ml/agents/_register
{
"name": "Test_Agent_For_ReAct_ClaudeV2",
"type": "conversational",
"description": "this is an agent to help analysis alert, answering follow-up question of the current alert including search data, create new alert.",
"llm": {
"model_id": "xxxxxxxx",
"parameters": {
"max_iteration": 5,
"stop_when_no_tool_found": true
}
},
"memory": {
"type": "conversation_index"
},
"tools": [
{
"type": "AgentTool",
"name": "AgentTool",
"parameters": {
"agent_id": "xxxxxx",
"prompt": """xxxxx"""
}
}
]
}
Update the current root agent:
PUT .plugins-ml-config/_doc/os_chat
{"type":"os_chat_root_agent","configuration":{"agent_id":"xxxxxxxxxx"}}
In chat window, ask question for providing more data.
In debug mode, you will see that the input of PPLTool is overridden by the input generated for MLModelTool.
What is the expected behavior?
The input of PPLTool should be our manually set input { "question": "${parameters.MLModelTool.output}" } with the placeholder substituted.
What is your host/environment?
OS: mac
Version: 3.0.0-SNAP
Plugins: ml-commons, skills, sql
Do you have any screenshots?
If applicable, add screenshots to help explain your problem.
Do you have any additional context?
Add any other context about the problem.
In debug mode, you will see the input of PPLTool will be overridden by the input of MLModelTool.
Do you mean the input of PPLTool should actually be the output of MLModelTool?
Close; it should be its own input { "question": "${parameters.MLModelTool.output}" } with the placeholder substituted. That means the output of MLModelTool is put into the value of question.
The reason I don't use the output of MLModelTool directly is that PPLTool cannot handle that output. Otherwise, we would need to develop another tool to parse the output into something PPLTool can accept, or enhance the flow agent to do the same.