
[Bug]: Internal Server Error in Reasoning Engine Deployment #1540

Closed
1 task done
avineet123 opened this issue Dec 16, 2024 · 2 comments
Comments
@avineet123

File Name

N/A

What happened?

Deployment is successful, but when calling the ReasoningEngine, an InternalServerError occurs with the error message: "create_app() takes 0 positional arguments but 1 was given." This error happens when invoking the agent after creating it using remote_agent.
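For context, the reported message is the generic TypeError Python raises when a zero-argument callable is invoked with a positional argument. The snippet below is only an illustration of that error class (the create_app shown here is a stand-in defined for this demo, not the Reasoning Engine's actual create_app):

```python
# Stand-in for illustration: a callable that accepts no parameters.
def create_app():
    return "app"

try:
    # Passing one positional argument to a zero-argument function
    # raises the same class of TypeError seen in the logs.
    create_app("config")
except TypeError as exc:
    msg = str(exc)

print(msg)  # create_app() takes 0 positional arguments but 1 was given
```

This suggests the server-side framework is calling a user-supplied callable with an argument that the callable's signature does not accept.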

I am setting up a multi-agent system using create_react_agent as follows:
from langgraph.prebuilt import create_react_agent

tools = [get_weather]
agent1 = create_react_agent(
    model=llm,
    state_modifier=format_for_model,
    tools=tools,
    state_schema=AgentState,
)
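(The helpers get_weather and format_for_model referenced above are not shown in the report; the sketches below are purely hypothetical stand-ins to make the setup concrete.)

```python
# Hypothetical sketches only -- the issue does not include these definitions.

def get_weather(city: str) -> str:
    """Toy tool: return a canned weather string for a city."""
    return f"It is sunny in {city} today."

def format_for_model(state: dict) -> list:
    """Toy state modifier: prepend a system prompt to the message history."""
    system = {"role": "system", "content": "You are a helpful weather agent."}
    return [system] + state.get("messages", [])
```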

For deployment, the following code is used:
remote_agent = reasoning_engines.ReasoningEngine.create(
    SampleAgent(project=PROJECT_ID, location=LOCATION),
    requirements=[
        "google-cloud-aiplatform[langchain,reasoningengine]",
        "cloudpickle==3.1.0",
        "pydantic",
        "langgraph",
        "langchain-google-community",
        "google-cloud-discoveryengine",
    ],
    display_name=INSTANCE_NAME,
    description="test langgraph toolnode",
    extra_packages=["prompt.yaml"],
)
remote_agent.query(message=message, chat_id="demo_1")

Relevant log output

Log ERROR:

DEFAULT 2024-12-16T16:39:27.223290Z INFO: Application startup complete.
DEFAULT 2024-12-16T16:39:27.223330Z INFO: Application startup complete.
DEFAULT 2024-12-16T16:39:27.223422Z INFO: ASGI 'lifespan' protocol appears unsupported.
DEFAULT 2024-12-16T16:39:27.223535Z INFO: Application startup complete.
DEFAULT 2024-12-16T16:39:27.224799Z INFO: Started server process [7]
DEFAULT 2024-12-16T16:39:27.224859Z INFO: Waiting for application startup.
DEFAULT 2024-12-16T16:39:27.224984Z INFO: ASGI 'lifespan' protocol appears unsupported.
DEFAULT 2024-12-16T16:39:27.225038Z INFO: Application startup complete.

"POST /api/reasoning_engine HTTP/1.1" 500 Internal Server Error
create_app() takes 0 positional arguments but 1 was given.
STACK TRACE:
---------------------------------------------------------------------------
_InactiveRpcError                         Traceback (most recent call last)
File \myenv\Lib\site-packages\google\api_core\grpc_helpers.py:76, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
     75 try:
---> 76     return callable_(*args, **kwargs)
     77 except grpc.RpcError as exc:

File \myenv\Lib\site-packages\grpc\_channel.py:1181, in _UnaryUnaryMultiCallable.__call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
   1175 (
   1176     state,
   1177     call,
   1178 ) = self._blocking(
   1179     request, timeout, metadata, credentials, wait_for_ready, compression
   1180 )
-> 1181 return _end_unary_response_blocking(state, call, False, None)

File \myenv\Lib\site-packages\grpc\_channel.py:1006, in _end_unary_response_blocking(state, call, with_call, deadline)
   1005 else:
-> 1006     raise _InactiveRpcError(state)

_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.FAILED_PRECONDITION
	details = "Reasoning Engine Execution failed.
Please navigate to the Cloud Console Log Explorer page (https://console.cloud.google.com/logs/query;query=resource.type="aiplatform.googleapis.com%2FReasoningEngine"%0Aresource.labels.reasoning_engine_id=~""?project=) to view the specific errors. Additionally, please check our troubleshooting guide (https://cloud.google.com/vertex-ai/generative-ai/docs/reasoning-engine/troubleshooting/use) for more information, or visit our Cloud community forum (https://www.googlecloudcommunity.com/gc/AI-ML/bd-p/cloud-ai-ml) or GitHub repository (https://github.com/googleapis/python-aiplatform/issues) to find answers and ask for help.
Error Details: Internal Server Error"
	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5%5D:43 {grpc_message:"Reasoning Engine Execution failed.\nPlease navigate to the Cloud Console Log Explorer page (https://console.cloud.google.com/logs/query;query=resource.type=\"aiplatform.googleapis.com%2FReasoningEngine\"%0Aresource.labels.reasoning_engine_id=~\"\"?project=) to view the specific errors. Additionally, please check our troubleshooting guide (https://cloud.google.com/vertex-ai/generative-ai/docs/reasoning-engine/troubleshooting/use) for more information, or visit our Cloud community forum (https://www.googlecloudcommunity.com/gc/AI-ML/bd-p/cloud-ai-ml) or GitHub repository (https://github.com/googleapis/python-aiplatform/issues) to find answers and ask for help.\nError Details: Internal Server Error", grpc_status:9, created_time:"2024-12-16T16:05:39.6172289+00:00"}"
>

The above exception was the direct cause of the following exception:

FailedPrecondition                        Traceback (most recent call last)
Cell In[119], line 2
      1 message = "What is weather today?"
----> 2 response=remote_agent.query(message=message, chat_id="demo_1")
      3 display(Markdown(response))

File \myenv\Lib\site-packages\vertexai\reasoning_engines\_reasoning_engines.py:763, in _wrap_query_operation.<locals>._method(self, **kwargs)
    762 def _method(self, **kwargs) -> _utils.JsonDict:
--> 763     response = self.execution_api_client.query_reasoning_engine(
    764         request=aip_types.QueryReasoningEngineRequest(
    765             name=self.resource_name,
    766             input=kwargs,
    767             class_method=method_name,
    768         ),
    769     )
    770     output = _utils.to_dict(response)
    771     return output.get("output", output)

File \myenv\Lib\site-packages\google\cloud\aiplatform_v1beta1\services\reasoning_engine_execution_service\client.py:795, in ReasoningEngineExecutionServiceClient.query_reasoning_engine(self, request, retry, timeout, metadata)
    792 self._validate_universe_domain()
    794 # Send the request.
--> 795 response = rpc(
    796     request,
    797     retry=retry,
    798     timeout=timeout,
    799     metadata=metadata,
    800 )
    802 # Done; return the response.
    803 return response

File \myenv\Lib\site-packages\google\api_core\gapic_v1\method.py:131, in _GapicCallable.__call__(self, timeout, retry, compression, *args, **kwargs)
    128 if self._compression is not None:
    129     kwargs["compression"] = compression
--> 131 return wrapped_func(*args, **kwargs)

File \myenv\Lib\site-packages\google\api_core\grpc_helpers.py:78, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
     76     return callable_(*args, **kwargs)
     77 except grpc.RpcError as exc:
---> 78     raise exceptions.from_grpc_error(exc) from exc

Code of Conduct

  • I agree to follow this project's Code of Conduct
@koverholt
Member

Thanks for opening this issue. It appears that you're passing remote_agent.query(message=message), whereas the kwarg to the query method should be remote_agent.query(input=message). There is also no chat_id kwarg; instead, you can pass a unique session_id via the config kwarg. Your resulting code will then look something like:

response = agent.query(
    input=message,
    config={"configurable": {"session_id": "demo_1"}},
)

Refer to the docs and sample notebook to see how to pass the input as well as the session ID in a chat session for an agent with memory.

@koverholt
Member

And since this repository deals with sample notebooks and apps rather than direct usage of specific services, you can instead file follow-up bugs or feature requests in the Vertex AI public issue tracker here: https://issuetracker.google.com/issues/new?component=1130925&template=1637248. Hope that helps!
