Issue description
The given model has two inputs, data and axes, and the input axes is optional.
Inspecting the OV IR in the generated XML file suggests that the ONNX-to-OpenVINO conversion drops the optional input axes, which leads to a crash when inference is executed.
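In recent ONNX opsets, axes is an optional input for ReduceL1: when it is omitted, the reduction runs over all dimensions. A minimal NumPy sketch of that semantics (the helper name below is ours, not from the reproducer):

```python
import numpy as np

def reduce_l1_reference(data, axes=None, keepdims=True):
    """NumPy reference for ONNX ReduceL1: sum of absolute values.

    `axes` is optional; when omitted, reduce over every dimension,
    matching the ONNX default behaviour (keepdims defaults to 1/True).
    """
    ax = None if axes is None else tuple(int(a) for a in axes)
    return np.sum(np.abs(data), axis=ax, keepdims=keepdims)
```

Both call forms, with and without axes, are valid for the ONNX model, so the converted IR is expected to keep the axes port rather than silently discard it.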
Step-by-step reproduction
import numpy as np
import onnxruntime as ort
import openvino as ov

onnx_model_path = './ReduceL1.onnx'

# Reference run with ONNX Runtime: both inputs are accepted.
session = ort.InferenceSession(onnx_model_path)
data = np.random.random([3, 2, 2]).astype(np.float32)
axes = np.random.randint(0, 2, size=[1], dtype=np.int64)
input_data = {"data": data, "axes": axes}
onnx_output = session.run(None, input_data)

# Convert the ONNX model to OpenVINO IR and save it without FP16 compression.
ov_model = ov.convert_model(onnx_model_path)
ir_path = "temp_OVIR.xml"
ov.save_model(ov_model, ir_path, compress_to_fp16=False)

# Compile and run the IR with the same inputs: this crashes because the
# converted model no longer has an input port named "axes".
core = ov.Core()
model = core.read_model(ir_path)
compiled_model = core.compile_model(model=model, device_name="CPU")
for output in compiled_model.outputs:
    ov_output = compiled_model(input_data)[output]
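Until the conversion is fixed, one way to avoid the RuntimeError is to drop feed entries that the compiled model no longer exposes before calling it. A sketch (the helper name is hypothetical; in the reproducer the name set would be gathered from `{inp.get_any_name() for inp in compiled_model.inputs}`):

```python
def filter_to_model_inputs(input_data, input_names):
    """Keep only the feed entries whose names the compiled model exposes.

    `input_names` is a plain set of strings here; in the reproducer it
    would come from `compiled_model.inputs`.
    """
    return {k: v for k, v in input_data.items() if k in input_names}
```

For example, `compiled_model(filter_to_model_inputs(input_data, {"data"}))` no longer raises "Port for tensor name axes was not found", but the result may then differ from ONNX Runtime's, since the axes value is silently ignored.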
Relevant log output
Traceback (most recent call last):
  File "test.py", line 32, in <module>
    ov_output = compiled_model(input_data)[output]
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\ie_api.py", line 384, in __call__
    return self._infer_request.infer(
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\ie_api.py", line 143, in infer
    return OVDict(super().infer(_data_dispatch(
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 354, in _data_dispatch
    return create_shared(inputs, request) if is_shared else create_copied(inputs, request)
  File "C:\software\conda\envs\torch\lib\functools.py", line 877, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 182, in _
    return {k: value_to_tensor(v, request=request, is_shared=True, key=k) for k, v in request._inputs_data.items()}
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 182, in <dictcomp>
    return {k: value_to_tensor(v, request=request, is_shared=True, key=k) for k, v in request._inputs_data.items()}
  File "C:\software\conda\envs\torch\lib\functools.py", line 877, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 59, in _
    tensor = get_request_tensor(request, key)
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 27, in get_request_tensor
    return request.get_tensor(key)
RuntimeError: Exception from src\inference\src\infer_request.cpp:194:
Check '::getPort(port, name, {_impl->get_inputs(), _impl->get_outputs()})' failed at src\inference\src\infer_request.cpp:194:
Port for tensor name axes was not found.
Process finished with exit code 1
Issue submission checklist
I'm reporting an issue. It's not a question.
I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
There is reproducer code and related data files such as images, videos, models, etc.
BTW, all Reduce-related operators have similar bugs, including ReduceL1, ReduceL2, ReduceMax, ReduceMin, ReduceMean, ReduceSum, ReduceProd, ReduceSumSquare, ReduceLogSum, ReduceLogSumExp
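For context, all of the listed operators share the same optional-axes signature and differ only in how values are accumulated; a compact NumPy sketch of their semantics (the mapping below is our illustration, not OpenVINO code):

```python
import numpy as np

# Assumed NumPy equivalences for the ONNX Reduce-* family; every entry
# accepts an optional axes tuple, just like the ONNX ops.
REDUCERS = {
    "ReduceL1":        lambda x, ax, kd: np.sum(np.abs(x), axis=ax, keepdims=kd),
    "ReduceL2":        lambda x, ax, kd: np.sqrt(np.sum(x * x, axis=ax, keepdims=kd)),
    "ReduceMax":       lambda x, ax, kd: np.max(x, axis=ax, keepdims=kd),
    "ReduceMin":       lambda x, ax, kd: np.min(x, axis=ax, keepdims=kd),
    "ReduceMean":      lambda x, ax, kd: np.mean(x, axis=ax, keepdims=kd),
    "ReduceSum":       lambda x, ax, kd: np.sum(x, axis=ax, keepdims=kd),
    "ReduceProd":      lambda x, ax, kd: np.prod(x, axis=ax, keepdims=kd),
    "ReduceSumSquare": lambda x, ax, kd: np.sum(x * x, axis=ax, keepdims=kd),
    "ReduceLogSum":    lambda x, ax, kd: np.log(np.sum(x, axis=ax, keepdims=kd)),
    "ReduceLogSumExp": lambda x, ax, kd: np.log(np.sum(np.exp(x), axis=ax, keepdims=kd)),
}

def run_reduce(op, data, axes=None, keepdims=True):
    """Dispatch to a NumPy reference; axes=None means reduce everything."""
    ax = None if axes is None else tuple(int(a) for a in axes)
    return REDUCERS[op](np.asarray(data), ax, keepdims)
```

Since the optional input is handled identically for the whole family, a fix to the ONNX frontend's handling of the optional axes input should cover all ten operators at once.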
OpenVINO Version
openvino-nightly 2023.2.0.dev20231101
Operating System
Ubuntu 18.04 (LTS)
Device used for inference
BATCH
Framework
ONNX
Model used
https://github.com/jikechao/onnx_models/blob/main/ReduceL1.onnx