
[Bug]: Port for tensor name axes was not found for Reduce operator from ONNX frontend #21024

Open
jikechao opened this issue Nov 12, 2023 · 4 comments
Labels
bug Something isn't working category: ONNX FE OpenVINO ONNX FrontEnd

@jikechao

OpenVINO Version

openvino-nightly 2023.2.0.dev20231101

Operating System

Ubuntu 18.04 (LTS)

Device used for inference

BATCH

Framework

ONNX

Model used

https://github.com/jikechao/onnx_models/blob/main/ReduceL1.onnx

Issue description


The given model has two inputs, data and axes, where axes is optional.
Inspecting the OV IR in the XML file shows that the ONNX-to-OpenVINO conversion drops the optional input axes, which leads to a crash when running inference.

Step-by-step reproduction

import onnxruntime as ort
import openvino as ov
import numpy as np

onnx_model_path = './ReduceL1.onnx'
session = ort.InferenceSession(onnx_model_path)

data = np.random.random([3, 2, 2]).astype(np.float32)
axes = np.random.randint(0, 2, size=[1], dtype=np.int64)

input_data = {"data": data, "axes": axes}

# Reference run with ONNX Runtime succeeds.
onnx_output = session.run(None, input_data)

# Convert to OpenVINO IR, save it, and reload it.
ov_model = ov.convert_model(onnx_model_path)
ir_path = "temp_OVIR.xml"
ov.save_model(ov_model, ir_path, compress_to_fp16=False)
core = ov.Core()
model = core.read_model(ir_path)

compiled_model = core.compile_model(model=model, device_name="CPU")

# Crashes: the compiled model has no input port named "axes".
for output in compiled_model.outputs:
    ov_output = compiled_model(input_data)[output]
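Until the frontend is fixed, one possible workaround (a minimal sketch; filter_feed is a hypothetical helper, not part of the OpenVINO API) is to drop feed entries whose tensor names have no matching port on the compiled model. The port names would in practice come from something like [p.get_any_name() for p in compiled_model.inputs]; here we assume the converted IR kept only "data", which reproduces the reported situation.

```python
def filter_feed(feed, port_names):
    """Keep only feed entries whose key matches a known input port name.

    Returns the filtered feed plus the sorted list of dropped keys, so the
    caller can log what was silently removed.
    """
    known = set(port_names)
    kept = {k: v for k, v in feed.items() if k in known}
    dropped = sorted(set(feed) - known)
    return kept, dropped

# Assumption: the converted model exposes only the "data" port.
kept, dropped = filter_feed({"data": 1, "axes": 2}, ["data"])
print(kept)     # {'data': 1}
print(dropped)  # ['axes']
```

Note that silently dropping axes changes semantics (the reduction then runs with whatever default the IR baked in), so this only hides the crash; the underlying conversion bug still needs the frontend fix.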

Relevant log output

Traceback (most recent call last):
  File "test.py", line 32, in <module>
    ov_output = compiled_model(input_data)[output]
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\ie_api.py", line 384, in __call__
    return self._infer_request.infer(
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\ie_api.py", line 143, in infer
    return OVDict(super().infer(_data_dispatch(
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 354, in _data_dispatch
    return create_shared(inputs, request) if is_shared else create_copied(inputs, request)
  File "C:\software\conda\envs\torch\lib\functools.py", line 877, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 182, in _
    return {k: value_to_tensor(v, request=request, is_shared=True, key=k) for k, v in request._inputs_data.items()}
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 182, in <dictcomp>
    return {k: value_to_tensor(v, request=request, is_shared=True, key=k) for k, v in request._inputs_data.items()}
  File "C:\software\conda\envs\torch\lib\functools.py", line 877, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 59, in _
    tensor = get_request_tensor(request, key)
  File "C:\software\conda\envs\torch\lib\site-packages\openvino\runtime\utils\data_helpers\data_dispatcher.py", line 27, in get_request_tensor
    return request.get_tensor(key)
RuntimeError: Exception from src\inference\src\infer_request.cpp:194:
Check '::getPort(port, name, {_impl->get_inputs(), _impl->get_outputs()})' failed at src\inference\src\infer_request.cpp:194:
Port for tensor name axes was not found.



Process finished with exit code 1

Issue submission checklist

  • I'm reporting an issue. It's not a question.
  • I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • There is reproducer code and related data files such as images, videos, models, etc.
@jikechao jikechao added bug Something isn't working support_request labels Nov 12, 2023
@jikechao
Author

Incidentally, all Reduce-family operators exhibit the same bug, including ReduceL1, ReduceL2, ReduceMax, ReduceMin, ReduceMean, ReduceSum, ReduceProd, ReduceSumSquare, ReduceLogSum, and ReduceLogSumExp.

@gkrivor
Contributor

gkrivor commented Feb 21, 2024

Hi @jikechao,

Sorry for the delay!

I can confirm the issue with all Reduce* operations; most of them switched from an axes attribute to an axes input starting with opset 18.

A fix should land around release 2024.1.

Thanks for reporting!

@turbobuilt

Same problem

@github-project-automation github-project-automation bot moved this to Contributors Needed in Good first issues Apr 19, 2024
@p-wysocki p-wysocki moved this from Contributors Needed to Assigned in Good first issues Apr 19, 2024
@p-wysocki
Contributor

Hi, should this be a Good First Issue (GFI)? @gkrivor

@mlukasze mlukasze moved this from Assigned to Contributors Needed in Good first issues Jul 25, 2024
6 participants