
[Bug]: Conversion Failure for torch.nn.BatchNorm1d #23538

Closed
3 tasks done
Thrsu opened this issue Mar 19, 2024 · 2 comments
Labels: bug (Something isn't working), category: PyTorch FE (OpenVINO PyTorch Frontend)

Comments


Thrsu commented Mar 19, 2024

OpenVINO Version

2024.0.0-14509-34caeefd078-releases/2024/0

Operating System

Ubuntu 18.04 (LTS)

Device used for inference

CPU

Framework

PyTorch

Model used

Given in the following script

Issue description

When attempting to convert a TorchScript module containing torch.nn.BatchNorm1d to an OpenVINO IR model with ov.convert_model, the conversion fails with an OpConversionFailure error: "Input element types do not match". The traced model takes a float16 input, while the frontend creates the batch norm's weight/bias/running-stat constants as float32, as the log below shows.

Step-by-step reproduction

  1. Execute the following script:
```python
import numpy as np
import torch
import openvino as ov


def compile_torch(model, input_data):
    # Convert the traced module to OpenVINO IR, reload it, and run on CPU.
    ov_model = ov.convert_model(model, example_input=input_data)
    ir_path = "temp_OVIR.xml"
    ov.save_model(ov_model, ir_path, compress_to_fp16=False)
    core = ov.Core()
    model = core.read_model(ir_path)

    compiled_model = core.compile_model(model=model, device_name="CPU")
    output_key = compiled_model.output(0)
    return compiled_model(input_data)[output_key]


input_data = torch.randn([0, 97, 40], dtype=torch.float16)

torch_model = torch.nn.BatchNorm1d(100, affine=True).eval()
torch_outputs = torch_model(input_data).cpu().detach().numpy()

trace = torch.jit.trace(torch_model, input_data)
trace = torch.jit.freeze(trace)

res_ov = compile_torch(trace, input_data)
np.testing.assert_allclose(torch_outputs, res_ov, rtol=1e-3, atol=1e-3)
```

Relevant log output

```
converted_model = super().convert(model)
openvino._pyopenvino.OpConversionFailure: Check 'is_conversion_successful' failed at src/frontends/pytorch/src/frontend.cpp:143:
FrontEnd API failed with OpConversionFailure:
Model wasn't fully converted. Failed operations detailed log:
-- aten::batch_norm with a message:
Exception happened during conversion of operation aten::batch_norm with schema aten::batch_norm(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool training, float momentum, float eps, bool cudnn_enabled) -> Tensor
Check 'element::Type::merge(data_et, data_et, input_et)' failed at src/core/src/op/batch_norm.cpp:23:
While validating node 'opset5::BatchNormInference BatchNormInference_24 (opset1::Parameter input[0]:f16[?,?,?], opset1::Constant Constant_10[0]:f32[100], opset1::Constant Constant_8[0]:f32[100], opset1::Constant Constant_8[0]:f32[100], opset1::Constant Constant_10[0]:f32[100]) -> (dynamic[...])' with friendly_name 'BatchNormInference_24':
Input element types do not match.

Summary:
-- Conversion is failed for: aten::batch_norm
```
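The validation message in the log shows the mismatch directly: the input Parameter is f16 while the affine/running-stat Constants are f32. One possible workaround (my assumption, not something suggested in the thread) is to keep the whole pipeline in a single precision by casting the input up to float32 before running and tracing the model:

```python
import torch

# Hypothetical workaround sketch: cast the fp16 input to fp32 so it matches
# BatchNorm1d's default float32 parameters and running statistics.
# The shapes here are illustrative and, unlike the original reproducer,
# match num_features on the channel axis.
model = torch.nn.BatchNorm1d(100, affine=True).eval()
x = torch.randn(2, 100, 40, dtype=torch.float16)

out = model(x.float())  # fp32 input -> fp32 output
```

Tracing the model with the fp32 input should then yield a TorchScript module whose converted graph has uniform element types.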

Issue submission checklist

  • I'm reporting an issue. It's not a question.
  • I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • There is reproducer code and related data files such as images, videos, models, etc.
Thrsu added the bug (Something isn't working) and support_request labels on Mar 19, 2024
andrei-kochin added the category: PyTorch FE (OpenVINO PyTorch Frontend) label on Mar 21, 2024
mvafin (Contributor) commented Mar 28, 2024

This is a valid problem; it should be fixed by PR #23750.

github-merge-queue bot pushed a commit that referenced this issue Apr 2, 2024
### Details:
- *Support any float type for batch norm*
- *Tests for fp16 sporadically fail by accuracy and fp64 is not supported by torch, will not update tests this time.*

### Tickets:
- *#23538*
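For intuition, "support any float type" can be read as aligning the parameter constants' element type with the input's float type before building the batch-norm node. A minimal numpy illustration of that idea (my sketch of the concept, not the actual frontend code):

```python
import numpy as np

def batch_norm_infer(x, gamma, beta, mean, var, eps=1e-5):
    """Inference-mode batch norm over NCL input. Casts the fp32
    parameters to the input's float dtype first -- a conceptual sketch
    of the type alignment the PR describes, not the frontend code."""
    gamma, beta, mean, var = (np.asarray(p, dtype=x.dtype)
                              for p in (gamma, beta, mean, var))
    s = (1, -1, 1)  # broadcast over the channel axis
    return (gamma.reshape(s) * (x - mean.reshape(s))
            / np.sqrt(var.reshape(s) + eps) + beta.reshape(s))

x = np.random.randn(2, 3, 4).astype(np.float16)
out = batch_norm_infer(x,
                       np.ones(3, np.float32), np.zeros(3, np.float32),
                       np.zeros(3, np.float32), np.ones(3, np.float32))
```

With matching dtypes the computation goes through in the input's precision; without the cast, OpenVINO's BatchNormInference validation rejects the mixed f16/f32 inputs, as the log above shows.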
mvafin (Contributor) commented Apr 2, 2024

Changes were merged

@mvafin mvafin closed this as completed Apr 2, 2024
bbielawx pushed a commit to bbielawx/openvino that referenced this issue Apr 12, 2024
alvoron pushed a commit to alvoron/openvino that referenced this issue Apr 29, 2024