
[PT FE] Support aten::aminmax for pytorch models #23879

Merged
8 commits merged into openvinotoolkit:master on Apr 15, 2024

Conversation

LucaTamSapienza
Contributor

Details:

  • Implemented aten::aminmax operation
  • Implemented test for aminmax op
  • Registered the op inside op_table.cpp

Tickets:

  • #23327

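For context, a minimal sketch of what such a converter can look like in the PyTorch frontend is shown below. The function name `translate_aminmax`, the helpers `num_inputs_check` and `get_axes_range`, the include paths, and the input-count bounds are assumptions modelled on neighbouring converters in `src/frontends/pytorch/src`, not the merged code; handling of the optional `out` argument is discussed further down the thread.

```cpp
#include <memory>

#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/op/reduce_max.hpp"
#include "openvino/op/reduce_min.hpp"
#include "utils.hpp"  // num_inputs_check, get_axes_range

namespace ov {
namespace frontend {
namespace pytorch {
namespace op {

using namespace ov::op;

OutputVector translate_aminmax(const NodeContext& context) {
    // aten::aminmax(Tensor self, *, int? dim=None, bool keepdim=False) -> (Tensor min, Tensor max)
    num_inputs_check(context, 1, 4);
    auto input = context.get_input(0);
    // Reduce along the given dim, or over all axes when dim is None.
    auto axes = context.input_is_none(1) ? get_axes_range(context, 0) : context.get_input(1);
    // keepdim defaults to false, matching PyTorch.
    auto keepdim = context.input_is_none(2) ? false : context.const_input<bool>(2);
    auto amin = context.mark_node(std::make_shared<v1::ReduceMin>(input, axes, keepdim));
    auto amax = context.mark_node(std::make_shared<v1::ReduceMax>(input, axes, keepdim));
    return {amin, amax};
}

}  // namespace op
}  // namespace pytorch
}  // namespace frontend
}  // namespace ov
```

Registering the converter then amounts to adding an entry like `{"aten::aminmax", op::translate_aminmax}` to the map in `op_table.cpp` (again, the exact symbol name is an assumption).
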
@LucaTamSapienza
Contributor Author

I have some issues when testing the "out" parameter in my code; I'm getting an error I've never encountered before:

___________________________________ TestAminMax.test_aminmax[ ie_device:CPU - precision:FP16 - dim:None - keepdim:False - inputs:[0, 1, 2, 3, 4, -1] - mode:out - dtype:float32 ] ____________________________________

mod = <torch.ScriptMethod object at 0x136d11ae0>, inputs = [tensor([ 0.,  1.,  2.,  3.,  4., -1.]), tensor([-1.,  4.,  0.,  0.,  0.,  0.])], running_what = 'trace'

    def run_mod_and_filter_tensor_outputs(mod, inputs, running_what):
        try:
            if isinstance(inputs, dict) and example_inputs_is_kwarg:
                outs = wrap_retval(mod(**inputs))
            else:
>               outs = wrap_retval(mod(*_clone_inputs(inputs)))
E               RuntimeError: tensor does not have a device

../opvenv/lib/python3.9/site-packages/torch/jit/_trace.py:476: RuntimeError

The above exception was the direct cause of the following exception:

self = <openvino.frontend.pytorch.ts_decoder.TorchScriptPythonDecoder object at 0x161a52a90>, pt_module = aten_aminmax(), graph_element = None
example_input = [tensor([ 0.,  1.,  2.,  3.,  4., -1.]), tensor([-1.,  4.,  0.,  0.,  0.,  0.])], alias_db = None, shared_memory = True, skip_freeze = True, constant_cache = None, module_extensions = None

    def __init__(
            self,
            pt_module,
            graph_element=None,
            example_input=None,
            alias_db=None,
            shared_memory=True,
            skip_freeze=False,
            constant_cache=None,
            module_extensions=None):
        Decoder.__init__(self)
        # We store every decoder created by this decoder so that all them are not deleted until the first decoder is deleted
        self.m_decoders = []
        self._input_signature = None
        self._shared_memory = shared_memory
        self._input_is_list = False
        self.constant_cache = constant_cache if constant_cache is not None else dict()
        self.module_extensions = module_extensions
        if graph_element is None:
            try:
>               pt_module = self._get_scripted_model(
                    pt_module, example_input, skip_freeze)

../bin/arm64/Release/python/openvino/frontend/pytorch/ts_decoder.py:41: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../bin/arm64/Release/python/openvino/frontend/pytorch/ts_decoder.py:133: in _get_scripted_model
    scripted = torch.jit.trace(
../opvenv/lib/python3.9/site-packages/torch/jit/_trace.py:806: in trace
    return trace_module(
../opvenv/lib/python3.9/site-packages/torch/jit/_trace.py:1102: in trace_module
    _check_trace(
../opvenv/lib/python3.9/site-packages/torch/utils/_contextlib.py:115: in decorate_context
    return func(*args, **kwargs)
../opvenv/lib/python3.9/site-packages/torch/jit/_trace.py:567: in _check_trace
    traced_outs = run_mod_and_filter_tensor_outputs(traced_func, inputs, "trace")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

mod = <torch.ScriptMethod object at 0x136d11ae0>, inputs = [tensor([ 0.,  1.,  2.,  3.,  4., -1.]), tensor([-1.,  4.,  0.,  0.,  0.,  0.])], running_what = 'trace'

    def run_mod_and_filter_tensor_outputs(mod, inputs, running_what):
        try:
            if isinstance(inputs, dict) and example_inputs_is_kwarg:
                outs = wrap_retval(mod(**inputs))
            else:
                outs = wrap_retval(mod(*_clone_inputs(inputs)))
            outs = [out for out in outs if isinstance(out, torch.Tensor)]
            return outs
        except Exception as e:
            graph_diff_errors, tensor_compare_errors = graph_diagnostic_info()
            msg = f"encountered an exception while running the {running_what} with test inputs.\nException:\n{indent(str(e))}"
>           raise TracingCheckError(
                graph_diff_errors,
                tensor_compare_errors,
                extra_msg=msg,
            ) from e
E           torch.jit._trace.TracingCheckError: Tracing failed sanity checks!
E           encountered an exception while running the trace with test inputs.
E           Exception:
E           	tensor does not have a device

../opvenv/lib/python3.9/site-packages/torch/jit/_trace.py:482: TracingCheckError

This "tensor does not have a device" error is driving me crazy; I don't understand what might be causing it. I'm testing only with CPU, since GPU isn't relevant to conversion testing.
Do you have any suggestions on how to solve it?

@mlukasze mlukasze linked an issue Apr 5, 2024 that may be closed by this pull request
src/frontends/pytorch/src/op/aminmax.cpp (outdated)

if (!context.input_is_none(3)) {
    auto concat = context.mark_node(std::make_shared<v0::Concat>(OutputVector{amin, amax}, 0));
    context.mutate_input(3, concat);
}

Contributor

out is a tuple in torch, so this will not work. You would need to access each element of the tuple and update it using low-level functions of the context and decoder, like add_tensor_to_context. That may be tricky to do, so I would suggest implementing this conversion only for the case when out is not provided. Just validate that out is None.

Contributor Author

> out is a tuple in torch, so this will not work. You would need to access each element of the tuple and update it using low-level functions of the context and decoder, like add_tensor_to_context. That may be tricky to do, so I would suggest implementing this conversion only for the case when out is not provided. Just validate that out is None.

Yes, I knew I had to return a tuple since PyTorch was returning one, but since a tuple is immutable, it was causing problems in my mind...
Anyway, I added a check and an assert for the case when out is passed as a parameter.
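
For reference, such a guard could look roughly like the sketch below, assuming the frontend's generic `FRONT_END_OP_CONVERSION_CHECK` macro and that input index 3 is `out`; the merged PR may use a different assertion:

```cpp
// Hedged sketch: reject the aten::aminmax overload that supplies `out`,
// rather than trying to mutate the out tuple in place.
FRONT_END_OP_CONVERSION_CHECK(context.input_is_none(3),
                              "aten::aminmax conversion is not supported when out is provided");
```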

@LucaTamSapienza
Contributor Author

Hi @mvafin,
I apologize for not informing you after making the changes you requested; that must have slowed down closing this PR. If you could review these changes, I would greatly appreciate it. Thank you very much, and sorry again.

Best regards,
Luca

@LucaTamSapienza LucaTamSapienza changed the title from "[PT FE] Supported aten::aminmax for pytorch models" to "[PT FE] Support aten::aminmax for pytorch models" on Apr 10, 2024
@mvafin mvafin left a comment (Contributor)

Minor change; in general it looks good.

src/frontends/pytorch/src/op/min_max.cpp (outdated)
@mvafin (Contributor)

mvafin commented Apr 11, 2024

build_jenkins

@mvafin mvafin added this pull request to the merge queue Apr 15, 2024
Merged via the queue into openvinotoolkit:master with commit e3fbb57 Apr 15, 2024
108 checks passed
@mlukasze mlukasze added this to the 2024.2 milestone Apr 15, 2024
alvoron pushed a commit to alvoron/openvino that referenced this pull request Apr 29, 2024
### Details:
 - Implemented `aten::aminmax` operation
 - Implemented test for aminmax op
 - registered inside `op_table.cpp`

### Tickets:
 - openvinotoolkit#23327

---------

Co-authored-by: Maxim Vafin <[email protected]>
Labels: category: PyTorch FE (OpenVINO PyTorch Frontend)
Linked issue that merging may close: [Good First Issue]: Support aten::aminmax for pytorch models
3 participants