[PT FE] Support aten::aminmax for pytorch models #23879
Conversation
I have some issues when testing the "out" parameter in my code; I'm getting an error that I've never encountered before:

```
___________ TestAminMax.test_aminmax[ie_device:CPU - precision:FP16 - dim:None - keepdim:False - inputs:[0, 1, 2, 3, 4, -1] - mode:out - dtype:float32] ___________

mod = <torch.ScriptMethod object at 0x136d11ae0>, inputs = [tensor([ 0.,  1.,  2.,  3.,  4., -1.]), tensor([-1.,  4.,  0.,  0.,  0.,  0.])], running_what = 'trace'

    def run_mod_and_filter_tensor_outputs(mod, inputs, running_what):
        try:
            if isinstance(inputs, dict) and example_inputs_is_kwarg:
                outs = wrap_retval(mod(**inputs))
            else:
>               outs = wrap_retval(mod(*_clone_inputs(inputs)))
E               RuntimeError: tensor does not have a device

../opvenv/lib/python3.9/site-packages/torch/jit/_trace.py:476: RuntimeError

The above exception was the direct cause of the following exception:

self = <openvino.frontend.pytorch.ts_decoder.TorchScriptPythonDecoder object at 0x161a52a90>, pt_module = aten_aminmax(), graph_element = None
example_input = [tensor([ 0.,  1.,  2.,  3.,  4., -1.]), tensor([-1.,  4.,  0.,  0.,  0.,  0.])], alias_db = None, shared_memory = True, skip_freeze = True, constant_cache = None, module_extensions = None

    def __init__(
            self,
            pt_module,
            graph_element=None,
            example_input=None,
            alias_db=None,
            shared_memory=True,
            skip_freeze=False,
            constant_cache=None,
            module_extensions=None):
        Decoder.__init__(self)
        # We store every decoder created by this decoder so that all them are not deleted until the first decoder is deleted
        self.m_decoders = []
        self._input_signature = None
        self._shared_memory = shared_memory
        self._input_is_list = False
        self.constant_cache = constant_cache if constant_cache is not None else dict()
        self.module_extensions = module_extensions
        if graph_element is None:
            try:
>               pt_module = self._get_scripted_model(
                    pt_module, example_input, skip_freeze)

../bin/arm64/Release/python/openvino/frontend/pytorch/ts_decoder.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../bin/arm64/Release/python/openvino/frontend/pytorch/ts_decoder.py:133: in _get_scripted_model
    scripted = torch.jit.trace(
../opvenv/lib/python3.9/site-packages/torch/jit/_trace.py:806: in trace
    return trace_module(
../opvenv/lib/python3.9/site-packages/torch/jit/_trace.py:1102: in trace_module
    _check_trace(
../opvenv/lib/python3.9/site-packages/torch/utils/_contextlib.py:115: in decorate_context
    return func(*args, **kwargs)
../opvenv/lib/python3.9/site-packages/torch/jit/_trace.py:567: in _check_trace
    traced_outs = run_mod_and_filter_tensor_outputs(traced_func, inputs, "trace")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

mod = <torch.ScriptMethod object at 0x136d11ae0>, inputs = [tensor([ 0.,  1.,  2.,  3.,  4., -1.]), tensor([-1.,  4.,  0.,  0.,  0.,  0.])], running_what = 'trace'

    def run_mod_and_filter_tensor_outputs(mod, inputs, running_what):
        try:
            if isinstance(inputs, dict) and example_inputs_is_kwarg:
                outs = wrap_retval(mod(**inputs))
            else:
                outs = wrap_retval(mod(*_clone_inputs(inputs)))
            outs = [out for out in outs if isinstance(out, torch.Tensor)]
            return outs
        except Exception as e:
            graph_diff_errors, tensor_compare_errors = graph_diagnostic_info()
            msg = f"encountered an exception while running the {running_what} with test inputs.\nException:\n{indent(str(e))}"
>           raise TracingCheckError(
                graph_diff_errors,
                tensor_compare_errors,
                extra_msg=msg,
            ) from e
E           torch.jit._trace.TracingCheckError: Tracing failed sanity checks!
E           encountered an exception while running the trace with test inputs.
E           Exception:
E           tensor does not have a device

../opvenv/lib/python3.9/site-packages/torch/jit/_trace.py:482: TracingCheckError
```
```cpp
if (!context.input_is_none(3)) {
    auto concat = context.mark_node(std::make_shared<v0::Concat>(OutputVector{amin, amax}, 0));
    context.mutate_input(3, concat);
```
`out` is a tuple in torch, so this will not work. You would need to access each element of the tuple and update it using low-level functions of the context and decoder, like `add_tensor_to_context`. That may be tricky to do, so I would suggest implementing this conversion only for the case when `out` is not provided. Just validate that `out` is None.
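To make that suggestion concrete, below is a minimal sketch of a conversion that rejects `out` up front. This is illustrative rather than the exact code merged in this PR: it assumes the converter sees the inputs as (self, dim, keepdim, out), so that `out` is input 3, and it assumes the frontend helpers `num_inputs_check` and `get_axes_range` from the frontend's `utils.hpp` for input validation and the dim=None case.

```cpp
// Minimal sketch, not the exact code merged in this PR.
// Assumes aten::aminmax inputs arrive as (self, dim, keepdim, out),
// so "out" is input 3; num_inputs_check and get_axes_range are assumed
// to come from the frontend's utils.hpp.
#include "openvino/frontend/pytorch/node_context.hpp"
#include "openvino/op/reduce_max.hpp"
#include "openvino/op/reduce_min.hpp"
#include "utils.hpp"

namespace ov {
namespace frontend {
namespace pytorch {
namespace op {

OutputVector translate_aminmax(const NodeContext& context) {
    num_inputs_check(context, 1, 4);  // self, dim, keepdim, out

    // Follow the review suggestion: only convert when out is None,
    // since updating the out tuple would need low-level decoder calls.
    FRONT_END_OP_CONVERSION_CHECK(context.input_is_none(3),
                                  "aten::aminmax: out argument is not supported.");

    auto input = context.get_input(0);

    bool keep_dims = false;
    if (!context.input_is_none(2)) {
        keep_dims = context.const_input<bool>(2);
    }

    // dim=None reduces over all axes; get_axes_range builds a 0..rank-1 range.
    Output<Node> axes;
    if (context.input_is_none(1)) {
        axes = get_axes_range(context, 0);
    } else {
        axes = context.get_input(1);
    }

    auto amin = context.mark_node(std::make_shared<ov::op::v1::ReduceMin>(input, axes, keep_dims));
    auto amax = context.mark_node(std::make_shared<ov::op::v1::ReduceMax>(input, axes, keep_dims));
    return {amin, amax};
}

}  // namespace op
}  // namespace pytorch
}  // namespace frontend
}  // namespace ov
```

Restricting the conversion this way keeps the common functional form supported while avoiding per-element mutation of the `out` tuple.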
Yes, I knew I had to return a tuple since PyTorch was returning one, but since a tuple is immutable it was causing problems in my mind...
Anyway, I added a check and an assert in case `out` is used as a parameter.
Hi @mvafin,

Best regards,
Minor change; in general it looks good.
Co-authored-by: Maxim Vafin <[email protected]>
build_jenkins
### Details:
- Implemented `aten::aminmax` operation
- Implemented test for aminmax op
- Registered inside `op_table.cpp`

### Tickets:
- openvinotoolkit#23327

---------

Co-authored-by: Maxim Vafin <[email protected]>
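For reference, the registration mentioned in the details above follows the existing pattern in `op_table.cpp`. A hedged sketch: the `OP_CONVERTER` declaration macro and the table-entry shape mirror the file's own conventions, and the surrounding entries are elided here.

```cpp
// Sketch of how the converter is exposed in op_table.cpp
// (surrounding entries elided; the pattern mirrors existing ops).
OP_CONVERTER(translate_aminmax);  // declares OutputVector translate_aminmax(const NodeContext&)

// Entry in the frontend's supported-ops map:
//   {"aten::aminmax", op::translate_aminmax},
```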