[TFLite] Add transpose_conv to TFLite parser #4440
Conversation
@FrozenGene Could you have a look?
Force-pushed 9ad5d31 to bdf24fd
```python
# TF frontend supports 'SAME' padding for kernel 1x1 only. Let's do the same here
if padding == Padding.SAME:
    assert (kernel_h, kernel_w) == (1, 1), \
        "SAME padding is supported for kernel (1,1) only"
```
If kh/kw is 3x3, what is the current error msg?
Where? In TF or TFLite?
In TFLite it is:
```
  File "test_forward.py", line 519, in test_forward_transpose_conv
    _test_transpose_conv([4, 32, 32, 16], [3, 3, 5, 16], [4, 32, 32, 5], [1, 1], 'SAME')
  File "test_forward.py", line 504, in _test_transpose_conv
    compare_tflite_with_tvm(data_array, 'Placeholder:0', [in_data], [out])
  File "test_forward.py", line 162, in compare_tflite_with_tvm
    num_output=len(out_names), out_names=out_names)
  File "test_forward.py", line 75, in run_tvm_graph
    dtype_dict=dtype_dict)
  File "/home/dlc/workplace/apivovarov/incubator-tvm/python/tvm/relay/frontend/tflite.py", line 1572, in from_tflite
    op_converter.convert_op_to_relay()
  File "/home/dlc/workplace/apivovarov/incubator-tvm/python/tvm/relay/frontend/tflite.py", line 125, in convert_op_to_relay
    ret = self.convert_map[op_code_str](op)
  File "/home/dlc/workplace/apivovarov/incubator-tvm/python/tvm/relay/frontend/tflite.py", line 1440, in convert_transpose_conv
    "SAME padding is supported for kernel (1,1) only"
AssertionError: SAME padding is supported for kernel (1,1) only
```
Both should be the same. I.e., when our model is a 3x3 conv_transpose, what is the current error msg of TVM?
I mean, if we remove this assert, what will happen in TVM? If we add this assert, we cannot support 3x3 conv_transpose.
In TF they only test kernel 1x1 for SAME padding (test_forward.py L381). Other kernels will fail for SAME padding; see the sketch below. More info on it: https://discuss.tvm.ai/t/why-we-only-support-kernel-1-1-for-tf-conv2d-transpose-same/4957
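A minimal sketch (not part of this PR, assuming the TF 1.x graph API; names and shapes are illustrative, mirroring the test above) of the only SAME case the TF frontend test exercises — a 1x1 kernel with stride 1, where the output shape already matches the input:

```python
import numpy as np
import tensorflow as tf

data = tf.placeholder(tf.float32, [4, 32, 32, 16], name='Placeholder')
# Filter layout for conv2d_transpose is HWOI: 1x1 kernel, 5 out channels, 16 in channels.
kernel = tf.constant(np.random.rand(1, 1, 5, 16), dtype=tf.float32)
# With a 1x1 kernel and stride 1, the SAME output equals the input spatial size.
out = tf.nn.conv2d_transpose(data, kernel, output_shape=[4, 32, 32, 5],
                             strides=[1, 1, 1, 1], padding='SAME')
```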
Could we support non-1x1 conv_transpose too? I think maybe it is a good time to support it completely, no matter TF or TFLite.
There are two types of padding in TF and TFLite: VALID and SAME.
The conv_transpose op does the opposite of what conv2d does.
conv2d output size is the same as or smaller than its input: the bigger the kernel, the smaller the output.
conv_transpose is the opposite: the bigger the kernel, the bigger the output.
We can use any kernel with padding 'VALID' — 2x2, 3x3, etc.
But if we use padding 'SAME', then the output size should be the same as the input (at stride 1, as in the tests above).
If the kernel is 1x1, the output is the same as the input. So, kernel 1x1 is implicitly SAME.
If we increase the conv_transpose kernel size, the output will have extra padding.
In order to make the output size the SAME as the input, we need to remove padding from the output; see the sketch below.
The model which needs the conv_transpose op is palm_detection.tflite. It uses VALID padding.
It looks like in most cases people use conv2d and conv_transpose with VALID padding.
I think it is fine if we merge the PR with the known limitation for padding SAME (it is not that frequently used anyway). We should probably wait till the TF frontend supports SAME padding for non-1x1 kernels and then do the same in the TFLite frontend.
Maybe they will decide to add a VALID/SAME padding field to the Relay op directly. In that case we would just pass the padding type to the Relay op as-is.
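A minimal sketch (not from the PR; the helper name is made up) of the standard conv2d_transpose output-size formulas, showing why a non-1x1 kernel with SAME padding would require cropping the output:

```python
def deconv_out_size(in_size, kernel, stride, padding):
    """Output size of conv2d_transpose along one spatial dimension
    (standard TF shape formulas)."""
    if padding == 'VALID':
        # Full deconvolution output: grows with the kernel size.
        return (in_size - 1) * stride + kernel
    if padding == 'SAME':
        # Fixed at input * stride, regardless of kernel size.
        return in_size * stride
    raise ValueError("unknown padding: " + padding)

print(deconv_out_size(32, 1, 1, 'VALID'))  # 32 -> 1x1 kernel is implicitly SAME
print(deconv_out_size(32, 3, 1, 'VALID'))  # 34 -> 3x3 kernel adds 2 extra rows/cols
print(deconv_out_size(32, 3, 1, 'SAME'))   # 32 -> requires cropping those 2 extras
```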
Ok.
Force-pushed bdf24fd to 0620b5b
LGTM
Thanks @apivovarov @FrozenGene
This PR adds transpose_conv to the TFLite frontend.
conv2d_transpose TF docs: https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/conv2d_transpose
TFLite schema
Limitations: SAME padding is supported for kernel (1,1) only (same as the TF frontend).