
[TFLite] Add transpose_conv to TFLite parser #4440

Merged · 1 commit into apache:master · Dec 1, 2019

Conversation

apivovarov (Contributor)

This PR adds transpose_conv to the TFLite frontend.

conv2d_transpose TF docs:
https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/nn/conv2d_transpose

TFLite schema:

table TransposeConvOptions {
  padding:Padding;
  stride_w:int;
  stride_h:int;
}
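
For reference, the generated Python flatbuffers bindings expose these fields as Padding(), StrideH() and StrideW(). A minimal sketch of reading them in the parser, assuming the usual generated tflite bindings and an op whose BuiltinOptions table is already located (illustrative, not the exact PR code):

from tflite.TransposeConvOptions import TransposeConvOptions

def read_transpose_conv_options(op):
    # op.BuiltinOptions() returns the flatbuffers table holding the options
    opts = TransposeConvOptions()
    buf = op.BuiltinOptions()
    opts.Init(buf.Bytes, buf.Pos)
    return opts.Padding(), opts.StrideH(), opts.StrideW()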

Limitations:

  • Similar to the TF frontend, the TFLite frontend supports 'SAME' padding for 1x1 kernels only.
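
For illustration, a hypothetical sketch of the mapping onto Relay (not the merged code; the weight permutation and the layout strings are assumptions here, based on TFLite storing TRANSPOSE_CONV weights in OHWI order):

import numpy as np
from tvm import relay

def make_transpose_conv(in_expr, weight_ohwi, stride_h, stride_w,
                        kernel_h, kernel_w, out_channels, out_dtype):
    # Assumption: permute TFLite's OHWI weights into OIHW for Relay
    weight_oihw = np.transpose(weight_ohwi, (0, 3, 1, 2))
    weight_expr = relay.const(weight_oihw)
    return relay.nn.conv2d_transpose(
        in_expr, weight_expr,
        strides=(stride_h, stride_w),
        padding=(0, 0),   # VALID: no implicit padding
        channels=int(out_channels),
        kernel_size=(int(kernel_h), int(kernel_w)),
        data_layout="NHWC",
        kernel_layout="OIHW",
        out_dtype=out_dtype)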

apivovarov (Contributor, Author)

@FrozenGene Could you have a look?

apivovarov force-pushed the tflite_transpose_conv branch 3 times, most recently from 9ad5d31 to bdf24fd on November 28, 2019 02:03
Comment on lines +1437 to +1440
# TF frontend supports 'SAME' padding for kernel 1x1 only. Let's do the same here
if padding == Padding.SAME:
    assert (kernel_h, kernel_w) == (1, 1), \
        "SAME padding is supported for kernel (1,1) only"
Member

If kh/kw is 3x3, what is the current error msg?

apivovarov (Contributor, Author)

Where? In TF or TFLite?

apivovarov (Contributor, Author) · Nov 28, 2019

In TFLite it is:


  File "test_forward.py", line 519, in test_forward_transpose_conv
    _test_transpose_conv([4, 32, 32, 16], [3, 3, 5, 16], [4, 32, 32, 5], [1, 1], 'SAME')

  File "test_forward.py", line 504, in _test_transpose_conv
    compare_tflite_with_tvm(data_array, 'Placeholder:0', [in_data], [out])

  File "test_forward.py", line 162, in compare_tflite_with_tvm
    num_output=len(out_names), out_names=out_names)

  File "test_forward.py", line 75, in run_tvm_graph
    dtype_dict=dtype_dict)

  File "/home/dlc/workplace/apivovarov/incubator-tvm/python/tvm/relay/frontend/tflite.py", line 1572, in from_tflite
    op_converter.convert_op_to_relay()

  File "/home/dlc/workplace/apivovarov/incubator-tvm/python/tvm/relay/frontend/tflite.py", line 125, in convert_op_to_relay
    ret = self.convert_map[op_code_str](op)

  File "/home/dlc/workplace/apivovarov/incubator-tvm/python/tvm/relay/frontend/tflite.py", line 1440, in convert_transpose_conv
    "SAME padding is supported for kernel (1,1) only"

AssertionError: SAME padding is supported for kernel (1,1) only

Member


Both should be the same, i.e. when our model is a 3x3 conv_transpose, what is the current error msg of TVM?

Member


I mean, if we remove this assert, what will happen in TVM? If we add this assert, we cannot support 3x3 conv_transpose.

apivovarov (Contributor, Author)

In TF they only test kernel (1,1) for SAME padding:
test_forward.py L381

test_forward.py L385

Other kernels will fail for SAME padding. More info: https://discuss.tvm.ai/t/why-we-only-support-kernel-1-1-for-tf-conv2d-transpose-same/4957

Member


Could we support non-1x1 conv_transpose too? I think maybe it is a good time to do it completely, no matter TF or TFLite.

apivovarov (Contributor, Author) · Nov 28, 2019

There are two types of padding in TF and TFLite: VALID and SAME.
conv_transpose does the opposite of what conv2d does.
conv2d output size is the same as or smaller than its input: the bigger the kernel, the smaller the output.
conv_transpose is the opposite: the bigger the kernel, the bigger the output.
We can use any kernel with padding 'VALID': 2x2, 3x3, etc.
But if we use padding 'SAME', the output size should be the same as the input.
If the kernel is 1x1, the output is the same as the input, so a 1x1 kernel is implicitly SAME.
If we increase the conv_transpose kernel size, the output will have extra padding.
To make the output size the SAME as the input, we need to remove that padding from the output (see the worked size check below).
The model which needs the conv_transpose op is palm_detection.tflite, and it uses VALID padding.
It looks like in most cases people use conv2d and conv_transpose with VALID padding.
I think it is fine to merge the PR with the known limitation for SAME padding (it is not that frequently used anyway). We should probably wait until the TF frontend supports SAME padding for non-1x1 kernels and then do the same in the TFLite frontend.
Maybe they will decide to add a VALID/SAME padding field to the Relay op directly. In that case we would just pass the padding type to the Relay op as-is.
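
A quick worked check of that size arithmetic (standard transpose-conv output formulas, not code from the PR):

def transpose_conv_out_size(in_size, kernel, stride, padding):
    # Standard deconvolution output-size arithmetic
    if padding == 'VALID':
        return (in_size - 1) * stride + kernel
    if padding == 'SAME':
        return in_size * stride
    raise ValueError(padding)

assert transpose_conv_out_size(32, 1, 1, 'VALID') == 32  # 1x1: output already matches input
assert transpose_conv_out_size(32, 3, 1, 'VALID') == 34  # 3x3 adds 2 extra rows/cols
assert transpose_conv_out_size(32, 3, 1, 'SAME') == 32   # SAME must crop those 2 away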

Member


Ok.

Resolved review threads:
python/tvm/relay/frontend/tflite.py (outdated)
tests/python/frontend/tflite/test_forward.py (outdated)
tests/python/frontend/tflite/test_forward.py
apivovarov force-pushed the tflite_transpose_conv branch from bdf24fd to 0620b5b on November 30, 2019 06:58
FrozenGene (Member)

LGTM

tqchen merged commit a1b6f46 into apache:master on Dec 1, 2019
tqchen (Member) commented Dec 1, 2019

Thanks @apivovarov @FrozenGene
