
[Relay][TOPI]Fix meaning of conv2d_transpose output_padding parameter #4318

Merged: 19 commits into apache:master on Jan 11, 2020

Conversation

abergeron
Contributor

I've fixed the meaning of output_padding to be more in line with what other machine learning frameworks and libraries intend, and also to make it actually useful for the gradient.

This changed the definition of conv2d_transpose in TOPI in a minor way, but I tried to make the change as transparent as possible.
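
For reference, here is a minimal, shape-only sketch of the output-size convention this aligns with, assuming PyTorch-style semantics where output_padding adds extra rows/columns on one side of the output (an illustration, not the TOPI implementation):

```python
# Shape-only sketch (assumed PyTorch-style semantics, not the TOPI code).
def conv2d_transpose_out_size(in_size, kernel, stride, padding, output_padding):
    # output_padding disambiguates which forward-conv input size to recover.
    return (in_size - 1) * stride - 2 * padding + kernel + output_padding

def conv2d_out_size(in_size, kernel, stride, padding):
    return (in_size + 2 * padding - kernel) // stride + 1

# Both 7x7 and 8x8 inputs give a 4x4 output with kernel=3, stride=2, padding=1,
# so the transpose (and hence the gradient) needs output_padding to pick between them.
assert conv2d_out_size(7, 3, 2, 1) == 4 and conv2d_out_size(8, 3, 2, 1) == 4
assert conv2d_transpose_out_size(4, 3, 2, 1, output_padding=0) == 7
assert conv2d_transpose_out_size(4, 3, 2, 1, output_padding=1) == 8
```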

@vinx13

@vinx13 vinx13 self-assigned this Nov 12, 2019
@vinx13
Member

vinx13 commented Nov 13, 2019

There are some issues with CI; please retrigger the CI.

@vinx13
Member

vinx13 commented Nov 13, 2019

@abergeron
Contributor Author

I don't know how to retrigger CI besides adding a dummy commit. If there is a better way, please do tell.

@junrushao
Member

git commit --allow-empty -m "Trigger CI"

@vinx13
Member

vinx13 commented Nov 14, 2019

CI is still sad :(
You can also git commit --amend to reset the last commit and force push, instead of adding a dummy commit

@abergeron abergeron force-pushed the conv2d_transpose_fix branch from 4a8cd08 to 5c508a0 Compare November 14, 2019 04:05
@abergeron
Contributor Author

It looks like I missed the VTA template. This one doesn't have any tests for the output padding, which makes me slightly uncomfortable, but the only test there is covers a very specific workload, so I don't want to mess with it.

@vinx13
Member

vinx13 commented Nov 14, 2019

> It looks like I missed the VTA template. This one doesn't have any tests for the output padding, which makes me slightly uncomfortable, but the only test there is covers a very specific workload, so I don't want to mess with it.

cc @tmoreau89

@abergeron
Contributor Author

I would appreciate some help with the VTA failure because it doesn't appear to be related to something I've done (I didn't add any if_then_else nodes), but at the same time it is in something I've modified (the conv2d_transpose test).

So maybe I broke something, but I'm really not sure.

@tmoreau89
Contributor

@abergeron is the issue right now that the VTA test will fail when you set output padding to (1, 1)? Or does it also fail when it's set to (0, 0) (the current test case)?

@abergeron
Contributor Author

It fails with (0, 0) in the CI right now.

It might also be a good idea to test it with non-zero values to make sure it works.

@tmoreau89
Contributor

I see; I'm a little confused, because the test case with output padding set to (0, 0) should be inconsequential; however, it seems to break the VTA test. How time-critical is this PR? I can try to reproduce the test in your branch over the weekend.

@abergeron
Contributor Author

This is not super time critical, so you can take time to make sure it works properly.

But I would like this to be merged within a week or two if possible.

@tmoreau89
Contributor

@abergeron ack, I'll investigate to see what causes the failure at the moment. Will report back by Tuesday.

@abergeron
Contributor Author

@tmoreau89 Any news?

@tmoreau89
Contributor

@abergeron got sidetracked with other commitments, I'll reproduce the issue tomorrow.

@tmoreau89
Contributor

FYI, I was able to reproduce the issue in your branch; I'll look more into what's causing it tomorrow.

@tmoreau89
Contributor

@abergeron: I have good news and bad news. The good news is that I was able to find the root cause of the observed bug. By adding the output_padding argument to the conv2d_transpose_nchw function signature, we are essentially changing the TOPHUB schedule lookup convention for that operator. In other words, AutoTVM fails to find the schedule for the conv2d_transpose operator because the argument list of the operator doesn't match. Because of this, the schedule defaults to one that is illegal for VTA, which triggers the error we're seeing in the CI.
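
To illustrate the lookup miss, here is a hand-rolled sketch (not the actual AutoTVM internals); the field meanings follow the tophub log entry quoted below:

```python
# Hand-rolled illustration of the schedule lookup miss (not real AutoTVM code).
# Fields mirror the tophub entry below: packed data shape/dtype, packed kernel
# shape/dtype, strides, padding, out dtype, and (newly) output_padding.
old_key = ("conv2d_transpose_nchw",
           (1, 64, 4, 4, 1, 16, "int8"),
           (32, 64, 4, 4, 16, 16, "int8"),
           (2, 2), (1, 1), "int32")

new_key = old_key + ((0, 0),)            # the signature now carries output_padding

tuned = {old_key: "tuned VTA schedule"}  # entries recorded before this PR
print(tuned.get(new_key))                # None -> falls back to a default, illegal on VTA
```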

In order to circumvent the error, the fix will be straightforward on your end: just change v0.06 to v0.07 here: https://github.com/apache/incubator-tvm/blob/master/python/tvm/autotvm/tophub.py#L59

I've updated the schedule for VTA with this commit: https://github.com/uwsampl/tvm-distro/commits?author=tmoreau89

Essentially I changed the DCGAN schedules from
["conv2d_transpose_nchw", [1, 64, 4, 4, 1, 16, "int8"], [32, 64, 4, 4, 16, 16, "int8"], [2, 2], [1, 1], "int32"]
to
["conv2d_transpose_nchw", [1, 64, 4, 4, 1, 16, "int8"], [32, 64, 4, 4, 16, 16, "int8"], [2, 2], [1, 1], "int32", [0, 0]]

The bad news is that this will need to be modified for all hardware targets under tophub. For instance, the CUDA schedule will need to change (https://github.com/uwsampl/tvm-distro/blob/master/tophub/cuda_v0.06.log#L690), among other targets.

In order to do so, I recommend first creating new schedule files under a new "package version" and then switching the package version in tophub.py in your PR, so as not to break other unit tests. The reason this is not causing an error elsewhere is that CPU and GPU schedules default to valid, albeit slow, ones (VTA is just unstable and not all schedules will lead to correct execution; we need a better defaulting mechanism).

Finally, I also created a branch that fixes some scripts under VTA that will be affected by your change. If you could cherry-pick my commits into your branch, that would be great: https://github.com/tmoreau89/tvm/tree/4318_fix

Contributor

@tmoreau89 tmoreau89 left a comment


Changes required:

@abergeron
Contributor Author

I consider this to be 100% good news, since I would only have to update the schedules. I needed to figure out how to use autotvm for other purposes soon, so I'll get on that.

@tmoreau89
Contributor

Great then! Happy to provide you with guidance as you go through the changes in tophub.

@abergeron abergeron force-pushed the conv2d_transpose_fix branch from 6ee61d5 to cab70d6 Compare November 26, 2019 23:53
@abergeron
Contributor Author

abergeron commented Nov 26, 2019

For now, I've included your fixes and rebased on the current master. I'll do the tuning for non-VTA targets later.

@tmoreau89
Contributor

Cool, let's see if the CI passes; in the meantime, I don't think it's necessary to tune the targets, since it's just a matter of changing the entries to include the new operator argument.

@tmoreau89
Contributor

In the case of VTA, I had to add the new field to the argument list so as not to break the schedule lookup mechanism:

from
["conv2d_transpose_nchw", [1, 64, 4, 4, 1, 16, "int8"], [32, 64, 4, 4, 16, 16, "int8"], [2, 2], [1, 1], "int32"]
to
["conv2d_transpose_nchw", [1, 64, 4, 4, 1, 16, "int8"], [32, 64, 4, 4, 16, 16, "int8"], [2, 2], [1, 1], "int32", [0, 0]]

@abergeron
Contributor Author

I discovered that backends other than VTA don't have convenient scripts to do the tuning. I will probably write some so that we have a script in the repo that can reproduce tuning files in tophub.

@tmoreau89
Contributor

Hmmm, is the reason you want to re-tune the schedules to cover cases where output padding = (1, 1)?

@abergeron
Contributor Author

abergeron commented Nov 27, 2019

I forgot the tophub.py file update because I thought it was in your branch.

Also, if I don't really need to retune anything, then I would rather save the time. But there might have been some output padding in the model that uses conv2d_transpose on tophub; I'm not sure exactly which model that is, however.

In any case, I think it would be very good to have a reference script that produces the tuning files in tophub, so it would be easy to recreate the files when something changes.

@abergeron
Contributor Author

I've tried all the models in the benchmark files (in apps/benchmarks) and none of them use any convolution. So I have no idea where the conv2d_transpose in the tuning files comes from.

If someone knows where those come from, it would help a lot. Also, I think this kind of reinforces the idea that a publicly available script to make those files would help a lot, unless one is already present somewhere and I missed it.

@abergeron abergeron force-pushed the conv2d_transpose_fix branch from ab80867 to d958cd0 Compare January 7, 2020 18:50
@abergeron abergeron force-pushed the conv2d_transpose_fix branch from d958cd0 to 24cba20 Compare January 7, 2020 19:00
@@ -55,7 +55,7 @@
     'mali': "v0.05",
     'intel_graphics': "v0.01",

-    'vta': "v0.06",
+    'vta': "v0.07",
Member


The cuda and arm_cpu versions also need to be updated.

Contributor Author


done

Contributor

@tmoreau89 tmoreau89 left a comment


Revisiting the PR, overall everything looks good. Thank you @abergeron for handling the required changes.

@vinx13 vinx13 merged commit dcf7fbf into apache:master Jan 11, 2020
@vinx13
Member

vinx13 commented Jan 11, 2020

@abergeron Thanks for the fix

@abergeron abergeron deleted the conv2d_transpose_fix branch January 13, 2020 17:55
@icemelon
Member

icemelon commented Jan 15, 2020

@abergeron @vinx13 @tmoreau89
I found two problems in this PR.

  1. In this line, h+dh can potentially go out of bounds: max(h) = out_h - 1 = in_h - filter_h + output_padding[0] and max(dh) = filter_h - 1, therefore max(h+dh) = in_h + output_padding[0] - 1. When output_padding[0] >= 1, max(h+dh) >= in_h, which is outside the data_pad height boundary (see the sketch after this comment). The same applies to w+dw.
  2. The x86 conv2d_transpose implementation is different from the generic conv2d_transpose. In x86, after you call conv2d_transpose_nchw_preprocess, you directly call the normal conv2d without using output_padding.

I'll revert this PR for now. Could you fix these two bugs and double-check whether the cuda and arm_cpu implementations are correct? Further, could you investigate why the CI doesn't catch these bugs?
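
A small sketch of the index arithmetic behind point 1 (names follow the comment above; illustrative values, not the actual TOPI kernel):

```python
# Out-of-bounds arithmetic from point 1, with example values.
in_h, filter_h = 8, 3
output_padding = (1, 1)

out_h = in_h - filter_h + 1 + output_padding[0]  # so max(h) = out_h - 1
max_h = out_h - 1
max_dh = filter_h - 1
max_read = max_h + max_dh                        # = in_h + output_padding[0] - 1

# With output_padding[0] >= 1 the read index reaches in_h, past the last
# valid row of data_pad (valid indices are 0 .. in_h - 1).
print(max_read, max_read >= in_h)                # 8 True
```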

icemelon added a commit that referenced this pull request Jan 15, 2020
tqchen pushed a commit that referenced this pull request Jan 15, 2020
alexwong pushed a commit to alexwong/tvm that referenced this pull request Feb 26, 2020
…apache#4318)

* Add output_padding to generic

* Add output_padding to the reference impl

* Add output_padding to arm_cpu

* Add output_padding to the test

* Add output_padding for cuda

* Add output_padding for x86

* Make use of the new output_padding argument in Relay

* Adjust conv2d_transpose Relay test

* Fix lint errors

* Fix the VTA declaration of conv2d_transpose

* support for output padding in conv2d transpose

* some output padding will break IR pass

* Fix new conv2d_transpose test

* Update tophub

* Fix conv1d output_padding too.

* Fix the conv1d_transpose reference function.

* Fix the cuda impl

* fix the topi test for conv1d

* Update the versions in tophub.py

Co-authored-by: Thierry Moreau <[email protected]>
alexwong pushed a commit to alexwong/tvm that referenced this pull request Feb 26, 2020
alexwong pushed a commit to alexwong/tvm that referenced this pull request Feb 28, 2020
alexwong pushed a commit to alexwong/tvm that referenced this pull request Feb 28, 2020
zhiics pushed a commit to neo-ai/tvm that referenced this pull request Mar 2, 2020
zhiics pushed a commit to neo-ai/tvm that referenced this pull request Mar 2, 2020
tqchen pushed a commit to tqchen/tvm that referenced this pull request Mar 29, 2020
tqchen pushed a commit to tqchen/tvm that referenced this pull request Mar 29, 2020
@abergeron abergeron restored the conv2d_transpose_fix branch June 8, 2020 18:00