
Fixed onnx test failures when run on a cpu backend #3764

Merged (2 commits into apache:master, Aug 19, 2019)

Conversation

@tristan-arm (Contributor) commented Aug 13, 2019

One of the error messages we're seeing:

```
ERROR: test_forward.test_inception
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/workspace/tests/python/frontend/onnx/test_forward.py", line 1109, in test_inception
    check_torch_conversion(torchvision.models.inception_v3, (1,3,224,224))
  File "/workspace/tests/python/frontend/onnx/test_forward.py", line 1086, in check_torch_conversion
    expr, params = relay.frontend.from_onnx(onnx_model, shape=shapes)
  File "/workspace/python/tvm/relay/frontend/onnx.py", line 1263, in from_onnx
    mod, params = g.from_onnx(graph, opset)
  File "/workspace/python/tvm/relay/frontend/onnx.py", line 1042, in from_onnx
    raise ValueError("Must provide an input shape for {0}.".format(i_name))
ValueError: Must provide an input shape for 'input.1'.
```
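For reference, the error above is raised when the ONNX frontend is not given a shape for one of the graph inputs. A minimal sketch of the kind of call that avoids it (the model path here is hypothetical; the input name should be read from the exported graph, which for this torch export is 'input.1'):

```python
# Sketch only: hand from_onnx an explicit shape dict keyed by the
# graph's actual input name instead of a hard-coded string.
import onnx
from tvm import relay

onnx_model = onnx.load("inception_v3.onnx")  # hypothetical export path

# Torch exports of this era name the first input 'input.1'; reading it
# from the graph keeps the test robust to naming changes.
input_name = onnx_model.graph.input[0].name
shapes = {input_name: (1, 3, 224, 224)}

# With the shape dict supplied, from_onnx no longer raises
# "Must provide an input shape for 'input.1'."
expr, params = relay.frontend.from_onnx(onnx_model, shape=shapes)
```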

@zhiics (Member) commented Aug 13, 2019

Thanks for the fix. Would you mind if I ask whether this is a version problem or a pre-existing bug? I just took a quick look at this function, check_torch_conversion. It seems that the returned mod and params are not used.

If so, it might indicate that we never actually execute the converted graphs for models that call this function. Would you be interested in verifying that? Thanks.

@cchung100m (Contributor) left a comment

LGTM

@tristan-arm (Contributor, Author)

@zhiics no problem. We're using the CI docker images. The behaviour we observed was that the tests calling this function pass with the GPU image (e.g. http://ci.tvm.ai:8080/blue/organizations/jenkins/tvm/detail/master/1443/pipeline/240) but fail when run with the CPU image. I don't think the CPU image is run automatically for this test suite, though.

Also, as you say, the function output is not checked. However, most of the test cases calling check_torch_conversion are commented out, so I guess it's a work in progress; I can't really speak for the original author's intent here, though.

@zhiics (Member) commented Aug 14, 2019

@tristan-arm Thanks. Yeah, I didn't know whether that was intentional, but I think we should probably test them; otherwise it's hard to say that the converted models are "correct" in terms of accuracy. What do you think?

@tristan-arm (Contributor, Author)

> @tristan-arm Thanks. Yeah, I didn't know whether that was intentional, but I think we should probably test them; otherwise it's hard to say that the converted models are "correct" in terms of accuracy. What do you think?

Agreed; I've updated the function to include an output comparison.
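For context, a rough sketch of what such an output comparison might look like; the helper name, tolerances, and the llvm CPU target are assumptions, not the exact code in the PR:

```python
# Sketch: compare the PyTorch model's output against the Relay-converted
# graph compiled for a CPU target (2019-era TVM graph_runtime API).
import numpy as np
import torch
import tvm
from tvm import relay
from tvm.contrib import graph_runtime

def check_outputs(torch_model, onnx_model, input_shape, rtol=1e-3, atol=1e-5):
    data = np.random.uniform(size=input_shape).astype("float32")

    # Reference result from the original PyTorch model.
    torch_model.eval()
    with torch.no_grad():
        expected = torch_model(torch.from_numpy(data)).numpy()

    # Convert and compile the ONNX graph for CPU, then run it.
    input_name = onnx_model.graph.input[0].name
    mod, params = relay.frontend.from_onnx(onnx_model,
                                           shape={input_name: input_shape})
    graph, lib, params = relay.build(mod, target="llvm", params=params)
    runtime = graph_runtime.create(graph, lib, tvm.cpu(0))
    runtime.set_input(input_name, tvm.nd.array(data))
    runtime.set_input(**params)
    runtime.run()
    actual = runtime.get_output(0).asnumpy()

    # Fail the test if the converted graph diverges from the reference.
    np.testing.assert_allclose(expected, actual, rtol=rtol, atol=atol)
```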

@zhiics (Member) commented Aug 16, 2019

@tristan-arm Could you rebase to fix the CI failure? The error looks unrelated to your PR.

@tristan-arm (Contributor, Author)

> @tristan-arm Could you rebase to fix the CI failure? The error looks unrelated to your PR.

Done. CI is all green now if you want to take another look. Thanks.

@zhiics (Member) commented Aug 19, 2019

@tristan-arm Thanks.

@zhiics zhiics merged commit 2b31404 into apache:master Aug 19, 2019
wweic pushed a commit to wweic/tvm that referenced this pull request Sep 16, 2019
* Fixed onnx test failures when run on a cpu backend

* Updated check_torch_conversion function to include output comparison
wweic pushed a commit to neo-ai/tvm that referenced this pull request Sep 16, 2019
* Fixed onnx test failures when run on a cpu backend

* Updated check_torch_conversion function to include output comparison