Fixed onnx test failures when run on a cpu backend #3764
Conversation
Thanks for the fix. May I ask whether this is a version problem or a pre-existing bug? I just took a quick look into this function and it seems the output is never actually checked. If so, it might indicate that we never actually executed the converted graphs for the models that call this function. Would you be interested in verifying that? Thanks.
LGTM
@zhiics no problem - we're using the CI docker images; the behaviour we observed was that the tests calling this function would pass when run with the GPU image (e.g. http://ci.tvm.ai:8080/blue/organizations/jenkins/tvm/detail/master/1443/pipeline/240) but fail when run with the CPU images - I don't think those are used automatically for this test suite though. Also, as you say, the function output is not checked; however, most of the test cases calling …
@tristan-arm Thanks. Yeah, I didn't know if that was intentional or not, but I think we should probably test them. Otherwise, it's hard to say that the converted models are "correct" in the sense of accuracy. What do you think?
Agreed - I've updated the function to include an output comparison.
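(For reference, a minimal sketch of what adding an output comparison to a helper like check_torch_conversion could look like - this is not the exact diff from this PR; the helper signature, target, tolerances, and the 0.6-era TVM/Relay API calls below are assumptions and may differ from both the actual change and newer TVM releases.)

```python
import io

import numpy as np
import onnx
import torch
import tvm
from tvm import relay
from tvm.contrib import graph_runtime


def check_torch_conversion(model_fn, input_shape, target="llvm", ctx=tvm.cpu(0)):
    """Export a torchvision model to ONNX, convert it with Relay and
    compare the TVM output against the PyTorch output."""
    model = model_fn(pretrained=False).eval()
    dummy_input = torch.randn(*input_shape)

    # Export the PyTorch model to an in-memory ONNX model.
    buf = io.BytesIO()
    torch.onnx.export(model, dummy_input, buf)
    onnx_model = onnx.load_model_from_string(buf.getvalue())

    # Key the shape dict by the name the exporter gave the graph input
    # (newer exporters use names like 'input.1' rather than '0').
    input_name = onnx_model.graph.input[0].name
    mod, params = relay.frontend.from_onnx(onnx_model, shape={input_name: input_shape})

    # Build and run on the requested target (CPU by default).
    with relay.build_config(opt_level=1):
        graph, lib, params = relay.build(mod, target=target, params=params)
    runtime = graph_runtime.create(graph, lib, ctx)
    runtime.set_input(input_name, tvm.nd.array(dummy_input.numpy()))
    runtime.set_input(**params)
    runtime.run()
    tvm_out = runtime.get_output(0).asnumpy()

    # Compare against the reference PyTorch output.
    with torch.no_grad():
        torch_out = model(dummy_input).numpy()
    np.testing.assert_allclose(torch_out, tvm_out, rtol=1e-3, atol=1e-3)
```

With a comparison along these lines in place, a test that converts e.g. Inception or ResNet fails if the converted graph produces wrong numbers, rather than only checking that conversion succeeded.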
@tristan-arm Could you rebase to fix the CI failure? The error looks unrelated to your PR.
Done - CI is all green now if you want to take another look. Thanks.
@tristan-arm Thanks.
* Fixed onnx test failures when run on a cpu backend
* Updated check_torch_conversion function to include output comparison
One of the error messages we're seeing:
ERROR: test_forward.test_inception
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/workspace/tests/python/frontend/onnx/test_forward.py", line 1109, in test_inception
    check_torch_conversion(torchvision.models.inception_v3, (1,3,224,224))
  File "/workspace/tests/python/frontend/onnx/test_forward.py", line 1086, in check_torch_conversion
    expr, params = relay.frontend.from_onnx(onnx_model, shape=shapes)
  File "/workspace/python/tvm/relay/frontend/onnx.py", line 1263, in from_onnx
    mod, params = g.from_onnx(graph, opset)
  File "/workspace/python/tvm/relay/frontend/onnx.py", line 1042, in from_onnx
    raise ValueError("Must provide an input shape for '{0}'.".format(i_name))
ValueError: Must provide an input shape for 'input.1'.