[VTA][Relay] Extending Vision model coverage compilation for VTA #3740

Merged: 18 commits into apache:master on Sep 5, 2019

Conversation

tmoreau89 (Contributor) commented Aug 9, 2019

This fixes Relay compilation for ResNet18_v2, ResNet34_v2, ResNet50_v2, ResNet101_v2, ResNet152_v2, AlexNet, VGG11, VGG13, VGG16, and VGG19.

Currently the following models work in HW/SIM (white-listed in the tutorial until more models are supported): ResNet18_v2, ResNet34_v2, ResNet50_v2, ResNet101_v2.

Performance on the Pynq IoT platform: 371.68 ms, 470.86 ms, 662.13 ms, and 916.26 ms for ResNet-18, -34, -50, and -101 respectively.

Bug fixes:

  • Element-wise multiplication support in graph pack (layout modification)
  • Max-pooling on the NCHWnc format
  • Reducing the micro-op/instruction buffer size to 1<<24 to avoid out-of-memory errors on the Pynq

[Resolved] Getting quantization to pass on these models was pending on #3543 being merged; I've addressed this dependency with a TODO.
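
For context, here is a minimal sketch of the Relay-to-VTA compile path these fixes target. It is not the tutorial itself: the exact API (notably graph_pack's arguments, the start/stop operator names, and the quantization settings) is assumed from the VTA Python interface of this era and may differ in your checkout.

```python
# Minimal sketch (assumed API, TVM/VTA circa 2019): import a Gluon vision
# model, quantize it, and repack its layout into VTA's NCHWnc tensor format.
from tvm import relay
from mxnet.gluon.model_zoo import vision

import vta
from vta.top import graph_pack

env = vta.get_env()                                    # VTA hardware parameters
model = vision.get_model("resnet18_v2", pretrained=True)
shape_dict = {"data": (env.BATCH, 3, 224, 224)}
mod, params = relay.frontend.from_mxnet(model, shape_dict)

# Quantize to the 8-bit integer types that VTA computes on.
with relay.quantize.qconfig(global_scale=8.0):
    mod = relay.quantize.quantize(mod, params=params)

# Re-layout conv2d / multiply / max_pool2d into NCHWnc between the chosen
# start and stop operators (operator names here are illustrative).
relay_prog = graph_pack(
    mod["main"], env.BATCH, env.BLOCK_OUT, env.WGT_WIDTH,
    start_name="nn.max_pool2d", stop_name="nn.global_avg_pool2d")

# Build for the VTA target; on TVM of this vintage relay.build returns
# (graph, lib, params).
with vta.build_config():
    graph, lib, params = relay.build(
        relay_prog, target=env.target, params=params,
        target_host=env.target_host)
```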

@@ -61,6 +62,8 @@
# Make sure that TVM was compiled with RPC=1
assert tvm.module.enabled("rpc")

# Increase python recursion limit to traverse Relay program
sys.setrecursionlimit(10000)
tmoreau89 (Contributor, Author) commented on the diff:
@jroesch is there a better place to put this line?

MarisaKirisame (Contributor): @tmoreau89 maybe we should set the limit in tvm's init.py.

tmoreau89 (Contributor, Author): @MarisaKirisame If we set it in init.py, don't we need to revert it back to its original value afterwards?

MarisaKirisame (Contributor): @tmoreau89 Why revert at all? TVM is supposed to be used with a high recursion limit; if a user wishes otherwise, she should pick a better constant herself.

tmoreau89 (Contributor, Author): OK, which init.py should we include the line in? The top level?

MarisaKirisame (Contributor): @tmoreau89 I think relay is fine for now. If tvm also needs it, we can move it up later on.

tmoreau89 changed the title from "[VTA][Relay] Adding mult coverage to layout tiling in graphpack to compile ResNets" to "[VTA][Relay] Extending Vision model coverage compilation for VTA" on Aug 9, 2019
jroesch merged commit 028f47c into apache:master on Sep 5, 2019
MarisaKirisame pushed a commit to MarisaKirisame/tvm that referenced this pull request Sep 7, 2019
[VTA][Relay] Extending Vision model coverage compilation for VTA (apache#3740)

* adding support for graphpack over multiply op

* increasing resnet model coverage

* fix indentation

* lint

* moving recursion limit fix into graphpack pass

* moving recursionlimit to relay init

* pooling on NCHWnc format

* adding more models

* deploy_resnet_on_vta.py

* trailing line

* generalizing to vision models

* merge conflicts

* fix, apply quantization to VTA only

* improving comments

* trimming models that have runtime issues for the moment

* lint

* lint

* lint
wweic pushed a commit to wweic/tvm that referenced this pull request Sep 16, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Sep 16, 2019
tmoreau89 deleted the resnet branch on February 13, 2020
tqchen pushed a commit to tqchen/tvm that referenced this pull request Mar 29, 2020