[RELAY][FRONTEND] Tensorflow frontend. #2216
Conversation
Looks good. I think we should try to remove all references to NNVM from Relay code so we don't confuse new users.
@jroesch thanks for the review. I came across a strange error for LSTM that I have yet to debug. Does the log below ring any bell?
It looks like a bug in the schedule; maybe fusion is introducing a pattern the master schedule is not written for? Do you know which operator this is failing for? We should probably modify the compile engine to dump more debug information when an exception happens while scheduling.
@jroesch #2412 fixes the above problem, but LLVM hits the issue below with fuse enabled. Disabling fuse works fine, though.
All test cases pass now. The above issue, specific to the PTB model, is unrelated to the frontend.
@jroesch you are welcome to review further.
@kazum thanks for the review. Please take another look and conclude.
@ZihengJiang & @yzhliu please consider this for the 0.5 release if possible.
@srkreddy1238 My previous comments are not addressed in the latest version.
@kazum handled now :)
Looks good, thanks!
Sorry @srkreddy1238, I missed your message. I am personally very willing to bring this into the v0.5 release, but on the other hand we have already started voting on a commit id. What are your thoughts? @tqchen @ZihengJiang
Good to see this PR get approved; it would be great to see it merged soon.
@yzhliu no problem :)
Thanks everyone for contributing and reviewing. Let's merge.
[RELAY][FRONTEND] Tensorflow frontend support.
- LSTM removed for a while.
- basic ops are good.
- nn wip
- wip
- python2.7 corrections.
- NN ops are good.
- e2e models working good
- all good except LSTM
- rebase, tutorials and CI trigger.
- CI errors.
- enable opt_level=3
- Docstrings cleanup. testing.tf utils moved to relay from nnvm.
- tutorials update.
- LSTM work good now.
- Rebase
- CI error
- enable PTB.
- rebase.
- tutorials
- Update python/tvm/relay/frontend/tensorflow.py (Co-Authored-By: srkreddy1238 <[email protected]>)
- review comments.
- CI fix.
- review comments.
Fix LWP assembly handler (predicate register) (#2216)

This solves an issue with LWP that appeared with maxpool. The problem was that the LWP handler was not saving p0, which the handler itself uses. This predicate register needs to be saved too, just like r0-r5: since it was decided that calling the LWP handler would not follow the normal ABI, and that the handler would save everything it touches (even normally caller-saved registers such as r0-r15 and p0-p3), it absolutely must save the predicate registers as well (in particular p0, which was causing the issue). The issue appeared only with maxpool because it is the only kernel that keeps state in p0 before calling the LWP handler; the call destroyed the saved contents, making the code subsequently branch to different portions of the code.

Fix: allocate 32 bytes on the stack (instead of 24 previously) so that p3:0 is saved at the bottom of the stack, and restore it at the end of the LWP handler.

Co-authored-by: Slama, Franck <[email protected]>
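As a minimal sketch of the stack-frame sizing described in the commit message above: the constants and the rounding rule here are assumptions, not part of the patch. I assume Hexagon GPRs are spilled in 64-bit register pairs (r1:0, r3:2, r5:4), that the four predicate registers p0-p3 can be stored together as one 32-bit word (the p3:0 aggregate), and that the frame stays 8-byte aligned. Under those assumptions, the arithmetic reproduces the 24-to-32-byte growth the fix describes.

```python
# Hypothetical model of the LWP handler's stack-frame sizing.
# Assumed, not taken from the patch: 8-byte register-pair stores,
# a 4-byte p3:0 predicate word, and 8-byte stack alignment.

GPR_PAIR_BYTES = 8    # one r(2n+1):r(2n) pair store
PRED_WORD_BYTES = 4   # p3:0 packed into a single 32-bit word
STACK_ALIGN = 8       # assumed stack alignment

def frame_bytes(gpr_pairs: int, save_predicates: bool) -> int:
    """Bytes the handler allocates, rounded up to the stack alignment."""
    size = gpr_pairs * GPR_PAIR_BYTES
    if save_predicates:
        size += PRED_WORD_BYTES
    # round up to the next multiple of STACK_ALIGN
    return -(-size // STACK_ALIGN) * STACK_ALIGN

# Before the fix: only the r5:4, r3:2, r1:0 pairs were saved -> 24 bytes.
# After the fix: the p3:0 word is added, padding the frame to 32 bytes.
print(frame_bytes(3, save_predicates=False))  # 24
print(frame_bytes(3, save_predicates=True))   # 32
```

Saving p3:0 as a single word (rather than four separate predicate saves) matches the "allocate 32 bytes and save p3:0 at the bottom of the stack" wording, since the 4-byte word plus alignment padding accounts for exactly the extra 8 bytes.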