[Frontend] clean duplicates of infer_type and infer_shape in frontends #8709
```python
entry = new_mod["main"]
ty = entry.body.checked_type
self.types[node] = ty
return self.types[node]
```
I think removing this code would effectively revert the change in #6900? That change was done for a good reason, so I think we should keep it.
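For context, the idea behind the snippet under discussion (from #6900) is to run the expensive whole-module type-inference pass at most once per node and cache the result. A minimal, TVM-free sketch of that caching pattern, where `TypeCache` and `infer_fn` are illustrative stand-ins rather than TVM's actual API:

```python
# Illustrative sketch of the memoization idea behind the quoted snippet.
# `infer_fn` stands in for an expensive whole-module type-inference pass;
# the class and names here are hypothetical, not TVM's real API.

class TypeCache:
    def __init__(self, infer_fn):
        self.infer_fn = infer_fn  # expensive inference, run at most once per node
        self.types = {}           # node -> cached inferred type
        self.calls = 0            # how many times the expensive pass ran

    def checked_type(self, node):
        if node not in self.types:
            self.calls += 1
            self.types[node] = self.infer_fn(node)
        return self.types[node]


cache = TypeCache(lambda node: ("Tensor", node))
cache.checked_type("conv1")
cache.checked_type("conv1")  # second lookup is a cache hit; inference ran once
```

Deleting the cached path and always calling a plain `infer_type(...)` loses exactly this property, which is the crux of the reviewer's objection.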
The quick answer is no. I understood the idea when I updated the code, and I kept it, but I tried to unify infer_type from common.py and use it instead of the duplicated code in the PyTorch frontend.
Thank you for the good reference. I studied #6900, https://discuss.tvm.apache.org/t/incremental-type-propagation/7446 and the references there (PRs #5755, #6080, #6008, #6007, #7008). I agree with the ideas to 1. run the graph pass once and save the result (done), and 2. unify it for all frontends, not only PyTorch (not done). My main doubt is that I removed some code from the infer-type implementation inside the PyTorch frontend that seems to do the same thing, just in another way. If it does not, why is it not used by infer_type in common.py? If it does, why do we keep the redundant functionality? Maybe @t-vi can help us.
I wanted to ask about the current status of point 2, but found the answer in #7008 (comment). Is it correct? If a unified approach to inferring types, shapes, and so on lands soon, my PR can be closed, but please take into account the idea of hiding checked_type or type_annotation inside infer_type. It seems that can be done, so the final client would not need to think about it.
The other changes look good, but this change effectively makes the _TypeFinder class dead code, so it is probably not a good idea. And infer_type(...) is certainly not incremental. I haven't taken a deep look at the code, but you can assume there has been no progress on incremental type inference since #6900.
So I'd suggest reverting this change, which could be controversial, and I'll merge the other easy changes.
If I revert this change, it kills the idea of the patch. But I agree with you. The best way out of this situation, as I see it, is to transfer the approach from #6900 to common.py. I need some time to study the approach more deeply and think about a general class that manages running the graph pass once and keeping the results.
My suggestion is to stop considering this PR for now. I will try to bring in more convenient functionality while keeping the current behavior. I'll let you know when it is ready.
I think generalizing the incremental inference would be great, but while it has clearly better asymptotic complexity, it is a bit of a mess to work with the existing type inference pass because of how that pass works.
So if other frontends do not need incremental type inference as much as the PyTorch one does, standardizing on the PyTorch frontend's incremental approach will introduce additional complexity and a performance hit for them.
As such, it seems dubious whether "cleanup" is a good reason to impose this on other frontends (if they benefit from incremental inference, that would be different).
Of course, the "grand solution" (with the refactoring mentioned in the #7008 discussion) of enabling incremental type inference in the type inference pass itself, rather than in an ad-hoc way in the frontend, would be desirable, but it will be a lot of work.
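The complexity argument above can be made concrete with a toy cost model. This is a sketch under the assumption that a full type-inference pass visits every node built so far, while an incremental pass visits only the newly added node; none of it is TVM code:

```python
# Toy cost model, not TVM code. Assumption: a full type-inference pass
# visits every node in the module, while an incremental pass visits only
# the newly added node. If a frontend queries types after each of the n
# nodes it builds, the totals diverge quadratically vs. linearly.

def full_pass_visits(n):
    # re-run whole-module inference after adding each node: 1 + 2 + ... + n
    return sum(range(1, n + 1))

def incremental_visits(n):
    # infer only the new node each time: n visits total
    return n
```

For a frontend with on the order of 100 infer_type/infer_shape call sites, `full_pass_visits(100)` is 5050 node visits against 100 for the incremental scheme, which is why the PyTorch frontend adopted caching in the first place.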
Thank you @t-vi! I think the other frontends do need incremental type inference. That is easy to see by counting the calls to the infer_type and infer_shape methods in a frontend (~100 for ONNX). In general the frontends have many similarities, which is what moved me to try to unify some of the duplicates. This task is not as simple as it seemed at first, so I will try to study all the details around it and do it carefully.
Duplicates of infer_type and infer_shape were cleaned up in the frontends; these methods now live in common.py.
@jwfromm please review