
[Frontend] clean duplicates of infer_type and infer_shape in frontends #8709

Closed
wants to merge 3 commits into from

Conversation

vvchernov
Contributor

@vvchernov vvchernov commented Aug 10, 2021

Duplicates of infer_type and infer_shape have been removed from the frontends. These methods are implemented in common.py.

  1. Most of the duplication was found in the PyTorch frontend. It uses a wrapper over infer_type to save node types into a map after the graph pass and reuse them later. No other frontend uses such a mechanism.
  2. In practice, the result of infer_type is needed for its "checked_type" attribute, so that lookup was moved inside infer_type. The other option is "type_annotation", which infer_type can now return directly; the condition check for this was taken from the PyTorch frontend code.
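The fallback order described in point 2 can be sketched in plain Python. This is an illustrative sketch only; `infer_type_sketch` and `run_type_pass` are hypothetical names standing in for the real TVM common.py helper and the InferType pass:

```python
# Illustrative sketch, not TVM code: return an existing type annotation
# when one is present, otherwise fall back to running the (expensive)
# whole-graph type inference pass and reading its result.
def infer_type_sketch(node, run_type_pass):
    ann = getattr(node, "type_annotation", None)
    if ann is not None:
        # The node already carries its type; no pass is needed.
        return ann
    # No annotation: run the full pass and use its checked type.
    return run_type_pass(node)
```

The point of hiding this check inside the helper is that callers no longer need to know whether a node is annotated or not.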

@jwfromm please review

@vvchernov vvchernov force-pushed the vc/clean_infer_shape branch 2 times, most recently from f6ffa3c to 6361d12 on August 10, 2021 22:14
@vvchernov vvchernov changed the title [Frontend] clean duplicates of infer_type and infer_shape in frontends WIP: [Frontend] clean duplicates of infer_type and infer_shape in frontends Aug 11, 2021
@vvchernov vvchernov force-pushed the vc/clean_infer_shape branch from 6361d12 to 7efd53e on August 11, 2021 06:55
@vvchernov vvchernov force-pushed the vc/clean_infer_shape branch from 0ff0245 to 423fbf9 on August 19, 2021 13:37
@vvchernov vvchernov changed the title WIP: [Frontend] clean duplicates of infer_type and infer_shape in frontends [Frontend] clean duplicates of infer_type and infer_shape in frontends Aug 19, 2021
@vvchernov vvchernov force-pushed the vc/clean_infer_shape branch from 423fbf9 to b929e64 on August 19, 2021 18:17
entry = new_mod["main"]
ty = entry.body.checked_type
self.types[node] = ty
return self.types[node]
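For context, the quoted snippet is the tail of a per-node type cache. The pattern it implements can be sketched generically; this is illustrative Python, not the actual _TypeFinder code, and `infer_fn` stands in for the full InferType pass:

```python
# Illustrative memoization sketch: run the expensive whole-graph type
# pass at most once per node and reuse the stored result afterwards.
class TypeCacheSketch:
    def __init__(self, infer_fn):
        self.types = {}           # node -> previously inferred type
        self.infer_fn = infer_fn  # stands in for the full InferType pass
        self.pass_runs = 0        # how many times the pass actually ran

    def get_type(self, node):
        if node not in self.types:
            self.pass_runs += 1
            self.types[node] = self.infer_fn(node)
        return self.types[node]
```

Repeated lookups for the same node then cost a dictionary access instead of a full inference pass, which is the incremental behavior debated below.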
Member

I think removing this code would effectively revert the change in #6900? That change was done for a good reason, so I think we should keep it.

Contributor Author

The quick answer is no. I understood the idea when I updated the code, and I kept it, but I tried to unify infer_type from common.py and use it instead of the duplicated code in the PyTorch frontend.
Thank you for the good reference. I studied #6900, https://discuss.tvm.apache.org/t/incremental-type-propagation/7446, and the references there (PRs #5755, #6080, #6008, #6007, #7008). I agree with the ideas to 1. do the graph pass once and save the result (done), and 2. unify it for all frontends, not only PyTorch (not done). My main doubt is that I removed some code from the infer_type implementation inside the PyTorch frontend. It seems to do the same thing, but in a different way. If not, why is it not used by infer_type in common.py? If so, why do we have the redundant functionality? Maybe @t-vi can help us here.
I wanted to ask about the current status of the point 2 task, but found the following answer: #7008 (comment). Is it correct? If a unified approach to inferring type, shape, and other properties lands soon, my PR can be closed, but please take into account the idea of hiding checked_type or type_annotation inside infer_type. It seems this can be done so that the final client does not need to think about it.

Member

Other changes look good, but this change effectively turns the _TypeFinder class into dead code, so removing it is probably not a good idea. And infer_type(...) is certainly not incremental. I haven't really taken a deep look at the code, but you can assume that there has been no progress on incremental type inference since #6900.

So I'd suggest reverting this change, which could be controversial, and I'll merge the other easy changes.

Contributor Author

If I revert this change, it kills the idea of the patch. But I agree with you. The best way out of this situation, as I see it, is to transfer the approach from #6900 to common.py. I need some time to study the approach more deeply and to think about a general class that manages running the graph pass once and keeping the results.
My suggestion is to stop considering this PR for now. I will try to bring more convenient functionality while keeping the current behavior. I'll let you know when it is ready.

Contributor

I think generalizing the incremental inference would be great, but while it clearly has the better asymptotic complexity, it is a bit of a mess to work with because of how the existing type inference pass works.
So if the other frontends do not need incremental type inference as much as the PyTorch one does, standardizing on the PyTorch frontend's incremental approach will introduce additional complexity and a performance hit for them.

As such, it seems dubious whether "cleanup" is a good reason to impose this on the other frontends (if they benefited from incremental inference, that would be different).

Of course, the "grand solution" (with the refactoring mentioned in the #7008 discussion) of enabling incremental type inference in the type inference pass itself, rather than in the ad-hoc way in the frontend, would be desirable, but it will be a lot of work.

Contributor Author

Thank you @t-vi! I think the other frontends do need incremental type inference. This is easy to see by counting all the calls to the infer_type and infer_shape methods in a frontend (~100 for ONNX). In general, the frontends have many similarities, which is what moved me to try to unify some duplicates. The task is not as simple as it seemed at first; I will try to study all the details around it and do it carefully.

@areusch areusch removed their request for review September 14, 2021 20:41
@masahi masahi closed this Nov 14, 2021
3 participants