Add decomposition for aten.flatten.using_ints #1161
Conversation
Can we remove ConvertAtenFlattenUsingIntsOp in lib/Conversion/TorchToLinalg/DataMovement.cpp?
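For reference, the decomposition this PR adds rewrites `aten.flatten.using_ints` into an `aten.view` whose target shape collapses the dimensions from `start_dim` through `end_dim`. A minimal Python sketch of the same semantics (plain PyTorch, not the actual MLIR pattern):

```python
import torch

def flatten_via_view(x: torch.Tensor, start_dim: int, end_dim: int) -> torch.Tensor:
    # Normalize negative dims, mirroring aten.flatten.using_ints semantics.
    rank = x.dim()
    start = start_dim % rank
    end = end_dim % rank
    # Collapse the dims in [start, end] into a single dimension.
    collapsed = 1
    for d in range(start, end + 1):
        collapsed *= x.shape[d]
    new_shape = list(x.shape[:start]) + [collapsed] + list(x.shape[end + 1:])
    return x.view(new_shape)

x = torch.randn(2, 3, 4, 5)
assert torch.equal(flatten_via_view(x, 1, 2), torch.flatten(x, 1, 2))
```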
@qedawkins can you help with the TorchToLinalg failures? Can we improve the lowering of ConvertAtenViewOp to support this? @fortianyou -- We probably want to add support for a list of ops to decompose. I am happy to do that, but probably won't be able to until after next week.
@silvasean These failures are specific to this PR, correct? I hope this wasn't a regression on my part. I was also looking at adding support for other cases of AtenView needed for another model, so I can add this in as well.
I think it is specific to the decomposition in this PR.
I see now. The decomposition is overriding the previous conversion and relying on missing cases for AtenView. The existing View lowering tries to avoid any control flow, but it looks like that won't be possible for these cases, in particular
Out of curiosity, can I ask why this is being moved to a decomposition?
Why is control flow needed?
I'm counting asserts to make sure the shape arguments are valid (e.g. (2, 3) can't be viewed as (3, 3)) as control flow. Also, to avoid actual control flow for the dynamic cases, I think we have to just greedily flatten then expand (e.g. [?, ?, ?] -> [?, ?] = [?, ?, ?] -> [?] -> [?, ?]). LMK if you have another suggestion.
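As a rough illustration of the collapse-then-expand strategy described above (a sketch in plain Python/PyTorch, not the TorchToLinalg lowering itself): a fully dynamic reshape can be handled by flattening to 1-D and then expanding to the target shape, with an assert standing in for the shape-validity check being counted as control flow:

```python
import torch

def reshape_via_collapse_expand(x: torch.Tensor, target_shape: list) -> torch.Tensor:
    # Shape-validity check referred to above as "control flow":
    # e.g. a (2, 3) tensor cannot be viewed as (3, 3).
    numel = 1
    for s in target_shape:
        numel *= s
    assert numel == x.numel(), "target shape is incompatible with input shape"
    # Greedy strategy: collapse everything to 1-D, then expand to the target,
    # i.e. [?, ?, ?] -> [?] -> [?, ?].
    flat = x.reshape(-1)
    return flat.reshape(target_shape)

x = torch.randn(2, 3, 4)
y = reshape_via_collapse_expand(x, [6, 4])
assert y.shape == (6, 4)
```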
Thanks very much, @silvasean.
@qedawkins We want to support
I see, that makes sense. I opened a PR that should solve the view problems you're having: #1168
We were already hitting many cases where backends differed in terms of the legal ops that they wanted. This caused unnecessary coupling between the backends. Examples: #1161, #862. This PR centralizes all compilation to go through `torch_mlir.compile` so that we can keep the logic centralized there. We should move these lists closer to each backend. Especially cases like #862, where blocking a decomposition is necessary to avoid a crash, emphasize that the set of decompositions is tightly coupled to the backend, and should be "controlled by the backend" and not something arbitrarily tweakable. Also:
- Fix a small bug in the way we passed through the backendLegalOps option.
- Add better error messages in `torch_mlir.compile` for import errors.
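A hedged sketch of how a per-backend legal-op list might be passed through `torch_mlir.compile` after this change; the keyword used here (`backend_legal_ops`), the output type string, and the op name list are assumptions based on the backendLegalOps option mentioned in the commit message, not a confirmed API:

```python
import torch
import torch_mlir

class Flatten(torch.nn.Module):
    def forward(self, x):
        return torch.flatten(x, 1)

# Hypothetical usage: keep aten.flatten.using_ints legal (undecomposed) for a
# backend that lowers it directly. The kwarg name mirrors the backendLegalOps
# option described above and may differ in the actual torch_mlir.compile API.
module = torch_mlir.compile(
    Flatten(),
    torch.randn(4, 3, 8, 8),
    output_type="linalg-on-tensors",
    backend_legal_ops=["aten.flatten.using_ints"],
)
```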
Thanks, @silvasean @qedawkins. After rebasing and some modifications to the backend configurations, all the unit tests passed. cc @ZihengJiang @Vremold we can go on to upstream the resnet18 example :D
* update llvm to 853e0aa Signed-off-by: Ettore Tiotto <[email protected]>
Can you review this for me, @silvasean @powderluv? Thanks!