
Add decomposition for aten.flatten.using_ints #1161

Merged · 1 commit merged into llvm:main on Aug 23, 2022

Conversation

@tanyokwok (Collaborator) commented Aug 5, 2022

Could you review this for me, @silvasean @powderluv? Thanks!

@silvasean (Contributor) left a comment

Can we remove `ConvertAtenFlattenUsingIntsOp` in `lib/Conversion/TorchToLinalg/DataMovement.cpp`?

@silvasean (Contributor)

@qedawkins can you help with the TorchToLinalg failures? Can we improve the lowering in `ConvertAtenViewOp` to support this?

@fortianyou -- We probably want to add support for a list of ops to decompose. I am happy to do that, but probably won't be able to until after next week.

@qedawkins (Collaborator) commented Aug 5, 2022

@silvasean These failures are specific to this PR, correct? I hope this wasn't a regression on my part. Also, I was looking at adding support for other cases of `AtenView` needed for another model, so I can add this in as well.

@silvasean (Contributor)

> @silvasean These failures are specific to this PR, correct? I hope this wasn't a regression on my part. Also, I was looking at adding support for other cases of `AtenView` needed for another model, so I can add this in as well.

I think it is specific to the decomposition in this PR.

@qedawkins (Collaborator)

> I think it is specific to the decomposition in this PR.

I see now. The decomposition is overriding the previous conversion and relying on missing cases for `AtenView`. The existing view lowering tries to avoid any control flow, but it looks like that won't be possible for these cases; in particular:

note: see current operation: %2386 = "torch.aten.view"(%2372, %2385) : (!torch.vtensor<[?,512,1,1],f32>, !torch.list<int>) -> !torch.vtensor<[?,512],f32>

Out of curiosity, can I ask why this is being moved to a decomposition?
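For concreteness, the view in the diagnostic above collapses the trailing unit dimensions of a dynamically-batched tensor. A minimal PyTorch sketch of the same reshape, with a concrete batch size standing in for the dynamic `?` dimension:

```python
import torch

# The view from the diagnostic above: [?, 512, 1, 1] -> [?, 512].
# A concrete batch size of 4 stands in for the dynamic '?' dimension.
x = torch.randn(4, 512, 1, 1)
flat = torch.flatten(x, start_dim=1, end_dim=3)  # what the decomposition starts from
viewed = x.view(x.shape[0], 512)                 # the aten.view it produces
assert flat.shape == (4, 512)
assert torch.equal(flat, viewed)
```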

@silvasean (Contributor)

> I see now. The decomposition is overriding the previous conversion and relying on missing cases for `AtenView`. The existing view lowering tries to avoid any control flow, but it looks like that won't be possible for these cases.

Why is control flow needed?

@qedawkins (Collaborator)

> Why is control flow needed?

I'm counting the asserts needed to make sure the shape arguments are valid (e.g. (2, 3) can't be viewed as (3, 3)) as control flow. Also, to avoid actual control flow for the dynamic cases, I think we have to just greedily flatten and then expand (e.g. [?, ?, ?] -> [?, ?] becomes [?, ?, ?] -> [?] -> [?, ?]). Let me know if you have another suggestion.
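A small Python sketch of the greedy strategy described above (a hedged illustration, not the actual lowering; `view_via_flatten_then_expand` is a hypothetical helper name):

```python
import math
import torch

# Hypothetical helper illustrating "greedily flatten then expand":
# collapse to 1-D, then expand to the target shape, so no per-dimension
# control flow is needed beyond a single validity assert.
def view_via_flatten_then_expand(x: torch.Tensor, target_shape):
    # The shape-validity check that counts as control flow in the lowering:
    assert x.numel() == math.prod(target_shape), "element counts disagree"
    return x.reshape(-1).reshape(target_shape)  # [?, ?, ?] -> [?] -> target

y = view_via_flatten_then_expand(torch.randn(2, 3, 4), (6, 4))
assert y.shape == (6, 4)
```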

@tanyokwok (Collaborator, Author)

> I am happy to do that, but probably won't be able to until after next week.

Thanks very much, @silvasean.

> Out of curiosity, can I ask why this is being moved to a decomposition?

@qedawkins We want to support `aten.flatten.using_ints` lowering in other passes like TorchToMhlo (fully dynamic shapes). It was moved to the decomposition pass because it is a view-like op; with the decomposition, multiple backends will benefit from it.
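The decomposition itself is a C++ rewrite pattern in torch-mlir; as a rough Python rendering of its semantics (an illustration only, with `decompose_flatten` as a hypothetical name):

```python
import math
import torch

# A Python rendering of what the decomposition computes: flatten(x, start, end)
# becomes a view whose shape keeps the dims outside [start, end] and replaces
# the flattened range with the product of its sizes.
# (Negative dims are not handled in this sketch.)
def decompose_flatten(x: torch.Tensor, start_dim: int, end_dim: int):
    sizes = list(x.shape)
    collapsed = math.prod(sizes[start_dim:end_dim + 1])
    new_shape = sizes[:start_dim] + [collapsed] + sizes[end_dim + 1:]
    return x.view(new_shape)  # the aten.view the pattern emits

x = torch.randn(2, 512, 1, 1)
assert torch.equal(decompose_flatten(x, 1, 3), torch.flatten(x, 1, 3))
```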

@qedawkins (Collaborator)

> We want to support `aten.flatten.using_ints` lowering in other passes like TorchToMhlo (fully dynamic shapes). It was moved to the decomposition pass because it is a view-like op; with the decomposition, multiple backends will benefit from it.

I see, that makes sense. I opened a PR that should solve the view problems you're having: #1168

silvasean added a commit to silvasean/torch-mlir that referenced this pull request Aug 19, 2022 (commit message identical to the one below).
silvasean added a commit that referenced this pull request Aug 22, 2022

We were already hitting many cases where backends differed in terms of the legal ops that they wanted. This caused unnecessary coupling between the backends. Examples:
- #1161
- #862

This PR centralizes all compilation to go through `torch_mlir.compile` so that we can keep the logic centralized there. We should move these lists closer to each backend. Cases like #862, where blocking a decomposition is necessary to avoid a crash, especially emphasize that the set of decompositions is tightly coupled to the backend and should be "controlled by the backend", not something arbitrarily tweakable.

Also:
- Fix a small bug in the way we passed through the backendLegalOps option.
- Add better error messages in `torch_mlir.compile` for import errors.
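As a sketch of the centralized entry point the commit describes, assuming `torch_mlir.compile` forwards a `backend_legal_ops` list to the decomposition pass (the backendLegalOps option named above; the exact keyword and op-name spelling may differ across torch-mlir versions):

```python
import torch
import torch_mlir

class Flatten(torch.nn.Module):
    def forward(self, x):
        return torch.flatten(x, 1, 3)

# Hedged sketch: mark the op backend-legal so the decomposition is skipped
# for backends that lower aten.flatten.using_ints directly. The keyword
# `backend_legal_ops` and the op-name spelling are assumptions here.
module = torch_mlir.compile(
    Flatten(),
    torch.randn(2, 512, 1, 1),
    output_type="torch",
    backend_legal_ops=["torch.aten.flatten.using_ints"],
)
print(module)
```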
@tanyokwok (Collaborator, Author)

Thanks, @silvasean @qedawkins. After rebasing and making some modifications to the backend configurations, all the unit tests passed. cc @ZihengJiang @Vremold: we can go on to upstream the resnet18 example :D

@tanyokwok tanyokwok merged commit 9176b5e into llvm:main Aug 23, 2022
@tanyokwok tanyokwok deleted the tanyo/flatten_ints branch August 23, 2022 03:52
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request Oct 3, 2022

* update llvm to 853e0aa

Signed-off-by: Ettore Tiotto <[email protected]>