
Add non-unit groups to aten::convolution #858

Merged: 1 commit merged into llvm:main from convolution_groups on Aug 4, 2022

Conversation

gpetters94 (Collaborator)

Adds support for group sizes > 1 for at::convolution (and by extension the ops that decompose into it).
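For context, a minimal PyTorch sketch of what non-unit groups mean (illustrative only, not code from this PR): with `groups=g`, the input and output channels are split into `g` slices, and each slice is convolved independently with its own filters.

```python
import torch
import torch.nn.functional as F

# Grouped convolution: 8 input channels, 16 output channels, groups=4.
# Each group convolves 2 input channels (8/4) into 4 output channels (16/4).
x = torch.randn(1, 8, 32, 32)    # (N, C_in, H, W)
w = torch.randn(16, 2, 3, 3)     # (C_out, C_in // groups, kH, kW)
y = F.conv2d(x, w, groups=4, padding=1)
print(y.shape)                   # torch.Size([1, 16, 32, 32])
```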

ramiro050 (Collaborator) left a comment


Looking good! I have a few initial comments

[10 review comments on lib/Conversion/TorchToLinalg/Linear.cpp, all marked outdated and resolved]
gpetters94 (Collaborator, Author)

Okay, the most recent commit is confirmed working with my new op in LLVM. Once that lands and we can bump torch-mlir's LLVM version, this should be good to go.

gpetters94 added a commit to gpetters94/mlir-npcomp that referenced this pull request Jun 23, 2022
Adds my grouped conv2d op for llvm#858
gpetters94 mentioned this pull request Jun 23, 2022
powderluv pushed a commit that referenced this pull request Jun 23, 2022
Adds my grouped conv2d op for #858
gpetters94 force-pushed the convolution_groups branch 4 times, most recently from 20e097e to 56f69bd on June 24, 2022 at 07:07
gpetters94 (Collaborator, Author)

Now that the LLVM op has landed, here's the updated PR using the new logic. I spoke with Mahesh about adding a depthwise case, and about my concern that it won't be possible with dynamic tensors because the necessary calculation can't be done at compile time. He suggested that the optimization to depthwise convolution could be done on the backend (i.e. IREE) side. If you think I should add a specific case for depthwise with static tensors, I can do that; otherwise this is complete.
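For reference, depthwise convolution is the special case where `groups` equals the number of input channels, so each channel is filtered independently; that equality is what needs to be established at compile time. A minimal PyTorch illustration (not code from this PR):

```python
import torch
import torch.nn.functional as F

# Depthwise convolution: groups == C_in, one filter (per multiplier) per channel.
x = torch.randn(1, 8, 32, 32)   # (N, C_in, H, W), C_in = 8
w = torch.randn(8, 1, 3, 3)     # (C_in * multiplier, 1, kH, kW), multiplier = 1
y = F.conv2d(x, w, groups=8, padding=1)
print(y.shape)                  # torch.Size([1, 8, 32, 32])
```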

silvasean (Contributor)

> Now that the LLVM op has landed, here's the updated PR using the new logic. […]

I think we should support the special case when enough static info is available (the whole shape doesn't have to be static for us to detect it). It is a very important pattern in practice and we don't want all backends replicating that code.
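A sketch of the kind of check this implies (hypothetical helper, not the actual torch-mlir code): only the channel dimension and the `groups` constant need to be statically known, so partially dynamic shapes can still be matched.

```python
DYNAMIC = -1  # stand-in for an unknown (dynamic) dimension size

def is_provably_depthwise(input_shape, groups):
    """True when groups == C_in can be proven statically (NCHW layout).

    input_shape may contain DYNAMIC entries; only the channel dimension
    (index 1) must be static for the check to succeed.
    """
    c_in = input_shape[1]
    return c_in != DYNAMIC and c_in == groups

# Dynamic batch and spatial dims don't block detection; the channel dim decides.
print(is_provably_depthwise([DYNAMIC, 8, DYNAMIC, DYNAMIC], groups=8))  # True
print(is_provably_depthwise([DYNAMIC, DYNAMIC, 32, 32], groups=8))      # False
```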

gpetters94 (Collaborator, Author)

> I think we should support the special case when enough static info is available (the whole shape doesn't have to be static for us to detect it). […]

Okay, I've put up a patch at LLVM for the new op here: https://reviews.llvm.org/D128575. It only covers the case where the multiplier is 1, which is what MobileNet needs. I can add the non-unit multiplier case if you think it's relevant, but for now I'm going to check for a multiplier of 1 and fall through to the grouped op otherwise.
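Since the multiplier is `C_out / groups`, the fall-through described above amounts to a decision like the sketch below (hypothetical helper and path names, for illustration only):

```python
def pick_lowering(c_in, c_out, groups):
    """Choose a lowering path for a grouped conv (illustrative sketch).

    multiplier = C_out // groups; MobileNet's depthwise layers all have
    multiplier == 1, the case the new upstream linalg op covers.
    """
    if groups == c_in and c_out == groups:   # depthwise with multiplier == 1
        return "depthwise op"                # the new special-case lowering
    return "grouped op"                      # general groups > 1 fall-through

print(pick_lowering(c_in=8, c_out=8, groups=8))    # depthwise op
print(pick_lowering(c_in=8, c_out=16, groups=8))   # grouped op (multiplier 2)
```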

gpetters94 (Collaborator, Author)

Testing my patch with other models (i.e., not MobileNet) exposed an ordering issue in the op I wrote upstream. This latest commit fixes it, although it requires a change to the upstream op, which I have put up at https://reviews.llvm.org/D128880.

gpetters94 added a commit to gpetters94/mlir-npcomp that referenced this pull request Jul 12, 2022
Adds my grouped conv2d op for llvm#858
gpetters94 (Collaborator, Author)

Both LLVM patches have landed; now this is just waiting for an LLVM bump before it's good to go.

gpetters94 added a commit to gpetters94/mlir-npcomp that referenced this pull request Jul 27, 2022
Adds my grouped conv2d op for llvm#858
gpetters94 force-pushed the convolution_groups branch 2 times, most recently from 14e2710 to 8215edc on August 2, 2022 at 06:43
gpetters94 merged commit 08fc2d8 into llvm:main on Aug 4, 2022
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request Oct 3, 2022
…llvm#858)

Squashed commits: shape inference; modify test; lower compiled; negative; add backend test; modify dynamic; check dim; dynamic axes; noop_empty_axes; lit test; fix test; restrict constant; formatting and polish.

Signed-off-by: Tong Chen <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>