
[MHLO] add native_dropout and related ops pattern #1211

Merged: 1 commit into llvm:main on Aug 15, 2022

Conversation

Yancey1989 (Collaborator):
See RFC #999
This PR adds the aten::native_dropout op pattern, along with some decomposed op patterns, e.g. aten::size.int and aten::valsem.aten.uniform. In the dropout.mlir FileCheck test, this PR uses the MHLO e2e compilation pass pipeline instead of the single TorchToMhlo conversion pass, to verify that the op conversion works as expected.
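As a rough sketch of what the pipeline is expected to produce (op spellings, attributes, and shapes here are illustrative, not the literal pass output), the decomposed dropout lowers to a uniform RNG, a comparison against the dropout probability, and a masked scale:

```mlir
// Illustrative sketch only: approximate shape of native_dropout (p = 0.5,
// training mode) after the MHLO e2e pipeline. Exact ops may differ.
func.func @dropout(%x: tensor<2x3xf32>) -> (tensor<2x3xf32>, tensor<2x3xi1>) {
  %low = mhlo.constant dense<0.0> : tensor<f32>
  %high = mhlo.constant dense<1.0> : tensor<f32>
  %shape = mhlo.constant dense<[2, 3]> : tensor<2xi64>
  // Sample U(0, 1) for every element.
  %rand = "mhlo.rng"(%low, %high, %shape) {rng_distribution = #mhlo.rng_distribution<UNIFORM>}
      : (tensor<f32>, tensor<f32>, tensor<2xi64>) -> tensor<2x3xf32>
  // Keep an element iff its sample is >= p.
  %p = mhlo.constant dense<0.5> : tensor<2x3xf32>
  %mask = "mhlo.compare"(%rand, %p) {comparison_direction = #mhlo.comparison_direction<GE>}
      : (tensor<2x3xf32>, tensor<2x3xf32>) -> tensor<2x3xi1>
  // Zero out dropped elements and rescale kept ones by 1/(1-p) = 2.
  %maskf = "mhlo.convert"(%mask) : (tensor<2x3xi1>) -> tensor<2x3xf32>
  %scale = mhlo.constant dense<2.0> : tensor<2x3xf32>
  %kept = mhlo.multiply %x, %maskf : tensor<2x3xf32>
  %out = mhlo.multiply %kept, %scale : tensor<2x3xf32>
  return %out, %mask : tensor<2x3xf32>, tensor<2x3xi1>
}
```

Note that native_dropout returns both the scaled output and the boolean keep-mask, which is why the sketch returns a pair.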

Co-authored-by: Bairen Yi [email protected]
Co-authored-by: Jiawei Wu [email protected]
Co-authored-by: Tianyou Guo [email protected]
Co-authored-by: Xu Yan [email protected]
Co-authored-by: Ziheng Jiang [email protected]

@tanyokwok (Collaborator) left a comment:

LGTM

@tanyokwok tanyokwok merged commit c935795 into llvm:main Aug 15, 2022
@@ -5313,9 +5313,10 @@ def Torch_AtenOnesLikeOp : Torch_Op<"aten.ones_like", [
}

def Torch_AtenEmptyMemoryFormatOp : Torch_Op<"aten.empty.memory_format", [
NoSideEffect,
Contributor:

Hi, this file is generated and should not be edited manually. Could you please revert this patch so that it does not block other progress? Other people will run https://github.com/llvm/torch-mlir/blob/main/build_tools/update_torch_ods.sh to regenerate this file, which would lose your changes and break these tests.

You will need to update this script to make it smart enough to emit these traits:

emit("aten::empty.memory_format : (int[], int?, int?, Device?, bool?, int?) -> (Tensor)")

In this case, I don't think that aten.empty.memory_format is NoSideEffect, because it can raise an error if the parameters are wrong. Can you find a way to resubmit this PR without needing this?

Yancey1989 (Collaborator, Author):

I updated this attribute because MHLO has value semantics, but aten.empty.memory_format seems to go against that. The decomposition pass lowers some patterns to pairs such as aten.empty.memory_format + aten.fill; under MHLO's value semantics, we can lower the fill op to a constant tensor instead of creating an empty tensor and filling it. To let the Canonicalizer pass eliminate the leftover aten.empty.memory_format, I added the NoSideEffect attribute.

But I agree with you: aten.empty.memory_format may raise an error. I will revert this PR and find another way to lower the native_dropout op.
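For illustration (hypothetical IR, not taken from the patch), the folding this reply describes looks like:

```mlir
// Sketch only: under value semantics, an empty+fill pair carries no
// aliasing or mutation, so the pair can fold to a single constant.
// Before decomposition cleanup:
//   %e = <aten.empty.memory_format ...>   // allocation, contents undefined
//   %f = <aten.fill %e, 0.0>              // overwrite every element
// After lowering fill to a constant:
//   %f = mhlo.constant dense<0.0> : tensor<2x3xf32>
// The empty op is now unused, but the Canonicalizer may only erase an
// unused op if it is marked side-effect free -- hence the NoSideEffect
// change this review thread is about.
```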

Contributor:

Could we change the decomposition to use aten::new_full?

Contributor:

It would also be good for us to add a dialect-level canonicalization pattern that erases ReadOnly ops if they don't have uses. That would cover this and many other useful cases.
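A sketch of the before/after IR for such a canonicalization (ops and names here are illustrative, not an existing pattern in the repo):

```mlir
// Hypothetical dialect-level canonicalization: a ReadOnly op whose
// results are all unused has no observable effect, so it can be erased.
// Before:
//   %len = <torch.aten.size.int %t, %dim>   // ReadOnly; %len never used
// After:
//   (op removed entirely; the rest of the IR is unchanged)
// This would let unused query/allocation ops disappear without marking
// each one NoSideEffect individually.
```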

Yancey1989 added a commit that referenced this pull request Aug 16, 2022
Yancey1989 added a commit to Yancey1989/torch-mlir that referenced this pull request Aug 17, 2022
Yancey1989 added a commit that referenced this pull request Aug 17, 2022
tanyokwok pushed a commit to pai-disc/torch-mlir that referenced this pull request Aug 24, 2022
tanyokwok pushed a commit to pai-disc/torch-mlir that referenced this pull request Aug 31, 2022
tanyokwok pushed a commit to pai-disc/torch-mlir that referenced this pull request Aug 31, 2022
tanyokwok pushed a commit to pai-disc/torch-mlir that referenced this pull request Aug 31, 2022
tanyokwok pushed a commit to pai-disc/torch-mlir that referenced this pull request Aug 31, 2022
tanyokwok pushed a commit to pai-disc/torch-mlir that referenced this pull request Sep 1, 2022
tanyokwok pushed a commit to pai-disc/torch-mlir that referenced this pull request Sep 23, 2022
tanyokwok pushed a commit to pai-disc/torch-mlir that referenced this pull request Sep 23, 2022
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request Oct 3, 2022
Signed-off-by: Whitney Tsang <[email protected]>

Co-authored-by: Whitney Tsang <[email protected]>
Co-authored-by: Ettore Tiotto <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request Oct 3, 2022
* Add git commit id to llvm.ident metadata
* Addressing review comment: llvm#1211 (comment)

Signed-off-by: Whitney Tsang <[email protected]>

Co-authored-by: Whitney Tsang <[email protected]>
Co-authored-by: Ettore Tiotto <[email protected]>
wyzero pushed a commit to pai-disc/torch-mlir that referenced this pull request Nov 4, 2022
* fix float width
* fix divide_floor & export promoteTypes api (#9)
* To comply with the old pytorch versions
* Add native_dropout_backward & native_layer_norm_backward decomposition (#15)
* add native_dropout and related ops pattern (llvm#1211)
* [MHLO] fix dot general contract
* Fix batch_norm, div.Tensor_mode and folder (#21)
* reimplement linear lowering
* reimplement 2-D rhs for mutmul
* add torchdynamo
tanyokwok pushed a commit that referenced this pull request Nov 10, 2022
tanyokwok pushed a commit to pai-disc/torch-mlir that referenced this pull request Feb 2, 2023
JamesTheZ pushed a commit to pai-disc/torch-mlir that referenced this pull request Jul 19, 2023
* Fix float width
* Fix divide_floor & export promoteTypes api (#9)
* To comply with the old pytorch versions
* Add native_dropout_backward & native_layer_norm_backward decomposition (#15)
* Add native_dropout and related ops pattern (llvm#1211)
* [MHLO] fix dot general contract
* Fix batch_norm, div.Tensor_mode and folder (#21)
* Reimplement linear lowering
* Reimplement 2-D rhs for mutmul
* Add torchdynamo
* Decompose torch.slice_scatter (llvm#1622)
* Fix i64 torch.tensor dtype
* Add more mhlo basic converters
* Alleviate softmax datatype check (#24)
* Fix decompose native_batch_norm (#27)
* Support group_norm lowering (#25)
* Decompose torch.ones/zeros (#28)
* Fix softmax output type
* Fix gather
* Fix some decompose patterns
* Not check assert at runtime (#31)
* Fix bool tensor attr conversion bug (#32)
* Fix mlirDenseElementsAttrBoolGet
JamesTheZ added a commit to pai-disc/torch-mlir that referenced this pull request Jul 19, 2023
Co-Authored-By: ZHENG, Zhen <[email protected]>
JamesTheZ added a commit to pai-disc/torch-mlir that referenced this pull request Jul 25, 2023
* Rewrite mhlo with stablehlo after rebase.
* Fix BAZEL building error of multiple definition.
JamesTheZ added a commit to pai-disc/torch-mlir that referenced this pull request Jul 25, 2023
JamesTheZ added a commit to pai-disc/torch-mlir that referenced this pull request Jul 27, 2023
3 participants