[MHLO] add native_dropout and related ops pattern #1211
Conversation
LGTM
@@ -5313,9 +5313,10 @@ def Torch_AtenOnesLikeOp : Torch_Op<"aten.ones_like", [
 }

 def Torch_AtenEmptyMemoryFormatOp : Torch_Op<"aten.empty.memory_format", [
+    NoSideEffect,
Hi, this file is generated and should not be edited manually. Can you please revert this patch so that it does not block other progress? Other people will be running https://github.com/llvm/torch-mlir/blob/main/build_tools/update_torch_ods.sh to update this file, which would overwrite your changes and break these tests.
You will need to update this script to make it smart enough to emit these traits:
torch-mlir/python/torch_mlir/dialects/torch/importer/jit_ir/build_tools/torch_ods_gen.py, line 435 in 9d6ee48:
emit("aten::empty.memory_format : (int[], int?, int?, Device?, bool?, int?) -> (Tensor)")
In this case, I don't think that aten.empty.memory_format is NoSideEffect, because it can raise an error if the parameters are wrong. Can you find a way to resubmit this PR without needing this?
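To make the suggestion concrete, here is a minimal, runnable sketch of the idea: extra traits should live in the generator's registry entry so that regenerating GeneratedTorchOps.td reproduces them. The `emit` helper below is a toy stand-in with hypothetical internals, not the real torch_ods_gen.py code:

```python
# Toy stand-in for the ODS generator entry point; the real script's internals differ.
from typing import List, Optional

def emit(signature: str, traits: Optional[List[str]] = None) -> str:
    """Render a minimal ODS-style def header for a registered JIT operator."""
    op_name = signature.split(" : ")[0]        # e.g. "aten::empty.memory_format"
    mnemonic = op_name.replace("::", ".")      # "aten.empty.memory_format"
    trait_list = ", ".join(traits or [])
    return f'def Torch_Op<"{mnemonic}", [{trait_list}]>'

# The extra trait would be requested here, in the registry entry, instead of
# hand-editing the generated .td file (which update_torch_ods.sh overwrites).
print(emit(
    "aten::empty.memory_format : (int[], int?, int?, Device?, bool?, int?) -> (Tensor)",
    traits=["NoSideEffect"],
))
```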
I updated this attribute just because MHLO is value-semantic, but it seems aten.empty.memory_format goes against that. The decomposition pass lowers some patterns into sequences such as aten.empty.memory_format + aten.fill; in MHLO semantics we can lower the fill op to a constant tensor instead of creating an empty tensor and filling it. To eliminate the leftover aten.empty.memory_format after the Canonicalizer pass, I added the NoSideEffect attribute.
But I agree with you: aten.empty.memory_format may raise an error. I will revert this PR and find another way to lower the native_dropout op.
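For reference, a PyTorch-level sketch of what the training-mode decomposition computes, and where the aten.empty.memory_format comes from. This mirrors the value semantics only (an assumption about equivalence), not the exact Torch-dialect op sequence the pass emits:

```python
# Reference-level sketch of native_dropout in training mode.
import torch

def native_dropout_reference(x: torch.Tensor, p: float):
    # Materialize a uniform tensor (empty + uniform in op form), threshold it into a
    # boolean keep-mask, and rescale the surviving activations by 1 / (1 - p).
    mask = torch.empty_like(x).uniform_(0.0, 1.0) > p
    scale = 1.0 / (1.0 - p)
    return x * mask * scale, mask

out, mask = native_dropout_reference(torch.randn(4, 4), p=0.5)
print(out.shape, mask.dtype)  # torch.Size([4, 4]) torch.bool
```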
Could we change the decomposition to use aten::new_full?
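Sketching what that would mean at the PyTorch level, just to show the two op shapes (the actual decomposition would of course be written against Torch dialect ops):

```python
# new_full produces the filled tensor as a single value-producing op, so no dead
# aten.empty.memory_format is left behind once the fill becomes a constant.
import torch

x = torch.randn(2, 3)

filled_via_empty = torch.empty(2, 3).fill_(1.0)   # empty.memory_format + fill_
filled_via_new_full = x.new_full((2, 3), 1.0)     # single aten::new_full

assert torch.equal(filled_via_empty, filled_via_new_full)
```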
It would also be good for us to add a dialect-level canonicalization pattern that erases ReadOnly ops if they don't have uses. That would cover this and many other useful cases.
This reverts commit c935795.
* Rewrite mhlo with stablehlo after rebase. * Fix BAZEL building error of multiple definition. * Fix float width * Fix divide_floor & export promoteTypes api (#9) * To comply with the old pytorch versions * Add native_dropout_backward & native_layer_norm_backward decomposition (#15) * Add native_dropout and related ops pattern (llvm#1211) * [MHLO] fix dot general contract * Fix batch_norm, div.Tensor_mode and folder (#21) * Reimplement linear lowering * Reimplement 2-D rhs for mutmul * Add torchdynamo * Decompose torch.slice_scatter (llvm#1622) * Fix i64 torch.tensor dtype * Add more mhlo basic converters * Alleviate softmax datatype check (#24) * Fix decompose native_batch_norm (#27) * Support group_norm lowering (#25) * Decompose torch.ones/zeros (#28) * Fix softmax output type * Fix gather * Fix some decompose patterns * Not check assert at runtime (#31) * Fix bool tensor attr conversion bug (#32) * Fix mlirDenseElementsAttrBoolGet --------- Co-authored-by: ZHENG, Zhen <[email protected]>
See RFC #999
This PR adds the aten::native_dropout op pattern and some decomposed op patterns it relies on, e.g. aten::size.int and aten::valsem.aten.uniform. In the dropout.mlir FileCheck test, it uses the MHLO e2e compilation pass pipeline instead of the single TorchToMhlo conversion pass, to check that the op conversion works as expected.
Co-authored-by: Bairen Yi [email protected]
Co-authored-by: Jiawei Wu [email protected]
Co-authored-by: Tianyou Guo [email protected]
Co-authored-by: Xu Yan [email protected]
Co-authored-by: Ziheng Jiang [email protected]