This repository has been archived by the owner on Jun 19, 2024. It is now read-only.
forked from llvm/torch-mlir
Conversation
`emitError` is intended for genuine error cases, not for pattern match failures; `notifyMatchFailure` is the right mechanism for a pattern to report why it did not match. Op verification should also not happen inside patterns but as part of the op's verifier; however, checks that were obviously verification were left as `emitError` inside patterns to keep this change small.
This commit adds the canonicalization pattern for the `prim.ListUnpack` op. Signed-off-by: Vivek Khandelwal <[email protected]>
This commit adds support for negative dim cases for the `aten.cat`, `aten.slice.Tensor`, and `aten.slice_scatter` ops. Signed-off-by: Vivek Khandelwal <[email protected]>
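A sketch of the negative-dim normalization such lowerings typically rely on (hypothetical helper, not the actual torch-mlir code):

```python
def normalize_dim(dim: int, rank: int) -> int:
    """Map a possibly-negative dim index into [0, rank)."""
    if dim < 0:
        dim += rank
    if not (0 <= dim < rank):
        raise ValueError(f"dim {dim} out of range for rank {rank}")
    return dim

# For a rank-4 tensor, dim=-1 refers to the last dimension.
assert normalize_dim(-1, 4) == 3
assert normalize_dim(2, 4) == 2
```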
This patch makes some rudimentary changes to torch-mlir's use of MLIR Python bindings to work with the most recent LLVM code. We can perhaps do better by being more selective in what we link against, instead of using `MLIRPythonExtension.RegisterEverything`.
This patch still only supports a single indexing tensor.
…lvm#1081) An upstream MLIR bug (that was recently fixed) caused the result to be ignored for Region- and Block-visitor functions. Now that the bug is fixed, we don't need an auxiliary boolean to track whether the visitor function has succeeded.
- Supports cases where the view op expands and collapses dims simultaneously. This does not handle the case where it is neither expanding nor collapsing (e.g. [2, 3] -> [3, 2])
- Additionally fixes a previous bug with adding 1-sized dims on both sides of a tensor with aten.view
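The expand/collapse association the view lowering needs can be sketched as a greedy grouping of dims whose size products match; a group that both expands and collapses corresponds to the unhandled case mentioned above. This is an illustrative Python sketch under those assumptions, not the actual implementation:

```python
def associate_dims(in_shape, out_shape):
    """Greedily group dims of in_shape/out_shape whose products match.

    Returns a list of (input_dim_indices, output_dim_indices) groups.
    Raises ValueError when a group would have to both expand and
    collapse at once (e.g. [2, 3] -> [3, 2]), or when no association
    exists at all."""
    groups, i, j = [], 0, 0
    while i < len(in_shape) and j < len(out_shape):
        gi, gj = [i], [j]
        pi, pj = in_shape[i], out_shape[j]
        i, j = i + 1, j + 1
        while pi != pj:
            if pi < pj:
                if i >= len(in_shape):
                    raise ValueError("no expand/collapse association")
                pi *= in_shape[i]; gi.append(i); i += 1
            else:
                if j >= len(out_shape):
                    raise ValueError("no expand/collapse association")
                pj *= out_shape[j]; gj.append(j); j += 1
        if len(gi) > 1 and len(gj) > 1:
            raise ValueError("neither pure expand nor pure collapse")
        groups.append((gi, gj))
    if i != len(in_shape) or j != len(out_shape):
        raise ValueError("leftover dims with no association")
    return groups
```

For example, `associate_dims([2, 12], [2, 3, 4])` groups the trailing 12 with the (3, 4) pair, a pure expand.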
Co-authored-by: Bairen Yi <[email protected]> Co-authored-by: Jiawei Wu <[email protected]> Co-authored-by: Tianyou Guo <[email protected]> Co-authored-by: Xu Yan <[email protected]> Co-authored-by: Ziheng Jiang <[email protected]>
In the interest of merging upstream LLVM quickly, a previous patch (7f08169) updated the torch-mlir build to register all dialects and passes through Python bindings. This patch limits the dialects and passes to only those that are used in torch-mlir. Key to this change are the removal of `MLIRPythonExtension.RegisterEverything` and the introduction of a new Python module (`_mlir_libs/_site_initialize_0.py`), where we register the dialects and passes used by torch-mlir.
…llvm#1093) This reverts commit ad283c1, since it's causing nightly build failures for all platforms.
The unknown dtype case can come from RefineTypes.
This enables building PyTorch from source in the CI. The build should mostly hit the ccache. Release builds will follow once we have some runtime on the CI.
This commit adds verifiers to the ops `ToBuiltinTensorOp` and `FromBuiltinTensorOp` that make sure that the input and output have the same shape and data type.
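The shape/dtype agreement such verifiers enforce can be sketched as follows (illustrative Python, with `None` standing in for a dynamic dimension; not the actual C++ verifier):

```python
DYNAMIC = None  # stands in for a dynamic (unknown) dimension

def verify_same_shape_and_dtype(in_shape, in_dtype, out_shape, out_dtype):
    """Return an error string if shapes/dtypes mismatch, else None.

    Dynamic dims are treated as compatible with anything, mirroring
    how dynamic dimensions typically verify."""
    if in_dtype != out_dtype:
        return f"dtype mismatch: {in_dtype} vs {out_dtype}"
    if len(in_shape) != len(out_shape):
        return f"rank mismatch: {len(in_shape)} vs {len(out_shape)}"
    for a, b in zip(in_shape, out_shape):
        if a is not DYNAMIC and b is not DYNAMIC and a != b:
            return f"shape mismatch: {in_shape} vs {out_shape}"
    return None
```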
See RFC: llvm#999 Co-authored-by: Bairen Yi <[email protected]> Co-authored-by: Jiawei Wu <[email protected]> Co-authored-by: Tianyou Guo <[email protected]> Co-authored-by: Xu Yan <[email protected]> Co-authored-by: Ziheng Jiang <[email protected]>
* [MHLO] Init MHLO view like op patterns See RFC: llvm#999 Co-authored-by: Bairen Yi <[email protected]> Co-authored-by: Jiawei Wu <[email protected]> Co-authored-by: Tianyou Guo <[email protected]> Co-authored-by: Xu Yan <[email protected]> Co-authored-by: Ziheng Jiang <[email protected]> * update filecheck test cases * rebase, remove chlo and clang-format
* [MHLO] Add [un]squeeze op patterns * Conform to llvm coding standard * minor update
See RFC llvm#999 Co-authored-by: Bairen Yi <[email protected]> Co-authored-by: Jiawei Wu <[email protected]> Co-authored-by: Tianyou Guo <[email protected]> Co-authored-by: Xu Yan <[email protected]> Co-authored-by: Ziheng Jiang <[email protected]>
Follows existing conventions for unary operators.
Add source builds and remove deprecated libtorch information.
* [MHLO] Init MHLO basic Op Conversion Co-authored-by: Bairen Yi <[email protected]> Co-authored-by: Jiawei Wu <[email protected]> Co-authored-by: Tianyou Guo <[email protected]> Co-authored-by: Xu Yan <[email protected]> Co-authored-by: Ziheng Jiang <[email protected]> * [NFC] Remove 'from @llvm-project' annotation Co-authored-by: wujiawei.jw <[email protected]>
…x.Tensor (llvm#1097)
- Includes a canonicalizer for `aten.add.t` needed for successfully lowering the shape function
- Only offers support for statically sized index tensors when there is more than one
- Dynamic shape support remains for single indexing tensors
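The `aten.add.t` canonicalization amounts to folding the concatenation of two statically known lists into a constant; a minimal sketch (hypothetical helper, not the actual pattern):

```python
def fold_list_add(lhs, rhs):
    """Fold list concatenation when both operands are statically known.

    Returns the folded list, or None when either operand is unknown
    (meaning the pattern would not fire)."""
    if lhs is None or rhs is None:
        return None
    return list(lhs) + list(rhs)

# A shape function computing cat's result dims could then fold
# `[2, 3] + [4]` down to the constant list `[2, 3, 4]`.
```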
This commit adds lowering of the `aten.ge.int` op. Signed-off-by: Vivek Khandelwal <[email protected]>
This commit adds the decomposition for the `aten.var.correction` op. Signed-off-by: Vivek Khandelwal <[email protected]>
This commit fixes the shape calculation for:
1. aten.mean.dim
2. aten.var.dim
3. aten.sum.dim_IntList
Also, it fixes the lowering of `aten.mean.dim` and `aten.sum.dim_IntList` to handle the case of an empty dim list. Signed-off-by: Vivek Khandelwal <[email protected]>
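The empty-dim-list convention (an empty list means "reduce over all dimensions") can be sketched as a shape function (hypothetical Python helper, not the actual torch-mlir shape library code):

```python
def mean_dim_shape(in_shape, dims, keepdim):
    """Result shape for a mean/sum-over-dims reduction.

    An empty `dims` list is interpreted as reducing over every
    dimension; negative dims are normalized first."""
    rank = len(in_shape)
    if not dims:
        dims = list(range(rank))
    dims = [d + rank if d < 0 else d for d in dims]
    out = []
    for i, size in enumerate(in_shape):
        if i in dims:
            if keepdim:
                out.append(1)  # reduced dims survive as size 1
        else:
            out.append(size)
    return out
```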
Fixes llvm#1121 Signed-off-by: Vivek Khandelwal <[email protected]>
* Fix symint related functionalization ops * Remove zeros xfail from LTC tests
* Add LTC architecture diagram * Use PNG for diagrams * Update diagram
This commit adds a file explaining the structure of an E2E test, as well as useful things to keep in mind when adding new tests.
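A heavily simplified sketch of that structure: each test is registered under a unique name and its backend result is compared against a reference ("golden") result. All names here are illustrative, not the actual torch-mlir e2e API:

```python
# Registry mapping test names to test callables (illustrative only).
GLOBAL_TEST_REGISTRY = {}

def register_test_case(fn):
    """Decorator that records a test under its function name."""
    GLOBAL_TEST_REGISTRY[fn.__name__] = fn
    return fn

@register_test_case
def ElementwiseAddModule_basic(backend_add):
    lhs, rhs = [1.0, 2.0], [3.0, 4.0]
    golden = [a + b for a, b in zip(lhs, rhs)]  # reference result
    assert backend_add(lhs, rhs) == golden      # backend-under-test result

def run_all(backend_add):
    """Run every registered test; return the names that failed."""
    failures = []
    for name, test in sorted(GLOBAL_TEST_REGISTRY.items()):
        try:
            test(backend_add)
        except AssertionError:
            failures.append(name)
    return failures
```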
Change logic so that we never run the multiprocessing codepath with only 1 worker. That configuration was causing all subsequent tests to spuriously fail if one test failed with a crash (this was easy to see after sorting the tests). That configuration was the one used by the CI. Also, sort tests to make output nicer. Also, make verbose mode more verbose so that it is easy to see in `-s` mode which test is crashing.
* Disable LTC by default until upstream revert relands Tracked with the WIP llvm#1292 * Disable LTC e2e tests temporarily * Update setup.py Disable LTC in setup.py temporarily until upstream is fixed.
We previously did not have "value-semantic" tensors, but rather "immutable" tensors.
This ensures that they are always up to date. This also updates the shape lib to make the new CI actually pass :)
We use it for more than TorchScript testing now. This is a purely mechanical change to adjust some file paths to remove "torchscript". The most perceptible change here is that e2e tests are now run with:
```
./tools/e2e_test.sh
```
instead of:
```
./tools/torchscript_e2e_test.sh
```
In the sequential case we weren't sorting, which was confusing.
Gets both CI and Release builds integrated in one workflow. Mounts ccache and pip cache as required for fast iterative builds. Current Release docker builds still run with root perms; fix it in the future to run as the same user. There may be some corner cases left, especially when switching build types etc.

Docker build TEST plan. tl;dr: build everything: Releases (Python 3.8, 3.9, 3.10) and CIs.

Build everything:
  TM_PACKAGES="torch-mlir out-of-tree in-tree"
  2.57s user 2.49s system 0% cpu 30:33.11 total

Out-of-tree + PyTorch binaries:
  Fresh build (purged cache): TM_PACKAGES="out-of-tree"
  0.47s user 0.51s system 0% cpu 5:24.99 total
  Incremental with ccache: TM_PACKAGES="out-of-tree"
  0.09s user 0.08s system 0% cpu 34.817 total

Out-of-tree + PyTorch from source:
  Incremental: TM_PACKAGES="out-of-tree" TM_USE_PYTORCH_BINARY=OFF
  1.58s user 1.81s system 2% cpu 1:59.61 total

In-tree + PyTorch binaries:
  Fresh build and tests (purged ccache): TM_PACKAGES="in-tree"
  0.53s user 0.49s system 0% cpu 6:23.35 total
  Fresh build, but with prior ccache: TM_PACKAGES="in-tree"
  0.45s user 0.66s system 0% cpu 3:57.47 total
  Incremental in-tree with all tests and regression tests: TM_PACKAGES="in-tree"
  0.16s user 0.09s system 0% cpu 2:18.52 total

In-tree + PyTorch from source:
  Fresh build and tests (purged ccache): TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=OFF
  2.03s user 2.28s system 0% cpu 11:11.86 total
  Fresh build, but with prior ccache: TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=OFF
  1.58s user 1.88s system 1% cpu 4:53.15 total
  Incremental in-tree with all tests and regression tests: TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=OFF
  1.09s user 1.10s system 1% cpu 3:29.84 total
  Incremental without tests: TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=OFF TM_SKIP_TESTS=ON
  1.52s user 1.42s system 3% cpu 1:15.82 total

In-tree + out-of-tree + PyTorch binaries:
  TM_PACKAGES="out-of-tree in-tree"
  0.25s user 0.18s system 0% cpu 3:01.91 total

To clear all artifacts:
  rm -rf build build_oot llvm-build libtorch docker_venv externals/pytorch/build
Caught in the wild here: https://github.com/llvm/torch-mlir/runs/8046660640?check_suite_focus=true It is common for a missing dependency to only surface as an issue on the CI machines, since they have fewer cores, which prevents a "race" that happens to cause the dependency to be built before the dependent.
This should speed up source builds and ccache. May cause issues on macOS (pytorch/pytorch#80018)
This is to debug what exactly is causing the ccache lookup failures etc.
Related to llvm#1227. 1. Reduce MHLO #ifdefs. 2. Silence compilation warnings.
The new logic has the following benefits: 1. It does not clobber the working tree state. We expect testing to not change the work tree. 2. It correctly handles the case where a user has changes to the generated files, but hasn't checked them in yet (this happens frequently when adding new ops).
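The non-clobbering check can be sketched as regenerating into a temporary directory and diffing against the checked-in copies, leaving the working tree untouched (paths and the generator callback here are hypothetical):

```python
import filecmp
import tempfile
from pathlib import Path

def generated_files_up_to_date(checked_in_dir: Path, generate) -> bool:
    """Regenerate into a temp dir and compare against checked-in files.

    `generate(dir)` is assumed to write fresh copies of every generated
    file into `dir`. The working tree is never modified, so local,
    uncommitted edits to the generated files are compared as-is."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp_dir = Path(tmp)
        generate(tmp_dir)
        for fresh in tmp_dir.iterdir():
            existing = checked_in_dir / fresh.name
            if not existing.exists() or not filecmp.cmp(
                fresh, existing, shallow=False
            ):
                return False
    return True
```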
Fix float width, fix divide_floor, and export the promoteTypes API (#9)
Referenced commits:
- torch: [nfc] use `WalkResult::isInterrupted()` instead of booleans (llvm/torch-mlir#1081)
- `lift_fresh` op (llvm/torch-mlir#1101)
- Add support for `dim=None` to `AtenMeanDimOp` (llvm/torch-mlir#1129)
- `aten.masked.tensor` op
- `LowerToBackendContract.cpp` to `TorchMLIRTorchPasses` bazel target (llvm/torch-mlir#1243)
- Clean up shape functions that use `sum_mean_dim` (llvm/torch-mlir#1217)
- mlir: fix replacement of `OpaqueElementsAttr` (llvm/torch-mlir#1274)