Rebase xsmm to main #177
Draft
Devjiu wants to merge 18 commits into triton-lang:main from Devjiu:rebase_xsmm_to_main
Conversation
Adds a `USE_BLOCK_POINTER` flag to the matmul_kernel so we can get IR for pointers-to-tensors instead of tensors-of-pointers.
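A minimal sketch of what the flag toggles; the helper name, arguments, and kernel structure here are illustrative assumptions, not the tutorial's exact code:

```python
import triton
import triton.language as tl

@triton.jit
def load_a_tile(a_ptr, M, K, stride_am, stride_ak,
                BLOCK_M: tl.constexpr, BLOCK_K: tl.constexpr,
                USE_BLOCK_POINTER: tl.constexpr):
    if USE_BLOCK_POINTER:
        # Pointer-to-tensor: one block pointer describing the whole tile.
        a_block_ptr = tl.make_block_ptr(base=a_ptr, shape=(M, K),
                                        strides=(stride_am, stride_ak),
                                        offsets=(0, 0),
                                        block_shape=(BLOCK_M, BLOCK_K),
                                        order=(1, 0))
        return tl.load(a_block_ptr, boundary_check=(0, 1))
    else:
        # Tensor-of-pointers: one pointer per element of the tile.
        offs_m = tl.arange(0, BLOCK_M)
        offs_k = tl.arange(0, BLOCK_K)
        a_ptrs = a_ptr + offs_m[:, None] * stride_am + offs_k[None, :] * stride_ak
        return tl.load(a_ptrs, mask=(offs_m[:, None] < M) & (offs_k[None, :] < K))
```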
Implements a lowering pass from vector to XSMM microkernels. libxsmm is added as an external dependency together with general MLIR infrastructure for handling XSMM code generation and runtime execution. The XSMM lowering is optional and can be enabled at the JIT step with the environment variable TRITON_CPU_XSMM=1. libxsmm is built as a shared library and linked with the XSMM-related libraries, which are also added to the Python infrastructure. Additionally, general MLIR utilities are imported to allow analysis, code generation, and microkernel execution. Initially, a simple pattern mapping vector contraction to an XSMM kernel is added.
…riton-lang#5) Contraction lowering now moves the accumulation buffer outside of the reduction loop when possible. This reduces the data movement between memory and registers needed to accommodate mixed memref and vector abstractions.
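As a schematic plain-NumPy illustration of the hoisting (not the actual MLIR transformation), the accumulator stays in a local value across the reduction loop instead of round-tripping through the memory buffer on every step:

```python
import numpy as np

M, N, K, BLOCK_K = 32, 32, 128, 32
a = np.random.rand(M, K).astype(np.float32)
b = np.random.rand(K, N).astype(np.float32)
c = np.zeros((M, N), dtype=np.float32)

# Before: the accumulator round-trips through the memory buffer `c` each step.
for k in range(0, K, BLOCK_K):
    c[:] = c + a[:, k:k + BLOCK_K] @ b[k:k + BLOCK_K, :]

# After hoisting: accumulate in a local value and store it once after the loop.
acc = np.zeros((M, N), dtype=np.float32)
for k in range(0, K, BLOCK_K):
    acc += a[:, k:k + BLOCK_K] @ b[k:k + BLOCK_K, :]
c_hoisted = acc

assert np.allclose(c, c_hoisted, atol=1e-5)
```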
Adds a lowering pass from Triton to XSMM microkernels. XSMM utility APIs are generalized to work on opaque operations representing contractions. A simple pattern mapping tt.dot to an XSMM kernel is added. The runtime lowering to XSMM is now controlled by two separate flags:
- TRITON_CPU_VECTOR_XSMM=1 to lower from vector as before
- TRITON_CPU_TRITON_XSMM=1 to lower from Triton ops
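A minimal usage sketch, assuming the flags are read at JIT time and therefore must be set before the kernel is compiled:

```python
import os

# The XSMM lowerings are opt-in; set the flag before compilation is triggered.
os.environ["TRITON_CPU_TRITON_XSMM"] = "1"    # lower tt.dot through XSMM
# os.environ["TRITON_CPU_VECTOR_XSMM"] = "1"  # or: lower from the vector dialect

import triton  # noqa: E402
# ... define and launch the matmul kernel as usual; the flag only changes how
# the contraction is lowered, not the kernel source.
```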
…on (triton-lang#7) * Lift the -triton-raise-block-pointer pass from intel-xpu-backend-for-triton. The code was in turn taken from triton-shared (though it does not use the tts dialect).
Ports the hoisting from the Vector-to-XSMM pass to the Triton lowering. Dot lowering now moves the accumulation buffer outside of the reduction loop when possible.
Updates the libxsmm version. Brings support for the VNNI SW pipeline.
Extends XSMM code generation to allow mixed-precision computations to match Triton's requirements for the <bf16 x bf16 -> f32> contraction. Data type selection is added as a global variable to the matmul tutorial. BF16 can suffer from some inaccuracies compared to the PyTorch baseline; however, the difference appears to be the same between native triton-cpu and the XSMM lowering - no mismatch on SPR. The matmul tutorial is aligned more closely with the main branch. V2 backend benchmarking is disabled due to its instabilities. Default tile sizes are increased to improve general performance.
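A small PyTorch-only sketch of the accuracy comparison described above; it only emulates the <bf16 x bf16 -> f32> accumulation, and the sizes are illustrative:

```python
import torch

M, N, K = 512, 512, 512
a = torch.randn(M, K, dtype=torch.bfloat16)
b = torch.randn(K, N, dtype=torch.bfloat16)

# PyTorch f32 baseline.
ref = torch.matmul(a.float(), b.float())

# Emulated <bf16 x bf16 -> f32> contraction: bf16 inputs, f32 result.
out = torch.matmul(a, b).float()

# Some mismatch vs. the f32 baseline is expected with bf16 inputs.
print((out - ref).abs().max())
```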
Adds two new optional flags to the matmul tutorial:
- K dim padding - pads the input matrices to a multiple of the chosen BLOCK_SIZE_K
- dynamic K blocking - overrides the set BLOCK_SIZE_K and adjusts it based on the input K dimension; the input is padded if needed
The main motivation is to allow testing with larger reduction-dimension blocks without the kernel losing support for various sizes. Padding is required to meet Triton's power-of-2 size requirement. Dynamic blocking can be used to shrink the reduction-dimension range or eliminate it completely. Allowing the kernel to work on larger K blocks is also helpful for the future rewriting of GEMM into BRGEMM to ensure a larger batch dimension.
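A rough sketch of both options; the helper names and the max-block cap are hypothetical, not the tutorial's actual code:

```python
import torch
import triton

def pad_k(a: torch.Tensor, b: torch.Tensor, block_k: int):
    """Hypothetical helper: pad the K dim of A (M x K) and B (K x N) up to a
    multiple of block_k so every reduction block has the full block size."""
    K = a.shape[1]
    k_padded = triton.cdiv(K, block_k) * block_k
    if k_padded == K:
        return a, b
    a = torch.nn.functional.pad(a, (0, k_padded - K))        # pad last dim of A
    b = torch.nn.functional.pad(b, (0, 0, 0, k_padded - K))  # pad first dim of B
    return a, b

def dynamic_block_k(K: int, max_block: int = 1024) -> int:
    """Hypothetical dynamic K blocking: round K up to a power of two and use it
    as a single block when small enough (eliminating the reduction loop),
    otherwise fall back to the max_block cap."""
    return min(triton.next_power_of_2(K), max_block)
```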
Adds extra optional padding that can be used to ensure that the input matrices' strides are non-power-of-two, to improve cache behavior. Currently, it is most useful with DYNAMIC_K_BLOCK enabled.
Extends contraction lowering to XSMM by rewriting a plain GEMM into a BRGEMM kernel when possible. The rewrite improves performance for larger K block sizes thanks to extra reduction-dim tiling. Use of the BRGEMM kernel also enables online VNNI packing for BF16.
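For intuition only (this is not the lowering itself), a NumPy sketch showing that a K-blocked GEMM is equivalent to a batch-reduce GEMM (BRGEMM) over the K blocks, C = sum_i A_i @ B_i:

```python
import numpy as np

M, N, K, BLOCK_K = 64, 64, 256, 64
a = np.random.rand(M, K).astype(np.float32)
b = np.random.rand(K, N).astype(np.float32)

# Plain GEMM.
c_gemm = a @ b

# BRGEMM view: split K into blocks and reduce over the batch of small GEMMs.
a_blocks = a.reshape(M, K // BLOCK_K, BLOCK_K).transpose(1, 0, 2)  # (batch, M, BLOCK_K)
b_blocks = b.reshape(K // BLOCK_K, BLOCK_K, N)                     # (batch, BLOCK_K, N)
c_brgemm = sum(ab @ bb for ab, bb in zip(a_blocks, b_blocks))

assert np.allclose(c_gemm, c_brgemm, atol=1e-4)
```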
Adds an optional flag to move matmul input preprocessing outside of the benchmarked kernel. This option allows excluding preprocessing overhead from performance measurements.
Adds a Python wrapper for a parallelized in-place copy function using libxsmm and OpenMP. It is intended to be used for an efficient tensor padding implementation. The libxsmm paths have to be specified through environment variables:
- XSMM_ROOT_DIR - path to the libxsmm root dir with headers
- XSMM_LIB_DIR - path to the libxsmm.so location
libxsmm's .so also has to be available during runtime execution, e.g., exposed through LD_LIBRARY_PATH. The XSMM Python module can be built and installed using the command: pip install -e ./third_party/cpu/python/
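A hypothetical usage sketch; the module and function names below are assumptions, only the environment variables and the pip command come from the description above:

```python
# Build (from the description above):
#   export XSMM_ROOT_DIR=/path/to/libxsmm                 # headers
#   export XSMM_LIB_DIR=/path/to/libxsmm/lib              # libxsmm.so location
#   pip install -e ./third_party/cpu/python/
#   export LD_LIBRARY_PATH=$XSMM_LIB_DIR:$LD_LIBRARY_PATH  # runtime

import torch
import xsmm_py  # hypothetical module name

src = torch.randn(4096, 4096)
dst = torch.empty_like(src)
xsmm_py.parallel_copy(dst, src)  # hypothetical name for the OpenMP in-place copy wrapper
```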
Adds an experimental rewrite collapsing a reduction loop over GEMM into a BRGEMM ukernel. The pattern matches the hand-written kernel using block pointers and is not compatible with IR generated by Triton pointer raising. Direct lowering to XSMM allows bypassing the Triton load restriction when the K dimension is not a power of two. The pattern is quite brittle but functional for the matmul tutorial example. The rewrite is disabled by default and can be enabled with the environment variable TRITON_CPU_LOOP_BRGEMM_XSMM=1.
Adds an option to apply padding only to matrix B. This allows exploring potential speedups by limiting padding to weights, which is a reasonably common strategy in, e.g., ML inference. Full padding still has to occur when the K dimension is padded, to avoid dimension mismatch and/or meet the power-of-two size requirement.
Devjiu force-pushed the rebase_xsmm_to_main branch from 1e7d9f3 to 7eb5e04 on November 13, 2024 17:24
Reference for taking a look at the changes prepared by the XSMM team.