[InstSimplify] (a ^ b) ? (~a) ^ b : a ^ (~b) #63104
I'd like to fix this, but I'm not able to find the exact transform in …
This fold doesn't introduce new instructions, so what you really should take a look at is InstructionSimplify, maybe.
Alive proof: https://alive2.llvm.org/ce/z/KEt9R5
This probably needs a generalization of simplifyWithOpReplaced to look at more than one instruction.
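As a local sanity check of what the proof above establishes (the Alive2 link is the authoritative version): both select arms compute ~(a ^ b), so the whole expression is equivalent to ~(a ^ b) regardless of the condition. A minimal exhaustive check over 8-bit values:

```cpp
#include <cassert>
#include <cstdint>

int main() {
    // Exhaustively verify the fold on 8-bit values: whichever arm the
    // condition (a ^ b) picks, the result equals ~(a ^ b).
    for (unsigned a = 0; a < 256; ++a)
        for (unsigned b = 0; b < 256; ++b) {
            uint8_t x = uint8_t(a), y = uint8_t(b);
            uint8_t sel = uint8_t(x ^ y) ? uint8_t(~x ^ y) : uint8_t(x ^ ~y);
            assert(sel == uint8_t(~(x ^ y)));
        }
}
```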
@nikic: Not entirely sure where this plays in. Stepping through the code, the transformation comes from here: …
@dc03 The optimization regression comes from making use of dominating conditions in icmp simplification. However, we don't want to avoid doing that; instead, we should make sure that the new IR can still be simplified. simplifyWithOpReplaced() is the transform that handles this kind of pattern.
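A rough sketch of what that transform does, on a toy expression tree with invented names (an illustration only, not LLVM's actual code): for a select guarded by `x == 5`, the taken arm may be re-simplified with `x` replaced by 5, without creating any new instructions. The `depth` parameter models how many instruction levels the replacement may descend; `depth == 1` is the old one-level behavior the comments above say is insufficient.

```cpp
#include <cassert>

// Toy stand-in for IR values: variables, constants, ~ and ^.
struct Expr {
    enum Kind { Var, Const, Not, Xor } kind;
    int value = 0;                       // variable id or constant value
    const Expr *l = nullptr;
    const Expr *r = nullptr;
};

// Try to fold `e` to a constant assuming variable `var` equals `rep`.
// (The real simplifyWithOpReplaced() also produces non-constant results;
// constant folding is enough to show the recursion issue.)
bool foldWith(const Expr *e, int var, int rep, int depth, int &out) {
    switch (e->kind) {
    case Expr::Const:
        out = e->value;
        return true;
    case Expr::Var:
        if (e->value == var) { out = rep; return true; }
        return false;                    // unknown variable: cannot fold
    case Expr::Not:
    case Expr::Xor: {
        if (depth == 0)
            return false;                // not allowed to look any deeper
        int a = 0, b = 0;
        if (!foldWith(e->l, var, rep, depth - 1, a))
            return false;
        if (e->kind == Expr::Not) { out = ~a; return true; }
        if (!foldWith(e->r, var, rep, depth - 1, b))
            return false;
        out = a ^ b;
        return true;
    }
    }
    return false;
}

int main() {
    // The arm "(~x) ^ 5" from a select guarded by "x == 5".
    Expr x{Expr::Var, 0};
    Expr nx{Expr::Not, 0, &x};
    Expr c{Expr::Const, 5};
    Expr arm{Expr::Xor, 0, &nx, &c};

    int out = 0;
    // One-level replacement fails: x is hidden one instruction down, inside ~x.
    assert(!foldWith(&arm, /*var=*/0, /*rep=*/5, /*depth=*/1, out));
    // Recursive replacement succeeds: (~5) ^ 5 == -1.
    assert(foldWith(&arm, /*var=*/0, /*rep=*/5, /*depth=*/8, out) && out == -1);
}
```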
The regression is between LLVM 15 and LLVM 16: https://gcc.godbolt.org/z/Goq9ETh67
Maybe this is the simplify case: https://alive2.llvm.org/ce/z/TGgJTq . It shows that simplifyWithOpReplaced needs to look at more than one instruction.
A similar assumption as for the x ^ x case also existed for the absorber case, which led to a stage2 miscompile. That assumption is now fixed.
-----
Support replacement of operands not only in the immediate instruction, but also in the instructions it uses. For the most part, this extension is straightforward, but there are two bits worth highlighting: First, we can no longer assume that if Op is a vector, the instruction also returns a vector. If Op is a vector and the instruction returns a scalar, we should consider it as a cross-lane operation. Second, for the x ^ x special case and the absorber special case, we can no longer assume that one of the operands is RepOp, as we might have a replacement higher up the instruction chain. There is one optimization regression, but it is in a fuzzer-generated test case. Fixes #63104.
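To make the second caveat concrete, here is a hedged example (constructed for illustration, not taken from the patch): simplify (a ^ c) ^ (b ^ c) under the assumption a == b. After recursive replacement, both xor operands become the same value b ^ c, yet neither of them is RepOp (b) itself, so the x ^ x check can no longer key off RepOp:

```cpp
#include <cassert>
#include <cstdint>

int main() {
    // Under the assumed equality a == b, "(a ^ c) ^ (b ^ c)" becomes
    // "(b ^ c) ^ (b ^ c)" after replacing a one level down on the left.
    // Both operands are now the same value, but that value is "b ^ c",
    // not the replacement value b itself: the situation the commit says
    // the x ^ x special case must now handle.
    for (uint32_t b = 0; b < 16; ++b)
        for (uint32_t c = 0; c < 16; ++c) {
            uint32_t a = b;                      // the assumed equality
            assert(((a ^ c) ^ (b ^ c)) == 0);    // folds as x ^ x -> 0
        }
}
```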
Test code:
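A minimal reproducer matching the issue title (a reconstruction; see the Compiler Explorer link below for the original):

```cpp
// Both arms compute ~(a ^ b), so the whole expression should fold to
// ~(a ^ b); Clang 15 performs this fold, Clang 16 regressed.
unsigned f(unsigned a, unsigned b) {
    return (a ^ b) ? (~a) ^ b : a ^ (~b);
}
```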
Clang 15.0.0 (and GCC trunk)
Clang 16.0.0 (and Clang trunk)
https://godbolt.org/z/qdhKnj3af