[FastISel] Fix load folding for registers with fixups
FastISel tries to fold loads into the single using instruction.
However, if the register has fixups, then there may be additional
uses through an alias of the register.

In particular, this fixes the problem reported at
https://reviews.llvm.org/D119432#3507087. At the time of load
folding, the load register is only used in a single call
instruction. However, selection of the bitcast has added a fixup
between the load register and the cross-BB register of the bitcast
result. Once fixups are applied, there are two uses of the load
register, so load folding is not legal.

Differential Revision: https://reviews.llvm.org/D125459
nikic committed May 16, 2022
1 parent e57f578 commit 05c3fe0
Showing 2 changed files with 41 additions and 0 deletions.
5 changes: 5 additions & 0 deletions llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
Expand Up @@ -2230,6 +2230,11 @@ bool FastISel::tryToFoldLoad(const LoadInst *LI, const Instruction *FoldInst) {
  if (!MRI.hasOneUse(LoadReg))
    return false;

  // If the register has fixups, there may be additional uses through a
  // different alias of the register.
  if (FuncInfo.RegsWithFixups.contains(LoadReg))
    return false;

  MachineRegisterInfo::reg_iterator RI = MRI.reg_begin(LoadReg);
  MachineInstr *User = RI->getParent();

36 changes: 36 additions & 0 deletions llvm/test/CodeGen/X86/fast-isel-load-bitcast-fold.ll
@@ -0,0 +1,36 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc -O0 -mtriple=x86_64-- -verify-machineinstrs < %s | FileCheck %s

define void @repro(i8** %a0, i1 %a1) nounwind {
; CHECK-LABEL: repro:
; CHECK: # %bb.0:
; CHECK-NEXT: subq $24, %rsp
; CHECK-NEXT: movb %sil, %al
; CHECK-NEXT: movb %al, {{[-0-9]+}}(%r{{[sb]}}p) # 1-byte Spill
; CHECK-NEXT: movq (%rdi), %rax
; CHECK-NEXT: movq %rax, {{[-0-9]+}}(%r{{[sb]}}p) # 8-byte Spill
; CHECK-NEXT: callq *%rax
; CHECK-NEXT: movb {{[-0-9]+}}(%r{{[sb]}}p), %al # 1-byte Reload
; CHECK-NEXT: testb $1, %al
; CHECK-NEXT: jne .LBB0_1
; CHECK-NEXT: jmp .LBB0_2
; CHECK-NEXT: .LBB0_1: # %bb1
; CHECK-NEXT: movq {{[-0-9]+}}(%r{{[sb]}}p), %rax # 8-byte Reload
; CHECK-NEXT: callq *%rax
; CHECK-NEXT: addq $24, %rsp
; CHECK-NEXT: retq
; CHECK-NEXT: .LBB0_2: # %bb2
; CHECK-NEXT: addq $24, %rsp
; CHECK-NEXT: retq
%tmp0 = load i8*, i8** %a0
%tmp1 = bitcast i8* %tmp0 to void ()*
call void %tmp1()
br i1 %a1, label %bb1, label %bb2

bb1:
call void %tmp1()
ret void

bb2:
ret void
}
