JIT: Make BasicBlock jump target private #93152

Merged 4 commits on Oct 10, 2023. Changes from all commits are shown below.

6 changes: 3 additions & 3 deletions src/coreclr/jit/assertionprop.cpp
@@ -5260,7 +5260,7 @@ class AssertionPropFlowCallback
{
ASSERT_TP pAssertionOut;

if (predBlock->KindIs(BBJ_COND) && (predBlock->bbJumpDest == block))
if (predBlock->KindIs(BBJ_COND) && predBlock->HasJumpTo(block))
{
pAssertionOut = mJumpDestOut[predBlock->bbNum];

@@ -5462,7 +5462,7 @@ ASSERT_TP* Compiler::optComputeAssertionGen()
optPrintAssertionIndices(block->bbAssertionGen);
if (block->KindIs(BBJ_COND))
{
printf(" => " FMT_BB " valueGen = ", block->bbJumpDest->bbNum);
printf(" => " FMT_BB " valueGen = ", block->GetJumpDest()->bbNum);
optPrintAssertionIndices(jumpDestGen[block->bbNum]);
}
printf("\n");
@@ -6022,7 +6022,7 @@ PhaseStatus Compiler::optAssertionPropMain()
optDumpAssertionIndices(" out = ", block->bbAssertionOut, "\n");
if (block->KindIs(BBJ_COND))
{
printf(" " FMT_BB " = ", block->bbJumpDest->bbNum);
printf(" " FMT_BB " = ", block->GetJumpDest()->bbNum);
optDumpAssertionIndices(bbJtrueAssertionOut[block->bbNum], "\n");
}
}
2 changes: 1 addition & 1 deletion src/coreclr/jit/block.cpp
@@ -1449,7 +1449,7 @@ BasicBlock* Compiler::bbNewBasicBlock(BBjumpKinds jumpKind)

/* Record the jump kind in the block */

block->SetBBJumpKind(jumpKind DEBUG_ARG(this));
block->SetJumpKind(jumpKind DEBUG_ARG(this));

if (jumpKind == BBJ_THROW)
{
96 changes: 76 additions & 20 deletions src/coreclr/jit/block.h
@@ -514,21 +514,28 @@ struct BasicBlock : private LIR::Range

BBjumpKinds bbJumpKind; // jump (if any) at the end of this block

/* The following union describes the jump target(s) of this block */
union {
unsigned bbJumpOffs; // PC offset (temporary only)
BasicBlock* bbJumpDest; // basic block
Member:
You could imagine that BBJ_COND could have, here, struct { BasicBlock* bbJumpTrue; BasicBlock* bbJumpFalse; } where the true case would currently map to bbJumpDest and the false case to bbNext. And have accessors for each. Maybe that's where this will evolve?
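
To make that concrete, here is a sketch of the layout and accessors being imagined (hypothetical names; not part of this PR):

    // Hypothetical BBJ_COND representation from the comment above (not merged code).
    struct {
        BasicBlock* bbJumpTrue;  // target when the condition is true (today: bbJumpDest)
        BasicBlock* bbJumpFalse; // target when the condition is false (today: bbNext)
    };

    BasicBlock* GetTrueTarget() const
    {
        assert(KindIs(BBJ_COND));
        return bbJumpTrue;
    }

    BasicBlock* GetFalseTarget() const
    {
        assert(KindIs(BBJ_COND));
        return bbJumpFalse;
    }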

Member Author:
I think that's Andy's plan. He mentions in #93020 adding an explicit fall-through successor pointer to BasicBlock that would apply to BBJ_ALWAYS, BBJ_COND, etc. Would it be worth including that work in this PR, or save it for the next one? That change will probably introduce some TP diffs...

Member:
Let's hold off... where I'd really like to get to is that these references are either:

  • some kind of BasicBlockReference holder struct (so reassignment updates ref counts automagically); or even better, perhaps:
  • FlowEdge(s)

The appeal of BasicBlockReference is that someday perhaps I'd like to be able to enumerate all the things that refer to a block, including "external" BasicBlock references as well (references from the EH table, from SSA defs, special block references from the compiler object (fgFirstBB, etc.), etc.).

We could then eliminate all the fussing about with bbRefCounts and either have them maintained automatically or recomputed when needed.

The appeal of FlowEdges is that they simplify some of the forward likelihood analysis: currently, to find an edge to a successor, you need to find the successor block and then enumerate the predecessor edges.

But not sure I want to do any of this just yet.
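
To illustrate the first option, a minimal sketch of such a holder, assuming a raw bbRefs counter on BasicBlock and ignoring copy/move handling (hypothetical; not part of this PR):

    // Sketch: reassigning the reference keeps the targets' ref counts in sync.
    class BasicBlockReference
    {
        BasicBlock* m_block = nullptr;

    public:
        BasicBlockReference& operator=(BasicBlock* block)
        {
            if (m_block != nullptr)
            {
                m_block->bbRefs--; // release the old target
            }
            if (block != nullptr)
            {
                block->bbRefs++; // retain the new target
            }
            m_block = block;
            return *this;
        }

        BasicBlock* operator->() const
        {
            return m_block;
        }

        operator BasicBlock*() const
        {
            return m_block;
        }
    };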

Contributor (@SingleAccretion, Oct 6, 2023):
While we are discussing successor edges, I'd like to bring up the fact that there is duplication between IR control-flow nodes such as SWITCH/JTRUE and the block kinds. In some systems, information about successors is determined purely from the IR and "terminator" nodes.

It is an appealing system from the orthogonality/elegance perspective since there is no duplication, but it is unlikely to be suitable for us due to the throughput cost (of at least two additional indirections in HIR).

BBswtDesc* bbJumpSwt; // switch descriptor
};

public:
BBjumpKinds GetBBJumpKind() const
BBjumpKinds GetJumpKind() const
{
return bbJumpKind;
}

void SetBBJumpKind(BBjumpKinds kind DEBUG_ARG(Compiler* compiler))
void SetJumpKind(BBjumpKinds jumpKind DEBUG_ARG(Compiler* compiler))
{
#ifdef DEBUG
// BBJ_NONE should only be assigned when optimizing jumps in Compiler::optOptimizeLayout
// TODO: Change assert to check if compiler is in appropriate optimization phase to use BBJ_NONE
// (right now, this assertion does the null check to avoid unused variable warnings)
assert((kind != BBJ_NONE) || (compiler != nullptr));
assert((jumpKind != BBJ_NONE) || (compiler != nullptr));
#endif // DEBUG
bbJumpKind = kind;
bbJumpKind = jumpKind;
}

BasicBlock* Prev() const
@@ -569,12 +576,12 @@
return (bbNext == nullptr);
}

bool PrevIs(BasicBlock* block) const
bool PrevIs(const BasicBlock* block) const
{
return (bbPrev == block);
}

bool NextIs(BasicBlock* block) const
bool NextIs(const BasicBlock* block) const
{
return (bbNext == block);
}
@@ -583,12 +590,61 @@

bool IsFirstColdBlock(Compiler* compiler) const;

/* The following union describes the jump target(s) of this block */
union {
unsigned bbJumpOffs; // PC offset (temporary only)
BasicBlock* bbJumpDest; // basic block
BBswtDesc* bbJumpSwt; // switch descriptor
};
unsigned GetJumpOffs() const
{
return bbJumpOffs;
}

void SetJumpOffs(unsigned jumpOffs)
{
bbJumpOffs = jumpOffs;
}

BasicBlock* GetJumpDest() const
{
return bbJumpDest;
Member Author:
I wanted to take these new methods as an opportunity to assert we are accessing the jump target correctly. For example, we should only read bbJumpDest if bbJumpKind is valid, and bbJumpDest should not be null for such blocks (and if we can assert bbJumpDest is only accessed when bbJumpKind is valid, then perhaps there's no need to set bbJumpDest to null when setting bbJumpKind to something like BBJ_NONE). However, these asserts turned out to be overly aggressive -- we seem to have some edge cases in a couple of phases that break these assumptions. For one, we do set and compare bbJumpDest to null in a few places, and checking if the jump kind is valid instead of null-checking bbJumpDest would probably hurt TP (as we would have to compare bbJumpKind to multiple possible valid values, versus just one null check). Would it be worth trying to strengthen our bbJumpKind/bbJumpDest abstraction if it comes at the cost of TP?

Member:
As a next step, this could have an assert for those kinds where bbJumpDest is legal:

assert(KindIs(BBJ_CALLFINALLY, BBJ_ALWAYS, BBJ_EHCATCHRET, BBJ_LEAVE, BBJ_COND));

Member Author:
I wanted to include that too, but this proved to be overly aggressive in a few spots. For example, in Compiler::fgFindBlockILOffset(), we check curr's bbJumpDest as long as it's not a switch statement. I could check if curr's jump kind is valid first to satisfy the assert in GetJumpDest(), but I'm not sure what our stance is on reducing TP to satisfy this constraint.
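
Putting the two suggestions together, the stricter accessor under discussion would look something like this (a sketch of the follow-up idea, not what this PR merges):

    BasicBlock* GetJumpDest() const
    {
        // Only these kinds use bbJumpDest as a branch target, and for them it should be set.
        assert(KindIs(BBJ_CALLFINALLY, BBJ_ALWAYS, BBJ_EHCATCHRET, BBJ_LEAVE, BBJ_COND));
        assert(bbJumpDest != nullptr);
        return bbJumpDest;
    }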

}

void SetJumpDest(BasicBlock* jumpDest)
{
bbJumpDest = jumpDest;
Member:
Maybe the same assert here, which would assume bbJumpKind is set before changing/setting bbJumpDest.

}

void SetJumpKindAndTarget(BBjumpKinds jumpKind, BasicBlock* jumpDest)
{
assert(jumpDest != nullptr);
bbJumpKind = jumpKind;
bbJumpDest = jumpDest;
assert(KindIs(BBJ_ALWAYS, BBJ_CALLFINALLY, BBJ_COND, BBJ_EHCATCHRET, BBJ_LEAVE));
}

bool HasJumpTo(const BasicBlock* jumpDest) const
{
return (bbJumpDest == jumpDest);
Member:
Same assert here.

I wonder if we should treat switch and non-switch more uniformly. i.e., JumpsTo might imply any flow graph control transfer, including switches. Maybe we'll evolve this over time.

Member Author:
Asserting bbJumpKind is valid proved to be too aggressive in JumpsTo(), as well. Should I try enabling these asserts in a follow-up PR? I think that refactor will be nontrivial, since we null-check bbJumpDest for non-jump blocks in a bunch of places.

Member:
A follow-up PR to add asserts is fine.

Member:
It seems odd that SetJumpKind has a Compiler* arg but SetJumpKindAndTarget doesn't. I would expect the latter to be a logical superset.

If you added a Compiler* arg you could then fix the switch version to allocate the BBswtDesc automatically, perhaps.
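
One hypothetical shape for that variant, assuming the JIT's arena-allocation idiom and default construction of BBswtDesc (a sketch, not part of this PR):

    void SetJumpKindAndTarget(BBjumpKinds jumpKind, Compiler* compiler)
    {
        assert(jumpKind == BBJ_SWITCH);
        bbJumpKind = jumpKind;
        // Allocate the switch descriptor for the caller (allocator call assumed).
        bbJumpSwt = new (compiler, CMK_BasicBlock) BBswtDesc;
    }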

}

bool JumpsToNext() const
{
return (bbJumpDest == bbNext);
}

BBswtDesc* GetJumpSwt() const
{
return bbJumpSwt;
}

void SetJumpSwt(BBswtDesc* jumpSwt)
{
bbJumpSwt = jumpSwt;
Member:
Also assert(KindIs(BBJ_SWITCH)); here?

Member Author:
I thought about this, but we have several instances where we set bbJumpSwt before updating bbJumpKind, and I wouldn't want to enforce an order of the function calls -- though maybe that means we should have a function for converting the block to BBJ_SWITCH that updates bbJumpKind and bbJumpSwt simultaneously?

Member:
> we should have a function for converting the block to BBJ_SWITCH that updates bbJumpKind and bbJumpSwt simultaneously?

Maybe an overloaded:

SetKindAndTarget(BBjumpKinds kind, BasicBlock* target) => set bbJumpKind/bbJumpDest
SetKindAndTarget(BBjumpKinds kind, BBswtDesc* target) => set bbJumpKind/bbJumpSwt

for those cases where both are being set?

Member Author:
Sure thing, I'll add that in.
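
With both overloads available, a call site that converts a block into a switch can then set the kind and descriptor in one step (usage sketch; swtDesc stands in for an already-built descriptor):

    block->SetJumpKindAndTarget(BBJ_SWITCH, swtDesc); // updates bbJumpKind and bbJumpSwt together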

}

void SetJumpKindAndTarget(BBjumpKinds jumpKind, BBswtDesc* jumpSwt)
{
assert(jumpKind == BBJ_SWITCH);
assert(jumpSwt != nullptr);
bbJumpKind = jumpKind;
bbJumpSwt = jumpSwt;
}

BasicBlockFlags bbFlags;

@@ -1617,7 +1673,7 @@ inline BBArrayIterator BBSwitchTargetList::end() const
inline BasicBlock::BBSuccList::BBSuccList(const BasicBlock* block)
{
assert(block != nullptr);
switch (block->GetBBJumpKind())
switch (block->GetJumpKind())
{
case BBJ_THROW:
case BBJ_RETURN:
@@ -1633,7 +1689,7 @@ inline BasicBlock::BBSuccList::BBSuccList(const BasicBlock* block)
case BBJ_ALWAYS:
case BBJ_EHCATCHRET:
case BBJ_LEAVE:
m_succs[0] = block->bbJumpDest;
m_succs[0] = block->GetJumpDest();
m_begin = &m_succs[0];
m_end = &m_succs[1];
break;
@@ -1650,23 +1706,23 @@ inline BasicBlock::BBSuccList::BBSuccList(const BasicBlock* block)

// If both fall-through and branch successors are identical, then only include
// them once in the iteration (this is the same behavior as NumSucc()/GetSucc()).
if (block->NextIs(block->bbJumpDest))
if (block->JumpsToNext())
{
m_end = &m_succs[1];
}
else
{
m_succs[1] = block->bbJumpDest;
m_succs[1] = block->GetJumpDest();
m_end = &m_succs[2];
}
break;

case BBJ_SWITCH:
// We don't use the m_succs in-line data for switches; use the existing jump table in the block.
assert(block->bbJumpSwt != nullptr);
assert(block->bbJumpSwt->bbsDstTab != nullptr);
m_begin = block->bbJumpSwt->bbsDstTab;
m_end = block->bbJumpSwt->bbsDstTab + block->bbJumpSwt->bbsCount;
assert(block->GetJumpSwt() != nullptr);
assert(block->GetJumpSwt()->bbsDstTab != nullptr);
m_begin = block->GetJumpSwt()->bbsDstTab;
m_end = block->GetJumpSwt()->bbsDstTab + block->GetJumpSwt()->bbsCount;
break;

default:
16 changes: 8 additions & 8 deletions src/coreclr/jit/codegenarm.cpp
@@ -125,17 +125,17 @@ BasicBlock* CodeGen::genCallFinally(BasicBlock* block)

assert(!block->IsLast());
assert(block->Next()->KindIs(BBJ_ALWAYS));
assert(block->Next()->bbJumpDest != NULL);
assert(block->Next()->bbJumpDest->bbFlags & BBF_FINALLY_TARGET);
assert(!block->Next()->HasJumpTo(nullptr));
assert(block->Next()->GetJumpDest()->bbFlags & BBF_FINALLY_TARGET);

bbFinallyRet = block->Next()->bbJumpDest;
bbFinallyRet = block->Next()->GetJumpDest();

// Load the address where the finally funclet should return into LR.
// The funclet prolog/epilog will do "push {lr}" / "pop {pc}" to do the return.
genMov32RelocatableDisplacement(bbFinallyRet, REG_LR);

// Jump to the finally BB
inst_JMP(EJ_jmp, block->bbJumpDest);
inst_JMP(EJ_jmp, block->GetJumpDest());

// The BBJ_ALWAYS is used because the BBJ_CALLFINALLY can't point to the
// jump target using bbJumpDest - that is already used to point
@@ -150,7 +150,7 @@ BasicBlock* CodeGen::genCallFinally(BasicBlock* block)
// genEHCatchRet:
void CodeGen::genEHCatchRet(BasicBlock* block)
{
genMov32RelocatableDisplacement(block->bbJumpDest, REG_INTRET);
genMov32RelocatableDisplacement(block->GetJumpDest(), REG_INTRET);
}

//------------------------------------------------------------------------
@@ -633,8 +633,8 @@ void CodeGen::genJumpTable(GenTree* treeNode)
noway_assert(compiler->compCurBB->KindIs(BBJ_SWITCH));
assert(treeNode->OperGet() == GT_JMPTABLE);

unsigned jumpCount = compiler->compCurBB->bbJumpSwt->bbsCount;
BasicBlock** jumpTable = compiler->compCurBB->bbJumpSwt->bbsDstTab;
unsigned jumpCount = compiler->compCurBB->GetJumpSwt()->bbsCount;
BasicBlock** jumpTable = compiler->compCurBB->GetJumpSwt()->bbsDstTab;
unsigned jmpTabBase;

jmpTabBase = GetEmitter()->emitBBTableDataGenBeg(jumpCount, false);
@@ -1299,7 +1299,7 @@ void CodeGen::genCodeForJTrue(GenTreeOp* jtrue)
GenTree* op = jtrue->gtGetOp1();
regNumber reg = genConsumeReg(op);
inst_RV_RV(INS_tst, reg, reg, genActualType(op));
inst_JMP(EJ_ne, compiler->compCurBB->bbJumpDest);
inst_JMP(EJ_ne, compiler->compCurBB->GetJumpDest());
}

//------------------------------------------------------------------------
16 changes: 8 additions & 8 deletions src/coreclr/jit/codegenarm64.cpp
@@ -2158,7 +2158,7 @@ BasicBlock* CodeGen::genCallFinally(BasicBlock* block)
{
GetEmitter()->emitIns_Mov(INS_mov, EA_PTRSIZE, REG_R0, REG_SPBASE, /* canSkip */ false);
}
GetEmitter()->emitIns_J(INS_bl_local, block->bbJumpDest);
GetEmitter()->emitIns_J(INS_bl_local, block->GetJumpDest());

BasicBlock* const nextBlock = block->Next();

@@ -2181,7 +2181,7 @@
// handler. So turn off GC reporting for this single instruction.
GetEmitter()->emitDisableGC();

BasicBlock* const jumpDest = nextBlock->bbJumpDest;
BasicBlock* const jumpDest = nextBlock->GetJumpDest();

// Now go to where the finally funclet needs to return to.
if (nextBlock->NextIs(jumpDest) && !compiler->fgInDifferentRegions(nextBlock, jumpDest))
@@ -2216,7 +2216,7 @@ void CodeGen::genEHCatchRet(BasicBlock* block)
{
// For long address (default): `adrp + add` will be emitted.
// For short address (proven later): `adr` will be emitted.
GetEmitter()->emitIns_R_L(INS_adr, EA_PTRSIZE, block->bbJumpDest, REG_INTRET);
GetEmitter()->emitIns_R_L(INS_adr, EA_PTRSIZE, block->GetJumpDest(), REG_INTRET);
}

// move an immediate value into an integer register
@@ -3752,8 +3752,8 @@ void CodeGen::genJumpTable(GenTree* treeNode)
noway_assert(compiler->compCurBB->KindIs(BBJ_SWITCH));
assert(treeNode->OperGet() == GT_JMPTABLE);

unsigned jumpCount = compiler->compCurBB->bbJumpSwt->bbsCount;
BasicBlock** jumpTable = compiler->compCurBB->bbJumpSwt->bbsDstTab;
unsigned jumpCount = compiler->compCurBB->GetJumpSwt()->bbsCount;
BasicBlock** jumpTable = compiler->compCurBB->GetJumpSwt()->bbsDstTab;
unsigned jmpTabOffs;
unsigned jmpTabBase;

@@ -4654,7 +4654,7 @@ void CodeGen::genCodeForJTrue(GenTreeOp* jtrue)

GenTree* op = jtrue->gtGetOp1();
regNumber reg = genConsumeReg(op);
GetEmitter()->emitIns_J_R(INS_cbnz, emitActualTypeSize(op), compiler->compCurBB->bbJumpDest, reg);
GetEmitter()->emitIns_J_R(INS_cbnz, emitActualTypeSize(op), compiler->compCurBB->GetJumpDest(), reg);
}

//------------------------------------------------------------------------
@@ -4872,15 +4872,15 @@ void CodeGen::genCodeForJumpCompare(GenTreeOpCC* tree)
instruction ins = (cc.GetCode() == GenCondition::EQ) ? INS_tbz : INS_tbnz;
int imm = genLog2((size_t)compareImm);

GetEmitter()->emitIns_J_R_I(ins, attr, compiler->compCurBB->bbJumpDest, reg, imm);
GetEmitter()->emitIns_J_R_I(ins, attr, compiler->compCurBB->GetJumpDest(), reg, imm);
}
else
{
assert(op2->IsIntegralConst(0));

instruction ins = (cc.GetCode() == GenCondition::EQ) ? INS_cbz : INS_cbnz;

GetEmitter()->emitIns_J_R(ins, attr, compiler->compCurBB->bbJumpDest, reg);
GetEmitter()->emitIns_J_R(ins, attr, compiler->compCurBB->GetJumpDest(), reg);
}
}

6 changes: 3 additions & 3 deletions src/coreclr/jit/codegencommon.cpp
@@ -376,13 +376,13 @@ void CodeGen::genMarkLabelsForCodegen()

for (BasicBlock* const block : compiler->Blocks())
{
switch (block->GetBBJumpKind())
switch (block->GetJumpKind())
{
case BBJ_ALWAYS: // This will also handle the BBJ_ALWAYS of a BBJ_CALLFINALLY/BBJ_ALWAYS pair.
case BBJ_COND:
case BBJ_EHCATCHRET:
JITDUMP(" " FMT_BB " : branch target\n", block->bbJumpDest->bbNum);
block->bbJumpDest->bbFlags |= BBF_HAS_LABEL;
JITDUMP(" " FMT_BB " : branch target\n", block->GetJumpDest()->bbNum);
block->GetJumpDest()->bbFlags |= BBF_HAS_LABEL;
break;

case BBJ_SWITCH:
16 changes: 8 additions & 8 deletions src/coreclr/jit/codegenlinear.cpp
@@ -622,7 +622,7 @@ void CodeGen::genCodeForBBlist()
{
// We only need the NOP if we're not going to generate any more code as part of the block end.

switch (block->GetBBJumpKind())
switch (block->GetJumpKind())
{
case BBJ_ALWAYS:
case BBJ_THROW:
@@ -665,7 +665,7 @@ void CodeGen::genCodeForBBlist()

/* Do we need to generate a jump or return? */

switch (block->GetBBJumpKind())
switch (block->GetJumpKind())
{
case BBJ_RETURN:
genExitCode(block);
@@ -749,7 +749,7 @@ void CodeGen::genCodeForBBlist()
// with a jump, do not remove jumps from such blocks.
// Do not remove a jump between hot and cold regions.
bool isRemovableJmpCandidate =
!block->hasAlign() && !compiler->fgInDifferentRegions(block, block->bbJumpDest);
!block->hasAlign() && !compiler->fgInDifferentRegions(block, block->GetJumpDest());

#ifdef TARGET_AMD64
// AMD64 requires an instruction after a call instruction for unwinding
Expand All @@ -758,10 +758,10 @@ void CodeGen::genCodeForBBlist()
isRemovableJmpCandidate = isRemovableJmpCandidate && !GetEmitter()->emitIsLastInsCall();
#endif // TARGET_AMD64

inst_JMP(EJ_jmp, block->bbJumpDest, isRemovableJmpCandidate);
inst_JMP(EJ_jmp, block->GetJumpDest(), isRemovableJmpCandidate);
}
#else
inst_JMP(EJ_jmp, block->bbJumpDest);
inst_JMP(EJ_jmp, block->GetJumpDest());
#endif // TARGET_XARCH

FALLTHROUGH;
@@ -782,9 +782,9 @@
// block, even if one is not otherwise needed, to be able to calculate the size of this
// loop (loop size is calculated by walking the instruction groups; see emitter::getLoopSize()).

if (block->bbJumpDest->isLoopAlign())
if (block->GetJumpDest()->isLoopAlign())
{
GetEmitter()->emitSetLoopBackEdge(block->bbJumpDest);
GetEmitter()->emitSetLoopBackEdge(block->GetJumpDest());

if (!block->IsLast())
{
@@ -2621,7 +2621,7 @@ void CodeGen::genCodeForJcc(GenTreeCC* jcc)
assert(compiler->compCurBB->KindIs(BBJ_COND));
assert(jcc->OperIs(GT_JCC));

inst_JCC(jcc->gtCondition, compiler->compCurBB->bbJumpDest);
inst_JCC(jcc->gtCondition, compiler->compCurBB->GetJumpDest());
}

//------------------------------------------------------------------------