ArchHelpers/Arm64: Fixes LDAPUR and STLUR backpatching
The immediate offset masking was at the completely wrong bit position when I
wrote these handlers. No idea how I managed to mess those up so badly.

Should fix at least some of the issues with #4216
Sonicadvance1 committed Dec 20, 2024
1 parent e44d1f1 commit 2019f81
Showing 1 changed file with 2 additions and 2 deletions.
FEXCore/Source/Utils/ArchHelpers/Arm64.cpp (2 additions, 2 deletions)
@@ -2118,7 +2118,7 @@ HandleUnalignedAccess(FEXCore::Core::InternalThreadState* Thread, UnalignedHandl
   LDUR |= Size << 30;
   LDUR |= AddrReg << 5;
   LDUR |= DataReg;
-  LDUR |= Instr & (0b1'1111'1111 << 9);
+  LDUR |= Instr & (0b1'1111'1111 << 12);
   if (HandleType != UnalignedHandlerType::NonAtomic) {
     // Ordering matters with cross-thread visibility!
     std::atomic_ref<uint32_t>(PC[1]).store(DMB_LD, std::memory_order_release); // Back-patch the half-barrier.
@@ -2132,7 +2132,7 @@ HandleUnalignedAccess(FEXCore::Core::InternalThreadState* Thread, UnalignedHandl
   STUR |= Size << 30;
   STUR |= AddrReg << 5;
   STUR |= DataReg;
-  STUR |= Instr & (0b1'1111'1111 << 9);
+  STUR |= Instr & (0b1'1111'1111 << 12);
   if (HandleType != UnalignedHandlerType::NonAtomic) {
     std::atomic_ref<uint32_t>(PC[-1]).store(DMB, std::memory_order_release); // Back-patch the half-barrier.
   }
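
For context on the fix (not part of the commit): in the A64 unscaled load/store encodings these handlers back-patch between (LDUR/STUR and the LDAPUR/STLUR variants), the 9-bit signed immediate sits at bits [20:12], with Rn at [9:5] and Rt at [4:0], so the mask that copies the immediate out of the faulting instruction has to be shifted by 12. The snippet below is a minimal standalone sketch under that assumed field layout, not FEX code; it builds a synthetic instruction word and compares which bits the old and new masks actually preserve.

// Minimal sketch (not FEX code), assuming the A64 unscaled load/store layout:
//   [31:30] size   [20:12] imm9   [9:5] Rn   [4:0] Rt
#include <cstdint>
#include <cstdio>

int main() {
  // Synthetic instruction word with only the relevant fields populated:
  // size = 0b11, imm9 = 0x1F8 (i.e. -8), Rn = 2, Rt = 1. Opcode bits are
  // left at zero because they do not affect the mask comparison.
  uint32_t Instr = (0b11u << 30) | (0x1F8u << 12) | (2u << 5) | 1u;

  // The old mask covers bits [17:9]: the low six bits of imm9, the fixed
  // bits at [11:10], and the top bit of Rn, while dropping imm9's top three bits.
  uint32_t WrongMask = Instr & (0b1'1111'1111u << 9);
  // The fixed mask covers bits [20:12], exactly the imm9 field.
  uint32_t RightMask = Instr & (0b1'1111'1111u << 12);

  std::printf("imm9 field       = 0x%08X\n", 0x1F8u << 12);  // 0x001F8000
  std::printf("mask << 9  keeps = 0x%08X\n", WrongMask);      // 0x00038000
  std::printf("mask << 12 keeps = 0x%08X\n", RightMask);      // 0x001F8000
  return 0;
}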