
[MemCpyOpt] Forward memcpy based on the actual copy memory location. #87190

Merged
merged 12 commits into llvm:main from the memcpy-offset branch on Jul 12, 2024

Conversation

DianQK
Member

@DianQK DianQK commented Mar 31, 2024

Fixes #85560.

We can forward memcpy as long as the actual memory location being copied has not been altered.

alive2: https://alive2.llvm.org/ce/z/q9JaHV
perf: https://llvm-compile-time-tracker.com/compare.php?from=ea92b1f9d0fc31f1fd97ad04eb0412003a37cb0d&to=16454b7658974dfee96232b76881d75c8f7c1bbe&stat=instructions%3Au (It seems there's no relevant matching pattern in the C++ test cases.)
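For illustration, a minimal sketch of the pattern this enables, modeled on the tests added in this PR (the buffer size, the offset of 1, and the %fwd name are illustrative, not taken from the motivating Rust IR):

  %dep_dest = alloca [7 x i8], align 1
  %dest = alloca [7 x i8], align 1
  ; staging copy: dep_dest <- dep_src
  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dep_dest, ptr align 1 %dep_src, i64 7, i1 false)
  ; second copy reads from the middle of the staging buffer
  %src = getelementptr inbounds i8, ptr %dep_dest, i64 1
  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dest, ptr align 1 %src, i64 6, i1 false)

With this patch, the second copy is forwarded to read from the original source at the same offset (creating a GEP when needed):

  %fwd = getelementptr inbounds i8, ptr %dep_src, i64 1
  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dest, ptr align 1 %fwd, i64 6, i1 false)

which can leave the staging alloca dead so that later passes remove the first memcpy entirely.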

@llvmbot
Member

llvmbot commented Mar 31, 2024

@llvm/pr-subscribers-llvm-transforms

Author: Quentin Dian (DianQK)

Changes

Fixes #85560.

We can forward the memcpy as long as the actual memory location being copied has not been altered.

alive2: https://alive2.llvm.org/ce/z/q9JaHV


Full diff: https://github.com/llvm/llvm-project/pull/87190.diff

2 Files Affected:

  • (modified) llvm/lib/Transforms/Scalar/MemCpyOptimizer.cpp (+60-24)
  • (added) llvm/test/Transforms/MemCpyOpt/memcpy-memcpy-offset.ll (+179)
diff --git a/llvm/lib/Transforms/Scalar/MemCpyOptimizer.cpp b/llvm/lib/Transforms/Scalar/MemCpyOptimizer.cpp
index 1036b8ae963a24..86c3f8b9bb335a 100644
--- a/llvm/lib/Transforms/Scalar/MemCpyOptimizer.cpp
+++ b/llvm/lib/Transforms/Scalar/MemCpyOptimizer.cpp
@@ -1121,28 +1121,64 @@ bool MemCpyOptPass::performCallSlotOptzn(Instruction *cpyLoad,
 bool MemCpyOptPass::processMemCpyMemCpyDependence(MemCpyInst *M,
                                                   MemCpyInst *MDep,
                                                   BatchAAResults &BAA) {
-  // We can only transforms memcpy's where the dest of one is the source of the
-  // other.
-  if (M->getSource() != MDep->getDest() || MDep->isVolatile())
-    return false;
-
   // If dep instruction is reading from our current input, then it is a noop
-  // transfer and substituting the input won't change this instruction.  Just
-  // ignore the input and let someone else zap MDep.  This handles cases like:
+  // transfer and substituting the input won't change this instruction. Just
+  // ignore the input and let someone else zap MDep. This handles cases like:
   //    memcpy(a <- a)
   //    memcpy(b <- a)
   if (M->getSource() == MDep->getSource())
     return false;
 
-  // Second, the length of the memcpy's must be the same, or the preceding one
+  // We can only optimize non-volatile memcpy's.
+  if (MDep->isVolatile())
+    return false;
+
+  int64_t MForwardOffset = 0;
+  const DataLayout &DL = M->getModule()->getDataLayout();
+  // We can only transforms memcpy's where the dest of one is the source of the
+  // other, or they have an offset in a range.
+  if (M->getSource() != MDep->getDest()) {
+    std::optional<int64_t> Offset =
+        M->getSource()->getPointerOffsetFrom(MDep->getDest(), DL);
+    if (!Offset || *Offset < 0)
+      return false;
+    MForwardOffset = *Offset;
+  }
+
+  // The length of the memcpy's must be the same, or the preceding one
   // must be larger than the following one.
-  if (MDep->getLength() != M->getLength()) {
+  if (MForwardOffset != 0 || (MDep->getLength() != M->getLength())) {
     auto *MDepLen = dyn_cast<ConstantInt>(MDep->getLength());
     auto *MLen = dyn_cast<ConstantInt>(M->getLength());
-    if (!MDepLen || !MLen || MDepLen->getZExtValue() < MLen->getZExtValue())
+    if (!MDepLen || !MLen)
+      return false;
+    if (MDepLen->getZExtValue() < MLen->getZExtValue() + MForwardOffset)
       return false;
   }
 
+  IRBuilder<> Builder(M);
+  auto *CopySource = MDep->getRawSource();
+  MaybeAlign CopySourceAlign = MDep->getSourceAlign();
+  // We just need to calculate the actual size of the copy.
+  auto MCopyLoc = MemoryLocation::getForSource(MDep).getWithNewSize(
+      MemoryLocation::getForSource(M).Size);
+
+  // We need to update `MCopyLoc` if an offset exists.
+  if (MForwardOffset > 0) {
+    // The copy destination of `M` maybe can serve as the source of copying.
+    std::optional<int64_t> MDestOffset =
+        M->getRawDest()->getPointerOffsetFrom(MDep->getRawSource(), DL);
+    if (MDestOffset && *MDestOffset == MForwardOffset)
+      CopySource = M->getRawDest();
+    else
+      CopySource = Builder.CreateInBoundsPtrAdd(
+          CopySource, ConstantInt::get(Type::getInt64Ty(Builder.getContext()),
+                                       MForwardOffset));
+    MCopyLoc = MCopyLoc.getWithNewPtr(CopySource);
+    if (CopySourceAlign)
+      CopySourceAlign = commonAlignment(*CopySourceAlign, MForwardOffset);
+  }
+
   // Verify that the copied-from memory doesn't change in between the two
   // transfers.  For example, in:
   //    memcpy(a <- b)
@@ -1152,11 +1188,12 @@ bool MemCpyOptPass::processMemCpyMemCpyDependence(MemCpyInst *M,
   //
   // TODO: If the code between M and MDep is transparent to the destination "c",
   // then we could still perform the xform by moving M up to the first memcpy.
-  // TODO: It would be sufficient to check the MDep source up to the memcpy
-  // size of M, rather than MDep.
-  if (writtenBetween(MSSA, BAA, MemoryLocation::getForSource(MDep),
-                     MSSA->getMemoryAccess(MDep), MSSA->getMemoryAccess(M)))
+  if (writtenBetween(MSSA, BAA, MCopyLoc, MSSA->getMemoryAccess(MDep),
+                     MSSA->getMemoryAccess(M))) {
+    if (MForwardOffset > 0 && CopySource->use_empty())
+      cast<Instruction>(CopySource)->eraseFromParent();
     return false;
+  }
 
   // If the dest of the second might alias the source of the first, then the
   // source and dest might overlap. In addition, if the source of the first
@@ -1179,23 +1216,22 @@ bool MemCpyOptPass::processMemCpyMemCpyDependence(MemCpyInst *M,
 
   // TODO: Is this worth it if we're creating a less aligned memcpy? For
   // example we could be moving from movaps -> movq on x86.
-  IRBuilder<> Builder(M);
   Instruction *NewM;
   if (UseMemMove)
-    NewM = Builder.CreateMemMove(M->getRawDest(), M->getDestAlign(),
-                                 MDep->getRawSource(), MDep->getSourceAlign(),
-                                 M->getLength(), M->isVolatile());
+    NewM =
+        Builder.CreateMemMove(M->getRawDest(), M->getDestAlign(), CopySource,
+                              CopySourceAlign, M->getLength(), M->isVolatile());
   else if (isa<MemCpyInlineInst>(M)) {
     // llvm.memcpy may be promoted to llvm.memcpy.inline, but the converse is
     // never allowed since that would allow the latter to be lowered as a call
     // to an external function.
-    NewM = Builder.CreateMemCpyInline(
-        M->getRawDest(), M->getDestAlign(), MDep->getRawSource(),
-        MDep->getSourceAlign(), M->getLength(), M->isVolatile());
+    NewM = Builder.CreateMemCpyInline(M->getRawDest(), M->getDestAlign(),
+                                      CopySource, CopySourceAlign,
+                                      M->getLength(), M->isVolatile());
   } else
-    NewM = Builder.CreateMemCpy(M->getRawDest(), M->getDestAlign(),
-                                MDep->getRawSource(), MDep->getSourceAlign(),
-                                M->getLength(), M->isVolatile());
+    NewM =
+        Builder.CreateMemCpy(M->getRawDest(), M->getDestAlign(), CopySource,
+                             CopySourceAlign, M->getLength(), M->isVolatile());
   NewM->copyMetadata(*M, LLVMContext::MD_DIAssignID);
 
   assert(isa<MemoryDef>(MSSAU->getMemorySSA()->getMemoryAccess(M)));
diff --git a/llvm/test/Transforms/MemCpyOpt/memcpy-memcpy-offset.ll b/llvm/test/Transforms/MemCpyOpt/memcpy-memcpy-offset.ll
new file mode 100644
index 00000000000000..abf051d55fc8b2
--- /dev/null
+++ b/llvm/test/Transforms/MemCpyOpt/memcpy-memcpy-offset.ll
@@ -0,0 +1,179 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 4
+; RUN: opt < %s -passes=memcpyopt -S -verify-memoryssa | FileCheck %s
+
+%buf = type [7 x i8]
+
+; We can forward `memcpy` because the copy location are the same,
+define void @forward_offset(ptr %dep_src) {
+; CHECK-LABEL: define void @forward_offset(
+; CHECK-SAME: ptr [[DEP_SRC:%.*]]) {
+; CHECK-NEXT:    [[DEP_DEST:%.*]] = alloca [7 x i8], align 1
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[DEP_DEST]], ptr align 1 [[DEP_SRC]], i64 7, i1 false)
+; CHECK-NEXT:    [[SRC:%.*]] = getelementptr inbounds i8, ptr [[DEP_DEST]], i64 1
+; CHECK-NEXT:    [[DEP:%.*]] = getelementptr inbounds i8, ptr [[DEP_SRC]], i64 1
+; CHECK-NEXT:    call void @llvm.memmove.p0.p0.i64(ptr align 1 [[DEP]], ptr align 1 [[DEP]], i64 6, i1 false)
+; CHECK-NEXT:    ret void
+;
+  %dep_dest = alloca %buf, align 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dep_dest, ptr align 1 %dep_src, i64 7, i1 false)
+  %src = getelementptr inbounds i8, ptr %dep_dest, i64 1
+  %dest = getelementptr inbounds i8, ptr %dep_src, i64 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dest, ptr align 1 %src, i64 6, i1 false)
+  ret void
+}
+
+; We need to update the align value when forwarding.
+define void @forward_offset_align(ptr %dep_src) {
+; CHECK-LABEL: define void @forward_offset_align(
+; CHECK-SAME: ptr [[DEP_SRC:%.*]]) {
+; CHECK-NEXT:    [[DEP_DEST:%.*]] = alloca [7 x i8], align 1
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[DEP_DEST]], ptr align 2 [[DEP_SRC]], i64 7, i1 false)
+; CHECK-NEXT:    [[SRC:%.*]] = getelementptr inbounds i8, ptr [[DEP_DEST]], i64 1
+; CHECK-NEXT:    [[DEP:%.*]] = getelementptr inbounds i8, ptr [[DEP_SRC]], i64 1
+; CHECK-NEXT:    call void @llvm.memmove.p0.p0.i64(ptr align 1 [[DEP]], ptr align 1 [[DEP]], i64 6, i1 false)
+; CHECK-NEXT:    ret void
+;
+  %dep_dest = alloca %buf, align 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dep_dest, ptr align 2 %dep_src, i64 7, i1 false)
+  %src = getelementptr inbounds i8, ptr %dep_dest, i64 1
+  %dest = getelementptr inbounds i8, ptr %dep_src, i64 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dest, ptr align 1 %src, i64 6, i1 false)
+  ret void
+}
+
+; We need to create a GEP instruction when forwarding.
+define void @forward_offset_with_gep(ptr %dep_src) {
+; CHECK-LABEL: define void @forward_offset_with_gep(
+; CHECK-SAME: ptr [[DEP_SRC:%.*]]) {
+; CHECK-NEXT:    [[DEP_DEST:%.*]] = alloca [7 x i8], align 1
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[DEP_DEST]], ptr align 1 [[DEP_SRC]], i64 7, i1 false)
+; CHECK-NEXT:    [[SRC:%.*]] = getelementptr inbounds i8, ptr [[DEP_DEST]], i64 1
+; CHECK-NEXT:    [[DEP1:%.*]] = getelementptr inbounds i8, ptr [[DEP_SRC]], i64 2
+; CHECK-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i8, ptr [[DEP_SRC]], i64 1
+; CHECK-NEXT:    call void @llvm.memmove.p0.p0.i64(ptr align 1 [[DEP1]], ptr align 1 [[TMP1]], i64 6, i1 false)
+; CHECK-NEXT:    ret void
+;
+  %dep_dest = alloca %buf, align 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dep_dest, ptr align 1 %dep_src, i64 7, i1 false)
+  %src = getelementptr inbounds i8, ptr %dep_dest, i64 1
+  %dest = getelementptr inbounds i8, ptr %dep_src, i64 2
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dest, ptr align 1 %src, i64 6, i1 false)
+  ret void
+}
+
+; Make sure we pass the right parameters when calling `memcpy`.
+define void @forward_offset_memcpy(ptr %dep_src) {
+; CHECK-LABEL: define void @forward_offset_memcpy(
+; CHECK-SAME: ptr [[DEP_SRC:%.*]]) {
+; CHECK-NEXT:    [[DEP_DEST:%.*]] = alloca [7 x i8], align 1
+; CHECK-NEXT:    [[DEST:%.*]] = alloca [7 x i8], align 1
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[DEP_DEST]], ptr align 1 [[DEP_SRC]], i64 7, i1 false)
+; CHECK-NEXT:    [[SRC:%.*]] = getelementptr inbounds i8, ptr [[DEP_DEST]], i64 1
+; CHECK-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i8, ptr [[DEP_SRC]], i64 1
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[DEST]], ptr align 1 [[TMP1]], i64 6, i1 false)
+; CHECK-NEXT:    call void @use(ptr [[DEST]])
+; CHECK-NEXT:    ret void
+;
+  %dep_dest = alloca %buf, align 1
+  %dest = alloca %buf, align 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dep_dest, ptr align 1 %dep_src, i64 7, i1 false)
+  %src = getelementptr inbounds i8, ptr %dep_dest, i64 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dest, ptr align 1 %src, i64 6, i1 false)
+  call void @use(ptr %dest)
+  ret void
+}
+
+; Make sure we pass the right parameters when calling `memcpy.inline`.
+define void @forward_offset_memcpy_inline(ptr %dep_src) {
+; CHECK-LABEL: define void @forward_offset_memcpy_inline(
+; CHECK-SAME: ptr [[DEP_SRC:%.*]]) {
+; CHECK-NEXT:    [[DEP_DEST:%.*]] = alloca [7 x i8], align 1
+; CHECK-NEXT:    [[DEST:%.*]] = alloca [7 x i8], align 1
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[DEP_DEST]], ptr align 1 [[DEP_SRC]], i64 7, i1 false)
+; CHECK-NEXT:    [[SRC:%.*]] = getelementptr inbounds i8, ptr [[DEP_DEST]], i64 1
+; CHECK-NEXT:    [[TMP1:%.*]] = getelementptr inbounds i8, ptr [[DEP_SRC]], i64 1
+; CHECK-NEXT:    call void @llvm.memcpy.inline.p0.p0.i64(ptr align 1 [[DEST]], ptr align 1 [[TMP1]], i64 6, i1 false)
+; CHECK-NEXT:    call void @use(ptr [[DEST]])
+; CHECK-NEXT:    ret void
+;
+  %dep_dest = alloca %buf, align 1
+  %dest = alloca %buf, align 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dep_dest, ptr align 1 %dep_src, i64 7, i1 false)
+  %src = getelementptr inbounds i8, ptr %dep_dest, i64 1
+  call void @llvm.memcpy.inline.p0.p0.i64(ptr align 1 %dest, ptr align 1 %src, i64 6, i1 false)
+  call void @use(ptr %dest)
+  ret void
+}
+
+; We cannot forward `memcpy` because it exceeds the size of `memcpy` it depends on.
+define void @do_not_forward_oversize_offset(ptr %dep_src) {
+; CHECK-LABEL: define void @do_not_forward_oversize_offset(
+; CHECK-SAME: ptr [[DEP_SRC:%.*]]) {
+; CHECK-NEXT:    [[DEP_DEST:%.*]] = alloca [7 x i8], align 1
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[DEP_DEST]], ptr align 1 [[DEP_SRC]], i64 6, i1 false)
+; CHECK-NEXT:    [[SRC:%.*]] = getelementptr inbounds i8, ptr [[DEP_DEST]], i64 1
+; CHECK-NEXT:    [[DEP:%.*]] = getelementptr inbounds i8, ptr [[DEP_SRC]], i64 1
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[DEP]], ptr align 1 [[SRC]], i64 6, i1 false)
+; CHECK-NEXT:    ret void
+;
+  %dep_dest = alloca %buf, align 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dep_dest, ptr align 1 %dep_src, i64 6, i1 false)
+  %src = getelementptr inbounds i8, ptr %dep_dest, i64 1
+  %dest = getelementptr inbounds i8, ptr %dep_src, i64 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dest, ptr align 1 %src, i64 6, i1 false)
+  ret void
+}
+
+; We can forward `memcpy` because the write operation does not corrupt the location to be copied.
+define void @forward_offset_and_store(ptr %dep_src) {
+; CHECK-LABEL: define void @forward_offset_and_store(
+; CHECK-SAME: ptr [[DEP_SRC:%.*]]) {
+; CHECK-NEXT:    [[DEP_DEST:%.*]] = alloca [7 x i8], align 1
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[DEP_DEST]], ptr align 1 [[DEP_SRC]], i64 7, i1 false)
+; CHECK-NEXT:    store i8 1, ptr [[DEP_SRC]], align 1
+; CHECK-NEXT:    [[DEP_SRC_END:%.*]] = getelementptr inbounds i8, ptr [[DEP_SRC]], i64 6
+; CHECK-NEXT:    store i8 1, ptr [[DEP_SRC_END]], align 1
+; CHECK-NEXT:    [[SRC:%.*]] = getelementptr inbounds i8, ptr [[DEP_DEST]], i64 1
+; CHECK-NEXT:    [[DEP:%.*]] = getelementptr inbounds i8, ptr [[DEP_SRC]], i64 1
+; CHECK-NEXT:    call void @llvm.memmove.p0.p0.i64(ptr align 1 [[DEP]], ptr align 1 [[DEP]], i64 5, i1 false)
+; CHECK-NEXT:    ret void
+;
+  %dep_dest = alloca %buf, align 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dep_dest, ptr align 1 %dep_src, i64 7, i1 false)
+  store i8 1, ptr %dep_src, align 1
+  %dep_src_end = getelementptr inbounds i8, ptr %dep_src, i64 6
+  store i8 1, ptr %dep_src_end, align 1
+  %src = getelementptr inbounds i8, ptr %dep_dest, i64 1
+  %dest = getelementptr inbounds i8, ptr %dep_src, i64 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dest, ptr align 1 %src, i64 5, i1 false)
+  ret void
+}
+
+; We cannot forward `memcpy` because the write operation alters the location to be copied.
+; Also, make sure we have removed the GEP instruction that was created temporarily.
+define void @do_not_forward_offset_and_store(ptr %dep_src) {
+; CHECK-LABEL: define void @do_not_forward_offset_and_store(
+; CHECK-SAME: ptr [[DEP_SRC:%.*]]) {
+; CHECK-NEXT:    [[DEP_DEST:%.*]] = alloca [7 x i8], align 1
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[DEP_DEST]], ptr align 1 [[DEP_SRC]], i64 7, i1 false)
+; CHECK-NEXT:    [[DEP:%.*]] = getelementptr inbounds i8, ptr [[DEP_SRC]], i64 1
+; CHECK-NEXT:    store i8 1, ptr [[DEP]], align 1
+; CHECK-NEXT:    [[SRC:%.*]] = getelementptr inbounds i8, ptr [[DEP_DEST]], i64 1
+; CHECK-NEXT:    [[DEST:%.*]] = getelementptr inbounds i8, ptr [[DEP_SRC]], i64 2
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[DEST]], ptr align 1 [[SRC]], i64 5, i1 false)
+; CHECK-NEXT:    ret void
+;
+  %dep_dest = alloca %buf, align 1
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dep_dest, ptr align 1 %dep_src, i64 7, i1 false)
+  %dep_src_offset = getelementptr inbounds i8, ptr %dep_src, i64 1
+  store i8 1, ptr %dep_src_offset, align 1
+  %src = getelementptr inbounds i8, ptr %dep_dest, i64 1
+  %dest = getelementptr inbounds i8, ptr %dep_src, i64 2
+  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dest, ptr align 1 %src, i64 5, i1 false)
+  ret void
+}
+
+declare void @use(ptr)
+
+declare void @llvm.memcpy.p0.p0.i64(ptr nocapture, ptr nocapture, i64, i1)
+declare void @llvm.memcpy.inline.p0.p0.i64(ptr nocapture, ptr nocapture, i64, i1)

@DianQK
Member Author

DianQK commented Mar 31, 2024

@nikic https://llvm-compile-time-tracker.com/ is currently unavailable, possibly due to some meaningless force pushes I made, for which I apologize.

@nikic
Contributor

nikic commented Mar 31, 2024

@nikic https://llvm-compile-time-tracker.com/ is currently unavailable, possibly due to some meaningless force pushes I made, for which I apologize.

I've restarted the display server, it should work again now. It's not related to anything you did :)

@DianQK
Member Author

DianQK commented Mar 31, 2024

@dtcxzyw Could you test this PR on llvm-opt-benchmark? This might be a common pattern in Rust.

@DianQK DianQK requested a review from dtcxzyw March 31, 2024 13:28
dtcxzyw added a commit to dtcxzyw/llvm-opt-benchmark that referenced this pull request Mar 31, 2024
@AreaZR
Contributor

AreaZR commented Mar 31, 2024

I am confused: the linked issue says the memcpy can be removed, but the test for it only shows it being replaced with memmove?

@DianQK
Member Author

DianQK commented Mar 31, 2024

I am confused: the linked issue says the memcpy can be removed, but the test for it only shows it being replaced with memmove?

instcombine or DSE will remove the memcpy: https://llvm.godbolt.org/z/EbxvWvrYb.
Of course, I could delete it directly in memcpyopt, but that would make the code less charming; this patch is about memcpy forwarding.

Member

@dtcxzyw dtcxzyw left a comment

Can you add some tests to demonstrate that the staging allocas will be eliminated after memcpy forwarding?

@DianQK DianQK force-pushed the memcpy-offset branch 2 times, most recently from f5297fe to 105dde3 on April 2, 2024 at 00:49
@DianQK
Member Author

DianQK commented Apr 2, 2024

Can you add some tests to demonstrate that the staging allocas will be eliminated after memcpy forwarding?

Done. I've also updated some tests related to alignment.
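A concrete instance of the alignment update (paraphrasing the @forward_offset_align test in the patch; names and sizes come from that test): the dependency's source is known to be align 2, but the forwarded copy starts at offset 1, so only commonAlignment(2, 1) = 1 can be claimed for the new source.

  ; dependency: source %dep_src has align 2
  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dep_dest, ptr align 2 %dep_src, i64 7, i1 false)
  ; the forwarded copy starts at %dep_src + 1, so its source alignment drops to 1
  %dep = getelementptr inbounds i8, ptr %dep_src, i64 1
  call void @llvm.memmove.p0.p0.i64(ptr align 1 %dep, ptr align 1 %dep, i64 6, i1 false)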

@DianQK DianQK changed the title [MemCpyOpt] Calculate the offset value to forward memcpy [MemCpyOpt] Forward memcpy based on the actual copy memory location. Apr 2, 2024
@DianQK
Member Author

DianQK commented Jun 18, 2024

Rebased and ping.

@DianQK
Member Author

DianQK commented Jul 9, 2024

Ping~ :p

Member

@dtcxzyw dtcxzyw left a comment

LGTM. Thank you!

std::optional<int64_t> MDestOffset =
M->getRawDest()->getPointerOffsetFrom(MDep->getRawSource(), DL);
if (MDestOffset && *MDestOffset == MForwardOffset)
CopySource = M->getRawDest();
Member

Please add some comments for this case.

CopySource = MDep->getRawSource() + MForwardOffset = MDep->getRawSource() + MDestOffset = M->getRawDest()

Is there a test covering this case?
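For reference, a minimal shape of this special case (essentially the @forward_offset test in the patch, sketched here): M's destination is MDep's source plus the same offset, so the forwarded copy can reuse M's own destination as its source and only needs to be turned into a memmove.

  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dep_dest, ptr align 1 %dep_src, i64 7, i1 false)
  %src = getelementptr inbounds i8, ptr %dep_dest, i64 1  ; MDep->getDest() + 1
  %dest = getelementptr inbounds i8, ptr %dep_src, i64 1  ; MDep->getSource() + 1, i.e. M's dest
  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %dest, ptr align 1 %src, i64 6, i1 false)
  ; after forwarding, the last call becomes a self-copy:
  ;   call void @llvm.memmove.p0.p0.i64(ptr align 1 %dest, ptr align 1 %dest, i64 6, i1 false)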

Contributor

Do I understand correctly that this is supposed to handle this special case?

memcpy(d1,s1)
memcpy(s1+o,d1+o)

Why is there special handling for this? Is this a common pattern?

Contributor

Okay, it seems like it was the original motivating case in Rust?

Member Author

Okay, it seems like it was the original motivating case in Rust?

Yes. This is the IR: https://godbolt.org/z/Ya3v5E4G7. The full memcpy sequence looks like this:

memcpy(d1, s1)
memcpy(tmp, d1+o)
memcpy(s1+o, tmp)

After this PR, it will become memmove(x+o, x+o): https://llvm.godbolt.org/z/oc4M4v6qj. However, in the current pipeline, this memmove hasn't been eliminated. I'm not sure why, but that should be another PR.
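Spelled out as IR with illustrative names and a fixed offset of 1 (a sketch, not the exact IR from the godbolt links), the chain and its result look roughly like this:

  %d1 = alloca [7 x i8], align 1
  %tmp = alloca [6 x i8], align 1
  ; memcpy(d1, s1)
  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %d1, ptr align 1 %s1, i64 7, i1 false)
  ; memcpy(tmp, d1+1)
  %d1.off = getelementptr inbounds i8, ptr %d1, i64 1
  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %tmp, ptr align 1 %d1.off, i64 6, i1 false)
  ; memcpy(s1+1, tmp)
  %s1.off = getelementptr inbounds i8, ptr %s1, i64 1
  call void @llvm.memcpy.p0.p0.i64(ptr align 1 %s1.off, ptr align 1 %tmp, i64 6, i1 false)
  ; after both copies are forwarded, the last one becomes a self-copy,
  ; emitted as memmove because source and destination may overlap:
  ;   call void @llvm.memmove.p0.p0.i64(ptr align 1 %s1.off, ptr align 1 %s1.off, i64 6, i1 false)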

Member Author

However, in the current pipeline, this memmove hasn't been eliminated. I'm not sure why, but that should be another PR.

It seems I should complete this in this PR. I will likely verify it thoroughly tomorrow.

Member Author

However, in the current pipeline, this memmove hasn't been eliminated. I'm not sure why, but that should be another PR.

It seems I should complete this in this PR. I will likely verify it thoroughly tomorrow.

This should be put in another PR: https://llvm.godbolt.org/z/obo4j4G6x.

Member Author

Please add some comments for this case.

Added.

Member Author

This should be put in another PR: https://llvm.godbolt.org/z/obo4j4G6x.

This is the PR for the fix: #98321.

@dtcxzyw
Member

dtcxzyw commented Jul 9, 2024

(It seems there's no relevant matching pattern in the C++ test cases.)

Can you run a Rust performance benchmark like rust-lang/rust#127347?

auto *CopySource = MDep->getRawSource();
auto CleanupOnFailure = llvm::make_scope_exit([&CopySource] {
if (CopySource->use_empty())
cast<Instruction>(CopySource)->eraseFromParent();
Contributor

This is dangerous when BatchAA is used, if the pointer is cached in AAQI and then later a new instruction is allocated with the same pointer.

I think it is safe here because we will only allocate more instructions after finishing all BatchAA queries, but we have to be careful if we want to do something like this in another place. Then we'd probably have to delay instruction removal until all transforms on an instruction finished.

Member Author

Thanks! I'll add a comment or change the code. This reminds me of #79820.

Contributor

Please make the bulk of the coverage copy into a separate (noalias) destination rather than back into the same allocation. The same-allocation case is a confusing edge case; for it we should only verify that we switch to memmove.

Member Author

Updated.

bors added a commit to rust-lang-ci/rust that referenced this pull request Jul 9, 2024
Test the compile time for llvm#87190

Test for llvm/llvm-project#87190.

r? ghost

`@rustbot` label +S-experimental
@DianQK
Member Author

DianQK commented Jul 10, 2024

Test the compile time for #87190 rust-lang/rust#127514

Currently, only the results related to compile time are relatively accurate: rust-lang/rust#127514 (comment).

@DianQK DianQK marked this pull request as draft July 11, 2024 12:47
@DianQK DianQK marked this pull request as ready for review July 11, 2024 14:02
@DianQK
Member Author

DianQK commented Jul 11, 2024

I used the original IR from Rust to reflect the impact of #98321.

IRBuilder<> Builder(M);
auto *CopySource = MDep->getSource();
auto CleanupOnFailure = llvm::make_scope_exit([&CopySource] {
if (CopySource->use_empty())
Contributor

@nikic nikic Jul 11, 2024

Can we make this specific to only the case where we create the instruction? I think your current code will cause a crash if the destination of the first memcpy is a non-instruction (e.g. argument) and we fall into the same-offset special-case below. (Such that all users are removed, but we didn't add the instruction.)

Member Author

Updated. But I haven't found a crash example. To delete an existing but unused value, we need to follow the must-alias path, and this dest cannot be used by the first memcpy. Perhaps we need to create a pair of must-aliased arguments?

Contributor

@nikic nikic left a comment

LGTM

@DianQK DianQK merged commit fa24213 into llvm:main Jul 12, 2024
7 checks passed
@DianQK DianQK deleted the memcpy-offset branch July 12, 2024 14:58
@dyung
Collaborator

dyung commented Jul 13, 2024

Hi @DianQK, we started to see an assertion failure in some of our internal tests, which I bisected back to your change. I have put a repro in #98726; can you take a look?

aaryanshukla pushed a commit to aaryanshukla/llvm-project that referenced this pull request Jul 14, 2024
[MemCpyOpt] Forward memcpy based on the actual copy memory location. (llvm#87190)

Fixes llvm#85560.

We can forward `memcpy` as long as the actual memory location being
copied has not been altered.

alive2: https://alive2.llvm.org/ce/z/q9JaHV

Successfully merging this pull request may close these issues: [MemCpyOpt] Failure to eliminate memcpy
6 participants