[MSAN] Support loads and stores of scalable vector types
This adds support for scalable vector types, at least far enough to get basic load and store cases working. It turns out that load/store without origin tracking already worked; I had apparently gotten that working in one of the preparatory patches that switched to the TypeSize utilities and didn't notice. The code changes here are required to enable origin tracking.

For origin tracking, a 4-byte value (the origin) is broadcast into a shadow region whose size exactly matches the type being accessed. The origin is only written if the shadow value is non-zero. The details of how the shadow is computed from the original value being stored aren't relevant for this patch.
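As a concrete illustration, here is a minimal C++ model of the runtime effect; the function name and the flat shadow/origin-region representation are invented for this sketch, and kOriginSize == 4 matches the constant used in the pass:

    #include <cstddef>
    #include <cstdint>

    // Hypothetical model of the instrumented store's effect for N shadow
    // bytes. The pass emits IR rather than calling anything like this; the
    // point is the semantics: the 4-byte origin is broadcast across the
    // whole origin region, but only when some shadow byte is non-zero
    // (i.e. the stored value is at least partially uninitialized).
    void paintOriginModel(uint32_t Origin, uint32_t *OriginRegion,
                          const uint8_t *Shadow, size_t N) {
      const size_t kOriginSize = 4;
      bool Poisoned = false;
      for (size_t I = 0; I < N; ++I)
        Poisoned |= (Shadow[I] != 0);
      if (!Poisoned)
        return;
      // Round up so a region that is not a multiple of 4 bytes is covered.
      for (size_t I = 0; I < (N + kOriginSize - 1) / kOriginSize; ++I)
        OriginRegion[I] = Origin;
    }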

The code changes involve two related primitives.

First, we need to be able to perform that broadcast into a scalably sized memory region. This requires a loop with an appropriate bound. The fixed-size case is optimized with larger stores and alignment; I did not bother with that for the scalable case for now. We can optimize this codepath later if desired.
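As a sketch of the bound computation (mirroring the paintOrigin hunk below; VScale and KnownMinBytes stand in for quantities only known at runtime for scalable types, and the helper name is invented):

    #include <cstdint>

    // A scalable type's store size is VScale * KnownMinBytes bytes, and we
    // need one 4-byte origin slot per kOriginSize bytes, rounded up. The
    // pass builds the same arithmetic as IR (CreateTypeSize, CreateAdd,
    // CreateUDiv) and emits the loop via SplitBlockAndInsertSimpleForLoop.
    uint64_t originSlotCount(uint64_t VScale, uint64_t KnownMinBytes) {
      const uint64_t kOriginSize = 4;
      uint64_t Size = VScale * KnownMinBytes;          // runtime TypeSize
      return (Size + kOriginSize - 1) / kOriginSize;   // round up
    }

For example, a <vscale x 4 x i32> shadow has KnownMinBytes == 16, so the emitted loop runs vscale * 4 iterations.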

Second, we need a way to test whether the shadow is zero. The mechanism for this in the code is to convert the shadow value into a scalar and then zero-check that. The invariant is that this scalar is zero exactly when all elements of the shadow value are zero, so we use an OR reduction on the scalable vector. This is analogous to how e.g. an array is handled. I landed a number of cleanup changes removing other direct uses of the scalar conversion to convince myself there were no other undocumented invariants.
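A minimal model of the invariant the OR reduction provides (helper name invented; the real code calls IRB.CreateOrReduce on the vector shadow, as in the second hunk below):

    #include <cstddef>
    #include <cstdint>

    // OR-reducing the lanes yields a scalar that is zero exactly when every
    // lane is zero, which is precisely what the downstream zero-check needs.
    uint32_t orReduceModel(const uint32_t *ShadowLanes, size_t NumLanes) {
      uint32_t Acc = 0;
      for (size_t I = 0; I < NumLanes; ++I)
        Acc |= ShadowLanes[I];
      return Acc; // non-zero iff at least one lane was non-zero
    }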

Differential Revision: https://reviews.llvm.org/D146157
preames committed Mar 23, 2023
1 parent 434b0ba commit 5bcb4c4
Showing 2 changed files with 528 additions and 1 deletion.
20 changes: 19 additions & 1 deletion llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp
@@ -1183,13 +1183,29 @@ struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
   /// Fill memory range with the given origin value.
   void paintOrigin(IRBuilder<> &IRB, Value *Origin, Value *OriginPtr,
                    TypeSize TS, Align Alignment) {
-    unsigned Size = TS.getFixedValue();
     const DataLayout &DL = F.getParent()->getDataLayout();
     const Align IntptrAlignment = DL.getABITypeAlign(MS.IntptrTy);
     unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
     assert(IntptrAlignment >= kMinOriginAlignment);
     assert(IntptrSize >= kOriginSize);
 
+    // Note: The loop based formation works for fixed length vectors too,
+    // however we prefer to unroll and specialize alignment below.
+    if (TS.isScalable()) {
+      Value *Size = IRB.CreateTypeSize(IRB.getInt32Ty(), TS);
+      Value *RoundUp = IRB.CreateAdd(Size, IRB.getInt32(kOriginSize - 1));
+      Value *End = IRB.CreateUDiv(RoundUp, IRB.getInt32(kOriginSize));
+      auto [InsertPt, Index] =
+          SplitBlockAndInsertSimpleForLoop(End, &*IRB.GetInsertPoint());
+      IRB.SetInsertPoint(InsertPt);
+
+      Value *GEP = IRB.CreateGEP(MS.OriginTy, OriginPtr, Index);
+      IRB.CreateAlignedStore(Origin, GEP, kMinOriginAlignment);
+      return;
+    }
+
+    unsigned Size = TS.getFixedValue();
+
     unsigned Ofs = 0;
     Align CurrentAlignment = Alignment;
     if (Alignment >= IntptrAlignment && IntptrSize > kOriginSize) {
@@ -1575,6 +1591,8 @@ struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
     if (ArrayType *Array = dyn_cast<ArrayType>(V->getType()))
       return collapseArrayShadow(Array, V, IRB);
     if (isa<VectorType>(V->getType())) {
+      if (isa<ScalableVectorType>(V->getType()))
+        return convertShadowToScalar(IRB.CreateOrReduce(V), IRB);
       unsigned BitWidth =
           V->getType()->getPrimitiveSizeInBits().getFixedValue();
       return IRB.CreateBitCast(V, IntegerType::get(*MS.C, BitWidth));
