
Cleanup x86_mmx after removing IR type #100646

Merged · 4 commits · Jul 28, 2024

Conversation

jyknight (Member) commented Jul 25, 2024

After #98505, the textual IR keyword x86_mmx was temporarily made to parse as <1 x i64>, so as not to require a lot of test update noise.

This completes the removal of the type, by removing the `x86_mmx` keyword from the IR parser, and making the (now no-op) test updates via `sed -i 's/\bx86_mmx\b/<1 x i64>/g' $(git grep -l x86_mmx llvm/test/)`. Resulting bitcasts from `<1 x i64>` to itself were then manually deleted.

Changes to llvm/test/Bitcode/compatibility-$VERSION.ll were reverted, as they're intended to be equivalent to the .bc file, if parsed by old LLVM, so shouldn't be updated.

A few tests were removed from the following files, as they no longer tested anything:

  • llvm/test/Transforms/GlobalOpt/x86_mmx_load.ll
  • llvm/test/Transforms/InstCombine/cast.ll
  • llvm/test/Transforms/InstSimplify/ConstProp/gep-zeroinit-vector.ll

Works towards issue #98272.
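For illustration, the mechanical update described above can be sketched as a small shell script. The sample `.ll` content is a made-up stand-in for a real test file; the sed expression is the one quoted in the description.

```shell
#!/bin/sh
# Sketch of the mechanical test update: rewrite x86_mmx tokens, then
# locate any resulting self-bitcasts for manual deletion.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
declare void @llvm.x86.mmx.maskmovq(x86_mmx, x86_mmx, ptr)
EOF

# Same substitution as the PR: replace every standalone x86_mmx token.
sed -i 's/\bx86_mmx\b/<1 x i64>/g' "$tmp"

# Flag bitcasts from <1 x i64> to itself, which were deleted by hand in the PR.
grep -n 'bitcast <1 x i64> .* to <1 x i64>' "$tmp" || true

cat "$tmp"   # prints: declare void @llvm.x86.mmx.maskmovq(<1 x i64>, <1 x i64>, ptr)
rm -f "$tmp"
```

In the actual change the file list came from `git grep -l x86_mmx llvm/test/` rather than a single temporary file.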

 - Ran `sed -i 's/\bx86_mmx\b/<1 x i64>/g' $(git grep -l x86_mmx llvm/test/)`.
 - Delete a couple of now-useless tests.
 - Revert changes to compatibility-VERS.ll, since those are to be parsed by old LLVM.
llvmbot (Member) commented Jul 25, 2024

@llvm/pr-subscribers-mc

@llvm/pr-subscribers-llvm-transforms

Author: James Y Knight (jyknight)

Changes

After #98505, the textual IR keyword x86_mmx was temporarily made to parse as <1 x i64>, so as not to require a lot of test update noise.

This completes the removal of the type, by removing the `x86_mmx` keyword from the IR parser, and making the (now no-op) test updates via `sed -i 's/\bx86_mmx\b/<1 x i64>/g' $(git grep -l x86_mmx llvm/test/)`.

Changes to llvm/test/Bitcode/compatibility-$VERSION.ll were reverted, as they're intended to be equivalent to the .bc file, if parsed by old LLVM, so shouldn't be updated.

A few tests were removed from the following files, as they no longer tested anything:

  • llvm/test/Transforms/GlobalOpt/x86_mmx_load.ll
  • llvm/test/Transforms/InstCombine/cast.ll
  • llvm/test/Transforms/InstSimplify/ConstProp/gep-zeroinit-vector.ll

Patch is 365.87 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/100646.diff

54 Files Affected:

  • (modified) llvm/lib/AsmParser/LLLexer.cpp (-2)
  • (modified) llvm/test/Bindings/llvm-c/echo.ll (+1-1)
  • (modified) llvm/test/Bitcode/compatibility.ll (-2)
  • (modified) llvm/test/CodeGen/X86/2007-05-15-maskmovq.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/2007-07-03-GR64ToVR64.ll (+6-6)
  • (modified) llvm/test/CodeGen/X86/2008-04-08-CoalescerCrash.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/2008-08-23-64Bit-maskmovq.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/2008-09-05-sinttofp-2xi32.ll (+7-7)
  • (modified) llvm/test/CodeGen/X86/2011-06-14-mmx-inlineasm.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/avx-vbroadcast.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/avx2-vbroadcast.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/bitcast-mmx.ll (+17-17)
  • (modified) llvm/test/CodeGen/X86/expand-vr64-gr64-copy.mir (+3-3)
  • (modified) llvm/test/CodeGen/X86/fast-isel-bc.ll (+3-3)
  • (modified) llvm/test/CodeGen/X86/fast-isel-nontemporal.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/mmx-arg-passing-x86-64.ll (+7-7)
  • (modified) llvm/test/CodeGen/X86/mmx-arg-passing.ll (+6-6)
  • (modified) llvm/test/CodeGen/X86/mmx-arith.ll (+154-154)
  • (modified) llvm/test/CodeGen/X86/mmx-bitcast-fold.ll (+3-3)
  • (modified) llvm/test/CodeGen/X86/mmx-bitcast.ll (+26-26)
  • (modified) llvm/test/CodeGen/X86/mmx-build-vector.ll (+67-67)
  • (modified) llvm/test/CodeGen/X86/mmx-coalescing.ll (+14-14)
  • (modified) llvm/test/CodeGen/X86/mmx-cvt.ll (+31-31)
  • (modified) llvm/test/CodeGen/X86/mmx-fold-load.ll (+84-84)
  • (modified) llvm/test/CodeGen/X86/mmx-fold-zero.ll (+26-26)
  • (modified) llvm/test/CodeGen/X86/mmx-intrinsics.ll (+433-433)
  • (modified) llvm/test/CodeGen/X86/mmx-only.ll (+5-5)
  • (modified) llvm/test/CodeGen/X86/mxcsr-reg-usage.ll (+12-12)
  • (modified) llvm/test/CodeGen/X86/nontemporal.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/pr13859.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/pr23246.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/pr29222.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/pr35982.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/select-mmx.ll (+9-9)
  • (modified) llvm/test/CodeGen/X86/stack-folding-mmx.ll (+390-390)
  • (modified) llvm/test/CodeGen/X86/vec_extract-mmx.ll (+16-16)
  • (modified) llvm/test/CodeGen/X86/vec_insert-5.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/vec_insert-7.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/vec_insert-mmx.ll (+3-3)
  • (modified) llvm/test/CodeGen/X86/vector-shuffle-mmx.ll (+15-15)
  • (modified) llvm/test/CodeGen/X86/x86-64-psub.ll (+35-35)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/X86/mmx-intrinsics.ll (+433-433)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/vector_arith.ll (+8-8)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/vector_cvt.ll (+5-5)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/vector_pack.ll (+4-4)
  • (modified) llvm/test/Instrumentation/MemorySanitizer/vector_shift.ll (+5-5)
  • (modified) llvm/test/MC/X86/x86-GCC-inline-asm-Y-constraints.ll (+1-1)
  • (removed) llvm/test/Transforms/GlobalOpt/x86_mmx_load.ll (-12)
  • (modified) llvm/test/Transforms/InstCombine/X86/x86-movmsk.ll (+8-8)
  • (modified) llvm/test/Transforms/InstCombine/cast.ll (-21)
  • (modified) llvm/test/Transforms/InstSimplify/ConstProp/gep-zeroinit-vector.ll (+1-14)
  • (modified) llvm/test/Transforms/SCCP/crash.ll (+3-3)
  • (modified) llvm/test/Transforms/SROA/pr57796.ll (+4-4)
  • (modified) llvm/test/Verifier/atomics.ll (+5-5)
diff --git a/llvm/lib/AsmParser/LLLexer.cpp b/llvm/lib/AsmParser/LLLexer.cpp
index c82e74972b67c..7c97f7afbe093 100644
--- a/llvm/lib/AsmParser/LLLexer.cpp
+++ b/llvm/lib/AsmParser/LLLexer.cpp
@@ -838,8 +838,6 @@ lltok::Kind LLLexer::LexIdentifier() {
   TYPEKEYWORD("ppc_fp128", Type::getPPC_FP128Ty(Context));
   TYPEKEYWORD("label",     Type::getLabelTy(Context));
   TYPEKEYWORD("metadata",  Type::getMetadataTy(Context));
-  TYPEKEYWORD("x86_mmx", llvm::FixedVectorType::get(
-                             llvm::IntegerType::get(Context, 64), 1));
   TYPEKEYWORD("x86_amx",   Type::getX86_AMXTy(Context));
   TYPEKEYWORD("token",     Type::getTokenTy(Context));
   TYPEKEYWORD("ptr",       PointerType::getUnqual(Context));
diff --git a/llvm/test/Bindings/llvm-c/echo.ll b/llvm/test/Bindings/llvm-c/echo.ll
index ab9acbc0a66a5..45e3d0357ebdf 100644
--- a/llvm/test/Bindings/llvm-c/echo.ll
+++ b/llvm/test/Bindings/llvm-c/echo.ll
@@ -70,7 +70,7 @@ define void @types() {
   %9 = alloca [3 x i22], align 4
   %10 = alloca ptr addrspace(5), align 8
   %11 = alloca <5 x ptr>, align 64
-  %12 = alloca x86_mmx, align 8
+  %12 = alloca <1 x i64>, align 8
   ret void
 }
 
diff --git a/llvm/test/Bitcode/compatibility.ll b/llvm/test/Bitcode/compatibility.ll
index a7567038b7a7b..e5592b347425a 100644
--- a/llvm/test/Bitcode/compatibility.ll
+++ b/llvm/test/Bitcode/compatibility.ll
@@ -1112,8 +1112,6 @@ define void @typesystem() {
   ; CHECK: %t5 = alloca x86_fp80
   %t6 = alloca ppc_fp128
   ; CHECK: %t6 = alloca ppc_fp128
-  %t7 = alloca x86_mmx
-  ; CHECK: %t7 = alloca <1 x i64>
   %t8 = alloca ptr
   ; CHECK: %t8 = alloca ptr
   %t9 = alloca <4 x i32>
diff --git a/llvm/test/CodeGen/X86/2007-05-15-maskmovq.ll b/llvm/test/CodeGen/X86/2007-05-15-maskmovq.ll
index 69f733461efc7..5c39c93fec995 100644
--- a/llvm/test/CodeGen/X86/2007-05-15-maskmovq.ll
+++ b/llvm/test/CodeGen/X86/2007-05-15-maskmovq.ll
@@ -25,10 +25,10 @@ define void @test(<1 x i64> %c64, <1 x i64> %mask1, ptr %P) {
 ; CHECK-NEXT:    popl %edi
 ; CHECK-NEXT:    retl
 entry:
-	%tmp4 = bitcast <1 x i64> %mask1 to x86_mmx		; <x86_mmx> [#uses=1]
-	%tmp6 = bitcast <1 x i64> %c64 to x86_mmx		; <x86_mmx> [#uses=1]
-	tail call void @llvm.x86.mmx.maskmovq( x86_mmx %tmp4, x86_mmx %tmp6, ptr %P )
+	%tmp4 = bitcast <1 x i64> %mask1 to <1 x i64>		; <<1 x i64>> [#uses=1]
+	%tmp6 = bitcast <1 x i64> %c64 to <1 x i64>		; <<1 x i64>> [#uses=1]
+	tail call void @llvm.x86.mmx.maskmovq( <1 x i64> %tmp4, <1 x i64> %tmp6, ptr %P )
 	ret void
 }
 
-declare void @llvm.x86.mmx.maskmovq(x86_mmx, x86_mmx, ptr)
+declare void @llvm.x86.mmx.maskmovq(<1 x i64>, <1 x i64>, ptr)
diff --git a/llvm/test/CodeGen/X86/2007-07-03-GR64ToVR64.ll b/llvm/test/CodeGen/X86/2007-07-03-GR64ToVR64.ll
index 79b06ba836af2..4edffe48ec1ca 100644
--- a/llvm/test/CodeGen/X86/2007-07-03-GR64ToVR64.ll
+++ b/llvm/test/CodeGen/X86/2007-07-03-GR64ToVR64.ll
@@ -1,7 +1,7 @@
 ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
 ; RUN: llc < %s -mtriple=x86_64-apple-darwin -mattr=+mmx | FileCheck %s
 
-@R = external global x86_mmx		; <ptr> [#uses=1]
+@R = external global <1 x i64>		; <ptr> [#uses=1]
 
 define void @foo(<1 x i64> %A, <1 x i64> %B) nounwind {
 ; CHECK-LABEL: foo:
@@ -14,13 +14,13 @@ define void @foo(<1 x i64> %A, <1 x i64> %B) nounwind {
 ; CHECK-NEXT:    emms
 ; CHECK-NEXT:    retq
 entry:
-	%tmp4 = bitcast <1 x i64> %B to x86_mmx		; <<4 x i16>> [#uses=1]
-	%tmp6 = bitcast <1 x i64> %A to x86_mmx		; <<4 x i16>> [#uses=1]
-	%tmp7 = tail call x86_mmx @llvm.x86.mmx.paddus.w( x86_mmx %tmp6, x86_mmx %tmp4 )		; <x86_mmx> [#uses=1]
-	store x86_mmx %tmp7, ptr @R
+	%tmp4 = bitcast <1 x i64> %B to <1 x i64>		; <<4 x i16>> [#uses=1]
+	%tmp6 = bitcast <1 x i64> %A to <1 x i64>		; <<4 x i16>> [#uses=1]
+	%tmp7 = tail call <1 x i64> @llvm.x86.mmx.paddus.w( <1 x i64> %tmp6, <1 x i64> %tmp4 )		; <<1 x i64>> [#uses=1]
+	store <1 x i64> %tmp7, ptr @R
 	tail call void @llvm.x86.mmx.emms( )
 	ret void
 }
 
-declare x86_mmx @llvm.x86.mmx.paddus.w(x86_mmx, x86_mmx)
+declare <1 x i64> @llvm.x86.mmx.paddus.w(<1 x i64>, <1 x i64>)
 declare void @llvm.x86.mmx.emms()
diff --git a/llvm/test/CodeGen/X86/2008-04-08-CoalescerCrash.ll b/llvm/test/CodeGen/X86/2008-04-08-CoalescerCrash.ll
index d439e827e8199..0c792644fc5c8 100644
--- a/llvm/test/CodeGen/X86/2008-04-08-CoalescerCrash.ll
+++ b/llvm/test/CodeGen/X86/2008-04-08-CoalescerCrash.ll
@@ -5,15 +5,15 @@ entry:
 	tail call void asm sideeffect "# top of block", "~{dirflag},~{fpsr},~{flags},~{di},~{si},~{dx},~{cx},~{ax}"( ) nounwind 
 	tail call void asm sideeffect ".file \224443946.c\22", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
 	tail call void asm sideeffect ".line 8", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
-	%tmp1 = tail call x86_mmx asm sideeffect "movd $1, $0", "=={mm4},{bp},~{dirflag},~{fpsr},~{flags},~{memory}"( i32 undef ) nounwind 		; <x86_mmx> [#uses=1]
+	%tmp1 = tail call <1 x i64> asm sideeffect "movd $1, $0", "=={mm4},{bp},~{dirflag},~{fpsr},~{flags},~{memory}"( i32 undef ) nounwind 		; <<1 x i64>> [#uses=1]
 	tail call void asm sideeffect ".file \224443946.c\22", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
 	tail call void asm sideeffect ".line 9", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
-	%tmp3 = tail call i32 asm sideeffect "movd $1, $0", "=={bp},{mm3},~{dirflag},~{fpsr},~{flags},~{memory}"( x86_mmx undef ) nounwind 		; <i32> [#uses=1]
+	%tmp3 = tail call i32 asm sideeffect "movd $1, $0", "=={bp},{mm3},~{dirflag},~{fpsr},~{flags},~{memory}"( <1 x i64> undef ) nounwind 		; <i32> [#uses=1]
 	tail call void asm sideeffect ".file \224443946.c\22", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
 	tail call void asm sideeffect ".line 10", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
-	tail call void asm sideeffect "movntq $0, 0($1,$2)", "{mm0},{di},{bp},~{dirflag},~{fpsr},~{flags},~{memory}"( x86_mmx undef, i32 undef, i32 %tmp3 ) nounwind 
+	tail call void asm sideeffect "movntq $0, 0($1,$2)", "{mm0},{di},{bp},~{dirflag},~{fpsr},~{flags},~{memory}"( <1 x i64> undef, i32 undef, i32 %tmp3 ) nounwind 
 	tail call void asm sideeffect ".file \224443946.c\22", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
 	tail call void asm sideeffect ".line 11", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
-	%tmp8 = tail call i32 asm sideeffect "movd $1, $0", "=={bp},{mm4},~{dirflag},~{fpsr},~{flags},~{memory}"( x86_mmx %tmp1 ) nounwind 		; <i32> [#uses=0]
+	%tmp8 = tail call i32 asm sideeffect "movd $1, $0", "=={bp},{mm4},~{dirflag},~{fpsr},~{flags},~{memory}"( <1 x i64> %tmp1 ) nounwind 		; <i32> [#uses=0]
 	ret i32 undef
 }
diff --git a/llvm/test/CodeGen/X86/2008-08-23-64Bit-maskmovq.ll b/llvm/test/CodeGen/X86/2008-08-23-64Bit-maskmovq.ll
index 594edbaad2944..4a4477823a61d 100644
--- a/llvm/test/CodeGen/X86/2008-08-23-64Bit-maskmovq.ll
+++ b/llvm/test/CodeGen/X86/2008-08-23-64Bit-maskmovq.ll
@@ -17,13 +17,13 @@ entry:
 	br i1 false, label %bb.nph144.split, label %bb133
 
 bb.nph144.split:		; preds = %entry
-        %tmp = bitcast <8 x i8> zeroinitializer to x86_mmx
-        %tmp2 = bitcast <8 x i8> zeroinitializer to x86_mmx
-	tail call void @llvm.x86.mmx.maskmovq( x86_mmx %tmp, x86_mmx %tmp2, ptr null ) nounwind
+        %tmp = bitcast <8 x i8> zeroinitializer to <1 x i64>
+        %tmp2 = bitcast <8 x i8> zeroinitializer to <1 x i64>
+	tail call void @llvm.x86.mmx.maskmovq( <1 x i64> %tmp, <1 x i64> %tmp2, ptr null ) nounwind
 	unreachable
 
 bb133:		; preds = %entry
 	ret void
 }
 
-declare void @llvm.x86.mmx.maskmovq(x86_mmx, x86_mmx, ptr) nounwind
+declare void @llvm.x86.mmx.maskmovq(<1 x i64>, <1 x i64>, ptr) nounwind
diff --git a/llvm/test/CodeGen/X86/2008-09-05-sinttofp-2xi32.ll b/llvm/test/CodeGen/X86/2008-09-05-sinttofp-2xi32.ll
index 3a112ae2a2113..20673a177ac31 100644
--- a/llvm/test/CodeGen/X86/2008-09-05-sinttofp-2xi32.ll
+++ b/llvm/test/CodeGen/X86/2008-09-05-sinttofp-2xi32.ll
@@ -26,7 +26,7 @@ entry:
 
 ; This is how to get MMX instructions.
 
-define <2 x double> @a2(x86_mmx %x) nounwind {
+define <2 x double> @a2(<1 x i64> %x) nounwind {
 ; CHECK-LABEL: a2:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    pushl %ebp
@@ -42,11 +42,11 @@ define <2 x double> @a2(x86_mmx %x) nounwind {
 ; CHECK-NEXT:    popl %ebp
 ; CHECK-NEXT:    retl
 entry:
-  %y = tail call <2 x double> @llvm.x86.sse.cvtpi2pd(x86_mmx %x)
+  %y = tail call <2 x double> @llvm.x86.sse.cvtpi2pd(<1 x i64> %x)
   ret <2 x double> %y
 }
 
-define x86_mmx @b2(<2 x double> %x) nounwind {
+define <1 x i64> @b2(<2 x double> %x) nounwind {
 ; CHECK-LABEL: b2:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    pushl %ebp
@@ -61,9 +61,9 @@ define x86_mmx @b2(<2 x double> %x) nounwind {
 ; CHECK-NEXT:    popl %ebp
 ; CHECK-NEXT:    retl
 entry:
-  %y = tail call x86_mmx @llvm.x86.sse.cvttpd2pi (<2 x double> %x)
-  ret x86_mmx %y
+  %y = tail call <1 x i64> @llvm.x86.sse.cvttpd2pi (<2 x double> %x)
+  ret <1 x i64> %y
 }
 
-declare <2 x double> @llvm.x86.sse.cvtpi2pd(x86_mmx)
-declare x86_mmx @llvm.x86.sse.cvttpd2pi(<2 x double>)
+declare <2 x double> @llvm.x86.sse.cvtpi2pd(<1 x i64>)
+declare <1 x i64> @llvm.x86.sse.cvttpd2pi(<2 x double>)
diff --git a/llvm/test/CodeGen/X86/2011-06-14-mmx-inlineasm.ll b/llvm/test/CodeGen/X86/2011-06-14-mmx-inlineasm.ll
index 306aeed1ace3e..582ebb9bdcfd1 100644
--- a/llvm/test/CodeGen/X86/2011-06-14-mmx-inlineasm.ll
+++ b/llvm/test/CodeGen/X86/2011-06-14-mmx-inlineasm.ll
@@ -3,14 +3,14 @@
 target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128-n8:16:32"
 target triple = "i386-apple-macosx10.6.6"
 
-%0 = type { x86_mmx, x86_mmx, x86_mmx, x86_mmx, x86_mmx, x86_mmx, x86_mmx }
+%0 = type { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> }
 
 define i32 @pixman_fill_mmx(ptr nocapture %bits, i32 %stride, i32 %bpp, i32 %x, i32 %y, i32 %width, i32 %height, i32 %xor) nounwind ssp {
 entry:
   %conv = zext i32 %xor to i64
   %shl = shl nuw i64 %conv, 32
   %or = or i64 %shl, %conv
-  %0 = bitcast i64 %or to x86_mmx
+  %0 = bitcast i64 %or to <1 x i64>
 ; CHECK:      movq [[MMXR:%mm[0-7],]] {{%mm[0-7]}}
 ; CHECK-NEXT: movq [[MMXR]] {{%mm[0-7]}}
 ; CHECK-NEXT: movq [[MMXR]] {{%mm[0-7]}}
@@ -18,7 +18,7 @@ entry:
 ; CHECK-NEXT: movq [[MMXR]] {{%mm[0-7]}}
 ; CHECK-NEXT: movq [[MMXR]] {{%mm[0-7]}}
 ; CHECK-NEXT: movq [[MMXR]] {{%mm[0-7]}}
-  %1 = tail call %0 asm "movq\09\09$7,\09$0\0Amovq\09\09$7,\09$1\0Amovq\09\09$7,\09$2\0Amovq\09\09$7,\09$3\0Amovq\09\09$7,\09$4\0Amovq\09\09$7,\09$5\0Amovq\09\09$7,\09$6\0A", "=&y,=&y,=&y,=&y,=&y,=&y,=y,y,~{dirflag},~{fpsr},~{flags}"(x86_mmx %0) nounwind, !srcloc !0
+  %1 = tail call %0 asm "movq\09\09$7,\09$0\0Amovq\09\09$7,\09$1\0Amovq\09\09$7,\09$2\0Amovq\09\09$7,\09$3\0Amovq\09\09$7,\09$4\0Amovq\09\09$7,\09$5\0Amovq\09\09$7,\09$6\0A", "=&y,=&y,=&y,=&y,=&y,=&y,=y,y,~{dirflag},~{fpsr},~{flags}"(<1 x i64> %0) nounwind, !srcloc !0
   %asmresult = extractvalue %0 %1, 0
   %asmresult6 = extractvalue %0 %1, 1
   %asmresult7 = extractvalue %0 %1, 2
@@ -34,7 +34,7 @@ entry:
 ; CHECK-NEXT: movq {{%mm[0-7]}},
 ; CHECK-NEXT: movq {{%mm[0-7]}},
 ; CHECK-NEXT: movq {{%mm[0-7]}},
-  tail call void asm sideeffect "movq\09$1,\09  ($0)\0Amovq\09$2,\09 8($0)\0Amovq\09$3,\0916($0)\0Amovq\09$4,\0924($0)\0Amovq\09$5,\0932($0)\0Amovq\09$6,\0940($0)\0Amovq\09$7,\0948($0)\0Amovq\09$8,\0956($0)\0A", "r,y,y,y,y,y,y,y,y,~{memory},~{dirflag},~{fpsr},~{flags}"(ptr undef, x86_mmx %0, x86_mmx %asmresult, x86_mmx %asmresult6, x86_mmx %asmresult7, x86_mmx %asmresult8, x86_mmx %asmresult9, x86_mmx %asmresult10, x86_mmx %asmresult11) nounwind, !srcloc !1
+  tail call void asm sideeffect "movq\09$1,\09  ($0)\0Amovq\09$2,\09 8($0)\0Amovq\09$3,\0916($0)\0Amovq\09$4,\0924($0)\0Amovq\09$5,\0932($0)\0Amovq\09$6,\0940($0)\0Amovq\09$7,\0948($0)\0Amovq\09$8,\0956($0)\0A", "r,y,y,y,y,y,y,y,y,~{memory},~{dirflag},~{fpsr},~{flags}"(ptr undef, <1 x i64> %0, <1 x i64> %asmresult, <1 x i64> %asmresult6, <1 x i64> %asmresult7, <1 x i64> %asmresult8, <1 x i64> %asmresult9, <1 x i64> %asmresult10, <1 x i64> %asmresult11) nounwind, !srcloc !1
   tail call void @llvm.x86.mmx.emms() nounwind
   ret i32 1
 }
diff --git a/llvm/test/CodeGen/X86/avx-vbroadcast.ll b/llvm/test/CodeGen/X86/avx-vbroadcast.ll
index 3f6f8c01b9049..c69886df82bdf 100644
--- a/llvm/test/CodeGen/X86/avx-vbroadcast.ll
+++ b/llvm/test/CodeGen/X86/avx-vbroadcast.ll
@@ -1011,7 +1011,7 @@ define float @broadcast_lifetime() nounwind {
   ret float %7
 }
 
-define <8 x i16> @broadcast_x86_mmx(x86_mmx %tmp) nounwind {
+define <8 x i16> @broadcast_x86_mmx(<1 x i64> %tmp) nounwind {
 ; X86-LABEL: broadcast_x86_mmx:
 ; X86:       ## %bb.0: ## %bb
 ; X86-NEXT:    vmovddup {{.*#+}} xmm0 = mem[0,0]
@@ -1023,7 +1023,7 @@ define <8 x i16> @broadcast_x86_mmx(x86_mmx %tmp) nounwind {
 ; X64-NEXT:    vpshufd {{.*#+}} xmm0 = xmm0[0,1,0,1]
 ; X64-NEXT:    retq
 bb:
-  %tmp1 = bitcast x86_mmx %tmp to i64
+  %tmp1 = bitcast <1 x i64> %tmp to i64
   %tmp2 = insertelement <2 x i64> undef, i64 %tmp1, i32 0
   %tmp3 = bitcast <2 x i64> %tmp2 to <8 x i16>
   %tmp4 = shufflevector <8 x i16> %tmp3, <8 x i16> poison, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
diff --git a/llvm/test/CodeGen/X86/avx2-vbroadcast.ll b/llvm/test/CodeGen/X86/avx2-vbroadcast.ll
index fed6c2eb8ba0a..9ac0503831eb7 100644
--- a/llvm/test/CodeGen/X86/avx2-vbroadcast.ll
+++ b/llvm/test/CodeGen/X86/avx2-vbroadcast.ll
@@ -1449,7 +1449,7 @@ eintry:
   ret void
 }
 
-define <8 x i16> @broadcast_x86_mmx(x86_mmx %tmp) nounwind {
+define <8 x i16> @broadcast_x86_mmx(<1 x i64> %tmp) nounwind {
 ; X86-LABEL: broadcast_x86_mmx:
 ; X86:       ## %bb.0: ## %bb
 ; X86-NEXT:    vmovddup {{.*#+}} xmm0 = mem[0,0]
@@ -1466,7 +1466,7 @@ define <8 x i16> @broadcast_x86_mmx(x86_mmx %tmp) nounwind {
 ; X64-AVX512VL-NEXT:    vpbroadcastq %rdi, %xmm0
 ; X64-AVX512VL-NEXT:    retq
 bb:
-  %tmp1 = bitcast x86_mmx %tmp to i64
+  %tmp1 = bitcast <1 x i64> %tmp to i64
   %tmp2 = insertelement <2 x i64> undef, i64 %tmp1, i32 0
   %tmp3 = bitcast <2 x i64> %tmp2 to <8 x i16>
   %tmp4 = shufflevector <8 x i16> %tmp3, <8 x i16> poison, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
diff --git a/llvm/test/CodeGen/X86/bitcast-mmx.ll b/llvm/test/CodeGen/X86/bitcast-mmx.ll
index 061723a0966e2..fe48a96a51d3e 100644
--- a/llvm/test/CodeGen/X86/bitcast-mmx.ll
+++ b/llvm/test/CodeGen/X86/bitcast-mmx.ll
@@ -17,9 +17,9 @@ define i32 @t0(i64 %x) nounwind {
 ; X64-NEXT:    retq
 entry:
   %0 = bitcast i64 %x to <4 x i16>
-  %1 = bitcast <4 x i16> %0 to x86_mmx
-  %2 = tail call x86_mmx @llvm.x86.sse.pshuf.w(x86_mmx %1, i8 -18)
-  %3 = bitcast x86_mmx %2 to <4 x i16>
+  %1 = bitcast <4 x i16> %0 to <1 x i64>
+  %2 = tail call <1 x i64> @llvm.x86.sse.pshuf.w(<1 x i64> %1, i8 -18)
+  %3 = bitcast <1 x i64> %2 to <4 x i16>
   %4 = bitcast <4 x i16> %3 to <1 x i64>
   %5 = extractelement <1 x i64> %4, i32 0
   %6 = bitcast i64 %5 to <2 x i32>
@@ -52,9 +52,9 @@ define i64 @t1(i64 %x, i32 %n) nounwind {
 ; X64-NEXT:    movq %mm0, %rax
 ; X64-NEXT:    retq
 entry:
-  %0 = bitcast i64 %x to x86_mmx
-  %1 = tail call x86_mmx @llvm.x86.mmx.pslli.q(x86_mmx %0, i32 %n)
-  %2 = bitcast x86_mmx %1 to i64
+  %0 = bitcast i64 %x to <1 x i64>
+  %1 = tail call <1 x i64> @llvm.x86.mmx.pslli.q(<1 x i64> %0, i32 %n)
+  %2 = bitcast <1 x i64> %1 to i64
   ret i64 %2
 }
 
@@ -88,11 +88,11 @@ define i64 @t2(i64 %x, i32 %n, i32 %w) nounwind {
 entry:
   %0 = insertelement <2 x i32> undef, i32 %w, i32 0
   %1 = insertelement <2 x i32> %0, i32 0, i32 1
-  %2 = bitcast <2 x i32> %1 to x86_mmx
-  %3 = tail call x86_mmx @llvm.x86.mmx.pslli.q(x86_mmx %2, i32 %n)
-  %4 = bitcast i64 %x to x86_mmx
-  %5 = tail call x86_mmx @llvm.x86.mmx.por(x86_mmx %4, x86_mmx %3)
-  %6 = bitcast x86_mmx %5 to i64
+  %2 = bitcast <2 x i32> %1 to <1 x i64>
+  %3 = tail call <1 x i64> @llvm.x86.mmx.pslli.q(<1 x i64> %2, i32 %n)
+  %4 = bitcast i64 %x to <1 x i64>
+  %5 = tail call <1 x i64> @llvm.x86.mmx.por(<1 x i64> %4, <1 x i64> %3)
+  %6 = bitcast <1 x i64> %5 to i64
   ret i64 %6
 }
 
@@ -123,14 +123,14 @@ define i64 @t3(ptr %y, ptr %n) nounwind {
 ; X64-NEXT:    movq %mm0, %rax
 ; X64-NEXT:    retq
 entry:
-  %0 = load x86_mmx, ptr %y, align 8
+  %0 = load <1 x i64>, ptr %y, align 8
   %1 = load i32, ptr %n, align 4
-  %2 = tail call x86_mmx @llvm.x86.mmx.pslli.q(x86_mmx %0, i32 %1)
-  %3 = bitcast x86_mmx %2 to i64
+  %2 = tail call <1 x i64> @llvm.x86.mmx.pslli.q(<1 x i64> %0, i32 %1)
+  %3 = bitcast <1 x i64> %2 to i64
   ret i64 %3
 }
 
-declare x86_mmx @llvm.x86.sse.pshuf.w(x86_mmx, i8)
-declare x86_mmx @llvm.x86.mmx.pslli.q(x86_mmx, i32)
-declare x86_mmx @llvm.x86.mmx.por(x86_mmx, x86_mmx)
+declare <1 x i64> @llvm.x86.sse.pshuf.w(<1 x i64>, i8)
+declare <1 x i64> @llvm.x86.mmx.pslli.q(<1 x i64>, i32)
+declare <1 x i64> @llvm.x86.mmx.por(<1 x i64>, <1 x i64>)
 
diff --git a/llvm/test/CodeGen/X86/expand-vr64-gr64-copy.mir b/llvm/test/CodeGen/X86/expand-vr64-gr64-copy.mir
index 559560ac20f8a..aa637e7408f22 100644
--- a/llvm/test/CodeGen/X86/expand-vr64-gr64-copy.mir
+++ b/llvm/test/CodeGen/X86/expand-vr64-gr64-copy.mir
@@ -6,9 +6,9 @@
 
   define <2 x i32> @test_paddw(<2 x i32> %a) nounwind readnone {
   entry:
-    %0 = bitcast <2 x i32> %a to x86_mmx
-    %1 = tail call x86_mmx @llvm.x86.mmx.padd.w(x86_mmx %0, x86_mmx %0)
-    %2 = bitcast x86_mmx %1 to <2 x i32>
+    %0 = bitcast <2 x i32> %a to <1 x i64>
+    %1 = tail call <1 x i64> @llvm.x86.mmx.padd.w(<1 x i64> %0, <1 x i64> %0)
+    %2 = bitcast <1 x i64> %1 to <2 x i32>
     ret <2 x i32> %2
   }
 
diff --git a/llvm/test/CodeGen/X86/fast-isel-bc.ll b/llvm/test/CodeGen/X86/fast-isel-bc.ll
index e3bb5e7176e57..64bdfd6d4f863 100644
--- a/llvm/test/CodeGen/X86/fast-isel-bc.ll
+++ b/llvm/test/CodeGen/X86/fast-isel-bc.ll
@@ -4,7 +4,7 @@
 
 ; PR4684
 
-declare void @func2(x86_mmx)
+declare void @func2(<1 x i64>)
 
 ; This isn't spectacular, but it's MMX code at -O0...
 
@@ -28,7 +28,7 @@ define void @func1() nounwind {
 ; X64-NEXT:    callq _func2
 ; X64-NEXT:    popq %rax
 ; X64-NEXT:    retq
-  %tmp0 = bitcast <2 x i32> <i32 0, i32 2> to x86_mmx
-  call void @func2(x86_mmx %tmp0)
+  %tmp0 = bitcast <2 x i32> <i32 0, i32 2> to <1 x i64>
+  call void @func2(<1 x i64> %tmp0)
   ret void
 }
diff --git a/llvm/test/CodeGen/X86/fast-isel-nontemporal.ll b/llvm/test/CodeGen/X86/fast-isel-nontemporal.ll
index c13fdae540d0b..3b1a8f541b490 100644
--- a/llvm/test/CodeGen/X86/fast-isel-nontemporal.ll
+++ b/llvm/test/CodeGen/X86/fast-isel-nontemporal.ll
@@ -104,12 +104,12 @@ define void @test_mmx(ptr nocapture %a0, ptr nocapture %a1) {
 ; ALL-NEXT:    movntq %mm0, (%rsi)
 ; ALL-NEXT:    retq
 entry:
-  %0 = load x86_mmx, ptr %a0
-  %1 = call x86_mmx @llvm.x86.mmx.psrli.q(x86_mmx %0, i32 3)
-  store x86_mmx %1, ptr %a1, align 8, !nontemporal !1
+  %0 = load <1 x i64>, ptr %a0
+  %1 = call <1 x i64> @llvm.x86.mmx.psrli.q(<1 x i64> %0, i32 3)
+  store <1 x i64> %1, ptr %a1, align 8, !nontemporal !1
   ret void
 }
-declare x86_mmx @llvm.x86.mmx.psrli.q(x86_mmx, i32) nounwind readnone
+declare <1 x i64> @llvm.x86.mmx.psrli.q(<1 x i64>, i32) nounwind readnone
 
 ;
 ; 128-bit Vector Stores
diff --git a/llvm/test/CodeGen/X86/mmx-arg-passing-x86-64.ll b/llvm/test/CodeGen/X86/mmx-arg-passing-x86-64.ll
index 54f048eb697f6..439d7efc2d755 100644
--- a/llvm/test/CodeGen/X86/mmx-arg-passing-x86-64.ll
+++ b/llvm/test/CodeGen/X86/mmx-arg-passing-x86-64.ll
@@ -14,12 +14,12 @@ define void @t3() nounwind  {
 ; X86-64-NEXT:    xorl %eax, %eax
 ; X86-64-NEXT:    jmp _pass_v8qi ## TAILCALL
   %tmp3 = load <8 x i8>, ptr @g_v8qi, align 8
-  %tmp3a = bitcast <8 x i8> %tmp3 to x86_mmx
-  %tmp4 = tail call i32 (...) @pass_v8qi( x86_mmx %tmp3a ) nounwind
+  %tmp3a = bitcast <8 x i8> %tmp3 to <1 x i64>
+  %tmp4 = tail call i32 (...) @pass_v8qi( <1 x i64> %tmp3a ) nounwind
   ret void
 }
 
-define void @t4(x86_mmx %v1, x86_mmx %v2) nounwind  {
+define void @t4(<1 x i64> %v1, <1 x i64> %v2) nounwind  {
 ; X86-64-LABEL: t4:
 ; X86-64:       ## %bb.0:
 ; X86-64-NEXT:    movq %rdi, %xmm0
@@ -28,11 +28,11 @@ define void @t4(x86_mmx %v1, x86_mmx %v2) nounwind  {
 ; X86-64-NEXT:    movq %xmm1, %rdi
 ; X86-64-NEXT:    xorl %eax, %eax
 ; X86-64-NEXT:    jmp _pass_v8qi ## TAILCALL
-  %v1a = bitcast x86_mmx %v1 to <8 x i8>
-  %v2b = bitcast x86_mmx %v2 to <8 x i8>
+  %v1a = bitcast <1 x i64> %v1 to <8 x i8>
+  %v2b = bitcast <1 x i64> %v2 to <8 x i8>
   %tmp3 = add <8 x i8> %v1a, %v2b
-  %tmp3a = bitc...
[truncated]

llvmbot (Member) commented Jul 25, 2024

@llvm/pr-subscribers-backend-x86

index 79b06ba836af2..4edffe48ec1ca 100644
--- a/llvm/test/CodeGen/X86/2007-07-03-GR64ToVR64.ll
+++ b/llvm/test/CodeGen/X86/2007-07-03-GR64ToVR64.ll
@@ -1,7 +1,7 @@
 ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
 ; RUN: llc < %s -mtriple=x86_64-apple-darwin -mattr=+mmx | FileCheck %s
 
-@R = external global x86_mmx		; <ptr> [#uses=1]
+@R = external global <1 x i64>		; <ptr> [#uses=1]
 
 define void @foo(<1 x i64> %A, <1 x i64> %B) nounwind {
 ; CHECK-LABEL: foo:
@@ -14,13 +14,13 @@ define void @foo(<1 x i64> %A, <1 x i64> %B) nounwind {
 ; CHECK-NEXT:    emms
 ; CHECK-NEXT:    retq
 entry:
-	%tmp4 = bitcast <1 x i64> %B to x86_mmx		; <<4 x i16>> [#uses=1]
-	%tmp6 = bitcast <1 x i64> %A to x86_mmx		; <<4 x i16>> [#uses=1]
-	%tmp7 = tail call x86_mmx @llvm.x86.mmx.paddus.w( x86_mmx %tmp6, x86_mmx %tmp4 )		; <x86_mmx> [#uses=1]
-	store x86_mmx %tmp7, ptr @R
+	%tmp4 = bitcast <1 x i64> %B to <1 x i64>		; <<4 x i16>> [#uses=1]
+	%tmp6 = bitcast <1 x i64> %A to <1 x i64>		; <<4 x i16>> [#uses=1]
+	%tmp7 = tail call <1 x i64> @llvm.x86.mmx.paddus.w( <1 x i64> %tmp6, <1 x i64> %tmp4 )		; <<1 x i64>> [#uses=1]
+	store <1 x i64> %tmp7, ptr @R
 	tail call void @llvm.x86.mmx.emms( )
 	ret void
 }
 
-declare x86_mmx @llvm.x86.mmx.paddus.w(x86_mmx, x86_mmx)
+declare <1 x i64> @llvm.x86.mmx.paddus.w(<1 x i64>, <1 x i64>)
 declare void @llvm.x86.mmx.emms()
diff --git a/llvm/test/CodeGen/X86/2008-04-08-CoalescerCrash.ll b/llvm/test/CodeGen/X86/2008-04-08-CoalescerCrash.ll
index d439e827e8199..0c792644fc5c8 100644
--- a/llvm/test/CodeGen/X86/2008-04-08-CoalescerCrash.ll
+++ b/llvm/test/CodeGen/X86/2008-04-08-CoalescerCrash.ll
@@ -5,15 +5,15 @@ entry:
 	tail call void asm sideeffect "# top of block", "~{dirflag},~{fpsr},~{flags},~{di},~{si},~{dx},~{cx},~{ax}"( ) nounwind 
 	tail call void asm sideeffect ".file \224443946.c\22", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
 	tail call void asm sideeffect ".line 8", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
-	%tmp1 = tail call x86_mmx asm sideeffect "movd $1, $0", "=={mm4},{bp},~{dirflag},~{fpsr},~{flags},~{memory}"( i32 undef ) nounwind 		; <x86_mmx> [#uses=1]
+	%tmp1 = tail call <1 x i64> asm sideeffect "movd $1, $0", "=={mm4},{bp},~{dirflag},~{fpsr},~{flags},~{memory}"( i32 undef ) nounwind 		; <<1 x i64>> [#uses=1]
 	tail call void asm sideeffect ".file \224443946.c\22", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
 	tail call void asm sideeffect ".line 9", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
-	%tmp3 = tail call i32 asm sideeffect "movd $1, $0", "=={bp},{mm3},~{dirflag},~{fpsr},~{flags},~{memory}"( x86_mmx undef ) nounwind 		; <i32> [#uses=1]
+	%tmp3 = tail call i32 asm sideeffect "movd $1, $0", "=={bp},{mm3},~{dirflag},~{fpsr},~{flags},~{memory}"( <1 x i64> undef ) nounwind 		; <i32> [#uses=1]
 	tail call void asm sideeffect ".file \224443946.c\22", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
 	tail call void asm sideeffect ".line 10", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
-	tail call void asm sideeffect "movntq $0, 0($1,$2)", "{mm0},{di},{bp},~{dirflag},~{fpsr},~{flags},~{memory}"( x86_mmx undef, i32 undef, i32 %tmp3 ) nounwind 
+	tail call void asm sideeffect "movntq $0, 0($1,$2)", "{mm0},{di},{bp},~{dirflag},~{fpsr},~{flags},~{memory}"( <1 x i64> undef, i32 undef, i32 %tmp3 ) nounwind 
 	tail call void asm sideeffect ".file \224443946.c\22", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
 	tail call void asm sideeffect ".line 11", "~{dirflag},~{fpsr},~{flags}"( ) nounwind 
-	%tmp8 = tail call i32 asm sideeffect "movd $1, $0", "=={bp},{mm4},~{dirflag},~{fpsr},~{flags},~{memory}"( x86_mmx %tmp1 ) nounwind 		; <i32> [#uses=0]
+	%tmp8 = tail call i32 asm sideeffect "movd $1, $0", "=={bp},{mm4},~{dirflag},~{fpsr},~{flags},~{memory}"( <1 x i64> %tmp1 ) nounwind 		; <i32> [#uses=0]
 	ret i32 undef
 }
diff --git a/llvm/test/CodeGen/X86/2008-08-23-64Bit-maskmovq.ll b/llvm/test/CodeGen/X86/2008-08-23-64Bit-maskmovq.ll
index 594edbaad2944..4a4477823a61d 100644
--- a/llvm/test/CodeGen/X86/2008-08-23-64Bit-maskmovq.ll
+++ b/llvm/test/CodeGen/X86/2008-08-23-64Bit-maskmovq.ll
@@ -17,13 +17,13 @@ entry:
 	br i1 false, label %bb.nph144.split, label %bb133
 
 bb.nph144.split:		; preds = %entry
-        %tmp = bitcast <8 x i8> zeroinitializer to x86_mmx
-        %tmp2 = bitcast <8 x i8> zeroinitializer to x86_mmx
-	tail call void @llvm.x86.mmx.maskmovq( x86_mmx %tmp, x86_mmx %tmp2, ptr null ) nounwind
+        %tmp = bitcast <8 x i8> zeroinitializer to <1 x i64>
+        %tmp2 = bitcast <8 x i8> zeroinitializer to <1 x i64>
+	tail call void @llvm.x86.mmx.maskmovq( <1 x i64> %tmp, <1 x i64> %tmp2, ptr null ) nounwind
 	unreachable
 
 bb133:		; preds = %entry
 	ret void
 }
 
-declare void @llvm.x86.mmx.maskmovq(x86_mmx, x86_mmx, ptr) nounwind
+declare void @llvm.x86.mmx.maskmovq(<1 x i64>, <1 x i64>, ptr) nounwind
diff --git a/llvm/test/CodeGen/X86/2008-09-05-sinttofp-2xi32.ll b/llvm/test/CodeGen/X86/2008-09-05-sinttofp-2xi32.ll
index 3a112ae2a2113..20673a177ac31 100644
--- a/llvm/test/CodeGen/X86/2008-09-05-sinttofp-2xi32.ll
+++ b/llvm/test/CodeGen/X86/2008-09-05-sinttofp-2xi32.ll
@@ -26,7 +26,7 @@ entry:
 
 ; This is how to get MMX instructions.
 
-define <2 x double> @a2(x86_mmx %x) nounwind {
+define <2 x double> @a2(<1 x i64> %x) nounwind {
 ; CHECK-LABEL: a2:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    pushl %ebp
@@ -42,11 +42,11 @@ define <2 x double> @a2(x86_mmx %x) nounwind {
 ; CHECK-NEXT:    popl %ebp
 ; CHECK-NEXT:    retl
 entry:
-  %y = tail call <2 x double> @llvm.x86.sse.cvtpi2pd(x86_mmx %x)
+  %y = tail call <2 x double> @llvm.x86.sse.cvtpi2pd(<1 x i64> %x)
   ret <2 x double> %y
 }
 
-define x86_mmx @b2(<2 x double> %x) nounwind {
+define <1 x i64> @b2(<2 x double> %x) nounwind {
 ; CHECK-LABEL: b2:
 ; CHECK:       # %bb.0: # %entry
 ; CHECK-NEXT:    pushl %ebp
@@ -61,9 +61,9 @@ define x86_mmx @b2(<2 x double> %x) nounwind {
 ; CHECK-NEXT:    popl %ebp
 ; CHECK-NEXT:    retl
 entry:
-  %y = tail call x86_mmx @llvm.x86.sse.cvttpd2pi (<2 x double> %x)
-  ret x86_mmx %y
+  %y = tail call <1 x i64> @llvm.x86.sse.cvttpd2pi (<2 x double> %x)
+  ret <1 x i64> %y
 }
 
-declare <2 x double> @llvm.x86.sse.cvtpi2pd(x86_mmx)
-declare x86_mmx @llvm.x86.sse.cvttpd2pi(<2 x double>)
+declare <2 x double> @llvm.x86.sse.cvtpi2pd(<1 x i64>)
+declare <1 x i64> @llvm.x86.sse.cvttpd2pi(<2 x double>)
diff --git a/llvm/test/CodeGen/X86/2011-06-14-mmx-inlineasm.ll b/llvm/test/CodeGen/X86/2011-06-14-mmx-inlineasm.ll
index 306aeed1ace3e..582ebb9bdcfd1 100644
--- a/llvm/test/CodeGen/X86/2011-06-14-mmx-inlineasm.ll
+++ b/llvm/test/CodeGen/X86/2011-06-14-mmx-inlineasm.ll
@@ -3,14 +3,14 @@
 target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128-n8:16:32"
 target triple = "i386-apple-macosx10.6.6"
 
-%0 = type { x86_mmx, x86_mmx, x86_mmx, x86_mmx, x86_mmx, x86_mmx, x86_mmx }
+%0 = type { <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64>, <1 x i64> }
 
 define i32 @pixman_fill_mmx(ptr nocapture %bits, i32 %stride, i32 %bpp, i32 %x, i32 %y, i32 %width, i32 %height, i32 %xor) nounwind ssp {
 entry:
   %conv = zext i32 %xor to i64
   %shl = shl nuw i64 %conv, 32
   %or = or i64 %shl, %conv
-  %0 = bitcast i64 %or to x86_mmx
+  %0 = bitcast i64 %or to <1 x i64>
 ; CHECK:      movq [[MMXR:%mm[0-7],]] {{%mm[0-7]}}
 ; CHECK-NEXT: movq [[MMXR]] {{%mm[0-7]}}
 ; CHECK-NEXT: movq [[MMXR]] {{%mm[0-7]}}
@@ -18,7 +18,7 @@ entry:
 ; CHECK-NEXT: movq [[MMXR]] {{%mm[0-7]}}
 ; CHECK-NEXT: movq [[MMXR]] {{%mm[0-7]}}
 ; CHECK-NEXT: movq [[MMXR]] {{%mm[0-7]}}
-  %1 = tail call %0 asm "movq\09\09$7,\09$0\0Amovq\09\09$7,\09$1\0Amovq\09\09$7,\09$2\0Amovq\09\09$7,\09$3\0Amovq\09\09$7,\09$4\0Amovq\09\09$7,\09$5\0Amovq\09\09$7,\09$6\0A", "=&y,=&y,=&y,=&y,=&y,=&y,=y,y,~{dirflag},~{fpsr},~{flags}"(x86_mmx %0) nounwind, !srcloc !0
+  %1 = tail call %0 asm "movq\09\09$7,\09$0\0Amovq\09\09$7,\09$1\0Amovq\09\09$7,\09$2\0Amovq\09\09$7,\09$3\0Amovq\09\09$7,\09$4\0Amovq\09\09$7,\09$5\0Amovq\09\09$7,\09$6\0A", "=&y,=&y,=&y,=&y,=&y,=&y,=y,y,~{dirflag},~{fpsr},~{flags}"(<1 x i64> %0) nounwind, !srcloc !0
   %asmresult = extractvalue %0 %1, 0
   %asmresult6 = extractvalue %0 %1, 1
   %asmresult7 = extractvalue %0 %1, 2
@@ -34,7 +34,7 @@ entry:
 ; CHECK-NEXT: movq {{%mm[0-7]}},
 ; CHECK-NEXT: movq {{%mm[0-7]}},
 ; CHECK-NEXT: movq {{%mm[0-7]}},
-  tail call void asm sideeffect "movq\09$1,\09  ($0)\0Amovq\09$2,\09 8($0)\0Amovq\09$3,\0916($0)\0Amovq\09$4,\0924($0)\0Amovq\09$5,\0932($0)\0Amovq\09$6,\0940($0)\0Amovq\09$7,\0948($0)\0Amovq\09$8,\0956($0)\0A", "r,y,y,y,y,y,y,y,y,~{memory},~{dirflag},~{fpsr},~{flags}"(ptr undef, x86_mmx %0, x86_mmx %asmresult, x86_mmx %asmresult6, x86_mmx %asmresult7, x86_mmx %asmresult8, x86_mmx %asmresult9, x86_mmx %asmresult10, x86_mmx %asmresult11) nounwind, !srcloc !1
+  tail call void asm sideeffect "movq\09$1,\09  ($0)\0Amovq\09$2,\09 8($0)\0Amovq\09$3,\0916($0)\0Amovq\09$4,\0924($0)\0Amovq\09$5,\0932($0)\0Amovq\09$6,\0940($0)\0Amovq\09$7,\0948($0)\0Amovq\09$8,\0956($0)\0A", "r,y,y,y,y,y,y,y,y,~{memory},~{dirflag},~{fpsr},~{flags}"(ptr undef, <1 x i64> %0, <1 x i64> %asmresult, <1 x i64> %asmresult6, <1 x i64> %asmresult7, <1 x i64> %asmresult8, <1 x i64> %asmresult9, <1 x i64> %asmresult10, <1 x i64> %asmresult11) nounwind, !srcloc !1
   tail call void @llvm.x86.mmx.emms() nounwind
   ret i32 1
 }
diff --git a/llvm/test/CodeGen/X86/avx-vbroadcast.ll b/llvm/test/CodeGen/X86/avx-vbroadcast.ll
index 3f6f8c01b9049..c69886df82bdf 100644
--- a/llvm/test/CodeGen/X86/avx-vbroadcast.ll
+++ b/llvm/test/CodeGen/X86/avx-vbroadcast.ll
@@ -1011,7 +1011,7 @@ define float @broadcast_lifetime() nounwind {
   ret float %7
 }
 
-define <8 x i16> @broadcast_x86_mmx(x86_mmx %tmp) nounwind {
+define <8 x i16> @broadcast_x86_mmx(<1 x i64> %tmp) nounwind {
 ; X86-LABEL: broadcast_x86_mmx:
 ; X86:       ## %bb.0: ## %bb
 ; X86-NEXT:    vmovddup {{.*#+}} xmm0 = mem[0,0]
@@ -1023,7 +1023,7 @@ define <8 x i16> @broadcast_x86_mmx(x86_mmx %tmp) nounwind {
 ; X64-NEXT:    vpshufd {{.*#+}} xmm0 = xmm0[0,1,0,1]
 ; X64-NEXT:    retq
 bb:
-  %tmp1 = bitcast x86_mmx %tmp to i64
+  %tmp1 = bitcast <1 x i64> %tmp to i64
   %tmp2 = insertelement <2 x i64> undef, i64 %tmp1, i32 0
   %tmp3 = bitcast <2 x i64> %tmp2 to <8 x i16>
   %tmp4 = shufflevector <8 x i16> %tmp3, <8 x i16> poison, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
diff --git a/llvm/test/CodeGen/X86/avx2-vbroadcast.ll b/llvm/test/CodeGen/X86/avx2-vbroadcast.ll
index fed6c2eb8ba0a..9ac0503831eb7 100644
--- a/llvm/test/CodeGen/X86/avx2-vbroadcast.ll
+++ b/llvm/test/CodeGen/X86/avx2-vbroadcast.ll
@@ -1449,7 +1449,7 @@ eintry:
   ret void
 }
 
-define <8 x i16> @broadcast_x86_mmx(x86_mmx %tmp) nounwind {
+define <8 x i16> @broadcast_x86_mmx(<1 x i64> %tmp) nounwind {
 ; X86-LABEL: broadcast_x86_mmx:
 ; X86:       ## %bb.0: ## %bb
 ; X86-NEXT:    vmovddup {{.*#+}} xmm0 = mem[0,0]
@@ -1466,7 +1466,7 @@ define <8 x i16> @broadcast_x86_mmx(x86_mmx %tmp) nounwind {
 ; X64-AVX512VL-NEXT:    vpbroadcastq %rdi, %xmm0
 ; X64-AVX512VL-NEXT:    retq
 bb:
-  %tmp1 = bitcast x86_mmx %tmp to i64
+  %tmp1 = bitcast <1 x i64> %tmp to i64
   %tmp2 = insertelement <2 x i64> undef, i64 %tmp1, i32 0
   %tmp3 = bitcast <2 x i64> %tmp2 to <8 x i16>
   %tmp4 = shufflevector <8 x i16> %tmp3, <8 x i16> poison, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
diff --git a/llvm/test/CodeGen/X86/bitcast-mmx.ll b/llvm/test/CodeGen/X86/bitcast-mmx.ll
index 061723a0966e2..fe48a96a51d3e 100644
--- a/llvm/test/CodeGen/X86/bitcast-mmx.ll
+++ b/llvm/test/CodeGen/X86/bitcast-mmx.ll
@@ -17,9 +17,9 @@ define i32 @t0(i64 %x) nounwind {
 ; X64-NEXT:    retq
 entry:
   %0 = bitcast i64 %x to <4 x i16>
-  %1 = bitcast <4 x i16> %0 to x86_mmx
-  %2 = tail call x86_mmx @llvm.x86.sse.pshuf.w(x86_mmx %1, i8 -18)
-  %3 = bitcast x86_mmx %2 to <4 x i16>
+  %1 = bitcast <4 x i16> %0 to <1 x i64>
+  %2 = tail call <1 x i64> @llvm.x86.sse.pshuf.w(<1 x i64> %1, i8 -18)
+  %3 = bitcast <1 x i64> %2 to <4 x i16>
   %4 = bitcast <4 x i16> %3 to <1 x i64>
   %5 = extractelement <1 x i64> %4, i32 0
   %6 = bitcast i64 %5 to <2 x i32>
@@ -52,9 +52,9 @@ define i64 @t1(i64 %x, i32 %n) nounwind {
 ; X64-NEXT:    movq %mm0, %rax
 ; X64-NEXT:    retq
 entry:
-  %0 = bitcast i64 %x to x86_mmx
-  %1 = tail call x86_mmx @llvm.x86.mmx.pslli.q(x86_mmx %0, i32 %n)
-  %2 = bitcast x86_mmx %1 to i64
+  %0 = bitcast i64 %x to <1 x i64>
+  %1 = tail call <1 x i64> @llvm.x86.mmx.pslli.q(<1 x i64> %0, i32 %n)
+  %2 = bitcast <1 x i64> %1 to i64
   ret i64 %2
 }
 
@@ -88,11 +88,11 @@ define i64 @t2(i64 %x, i32 %n, i32 %w) nounwind {
 entry:
   %0 = insertelement <2 x i32> undef, i32 %w, i32 0
   %1 = insertelement <2 x i32> %0, i32 0, i32 1
-  %2 = bitcast <2 x i32> %1 to x86_mmx
-  %3 = tail call x86_mmx @llvm.x86.mmx.pslli.q(x86_mmx %2, i32 %n)
-  %4 = bitcast i64 %x to x86_mmx
-  %5 = tail call x86_mmx @llvm.x86.mmx.por(x86_mmx %4, x86_mmx %3)
-  %6 = bitcast x86_mmx %5 to i64
+  %2 = bitcast <2 x i32> %1 to <1 x i64>
+  %3 = tail call <1 x i64> @llvm.x86.mmx.pslli.q(<1 x i64> %2, i32 %n)
+  %4 = bitcast i64 %x to <1 x i64>
+  %5 = tail call <1 x i64> @llvm.x86.mmx.por(<1 x i64> %4, <1 x i64> %3)
+  %6 = bitcast <1 x i64> %5 to i64
   ret i64 %6
 }
 
@@ -123,14 +123,14 @@ define i64 @t3(ptr %y, ptr %n) nounwind {
 ; X64-NEXT:    movq %mm0, %rax
 ; X64-NEXT:    retq
 entry:
-  %0 = load x86_mmx, ptr %y, align 8
+  %0 = load <1 x i64>, ptr %y, align 8
   %1 = load i32, ptr %n, align 4
-  %2 = tail call x86_mmx @llvm.x86.mmx.pslli.q(x86_mmx %0, i32 %1)
-  %3 = bitcast x86_mmx %2 to i64
+  %2 = tail call <1 x i64> @llvm.x86.mmx.pslli.q(<1 x i64> %0, i32 %1)
+  %3 = bitcast <1 x i64> %2 to i64
   ret i64 %3
 }
 
-declare x86_mmx @llvm.x86.sse.pshuf.w(x86_mmx, i8)
-declare x86_mmx @llvm.x86.mmx.pslli.q(x86_mmx, i32)
-declare x86_mmx @llvm.x86.mmx.por(x86_mmx, x86_mmx)
+declare <1 x i64> @llvm.x86.sse.pshuf.w(<1 x i64>, i8)
+declare <1 x i64> @llvm.x86.mmx.pslli.q(<1 x i64>, i32)
+declare <1 x i64> @llvm.x86.mmx.por(<1 x i64>, <1 x i64>)
 
diff --git a/llvm/test/CodeGen/X86/expand-vr64-gr64-copy.mir b/llvm/test/CodeGen/X86/expand-vr64-gr64-copy.mir
index 559560ac20f8a..aa637e7408f22 100644
--- a/llvm/test/CodeGen/X86/expand-vr64-gr64-copy.mir
+++ b/llvm/test/CodeGen/X86/expand-vr64-gr64-copy.mir
@@ -6,9 +6,9 @@
 
   define <2 x i32> @test_paddw(<2 x i32> %a) nounwind readnone {
   entry:
-    %0 = bitcast <2 x i32> %a to x86_mmx
-    %1 = tail call x86_mmx @llvm.x86.mmx.padd.w(x86_mmx %0, x86_mmx %0)
-    %2 = bitcast x86_mmx %1 to <2 x i32>
+    %0 = bitcast <2 x i32> %a to <1 x i64>
+    %1 = tail call <1 x i64> @llvm.x86.mmx.padd.w(<1 x i64> %0, <1 x i64> %0)
+    %2 = bitcast <1 x i64> %1 to <2 x i32>
     ret <2 x i32> %2
   }
 
diff --git a/llvm/test/CodeGen/X86/fast-isel-bc.ll b/llvm/test/CodeGen/X86/fast-isel-bc.ll
index e3bb5e7176e57..64bdfd6d4f863 100644
--- a/llvm/test/CodeGen/X86/fast-isel-bc.ll
+++ b/llvm/test/CodeGen/X86/fast-isel-bc.ll
@@ -4,7 +4,7 @@
 
 ; PR4684
 
-declare void @func2(x86_mmx)
+declare void @func2(<1 x i64>)
 
 ; This isn't spectacular, but it's MMX code at -O0...
 
@@ -28,7 +28,7 @@ define void @func1() nounwind {
 ; X64-NEXT:    callq _func2
 ; X64-NEXT:    popq %rax
 ; X64-NEXT:    retq
-  %tmp0 = bitcast <2 x i32> <i32 0, i32 2> to x86_mmx
-  call void @func2(x86_mmx %tmp0)
+  %tmp0 = bitcast <2 x i32> <i32 0, i32 2> to <1 x i64>
+  call void @func2(<1 x i64> %tmp0)
   ret void
 }
diff --git a/llvm/test/CodeGen/X86/fast-isel-nontemporal.ll b/llvm/test/CodeGen/X86/fast-isel-nontemporal.ll
index c13fdae540d0b..3b1a8f541b490 100644
--- a/llvm/test/CodeGen/X86/fast-isel-nontemporal.ll
+++ b/llvm/test/CodeGen/X86/fast-isel-nontemporal.ll
@@ -104,12 +104,12 @@ define void @test_mmx(ptr nocapture %a0, ptr nocapture %a1) {
 ; ALL-NEXT:    movntq %mm0, (%rsi)
 ; ALL-NEXT:    retq
 entry:
-  %0 = load x86_mmx, ptr %a0
-  %1 = call x86_mmx @llvm.x86.mmx.psrli.q(x86_mmx %0, i32 3)
-  store x86_mmx %1, ptr %a1, align 8, !nontemporal !1
+  %0 = load <1 x i64>, ptr %a0
+  %1 = call <1 x i64> @llvm.x86.mmx.psrli.q(<1 x i64> %0, i32 3)
+  store <1 x i64> %1, ptr %a1, align 8, !nontemporal !1
   ret void
 }
-declare x86_mmx @llvm.x86.mmx.psrli.q(x86_mmx, i32) nounwind readnone
+declare <1 x i64> @llvm.x86.mmx.psrli.q(<1 x i64>, i32) nounwind readnone
 
 ;
 ; 128-bit Vector Stores
diff --git a/llvm/test/CodeGen/X86/mmx-arg-passing-x86-64.ll b/llvm/test/CodeGen/X86/mmx-arg-passing-x86-64.ll
index 54f048eb697f6..439d7efc2d755 100644
--- a/llvm/test/CodeGen/X86/mmx-arg-passing-x86-64.ll
+++ b/llvm/test/CodeGen/X86/mmx-arg-passing-x86-64.ll
@@ -14,12 +14,12 @@ define void @t3() nounwind  {
 ; X86-64-NEXT:    xorl %eax, %eax
 ; X86-64-NEXT:    jmp _pass_v8qi ## TAILCALL
   %tmp3 = load <8 x i8>, ptr @g_v8qi, align 8
-  %tmp3a = bitcast <8 x i8> %tmp3 to x86_mmx
-  %tmp4 = tail call i32 (...) @pass_v8qi( x86_mmx %tmp3a ) nounwind
+  %tmp3a = bitcast <8 x i8> %tmp3 to <1 x i64>
+  %tmp4 = tail call i32 (...) @pass_v8qi( <1 x i64> %tmp3a ) nounwind
   ret void
 }
 
-define void @t4(x86_mmx %v1, x86_mmx %v2) nounwind  {
+define void @t4(<1 x i64> %v1, <1 x i64> %v2) nounwind  {
 ; X86-64-LABEL: t4:
 ; X86-64:       ## %bb.0:
 ; X86-64-NEXT:    movq %rdi, %xmm0
@@ -28,11 +28,11 @@ define void @t4(x86_mmx %v1, x86_mmx %v2) nounwind  {
 ; X86-64-NEXT:    movq %xmm1, %rdi
 ; X86-64-NEXT:    xorl %eax, %eax
 ; X86-64-NEXT:    jmp _pass_v8qi ## TAILCALL
-  %v1a = bitcast x86_mmx %v1 to <8 x i8>
-  %v2b = bitcast x86_mmx %v2 to <8 x i8>
+  %v1a = bitcast <1 x i64> %v1 to <8 x i8>
+  %v2b = bitcast <1 x i64> %v2 to <8 x i8>
   %tmp3 = add <8 x i8> %v1a, %v2b
-  %tmp3a = bitc...
[truncated]


github-actions bot commented Jul 25, 2024

⚠️ C/C++ code formatter, clang-format found issues in your code. ⚠️

You can test this locally with the following command:
git-clang-format --diff dfeb3991fb489a703f631ab0c34b58f80568038d 9bc2e8692acbd70c08aece752445ead024aa6117 --extensions cpp -- llvm/lib/AsmParser/LLLexer.cpp
View the diff from clang-format here.
diff --git a/llvm/lib/AsmParser/LLLexer.cpp b/llvm/lib/AsmParser/LLLexer.cpp
index 7c97f7afbe..22950d325d 100644
--- a/llvm/lib/AsmParser/LLLexer.cpp
+++ b/llvm/lib/AsmParser/LLLexer.cpp
@@ -837,7 +837,7 @@ lltok::Kind LLLexer::LexIdentifier() {
   TYPEKEYWORD("fp128",     Type::getFP128Ty(Context));
   TYPEKEYWORD("ppc_fp128", Type::getPPC_FP128Ty(Context));
   TYPEKEYWORD("label",     Type::getLabelTy(Context));
-  TYPEKEYWORD("metadata",  Type::getMetadataTy(Context));
+  TYPEKEYWORD("metadata", Type::getMetadataTy(Context));
   TYPEKEYWORD("x86_amx",   Type::getX86_AMXTy(Context));
   TYPEKEYWORD("token",     Type::getTokenTy(Context));
   TYPEKEYWORD("ptr",       PointerType::getUnqual(Context));

Comment on lines 17 to 18
%tmp4 = bitcast <1 x i64> %B to <1 x i64> ; <<4 x i16>> [#uses=1]
%tmp6 = bitcast <1 x i64> %A to <1 x i64> ; <<4 x i16>> [#uses=1]
Contributor


Can we remove such bitcast <1 x i64>.*to <1 x i64>?

Member Author


There's indeed a lot of extraneous bitcast throughout all these modified tests, many where it's casting through a series of different types and back to the first. I started to look at cleaning it up, but since the plan is actually to delete the MMX intrinsics in the IR, and mostly the bitcast mess is around MMX IR intrinsics, I put that aside.

Contributor


My concern is that a future developer may be confused by the code and think it's intended for some purpose. It should be fine to just clean up the specific <1 x i64> to <1 x i64> bitcasts, which don't look like many to me.

Member Author


No, not too many just for that. Done.
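For anyone doing a similar cleanup, the no-op self-bitcasts discussed in this thread can be removed mechanically. A rough sketch against a throwaway sample file (the path and value names below are illustrative, not taken from the PR; GNU sed is assumed):

```shell
# Build a tiny sample resembling the pattern above, then delete the
# no-op bitcast and rewrite its use to the original value.
cat > /tmp/selfcast.ll <<'EOF'
  %tmp4 = bitcast <1 x i64> %mask1 to <1 x i64>
  tail call void @llvm.x86.mmx.maskmovq(<1 x i64> %tmp4, <1 x i64> %c, ptr %P)
EOF
sed -i -e '/= bitcast <1 x i64> %mask1 to <1 x i64>/d' \
       -e 's/%tmp4\b/%mask1/g' /tmp/selfcast.ll
cat /tmp/selfcast.ll
```

This only handles the single-use, single-name case; chained or multi-use casts still need a manual pass, which matches the hand-editing done here.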

Contributor

@phoebewang phoebewang left a comment


LGTM.

@jyknight jyknight merged commit b7e4fba into llvm:main Jul 28, 2024
4 of 7 checks passed
@jyknight jyknight deleted the remove-mmx-x86mmx-cleanup branch July 28, 2024 22:12
banach-space pushed a commit to banach-space/llvm-project that referenced this pull request Aug 7, 2024
After llvm#98505, the textual IR keyword `x86_mmx` was temporarily made to
parse as `<1 x i64>`, so as not to require a lot of test update noise.

This completes the removal of the type, by removing the`x86_mmx` keyword
from the IR parser, and making the (now no-op) test updates via `sed -i
's/\bx86_mmx\b/<1 x i64>/g' $(git grep -l x86_mmx llvm/test/)`.
Resulting bitcasts from <1 x i64> to itself were then manually deleted.

Changes to llvm/test/Bitcode/compatibility-$VERSION.ll were reverted, as
they're intended to be equivalent to the .bc file, if parsed by old
LLVM, so shouldn't be updated.

A few tests were removed, as they're no longer testing anything, in the
following files:
- llvm/test/Transforms/GlobalOpt/x86_mmx_load.ll
- llvm/test/Transforms/InstCombine/cast.ll
- llvm/test/Transforms/InstSimplify/ConstProp/gep-zeroinit-vector.ll

Works towards issue llvm#98272.
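The mechanical rename described in the commit message above can be exercised on a scratch file. A minimal sketch (the scratch path is illustrative; the real command ran over `llvm/test/` via `git grep -l`, and GNU sed is assumed):

```shell
# Apply the same word-boundary rename to a scratch .ll file.
mkdir -p /tmp/mmx-demo
cat > /tmp/mmx-demo/t.ll <<'EOF'
declare void @llvm.x86.mmx.maskmovq(x86_mmx, x86_mmx, ptr)
@R = external global x86_mmx
EOF
sed -i 's/\bx86_mmx\b/<1 x i64>/g' /tmp/mmx-demo/t.ll
cat /tmp/mmx-demo/t.ll
```

Note that the intrinsic name `llvm.x86.mmx.maskmovq` uses dots rather than an underscore, so the `\b`-anchored pattern rewrites only the type keyword and leaves intrinsic names untouched.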
searlmc1 pushed a commit to ROCm/llvm-project that referenced this pull request Aug 22, 2024
Revert: b7e4fba Cleanup x86_mmx after removing IR type  (llvm#100646) (Reason: dependent on dfeb399)
Change-Id: I8e9a9a897b4a44dd5f0a79dd7a630f0051a3f1ad
searlmc1 pushed a commit to ROCm/llvm-project that referenced this pull request Nov 9, 2024
cherry-pick:
dfeb399 [email protected] Thu Jul 25 09:19:22 2024 -0400 Remove the `x86_mmx` IR type. (llvm#98505)
b7e4fba [email protected] Sun Jul 28 18:12:47 2024 -0400 Cleanup x86_mmx after removing IR type  (llvm#100646)

Change-Id: I987eda387fc403ab249f9d48eeb13fd66606343a