1 //===---------------------------------------------------------------------===//
2 // Random ideas for the X86 backend: SSE-specific stuff.
3 //===---------------------------------------------------------------------===//
5 - Consider eliminating the unaligned SSE load intrinsics, replacing them with
6 unaligned LLVM load instructions.
8 //===---------------------------------------------------------------------===//
10 Expand libm rounding functions inline: Significant speedups possible.
11 http://gcc.gnu.org/ml/gcc-patches/2006-10/msg00909.html
13 //===---------------------------------------------------------------------===//
When compiled with unsafe math enabled, "main" should enable SSE DAZ mode and
other fast SSE modes.
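A minimal sketch of what that startup code could do, using the MXCSR intrinsics
(the function name is ours; 0x8000 is the FTZ bit and 0x0040 the DAZ bit):

#include <xmmintrin.h>

/* Sketch: turn on flush-to-zero (bit 15) and denormals-are-zero (bit 6)
   in MXCSR when unsafe FP math permits it. */
void enable_fast_sse_modes(void) {
  _mm_setcsr(_mm_getcsr() | 0x8040);
}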
18 //===---------------------------------------------------------------------===//
20 Think about doing i64 math in SSE regs.
22 //===---------------------------------------------------------------------===//
This testcase should have no SSE instructions in it, and only one load from
a constant pool:
double %test3(bool %B) {
        %C = select bool %B, double 123.412, double 523.01123123
        ret double %C
}
32 Currently, the select is being lowered, which prevents the dag combiner from
33 turning 'select (load CPI1), (load CPI2)' -> 'load (select CPI1, CPI2)'
35 The pattern isel got this one right.
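In C terms the testcase is just a selection between two FP constants:

/* Ideally: compute the address of the selected constant-pool entry and do a
   single load, with no SSE select sequence. */
double test3(int B) {
  return B ? 123.412 : 523.01123123;
}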
37 //===---------------------------------------------------------------------===//
SSE doesn't have [mem] op= reg instructions. If we have an SSE instruction
like this:

  X = add(Y, Z)
44 and the register allocator decides to spill X, it is cheaper to emit this as:
55 ..and this uses one fewer register (so this should be done at load folding
56 time, not at spiller time). *Note* however that this can only be done
57 if Y is dead. Here's a testcase:
59 @.str_3 = external global [15 x i8] ; <[15 x i8]*> [#uses=0]
60 declare void @printf(i32, ...)
65 no_exit.i7: ; preds = %no_exit.i7, %build_tree.exit
66 %tmp.0.1.0.i9 = phi double [ 0.000000e+00, %build_tree.exit ], [ %tmp.34.i18, %no_exit.i7 ] ; <double> [#uses=1]
67 %tmp.0.0.0.i10 = phi double [ 0.000000e+00, %build_tree.exit ], [ %tmp.28.i16, %no_exit.i7 ] ; <double> [#uses=1]
68 %tmp.28.i16 = add double %tmp.0.0.0.i10, 0.000000e+00 ; <double> [#uses=1]
69 %tmp.34.i18 = add double %tmp.0.1.0.i9, 0.000000e+00 ; <double> [#uses=2]
70 br i1 false, label %Compute_Tree.exit23, label %no_exit.i7
72 Compute_Tree.exit23: ; preds = %no_exit.i7
73 tail call void (i32, ...)* @printf( i32 0 )
74 store double %tmp.34.i18, double* null
83 *** movsd %XMM2, QWORD PTR [%ESP + 8]
84 *** addsd %XMM2, %XMM1
85 *** movsd QWORD PTR [%ESP + 8], %XMM2
86 jmp .BBmain_1 # no_exit.i7
This is a bugpoint-reduced testcase, which is why it doesn't make
much sense (e.g. it's an infinite loop). :)
91 //===---------------------------------------------------------------------===//
93 SSE should implement 'select_cc' using 'emulated conditional moves' that use
94 pcmp/pand/pandn/por to do a selection instead of a conditional branch:
96 double %X(double %Y, double %Z, double %A, double %B) {
97 %C = setlt double %A, %B
98 %z = add double %Z, 0.0 ;; select operand is not a load
        %D = select bool %C, double %Y, double %z
        ret double %D
}
108 addsd 24(%esp), %xmm0
109 movsd 32(%esp), %xmm1
110 movsd 16(%esp), %xmm2
111 ucomisd 40(%esp), %xmm1
121 //===---------------------------------------------------------------------===//
123 It's not clear whether we should use pxor or xorps / xorpd to clear XMM
124 registers. The choice may depend on subtarget information. We should do some
125 more experiments on different x86 machines.
127 //===---------------------------------------------------------------------===//
Lower memcpy / memset to a series of SSE 128 bit move instructions when it's
feasible.
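For example, a small fixed-size memcpy could be expanded along these lines (a
sketch with unaligned SSE2 moves; a real expansion would pick aligned moves
when the alignment is known, and copy64 is our name):

#include <emmintrin.h>

/* Copy 64 bytes as four 128-bit SSE moves instead of calling memcpy. */
void copy64(void *dst, const void *src) {
  const __m128i *s = (const __m128i *)src;
  __m128i *d = (__m128i *)dst;
  for (int i = 0; i < 4; ++i)
    _mm_storeu_si128(d + i, _mm_loadu_si128(s + i));
}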
132 //===---------------------------------------------------------------------===//
134 Teach the coalescer to commute 2-addr instructions, allowing us to eliminate
135 the reg-reg copy in this example:
float foo(int *x, float *y, unsigned c) {
  float res = 0.0;
  unsigned i;
  for (i = 0; i < c; i++) {
    float xx = (float)x[i];
    xx = xx * y[i];
    xx += res;
    res = xx;
  }
  return res;
}
150 cvtsi2ss %XMM0, DWORD PTR [%EDX + 4*%ESI]
151 mulss %XMM0, DWORD PTR [%EAX + 4*%ESI]
155 **** movaps %XMM1, %XMM0
156 jb LBB_foo_3 # no_exit
158 //===---------------------------------------------------------------------===//
Codegen

  if (copysign(1.0, x) == copysign(1.0, y))

into a direct comparison of the sign bits (xor the bit patterns and test the
sign bit) when using SSE.
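A sketch of the equivalent bit-level test (the helper name is ours):

#include <stdint.h>
#include <string.h>

/* copysign(1.0, x) == copysign(1.0, y) is just "same sign bit", which can be
   tested with integer ops on the bit patterns. */
static int same_sign(double x, double y) {
  uint64_t xi, yi;
  memcpy(&xi, &x, sizeof xi);
  memcpy(&yi, &y, sizeof yi);
  return ((xi ^ yi) >> 63) == 0;
}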
166 //===---------------------------------------------------------------------===//
Use movhps to update upper 64-bits of a v4sf value. Also movlps on lower half
of a v4sf value.
171 //===---------------------------------------------------------------------===//
Better codegen for vector_shuffles like this { x, 0, 0, 0 } or { x, 0, x, 0 }.
Perhaps use pxor / xorp* to clear an XMM register first?
176 //===---------------------------------------------------------------------===//
How to decide when to use the "floating point version" of logical ops? Here are
some code fragments:
181 movaps LCPI5_5, %xmm2
184 mulps 8656(%ecx), %xmm3
185 addps 8672(%ecx), %xmm3
191 movaps LCPI5_5, %xmm1
194 mulps 8656(%ecx), %xmm3
195 addps 8672(%ecx), %xmm3
199 movaps %xmm3, 112(%esp)
Due to some minor source change, the latter case ended up using orps and movaps
instead of por and movdqa. Does it matter?
205 //===---------------------------------------------------------------------===//
207 X86RegisterInfo::copyRegToReg() returns X86::MOVAPSrr for VR128. Is it possible
to choose between movaps, movapd, and movdqa based on types of source and
destination?
211 How about andps, andpd, and pand? Do we really care about the type of the packed
elements? If not, why not always use the "ps" variants which are likely to be
shorter?
215 //===---------------------------------------------------------------------===//
217 External test Nurbs exposed some problems. Look for
__ZN15Nurbs_SSE_Cubic17TessellateSurfaceE, bb cond_next140. This is what icc
generates:
221 movaps (%edx), %xmm2 #59.21
222 movaps (%edx), %xmm5 #60.21
223 movaps (%edx), %xmm4 #61.21
224 movaps (%edx), %xmm3 #62.21
225 movl 40(%ecx), %ebp #69.49
226 shufps $0, %xmm2, %xmm5 #60.21
227 movl 100(%esp), %ebx #69.20
228 movl (%ebx), %edi #69.20
229 imull %ebp, %edi #69.49
230 addl (%eax), %edi #70.33
231 shufps $85, %xmm2, %xmm4 #61.21
232 shufps $170, %xmm2, %xmm3 #62.21
233 shufps $255, %xmm2, %xmm2 #63.21
234 lea (%ebp,%ebp,2), %ebx #69.49
236 lea -3(%edi,%ebx), %ebx #70.33
238 addl 32(%ecx), %ebx #68.37
239 testb $15, %bl #91.13
240 jne L_B1.24 # Prob 5% #91.13
242 This is the llvm code after instruction scheduling:
244 cond_next140 (0xa910740, LLVM BB @0xa90beb0):
245 %reg1078 = MOV32ri -3
246 %reg1079 = ADD32rm %reg1078, %reg1068, 1, %NOREG, 0
247 %reg1037 = MOV32rm %reg1024, 1, %NOREG, 40
248 %reg1080 = IMUL32rr %reg1079, %reg1037
249 %reg1081 = MOV32rm %reg1058, 1, %NOREG, 0
250 %reg1038 = LEA32r %reg1081, 1, %reg1080, -3
251 %reg1036 = MOV32rm %reg1024, 1, %NOREG, 32
252 %reg1082 = SHL32ri %reg1038, 4
253 %reg1039 = ADD32rr %reg1036, %reg1082
254 %reg1083 = MOVAPSrm %reg1059, 1, %NOREG, 0
255 %reg1034 = SHUFPSrr %reg1083, %reg1083, 170
256 %reg1032 = SHUFPSrr %reg1083, %reg1083, 0
257 %reg1035 = SHUFPSrr %reg1083, %reg1083, 255
258 %reg1033 = SHUFPSrr %reg1083, %reg1083, 85
259 %reg1040 = MOV32rr %reg1039
260 %reg1084 = AND32ri8 %reg1039, 15
262 JE mbb<cond_next204,0xa914d30>
264 Still ok. After register allocation:
266 cond_next140 (0xa910740, LLVM BB @0xa90beb0):
268 %EDX = MOV32rm <fi#3>, 1, %NOREG, 0
269 ADD32rm %EAX<def&use>, %EDX, 1, %NOREG, 0
270 %EDX = MOV32rm <fi#7>, 1, %NOREG, 0
271 %EDX = MOV32rm %EDX, 1, %NOREG, 40
272 IMUL32rr %EAX<def&use>, %EDX
273 %ESI = MOV32rm <fi#5>, 1, %NOREG, 0
274 %ESI = MOV32rm %ESI, 1, %NOREG, 0
275 MOV32mr <fi#4>, 1, %NOREG, 0, %ESI
276 %EAX = LEA32r %ESI, 1, %EAX, -3
277 %ESI = MOV32rm <fi#7>, 1, %NOREG, 0
278 %ESI = MOV32rm %ESI, 1, %NOREG, 32
280 SHL32ri %EDI<def&use>, 4
281 ADD32rr %EDI<def&use>, %ESI
282 %XMM0 = MOVAPSrm %ECX, 1, %NOREG, 0
283 %XMM1 = MOVAPSrr %XMM0
284 SHUFPSrr %XMM1<def&use>, %XMM1, 170
285 %XMM2 = MOVAPSrr %XMM0
286 SHUFPSrr %XMM2<def&use>, %XMM2, 0
287 %XMM3 = MOVAPSrr %XMM0
288 SHUFPSrr %XMM3<def&use>, %XMM3, 255
289 SHUFPSrr %XMM0<def&use>, %XMM0, 85
291 AND32ri8 %EBX<def&use>, 15
293 JE mbb<cond_next204,0xa914d30>
This looks really bad. The problem is that shufps is a destructive opcode: since
the same value appears as operand two in more than one shufps op, a number of
copies result. Note that icc suffers from the same problem. Either the
instruction selector should select pshufd, or the register allocator should
perform the two-address to three-address transformation.
301 It also exposes some other problems. See MOV32ri -3 and the spills.
303 //===---------------------------------------------------------------------===//
305 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25500
307 LLVM is producing bad code.
309 LBB_main_4: # cond_true44
320 jne LBB_main_4 # cond_true44
There are two problems. 1) There is no need for two loop induction variables; we
can compare against 262144 * 16. 2) A known register coalescer issue: we should
be able to eliminate one of the movaps:
326 addps %xmm2, %xmm1 <=== Commute!
329 movaps %xmm1, %xmm1 <=== Eliminate!
336 jne LBB_main_4 # cond_true44
338 //===---------------------------------------------------------------------===//
342 __m128 test(float a) {
  return _mm_set_ps(0.0, 0.0, 0.0, a*a);
}
354 Because mulss doesn't modify the top 3 elements, the top elements of
355 xmm1 are already zero'd. We could compile this to:
361 //===---------------------------------------------------------------------===//
363 Here's a sick and twisted idea. Consider code like this:
365 __m128 test(__m128 a) {
  float b = *(float*)&a;
  ...
  return _mm_set_ps(0.0, 0.0, 0.0, b);
}
371 This might compile to this code:
373 movaps c(%esp), %xmm1
378 Now consider if the ... code caused xmm1 to get spilled. This might produce
381 movaps c(%esp), %xmm1
382 movaps %xmm1, c2(%esp)
386 movaps c2(%esp), %xmm1
390 However, since the reload is only used by these instructions, we could
391 "fold" it into the uses, producing something like this:
393 movaps c(%esp), %xmm1
394 movaps %xmm1, c2(%esp)
397 movss c2(%esp), %xmm0
400 ... saving two instructions.
The basic idea is that a reload from a spill slot can, if only one 4-byte
chunk is used, bring in 3 zeros plus the one element instead of all 4 elements.
This can be used to simplify a variety of shuffle operations where the other
elements are fixed zeros.
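In intrinsics terms, the narrow reload is just a movss-style load, which brings
in the one element plus three zeros (the function name is ours):

#include <xmmintrin.h>

/* movss from memory zeroes the upper three lanes, exactly the
   "3 zeros + one element" reload described above. */
__m128 narrow_reload(const float *spill_slot) {
  return _mm_load_ss(spill_slot);
}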
407 //===---------------------------------------------------------------------===//
411 #include <emmintrin.h>
412 void test(__m128d *r, __m128d *A, double B) {
  *r = _mm_loadl_pd(*A, &B);
}
419 movsd 24(%esp), %xmm0
431 movl 4(%esp), %edx #3.6
432 movl 8(%esp), %eax #3.6
433 movapd (%eax), %xmm0 #4.22
434 movlpd 12(%esp), %xmm0 #4.8
435 movapd %xmm0, (%edx) #4.3
So icc is smart enough to know that B is already in memory, so it doesn't load
it and store it back to the stack.
441 This should be fixed by eliminating the llvm.x86.sse2.loadl.pd intrinsic,
lowering it to a load+insertelement instead. We already match the load+shuffle
443 as movlpd, so this should be easy. We already get optimal code for:
445 define void @test2(<2 x double>* %r, <2 x double>* %A, double %B) {
447 %tmp2 = load <2 x double>* %A, align 16
448 %tmp8 = insertelement <2 x double> %tmp2, double %B, i32 0
  store <2 x double> %tmp8, <2 x double>* %r, align 16
  ret void
}
453 //===---------------------------------------------------------------------===//
455 __m128d test1( __m128d A, __m128d B) {
  return _mm_shuffle_pd(A, B, 0x3);
}
461 shufpd $3, %xmm1, %xmm0
463 Perhaps it's better to use unpckhpd instead?
465 unpckhpd %xmm1, %xmm0
467 Don't know if unpckhpd is faster. But it is shorter.
469 //===---------------------------------------------------------------------===//
471 This code generates ugly code, probably due to costs being off or something:
473 define void @test(float* %P, <4 x float>* %P2 ) {
474 %xFloat0.688 = load float* %P
475 %tmp = load <4 x float>* %P2
476 %inFloat3.713 = insertelement <4 x float> %tmp, float 0.0, i32 3
  store <4 x float> %inFloat3.713, <4 x float>* %P2
  ret void
}
488 shufps $50, %xmm1, %xmm2
489 shufps $132, %xmm2, %xmm0
493 Would it be better to generate:
499 pinsrw $6, %eax, %xmm0
500 pinsrw $7, %eax, %xmm0
506 //===---------------------------------------------------------------------===//
508 Some useful information in the Apple Altivec / SSE Migration Guide:
510 http://developer.apple.com/documentation/Performance/Conceptual/
511 Accelerate_sse_migration/index.html
513 e.g. SSE select using and, andnot, or. Various SSE compare translations.
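For reference, the and/andnot/or select idiom looks like this in intrinsics (a
sketch; select4 is our name):

#include <xmmintrin.h>

/* Branch-free select: result[i] = mask[i] ? a[i] : b[i], where mask comes
   from a compare such as _mm_cmplt_ps(x, y). */
static __m128 select4(__m128 mask, __m128 a, __m128 b) {
  return _mm_or_ps(_mm_and_ps(mask, a), _mm_andnot_ps(mask, b));
}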
515 //===---------------------------------------------------------------------===//
517 Add hooks to commute some CMPP operations.
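For instance, the EQ predicate is symmetric, so the operands of cmpeqps can be
swapped freely; that is the sort of fact a commute hook would expose (a sketch,
names ours):

#include <xmmintrin.h>

/* cmpeqps(a, b) and cmpeqps(b, a) produce the same mask, so the compare can
   be commuted when that avoids a copy or enables load folding. */
__m128 cmp_eq_commuted(__m128 a, __m128 b) {
  return _mm_cmpeq_ps(b, a);   /* same result as _mm_cmpeq_ps(a, b) */
}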
519 //===---------------------------------------------------------------------===//
Apply the same transformation that merged four float loads into a single 128-bit
load to loads from the constant pool.
524 //===---------------------------------------------------------------------===//
Floating point max / min are commutable when -enable-unsafe-fp-math is
527 specified. We should turn int_x86_sse_max_ss and X86ISD::FMIN etc. into other
528 nodes which are selected to max / min instructions that are marked commutable.
530 //===---------------------------------------------------------------------===//
532 We should compile this:
533 #include <xmmintrin.h>
typedef union {
  int i[4];
  float f[4];
  __m128 v;
} vector4_t;

void swizzle (const void *a, vector4_t * b, vector4_t * c) {
  b->v = _mm_loadl_pi (b->v, (__m64 *) a);
  c->v = _mm_loadl_pi (c->v, ((__m64 *) a) + 1);
}
552 movlps 8(%eax), %xmm0
566 movlps 8(%ecx), %xmm0
570 //===---------------------------------------------------------------------===//
572 These functions should produce the same code:
574 #include <emmintrin.h>
576 typedef long long __m128i __attribute__ ((__vector_size__ (16)));
578 int foo(__m128i* val) {
  return __builtin_ia32_vec_ext_v4si(*val, 1);
}
581 int bar(__m128i* val) {
589 We currently produce (with -m64):
592 pshufd $1, (%rdi), %xmm0
599 //===---------------------------------------------------------------------===//
We should materialize vector constants like "all ones" and "signbit" with
code like this:
604 cmpeqps xmm1, xmm1 ; xmm1 = all-ones
607 cmpeqps xmm1, xmm1 ; xmm1 = all-ones
     pslld xmm1, 31       ; xmm1 = all 0x80000000 (sign bit in each lane)
instead of using a load from the constant pool. The latter is important for
ABS/NEG/copysign etc.
613 //===---------------------------------------------------------------------===//
615 "converting 64-bit constant pool entry to 32-bit not necessarily beneficial"
616 http://llvm.org/PR1264
620 define double @foo(double %x) {
  %y = mul double %x, 5.000000e-01
  ret double %y
}
625 llc -march=x86-64 currently produces a 32-bit constant pool entry and this code:
627 cvtss2sd .LCPI1_0(%rip), %xmm1
630 instead of just using a 64-bit constant pool entry with this:
632 mulsd .LCPI1_0(%rip), %xmm0
634 This is due to the code in ExpandConstantFP in LegalizeDAG.cpp. It notices that
635 x86-64 indeed has an instruction to load a 32-bit float from memory and convert
636 it into a 64-bit float in a register, however it doesn't notice that this isn't
637 beneficial because it prevents the load from being folded into the multiply.
639 //===---------------------------------------------------------------------===//
643 #include <xmmintrin.h>
__m128i a;

void x(unsigned short n) {
  a = _mm_slli_epi32 (a, n);
}
void y(unsigned n) {
  a = _mm_slli_epi32 (a, n);
}
These compile to (-O3 -static -fomit-frame-pointer):
667 "y" looks good, but "x" does silly movzwl stuff around into a GPR. It seems
668 like movd would be sufficient in both cases as the value is already zero
669 extended in the 32-bit stack slot IIRC. For signed short, it should also be
safe, as a genuinely negative value would be undefined for pslld.
673 //===---------------------------------------------------------------------===//
#include <math.h>

int t1(double d) { return signbit(d); }
678 This currently compiles to:
680 movsd 16(%esp), %xmm0
687 We should use movmskp{s|d} instead.
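A sketch of the desired lowering in intrinsics terms (movmskpd extracts the
sign bits directly; the function name is ours):

#include <emmintrin.h>

/* Bit 0 of the movmskpd result is the sign bit of the low element. */
int t1_movmsk(double d) {
  return _mm_movemask_pd(_mm_set_sd(d)) & 1;
}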
689 //===---------------------------------------------------------------------===//
691 CodeGen/X86/vec_align.ll tests whether we can turn 4 scalar loads into a single
(aligned) vector load. This functionality has several problems.
694 1. The code to infer alignment from loads of globals is in the X86 backend,
695 not the dag combiner. This is because dagcombine2 needs to be able to see
696 through the X86ISD::Wrapper node, which DAGCombine can't really do.
697 2. The code for turning 4 x load into a single vector load is target
698 independent and should be moved to the dag combiner.
699 3. The code for turning 4 x load into a vector load can only handle a direct
700 load from a global or a direct load from the stack. It should be generalized
to handle any load from P, P+4, P+8, P+12, where P can be anything (see the
sketch after this list).
702 4. The alignment inference code cannot handle loads from globals in non-static
703 mode because it doesn't look through the extra dyld stub load. If you try
704 vec_align.ll without -relocation-model=static, you'll see what I mean.
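Regarding item 3, a sketch of the general shape to handle (the function name is
ours): four adjacent scalar loads that should become one 16-byte vector load
when the alignment allows it.

#include <xmmintrin.h>

/* Loads from P, P+4, P+8, P+12; with P 16-byte aligned this is one movaps. */
__m128 gather4(const float *P) {
  return _mm_set_ps(P[3], P[2], P[1], P[0]);
}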
706 //===---------------------------------------------------------------------===//
708 We should lower store(fneg(load p), q) into an integer load+xor+store, which
709 eliminates a constant pool load. For example, consider:
711 define i64 @ccosf(float %z.0, float %z.1) nounwind readonly {
713 %tmp6 = sub float -0.000000e+00, %z.1 ; <float> [#uses=1]
	%tmp20 = tail call i64 @ccoshf( float %tmp6, float %z.0 ) nounwind readonly		; <i64> [#uses=1]
	ret i64 %tmp20
}
718 This currently compiles to:
720 LCPI1_0: # <4 x float>
721 .long 2147483648 # float -0
722 .long 2147483648 # float -0
723 .long 2147483648 # float -0
724 .long 2147483648 # float -0
727 movss 16(%esp), %xmm0
729 movss 20(%esp), %xmm0
736 Note the load into xmm0, then xor (to negate), then store. In PIC mode,
737 this code computes the pic base and does two loads to do the constant pool
738 load, so the improvement is much bigger.
740 The tricky part about this xform is that the argument load/store isn't exposed
741 until post-legalize, and at that point, the fneg has been custom expanded into
742 an X86 fxor. This means that we need to handle this case in the x86 backend
743 instead of in target independent code.
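A sketch of the desired store(fneg(load)) lowering in source terms (the helper
name is ours):

#include <stdint.h>
#include <string.h>

/* Negate a float in memory by flipping its sign bit with integer ops,
   avoiding any constant-pool FP load. */
void store_fneg(const float *p, float *q) {
  uint32_t bits;
  memcpy(&bits, p, sizeof bits);
  bits ^= 0x80000000u;   /* flip the sign bit */
  memcpy(q, &bits, sizeof bits);
}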
745 //===---------------------------------------------------------------------===//
747 Non-SSE4 insert into 16 x i8 is atrociously bad.
749 //===---------------------------------------------------------------------===//
751 <2 x i64> extract is substantially worse than <2 x f64>, even if the destination
754 //===---------------------------------------------------------------------===//
756 SSE4 extract-to-mem ops aren't being pattern matched because of the AssertZext
757 sitting between the truncate and the extract.
759 //===---------------------------------------------------------------------===//
761 INSERTPS can match any insert (extract, imm1), imm2 for 4 x float, and insert
any number of 0.0 simultaneously. Currently we only use it for simple
insertions.
765 See comments in LowerINSERT_VECTOR_ELT_SSE4.
767 //===---------------------------------------------------------------------===//
769 On a random note, SSE2 should declare insert/extract of 2 x f64 as legal, not
Custom. All combinations of insert/extract reg-reg, reg-mem, and mem-reg are
legal; it'll just take a few extra patterns written in the .td file.
773 Note: this is not a code quality issue; the custom lowered code happens to be
774 right, but we shouldn't have to custom lower anything. This is probably related
775 to <2 x i64> ops being so bad.