1 //===---------------------------------------------------------------------===//
2 // Random ideas for the X86 backend.
3 //===---------------------------------------------------------------------===//
5 Improvements to the multiply -> shift/add algorithm:
6 http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html
8 //===---------------------------------------------------------------------===//
10 Improve code like this (occurs fairly frequently, e.g. in LLVM):
11 long long foo(int x) { return 1LL << x; }
13 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
14 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
15 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html
17 Another useful one would be ~0ULL >> X and ~0ULL << X.
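For reference, a minimal pair of test cases for those patterns (the function
names are just illustrative):

unsigned long long all_ones_shr(unsigned x) { return ~0ULL >> x; }
unsigned long long all_ones_shl(unsigned x) { return ~0ULL << x; }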
19 One better solution for 1LL << x is:
28 But that requires good 8-bit subreg support.
30 Also, this might be better. It's an extra shift, but it's one instruction
31 shorter, and doesn't stress 8-bit subreg support.
32 (From http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01148.html,
33 but without the unnecessary and.)
41 64-bit shifts (in general) expand to really bad code. Instead of using
42 cmovs, we should expand to a conditional branch like GCC produces.
44 //===---------------------------------------------------------------------===//
48 1. Dynamic programming based approach when compile time is not an issue.
50 2. Code duplication (addressing mode) during isel.
51 3. Other ideas from "Register-Sensitive Selection, Duplication, and
52 Sequencing of Instructions".
53 4. Scheduling for reduced register pressure. E.g. "Minimum Register
54 Instruction Sequence Problem: Revisiting Optimal Code Generation for DAGs"
55 and other related papers.
56 http://citeseer.ist.psu.edu/govindarajan01minimum.html
58 //===---------------------------------------------------------------------===//
60 Should we promote i16 to i32 to avoid partial register update stalls?
62 //===---------------------------------------------------------------------===//
64 Leave any_extend as a pseudo instruction and hint to the register
65 allocator. Delay codegen until post register allocation.
66 Note: any_extend is now turned into an INSERT_SUBREG. We still need to teach
67 the coalescer how to deal with it, though.
69 //===---------------------------------------------------------------------===//
71 It appears that icc uses push for parameter passing. This needs to be investigated.
73 //===---------------------------------------------------------------------===//
75 The instruction selector sometimes misses folding a load into a compare. The
76 pattern is written as (cmp reg, (load p)). Because the compare isn't
77 commutative, it is not matched with the load on both sides. The DAG combiner
78 should be made smart enough to canonicalize the load into the RHS of a compare
79 when it can invert the result of the compare for free.
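A hypothetical source-level example of the problem (the name is illustrative):
here the load naturally ends up on the LHS of the compare, i.e.
(cmp (load p), reg), so the pattern above does not match; commuting the compare
and using the reversed condition (slt -> sgt) would let the load fold into the
cmp instruction.

int load_on_lhs(int *p, int x) {
  return *p < x;
}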
81 //===---------------------------------------------------------------------===//
83 In many cases, LLVM generates code like this:
92 on some processors (which ones?), it is more efficient to do this:
101 Doing this correctly is tricky though, as the xor clobbers the flags.
103 //===---------------------------------------------------------------------===//
105 We should generate bts/btr/etc instructions on targets where they are cheap or
106 when codesize is important. e.g., for:
108 void setbit(int *target, int bit) {
109 *target |= (1 << bit);
111 void clearbit(int *target, int bit) {
112 *target &= ~(1 << bit);
115 //===---------------------------------------------------------------------===//
117 Instead of the following for memset(char*, 1, 10):
119 movl $16843009, 4(%edx)
120 movl $16843009, (%edx)
123 It might be better to generate
130 when we can spare a register. It reduces code size.
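For reference, the source being discussed is effectively the following (the
function name is illustrative); note that 16843009 in the code above is
0x01010101, i.e. the byte value 1 splatted across a 32-bit word:

#include <string.h>

void memset_ten_ones(char *p) {
  memset(p, 1, 10);
}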
132 //===---------------------------------------------------------------------===//
134 Evaluate what the best way to codegen sdiv X, (2^C) is. For X/8, we currently
137 define i32 @test1(i32 %X) {
151 GCC knows several different ways to codegen it, one of which is this:
161 which is probably slower, but it's interesting at least :)
163 //===---------------------------------------------------------------------===//
165 We are currently lowering large (1MB+) memmove/memcpy to rep/stosl and rep/movsl.
166 We should leave these as libcalls for everything over a much lower threshold,
167 since libc is hand-tuned for medium and large mem ops (avoiding RFO for large
168 stores, TLB preheating, etc.).
170 //===---------------------------------------------------------------------===//
172 Optimize this into something reasonable:
173 x * copysign(1.0, y) * copysign(1.0, z)
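For reference, a minimal C version of that expression (function and parameter
names are illustrative):

#include <math.h>

double signed_product(double x, double y, double z) {
  return x * copysign(1.0, y) * copysign(1.0, z);
}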
175 //===---------------------------------------------------------------------===//
177 Optimize copysign(x, *y) to use an integer load from y.
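A C-level sketch of the idea (not a proposed lowering, just the bit
manipulation it corresponds to): read the sign bit of *y with an integer load
instead of going through a floating-point load plus masking in the FP domain:

#include <stdint.h>
#include <string.h>

double copysign_via_int(double x, const double *y) {
  uint64_t xbits, ybits;
  memcpy(&xbits, &x, sizeof xbits);  /* reinterpret x without violating aliasing */
  memcpy(&ybits, y, sizeof ybits);   /* this is the integer load of *y */
  xbits = (xbits & ~(1ULL << 63)) | (ybits & (1ULL << 63));
  memcpy(&x, &xbits, sizeof x);
  return x;
}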
179 //===---------------------------------------------------------------------===//
181 The following tests perform worse with LSR:
183 lambda, siod, optimizer-eval, ackermann, hash2, nestedloop, strcat, and Treesor.
185 //===---------------------------------------------------------------------===//
187 Adding to the list of cmp / test poor codegen issues:
189 int test(__m128 *A, __m128 *B) {
190 if (_mm_comige_ss(*A, *B))
210 Note that the setae, movzbl, cmpl, and cmove can be replaced with a single cmovae.
211 There are a number of issues: 1) We are introducing a setcc between the result of
212 the intrinsic call and the select. 2) The intrinsic is expected to produce an i32
213 value, so an any_extend (which becomes a zero extend) is added.
215 We probably need some kind of target DAG combine hook to fix this.
217 //===---------------------------------------------------------------------===//
219 We generate significantly worse code for this than GCC:
220 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21150
221 http://gcc.gnu.org/bugzilla/attachment.cgi?id=8701
223 There is also one case where we do worse on PPC.
225 //===---------------------------------------------------------------------===//
235 imull $3, 4(%esp), %eax
237 Perhaps this is what we really should generate? Is imull three or four
238 cycles? Note: ICC generates this:
240 leal (%eax,%eax,2), %eax
242 The current instruction priority is based on pattern complexity. The former is
243 more "complex" because it folds a load so the latter will not be emitted.
245 Perhaps we should use AddedComplexity to give LEA32r a higher priority? We
246 should always try to match LEA first since the LEA matching code does some
247 estimate to determine whether the match is profitable.
249 However, if we care more about code size, then imull is better. It's two bytes
250 shorter than movl + leal.
252 On a Pentium M, both variants have the same characteristics with regard
253 to throughput; however, the multiplication has a latency of four cycles, as
254 opposed to two cycles for the movl+lea variant.
256 //===---------------------------------------------------------------------===//
258 It appears gcc places string data with linkonce linkage in
259 .section __TEXT,__const_coal,coalesced instead of
260 .section __DATA,__const_coal,coalesced.
261 Take a look at darwin.h; there are other Darwin assembler directives that we do not make use of.
264 //===---------------------------------------------------------------------===//
266 define i32 @foo(i32* %a, i32 %t) {
270 cond_true: ; preds = %cond_true, %entry
271 %x.0.0 = phi i32 [ 0, %entry ], [ %tmp9, %cond_true ] ; <i32> [#uses=3]
272 %t_addr.0.0 = phi i32 [ %t, %entry ], [ %tmp7, %cond_true ] ; <i32> [#uses=1]
273 %tmp2 = getelementptr i32* %a, i32 %x.0.0 ; <i32*> [#uses=1]
274 %tmp3 = load i32* %tmp2 ; <i32> [#uses=1]
275 %tmp5 = add i32 %t_addr.0.0, %x.0.0 ; <i32> [#uses=1]
276 %tmp7 = add i32 %tmp5, %tmp3 ; <i32> [#uses=2]
277 %tmp9 = add i32 %x.0.0, 1 ; <i32> [#uses=2]
278 %tmp = icmp sgt i32 %tmp9, 39 ; <i1> [#uses=1]
279 br i1 %tmp, label %bb12, label %cond_true
281 bb12: ; preds = %cond_true
284 is pessimized by -loop-reduce and -indvars
286 //===---------------------------------------------------------------------===//
288 u32 to float conversion improvement:
290 float uint32_2_float( unsigned u ) {
291 float fl = (int) (u & 0xffff);
292 float fh = (int) (u >> 16);
297 00000000 subl $0x04,%esp
298 00000003 movl 0x08(%esp,1),%eax
299 00000007 movl %eax,%ecx
300 00000009 shrl $0x10,%ecx
301 0000000c cvtsi2ss %ecx,%xmm0
302 00000010 andl $0x0000ffff,%eax
303 00000015 cvtsi2ss %eax,%xmm1
304 00000019 mulss 0x00000078,%xmm0
305 00000021 addss %xmm1,%xmm0
306 00000025 movss %xmm0,(%esp,1)
307 0000002a flds (%esp,1)
308 0000002d addl $0x04,%esp
311 //===---------------------------------------------------------------------===//
313 When using the fastcc ABI, align stack slots of arguments of type double on an
314 8-byte boundary to improve performance.
316 //===---------------------------------------------------------------------===//
318 GCC's ix86_expand_int_movcc function (in i386.c) has a ton of interesting
319 simplifications for integer "x cmp y ? a : b".
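For reference, the shape of code being discussed (names are illustrative): an
integer select on the result of an integer compare, which GCC simplifies in
various ways depending on the condition and the constants involved.

int int_select(int x, int y, int a, int b) {
  return x < y ? a : b;
}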
321 //===---------------------------------------------------------------------===//
323 Consider the expansion of:
325 define i32 @test3(i32 %X) {
326 %tmp1 = urem i32 %X, 255
330 Currently it compiles to:
333 movl $2155905153, %ecx
339 This could be "reassociated" into:
341 movl $2155905153, %eax
345 to avoid the copy. In fact, the existing two-address stuff would do this
346 except that mul isn't a commutative 2-addr instruction. I guess this has
347 to be done at isel time based on the #uses of the mul?
349 //===---------------------------------------------------------------------===//
351 Make sure the instruction which starts a loop does not cross a cacheline
352 boundary. This requires knowing the exact length of each machine instruction.
353 That is somewhat complicated, but doable. Example 256.bzip2:
355 In the new trace, the hot loop has an instruction which crosses a cacheline
356 boundary. In addition to potential cache misses, this can't help decoding as I
357 imagine there has to be some kind of complicated decoder reset and realignment
358 to grab the bytes from the next cacheline.
360 532 532 0x3cfc movb (1809(%esp, %esi), %bl <<<--- spans 2 64 byte lines
361 942 942 0x3d03 movl %dh, (1809(%esp, %esi)
362 937 937 0x3d0a incl %esi
363 3 3 0x3d0b cmpb %bl, %dl
364 27 27 0x3d0d jnz 0x000062db <main+11707>
366 //===---------------------------------------------------------------------===//
368 In C99 mode, the preprocessor doesn't like assembly comments like #TRUNCATE.
370 //===---------------------------------------------------------------------===//
372 This could be a single 16-bit load.
375 if ((p[0] == 1) & (p[1] == 2)) return 1;
379 //===---------------------------------------------------------------------===//
381 We should inline lrintf and probably other libc functions.
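A minimal case (assuming SSE is available; the function name is illustrative):
lrintf converts using the current rounding mode, which is exactly what
cvtss2si does, so the libcall could be replaced by a single instruction.

#include <math.h>

long round_to_long(float x) {
  return lrintf(x);
}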
383 //===---------------------------------------------------------------------===//
391 is currently compiled to:
402 It would be better to produce:
411 This can be applied to any no-return function call that takes no arguments etc.
412 Alternatively, the stack save/restore logic could be shrink-wrapped, producing
423 Both are useful in different situations. Finally, it could be shrink-wrapped
424 and tail called, like this:
431 pop %eax # realign stack.
434 Though this probably isn't worth it.
436 //===---------------------------------------------------------------------===//
438 Sometimes it is better to codegen subtractions from a constant (e.g. 7-x) with
439 a neg instead of a sub instruction. Consider:
441 int test(char X) { return 7-X; }
443 we currently produce:
450 We would use one fewer register if codegen'd as:
457 Note that this isn't beneficial if the load can be folded into the sub. In
458 this case, we want a sub:
460 int test(int X) { return 7-X; }
466 //===---------------------------------------------------------------------===//
468 Leaf functions that require one 4-byte spill slot have a prolog like this:
474 and an epilog like this:
479 It would be smaller, and potentially faster, to push eax on entry and to
480 pop into a dummy register instead of using addl/subl of esp. Just don't pop
481 into any return registers :)
483 //===---------------------------------------------------------------------===//
485 The X86 backend should fold (branch (or (setcc, setcc))) into multiple
486 branches. We generate really poor code for:
488 double testf(double a) {
489 return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
492 For example, the entry BB is:
497 movsd 24(%esp), %xmm1
502 jne LBB1_5 # UnifiedReturnBlock
506 it would be better to replace the last four instructions with:
512 We also codegen the inner ?: into a diamond:
514 cvtss2sd LCPI1_0(%rip), %xmm2
515 cvtss2sd LCPI1_1(%rip), %xmm3
517 ja LBB1_3 # cond_true
524 We should sink the load into xmm3 into the LBB1_2 block. This should
525 be pretty easy, and will nuke all the copies.
527 //===---------------------------------------------------------------------===//
531 inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
532 { return std::make_pair(a + b, a + b < a); }
533 bool no_overflow(unsigned a, unsigned b)
534 { return !full_add(a, b).second; }
542 on x86-64, instead of the rather stupid-looking:
550 //===---------------------------------------------------------------------===//
554 bb114.preheader: ; preds = %cond_next94
555 %tmp231232 = sext i16 %tmp62 to i32 ; <i32> [#uses=1]
556 %tmp233 = sub i32 32, %tmp231232 ; <i32> [#uses=1]
557 %tmp245246 = sext i16 %tmp65 to i32 ; <i32> [#uses=1]
558 %tmp252253 = sext i16 %tmp68 to i32 ; <i32> [#uses=1]
559 %tmp254 = sub i32 32, %tmp252253 ; <i32> [#uses=1]
560 %tmp553554 = bitcast i16* %tmp37 to i8* ; <i8*> [#uses=2]
561 %tmp583584 = sext i16 %tmp98 to i32 ; <i32> [#uses=1]
562 %tmp585 = sub i32 32, %tmp583584 ; <i32> [#uses=1]
563 %tmp614615 = sext i16 %tmp101 to i32 ; <i32> [#uses=1]
564 %tmp621622 = sext i16 %tmp104 to i32 ; <i32> [#uses=1]
565 %tmp623 = sub i32 32, %tmp621622 ; <i32> [#uses=1]
570 LBB3_5: # bb114.preheader
571 movswl -68(%ebp), %eax
575 movswl -52(%ebp), %eax
578 movswl -70(%ebp), %eax
581 movswl -50(%ebp), %eax
584 movswl -42(%ebp), %eax
586 movswl -66(%ebp), %eax
590 This appears to be bad because the RA is not folding the store to the stack
591 slot into the movl. The above instructions could be:
596 This seems like a cross between remat and spill folding.
598 This has redundant subtractions of %eax from a stack slot. However, %ecx doesn't
599 change, so we could simply subtract %eax from %ecx first and then use %ecx (or
602 //===---------------------------------------------------------------------===//
606 %tmp659 = icmp slt i16 %tmp654, 0 ; <i1> [#uses=1]
607 br i1 %tmp659, label %cond_true662, label %cond_next715
613 jns LBB4_109 # cond_next715
615 Shark tells us that using %cx in the testw instruction is sub-optimal. It
616 suggests using the 32-bit register (which is what ICC uses).
618 //===---------------------------------------------------------------------===//
622 void compare (long long foo) {
623 if (foo < 4294967297LL)
639 jne .LBB1_2 # UnifiedReturnBlock
642 .LBB1_2: # UnifiedReturnBlock
646 (also really horrible code on ppc). This is due to the expand code for 64-bit
647 compares. GCC produces multiple branches, which is much nicer:
668 //===---------------------------------------------------------------------===//
670 Tail call optimization improvements: Tail call optimization currently
671 pushes all arguments on the top of the stack (their normal place for
672 non-tail call optimized calls) that source from the caller's arguments
673 or that source from a virtual register (also possibly sourcing from
675 This is done to prevent overwriting of parameters (see example
676 below) that might be used later.
680 int callee(int32, int64);
681 int caller(int32 arg1, int32 arg2) {
682 int64 local = arg2 * 2;
683 return callee(arg2, (int64)local);
686 [arg1] [!arg2 no longer valid since we moved local onto it]
690 Moving arg1 onto the stack slot of callee function would overwrite
693 Possible optimizations:
696 - Analyse the actual parameters of the callee to see which would
697 overwrite a caller parameter which is used by the callee and only
698 push them onto the top of the stack.
700 int callee (int32 arg1, int32 arg2);
701 int caller (int32 arg1, int32 arg2) {
702 return callee(arg1,arg2);
705 Here we don't need to write any variables to the top of the stack
706 since they don't overwrite each other.
708 int callee (int32 arg1, int32 arg2);
709 int caller (int32 arg1, int32 arg2) {
710 return callee(arg2,arg1);
713 Here we need to push the arguments because they overwrite each other.
716 //===---------------------------------------------------------------------===//
721 unsigned long int z = 0;
732 gcc compiles this to:
758 jge LBB1_4 # cond_true
761 addl $4294950912, %ecx
771 1. LSR should rewrite the first cmp with induction variable %ecx.
772 2. DAG combiner should fold
778 //===---------------------------------------------------------------------===//
780 define i64 @test(double %X) {
781 %Y = fptosi double %X to i64
789 movsd 24(%esp), %xmm0
799 This should just fldl directly from the input stack slot.
801 //===---------------------------------------------------------------------===//
804 int foo (int x) { return (x & 65535) | 255; }
820 //===---------------------------------------------------------------------===//
822 We're codegen'ing multiply of long longs inefficiently:
824 unsigned long long LLM(unsigned long long arg1, unsigned long long arg2) {
828 We compile to (with -fomit-frame-pointer):
844 This looks like a scheduling deficiency and lack of remat of the load from
845 the argument area. ICC apparently produces:
857 Note that it remat'd loads from 4(esp) and 12(esp). See this GCC PR:
858 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17236
860 //===---------------------------------------------------------------------===//
862 We can fold a store into "zeroing a reg". Instead of:
871 if the flags of the xor are dead.
873 Likewise, we isel "x<<1" into "add reg,reg". If reg is spilled, this should
874 be folded into: shl [mem], 1
876 //===---------------------------------------------------------------------===//
878 In SSE mode, we turn abs and neg into a load from the constant pool plus an xor
879 or an and instruction, for example:
883 However, if xmm2 gets spilled, we end up with really ugly code like this:
889 Since we 'know' that this is a 'neg', we can actually "fold" the spill into
890 the neg/abs instruction, turning it into an *integer* operation, like this:
892 xorl 2147483648, [mem+4] ## 2147483648 = (1 << 31)
894 You could also use xorb, but xorl is less likely to lead to a partial register
895 stall. Here is a contrived testcase:
898 void test(double *P) {
908 //===---------------------------------------------------------------------===//
910 The code generated on x86 for checking for signed overflow on a multiply, written
911 the obvious way, is much longer than it needs to be.
913 int x(int a, int b) {
914 long long prod = (long long)a*b;
915 return prod > 0x7FFFFFFF || prod < (-0x7FFFFFFF-1);
918 See PR2053 for more details.
920 //===---------------------------------------------------------------------===//
922 We should investigate using cdq/cltd (effect: edx = sar eax, 31)
923 more aggressively; it should cost the same as a move+shift on any modern
924 processor, but it's a lot shorter. Downside is that it puts more
925 pressure on register allocation because it has fixed operands.
928 int abs(int x) {return x < 0 ? -x : x;}
930 gcc compiles this to the following when using -march/-mtune=pentium2/3/4/m/etc.:
938 //===---------------------------------------------------------------------===//
940 Take the following code (from
941 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16541):
943 extern unsigned char first_one[65536];
944 int FirstOnet(unsigned long long arg1)
947 return (first_one[arg1 >> 48]);
952 The following code is currently generated:
957 jb .LBB1_2 # UnifiedReturnBlock
960 movzbl first_one(%eax), %eax
962 .LBB1_2: # UnifiedReturnBlock
966 We could change the "movl 8(%esp), %eax" into "movzwl 10(%esp), %eax"; this
967 lets us change the cmpl into a testl, which is shorter, and eliminate the shift.
969 //===---------------------------------------------------------------------===//
971 We compile this function:
973 define i32 @foo(i32 %a, i32 %b, i32 %c, i8 zeroext %d) nounwind {
975 %tmp2 = icmp eq i8 %d, 0 ; <i1> [#uses=1]
976 br i1 %tmp2, label %bb7, label %bb
979 %tmp6 = add i32 %b, %a ; <i32> [#uses=1]
982 bb7: ; preds = %entry
983 %tmp10 = sub i32 %a, %c ; <i32> [#uses=1]
1004 There's an obviously unnecessary movl in .LBB0_2, and we could eliminate a
1005 couple more movls by putting 4(%esp) into %eax instead of %ecx.
1007 //===---------------------------------------------------------------------===//
1014 cvtss2sd LCPI1_0, %xmm1
1016 movsd 176(%esp), %xmm2
1021 mulsd LCPI1_23, %xmm4
1022 addsd LCPI1_24, %xmm4
1024 addsd LCPI1_25, %xmm4
1026 addsd LCPI1_26, %xmm4
1028 addsd LCPI1_27, %xmm4
1030 addsd LCPI1_28, %xmm4
1034 movsd 152(%esp), %xmm1
1036 movsd %xmm1, 152(%esp)
1040 LBB1_16: # bb358.loopexit
1041 movsd 152(%esp), %xmm0
1043 addsd LCPI1_22, %xmm0
1044 movsd %xmm0, 152(%esp)
1046 Rather than spilling the result of the last addsd in the loop, we should have
1047 inserted a copy to split the interval (one for the duration of the loop, one
1048 extending to the fall through). The register pressure in the loop isn't high
1049 enough to warrant the spill.
1051 Also check why xmm7 is not used at all in the function.
1053 //===---------------------------------------------------------------------===//
1057 target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128-S128"
1058 target triple = "i386-apple-darwin8"
1059 @in_exit.4870.b = internal global i1 false ; <i1*> [#uses=2]
1060 define fastcc void @abort_gzip() noreturn nounwind {
1062 %tmp.b.i = load i1* @in_exit.4870.b ; <i1> [#uses=1]
1063 br i1 %tmp.b.i, label %bb.i, label %bb4.i
1064 bb.i: ; preds = %entry
1065 tail call void @exit( i32 1 ) noreturn nounwind
1067 bb4.i: ; preds = %entry
1068 store i1 true, i1* @in_exit.4870.b
1069 tail call void @exit( i32 1 ) noreturn nounwind
1072 declare void @exit(i32) noreturn nounwind
1075 _abort_gzip: ## @abort_gzip
1078 movb _in_exit.4870.b, %al
1082 We somehow miss folding the movb into the cmpb.
1084 //===---------------------------------------------------------------------===//
1088 int test(int x, int y) {
1100 it would be better to codegen as: x+~y (notl+addl)
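Illustrative only, since the body of test() is elided above: any expression
equal to x - y - 1 has this property, because ~y == -y - 1 in two's complement,
so x + ~y == x - y - 1.

int sub_minus_one(int x, int y) {
  return x - y - 1;   /* could be codegen'd as notl + addl */
}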
1102 //===---------------------------------------------------------------------===//
1106 int foo(const char *str,...)
1108 __builtin_va_list a; int x;
1109 __builtin_va_start(a,str); x = __builtin_va_arg(a,int); __builtin_va_end(a);
1113 gets compiled into this on x86-64:
1115 movaps %xmm7, 160(%rsp)
1116 movaps %xmm6, 144(%rsp)
1117 movaps %xmm5, 128(%rsp)
1118 movaps %xmm4, 112(%rsp)
1119 movaps %xmm3, 96(%rsp)
1120 movaps %xmm2, 80(%rsp)
1121 movaps %xmm1, 64(%rsp)
1122 movaps %xmm0, 48(%rsp)
1129 movq %rax, 192(%rsp)
1130 leaq 208(%rsp), %rax
1131 movq %rax, 184(%rsp)
1134 movl 176(%rsp), %eax
1138 movq 184(%rsp), %rcx
1140 movq %rax, 184(%rsp)
1148 addq 192(%rsp), %rcx
1149 movl %eax, 176(%rsp)
1155 leaq 104(%rsp), %rax
1156 movq %rsi, -80(%rsp)
1158 movq %rax, -112(%rsp)
1159 leaq -88(%rsp), %rax
1160 movq %rax, -104(%rsp)
1164 movq -112(%rsp), %rdx
1172 addq -104(%rsp), %rdx
1174 movl %eax, -120(%rsp)
1179 and it gets compiled into this on x86:
1199 //===---------------------------------------------------------------------===//
1201 Teach tblgen not to check bitconvert source type in some cases. This allows us
1202 to consolidate the following patterns in X86InstrMMX.td:
1204 def : Pat<(v2i32 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1206 (v2i32 (MMX_MOVDQ2Qrr VR128:$src))>;
1207 def : Pat<(v4i16 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1209 (v4i16 (MMX_MOVDQ2Qrr VR128:$src))>;
1210 def : Pat<(v8i8 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1212 (v8i8 (MMX_MOVDQ2Qrr VR128:$src))>;
1214 There are other cases in various td files.
1216 //===---------------------------------------------------------------------===//
1218 Take something like the following on x86-32:
1219 unsigned a(unsigned long long x, unsigned y) {return x % y;}
1221 We currently generate a libcall, but we really shouldn't: the expansion is
1222 shorter and likely faster than the libcall. The expected code is something
1234 A similar code sequence works for division.
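For reference, the corresponding division case (illustrative name), which
should get the same treatment:

unsigned b_div(unsigned long long x, unsigned y) {
  return x / y;
}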
1236 //===---------------------------------------------------------------------===//
1238 We currently compile this:
1240 define i32 @func1(i32 %v1, i32 %v2) nounwind {
1242 %t = call {i32, i1} @llvm.sadd.with.overflow.i32(i32 %v1, i32 %v2)
1243 %sum = extractvalue {i32, i1} %t, 0
1244 %obit = extractvalue {i32, i1} %t, 1
1245 br i1 %obit, label %overflow, label %normal
1249 call void @llvm.trap()
1252 declare {i32, i1} @llvm.sadd.with.overflow.i32(i32, i32)
1253 declare void @llvm.trap()
1260 jo LBB1_2 ## overflow
1266 it would be nice to produce "into" someday.
1268 //===---------------------------------------------------------------------===//
1270 Test instructions can be eliminated by using EFLAGS values from arithmetic
1271 instructions. This is currently not done for mul, and, or, xor, neg, shl,
1272 sra, srl, shld, shrd, atomic ops, and others. It is also currently not done
1273 for read-modify-write instructions. It is also currently not done if the
1274 OF or CF flags are needed.
1276 The shift operators have the complication that when the shift count is
1277 zero, EFLAGS is not set, so they can only subsume a test instruction if
1278 the shift count is known to be non-zero. Also, using the EFLAGS value
1279 from a shift is apparently very slow on some x86 implementations.
1281 In read-modify-write instructions, the root node in the isel match is
1282 the store, and isel has no way for the use of the EFLAGS result of the
1283 arithmetic to be remapped to the new node.
1285 Add and subtract instructions set OF on signed overflow and CF on unsigned
1286 overflow, while test instructions always clear OF and CF. In order to
1287 replace a test with an add or subtract in a situation where OF or CF is
1288 needed, codegen must be able to prove that the operation cannot see
1289 signed or unsigned overflow, respectively.
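A hedged example of the kind of code this note is about (names are
illustrative): the 'and' that computes the mask already sets ZF, so the
separate test we would otherwise emit before the branch is redundant, provided
the flags are not clobbered in between and OF/CF are not needed.

int and_sets_flags(int x, int y) {
  int m = x & y;
  if (m == 0)
    return -1;
  return m;
}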
1291 //===---------------------------------------------------------------------===//
1293 memcpy/memmove do not lower to SSE copies when possible. A silly example is:
1294 define <16 x float> @foo(<16 x float> %A) nounwind {
1295 %tmp = alloca <16 x float>, align 16
1296 %tmp2 = alloca <16 x float>, align 16
1297 store <16 x float> %A, <16 x float>* %tmp
1298 %s = bitcast <16 x float>* %tmp to i8*
1299 %s2 = bitcast <16 x float>* %tmp2 to i8*
1300 call void @llvm.memcpy.i64(i8* %s, i8* %s2, i64 64, i32 16)
1301 %R = load <16 x float>* %tmp2
1305 declare void @llvm.memcpy.i64(i8* nocapture, i8* nocapture, i64, i32) nounwind
1311 movaps %xmm3, 112(%esp)
1312 movaps %xmm2, 96(%esp)
1313 movaps %xmm1, 80(%esp)
1314 movaps %xmm0, 64(%esp)
1316 movl %eax, 124(%esp)
1318 movl %eax, 120(%esp)
1320 <many many more 32-bit copies>
1321 movaps (%esp), %xmm0
1322 movaps 16(%esp), %xmm1
1323 movaps 32(%esp), %xmm2
1324 movaps 48(%esp), %xmm3
1328 On Nehalem, it may even be cheaper to just use movups when unaligned than to
1329 fall back to lower-granularity chunks.
1331 //===---------------------------------------------------------------------===//
1333 Implement processor-specific optimizations for parity with GCC on these
1334 processors. GCC does two optimizations:
1336 1. ix86_pad_returns inserts a noop before ret instructions if the ret is
1337 immediately preceded by a conditional branch or is the target of a jump.
1338 2. ix86_avoid_jump_misspredicts inserts noops in cases where a 16-byte block of
1339 code contains more than 3 branches.
1341 The first one is done for all AMDs, Core2, and "Generic".
1342 The second one is done for Atom, Pentium Pro, all AMDs, Pentium 4, Nocona,
1343 Core 2, and "Generic".
1345 //===---------------------------------------------------------------------===//
1347 int x(int a) { return (a&0xf0)>>4; }
1356 movzbl 4(%esp), %eax
1360 //===---------------------------------------------------------------------===//
1362 Re-implement atomic builtins __sync_add_and_fetch() and __sync_sub_and_fetch
1365 When the return value is not used (i.e. we only care about the value in
1366 memory), x86 does not have to use add to implement these. Instead, it can use
1367 add, sub, inc, dec instructions with the "lock" prefix.
1369 This is currently implemented using a bit of an instruction selection trick. The
1370 issue is that the target-independent pattern produces one output and a chain, and
1371 we want to map it into one that just outputs a chain. The current trick is to select
1372 it into a MERGE_VALUES with the first definition being an implicit_def. The
1373 proper solution is to add new ISD opcodes for the no-output variant. DAG
1374 combiner can then transform the node before it gets to target node selection.
1376 Problem #2 is we are adding a whole bunch of x86 atomic instructions when in
1377 fact these instructions are identical to the non-lock versions. We need a way to
1378 add target specific information to target nodes and have this information
1379 carried over to machine instructions. The asm printer (or JIT) can use this
1380 information to add the "lock" prefix.
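A minimal example of the no-result pattern (the function name is illustrative):
the value returned by the builtin is unused, so this should become a single
lock-prefixed add/inc on memory rather than a lock xadd whose result is then
thrown away.

void atomic_bump(volatile int *p) {
  __sync_add_and_fetch(p, 1);   /* result ignored */
}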
1382 //===---------------------------------------------------------------------===//
1385 unsigned char y0 : 1;
1388 int bar(struct B* a) { return a->y0; }
1390 define i32 @bar(%struct.B* nocapture %a) nounwind readonly optsize {
1391 %1 = getelementptr inbounds %struct.B* %a, i64 0, i32 0
1392 %2 = load i8* %1, align 1
1394 %4 = zext i8 %3 to i32
1405 Missed optimization: should be movl+andl.
1407 //===---------------------------------------------------------------------===//
1409 The x86-64 ABI says:
1411 Booleans, when stored in a memory object, are stored as single byte objects the
1412 value of which is always 0 (false) or 1 (true).
1414 We are not using this fact:
1416 int bar(_Bool *a) { return *a; }
1418 define i32 @bar(i8* nocapture %a) nounwind readonly optsize {
1419 %1 = load i8* %a, align 1, !tbaa !0
1421 %2 = zext i8 %tmp to i32
1437 //===---------------------------------------------------------------------===//
1439 Consider the following two functions compiled with clang:
1440 _Bool foo(int *x) { return !(*x & 4); }
1441 unsigned bar(int *x) { return !(*x & 4); }
1458 The second function generates more code even though the two functions are
1459 functionally identical.
1461 //===---------------------------------------------------------------------===//
1463 Take the following C code:
1464 int f(int a, int b) { return (unsigned char)a == (unsigned char)b; }
1466 We generate the following IR with clang:
1467 define i32 @f(i32 %a, i32 %b) nounwind readnone {
1469 %tmp = xor i32 %b, %a ; <i32> [#uses=1]
1470 %tmp6 = and i32 %tmp, 255 ; <i32> [#uses=1]
1471 %cmp = icmp eq i32 %tmp6, 0 ; <i1> [#uses=1]
1472 %conv5 = zext i1 %cmp to i32 ; <i32> [#uses=1]
1476 And the following x86 code:
1483 A cmpb instead of the xorl+testb would be one instruction shorter.
1485 //===---------------------------------------------------------------------===//
1487 Given the following C code:
1488 int f(int a, int b) { return (signed char)a == (signed char)b; }
1490 We generate the following IR with clang:
1491 define i32 @f(i32 %a, i32 %b) nounwind readnone {
1493 %sext = shl i32 %a, 24 ; <i32> [#uses=1]
1494 %conv1 = ashr i32 %sext, 24 ; <i32> [#uses=1]
1495 %sext6 = shl i32 %b, 24 ; <i32> [#uses=1]
1496 %conv4 = ashr i32 %sext6, 24 ; <i32> [#uses=1]
1497 %cmp = icmp eq i32 %conv1, %conv4 ; <i1> [#uses=1]
1498 %conv5 = zext i1 %cmp to i32 ; <i32> [#uses=1]
1502 And the following x86 code:
1511 It should be possible to eliminate the sign extensions.
1513 //===---------------------------------------------------------------------===//
1515 LLVM misses a load+store narrowing opportunity in this code:
1517 %struct.bf = type { i64, i16, i16, i32 }
1519 @bfi = external global %struct.bf* ; <%struct.bf**> [#uses=2]
1521 define void @t1() nounwind ssp {
1523 %0 = load %struct.bf** @bfi, align 8 ; <%struct.bf*> [#uses=1]
1524 %1 = getelementptr %struct.bf* %0, i64 0, i32 1 ; <i16*> [#uses=1]
1525 %2 = bitcast i16* %1 to i32* ; <i32*> [#uses=2]
1526 %3 = load i32* %2, align 1 ; <i32> [#uses=1]
1527 %4 = and i32 %3, -65537 ; <i32> [#uses=1]
1528 store i32 %4, i32* %2, align 1
1529 %5 = load %struct.bf** @bfi, align 8 ; <%struct.bf*> [#uses=1]
1530 %6 = getelementptr %struct.bf* %5, i64 0, i32 1 ; <i16*> [#uses=1]
1531 %7 = bitcast i16* %6 to i32* ; <i32*> [#uses=2]
1532 %8 = load i32* %7, align 1 ; <i32> [#uses=1]
1533 %9 = and i32 %8, -131073 ; <i32> [#uses=1]
1534 store i32 %9, i32* %7, align 1
1538 LLVM currently emits this:
1540 movq bfi(%rip), %rax
1541 andl $-65537, 8(%rax)
1542 movq bfi(%rip), %rax
1543 andl $-131073, 8(%rax)
1546 It could narrow the loads and stores to emit this:
1548 movq bfi(%rip), %rax
1550 movq bfi(%rip), %rax
1554 The trouble is that there is a TokenFactor between the store and the
1555 load, making it non-trivial to determine if there's anything between
1556 the load and the store which would prohibit narrowing.
1558 //===---------------------------------------------------------------------===//
1561 void foo(unsigned x) {
1563 else if (x == 1) qux();
1566 currently compiles into:
1574 the testl could be removed:
1581 0 is the only unsigned number < 1.
1583 //===---------------------------------------------------------------------===//
1587 %0 = type { i32, i1 }
1589 define i32 @add32carry(i32 %sum, i32 %x) nounwind readnone ssp {
1591 %uadd = tail call %0 @llvm.uadd.with.overflow.i32(i32 %sum, i32 %x)
1592 %cmp = extractvalue %0 %uadd, 1
1593 %inc = zext i1 %cmp to i32
1594 %add = add i32 %x, %sum
1595 %z.0 = add i32 %add, %inc
1599 declare %0 @llvm.uadd.with.overflow.i32(i32, i32) nounwind readnone
1603 _add32carry: ## @add32carry
1613 leal (%rsi,%rdi), %eax
1618 //===---------------------------------------------------------------------===//
1620 The hot loop of 256.bzip2 contains code that looks a bit like this:
1622 int foo(char *P, char *Q, int x, int y) {
1632 In the real code, we get a lot more wrong than this. However, even in this
1649 ## BB#3: ## %if.end38
1654 ## BB#4: ## %if.end60
1657 LBB0_5: ## %if.end60
1662 Note that we generate jumps to LBB0_1 which does a redundant compare. The
1663 redundant compare also forces the register values to be live, which prevents
1664 folding one of the loads into the compare. In contrast, GCC 4.2 produces:
1671 movzbl 1(%rsi), %eax
1674 movzbl 2(%rsi), %eax
1677 movzbl 3(%rdi), %eax
1686 //===---------------------------------------------------------------------===//
1688 For the branch in the following code:
1690 int b(int x, int y) {
1696 We currently generate:
1703 movl+andl would be shorter than the movb+andb+movzbl sequence.
1705 //===---------------------------------------------------------------------===//
1711 float foo(struct u1 u) {
1715 We currently generate:
1717 pshufd $1, %xmm0, %xmm0 # xmm0 = xmm0[1,0,0,0]
1721 We could save an instruction here by commuting the addss.
1723 //===---------------------------------------------------------------------===//
1727 float clamp_float(float a) {
1738 clamp_float: # @clamp_float
1739 movss .LCPI0_0(%rip), %xmm1
1747 //===---------------------------------------------------------------------===//
1749 This function (from PR9803):
1773 The move of 0 could be scheduled above the test so that it becomes an xor reg,reg.
1775 //===---------------------------------------------------------------------===//
1777 GCC PR48986. We currently compile this:
1781 if (__sync_fetch_and_add(p, -1) == 1)
1792 Instead we could generate:
1798 The trick is to match "fetch_and_add(X, -C) == C".
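For reference, the general shape of the pattern (illustrative name): the
comparison against the pre-decremented value 1 asks "did the counter just hit
zero?", so a lock-prefixed decrement plus a sete/je on ZF is enough and the
xadd result is never needed.

int last_reference(int *p) {
  return __sync_fetch_and_add(p, -1) == 1;
}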
1800 //===---------------------------------------------------------------------===//
1802 unsigned t(unsigned a, unsigned b) {
1803 return a <= b ? 5 : -5;
}
1818 //===---------------------------------------------------------------------===//