1 //===---------------------------------------------------------------------===//
2 // Random ideas for the X86 backend.
3 //===---------------------------------------------------------------------===//
6 //===---------------------------------------------------------------------===//
8 CodeGen/X86/lea-3.ll:test3 should be a single LEA, not a shift/move. The X86
9 backend knows how to three-addressify this shift, but it appears the register
10 allocator isn't even asking it to do so in this case. We should investigate
why this isn't happening; it could have a significant impact on other important
cases for X86 as well.
14 //===---------------------------------------------------------------------===//
16 This should be one DIV/IDIV instruction, not a libcall:
unsigned test(unsigned long long X, unsigned Y) {
        return X/Y;
}
22 This can be done trivially with a custom legalizer. What about overflow
23 though? http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14224
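A sketch of the single-instruction lowering being asked for (illustrative; note
that divl faults if the quotient does not fit in 32 bits, which is exactly the
overflow question above):

_test:
        movl 4(%esp), %eax      # low half of X
        movl 8(%esp), %edx      # high half of X
        divl 12(%esp)           # edx:eax / Y, quotient in %eax
        ret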
25 //===---------------------------------------------------------------------===//
27 Improvements to the multiply -> shift/add algorithm:
28 http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html
30 //===---------------------------------------------------------------------===//
32 Improve code like this (occurs fairly frequently, e.g. in LLVM):
33 long long foo(int x) { return 1LL << x; }
35 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
36 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
37 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html
39 Another useful one would be ~0ULL >> X and ~0ULL << X.
41 One better solution for 1LL << x is:
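Roughly the following (the sequence from the GCC thread above, reconstructed
here; shift count assumed in %cl, result in %edx:%eax):

        xorl %eax, %eax
        xorl %edx, %edx
        testb $32, %cl
        sete %al
        setne %dl
        sall %cl, %eax
        sall %cl, %edx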
50 But that requires good 8-bit subreg support.
52 Also, this might be better. It's an extra shift, but it's one instruction
53 shorter, and doesn't stress 8-bit subreg support.
54 (From http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01148.html,
55 but without the unnecessary and.)
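Something along these lines (a reconstruction; again shift count in %cl,
result in %edx:%eax):

        movl %ecx, %edx
        shrl $5, %edx
        movl %edx, %eax
        xorl $1, %eax
        sall %cl, %eax
        sall %cl, %edx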
63 64-bit shifts (in general) expand to really bad code. Instead of using
64 cmovs, we should expand to a conditional branch like GCC produces.
66 //===---------------------------------------------------------------------===//
69 _Bool f(_Bool a) { return a!=1; }
76 (Although note that this isn't a legal way to express the code that llvm-gcc
77 currently generates for that function.)
79 //===---------------------------------------------------------------------===//
1. Dynamic programming based approach when compile time is not an issue.
85 2. Code duplication (addressing mode) during isel.
86 3. Other ideas from "Register-Sensitive Selection, Duplication, and
87 Sequencing of Instructions".
88 4. Scheduling for reduced register pressure. E.g. "Minimum Register
89 Instruction Sequence Problem: Revisiting Optimal Code Generation for DAGs"
90 and other related papers.
91 http://citeseer.ist.psu.edu/govindarajan01minimum.html
93 //===---------------------------------------------------------------------===//
95 Should we promote i16 to i32 to avoid partial register update stalls?
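A hypothetical illustration of the stall in question:

/* If the i16 load below is selected as "movw (%ecx), %ax", only the low 16
   bits of %eax are written; a later 32-bit read of %eax can then stall on
   some processors.  Promoting the value to i32 (i.e. using movzwl) avoids
   the partial register update. */
unsigned short load16(unsigned short *p) { return *p; }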
97 //===---------------------------------------------------------------------===//
Leave any_extend as a pseudo instruction and hint to the register
allocator. Delay codegen until post register allocation.
Note: any_extend is now turned into an INSERT_SUBREG. We still need to teach
102 the coalescer how to deal with it though.
104 //===---------------------------------------------------------------------===//
It appears icc uses push for parameter passing. Need to investigate.
108 //===---------------------------------------------------------------------===//
Only use inc/neg/not instructions on processors where they are faster than
add/sub/xor. They are slower on the P4 due to only updating some of the
processor flags.
114 //===---------------------------------------------------------------------===//
116 The instruction selector sometimes misses folding a load into a compare. The
117 pattern is written as (cmp reg, (load p)). Because the compare isn't
118 commutative, it is not matched with the load on both sides. The dag combiner
should be made smart enough to canonicalize the load into the RHS of a compare
120 when it can invert the result of the compare for free.
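A hypothetical example of the pattern:

/* The comparison here comes out as (cmp (load p), x), which does not match
   the (cmp reg, (load p)) pattern; swapping the operands (and flipping the
   condition from lt to gt) would let the load fold into the cmpl. */
int test(int x, int *p) { return *p < x; }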
122 //===---------------------------------------------------------------------===//
124 How about intrinsics? An example is:
125 *res = _mm_mulhi_epu16(*A, _mm_mul_epu32(*B, *C));
128 pmuludq (%eax), %xmm0
The transformation probably requires an X86-specific pass or a target-specific
DAG combiner hook.
136 //===---------------------------------------------------------------------===//
138 In many cases, LLVM generates code like this:
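Something along these lines (an illustrative reconstruction for a function
returning the result of a signed comparison):

        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        setl %al
        movzbl %al, %eax
        ret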
147 on some processors (which ones?), it is more efficient to do this:
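Roughly (illustrative), zeroing the full register up front so no separate
zero extend of the setcc result is needed:

        xorl %eax, %eax
        movl 8(%esp), %ecx
        cmpl %ecx, 4(%esp)
        setl %al
        ret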
156 Doing this correctly is tricky though, as the xor clobbers the flags.
158 //===---------------------------------------------------------------------===//
160 We should generate bts/btr/etc instructions on targets where they are cheap or
161 when codesize is important. e.g., for:
void setbit(int *target, int bit) {
  *target |= (1 << bit);
}

void clearbit(int *target, int bit) {
  *target &= ~(1 << bit);
}
170 //===---------------------------------------------------------------------===//
172 Instead of the following for memset char*, 1, 10:
174 movl $16843009, 4(%edx)
175 movl $16843009, (%edx)
178 It might be better to generate
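(presumably something along these lines; a reconstruction, with %eax standing
in for the spare register)

        movl $16843009, %eax
        movl %eax, 4(%edx)
        movl %eax, (%edx)
        movw %ax, 8(%edx)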
185 when we can spare a register. It reduces code size.
187 //===---------------------------------------------------------------------===//
Evaluate what the best way to codegen sdiv X, (2^C) is. For X/8, we currently
generate:
192 define i32 @test1(i32 %X) {
206 GCC knows several different ways to codegen it, one of which is this:
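One such way (reconstructed from memory, so possibly not the exact variant
originally quoted here) avoids the branch with a cmov:

_test1:
        movl 4(%esp), %eax
        cmpl $-1, %eax
        leal 7(%eax), %ecx
        cmovle %ecx, %eax
        sarl $3, %eax
        ret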
216 which is probably slower, but it's interesting at least :)
218 //===---------------------------------------------------------------------===//
We are currently lowering large (1MB+) memmove/memcpy to rep/stosl and rep/movsl.
221 We should leave these as libcalls for everything over a much lower threshold,
222 since libc is hand tuned for medium and large mem ops (avoiding RFO for large
stores, TLB preheating, etc.).
225 //===---------------------------------------------------------------------===//
227 Optimize this into something reasonable:
228 x * copysign(1.0, y) * copysign(1.0, z)
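A sketch of one reasonable target form (an assumption about what "reasonable"
means here; it ignores the sign of NaN results): the two copysign factors only
ever contribute a sign, so the product is x with its sign bit xor'ed with
sign(y) ^ sign(z).

#include <stdint.h>
#include <string.h>

double f(double x, double y, double z) {
  uint64_t xb, yb, zb;
  memcpy(&xb, &x, 8);
  memcpy(&yb, &y, 8);
  memcpy(&zb, &z, 8);
  xb ^= (yb ^ zb) & 0x8000000000000000ULL;  /* flip sign(x) iff sign(y) != sign(z) */
  memcpy(&x, &xb, 8);
  return x;
}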
230 //===---------------------------------------------------------------------===//
232 Optimize copysign(x, *y) to use an integer load from y.
234 //===---------------------------------------------------------------------===//
236 %X = weak global int 0
239 %N = cast int %N to uint
240 %tmp.24 = setgt int %N, 0
241 br bool %tmp.24, label %no_exit, label %return
244 %indvar = phi uint [ 0, %entry ], [ %indvar.next, %no_exit ]
245 %i.0.0 = cast uint %indvar to int
246 volatile store int %i.0.0, int* %X
247 %indvar.next = add uint %indvar, 1
248 %exitcond = seteq uint %indvar.next, %N
249 br bool %exitcond, label %return, label %no_exit
263 jl LBB_foo_4 # return
264 LBB_foo_1: # no_exit.preheader
267 movl L_X$non_lazy_ptr, %edx
271 jne LBB_foo_2 # no_exit
272 LBB_foo_3: # return.loopexit
We should hoist "movl L_X$non_lazy_ptr, %edx" out of the loop after
rematerialization is implemented. This can be accomplished with 1) a target
dependent LICM pass or 2) making SelectionDAG represent the whole function.
280 //===---------------------------------------------------------------------===//
282 The following tests perform worse with LSR:
lambda, siod, optimizer-eval, ackermann, hash2, nestedloop, strcat, and Treesort.
286 //===---------------------------------------------------------------------===//
288 We are generating far worse code than gcc:
294 for (i = 0; i < N; i++) { X = i; Y = i*4; }
297 LBB1_1: # entry.bb_crit_edge
301 movl L_X$non_lazy_ptr, %esi
303 movl L_Y$non_lazy_ptr, %esi
313 movl L_X$non_lazy_ptr-"L00000000001$pb"(%ebx), %esi
314 movl L_Y$non_lazy_ptr-"L00000000001$pb"(%ebx), %ecx
317 leal 0(,%edx,4), %eax
323 This is due to the lack of post regalloc LICM.
325 //===---------------------------------------------------------------------===//
Teach the coalescer to coalesce vregs of different register classes, e.g. FR32 /
FR64.
330 //===---------------------------------------------------------------------===//
Obviously it would have been better for the first mov (or any op) to store
directly to %esp[0] if there are no other uses.
341 //===---------------------------------------------------------------------===//
343 Adding to the list of cmp / test poor codegen issues:
345 int test(__m128 *A, __m128 *B) {
346 if (_mm_comige_ss(*A, *B))
Note the setae, movzbl, cmpl, cmove can be replaced with a single cmovae. There
are a number of issues. 1) We are introducing a setcc between the result of the
intrinsic call and select. 2) The intrinsic is expected to produce an i32 value
so an any_extend (which becomes a zero extend) is added.
371 We probably need some kind of target DAG combine hook to fix this.
373 //===---------------------------------------------------------------------===//
375 We generate significantly worse code for this than GCC:
376 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21150
377 http://gcc.gnu.org/bugzilla/attachment.cgi?id=8701
There is also one case where we do worse on PPC.
381 //===---------------------------------------------------------------------===//
383 If shorter, we should use things like:
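For example (illustrative), a zero-extending move such as

        movzwl %ax, %eax

instead of the two-address

        andl $65535, %eax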
388 The former can also be used when the two-addressy nature of the 'and' would
389 require a copy to be inserted (in X86InstrInfo::convertToThreeAddress).
391 //===---------------------------------------------------------------------===//
393 Another instruction selector deficiency:
396 %tmp = load int (int)** %foo
397 %tmp = tail call int %tmp( int 3 )
403 movl L_foo$non_lazy_ptr, %eax
409 The current isel scheme will not allow the load to be folded in the call since
410 the load's chain result is read by the callseq_start.
412 //===---------------------------------------------------------------------===//
422 imull $3, 4(%esp), %eax
Perhaps this is what we should really generate? Is imull three or four
cycles? Note: ICC generates this:
427 leal (%eax,%eax,2), %eax
429 The current instruction priority is based on pattern complexity. The former is
430 more "complex" because it folds a load so the latter will not be emitted.
432 Perhaps we should use AddedComplexity to give LEA32r a higher priority? We
433 should always try to match LEA first since the LEA matching code does some
434 estimate to determine whether the match is profitable.
436 However, if we care more about code size, then imull is better. It's two bytes
437 shorter than movl + leal.
439 //===---------------------------------------------------------------------===//
441 __builtin_ffs codegen is messy.
443 int ffs_(unsigned X) { return __builtin_ffs(X); }
466 Another example of __builtin_ffs (use predsimplify to eliminate a select):
int foo (unsigned long j) {
  if (j)
    return __builtin_ffs (j) - 1;
  else
    return 0;
}
475 //===---------------------------------------------------------------------===//
It appears gcc places string data with linkonce linkage in
.section __TEXT,__const_coal,coalesced instead of
.section __DATA,__const_coal,coalesced.
Take a look at darwin.h; there are other Darwin assembler directives that we
do not make use of.
483 //===---------------------------------------------------------------------===//
485 define i32 @foo(i32* %a, i32 %t) {
489 cond_true: ; preds = %cond_true, %entry
490 %x.0.0 = phi i32 [ 0, %entry ], [ %tmp9, %cond_true ] ; <i32> [#uses=3]
491 %t_addr.0.0 = phi i32 [ %t, %entry ], [ %tmp7, %cond_true ] ; <i32> [#uses=1]
492 %tmp2 = getelementptr i32* %a, i32 %x.0.0 ; <i32*> [#uses=1]
493 %tmp3 = load i32* %tmp2 ; <i32> [#uses=1]
494 %tmp5 = add i32 %t_addr.0.0, %x.0.0 ; <i32> [#uses=1]
495 %tmp7 = add i32 %tmp5, %tmp3 ; <i32> [#uses=2]
496 %tmp9 = add i32 %x.0.0, 1 ; <i32> [#uses=2]
497 %tmp = icmp sgt i32 %tmp9, 39 ; <i1> [#uses=1]
498 br i1 %tmp, label %bb12, label %cond_true
500 bb12: ; preds = %cond_true
503 is pessimized by -loop-reduce and -indvars
505 //===---------------------------------------------------------------------===//
507 u32 to float conversion improvement:
float uint32_2_float( unsigned u ) {
  float fl = (int) (u & 0xffff);
  float fh = (int) (u >> 16);
  return fh * 65536.0f + fl;
}
516 00000000 subl $0x04,%esp
517 00000003 movl 0x08(%esp,1),%eax
518 00000007 movl %eax,%ecx
519 00000009 shrl $0x10,%ecx
520 0000000c cvtsi2ss %ecx,%xmm0
521 00000010 andl $0x0000ffff,%eax
522 00000015 cvtsi2ss %eax,%xmm1
523 00000019 mulss 0x00000078,%xmm0
524 00000021 addss %xmm1,%xmm0
525 00000025 movss %xmm0,(%esp,1)
526 0000002a flds (%esp,1)
527 0000002d addl $0x04,%esp
530 //===---------------------------------------------------------------------===//
When using the fastcc abi, align the stack slot of a double argument on an
8-byte boundary to improve performance.
535 //===---------------------------------------------------------------------===//
int f(int a, int b) {
  if (a == 4 || a == 6)
    b++;
  return b;
}
552 //===---------------------------------------------------------------------===//
554 GCC's ix86_expand_int_movcc function (in i386.c) has a ton of interesting
555 simplifications for integer "x cmp y ? a : b". For example, instead of:
558 void f(int X, int Y) {
585 int usesbb(unsigned int a, unsigned int b) {
586 return (a < b ? -1 : 0);
600 movl $4294967295, %ecx
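The function name already hints at the better sequence: a cmp followed by sbb,
roughly (illustrative):

_usesbb:
        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        sbbl %eax, %eax
        ret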
604 //===---------------------------------------------------------------------===//
Currently we don't have elimination of redundant stack manipulations. Consider
this code:
611 call fastcc void %test1( )
612 call fastcc void %test2( sbyte* cast (void ()* %test1 to sbyte*) )
616 declare fastcc void %test1()
618 declare fastcc void %test2(sbyte*)
621 This currently compiles to:
The add/sub pair is really unneeded here.
633 //===---------------------------------------------------------------------===//
635 Consider the expansion of:
637 define i32 @test3(i32 %X) {
638 %tmp1 = urem i32 %X, 255
642 Currently it compiles to:
645 movl $2155905153, %ecx
651 This could be "reassociated" into:
653 movl $2155905153, %eax
657 to avoid the copy. In fact, the existing two-address stuff would do this
658 except that mul isn't a commutative 2-addr instruction. I guess this has
to be done at isel time based on the #uses of the mul?
661 //===---------------------------------------------------------------------===//
663 Make sure the instruction which starts a loop does not cross a cacheline
boundary. This requires knowing the exact length of each machine instruction.
665 That is somewhat complicated, but doable. Example 256.bzip2:
667 In the new trace, the hot loop has an instruction which crosses a cacheline
668 boundary. In addition to potential cache misses, this can't help decoding as I
669 imagine there has to be some kind of complicated decoder reset and realignment
670 to grab the bytes from the next cacheline.
672 532 532 0x3cfc movb (1809(%esp, %esi), %bl <<<--- spans 2 64 byte lines
673 942 942 0x3d03 movl %dh, (1809(%esp, %esi)
674 937 937 0x3d0a incl %esi
675 3 3 0x3d0b cmpb %bl, %dl
676 27 27 0x3d0d jnz 0x000062db <main+11707>
678 //===---------------------------------------------------------------------===//
In C99 mode, the preprocessor doesn't like assembly comments like #TRUNCATE.
682 //===---------------------------------------------------------------------===//
684 This could be a single 16-bit load.
int f(char *p) {
  if ((p[0] == 1) & (p[1] == 2)) return 1;
  return 0;
}
691 //===---------------------------------------------------------------------===//
693 We should inline lrintf and probably other libc functions.
695 //===---------------------------------------------------------------------===//
697 Start using the flags more. For example, compile:
699 int add_zf(int *x, int y, int a, int b) {
723 int add_zf(int *x, int y, int a, int b) {
747 //===---------------------------------------------------------------------===//
749 These two functions have identical effects:
751 unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
752 unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
754 We currently compile them to:
762 jne LBB1_2 #UnifiedReturnBlock
766 LBB1_2: #UnifiedReturnBlock
776 leal 1(%ecx,%eax), %eax
779 both of which are inferior to GCC's:
797 //===---------------------------------------------------------------------===//
805 is currently compiled to:
816 It would be better to produce:
825 This can be applied to any no-return function call that takes no arguments etc.
826 Alternatively, the stack save/restore logic could be shrink-wrapped, producing
837 Both are useful in different situations. Finally, it could be shrink-wrapped
838 and tail called, like this:
845 pop %eax # realign stack.
848 Though this probably isn't worth it.
850 //===---------------------------------------------------------------------===//
852 We need to teach the codegen to convert two-address INC instructions to LEA
853 when the flags are dead (likewise dec). For example, on X86-64, compile:
855 int foo(int A, int B) {
874 ;; X's live range extends beyond the shift, so the register allocator
875 ;; cannot coalesce it with Y. Because of this, a copy needs to be
876 ;; emitted before the shift to save the register value before it is
877 ;; clobbered. However, this copy is not needed if the register
878 ;; allocator turns the shift into an LEA. This also occurs for ADD.
880 ; Check that the shift gets turned into an LEA.
881 ; RUN: llvm-as < %s | llc -march=x86 -x86-asm-syntax=intel | \
882 ; RUN: not grep {mov E.X, E.X}
884 @G = external global i32 ; <i32*> [#uses=3]
886 define i32 @test1(i32 %X, i32 %Y) {
887 %Z = add i32 %X, %Y ; <i32> [#uses=1]
888 volatile store i32 %Y, i32* @G
889 volatile store i32 %Z, i32* @G
893 define i32 @test2(i32 %X) {
894 %Z = add i32 %X, 1 ; <i32> [#uses=1]
895 volatile store i32 %Z, i32* @G
899 //===---------------------------------------------------------------------===//
901 Sometimes it is better to codegen subtractions from a constant (e.g. 7-x) with
902 a neg instead of a sub instruction. Consider:
904 int test(char X) { return 7-X; }
906 we currently produce:
913 We would use one fewer register if codegen'd as:
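Presumably something like this (a reconstruction):

_test:
        movsbl 4(%esp), %eax
        negl %eax
        addl $7, %eax
        ret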
920 Note that this isn't beneficial if the load can be folded into the sub. In
921 this case, we want a sub:
923 int test(int X) { return 7-X; }
929 //===---------------------------------------------------------------------===//
931 Leaf functions that require one 4-byte spill slot have a prolog like this:
937 and an epilog like this:
942 It would be smaller, and potentially faster, to push eax on entry and to
943 pop into a dummy register instead of using addl/subl of esp. Just don't pop
944 into any return registers :)
946 //===---------------------------------------------------------------------===//
948 The X86 backend should fold (branch (or (setcc, setcc))) into multiple
949 branches. We generate really poor code for:
951 double testf(double a) {
952 return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
955 For example, the entry BB is:
960 movsd 24(%esp), %xmm1
965 jne LBB1_5 # UnifiedReturnBlock
969 it would be better to replace the last four instructions with:
975 We also codegen the inner ?: into a diamond:
977 cvtss2sd LCPI1_0(%rip), %xmm2
978 cvtss2sd LCPI1_1(%rip), %xmm3
980 ja LBB1_3 # cond_true
987 We should sink the load into xmm3 into the LBB1_2 block. This should
988 be pretty easy, and will nuke all the copies.
990 //===---------------------------------------------------------------------===//
994 inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
995 { return std::make_pair(a + b, a + b < a); }
996 bool no_overflow(unsigned a, unsigned b)
997 { return !full_add(a, b).second; }
1007 FIXME: That code looks wrong; bool return is normally defined as zext.
1019 //===---------------------------------------------------------------------===//
1021 Re-materialize MOV32r0 etc. with xor instead of changing them to moves if the
1022 condition register is dead. xor reg reg is shorter than mov reg, #0.
1024 //===---------------------------------------------------------------------===//
1026 We aren't matching RMW instructions aggressively
1027 enough. Here's a reduced testcase (more in PR1160):
1029 define void @test(i32* %huge_ptr, i32* %target_ptr) {
1030 %A = load i32* %huge_ptr ; <i32> [#uses=1]
1031 %B = load i32* %target_ptr ; <i32> [#uses=1]
1032 %C = or i32 %A, %B ; <i32> [#uses=1]
1033 store i32 %C, i32* %target_ptr
1037 $ llvm-as < t.ll | llc -march=x86-64
1045 That should be something like:
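Roughly (illustrative x86-64; %rdi and %rsi hold %huge_ptr and %target_ptr):

_test:
        movl (%rdi), %eax
        orl %eax, (%rsi)
        ret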
1052 //===---------------------------------------------------------------------===//
1056 bb114.preheader: ; preds = %cond_next94
1057 %tmp231232 = sext i16 %tmp62 to i32 ; <i32> [#uses=1]
1058 %tmp233 = sub i32 32, %tmp231232 ; <i32> [#uses=1]
1059 %tmp245246 = sext i16 %tmp65 to i32 ; <i32> [#uses=1]
1060 %tmp252253 = sext i16 %tmp68 to i32 ; <i32> [#uses=1]
1061 %tmp254 = sub i32 32, %tmp252253 ; <i32> [#uses=1]
1062 %tmp553554 = bitcast i16* %tmp37 to i8* ; <i8*> [#uses=2]
1063 %tmp583584 = sext i16 %tmp98 to i32 ; <i32> [#uses=1]
1064 %tmp585 = sub i32 32, %tmp583584 ; <i32> [#uses=1]
1065 %tmp614615 = sext i16 %tmp101 to i32 ; <i32> [#uses=1]
1066 %tmp621622 = sext i16 %tmp104 to i32 ; <i32> [#uses=1]
1067 %tmp623 = sub i32 32, %tmp621622 ; <i32> [#uses=1]
1072 LBB3_5: # bb114.preheader
1073 movswl -68(%ebp), %eax
1075 movl %ecx, -80(%ebp)
1076 subl %eax, -80(%ebp)
1077 movswl -52(%ebp), %eax
1078 movl %ecx, -84(%ebp)
1079 subl %eax, -84(%ebp)
1080 movswl -70(%ebp), %eax
1081 movl %ecx, -88(%ebp)
1082 subl %eax, -88(%ebp)
1083 movswl -50(%ebp), %eax
1085 movl %ecx, -76(%ebp)
1086 movswl -42(%ebp), %eax
1087 movl %eax, -92(%ebp)
1088 movswl -66(%ebp), %eax
1089 movl %eax, -96(%ebp)
1092 This appears to be bad because the RA is not folding the store to the stack
1093 slot into the movl. The above instructions could be:
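For instance (assuming, as the IR above suggests, that %ecx simply holds the
constant 32 at this point):

        movl $32, -80(%ebp)
        subl %eax, -80(%ebp)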
1098 This seems like a cross between remat and spill folding.
1100 This has redundant subtractions of %eax from a stack slot. However, %ecx doesn't
change, so we could simply subtract %eax from %ecx first and then use %ecx (or
vice versa).
1104 //===---------------------------------------------------------------------===//
1108 cond_next603: ; preds = %bb493, %cond_true336, %cond_next599
1109 %v.21050.1 = phi i32 [ %v.21050.0, %cond_next599 ], [ %tmp344, %cond_true336 ], [ %v.2, %bb493 ] ; <i32> [#uses=1]
1110 %maxz.21051.1 = phi i32 [ %maxz.21051.0, %cond_next599 ], [ 0, %cond_true336 ], [ %maxz.2, %bb493 ] ; <i32> [#uses=2]
1111 %cnt.01055.1 = phi i32 [ %cnt.01055.0, %cond_next599 ], [ 0, %cond_true336 ], [ %cnt.0, %bb493 ] ; <i32> [#uses=2]
1112 %byteptr.9 = phi i8* [ %byteptr.12, %cond_next599 ], [ %byteptr.0, %cond_true336 ], [ %byteptr.10, %bb493 ] ; <i8*> [#uses=9]
1113 %bitptr.6 = phi i32 [ %tmp5571104.1, %cond_next599 ], [ %tmp4921049, %cond_true336 ], [ %bitptr.7, %bb493 ] ; <i32> [#uses=4]
1114 %source.5 = phi i32 [ %tmp602, %cond_next599 ], [ %source.0, %cond_true336 ], [ %source.6, %bb493 ] ; <i32> [#uses=7]
1115 %tmp606 = getelementptr %struct.const_tables* @tables, i32 0, i32 0, i32 %cnt.01055.1 ; <i8*> [#uses=1]
1116 %tmp607 = load i8* %tmp606, align 1 ; <i8> [#uses=1]
1120 LBB4_70: # cond_next603
1121 movl -20(%ebp), %esi
1122 movl L_tables$non_lazy_ptr-"L4$pb"(%esi), %esi
1124 However, ICC caches this information before the loop and produces this:
1126 movl 88(%esp), %eax #481.12
1128 //===---------------------------------------------------------------------===//
1132 %tmp659 = icmp slt i16 %tmp654, 0 ; <i1> [#uses=1]
1133 br i1 %tmp659, label %cond_true662, label %cond_next715
1139 jns LBB4_109 # cond_next715
1141 Shark tells us that using %cx in the testw instruction is sub-optimal. It
1142 suggests using the 32-bit register (which is what ICC uses).
1144 //===---------------------------------------------------------------------===//
1148 void compare (long long foo) {
1149 if (foo < 4294967297LL)
1165 jne .LBB1_2 # UnifiedReturnBlock
1168 .LBB1_2: # UnifiedReturnBlock
1172 (also really horrible code on ppc). This is due to the expand code for 64-bit
1173 compares. GCC produces multiple branches, which is much nicer:
1194 //===---------------------------------------------------------------------===//
1196 Tail call optimization improvements: Tail call optimization currently
1197 pushes all arguments on the top of the stack (their normal place for
non-tail call optimized calls) that source from the caller's arguments
or that source from a virtual register (also possibly sourcing from the
caller's arguments).
This is done to prevent overwriting of parameters (see example
below) that might be used later.
1206 int callee(int32, int64);
1207 int caller(int32 arg1, int32 arg2) {
1208 int64 local = arg2 * 2;
1209 return callee(arg2, (int64)local);
1212 [arg1] [!arg2 no longer valid since we moved local onto it]
Moving arg1 onto the stack slot of the callee function would overwrite
arg2 of the caller.
1219 Possible optimizations:
1222 - Analyse the actual parameters of the callee to see which would
1223 overwrite a caller parameter which is used by the callee and only
1224 push them onto the top of the stack.
1226 int callee (int32 arg1, int32 arg2);
1227 int caller (int32 arg1, int32 arg2) {
1228 return callee(arg1,arg2);
1231 Here we don't need to write any variables to the top of the stack
1232 since they don't overwrite each other.
1234 int callee (int32 arg1, int32 arg2);
1235 int caller (int32 arg1, int32 arg2) {
1236 return callee(arg2,arg1);
Here we need to push the arguments because they overwrite each other.
1242 //===---------------------------------------------------------------------===//
1247 unsigned long int z = 0;
1258 gcc compiles this to:
1284 jge LBB1_4 # cond_true
1287 addl $4294950912, %ecx
1297 1. LSR should rewrite the first cmp with induction variable %ecx.
1298 2. DAG combiner should fold
1304 //===---------------------------------------------------------------------===//
1306 define i64 @test(double %X) {
1307 %Y = fptosi double %X to i64
1315 movsd 24(%esp), %xmm0
1316 movsd %xmm0, 8(%esp)
1325 This should just fldl directly from the input stack slot.
1327 //===---------------------------------------------------------------------===//
1330 int foo (int x) { return (x & 65535) | 255; }
1332 Should compile into:
1335 movzwl 4(%esp), %eax
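        # presumably followed by something like this (reconstructed, not verified):
        orl $255, %eax
        ret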
1346 //===---------------------------------------------------------------------===//
1348 We're codegen'ing multiply of long longs inefficiently:
1350 unsigned long long LLM(unsigned long long arg1, unsigned long long arg2) {
1354 We compile to (fomit-frame-pointer):
1362 imull 12(%esp), %esi
1364 imull 20(%esp), %ecx
1370 This looks like a scheduling deficiency and lack of remat of the load from
1371 the argument area. ICC apparently produces:
1374 imull 12(%esp), %ecx
1383 Note that it remat'd loads from 4(esp) and 12(esp). See this GCC PR:
1384 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17236
1386 //===---------------------------------------------------------------------===//
We can fold a store into "zeroing a reg". Instead of a xorl %eax, %eax followed
by
        movl %eax, 124(%esp)
we should generate movl $0, 124(%esp) directly, if the flags of the xor are
dead.
1399 Likewise, we isel "x<<1" into "add reg,reg". If reg is spilled, this should
1400 be folded into: shl [mem], 1
1402 //===---------------------------------------------------------------------===//
1404 This testcase misses a read/modify/write opportunity (from PR1425):
1406 void vertical_decompose97iH1(int *b0, int *b1, int *b2, int width){
1408 for(i=0; i<width; i++)
1409 b1[i] += (1*(b0[i] + b2[i])+0)>>0;
1412 We compile it down to:
1415 movl (%esi,%edi,4), %ebx
1416 addl (%ecx,%edi,4), %ebx
1417 addl (%edx,%edi,4), %ebx
1418 movl %ebx, (%ecx,%edi,4)
1423 the inner loop should add to the memory location (%ecx,%edi,4), saving
1424 a mov. Something like:
1426 movl (%esi,%edi,4), %ebx
1427 addl (%edx,%edi,4), %ebx
1428 addl %ebx, (%ecx,%edi,4)
1430 Here is another interesting example:
1432 void vertical_compose97iH1(int *b0, int *b1, int *b2, int width){
1434 for(i=0; i<width; i++)
1435 b1[i] -= (1*(b0[i] + b2[i])+0)>>0;
1438 We miss the r/m/w opportunity here by using 2 subs instead of an add+sub[mem]:
1441 movl (%ecx,%edi,4), %ebx
1442 subl (%esi,%edi,4), %ebx
1443 subl (%edx,%edi,4), %ebx
1444 movl %ebx, (%ecx,%edi,4)
1449 Additionally, LSR should rewrite the exit condition of these loops to use
a stride-4 IV, which would allow all the scales in the loop to go away.
1451 This would result in smaller code and more efficient microops.
1453 //===---------------------------------------------------------------------===//
In SSE mode, we turn abs and neg into a load from the constant pool plus an xor
or an and instruction, for example:
1458 xorpd LCPI1_0, %xmm2
1460 However, if xmm2 gets spilled, we end up with really ugly code like this:
1463 xorpd LCPI1_0, %xmm0
1466 Since we 'know' that this is a 'neg', we can actually "fold" the spill into
1467 the neg/abs instruction, turning it into an *integer* operation, like this:
1469 xorl 2147483648, [mem+4] ## 2147483648 = (1 << 31)
1471 you could also use xorb, but xorl is less likely to lead to a partial register
1472 stall. Here is a contrived testcase:
1475 void test(double *P) {
1485 //===---------------------------------------------------------------------===//
Handling llvm.memory.barrier on pre-SSE2 CPUs:
1490 lock ; mov %esp, %esp
1492 //===---------------------------------------------------------------------===//
The code generated on x86 for checking for signed overflow on a multiply, done
the obvious way, is much longer than it needs to be:

int x(int a, int b) {
  long long prod = (long long)a*b;
  return prod > 0x7FFFFFFF || prod < (-0x7FFFFFFF-1);
}
1502 See PR2053 for more details.
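For reference, a much shorter sequence is possible because the one-operand
imull sets OF exactly when the 64-bit product does not fit in a signed 32-bit
value (a sketch, not what we currently emit):

_x:
        movl 4(%esp), %eax
        imull 8(%esp)           # edx:eax = a*b, OF/CF set on signed overflow
        seto %al
        movzbl %al, %eax
        ret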
1504 //===---------------------------------------------------------------------===//
We should investigate using cdq/cltd (effect: edx = sar eax, 31)
1507 more aggressively; it should cost the same as a move+shift on any modern
1508 processor, but it's a lot shorter. Downside is that it puts more
1509 pressure on register allocation because it has fixed operands.
1512 int abs(int x) {return x < 0 ? -x : x;}
1514 gcc compiles this to the following when using march/mtune=pentium2/3/4/m/etc.:
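Presumably the classic cltd-based sequence, roughly (a reconstruction, not
verified gcc output):

_abs:
        movl 4(%esp), %eax
        cltd
        xorl %edx, %eax
        subl %edx, %eax
        ret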
1522 //===---------------------------------------------------------------------===//
1525 int test(unsigned long a, unsigned long b) { return -(a < b); }
1527 We currently compile this to:
1529 define i32 @test(i32 %a, i32 %b) nounwind {
1530 %tmp3 = icmp ult i32 %a, %b ; <i1> [#uses=1]
1531 %tmp34 = zext i1 %tmp3 to i32 ; <i32> [#uses=1]
1532 %tmp5 = sub i32 0, %tmp34 ; <i32> [#uses=1]
1546 Several deficiencies here. First, we should instcombine zext+neg into sext:
1548 define i32 @test2(i32 %a, i32 %b) nounwind {
1549 %tmp3 = icmp ult i32 %a, %b ; <i1> [#uses=1]
1550 %tmp34 = sext i1 %tmp3 to i32 ; <i32> [#uses=1]
However, before we can do that, we have to fix the bad codegen that we get for
sext from bool.
1566 This code should be at least as good as the code above. Once this is fixed, we
1567 can optimize this specific case even more to:
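Namely the same cmp + sbb idiom as in the usesbb example above (illustrative):

_test:
        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        sbbl %eax, %eax
        ret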
1574 //===---------------------------------------------------------------------===//
1576 Take the following code (from
1577 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16541):
1579 extern unsigned char first_one[65536];
1580 int FirstOnet(unsigned long long arg1)
1583 return (first_one[arg1 >> 48]);
1588 The following code is currently generated:
1593 jb .LBB1_2 # UnifiedReturnBlock
1596 movzbl first_one(%eax), %eax
1598 .LBB1_2: # UnifiedReturnBlock
1602 There are a few possible improvements here:
1603 1. We should be able to eliminate the dead load into %ecx
1604 2. We could change the "movl 8(%esp), %eax" into
1605 "movzwl 10(%esp), %eax"; this lets us change the cmpl
1606 into a testl, which is shorter, and eliminate the shift.
1608 We could also in theory eliminate the branch by using a conditional
for the address of the load, but that seems unlikely to be worthwhile in
general.
1612 //===---------------------------------------------------------------------===//
1614 We compile this function:
1616 define i32 @foo(i32 %a, i32 %b, i32 %c, i8 zeroext %d) nounwind {
1618 %tmp2 = icmp eq i8 %d, 0 ; <i1> [#uses=1]
1619 br i1 %tmp2, label %bb7, label %bb
1621 bb: ; preds = %entry
1622 %tmp6 = add i32 %b, %a ; <i32> [#uses=1]
1625 bb7: ; preds = %entry
1626 %tmp10 = sub i32 %a, %c ; <i32> [#uses=1]
1646 The coalescer could coalesce "edx" with "eax" to avoid the movl in LBB1_2
1647 if it commuted the addl in LBB1_1.
1649 //===---------------------------------------------------------------------===//
1651 These two functions perform identical operations:
1653 define i32 @test(i32 %f12) {
1654 %tmp7.25 = lshr i32 %f12, 16
1655 %tmp7.26 = trunc i32 %tmp7.25 to i8
1656 %tmp78.2 = sext i8 %tmp7.26 to i32
1660 define i32 @test2(i32 %f12) {
1661 %f11 = shl i32 %f12, 8
1662 %tmp7.25 = ashr i32 %f11, 24
1666 but the first compiles into significantly better code on x86-32:
1669 movsbl 6(%esp), %eax
1689 I would like instcombine to canonicalize the first into the second (since it is
1690 shorter and doesn't involve type width changes) but the x86 backend needs to do
the right thing with the latter sequence first.
1693 //===---------------------------------------------------------------------===//