//===---------------------------------------------------------------------===//
// Random ideas for the X86 backend.
//===---------------------------------------------------------------------===//

- Support for SSE4: http://www.intel.com/software/penryn
  http://softwarecommunity.intel.com/isn/Downloads/Intel%20SSE4%20Programming%20Reference.pdf
//===---------------------------------------------------------------------===//

CodeGen/X86/lea-3.ll:test3 should be a single LEA, not a shift/move. The X86
backend knows how to three-addressify this shift, but it appears the register
allocator isn't even asking it to do so in this case. We should investigate
why this isn't happening; it could have a significant impact on other important
cases for X86 as well.
//===---------------------------------------------------------------------===//

This should be one DIV/IDIV instruction, not a libcall:

unsigned test(unsigned long long X, unsigned Y) {
        return X/Y;
}

This can be done trivially with a custom legalizer. What about overflow
though? http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14224

//===---------------------------------------------------------------------===//
Improvements to the multiply -> shift/add algorithm:
http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html
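
As a concrete instance of the kind of decomposition involved (a sketch; the
constant and the LEA forms are illustrative, not taken from the patch above):

/* x * 45 = (x * 9) * 5: two LEAs instead of an imull.
     leal (%eax,%eax,8), %eax    ; x * 9
     leal (%eax,%eax,4), %eax    ; (x * 9) * 5  */
int mul45(int x) {
  int t = x * 9;
  return t * 5;
}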
//===---------------------------------------------------------------------===//

Improve code like this (occurs fairly frequently, e.g. in LLVM):

long long foo(int x) { return 1LL << x; }

http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html

Another useful one would be ~0ULL >> X and ~0ULL << X.

One better solution for 1LL << x is:
        xorl    %eax, %eax
        xorl    %edx, %edx
        testb   $32, %cl
        sete    %al
        setne   %dl
        sall    %cl, %eax
        sall    %cl, %edx

But that requires good 8-bit subreg support.

64-bit shifts (in general) expand to really bad code. Instead of using
cmovs, we should expand to a conditional branch like GCC produces.
//===---------------------------------------------------------------------===//

Compile this:

_Bool f(_Bool a) { return a!=1; }

into:

        movzbl  %dil, %eax
        xorl    $1, %eax
        ret

//===---------------------------------------------------------------------===//
Some isel ideas:

1. Dynamic programming based approach when compile time is not an
   issue.
2. Code duplication (addressing mode) during isel.
3. Other ideas from "Register-Sensitive Selection, Duplication, and
   Sequencing of Instructions".
4. Scheduling for reduced register pressure. E.g. "Minimum Register
   Instruction Sequence Problem: Revisiting Optimal Code Generation for DAGs"
   and other related papers.
   http://citeseer.ist.psu.edu/govindarajan01minimum.html
//===---------------------------------------------------------------------===//

Should we promote i16 to i32 to avoid partial register update stalls?
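
A sketch of the code in question (hypothetical example, not from a testcase):

short f(short a, short b) { return a + b; }

A 16-bit addw writes only %ax, leaving the upper bits of %eax live from an
older instruction; processors that rename partial registers may stall merging
them. Doing the arithmetic as i32 (addl) and truncating only at the use avoids
the stall.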
//===---------------------------------------------------------------------===//

Leave any_extend as a pseudo instruction and hint to the register
allocator. Delay codegen until post register allocation.
Note: any_extend is now turned into an INSERT_SUBREG. We still need to teach
the coalescer how to deal with it, though.
//===---------------------------------------------------------------------===//

Count leading zeros and count trailing zeros:

int clz(int X) { return __builtin_clz(X); }
int ctz(int X) { return __builtin_ctz(X); }

$ gcc t.c -S -o - -O3 -fomit-frame-pointer -masm=intel
clz:
        bsr     %eax, DWORD PTR [%esp+4]
        xor     %eax, 31
        ret
ctz:
        bsf     %eax, DWORD PTR [%esp+4]
        ret

however, check that these are defined for 0 and 32. Our intrinsics are, GCC's
aren't.

Another example (use predsimplify to eliminate a select):

int foo (unsigned long j) {
  if (j)
    return __builtin_ffs (j) - 1;
  else
    return 0;
}

//===---------------------------------------------------------------------===//
It appears icc uses push for parameter passing. We need to investigate this.

//===---------------------------------------------------------------------===//
Only use inc/neg/not instructions on processors where they are faster than
add/sub/xor. They are slower on the P4 due to only updating some processor
flags.

//===---------------------------------------------------------------------===//
The instruction selector sometimes misses folding a load into a compare. The
pattern is written as (cmp reg, (load p)). Because the compare isn't
commutative, it is not matched with the load on both sides. The dag combiner
should be made smart enough to canonicalize the load into the RHS of a compare
when it can invert the result of the compare for free.
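
A sketch of the two directions (hypothetical source):

int test1(int x, int *p) { return x > *p; }  /* load already on the RHS: folds */
int test2(int x, int *p) { return *p > x; }  /* (cmp (load p), x): swap the
                                                operands and invert the
                                                condition to fold the load */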
//===---------------------------------------------------------------------===//

How about intrinsics? An example is:

  *res = _mm_mulhi_epu16(*A, _mm_mul_epu32(*B, *C));

This compiles (in part) to:

        pmuludq (%eax), %xmm0

The transformation probably requires an X86-specific pass or a DAG combiner
target-specific hook.
//===---------------------------------------------------------------------===//

In many cases, LLVM generates code like this:

        movl 8(%esp), %eax
        cmpl $1, %eax
        setg %al
        movzbl %al, %eax
        ret

on some processors (which ones?), it is more efficient to do this:

        xorl %eax, %eax
        movl 8(%esp), %ecx
        cmpl $1, %ecx
        setg %al
        ret

Doing this correctly is tricky though, as the xor clobbers the flags.

//===---------------------------------------------------------------------===//
We should generate bts/btr/etc instructions on targets where they are cheap or
when codesize is important. e.g., for:

void setbit(int *target, int bit) {
   *target |= (1 << bit);
}
void clearbit(int *target, int bit) {
   *target &= ~(1 << bit);
}

//===---------------------------------------------------------------------===//
Instead of the following for memset char*, 1, 10:

        movl $16843009, 4(%edx)
        movl $16843009, (%edx)
        movw $257, 8(%edx)

it might be better to generate

        movl $16843009, %eax
        movl %eax, 4(%edx)
        movl %eax, (%edx)
        movw %ax, 8(%edx)

when we can spare a register. It reduces code size.

//===---------------------------------------------------------------------===//
Evaluate what the best way to codegen sdiv X, (2^C) is. For X/8, we currently
generate the usual four-instruction sign-fixup sequence. GCC knows several
different ways to codegen it; one of them is probably slower, but it's
interesting at least :)
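
For reference, a sketch of what the fixup computes (assuming arithmetic right
shifts on negative ints):

int div8(int x) {
  /* C division truncates toward zero; a bare sar rounds toward -infinity,
     so negative values need a +7 bias before the shift. */
  return (x + ((x >> 31) & 7)) >> 3;
}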
//===---------------------------------------------------------------------===//

The first BB of this code:

        %V = call bool %foo()
        br bool %V, label %T, label %F

It would be better to emit "cmp %al, 1" than a xor and test.

//===---------------------------------------------------------------------===//
We are currently lowering large (1MB+) memmove/memcpy to rep/stosl and
rep/movsl. We should leave these as libcalls for everything over a much lower
threshold, since libc is hand tuned for medium and large mem ops (avoiding RFO
for large stores, TLB preheating, etc.).

//===---------------------------------------------------------------------===//
Optimize this into something reasonable:
 x * copysign(1.0, y) * copysign(1.0, z)
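
A sketch of what "reasonable" could mean here (assuming IEEE-754 doubles;
illustrative only): the two copysigns contribute nothing but their sign bits,
so the whole product is x with its sign bit xor'ed by the sign bits of y and z.

#include <stdint.h>
#include <string.h>
double f(double x, double y, double z) {
  uint64_t xb, yb, zb;
  memcpy(&xb, &x, 8); memcpy(&yb, &y, 8); memcpy(&zb, &z, 8);
  xb ^= (yb ^ zb) & 0x8000000000000000ULL;  /* flip sign(x) by sign(y)^sign(z) */
  memcpy(&x, &xb, 8);
  return x;
}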
//===---------------------------------------------------------------------===//

Optimize copysign(x, *y) to use an integer load from y.
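
A sketch of the idea (assuming little-endian IEEE-754 doubles; the helper is
hypothetical): only the sign bit of *y matters, so an integer load of its high
word avoids bringing *y into an FP register at all.

#include <stdint.h>
#include <string.h>
double cs(double x, const double *y) {
  uint32_t hi;
  memcpy(&hi, (const char *)y + 4, 4);  /* integer load of the sign-bit word */
  uint64_t xb;
  memcpy(&xb, &x, 8);
  xb = (xb & ~(1ULL << 63)) | ((uint64_t)(hi & 0x80000000u) << 32);
  memcpy(&x, &xb, 8);
  return x;
}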
//===---------------------------------------------------------------------===//

%X = weak global int 0

void %foo(int %N) {
entry:
        %N = cast int %N to uint
        %tmp.24 = setgt int %N, 0
        br bool %tmp.24, label %no_exit, label %return

no_exit:
        %indvar = phi uint [ 0, %entry ], [ %indvar.next, %no_exit ]
        %i.0.0 = cast uint %indvar to int
        volatile store int %i.0.0, int* %X
        %indvar.next = add uint %indvar, 1
        %exitcond = seteq uint %indvar.next, %N
        br bool %exitcond, label %return, label %no_exit

compiles into:

        jl LBB_foo_4    # return
LBB_foo_1:      # no_exit.preheader
        movl L_X$non_lazy_ptr, %edx
        jne LBB_foo_2   # no_exit
LBB_foo_3:      # return.loopexit

We should hoist "movl L_X$non_lazy_ptr, %edx" out of the loop after
rematerialization is implemented. This can be accomplished with 1) a target
dependent LICM pass or 2) making SelectionDAG represent the whole function.
//===---------------------------------------------------------------------===//

The following tests perform worse with LSR:

lambda, siod, optimizer-eval, ackermann, hash2, nestedloop, strcat, and Treesor.
//===---------------------------------------------------------------------===//

We are generating far worse code than gcc:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

We produce, with the address loads inside the loop:

LBB1_1: # entry.bb_crit_edge
        movl L_X$non_lazy_ptr, %esi
        movl L_Y$non_lazy_ptr, %esi

vs. gcc, which hoists the address computations out of the loop:

        movl L_X$non_lazy_ptr-"L00000000001$pb"(%ebx), %esi
        movl L_Y$non_lazy_ptr-"L00000000001$pb"(%ebx), %ecx

        leal 0(,%edx,4), %eax

This is due to the lack of post regalloc LICM.
//===---------------------------------------------------------------------===//

Teach the coalescer to coalesce vregs of different register classes, e.g.
FR32 / FR64.

//===---------------------------------------------------------------------===//
Obviously it would have been better for the first mov (or any op) to store
directly to %esp[0] if there are no other uses.

//===---------------------------------------------------------------------===//
Adding to the list of cmp / test poor codegen issues:

int test(__m128 *A, __m128 *B) {
  if (_mm_comige_ss(*A, *B))
    return 3;
  else
    return 4;
}

Note the setae, movzbl, cmpl, cmove can be replaced with a single cmovae. There
are a number of issues. 1) We are introducing a setcc between the result of the
intrinsic call and select. 2) The intrinsic is expected to produce an i32 value
so an any_extend (which becomes a zero extend) is added.

We probably need some kind of target DAG combine hook to fix this.
//===---------------------------------------------------------------------===//

We generate significantly worse code for this than GCC:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21150
http://gcc.gnu.org/bugzilla/attachment.cgi?id=8701

There is also one case we do worse on PPC.

//===---------------------------------------------------------------------===//
If shorter, we should use things like:

        movzwl %ax, %eax

instead of:

        andl $65535, %eax

The former can also be used when the two-addressy nature of the 'and' would
require a copy to be inserted (in X86InstrInfo::convertToThreeAddress).
//===---------------------------------------------------------------------===//

Another instruction selector deficiency:

void %bar() {
        %tmp = load int (int)** %foo
        %tmp = tail call int %tmp( int 3 )
        ret void
}

_bar:
        subl $12, %esp
        movl L_foo$non_lazy_ptr, %eax
        movl (%eax), %eax
        call *%eax
        addl $12, %esp
        ret

The current isel scheme will not allow the load to be folded in the call since
the load's chain result is read by the callseq_start.

//===---------------------------------------------------------------------===//
For a multiply by 3, we currently emit:

        imull $3, 4(%esp), %eax

Perhaps this is what we really should generate. Is imull three or four
cycles? Note: ICC generates this:

        movl 4(%esp), %eax
        leal (%eax,%eax,2), %eax

The current instruction priority is based on pattern complexity. The former is
more "complex" because it folds a load, so the latter will not be emitted.

Perhaps we should use AddedComplexity to give LEA32r a higher priority? We
should always try to match LEA first since the LEA matching code does some
estimation to determine whether the match is profitable.

However, if we care more about code size, then imull is better. It's two bytes
shorter than movl + leal.

//===---------------------------------------------------------------------===//
//===---------------------------------------------------------------------===//

Implement CTTZ, CTLZ with bsf and bsr. GCC produces good code for:

int ctz_(unsigned X) { return __builtin_ctz(X); }
int clz_(unsigned X) { return __builtin_clz(X); }
int ffs_(unsigned X) { return __builtin_ffs(X); }

//===---------------------------------------------------------------------===//
It appears gcc places string data with linkonce linkage in
.section __TEXT,__const_coal,coalesced instead of
.section __DATA,__const_coal,coalesced.
Take a look at darwin.h; there are other Darwin assembler directives that we
do not make use of.

//===---------------------------------------------------------------------===//
define i32 @foo(i32* %a, i32 %t) {
entry:
        br label %cond_true

cond_true:              ; preds = %cond_true, %entry
        %x.0.0 = phi i32 [ 0, %entry ], [ %tmp9, %cond_true ]          ; <i32> [#uses=3]
        %t_addr.0.0 = phi i32 [ %t, %entry ], [ %tmp7, %cond_true ]    ; <i32> [#uses=1]
        %tmp2 = getelementptr i32* %a, i32 %x.0.0               ; <i32*> [#uses=1]
        %tmp3 = load i32* %tmp2         ; <i32> [#uses=1]
        %tmp5 = add i32 %t_addr.0.0, %x.0.0             ; <i32> [#uses=1]
        %tmp7 = add i32 %tmp5, %tmp3            ; <i32> [#uses=2]
        %tmp9 = add i32 %x.0.0, 1               ; <i32> [#uses=2]
        %tmp = icmp sgt i32 %tmp9, 39           ; <i1> [#uses=1]
        br i1 %tmp, label %bb12, label %cond_true

bb12:           ; preds = %cond_true
        ret i32 %tmp7
}

is pessimized by -loop-reduce and -indvars

//===---------------------------------------------------------------------===//
u32 to float conversion improvement:

float uint32_2_float( unsigned u ) {
  float fl = (int) (u & 0xffff);
  float fh = (int) (u >> 16);
  // Recombine the two halves: fl + fh * 2^16.
  return fl + fh * 65536.0f;
}

00000000        subl    $0x04,%esp
00000003        movl    0x08(%esp,1),%eax
00000007        movl    %eax,%ecx
00000009        shrl    $0x10,%ecx
0000000c        cvtsi2ss        %ecx,%xmm0
00000010        andl    $0x0000ffff,%eax
00000015        cvtsi2ss        %eax,%xmm1
00000019        mulss   0x00000078,%xmm0
00000021        addss   %xmm1,%xmm0
00000025        movss   %xmm0,(%esp,1)
0000002a        flds    (%esp,1)
0000002d        addl    $0x04,%esp

//===---------------------------------------------------------------------===//
When using the fastcc ABI, align stack slots of arguments of type double on an
8-byte boundary to improve performance.

//===---------------------------------------------------------------------===//
int f(int a, int b) {
  if (a == 4 || a == 6)
    b++;
  return b;
}

//===---------------------------------------------------------------------===//
GCC's ix86_expand_int_movcc function (in i386.c) has a ton of interesting
simplifications for integer "x cmp y ? a : b". For example, instead of:

void f(int X, int Y) {
  ...
}

Another is:

int usesbb(unsigned int a, unsigned int b) {
  return (a < b ? -1 : 0);
}

for which we currently materialize the -1 in a cmov sequence:

        movl $4294967295, %ecx

instead of using a compare plus sbb.

//===---------------------------------------------------------------------===//
Currently we don't have elimination of redundant stack manipulations. Consider:

        call fastcc void %test1( )
        call fastcc void %test2( sbyte* cast (void ()* %test1 to sbyte*) )

declare fastcc void %test1()
declare fastcc void %test2(sbyte*)

This currently compiles to code that adjusts %esp around each call with an
add/sub pair. The add/sub pair is really unneeded here.

//===---------------------------------------------------------------------===//
Consider the expansion of:

define i32 @test3(i32 %X) {
        %tmp1 = urem i32 %X, 255
        ret i32 %tmp1
}

Currently it compiles to:

...
        movl $2155905153, %ecx
        movl 8(%esp), %esi
        movl %esi, %eax
        mull %ecx
...

This could be "reassociated" into:

        movl $2155905153, %eax
        movl 8(%esp), %ecx
        mull %ecx

to avoid the copy. In fact, the existing two-address stuff would do this
except that mul isn't a commutative 2-addr instruction. I guess this has
to be done at isel time based on the #uses to mul?
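
For reference, a sketch of the strength reduction behind that magic constant
(2155905153 is 0x80808081, a fixed-point reciprocal of 255):

unsigned rem255(unsigned X) {
  /* q = floor(X / 255) via multiply-high: (X * 0x80808081) >> 39 */
  unsigned q = (unsigned)(((unsigned long long)X * 0x80808081ULL) >> 39);
  return X - q * 255u;
}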
//===---------------------------------------------------------------------===//

Make sure the instruction which starts a loop does not cross a cacheline
boundary. This requires knowing the exact length of each machine instruction.
That is somewhat complicated, but doable. Example from 256.bzip2:

In the new trace, the hot loop has an instruction which crosses a cacheline
boundary. In addition to potential cache misses, this can't help decoding as I
imagine there has to be some kind of complicated decoder reset and realignment
to grab the bytes from the next cacheline.

532  532 0x3cfc movb (1809(%esp, %esi), %bl   <<<--- spans 2 64 byte lines
942  942 0x3d03 movl %dh, (1809(%esp, %esi)
937  937 0x3d0a incl %esi
3    3   0x3d0b cmpb %bl, %dl
27   27  0x3d0d jnz 0x000062db <main+11707>

//===---------------------------------------------------------------------===//
In C99 mode, the preprocessor doesn't like assembly comments like #TRUNCATE.

//===---------------------------------------------------------------------===//
This could be a single 16-bit load.

int f(char *p) {
    if ((p[0] == 1) & (p[1] == 2)) return 1;
    return 0;
}

//===---------------------------------------------------------------------===//
We should inline lrintf and probably other libc functions.

//===---------------------------------------------------------------------===//
Start using the flags more. For example, compile:

int add_zf(int *x, int y, int a, int b) {
     if ((*x += y) == 0)
          return a;
     else
          return b;
}

and:

int add_zf(int *x, int y, int a, int b) {
     if ((*x + y) < 0)
          return a;
     else
          return b;
}

to use the flag result of the add directly, rather than doing a separate test
of the result.

//===---------------------------------------------------------------------===//
These two functions have identical effects:

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

We currently compile them to:

_f:
        ...
        jne LBB1_2      # UnifiedReturnBlock
        ...
LBB1_2: # UnifiedReturnBlock
        ...
_f2:
        ...
        leal 1(%ecx,%eax), %eax
        ret

both of which are inferior to GCC's branch-free version.

//===---------------------------------------------------------------------===//
This code:

void test(int X) {
  if (X) abort();
}

is currently compiled to:

_test:
        subl $12, %esp
        cmpl $0, 16(%esp)
        jne LBB1_1
        addl $12, %esp
        ret
LBB1_1:
        call L_abort$stub

It would be better to produce:

_test:
        subl $12, %esp
        cmpl $0, 16(%esp)
        jne L_abort$stub
        addl $12, %esp
        ret

This can be applied to any no-return function call that takes no arguments etc.
Alternatively, the stack save/restore logic could be shrink-wrapped, producing
something like this:

_test:
        cmpl $0, 4(%esp)
        jne LBB1_1
        ret
LBB1_1:
        subl $12, %esp
        call L_abort$stub

Both are useful in different situations. Finally, it could be shrink-wrapped
and tail called, like this:

_test:
        cmpl $0, 4(%esp)
        jne LBB1_1
        ret
LBB1_1:
        pop %eax   # realign stack.
        call L_abort$stub

Though this probably isn't worth it.

//===---------------------------------------------------------------------===//
We need to teach the codegen to convert two-address INC instructions to LEA
when the flags are dead (likewise dec). For example, on X86-64, compile:

int foo(int A, int B) {
  return A + 1;
}

to:

        leal    1(%edi), %eax
        ret

instead of:

        incl    %edi
        movl    %edi, %eax
        ret

A related testcase:

;; X's live range extends beyond the shift, so the register allocator
;; cannot coalesce it with Y. Because of this, a copy needs to be
;; emitted before the shift to save the register value before it is
;; clobbered. However, this copy is not needed if the register
;; allocator turns the shift into an LEA. This also occurs for ADD.

; Check that the shift gets turned into an LEA.
; RUN: llvm-as < %s | llc -march=x86 -x86-asm-syntax=intel | \
; RUN:   not grep {mov E.X, E.X}

@G = external global i32                ; <i32*> [#uses=3]

define i32 @test1(i32 %X, i32 %Y) {
        %Z = add i32 %X, %Y             ; <i32> [#uses=1]
        volatile store i32 %Y, i32* @G
        volatile store i32 %Z, i32* @G
        ret i32 %X
}

define i32 @test2(i32 %X) {
        %Z = add i32 %X, 1              ; <i32> [#uses=1]
        volatile store i32 %Z, i32* @G
        ret i32 %X
}
//===---------------------------------------------------------------------===//

Sometimes it is better to codegen subtractions from a constant (e.g. 7-x) with
a neg instead of a sub instruction. Consider:

int test(char X) { return 7-X; }

we currently produce:

_test:
        movl $7, %eax
        movsbl 4(%esp), %ecx
        subl %ecx, %eax
        ret

We would use one fewer register if codegen'd as:

        movsbl 4(%esp), %eax
        neg %eax
        add $7, %eax
        ret

Note that this isn't beneficial if the load can be folded into the sub. In
this case, we want a sub:

int test(int X) { return 7-X; }

_test:
        movl $7, %eax
        subl 4(%esp), %eax
        ret

//===---------------------------------------------------------------------===//
This is a "commutable two-address" register coalescing deficiency:

define <4 x float> @test1(<4 x float> %V) {
entry:
        %tmp8 = shufflevector <4 x float> %V, <4 x float> undef,
                              <4 x i32> < i32 3, i32 2, i32 1, i32 0 >
        %add = add <4 x float> %tmp8, %V
        ret <4 x float> %add
}

this currently compiles to:

_test1:
        pshufd $27, %xmm0, %xmm1
        addps %xmm0, %xmm1
        movaps %xmm1, %xmm0
        ret

instead of:

_test1:
        pshufd $27, %xmm0, %xmm1
        addps %xmm1, %xmm0
        ret

//===---------------------------------------------------------------------===//
Leaf functions that require one 4-byte spill slot have a prolog like this:

        subl $4, %esp

and an epilog like this:

        addl $4, %esp
        ret

It would be smaller, and potentially faster, to push eax on entry and to
pop into a dummy register instead of using addl/subl of esp. Just don't pop
into any return registers :)

//===---------------------------------------------------------------------===//
The X86 backend should fold (branch (or (setcc, setcc))) into multiple
branches. We generate really poor code for:

double testf(double a) {
       return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
}

For example, the entry BB is:

_testf:
        subl    $20, %esp
        pxor    %xmm0, %xmm0
        movsd   24(%esp), %xmm1
        ucomisd %xmm0, %xmm1
        setnp   %al
        sete    %cl
        testb   %cl, %al
        jne     LBB1_5  # UnifiedReturnBlock
LBB1_1: # cond_true

it would be better to replace the last four instructions with:

        jp LBB1_1
        je LBB1_5
LBB1_1:

We also codegen the inner ?: into a diamond:

        cvtss2sd        LCPI1_0(%rip), %xmm2
        cvtss2sd        LCPI1_1(%rip), %xmm3
        ucomisd %xmm1, %xmm0
        ja      LBB1_3  # cond_true
LBB1_2: # cond_true
        movapd  %xmm3, %xmm2
LBB1_3: # cond_true
        movapd  %xmm2, %xmm0
        ret

We should sink the load into xmm3 into the LBB1_2 block. This should
be pretty easy, and will nuke all the copies.
//===---------------------------------------------------------------------===//

This:

#include <algorithm>
inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
{ return std::make_pair(a + b, a + b < a); }
bool no_overflow(unsigned a, unsigned b)
{ return !full_add(a, b).second; }
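
What this should boil down to (a sketch of the equivalent computation, not our
current output): a + b < a is exactly the carry out of the add, so the whole
test is one add plus a setae.

int no_overflow_c(unsigned a, unsigned b) {
  return a + b >= a;   /* addl %esi, %edi ; setae %al */
}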
//===---------------------------------------------------------------------===//

Re-materialize MOV32r0 etc. with xor instead of changing them to moves if the
condition register is dead. "xor reg, reg" is shorter than "mov reg, #0".

//===---------------------------------------------------------------------===//
We aren't matching RMW instructions aggressively
enough. Here's a reduced testcase (more in PR1160):

define void @test(i32* %huge_ptr, i32* %target_ptr) {
        %A = load i32* %huge_ptr                ; <i32> [#uses=1]
        %B = load i32* %target_ptr              ; <i32> [#uses=1]
        %C = or i32 %A, %B              ; <i32> [#uses=1]
        store i32 %C, i32* %target_ptr
        ret void
}

$ llvm-as < t.ll | llc -march=x86-64

        movl (%rdi), %eax
        orl (%rsi), %eax
        movl %eax, (%rsi)
        ret

That should be something like:

        movl (%rdi), %eax
        orl %eax, (%rsi)
        ret

//===---------------------------------------------------------------------===//
This:

bb114.preheader:                ; preds = %cond_next94
        %tmp231232 = sext i16 %tmp62 to i32             ; <i32> [#uses=1]
        %tmp233 = sub i32 32, %tmp231232                ; <i32> [#uses=1]
        %tmp245246 = sext i16 %tmp65 to i32             ; <i32> [#uses=1]
        %tmp252253 = sext i16 %tmp68 to i32             ; <i32> [#uses=1]
        %tmp254 = sub i32 32, %tmp252253                ; <i32> [#uses=1]
        %tmp553554 = bitcast i16* %tmp37 to i8*         ; <i8*> [#uses=2]
        %tmp583584 = sext i16 %tmp98 to i32             ; <i32> [#uses=1]
        %tmp585 = sub i32 32, %tmp583584                ; <i32> [#uses=1]
        %tmp614615 = sext i16 %tmp101 to i32            ; <i32> [#uses=1]
        %tmp621622 = sext i16 %tmp104 to i32            ; <i32> [#uses=1]
        %tmp623 = sub i32 32, %tmp621622                ; <i32> [#uses=1]

compiles to:

LBB3_5: # bb114.preheader
        movswl -68(%ebp), %eax
        movl $32, %ecx
        movl %ecx, -80(%ebp)
        subl %eax, -80(%ebp)
        movswl -52(%ebp), %eax
        movl %ecx, -84(%ebp)
        subl %eax, -84(%ebp)
        movswl -70(%ebp), %eax
        movl %ecx, -88(%ebp)
        subl %eax, -88(%ebp)
        movswl -50(%ebp), %eax
        movl %ecx, -76(%ebp)
        movswl -42(%ebp), %eax
        movl %eax, -92(%ebp)
        movswl -66(%ebp), %eax
        movl %eax, -96(%ebp)

This appears to be bad because the RA is not folding the store to the stack
slot into the movl. The above instructions could be:

        movl $32, -80(%ebp)
        ...
        movl $32, -84(%ebp)

This seems like a cross between remat and spill folding.

This has redundant subtractions of %eax from a stack slot. However, %ecx doesn't
change, so we could simply subtract %eax from %ecx first and then use %ecx (or
vice-versa).

//===---------------------------------------------------------------------===//
In this loop header:

cond_next603:           ; preds = %bb493, %cond_true336, %cond_next599
        %v.21050.1 = phi i32 [ %v.21050.0, %cond_next599 ], [ %tmp344, %cond_true336 ], [ %v.2, %bb493 ]               ; <i32> [#uses=1]
        %maxz.21051.1 = phi i32 [ %maxz.21051.0, %cond_next599 ], [ 0, %cond_true336 ], [ %maxz.2, %bb493 ]            ; <i32> [#uses=2]
        %cnt.01055.1 = phi i32 [ %cnt.01055.0, %cond_next599 ], [ 0, %cond_true336 ], [ %cnt.0, %bb493 ]               ; <i32> [#uses=2]
        %byteptr.9 = phi i8* [ %byteptr.12, %cond_next599 ], [ %byteptr.0, %cond_true336 ], [ %byteptr.10, %bb493 ]    ; <i8*> [#uses=9]
        %bitptr.6 = phi i32 [ %tmp5571104.1, %cond_next599 ], [ %tmp4921049, %cond_true336 ], [ %bitptr.7, %bb493 ]    ; <i32> [#uses=4]
        %source.5 = phi i32 [ %tmp602, %cond_next599 ], [ %source.0, %cond_true336 ], [ %source.6, %bb493 ]            ; <i32> [#uses=7]
        %tmp606 = getelementptr %struct.const_tables* @tables, i32 0, i32 0, i32 %cnt.01055.1          ; <i8*> [#uses=1]
        %tmp607 = load i8* %tmp606, align 1             ; <i8> [#uses=1]

the load of %tmp606 compiles to:

LBB4_70:        # cond_next603
        movl -20(%ebp), %esi
        movl L_tables$non_lazy_ptr-"L4$pb"(%esi), %esi

i.e. we reload the PIC base and the address of "tables" on every iteration.
However, ICC caches this information before the loop and produces this:

        movl 88(%esp), %eax     #481.12

//===---------------------------------------------------------------------===//
This:

        %tmp659 = icmp slt i16 %tmp654, 0               ; <i1> [#uses=1]
        br i1 %tmp659, label %cond_true662, label %cond_next715

produces this:

        testw   %cx, %cx
        jns     LBB4_109        # cond_next715

Shark tells us that using %cx in the testw instruction is sub-optimal. It
suggests using the 32-bit register (which is what ICC uses).

//===---------------------------------------------------------------------===//
Take this:

void compare (long long foo) {
  if (foo < 4294967297LL)
    abort();
}

We currently compile it to:

        ...
        je LBB1_2       # cond_true

(also really horrible code on ppc). This is due to the expand code for 64-bit
compares. GCC produces multiple branches, which is much nicer.
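
A sketch of the multiple-branch expansion (hypothetical C, not GCC's actual
output; 4294967297LL is 2^32 + 1, i.e. high word 1, low word 1):

int lt_2_32_plus_1(long long foo) {
  int hi = (int)(foo >> 32);
  unsigned lo = (unsigned)foo;
  if (hi < 1) return 1;        /* high words differ: they decide */
  if (hi > 1) return 0;
  return lo < 1u;              /* high words equal: compare low words */
}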
//===---------------------------------------------------------------------===//

Tail call optimization improvements: Tail call optimization currently
pushes all arguments on the top of the stack (their normal place for
non-tail call optimized calls) that source from the caller's arguments
or that source from a virtual register (also possibly sourcing from
the caller's arguments).
This is done to prevent overwriting of parameters (see example
below) that might be used later.

example:

int callee(int32, int64);
int caller(int32 arg1, int32 arg2) {
  int64 local = arg2 * 2;
  return callee(arg2, (int64)local);
}

[arg1]          [!arg2 no longer valid since we moved local onto it]

Moving arg1 onto the stack slot of the callee function would overwrite
arg2 of the caller.

Possible optimizations:

 - Analyse the actual parameters of the callee to see which would
   overwrite a caller parameter which is used by the callee and only
   push them onto the top of the stack.

   int callee (int32 arg1, int32 arg2);
   int caller (int32 arg1, int32 arg2) {
       return callee(arg1, arg2);
   }

   Here we don't need to write any variables to the top of the stack
   since they don't overwrite each other.

   int callee (int32 arg1, int32 arg2);
   int caller (int32 arg1, int32 arg2) {
       return callee(arg2, arg1);
   }

   Here we need to push the arguments because they overwrite each
   other.

//===---------------------------------------------------------------------===//
unsigned long int z = 0;
...

gcc compiles this to:
...

We generate:

        jge LBB1_4      # cond_true
        ...
        addl $4294950912, %ecx

1. LSR should rewrite the first cmp with induction variable %ecx.
2. DAG combiner should fold:
        ...

//===---------------------------------------------------------------------===//
define i64 @test(double %X) {
        %Y = fptosi double %X to i64
        ret i64 %Y
}

compiles to:

        movsd 24(%esp), %xmm0
        movsd %xmm0, 8(%esp)
        ...

This should just fldl directly from the input stack slot.

//===---------------------------------------------------------------------===//
int foo (int x) { return (x & 65535) | 255; }

Should compile into:

        movzwl 4(%esp), %eax
        orb $-1, %al            ;; 'orl 255' is also fine :)
        ret

//===---------------------------------------------------------------------===//
We're missing an obvious fold of a load into imul:

int test(long a, long b) { return a * b; }

//===---------------------------------------------------------------------===//
We can fold a store into "zeroing a reg". Instead of:

        xorl %eax, %eax
        movl %eax, 124(%esp)

we should get:

        movl $0, 124(%esp)

if the flags of the xor are dead.

Likewise, we isel "x<<1" into "add reg,reg". If reg is spilled, this should
be folded into: shl [mem], 1

//===---------------------------------------------------------------------===//
This testcase misses a read/modify/write opportunity (from PR1425):

void vertical_decompose97iH1(int *b0, int *b1, int *b2, int width){
    int i;
    for(i=0; i<width; i++)
        b1[i] += (1*(b0[i] + b2[i])+0)>>0;
}

We compile it down to:

        movl (%esi,%edi,4), %ebx
        addl (%ecx,%edi,4), %ebx
        addl (%edx,%edi,4), %ebx
        movl %ebx, (%ecx,%edi,4)

the inner loop should add to the memory location (%ecx,%edi,4), saving
a mov. Something like:

        movl (%esi,%edi,4), %ebx
        addl (%edx,%edi,4), %ebx
        addl %ebx, (%ecx,%edi,4)

Here is another interesting example:

void vertical_compose97iH1(int *b0, int *b1, int *b2, int width){
    int i;
    for(i=0; i<width; i++)
        b1[i] -= (1*(b0[i] + b2[i])+0)>>0;
}

We miss the r/m/w opportunity here by using 2 subs instead of an add+sub[mem]:

        movl (%ecx,%edi,4), %ebx
        subl (%esi,%edi,4), %ebx
        subl (%edx,%edi,4), %ebx
        movl %ebx, (%ecx,%edi,4)

Additionally, LSR should rewrite the exit condition of these loops to use
a stride-4 IV, which would allow all the scales in the loop to go away.
This would result in smaller code and more efficient microops.
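
What the stride-4 rewrite looks like at the source level (a sketch, not
compiler output):

void vertical_decompose97iH1_iv(int *b0, int *b1, int *b2, int width){
    int *e = b1 + width;
    for (; b1 != e; ++b0, ++b1, ++b2)   /* every operand steps by 4 bytes */
        *b1 += *b0 + *b2;
}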
//===---------------------------------------------------------------------===//

In SSE mode, we turn abs and neg into a load from the constant pool plus a xor
or and instruction, for example:

        xorpd LCPI1_0, %xmm2

However, if xmm2 gets spilled, we end up with really ugly code like this:

        movsd (%esp), %xmm0
        xorpd LCPI1_0, %xmm0
        movsd %xmm0, (%esp)

Since we 'know' that this is a 'neg', we can actually "fold" the spill into
the neg/abs instruction, turning it into an *integer* operation, like this:

        xorl 2147483648, [mem+4]     ## 2147483648 = (1 << 31)

you could also use xorb, but xorl is less likely to lead to a partial register
stall. Here is a contrived testcase:

void test(double *P) {
  ...
}

//===---------------------------------------------------------------------===//
Handling llvm.memory.barrier on pre-SSE2 CPUs: this could be lowered to

        lock ; mov %esp, %esp

//===---------------------------------------------------------------------===//
The code generated on x86 for checking for signed overflow on a multiply, done
the obvious way, is much longer than it needs to be:

int x(int a, int b) {
  long long prod = (long long)a*b;
  return prod > 0x7FFFFFFF || prod < (-0x7FFFFFFF-1);
}

See PR2053 for more details.
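
For comparison, the test is equivalent to asking whether the 64-bit product
still fits in 32 bits, which is exactly what imull's overflow flag reports
(a sketch assuming the usual truncating int conversion, not our current
output):

int x2(int a, int b) {
  long long prod = (long long)a * b;
  return prod != (int)prod;     /* ideally: imull %esi, %edi ; seto %al */
}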
//===---------------------------------------------------------------------===//

int test(unsigned long a, unsigned long b) { return -(a < b); }

We currently compile this to:

define i32 @test(i32 %a, i32 %b) nounwind {
        %tmp3 = icmp ult i32 %a, %b             ; <i1> [#uses=1]
        %tmp34 = zext i1 %tmp3 to i32           ; <i32> [#uses=1]
        %tmp5 = sub i32 0, %tmp34               ; <i32> [#uses=1]
        ret i32 %tmp5
}

and:

_test:
        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        setb %al
        movzbl %al, %eax
        negl %eax
        ret

Several deficiencies here. First, we should instcombine zext+neg into sext:

define i32 @test2(i32 %a, i32 %b) nounwind {
        %tmp3 = icmp ult i32 %a, %b             ; <i1> [#uses=1]
        %tmp34 = sext i1 %tmp3 to i32           ; <i32> [#uses=1]
        ret i32 %tmp34
}

However, before we can do that, we have to fix the bad codegen that we get for
sext from i1.

This code should be at least as good as the code above. Once this is fixed, we
can optimize this specific case even more to:

        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        sbbl %eax, %eax

//===---------------------------------------------------------------------===//