1 //===---------------------------------------------------------------------===//
2 // Random ideas for the X86 backend.
3 //===---------------------------------------------------------------------===//
6 - Support for SSE4: http://www.intel.com/software/penryn
7 http://softwarecommunity.intel.com/isn/Downloads/Intel%20SSE4%20Programming%20Reference.pdf
11 //===---------------------------------------------------------------------===//
13 CodeGen/X86/lea-3.ll:test3 should be a single LEA, not a shift/move. The X86
14 backend knows how to three-addressify this shift, but it appears the register
15 allocator isn't even asking it to do so in this case. We should investigate
16 why this isn't happening; it could have significant impact on other important
17 cases for X86 as well.
19 //===---------------------------------------------------------------------===//
21 This should be one DIV/IDIV instruction, not a libcall:
23 unsigned test(unsigned long long X, unsigned Y) { return X/Y; }
27 This can be done trivially with a custom legalizer. What about overflow
28 though? http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14224
30 //===---------------------------------------------------------------------===//
32 Improvements to the multiply -> shift/add algorithm:
33 http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html
35 //===---------------------------------------------------------------------===//
37 Improve code like this (occurs fairly frequently, e.g. in LLVM):
38 long long foo(int x) { return 1LL << x; }
40 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
41 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
42 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html
44 Another useful one would be ~0ULL >> X and ~0ULL << X.
46 One better solution for 1LL << x is:
55 But that requires good 8-bit subreg support.
57 64-bit shifts (in general) expand to really bad code. Instead of using
58 cmovs, we should expand to a conditional branch like GCC produces.
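As a rough illustration (not necessarily what the backend or GCC actually emit), the
branchy expansion of 1LL << x on a 32-bit target looks like this, assuming x is in
[0, 63]:

unsigned long long shl1_64(unsigned x) {
  unsigned lo, hi;
  if (x & 32) {              /* shift amount >= 32: the bit lands in the high word */
    lo = 0;
    hi = 1u << (x & 31);
  } else {                   /* shift amount < 32: the bit lands in the low word */
    lo = 1u << x;
    hi = 0;
  }
  return ((unsigned long long)hi << 32) | lo;
}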
60 //===---------------------------------------------------------------------===//
63 _Bool f(_Bool a) { return a!=1; }
70 //===---------------------------------------------------------------------===//
74 1. Dynamic programming based approach when compile time is not an issue.
76 2. Code duplication (addressing mode) during isel.
77 3. Other ideas from "Register-Sensitive Selection, Duplication, and
78 Sequencing of Instructions".
79 4. Scheduling for reduced register pressure. E.g. "Minimum Register
80 Instruction Sequence Problem: Revisiting Optimal Code Generation for DAGs"
81 and other related papers.
82 http://citeseer.ist.psu.edu/govindarajan01minimum.html
84 //===---------------------------------------------------------------------===//
86 Should we promote i16 to i32 to avoid partial register update stalls?
88 //===---------------------------------------------------------------------===//
90 Leave any_extend as a pseudo instruction and hint this to the register
91 allocator. Delay codegen until after register allocation.
92 Note: any_extend is now turned into an INSERT_SUBREG. We still need to teach
93 the coalescer how to deal with it, though.
95 //===---------------------------------------------------------------------===//
97 Count leading zeros and count trailing zeros:
99 int clz(int X) { return __builtin_clz(X); }
100 int ctz(int X) { return __builtin_ctz(X); }
102 $ gcc t.c -S -o - -O3 -fomit-frame-pointer -masm=intel
104 bsr %eax, DWORD PTR [%esp+4]
108 bsf %eax, DWORD PTR [%esp+4]
111 however, check that these are defined for 0 and 32. Our intrinsics are, GCC's aren't.
114 Another example (use predsimplify to eliminate a select):
116 int foo (unsigned long j) { if (j) return __builtin_ffs (j) - 1; else return 0; }
123 //===---------------------------------------------------------------------===//
125 It appears icc uses push for parameter passing. Need to investigate.
127 //===---------------------------------------------------------------------===//
129 Only use inc/neg/not instructions on processors where they are faster than
130 add/sub/xor. They are slower on the P4 due to only updating some processor flags.
133 //===---------------------------------------------------------------------===//
135 The instruction selector sometimes misses folding a load into a compare. The
136 pattern is written as (cmp reg, (load p)). Because the compare isn't
137 commutative, it is not matched with the load on both sides. The dag combiner
138 should be made smart enough to canonicalize the load into the RHS of a compare
139 when it can invert the result of the compare for free.
141 //===---------------------------------------------------------------------===//
143 How about intrinsics? An example is:
144 *res = _mm_mulhi_epu16(*A, _mm_mul_epu32(*B, *C));
147 pmuludq (%eax), %xmm0
152 The transformation probably requires an X86-specific pass or a target-specific
153 DAG combiner hook.
155 //===---------------------------------------------------------------------===//
157 In many cases, LLVM generates code like this:
166 on some processors (which ones?), it is more efficient to do this:
175 Doing this correctly is tricky though, as the xor clobbers the flags.
177 //===---------------------------------------------------------------------===//
179 We should generate bts/btr/etc instructions on targets where they are cheap or
180 when codesize is important. e.g., for:
182 void setbit(int *target, int bit) {
183 *target |= (1 << bit);
185 void clearbit(int *target, int bit) {
186 *target &= ~(1 << bit);
189 //===---------------------------------------------------------------------===//
191 Instead of the following for memset char*, 1, 10:
193 movl $16843009, 4(%edx)
194 movl $16843009, (%edx)
197 It might be better to generate
204 when we can spare a register. It reduces code size.
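For reference, 16843009 is 0x01010101, so the source being discussed is a small
fixed-size memset of byte value 1; a hypothetical driver:

#include <string.h>
void fill10(char *p) {
  memset(p, 1, 10);   /* two 0x01010101 word stores plus a 2-byte tail */
}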
206 //===---------------------------------------------------------------------===//
208 Evaluate what the best way to codegen sdiv X, (2^C) is. For X/8, we currently
225 GCC knows several different ways to codegen it, one of which is this:
235 which is probably slower, but it's interesting at least :)
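For reference, the usual branch-free expansion of a signed divide by a power of two
biases negative dividends before the arithmetic shift; a minimal C sketch for X/8
(assuming arithmetic right shift of negative ints):

int sdiv8(int X) {
  int bias = (X >> 31) & 7;   /* 7 if X is negative, 0 otherwise */
  return (X + bias) >> 3;     /* shift now rounds toward zero, matching C division */
}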
237 //===---------------------------------------------------------------------===//
239 The first BB of this code:
243 %V = call bool %foo()
244 br bool %V, label %T, label %F
261 It would be better to emit "cmp %al, 1" than a xor and test.
263 //===---------------------------------------------------------------------===//
265 We are currently lowering large (1MB+) memmove/memcpy to rep/stosl and rep/movsl.
266 We should leave these as libcalls for everything over a much lower threshold,
267 since libc is hand tuned for medium and large mem ops (avoiding RFO for large
268 stores, TLB preheating, etc.).
270 //===---------------------------------------------------------------------===//
272 Optimize this into something reasonable:
273 x * copysign(1.0, y) * copysign(1.0, z)
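The product only flips x's sign when sign(y) != sign(z), so one reasonable form is x
with its sign bit xored by sign(y)^sign(z); a hedged bit-level sketch, assuming
IEEE-754 doubles and memcpy-based type punning:

#include <stdint.h>
#include <string.h>
double mul_two_copysigns(double x, double y, double z) {
  uint64_t xb, yb, zb;
  memcpy(&xb, &x, 8); memcpy(&yb, &y, 8); memcpy(&zb, &z, 8);
  xb ^= (yb ^ zb) & 0x8000000000000000ULL;   /* flip sign(x) by sign(y)^sign(z) */
  memcpy(&x, &xb, 8);
  return x;
}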
275 //===---------------------------------------------------------------------===//
277 Optimize copysign(x, *y) to use an integer load from y.
279 //===---------------------------------------------------------------------===//
281 %X = weak global int 0
284 %N = cast int %N to uint
285 %tmp.24 = setgt int %N, 0
286 br bool %tmp.24, label %no_exit, label %return
289 %indvar = phi uint [ 0, %entry ], [ %indvar.next, %no_exit ]
290 %i.0.0 = cast uint %indvar to int
291 volatile store int %i.0.0, int* %X
292 %indvar.next = add uint %indvar, 1
293 %exitcond = seteq uint %indvar.next, %N
294 br bool %exitcond, label %return, label %no_exit
308 jl LBB_foo_4 # return
309 LBB_foo_1: # no_exit.preheader
312 movl L_X$non_lazy_ptr, %edx
316 jne LBB_foo_2 # no_exit
317 LBB_foo_3: # return.loopexit
321 We should hoist "movl L_X$non_lazy_ptr, %edx" out of the loop after
322 rematerialization is implemented. This can be accomplished with 1) a target-
323 dependent LICM pass or 2) making the SelectionDAG represent the whole function.
325 //===---------------------------------------------------------------------===//
327 The following tests perform worse with LSR:
329 lambda, siod, optimizer-eval, ackermann, hash2, nestedloop, strcat, and Treesort.
331 //===---------------------------------------------------------------------===//
333 We are generating far worse code than gcc:
339 for (i = 0; i < N; i++) { X = i; Y = i*4; }
342 LBB1_1: # entry.bb_crit_edge
346 movl L_X$non_lazy_ptr, %esi
348 movl L_Y$non_lazy_ptr, %esi
358 movl L_X$non_lazy_ptr-"L00000000001$pb"(%ebx), %esi
359 movl L_Y$non_lazy_ptr-"L00000000001$pb"(%ebx), %ecx
362 leal 0(,%edx,4), %eax
368 This is due to the lack of post regalloc LICM.
370 //===---------------------------------------------------------------------===//
372 Teach the coalescer to coalesce vregs of different register classes. e.g. FR32 /
375 //===---------------------------------------------------------------------===//
383 Obviously it would have been better for the first mov (or any op) to store
384 directly to %esp[0] if there are no other uses.
386 //===---------------------------------------------------------------------===//
388 Adding to the list of cmp / test poor codegen issues:
390 int test(__m128 *A, __m128 *B) {
391 if (_mm_comige_ss(*A, *B))
411 Note the setae, movzbl, cmpl, cmove can be replaced with a single cmovae. There
412 are a number of issues. 1) We are introducing a setcc between the result of the
413 intrinsic call and the select. 2) The intrinsic is expected to produce an i32 value,
414 so an any_extend (which becomes a zero_extend) is added.
416 We probably need some kind of target DAG combine hook to fix this.
418 //===---------------------------------------------------------------------===//
420 We generate significantly worse code for this than GCC:
421 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21150
422 http://gcc.gnu.org/bugzilla/attachment.cgi?id=8701
424 There is also one case we do worse on PPC.
426 //===---------------------------------------------------------------------===//
428 If shorter, we should use things like:
433 The former can also be used when the two-addressy nature of the 'and' would
434 require a copy to be inserted (in X86InstrInfo::convertToThreeAddress).
436 //===---------------------------------------------------------------------===//
440 typedef struct pair { float A, B; } pair;
441 void pairtest(pair P, float *FP) {
445 We currently generate this code with llvmgcc4:
457 we should be able to generate:
465 The issue is that llvmgcc4 is forcing the struct to memory, then passing it as
466 integer chunks. It does this so that structs like {short,short} are passed in
467 a single 32-bit integer stack slot. We should handle the safe cases above much
468 nicer, while still handling the hard cases.
470 While true in general, in this specific case we could do better by promoting
471 load int + bitcast to float -> load float. This basically needs alignment info;
472 the code is already implemented (but disabled) in the DAG combiner.
474 //===---------------------------------------------------------------------===//
476 Another instruction selector deficiency:
479 %tmp = load int (int)** %foo
480 %tmp = tail call int %tmp( int 3 )
486 movl L_foo$non_lazy_ptr, %eax
492 The current isel scheme will not allow the load to be folded in the call since
493 the load's chain result is read by the callseq_start.
495 //===---------------------------------------------------------------------===//
505 imull $3, 4(%esp), %eax
507 Perhaps this is what we really should generate? Is imull three or four
508 cycles? Note: ICC generates this:
510 leal (%eax,%eax,2), %eax
512 The current instruction priority is based on pattern complexity. The former is
513 more "complex" because it folds a load so the latter will not be emitted.
515 Perhaps we should use AddedComplexity to give LEA32r a higher priority? We
516 should always try to match LEA first since the LEA matching code does some
517 estimate to determine whether the match is profitable.
519 However, if we care more about code size, then imull is better. It's two bytes
520 shorter than movl + leal.
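A multiply of a 32-bit stack argument by 3 is the kind of source that exercises this
choice (hypothetical example, not from the original note):

int times3(int x) {
  return x * 3;   /* imull $3, 4(%esp), %eax  vs.  movl 4(%esp), %eax; leal (%eax,%eax,2), %eax */
}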
522 //===---------------------------------------------------------------------===//
524 Implement CTTZ, CTLZ with bsf and bsr. GCC produces:
526 int ctz_(unsigned X) { return __builtin_ctz(X); }
527 int clz_(unsigned X) { return __builtin_clz(X); }
528 int ffs_(unsigned X) { return __builtin_ffs(X); }
544 //===---------------------------------------------------------------------===//
546 It appears gcc places string data with linkonce linkage in
547 .section __TEXT,__const_coal,coalesced instead of
548 .section __DATA,__const_coal,coalesced.
549 Take a look at darwin.h; there are other Darwin assembler directives that we do not make use of.
552 //===---------------------------------------------------------------------===//
554 int %foo(int* %a, int %t) {
558 cond_true: ; preds = %cond_true, %entry
559 %x.0.0 = phi int [ 0, %entry ], [ %tmp9, %cond_true ]
560 %t_addr.0.0 = phi int [ %t, %entry ], [ %tmp7, %cond_true ]
561 %tmp2 = getelementptr int* %a, int %x.0.0
562 %tmp3 = load int* %tmp2 ; <int> [#uses=1]
563 %tmp5 = add int %t_addr.0.0, %x.0.0 ; <int> [#uses=1]
564 %tmp7 = add int %tmp5, %tmp3 ; <int> [#uses=2]
565 %tmp9 = add int %x.0.0, 1 ; <int> [#uses=2]
566 %tmp = setgt int %tmp9, 39 ; <bool> [#uses=1]
567 br bool %tmp, label %bb12, label %cond_true
569 bb12: ; preds = %cond_true
573 is pessimized by -loop-reduce and -indvars
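For readability, here is a hedged C rendering of the IR above (the elided bb12 is
assumed to simply return the accumulated %tmp7):

int foo(int *a, int t) {
  int x = 0;
  do {
    t = t + x + a[x];   /* %tmp7 = (%t_addr.0.0 + %x.0.0) + %tmp3 */
    x = x + 1;          /* %tmp9 */
  } while (x <= 39);    /* loop while !(%tmp9 > 39) */
  return t;
}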
575 //===---------------------------------------------------------------------===//
577 u32 to float conversion improvement:
579 float uint32_2_float( unsigned u ) {
580 float fl = (int) (u & 0xffff);
581 float fh = (int) (u >> 16);
    return fh * 65536.0f + fl;
}
586 00000000 subl $0x04,%esp
587 00000003 movl 0x08(%esp,1),%eax
588 00000007 movl %eax,%ecx
589 00000009 shrl $0x10,%ecx
590 0000000c cvtsi2ss %ecx,%xmm0
591 00000010 andl $0x0000ffff,%eax
592 00000015 cvtsi2ss %eax,%xmm1
593 00000019 mulss 0x00000078,%xmm0
594 00000021 addss %xmm1,%xmm0
595 00000025 movss %xmm0,(%esp,1)
596 0000002a flds (%esp,1)
597 0000002d addl $0x04,%esp
600 //===---------------------------------------------------------------------===//
602 When using the fastcc ABI, align the stack slot of an argument of type double
603 on an 8-byte boundary to improve performance.
605 //===---------------------------------------------------------------------===//
609 int f(int a, int b) {
610 if (a == 4 || a == 6)
622 //===---------------------------------------------------------------------===//
624 GCC's ix86_expand_int_movcc function (in i386.c) has a ton of interesting
625 simplifications for integer "x cmp y ? a : b". For example, instead of:
628 void f(int X, int Y) {
655 int usesbb(unsigned int a, unsigned int b) {
656 return (a < b ? -1 : 0);
670 movl $4294967295, %ecx
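The desired form materializes the compare result directly as a -1/0 mask instead of
going through a select; in C terms (an illustration, not the backend's output):

int usesbb_mask(unsigned int a, unsigned int b) {
  return -(int)(a < b);   /* same value as (a < b ? -1 : 0); maps to cmp + sbb on x86 */
}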
674 //===---------------------------------------------------------------------===//
676 Currently we don't have elimination of redundant stack manipulations. Consider
681 call fastcc void %test1( )
682 call fastcc void %test2( sbyte* cast (void ()* %test1 to sbyte*) )
686 declare fastcc void %test1()
688 declare fastcc void %test2(sbyte*)
691 This currently compiles to:
701 The add/sub pair is really unneeded here.
703 //===---------------------------------------------------------------------===//
705 Consider the expansion of:
707 uint %test3(uint %X) {
708 %tmp1 = rem uint %X, 255
712 Currently it compiles to:
715 movl $2155905153, %ecx
721 This could be "reassociated" into:
723 movl $2155905153, %eax
727 to avoid the copy. In fact, the existing two-address stuff would do this
728 except that mul isn't a commutative 2-addr instruction. I guess this has
729 to be done at isel time based on the #uses of the mul?
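For context, 2155905153 is 0x80808081, the usual multiply-high magic constant for an
unsigned divide by 255; a hedged C sketch of the expansion being discussed:

unsigned rem255(unsigned X) {
  unsigned q = (unsigned)(((unsigned long long)X * 0x80808081ULL) >> 39);  /* X / 255 */
  return X - q * 255;                                                      /* X % 255 */
}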
731 //===---------------------------------------------------------------------===//
733 Make sure the instruction which starts a loop does not cross a cacheline
734 boundary. This requires knowing the exact length of each machine instruction.
735 That is somewhat complicated, but doable. Example 256.bzip2:
737 In the new trace, the hot loop has an instruction which crosses a cacheline
738 boundary. In addition to potential cache misses, this can't help decoding as I
739 imagine there has to be some kind of complicated decoder reset and realignment
740 to grab the bytes from the next cacheline.
742 532 532 0x3cfc movb (1809(%esp, %esi), %bl <<<--- spans 2 64 byte lines
743 942 942 0x3d03 movl %dh, (1809(%esp, %esi)
744 937 937 0x3d0a incl %esi
745 3 3 0x3d0b cmpb %bl, %dl
746 27 27 0x3d0d jnz 0x000062db <main+11707>
748 //===---------------------------------------------------------------------===//
750 In C99 mode, the preprocessor doesn't like assembly comments like #TRUNCATE.
752 //===---------------------------------------------------------------------===//
754 This could be a single 16-bit load.
757 if ((p[0] == 1) & (p[1] == 2)) return 1;
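On a little-endian target the two byte tests combine into one 16-bit compare; a
hedged sketch of the equivalent source (alignment and endianness assumed):

#include <string.h>
int f16(const char *p) {
  unsigned short v;
  memcpy(&v, p, 2);    /* the single 16-bit load */
  return v == 0x0201;  /* p[0] == 1 && p[1] == 2 on little-endian */
}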
761 //===---------------------------------------------------------------------===//
763 We should inline lrintf and probably other libc functions.
765 //===---------------------------------------------------------------------===//
767 Start using the flags more. For example, compile:
769 int add_zf(int *x, int y, int a, int b) {
793 int add_zf(int *x, int y, int a, int b) {
817 //===---------------------------------------------------------------------===//
819 These two functions have identical effects:
821 unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
822 unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
824 We currently compile them to:
832 jne LBB1_2 #UnifiedReturnBlock
836 LBB1_2: #UnifiedReturnBlock
846 leal 1(%ecx,%eax), %eax
849 both of which are inferior to GCC's:
867 //===---------------------------------------------------------------------===//
875 is currently compiled to:
886 It would be better to produce:
895 This can be applied to any no-return function call that takes no arguments etc.
896 Alternatively, the stack save/restore logic could be shrink-wrapped, producing
907 Both are useful in different situations. Finally, it could be shrink-wrapped
908 and tail called, like this:
915 pop %eax # realign stack.
918 Though this probably isn't worth it.
920 //===---------------------------------------------------------------------===//
922 We need to teach the codegen to convert two-address INC instructions to LEA
923 when the flags are dead (likewise dec). For example, on X86-64, compile:
925 int foo(int A, int B) {
944 ;; X's live range extends beyond the shift, so the register allocator
945 ;; cannot coalesce it with Y. Because of this, a copy needs to be
946 ;; emitted before the shift to save the register value before it is
947 ;; clobbered. However, this copy is not needed if the register
948 ;; allocator turns the shift into an LEA. This also occurs for ADD.
950 ; Check that the shift gets turned into an LEA.
951 ; RUN: llvm-upgrade < %s | llvm-as | llc -march=x86 -x86-asm-syntax=intel | \
952 ; RUN: not grep {mov E.X, E.X}
954 %G = external global int
956 int %test1(int %X, int %Y) {
958 volatile store int %Y, int* %G
959 volatile store int %Z, int* %G
964 %Z = add int %X, 1 ;; inc
965 volatile store int %Z, int* %G
969 //===---------------------------------------------------------------------===//
971 Sometimes it is better to codegen subtractions from a constant (e.g. 7-x) with
972 a neg instead of a sub instruction. Consider:
974 int test(char X) { return 7-X; }
976 we currently produce:
983 We would use one fewer register if codegen'd as:
990 Note that this isn't beneficial if the load can be folded into the sub. In
991 this case, we want a sub:
993 int test(int X) { return 7-X; }
999 //===---------------------------------------------------------------------===//
1004 We get an implicit def on the undef side. If the phi is spilled, we then get:
1008 It should be possible to teach the x86 backend to "fold" the store into the
1009 implicitdef, which just deletes the implicit def.
1011 These instructions should go away:
1013 movaps %xmm1, 192(%esp)
1014 movaps %xmm1, 224(%esp)
1015 movaps %xmm1, 176(%esp)
1017 //===---------------------------------------------------------------------===//
1019 This is a "commutable two-address" register coalescing deficiency:
1021 define <4 x float> @test1(<4 x float> %V) {
1023 %tmp8 = shufflevector <4 x float> %V, <4 x float> undef,
1024 <4 x i32> < i32 3, i32 2, i32 1, i32 0 >
1025 %add = add <4 x float> %tmp8, %V
1026 ret <4 x float> %add
1032 pshufd $27, %xmm0, %xmm1
1040 pshufd $27, %xmm0, %xmm1
1044 //===---------------------------------------------------------------------===//
1046 Leaf functions that require one 4-byte spill slot have a prolog like this:
1052 and an epilog like this:
1057 It would be smaller, and potentially faster, to push eax on entry and to
1058 pop into a dummy register instead of using addl/subl of esp. Just don't pop
1059 into any return registers :)
1061 //===---------------------------------------------------------------------===//
1063 The X86 backend should fold (branch (or (setcc, setcc))) into multiple
1064 branches. We generate really poor code for:
1066 double testf(double a) {
1067 return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
1070 For example, the entry BB is:
1075 movsd 24(%esp), %xmm1
1076 ucomisd %xmm0, %xmm1
1080 jne LBB1_5 # UnifiedReturnBlock
1084 it would be better to replace the last four instructions with:
1090 We also codegen the inner ?: into a diamond:
1092 cvtss2sd LCPI1_0(%rip), %xmm2
1093 cvtss2sd LCPI1_1(%rip), %xmm3
1094 ucomisd %xmm1, %xmm0
1095 ja LBB1_3 # cond_true
1102 We should sink the load into xmm3 into the LBB1_2 block. This should
1103 be pretty easy, and will nuke all the copies.
1105 //===---------------------------------------------------------------------===//
1108 #include <algorithm>
1109 inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
1110 { return std::make_pair(a + b, a + b < a); }
1111 bool no_overflow(unsigned a, unsigned b)
1112 { return !full_add(a, b).second; }
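The whole thing folds down to a single carry test, since an unsigned add overflows
exactly when a + b < a; an equivalent formulation (illustration only):

#include <stdbool.h>
bool no_overflow2(unsigned a, unsigned b) {
  return a + b >= a;   /* add; setae */
}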
1132 //===---------------------------------------------------------------------===//
1134 Re-materialize MOV32r0 etc. with xor instead of changing them to moves if the
1135 condition register is dead. xor reg reg is shorter than mov reg, #0.
1137 //===---------------------------------------------------------------------===//
1139 We aren't matching RMW instructions aggressively
1140 enough. Here's a reduced testcase (more in PR1160):
1142 define void @test(i32* %huge_ptr, i32* %target_ptr) {
1143 %A = load i32* %huge_ptr ; <i32> [#uses=1]
1144 %B = load i32* %target_ptr ; <i32> [#uses=1]
1145 %C = or i32 %A, %B ; <i32> [#uses=1]
1146 store i32 %C, i32* %target_ptr
1150 $ llvm-as < t.ll | llc -march=x86-64
1158 That should be something like:
1165 //===---------------------------------------------------------------------===//
1169 bb114.preheader: ; preds = %cond_next94
1170 %tmp231232 = sext i16 %tmp62 to i32 ; <i32> [#uses=1]
1171 %tmp233 = sub i32 32, %tmp231232 ; <i32> [#uses=1]
1172 %tmp245246 = sext i16 %tmp65 to i32 ; <i32> [#uses=1]
1173 %tmp252253 = sext i16 %tmp68 to i32 ; <i32> [#uses=1]
1174 %tmp254 = sub i32 32, %tmp252253 ; <i32> [#uses=1]
1175 %tmp553554 = bitcast i16* %tmp37 to i8* ; <i8*> [#uses=2]
1176 %tmp583584 = sext i16 %tmp98 to i32 ; <i32> [#uses=1]
1177 %tmp585 = sub i32 32, %tmp583584 ; <i32> [#uses=1]
1178 %tmp614615 = sext i16 %tmp101 to i32 ; <i32> [#uses=1]
1179 %tmp621622 = sext i16 %tmp104 to i32 ; <i32> [#uses=1]
1180 %tmp623 = sub i32 32, %tmp621622 ; <i32> [#uses=1]
1185 LBB3_5: # bb114.preheader
1186 movswl -68(%ebp), %eax
1188 movl %ecx, -80(%ebp)
1189 subl %eax, -80(%ebp)
1190 movswl -52(%ebp), %eax
1191 movl %ecx, -84(%ebp)
1192 subl %eax, -84(%ebp)
1193 movswl -70(%ebp), %eax
1194 movl %ecx, -88(%ebp)
1195 subl %eax, -88(%ebp)
1196 movswl -50(%ebp), %eax
1198 movl %ecx, -76(%ebp)
1199 movswl -42(%ebp), %eax
1200 movl %eax, -92(%ebp)
1201 movswl -66(%ebp), %eax
1202 movl %eax, -96(%ebp)
1205 This appears to be bad because the RA is not folding the store to the stack
1206 slot into the movl. The above instructions could be:
1211 This seems like a cross between remat and spill folding.
1213 This has redundant subtractions of %eax from a stack slot. However, %ecx doesn't
1214 change, so we could simply subtract %eax from %ecx first and then use %ecx (or vice versa).
1217 //===---------------------------------------------------------------------===//
1221 cond_next603: ; preds = %bb493, %cond_true336, %cond_next599
1222 %v.21050.1 = phi i32 [ %v.21050.0, %cond_next599 ], [ %tmp344, %cond_true336 ], [ %v.2, %bb493 ] ; <i32> [#uses=1]
1223 %maxz.21051.1 = phi i32 [ %maxz.21051.0, %cond_next599 ], [ 0, %cond_true336 ], [ %maxz.2, %bb493 ] ; <i32> [#uses=2]
1224 %cnt.01055.1 = phi i32 [ %cnt.01055.0, %cond_next599 ], [ 0, %cond_true336 ], [ %cnt.0, %bb493 ] ; <i32> [#uses=2]
1225 %byteptr.9 = phi i8* [ %byteptr.12, %cond_next599 ], [ %byteptr.0, %cond_true336 ], [ %byteptr.10, %bb493 ] ; <i8*> [#uses=9]
1226 %bitptr.6 = phi i32 [ %tmp5571104.1, %cond_next599 ], [ %tmp4921049, %cond_true336 ], [ %bitptr.7, %bb493 ] ; <i32> [#uses=4]
1227 %source.5 = phi i32 [ %tmp602, %cond_next599 ], [ %source.0, %cond_true336 ], [ %source.6, %bb493 ] ; <i32> [#uses=7]
1228 %tmp606 = getelementptr %struct.const_tables* @tables, i32 0, i32 0, i32 %cnt.01055.1 ; <i8*> [#uses=1]
1229 %tmp607 = load i8* %tmp606, align 1 ; <i8> [#uses=1]
1233 LBB4_70: # cond_next603
1234 movl -20(%ebp), %esi
1235 movl L_tables$non_lazy_ptr-"L4$pb"(%esi), %esi
1237 However, ICC caches this information before the loop and produces this:
1239 movl 88(%esp), %eax #481.12
1241 //===---------------------------------------------------------------------===//
1245 %tmp659 = icmp slt i16 %tmp654, 0 ; <i1> [#uses=1]
1246 br i1 %tmp659, label %cond_true662, label %cond_next715
1252 jns LBB4_109 # cond_next715
1254 Shark tells us that using %cx in the testw instruction is sub-optimal. It
1255 suggests using the 32-bit register (which is what ICC uses).
1257 //===---------------------------------------------------------------------===//
1259 rdar://5506677 - We compile this:
1261 define i32 @foo(double %x) {
1262 %x14 = bitcast double %x to i64 ; <i64> [#uses=1]
1263 %tmp713 = trunc i64 %x14 to i32 ; <i32> [#uses=1]
1264 %tmp8 = and i32 %tmp713, 2147483647 ; <i32> [#uses=1]
1274 movl $2147483647, %eax
1280 It would be much better to eliminate the fldl/fstpl by folding the bitcast
1281 into the load SDNode. That would give us:
1284 movl $2147483647, %eax
1288 //===---------------------------------------------------------------------===//
1292 void compare (long long foo) {
1293 if (foo < 4294967297LL)
1310 je LBB1_2 # cond_true
1312 (also really horrible code on ppc). This is due to the expand code for 64-bit
1313 compares. GCC produces multiple branches, which is much nicer:
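In outline, the multi-branch scheme splits the 64-bit compare into a high-word and a
low-word test (hedged sketch; the hi/lo names are illustrative, the constant is
0x100000001):

int lt_4294967297(int hi, unsigned lo) {   /* foo == ((long long)hi << 32) | lo */
  if (hi != 1)
    return hi < 1;   /* decided by the signed high word alone */
  return lo < 1;     /* hi == 1: compare the unsigned low words */
}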
1329 //===---------------------------------------------------------------------===//
1331 Tail call optimization improvements: Tail call optimization currently
1332 pushes all arguments on the top of the stack (their normal place for
1333 non-tail call optimized calls) before moving them to the actual stack
1334 slots. This is done to prevent overwriting of parameters (see example
1335 below) that might still be used, since the arguments of the callee
1336 overwrite the caller's arguments.
1340 int callee(int32, int64);
1341 int caller(int32 arg1, int32 arg2) {
1342 int64 local = arg2 * 2;
1343 return callee(arg2, (int64)local);
1346 [arg1] [!arg2 no longer valid since we moved local onto it]
1350 Moving arg1 onto the stack slot of the callee function would overwrite arg2 of the caller.
1353 Possible optimizations:
1355 - Only push those arguments to the top of the stack that are actual
1356 parameters of the caller function and have no local value in the caller.
1359 In the above example local does not need to be pushed onto the top
1360 of the stack as it is definitely not a caller's function parameter.
1363 - Analyse the actual parameters of the callee to see which would
1364 overwrite a caller parameter which is used by the callee and only
1365 push them onto the top of the stack.
1367 int callee (int32 arg1, int32 arg2);
1368 int caller (int32 arg1, int32 arg2) {
1369 return callee(arg1,arg2);
1372 Here we don't need to write any variables to the top of the stack
1373 since they don't overwrite each other.
1375 int callee (int32 arg1, int32 arg2);
1376 int caller (int32 arg1, int32 arg2) {
1377 return callee(arg2,arg1);
1380 Here we need to push the arguments because they overwrite each other.
1384 Code for lowering directly onto the caller's arguments:
1385 + SmallVector<std::pair<unsigned, SDOperand>, 8> RegsToPass;
1386 + SmallVector<SDOperand, 8> MemOpChains;
1388 + SDOperand FramePtr;
1392 + // Walk the register/memloc assignments, inserting copies/loads.
1393 + for (unsigned i = 0, e = ArgLocs.size(); i != e; ++i) {
1394 + CCValAssign &VA = ArgLocs[i];
1395 + SDOperand Arg = Op.getOperand(5+2*VA.getValNo());
1399 + if (VA.isRegLoc()) {
1400 + RegsToPass.push_back(std::make_pair(VA.getLocReg(), Arg));
1402 + assert(VA.isMemLoc());
1403 + // create frame index
1404 + int32_t Offset = VA.getLocMemOffset()+FPDiff;
1405 + uint32_t OpSize = (MVT::getSizeInBits(VA.getLocVT())+7)/8;
1406 + FI = MF.getFrameInfo()->CreateFixedObject(OpSize, Offset);
1407 + FIN = DAG.getFrameIndex(FI, MVT::i32);
1408 + // store relative to framepointer
1409 + MemOpChains.push_back(DAG.getStore(Chain, Arg, FIN, NULL, 0));
1412 //===---------------------------------------------------------------------===//
1417 unsigned long int z = 0;
1428 gcc compiles this to:
1454 jge LBB1_4 # cond_true
1457 addl $4294950912, %ecx
1467 1. LSR should rewrite the first cmp with induction variable %ecx.
1468 2. DAG combiner should fold
1474 //===---------------------------------------------------------------------===//
1476 define i64 @test(double %X) {
1477 %Y = fptosi double %X to i64
1485 movsd 24(%esp), %xmm0
1486 movsd %xmm0, 8(%esp)
1495 This should just fldl directly from the input stack slot.
1497 //===---------------------------------------------------------------------===//
1500 int foo (int x) { return (x & 65535) | 255; }
1502 Should compile into:
1505 movzwl 4(%esp), %eax
1506 orb $-1, %al ;; 'orl 255' is also fine :)
1516 //===---------------------------------------------------------------------===//
1518 We're missing an obvious fold of a load into imul:
1520 int test(long a, long b) { return a * b; }
1535 //===---------------------------------------------------------------------===//
1537 We can fold a store into "zeroing a reg". Instead of:
1540 movl %eax, 124(%esp)
1546 if the flags of the xor are dead.
1548 //===---------------------------------------------------------------------===//
1550 This testcase misses a read/modify/write opportunity (from PR1425):
1552 void vertical_decompose97iH1(int *b0, int *b1, int *b2, int width){
  int i;
1554     for(i=0; i<width; i++)
1555 b1[i] += (1*(b0[i] + b2[i])+0)>>0;
1558 We compile it down to:
1561 movl (%esi,%edi,4), %ebx
1562 addl (%ecx,%edi,4), %ebx
1563 addl (%edx,%edi,4), %ebx
1564 movl %ebx, (%ecx,%edi,4)
1569 the inner loop should add to the memory location (%ecx,%edi,4), saving
1570 a mov. Something like:
1572 movl (%esi,%edi,4), %ebx
1573 addl (%edx,%edi,4), %ebx
1574 addl %ebx, (%ecx,%edi,4)
1576 Here is another interesting example:
1578 void vertical_compose97iH1(int *b0, int *b1, int *b2, int width){
  int i;
1580     for(i=0; i<width; i++)
1581 b1[i] -= (1*(b0[i] + b2[i])+0)>>0;
1584 We miss the r/m/w opportunity here by using 2 subs instead of an add+sub[mem]:
1587 movl (%ecx,%edi,4), %ebx
1588 subl (%esi,%edi,4), %ebx
1589 subl (%edx,%edi,4), %ebx
1590 movl %ebx, (%ecx,%edi,4)
1595 Additionally, LSR should rewrite the exit condition of these loops to use
1596 a stride-4 IV, which would allow all the scales in the loop to go away.
1597 This would result in smaller code and more efficient microops.
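A hedged sketch of the stride-4 rewrite: iterating pointers (which step by 4 bytes)
instead of an index lets the scaled addressing modes disappear (illustrative, not
what LSR currently emits):

void vertical_decompose97iH1_ptr(int *b0, int *b1, int *b2, int width) {
  int *end = b1 + width;
  for (; b1 != end; ++b0, ++b1, ++b2)   /* pointer IVs, each stepping by sizeof(int) */
    *b1 += *b0 + *b2;
}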
1599 //===---------------------------------------------------------------------===//
1601 In SSE mode, we turn abs and neg into a load from the constant pool plus an
1602 xor or an and instruction, for example:
1604 xorpd LCPI2_0-"L2$pb"(%esi), %xmm2
1606 However, if xmm2 gets spilled, we end up with really ugly code like this:
1608 %xmm2 = reload [mem]
1609 xorpd LCPI2_0-"L2$pb"(%esi), %xmm2
1610 store %xmm2 -> [mem]
1612 Since we 'know' that this is a 'neg', we can actually "fold" the spill into
1613 the neg/abs instruction, turning it into an *integer* operation, like this:
1615 xorl 2147483648, [mem+4] ## 2147483648 = (1 << 31)
1617 you could also use xorb, but xorl is less likely to lead to a partial register stall.
1620 //===---------------------------------------------------------------------===//