1 //===---------------------------------------------------------------------===//
2 // Random ideas for the X86 backend.
3 //===---------------------------------------------------------------------===//
5 We should add support for the "movbe" instruction, which does a byte-swapping
6 copy (3-addr bswap + memory support?) This is available on Atom processors.
8 //===---------------------------------------------------------------------===//
10 This should be one DIV/IDIV instruction, not a libcall:
12 unsigned test(unsigned long long X, unsigned Y) {
16 This can be done trivially with a custom legalizer. What about overflow
17 though? http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14224
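For reference, a rough C sketch of the pattern (div64_32 is a made-up name, not
from the testcase): a 64-by-32 division maps onto one x86 DIV (EDX:EAX / r32),
but DIV faults if the quotient doesn't fit in 32 bits, which is exactly the
overflow question above.

  /* Quotient must fit in 32 bits, otherwise DIV raises #DE. */
  unsigned div64_32(unsigned long long x, unsigned y) {
    return (unsigned)(x / y);
  }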
19 //===---------------------------------------------------------------------===//
21 Improvements to the multiply -> shift/add algorithm:
22 http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html
24 //===---------------------------------------------------------------------===//
26 Improve code like this (occurs fairly frequently, e.g. in LLVM):
27 long long foo(int x) { return 1LL << x; }
29 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
30 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
31 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html
33 Another useful one would be ~0ULL >> X and ~0ULL << X.
35 One better solution for 1LL << x is:
44 But that requires good 8-bit subreg support.
46 Also, this might be better. It's an extra shift, but it's one instruction
47 shorter, and doesn't stress 8-bit subreg support.
48 (From http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01148.html,
49 but without the unnecessary and.)
57 64-bit shifts (in general) expand to really bad code. Instead of using
58 cmovs, we should expand to a conditional branch like GCC produces.
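Roughly, the branchy expansion for the left-shift case computes something like
the following (an illustrative C sketch, not the exact sequence either compiler
emits; shl64 is a made-up name):

  unsigned long long shl64(unsigned long long v, unsigned n) {
    unsigned lo = (unsigned)v, hi = (unsigned)(v >> 32);
    if (n & 32) {                  /* count >= 32: low word moves into the high word */
      hi = lo << (n & 31);
      lo = 0;
    } else if (n) {                /* 1..31: ordinary double-word shift */
      hi = (hi << n) | (lo >> (32 - n));
      lo <<= n;
    }
    return ((unsigned long long)hi << 32) | lo;
  }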
60 //===---------------------------------------------------------------------===//
Some isel ideas:

1. Dynamic programming based approach when compile time is not an issue.
66 2. Code duplication (addressing mode) during isel.
67 3. Other ideas from "Register-Sensitive Selection, Duplication, and
68 Sequencing of Instructions".
69 4. Scheduling for reduced register pressure. E.g. "Minimum Register
70 Instruction Sequence Problem: Revisiting Optimal Code Generation for DAGs"
71 and other related papers.
72 http://citeseer.ist.psu.edu/govindarajan01minimum.html
74 //===---------------------------------------------------------------------===//
76 Should we promote i16 to i32 to avoid partial register update stalls?
78 //===---------------------------------------------------------------------===//
Leave any_extend as a pseudo instruction and hint to the register
allocator. Delay codegen until post-register allocation.
Note: any_extend is now turned into an INSERT_SUBREG. We still need to teach
the coalescer how to deal with it, though.
85 //===---------------------------------------------------------------------===//
It appears icc uses push for parameter passing. Need to investigate.
89 //===---------------------------------------------------------------------===//
94 void bar(int x, int *P) {
109 Instead of doing an explicit test, we can use the flags off the sar. This
110 occurs in a bigger testcase like this, which is pretty common:
113 int test1(std::vector<int> &X) {
115 for (long i = 0, e = X.size(); i != e; ++i)
120 //===---------------------------------------------------------------------===//
122 Only use inc/neg/not instructions on processors where they are faster than
add/sub/xor. They are slower on the P4 due to only updating some of the
processor flags.
126 //===---------------------------------------------------------------------===//
128 The instruction selector sometimes misses folding a load into a compare. The
129 pattern is written as (cmp reg, (load p)). Because the compare isn't
130 commutative, it is not matched with the load on both sides. The dag combiner
should be made smart enough to canonicalize the load into the RHS of a compare
132 when it can invert the result of the compare for free.
134 //===---------------------------------------------------------------------===//
136 In many cases, LLVM generates code like this:
145 on some processors (which ones?), it is more efficient to do this:
154 Doing this correctly is tricky though, as the xor clobbers the flags.
156 //===---------------------------------------------------------------------===//
158 We should generate bts/btr/etc instructions on targets where they are cheap or
159 when codesize is important. e.g., for:
void setbit(int *target, int bit) {
  *target |= (1 << bit);
}
void clearbit(int *target, int bit) {
  *target &= ~(1 << bit);
}
168 //===---------------------------------------------------------------------===//
170 Instead of the following for memset char*, 1, 10:
172 movl $16843009, 4(%edx)
173 movl $16843009, (%edx)
176 It might be better to generate
183 when we can spare a register. It reduces code size.
185 //===---------------------------------------------------------------------===//
187 Evaluate what the best way to codegen sdiv X, (2^C) is. For X/8, we currently
190 define i32 @test1(i32 %X) {
204 GCC knows several different ways to codegen it, one of which is this:
214 which is probably slower, but it's interesting at least :)
216 //===---------------------------------------------------------------------===//
We are currently lowering large (1MB+) memmove/memcpy to rep/stosl and rep/movsl.
219 We should leave these as libcalls for everything over a much lower threshold,
220 since libc is hand tuned for medium and large mem ops (avoiding RFO for large
stores, TLB preheating, etc.).
223 //===---------------------------------------------------------------------===//
225 Optimize this into something reasonable:
226 x * copysign(1.0, y) * copysign(1.0, z)
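A sketch of what "reasonable" could look like (my own illustration, assuming
IEEE doubles and setting NaN sign details aside; fold_copysigns is a made-up
name): the two copysign multiplies only flip x's sign when the signs of y and z
differ, so the whole thing reduces to xoring one sign bit into x.

  #include <stdint.h>
  #include <string.h>

  double fold_copysigns(double x, double y, double z) {
    uint64_t xb, yb, zb;
    memcpy(&xb, &x, sizeof xb);
    memcpy(&yb, &y, sizeof yb);
    memcpy(&zb, &z, sizeof zb);
    xb ^= (yb ^ zb) & 0x8000000000000000ULL;   /* xor in sign(y) ^ sign(z) */
    memcpy(&x, &xb, sizeof x);
    return x;
  }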
228 //===---------------------------------------------------------------------===//
230 Optimize copysign(x, *y) to use an integer load from y.
232 //===---------------------------------------------------------------------===//
234 The following tests perform worse with LSR:
lambda, siod, optimizer-eval, ackermann, hash2, nestedloop, strcat, and Treesort.
238 //===---------------------------------------------------------------------===//
240 Adding to the list of cmp / test poor codegen issues:
242 int test(__m128 *A, __m128 *B) {
243 if (_mm_comige_ss(*A, *B))
263 Note the setae, movzbl, cmpl, cmove can be replaced with a single cmovae. There
are a number of issues. 1) We are introducing a setcc between the result of the
intrinsic call and the select. 2) The intrinsic is expected to produce an i32 value
so an any_extend (which becomes a zero extend) is added.
268 We probably need some kind of target DAG combine hook to fix this.
270 //===---------------------------------------------------------------------===//
272 We generate significantly worse code for this than GCC:
273 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21150
274 http://gcc.gnu.org/bugzilla/attachment.cgi?id=8701
276 There is also one case we do worse on PPC.
278 //===---------------------------------------------------------------------===//
288 imull $3, 4(%esp), %eax
Perhaps this is what we really should generate? Is imull three or four
cycles? Note: ICC generates this:
293 leal (%eax,%eax,2), %eax
295 The current instruction priority is based on pattern complexity. The former is
296 more "complex" because it folds a load so the latter will not be emitted.
298 Perhaps we should use AddedComplexity to give LEA32r a higher priority? We
299 should always try to match LEA first since the LEA matching code does some
300 estimate to determine whether the match is profitable.
302 However, if we care more about code size, then imull is better. It's two bytes
303 shorter than movl + leal.
305 On a Pentium M, both variants have the same characteristics with regard
306 to throughput; however, the multiplication has a latency of four cycles, as
307 opposed to two cycles for the movl+lea variant.
309 //===---------------------------------------------------------------------===//
311 __builtin_ffs codegen is messy.
313 int ffs_(unsigned X) { return __builtin_ffs(X); }
336 Another example of __builtin_ffs (use predsimplify to eliminate a select):
int foo (unsigned long j) {
  if (j)
    return __builtin_ffs (j) - 1;
  else
    return 0;
}
345 //===---------------------------------------------------------------------===//
It appears gcc places string data with linkonce linkage in
.section __TEXT,__const_coal,coalesced instead of
.section __DATA,__const_coal,coalesced.
Take a look at darwin.h; there are other Darwin assembler directives that we
do not make use of.
353 //===---------------------------------------------------------------------===//
355 define i32 @foo(i32* %a, i32 %t) {
359 cond_true: ; preds = %cond_true, %entry
360 %x.0.0 = phi i32 [ 0, %entry ], [ %tmp9, %cond_true ] ; <i32> [#uses=3]
361 %t_addr.0.0 = phi i32 [ %t, %entry ], [ %tmp7, %cond_true ] ; <i32> [#uses=1]
362 %tmp2 = getelementptr i32* %a, i32 %x.0.0 ; <i32*> [#uses=1]
363 %tmp3 = load i32* %tmp2 ; <i32> [#uses=1]
364 %tmp5 = add i32 %t_addr.0.0, %x.0.0 ; <i32> [#uses=1]
365 %tmp7 = add i32 %tmp5, %tmp3 ; <i32> [#uses=2]
366 %tmp9 = add i32 %x.0.0, 1 ; <i32> [#uses=2]
367 %tmp = icmp sgt i32 %tmp9, 39 ; <i1> [#uses=1]
368 br i1 %tmp, label %bb12, label %cond_true
370 bb12: ; preds = %cond_true
373 is pessimized by -loop-reduce and -indvars
375 //===---------------------------------------------------------------------===//
377 u32 to float conversion improvement:
float uint32_2_float( unsigned u ) {
  float fl = (int) (u & 0xffff);
  float fh = (int) (u >> 16);
  return fh * 65536.0f + fl;   /* recombine: high half scaled by 2^16 plus low half */
}
386 00000000 subl $0x04,%esp
387 00000003 movl 0x08(%esp,1),%eax
388 00000007 movl %eax,%ecx
389 00000009 shrl $0x10,%ecx
390 0000000c cvtsi2ss %ecx,%xmm0
391 00000010 andl $0x0000ffff,%eax
392 00000015 cvtsi2ss %eax,%xmm1
393 00000019 mulss 0x00000078,%xmm0
394 00000021 addss %xmm1,%xmm0
395 00000025 movss %xmm0,(%esp,1)
396 0000002a flds (%esp,1)
397 0000002d addl $0x04,%esp
400 //===---------------------------------------------------------------------===//
When using the fastcc ABI, align the stack slot of an argument of type double
on an 8-byte boundary to improve performance.
405 //===---------------------------------------------------------------------===//
407 GCC's ix86_expand_int_movcc function (in i386.c) has a ton of interesting
408 simplifications for integer "x cmp y ? a : b".
410 //===---------------------------------------------------------------------===//
412 Consider the expansion of:
414 define i32 @test3(i32 %X) {
415 %tmp1 = urem i32 %X, 255
419 Currently it compiles to:
422 movl $2155905153, %ecx
428 This could be "reassociated" into:
430 movl $2155905153, %eax
434 to avoid the copy. In fact, the existing two-address stuff would do this
435 except that mul isn't a commutative 2-addr instruction. I guess this has
to be done at isel time based on the number of uses of the mul?
438 //===---------------------------------------------------------------------===//
440 Make sure the instruction which starts a loop does not cross a cacheline
boundary. This requires knowing the exact length of each machine instruction.
442 That is somewhat complicated, but doable. Example 256.bzip2:
444 In the new trace, the hot loop has an instruction which crosses a cacheline
445 boundary. In addition to potential cache misses, this can't help decoding as I
446 imagine there has to be some kind of complicated decoder reset and realignment
447 to grab the bytes from the next cacheline.
449 532 532 0x3cfc movb (1809(%esp, %esi), %bl <<<--- spans 2 64 byte lines
450 942 942 0x3d03 movl %dh, (1809(%esp, %esi)
451 937 937 0x3d0a incl %esi
452 3 3 0x3d0b cmpb %bl, %dl
453 27 27 0x3d0d jnz 0x000062db <main+11707>
455 //===---------------------------------------------------------------------===//
In C99 mode, the preprocessor doesn't like assembly comments like #TRUNCATE.
459 //===---------------------------------------------------------------------===//
461 This could be a single 16-bit load.
464 if ((p[0] == 1) & (p[1] == 2)) return 1;
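In source terms the combined form is something like this (a little-endian
sketch; both16 is a made-up name):

  #include <string.h>

  int both16(const unsigned char *p) {
    unsigned short v;
    memcpy(&v, p, sizeof v);        /* one 16-bit load */
    return v == ((2 << 8) | 1);     /* p[0] == 1 and p[1] == 2 */
  }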
468 //===---------------------------------------------------------------------===//
470 We should inline lrintf and probably other libc functions.
472 //===---------------------------------------------------------------------===//
474 Use the FLAGS values from arithmetic instructions more. For example, compile:
476 int add_zf(int *x, int y, int a, int b) {
498 As another example, compile function f2 in test/CodeGen/X86/cmp-test.ll
499 without a test instruction.
501 //===---------------------------------------------------------------------===//
503 These two functions have identical effects:
505 unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
506 unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
508 We currently compile them to:
516 jne LBB1_2 #UnifiedReturnBlock
520 LBB1_2: #UnifiedReturnBlock
530 leal 1(%ecx,%eax), %eax
533 both of which are inferior to GCC's:
551 //===---------------------------------------------------------------------===//
559 is currently compiled to:
570 It would be better to produce:
579 This can be applied to any no-return function call that takes no arguments etc.
580 Alternatively, the stack save/restore logic could be shrink-wrapped, producing
591 Both are useful in different situations. Finally, it could be shrink-wrapped
592 and tail called, like this:
599 pop %eax # realign stack.
602 Though this probably isn't worth it.
604 //===---------------------------------------------------------------------===//
606 Sometimes it is better to codegen subtractions from a constant (e.g. 7-x) with
607 a neg instead of a sub instruction. Consider:
609 int test(char X) { return 7-X; }
611 we currently produce:
618 We would use one fewer register if codegen'd as:
625 Note that this isn't beneficial if the load can be folded into the sub. In
626 this case, we want a sub:
628 int test(int X) { return 7-X; }
634 //===---------------------------------------------------------------------===//
636 Leaf functions that require one 4-byte spill slot have a prolog like this:
642 and an epilog like this:
647 It would be smaller, and potentially faster, to push eax on entry and to
648 pop into a dummy register instead of using addl/subl of esp. Just don't pop
649 into any return registers :)
651 //===---------------------------------------------------------------------===//
653 The X86 backend should fold (branch (or (setcc, setcc))) into multiple
654 branches. We generate really poor code for:
656 double testf(double a) {
657 return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
660 For example, the entry BB is:
665 movsd 24(%esp), %xmm1
670 jne LBB1_5 # UnifiedReturnBlock
674 it would be better to replace the last four instructions with:
680 We also codegen the inner ?: into a diamond:
682 cvtss2sd LCPI1_0(%rip), %xmm2
683 cvtss2sd LCPI1_1(%rip), %xmm3
685 ja LBB1_3 # cond_true
692 We should sink the load into xmm3 into the LBB1_2 block. This should
693 be pretty easy, and will nuke all the copies.
695 //===---------------------------------------------------------------------===//
699 inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
700 { return std::make_pair(a + b, a + b < a); }
701 bool no_overflow(unsigned a, unsigned b)
702 { return !full_add(a, b).second; }
710 on x86-64, instead of the rather stupid-looking:
718 //===---------------------------------------------------------------------===//
722 bb114.preheader: ; preds = %cond_next94
723 %tmp231232 = sext i16 %tmp62 to i32 ; <i32> [#uses=1]
724 %tmp233 = sub i32 32, %tmp231232 ; <i32> [#uses=1]
725 %tmp245246 = sext i16 %tmp65 to i32 ; <i32> [#uses=1]
726 %tmp252253 = sext i16 %tmp68 to i32 ; <i32> [#uses=1]
727 %tmp254 = sub i32 32, %tmp252253 ; <i32> [#uses=1]
728 %tmp553554 = bitcast i16* %tmp37 to i8* ; <i8*> [#uses=2]
729 %tmp583584 = sext i16 %tmp98 to i32 ; <i32> [#uses=1]
730 %tmp585 = sub i32 32, %tmp583584 ; <i32> [#uses=1]
731 %tmp614615 = sext i16 %tmp101 to i32 ; <i32> [#uses=1]
732 %tmp621622 = sext i16 %tmp104 to i32 ; <i32> [#uses=1]
733 %tmp623 = sub i32 32, %tmp621622 ; <i32> [#uses=1]
738 LBB3_5: # bb114.preheader
739 movswl -68(%ebp), %eax
743 movswl -52(%ebp), %eax
746 movswl -70(%ebp), %eax
749 movswl -50(%ebp), %eax
752 movswl -42(%ebp), %eax
754 movswl -66(%ebp), %eax
758 This appears to be bad because the RA is not folding the store to the stack
759 slot into the movl. The above instructions could be:
764 This seems like a cross between remat and spill folding.
766 This has redundant subtractions of %eax from a stack slot. However, %ecx doesn't
change, so we could simply subtract %eax from %ecx first and then use %ecx (or
vice versa).
770 //===---------------------------------------------------------------------===//
774 %tmp659 = icmp slt i16 %tmp654, 0 ; <i1> [#uses=1]
775 br i1 %tmp659, label %cond_true662, label %cond_next715
781 jns LBB4_109 # cond_next715
783 Shark tells us that using %cx in the testw instruction is sub-optimal. It
784 suggests using the 32-bit register (which is what ICC uses).
786 //===---------------------------------------------------------------------===//
790 void compare (long long foo) {
791 if (foo < 4294967297LL)
807 jne .LBB1_2 # UnifiedReturnBlock
810 .LBB1_2: # UnifiedReturnBlock
814 (also really horrible code on ppc). This is due to the expand code for 64-bit
815 compares. GCC produces multiple branches, which is much nicer:
836 //===---------------------------------------------------------------------===//
Tail call optimization improvements: Tail call optimization currently
pushes all arguments on the top of the stack (their normal place for
non-tail call optimized calls) that source from the caller's arguments
or that source from a virtual register (also possibly sourcing from
the caller's arguments).
This is done to prevent overwriting of parameters (see example
below) that might be used later.
848 int callee(int32, int64);
849 int caller(int32 arg1, int32 arg2) {
850 int64 local = arg2 * 2;
851 return callee(arg2, (int64)local);
854 [arg1] [!arg2 no longer valid since we moved local onto it]
Moving arg1 onto the stack slot of the callee function would overwrite
arg2 of the caller.
861 Possible optimizations:
864 - Analyse the actual parameters of the callee to see which would
865 overwrite a caller parameter which is used by the callee and only
866 push them onto the top of the stack.
868 int callee (int32 arg1, int32 arg2);
869 int caller (int32 arg1, int32 arg2) {
870 return callee(arg1,arg2);
873 Here we don't need to write any variables to the top of the stack
874 since they don't overwrite each other.
876 int callee (int32 arg1, int32 arg2);
877 int caller (int32 arg1, int32 arg2) {
878 return callee(arg2,arg1);
Here we need to push the arguments because they overwrite each other.
884 //===---------------------------------------------------------------------===//
889 unsigned long int z = 0;
900 gcc compiles this to:
926 jge LBB1_4 # cond_true
929 addl $4294950912, %ecx
939 1. LSR should rewrite the first cmp with induction variable %ecx.
940 2. DAG combiner should fold
946 //===---------------------------------------------------------------------===//
948 define i64 @test(double %X) {
949 %Y = fptosi double %X to i64
957 movsd 24(%esp), %xmm0
967 This should just fldl directly from the input stack slot.
969 //===---------------------------------------------------------------------===//
972 int foo (int x) { return (x & 65535) | 255; }
988 //===---------------------------------------------------------------------===//
990 We're codegen'ing multiply of long longs inefficiently:
992 unsigned long long LLM(unsigned long long arg1, unsigned long long arg2) {
We compile to (with -fomit-frame-pointer):
1004 imull 12(%esp), %esi
1006 imull 20(%esp), %ecx
1012 This looks like a scheduling deficiency and lack of remat of the load from
1013 the argument area. ICC apparently produces:
1016 imull 12(%esp), %ecx
1025 Note that it remat'd loads from 4(esp) and 12(esp). See this GCC PR:
1026 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17236
1028 //===---------------------------------------------------------------------===//
1030 We can fold a store into "zeroing a reg". Instead of:
1033 movl %eax, 124(%esp)
1039 if the flags of the xor are dead.
1041 Likewise, we isel "x<<1" into "add reg,reg". If reg is spilled, this should
1042 be folded into: shl [mem], 1
1044 //===---------------------------------------------------------------------===//
In SSE mode, we turn abs and neg into a load from the constant pool plus an xor
or an and instruction, for example:
1049 xorpd LCPI1_0, %xmm2
1051 However, if xmm2 gets spilled, we end up with really ugly code like this:
1054 xorpd LCPI1_0, %xmm0
1057 Since we 'know' that this is a 'neg', we can actually "fold" the spill into
1058 the neg/abs instruction, turning it into an *integer* operation, like this:
1060 xorl 2147483648, [mem+4] ## 2147483648 = (1 << 31)
1062 you could also use xorb, but xorl is less likely to lead to a partial register
1063 stall. Here is a contrived testcase:
1066 void test(double *P) {
1076 //===---------------------------------------------------------------------===//
The code generated on x86 for checking for signed overflow of a multiply done
the obvious way is much longer than it needs to be.
1081 int x(int a, int b) {
1082 long long prod = (long long)a*b;
1083 return prod > 0x7FFFFFFF || prod < (-0x7FFFFFFF-1);
1086 See PR2053 for more details.
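One shorter formulation of the same check (my own sketch, not necessarily what
PR2053 proposes): the product overflows iff its high word is not the
sign-extension of its low word.

  int mul_overflows(int a, int b) {
    long long prod = (long long)a * b;
    return (int)(prod >> 32) != ((int)prod >> 31);   /* high word vs. sign of low word */
  }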
1088 //===---------------------------------------------------------------------===//
We should investigate using cdq/cltd (effect: edx = sar eax, 31)
more aggressively; it should cost the same as a move+shift on any modern
processor, but it's a lot shorter. The downside is that it puts more
pressure on register allocation because it has fixed operands.
1096 int abs(int x) {return x < 0 ? -x : x;}
1098 gcc compiles this to the following when using march/mtune=pentium2/3/4/m/etc.:
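In C terms, a cltd-based abs corresponds to the classic sign-mask idiom (a
sketch, assuming arithmetic right shift; abs_via_mask is a made-up name):

  int abs_via_mask(int x) {
    int t = x >> 31;        /* cltd/cdq: 0 for x >= 0, -1 for x < 0 */
    return (x ^ t) - t;     /* conditionally negate */
  }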
1106 //===---------------------------------------------------------------------===//
1108 Take the following code (from
1109 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16541):
1111 extern unsigned char first_one[65536];
1112 int FirstOnet(unsigned long long arg1)
1115 return (first_one[arg1 >> 48]);
1120 The following code is currently generated:
1125 jb .LBB1_2 # UnifiedReturnBlock
1128 movzbl first_one(%eax), %eax
1130 .LBB1_2: # UnifiedReturnBlock
1134 We could change the "movl 8(%esp), %eax" into "movzwl 10(%esp), %eax"; this
1135 lets us change the cmpl into a testl, which is shorter, and eliminate the shift.
1137 //===---------------------------------------------------------------------===//
1139 We compile this function:
1141 define i32 @foo(i32 %a, i32 %b, i32 %c, i8 zeroext %d) nounwind {
1143 %tmp2 = icmp eq i8 %d, 0 ; <i1> [#uses=1]
1144 br i1 %tmp2, label %bb7, label %bb
1146 bb: ; preds = %entry
1147 %tmp6 = add i32 %b, %a ; <i32> [#uses=1]
1150 bb7: ; preds = %entry
1151 %tmp10 = sub i32 %a, %c ; <i32> [#uses=1]
1172 There's an obviously unnecessary movl in .LBB0_2, and we could eliminate a
1173 couple more movls by putting 4(%esp) into %eax instead of %ecx.
1175 //===---------------------------------------------------------------------===//
1182 cvtss2sd LCPI1_0, %xmm1
1184 movsd 176(%esp), %xmm2
1189 mulsd LCPI1_23, %xmm4
1190 addsd LCPI1_24, %xmm4
1192 addsd LCPI1_25, %xmm4
1194 addsd LCPI1_26, %xmm4
1196 addsd LCPI1_27, %xmm4
1198 addsd LCPI1_28, %xmm4
1202 movsd 152(%esp), %xmm1
1204 movsd %xmm1, 152(%esp)
1208 LBB1_16: # bb358.loopexit
1209 movsd 152(%esp), %xmm0
1211 addsd LCPI1_22, %xmm0
1212 movsd %xmm0, 152(%esp)
1214 Rather than spilling the result of the last addsd in the loop, we should have
inserted a copy to split the interval (one for the duration of the loop, one
1216 extending to the fall through). The register pressure in the loop isn't high
1217 enough to warrant the spill.
1219 Also check why xmm7 is not used at all in the function.
1221 //===---------------------------------------------------------------------===//
1225 target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
1226 target triple = "i386-apple-darwin8"
1227 @in_exit.4870.b = internal global i1 false ; <i1*> [#uses=2]
1228 define fastcc void @abort_gzip() noreturn nounwind {
1230 %tmp.b.i = load i1* @in_exit.4870.b ; <i1> [#uses=1]
1231 br i1 %tmp.b.i, label %bb.i, label %bb4.i
1232 bb.i: ; preds = %entry
1233 tail call void @exit( i32 1 ) noreturn nounwind
1235 bb4.i: ; preds = %entry
1236 store i1 true, i1* @in_exit.4870.b
1237 tail call void @exit( i32 1 ) noreturn nounwind
1240 declare void @exit(i32) noreturn nounwind
1243 _abort_gzip: ## @abort_gzip
1246 movb _in_exit.4870.b, %al
1250 We somehow miss folding the movb into the cmpb.
1252 //===---------------------------------------------------------------------===//
1256 int test(int x, int y) {
1268 it would be better to codegen as: x+~y (notl+addl)
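The identity behind the suggestion: in two's complement ~y == -y - 1, so
x + ~y computes x - y - 1 with a notl and an addl (sub_minus_one is a made-up
name for illustration):

  int sub_minus_one(int x, int y) { return x + ~y; }   /* == x - y - 1 */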
1270 //===---------------------------------------------------------------------===//
1274 int foo(const char *str,...)
1276 __builtin_va_list a; int x;
1277 __builtin_va_start(a,str); x = __builtin_va_arg(a,int); __builtin_va_end(a);
1281 gets compiled into this on x86-64:
1283 movaps %xmm7, 160(%rsp)
1284 movaps %xmm6, 144(%rsp)
1285 movaps %xmm5, 128(%rsp)
1286 movaps %xmm4, 112(%rsp)
1287 movaps %xmm3, 96(%rsp)
1288 movaps %xmm2, 80(%rsp)
1289 movaps %xmm1, 64(%rsp)
1290 movaps %xmm0, 48(%rsp)
1297 movq %rax, 192(%rsp)
1298 leaq 208(%rsp), %rax
1299 movq %rax, 184(%rsp)
1302 movl 176(%rsp), %eax
1306 movq 184(%rsp), %rcx
1308 movq %rax, 184(%rsp)
1316 addq 192(%rsp), %rcx
1317 movl %eax, 176(%rsp)
1323 leaq 104(%rsp), %rax
1324 movq %rsi, -80(%rsp)
1326 movq %rax, -112(%rsp)
1327 leaq -88(%rsp), %rax
1328 movq %rax, -104(%rsp)
1332 movq -112(%rsp), %rdx
1340 addq -104(%rsp), %rdx
1342 movl %eax, -120(%rsp)
1347 and it gets compiled into this on x86:
1367 //===---------------------------------------------------------------------===//
1369 Teach tblgen not to check bitconvert source type in some cases. This allows us
1370 to consolidate the following patterns in X86InstrMMX.td:
1372 def : Pat<(v2i32 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1374 (v2i32 (MMX_MOVDQ2Qrr VR128:$src))>;
1375 def : Pat<(v4i16 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1377 (v4i16 (MMX_MOVDQ2Qrr VR128:$src))>;
1378 def : Pat<(v8i8 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1380 (v8i8 (MMX_MOVDQ2Qrr VR128:$src))>;
1382 There are other cases in various td files.
1384 //===---------------------------------------------------------------------===//
1386 Take something like the following on x86-32:
1387 unsigned a(unsigned long long x, unsigned y) {return x % y;}
1389 We currently generate a libcall, but we really shouldn't: the expansion is
shorter and likely faster than the libcall. The expected code is something
like this:
1402 A similar code sequence works for division.
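A sketch of the math behind that expansion (rem64_32 is a made-up name; a
compiler would emit two divl instructions rather than the second C '%'): divide
the high word first; its remainder r is less than y, so the second division of
r:lo by y has a quotient that fits in 32 bits and can be a single DIV.

  unsigned rem64_32(unsigned long long x, unsigned y) {
    unsigned hi = (unsigned)(x >> 32), lo = (unsigned)x;
    unsigned r = hi % y;                               /* first divl */
    unsigned long long t = ((unsigned long long)r << 32) | lo;
    return (unsigned)(t % y);                          /* second divl */
  }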
1404 //===---------------------------------------------------------------------===//
These should compile to the same code, but the latter codegens to useless
1407 instructions on X86. This may be a trivial dag combine (GCC PR7061):
1409 struct s1 { unsigned char a, b; };
1410 unsigned long f1(struct s1 x) {
1413 struct s2 { unsigned a: 8, b: 8; };
1414 unsigned long f2(struct s2 x) {
1418 //===---------------------------------------------------------------------===//
1420 We currently compile this:
1422 define i32 @func1(i32 %v1, i32 %v2) nounwind {
1424 %t = call {i32, i1} @llvm.sadd.with.overflow.i32(i32 %v1, i32 %v2)
1425 %sum = extractvalue {i32, i1} %t, 0
1426 %obit = extractvalue {i32, i1} %t, 1
1427 br i1 %obit, label %overflow, label %normal
1431 call void @llvm.trap()
1434 declare {i32, i1} @llvm.sadd.with.overflow.i32(i32, i32)
1435 declare void @llvm.trap()
1442 jo LBB1_2 ## overflow
1448 it would be nice to produce "into" someday.
1450 //===---------------------------------------------------------------------===//
1454 void vec_mpys1(int y[], const int x[], int scaler) {
1456 for (i = 0; i < 150; i++)
1457 y[i] += (((long long)scaler * (long long)x[i]) >> 31);
1460 Compiles to this loop with GCC 3.x:
1465 shrdl $31, %edx, %eax
1466 addl %eax, (%esi,%ecx,4)
1471 llvm-gcc compiles it to the much uglier:
1475 movl (%eax,%edi,4), %ebx
1484 shldl $1, %eax, %ebx
1486 addl %ebx, (%eax,%edi,4)
The issue is that we hoist the cast of "scaler" to long long outside of the
loop, so the value comes into the loop as two values, and
1493 RegsForValue::getCopyFromRegs doesn't know how to put an AssertSext on the
1494 constructed BUILD_PAIR which represents the cast value.
1496 This can be handled by making CodeGenPrepare sink the cast.
1498 //===---------------------------------------------------------------------===//
1500 Test instructions can be eliminated by using EFLAGS values from arithmetic
1501 instructions. This is currently not done for mul, and, or, xor, neg, shl,
1502 sra, srl, shld, shrd, atomic ops, and others. It is also currently not done
for read-modify-write instructions. It is also currently not done if the
1504 OF or CF flags are needed.
1506 The shift operators have the complication that when the shift count is
1507 zero, EFLAGS is not set, so they can only subsume a test instruction if
1508 the shift count is known to be non-zero. Also, using the EFLAGS value
1509 from a shift is apparently very slow on some x86 implementations.
1511 In read-modify-write instructions, the root node in the isel match is
1512 the store, and isel has no way for the use of the EFLAGS result of the
1513 arithmetic to be remapped to the new node.
Add and subtract instructions set OF on signed overflow and CF on unsigned
1516 overflow, while test instructions always clear OF and CF. In order to
1517 replace a test with an add or subtract in a situation where OF or CF is
1518 needed, codegen must be able to prove that the operation cannot see
1519 signed or unsigned overflow, respectively.
1521 //===---------------------------------------------------------------------===//
1523 memcpy/memmove do not lower to SSE copies when possible. A silly example is:
1524 define <16 x float> @foo(<16 x float> %A) nounwind {
1525 %tmp = alloca <16 x float>, align 16
1526 %tmp2 = alloca <16 x float>, align 16
1527 store <16 x float> %A, <16 x float>* %tmp
1528 %s = bitcast <16 x float>* %tmp to i8*
1529 %s2 = bitcast <16 x float>* %tmp2 to i8*
1530 call void @llvm.memcpy.i64(i8* %s, i8* %s2, i64 64, i32 16)
1531 %R = load <16 x float>* %tmp2
1535 declare void @llvm.memcpy.i64(i8* nocapture, i8* nocapture, i64, i32) nounwind
1541 movaps %xmm3, 112(%esp)
1542 movaps %xmm2, 96(%esp)
1543 movaps %xmm1, 80(%esp)
1544 movaps %xmm0, 64(%esp)
1546 movl %eax, 124(%esp)
1548 movl %eax, 120(%esp)
1550 <many many more 32-bit copies>
1551 movaps (%esp), %xmm0
1552 movaps 16(%esp), %xmm1
1553 movaps 32(%esp), %xmm2
1554 movaps 48(%esp), %xmm3
1558 On Nehalem, it may even be cheaper to just use movups when unaligned than to
1559 fall back to lower-granularity chunks.
1561 //===---------------------------------------------------------------------===//
1563 Implement processor-specific optimizations for parity with GCC on these
1564 processors. GCC does two optimizations:
1566 1. ix86_pad_returns inserts a noop before ret instructions if immediately
1567 preceded by a conditional branch or is the target of a jump.
1568 2. ix86_avoid_jump_misspredicts inserts noops in cases where a 16-byte block of
1569 code contains more than 3 branches.
The first one is done for all AMDs, Core2, and "Generic".
The second one is done for: Atom, Pentium Pro, all AMDs, Pentium 4, Nocona,
Core 2, and "Generic".
1575 //===---------------------------------------------------------------------===//
1578 int a(int x) { return (x & 127) > 31; }
1594 This should definitely be done in instcombine, canonicalizing the range
1595 condition into a != condition. We get this IR:
1597 define i32 @a(i32 %x) nounwind readnone {
1599 %0 = and i32 %x, 127 ; <i32> [#uses=1]
1600 %1 = icmp ugt i32 %0, 31 ; <i1> [#uses=1]
1601 %2 = zext i1 %1 to i32 ; <i32> [#uses=1]
Instcombine prefers to strength-reduce relational comparisons to equality
comparisons when possible; this should be another case of that. This could
be handled pretty easily in InstCombiner::visitICmpInstWithInstAndIntCst, but it
looks like InstCombiner::visitICmpInstWithInstAndIntCst should really be
redesigned to use ComputeMaskedBits and friends.
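For reference, the equality form of this particular range check (my own
arithmetic; a_canon is a made-up name): (x & 127) > 31 holds exactly when bit 5
or bit 6 of x is set.

  int a_canon(int x) { return (x & 96) != 0; }   /* 96 = 32 | 64 */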
1612 //===---------------------------------------------------------------------===//
1614 int x(int a) { return (a&0xf0)>>4; }
1623 movzbl 4(%esp), %eax
1627 //===---------------------------------------------------------------------===//
1629 Re-implement atomic builtins __sync_add_and_fetch() and __sync_sub_and_fetch
When the return value is not used (i.e. we only care about the value in
memory), x86 does not have to use add to implement these. Instead, it can use
add, sub, inc, or dec instructions with the "lock" prefix.
This is currently implemented using a bit of an instruction selection trick. The
issue is that the target-independent pattern produces one output and a chain, and
we want to map it into one that just outputs a chain. The current trick is to select
1639 it into a MERGE_VALUES with the first definition being an implicit_def. The
1640 proper solution is to add new ISD opcodes for the no-output variant. DAG
1641 combiner can then transform the node before it gets to target node selection.
Problem #2 is that we are adding a whole bunch of x86 atomic instructions when in
1644 fact these instructions are identical to the non-lock versions. We need a way to
1645 add target specific information to target nodes and have this information
1646 carried over to machine instructions. Asm printer (or JIT) can use this
1647 information to add the "lock" prefix.
1649 //===---------------------------------------------------------------------===//
1652 unsigned char y0 : 1;
1655 int bar(struct B* a) { return a->y0; }
1657 define i32 @bar(%struct.B* nocapture %a) nounwind readonly optsize {
1658 %1 = getelementptr inbounds %struct.B* %a, i64 0, i32 0
1659 %2 = load i8* %1, align 1
1661 %4 = zext i8 %3 to i32
1672 Missed optimization: should be movl+andl.
1674 //===---------------------------------------------------------------------===//
1676 The x86_64 abi says:
1678 Booleans, when stored in a memory object, are stored as single byte objects the
1679 value of which is always 0 (false) or 1 (true).
1681 We are not using this fact:
1683 int bar(_Bool *a) { return *a; }
1685 define i32 @bar(i8* nocapture %a) nounwind readonly optsize {
1686 %1 = load i8* %a, align 1, !tbaa !0
1688 %2 = zext i8 %tmp to i32
1704 //===---------------------------------------------------------------------===//
1706 Consider the following two functions compiled with clang:
1707 _Bool foo(int *x) { return !(*x & 4); }
1708 unsigned bar(int *x) { return !(*x & 4); }
The second function generates more code even though the two functions are
functionally identical.
1728 //===---------------------------------------------------------------------===//
1730 Take the following C code:
1731 int f(int a, int b) { return (unsigned char)a == (unsigned char)b; }
1733 We generate the following IR with clang:
1734 define i32 @f(i32 %a, i32 %b) nounwind readnone {
1736 %tmp = xor i32 %b, %a ; <i32> [#uses=1]
1737 %tmp6 = and i32 %tmp, 255 ; <i32> [#uses=1]
1738 %cmp = icmp eq i32 %tmp6, 0 ; <i1> [#uses=1]
1739 %conv5 = zext i1 %cmp to i32 ; <i32> [#uses=1]
1743 And the following x86 code:
1750 A cmpb instead of the xorl+testb would be one instruction shorter.
1752 //===---------------------------------------------------------------------===//
1754 Given the following C code:
1755 int f(int a, int b) { return (signed char)a == (signed char)b; }
1757 We generate the following IR with clang:
1758 define i32 @f(i32 %a, i32 %b) nounwind readnone {
1760 %sext = shl i32 %a, 24 ; <i32> [#uses=1]
1761 %conv1 = ashr i32 %sext, 24 ; <i32> [#uses=1]
1762 %sext6 = shl i32 %b, 24 ; <i32> [#uses=1]
1763 %conv4 = ashr i32 %sext6, 24 ; <i32> [#uses=1]
1764 %cmp = icmp eq i32 %conv1, %conv4 ; <i1> [#uses=1]
1765 %conv5 = zext i1 %cmp to i32 ; <i32> [#uses=1]
1769 And the following x86 code:
1778 It should be possible to eliminate the sign extensions.
1780 //===---------------------------------------------------------------------===//
1782 LLVM misses a load+store narrowing opportunity in this code:
1784 %struct.bf = type { i64, i16, i16, i32 }
1786 @bfi = external global %struct.bf* ; <%struct.bf**> [#uses=2]
1788 define void @t1() nounwind ssp {
1790 %0 = load %struct.bf** @bfi, align 8 ; <%struct.bf*> [#uses=1]
1791 %1 = getelementptr %struct.bf* %0, i64 0, i32 1 ; <i16*> [#uses=1]
1792 %2 = bitcast i16* %1 to i32* ; <i32*> [#uses=2]
1793 %3 = load i32* %2, align 1 ; <i32> [#uses=1]
1794 %4 = and i32 %3, -65537 ; <i32> [#uses=1]
1795 store i32 %4, i32* %2, align 1
1796 %5 = load %struct.bf** @bfi, align 8 ; <%struct.bf*> [#uses=1]
1797 %6 = getelementptr %struct.bf* %5, i64 0, i32 1 ; <i16*> [#uses=1]
1798 %7 = bitcast i16* %6 to i32* ; <i32*> [#uses=2]
1799 %8 = load i32* %7, align 1 ; <i32> [#uses=1]
1800 %9 = and i32 %8, -131073 ; <i32> [#uses=1]
1801 store i32 %9, i32* %7, align 1
1805 LLVM currently emits this:
1807 movq bfi(%rip), %rax
1808 andl $-65537, 8(%rax)
1809 movq bfi(%rip), %rax
1810 andl $-131073, 8(%rax)
1813 It could narrow the loads and stores to emit this:
1815 movq bfi(%rip), %rax
1817 movq bfi(%rip), %rax
1821 The trouble is that there is a TokenFactor between the store and the
1822 load, making it non-trivial to determine if there's anything between
1823 the load and the store which would prohibit narrowing.
1825 //===---------------------------------------------------------------------===//
1828 void foo(unsigned x) {
1830 else if (x == 1) qux();
1833 currently compiles into:
1841 the testl could be removed:
1848 0 is the only unsigned number < 1.
1850 //===---------------------------------------------------------------------===//
1854 %0 = type { i32, i1 }
1856 define i32 @add32carry(i32 %sum, i32 %x) nounwind readnone ssp {
1858 %uadd = tail call %0 @llvm.uadd.with.overflow.i32(i32 %sum, i32 %x)
1859 %cmp = extractvalue %0 %uadd, 1
1860 %inc = zext i1 %cmp to i32
1861 %add = add i32 %x, %sum
1862 %z.0 = add i32 %add, %inc
1866 declare %0 @llvm.uadd.with.overflow.i32(i32, i32) nounwind readnone
1870 _add32carry: ## @add32carry
1880 leal (%rsi,%rdi), %eax
1885 //===---------------------------------------------------------------------===//
1887 The hot loop of 256.bzip2 contains code that looks a bit like this:
1889 int foo(char *P, char *Q, int x, int y) {
In the real code, we get a lot more wrong than this. However, even in this
code we generate:
1916 ## BB#3: ## %if.end38
1921 ## BB#4: ## %if.end60
1924 LBB0_5: ## %if.end60
1929 Note that we generate jumps to LBB0_1 which does a redundant compare. The
1930 redundant compare also forces the register values to be live, which prevents
1931 folding one of the loads into the compare. In contrast, GCC 4.2 produces:
1938 movzbl 1(%rsi), %eax
1941 movzbl 2(%rsi), %eax
1944 movzbl 3(%rdi), %eax
1953 //===---------------------------------------------------------------------===//
1955 For the branch in the following code:
1957 int b(int x, int y) {
1963 We currently generate:
1970 movl+andl would be shorter than the movb+andb+movzbl sequence.
1972 //===---------------------------------------------------------------------===//
1978 float foo(struct u1 u) {
1982 We currently generate:
1984 pshufd $1, %xmm0, %xmm0 # xmm0 = xmm0[1,0,0,0]
1988 We could save an instruction here by commuting the addss.
1990 //===---------------------------------------------------------------------===//
1994 float clamp_float(float a) {
2005 clamp_float: # @clamp_float
2006 movss .LCPI0_0(%rip), %xmm1
2014 //===---------------------------------------------------------------------===//
2016 This function (from PR9803):
The move of 0 could be scheduled above the test to make it an xor reg,reg.
2042 //===---------------------------------------------------------------------===//
2044 GCC PR48986. We currently compile this:
2048 if (__sync_fetch_and_add(p, -1) == 1)
2059 Instead we could generate:
2065 The trick is to match "fetch_and_add(X, -C) == C".
2067 //===---------------------------------------------------------------------===//
2069 unsigned log2(unsigned x) {
2070 return x > 1 ? 32-__builtin_clz(x-1) : 0;
2088 The cmov and the early test are redundant:
2101 If we want to get really fancy we could use some two's complement magic:
2113 This is only useful on targets that can't encode the first operand of a sub
2114 directly. The rule is C1 - (X^C2) -> (C1+1) + (X^~C2).
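A quick sanity check of that rule (lhs/rhs are made-up names; the two agree for
all unsigned x, since -(X^C2) == (X^~C2) + 1 in two's complement):

  unsigned lhs(unsigned x, unsigned c1, unsigned c2) { return c1 - (x ^ c2); }
  unsigned rhs(unsigned x, unsigned c1, unsigned c2) { return (c1 + 1) + (x ^ ~c2); }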
2116 //===---------------------------------------------------------------------===//