1 //===---------------------------------------------------------------------===//
2 // Random ideas for the X86 backend.
3 //===---------------------------------------------------------------------===//
5 We should add support for the "movbe" instruction, which does a byte-swapping
6 copy (3-addr bswap + memory support?) This is available on Atom processors.
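
For reference, this is the kind of source pattern a byte-swapping load/store
could cover (sketch; assumes the __builtin_bswap32 builtin is available):

#include <stdint.h>

/* Byte-swapping store: currently bswap + mov, could be a single movbe. */
void store_be32(uint32_t *p, uint32_t v) {
  *p = __builtin_bswap32(v);
}

/* Byte-swapping load: could be a single movbe from memory. */
uint32_t load_be32(const uint32_t *p) {
  return __builtin_bswap32(*p);
}
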
8 //===---------------------------------------------------------------------===//
10 CodeGen/X86/lea-3.ll:test3 should be a single LEA, not a shift/move. The X86
11 backend knows how to three-addressify this shift, but it appears the register
12 allocator isn't even asking it to do so in this case. We should investigate
why this isn't happening; it could have a significant impact on other important
14 cases for X86 as well.
16 //===---------------------------------------------------------------------===//
18 This should be one DIV/IDIV instruction, not a libcall:
20 unsigned test(unsigned long long X, unsigned Y) {
24 This can be done trivially with a custom legalizer. What about overflow
25 though? http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14224
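
A sketch of the single-divide form on i386 (cdecl stack layout assumed; note
that divl faults if the quotient does not fit in 32 bits, which is exactly the
overflow question above):

	movl	4(%esp), %eax		# low half of X
	movl	8(%esp), %edx		# high half of X
	divl	12(%esp)		# EAX = EDX:EAX / Y, EDX = remainder
	ret
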
27 //===---------------------------------------------------------------------===//
29 Improvements to the multiply -> shift/add algorithm:
30 http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html
32 //===---------------------------------------------------------------------===//
34 Improve code like this (occurs fairly frequently, e.g. in LLVM):
35 long long foo(int x) { return 1LL << x; }
37 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
38 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
39 http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html
41 Another useful one would be ~0ULL >> X and ~0ULL << X.
43 One better solution for 1LL << x is:
52 But that requires good 8-bit subreg support.
54 Also, this might be better. It's an extra shift, but it's one instruction
55 shorter, and doesn't stress 8-bit subreg support.
56 (From http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01148.html,
57 but without the unnecessary and.)
65 64-bit shifts (in general) expand to really bad code. Instead of using
66 cmovs, we should expand to a conditional branch like GCC produces.
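
A rough sketch of the branch-based expansion for a 64-bit left shift of
%edx:%eax by %cl (illustrative; not necessarily what GCC emits verbatim):

	shldl	%cl, %eax, %edx		# shld/sal only look at %cl & 31
	sall	%cl, %eax
	testb	$32, %cl
	je	1f
	movl	%eax, %edx		# count >= 32: high word gets the shifted low word
	xorl	%eax, %eax
1:
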
68 //===---------------------------------------------------------------------===//
1. Dynamic programming based approach when compile time is not an issue.
74 2. Code duplication (addressing mode) during isel.
75 3. Other ideas from "Register-Sensitive Selection, Duplication, and
76 Sequencing of Instructions".
77 4. Scheduling for reduced register pressure. E.g. "Minimum Register
78 Instruction Sequence Problem: Revisiting Optimal Code Generation for DAGs"
79 and other related papers.
80 http://citeseer.ist.psu.edu/govindarajan01minimum.html
82 //===---------------------------------------------------------------------===//
84 Should we promote i16 to i32 to avoid partial register update stalls?
86 //===---------------------------------------------------------------------===//
Leave any_extend as a pseudo instruction and hint to the register
allocator. Delay codegen until post register allocation.
Note: any_extend is now turned into an INSERT_SUBREG. We still need to teach
the coalescer how to deal with it, though.
93 //===---------------------------------------------------------------------===//
It appears ICC uses push for parameter passing. We need to investigate.
97 //===---------------------------------------------------------------------===//
102 void bar(int x, int *P) {
177 Instead of doing an explicit test, we can use the flags off the sar. This
178 occurs in a bigger testcase like this, which is pretty common in bootstrap:
181 int test1(std::vector<int> &X) {
183 for (long i = 0, e = X.size(); i != e; ++i)
188 //===---------------------------------------------------------------------===//
Only use inc/neg/not instructions on processors where they are faster than
add/sub/xor. They are slower on the P4 due to only updating some of the
processor flag bits.
194 //===---------------------------------------------------------------------===//
196 The instruction selector sometimes misses folding a load into a compare. The
197 pattern is written as (cmp reg, (load p)). Because the compare isn't
198 commutative, it is not matched with the load on both sides. The dag combiner
should be made smart enough to canonicalize the load into the RHS of a compare
200 when it can invert the result of the compare for free.
202 //===---------------------------------------------------------------------===//
204 In many cases, LLVM generates code like this:
213 on some processors (which ones?), it is more efficient to do this:
222 Doing this correctly is tricky though, as the xor clobbers the flags.
224 //===---------------------------------------------------------------------===//
226 We should generate bts/btr/etc instructions on targets where they are cheap or
227 when codesize is important. e.g., for:
229 void setbit(int *target, int bit) {
230 *target |= (1 << bit);
232 void clearbit(int *target, int bit) {
233 *target &= ~(1 << bit);
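
For example, setbit could be something like this on i386 (illustrative):

	movl	4(%esp), %eax		# target
	movl	8(%esp), %ecx		# bit
	btsl	%ecx, (%eax)		# btrl for clearbit
	ret
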
236 //===---------------------------------------------------------------------===//
238 Instead of the following for memset char*, 1, 10:
240 movl $16843009, 4(%edx)
241 movl $16843009, (%edx)
244 It might be better to generate
251 when we can spare a register. It reduces code size.
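
i.e. something along these lines (illustrative):

	movl	$16843009, %eax
	movl	%eax, 4(%edx)
	movl	%eax, (%edx)
	movw	%ax, 8(%edx)
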
253 //===---------------------------------------------------------------------===//
255 Evaluate what the best way to codegen sdiv X, (2^C) is. For X/8, we currently
258 define i32 @test1(i32 %X) {
272 GCC knows several different ways to codegen it, one of which is this:
282 which is probably slower, but it's interesting at least :)
284 //===---------------------------------------------------------------------===//
We are currently lowering large (1MB+) memmove/memcpy to rep/stosl and rep/movsl.
We should leave these as libcalls for everything over a much lower threshold,
since libc is hand tuned for medium and large mem ops (avoiding RFO for large
stores, TLB preheating, etc.).
291 //===---------------------------------------------------------------------===//
293 Optimize this into something reasonable:
294 x * copysign(1.0, y) * copysign(1.0, z)
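
The "reasonable" form is presumably just xoring the sign bits of y and z into
x; a C sketch of that (ignoring NaN sign subtleties; the function name is made
up):

#include <stdint.h>
#include <string.h>

double mul_by_signs(double x, double y, double z) {
  uint64_t xb, yb, zb;
  memcpy(&xb, &x, sizeof xb);
  memcpy(&yb, &y, sizeof yb);
  memcpy(&zb, &z, sizeof zb);
  xb ^= (yb ^ zb) & 0x8000000000000000ULL;   /* flip x's sign by sign(y)^sign(z) */
  memcpy(&x, &xb, sizeof x);
  return x;
}
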
296 //===---------------------------------------------------------------------===//
298 Optimize copysign(x, *y) to use an integer load from y.
300 //===---------------------------------------------------------------------===//
302 The following tests perform worse with LSR:
304 lambda, siod, optimizer-eval, ackermann, hash2, nestedloop, strcat, and Treesor.
306 //===---------------------------------------------------------------------===//
308 Adding to the list of cmp / test poor codegen issues:
310 int test(__m128 *A, __m128 *B) {
311 if (_mm_comige_ss(*A, *B))
331 Note the setae, movzbl, cmpl, cmove can be replaced with a single cmovae. There
332 are a number of issues. 1) We are introducing a setcc between the result of the
intrinsic call and the select. 2) The intrinsic is expected to produce an i32 value
so an any_extend (which becomes a zero extend) is added.
336 We probably need some kind of target DAG combine hook to fix this.
338 //===---------------------------------------------------------------------===//
340 We generate significantly worse code for this than GCC:
341 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21150
342 http://gcc.gnu.org/bugzilla/attachment.cgi?id=8701
There is also one case where we do worse on PPC.
346 //===---------------------------------------------------------------------===//
356 imull $3, 4(%esp), %eax
Perhaps this is what we really should generate. Is imull three or four
359 cycles? Note: ICC generates this:
361 leal (%eax,%eax,2), %eax
363 The current instruction priority is based on pattern complexity. The former is
more "complex" because it folds a load, so the latter will not be emitted.
366 Perhaps we should use AddedComplexity to give LEA32r a higher priority? We
367 should always try to match LEA first since the LEA matching code does some
368 estimate to determine whether the match is profitable.
370 However, if we care more about code size, then imull is better. It's two bytes
371 shorter than movl + leal.
373 On a Pentium M, both variants have the same characteristics with regard
374 to throughput; however, the multiplication has a latency of four cycles, as
375 opposed to two cycles for the movl+lea variant.
377 //===---------------------------------------------------------------------===//
379 __builtin_ffs codegen is messy.
381 int ffs_(unsigned X) { return __builtin_ffs(X); }
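
A tighter sequence using bsf + cmov might look like this (illustrative, i386
cdecl):

	bsfl	4(%esp), %eax		# ZF set (and %eax undefined) if X == 0
	movl	$-1, %ecx
	cmovel	%ecx, %eax		# X == 0  ->  -1
	incl	%eax			# ffs is 1-based; yields 0 for X == 0
	ret
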
404 Another example of __builtin_ffs (use predsimplify to eliminate a select):
406 int foo (unsigned long j) {
408 return __builtin_ffs (j) - 1;
413 //===---------------------------------------------------------------------===//
It appears gcc places string data with linkonce linkage in
416 .section __TEXT,__const_coal,coalesced instead of
417 .section __DATA,__const_coal,coalesced.
Take a look at darwin.h; there are other Darwin assembler directives that we
421 //===---------------------------------------------------------------------===//
423 define i32 @foo(i32* %a, i32 %t) {
427 cond_true: ; preds = %cond_true, %entry
428 %x.0.0 = phi i32 [ 0, %entry ], [ %tmp9, %cond_true ] ; <i32> [#uses=3]
429 %t_addr.0.0 = phi i32 [ %t, %entry ], [ %tmp7, %cond_true ] ; <i32> [#uses=1]
430 %tmp2 = getelementptr i32* %a, i32 %x.0.0 ; <i32*> [#uses=1]
431 %tmp3 = load i32* %tmp2 ; <i32> [#uses=1]
432 %tmp5 = add i32 %t_addr.0.0, %x.0.0 ; <i32> [#uses=1]
433 %tmp7 = add i32 %tmp5, %tmp3 ; <i32> [#uses=2]
434 %tmp9 = add i32 %x.0.0, 1 ; <i32> [#uses=2]
435 %tmp = icmp sgt i32 %tmp9, 39 ; <i1> [#uses=1]
436 br i1 %tmp, label %bb12, label %cond_true
438 bb12: ; preds = %cond_true
441 is pessimized by -loop-reduce and -indvars
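
For reference, a rough C equivalent of the loop (reconstructed from the IR
above):

int foo(int *a, int t) {
  for (int x = 0; x <= 39; ++x)
    t += x + a[x];
  return t;
}
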
443 //===---------------------------------------------------------------------===//
445 u32 to float conversion improvement:
447 float uint32_2_float( unsigned u ) {
448 float fl = (int) (u & 0xffff);
449 float fh = (int) (u >> 16);
454 00000000 subl $0x04,%esp
455 00000003 movl 0x08(%esp,1),%eax
456 00000007 movl %eax,%ecx
457 00000009 shrl $0x10,%ecx
458 0000000c cvtsi2ss %ecx,%xmm0
459 00000010 andl $0x0000ffff,%eax
460 00000015 cvtsi2ss %eax,%xmm1
461 00000019 mulss 0x00000078,%xmm0
462 00000021 addss %xmm1,%xmm0
463 00000025 movss %xmm0,(%esp,1)
464 0000002a flds (%esp,1)
465 0000002d addl $0x04,%esp
468 //===---------------------------------------------------------------------===//
When using the fastcc ABI, align the stack slot of arguments of type double
on an 8-byte boundary to improve performance.
473 //===---------------------------------------------------------------------===//
475 GCC's ix86_expand_int_movcc function (in i386.c) has a ton of interesting
476 simplifications for integer "x cmp y ? a : b".
478 //===---------------------------------------------------------------------===//
480 Consider the expansion of:
482 define i32 @test3(i32 %X) {
483 %tmp1 = urem i32 %X, 255
487 Currently it compiles to:
490 movl $2155905153, %ecx
496 This could be "reassociated" into:
498 movl $2155905153, %eax
502 to avoid the copy. In fact, the existing two-address stuff would do this
503 except that mul isn't a commutative 2-addr instruction. I guess this has
to be done at isel time based on the number of uses of the mul?
506 //===---------------------------------------------------------------------===//
508 Make sure the instruction which starts a loop does not cross a cacheline
boundary. This requires knowing the exact length of each machine instruction.
510 That is somewhat complicated, but doable. Example 256.bzip2:
512 In the new trace, the hot loop has an instruction which crosses a cacheline
513 boundary. In addition to potential cache misses, this can't help decoding as I
514 imagine there has to be some kind of complicated decoder reset and realignment
515 to grab the bytes from the next cacheline.
532 532 0x3cfc movb 1809(%esp, %esi), %bl <<<--- spans 2 64 byte lines
942 942 0x3d03 movb %dh, 1809(%esp, %esi)
519 937 937 0x3d0a incl %esi
520 3 3 0x3d0b cmpb %bl, %dl
521 27 27 0x3d0d jnz 0x000062db <main+11707>
523 //===---------------------------------------------------------------------===//
In C99 mode, the preprocessor doesn't like assembly comments like #TRUNCATE.
527 //===---------------------------------------------------------------------===//
529 This could be a single 16-bit load.
532 if ((p[0] == 1) & (p[1] == 2)) return 1;
536 //===---------------------------------------------------------------------===//
538 We should inline lrintf and probably other libc functions.
540 //===---------------------------------------------------------------------===//
542 Use the FLAGS values from arithmetic instructions more. For example, compile:
544 int add_zf(int *x, int y, int a, int b) {
566 As another example, compile function f2 in test/CodeGen/X86/cmp-test.ll
567 without a test instruction.
569 //===---------------------------------------------------------------------===//
571 These two functions have identical effects:
573 unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
574 unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
576 We currently compile them to:
584 jne LBB1_2 #UnifiedReturnBlock
588 LBB1_2: #UnifiedReturnBlock
598 leal 1(%ecx,%eax), %eax
601 both of which are inferior to GCC's:
619 //===---------------------------------------------------------------------===//
627 is currently compiled to:
638 It would be better to produce:
647 This can be applied to any no-return function call that takes no arguments etc.
648 Alternatively, the stack save/restore logic could be shrink-wrapped, producing
659 Both are useful in different situations. Finally, it could be shrink-wrapped
660 and tail called, like this:
667 pop %eax # realign stack.
670 Though this probably isn't worth it.
672 //===---------------------------------------------------------------------===//
674 Sometimes it is better to codegen subtractions from a constant (e.g. 7-x) with
675 a neg instead of a sub instruction. Consider:
677 int test(char X) { return 7-X; }
679 we currently produce:
686 We would use one fewer register if codegen'd as:
693 Note that this isn't beneficial if the load can be folded into the sub. In
694 this case, we want a sub:
696 int test(int X) { return 7-X; }
702 //===---------------------------------------------------------------------===//
704 Leaf functions that require one 4-byte spill slot have a prolog like this:
710 and an epilog like this:
715 It would be smaller, and potentially faster, to push eax on entry and to
716 pop into a dummy register instead of using addl/subl of esp. Just don't pop
717 into any return registers :)
719 //===---------------------------------------------------------------------===//
721 The X86 backend should fold (branch (or (setcc, setcc))) into multiple
722 branches. We generate really poor code for:
724 double testf(double a) {
725 return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
728 For example, the entry BB is:
733 movsd 24(%esp), %xmm1
738 jne LBB1_5 # UnifiedReturnBlock
742 it would be better to replace the last four instructions with:
748 We also codegen the inner ?: into a diamond:
750 cvtss2sd LCPI1_0(%rip), %xmm2
751 cvtss2sd LCPI1_1(%rip), %xmm3
753 ja LBB1_3 # cond_true
760 We should sink the load into xmm3 into the LBB1_2 block. This should
761 be pretty easy, and will nuke all the copies.
763 //===---------------------------------------------------------------------===//
767 inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
768 { return std::make_pair(a + b, a + b < a); }
769 bool no_overflow(unsigned a, unsigned b)
770 { return !full_add(a, b).second; }
780 FIXME: That code looks wrong; bool return is normally defined as zext.
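
Ideally no_overflow would boil down to an add plus a setcc on the carry flag,
e.g. on i386 something like (illustrative):

	movl	4(%esp), %eax
	addl	8(%esp), %eax
	setae	%al			# CF == 0  ->  the add did not wrap
	ret
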
792 //===---------------------------------------------------------------------===//
796 bb114.preheader: ; preds = %cond_next94
797 %tmp231232 = sext i16 %tmp62 to i32 ; <i32> [#uses=1]
798 %tmp233 = sub i32 32, %tmp231232 ; <i32> [#uses=1]
799 %tmp245246 = sext i16 %tmp65 to i32 ; <i32> [#uses=1]
800 %tmp252253 = sext i16 %tmp68 to i32 ; <i32> [#uses=1]
801 %tmp254 = sub i32 32, %tmp252253 ; <i32> [#uses=1]
802 %tmp553554 = bitcast i16* %tmp37 to i8* ; <i8*> [#uses=2]
803 %tmp583584 = sext i16 %tmp98 to i32 ; <i32> [#uses=1]
804 %tmp585 = sub i32 32, %tmp583584 ; <i32> [#uses=1]
805 %tmp614615 = sext i16 %tmp101 to i32 ; <i32> [#uses=1]
806 %tmp621622 = sext i16 %tmp104 to i32 ; <i32> [#uses=1]
807 %tmp623 = sub i32 32, %tmp621622 ; <i32> [#uses=1]
812 LBB3_5: # bb114.preheader
813 movswl -68(%ebp), %eax
817 movswl -52(%ebp), %eax
820 movswl -70(%ebp), %eax
823 movswl -50(%ebp), %eax
826 movswl -42(%ebp), %eax
828 movswl -66(%ebp), %eax
832 This appears to be bad because the RA is not folding the store to the stack
833 slot into the movl. The above instructions could be:
838 This seems like a cross between remat and spill folding.
840 This has redundant subtractions of %eax from a stack slot. However, %ecx doesn't
841 change, so we could simply subtract %eax from %ecx first and then use %ecx (or
844 //===---------------------------------------------------------------------===//
848 %tmp659 = icmp slt i16 %tmp654, 0 ; <i1> [#uses=1]
849 br i1 %tmp659, label %cond_true662, label %cond_next715
855 jns LBB4_109 # cond_next715
857 Shark tells us that using %cx in the testw instruction is sub-optimal. It
858 suggests using the 32-bit register (which is what ICC uses).
860 //===---------------------------------------------------------------------===//
864 void compare (long long foo) {
865 if (foo < 4294967297LL)
881 jne .LBB1_2 # UnifiedReturnBlock
884 .LBB1_2: # UnifiedReturnBlock
888 (also really horrible code on ppc). This is due to the expand code for 64-bit
889 compares. GCC produces multiple branches, which is much nicer:
910 //===---------------------------------------------------------------------===//
912 Tail call optimization improvements: Tail call optimization currently
913 pushes all arguments on the top of the stack (their normal place for
non-tail call optimized calls) that source from the caller's arguments
or that source from a virtual register (also possibly sourcing from the
caller's arguments).
917 This is done to prevent overwriting of parameters (see example
918 below) that might be used later.
922 int callee(int32, int64);
923 int caller(int32 arg1, int32 arg2) {
924 int64 local = arg2 * 2;
925 return callee(arg2, (int64)local);
928 [arg1] [!arg2 no longer valid since we moved local onto it]
932 Moving arg1 onto the stack slot of callee function would overwrite
935 Possible optimizations:
938 - Analyse the actual parameters of the callee to see which would
overwrite a caller parameter which is used by the callee, and only
push those onto the top of the stack.
942 int callee (int32 arg1, int32 arg2);
943 int caller (int32 arg1, int32 arg2) {
944 return callee(arg1,arg2);
947 Here we don't need to write any variables to the top of the stack
948 since they don't overwrite each other.
950 int callee (int32 arg1, int32 arg2);
951 int caller (int32 arg1, int32 arg2) {
952 return callee(arg2,arg1);
Here we need to push the arguments because they overwrite each other.
958 //===---------------------------------------------------------------------===//
963 unsigned long int z = 0;
974 gcc compiles this to:
1000 jge LBB1_4 # cond_true
1003 addl $4294950912, %ecx
1013 1. LSR should rewrite the first cmp with induction variable %ecx.
1014 2. DAG combiner should fold
1020 //===---------------------------------------------------------------------===//
1022 define i64 @test(double %X) {
1023 %Y = fptosi double %X to i64
1031 movsd 24(%esp), %xmm0
1032 movsd %xmm0, 8(%esp)
1041 This should just fldl directly from the input stack slot.
1043 //===---------------------------------------------------------------------===//
1046 int foo (int x) { return (x & 65535) | 255; }
1048 Should compile into:
1051 movzwl 4(%esp), %eax
1062 //===---------------------------------------------------------------------===//
1064 We're codegen'ing multiply of long longs inefficiently:
1066 unsigned long long LLM(unsigned long long arg1, unsigned long long arg2) {
1070 We compile to (fomit-frame-pointer):
1078 imull 12(%esp), %esi
1080 imull 20(%esp), %ecx
1086 This looks like a scheduling deficiency and lack of remat of the load from
1087 the argument area. ICC apparently produces:
1090 imull 12(%esp), %ecx
1099 Note that it remat'd loads from 4(esp) and 12(esp). See this GCC PR:
1100 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17236
1102 //===---------------------------------------------------------------------===//
1104 We can fold a store into "zeroing a reg". Instead of:
1107 movl %eax, 124(%esp)
1113 if the flags of the xor are dead.
1115 Likewise, we isel "x<<1" into "add reg,reg". If reg is spilled, this should
1116 be folded into: shl [mem], 1
1118 //===---------------------------------------------------------------------===//
In SSE mode, we turn abs and neg into a load from the constant pool plus an
xor or an and instruction, for example:
1123 xorpd LCPI1_0, %xmm2
1125 However, if xmm2 gets spilled, we end up with really ugly code like this:
1128 xorpd LCPI1_0, %xmm0
1131 Since we 'know' that this is a 'neg', we can actually "fold" the spill into
1132 the neg/abs instruction, turning it into an *integer* operation, like this:
1134 xorl 2147483648, [mem+4] ## 2147483648 = (1 << 31)
1136 you could also use xorb, but xorl is less likely to lead to a partial register
1137 stall. Here is a contrived testcase:
1140 void test(double *P) {
1150 //===---------------------------------------------------------------------===//
The code generated on x86 for checking for signed overflow on a multiply in
the obvious way is much longer than it needs to be.
1155 int x(int a, int b) {
1156 long long prod = (long long)a*b;
1157 return prod > 0x7FFFFFFF || prod < (-0x7FFFFFFF-1);
1160 See PR2053 for more details.
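
Since one-operand imul already sets OF/CF when the full product does not fit
in 32 bits, a sketch of the desired code is (illustrative, i386 cdecl):

	movl	4(%esp), %eax
	imull	8(%esp)			# OF/CF set iff the product overflows 32 bits
	seto	%al
	movzbl	%al, %eax
	ret
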
1162 //===---------------------------------------------------------------------===//
We should investigate using cdq/cltd (effect: edx = sar eax, 31)
more aggressively; it should cost the same as a move+shift on any modern
processor, but it's a lot shorter. The downside is that it puts more
pressure on register allocation because it has fixed operands.
1170 int abs(int x) {return x < 0 ? -x : x;}
1172 gcc compiles this to the following when using march/mtune=pentium2/3/4/m/etc.:
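
That sequence is roughly the following (illustrative, not gcc's verbatim
output):

	movl	4(%esp), %eax
	cltd				# %edx = %eax >> 31 (all zeroes or all ones)
	xorl	%edx, %eax
	subl	%edx, %eax		# abs(x) = (x ^ (x >> 31)) - (x >> 31)
	ret
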
1180 //===---------------------------------------------------------------------===//
1182 Take the following code (from
1183 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16541):
1185 extern unsigned char first_one[65536];
1186 int FirstOnet(unsigned long long arg1)
1189 return (first_one[arg1 >> 48]);
1194 The following code is currently generated:
1199 jb .LBB1_2 # UnifiedReturnBlock
1202 movzbl first_one(%eax), %eax
1204 .LBB1_2: # UnifiedReturnBlock
1208 We could change the "movl 8(%esp), %eax" into "movzwl 10(%esp), %eax"; this
1209 lets us change the cmpl into a testl, which is shorter, and eliminate the shift.
1211 //===---------------------------------------------------------------------===//
1213 We compile this function:
1215 define i32 @foo(i32 %a, i32 %b, i32 %c, i8 zeroext %d) nounwind {
1217 %tmp2 = icmp eq i8 %d, 0 ; <i1> [#uses=1]
1218 br i1 %tmp2, label %bb7, label %bb
1220 bb: ; preds = %entry
1221 %tmp6 = add i32 %b, %a ; <i32> [#uses=1]
1224 bb7: ; preds = %entry
1225 %tmp10 = sub i32 %a, %c ; <i32> [#uses=1]
1246 There's an obviously unnecessary movl in .LBB0_2, and we could eliminate a
1247 couple more movls by putting 4(%esp) into %eax instead of %ecx.
1249 //===---------------------------------------------------------------------===//
1256 cvtss2sd LCPI1_0, %xmm1
1258 movsd 176(%esp), %xmm2
1263 mulsd LCPI1_23, %xmm4
1264 addsd LCPI1_24, %xmm4
1266 addsd LCPI1_25, %xmm4
1268 addsd LCPI1_26, %xmm4
1270 addsd LCPI1_27, %xmm4
1272 addsd LCPI1_28, %xmm4
1276 movsd 152(%esp), %xmm1
1278 movsd %xmm1, 152(%esp)
1282 LBB1_16: # bb358.loopexit
1283 movsd 152(%esp), %xmm0
1285 addsd LCPI1_22, %xmm0
1286 movsd %xmm0, 152(%esp)
Rather than spilling the result of the last addsd in the loop, we should have
inserted a copy to split the interval (one for the duration of the loop, one
1290 extending to the fall through). The register pressure in the loop isn't high
1291 enough to warrant the spill.
1293 Also check why xmm7 is not used at all in the function.
1295 //===---------------------------------------------------------------------===//
1299 target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
1300 target triple = "i386-apple-darwin8"
1301 @in_exit.4870.b = internal global i1 false ; <i1*> [#uses=2]
1302 define fastcc void @abort_gzip() noreturn nounwind {
1304 %tmp.b.i = load i1* @in_exit.4870.b ; <i1> [#uses=1]
1305 br i1 %tmp.b.i, label %bb.i, label %bb4.i
1306 bb.i: ; preds = %entry
1307 tail call void @exit( i32 1 ) noreturn nounwind
1309 bb4.i: ; preds = %entry
1310 store i1 true, i1* @in_exit.4870.b
1311 tail call void @exit( i32 1 ) noreturn nounwind
1314 declare void @exit(i32) noreturn nounwind
1317 _abort_gzip: ## @abort_gzip
1320 movb _in_exit.4870.b, %al
1324 We somehow miss folding the movb into the cmpb.
1326 //===---------------------------------------------------------------------===//
1330 int test(int x, int y) {
1342 it would be better to codegen as: x+~y (notl+addl)
1344 //===---------------------------------------------------------------------===//
1348 int foo(const char *str,...)
1350 __builtin_va_list a; int x;
1351 __builtin_va_start(a,str); x = __builtin_va_arg(a,int); __builtin_va_end(a);
1355 gets compiled into this on x86-64:
1357 movaps %xmm7, 160(%rsp)
1358 movaps %xmm6, 144(%rsp)
1359 movaps %xmm5, 128(%rsp)
1360 movaps %xmm4, 112(%rsp)
1361 movaps %xmm3, 96(%rsp)
1362 movaps %xmm2, 80(%rsp)
1363 movaps %xmm1, 64(%rsp)
1364 movaps %xmm0, 48(%rsp)
1371 movq %rax, 192(%rsp)
1372 leaq 208(%rsp), %rax
1373 movq %rax, 184(%rsp)
1376 movl 176(%rsp), %eax
1380 movq 184(%rsp), %rcx
1382 movq %rax, 184(%rsp)
1390 addq 192(%rsp), %rcx
1391 movl %eax, 176(%rsp)
1397 leaq 104(%rsp), %rax
1398 movq %rsi, -80(%rsp)
1400 movq %rax, -112(%rsp)
1401 leaq -88(%rsp), %rax
1402 movq %rax, -104(%rsp)
1406 movq -112(%rsp), %rdx
1414 addq -104(%rsp), %rdx
1416 movl %eax, -120(%rsp)
1421 and it gets compiled into this on x86:
1441 //===---------------------------------------------------------------------===//
1443 Teach tblgen not to check bitconvert source type in some cases. This allows us
1444 to consolidate the following patterns in X86InstrMMX.td:
1446 def : Pat<(v2i32 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1448 (v2i32 (MMX_MOVDQ2Qrr VR128:$src))>;
1449 def : Pat<(v4i16 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1451 (v4i16 (MMX_MOVDQ2Qrr VR128:$src))>;
1452 def : Pat<(v8i8 (bitconvert (i64 (vector_extract (v2i64 VR128:$src),
1454 (v8i8 (MMX_MOVDQ2Qrr VR128:$src))>;
1456 There are other cases in various td files.
1458 //===---------------------------------------------------------------------===//
1460 Take something like the following on x86-32:
1461 unsigned a(unsigned long long x, unsigned y) {return x % y;}
1463 We currently generate a libcall, but we really shouldn't: the expansion is
shorter and likely faster than the libcall. The expected code is something like this:
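
(a sketch of the classic two-divide expansion; registers and stack offsets are
illustrative)

	movl	12(%esp), %ecx		# y
	movl	8(%esp), %eax		# high half of x
	xorl	%edx, %edx
	divl	%ecx			# %edx = high(x) % y
	movl	4(%esp), %eax		# low half of x
	divl	%ecx			# no overflow possible: %edx < y
	movl	%edx, %eax		# remainder of the second divide is x % y
	ret
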
1476 A similar code sequence works for division.
1478 //===---------------------------------------------------------------------===//
These should compile to the same code, but the latter codegens to useless
1481 instructions on X86. This may be a trivial dag combine (GCC PR7061):
1483 struct s1 { unsigned char a, b; };
1484 unsigned long f1(struct s1 x) {
1487 struct s2 { unsigned a: 8, b: 8; };
1488 unsigned long f2(struct s2 x) {
1492 //===---------------------------------------------------------------------===//
1494 We currently compile this:
1496 define i32 @func1(i32 %v1, i32 %v2) nounwind {
1498 %t = call {i32, i1} @llvm.sadd.with.overflow.i32(i32 %v1, i32 %v2)
1499 %sum = extractvalue {i32, i1} %t, 0
1500 %obit = extractvalue {i32, i1} %t, 1
1501 br i1 %obit, label %overflow, label %normal
1505 call void @llvm.trap()
1508 declare {i32, i1} @llvm.sadd.with.overflow.i32(i32, i32)
1509 declare void @llvm.trap()
1516 jo LBB1_2 ## overflow
1522 it would be nice to produce "into" someday.
1524 //===---------------------------------------------------------------------===//
1528 void vec_mpys1(int y[], const int x[], int scaler) {
1530 for (i = 0; i < 150; i++)
1531 y[i] += (((long long)scaler * (long long)x[i]) >> 31);
1534 Compiles to this loop with GCC 3.x:
1539 shrdl $31, %edx, %eax
1540 addl %eax, (%esi,%ecx,4)
1545 llvm-gcc compiles it to the much uglier:
1549 movl (%eax,%edi,4), %ebx
1558 shldl $1, %eax, %ebx
1560 addl %ebx, (%eax,%edi,4)
The issue is that we hoist the cast of "scaler" to long long outside of the
loop, so the value comes into the loop as two values, and
1567 RegsForValue::getCopyFromRegs doesn't know how to put an AssertSext on the
1568 constructed BUILD_PAIR which represents the cast value.
1570 //===---------------------------------------------------------------------===//
1572 Test instructions can be eliminated by using EFLAGS values from arithmetic
1573 instructions. This is currently not done for mul, and, or, xor, neg, shl,
1574 sra, srl, shld, shrd, atomic ops, and others. It is also currently not done
for read-modify-write instructions. It is also currently not done if the
OF or CF flags are needed.
1578 The shift operators have the complication that when the shift count is
1579 zero, EFLAGS is not set, so they can only subsume a test instruction if
1580 the shift count is known to be non-zero. Also, using the EFLAGS value
1581 from a shift is apparently very slow on some x86 implementations.
1583 In read-modify-write instructions, the root node in the isel match is
1584 the store, and isel has no way for the use of the EFLAGS result of the
1585 arithmetic to be remapped to the new node.
Add and subtract instructions set OF on signed overflow and CF on unsigned
1588 overflow, while test instructions always clear OF and CF. In order to
1589 replace a test with an add or subtract in a situation where OF or CF is
1590 needed, codegen must be able to prove that the operation cannot see
1591 signed or unsigned overflow, respectively.
1593 //===---------------------------------------------------------------------===//
1595 memcpy/memmove do not lower to SSE copies when possible. A silly example is:
1596 define <16 x float> @foo(<16 x float> %A) nounwind {
1597 %tmp = alloca <16 x float>, align 16
1598 %tmp2 = alloca <16 x float>, align 16
1599 store <16 x float> %A, <16 x float>* %tmp
1600 %s = bitcast <16 x float>* %tmp to i8*
1601 %s2 = bitcast <16 x float>* %tmp2 to i8*
1602 call void @llvm.memcpy.i64(i8* %s, i8* %s2, i64 64, i32 16)
1603 %R = load <16 x float>* %tmp2
1607 declare void @llvm.memcpy.i64(i8* nocapture, i8* nocapture, i64, i32) nounwind
1613 movaps %xmm3, 112(%esp)
1614 movaps %xmm2, 96(%esp)
1615 movaps %xmm1, 80(%esp)
1616 movaps %xmm0, 64(%esp)
1618 movl %eax, 124(%esp)
1620 movl %eax, 120(%esp)
1622 <many many more 32-bit copies>
1623 movaps (%esp), %xmm0
1624 movaps 16(%esp), %xmm1
1625 movaps 32(%esp), %xmm2
1626 movaps 48(%esp), %xmm3
1630 On Nehalem, it may even be cheaper to just use movups when unaligned than to
1631 fall back to lower-granularity chunks.
1633 //===---------------------------------------------------------------------===//
1635 Implement processor-specific optimizations for parity with GCC on these
1636 processors. GCC does two optimizations:
1638 1. ix86_pad_returns inserts a noop before ret instructions if immediately
preceded by a conditional branch or is the target of a jump.
1640 2. ix86_avoid_jump_misspredicts inserts noops in cases where a 16-byte block of
1641 code contains more than 3 branches.
The first one is done for all AMDs, Core2, and "Generic".
The second one is done for: Atom, Pentium Pro, all AMDs, Pentium 4, Nocona,
Core 2, and "Generic".
1647 //===---------------------------------------------------------------------===//
1650 int a(int x) { return (x & 127) > 31; }
1666 This should definitely be done in instcombine, canonicalizing the range
1667 condition into a != condition. We get this IR:
1669 define i32 @a(i32 %x) nounwind readnone {
1671 %0 = and i32 %x, 127 ; <i32> [#uses=1]
1672 %1 = icmp ugt i32 %0, 31 ; <i1> [#uses=1]
1673 %2 = zext i1 %1 to i32 ; <i32> [#uses=1]
Instcombine prefers to strength reduce relational comparisons to equality
comparisons when possible; this should be another case of that. This could
be handled pretty easily in InstCombiner::visitICmpInstWithInstAndIntCst, but it
looks like InstCombiner::visitICmpInstWithInstAndIntCst really should be
redesigned to use ComputeMaskedBits and friends.
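
In other words, the comparison can be rewritten as a mask test, roughly:

int a(int x) { return (x & 96) != 0; }   /* bits 5 and 6 are the only way to exceed 31 */
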
1684 //===---------------------------------------------------------------------===//
1686 int x(int a) { return (a&0xf0)>>4; }
1695 movzbl 4(%esp), %eax
1699 //===---------------------------------------------------------------------===//
1701 Re-implement atomic builtins __sync_add_and_fetch() and __sync_sub_and_fetch
When the return value is not used (i.e. we only care about the value in
memory), x86 does not have to use add to implement these. Instead, it can use
1706 add, sub, inc, dec instructions with the "lock" prefix.
1708 This is currently implemented using a bit of instruction selection trick. The
1709 issue is the target independent pattern produces one output and a chain and we
want to map it into one that just outputs a chain. The current trick is to select
1711 it into a MERGE_VALUES with the first definition being an implicit_def. The
1712 proper solution is to add new ISD opcodes for the no-output variant. DAG
1713 combiner can then transform the node before it gets to target node selection.
Problem #2 is that we are adding a whole bunch of x86 atomic instructions when in
1716 fact these instructions are identical to the non-lock versions. We need a way to
1717 add target specific information to target nodes and have this information
1718 carried over to machine instructions. Asm printer (or JIT) can use this
1719 information to add the "lock" prefix.
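
For example, when the fetched result is unused, something like this could
become a single lock-prefixed RMW instruction (sketch):

void bump(int *counter) {
  __sync_add_and_fetch(counter, 1);	/* result dead: could be "lock incl" / "lock addl $1" on the memory operand */
}
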
1721 //===---------------------------------------------------------------------===//
1723 _Bool bar(int *x) { return *x & 1; }
1725 define zeroext i1 @bar(i32* nocapture %x) nounwind readonly {
1727 %tmp1 = load i32* %x ; <i32> [#uses=1]
1728 %and = and i32 %tmp1, 1 ; <i32> [#uses=1]
1729 %tobool = icmp ne i32 %and, 0 ; <i1> [#uses=1]
1741 Missed optimization: should be movl+andl.
1743 //===---------------------------------------------------------------------===//
1745 Consider the following two functions compiled with clang:
1746 _Bool foo(int *x) { return !(*x & 4); }
1747 unsigned bar(int *x) { return !(*x & 4); }
The second function generates more code even though the two functions are
functionally identical.
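
Both could presumably compile to something like (illustrative):

	movl	4(%esp), %eax
	testb	$4, (%eax)
	sete	%al
	movzbl	%al, %eax
	ret
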
1767 //===---------------------------------------------------------------------===//
1769 Take the following C code:
1770 int x(int y) { return (y & 63) << 14; }
1772 Code produced by gcc:
1778 Code produced by clang:
1784 The code produced by gcc is 3 bytes shorter. This sort of construct often
1785 shows up with bitfields.
1787 //===---------------------------------------------------------------------===//
1789 Take the following C code:
1790 int f(int a, int b) { return (unsigned char)a == (unsigned char)b; }
1792 We generate the following IR with clang:
1793 define i32 @f(i32 %a, i32 %b) nounwind readnone {
1795 %tmp = xor i32 %b, %a ; <i32> [#uses=1]
1796 %tmp6 = and i32 %tmp, 255 ; <i32> [#uses=1]
1797 %cmp = icmp eq i32 %tmp6, 0 ; <i1> [#uses=1]
1798 %conv5 = zext i1 %cmp to i32 ; <i32> [#uses=1]
1802 And the following x86 code:
1809 A cmpb instead of the xorl+testb would be one instruction shorter.
1811 //===---------------------------------------------------------------------===//
1813 Given the following C code:
1814 int f(int a, int b) { return (signed char)a == (signed char)b; }
1816 We generate the following IR with clang:
1817 define i32 @f(i32 %a, i32 %b) nounwind readnone {
1819 %sext = shl i32 %a, 24 ; <i32> [#uses=1]
1820 %conv1 = ashr i32 %sext, 24 ; <i32> [#uses=1]
1821 %sext6 = shl i32 %b, 24 ; <i32> [#uses=1]
1822 %conv4 = ashr i32 %sext6, 24 ; <i32> [#uses=1]
1823 %cmp = icmp eq i32 %conv1, %conv4 ; <i1> [#uses=1]
1824 %conv5 = zext i1 %cmp to i32 ; <i32> [#uses=1]
1828 And the following x86 code:
1837 It should be possible to eliminate the sign extensions.
1839 //===---------------------------------------------------------------------===//
1841 LLVM misses a load+store narrowing opportunity in this code:
1843 %struct.bf = type { i64, i16, i16, i32 }
1845 @bfi = external global %struct.bf* ; <%struct.bf**> [#uses=2]
1847 define void @t1() nounwind ssp {
1849 %0 = load %struct.bf** @bfi, align 8 ; <%struct.bf*> [#uses=1]
1850 %1 = getelementptr %struct.bf* %0, i64 0, i32 1 ; <i16*> [#uses=1]
1851 %2 = bitcast i16* %1 to i32* ; <i32*> [#uses=2]
1852 %3 = load i32* %2, align 1 ; <i32> [#uses=1]
1853 %4 = and i32 %3, -65537 ; <i32> [#uses=1]
1854 store i32 %4, i32* %2, align 1
1855 %5 = load %struct.bf** @bfi, align 8 ; <%struct.bf*> [#uses=1]
1856 %6 = getelementptr %struct.bf* %5, i64 0, i32 1 ; <i16*> [#uses=1]
1857 %7 = bitcast i16* %6 to i32* ; <i32*> [#uses=2]
1858 %8 = load i32* %7, align 1 ; <i32> [#uses=1]
1859 %9 = and i32 %8, -131073 ; <i32> [#uses=1]
1860 store i32 %9, i32* %7, align 1
1864 LLVM currently emits this:
1866 movq bfi(%rip), %rax
1867 andl $-65537, 8(%rax)
1868 movq bfi(%rip), %rax
1869 andl $-131073, 8(%rax)
1872 It could narrow the loads and stores to emit this:
1874 movq bfi(%rip), %rax
1876 movq bfi(%rip), %rax
1880 The trouble is that there is a TokenFactor between the store and the
1881 load, making it non-trivial to determine if there's anything between
1882 the load and the store which would prohibit narrowing.
1884 //===---------------------------------------------------------------------===//
1887 void foo(unsigned x) {
1889 else if (x == 1) qux();
1892 currently compiles into:
1900 the testl could be removed:
1907 0 is the only unsigned number < 1.
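
i.e. one compare can feed both branches, roughly (labels illustrative):

	cmpl	$1, 4(%esp)
	jb	.Lcall_bar		# x < 1  <=>  x == 0 for unsigned x
	je	.Lcall_qux		# x == 1
	ret
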
1909 //===---------------------------------------------------------------------===//