Target Independent Opportunities:

//===---------------------------------------------------------------------===//

Dead argument elimination should be enhanced to handle cases when an argument is
dead to an externally visible function. Though the argument can't be removed
from the externally visible function, the caller doesn't need to pass it in.
For example, in this testcase:

  void foo(int X) __attribute__((noinline));
  void foo(int X) { sideeffect(); }
  void bar(int A) { foo(A+1); }

we compile bar to:

  define void @bar(i32 %A) nounwind ssp {
    %0 = add nsw i32 %A, 1        ; <i32> [#uses=1]
    tail call void @foo(i32 %0) nounwind noinline ssp
    ret void
  }

The add is dead; we could pass in 'i32 undef' instead. This occurs for C++
templates etc., which usually have linkonce_odr/weak_odr linkage, not internal
linkage.

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs. Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.

//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :) This isn't
safe in general, even on darwin. See the libm implementation of hypot for
examples (which special-case when x/y are exactly zero to get signed zeros etc.
right).

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:
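
A minimal C sketch of the kind of code that hits this (an assumption, not the
original testcase, where X and Y are the two loaded locations):

int X, Y;

void test(void) {
  /* load X, load Y, store X: the store's chain operand ends up being a
     TokenFactor of the two loads, so isel can't fold the load from X into
     a load-op-store form even though X and Y don't alias. */
  X = X | Y;
}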

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR a loop that recomputes
a 64-bit value like x = 1ULL << i on every iteration into one that carries the
value across iterations:

  long long tmp = 1;
  for (i = ...; ++i, tmp+=tmp)
    x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)

//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

First, the intrinsic needs to be extended to support integers, and second the
code generator needs to be enhanced to lower these to multiplication trees.
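
For reference, the balanced form the code generator could produce for X**8
uses three multiplies instead of seven:

int factorial_balanced(int X) {
  int X2 = X * X;    /* X^2 */
  int X4 = X2 * X2;  /* X^4 */
  return X4 * X4;    /* X^8 */
}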

//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

This is blocked on not handling X*X*X -> powi(X, 3) (see note above). The issue
is that we end up getting t = 2*X, s = t*t, and don't turn this into 4*X*X,
which is the same number of multiplies and is canonical, because the 2*X has
multiple uses. Here's a simple example:

define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47              ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}

//===---------------------------------------------------------------------===//

Reassociate should handle the example in GCC PR16157:

extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
void f () {  /* this can be optimized to four additions... */
  b4 = a4 + a3 + a2 + a1 + a0;
  b3 = a3 + a2 + a1 + a0;
  b2 = a2 + a1 + a0;
  b1 = a1 + a0;
}

This requires reassociating to forms of expressions that are already available,
something that reassoc doesn't think about yet.

//===---------------------------------------------------------------------===//

This function: (derived from GCC PR19988)

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x + -0.1234 * y));
}

compiles to code that multiplies by two different constant-pool entries:

	mulsd	LCPI1_1(%rip), %xmm1
	mulsd	LCPI1_0(%rip), %xmm2
	...

Reassociate should be able to turn it into:

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x - 0.1234 * y));
}

which allows the multiply by constant to be CSE'd, producing:

	mulsd	LCPI1_0(%rip), %xmm1
	...

This doesn't need -ffast-math support at all. This is particularly bad because
the llvm-gcc frontend is canonicalizing the latter into the former, but clang
doesn't have this problem.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for functions like memcmp.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

and teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
cases as well.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is
equal to the type size. It works but can be overly conservative, as the
alignment of specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should produce an unaligned load from code like this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//

Add support for conditional increments and other related patterns. Instead of
a compare and branch (je) skipping over the increment, we should emit a
branchless conditional increment (e.g. cmp/sbb on x86).

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs. We could even make an intrinsic for this
if anyone cared enough about sincos.
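
A sketch of the merge at the source level, assuming the GNU sincos extension
is available in the target's libm:

#define _GNU_SOURCE
#include <math.h>

void before(double x, double *s, double *c) {
  *s = sin(x);       /* two separate libcalls */
  *c = cos(x);
}

void after(double x, double *s, double *c) {
  sincos(x, s, c);   /* one combined libcall (GNU extension) */
}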

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i].

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

Neither is this (very standard idiom):

int f(int n)
{
  return (((n) << 24) | (((n) & 0xff00) << 8)
	| (((n) >> 8) & 0xff00) | ((n) >> 24));
}

//===---------------------------------------------------------------------===//

These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned int c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned int c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c=0;
  while(a) {
    c++;
    a &= a - 1;
  }
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

This is a form of idiom recognition for loops, the same thing that could be
useful for recognizing memset/memcpy.
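
For reference, the branch-free code such idiom recognition could lower these
loops to when no hardware popcount is available (the classic SWAR popcount):

unsigned popcount32(unsigned v) {
  v = v - ((v >> 1) & 0x55555555);                  /* 2-bit sums */
  v = (v & 0x33333333) + ((v >> 2) & 0x33333333);   /* 4-bit sums */
  return (((v + (v >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
}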

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors:

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}
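
A sketch of the target form; note that a plain 16-bit load only matches
whichever of the two functions agrees with the host byte order:

#include <string.h>

unsigned short read_16_native(const unsigned char *adr) {
  unsigned short r;
  memcpy(&r, adr, 2);  /* becomes a single (possibly unaligned) load */
  return r;            /* equals read_16_le only on a little-endian host */
}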

//===---------------------------------------------------------------------===//

instcombine should handle this transform:

   icmp pred (sdiv X / C1 ), C2

when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass.

//===---------------------------------------------------------------------===//

SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
opportunities in its double_array_divs_variable function: it needs loop
interchange, memory promotion (which LICM already does), vectorization and
variable trip count loop unrolling (since it has a constant trip count). ICC
apparently produces this very nice code with -ffast-math:

..B1.70:                        # Preds ..B1.70 ..B1.69
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       cmpl      $131072, %edx                                 #108.2
       jb        ..B1.70       # Prob 99%                      #108.2

It would be better to count down to zero, but this is a lot better than what we
do.

//===---------------------------------------------------------------------===//

typedef unsigned U32;
typedef unsigned long long U64;

int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags. On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.

//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has from TargetData. This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two near identical IV's (after promotion) on PPC/ARM:

	...
	add r2, r2, #1     <- [0,+,1]
	sub r0, r0, #1     <- [0,-,1]
	...

LSR should reuse the "+" IV for the exit test.

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
	                    [ %tmp.9, %then.1 ]
	ret i32 %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative. "return foo() << 1" can be tail recursion eliminated.
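
A sketch of the accumulator rewrite tail recursion elimination could do,
assuming the recursive definition above:

int pow2m1_iter(int n) {
  int acc = 0;            /* the n == 0 base case */
  for (; n != 0; --n)
    acc = 2 * acc + 1;    /* apply one pending "2*_ + 1" step */
  return acc;
}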

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass. Consider a silly testcase
whose x86 code checks a condition in the entry block and branches (je) over
an assertion block.

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.
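
A sketch of the integer rewrite for one element, twiddling the IEEE-754 sign
bit directly:

#include <stdint.h>
#include <string.h>

void fabs_inplace(double *p) {
  uint64_t bits;
  memcpy(&bits, p, 8);
  bits &= ~(1ULL << 63);  /* fabs: clear the sign bit */
  memcpy(p, &bits, 8);    /* for fneg, flip it instead: bits ^= 1ULL << 63 */
}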

//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer):

	...
	movb _m_HotKey+3, %cl
	movb _m_HotKey+4, %dl
	movb _m_HotKey+2, %ch
	...
	movzwl _m_HotKey+4, %edx
	...

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

	%struct.THotKey = type { i16, i8, i8, i8 }
	define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
		%tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
		%tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
		%tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
		%tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2
		...

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.

//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//

Consider:

	long long input[8] = {1,1,1,1,1,1,1,1};

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able. This is good, but the memcpy
gets lowered to load/stores in the code generator. This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global. This gives us atrocious code like this:

	movl _C.0.1444-"L1$pb"+32(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+20(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+36(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+44(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+40(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+12(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+4(%eax), %ecx
	...

//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The code in the PR should compile into "ret int undef"; LLVM instead
produces "ret int 0".

//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop. One trivial example is:

#include <stdio.h>
int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size. The resultant code would then also be suitable for
exit value computation.

//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 12;
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 12;
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f2/f3/f4 right. On x86-32, a few of these
generate truly horrible code, instead of using shld and friends. On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness. PPC64 misses f, f5 and f6. CellSPU aborts in isel.

//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy. This can only
be done safely if "b" isn't modified between the strlen and memcpy, of course.
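
A sketch of the before/after shapes; the +1 (copying the nul terminator) is
what makes the two forms equivalent:

#include <string.h>

void before(char *a, const char *b) {
  memcpy(a, b, strlen(b) + 1);  /* walks b twice */
}

void after(char *a, const char *b) {
  strcpy(a, b);                 /* one pass, same effect */
}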

//===---------------------------------------------------------------------===//

We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//

We miss some instcombines for stuff like this:

void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2).  */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.

//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mul lo (cheaper). Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

This is equivalent to the following, where 2863311531 is the multiplicative
inverse of 3, and 1431655766 is ((2^32)-1)/3+1:

void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}

The same transformation can work with an even modulo with the addition of a
rotate: rotate the result of the multiply to the right by the number of bits
which need to be zero for the condition to be true, and shrink the compare RHS
by the same amount. Unless the target supports rotates, though, that
transformation probably isn't worthwhile.

The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".

//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

  std::scanf("%d", &t.val);
  std::printf("%d\n", t.val);

//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6	;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//

int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}

int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

#define PMD_MASK    (~((1UL << 23) - 1))

void clear_pmd_range(unsigned long start, unsigned long end)
{
	if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
		/* ... */;
}

The expression should optimize to something like
"!((start|end)&~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

The condition "variable == 4 || variable == 6" should optimize to
"(variable | 2) == 6". Currently not optimized with "clang -emit-llvm-bc |
opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

These should combine to the same thing. Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//

#define abs(x) x>0?x:-x
int a(int x) {
  return (abs(x)) >= 0;
}

This should optimize to x == INT_MIN. (With -fwrapv.) Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

We miss this transformation in a few related testcases (from GCC):

unsigned int
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  /* ... */
}

unsigned int
minus_cst (unsigned int a)
{
  /* ... */
}

unsigned int
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15.  */
  /* ... */
}

unsigned int
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23.  */
  /* ... */
}

All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//

int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)". Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane. Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -1 : -9;}
Should combine to (x | -9) ^ 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -9 : -1;}
Should combine to x | -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
        (!tmp || decl_context == 1)

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0		; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true		; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24		; <i1> [#uses=1]

//===---------------------------------------------------------------------===//

Store sinking: This code:

void f (int n, int *cond, int *res) {
  int i;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out. This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  /* ... an if/else where the else computes the same mul that the common
     fall-through code does ... */
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
other paths.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store. We need partially dead store sinking.

//===---------------------------------------------------------------------===//

[LOAD PRE CRIT EDGE SPLITTING]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation. The code we get looks like (g is on the stack):

bb2:
	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is partially redundant, and in BB2 it should have the value %8.

GCC PR33344 and PR35287 are similar cases.

//===---------------------------------------------------------------------===//

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite, ones we don't get yet are (checked through loadpre25):

[CRIT EDGE BREAKING]
loadpre3.c predcom-4.c

[PRE OF READONLY CALL]

[TURN SELECT INTO BRANCH]
loadpre14.c loadpre15.c

actually a conditional increment: loadpre18.c loadpre19.c

//===---------------------------------------------------------------------===//

There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.

//===---------------------------------------------------------------------===//

There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite. For example, we get the first example in predcom-1.c, but
miss the second one:

unsigned fib[1000];
unsigned avg[1000];

__attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}

which compiles into two loads instead of one in the loop.

predcom-2.c is the same as predcom-1.c.

predcom-3.c is very similar but needs loads feeding each other instead of
store->load.

//===---------------------------------------------------------------------===//

Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

We should do better analysis of posix_memalign. At the least it should
no-capture its pointer argument; at best, we should know that the out-value
result doesn't point to anything (like malloc). One example of this is in
SingleSource/Benchmarks/Misc/dt.c

//===---------------------------------------------------------------------===//

A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store. This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

struct X { int i; };
...

//===---------------------------------------------------------------------===//

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629

With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as | opt -mem2reg -gvn
-instcombine | llvm-dis

we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633

We could eliminate the branch condition here, loading from null is undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  /* ... */
}

//===---------------------------------------------------------------------===//

simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "") -> strlen(x)
strspn(x, "") -> strlen(x)
strspn(x, "a") -> strchr(x, 'a')-x

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3)
{
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character. See PR3253 for some notes on
codegen.

456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.

//===---------------------------------------------------------------------===//

1335 "gas" uses this idiom:
1336 else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
1338 else if (strchr ("<>", *intel_parser.op_string)
1340 Those should be turned into a switch.
1342 //===---------------------------------------------------------------------===//
252.eon contains this interesting code:

        %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
        %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
        call void @llvm.memcpy.i32(i8* %endptr,
          i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
        %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple of reasons. First, in this:

        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)

The strlen could be replaced with: %strlen = sub %3072, %3073, because the
strcpy call returns a pointer to the end of the string. Based on that, the
endptr GEP just becomes equal to 3073, which eliminates a strlen call and GEP.

Second, the memcpy+strlen strlen can be replaced with:

        %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

Because the destination was just copied into the specified memory buffer. This,
in turn, can be constant folded to "4".

In other code, it contains:

        %endptr6978 = bitcast i8* %endptr69 to i32*
        store i32 7107374, i32* %endptr6978, align 1
        %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

Which could also be constant folded. Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:            ; preds = %_ZN18eonImageCalculatorC1Ev.exit
        %682 = getelementptr i8** %argv, i32 6          ; <i8**> [#uses=2]
        %683 = load i8** %682, align 4          ; <i8*> [#uses=4]
        %684 = load i8* %683, align 1           ; <i8> [#uses=1]
        %685 = icmp eq i8 %684, 0               ; <i1> [#uses=1]
        br i1 %685, label %bb10, label %bb9

bb9:            ; preds = %bb8
        %686 = call i32 @strlen(i8* %683) nounwind readonly
        %687 = icmp ugt i32 %686, 254           ; <i1> [#uses=1]
        br i1 %687, label %bb10, label %bb11

bb10:           ; preds = %bb9, %bb8
        %688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//

I see an interesting fully redundant call to strlen left in 186.crafty:InputMove
which looks like:

        %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0

bb62:           ; preds = %bb55, %bb53
        %promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
        %171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
        %172 = add i32 %171, -1         ; <i32> [#uses=1]
        %173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
        ...
        br i1 %or.cond, label %bb65, label %bb72

bb65:           ; preds = %bb62
        store i8 0, i8* %173, align 1
        ...

bb72:           ; preds = %bb65, %bb62
        %trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
        %177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call. At worst, we could shove the %177 strlen call
up into the bb65 block, moving it out of the bb62->bb72 path. However, note
that bb65 stores to the string, zeroing out the last byte. This means that on
that path the value of %177 is actually just %171-1. A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//

186.crafty also contains this code:

 %1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
 %1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
 %1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
 %1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
 %1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909

The last strlen is computable as 1908-@pgn_event, which means 1910=1908.

//===---------------------------------------------------------------------===//

186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
        i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(...,  globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out". Once all the printfs
stop using "out", all that is left is the memcpy's into it. This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift: on x86 this
currently gets a byte load, a byte shift, and a separate sign extension,
when it could get a movsbl plus a 32-bit shift.

//===---------------------------------------------------------------------===//

int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants; what is the rule for even? (For test2,
2-x == x means 2x == 2, which mod 2^32 holds for both x == 1 and
x == 1 + 2^31, so plain "x == 1" isn't quite it.)

//===---------------------------------------------------------------------===//

PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into { {{}}, float } could be turned into a store
to the float directly.

//===---------------------------------------------------------------------===//

This code:

double foo(double a) { return sin(a); }

compiles on x86-64 Linux into a function that sets up a full call to sin,
when it could simply tail call it (jmp sin).

//===---------------------------------------------------------------------===//

The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.

//===---------------------------------------------------------------------===//

The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int  int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//

int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

compiles to:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128                            ; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
  %2 = or i32 %b, 128                             ; <i32> [#uses=1]
  %3 = and i32 %b, -129                           ; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2        ; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

         b = (b & ~0x80) | (a & 0x80);

Which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129                           ; <i32> [#uses=1]
  %1 = and i32 %a, 128                            ; <i32> [#uses=1]
  %2 = or i32 %0, %1                              ; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

     b = (b & ~0x80) | (a & 0x40) << 1;
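
The underlying identity is the classic masked merge: for any mask M,
(b & ~M) | (a & M) takes the M bits from a and the rest from b, and can also
be written with one fewer operation:

unsigned merge(unsigned a, unsigned b, unsigned M) {
  return b ^ ((b ^ a) & M);  /* == (b & ~M) | (a & M) */
}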

//===---------------------------------------------------------------------===//

These two functions produce different code. They shouldn't:

#include <stdint.h>

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return b;
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return b;
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %1 = and i8 %a, -64                             ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %.masked = and i8 %a, 64                        ; <i8> [#uses=1]
  %1 = and i8 %a, -128                            ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  %3 = or i8 %2, %.masked                         ; <i8> [#uses=1]
  ret i8 %3
}

//===---------------------------------------------------------------------===//

IPSCCP does not currently propagate argument dependent constants through
functions where it does not see all of the callers. This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  ...
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant. Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.

//===---------------------------------------------------------------------===//

The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constant folding logic. This would allow IPSCCP to be able to
handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }

//===---------------------------------------------------------------------===//

InstCombine should use SimplifyDemandedBits to remove the or instruction:

define i1 @test(i8 %x, i8 %y) {
  %A = or i8 %x, 1
  %B = icmp ugt i8 %A, 3
  ret i1 %B
}

Currently instcombine calls SimplifyDemandedBits with either all bits or just
the sign bit, if the comparison is obviously a sign test. In this case, we only
need all but the bottom two bits from %A, and if we gave that mask to SDB it
would delete the or instruction for us.

//===---------------------------------------------------------------------===//

functionattrs doesn't know much about memcpy/memset. This function should be
marked readnone rather than readonly, since it only twiddles local memory, but
functionattrs doesn't handle memset/memcpy/memmove aggressively:

struct X { int *p; int *q; };
...
  p = __builtin_memcpy (&x, &y, sizeof (int *));
...

//===---------------------------------------------------------------------===//

Missed instcombine transformation:

define i1 @a(i32 %x) nounwind readnone {
entry:
  %cmp = icmp eq i32 %x, 30
  %sub = add i32 %x, -30
  %cmp2 = icmp ugt i32 %sub, 9
  %or = or i1 %cmp, %cmp2
  ret i1 %or
}

This should be optimized to a single compare. Testcase derived from gcc.
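
One form the combined compare could take (an assumption about the exact
output): the or is true exactly when x lies outside [31, 39].

int a(unsigned x) {
  return x - 31u > 8u;  /* true iff x < 31 or x > 39 */
}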

//===---------------------------------------------------------------------===//

Missed instcombine transformation:

void a(int x) { if (((1<<x)&8)==0) b(); }

The shift should be optimized out. Testcase derived from gcc.

//===---------------------------------------------------------------------===//

Missed instcombine or reassociate transformation:

int a(int a, int b) { return (a==12)&(b>47)&(b<58); }

The sgt and slt should be combined into a single comparison. Testcase derived
from gcc.

//===---------------------------------------------------------------------===//

Missed instcombine transformation:

define i32 @a(i32 %x) nounwind readnone {
entry:
  %rem = srem i32 %x, 32
  %shl = shl i32 1, %rem
  ret i32 %shl
}

The srem can be transformed to an and because if x is negative, the shift is
undefined. Testcase derived from gcc.
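
In source terms: x % 32 and x & 31 agree for non-negative x, and a negative x
already makes the shift undefined, so the cheaper form is always legal here:

int shift_bit(int x) {
  return 1 << (x & 31);  /* instead of 1 << (x % 32) */
}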

//===---------------------------------------------------------------------===//

Missed instcombine/dagcombine transformation:

define i32 @a(i32 %x, i32 %y) nounwind readnone {
entry:
  %mul = mul i32 %y, -8
  %sub = sub i32 %x, %mul
  ret i32 %sub
}

Should compile to something like x+y*8, but currently compiles to an
inefficient result. Testcase derived from gcc.

//===---------------------------------------------------------------------===//

Missed instcombine/dagcombine transformation:

define void @lshift_lt(i8 zeroext %a) nounwind {
entry:
  %conv = zext i8 %a to i32
  %shl = shl i32 %conv, 3
  %cmp = icmp ult i32 %shl, 33
  br i1 %cmp, label %if.then, label %if.end

if.then:
  tail call void @bar() nounwind
  ret void

if.end:
  ret void
}

declare void @bar() nounwind

The shift should be eliminated. Testcase derived from gcc.
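
The shift-free equivalent: since %a is zero-extended, (a << 3) can't overflow
i32, so the comparison collapses to a range check on a itself:

void bar(void);

void lshift_lt(unsigned char a) {
  if (a < 5)  /* (a << 3) < 33  <=>  a*8 <= 32  <=>  a <= 4 */
    bar();
}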

//===---------------------------------------------------------------------===//

These compile into different code: one gets recognized as a switch and the
other doesn't, due to phase ordering issues (PR6212):

int test1(int mainType, int subType) {
  if (mainType == 7)
    ...
  else if (mainType == 9)
    ...
  else if (mainType == 11)
    ...
}

int test2(int mainType, int subType) {
  ...
}

//===---------------------------------------------------------------------===//

The following test case (from PR6576):

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %cond1 = icmp eq i32 %b, 0                      ; <i1> [#uses=1]
  br i1 %cond1, label %exit, label %bb.nph

bb.nph:                                           ; preds = %entry
  %tmp = mul i32 %b, %a                           ; <i32> [#uses=1]
  ret i32 %tmp

exit:                                             ; preds = %entry
  ret i32 0
}

could be reduced to:

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %tmp = mul i32 %b, %a
  ret i32 %tmp
}

//===---------------------------------------------------------------------===//

We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.

Another interesting case is that something related could be used for variables
that go const after their ctor has finished. In these cases, globalopt (which
can statically run the constructor) could mark the global const (so it gets put
in the readonly section). A testcase would be:

#include <complex>
using namespace std;
const complex<char> should_be_in_rodata (42,-42);
complex<char> should_be_in_data (42,-42);
complex<char> should_be_in_bss;

Where we currently evaluate the ctors but the globals don't become const because
the optimizer doesn't know they "become const" after the ctor is done. See
GCC PR4131 for more examples.

//===---------------------------------------------------------------------===//

In this code:

int test(int x) {
  return x > 1 ? x : 1;
}

LLVM emits a comparison with 1 instead of 0. 0 would be equivalent
and cheaper on most targets.

LLVM prefers comparisons with zero over non-zero in general, but in this
case it chooses instead to keep the max operation obvious.
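
Spelled out, the two forms can only differ at x == 1, where both yield 1:

int max_as_written(int x) { return x > 1 ? x : 1; }
int max_cheaper(int x)    { return x > 0 ? x : 1; }  /* same result */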

//===---------------------------------------------------------------------===//

Take the following testcase on x86-64 (similar testcases exist for all targets
with addc/adde):

define void @a(i64* nocapture %s, i64* nocapture %t, i64 %a, i64 %b,
i64 %c) nounwind {
entry:
 %0 = zext i64 %a to i128                        ; <i128> [#uses=1]
 %1 = zext i64 %b to i128                        ; <i128> [#uses=1]
 %2 = add i128 %1, %0                            ; <i128> [#uses=2]
 %3 = zext i64 %c to i128                        ; <i128> [#uses=1]
 %4 = shl i128 %3, 64                            ; <i128> [#uses=1]
 %5 = add i128 %4, %2                            ; <i128> [#uses=1]
 %6 = lshr i128 %5, 64                           ; <i128> [#uses=1]
 %7 = trunc i128 %6 to i64                       ; <i64> [#uses=1]
 store i64 %7, i64* %s, align 8
 %8 = trunc i128 %2 to i64                       ; <i64> [#uses=1]
 store i64 %8, i64* %t, align 8
 ret void
}

The generated SelectionDAG has an ADD of an ADDE, where both operands of the
ADDE are zero. Replacing one of the operands of the ADDE with the other operand
of the ADD, and replacing the ADD with the ADDE, should give the desired result.

(That said, we are doing a lot better than gcc on this testcase. :) )

//===---------------------------------------------------------------------===//

Switch lowering generates less than ideal code for the following switch:

define void @a(i32 %x) nounwind {
entry:
  switch i32 %x, label %if.end [
    i32 0, label %if.then
    i32 1, label %if.then
    i32 2, label %if.then
    i32 3, label %if.then
    i32 5, label %if.then
  ]

if.then:
  tail call void @foo() nounwind
  ret void

if.end:
  ret void
}

declare void @foo()

On x86-64 (other platforms give similar results), the generated code tests the
case values with a movl+movl+btq+jb sequence, which could be simplified to a
cmpl+jne.

Or, if we wanted to be really clever, we could simplify the whole thing to
something that eliminates a branch entirely.
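
In C terms, one branch-reducing form would be roughly the following sketch
(an assumption, not necessarily the form originally intended):

void foo(void);

void a(int x) {
  /* cases {0,1,2,3,5} are exactly the set bits of 0x2f */
  if ((unsigned)x < 6 && ((1 << x) & 0x2f))
    foo();
}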

//===---------------------------------------------------------------------===//

Given a branch where the two target blocks are identical ("ret i32 %b" in
both), simplifycfg will simplify them away. But not so for a switch statement:

define i32 @f(i32 %a, i32 %b) nounwind readnone {
entry:
	switch i32 %a, label %bb3 [
		...
	]

bb:		; preds = %entry, %entry
	ret i32 %b

bb3:		; preds = %entry
	ret i32 %b
}

//===---------------------------------------------------------------------===//