Target Independent Opportunities:

//===---------------------------------------------------------------------===//

We should recognize idioms for add-with-carry and turn it into the appropriate
intrinsics. This example:

unsigned add32carry(unsigned sum, unsigned x) {
  unsigned z = sum + x;
  if (sum + x < x)
    z++;
  return z;
}

Compiles to: clang t.c -S -o - -O3 -fomit-frame-pointer -m64 -mkernel
_add32carry:                            ## @add32carry
        addl    %esi, %edi
        sbbl    %ecx, %ecx
        movl    %edi, %eax
        subl    %ecx, %eax
        ret

with clang, but to the shorter carry-using sequence:

_add32carry:
        leal    (%rsi,%rdi), %eax
        cmpl    %esi, %eax
        adcl    $0, %eax
        ret

with gcc.
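
As a hedged sketch, this is what the matched idiom could become at the source
level, assuming the GCC/Clang __builtin_uadd_overflow builtin (which lowers to
llvm.uadd.with.overflow) is available:

unsigned add32carry_builtin(unsigned sum, unsigned x) {
  unsigned z;
  unsigned carry = __builtin_uadd_overflow(sum, x, &z); /* carry out of add */
  return z + carry;
}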

//===---------------------------------------------------------------------===//

Dead argument elimination should be enhanced to handle cases when an argument is
dead to an externally visible function. Though the argument can't be removed
from the externally visible function, the caller doesn't need to pass it in.
For example in this testcase:

  void foo(int X) __attribute__((noinline));
  void foo(int X) { sideeffect(); }
  void bar(int A) { foo(A+1); }

We compile bar to:

define void @bar(i32 %A) nounwind ssp {
  %0 = add nsw i32 %A, 1                          ; <i32> [#uses=1]
  tail call void @foo(i32 %0) nounwind noinline ssp
  ret void
}

The add is dead, we could pass in 'i32 undef' instead. This occurs for C++
templates etc, which usually have linkonce_odr/weak_odr linkage, not internal
linkage.

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs. Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.

//===---------------------------------------------------------------------===//

We should recognize various "overflow detection" idioms and translate them into
llvm.uadd.with.overflow and similar intrinsics. Here is a multiply idiom:

unsigned int mul(unsigned int a, unsigned int b) {
  if ((unsigned long long)a*b > 0xffffffff)
    exit(0);
  return a*b;
}
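
The matching addition idiom, as a hedged sketch (the names are illustrative,
not from any benchmark):

unsigned add_check(unsigned a, unsigned b, int *overflow) {
  unsigned s = a + b;
  *overflow = s < a;   /* carry out of the unsigned add */
  return s;
}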

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (ffastmath). Misc/mandel will like this. :) This isn't
safe in general, even on darwin. See the libm implementation of hypot for
examples (which special case when x/y are exactly zero to get signed zeros etc
right).

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0), i.e. test the sign
by loading only the byte that holds the sign bit.

//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

First, the intrinsic needs to be extended to support integers, and second the
code generator needs to be enhanced to lower these to multiplication trees.
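
For X**8 a balanced tree needs three multiplies instead of seven; a sketch of
the lowering this would enable:

int pow8(int X) {
  int X2 = X * X;    /* X^2 */
  int X4 = X2 * X2;  /* X^4 */
  return X4 * X4;    /* X^8: 3 multiplies, depth 3 instead of 7 */
}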

//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

This is blocked on not handling X*X*X -> powi(X, 3) (see note above). The issue
is that we end up getting t = 2*X, s = t*t, and don't turn this into 4*X*X,
which is the same number of multiplies and is canonical, because the 2*X has
multiple uses. Here's a simple example:

define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47               ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}

//===---------------------------------------------------------------------===//

Reassociate should handle the example in GCC PR16157:

extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
void f () {  /* this can be optimized to four additions... */
        b4 = a4 + a3 + a2 + a1 + a0;
        b3 = a3 + a2 + a1 + a0;
        b2 = a2 + a1 + a0;
        b1 = a1 + a0;
}

This requires reassociating to forms of expressions that are already available,
something that reassoc doesn't think about yet.

//===---------------------------------------------------------------------===//

This function: (derived from GCC PR19988)
double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x + -0.1234 * y));
}

compiles to two multiplies by the constant instead of one:

        mulsd   LCPI1_1(%rip), %xmm1
        mulsd   LCPI1_0(%rip), %xmm2

Reassociate should be able to turn it into:

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x - 0.1234 * y));
}

Which allows the multiply by constant to be CSE'd, producing:

        mulsd   LCPI1_0(%rip), %xmm1

This doesn't need -ffast-math support at all. This is particularly bad because
the llvm-gcc frontend is canonicalizing the latter into the former, but clang
doesn't have this problem.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should produce an unaligned load from code like this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns. Instead
of:

        movl 136(%esp), %eax
        cmpl $0, %eax
        je LBB16_2      #cond_next
LBB16_1:        #cond_true
        incl _foo
LBB16_2:        #cond_next

emit:
        movl    _foo, %eax
        cmpl    $1, %edi
        sbbl    $-1, %eax
        movl    %eax, _foo
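
A source-level instance of the pattern (hedged sketch; the names are
illustrative):

int counter;
void tick(int cond) {
  if (cond)        /* candidate for cmp + adc instead of a branch */
    counter++;
}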

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs. We could even make an intrinsic for this
if anyone cared enough about sincos.
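
For reference, the combined form at the source level, assuming the target libm
provides the GNU sincos extension:

#define _GNU_SOURCE
#include <math.h>

void polar_parts(double x, double *s, double *c) {
  sincos(x, s, c);  /* one call instead of separate sin(x) and cos(x) */
}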

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
        {
          /* Flip the target bit of each basis state */
          reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
        }

Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize the reg->size doesn't alias reg->node[i],
but that requires TBAA.

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

Neither is this (very standard idiom):

unsigned int swap32(unsigned int n) {
  return (((n) << 24) | (((n) & 0xff00) << 8)
       | (((n) >> 8) & 0xff00) | ((n) >> 24));
}

//===---------------------------------------------------------------------===//

These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

typedef unsigned long long BITBOARD;
int PopCnt(register BITBOARD a) {
  register int c = 0;
  while (a) {
    c++;
    a &= a - 1;
  }
  return c;
}
unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

This is a form of idiom recognition for loops, the same thing that could be
useful for recognizing memset/memcpy.
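
What recognition buys: each of these loops collapses to one intrinsic call.
A sketch using the GCC/Clang builtin, which lowers to llvm.ctpop:

unsigned countbits(unsigned v) {
  return __builtin_popcount(v);
}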

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors.

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}
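
A portable way to express the single load (hedged sketch; compilers fold the
fixed-size memcpy into one, possibly unaligned, 16-bit load):

#include <string.h>

unsigned short read_16_host(const unsigned char *adr) {
  unsigned short v;
  memcpy(&v, adr, sizeof v);  /* host byte order, unaligned-safe */
  return v;
}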

//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X / C1 ), C2
when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass.

//===---------------------------------------------------------------------===//

SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
opportunities in its double_array_divs_variable function: it needs loop
interchange, memory promotion (which LICM already does), vectorization and
loop unrolling (it has a constant trip count). ICC apparently produces this
very nice code with -ffast-math:

..B1.70:                        # Preds ..B1.70 ..B1.69
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       addl      $8, %edx                                      #
       cmpl      $131072, %edx                                 #108.2
       jb        ..B1.70       # Prob 99%                      #108.2

It would be better to count down to zero, but this is a lot better than what we
do.

//===---------------------------------------------------------------------===//

This code:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags. On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.

//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has from TargetData. This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two near identical IV's (after promotion) on PPC/ARM:

        add r2, r2, #1  <- [0,+,1]
        sub r0, r0, #1  <- [0,-,1]

LSR should reuse the "+" IV for the exit test.

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
        %tmp.1 = and i32 %a, 1          ; <i32> [#uses=1]
        %tmp.2 = icmp ne i32 %tmp.1, 0          ; <i1> [#uses=1]
        br i1 %tmp.2, label %then.0, label %else.0

then.0:         ; preds = %entry
        %tmp.5 = add i32 %a, -1         ; <i32> [#uses=1]
        %tmp.3 = call i32 @t4( i32 %tmp.5 )             ; <i32> [#uses=1]
        br label %return

else.0:         ; preds = %entry
        %tmp.7 = icmp ne i32 %a, 0              ; <i1> [#uses=1]
        br i1 %tmp.7, label %then.1, label %return

then.1:         ; preds = %else.0
        %tmp.11 = add i32 %a, -2                ; <i32> [#uses=1]
        %tmp.9 = call i32 @t4( i32 %tmp.11 )            ; <i32> [#uses=1]
        br label %return

return:         ; preds = %then.1, %else.0, %then.0
        %result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                            [ %tmp.9, %then.1 ]
        ret i32 %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative. "return foo() << 1" can be tail recursion eliminated.

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
        %tmp = load i32* %x             ; <i32> [#uses=0]
        %tmp.foo = call i32 @foo( i32* %x )             ; <i32> [#uses=1]
        ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
        %tmp3 = call i32 @foo( i32* %x )                ; <i32> [#uses=1]
        ret i32 %tmp3
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass. Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

we compile this to:
_foo:
        subl    $28, %esp
        call    "L1$pb"
"L1$pb":
        popl    %eax
        cmpl    $0, 32(%esp)
        je      LBB1_2  # cond_true

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a yonah, this loop:

double a[256];
  for (b = 0; b < 10000000; b++)
  for (i = 0; i < 256; i++)
    a[i] = -a[i];

is twice as slow as this loop:

long long a[256];
  for (b = 0; b < 10000000; b++)
  for (i = 0; i < 256; i++)
    a[i] ^= (1ULL << 63);

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer):

        movb    _m_HotKey+3, %cl
        movb    _m_HotKey+4, %dl
        movb    _m_HotKey+2, %ch
        ...
        movzwl  _m_HotKey+4, %edx
        ...

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

        %struct.THotKey = type { i16, i8, i8, i8 }
        define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
        entry:
          %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
          %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
          %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
          %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.

//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//

Consider:

long long input[8] = {1,1,1,1,1,1,1,1};
foo(input);

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able. This is good, but the memcpy
gets lowered to load/stores in the code generator. This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global. This gives us atrocious code like this:

        call    "L1$pb"
"L1$pb":
        popl    %eax
        movl    _C.0.1444-"L1$pb"+32(%eax), %ecx
        movl    %ecx, 32(%esp)
        movl    _C.0.1444-"L1$pb"+20(%eax), %ecx
        movl    %ecx, 20(%esp)
        movl    _C.0.1444-"L1$pb"+36(%eax), %ecx
        movl    %ecx, 36(%esp)
        movl    _C.0.1444-"L1$pb"+44(%eax), %ecx
        movl    %ecx, 44(%esp)
        movl    _C.0.1444-"L1$pb"+40(%eax), %ecx
        movl    %ecx, 40(%esp)
        movl    _C.0.1444-"L1$pb"+12(%eax), %ecx
        movl    %ecx, 12(%esp)
        movl    _C.0.1444-"L1$pb"+4(%eax), %ecx
        movl    %ecx, 4(%esp)
        ...

//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The code in that PR should compile into "ret int undef". Instead, LLVM
produces "ret int 0".

//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop. One trivial example is:

#include <stdio.h>
int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size. The resultant code would then also be suitable for
exit value computation.
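
Unrolled by 2 (a hedged sketch based on the loop above), the parity of nLoop
is known in each copy, so the '&1' test folds away:

for ( nLoop = 0; nLoop < 1000; nLoop += 2 ) {
    nRet -= 1;   /* even iteration: nLoop & 1 == 0 */
    nRet += 2;   /* odd iteration:  nLoop & 1 == 1 */
}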

//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 9;
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 10;
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f2/f3/f4 right. On x86-32, a few of these
generate truly horrible code, instead of using shld and friends. On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness. PPC64 misses f, f5 and f6. CellSPU aborts in isel.

//===---------------------------------------------------------------------===//

This (and similar related idioms):

unsigned int foo(unsigned char i) {
  return i | (i<<8) | (i<<16) | (i<<24);
}

compiles into:

define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %shl5 = shl i32 %conv, 16
  %shl9 = shl i32 %conv, 24
  %or = or i32 %shl9, %conv
  %or6 = or i32 %or, %shl5
  %or10 = or i32 %or6, %shl
  ret i32 %or10
}

it would be better as:

unsigned int bar(unsigned char i) {
  unsigned int j = i | (i << 8);
  return j | (j << 16);
}

aka:

define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %or = or i32 %shl, %conv
  %shl5 = shl i32 %or, 16
  %or6 = or i32 %shl5, %or
  ret i32 %or6
}

or even i*0x01010101, depending on the speed of the multiplier. The best way to
handle this is to canonicalize it to a multiply in IR and have codegen handle
lowering multiplies to shifts on cpus where shifts are faster.
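
The multiply form the note suggests as the canonical IR:

unsigned int splat(unsigned char i) {
  return i * 0x01010101U;   /* byte-splat; codegen can re-expand to shifts */
}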

//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy. This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.
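
A source-level instance of the mergeable pattern (hedged sketch; the +1 copies
the terminator, making the pair exactly strcpy(a, b)):

#include <string.h>

void copy(char *a, const char *b) {
  memcpy(a, b, strlen(b) + 1);  /* nothing modifies b in between */
}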

//===---------------------------------------------------------------------===//

We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

Into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions

1.821u 0.003s 0:01.82 100.0%    0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions

0.821u 0.001s 0:00.82 100.0%    0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//

We miss some instcombines for stuff like this:
void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2). */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.

//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mul lo (cheaper). Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

This is equivalent to the following, where 2863311531 is the multiplicative
inverse of 3, and 1431655766 is ((2^32)-1)/3+1:
void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}
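
A quick sanity check of the constants (hedged; assumes 32-bit unsigned
wraparound):

#include <assert.h>

int main(void) {
  for (unsigned n = 0; n < 1000000; n++)
    assert((n % 3 == 0) == (n * 2863311531U < 1431655766U));
  return 0;
}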

The same transformation can work with an even modulo with the addition of a
rotate: rotate the result of the multiply to the right by the number of bits
which need to be zero for the condition to be true, and shrink the compare RHS
by the same amount. Unless the target supports rotates, though, that
transformation probably isn't worthwhile.

The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".

//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

#include <cstdio>
struct t {
  int val;
};
int main() {
  t t;
  std::scanf("%d", &t.val);
  std::printf("%d\n", t.val);
}

//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6      ;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//

int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}
int
g (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

#define PMD_MASK    (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
        if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
                f();
}

The expression should optimize to something like
"!((start|end)&~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
These should combine to the same thing. Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//

#define abs(x) x>0?x:-x
int f(int x) {
  return (abs(x)) >= 0;
}

This should optimize to x != INT_MIN. (With -fwrapv.) Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

void
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}

void
minus_cst (unsigned int a)
{
  unsigned int tem;

  tem = 20 - a;
  if (tem == 5)
    bar ();
}

void
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15. */
  if ((a & ~7) > 8)
    bar ();
}

void
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23. */
  if ((a >> 2) > 5)
    bar ();
}

All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//

int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)". Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane. Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
  (!tmp && decl_context == 1)

This allows recursive simplifications, tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0            ; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true             ; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24               ; <i1> [#uses=1]

//===---------------------------------------------------------------------===//

Store sinking: This code:

void f (int n, int *cond, int *res) {
  int i;
  *res = 0;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out. This gives us this code:

bb:             ; preds = %bb2, %entry
        %.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
        %i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
        %1 = load i32* %cond, align 4
        %2 = icmp eq i32 %1, 0
        br i1 %2, label %bb2, label %bb1

bb1:            ; preds = %bb
        %3 = xor i32 %.rle, 234
        store i32 %3, i32* %res, align 4
        br label %bb2

bb2:            ; preds = %bb, %bb1
        %.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
        %indvar.next = add i32 %i.05, 1
        %exitcond = icmp eq i32 %indvar.next, %n
        br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store. We need partially dead store sinking.

//===---------------------------------------------------------------------===//

[LOAD PRE CRIT EDGE SPLITTING]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation. The code we get looks like (g is on the stack):

bb2:
        %9 = getelementptr %struct.f* %g, i32 0, i32 0
        store i32 %8, i32* %9, align 4
        br label %bb3

bb3:            ; preds = %bb1, %bb2, %bb
        %c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
        %b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
        %10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
        %11 = load i32* %10, align 4

%11 is partially redundant, and in BB2 it should have the value %8.

GCC PR33344 and PR35287 are similar cases.

//===---------------------------------------------------------------------===//

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite, ones we don't get yet are (checked through loadpre25):

[CRIT EDGE BREAKING]
loadpre3.c predcom-4.c

[PRE OF READONLY CALL]
loadpre5.c

[TURN SELECT INTO BRANCH]
loadpre14.c loadpre15.c

actually a conditional increment: loadpre18.c loadpre19.c

//===---------------------------------------------------------------------===//

[LOAD PRE / STORE SINKING / SPEC HACK]

This is a chunk of code from 456.hmmer:

int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
      int *tpdm, int xmb, int *bp, int *ms) {
  int k, sc;
  for (k = 1; k <= M; k++) {
    mc[k] = mpp[k-1]   + tpmm[k-1];
    if ((sc = ip[k-1]  + tpim[k-1]) > mc[k])  mc[k] = sc;
    if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k])  mc[k] = sc;
    if ((sc = xmb  + bp[k])         > mc[k])  mc[k] = sc;
    mc[k] += ms[k];
  }
}

It is very profitable for this benchmark to turn the conditional stores to mc[k]
into a conditional move (select instr in IR) and allow the final store to do the
store. See GCC PR27313 for more details. Note that this is valid to xform even
with the new C++ memory model, since mc[k] is previously loaded and later
stored.
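
A source-level sketch of the rewrite (hedged; it assumes the loop body above):
accumulate the max in a local so each comparison becomes a select, leaving one
unconditional store per iteration:

for (k = 1; k <= M; k++) {
  int v = mpp[k-1] + tpmm[k-1];
  int sc;
  if ((sc = ip[k-1]  + tpim[k-1]) > v) v = sc;  /* select, no store */
  if ((sc = dpp[k-1] + tpdm[k-1]) > v) v = sc;
  if ((sc = xmb + bp[k]) > v) v = sc;
  mc[k] = v + ms[k];                            /* single final store */
}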

//===---------------------------------------------------------------------===//

There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.

//===---------------------------------------------------------------------===//

There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite. For example, we get the first example in predcom-1.c, but
miss the second one:

unsigned fib[1000];
unsigned avg[1000];

__attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}

which compiles into two loads instead of one in the loop.

predcom-2.c is the same as predcom-1.c

predcom-3.c is very similar but needs loads feeding each other instead of
store->load.

//===---------------------------------------------------------------------===//

Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

We should do better analysis of posix_memalign. At the least it should
no-capture its pointer argument, and at best we should know that the out-value
result doesn't point to anything (like malloc). One example of this is in
SingleSource/Benchmarks/Misc/dt.c

//===---------------------------------------------------------------------===//

A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store. This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

struct X { int i; };
int foo (int x) {
  struct X a;
  struct X b;
  struct X *p;
  a.i = 1;
  b.i = 2;
  if (x)
    p = &a;
  else
    p = &b;
  return p->i;
}

//===---------------------------------------------------------------------===//

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
      opt -mem2reg -gvn -instcombine | llvm-dis
we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, loading from null is undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//

simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character. See PR3253 for some notes on
codegen.

456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.

//===---------------------------------------------------------------------===//

"gas" uses this idiom:
else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
...
else if (strchr ("<>", *intel_parser.op_string))
...

Those should be turned into a switch.
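
The switch form this is after (hedged sketch):

int is_intel_op_char(char c) {
  switch (c) {
  case '+': case '-': case '/': case '*': case '%': case '|': case '&':
  case '^': case ':': case '[': case ']': case '(': case ')': case '~':
    return 1;
  default:
    return 0;
  }
}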

//===---------------------------------------------------------------------===//

252.eon contains this interesting code:

        %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
        %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
        call void @llvm.memcpy.i32(i8* %endptr,
          i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
        %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple reasons. First, in this:

        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)

The strlen could be replaced with: %strlen = sub %3072, %3073, because the
strcpy call returns a pointer to the end of the string. Based on that, the
endptr GEP just becomes equal to 3073, which eliminates a strlen call and GEP.

Second, the memcpy+strlen strlen can be replaced with:

        %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

Because the destination was just copied into the specified memory buffer. This,
in turn, can be constant folded to "4".

In other code, it contains:

        %endptr6978 = bitcast i8* %endptr69 to i32*
        store i32 7107374, i32* %endptr6978, align 1
        %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

Which could also be constant folded. Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:            ; preds = %_ZN18eonImageCalculatorC1Ev.exit
        %682 = getelementptr i8** %argv, i32 6          ; <i8**> [#uses=2]
        %683 = load i8** %682, align 4          ; <i8*> [#uses=4]
        %684 = load i8* %683, align 1           ; <i8> [#uses=1]
        %685 = icmp eq i8 %684, 0               ; <i1> [#uses=1]
        br i1 %685, label %bb10, label %bb9

bb9:            ; preds = %bb8
        %686 = call i32 @strlen(i8* %683) nounwind readonly
        %687 = icmp ugt i32 %686, 254           ; <i1> [#uses=1]
        br i1 %687, label %bb10, label %bb11

bb10:           ; preds = %bb9, %bb8
        %688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//

I see an interesting fully redundant call to strlen left in 186.crafty:InputMove

        %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0
        ...

bb62:           ; preds = %bb55, %bb53
        %promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
        %171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
        %172 = add i32 %171, -1         ; <i32> [#uses=1]
        %173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
        ...
        br i1 %or.cond, label %bb65, label %bb72

bb65:           ; preds = %bb62
        store i8 0, i8* %173, align 1
        ...

bb72:           ; preds = %bb65, %bb62
        %trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
        %177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call. At worst, we could shove the %177 strlen call
up into the bb65 block moving it out of the bb62->bb72 path. However, note
that bb65 stores to the string, zeroing out the last byte. This means that on
that path the value of %177 is actually just %171-1. A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//

186.crafty also contains this code:

%1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
%1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
%1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
%1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
%1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909

The last strlen is computable as %1908-@pgn_event, which means %1910=%1908.

//===---------------------------------------------------------------------===//

186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
       i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(..., globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out". Once all the printfs
stop using "out", all that is left is the memcpy's into it. This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//

This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift.
For example, on x86 this currently gets this:

        movb    (%eax), %al
        sarb    $5, %al
        movsbl  %al, %eax

while it could get this:

        movsbl  (%eax), %eax
        sarl    $5, %eax

//===---------------------------------------------------------------------===//

int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants, what is the rule for even?

//===---------------------------------------------------------------------===//

PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into { {{}}, float } could be turned into a store
to the float directly.

//===---------------------------------------------------------------------===//

The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.

//===---------------------------------------------------------------------===//

The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//

int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

Generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128                            ; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
  %2 = or i32 %b, 128                             ; <i32> [#uses=1]
  %3 = and i32 %b, -129                           ; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2        ; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

         b = (b & ~0x80) | (a & 0x80);

Which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129                           ; <i32> [#uses=1]
  %1 = and i32 %a, 128                            ; <i32> [#uses=1]
  %2 = or i32 %0, %1                              ; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

     b = (b & ~0x80) | (a & 0x40) << 1;

//===---------------------------------------------------------------------===//

These two functions produce different code. They shouldn't:

#include <stdint.h>

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return b;
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return b;
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %1 = and i8 %a, -64                             ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %.masked = and i8 %a, 64                        ; <i8> [#uses=1]
  %1 = and i8 %a, -128                            ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  %3 = or i8 %2, %.masked                         ; <i8> [#uses=1]
  ret i8 %3
}

//===---------------------------------------------------------------------===//

IPSCCP does not currently propagate argument dependent constants through
functions where it does not see all of the callers. This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  %1 = mul i32 %0, %x
  %2 = mul i32 %y, %z
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant. Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.

//===---------------------------------------------------------------------===//

The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constantfolding logic. This would allow IPSCCP to be able to
handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }

//===---------------------------------------------------------------------===//

InstCombine should use SimplifyDemandedBits to remove the or instruction:

define i1 @test(i8 %x, i8 %y) {
  %A = or i8 %x, 1
  %B = icmp ugt i8 %A, 3
  ret i1 %B
}

Currently instcombine calls SimplifyDemandedBits with either all bits or just
the sign bit, if the comparison is obviously a sign test. In this case, we only
need all but the bottom two bits from %A, and if we gave that mask to SDB it
would delete the or instruction for us.

//===---------------------------------------------------------------------===//

functionattrs doesn't know much about memcpy/memset. This function should be
marked readnone rather than readonly, since it only twiddles local memory, but
functionattrs doesn't handle memset/memcpy/memmove aggressively:

struct X { int *p; int *q; };
int foo() {
  int i = 0, j = 1;
  struct X x, y;
  int **p;
  y.p = &i;
  x.q = &j;
  p = __builtin_memcpy (&x, &y, sizeof (int *));
  return **p;
}

//===---------------------------------------------------------------------===//

Missed instcombine transformation:
define i1 @a(i32 %x) nounwind readnone {
entry:
  %cmp = icmp eq i32 %x, 30
  %sub = add i32 %x, -30
  %cmp2 = icmp ugt i32 %sub, 9
  %or = or i1 %cmp, %cmp2
  ret i1 %or
}

This should be optimized to a single compare. Testcase derived from gcc.
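
As a worked check: x==30 or (x-30 >u 9) excludes exactly x-30 in [1,9], so it
is a single unsigned compare after a subtract:

int a_combined(unsigned x) {
  return x - 31 >= 9;
}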

//===---------------------------------------------------------------------===//

Missed instcombine or reassociate transformation:
int a(int a, int b) { return (a==12)&(b>47)&(b<58); }

The sgt and slt should be combined into a single comparison. Testcase derived
from gcc.

//===---------------------------------------------------------------------===//

Missed instcombine transformation:

  %382 = srem i32 %tmp14.i, 64                    ; [#uses=1]
  %383 = zext i32 %382 to i64                     ; [#uses=1]
  %384 = shl i64 %381, %383                       ; [#uses=1]
  %385 = icmp slt i32 %tmp14.i, 64                ; [#uses=1]

The srem can be transformed to an and because if %tmp14.i is negative, the
shift is undefined. Testcase derived from 403.gcc.

//===---------------------------------------------------------------------===//

This is a range comparison on a divided result (from 403.gcc):

  %1337 = sdiv i32 %1336, 8                       ; [#uses=1]
  %.off.i208 = add i32 %1336, 7                   ; [#uses=1]
  %1338 = icmp ult i32 %.off.i208, 15             ; [#uses=1]

We already catch this (removing the sdiv) if there isn't an add; we should
handle the 'add' as well. This is a common idiom with its builtin_alloca code.
Here is a C testcase:

int a(int x) { return (unsigned)(x/16+7) < 15; }

Another similar case involves truncations on 64-bit targets:

  %361 = sdiv i64 %.046, 8                        ; [#uses=1]
  %362 = trunc i64 %361 to i32                    ; [#uses=2]
...
  %367 = icmp eq i32 %362, 0                      ; [#uses=1]

//===---------------------------------------------------------------------===//

Missed instcombine/dagcombine transformation:
define void @lshift_lt(i8 zeroext %a) nounwind {
entry:
  %conv = zext i8 %a to i32
  %shl = shl i32 %conv, 3
  %cmp = icmp ult i32 %shl, 33
  br i1 %cmp, label %if.then, label %if.end

if.then:
  tail call void @bar() nounwind
  ret void

if.end:
  ret void
}
declare void @bar() nounwind

The shift should be eliminated. Testcase derived from gcc.

//===---------------------------------------------------------------------===//

These compile into different code, one gets recognized as a switch and the
other doesn't due to phase ordering issues (PR6212):

int test1(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  else if (mainType == 9)
    subType = 6;
  else if (mainType == 11)
    subType = 9;
  return subType;
}

int test2(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  if (mainType == 9)
    subType = 6;
  if (mainType == 11)
    subType = 9;
  return subType;
}

//===---------------------------------------------------------------------===//

The following test case (from PR6576):

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %cond1 = icmp eq i32 %b, 0                      ; <i1> [#uses=1]
  br i1 %cond1, label %exit, label %bb.nph
bb.nph:                                           ; preds = %entry
  %tmp = mul i32 %b, %a                           ; <i32> [#uses=1]
  br label %exit
exit:                                             ; preds = %bb.nph, %entry
  %cond = phi i32 [ %tmp, %bb.nph ], [ 0, %entry ]
  ret i32 %cond
}

could be reduced to:

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %tmp = mul i32 %b, %a
  ret i32 %tmp
}

//===---------------------------------------------------------------------===//

We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.
See GCC PR34949

Another interesting case is that something related could be used for variables
that go const after their ctor has finished. In these cases, globalopt (which
can statically run the constructor) could mark the global const (so it gets put
in the readonly section). A testcase would be:

#include <complex>
using namespace std;
const complex<char> should_be_in_rodata (42,-42);
complex<char> should_be_in_data (42,-42);
complex<char> should_be_in_bss;

Where we currently evaluate the ctors but the globals don't become const because
the optimizer doesn't know they "become const" after the ctor is done. See
GCC PR4131 for more examples.

//===---------------------------------------------------------------------===//

In this code:

int test (int x) {
  return x > 1 ? x : 1;
}

LLVM emits a comparison with 1 instead of 0. 0 would be equivalent
and cheaper on most targets.

LLVM prefers comparisons with zero over non-zero in general, but in this
case it chooses instead to keep the max operation obvious.
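
The equivalent zero-compare form (the two selects agree at x == 1, so the
boundary can shift):

int test0 (int x) {
  return x > 0 ? x : 1;   /* same results, compares against 0 */
}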

//===---------------------------------------------------------------------===//

Take the following testcase on x86-64 (similar testcases exist for all targets
with addc/adde):

define void @a(i64* nocapture %s, i64* nocapture %t, i64 %a, i64 %b,
i64 %c) nounwind {
entry:
  %0 = zext i64 %a to i128                        ; <i128> [#uses=1]
  %1 = zext i64 %b to i128                        ; <i128> [#uses=1]
  %2 = add i128 %1, %0                            ; <i128> [#uses=2]
  %3 = zext i64 %c to i128                        ; <i128> [#uses=1]
  %4 = shl i128 %3, 64                            ; <i128> [#uses=1]
  %5 = add i128 %4, %2                            ; <i128> [#uses=1]
  %6 = lshr i128 %5, 64                           ; <i128> [#uses=1]
  %7 = trunc i128 %6 to i64                       ; <i64> [#uses=1]
  store i64 %7, i64* %s, align 8
  %8 = trunc i128 %2 to i64                       ; <i64> [#uses=1]
  store i64 %8, i64* %t, align 8
  ret void
}

The generated SelectionDAG has an ADD of an ADDE, where both operands of the
ADDE are zero. Replacing one of the operands of the ADDE with the other operand
of the ADD, and replacing the ADD with the ADDE, should give the desired result.

(That said, we are doing a lot better than gcc on this testcase. :) )

//===---------------------------------------------------------------------===//

Switch lowering generates less than ideal code for the following switch:
define void @a(i32 %x) nounwind {
entry:
  switch i32 %x, label %if.end [
    i32 0, label %if.then
    i32 1, label %if.then
    i32 2, label %if.then
    i32 3, label %if.then
    i32 5, label %if.then
  ]
if.then:
  tail call void @foo() nounwind
  ret void
if.end:
  ret void
}
declare void @foo()

Generated code on x86-64 (other platforms give similar results):
a:
        cmpl    $5, %edi
        ja      .LBB0_2
        movl    %edi, %eax
        movl    $47, %ecx
        btq     %rax, %rcx
        jb      .LBB0_3
.LBB0_2:
        ret
.LBB0_3:
        jmp     foo  # TAILCALL

The movl+movl+btq+jb could be simplified to a cmpl+jne.

Or, if we wanted to be really clever, we could simplify the whole thing to
something like the following, which eliminates a branch:

        xorl    $1, %edi
        cmpl    $4, %edi
        ja      .LBB0_2
        ret
.LBB0_2:
        jmp     foo  # TAILCALL

//===---------------------------------------------------------------------===//
Given a branch where the two target blocks are identical ("ret i32 %b" in
both), simplifycfg will simplify them away. But not so for a switch statement:

define i32 @f(i32 %a, i32 %b) nounwind readnone {
entry:
  switch i32 %a, label %bb3 [
    i32 4, label %bb
    i32 6, label %bb
  ]

bb:             ; preds = %entry, %entry
  ret i32 %b

bb3:            ; preds = %entry
  ret i32 %b
}

//===---------------------------------------------------------------------===//

clang -O3 fails to devirtualize this virtual inheritance case: (GCC PR45875)
Looks related to PR3100

struct c1 {};
struct c10 : c1{
  virtual void foo ();
};
struct c11 : c10, c1{
  virtual void f6 ();
};
struct c28 : virtual c11{
  void f6 ();
};
void check_c28 () {
  c28 obj;
  c11 *ptr = &obj;
  ptr->f6 ();
}

//===---------------------------------------------------------------------===//

We compile this:

int foo(int a) { return (a & (~15)) / 16; }

Into:

define i32 @foo(i32 %a) nounwind readnone ssp {
entry:
  %and = and i32 %a, -16
  %div = sdiv i32 %and, 16
  ret i32 %div
}

but this code (X & -A)/A is X >> log2(A) when A is a power of 2, so this case
should be instcombined into just "a >> 4".

We do get this at the codegen level, so something knows about it, but
instcombine should catch it earlier:

_foo:                                   ## @foo
        movl    %edi, %eax
        sarl    $4, %eax
        ret

//===---------------------------------------------------------------------===//

This code (from GCC PR28685):

int test(int a, int b) {
  int lt = a < b;
  int eq = a == b;
  if (lt)
    return 1;
  return eq;
}

Is compiled to:

define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %cmp = icmp slt i32 %a, %b
  br i1 %cmp, label %return, label %if.end

if.end:                                           ; preds = %entry
  %cmp5 = icmp eq i32 %a, %b
  %conv6 = zext i1 %cmp5 to i32
  ret i32 %conv6

return:                                           ; preds = %entry
  ret i32 1
}

it could be:

define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = icmp sle i32 %a, %b
  %retval = zext i1 %0 to i32
  ret i32 %retval
}

//===---------------------------------------------------------------------===//