1 Target Independent Opportunities:
3 //===---------------------------------------------------------------------===//
We should recognize idioms for add-with-carry and turn them into the
appropriate intrinsics. This example:
unsigned add32carry(unsigned sum, unsigned x) {
  unsigned z = sum + x;
  if (sum + x < x)
    z++;
  return z;
}
15 Compiles to: clang t.c -S -o - -O3 -fomit-frame-pointer -m64 -mkernel
17 _add32carry: ## @add32carry
28 leal (%rsi,%rdi), %eax
35 //===---------------------------------------------------------------------===//
37 Dead argument elimination should be enhanced to handle cases when an argument is
38 dead to an externally visible function. Though the argument can't be removed
39 from the externally visible function, the caller doesn't need to pass it in.
40 For example in this testcase:
42 void foo(int X) __attribute__((noinline));
43 void foo(int X) { sideeffect(); }
44 void bar(int A) { foo(A+1); }
48 define void @bar(i32 %A) nounwind ssp {
49 %0 = add nsw i32 %A, 1 ; <i32> [#uses=1]
tail call void @foo(i32 %0) nounwind noinline ssp
ret void
}
The add is dead; we could pass in 'i32 undef' instead. This occurs for C++
templates etc, which usually have linkonce_odr/weak_odr linkage, not internal
linkage.
58 //===---------------------------------------------------------------------===//
60 With the recent changes to make the implicit def/use set explicit in
61 machineinstrs, we should change the target descriptions for 'call' instructions
62 so that the .td files don't list all the call-clobbered registers as implicit
63 defs. Instead, these should be added by the code generator (e.g. on the dag).
65 This has a number of uses:
67 1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
68 for their different impdef sets.
69 2. Targets with multiple calling convs (e.g. x86) which have different clobber
70 sets don't need copies of call instructions.
71 3. 'Interprocedural register allocation' can be done to reduce the clobber sets
74 //===---------------------------------------------------------------------===//
We should recognize various "overflow detection" idioms and translate them into
llvm.uadd.with.overflow and similar intrinsics. For example, we compile this:
size_t add(size_t a,size_t b) {
  if (a+b<a)
    exit(0);
  return a+b;
}
91 when it would be better to generate:
96 Apparently some version of GCC knows this. Here is a multiply idiom:
unsigned int mul(unsigned int a,unsigned int b) {
  if ((unsigned long long)a*b>0xffffffff)
    exit(0);
  return a*b;
}
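
For reference, what the checked form looks like at the source level, using the
__builtin_add_overflow / __builtin_mul_overflow builtins that clang and GCC
later grew (an illustration of the target pattern, not the optimizer's actual
output):

#include <stdlib.h>

size_t add_checked(size_t a, size_t b) {
  size_t r;
  if (__builtin_add_overflow(a, b, &r))  /* becomes llvm.uadd.with.overflow */
    exit(0);
  return r;
}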
104 //===---------------------------------------------------------------------===//
Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :) This isn't
safe in general, even on darwin. See the libm implementation of hypot for
examples (which special case when x/y are exactly zero to get signed zeros etc
right).
112 //===---------------------------------------------------------------------===//
Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

compiles to

fn1:
	movl Y, %eax
	shll $3, %eax
	orl X, %eax
	movl %eax, X
	ret
132 The problem is the store's chain operand is not the load X but rather
133 a TokenFactor of the load X and load Y, which prevents the folding.
135 There are two ways to fix this:
137 1. The dag combiner can start using alias analysis to realize that y/x
138 don't alias, making the store to X not dependent on the load from Y.
139 2. The generated isel could be made smarter in the case it can't
140 disambiguate the pointers.
142 Number 1 is the preferred solution.
144 This has been "fixed" by a TableGen hack. But that is a short term workaround
145 which will be removed once the proper fix is made.
147 //===---------------------------------------------------------------------===//
On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.
161 //===---------------------------------------------------------------------===//
163 Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
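
In C terms, the win is that a signed "is negative" test only needs the sign
bit, so the 4-byte load can shrink to a 1-byte load of the byte that holds the
sign (a sketch assuming a little-endian target, where that is byte 3):

int is_negative(const int *P) {
  return *P < 0;                      /* today: full 32-bit load */
}
int is_negative_shrunk(const unsigned char *P) {
  return (signed char)P[3] < 0;       /* shrunk: one-byte load */
}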
165 //===---------------------------------------------------------------------===//
167 Reassociate should turn things like:
int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}
173 into llvm.powi calls, allowing the code generator to produce balanced
174 multiplication trees.
176 First, the intrinsic needs to be extended to support integers, and second the
177 code generator needs to be enhanced to lower these to multiplication trees.
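
For reference, a balanced tree computes X**8 in three multiplies instead of
the seven a linear chain needs (pow8 is a hypothetical name):

int pow8(int X) {
  int X2 = X * X;      /* X^2 */
  int X4 = X2 * X2;    /* X^4 */
  return X4 * X4;      /* X^8: 3 multiplies total */
}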
179 //===---------------------------------------------------------------------===//
181 Interesting? testcase for add/shift/mul reassoc:
int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}
190 This is blocked on not handling X*X*X -> powi(X, 3) (see note above). The issue
191 is that we end up getting t = 2*X s = t*t and don't turn this into 4*X*X,
192 which is the same number of multiplies and is canonical, because the 2*X has
193 multiple uses. Here's a simple example:
define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47   ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}
202 //===---------------------------------------------------------------------===//
204 Reassociate should handle the example in GCC PR16157:
206 extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
207 void f () { /* this can be optimized to four additions... */
  b4 = a4 + a3 + a2 + a1 + a0;
  b3 = a3 + a2 + a1 + a0;
  b2 = a2 + a1 + a0;
  b1 = a1 + a0;
}
214 This requires reassociating to forms of expressions that are already available,
215 something that reassoc doesn't think about yet.
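
The four-addition form reuses each partial sum as it becomes available (a
sketch; f_opt is a hypothetical name):

void f_opt () {
  b1 = a1 + a0;
  b2 = a2 + b1;
  b3 = a3 + b2;
  b4 = a4 + b3;
}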
218 //===---------------------------------------------------------------------===//
220 This function: (derived from GCC PR19988)
221 double foo(double x, double y) {
return ((x + 0.1234 * y) * (x + -0.1234 * y));
}

compiles to:
228 mulsd LCPI1_1(%rip), %xmm1
229 mulsd LCPI1_0(%rip), %xmm2
236 Reassociate should be able to turn it into:
238 double foo(double x, double y) {
return ((x + 0.1234 * y) * (x - 0.1234 * y));
}
242 Which allows the multiply by constant to be CSE'd, producing:
245 mulsd LCPI1_0(%rip), %xmm1
This doesn't need -ffast-math support at all. This is particularly bad because
the llvm-gcc frontend canonicalizes the latter into the former, but clang
doesn't have this problem.
256 //===---------------------------------------------------------------------===//
258 These two functions should generate the same code on big-endian systems:
260 int g(int *j,int *l) { return memcmp(j,l,4); }
261 int h(int *j, int *l) { return *j - *l; }
this could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.
266 //===---------------------------------------------------------------------===//
268 It would be nice to revert this patch:
269 http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html
271 And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
cases too.
275 //===---------------------------------------------------------------------===//
For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative, as the alignment of
specific vector types is target dependent.
281 //===---------------------------------------------------------------------===//
283 We should produce an unaligned load from code like this:
typedef float v4sf __attribute__ ((vector_size (16)));

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3] };
}
289 //===---------------------------------------------------------------------===//
291 Add support for conditional increments, and other related patterns. Instead
296 je LBB16_2 #cond_next
307 //===---------------------------------------------------------------------===//
309 Combine: a = sin(x), b = cos(x) into a,b = sincos(x).
311 Expand these to calls of sin/cos and stores:
312 double sincos(double x, double *sin, double *cos);
313 float sincosf(float x, float *sin, float *cos);
314 long double sincosl(long double x, long double *sin, long double *cos);
316 Doing so could allow SROA of the destination pointers. See also:
317 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
319 This is now easily doable with MRVs. We could even make an intrinsic for this
320 if anyone cared enough about sincos.
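
A sketch of the expansion direction, using the GNU sincos (names per the
prototypes above):

#define _GNU_SOURCE            /* sincos is a GNU extension */
#include <math.h>

void before(double x, double *s, double *c) {
  sincos(x, s, c);             /* address-taken outputs can block SROA */
}
void after(double x, double *s, double *c) {
  *s = sin(x);                 /* separate calls: results flow through */
  *c = cos(x);                 /* SSA values once SROA cleans up */
}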
322 //===---------------------------------------------------------------------===//
324 quantum_sigma_x in 462.libquantum contains the following loop:
326 for(i=0; i<reg->size; i++)
328 /* Flip the target bit of each basis state */
329 reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
332 Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
333 so cool to turn it into something like:
335 long long Res = ((MAX_UNSIGNED) 1 << target);
337 for(i=0; i<reg->size; i++)
338 reg->node[i].state ^= Res & 0xFFFFFFFFULL;
340 for(i=0; i<reg->size; i++)
reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
344 ... which would only do one 32-bit XOR per loop iteration instead of two.
It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but this requires TBAA.
349 //===---------------------------------------------------------------------===//
351 This isn't recognized as bswap by instcombine (yes, it really is bswap):
unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}
Neither is this (very standard idiom):

unsigned int bswap(unsigned int n) {
  return (((n) << 24) | (((n) & 0xff00) << 8)
       | (((n) >> 8) & 0xff00) | ((n) >> 24));
}
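
Both of these should canonicalize to the target-independent byte-swap
intrinsic; in source terms, the equivalent of:

unsigned swap32(unsigned n) {
  return __builtin_bswap32(n);   /* lowers to llvm.bswap.i32 */
}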
369 //===---------------------------------------------------------------------===//
373 These idioms should be recognized as popcount (see PR1488):
unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}
BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c=0;
  while(a) {
    c++;
    a &= a - 1;
  }
  return c;
}
unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}
404 This is a form of idiom recognition for loops, the same thing that could be
405 useful for recognizing memset/memcpy.
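
All of the above are, in source terms, just:

unsigned popcnt(unsigned v) {
  return __builtin_popcount(v);  /* lowers to llvm.ctpop.i32 */
}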
407 //===---------------------------------------------------------------------===//
These should turn into single 16-bit (unaligned?) loads on little/big endian
processors.
unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}
419 //===---------------------------------------------------------------------===//
421 -instcombine should handle this transform:
422 icmp pred (sdiv X / C1 ), C2
423 when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.
425 Currently InstCombine avoids this transform but will do it when the signs of
426 the operands and the sign of the divide match. See the FIXME in
427 InstructionCombining.cpp in the visitSetCondInst method after the switch case
428 for Instruction::UDiv (around line 4447) for more details.
The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.
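
A concrete instance of the mixed-sign case (an sdiv result tested with an
unsigned compare; the fold in the comment is an illustration):

int in_range(int X) {
  /* (unsigned)(X / 10) < 5 holds iff -9 <= X <= 49, i.e. it could fold
     to the single range check (unsigned)(X + 9) < 59 */
  return (unsigned)(X / 10) < 5;
}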
433 //===---------------------------------------------------------------------===//
437 viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass.
441 //===---------------------------------------------------------------------===//
445 SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
446 opportunities in its double_array_divs_variable function: it needs loop
interchange, memory promotion (which LICM already does), vectorization, and
constant trip count loop unrolling. ICC
449 apparently produces this very nice code with -ffast-math:
451 ..B1.70: # Preds ..B1.70 ..B1.69
452 mulpd %xmm0, %xmm1 #108.2
453 mulpd %xmm0, %xmm1 #108.2
454 mulpd %xmm0, %xmm1 #108.2
455 mulpd %xmm0, %xmm1 #108.2
457 cmpl $131072, %edx #108.2
458 jb ..B1.70 # Prob 99% #108.2
It would be better to count down to zero, but this is a lot better than what we
do.
463 //===---------------------------------------------------------------------===//
467 typedef unsigned U32;
468 typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}
484 Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
485 we don't eliminate the computation of the top half of effective_addr2 because
486 we don't have whole-function selection dags. On x86, this means we use one
487 extra register for the function when effective_addr2 is declared as U64 than
488 when it is declared U32.
490 PHI Slicing could be extended to do this.
492 //===---------------------------------------------------------------------===//
494 LSR should know what GPR types a target has from TargetData. This code:
volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}
503 produces two near identical IV's (after promotion) on PPC/ARM:
513 add r2, r2, #1 <- [0,+,1]
514 sub r0, r0, #1 <- [0,-,1]
518 LSR should reuse the "+" IV for the exit test.
520 //===---------------------------------------------------------------------===//
522 Tail call elim should be more aggressive, checking to see if the call is
523 followed by an uncond branch to an exit block.
525 ; This testcase is due to tail-duplication not wanting to copy the return
526 ; instruction into the terminating blocks because there was other code
527 ; optimized out of the function after the taildup happened.
528 ; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call
define i32 @t4(i32 %a) {
entry:
532 %tmp.1 = and i32 %a, 1 ; <i32> [#uses=1]
533 %tmp.2 = icmp ne i32 %tmp.1, 0 ; <i1> [#uses=1]
534 br i1 %tmp.2, label %then.0, label %else.0
536 then.0: ; preds = %entry
537 %tmp.5 = add i32 %a, -1 ; <i32> [#uses=1]
%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
br label %return
541 else.0: ; preds = %entry
542 %tmp.7 = icmp ne i32 %a, 0 ; <i1> [#uses=1]
543 br i1 %tmp.7, label %then.1, label %return
545 then.1: ; preds = %else.0
546 %tmp.11 = add i32 %a, -2 ; <i32> [#uses=1]
%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
br label %return
550 return: ; preds = %then.1, %else.0, %then.0
%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                    [ %tmp.9, %then.1 ]
ret i32 %result.0
}
556 //===---------------------------------------------------------------------===//
Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}
566 Also, multiplies can be turned into SHL's, so they should be handled as if
567 they were associative. "return foo() << 1" can be tail recursion eliminated.
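
A sketch of the accumulator form the transform should reach, treating the
"*2 + 1" update as associative and applying it iteratively (pow2m1_iter is a
hypothetical name):

int pow2m1_iter(int n) {
  int acc = 0;
  while (n-- > 0)
    acc = (acc << 1) + 1;   /* the multiply by 2 is just a shift */
  return acc;               /* == 2**n - 1 */
}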
569 //===---------------------------------------------------------------------===//
Argument promotion should promote arguments for recursive functions, like
this:
574 ; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val
define internal i32 @foo(i32* %x) {
entry:
  %tmp = load i32* %x		; <i32> [#uses=0]
  %tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
  ret i32 %tmp.foo
}
define i32 @bar(i32* %x) {
entry:
  %tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
  ret i32 %tmp3
}
589 //===---------------------------------------------------------------------===//
591 We should investigate an instruction sinking pass. Consider this silly
607 je LBB1_2 # cond_true
615 The PIC base computation (call+popl) is only used on one path through the
616 code, but is currently always computed in the entry block. It would be
617 better to sink the picbase computation down into the block for the
618 assertion, as it is the only one that uses it. This happens for a lot of
619 code with early outs.
621 Another example is loads of arguments, which are usually emitted into the
622 entry block on targets like x86. If not used in all paths through a
623 function, they should be sunk into the ones that do.
625 In this case, whole-function-isel would also handle this.
627 //===---------------------------------------------------------------------===//
629 Investigate lowering of sparse switch statements into perfect hash tables:
630 http://burtleburtle.net/bob/hash/perfect.html
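
A sketch of the idea: choose a hash that happens to be collision-free on the
live case values, then replace the compare tree with a single table probe (the
keys, values, and hash below are made up for illustration):

int dispatch(unsigned x) {
  /* cases 10, 100, 1000, 10000: x % 7 yields 3, 2, 6, 4 with no
     collisions, so it acts as a perfect hash into a 7-entry table;
     the filler key 1 can never match wrongly, since 1 itself hashes
     to slot 1, whose value is the default */
  static const unsigned keys[7]   = { 1, 1, 100, 10, 10000, 1, 1000 };
  static const int      values[7] = { -1, -1, 2, 1, 4, -1, 3 };
  unsigned i = x % 7;
  return keys[i] == x ? values[i] : -1;   /* -1 == default case */
}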
632 //===---------------------------------------------------------------------===//
634 We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void bar() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}
655 and I suspect other processors are similar. On X86 in particular this is a
656 big win because doing this with integers allows the use of read/modify/write
659 //===---------------------------------------------------------------------===//
661 DAG Combiner should try to combine small loads into larger loads when
662 profitable. For example, we compile this C++ example:
664 struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
665 extern THotKey m_HotKey;
666 THotKey GetHotKey () { return m_HotKey; }
668 into (-O3 -fno-exceptions -static -fomit-frame-pointer):
673 movb _m_HotKey+3, %cl
674 movb _m_HotKey+4, %dl
675 movb _m_HotKey+2, %ch
690 movzwl _m_HotKey+4, %edx
694 The LLVM IR contains the needed alignment info, so we should be able to
695 merge the loads and stores into 4-byte loads:
697 %struct.THotKey = type { i16, i8, i8, i8 }
698 define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
700 %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
701 %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
702 %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
703 %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2
705 Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.
709 //===---------------------------------------------------------------------===//
711 We should add an FRINT node to the DAG to model targets that have legal
712 implementations of ceil/floor/rint.
714 //===---------------------------------------------------------------------===//
Consider:

void test() {
  long long input[8] = {1,1,1,1,1,1,1,1};
  foo(input);
}
723 We currently compile this into a memcpy from a global array since the
724 initializer is fairly large and not memset'able. This is good, but the memcpy
725 gets lowered to load/stores in the code generator. This is also ok, except
726 that the codegen lowering for memcpy doesn't handle the case when the source
727 is a constant global. This gives us atrocious code like this:
732 movl _C.0.1444-"L1$pb"+32(%eax), %ecx
734 movl _C.0.1444-"L1$pb"+20(%eax), %ecx
736 movl _C.0.1444-"L1$pb"+36(%eax), %ecx
738 movl _C.0.1444-"L1$pb"+44(%eax), %ecx
740 movl _C.0.1444-"L1$pb"+40(%eax), %ecx
742 movl _C.0.1444-"L1$pb"+12(%eax), %ecx
744 movl _C.0.1444-"L1$pb"+4(%eax), %ecx
756 //===---------------------------------------------------------------------===//
758 http://llvm.org/PR717:
760 The following code should compile into "ret int undef". Instead, LLVM
761 produces "ret int 0":
770 //===---------------------------------------------------------------------===//
772 The loop unroller should partially unroll loops (instead of peeling them)
773 when code growth isn't too bad and when an unroll count allows simplification
774 of some code within the loop. One trivial example is:
for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
  if ( nLoop & 1 )
    nRet += 2;
  else
    nRet -= 1;
}
789 Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
790 reduction in code size. The resultant code would then also be suitable for
791 exit value computation.
793 //===---------------------------------------------------------------------===//
795 We miss a bunch of rotate opportunities on various targets, including ppc, x86,
796 etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
797 matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:
unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
806 unsigned long long f3(unsigned long long x){
808 return (x << y) | (x >> 64-y);
810 unsigned f4(unsigned x){
812 return (x << y) | (x >> 32-y);
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}
832 On X86-64, we only handle f2/f3/f4 right. On x86-32, a few of these
833 generate truly horrible code, instead of using shld and friends. On
834 ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
835 badness. PPC64 misses f, f5 and f6. CellSPU aborts in isel.
837 //===---------------------------------------------------------------------===//
839 This (and similar related idioms):
unsigned int foo(unsigned char i) {
  return i | (i<<8) | (i<<16) | (i<<24);
}

compiles into:
847 define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
849 %conv = zext i8 %i to i32
850 %shl = shl i32 %conv, 8
851 %shl5 = shl i32 %conv, 16
852 %shl9 = shl i32 %conv, 24
853 %or = or i32 %shl9, %conv
854 %or6 = or i32 %or, %shl5
%or10 = or i32 %or6, %shl
ret i32 %or10
}
859 it would be better as:
unsigned int bar(unsigned char i) {
  unsigned int j=i | (i << 8);
  return j | (j<<16);
}

compiles into:
868 define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
870 %conv = zext i8 %i to i32
871 %shl = shl i32 %conv, 8
872 %or = or i32 %shl, %conv
873 %shl5 = shl i32 %or, 16
%or6 = or i32 %shl5, %or
ret i32 %or6
}
878 or even i*0x01010101, depending on the speed of the multiplier. The best way to
879 handle this is to canonicalize it to a multiply in IR and have codegen handle
880 lowering multiplies to shifts on cpus where shifts are faster.
882 //===---------------------------------------------------------------------===//
884 We do a number of simplifications in simplify libcalls to strength reduce
885 standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)+1) -> strcpy(a,b). This can
only be done safely if "b" isn't modified between the strlen and memcpy of
course.
889 //===---------------------------------------------------------------------===//
891 We compile this program: (from GCC PR11680)
892 http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487
894 Into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):
897 $ llvm-g++ perf.cpp -O3 -fno-exceptions
899 1.821u 0.003s 0:01.82 100.0% 0+0k 0+0io 0pf+0w
901 $ g++ perf.cpp -O3 -fno-exceptions
903 0.821u 0.001s 0:00.82 100.0% 0+0k 0+0io 0pf+0w
905 It looks like we are making the same inlining decisions, so this may be raw
906 codegen badness or something else (haven't investigated).
908 //===---------------------------------------------------------------------===//
910 We miss some instcombines for stuff like this:
void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2).  */
  if ((a >> 2) >= 3)
    bar ();
}
918 A few other related ones are in GCC PR14753.
920 //===---------------------------------------------------------------------===//
922 Divisibility by constant can be simplified (according to GCC PR12849) from
923 being a mulhi to being a mul lo (cheaper). Testcase:
void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}
930 This is equivalent to the following, where 2863311531 is the multiplicative
931 inverse of 3, and 1431655766 is ((2^32)-1)/3+1:
void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}
937 The same transformation can work with an even modulo with the addition of a
938 rotate: rotate the result of the multiply to the right by the number of bits
939 which need to be zero for the condition to be true, and shrink the compare RHS
940 by the same amount. Unless the target supports rotates, though, that
941 transformation probably isn't worthwhile.
The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0"
(with care at n == 0, where the unsigned wraparound of n-1 changes the
residue).
946 //===---------------------------------------------------------------------===//
948 Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
949 bunch of other stuff from this example (see PR1604):
959 std::scanf("%d", &t.val);
960 std::printf("%d\n", t.val);
963 //===---------------------------------------------------------------------===//
965 These functions perform the same computation, but produce different assembly.
967 define i8 @select(i8 %x) readnone nounwind {
968 %A = icmp ult i8 %x, 250
%B = select i1 %A, i8 0, i8 1
ret i8 %B
}
973 define i8 @addshr(i8 %x) readnone nounwind {
974 %A = zext i8 %x to i9
%B = add i9 %A, 6	;; 256 - 250 == 6
%C = lshr i9 %B, 8
%D = trunc i9 %C to i8
ret i8 %D
}
981 //===---------------------------------------------------------------------===//
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}
994 Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
995 "clang -emit-llvm-bc | opt -std-compile-opts".
997 //===---------------------------------------------------------------------===//
1000 #define PMD_MASK (~((1UL << 23) - 1))
1001 void clear_pmd_range(unsigned long start, unsigned long end)
1003 if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
The expression should optimize to something like
"!((start|end)&~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
1010 //===---------------------------------------------------------------------===//
unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return
i;}
1014 unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
1015 These should combine to the same thing. Currently, the first function
1016 produces better code on X86.
1018 //===---------------------------------------------------------------------===//
#define abs(x) x>0?x:-x

int f(int x) {
  return (abs(x)) >= 0;
}
This should optimize to x != INT_MIN. (With -fwrapv.) Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
1029 //===---------------------------------------------------------------------===//
unsigned int
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}
unsigned int
minus_cst (unsigned int a)
{
  unsigned int tem;
  tem = 20 - a;
  if (tem == 5)
    bar ();
}
unsigned int
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15.  */
  if ((a & ~7) > 8)
    bar ();
}
unsigned int
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23.  */
  if ((a >> 2) > 5)
    bar ();
}

All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".
1066 //===---------------------------------------------------------------------===//
1069 int c(int* x) {return (char*)x+2 == (char*)x;}
1070 Should combine to 0. Currently not optimized with "clang
1071 -emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).
1073 //===---------------------------------------------------------------------===//
1075 int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
1076 Should be combined to "((b >> 1) | b) & 1". Currently not optimized
1077 with "clang -emit-llvm-bc | opt -std-compile-opts".
1079 //===---------------------------------------------------------------------===//
1081 unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
1082 Should combine to "x | (y & 3)". Currently not optimized with "clang
1083 -emit-llvm-bc | opt -std-compile-opts".
1085 //===---------------------------------------------------------------------===//
1087 int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
1088 Should fold to "(~a & c) | (a & b)". Currently not optimized with
1089 "clang -emit-llvm-bc | opt -std-compile-opts".
1091 //===---------------------------------------------------------------------===//
1093 int a(int a,int b) {return (~(a|b))|a;}
1094 Should fold to "a|~b". Currently not optimized with "clang
1095 -emit-llvm-bc | opt -std-compile-opts".
1097 //===---------------------------------------------------------------------===//
1099 int a(int a, int b) {return (a&&b) || (a&&!b);}
1100 Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
1101 | opt -std-compile-opts".
1103 //===---------------------------------------------------------------------===//
1105 int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
1106 Should fold to "a ? b : c", or at least something sane. Currently not
1107 optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
1109 //===---------------------------------------------------------------------===//
1111 int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
1112 Should fold to a && (b || c). Currently not optimized with "clang
1113 -emit-llvm-bc | opt -std-compile-opts".
1115 //===---------------------------------------------------------------------===//
1117 int a(int x) {return x | ((x & 8) ^ 8);}
1118 Should combine to x | 8. Currently not optimized with "clang
1119 -emit-llvm-bc | opt -std-compile-opts".
1121 //===---------------------------------------------------------------------===//
1123 int a(int x) {return x ^ ((x & 8) ^ 8);}
1124 Should also combine to x | 8. Currently not optimized with "clang
1125 -emit-llvm-bc | opt -std-compile-opts".
1127 //===---------------------------------------------------------------------===//
1129 int a(int x) {return ((x | -9) ^ 8) & x;}
1130 Should combine to x & -9. Currently not optimized with "clang
1131 -emit-llvm-bc | opt -std-compile-opts".
1133 //===---------------------------------------------------------------------===//
1135 unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
1136 Should combine to "a * 0x88888888 >> 31". Currently not optimized
1137 with "clang -emit-llvm-bc | opt -std-compile-opts".
1139 //===---------------------------------------------------------------------===//
1141 unsigned a(char* x) {if ((*x & 32) == 0) return b();}
1142 There's an unnecessary zext in the generated code with "clang
1143 -emit-llvm-bc | opt -std-compile-opts".
1145 //===---------------------------------------------------------------------===//
1147 unsigned a(unsigned long long x) {return 40 * (x >> 1);}
1148 Should combine to "20 * (((unsigned)x) & -2)". Currently not
1149 optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
1151 //===---------------------------------------------------------------------===//
1153 This was noticed in the entryblock for grokdeclarator in 403.gcc:
1155 %tmp = icmp eq i32 %decl_context, 4
1156 %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
1157 %tmp1 = icmp eq i32 %decl_context_addr.0, 1
1158 %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0
1160 tmp1 should be simplified to something like:
(!tmp && decl_context == 1)

This allows recursive simplifications; tmp1 is used all over the place in
1164 the function, e.g. by:
1166 %tmp23 = icmp eq i32 %decl_context_addr.1, 0 ; <i1> [#uses=1]
1167 %tmp24 = xor i1 %tmp1, true ; <i1> [#uses=1]
1168 %or.cond8 = and i1 %tmp23, %tmp24 ; <i1> [#uses=1]
1172 //===---------------------------------------------------------------------===//
1176 Store sinking: This code:
void f (int n, int *cond, int *res) {
  int i;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}
1186 On this function GVN hoists the fully redundant value of *res, but nothing
1187 moves the store out. This gives us this code:
1189 bb: ; preds = %bb2, %entry
1190 %.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
1191 %i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
1192 %1 = load i32* %cond, align 4
1193 %2 = icmp eq i32 %1, 0
1194 br i1 %2, label %bb2, label %bb1
bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2
1201 bb2: ; preds = %bb, %bb1
1202 %.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
1203 %indvar.next = add i32 %i.05, 1
1204 %exitcond = icmp eq i32 %indvar.next, %n
1205 br i1 %exitcond, label %return, label %bb
1207 DSE should sink partially dead stores to get the store out of the loop.
1209 Here's another partial dead case:
1210 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395
1212 //===---------------------------------------------------------------------===//
1214 Scalar PRE hoists the mul in the common block up to the else:
1216 int test (int a, int b, int c, int g) {
1226 It would be better to do the mul once to reduce codesize above the if.
1227 This is GCC PR38204.
1229 //===---------------------------------------------------------------------===//
1233 GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call paths.
1255 We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
1256 we don't sink the store. We need partially dead store sinking.
1258 //===---------------------------------------------------------------------===//
1260 [LOAD PRE CRIT EDGE SPLITTING]
1262 GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
1263 leading to excess stack traffic. This could be handled by GVN with some crazy
1264 symbolic phi translation. The code we get looks like (g is on the stack):
1268 %9 = getelementptr %struct.f* %g, i32 0, i32 0
store i32 %8, i32* %9, align 4
br label %bb3
1271 bb3: ; preds = %bb1, %bb2, %bb
1272 %c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
1273 %b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
1274 %10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
1275 %11 = load i32* %10, align 4
%11 is partially redundant, and in BB2 it should have the value %8.
1279 GCC PR33344 and PR35287 are similar cases.
1282 //===---------------------------------------------------------------------===//
1286 There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
1287 GCC testsuite, ones we don't get yet are (checked through loadpre25):
1289 [CRIT EDGE BREAKING]
1290 loadpre3.c predcom-4.c
1292 [PRE OF READONLY CALL]
1295 [TURN SELECT INTO BRANCH]
1296 loadpre14.c loadpre15.c
1298 actually a conditional increment: loadpre18.c loadpre19.c
1300 //===---------------------------------------------------------------------===//
1302 [LOAD PRE / STORE SINKING / SPEC HACK]
1304 This is a chunk of code from 456.hmmer:
1306 int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
1307 int *tpdm, int xmb, int *bp, int *ms) {
1309 for (k = 1; k <= M; k++) {
1310 mc[k] = mpp[k-1] + tpmm[k-1];
1311 if ((sc = ip[k-1] + tpim[k-1]) > mc[k]) mc[k] = sc;
1312 if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k]) mc[k] = sc;
1313 if ((sc = xmb + bp[k]) > mc[k]) mc[k] = sc;
1318 It is very profitable for this benchmark to turn the conditional stores to mc[k]
1319 into a conditional move (select instr in IR) and allow the final store to do the
1320 store. See GCC PR27313 for more details. Note that this is valid to xform even
with the new C++ memory model, since mc[k] is previously loaded and later
stored.
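
A sketch of the desired loop-body shape for one mc[k] update: max-selects into
a local, then a single unconditional store (helper and parameter names are
illustrative):

static int imax(int a, int b) { return a > b ? a : b; }  /* a select in IR */

void update(int k, int *mc, const int *mpp, const int *tpmm,
            const int *ip, const int *tpim) {
  int t = mpp[k-1] + tpmm[k-1];
  t = imax(t, ip[k-1] + tpim[k-1]);   /* select, no intermediate store */
  mc[k] = t;                          /* the only store to mc[k] */
}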
1324 //===---------------------------------------------------------------------===//
There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.
1330 //===---------------------------------------------------------------------===//
1332 There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
1333 GCC testsuite. For example, we get the first example in predcom-1.c, but
1334 miss the second one:
1339 __attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}
1346 which compiles into two loads instead of one in the loop.
1348 predcom-2.c is the same as predcom-1.c
predcom-3.c is very similar but needs loads feeding each other instead of
store->load.
1354 //===---------------------------------------------------------------------===//
1358 Type based alias analysis:
1359 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705
1361 We should do better analysis of posix_memalign. At the least it should
no-capture its pointer argument; at best, we should know that the out-value
1363 result doesn't point to anything (like malloc). One example of this is in
1364 SingleSource/Benchmarks/Misc/dt.c
1366 //===---------------------------------------------------------------------===//
1368 A/B get pinned to the stack because we turn an if/then into a select instead
1369 of PRE'ing the load/store. This may be fixable in instcombine:
1370 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892
1372 struct X { int i; };
1386 //===---------------------------------------------------------------------===//
1388 Interesting missed case because of control flow flattening (should be 2 loads):
1389 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
1390 With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
1391 opt -mem2reg -gvn -instcombine | llvm-dis
1392 we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
1393 VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS
1395 //===---------------------------------------------------------------------===//
1397 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, since loading from null is
undefined:
1400 struct S { int w, x, y, z; };
1401 struct T { int r; struct S s; };
1402 void bar (struct S, int);
1403 void foo (int a, struct T b)
1411 //===---------------------------------------------------------------------===//
1413 simplifylibcalls should do several optimizations for strspn/strcspn:
1415 strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):
size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}
This should turn into a switch on the character. See PR3253 for some notes on
codegen.
1429 456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.
1431 //===---------------------------------------------------------------------===//
1433 "gas" uses this idiom:
1434 else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
else if (strchr ("<>", *intel_parser.op_string))
1438 Those should be turned into a switch.
1440 //===---------------------------------------------------------------------===//
1442 252.eon contains this interesting code:
1444 %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
1445 %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
1446 %strlen = call i32 @strlen(i8* %3072) ; uses = 1
1447 %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
1448 call void @llvm.memcpy.i32(i8* %endptr,
1449 i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
1450 %3074 = call i32 @strlen(i8* %endptr) nounwind readonly
1452 This is interesting for a couple reasons. First, in this:
1454 %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
1455 %strlen = call i32 @strlen(i8* %3072)
1457 The strlen could be replaced with: %strlen = sub %3072, %3073, because the
1458 strcpy call returns a pointer to the end of the string. Based on that, the
1459 endptr GEP just becomes equal to 3073, which eliminates a strlen call and GEP.
Second, the strlen that follows the memcpy can be replaced with:
1463 %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly
1465 Because the destination was just copied into the specified memory buffer. This,
1466 in turn, can be constant folded to "4".
1468 In other code, it contains:
1470 %endptr6978 = bitcast i8* %endptr69 to i32*
1471 store i32 7107374, i32* %endptr6978, align 1
1472 %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly
1474 Which could also be constant folded. Whatever is producing this should probably
1475 be fixed to leave this as a memcpy from a string.
1477 Further, eon also has an interesting partially redundant strlen call:
1479 bb8: ; preds = %_ZN18eonImageCalculatorC1Ev.exit
1480 %682 = getelementptr i8** %argv, i32 6 ; <i8**> [#uses=2]
1481 %683 = load i8** %682, align 4 ; <i8*> [#uses=4]
1482 %684 = load i8* %683, align 1 ; <i8> [#uses=1]
1483 %685 = icmp eq i8 %684, 0 ; <i1> [#uses=1]
1484 br i1 %685, label %bb10, label %bb9
1487 %686 = call i32 @strlen(i8* %683) nounwind readonly
1488 %687 = icmp ugt i32 %686, 254 ; <i1> [#uses=1]
1489 br i1 %687, label %bb10, label %bb11
1491 bb10: ; preds = %bb9, %bb8
1492 %688 = call i32 @strlen(i8* %683) nounwind readonly
1494 This could be eliminated by doing the strlen once in bb8, saving code size and
1495 improving perf on the bb8->9->10 path.
1497 //===---------------------------------------------------------------------===//
1499 I see an interesting fully redundant call to strlen left in 186.crafty:InputMove
1501 %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0
1504 bb62: ; preds = %bb55, %bb53
1505 %promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
1506 %171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
1507 %172 = add i32 %171, -1 ; <i32> [#uses=1]
1508 %173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
1511 br i1 %or.cond, label %bb65, label %bb72
1513 bb65: ; preds = %bb62
1514 store i8 0, i8* %173, align 1
1517 bb72: ; preds = %bb65, %bb62
1518 %trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
1519 %177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
1521 Note that on the bb62->bb72 path, that the %177 strlen call is partially
1522 redundant with the %171 call. At worst, we could shove the %177 strlen call
1523 up into the bb65 block moving it out of the bb62->bb72 path. However, note
1524 that bb65 stores to the string, zeroing out the last byte. This means that on
that path the value of %177 is actually just %171-1. A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.
1535 //===---------------------------------------------------------------------===//
1537 186.crafty also contains this code:
1539 %1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
1540 %1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
1541 %1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
1542 %1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
1543 %1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909
1545 The last strlen is computable as 1908-@pgn_event, which means 1910=1908.
1547 //===---------------------------------------------------------------------===//
1549 186.crafty has this interesting pattern with the "out.4543" variable:
1551 call void @llvm.memcpy.i32(
1552 i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
1553 i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call i32 (i8*, ...)* @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind
1556 It is basically doing:
1558 memcpy(globalarray, "string");
1559 printf(..., globalarray);
1561 Anyway, by knowing that printf just reads the memory and forward substituting
1562 the string directly into the printf, this eliminates reads from globalarray.
1563 Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
1564 other similar functions) there are many stores to "out". Once all the printfs
1565 stop using "out", all that is left is the memcpy's into it. This should allow
1566 globalopt to remove the "stored only" global.
1568 //===---------------------------------------------------------------------===//
define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}
1579 could be dagcombine'd to a sign-extending load with a shift.
1580 For example, on x86 this currently gets this:
1586 while it could get this:
1591 //===---------------------------------------------------------------------===//
1595 int test(int x) { return 1-x == x; } // --> return false
1596 int test2(int x) { return 2-x == x; } // --> return x == 1 ?
1598 Always foldable for odd constants, what is the rule for even?
1600 //===---------------------------------------------------------------------===//
1602 PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
1603 for next field in struct (which is at same address).
For example: store of float into { {{}}, float } could be turned into a store
to the float directly.
1608 //===---------------------------------------------------------------------===//
1610 The arg promotion pass should make use of nocapture to make its alias analysis
1611 stuff much more precise.
1613 //===---------------------------------------------------------------------===//
1615 The following functions should be optimized to use a select instead of a
1616 branch (from gcc PR40072):
1618 char char_int(int m) {if(m>7) return 0; return m;}
1619 int int_char(char m) {if(m>7) return 0; return m;}
1621 //===---------------------------------------------------------------------===//
1623 int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }
1627 define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
1629 %0 = and i32 %a, 128 ; <i32> [#uses=1]
1630 %1 = icmp eq i32 %0, 0 ; <i1> [#uses=1]
1631 %2 = or i32 %b, 128 ; <i32> [#uses=1]
1632 %3 = and i32 %b, -129 ; <i32> [#uses=1]
%b_addr.0 = select i1 %1, i32 %3, i32 %2		; <i32> [#uses=1]
ret i32 %b_addr.0
}
1637 However, it's functionally equivalent to:
1639 b = (b & ~0x80) | (a & 0x80);
1641 Which generates this:
1643 define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
1645 %0 = and i32 %b, -129 ; <i32> [#uses=1]
1646 %1 = and i32 %a, 128 ; <i32> [#uses=1]
%2 = or i32 %0, %1		; <i32> [#uses=1]
ret i32 %2
}
1651 This can be generalized for other forms:
1653 b = (b & ~0x80) | (a & 0x40) << 1;
1655 //===---------------------------------------------------------------------===//
1657 These two functions produce different code. They shouldn't:
1661 uint8_t p1(uint8_t b, uint8_t a) {
b = (b & ~0xc0) | (a & 0xc0);
return b;
}
1666 uint8_t p2(uint8_t b, uint8_t a) {
1667 b = (b & ~0x40) | (a & 0x40);
b = (b & ~0x80) | (a & 0x80);
return b;
}
1672 define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
1674 %0 = and i8 %b, 63 ; <i8> [#uses=1]
1675 %1 = and i8 %a, -64 ; <i8> [#uses=1]
%2 = or i8 %1, %0		; <i8> [#uses=1]
ret i8 %2
}
1680 define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
1682 %0 = and i8 %b, 63 ; <i8> [#uses=1]
1683 %.masked = and i8 %a, 64 ; <i8> [#uses=1]
1684 %1 = and i8 %a, -128 ; <i8> [#uses=1]
1685 %2 = or i8 %1, %0 ; <i8> [#uses=1]
%3 = or i8 %2, %.masked		; <i8> [#uses=1]
ret i8 %3
}
1690 //===---------------------------------------------------------------------===//
1692 IPSCCP does not currently propagate argument dependent constants through
functions where it does not know all of the callers. This includes functions
1694 with normal external linkage as well as templates, C99 inline functions etc.
1695 Specifically, it does nothing to:
1697 define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
1699 %0 = add nsw i32 %y, %z
1702 %3 = add nsw i32 %1, %2
define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}
It would be interesting to extend IPSCCP to be able to handle simple cases like
1713 this, where all of the arguments to a call are constant. Because IPSCCP runs
1714 before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.
1718 //===---------------------------------------------------------------------===//
1720 The libcall constant folding stuff should be moved out of SimplifyLibcalls into
1721 libanalysis' constantfolding logic. This would allow IPSCCP to be able to
1722 handle simple things like this:
1724 static int foo(const char *X) { return strlen(X); }
1725 int bar() { return foo("abcd"); }
1727 //===---------------------------------------------------------------------===//
1729 InstCombine should use SimplifyDemandedBits to remove the or instruction:
define i1 @test(i8 %x, i8 %y) {
  %A = or i8 %x, %y
  %B = icmp ugt i8 %A, 3
  ret i1 %B
}
1737 Currently instcombine calls SimplifyDemandedBits with either all bits or just
1738 the sign bit, if the comparison is obviously a sign test. In this case, we only
1739 need all but the bottom two bits from %A, and if we gave that mask to SDB it
1740 would delete the or instruction for us.
1742 //===---------------------------------------------------------------------===//
1744 functionattrs doesn't know much about memcpy/memset. This function should be
1745 marked readnone rather than readonly, since it only twiddles local memory, but
1746 functionattrs doesn't handle memset/memcpy/memmove aggressively:
1748 struct X { int *p; int *q; };
1755 p = __builtin_memcpy (&x, &y, sizeof (int *));
1759 //===---------------------------------------------------------------------===//
1761 Missed instcombine transformation:
1762 define i1 @a(i32 %x) nounwind readnone {
1764 %cmp = icmp eq i32 %x, 30
1765 %sub = add i32 %x, -30
1766 %cmp2 = icmp ugt i32 %sub, 9
%or = or i1 %cmp, %cmp2
ret i1 %or
}
1770 This should be optimized to a single compare. Testcase derived from gcc.
1772 //===---------------------------------------------------------------------===//
1774 Missed instcombine or reassociate transformation:
1775 int a(int a, int b) { return (a==12)&(b>47)&(b<58); }
The sgt and slt should be combined into a single comparison. Testcase derived
from gcc.
1782 Missed instcombine transformation:
1784 %382 = srem i32 %tmp14.i, 64 ; [#uses=1]
1785 %383 = zext i32 %382 to i64 ; [#uses=1]
1786 %384 = shl i64 %381, %383 ; [#uses=1]
1787 %385 = icmp slt i32 %tmp14.i, 64 ; [#uses=1]
1789 The srem can be transformed to an and because if %tmp14.i is negative, the
1790 shift is undefined. Testcase derived from 403.gcc.
1792 //===---------------------------------------------------------------------===//
1794 This is a range comparison on a divided result (from 403.gcc):
1796 %1337 = sdiv i32 %1336, 8 ; [#uses=1]
1797 %.off.i208 = add i32 %1336, 7 ; [#uses=1]
1798 %1338 = icmp ult i32 %.off.i208, 15 ; [#uses=1]
1800 We already catch this (removing the sdiv) if there isn't an add, we should
handle the 'add' as well. This is a common idiom with its builtin_alloca code.
1804 int a(int x) { return (unsigned)(x/16+7) < 15; }
1806 Another similar case involves truncations on 64-bit targets:
1808 %361 = sdiv i64 %.046, 8 ; [#uses=1]
1809 %362 = trunc i64 %361 to i32 ; [#uses=2]
1811 %367 = icmp eq i32 %362, 0 ; [#uses=1]
1813 //===---------------------------------------------------------------------===//
1815 Missed instcombine/dagcombine transformation:
1816 define void @lshift_lt(i8 zeroext %a) nounwind {
1818 %conv = zext i8 %a to i32
1819 %shl = shl i32 %conv, 3
1820 %cmp = icmp ult i32 %shl, 33
1821 br i1 %cmp, label %if.then, label %if.end
if.then:
  tail call void @bar() nounwind
  br label %if.end

if.end:
  ret void
}

declare void @bar() nounwind
1832 The shift should be eliminated. Testcase derived from gcc.
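
In C terms, the shift folds into the compare constant:

int lshift_lt(unsigned char a) {
  return (a << 3) < 33;     /* ==>  a < 5, since 8*a < 33 iff a <= 4 */
}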
1834 //===---------------------------------------------------------------------===//
1836 These compile into different code, one gets recognized as a switch and the
1837 other doesn't due to phase ordering issues (PR6212):
1839 int test1(int mainType, int subType) {
1842 else if (mainType == 9)
1844 else if (mainType == 11)
1849 int test2(int mainType, int subType) {
1859 //===---------------------------------------------------------------------===//
1861 The following test case (from PR6576):
1863 define i32 @mul(i32 %a, i32 %b) nounwind readnone {
1865 %cond1 = icmp eq i32 %b, 0 ; <i1> [#uses=1]
1866 br i1 %cond1, label %exit, label %bb.nph
1867 bb.nph: ; preds = %entry
  %tmp = mul i32 %b, %a		; <i32> [#uses=1]
  br label %exit
exit:		; preds = %entry, %bb.nph
  %result = phi i32 [ %tmp, %bb.nph ], [ 0, %entry ]
  ret i32 %result
}
1874 could be reduced to:
define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %tmp = mul i32 %b, %a
  ret i32 %tmp
}
1882 //===---------------------------------------------------------------------===//
1884 We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.
1887 Another interesting case is that something related could be used for variables
1888 that go const after their ctor has finished. In these cases, globalopt (which
1889 can statically run the constructor) could mark the global const (so it gets put
1890 in the readonly section). A testcase would be:
#include <complex>
using namespace std;
1894 const complex<char> should_be_in_rodata (42,-42);
1895 complex<char> should_be_in_data (42,-42);
1896 complex<char> should_be_in_bss;
1898 Where we currently evaluate the ctors but the globals don't become const because
1899 the optimizer doesn't know they "become const" after the ctor is done. See
1900 GCC PR4131 for more examples.
1902 //===---------------------------------------------------------------------===//
For this code:

int test(int x) {
  return x > 1 ? x : 1;
}
1910 LLVM emits a comparison with 1 instead of 0. 0 would be equivalent
1911 and cheaper on most targets.
1913 LLVM prefers comparisons with zero over non-zero in general, but in this
case it chooses instead to keep the max operation obvious.
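
The zero-compare form computes the same maximum (x == 1 yields 1 either way),
so this is purely a canonicalization question:

int test0(int x) {          /* hypothetical equivalent */
  return x > 0 ? x : 1;
}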
1916 //===---------------------------------------------------------------------===//
1918 Take the following testcase on x86-64 (similar testcases exist for all targets
define void @a(i64* nocapture %s, i64* nocapture %t, i64 %a, i64 %b,
               i64 %c) nounwind {
entry:
1924 %0 = zext i64 %a to i128 ; <i128> [#uses=1]
1925 %1 = zext i64 %b to i128 ; <i128> [#uses=1]
1926 %2 = add i128 %1, %0 ; <i128> [#uses=2]
1927 %3 = zext i64 %c to i128 ; <i128> [#uses=1]
1928 %4 = shl i128 %3, 64 ; <i128> [#uses=1]
1929 %5 = add i128 %4, %2 ; <i128> [#uses=1]
1930 %6 = lshr i128 %5, 64 ; <i128> [#uses=1]
1931 %7 = trunc i128 %6 to i64 ; <i64> [#uses=1]
1932 store i64 %7, i64* %s, align 8
1933 %8 = trunc i128 %2 to i64 ; <i64> [#uses=1]
store i64 %8, i64* %t, align 8
ret void
}
1954 The generated SelectionDAG has an ADD of an ADDE, where both operands of the
1955 ADDE are zero. Replacing one of the operands of the ADDE with the other operand
1956 of the ADD, and replacing the ADD with the ADDE, should give the desired result.
1958 (That said, we are doing a lot better than gcc on this testcase. :) )
1960 //===---------------------------------------------------------------------===//
1962 Switch lowering generates less than ideal code for the following switch:
1963 define void @a(i32 %x) nounwind {
1965 switch i32 %x, label %if.end [
1966 i32 0, label %if.then
1967 i32 1, label %if.then
1968 i32 2, label %if.then
1969 i32 3, label %if.then
    i32 5, label %if.then
  ]

if.then:
  tail call void @foo() nounwind
  ret void

if.end:
  ret void
}
1980 Generated code on x86-64 (other platforms give similar results):
1993 The movl+movl+btq+jb could be simplified to a cmpl+jne.
1995 Or, if we wanted to be really clever, we could simplify the whole thing to
1996 something like the following, which eliminates a branch:
2003 //===---------------------------------------------------------------------===//
2004 Given a branch where the two target blocks are identical ("ret i32 %b" in
2005 both), simplifycfg will simplify them away. But not so for a switch statement:
2007 define i32 @f(i32 %a, i32 %b) nounwind readnone {
2009 switch i32 %a, label %bb3 [
2014 bb: ; preds = %entry, %entry
2017 bb3: ; preds = %entry
2020 //===---------------------------------------------------------------------===//
2022 clang -O3 fails to devirtualize this virtual inheritance case: (GCC PR45875)
2023 Looks related to PR3100
2027 virtual void foo ();
2029 struct c11 : c10, c1{
2032 struct c28 : virtual c11{
2041 //===---------------------------------------------------------------------===//
2045 int foo(int a) { return (a & (~15)) / 16; }
2049 define i32 @foo(i32 %a) nounwind readnone ssp {
2051 %and = and i32 %a, -16
%div = sdiv i32 %and, 16
  ret i32 %div
}
2056 but this code (X & -A)/A is X >> log2(A) when A is a power of 2, so this case
2057 should be instcombined into just "a >> 4".
2059 We do get this at the codegen level, so something knows about it, but
2060 instcombine should catch it earlier:
2068 //===---------------------------------------------------------------------===//
2070 This code (from GCC PR28685):
int test(int a, int b) {
  if (a < b)
    return 1;
  return a == b;
}

compiles to:
2082 define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
2084 %cmp = icmp slt i32 %a, %b
2085 br i1 %cmp, label %return, label %if.end
2087 if.end: ; preds = %entry
2088 %cmp5 = icmp eq i32 %a, %b
%conv6 = zext i1 %cmp5 to i32
  br label %return

return:                              ; preds = %if.end, %entry
  %retval.0 = phi i32 [ 1, %entry ], [ %conv6, %if.end ]
  ret i32 %retval.0
}

when it could produce:
2098 define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
2100 %0 = icmp sle i32 %a, %b
%retval = zext i1 %0 to i32
  ret i32 %retval
}
2105 //===---------------------------------------------------------------------===//