Target Independent Opportunities:

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs.  Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   when the called functions are known.

//===---------------------------------------------------------------------===//
We should recognize various "overflow detection" idioms and translate them into
llvm.uadd.with.overflow and similar intrinsics.  Here is a multiply idiom:

unsigned int mul(unsigned int a, unsigned int b) {
  if ((unsigned long long)a*b > 0xffffffff)
    exit(0);
}

The legalization code for mul-with-overflow needs to be made more robust before
this can be implemented though.
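For illustration, here is a sketch of the analogous addition idiom the same
recognizer should catch (hypothetical function name, not from the testsuite):
the wrapped sum is smaller than an operand exactly when the add overflows, so
this should map onto llvm.uadd.with.overflow.

#include <stdlib.h>

unsigned int add(unsigned int a, unsigned int b) {
  unsigned int s = a + b;
  if (s < a)      /* sum wrapped, so the addition overflowed */
    exit(0);
  return s;
}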
//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (ffastmath).  Misc/mandel will like this. :)  This isn't
safe in general, even on darwin.  See the libm implementation of hypot for
examples (which special case when x/y are exactly zero to get signed zeros etc.
right).
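A sketch of the proposed expansion (only valid under fast-math assumptions; it
loses the overflow/underflow care and the signed-zero special cases that libm's
hypot provides):

#include <math.h>

double fast_hypot(double x, double y) {
  /* llvm.sqrt(x*x + y*y): may overflow for large x/y where hypot()
     would not, hence the errno/precision caveat above. */
  return sqrt(x * x + y * y);
}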
//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.
//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)

//===---------------------------------------------------------------------===//
Reassociate should turn things like:

int factorial(int X) {
   return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

First, the intrinsic needs to be extended to support integers, and second the
code generator needs to be enhanced to lower these to multiplication trees.
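For illustration, the balanced multiplication tree the code generator should
emit for powi(X, 8) needs only three multiplies instead of the seven in the
naive expansion (a sketch with a hypothetical helper name):

double powi8(double x) {
  double x2 = x * x;    /* x^2 */
  double x4 = x2 * x2;  /* x^4 */
  return x4 * x4;       /* x^8: 3 multiplies instead of 7 */
}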
//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

This is blocked on not handling X*X*X -> powi(X, 3) (see note above).  The issue
is that we end up getting t = 2*X, s = t*t, and don't turn this into 4*X*X,
which is the same number of multiplies and is canonical, because the 2*X has
multiple uses.  Here's a simple example:

define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47   ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}

//===---------------------------------------------------------------------===//
Reassociate should handle the example in GCC PR16157:

extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
void f () {  /* this can be optimized to four additions... */
        b4 = a4 + a3 + a2 + a1 + a0;
        b3 = a3 + a2 + a1 + a0;
        b2 = a2 + a1 + a0;
        b1 = a1 + a0;
}

This requires reassociating to forms of expressions that are already available,
something that reassoc doesn't think about yet.
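The four-addition form the pass would need to discover reuses each previous sum
(a sketch; the function name is hypothetical):

extern int a0, a1, a2, a3, a4;
extern int b0, b1, b2, b3, b4;

void f_reassociated(void) {
  b1 = a1 + a0;   /* each b(k) reuses the already-available b(k-1) */
  b2 = b1 + a2;
  b3 = b2 + a3;
  b4 = b3 + a4;
}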
//===---------------------------------------------------------------------===//

This function: (derived from GCC PR19988)

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x + -0.1234 * y));
}

compiles to:
_foo:
	movapd	%xmm1, %xmm2
	mulsd	LCPI1_1(%rip), %xmm1
	mulsd	LCPI1_0(%rip), %xmm2
	addsd	%xmm0, %xmm1
	addsd	%xmm0, %xmm2
	mulsd	%xmm2, %xmm1
	movapd	%xmm1, %xmm0
	ret

Reassociate should be able to turn it into:

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x - 0.1234 * y));
}

Which allows the multiply by constant to be CSE'd, producing:

_foo:
	mulsd	LCPI1_0(%rip), %xmm1
	movapd	%xmm1, %xmm2
	addsd	%xmm0, %xmm2
	subsd	%xmm1, %xmm0
	mulsd	%xmm2, %xmm0
	ret

This doesn't need -ffast-math support at all.  This is particularly bad because
the llvm-gcc frontend is canonicalizing the latter into the former, but clang
doesn't have this problem.
//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

this could be done in SelectionDAGISel.cpp, along with other special cases,
for 1, 2, 4, and 8 bytes.
//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
cases too.
//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size.  It works but can be overly conservative, as the alignment of
specific vector types is target dependent.
//===---------------------------------------------------------------------===//

We should produce an unaligned load from code like this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}

//===---------------------------------------------------------------------===//
Add support for conditional increments, and other related patterns.  Instead
of branching around the increment:

	cmpl	$0, %eax
	je	LBB16_2	#cond_next
LBB16_1:	#cond_true
	incl	_foo
LBB16_2:	#cond_next

emit a branchless sete + add sequence.

//===---------------------------------------------------------------------===//
Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs.  We could even make an intrinsic for this
if anyone cared enough about sincos.
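At the source level the combined form would look like this (a sketch; sincos is
the GNU libm extension declared above, and polar_to_xy is a hypothetical
caller):

extern void sincos(double x, double *sin, double *cos);

void polar_to_xy(double r, double theta, double *x, double *y) {
  double s, c;
  sincos(theta, &s, &c);   /* one call instead of sin(theta) + cos(theta) */
  *x = r * c;
  *y = r * s;
}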
//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int.  On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but this requires TBAA.
//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//
We don't delete this output free loop, because trip count analysis doesn't
realize that it is finite (if it were infinite, it would be undefined).  Not
having this blocks Loop Idiom from matching strlen and friends.

void foo(char *C) {
  int x = 0;
  while (*C)
    ++x, ++C;
}

//===---------------------------------------------------------------------===//
These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c = 0;
  while (a) {
    c++;
    a &= a - 1;
  }
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

This should be recognized as CLZ:  rdar://8459039

unsigned clz_a(unsigned a) {
  int i;
  for (i = 0; i < 32; i++)
    if (a & (1 << (31 - i)))
      return i;
  return 32;
}

This sort of thing should be added to the loop idiom pass.
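For reference, all of the loops above are equivalent to the GCC/Clang builtins
the idiom recognizer should produce (modulo clz_a's handling of zero, which
__builtin_clz leaves undefined; function names here are hypothetical):

unsigned countbits_builtin(unsigned v) {
  return __builtin_popcount(v);
}

unsigned clz_builtin(unsigned a) {
  return a ? __builtin_clz(a) : 32;   /* clz_a returns 32 for a == 0 */
}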
//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors.

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//
-instcombine should handle this transform:
   icmp pred (sdiv X / C1 ), C2
when X, C1, and C2 are unsigned.  Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match.  See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.
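The underlying arithmetic fact, with hypothetical constants: for unsigned X,
(X / 10) > 5 holds exactly when X >= 60, so the compare can absorb the divide.
The missed cases are the mixed-sign variants described above (a sketch):

int cmp_div(unsigned X)     { return (X / 10) > 5; }

/* ...should become: */
int cmp_div_opt(unsigned X) { return X > 59; }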
//===---------------------------------------------------------------------===//

SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
opportunities in its double_array_divs_variable function: it needs loop
interchange, memory promotion (which LICM already does), vectorization and
variable trip count loop unrolling (since it has a constant trip count). ICC
apparently produces this very nice code with -ffast-math:

..B1.70:                        # Preds ..B1.70 ..B1.69
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       ...
       cmpl      $131072, %edx                                 #108.2
       jb        ..B1.70       # Prob 99%                      #108.2

It would be better to count down to zero, but this is a lot better than what we
do.

//===---------------------------------------------------------------------===//
typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used.  On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags.  On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.

//===---------------------------------------------------------------------===//
LSR should know what GPR types a target has from TargetData.  This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two near identical IV's (after promotion) on PPC/ARM:

	...
	add r2, r2, #1   <- [0,+,1]
	sub r0, r0, #1   <- [0,-,1]
	...

LSR should reuse the "+" IV for the exit test.

//===---------------------------------------------------------------------===//
Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
			[ %tmp.9, %then.1 ]
	ret i32 %result.0
}

//===---------------------------------------------------------------------===//
Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative.  "return foo() << 1" can be tail recursion eliminated.
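A sketch of the accumulator form TRE could produce for pow2m1, treating the
2*f(n-1)+1 step as an associative update (hypothetical function name):

int pow2m1_iter(int n) {
  int acc = 0;
  while (n-- > 0)
    acc = 2 * acc + 1;   /* same update, applied iteratively */
  return acc;
}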
//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}

//===---------------------------------------------------------------------===//
We should investigate an instruction sinking pass.  Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

we compile this to:
_foo:
	subl	$28, %esp
	call	"L1$pb"
"L1$pb":
	popl	%eax
	cmpl	$0, 32(%esp)
	je	LBB1_2	# cond_true
LBB1_1:	# return
	# ...
	addl	$28, %esp
	ret
LBB1_2:	# cond_true
	...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block.  It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it.  This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86.  If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//
Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

//===---------------------------------------------------------------------===//
We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations.  On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar.  On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.
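What the integer form of load+fneg+store looks like at the source level (a
sketch; the xor flips only the IEEE sign bit, and the memcpy-based type pun is
just to keep the example well-defined C):

#include <stdint.h>
#include <string.h>

void fneg_via_int(double *p) {
  uint64_t bits;
  memcpy(&bits, p, sizeof bits);   /* load */
  bits ^= 1ULL << 63;              /* fneg == flip the sign bit */
  memcpy(p, &bits, sizeof bits);   /* store */
}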
//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable.  For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-m64 -O3 -fno-exceptions -static -fomit-frame-pointer):

__Z9GetHotKeyv:                         ## @_Z9GetHotKeyv
	movq	_m_HotKey@GOTPCREL(%rip), %rax
	...

//===---------------------------------------------------------------------===//
We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//
Consider:

int test() {
  long long input[8] = {1,0,1,0,1,0,1,0};
  foo(input);
}

Clang compiles this into:

  call void @llvm.memset.p0i8.i64(i8* %tmp, i8 0, i64 64, i32 16, i1 false)
  %0 = getelementptr [8 x i64]* %input, i64 0, i64 0
  store i64 1, i64* %0, align 16
  %1 = getelementptr [8 x i64]* %input, i64 0, i64 2
  store i64 1, i64* %1, align 16
  %2 = getelementptr [8 x i64]* %input, i64 0, i64 4
  store i64 1, i64* %2, align 16
  %3 = getelementptr [8 x i64]* %input, i64 0, i64 6
  store i64 1, i64* %3, align 16

Which gets codegen'd into:

	pxor	%xmm0, %xmm0
	movaps	%xmm0, -16(%rbp)
	movaps	%xmm0, -32(%rbp)
	movaps	%xmm0, -48(%rbp)
	movaps	%xmm0, -64(%rbp)
	movq	$1, -64(%rbp)
	movq	$1, -48(%rbp)
	movq	$1, -32(%rbp)
	movq	$1, -16(%rbp)

It would be better to have 4 movq's of 0 instead of the movaps's.

//===---------------------------------------------------------------------===//
http://llvm.org/PR717:

The following code should compile into "ret int undef". Instead, LLVM
produces "ret int 0":

int f() {
  int x = 4;
  int y;
  if (x == 3) y = 0;
  return y;
}

//===---------------------------------------------------------------------===//
The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop.  One trivial example is:

#include <stdio.h>
int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size.  The resultant code would then also be suitable for
exit value computation.

//===---------------------------------------------------------------------===//
We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc.  On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough.  Here are some testcases reduced from GCC PR17886:

unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

//===---------------------------------------------------------------------===//
This (and similar related idioms):

unsigned int foo(unsigned char i) {
  return i | (i<<8) | (i<<16) | (i<<24);
}

compiles into:

define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %shl5 = shl i32 %conv, 16
  %shl9 = shl i32 %conv, 24
  %or = or i32 %shl9, %conv
  %or6 = or i32 %or, %shl5
  %or10 = or i32 %or6, %shl
  ret i32 %or10
}

it would be better as:

unsigned int bar(unsigned char i) {
  unsigned int j = i | (i << 8);
  return j | (j << 16);
}

aka:

define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %or = or i32 %shl, %conv
  %shl5 = shl i32 %or, 16
  %or6 = or i32 %shl5, %or
  ret i32 %or6
}

or even i*0x01010101, depending on the speed of the multiplier.  The best way to
handle this is to canonicalize it to a multiply in IR and have codegen handle
lowering multiplies to shifts on cpus where shifts are faster.
//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together.  For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy.  This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.
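The merged form at the source level, assuming b is not modified in between (a
sketch; the +1 copies the terminating nul, which is what strcpy does):

#include <string.h>

void copy_before(char *a, const char *b) { memcpy(a, b, strlen(b) + 1); }
void copy_after(char *a, const char *b)  { strcpy(a, b); }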
//===---------------------------------------------------------------------===//
We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

Into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
1.821u 0.003s 0:01.82 100.0%    0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
0.821u 0.001s 0:00.82 100.0%    0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//
Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mul lo (cheaper).  Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

This is equivalent to the following, where 2863311531 is the multiplicative
inverse of 3, and 1431655766 is ((2^32)-1)/3+1:

void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}

The same transformation can work with an even modulo with the addition of a
rotate: rotate the result of the multiply to the right by the number of bits
which need to be zero for the condition to be true, and shrink the compare RHS
by the same amount.  Unless the target supports rotates, though, that
transformation probably isn't worthwhile.

The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".
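Worked instance of the arithmetic (hypothetical function names): since
3 * 2863311531 == 1 (mod 2^32), multiplying by the inverse maps multiples of 3,
and only those, into [0, 1431655766):

int div3_before(unsigned n) { return n % 3 == 0; }
int div3_after(unsigned n)  { return n * 2863311531U < 1431655766U; }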
//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

#include <cstdio>
struct test {
	int val;
	virtual ~test() {}
};

int main() {
	test t;
	std::scanf("%d", &t.val);
	std::printf("%d\n", t.val);
}

//===---------------------------------------------------------------------===//
These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6	;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}

f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

Both should combine to ((a|b) & (c-1)) != 0.  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
#define PMD_MASK    (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
        if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
                f();
}

The expression should optimize to something like
"!((start|end)&~PMD_MASK)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

These should combine to the same thing.  Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//
#define abs(x) x>0?x:-x
int f(int x)
{
  return (abs(x)) >= 0;
}

This should optimize to x != INT_MIN. (With -fwrapv.)  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}

int
minus_cst (unsigned int a)
{
  unsigned int tem;

  tem = 20 - a;
  if (tem == 5)
    bar ();
}

int
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15.  */
  if ((a & ~7) > 8)
    bar ();
}

int
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23.  */
  if ((a >> 2) > 5)
    bar ();
}

void neg_eq_cst(unsigned int a) {
  if (-a == 123)
    bar();
}

All should simplify to a single comparison.  All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//
int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)".  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a".  Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane.  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c).  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)".  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
        (!tmp || decl_context == 1)

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0		; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true		; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24		; <i1> [#uses=1]

//===---------------------------------------------------------------------===//
Store sinking: This code:

void f (int n, int *cond, int *res) {
  int i;
  *res = 0;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out.  This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395
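The sunk form DSE should produce for f, assuming cond and res don't alias (a
sketch; hypothetical function name): the loop works on a register and the store
happens once, after the loop.

void f_sunk(int n, int *cond, int *res) {
  int r = 0, i;              /* *res starts at 0, as in f */
  for (i = 0; i < n; i++)
    if (*cond)
      r ^= 234;
  *res = r;                  /* single store, out of the loop */
}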
//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//
This simple function from 179.art:

int winner, numf2s;
struct { double y; int reset; } *Y;

void find_match() {
   int i;
   winner = 0;
   for (i=0;i<numf2s;i++)
       if (Y[i].y > Y[winner].y)
           winner = i;
}

Compiles into (with clang TBAA):

for.body:                                         ; preds = %for.inc, %bb.nph
  %indvar = phi i64 [ 0, %bb.nph ], [ %indvar.next, %for.inc ]
  %i.01718 = phi i32 [ 0, %bb.nph ], [ %i.01719, %for.inc ]
  %tmp4 = getelementptr inbounds %struct.anon* %tmp3, i64 %indvar, i32 0
  %tmp5 = load double* %tmp4, align 8, !tbaa !4
  %idxprom7 = sext i32 %i.01718 to i64
  %tmp10 = getelementptr inbounds %struct.anon* %tmp3, i64 %idxprom7, i32 0
  %tmp11 = load double* %tmp10, align 8, !tbaa !4
  %cmp12 = fcmp ogt double %tmp5, %tmp11
  br i1 %cmp12, label %if.then, label %for.inc

if.then:                                          ; preds = %for.body
  %i.017 = trunc i64 %indvar to i32
  br label %for.inc

for.inc:                                          ; preds = %for.body, %if.then
  %i.01719 = phi i32 [ %i.01718, %for.body ], [ %i.017, %if.then ]
  %indvar.next = add i64 %indvar, 1
  %exitcond = icmp eq i64 %indvar.next, %tmp22
  br i1 %exitcond, label %for.cond.for.end_crit_edge, label %for.body

It is good that we hoisted the reloads of numf2s and Y out of the loop and
sunk the store to winner out.

However, this is awful on several levels: the conditional truncate in the loop
(-indvars at fault? why can't we completely promote the IV to i64?).

Beyond that, we have a partially redundant load in the loop: if "winner" (aka
%i.01718) isn't updated, we reload Y[winner].y the next time through the loop.
Similarly, the addressing that feeds it (including the sext) is redundant. In
the end we get this generated assembly:

LBB0_2:                                 ## %for.body
                                        ## =>This Inner Loop Header: Depth=1
	...
	ucomisd	(%rcx,%r8), %xmm0
	...

All things considered this isn't too bad, but we shouldn't need the movslq or
the shlq instruction, or the load folded into ucomisd every time through the
loop.

On an x86-specific topic, if the loop can't be restructured, the movl should be
a cmov.

//===---------------------------------------------------------------------===//
GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store.  We need partially dead store sinking.

//===---------------------------------------------------------------------===//
[LOAD PRE CRIT EDGE SPLITTING]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation.  The code we get looks like (g is on the stack):

bb2:
	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is partially redundant, and in BB2 it should have the value %8.

GCC PR33344 and PR35287 are similar cases.

//===---------------------------------------------------------------------===//
There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite, ones we don't get yet are (checked through loadpre25):

[CRIT EDGE BREAKING]
loadpre3.c predcom-4.c

[PRE OF READONLY CALL]
loadpre5.c

[TURN SELECT INTO BRANCH]
loadpre14.c loadpre15.c

actually a conditional increment: loadpre18.c loadpre19.c

//===---------------------------------------------------------------------===//
[LOAD PRE / STORE SINKING / SPEC HACK]

This is a chunk of code from 456.hmmer:

int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
      int *tpdm, int xmb, int *bp, int *ms) {
  int k, sc;
  for (k = 1; k <= M; k++) {
     mc[k] = mpp[k-1]   + tpmm[k-1];
     if ((sc = ip[k-1]  + tpim[k-1]) > mc[k])  mc[k] = sc;
     if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k])  mc[k] = sc;
     if ((sc = xmb  + bp[k])         > mc[k])  mc[k] = sc;
     mc[k] += ms[k];
  }
}

It is very profitable for this benchmark to turn the conditional stores to mc[k]
into a conditional move (select instr in IR) and allow the final store to do the
store.  See GCC PR27313 for more details.  Note that this is valid to xform even
with the new C++ memory model, since mc[k] is previously loaded and later
stored.

//===---------------------------------------------------------------------===//
There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.

//===---------------------------------------------------------------------===//
There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite.  For example, we get the first example in predcom-1.c, but
miss the second one:

unsigned fib[1000];
unsigned avg[1000];

__attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}

which compiles into two loads instead of one in the loop.
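The one-load-per-iteration form predictive commoning should produce (a sketch,
assuming fib and avg as declared above): the two reused elements are carried in
scalars across iterations.

extern unsigned fib[1000], avg[1000];

void count_averages_opt(int n) {
  int i;
  unsigned f0 = fib[0], f1 = fib[1];
  for (i = 1; i < n; i++) {
    unsigned f2 = fib[i + 1];      /* the only load left in the loop */
    avg[i] = (((unsigned long) f0 + f1 + f2) / 3) & 0xffff;
    f0 = f1;
    f1 = f2;
  }
}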
predcom-2.c is the same as predcom-1.c.

predcom-3.c is very similar but needs loads feeding each other instead of
store->load.

//===---------------------------------------------------------------------===//
Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

We should do better analysis of posix_memalign.  At the least it should
no-capture its pointer argument; at best, we should know that the out-value
result doesn't point to anything (like malloc).  One example of this is in
SingleSource/Benchmarks/Misc/dt.c

//===---------------------------------------------------------------------===//
Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
      opt -mem2reg -gvn -instcombine | llvm-dis
we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, loading from null is undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//
simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character.  See PR3253 for some notes on
codegen.

456.hmmer apparently uses strcspn and strspn a lot.  471.omnetpp uses strspn.

//===---------------------------------------------------------------------===//
1328 "gas" uses this idiom:
1329 else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
1331 else if (strchr ("<>", *intel_parser.op_string)
1333 Those should be turned into a switch.
1335 //===---------------------------------------------------------------------===//
252.eon contains this interesting code:

        %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
        %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
        call void @llvm.memcpy.i32(i8* %endptr,
          i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
        %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple reasons.  First, the memcpy+strlen pair can
be replaced with:

        %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

because the destination was just copied into the specified memory buffer.  This,
in turn, can be constant folded to "4".

In other code, it contains:

        %endptr6978 = bitcast i8* %endptr69 to i32*
        store i32 7107374, i32* %endptr6978, align 1
        %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

Which could also be constant folded.  Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:		; preds = %_ZN18eonImageCalculatorC1Ev.exit
	%682 = getelementptr i8** %argv, i32 6		; <i8**> [#uses=2]
	%683 = load i8** %682, align 4		; <i8*> [#uses=4]
	%684 = load i8* %683, align 1		; <i8> [#uses=1]
	%685 = icmp eq i8 %684, 0		; <i1> [#uses=1]
	br i1 %685, label %bb10, label %bb9

bb9:		; preds = %bb8
	%686 = call i32 @strlen(i8* %683) nounwind readonly
	%687 = icmp ugt i32 %686, 254		; <i1> [#uses=1]
	br i1 %687, label %bb10, label %bb11

bb10:		; preds = %bb9, %bb8
	%688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//
I see an interesting fully redundant call to strlen left in 186.crafty:InputMove

	%movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0

bb62:		; preds = %bb55, %bb53
	%promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
	%171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
	%172 = add i32 %171, -1		; <i32> [#uses=1]
	%173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
	...
	br i1 %or.cond, label %bb65, label %bb72

bb65:		; preds = %bb62
	store i8 0, i8* %173, align 1
	...

bb72:		; preds = %bb65, %bb62
	%trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
	%177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call.  At worst, we could shove the %177 strlen call
up into the bb65 block moving it out of the bb62->bb72 path.  However, note
that bb65 stores to the string, zeroing out the last byte.  This means that on
that path the value of %177 is actually just %171-1.  A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//
186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
       i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call i32 (i8*, ...)* @printf(i8* getelementptr ([10 x i8]* @out.4543,
                                    i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(..., globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out".  Once all the printfs
stop using "out", all that is left is the memcpy's into it.  This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//
This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift.
For example, on x86 this currently gets this:

	movb	(%eax), %al
	sarb	$5, %al
	movsbl	%al, %eax

while it could get this:

	movsbl	(%eax), %eax
	sarl	$5, %eax

//===---------------------------------------------------------------------===//
int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants, what is the rule for even?

//===---------------------------------------------------------------------===//
PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into  { {{}}, float } could be turned into a store
to the float directly.

//===---------------------------------------------------------------------===//

The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.

//===---------------------------------------------------------------------===//
The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//
int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

Generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128                            ; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
  %2 = or i32 %b, 128                             ; <i32> [#uses=1]
  %3 = and i32 %b, -129                           ; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2        ; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

         b = (b & ~0x80) | (a & 0x80);

Which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129                           ; <i32> [#uses=1]
  %1 = and i32 %a, 128                            ; <i32> [#uses=1]
  %2 = or i32 %0, %1                              ; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

     b = (b & ~0x80) | (a & 0x40) << 1;

//===---------------------------------------------------------------------===//
These two functions produce different code. They shouldn't:

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return b;
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return b;
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %1 = and i8 %a, -64                             ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %.masked = and i8 %a, 64                        ; <i8> [#uses=1]
  %1 = and i8 %a, -128                            ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  %3 = or i8 %2, %.masked                         ; <i8> [#uses=1]
  ret i8 %3
}

//===---------------------------------------------------------------------===//
IPSCCP does not currently propagate argument dependent constants through
functions where it does not know all of the callers.  This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  %1 = mul i32 %0, %x
  %2 = mul i32 %y, %z
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant.  Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.

//===---------------------------------------------------------------------===//
The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constantfolding logic.  This would allow IPSCCP to be able to
handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }

//===---------------------------------------------------------------------===//
functionattrs doesn't know much about memcpy/memset.  This function should be
marked readnone rather than readonly, since it only twiddles local memory, but
functionattrs doesn't handle memset/memcpy/memmove aggressively:

struct X { int *p; int *q; };
int foo() {
  int i = 0, j = 1;
  struct X x, y;
  int **p;
  y.p = &i;
  x.q = &j;
  p = __builtin_memcpy (&x, &y, sizeof (int *));
  return **p;
}

This can be seen at:
$ clang t.c -S -o - -mkernel -O0 -emit-llvm | opt -functionattrs -S

//===---------------------------------------------------------------------===//
Missed instcombine transformation:

define i1 @a(i32 %x) nounwind readnone {
entry:
  %cmp = icmp eq i32 %x, 30
  %sub = add i32 %x, -30
  %cmp2 = icmp ugt i32 %sub, 9
  %or = or i1 %cmp, %cmp2
  ret i1 %or
}

This should be optimized to a single compare.  Testcase derived from gcc.

//===---------------------------------------------------------------------===//
Missed instcombine or reassociate transformation:
int a(int a, int b) { return (a==12)&(b>47)&(b<58); }

The sgt and slt should be combined into a single comparison. Testcase derived
from gcc.

//===---------------------------------------------------------------------===//
Missed instcombine transformation:

  %382 = srem i32 %tmp14.i, 64                    ; [#uses=1]
  %383 = zext i32 %382 to i64                     ; [#uses=1]
  %384 = shl i64 %381, %383                       ; [#uses=1]
  %385 = icmp slt i32 %tmp14.i, 64                ; [#uses=1]

The srem can be transformed to an and because if %tmp14.i is negative, the
shift is undefined.  Testcase derived from 403.gcc.

//===---------------------------------------------------------------------===//
This is a range comparison on a divided result (from 403.gcc):

  %1337 = sdiv i32 %1336, 8                       ; [#uses=1]
  %.off.i208 = add i32 %1336, 7                   ; [#uses=1]
  %1338 = icmp ult i32 %.off.i208, 15             ; [#uses=1]

We already catch this (removing the sdiv) if there isn't an add; we should
handle the 'add' as well.  This is a common idiom with its builtin_alloca code.
C testcase:

int a(int x) { return (unsigned)(x/16+7) < 15; }

Another similar case involves truncations on 64-bit targets:

  %361 = sdiv i64 %.046, 8                        ; [#uses=1]
  %362 = trunc i64 %361 to i32                    ; [#uses=2]
  ...
  %367 = icmp eq i32 %362, 0                      ; [#uses=1]

//===---------------------------------------------------------------------===//
Missed instcombine/dagcombine transformation:

define void @lshift_lt(i8 zeroext %a) nounwind {
entry:
  %conv = zext i8 %a to i32
  %shl = shl i32 %conv, 3
  %cmp = icmp ult i32 %shl, 33
  br i1 %cmp, label %if.then, label %if.end

if.then:
  tail call void @bar() nounwind
  ret void

if.end:
  ret void
}
declare void @bar() nounwind

The shift should be eliminated.  Testcase derived from gcc.
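Source-level view of what the combine should produce (a sketch; bar as declared
above): since %a is only 8 bits, (a << 3) < 33 is the same as a < 5.

extern void bar(void);

void lshift_lt_opt(unsigned char a) {
  if (a < 5)     /* (a << 3) < 33  <=>  a <= 4 */
    bar();
}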
//===---------------------------------------------------------------------===//
These compile into different code, one gets recognized as a switch and the
other doesn't due to phase ordering issues (PR6212):

int test1(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  else if (mainType == 9)
    subType = 4;
  else if (mainType == 11)
    subType = 4;
  return subType;
}

int test2(int mainType, int subType) {
  switch (mainType) {
  case 7:
  case 9:
  case 11:
    subType = 4;
  }
  return subType;
}

//===---------------------------------------------------------------------===//
The following test case (from PR6576):

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %cond1 = icmp eq i32 %b, 0                      ; <i1> [#uses=1]
  br i1 %cond1, label %exit, label %bb.nph
bb.nph:                                           ; preds = %entry
  %tmp = mul i32 %b, %a                           ; <i32> [#uses=1]
  br label %exit
exit:                                             ; preds = %bb.nph, %entry
  %conv = phi i32 [ %tmp, %bb.nph ], [ 0, %entry ]
  ret i32 %conv
}

could be reduced to:

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %tmp = mul i32 %b, %a
  ret i32 %tmp
}

//===---------------------------------------------------------------------===//
We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.

Another interesting case is that something related could be used for variables
that go const after their ctor has finished.  In these cases, globalopt (which
can statically run the constructor) could mark the global const (so it gets put
in the readonly section).  A testcase would be:

#include <complex>
using namespace std;
const complex<char> should_be_in_rodata (42,-42);
complex<char> should_be_in_data (42,-42);
complex<char> should_be_in_bss;

Where we currently evaluate the ctors but the globals don't become const because
the optimizer doesn't know they "become const" after the ctor is done.  See
GCC PR4131 for more examples.

//===---------------------------------------------------------------------===//
In this code:

int foo(int x) {
  return x > 1 ? x : 1;
}

LLVM emits a comparison with 1 instead of 0.  0 would be equivalent
and cheaper on most targets.

LLVM prefers comparisons with zero over non-zero in general, but in this
case it chooses instead to keep the max operation obvious.

//===---------------------------------------------------------------------===//
Take the following testcase on x86-64 (similar testcases exist for all targets
with addc/adde):

define void @a(i64* nocapture %s, i64* nocapture %t, i64 %a, i64 %b,
               i64 %c) nounwind {
entry:
  %0 = zext i64 %a to i128                        ; <i128> [#uses=1]
  %1 = zext i64 %b to i128                        ; <i128> [#uses=1]
  %2 = add i128 %1, %0                            ; <i128> [#uses=2]
  %3 = zext i64 %c to i128                        ; <i128> [#uses=1]
  %4 = shl i128 %3, 64                            ; <i128> [#uses=1]
  %5 = add i128 %4, %2                            ; <i128> [#uses=1]
  %6 = lshr i128 %5, 64                           ; <i128> [#uses=1]
  %7 = trunc i128 %6 to i64                       ; <i64> [#uses=1]
  store i64 %7, i64* %s, align 8
  %8 = trunc i128 %2 to i64                       ; <i64> [#uses=1]
  store i64 %8, i64* %t, align 8
  ret void
}

//===---------------------------------------------------------------------===//
Switch lowering generates less than ideal code for the following switch:

define void @a(i32 %x) nounwind {
entry:
  switch i32 %x, label %if.end [
    i32 0, label %if.then
    i32 1, label %if.then
    i32 2, label %if.then
    i32 3, label %if.then
    i32 5, label %if.then
  ]
if.then:                                          ; preds = %entry
  tail call void @foo() nounwind
  ret void

if.end:                                           ; preds = %entry
  ret void
}
declare void @foo()

Generated code on x86-64 (other platforms give similar results):

	...

The movl+movl+btq+jb could be simplified to a cmpl+jne.

Or, if we wanted to be really clever, we could simplify the whole thing to
something like the following, which eliminates a branch:

	...

//===---------------------------------------------------------------------===//
We compile this:

int foo(int a) { return (a & (~15)) / 16; }

into:

define i32 @foo(i32 %a) nounwind readnone ssp {
entry:
  %and = and i32 %a, -16
  %div = sdiv i32 %and, 16
  ret i32 %div
}

but this code (X & -A)/A is X >> log2(A) when A is a power of 2, so this case
should be instcombined into just "a >> 4".

We do get this at the codegen level, so something knows about it, but
instcombine should catch it earlier:

_foo:                                   ## @foo
	movl	%edi, %eax
	sarl	$4, %eax
	ret

//===---------------------------------------------------------------------===//
This code (from GCC PR28685):

int test(int a, int b) {
  int lt = a < b;
  int eq = a == b;
  if (lt)
    return 1;
  return eq;
}

produces this IR:

define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %cmp = icmp slt i32 %a, %b
  br i1 %cmp, label %return, label %if.end

if.end:                                           ; preds = %entry
  %cmp5 = icmp eq i32 %a, %b
  %conv6 = zext i1 %cmp5 to i32
  br label %return

return:                                           ; preds = %if.end, %entry
  %retval.0 = phi i32 [ %conv6, %if.end ], [ 1, %entry ]
  ret i32 %retval.0
}

it could be:

define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = icmp sle i32 %a, %b
  %retval = zext i1 %0 to i32
  ret i32 %retval
}

//===---------------------------------------------------------------------===//
This code can be seen in viterbi:

  %64 = call noalias i8* @malloc(i64 %62) nounwind
  ...
  %67 = call i64 @llvm.objectsize.i64(i8* %64, i1 false) nounwind
  %68 = call i8* @__memset_chk(i8* %64, i32 0, i64 %62, i64 %67) nounwind

llvm.objectsize.i64 should be taught about malloc/calloc, allowing it to
fold to %62.  This is a security win (overflows of malloc will get caught)
and also a performance win by exposing more memsets to the optimizer.

This occurs several times in viterbi.

Note that this would change the semantics of @llvm.objectsize which by its
current definition always folds to a constant.  We also should make sure that
we remove checking in code like

  char *p = malloc(strlen(s)+1);
  __strcpy_chk(p, s, __builtin_objectsize(p, 0));

//===---------------------------------------------------------------------===//
This code (from Benchmarks/Dhrystone/dry.c):

define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
entry:
  %sext = shl i32 %0, 24
  %conv = ashr i32 %sext, 24
  %sext6 = shl i32 %1, 24
  %conv4 = ashr i32 %sext6, 24
  %cmp = icmp eq i32 %conv, %conv4
  %. = select i1 %cmp, i32 10000, i32 0
  ret i32 %.
}

Should be simplified into something like:

define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
entry:
  %sext = shl i32 %0, 24
  %conv = and i32 %sext, 0xFF000000
  %sext6 = shl i32 %1, 24
  %conv4 = and i32 %sext6, 0xFF000000
  %cmp = icmp eq i32 %conv, %conv4
  %. = select i1 %cmp, i32 10000, i32 0
  ret i32 %.
}

and then to:

define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
entry:
  %conv = and i32 %0, 0xFF
  %conv4 = and i32 %1, 0xFF
  %cmp = icmp eq i32 %conv, %conv4
  %. = select i1 %cmp, i32 10000, i32 0
  ret i32 %.
}

//===---------------------------------------------------------------------===//
clang -O3 currently compiles this code

int g(unsigned int a) {
  unsigned int c[100];
  c[10] = a;
  c[11] = a;
  unsigned int b = c[10] + c[11];
  if(b > a*2) a = 4;
  else a = 8;
  return a + 7;
}

into

define i32 @g(i32 %a) nounwind readnone {
entry:
  %add = shl i32 %a, 1
  %mul = shl i32 %a, 1
  %cmp = icmp ugt i32 %add, %mul
  %a.addr.0 = select i1 %cmp, i32 11, i32 15
  ret i32 %a.addr.0
}

The icmp should fold to false.  This CSE opportunity is only available
after GVN and InstCombine have run.

//===---------------------------------------------------------------------===//
memcpyopt should turn this:

define i8* @test10(i32 %x) {
  %alloc = call noalias i8* @malloc(i32 %x) nounwind
  call void @llvm.memset.p0i8.i32(i8* %alloc, i8 0, i32 %x, i32 1, i1 false)
  ret i8* %alloc
}

into a call to calloc.  We should make sure that we analyze calloc as
aggressively as malloc though.
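The target form of the transform at the C level (a sketch with a hypothetical
wrapper): calloc yields the zeroed allocation in one call, which is exactly
what the malloc+memset pair computes.

#include <stdlib.h>

void *zeroed_alloc(size_t x) {
  return calloc(1, x);   /* malloc(x) + memset(p, 0, x) in one call */
}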
//===---------------------------------------------------------------------===//
clang -O3 doesn't optimize this:

void f1(int* begin, int* end) {
  std::fill(begin, end, 0);
}

into a memset.  This is PR8942.

//===---------------------------------------------------------------------===//
2038 clang -O3 -fno-exceptions currently compiles this code:
2041 std::vector<int> v(N);
2043 extern void sink(void*); sink(&v);
2048 define void @_Z1fi(i32 %N) nounwind {
2050 %v2 = alloca [3 x i32*], align 8
2051 %v2.sub = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 0
2052 %tmpcast = bitcast [3 x i32*]* %v2 to %"class.std::vector"*
2053 %conv = sext i32 %N to i64
2054 store i32* null, i32** %v2.sub, align 8, !tbaa !0
2055 %tmp3.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 1
2056 store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
2057 %tmp4.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 2
2058 store i32* null, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
2059 %cmp.i.i.i.i = icmp eq i32 %N, 0
2060 br i1 %cmp.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i, label %cond.true.i.i.i.i
2062 _ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i: ; preds = %entry
2063 store i32* null, i32** %v2.sub, align 8, !tbaa !0
2064 store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
2065 %add.ptr.i5.i.i = getelementptr inbounds i32* null, i64 %conv
2066 store i32* %add.ptr.i5.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
2067 br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit
2069 cond.true.i.i.i.i: ; preds = %entry
2070 %cmp.i.i.i.i.i = icmp slt i32 %N, 0
2071 br i1 %cmp.i.i.i.i.i, label %if.then.i.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i
2073 if.then.i.i.i.i.i: ; preds = %cond.true.i.i.i.i
2074 call void @_ZSt17__throw_bad_allocv() noreturn nounwind
2077 _ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i: ; preds = %cond.true.i.i.i.i
2078 %mul.i.i.i.i.i = shl i64 %conv, 2
2079 %call3.i.i.i.i.i = call noalias i8* @_Znwm(i64 %mul.i.i.i.i.i) nounwind
2080 %0 = bitcast i8* %call3.i.i.i.i.i to i32*
2081 store i32* %0, i32** %v2.sub, align 8, !tbaa !0
2082 store i32* %0, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
2083 %add.ptr.i.i.i = getelementptr inbounds i32* %0, i64 %conv
2084 store i32* %add.ptr.i.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
2085 call void @llvm.memset.p0i8.i64(i8* %call3.i.i.i.i.i, i8 0, i64 %mul.i.i.i.i.i, i32 4, i1 false)
2086 br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit
2088 This is just the handling the construction of the vector. Most surprising here
2089 is the fact that all three null stores in %entry are dead (because we do no
2092 Also surprising is that %conv isn't simplified to 0 in %....exit.thread.i.i.
2093 This is a because the client of LazyValueInfo doesn't simplify all instruction
2094 operands, just selected ones.
//===---------------------------------------------------------------------===//

clang -O3 -fno-exceptions currently compiles this code:

void f(char* a, int n) {
  __builtin_memset(a, 0, n);
  for (int i = 0; i < n; ++i)
    a[i] = 0;
}

into:

define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
entry:
  %conv = sext i32 %n to i64
  tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
  %cmp8 = icmp sgt i32 %n, 0
  br i1 %cmp8, label %for.body.lr.ph, label %for.end

for.body.lr.ph:                                   ; preds = %entry
  %tmp10 = add i32 %n, -1
  %tmp11 = zext i32 %tmp10 to i64
  %tmp12 = add i64 %tmp11, 1
  call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %tmp12, i32 1, i1 false)
  br label %for.end

for.end:                                          ; preds = %for.body.lr.ph, %entry
  ret void
}

This shouldn't need the ((zext (%n - 1)) + 1) game, and it should ideally fold
the two memsets together. The issue with %n seems to stem from poor handling
of the original loop.

To simplify this, we need SCEV to know that "n != 0" because of the dominating
conditional. That would turn the second memset into a simple memset of 'n'.
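
Ideally the whole function would collapse to a single memset (a sketch of the
desired end state):

#include <string.h>

void f_lowered(char* a, int n) {
  // The builtin call and the loop zero the same n bytes, so one memset
  // suffices; guard the conversion since a non-positive n writes nothing
  // in the loop.
  if (n > 0)
    memset(a, 0, (size_t)n);
}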
//===---------------------------------------------------------------------===//

clang -O3 -fno-exceptions currently compiles this code:

struct S {
  unsigned short m1, m2;
  unsigned char m3, m4;
};

void f(int N) {
  std::vector<S> v(N);
  extern void sink(void*); sink(&v);
}

into poor code for zero-initializing 'v' when N > 0. The problem is that S is
only 6 bytes, but each element is 8-byte aligned. We generate a loop with four
stores per iteration. If the struct were 8 bytes, this would be turned into a
memset.

In order to handle this we have to:

A) Teach clang to generate metadata for memsets of structs that have holes in
   them.
B) Teach clang to use such a memset for zero init of this struct (since it has
   a hole), instead of doing elementwise zeroing.
//===---------------------------------------------------------------------===//

clang -O3 currently compiles this code:

extern const int magic;
double f() { return 0.0 * magic; }

into

@magic = external constant i32

define double @_Z1fv() nounwind readnone {
entry:
  %tmp = load i32* @magic, align 4, !tbaa !0
  %conv = sitofp i32 %tmp to double
  %mul = fmul double %conv, 0.000000e+00
  ret double %mul
}

We should be able to fold away this fmul to 0.0. More generally, fmul(x, 0.0)
can be folded to 0.0 if we can prove that the LHS is not -0.0, not a NaN, and
not an Inf. The CannotBeNegativeZero predicate in value tracking should be
extended to support general "fpclassify" operations that can return
yes/no/unknown for each of these predicates.

In this predicate, we know that uitofp is trivially never NaN or -0.0, and we
know that it isn't +/-Inf if the floating point type has enough exponent bits
to represent the largest integer value as < inf.
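
A minimal sketch of what such a tri-state predicate might look like (the
interface and names here are hypothetical, not LLVM's actual value-tracking
API):

#include "llvm/IR/Instructions.h"
using namespace llvm;

enum class Tri { Yes, No, Unknown };

// Hypothetical result of an "fpclassify"-style query; Tri::No means
// "provably not".
struct FPFacts {
  Tri IsNaN = Tri::Unknown;
  Tri IsNegZero = Tri::Unknown;
  Tri IsInf = Tri::Unknown;
};

static FPFacts classifyFP(const Value *V) {
  FPFacts F;
  if (auto *UI = dyn_cast<UIToFPInst>(V)) {
    // uitofp never produces NaN or -0.0.
    F.IsNaN = Tri::No;
    F.IsNegZero = Tri::No;
    // It can't produce an Inf if every value of the source integer type is
    // finite in the destination FP type; e.g. any integer of <= 64 bits
    // converts to a finite (possibly rounded) IEEE double.
    unsigned SrcBits = UI->getOperand(0)->getType()->getIntegerBitWidth();
    if (UI->getType()->isDoubleTy() && SrcBits <= 64)
      F.IsInf = Tri::No;
  }
  return F;
}

With something like this, the fmul(x, 0.0) -> 0.0 fold would fire exactly when
all three facts come back Tri::No.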
//===---------------------------------------------------------------------===//

When optimizing a transformation that can change the sign of 0.0 (such as the
0.0*val -> 0.0 transformation above), it might be provable that the sign of the
result doesn't matter. For example, by the above rules we can't transform
fmul(sitofp(x), 0.0) into 0.0, because x might be -1, in which case the result
of the expression is defined to be -0.0.

If we look at the uses of the fmul, however, we might be able to prove that
none of them care about the sign of zero. For example, if we have:

  fadd(fmul(sitofp(x), 0.0), 2.0)

then since x+2.0 doesn't care about the sign of any zeros in x, we can
transform the fmul to 0.0, and then the fadd to 2.0.
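
A sketch of the use-scanning idea (a hypothetical helper written against the
LLVM C++ API, handling only the fadd-with-constant case from the example):

#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Conservatively returns true if no user of V can observe whether a zero
// result of V is +0.0 or -0.0.
static bool usersIgnoreSignOfZero(const Value *V) {
  for (const User *U : V->users()) {
    if (auto *BO = dyn_cast<BinaryOperator>(U)) {
      // fadd z, C with a non-zero constant C yields the same value for
      // z == +0.0 and z == -0.0. (Only the canonical constant-on-RHS form
      // is checked here.)
      if (BO->getOpcode() == Instruction::FAdd)
        if (auto *C = dyn_cast<ConstantFP>(BO->getOperand(1)))
          if (!C->isZero())
            continue;
    }
    return false;  // any other use might care about the sign; give up
  }
  return true;
}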
//===---------------------------------------------------------------------===//

We should enhance memcpy/memmove/memset to allow a metadata node on them
indicating that some bytes of the transfer are undefined. This is useful for
frontends like clang when lowering struct copies, when some elements of the
struct are undefined. Consider something like this:

struct x {
  char a;
  int b[4];
};
void foo(struct x*P);
struct x testfunc() {
  struct x V1, V2;
  foo(&V1);
  V2 = V1;

  return V2;
}

We currently compile this to:
$ clang t.c -S -o - -O0 -emit-llvm | opt -scalarrepl -S

%struct.x = type { i8, [4 x i32] }

define void @testfunc(%struct.x* sret %agg.result) nounwind ssp {
entry:
  %V1 = alloca %struct.x, align 4
  call void @foo(%struct.x* %V1)
  %tmp1 = bitcast %struct.x* %V1 to i8*
  %0 = bitcast %struct.x* %V1 to i160*
  %srcval1 = load i160* %0, align 4
  %tmp2 = bitcast %struct.x* %agg.result to i8*
  %1 = bitcast %struct.x* %agg.result to i160*
  store i160 %srcval1, i160* %1, align 4
  ret void
}

This happens because SRoA sees that the temp alloca is being memcpy'd into and
out of, and that it has holes, so it has to be conservative. If we knew about
the holes, this could be much better.

Having information about these holes would also improve memcpy (etc) lowering
at llc time when it gets inlined, because we can use smaller transfers. This
also avoids partial register stalls in some important cases.
//===---------------------------------------------------------------------===//

We don't fold (icmp (add) (add)) unless the two adds only have a single use.
There are a lot of cases that we're refusing to fold, e.g. in 256.bzip2:

  %indvar.next90 = add i64 %indvar89, 1     ;; Has 2 uses
  %tmp96 = add i64 %tmp95, 1                ;; Has 1 use
  %exitcond97 = icmp eq i64 %indvar.next90, %tmp96

We don't fold this because we don't want to introduce an overlapped live range
of the induction variable. However, we can make this more aggressive without
causing performance problems in two ways:

1. If *either* the LHS or RHS has a single use, we can definitely do the
   transformation: in the overlapping-liverange case we're trading one register
   use for one fewer operation, which is a reasonable trade. Before doing this
   we should verify that the llc output actually shrinks for some benchmarks.
   (See the sketch after this list.)
2. If both adds have multiple uses, we can still fold the icmp if both
   operations are sinkable to *after* it (e.g. into a subsequent block), which
   doesn't increase register pressure.
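
Here is a hypothetical sketch of case 1, in the same drop-into-visitICmp style
as the debugging blob below. It is restricted to equality predicates, where
cancelling the shared operand is safe even with wrapping, and is not the actual
InstCombine code:

{ Value *A, *B, *C, *D;
  // icmp eq/ne (add A, B), (add A, D) --> icmp eq/ne B, D
  // Adding A is invertible modulo 2^n, so equality is preserved regardless
  // of overflow. Only fire when either add has a single use, so at most one
  // new overlapped live range is created. (Commuted forms, A == D etc., are
  // omitted for brevity.)
  if (I.isEquality() &&
      match(Op0, m_Add(m_Value(A), m_Value(B))) &&
      match(Op1, m_Add(m_Value(C), m_Value(D))) &&
      A == C &&
      (Op0->hasOneUse() || Op1->hasOneUse()))
    return new ICmpInst(I.getPredicate(), B, D);
}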
There are a ton of icmps we aren't simplifying because of this reg-pressure
concern. Care is warranted here though, because many of these are induction
variables and other cases that matter a lot to performance, like the above.
Here's a blob of code that you can drop into the bottom of visitICmp to see
some of the cases we currently miss:

{ Value *A, *B, *C, *D;
  if (match(Op0, m_Add(m_Value(A), m_Value(B))) &&
      match(Op1, m_Add(m_Value(C), m_Value(D))) &&
      (A == C || A == D || B == C || B == D)) {
    errs() << "OP0 = " << *Op0 << "  U=" << Op0->getNumUses() << "\n";
    errs() << "OP1 = " << *Op1 << "  U=" << Op1->getNumUses() << "\n";
    errs() << "CMP = " << I << "\n\n";
  }
}

//===---------------------------------------------------------------------===//