1 Target Independent Opportunities:
3 //===---------------------------------------------------------------------===//
5 Dead argument elimination should be enhanced to handle cases when an argument is
6 dead to an externally visible function. Though the argument can't be removed
7 from the externally visible function, the caller doesn't need to pass it in.
8 For example in this testcase:
10 void foo(int X) __attribute__((noinline));
11 void foo(int X) { sideeffect(); }
12 void bar(int A) { foo(A+1); }
define void @bar(i32 %A) nounwind ssp {
  %0 = add nsw i32 %A, 1                ; <i32> [#uses=1]
  tail call void @foo(i32 %0) nounwind noinline ssp
  ret void
}
The add is dead; we could pass in 'i32 undef' instead. This occurs for C++
templates etc., which usually have linkonce_odr/weak_odr linkage, not internal
linkage.
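A sketch of the transformed caller, assuming the enhancement described above
(the callee keeps its signature; only the call site changes):

define void @bar(i32 %A) nounwind ssp {
  tail call void @foo(i32 undef) nounwind noinline ssp
  ret void
}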
26 //===---------------------------------------------------------------------===//
28 With the recent changes to make the implicit def/use set explicit in
29 machineinstrs, we should change the target descriptions for 'call' instructions
30 so that the .td files don't list all the call-clobbered registers as implicit
31 defs. Instead, these should be added by the code generator (e.g. on the dag).
33 This has a number of uses:
35 1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
36 for their different impdef sets.
37 2. Targets with multiple calling convs (e.g. x86) which have different clobber
38 sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.
42 //===---------------------------------------------------------------------===//
Make the PPC branch selector target independent.
46 //===---------------------------------------------------------------------===//
Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :) This isn't
safe in general, even on darwin. See the libm implementation of hypot for
examples (which special case when x/y are exactly zero to get signed zeros etc
right).
54 //===---------------------------------------------------------------------===//
Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

compiles to

fn1:
	movl Y, %eax
	shll $3, %eax
	orl X, %eax
	movl %eax, X
	ret
74 The problem is the store's chain operand is not the load X but rather
75 a TokenFactor of the load X and load Y, which prevents the folding.
77 There are two ways to fix this:
79 1. The dag combiner can start using alias analysis to realize that y/x
80 don't alias, making the store to X not dependent on the load from Y.
81 2. The generated isel could be made smarter in the case it can't
82 disambiguate the pointers.
84 Number 1 is the preferred solution.
86 This has been "fixed" by a TableGen hack. But that is a short term workaround
87 which will be removed once the proper fix is made.
89 //===---------------------------------------------------------------------===//
On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.
103 //===---------------------------------------------------------------------===//
Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0), where Phi is the
address of the sign-carrying high byte of *P.
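A sketch of the shrunken form in IR, assuming a little-endian target (so the
sign-carrying byte of *P sits at byte offset 3; the value names are
illustrative):

  %P8  = bitcast i32* %P to i8*
  %Phi = getelementptr i8* %P8, i32 3     ; address of the high byte
  %hi  = load i8* %Phi
  %lt  = icmp slt i8 %hi, 0               ; same result as (setlt (loadi32 P), 0)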
107 //===---------------------------------------------------------------------===//
109 Reassociate should turn things like:
int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.
118 First, the intrinsic needs to be extended to support integers, and second the
119 code generator needs to be enhanced to lower these to multiplication trees.
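For instance, X**8 needs only three multiplies via repeated squaring, which is
what a balanced-tree lowering of llvm.powi(X, 8) would produce (a sketch):

int powi8(int X) {
  int X2 = X * X;    // X^2
  int X4 = X2 * X2;  // X^4
  return X4 * X4;    // X^8: 3 multiplies instead of 7
}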
121 //===---------------------------------------------------------------------===//
123 Interesting? testcase for add/shift/mul reassoc:
int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}
This is blocked on not handling X*X*X -> powi(X, 3) (see note above). The issue
is that we end up getting t = 2*X; s = t*t, and don't turn this into 4*X*X,
which is the same number of multiplies and is canonical, because the 2*X has
multiple uses. Here's a simple example:
define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47   ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}
144 //===---------------------------------------------------------------------===//
146 Reassociate should handle the example in GCC PR16157:
148 extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
149 void f () { /* this can be optimized to four additions... */
150 b4 = a4 + a3 + a2 + a1 + a0;
b3 = a3 + a2 + a1 + a0;
b2 = a2 + a1 + a0;
b1 = a1 + a0;
}
156 This requires reassociating to forms of expressions that are already available,
157 something that reassoc doesn't think about yet.
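A sketch of the shared form reassociation could produce (four additions, each
result reusing the previous one):

b1 = a1 + a0;
b2 = a2 + b1;
b3 = a3 + b2;
b4 = a4 + b3;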
159 //===---------------------------------------------------------------------===//
161 These two functions should generate the same code on big-endian systems:
163 int g(int *j,int *l) { return memcmp(j,l,4); }
164 int h(int *j, int *l) { return *j - *l; }
This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1, 2, 4, and 8 bytes.
169 //===---------------------------------------------------------------------===//
171 It would be nice to revert this patch:
172 http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html
And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
cases as well.
178 //===---------------------------------------------------------------------===//
For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative as the alignment of
specific vector types is target dependent.
184 //===---------------------------------------------------------------------===//
186 We should produce an unaligned load from code like this:
v4sf example(float *P) {
  return (v4sf){ P[0], P[1], P[2], P[3] };
}
192 //===---------------------------------------------------------------------===//
Add support for conditional increments and other related patterns. Instead of
a conditional move or increment, we currently emit a compare-and-branch
sequence; an excerpt of what we generate today:

	je LBB16_2   # cond_next
210 //===---------------------------------------------------------------------===//
212 Combine: a = sin(x), b = cos(x) into a,b = sincos(x).
214 Expand these to calls of sin/cos and stores:
215 double sincos(double x, double *sin, double *cos);
216 float sincosf(float x, float *sin, float *cos);
217 long double sincosl(long double x, long double *sin, long double *cos);
219 Doing so could allow SROA of the destination pointers. See also:
220 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
222 This is now easily doable with MRVs. We could even make an intrinsic for this
223 if anyone cared enough about sincos.
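A sketch of the expansion direction, which exposes the stores to SROA (the
function name is illustrative; sincos is the library function declared above):

#include <math.h>

void sincos_expanded(double x, double *sinp, double *cosp) {
  /* sincos(x, sinp, cosp) expands to: */
  *sinp = sin(x);
  *cosp = cos(x);
}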
225 //===---------------------------------------------------------------------===//
Turn this into a single byte store with no load (the other 3 bytes are
unmodified):
define void @test(i32* %P) {
  %tmp = load i32* %P
  %tmp14 = or i32 %tmp, 3305111552
  %tmp15 = and i32 %tmp14, 3321888767
  store i32 %tmp15, i32* %P
  ret void
}
238 //===---------------------------------------------------------------------===//
240 quantum_sigma_x in 462.libquantum contains the following loop:
242 for(i=0; i<reg->size; i++)
244 /* Flip the target bit of each basis state */
245 reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
248 Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
249 so cool to turn it into something like:
251 long long Res = ((MAX_UNSIGNED) 1 << target);
253 for(i=0; i<reg->size; i++)
254 reg->node[i].state ^= Res & 0xFFFFFFFFULL;
256 for(i=0; i<reg->size; i++)
  reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
260 ... which would only do one 32-bit XOR per loop iteration instead of two.
It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but proving that requires stronger alias analysis.
265 //===---------------------------------------------------------------------===//
267 This isn't recognized as bswap by instcombine (yes, it really is bswap):
unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}
277 //===---------------------------------------------------------------------===//
281 These idioms should be recognized as popcount (see PR1488):
unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c = 0;
  while (a) {
    c++;
    a &= a - 1;
  }
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}
312 This is a form of idiom recognition for loops, the same thing that could be
313 useful for recognizing memset/memcpy.
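Once recognized, each of these could be lowered to the existing ctpop
intrinsic (a sketch):

declare i32 @llvm.ctpop.i32(i32)

  %count = call i32 @llvm.ctpop.i32(i32 %v)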
315 //===---------------------------------------------------------------------===//
These should turn into single 16-bit (unaligned?) loads on little/big endian
machines:
unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}

unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}
327 //===---------------------------------------------------------------------===//
-instcombine should handle this transform:
   icmp pred (sdiv X, C1), C2
when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.
333 Currently InstCombine avoids this transform but will do it when the signs of
334 the operands and the sign of the divide match. See the FIXME in
335 InstructionCombining.cpp in the visitSetCondInst method after the switch case
336 for Instruction::UDiv (around line 4447) for more details.
The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.
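For instance, with everything known non-negative the divide can fold into the
compare; a sketch with illustrative constants:

  icmp ult (udiv X, 10), 5  -->  icmp ult X, 50

since X/10 < 5 holds exactly when X < 50 for unsigned X.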
341 //===---------------------------------------------------------------------===//
viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass.
349 //===---------------------------------------------------------------------===//
SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
opportunities in its double_array_divs_variable function: it needs loop
interchange, memory promotion (which LICM already does), vectorization, and
loop unrolling (it has a constant trip count). ICC apparently produces this
very nice code with -ffast-math:
359 ..B1.70: # Preds ..B1.70 ..B1.69
360 mulpd %xmm0, %xmm1 #108.2
361 mulpd %xmm0, %xmm1 #108.2
362 mulpd %xmm0, %xmm1 #108.2
363 mulpd %xmm0, %xmm1 #108.2
365 cmpl $131072, %edx #108.2
366 jb ..B1.70 # Prob 99% #108.2
It would be better to count down to zero, but this is a lot better than what we
do.
371 //===---------------------------------------------------------------------===//
375 typedef unsigned U32;
376 typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
381 int b2 = (temp >> 16) & 0xf;
382 effective_addr2 = temp & 0xfff;
383 if (b2) effective_addr2 += regs[b2];
384 b2 = (temp >> 12) & 0xf;
385 if (b2) effective_addr2 += regs[b2];
386 effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}
392 Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
393 we don't eliminate the computation of the top half of effective_addr2 because
394 we don't have whole-function selection dags. On x86, this means we use one
395 extra register for the function when effective_addr2 is declared as U64 than
396 when it is declared U32.
398 PHI Slicing could be extended to do this.
400 //===---------------------------------------------------------------------===//
402 LSR should know what GPR types a target has from TargetData. This code:
404 volatile short X, Y; // globals
void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}
produces two nearly identical IVs (after promotion) on PPC/ARM:
421 add r2, r2, #1 <- [0,+,1]
422 sub r0, r0, #1 <- [0,-,1]
426 LSR should reuse the "+" IV for the exit test.
428 //===---------------------------------------------------------------------===//
430 Tail call elim should be more aggressive, checking to see if the call is
431 followed by an uncond branch to an exit block.
433 ; This testcase is due to tail-duplication not wanting to copy the return
434 ; instruction into the terminating blocks because there was other code
435 ; optimized out of the function after the taildup happened.
436 ; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call
define i32 @t4(i32 %a) {
entry:
440 %tmp.1 = and i32 %a, 1 ; <i32> [#uses=1]
441 %tmp.2 = icmp ne i32 %tmp.1, 0 ; <i1> [#uses=1]
442 br i1 %tmp.2, label %then.0, label %else.0
444 then.0: ; preds = %entry
445 %tmp.5 = add i32 %a, -1 ; <i32> [#uses=1]
  %tmp.3 = call i32 @t4( i32 %tmp.5 )   ; <i32> [#uses=1]
  br label %return
449 else.0: ; preds = %entry
450 %tmp.7 = icmp ne i32 %a, 0 ; <i1> [#uses=1]
451 br i1 %tmp.7, label %then.1, label %return
453 then.1: ; preds = %else.0
454 %tmp.11 = add i32 %a, -2 ; <i32> [#uses=1]
  %tmp.9 = call i32 @t4( i32 %tmp.11 )   ; <i32> [#uses=1]
  br label %return
458 return: ; preds = %then.1, %else.0, %then.0
  %result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ], [ %tmp.9, %then.1 ]
  ret i32 %result.0
}
464 //===---------------------------------------------------------------------===//
466 Tail recursion elimination should handle:
int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}
474 Also, multiplies can be turned into SHL's, so they should be handled as if
475 they were associative. "return foo() << 1" can be tail recursion eliminated.
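A sketch of the iterative form TRE could produce for pow2m1, accumulating the
2*acc+1 chain (pow2m1_iter is an illustrative name, and the n == 0 base case
is as shown above):

int pow2m1_iter(int n) {
  int acc = 0;              /* base case value */
  while (n != 0) {
    acc = 2 * acc + 1;      /* fold one level of the recursion */
    n = n - 1;
  }
  return acc;
}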
477 //===---------------------------------------------------------------------===//
Argument promotion should promote arguments for recursive functions, like
this:
482 ; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val
define internal i32 @foo(i32* %x) {
entry:
  %tmp = load i32* %x   ; <i32> [#uses=0]
  %tmp.foo = call i32 @foo( i32* %x )   ; <i32> [#uses=1]
  ret i32 %tmp.foo
}
define i32 @bar(i32* %x) {
entry:
  %tmp3 = call i32 @foo( i32* %x )   ; <i32> [#uses=1]
  ret i32 %tmp3
}
497 //===---------------------------------------------------------------------===//
We should investigate an instruction sinking pass. Consider this silly example
in pic mode; an excerpt of the code we currently generate for it:

	je LBB1_2   # cond_true
523 The PIC base computation (call+popl) is only used on one path through the
524 code, but is currently always computed in the entry block. It would be
525 better to sink the picbase computation down into the block for the
526 assertion, as it is the only one that uses it. This happens for a lot of
527 code with early outs.
529 Another example is loads of arguments, which are usually emitted into the
530 entry block on targets like x86. If not used in all paths through a
531 function, they should be sunk into the ones that do.
533 In this case, whole-function-isel would also handle this.
535 //===---------------------------------------------------------------------===//
537 Investigate lowering of sparse switch statements into perfect hash tables:
538 http://burtleburtle.net/bob/hash/perfect.html
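A sketch of the idea; here x % 7 happens to be collision-free for the case
values and stands in for a generated perfect hash, and handle_case /
handle_default are hypothetical stand-ins for the case bodies:

void handle_case(unsigned h);
void handle_default(void);

/* switch (x) { case 5: ... case 13: ... case 72: ... case 91: ... } */
void lower_switch(int x) {
  static const int key[7] = { 91, 0, 72, 0, 0, 5, 13 };
  unsigned h = (unsigned)x % 7;   /* collision-free for these four keys */
  if (key[h] == x)
    handle_case(h);               /* dense dispatch on h, e.g. a jump table */
  else
    handle_default();
}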
540 //===---------------------------------------------------------------------===//
542 We should turn things like "load+fabs+store" and "load+fneg+store" into the
543 corresponding integer operations. On a yonah, this loop:
double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}
553 is twice as slow as this loop:
long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}
and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.
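The fabs case is analogous, clearing the sign bit instead of flipping it (a
sketch):

  a[i] &= ~(1ULL << 63);   /* fabs on the bit pattern of a double */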
567 //===---------------------------------------------------------------------===//
569 DAG Combiner should try to combine small loads into larger loads when
570 profitable. For example, we compile this C++ example:
572 struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
573 extern THotKey m_HotKey;
574 THotKey GetHotKey () { return m_HotKey; }
576 into (-O3 -fno-exceptions -static -fomit-frame-pointer):
581 movb _m_HotKey+3, %cl
582 movb _m_HotKey+4, %dl
583 movb _m_HotKey+2, %ch
598 movzwl _m_HotKey+4, %edx
602 The LLVM IR contains the needed alignment info, so we should be able to
603 merge the loads and stores into 4-byte loads:
605 %struct.THotKey = type { i16, i8, i8, i8 }
606 define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
608 %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
609 %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
610 %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
611 %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2
Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.
617 //===---------------------------------------------------------------------===//
619 We should add an FRINT node to the DAG to model targets that have legal
620 implementations of ceil/floor/rint.
622 //===---------------------------------------------------------------------===//
Consider a function whose body contains this large local initializer:

long long input[8] = {1,1,1,1,1,1,1,1};
631 We currently compile this into a memcpy from a global array since the
632 initializer is fairly large and not memset'able. This is good, but the memcpy
633 gets lowered to load/stores in the code generator. This is also ok, except
634 that the codegen lowering for memcpy doesn't handle the case when the source
635 is a constant global. This gives us atrocious code like this:
640 movl _C.0.1444-"L1$pb"+32(%eax), %ecx
642 movl _C.0.1444-"L1$pb"+20(%eax), %ecx
644 movl _C.0.1444-"L1$pb"+36(%eax), %ecx
646 movl _C.0.1444-"L1$pb"+44(%eax), %ecx
648 movl _C.0.1444-"L1$pb"+40(%eax), %ecx
650 movl _C.0.1444-"L1$pb"+12(%eax), %ecx
652 movl _C.0.1444-"L1$pb"+4(%eax), %ecx
664 //===---------------------------------------------------------------------===//
666 http://llvm.org/PR717:
The following code should compile into "ret int undef"; instead, LLVM
produces "ret int 0". See the PR for the testcase.
678 //===---------------------------------------------------------------------===//
680 The loop unroller should partially unroll loops (instead of peeling them)
681 when code growth isn't too bad and when an unroll count allows simplification
682 of some code within the loop. One trivial example is:
688 for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
697 Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
698 reduction in code size. The resultant code would then also be suitable for
699 exit value computation.
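For instance, with a hypothetical loop body that branches on the low bit (the
original body is elided above; odd()/even() are illustrative stand-ins),
unrolling by 2 makes both parity tests constant:

for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
  if ( nLoop & 1 ) odd(); else even();
}

unrolls to:

for ( nLoop = 0; nLoop < 1000; nLoop += 2 ) {
  even();   /* nLoop & 1 is known 0 */
  odd();    /* (nLoop + 1) & 1 is known 1 */
}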
701 //===---------------------------------------------------------------------===//
703 We miss a bunch of rotate opportunities on various targets, including ppc, x86,
704 etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
705 matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:
unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 9;
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 10;
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}
740 On X86-64, we only handle f2/f3/f4 right. On x86-32, a few of these
741 generate truly horrible code, instead of using shld and friends. On
742 ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
743 badness. PPC64 misses f, f5 and f6. CellSPU aborts in isel.
745 //===---------------------------------------------------------------------===//
747 We do a number of simplifications in simplify libcalls to strength reduce
748 standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)+1) -> strcpy(a,b). This can
only be done safely if "b" isn't modified between the strlen and memcpy, of
course.
752 //===---------------------------------------------------------------------===//
754 We compile this program: (from GCC PR11680)
755 http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487
Into code that runs at the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):
760 $ llvm-g++ perf.cpp -O3 -fno-exceptions
762 1.821u 0.003s 0:01.82 100.0% 0+0k 0+0io 0pf+0w
764 $ g++ perf.cpp -O3 -fno-exceptions
766 0.821u 0.001s 0:00.82 100.0% 0+0k 0+0io 0pf+0w
768 It looks like we are making the same inlining decisions, so this may be raw
769 codegen badness or something else (haven't investigated).
771 //===---------------------------------------------------------------------===//
773 We miss some instcombines for stuff like this:
void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2).  */
  if ((a >> 2) >= 3)
    bar ();
}
781 A few other related ones are in GCC PR14753.
783 //===---------------------------------------------------------------------===//
785 Divisibility by constant can be simplified (according to GCC PR12849) from
786 being a mulhi to being a mul lo (cheaper). Testcase:
void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}
793 This is equivalent to the following, where 2863311531 is the multiplicative
794 inverse of 3, and 1431655766 is ((2^32)-1)/3+1:
void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}
800 The same transformation can work with an even modulo with the addition of a
801 rotate: rotate the result of the multiply to the right by the number of bits
802 which need to be zero for the condition to be true, and shrink the compare RHS
803 by the same amount. Unless the target supports rotates, though, that
804 transformation probably isn't worthwhile.
806 The transformation can also easily be made to work with non-zero equality
807 comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".
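A quick sanity check of the constants:

  n = 9:  9 * 2863311531 = 25769803779, which is 3 (mod 2^32); 3 < 1431655766,
          so divisible.
  n = 10: 10 * 2863311531 = 2863311534 (mod 2^32), which is >= 1431655766,
          so not divisible.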
809 //===---------------------------------------------------------------------===//
811 Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
812 bunch of other stuff from this example (see PR1604):
822 std::scanf("%d", &t.val);
823 std::printf("%d\n", t.val);
826 //===---------------------------------------------------------------------===//
828 These functions perform the same computation, but produce different assembly.
830 define i8 @select(i8 %x) readnone nounwind {
831 %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}
836 define i8 @addshr(i8 %x) readnone nounwind {
837 %A = zext i8 %x to i9
  %B = add i9 %A, 6   ;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}
844 //===---------------------------------------------------------------------===//
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}

int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}
857 Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
858 "clang -emit-llvm-bc | opt -std-compile-opts".
860 //===---------------------------------------------------------------------===//
863 #define PMD_MASK (~((1UL << 23) - 1))
864 void clear_pmd_range(unsigned long start, unsigned long end)
866 if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
The expression should optimize to something like
"!((start|end) & ~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
873 //===---------------------------------------------------------------------===//
879 return (n >= 0 ? 1 : -1);
881 Should combine to (n >> 31) | 1. Currently not optimized with "clang
882 -emit-llvm-bc | opt -std-compile-opts | llc".
884 //===---------------------------------------------------------------------===//
888 if (variable == 4 || variable == 6)
891 This should optimize to "if ((variable | 2) == 6)". Currently not
892 optimized with "clang -emit-llvm-bc | opt -std-compile-opts | llc".
894 //===---------------------------------------------------------------------===//
unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return
i;}
898 unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
899 These should combine to the same thing. Currently, the first function
900 produces better code on X86.
902 //===---------------------------------------------------------------------===//
905 #define abs(x) x>0?x:-x
int f(int x) {
  return (abs(x)) >= 0;
}

This should optimize to x != INT_MIN. (With -fwrapv.) Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
913 //===---------------------------------------------------------------------===//
int
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}

int
minus_cst (unsigned int a)
{
  unsigned int tem;

  tem = 20 - a;
  if (tem > 5)
    bar ();
}

int
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15.  */
  if ((a & ~7) > 8)
    bar ();
}

int
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23.  */
  if ((a >> 2) > 5)
    bar ();
}
All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
950 //===---------------------------------------------------------------------===//
953 int c(int* x) {return (char*)x+2 == (char*)x;}
954 Should combine to 0. Currently not optimized with "clang
955 -emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).
957 //===---------------------------------------------------------------------===//
959 int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
960 Should be combined to "((b >> 1) | b) & 1". Currently not optimized
961 with "clang -emit-llvm-bc | opt -std-compile-opts".
963 //===---------------------------------------------------------------------===//
965 unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
966 Should combine to "x | (y & 3)". Currently not optimized with "clang
967 -emit-llvm-bc | opt -std-compile-opts".
969 //===---------------------------------------------------------------------===//
971 int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
972 Should fold to "(~a & c) | (a & b)". Currently not optimized with
973 "clang -emit-llvm-bc | opt -std-compile-opts".
975 //===---------------------------------------------------------------------===//
977 int a(int a,int b) {return (~(a|b))|a;}
978 Should fold to "a|~b". Currently not optimized with "clang
979 -emit-llvm-bc | opt -std-compile-opts".
981 //===---------------------------------------------------------------------===//
983 int a(int a, int b) {return (a&&b) || (a&&!b);}
984 Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
985 | opt -std-compile-opts".
987 //===---------------------------------------------------------------------===//
989 int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
990 Should fold to "a ? b : c", or at least something sane. Currently not
991 optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
993 //===---------------------------------------------------------------------===//
995 int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
996 Should fold to a && (b || c). Currently not optimized with "clang
997 -emit-llvm-bc | opt -std-compile-opts".
999 //===---------------------------------------------------------------------===//
1001 int a(int x) {return x | ((x & 8) ^ 8);}
1002 Should combine to x | 8. Currently not optimized with "clang
1003 -emit-llvm-bc | opt -std-compile-opts".
1005 //===---------------------------------------------------------------------===//
1007 int a(int x) {return x ^ ((x & 8) ^ 8);}
1008 Should also combine to x | 8. Currently not optimized with "clang
1009 -emit-llvm-bc | opt -std-compile-opts".
1011 //===---------------------------------------------------------------------===//
1013 int a(int x) {return (x & 8) == 0 ? -1 : -9;}
1014 Should combine to (x | -9) ^ 8. Currently not optimized with "clang
1015 -emit-llvm-bc | opt -std-compile-opts".
1017 //===---------------------------------------------------------------------===//
1019 int a(int x) {return (x & 8) == 0 ? -9 : -1;}
1020 Should combine to x | -9. Currently not optimized with "clang
1021 -emit-llvm-bc | opt -std-compile-opts".
1023 //===---------------------------------------------------------------------===//
1025 int a(int x) {return ((x | -9) ^ 8) & x;}
1026 Should combine to x & -9. Currently not optimized with "clang
1027 -emit-llvm-bc | opt -std-compile-opts".
1029 //===---------------------------------------------------------------------===//
1031 unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
1032 Should combine to "a * 0x88888888 >> 31". Currently not optimized
1033 with "clang -emit-llvm-bc | opt -std-compile-opts".
1035 //===---------------------------------------------------------------------===//
1037 unsigned a(char* x) {if ((*x & 32) == 0) return b();}
1038 There's an unnecessary zext in the generated code with "clang
1039 -emit-llvm-bc | opt -std-compile-opts".
1041 //===---------------------------------------------------------------------===//
1043 unsigned a(unsigned long long x) {return 40 * (x >> 1);}
1044 Should combine to "20 * (((unsigned)x) & -2)". Currently not
1045 optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
1047 //===---------------------------------------------------------------------===//
1049 This was noticed in the entryblock for grokdeclarator in 403.gcc:
1051 %tmp = icmp eq i32 %decl_context, 4
1052 %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
1053 %tmp1 = icmp eq i32 %decl_context_addr.0, 1
1054 %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0
tmp1 should be simplified to something like:
   (decl_context == 1)
since decl_context == 1 already implies that tmp is false.
This allows recursive simplifications; tmp1 is used all over the place in
1060 the function, e.g. by:
1062 %tmp23 = icmp eq i32 %decl_context_addr.1, 0 ; <i1> [#uses=1]
1063 %tmp24 = xor i1 %tmp1, true ; <i1> [#uses=1]
1064 %or.cond8 = and i1 %tmp23, %tmp24 ; <i1> [#uses=1]
1068 //===---------------------------------------------------------------------===//
1072 Store sinking: This code:
void f (int n, int *cond, int *res) {
  int i;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}
1082 On this function GVN hoists the fully redundant value of *res, but nothing
1083 moves the store out. This gives us this code:
1085 bb: ; preds = %bb2, %entry
1086 %.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
1087 %i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
1088 %1 = load i32* %cond, align 4
1089 %2 = icmp eq i32 %1, 0
  br i1 %2, label %bb2, label %bb1

bb1:   ; preds = %bb
  %3 = xor i32 %.rle, 234
  store i32 %3, i32* %res, align 4
  br label %bb2
1097 bb2: ; preds = %bb, %bb1
1098 %.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
1099 %indvar.next = add i32 %i.05, 1
1100 %exitcond = icmp eq i32 %indvar.next, %n
1101 br i1 %exitcond, label %return, label %bb
1103 DSE should sink partially dead stores to get the store out of the loop.
1105 Here's another partial dead case:
1106 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395
1108 //===---------------------------------------------------------------------===//
1110 Scalar PRE hoists the mul in the common block up to the else:
int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}
1122 It would be better to do the mul once to reduce codesize above the if.
1123 This is GCC PR38204.
1125 //===---------------------------------------------------------------------===//
GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.
1151 We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
1152 we don't sink the store. We need partially dead store sinking.
1154 //===---------------------------------------------------------------------===//
1156 [LOAD PRE CRIT EDGE SPLITTING]
1158 GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
1159 leading to excess stack traffic. This could be handled by GVN with some crazy
1160 symbolic phi translation. The code we get looks like (g is on the stack):
1164 %9 = getelementptr %struct.f* %g, i32 0, i32 0
  store i32 %8, i32* %9, align 4
  br label %bb3
1167 bb3: ; preds = %bb1, %bb2, %bb
1168 %c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
1169 %b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
1170 %10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
1171 %11 = load i32* %10, align 4
%11 is partially redundant; in BB2 it should have the value %8.
1175 GCC PR33344 and PR35287 are similar cases.
1178 //===---------------------------------------------------------------------===//
1182 There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
1183 GCC testsuite, ones we don't get yet are (checked through loadpre25):
1185 [CRIT EDGE BREAKING]
1186 loadpre3.c predcom-4.c
1188 [PRE OF READONLY CALL]
1191 [TURN SELECT INTO BRANCH]
1192 loadpre14.c loadpre15.c
1194 actually a conditional increment: loadpre18.c loadpre19.c
1197 //===---------------------------------------------------------------------===//
There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.
1203 //===---------------------------------------------------------------------===//
1205 There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
1206 GCC testsuite. For example, we get the first example in predcom-1.c, but
1207 miss the second one:
1212 __attribute__ ((noinline))
1213 void count_averages(int n) {
1215 for (i = 1; i < n; i++)
1216 avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
1219 which compiles into two loads instead of one in the loop.
1221 predcom-2.c is the same as predcom-1.c
predcom-3.c is very similar but needs loads feeding each other instead of
store->load.
1227 //===---------------------------------------------------------------------===//
1231 Type based alias analysis:
1232 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705
We should do better analysis of posix_memalign. At the least it should mark
its pointer argument nocapture; at best, we should know that the out-value
result doesn't point to anything (like malloc). One example of this is in
1237 SingleSource/Benchmarks/Misc/dt.c
1239 //===---------------------------------------------------------------------===//
1241 A/B get pinned to the stack because we turn an if/then into a select instead
1242 of PRE'ing the load/store. This may be fixable in instcombine:
1243 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892
1245 struct X { int i; };
1259 //===---------------------------------------------------------------------===//
1261 Interesting missed case because of control flow flattening (should be 2 loads):
1262 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
1263 With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
1264 opt -mem2reg -gvn -instcombine | llvm-dis
1265 we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
1266 VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS
1268 //===---------------------------------------------------------------------===//
1270 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
1271 We could eliminate the branch condition here, loading from null is undefined:
1273 struct S { int w, x, y, z; };
1274 struct T { int r; struct S s; };
1275 void bar (struct S, int);
1276 void foo (int a, struct T b)
1284 //===---------------------------------------------------------------------===//
1286 simplifylibcalls should do several optimizations for strspn/strcspn:
strcspn(x, "") -> strlen(x)
strcspn("", x) -> 0
strspn(x, "")  -> 0
strspn("", x)  -> 0
strcspn(x, "a") -> strchr(x, 'a')-x  (when 'a' is known to occur in x)

strcspn(x, "a") can also be expanded to an inlined loop for up to 3 letters
(similarly for strspn):
size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}
This should turn into a switch on the character. See PR3253 for some notes on
this.
1308 456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.
1310 //===---------------------------------------------------------------------===//
1312 "gas" uses this idiom:
1313 else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
else if (strchr ("<>", *intel_parser.op_string))
1317 Those should be turned into a switch.
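A sketch of the first test as a switch (is_op_char is an illustrative helper):

int is_op_char(char c) {
  switch (c) {
  case '+': case '-': case '/': case '*': case '%': case '|': case '&':
  case '^': case ':': case '[': case ']': case '(': case ')': case '~':
    return 1;   /* strchr ("+-/*%|&^:[]()~", c) would be non-null */
  default:
    return 0;
  }
}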
1319 //===---------------------------------------------------------------------===//
1321 252.eon contains this interesting code:
1323 %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
1324 %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
1325 %strlen = call i32 @strlen(i8* %3072) ; uses = 1
1326 %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
1327 call void @llvm.memcpy.i32(i8* %endptr,
1328 i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
1329 %3074 = call i32 @strlen(i8* %endptr) nounwind readonly
This is interesting for a couple of reasons. First, in this:
1333 %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
1334 %strlen = call i32 @strlen(i8* %3072)
The strlen could be removed: strcpy returns its first argument, so %3073 is
just %3072. If the call were emitted as stpcpy, which returns a pointer to the
end of the copied string, %strlen would simply be the difference between that
result and %3072, and the endptr GEP would fold to the stpcpy result,
eliminating a strlen call and a GEP.
Second, the strlen call that follows the memcpy can be replaced with:
1342 %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly
because the memcpy just wrote that known string into the buffer. This,
in turn, can be constant folded to "4".
1347 In other code, it contains:
1349 %endptr6978 = bitcast i8* %endptr69 to i32*
1350 store i32 7107374, i32* %endptr6978, align 1
1351 %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly
1353 Which could also be constant folded. Whatever is producing this should probably
1354 be fixed to leave this as a memcpy from a string.
1356 Further, eon also has an interesting partially redundant strlen call:
1358 bb8: ; preds = %_ZN18eonImageCalculatorC1Ev.exit
1359 %682 = getelementptr i8** %argv, i32 6 ; <i8**> [#uses=2]
1360 %683 = load i8** %682, align 4 ; <i8*> [#uses=4]
1361 %684 = load i8* %683, align 1 ; <i8> [#uses=1]
1362 %685 = icmp eq i8 %684, 0 ; <i1> [#uses=1]
1363 br i1 %685, label %bb10, label %bb9
1366 %686 = call i32 @strlen(i8* %683) nounwind readonly
1367 %687 = icmp ugt i32 %686, 254 ; <i1> [#uses=1]
1368 br i1 %687, label %bb10, label %bb11
1370 bb10: ; preds = %bb9, %bb8
1371 %688 = call i32 @strlen(i8* %683) nounwind readonly
1373 This could be eliminated by doing the strlen once in bb8, saving code size and
1374 improving perf on the bb8->9->10 path.
1376 //===---------------------------------------------------------------------===//
1378 I see an interesting fully redundant call to strlen left in 186.crafty:InputMove
1380 %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0
1383 bb62: ; preds = %bb55, %bb53
1384 %promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
1385 %171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
1386 %172 = add i32 %171, -1 ; <i32> [#uses=1]
1387 %173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
1390 br i1 %or.cond, label %bb65, label %bb72
1392 bb65: ; preds = %bb62
1393 store i8 0, i8* %173, align 1
1396 bb72: ; preds = %bb65, %bb62
1397 %trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
1398 %177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call. At worst, we could shove the %177 strlen call
up into the bb65 block, moving it out of the bb62->bb72 path. However, note
that bb65 stores to the string, zeroing out the last byte. This means that on
that path the value of %177 is actually just %171-1. A sub is cheaper than a
strlen!
This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.
1414 //===---------------------------------------------------------------------===//
1416 186.crafty contains this interesting pattern:
1418 %77 = call i8* @strstr(i8* getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0),
1420 %phitmp648 = icmp eq i8* %77, getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0)
1421 br i1 %phitmp648, label %bb70, label %bb76
1423 bb70: ; preds = %OptionMatch.exit91, %bb69
1424 %78 = call i32 @strlen(i8* %30) nounwind readonly align 1 ; <i32> [#uses=1]
This is basically:

  if (strstr(cststr, P) == cststr) {
The strstr call would be significantly cheaper written as:

  if (memcmp(P, cststr, strlen(P)) == 0)

This is memcmp+strlen instead of strstr. This also makes the strlen fully
redundant.
1441 //===---------------------------------------------------------------------===//
1443 186.crafty also contains this code:
1445 %1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
1446 %1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
1447 %1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
1448 %1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
1449 %1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909
If the strcpy were emitted as stpcpy, the last strlen (%1909) would be
computable as %1908 - @pgn_event, which would make %1910 equal to %1908.
1453 //===---------------------------------------------------------------------===//
1455 186.crafty has this interesting pattern with the "out.4543" variable:
1457 call void @llvm.memcpy.i32(
1458 i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
1459 i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
  %101 = call i32 @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind
1462 It is basically doing:
1464 memcpy(globalarray, "string");
1465 printf(..., globalarray);
1467 Anyway, by knowing that printf just reads the memory and forward substituting
1468 the string directly into the printf, this eliminates reads from globalarray.
1469 Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
1470 other similar functions) there are many stores to "out". Once all the printfs
1471 stop using "out", all that is left is the memcpy's into it. This should allow
1472 globalopt to remove the "stored only" global.
1474 //===---------------------------------------------------------------------===//
define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}
could be dagcombine'd to a sign-extending load with a shift: on x86, a movsbl
from memory followed by a 32-bit arithmetic shift, instead of the byte load,
byte shift, and separate sign extension we currently emit.
1497 //===---------------------------------------------------------------------===//
1501 int test(int x) { return 1-x == x; } // --> return false
1502 int test2(int x) { return 2-x == x; } // --> return x == 1 ?
Always foldable for odd constants: C-x == x requires 2x == C, which has no
solution when C is odd. For even C, 2x == C (mod 2^32) has two solutions
(C/2 and C/2 + 2^31), so what is the rule for even?
1506 //===---------------------------------------------------------------------===//
1508 PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
1509 for next field in struct (which is at same address).
For example: store of float into { {{}}, float } could be turned into a store
to the float directly.
1517 double foo(double a) { return sin(a); }
This compiles into this on x86-64 Linux:

foo:
	subq	$8, %rsp
	call	sin
	addq	$8, %rsp
	ret

when it could simply be a tail call:

foo:
	jmp	sin
1530 //===---------------------------------------------------------------------===//
1532 The arg promotion pass should make use of nocapture to make its alias analysis
1533 stuff much more precise.
1535 //===---------------------------------------------------------------------===//
1537 The following functions should be optimized to use a select instead of a
1538 branch (from gcc PR40072):
1540 char char_int(int m) {if(m>7) return 0; return m;}
1541 int int_char(char m) {if(m>7) return 0; return m;}
1543 //===---------------------------------------------------------------------===//
1545 int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }
This compiles to:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
1551 %0 = and i32 %a, 128 ; <i32> [#uses=1]
1552 %1 = icmp eq i32 %0, 0 ; <i1> [#uses=1]
1553 %2 = or i32 %b, 128 ; <i32> [#uses=1]
1554 %3 = and i32 %b, -129 ; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2   ; <i32> [#uses=1]
  ret i32 %b_addr.0
}
1559 However, it's functionally equivalent to:
1561 b = (b & ~0x80) | (a & 0x80);
1563 Which generates this:
1565 define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
1567 %0 = and i32 %b, -129 ; <i32> [#uses=1]
1568 %1 = and i32 %a, 128 ; <i32> [#uses=1]
  %2 = or i32 %0, %1   ; <i32> [#uses=1]
  ret i32 %2
}
1573 This can be generalized for other forms:
1575 b = (b & ~0x80) | (a & 0x40) << 1;
1577 //===---------------------------------------------------------------------===//
1579 These two functions produce different code. They shouldn't:
uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return b;
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return b;
}
1594 define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
1596 %0 = and i8 %b, 63 ; <i8> [#uses=1]
1597 %1 = and i8 %a, -64 ; <i8> [#uses=1]
  %2 = or i8 %1, %0   ; <i8> [#uses=1]
  ret i8 %2
}
1602 define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
1604 %0 = and i8 %b, 63 ; <i8> [#uses=1]
1605 %.masked = and i8 %a, 64 ; <i8> [#uses=1]
1606 %1 = and i8 %a, -128 ; <i8> [#uses=1]
1607 %2 = or i8 %1, %0 ; <i8> [#uses=1]
  %3 = or i8 %2, %.masked   ; <i8> [#uses=1]
  ret i8 %3
}
1612 //===---------------------------------------------------------------------===//
IPSCCP does not currently propagate argument dependent constants through
functions where it does not know all of the callers. This includes functions
with normal external linkage as well as templates, C99 inline functions, etc.
Specifically, it does nothing to:
1619 define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
1621 %0 = add nsw i32 %y, %z
1624 %3 = add nsw i32 %1, %2
1628 define i32 @test2() nounwind {
1630 %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant. Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.
1640 //===---------------------------------------------------------------------===//
1642 The libcall constant folding stuff should be moved out of SimplifyLibcalls into
1643 libanalysis' constantfolding logic. This would allow IPSCCP to be able to
1644 handle simple things like this:
1646 static int foo(const char *X) { return strlen(X); }
1647 int bar() { return foo("abcd"); }
1649 //===---------------------------------------------------------------------===//
1651 InstCombine should use SimplifyDemandedBits to remove the or instruction:
define i1 @test(i8 %x, i8 %y) {
  %A = or i8 %x, 1
  %B = icmp ugt i8 %A, 3
  ret i1 %B
}
1659 Currently instcombine calls SimplifyDemandedBits with either all bits or just
1660 the sign bit, if the comparison is obviously a sign test. In this case, we only
1661 need all but the bottom two bits from %A, and if we gave that mask to SDB it
1662 would delete the or instruction for us.
1664 //===---------------------------------------------------------------------===//
1666 functionattrs doesn't know much about memcpy/memset. This function should be
1667 marked readnone rather than readonly, since it only twiddles local memory, but
1668 functionattrs doesn't handle memset/memcpy/memmove aggressively:
1670 struct X { int *p; int *q; };
1677 p = __builtin_memcpy (&x, &y, sizeof (int *));
1681 //===---------------------------------------------------------------------===//
1683 Missed instcombine transformation:
1684 define i1 @a(i32 %x) nounwind readnone {
1686 %cmp = icmp eq i32 %x, 30
1687 %sub = add i32 %x, -30
1688 %cmp2 = icmp ugt i32 %sub, 9
  %or = or i1 %cmp, %cmp2
  ret i1 %or
}
1692 This should be optimized to a single compare. Testcase derived from gcc.
1694 //===---------------------------------------------------------------------===//
1696 Missed instcombine transformation:
1698 void a(int x) { if (((1<<x)&8)==0) b(); }
The shift should be optimized out: the condition is equivalent to x != 3.
Testcase derived from gcc.
1702 //===---------------------------------------------------------------------===//
1704 Missed instcombine or reassociate transformation:
1705 int a(int a, int b) { return (a==12)&(b>47)&(b<58); }
The sgt and slt should be combined into a single comparison, e.g.
(unsigned)(b - 48) < 10. Testcase derived from gcc.
1712 Missed instcombine transformation:
1713 define i32 @a(i32 %x) nounwind readnone {
1715 %shr = lshr i32 %x, 5 ; <i32> [#uses=1]
1716 %xor = xor i32 %shr, 67108864 ; <i32> [#uses=1]
  %sub = add i32 %xor, -67108864   ; <i32> [#uses=1]
  ret i32 %sub
}
1721 This function is equivalent to "ashr i32 %x, 5". Testcase derived from gcc.
1723 //===---------------------------------------------------------------------===//
isSafeToLoadUnconditionally should allow a GEP of a global/alloca with constant
indices within the bounds of the allocated object. Reduced example:
1728 const int a[] = {3,6};
int b(int y) { const int* x = y ? &a[0] : &a[1]; return *x; }
1731 All the loads should be eliminated. Testcase derived from gcc.
1733 //===---------------------------------------------------------------------===//