Target Independent Opportunities:

//===---------------------------------------------------------------------===//
Dead argument elimination should be enhanced to handle cases when an argument
is dead to an externally visible function. Though the argument can't be
removed from the externally visible function, the caller doesn't need to pass
it in. For example, in this testcase:

  void foo(int X) __attribute__((noinline));
  void foo(int X) { sideeffect(); }
  void bar(int A) { foo(A+1); }

We compile bar to:

define void @bar(i32 %A) nounwind ssp {
  %0 = add nsw i32 %A, 1		; <i32> [#uses=1]
  tail call void @foo(i32 %0) nounwind noinline ssp
  ret void
}

The add is dead; we could pass in 'i32 undef' instead. This occurs for C++
templates etc., which usually have linkonce_odr/weak_odr linkage, not internal
linkage.

//===---------------------------------------------------------------------===//
With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call'
instructions so that the .td files don't list all the call-clobbered registers
as implicit defs. Instead, these should be added by the code generator (e.g.
on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call
   instructions for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber
   sets of calls.

//===---------------------------------------------------------------------===//
Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//
Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (ffastmath). Misc/mandel will like this. :) This isn't
safe in general, even on darwin. See the libm implementation of hypot for
examples (which special-case when x/y are exactly zero to get signed zeros etc.
right).

//===---------------------------------------------------------------------===//
Solve this DAG isel folding deficiency:

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//
On targets with expensive 64-bit multiply, we could LSR this:

  for (i = ...; ++i) {
    x = 1ULL << i;

into:

  long long tmp = 1;
  for (i = ...; ++i, tmp+=tmp)
    x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//
Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)

//===---------------------------------------------------------------------===//
Reassociate should turn things like:

int factorial(int X) {
   return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

First, the intrinsic needs to be extended to support integers, and second, the
code generator needs to be enhanced to lower these to multiplication trees.
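For reference, a balanced tree for X**8 needs only three multiplies instead of
seven. A minimal sketch of the lowering (hypothetical helper, not the
intrinsic itself):

int powi8(int X) {
  int X2 = X * X;    /* X^2: 1 multiply */
  int X4 = X2 * X2;  /* X^4: 2 multiplies */
  return X4 * X4;    /* X^8: 3 multiplies instead of 7 */
}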
//===---------------------------------------------------------------------===//
Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

This is blocked on not handling X*X*X -> powi(X, 3) (see note above). The
issue is that we end up getting t = 2*X, s = t*t, and don't turn this into
4*X*X, which is the same number of multiplies and is canonical, because the
2*X has multiple uses. Here's a simple example:

define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47		; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}

//===---------------------------------------------------------------------===//
Reassociate should handle the example in GCC PR16157:

extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
void f () {  /* this can be optimized to four additions... */
	b4 = a4 + a3 + a2 + a1 + a0;
	b3 = a3 + a2 + a1 + a0;
	b2 = a2 + a1 + a0;
	b1 = a1 + a0;
	b0 = a0;
}

This requires reassociating to forms of expressions that are already available,
something that reassoc doesn't think about yet.
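For reference, the four-addition form reuses each previous result (this is the
optimization the PR asks for, not code we emit today):

void f () {
	b0 = a0;
	b1 = a1 + b0;
	b2 = a2 + b1;
	b3 = a3 + b2;
	b4 = a4 + b3;  /* four additions total */
}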
//===---------------------------------------------------------------------===//
These two functions should generate the same code on big-endian systems:

  int g(int *j, int *l) { return memcmp(j, l, 4); }
  int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.

//===---------------------------------------------------------------------===//
It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//
For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative, as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//
We should produce an unaligned load from code like this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}

//===---------------------------------------------------------------------===//
Add support for conditional increments, and other related patterns. Instead
of branching around the increment:

	je LBB16_2	# cond_next

the increment could be done with a conditional move.

//===---------------------------------------------------------------------===//
Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs. We could even make an intrinsic for this
if anyone cared enough about sincos.
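A minimal sketch of the caller-side combine (sincos is a GNU extension; the
exact prototype is libc-specific):

#define _GNU_SOURCE
#include <math.h>

void polar(double x, double *s, double *c) {
  /* Before: two transcendental calls, sin(x) and cos(x).
     After: one combined call computes both results. */
  sincos(x, s, c);
}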
//===---------------------------------------------------------------------===//
Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

define void @test(i32* %P) {
  %tmp = load i32* %P
  %tmp14 = or i32 %tmp, 3305111552
  %tmp15 = and i32 %tmp14, 3321888767
  store i32 %tmp15, i32* %P
  ret void
}

//===---------------------------------------------------------------------===//
dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.

For example:

  int t = __builtin_clz(x);

//===---------------------------------------------------------------------===//
quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i].

//===---------------------------------------------------------------------===//
This should be optimized to one 'and' and one 'or', from PR4216:

define i32 @test_bitfield(i32 %bf.prev.low) nounwind ssp {
entry:
  %bf.prev.lo.cleared10 = or i32 %bf.prev.low, 32962	; <i32> [#uses=1]
  %0 = and i32 %bf.prev.low, -65536	; <i32> [#uses=1]
  %1 = and i32 %bf.prev.lo.cleared10, 40186	; <i32> [#uses=1]
  %2 = or i32 %1, %0	; <i32> [#uses=1]
  ret i32 %2
}

//===---------------------------------------------------------------------===//
This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//
These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c=0;
  while(a) {
    c++;
    a &= a - 1;
  }
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

This is a form of idiom recognition for loops, the same thing that could be
useful for recognizing memset/memcpy.
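All of these loops compute the same thing as the GCC builtin, which lowers to
a single instruction on targets with hardware popcount:

unsigned countbits_builtin(unsigned v) {
  return __builtin_popcount(v);  /* what the idiom recognizer should produce */
}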
//===---------------------------------------------------------------------===//
These should turn into single 16-bit (unaligned?) loads on little/big endian
processors:

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}
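A sketch of the desired codegen, in C (assumes a little-endian target for
read_16_le; read_16_be would additionally need a byte swap):

#include <string.h>

unsigned short read_16_le_opt(const unsigned char *adr) {
  unsigned short v;
  memcpy(&v, adr, 2);  /* one (possibly unaligned) 16-bit load */
  return v;
}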
//===---------------------------------------------------------------------===//
-instcombine should handle this transform:
   icmp pred (sdiv X / C1), C2
when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//
viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass.

//===---------------------------------------------------------------------===//
Consider:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags. On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.

//===---------------------------------------------------------------------===//
LSR should know what GPR types a target has from TargetData. This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two near identical IV's (after promotion) on PPC/ARM:

	add r2, r2, #1 <- [0,+,1]
	sub r0, r0, #1 <- [0,-,1]

LSR should reuse the "+" IV for the exit test.

//===---------------------------------------------------------------------===//
Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
	                    [ %tmp.9, %then.1 ]
	ret i32 %result.0
}

//===---------------------------------------------------------------------===//
Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative. "return foo() << 1" can be tail recursion eliminated.
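A sketch of the accumulator transformation TRE could apply here, treating the
2*x+1 update as associative (hypothetical transformed function):

int pow2m1_iter(int n) {
  int acc = 0;
  while (n != 0) {
    acc = 2 * acc + 1;  /* fold one recursive level into the accumulator */
    --n;
  }
  return acc;
}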
//===---------------------------------------------------------------------===//
Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}

//===---------------------------------------------------------------------===//
We should investigate an instruction sinking pass. Consider this silly
example in pic mode:

	je	LBB1_2	# cond_true

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//
Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

//===---------------------------------------------------------------------===//
We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a yonah, this loop over an array of
double:

  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];

is twice as slow as this loop over an array of long long holding the same
bits:

  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//
DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer):

	movb	_m_HotKey+3, %cl
	movb	_m_HotKey+4, %dl
	movb	_m_HotKey+2, %ch
	...
	movzwl	_m_HotKey+4, %edx
	...

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

	%struct.THotKey = type { i16, i8, i8, i8 }
	define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
	...
	%tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
	%tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
	%tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
	%tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.

//===---------------------------------------------------------------------===//
We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//
long long input[8] = {1,1,1,1,1,1,1,1};

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able. This is good, but the memcpy
gets lowered to load/stores in the code generator. This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global. This gives us atrocious code like this:

	movl	_C.0.1444-"L1$pb"+32(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+20(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+36(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+44(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+40(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+12(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+4(%eax), %ecx
	...

//===---------------------------------------------------------------------===//
http://llvm.org/PR717:

The following code should compile into "ret int undef". Instead, LLVM
produces "ret int 0":

//===---------------------------------------------------------------------===//
The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop. One trivial example is:

    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size. The resultant code would then also be suitable for
exit value computation.
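A sketch of the unrolled-by-2 form (assuming the body guards on nLoop & 1 as
above); each copy's parity is statically known, so the test folds away:

    for ( nLoop = 0; nLoop < 1000; nLoop += 2 ) {
        nRet -= 1;   /* even iteration: nLoop & 1 is 0 */
        nRet += 2;   /* odd iteration: nLoop & 1 is 1 */
    }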
//===---------------------------------------------------------------------===//
We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 12;  /* some constant */
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 12;  /* some constant */
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f2/f3/f4 right. On x86-32, a few of these
generate truly horrible code, instead of using shld and friends. On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness. PPC64 misses f, f5 and f6. CellSPU aborts in isel.

//===---------------------------------------------------------------------===//
We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy. This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.

//===---------------------------------------------------------------------===//
We generate a horrible libcall for llvm.powi. For example, we compile:

	double f(double a) { return std::pow(a, 4); }

into:

	movsd	16(%esp), %xmm0
	movsd	%xmm0, (%esp)
	movl	$4, 8(%esp)
	call	L___powidf2$stub

GCC produces this:

	movsd	16(%esp), %xmm0
	mulsd	%xmm0, %xmm0
	mulsd	%xmm0, %xmm0
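What GCC is doing, in C: repeated squaring needs only two multiplies for an
exponent of 4:

double f_expanded(double a) {
  double a2 = a * a;  /* a^2 */
  return a2 * a2;     /* a^4 */
}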
//===---------------------------------------------------------------------===//
We compile this program (from GCC PR11680):
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//
We miss some instcombines for stuff like this:

void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2). */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.

//===---------------------------------------------------------------------===//
Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mul lo (cheaper). Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

This is equivalent to the following, where 2863311531 is the multiplicative
inverse of 3, and 1431655766 is ((2^32)-1)/3+1:

void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}

The same transformation can work with an even modulo with the addition of a
rotate: rotate the result of the multiply to the right by the number of bits
which need to be zero for the condition to be true, and shrink the compare RHS
by the same amount. Unless the target supports rotates, though, that
transformation probably isn't worthwhile.

The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".
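A sketch of the even-modulo variant for n % 6 == 0 (constants computed here:
0xAAAAAAAB is the inverse of 3 mod 2^32, and 0x2AAAAAAA is ((2^32)-1)/6):

unsigned ror1(unsigned x) {
  return (x >> 1) | (x << 31);  /* rotate right by one bit */
}
int divisible_by_6(unsigned n) {
  /* Multiply by the inverse of the odd factor, then rotate the bit that
     must be zero (from the factor of two) into the top position. */
  return ror1(n * 0xAAAAAAABU) <= 0x2AAAAAAAU;
}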
//===---------------------------------------------------------------------===//
Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

   std::scanf("%d", &t.val);
   std::printf("%d\n", t.val);

//===---------------------------------------------------------------------===//
These functions perform the same computation, but produce different assembly:

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6	;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
#define PMD_MASK (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
        if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
                ...
}

The expression should optimize to something like
"!((start|end)&~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
return (n >= 0 ? 1 : -1);

Should combine to (n >> 31) | 1. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//
if (variable == 4 || variable == 6)
  ...

This should optimize to "if ((variable | 2) == 6)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//
unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
These should combine to the same thing. Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//
#define abs(x) x>0?x:-x
int f(int x) {
  return (abs(x)) >= 0;
}
This should optimize to x != INT_MIN: with -fwrapv, abs(INT_MIN) wraps back to
INT_MIN, which is negative, so the comparison is false only there. Currently
not optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
unsigned int
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}

unsigned int
minus_cst (unsigned int a)
{
  unsigned int tem;

  tem = 20 - a;
  if (tem == 5)
    bar ();
}

unsigned int
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15. */
  if ((a & ~7) > 8)
    bar ();
}

unsigned int
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23. */
  if ((a >> 2) > 5)
    bar ();
}

All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//
int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)". Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane. Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -1 : -9;}
Should combine to (x | -9) ^ 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -9 : -1;}
Should combine to x | -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
        (!tmp && decl_context == 1)

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0		; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true		; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24		; <i1> [#uses=1]

//===---------------------------------------------------------------------===//
Store sinking: This code:

void f (int n, int *cond, int *res) {
  int i;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out. This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.
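In C terms, the desired result keeps the value in a register and stores once
after the loop (a sketch; only legal when res is known dereferenceable and no
other thread observes *res):

void f_sunk (int n, int *cond, int *res) {
  int i, v = *res;
  for (i = 0; i < n; i++)
    if (*cond)
      v ^= 234;
  *res = v;  /* the partially dead store, sunk out of the loop */
}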
Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//
Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//
GCC PR37810 is an interesting case where we should sink the load/store reload
into the if block and outside the loop, so that we don't reload/store it on
every iteration.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store. We need partially dead store sinking.

//===---------------------------------------------------------------------===//
[LOAD PRE CRIT EDGE SPLITTING]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack,
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation. The code we get looks like (g is on the stack):

	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is partially redundant; in BB2 it should have the value %8.

GCC PR33344 and PR35287 are similar cases.

//===---------------------------------------------------------------------===//
There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite; the ones we don't get yet are (checked through loadpre25):

[CRIT EDGE BREAKING]
loadpre3.c predcom-4.c

[PRE OF READONLY CALL]

[TURN SELECT INTO BRANCH]
loadpre14.c loadpre15.c

actually a conditional increment: loadpre18.c loadpre19.c

//===---------------------------------------------------------------------===//
There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.

//===---------------------------------------------------------------------===//
There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite. For example, we get the first example in predcom-1.c, but
miss the second one:

unsigned fib[1000];
unsigned avg[1000];

__attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}

which compiles into two loads instead of one in the loop.

predcom-2.c is the same as predcom-1.c

predcom-3.c is very similar but needs loads feeding each other instead of
store->load.

//===---------------------------------------------------------------------===//
Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

//===---------------------------------------------------------------------===//
A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store. This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

struct X { int i; };

//===---------------------------------------------------------------------===//
Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629

With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
      opt -mem2reg -gvn -instcombine | llvm-dis

we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, since loading from null is
undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//
simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "") -> strlen(x)
strspn(x, "") -> 0
strcspn(x, "a") -> strchr(x, 'a')-x (when 'a' is known to occur in x)
strcspn with up to 3 reject letters -> an inlined loop (similarly for strspn):

  size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                       int __reject3) {
    register size_t __result = 0;
    while (__s[__result] != '\0' && __s[__result] != __reject1 &&
           __s[__result] != __reject2 && __s[__result] != __reject3)
      ++__result;
    return __result;
  }

This should turn into a switch on the character. See PR3253 for some notes on
codegen.

456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.
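A sketch of the switch expansion for a two-reject strcspn (hypothetical
helper; the same idea extends to three letters):

size_t strcspn_ab(const char *s) {
  size_t i;
  for (i = 0; ; ++i) {
    switch (s[i]) {
    case '\0': case 'a': case 'b':
      return i;  /* stop at NUL or at any reject character */
    }
  }
}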
//===---------------------------------------------------------------------===//
1340 "gas" uses this idiom:
1341 else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
1343 else if (strchr ("<>", *intel_parser.op_string)
1345 Those should be turned into a switch.
1347 //===---------------------------------------------------------------------===//
252.eon contains this interesting code:

        %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
        %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
        call void @llvm.memcpy.i32(i8* %endptr,
          i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
        %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple of reasons. First, in this:

        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)

The strlen could be replaced with: %strlen = sub %3072, %3073, because the
strcpy call returns a pointer to the end of the string. Based on that, the
endptr GEP just becomes equal to 3073, which eliminates a strlen call and GEP.

Second, the strlen that follows the memcpy can be replaced with:

        %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

because the destination was just copied into the specified memory buffer.
This, in turn, can be constant folded to "4".

In other code, it contains:

        %endptr6978 = bitcast i8* %endptr69 to i32*
        store i32 7107374, i32* %endptr6978, align 1
        %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

Which could also be constant folded. Whatever is producing this should
probably be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:		; preds = %_ZN18eonImageCalculatorC1Ev.exit
	%682 = getelementptr i8** %argv, i32 6		; <i8**> [#uses=2]
	%683 = load i8** %682, align 4		; <i8*> [#uses=4]
	%684 = load i8* %683, align 1		; <i8> [#uses=1]
	%685 = icmp eq i8 %684, 0		; <i1> [#uses=1]
	br i1 %685, label %bb10, label %bb9

bb9:		; preds = %bb8
	%686 = call i32 @strlen(i8* %683) nounwind readonly
	%687 = icmp ugt i32 %686, 254		; <i1> [#uses=1]
	br i1 %687, label %bb10, label %bb11

bb10:		; preds = %bb9, %bb8
	%688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//
I see an interesting fully redundant call to strlen left in 186.crafty:InputMove
which looks like this:

        %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0

bb62:		; preds = %bb55, %bb53
	%promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
	%171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
	%172 = add i32 %171, -1		; <i32> [#uses=1]
	%173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
	...
	br i1 %or.cond, label %bb65, label %bb72

bb65:		; preds = %bb62
	store i8 0, i8* %173, align 1
	...

bb72:		; preds = %bb65, %bb62
	%trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
	%177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially redundant
with the %171 call. At worst, we could shove the %177 strlen call up into the
bb65 block, moving it out of the bb62->bb72 path. However, note that bb65
stores to the string, zeroing out the last byte. This means that on that path
the value of %177 is actually just %171-1. A sub is cheaper than a strlen!

This pattern repeats several times, basically doing:

  A = strlen(X);
  X[A-1] = 0;
  B = strlen(X);

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//
186.crafty contains this interesting pattern:

%77 = call i8* @strstr(i8* getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0),
                       i8* %30)
%phitmp648 = icmp eq i8* %77, getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0)
br i1 %phitmp648, label %bb70, label %bb76

bb70:		; preds = %OptionMatch.exit91, %bb69
  %78 = call i32 @strlen(i8* %30) nounwind readonly align 1	; <i32> [#uses=1]

This is basically:

  if (strstr(cststr, P) == cststr) {
    ... strlen(P) ...

The strstr call would be significantly cheaper written as:

  if (memcmp(P, str, strlen(P)))
    ...

This is memcmp+strlen instead of strstr. This also makes the strlen fully
redundant.

//===---------------------------------------------------------------------===//
186.crafty also contains this code:

%1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
%1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
%1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
%1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
%1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909

The last strlen is computable as %1908-@pgn_event, which means %1910=%1908.

//===---------------------------------------------------------------------===//
186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
        i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call i32 @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(..., globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out". Once all the printfs
stop using "out", all that is left is the memcpy's into it. This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//
This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift: on x86 we
currently get a byte load, a byte shift, and a separate sign extension, when
a single movsbl followed by a 32-bit sar would do.

//===---------------------------------------------------------------------===//
int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants (x+x is even, so it can never equal an odd
C). For an even constant C, x+x == C has two solutions modulo 2^32 (x = C/2
and x = C/2 + 2^31), so test2 folds to (x & 0x7fffffff) == 1, not x == 1.

//===---------------------------------------------------------------------===//
PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for the next field in the struct (which is at the same address).

For example: a store of float into { {{}}, float } could be turned into a
store to the float field directly.

//===---------------------------------------------------------------------===//
double foo(double a) { return sin(a); }

This compiles into this on x86-64 Linux:

	subq	$8, %rsp
	call	sin
	addq	$8, %rsp
	ret

It could simply be a tail call (jmp sin), with no frame setup at all.

//===---------------------------------------------------------------------===//
The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.

//===---------------------------------------------------------------------===//
The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//
int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128		; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0		; <i1> [#uses=1]
  %2 = or i32 %b, 128		; <i32> [#uses=1]
  %3 = and i32 %b, -129		; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2		; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

  b = (b & ~0x80) | (a & 0x80);

Which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129		; <i32> [#uses=1]
  %1 = and i32 %a, 128		; <i32> [#uses=1]
  %2 = or i32 %0, %1		; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

  b = (b & ~0x80) | (a & 0x40) << 1;

//===---------------------------------------------------------------------===//
These two functions produce different code. They shouldn't:

#include <stdint.h>

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return b;
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return b;
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63		; <i8> [#uses=1]
  %1 = and i8 %a, -64		; <i8> [#uses=1]
  %2 = or i8 %1, %0		; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63		; <i8> [#uses=1]
  %.masked = and i8 %a, 64		; <i8> [#uses=1]
  %1 = and i8 %a, -128		; <i8> [#uses=1]
  %2 = or i8 %1, %0		; <i8> [#uses=1]
  %3 = or i8 %2, %.masked		; <i8> [#uses=1]
  ret i8 %3
}

//===---------------------------------------------------------------------===//
IPSCCP does not currently propagate argument-dependent constants through
functions where it cannot see all of the callers. This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  %1 = mul i32 %0, %x
  %2 = mul i32 %y, %z
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant. Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.

//===---------------------------------------------------------------------===//
The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constantfolding logic. This would allow IPSCCP to be able to
handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }
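With IPSCCP able to fold the strlen, bar should reduce to a constant (a sketch
of the expected end result, not current output):

int bar() { return 4; }  /* strlen("abcd") == 4 */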
//===---------------------------------------------------------------------===//
InstCombine should use SimplifyDemandedBits to remove the or instruction:

define i1 @test(i8 %x, i8 %y) {
  %A = or i8 %x, 1
  %B = icmp ugt i8 %A, 3
  ret i1 %B
}

Currently instcombine calls SimplifyDemandedBits with either all bits or just
the sign bit, if the comparison is obviously a sign test. In this case, we only
need all but the bottom two bits from %A, and if we gave that mask to SDB it
would delete the or instruction for us.

//===---------------------------------------------------------------------===//
FunctionAttrs is not marking this function as readnone (just readonly):
$ clang t.c -emit-llvm -S -o - -O0 | opt -mem2reg -S -functionattrs

int t(int a, int b, int c) {
  int *p;
  if (a)
    p = &a;
  else
    p = &c;
  return *p;
}

This is because we codegen this to:

define i32 @t(i32 %a, i32 %b, i32 %c) nounwind readonly ssp {
entry:
  %a.addr = alloca i32		; <i32*> [#uses=3]
  %c.addr = alloca i32		; <i32*> [#uses=2]
...
  %p.0 = phi i32* [ %a.addr, %if.then ], [ %c.addr, %if.else ]
  %tmp2 = load i32* %p.0		; <i32> [#uses=1]
  ret i32 %tmp2
}

And functionattrs doesn't realize that the p.0 load points to function-local
memory.
Also, functionattrs doesn't know about memcpy/memset. This function should be
marked readnone, since it only twiddles local memory, but functionattrs
doesn't handle memset/memcpy/memmove aggressively:

struct X { int *p; int *q; };
int foo() {
  int i = 0, j = 1;
  struct X x, y;
  int **p;
  y.p = &i;
  x.q = &j;
  p = __builtin_memcpy (&x, &y, sizeof (int *));
  return **p;
}

//===---------------------------------------------------------------------===//