Target Independent Opportunities:

//===---------------------------------------------------------------------===//

Dead argument elimination should be enhanced to handle cases when an argument is
dead to an externally visible function. Though the argument can't be removed
from the externally visible function, the caller doesn't need to pass it in.
For example, in this testcase:

  void foo(int X) __attribute__((noinline));
  void foo(int X) { sideeffect(); }
  void bar(int A) { foo(A+1); }

we compile bar to:

define void @bar(i32 %A) nounwind ssp {
  %0 = add nsw i32 %A, 1                ; <i32> [#uses=1]
  tail call void @foo(i32 %0) nounwind noinline ssp
  ret void
}

The add is dead; we could pass in 'i32 undef' instead. This occurs for C++
templates etc., which usually have linkonce_odr/weak_odr linkage, not internal
linkage.
//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs. Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.
//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.
//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :) This isn't
safe in general, even on darwin. See the libm implementation of hypot for
examples (which special case when x/y are exactly zero to get signed zeros etc.
right).
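
As a concrete illustration, here is a hedged sketch of the proposed expansion
(function name hypothetical). The naive form can overflow to +inf even when the
true result is representable, which is exactly why it is only valid under
-ffast-math:

  #include <math.h>

  /* hypot_fast: the -ffast-math expansion of hypot(x, y). Note that
     x*x or y*y may overflow/underflow where hypot would not, and NaN,
     infinity, and signed-zero corner cases are not honored. */
  static double hypot_fast(double x, double y) {
    return sqrt(x * x + y * y);   /* maps onto llvm.sqrt */
  }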
//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency: the problem is that the store's chain
operand is not the load X but rather a TokenFactor of the load X and load Y,
which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short term workaround
which will be removed once the proper fix is made.
//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

  for (i = ...; ++i) {
    x = 1ULL << i;

into:

  long long tmp = 1;
  for (i = ...; ++i, tmp+=tmp)
    x = tmp;

This would be a win on ppc32, but not x86 or ppc64.
//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0), where Phi is the
address of the byte of P that holds the sign bit; a signed "less than zero"
test only needs that one byte.
//===---------------------------------------------------------------------===//

Reassociate should turn X*X*X*X into t = X*X; t*t, to eliminate a multiply.
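
A minimal sketch of the rewrite (function name hypothetical): three multiplies
for the fourth power become two.

  /* pow4: what the suggested reassociation would produce. */
  int pow4(int x) {
    int t = x * x;   /* x^2: first multiply */
    return t * t;    /* x^4: second multiply, instead of two more */
  }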
//===---------------------------------------------------------------------===//

An interesting(?) testcase for add/shift/mul reassociation:

  int bar(int x, int y) {
    return x*x*x+y+x*x*x*x*x*y*y*y*y;
  }
  int foo(int z, int n) {
    return bar(z, n) + bar(2*z, 2*n);
  }

Reassociate should handle the example in GCC PR16157.
//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

  int g(int *j, int *l) { return memcmp(j, l, 4); }
  int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1, 2, 4, and 8 bytes.
//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
cases too.
//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative, as the alignment of
specific vector types is target dependent.
//===---------------------------------------------------------------------===//

We should produce an unaligned load from code like this:

  v4sf example(float *P) {
    return (v4sf){P[0], P[1], P[2], P[3]};
  }
//===---------------------------------------------------------------------===//

Add support for conditional increments and other related patterns. Instead of
testing and branching around the increment (a compare followed by
"je LBB16_2 # cond_next" that skips an "incl"), emit a branchless
compare/sbb-style sequence that folds the condition into the arithmetic.
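
At the source level the pattern looks like the sketch below (all names
hypothetical); the branchy "if" form and this branchless "+=" form compute the
same thing, and the backend should be able to emit branchless code for either.

  /* Conditional increment: count goes up exactly when a[i] == key.
     The comparison result (0 or 1) is added directly, with no branch. */
  int count_matches(const int *a, int n, int key) {
    int count = 0;
    for (int i = 0; i < n; i++)
      count += (a[i] == key);
    return count;
  }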
//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
  void sincos(double x, double *sin, double *cos);
  void sincosf(float x, float *sin, float *cos);
  void sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs. We could even make an intrinsic for this
if anyone cared enough about sincos.
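
A hedged sketch of the combined form, using the GNU libc sincos entry point
(availability is platform-dependent):

  #define _GNU_SOURCE
  #include <math.h>

  /* One sincos call replaces separate sin() and cos() calls on the
     same argument; both results come back through out-pointers. */
  void unit_vector(double angle, double *x, double *y) {
    double s, c;
    sincos(angle, &s, &c);
    *x = c;
    *y = s;
  }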
//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

define void @test(i32* %P) {
  %tmp = load i32* %P
  %tmp14 = or i32 %tmp, 3305111552
  %tmp15 = and i32 %tmp14, 3321888767
  store i32 %tmp15, i32* %P
  ret void
}
//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x. The idiom arises from code
like "int t = __builtin_clz(x);" followed by a test of whether t is 32: since
clz(x) is 32 exactly when x is zero, the shifted result is 1 iff x == 0.
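
A sketch of the equivalence (helper hypothetical; note that __builtin_clz
itself is undefined at 0, so a zero-defined variant is assumed):

  /* clz32: count leading zeros, defined to return 32 for 0. */
  static unsigned clz32(unsigned x) {
    return x ? (unsigned)__builtin_clz(x) : 32u;
  }

  /* clz32(x) ranges over 0..32, so (clz32(x) >> 5) is 1 exactly when
     clz32(x) == 32, i.e. exactly when x == 0. */
  static int is_zero(unsigned x) {
    return (int)(clz32(x) >> 5);
  }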
//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

  for(i=0; i<reg->size; i++)
    {
      /* Flip the target bit of each basis state */
      reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
    }

where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
so cool to turn it into something like:

  long long Res = ((MAX_UNSIGNED) 1 << target);
  if (target < 32) {
    for(i=0; i<reg->size; i++)
      reg->node[i].state ^= Res & 0xFFFFFFFFULL;
  } else {
    for(i=0; i<reg->size; i++)
      reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
  }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but that requires better alias analysis.
//===---------------------------------------------------------------------===//

This should be optimized to one 'and' and one 'or', from PR4216:

define i32 @test_bitfield(i32 %bf.prev.low) nounwind ssp {
entry:
  %bf.prev.lo.cleared10 = or i32 %bf.prev.low, 32962 ; <i32> [#uses=1]
  %0 = and i32 %bf.prev.low, -65536                  ; <i32> [#uses=1]
  %1 = and i32 %bf.prev.lo.cleared10, 40186          ; <i32> [#uses=1]
  %2 = or i32 %1, %0                                 ; <i32> [#uses=1]
  ret i32 %2
}
//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

  unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
  }
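
For reference, a sketch of the canonical form this should collapse to (the
compiler should recognize reverse() as equivalent to this, and ultimately to
the bswap intrinsic):

  /* Plain byte swap of a 32-bit value. */
  unsigned bswap32(unsigned v) {
    return (v >> 24) | ((v >> 8) & 0xff00u) |
           ((v << 8) & 0xff0000u) | (v << 24);
  }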
//===---------------------------------------------------------------------===//

These idioms should be recognized as popcount (see PR1488):

  unsigned countbits_slow(unsigned v) {
    unsigned c;
    for (c = 0; v; v >>= 1)
      c += v & 1;
    return c;
  }

  unsigned countbits_fast(unsigned v) {
    unsigned c;
    for (c = 0; v; c++)
      v &= v - 1; // clear the least significant bit set
    return c;
  }

  BITBOARD = unsigned long long
  int PopCnt(register BITBOARD a) {
    register int c = 0;
    while (a) {
      c++;
      a &= a - 1;
    }
    return c;
  }

  unsigned int popcount(unsigned int input) {
    unsigned int count = 0;
    for (unsigned int i = 0; i < 4 * 8; i++)
      count += (input >> i) & 1;
    return count;
  }

This is a form of idiom recognition for loops, the same thing that could be
useful for recognizing memset/memcpy.
//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors.

  unsigned short read_16_le(const unsigned char *adr) {
    return adr[0] | (adr[1] << 8);
  }
  unsigned short read_16_be(const unsigned char *adr) {
    return (adr[0] << 8) | adr[1];
  }
//===---------------------------------------------------------------------===//

instcombine should handle this transform:

  icmp pred (sdiv X / C1), C2

when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.
//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass.
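
A sketch of the transformation such a pass would perform (names hypothetical;
the rewrite assumes the source and destination do not overlap):

  #include <string.h>

  /* Hand-written element-by-element copy, as in the viterbi history
     loops... */
  void copy_history(int *dst, const int *src, int n) {
    for (int i = 0; i < n; i++)
      dst[i] = src[i];
  }

  /* ...which the pass should turn into a single memcpy call. */
  void copy_history_memcpy(int *dst, const int *src, int n) {
    memcpy(dst, src, (size_t)n * sizeof(int));
  }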
//===---------------------------------------------------------------------===//

Consider:

  typedef unsigned U32;
  typedef unsigned long long U64;
  int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
      return 1;
    return 0;
  }

Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags. On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.
//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has from TargetData. This code:

  volatile short X, Y; // globals
  void foo(int N) {
    int i;
    for (i = 0; i < N; i++) { X = i; Y = i*4; }
  }

produces two near identical IVs (after promotion) on PPC/ARM:

  add r2, r2, #1    <- [0,+,1]
  sub r0, r0, #1    <- [0,-,1]

LSR should reuse the "+" IV for the exit test.
//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
  %tmp.1 = and i32 %a, 1                 ; <i32> [#uses=1]
  %tmp.2 = icmp ne i32 %tmp.1, 0         ; <i1> [#uses=1]
  br i1 %tmp.2, label %then.0, label %else.0

then.0:    ; preds = %entry
  %tmp.5 = add i32 %a, -1                ; <i32> [#uses=1]
  %tmp.3 = call i32 @t4( i32 %tmp.5 )    ; <i32> [#uses=1]
  br label %return

else.0:    ; preds = %entry
  %tmp.7 = icmp ne i32 %a, 0             ; <i1> [#uses=1]
  br i1 %tmp.7, label %then.1, label %return

then.1:    ; preds = %else.0
  %tmp.11 = add i32 %a, -2               ; <i32> [#uses=1]
  %tmp.9 = call i32 @t4( i32 %tmp.11 )   ; <i32> [#uses=1]
  br label %return

return:    ; preds = %then.1, %else.0, %then.0
  %result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                      [ %tmp.9, %then.1 ]
  ret i32 %result.0
}
//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

  int pow2m1(int n) {
    if (n == 0)
      return 0;
    return 2 * pow2m1 (n - 1) + 1;
  }

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative. "return foo() << 1" can be tail recursion eliminated.
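
A sketch of what treating the pending "*2 + 1" as accumulator updates would
produce (helper name hypothetical): the recursive call becomes a true tail
call.

  /* Invariant: pow2m1_accum(n, scale, add) == scale * pow2m1(n) + add.
     Each step folds the pending *2 and +1 into scale/add, so the
     recursion is now in tail position. */
  static int pow2m1_accum(int n, int scale, int add) {
    if (n == 0)
      return add;
    return pow2m1_accum(n - 1, scale * 2, add + scale);
  }
  int pow2m1_tre(int n) { return pow2m1_accum(n, 1, 0); }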
//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
  %tmp = load i32* %x                    ; <i32> [#uses=0]
  %tmp.foo = call i32 @foo( i32* %x )    ; <i32> [#uses=1]
  ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
  %tmp3 = call i32 @foo( i32* %x )       ; <i32> [#uses=1]
  ret i32 %tmp3
}
//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass. Consider this silly example
in pic mode: a function whose body is just an assert compiles to code that
computes the PIC base with a call+popl pair, compares the argument, and
branches ("je LBB1_2 # cond_true") to the assertion-failure path.

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.
//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html
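
A small sketch of the idea (all values illustrative, not from the note): hash
the key into a dense table of (key, value) pairs and verify with one compare,
instead of a chain of branches.

  /* For case values {10, 200, 3000}, key % 11 happens to be
     collision-free (slots 10, 2, 8), so it acts as a perfect hash. */
  static const struct { unsigned key; int val; } table[11] = {
    [10 % 11]   = { 10,   1 },
    [200 % 11]  = { 200,  2 },
    [3000 % 11] = { 3000, 3 },
  };

  int classify(unsigned key) {
    unsigned slot = key % 11;
    return table[slot].key == key ? table[slot].val : 0; /* 0 = default */
  }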
//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a yonah, this loop:

  double a[256];
  void foo() {
    int i, b;
    for (b = 0; b < 10000000; b++)
      for (i = 0; i < 256; i++)
        a[i] = -a[i];
  }

is twice as slow as this loop:

  long long a[256];
  void foo() {
    int i, b;
    for (b = 0; b < 10000000; b++)
      for (i = 0; i < 256; i++)
        a[i] ^= (1ULL << 63);
  }

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.
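
A strict-aliasing-safe sketch of the integer rewrite for fneg (memcpy is the
portable way to move the bits in C; a real codegen-level rewrite would simply
change the load/store types):

  #include <stdint.h>
  #include <string.h>

  /* Negate a double by flipping only the IEEE-754 sign bit with an
     integer xor; no FP load/store or FP negate is needed. */
  void fneg_array(double *a, int n) {
    for (int i = 0; i < n; i++) {
      uint64_t bits;
      memcpy(&bits, &a[i], sizeof bits);
      bits ^= 1ULL << 63;
      memcpy(&a[i], &bits, sizeof bits);
    }
  }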
//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

  struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
  extern THotKey m_HotKey;
  THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer) a series of
byte-sized loads and stores:

  movb _m_HotKey+3, %cl
  movb _m_HotKey+4, %dl
  movb _m_HotKey+2, %ch
  ...

whereas GCC uses word-sized loads, e.g.:

  movzwl _m_HotKey+4, %edx
  ...

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

  %struct.THotKey = type { i16, i8, i8, i8 }
  define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
    ...
    %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
    %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
    %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
    %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.
//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.
//===---------------------------------------------------------------------===//

Consider a function containing:

  long long input[8] = {1,1,1,1,1,1,1,1};

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able. This is good, but the memcpy
gets lowered to load/stores in the code generator. This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global. This gives us atrocious code like this:

  movl _C.0.1444-"L1$pb"+32(%eax), %ecx
  movl %ecx, 32(%esp)
  movl _C.0.1444-"L1$pb"+20(%eax), %ecx
  movl %ecx, 20(%esp)
  movl _C.0.1444-"L1$pb"+36(%eax), %ecx
  movl %ecx, 36(%esp)
  movl _C.0.1444-"L1$pb"+44(%eax), %ecx
  movl %ecx, 44(%esp)
  movl _C.0.1444-"L1$pb"+40(%eax), %ecx
  movl %ecx, 40(%esp)
  movl _C.0.1444-"L1$pb"+12(%eax), %ecx
  movl %ecx, 12(%esp)
  movl _C.0.1444-"L1$pb"+4(%eax), %ecx
  movl %ecx, 4(%esp)
  ...

instead of simply storing the constant values directly.
//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The code in this PR should compile into "ret int undef"; instead, LLVM
produces "ret int 0".
//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop. One trivial example is:

  int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
      if ( nLoop & 1 )
        nRet += 2;
      else
        nRet -= 1;
    }
    return nRet;
  }

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size. The resultant code would then also be suitable for
exit value computation.
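
A sketch of the unrolled-by-2 result (assuming the loop body reconstructed
above): each copy of the body has a known parity, so the '&1' test folds away.

  int main_unrolled(void) {
    int nRet = 17;
    for (int nLoop = 0; nLoop < 1000; nLoop += 2) {
      nRet -= 1;   /* even iteration: (nLoop & 1) is known 0 */
      nRet += 2;   /* odd iteration:  (nLoop & 1) is known 1 */
    }
    return nRet;
  }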
//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:

  unsigned long long f(unsigned long long x, int y) {
    return (x << y) | (x >> 64-y);
  }
  unsigned f2(unsigned x, int y) {
    return (x << y) | (x >> 32-y);
  }
  unsigned long long f3(unsigned long long x) {
    int y = 9;
    return (x << y) | (x >> 64-y);
  }
  unsigned f4(unsigned x) {
    int y = 10;
    return (x << y) | (x >> 32-y);
  }
  unsigned long long f5(unsigned long long x, unsigned long long y) {
    return (x << 8) | ((y >> 48) & 0xffull);
  }
  unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
    switch(z) {
    case 1:
      return (x << 8) | ((y >> 48) & 0xffull);
    case 2:
      return (x << 16) | ((y >> 40) & 0xffffull);
    case 3:
      return (x << 24) | ((y >> 32) & 0xffffffull);
    case 4:
      return (x << 32) | ((y >> 24) & 0xffffffffull);
    default:
      return (x << 40) | ((y >> 16) & 0xffffffffffull);
    }
  }

On X86-64, we only handle f2/f3/f4 right. On x86-32, a few of these
generate truly horrible code, instead of using shld and friends. On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness. PPC64 misses f, f5 and f6. CellSPU aborts in isel.
//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)+1) -> strcpy(a,b). This
can only be done safely if "b" isn't modified between the strlen and memcpy,
of course.
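
A sketch of the pattern before and after (assuming nothing writes b in
between; the +1 keeps the NUL terminator in the copy):

  #include <string.h>

  /* Before: separate strlen + memcpy over the same string. */
  void copy_before(char *a, const char *b) {
    size_t n = strlen(b);
    memcpy(a, b, n + 1);
  }

  /* After the merge: one strcpy call. */
  void copy_after(char *a, const char *b) {
    strcpy(a, b);
  }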
//===---------------------------------------------------------------------===//

Reassociate should turn things like:

  int factorial(int X) {
    return X*X*X*X*X*X*X*X;
  }

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.
//===---------------------------------------------------------------------===//

We generate a horrible libcall for llvm.powi. For example, we compile:

  double f(double a) { return std::pow(a, 4); }

into a call through a stub:

  movsd 16(%esp), %xmm0
  ...
  call L___powidf2$stub
  ...

while GCC expands the pow inline: it loads the value (movsd 16(%esp), %xmm0)
and squares it twice with two multiplies.
//===---------------------------------------------------------------------===//

We compile this program (from GCC PR11680):
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

  $ llvm-g++ perf.cpp -O3 -fno-exceptions
  1.821u 0.003s 0:01.82 100.0%  0+0k 0+0io 0pf+0w

  $ g++ perf.cpp -O3 -fno-exceptions
  0.821u 0.001s 0:00.82 100.0%  0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).
//===---------------------------------------------------------------------===//

We miss some instcombines for stuff like this:

  void bar (void);
  void foo (unsigned int a) {
    /* This one is equivalent to a >= (3 << 2). */
    if ((a >> 2) >= 3)
      bar ();
  }

A few other related ones are in GCC PR14753.
//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mul lo (cheaper). Testcase:

  void bar(unsigned n) {
    if (n % 3 == 0)
      true();
  }

I think this basically amounts to a dag combine to simplify comparisons against
multiply hi's into a comparison against the mullo.
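
For reference, a sketch of the mul-lo form (the standard modular-inverse
trick; the constants shown are for divisor 3 on 32 bits):

  /* 0xaaaaaaab is the inverse of 3 mod 2^32, so for n divisible by 3,
     n * 0xaaaaaaab == n/3 <= 0x55555555; for every other n the product
     wraps to something larger. One multiply-low plus one compare. */
  int divisible_by_3(unsigned n) {
    return n * 0xaaaaaaabu <= 0x55555555u;
  }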
//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

  #include <cstdio>
  struct test {
    int val;
    virtual ~test() {}
  };
  int main() {
    test t;
    std::scanf("%d", &t.val);
    std::printf("%d\n", t.val);
  }
//===---------------------------------------------------------------------===//

Instcombine will merge comparisons like (x >= 10) && (x < 20) by producing (x -
10) u< 10, but only when the comparisons have matching sign.

This could be converted with a similar technique. (PR1941)

define i1 @test(i8 %x) {
  %A = icmp uge i8 %x, 5
  %B = icmp slt i8 %x, 20
  %C = and i1 %A, %B
  ret i1 %C
}
//===---------------------------------------------------------------------===//

These two functions perform the same computation, but produce different
assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6   ;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}
//===---------------------------------------------------------------------===//

These should combine to the same thing:

  int f(unsigned long a, unsigned long b, unsigned long c)
  {
    return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
  }
  int f2(unsigned long a, unsigned long b, unsigned long c)
  {
    return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
  }

Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  #define PMD_MASK (~((1UL << 23) - 1))
  void clear_pmd_range(unsigned long start, unsigned long end)
  {
    if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
      f();
  }

The expression should optimize to something like
"!((start|end) & ~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  void foo (unsigned int a, unsigned int b)
  {
    if (a <= 7 && b <= 7)
      baz ();
  }

Should combine to "(a|b) <= 7". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

The expression

  return (n >= 0 ? 1 : -1);

should combine to (n >> 31) | 1. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".
//===---------------------------------------------------------------------===//

  int test(int a, int b)
  {
    int lt = a < b;
    int eq = a == b;
    if (lt || eq)
      return 1;
    return 0;
  }

Should combine to "a <= b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".
//===---------------------------------------------------------------------===//

Code like this:

  if (variable == 4 || variable == 6)
    bar();

should optimize to "if ((variable | 2) == 6)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts | llc".
//===---------------------------------------------------------------------===//

  unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
  unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

These should combine to the same thing. Currently, the first function
produces better code on X86.
//===---------------------------------------------------------------------===//

  #define abs(x) x>0?x:-x
  int f(int x) {
    return (abs(x)) >= 0;
  }

This should optimize to x != INT_MIN. (With -fwrapv; INT_MIN is the only value
whose negation wraps back to a negative number.) Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  unsigned int
  rotate_cst (unsigned int a)
  {
    a = (a << 10) | (a >> 22);
    ...
  }

  unsigned int
  minus_cst (unsigned int a)
  {
    ...
  }

  unsigned int
  mask_gt (unsigned int a)
  {
    /* This is equivalent to a > 15. */
    ...
  }

  unsigned int
  rshift_gt (unsigned int a)
  {
    /* This is equivalent to a > 23. */
    ...
  }

All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".
//===---------------------------------------------------------------------===//

  int c(int* x) { return (char*)x+2 == (char*)x; }

Should combine to 0. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).
//===---------------------------------------------------------------------===//

  int a(unsigned char* b) { return *b > 99; }

There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  int a(unsigned b) { return ((b << 31) | (b << 30)) >> 31; }

Should be combined to "((b >> 1) | b) & 1". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2); }

Should combine to "x | (y & 3)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  unsigned a(unsigned a) { return ((a | 1) & 3) | (a & -4); }

Should combine to "a | 1". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  int a(int a, int b, int c) { return (~a & c) | ((c|a) & b); }

Should fold to "(~a & c) | (a & b)". Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  int a(int a, int b) { return (~(a|b)) | a; }

Should fold to "a | ~b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  int a(int a, int b) { return (a&&b) || (a&&!b); }

Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  int a(int a, int b, int c) { return (a&&b) || (!a&&c); }

Should fold to "a ? b : c", or at least something sane. Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  int a(int a, int b, int c) { return (a&&b) || (a&&c) || (a&&b&&c); }

Should fold to a && (b || c). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  int a(int x) { return x | ((x & 8) ^ 8); }

Should combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  int a(int x) { return x ^ ((x & 8) ^ 8); }

Should also combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  int a(int x) { return (x & 8) == 0 ? -1 : -9; }

Should combine to (x | -9) ^ 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  int a(int x) { return (x & 8) == 0 ? -9 : -1; }

Should combine to x | -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  int a(int x) { return ((x | -9) ^ 8) & x; }

Should combine to x & -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  unsigned a(unsigned a) { return a * 0x11111111 >> 28 & 1; }

Should combine to "a * 0x88888888 >> 31". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  unsigned a(char* x) { if ((*x & 32) == 0) return b(); }

There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

  unsigned a(unsigned long long x) { return 40 * (x >> 1); }

Should combine to "20 * (((unsigned)x) & -2)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

This was noticed in the entryblock for grokdeclarator in 403.gcc:

  %tmp = icmp eq i32 %decl_context, 4
  %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
  %tmp1 = icmp eq i32 %decl_context_addr.0, 1
  %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to (decl_context == 1): the select only replaces 4
with 3, and neither equals 1.

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

  %tmp23 = icmp eq i32 %decl_context_addr.1, 0   ; <i1> [#uses=1]
  %tmp24 = xor i1 %tmp1, true                    ; <i1> [#uses=1]
  %or.cond8 = and i1 %tmp23, %tmp24              ; <i1> [#uses=1]
//===---------------------------------------------------------------------===//

Store sinking: This code:

  void f (int n, int *cond, int *res) {
    int i;
    *res = 0;
    for (i = 0; i < n; i++)
      if (*cond)
        *res ^= 234; /* (*) */
  }

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out. This gives us this code:

bb:   ; preds = %bb2, %entry
  %.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
  %i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
  %1 = load i32* %cond, align 4
  %2 = icmp eq i32 %1, 0
  br i1 %2, label %bb2, label %bb1

bb1:  ; preds = %bb
  %3 = xor i32 %.rle, 234
  store i32 %3, i32* %res, align 4
  br label %bb2

bb2:  ; preds = %bb, %bb1
  %.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
  %indvar.next = add i32 %i.05, 1
  %exitcond = icmp eq i32 %indvar.next, %n
  br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395
//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

  int test (int a, int b, int c, int g) {
    int d, e;
    if (a)
      d = b * c;
    else
      d = b - c;
    e = b * c + g;
    return d + e;
  }

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.
//===---------------------------------------------------------------------===//

GCC PR37810 is an interesting case where we should sink the load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store. We need partially dead store sinking.
//===---------------------------------------------------------------------===//

[LOAD PRE with NON-AVAILABLE ADDRESS]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack,
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation. The code we get looks like (g is on the stack):

  %9 = getelementptr %struct.f* %g, i32 0, i32 0
  store i32 %8, i32* %9, align 4
  br label %bb3

bb3:  ; preds = %bb1, %bb2, %bb
  %c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
  %b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
  %10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
  %11 = load i32* %10, align 4

%11 is partially redundant; in BB2 it should have the value %8.

GCC PR33344 is a similar case.
//===---------------------------------------------------------------------===//

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite. There are also many PRE testcases there named ssa-pre-*.c.
//===---------------------------------------------------------------------===//

There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite. For example, predcom-1.c is:

  for (i = 2; i < 1000; i++)
    fib[i] = (fib[i-1] + fib[i - 2]) & 0xffff;

which compiles into:

bb1:  ; preds = %bb1, %bb1.thread
  %indvar = phi i32 [ 0, %bb1.thread ], [ %0, %bb1 ]
  %i.0.reg2mem.0 = add i32 %indvar, 2
  %0 = add i32 %indvar, 1                        ; <i32> [#uses=3]
  %1 = getelementptr [1000 x i32]* @fib, i32 0, i32 %0
  %2 = load i32* %1, align 4                     ; <i32> [#uses=1]
  %3 = getelementptr [1000 x i32]* @fib, i32 0, i32 %indvar
  %4 = load i32* %3, align 4                     ; <i32> [#uses=1]
  %5 = add i32 %4, %2                            ; <i32> [#uses=1]
  %6 = and i32 %5, 65535                         ; <i32> [#uses=1]
  %7 = getelementptr [1000 x i32]* @fib, i32 0, i32 %i.0.reg2mem.0
  store i32 %6, i32* %7, align 4
  %exitcond = icmp eq i32 %0, 998                ; <i1> [#uses=1]
  br i1 %exitcond, label %return, label %bb1

Instead of handling this as a loop or other xform, all we'd need to do is teach
load PRE to phi translate the %0 add (i+1) into the predecessor as (i'+1+1) =
(i'+2) (where i' is the previous iteration of i). This would find the store
which feeds the load.

predcom-2.c is apparently the same as predcom-1.c.
predcom-3.c is very similar but needs loads feeding each other instead of
store->load pairs.
predcom-4.c seems the same as the rest.
//===---------------------------------------------------------------------===//

Other simple load PRE cases:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35287 [LPRE crit edge splitting]

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34677 (licm does this, LPRE crit edge)
  llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as | opt -mem2reg -simplifycfg -gvn | llvm-dis
//===---------------------------------------------------------------------===//

Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705
//===---------------------------------------------------------------------===//

A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store. This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

  struct X { int i; };
  ...
//===---------------------------------------------------------------------===//

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629

With:
  llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as | opt -mem2reg -gvn -instcombine | llvm-dis

we miss it because we need 1) GEP phi translation, 2) critical edge splitting,
and 3) multiple different values produced by one block over different paths.
//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633

We could eliminate the branch condition here, since loading from null is
undefined:

  struct S { int w, x, y, z; };
  struct T { int r; struct S s; };
  void bar (struct S, int);
  void foo (int a, struct T b)
  {
    struct S *c = 0;
    if (a)
      c = &b.s;
    bar (*c, a);
  }
//===---------------------------------------------------------------------===//

simplifylibcalls should do several optimizations for strspn/strcspn:

  strcspn(x, "") -> strlen(x)
  strcspn("", x) -> 0
  strspn(x, "") -> 0
  strspn("", x) -> 0
  strcspn(x, "a") -> strchr(x, 'a')-x (when 'a' is known to occur in x)

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn), as
glibc does:

  size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                       int __reject3) {
    register size_t __result = 0;
    while (__s[__result] != '\0' && __s[__result] != __reject1 &&
           __s[__result] != __reject2 && __s[__result] != __reject3)
      ++__result;
    return __result;
  }

This should turn into a switch on the character. See PR3253 for some notes on
codegen.

456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.
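
A sketch of the switch-based lowering suggested above, for a constant reject
set (the characters are chosen for illustration):

  #include <stddef.h>

  /* strcspn(s, "+-*") as a switch on each character: the chain of
     per-character compares becomes one switch dispatch. */
  size_t strcspn_ops(const char *s) {
    size_t i = 0;
    for (;; i++) {
      switch (s[i]) {
      case '\0':
      case '+':
      case '-':
      case '*':
        return i;
      default:
        break;
      }
    }
  }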
//===---------------------------------------------------------------------===//

"gas" uses this idiom:

  else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
    ...
  else if (strchr ("<>", *intel_parser.op_string))
    ...

Those should be turned into a switch.
//===---------------------------------------------------------------------===//

252.eon contains this interesting code:

  %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
  %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
  %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
  %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
  call void @llvm.memcpy.i32(i8* %endptr,
    i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
  %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple of reasons. First, in this:

  %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
  %strlen = call i32 @strlen(i8* %3072)

if the strcpy were converted to stpcpy (which returns a pointer to the end of
the string), the strlen could be replaced with "%strlen = sub %3073, %3072".
Based on that, the endptr GEP just becomes equal to %3073, which eliminates a
strlen call and a GEP.

Second, the memcpy+strlen can be replaced with:

  %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

because the string was just copied into the specified memory buffer. This,
in turn, can be constant folded to "4".

In other code, it contains:

  %endptr6978 = bitcast i8* %endptr69 to i32*
  store i32 7107374, i32* %endptr6978, align 1
  %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

which could also be constant folded. Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.
Further, eon also has an interesting partially redundant strlen call:

bb8:  ; preds = %_ZN18eonImageCalculatorC1Ev.exit
  %682 = getelementptr i8** %argv, i32 6   ; <i8**> [#uses=2]
  %683 = load i8** %682, align 4           ; <i8*> [#uses=4]
  %684 = load i8* %683, align 1            ; <i8> [#uses=1]
  %685 = icmp eq i8 %684, 0                ; <i1> [#uses=1]
  br i1 %685, label %bb10, label %bb9

bb9:  ; preds = %bb8
  %686 = call i32 @strlen(i8* %683) nounwind readonly
  %687 = icmp ugt i32 %686, 254            ; <i1> [#uses=1]
  br i1 %687, label %bb10, label %bb11

bb10: ; preds = %bb9, %bb8
  %688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.
//===---------------------------------------------------------------------===//

I see an interesting fully redundant call to strlen left in 186.crafty:InputMove:

  %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0
  ...

bb62: ; preds = %bb55, %bb53
  %promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
  %171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
  %172 = add i32 %171, -1                  ; <i32> [#uses=1]
  %173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
  ...
  br i1 %or.cond, label %bb65, label %bb72

bb65: ; preds = %bb62
  store i8 0, i8* %173, align 1
  ...

bb72: ; preds = %bb65, %bb62
  %trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
  %177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call. At worst, we could shove the %177 strlen call
up into the bb65 block, moving it out of the bb62->bb72 path. However, note
that bb65 stores to the string, zeroing out the last byte. This means that on
that path the value of %177 is actually just %171-1. A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.
//===---------------------------------------------------------------------===//

186.crafty contains this interesting pattern:

  %77 = call i8* @strstr(i8* getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0),
                         i8* %30)
  %phitmp648 = icmp eq i8* %77, getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0)
  br i1 %phitmp648, label %bb70, label %bb76

bb70: ; preds = %OptionMatch.exit91, %bb69
  %78 = call i32 @strlen(i8* %30) nounwind readonly align 1 ; <i32> [#uses=1]

This is basically doing:

  if (strstr(cststr, P) == cststr) {
    x = strlen(P);
    ...

The strstr call would be significantly cheaper written as:

  if (memcmp(P, cststr, strlen(P)) == 0)
    x = strlen(P);

This is memcmp+strlen instead of strstr. This also makes the strlen fully
redundant.
//===---------------------------------------------------------------------===//

186.crafty also contains this code:

  %1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0, i32 0))
  %1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
  %1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
  %1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0, i32 0))
  %1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909

If the strcpy is converted to stpcpy (whose result points at the end of the
string), the last strlen is computable as %1908-@pgn_event, which means
%1910 = %1908.
//===---------------------------------------------------------------------===//

186.crafty has this interesting pattern with the "out.4543" variable:

  call void @llvm.memcpy.i32(
    i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
    i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
  %101 = call @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(..., globalarray);

Anyway, by knowing that printf just reads the memory and by forward
substituting the string directly into the printf, this eliminates reads from
globalarray. Since this pattern occurs frequently in crafty (due to the
"DisplayTime" and other similar functions) there are many stores to "out".
Once all the printfs stop using "out", all that is left is the memcpy's into
it. This should allow globalopt to remove the "stored only" global.
//===---------------------------------------------------------------------===//

This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift. For example, on
x86 it currently gets a byte load, a byte arithmetic shift, and a separate
sign extension, while it could be a single sign-extending byte load (movsbl)
followed by a 32-bit shift.
//===---------------------------------------------------------------------===//

These should fold:

  int test(int x) { return 1-x == x; }     // --> return false
  int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants; what is the rule for even? (In wrapping
arithmetic, C-x == x means C == 2*x, which has no solution when C is odd; when
C is even there are two solutions, x == C/2 and x == C/2 + 2^31, so test2 is
not simply x == 1.)
//===---------------------------------------------------------------------===//

PR3381: a GEP to a field of size 0 inside a struct could be turned into a GEP
for the next field in the struct (which is at the same address).

For example: a store of float into { {{}}, float } could be turned into a
store to the float field directly.
//===---------------------------------------------------------------------===//

  double foo(double a) { return sin(a); }

This compiles into this on x86-64 Linux:

  ...
//===---------------------------------------------------------------------===//

The arg promotion pass should make use of nocapture to make its alias analysis
much more precise.
//===---------------------------------------------------------------------===//

The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

  char char_int(int m) { if(m>7) return 0; return m; }
  int int_char(char m) { if(m>7) return 0; return m; }
//===---------------------------------------------------------------------===//

  int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

currently compiles to:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128                     ; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0                   ; <i1> [#uses=1]
  %2 = or i32 %b, 128                      ; <i32> [#uses=1]
  %3 = and i32 %b, -129                    ; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2 ; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

  b = (b & ~0x80) | (a & 0x80);

which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129                    ; <i32> [#uses=1]
  %1 = and i32 %a, 128                     ; <i32> [#uses=1]
  %2 = or i32 %0, %1                       ; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

  b = (b & ~0x80) | (a & 0x40) << 1;
//===---------------------------------------------------------------------===//

These two functions produce different code. They shouldn't:

  #include <stdint.h>

  uint8_t p1(uint8_t b, uint8_t a) {
    b = (b & ~0xc0) | (a & 0xc0);
    return b;
  }

  uint8_t p2(uint8_t b, uint8_t a) {
    b = (b & ~0x40) | (a & 0x40);
    b = (b & ~0x80) | (a & 0x80);
    return b;
  }

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                       ; <i8> [#uses=1]
  %1 = and i8 %a, -64                      ; <i8> [#uses=1]
  %2 = or i8 %1, %0                        ; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                       ; <i8> [#uses=1]
  %.masked = and i8 %a, 64                 ; <i8> [#uses=1]
  %1 = and i8 %a, -128                     ; <i8> [#uses=1]
  %2 = or i8 %1, %0                        ; <i8> [#uses=1]
  %3 = or i8 %2, %.masked                  ; <i8> [#uses=1]
  ret i8 %3
}
//===---------------------------------------------------------------------===//

IPSCCP does not currently propagate argument dependent constants through
functions where it does not know all of the callers. This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  ...
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant. Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.
//===---------------------------------------------------------------------===//

The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constantfolding logic. This would allow IPSCCP to be able to
handle simple things like this:

  static int foo(const char *X) { return strlen(X); }
  int bar() { return foo("abcd"); }
//===---------------------------------------------------------------------===//

InstCombine should use SimplifyDemandedBits to remove the or instruction:

define i1 @test(i8 %x, i8 %y) {
  %A = or i8 %x, 1
  %B = icmp ugt i8 %A, 3
  ret i1 %B
}

Currently instcombine calls SimplifyDemandedBits with either all bits or just
the sign bit, if the comparison is obviously a sign test. In this case, we only
need all but the bottom two bits from %A, and if we gave that mask to SDB it
would delete the or instruction for us.
//===---------------------------------------------------------------------===//