Target Independent Opportunities:

//===---------------------------------------------------------------------===//

We should make the various targets' "IMPLICIT_DEF" instructions be a single
target-independent opcode like TargetInstrInfo::INLINEASM. This would allow
us to eliminate the TargetInstrDesc::isImplicitDef() method, and would allow
us to avoid having to define this for every target for every register class.

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs. Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.

//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :)
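
A sketch of the transform (illustrative; assumes errno and hypot's protection
against intermediate overflow can both be ignored, i.e. fast-math):

#include <math.h>

double dist(double x, double y) {
  /* today: a libcall to hypot */
  return hypot(x, y);
  /* fast-math could instead emit sqrt(x*x + y*y), i.e. llvm.sqrt(x*x + y*y),
     trading hypot's guard against overflow of x*x + y*y for speed */
}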

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:
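
One C pattern that exhibits the problem (illustrative, not necessarily the
original testcase):

int X, Y;
void fn1(void) {
  X = X | (Y << 3);  /* the store to X chains on both loads */
}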

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;
}

into:

long long tmp = 1;
for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
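
In C terms, a sketch of the idea ("Phi" being the byte of P that holds the
sign bit, e.g. P+3 on a little-endian target):

int is_negative(int *P) {
  return *P < 0;  /* a 32-bit load today; one byte load would suffice */
}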

//===---------------------------------------------------------------------===//

Reassociate should turn: X*X*X*X -> t=(X*X) (t*t) to eliminate a multiply.

//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

Reassociate should handle the example in GCC PR16157.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative, as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should add 'unaligned load/store' nodes, and produce them from code like
this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns. Instead
of:

	movl 136(%esp), %eax
	cmpl $0, %eax
	je LBB16_2	#cond_next
LBB16_1:	#cond_true
	incl _foo
LBB16_2:	#cond_next

emit:
	movl	_foo, %eax
	cmpl	$1, %edi
	sbbl	$-1, %eax
	movl	%eax, _foo

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
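
A sketch of the first combine (illustrative; sincos is a GNU extension, not
guaranteed by ISO C):

#include <math.h>

void polar_to_cart(double r, double theta, double *x, double *y) {
  double s = sin(theta);  /* two libcalls today... */
  double c = cos(theta);
  /* ...which could become one:
       double s, c;
       sincos(theta, &s, &c);  */
  *x = r * c;
  *y = r * s;
}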

//===---------------------------------------------------------------------===//

Scalar Repl cannot currently promote this testcase to 'ret long cst':

	%struct.X = type { i32, i32 }
	%struct.Y = type { %struct.X }

define i64 @bar() {
	%retval = alloca %struct.Y, align 8
	%tmp12 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 0
	store i32 0, i32* %tmp12
	%tmp15 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 1
	store i32 1, i32* %tmp15
	%retval.upgrd.1 = bitcast %struct.Y* %retval to i64*
	%retval.upgrd.2 = load i64* %retval.upgrd.1
	ret i64 %retval.upgrd.2
}

It should be extended to do so.

//===---------------------------------------------------------------------===//

-scalarrepl should promote this to be a vector scalar.

	%struct..0anon = type { <4 x float> }

define void @test1(<4 x float> %V, float* %P) {
	%u = alloca %struct..0anon, align 16
	%tmp = getelementptr %struct..0anon* %u, i32 0, i32 0
	store <4 x float> %V, <4 x float>* %tmp
	%tmp1 = bitcast %struct..0anon* %u to [4 x float]*
	%tmp.upgrd.1 = getelementptr [4 x float]* %tmp1, i32 0, i32 1
	%tmp.upgrd.2 = load float* %tmp.upgrd.1
	%tmp3 = mul float %tmp.upgrd.2, 2.000000e+00
	store float %tmp3, float* %P
	ret void
}

//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

void %test(uint* %P) {
	%tmp = load uint* %P
	%tmp14 = or uint %tmp, 3305111552
	%tmp15 = and uint %tmp14, 3321888767
	store uint %tmp15, uint* %P
	ret void
}

//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.

For example, given

  int t = __builtin_clz(x);

"t >> 5" is 1 exactly when x == 0, since clz of a nonzero 32-bit value is in
the range [0,31]; the clz and shift should become a compare against zero.

//===---------------------------------------------------------------------===//

Legalize should lower cttz like this:
  cttz(x) = popcnt((x-1) & ~x)

on targets that have popcnt but not cttz. itanium, what else?
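
A quick sanity check of the identity (a sketch using GCC builtins):

static unsigned cttz_via_popcnt(unsigned x) {
  /* (x-1) & ~x has a 1 exactly in the positions of x's trailing zeros */
  return __builtin_popcount((x - 1) & ~x);
}
/* for any nonzero x, cttz_via_popcnt(x) == __builtin_ctz(x) */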

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);

   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but alas...

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//

These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c=0;
  while(a) {
    c++;
    a &= a - 1;
  }
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors.

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X, C1), C2
when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.
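
For reference, the matching-sign case that is handled looks like (a sketch):

int in_twenties(unsigned x) {
  return x / 10 == 2;  /* udiv+icmp can fold to: (x - 20) u< 10 */
}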

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass.
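
The kind of loop such a pass would rewrite (a sketch; valid only when the two
ranges don't overlap):

#include <string.h>

void copy_history(char *dst, const char *src, int n) {
  for (int i = 0; i < n; i++)  /* byte-at-a-time copy loop... */
    dst[i] = src[i];
  /* ...is just: memcpy(dst, src, n); */
}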

//===---------------------------------------------------------------------===//

typedef unsigned U32;
typedef unsigned long long U64;

int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags. On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

//===---------------------------------------------------------------------===//

Promote for i32 bswap can use i64 bswap + shr. Useful on targets with 64-bit
regs and bswap, like itanium.
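
A sketch with GCC builtins:

unsigned bswap32_via_64(unsigned x) {
  /* byte-swapping the zero-extended value leaves the swapped 32-bit
     result in the high half; shift it back down */
  return (unsigned)(__builtin_bswap64(x) >> 32);
}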

//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has. This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two identical IV's (after promotion) on PPC/ARM:

LBB1_1: @bb.preheader
	...
	add r1, r1, #1    <- [0,+,1]
	...
	add r2, r2, #1    <- [0,+,1]
	...

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
			[ %tmp.9, %then.1 ]
	ret i32 %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination is not transforming this function, because it is
returning n, which fails the isDynamicConstant check in the accumulator
recursion checks.

long long fib(const long long n) {
  switch(n) {
    case 0:
    case 1:
      return n;
    default:
      return fib(n-1) + fib(n-2);
  }
}

//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative. "return foo() << 1" can be tail recursion eliminated.
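
What accumulator-based TRE could produce for pow2m1 (a sketch):

int pow2m1_loop(int n) {
  int acc = 0;
  while (n-- > 0)
    acc = 2 * acc + 1;  /* or: acc = (acc << 1) | 1 */
  return acc;
}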

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}

//===---------------------------------------------------------------------===//
499 "basicaa" should know how to look through "or" instructions that act like add
500 instructions. For example in this code, the x*4+1 is turned into x*4 | 1, and
501 basicaa can't analyze the array subscript, leading to duplicated loads in the
504 void test(int X, int Y, int a[]) {
506 for (i=2; i<1000; i+=4) {
507 a[i+0] = a[i-1+0]*a[i-2+0];
508 a[i+1] = a[i-1+1]*a[i-2+1];
509 a[i+2] = a[i-1+2]*a[i-2+2];
510 a[i+3] = a[i-1+3]*a[i-2+3];
514 //===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass. Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

we compile this to:

_foo:
	call	"L1$pb"
"L1$pb":
	popl	%eax
	...
	cmpl	$0, 32(%esp)
	je	LBB1_2	# cond_true
	...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html
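
A hand-rolled sketch of the lowering (the multiplier here was found by trial;
a real implementation would search for a collision-free hash as the page
above describes):

int classify(unsigned key) {
  /* sparse cases 17 -> 1, 1000 -> 2, 80000 -> 3; default 0 */
  static const unsigned keys[8] = { 1000, 0, 0, 0, 17, 80000, 0, 0 };
  static const int      vals[8] = {    2, 0, 0, 0,  1,     3, 0, 0 };
  unsigned h = (key * 0x9E3779B9u) >> 29;  /* 1000->0, 17->4, 80000->5 */
  return keys[h] == key ? vals[h] : 0;     /* one compare validates h */
}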

//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer) code that copies the
struct a byte at a time:

	...
	movb	_m_HotKey+3, %cl
	movb	_m_HotKey+4, %dl
	movb	_m_HotKey+2, %ch
	...

where one 32-bit load plus one 16-bit load would do, e.g.:

	...
	movzwl	_m_HotKey+4, %edx
	...

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

	%struct.THotKey = type { i16, i8, i8, i8 }
define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
	...
	%tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
	%tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
	%tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
	%tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2
	...

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.

//===---------------------------------------------------------------------===//

We should extend parameter attributes to capture more information about
pointer parameters for alias analysis. Some ideas:

1. Add a "nocapture" attribute, which indicates that the callee does not store
   the address of the parameter into a global or any other memory location
   visible to the caller. This can be used to make basicaa and other analyses
   more powerful. It is true for things like memcpy, strcat, and many other
   things, including structs passed by value, most C++ references, etc.
2. Generalize readonly to be set on parameters. This is important mod/ref
   info for the function, which is important for basicaa and others. It can
   also be used by the inliner to avoid inserting a memcpy for byval
   arguments when the function is inlined.

These attributes can be inferred by various analysis passes such as the
globalsmodrefaa pass. Note that getting #2 right is actually really tricky.
Consider this code:

struct S { int field; };
S G;
void caller(S byvalarg) { G.field = 1; ... }
void callee() { caller(G); }

The fact that the caller does not modify byval arg is not enough, we need
to know that it doesn't modify G either. This is very tricky.

//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//

This GCC bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34043
contains a testcase that compiles down to:

	%struct.XMM128 = type { <4 x float> }
	..
	%src = alloca %struct.XMM128
	..
	%tmp6263 = bitcast %struct.XMM128* %src to <2 x i64>*
	%tmp65 = getelementptr %struct.XMM128* %src, i32 0, i32 0
	store <2 x i64> %tmp5899, <2 x i64>* %tmp6263, align 16
	%tmp66 = load <4 x float>* %tmp65, align 16
	%tmp71 = add <4 x float> %tmp66, %tmp66

If the mid-level optimizer turned the bitcast of pointer + store of tmp5899
into a bitcast of the vector value and a store to the pointer, then the
store->load could be easily removed.

//===---------------------------------------------------------------------===//

Consider:

int test() {
  long long input[8] = {1,1,1,1,1,1,1,1};
  foo(input);
}

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able. This is good, but the memcpy
gets lowered to load/stores in the code generator. This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global. This gives us atrocious code like this:

	movl _C.0.1444-"L1$pb"+32(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+20(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+36(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+44(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+40(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+12(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+4(%eax), %ecx
	...

when it could just store the constant values to the stack directly.

//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The testcase there should compile into "ret int undef". Instead, LLVM
produces "ret int 0".

//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop. One trivial example is:

int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size. The resultant code would then also be suitable for
exit value computation.

//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 9;
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 10;
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f2/f3/f4 right. On x86-32, a few of these
generate truly horrible code, instead of using shld and friends. On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness. PPC64 misses f, f5 and f6. CellSPU aborts in isel.

//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy. This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.
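
The pattern in C (a sketch):

#include <string.h>

void copy(char *a, const char *b) {
  memcpy(a, b, strlen(b));  /* strcpy(a, b) minus the NUL terminator;
                               with strlen(b)+1 it is exactly strcpy */
}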

//===---------------------------------------------------------------------===//

We should be able to evaluate this loop:

int test(int x_offs) {
  while (x_offs > 4)
    x_offs -= 4;
  return x_offs;
}

//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.
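
The balanced tree for X**8, as the code generator could emit it (a sketch):

int pow8(int x) {
  int x2 = x * x;    /* X**2 */
  int x4 = x2 * x2;  /* X**4 */
  return x4 * x4;    /* X**8: 3 multiplies instead of 7 */
}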

//===---------------------------------------------------------------------===//

We generate a horrible libcall for llvm.powi. For example, we compile:

double f(double a) { return std::pow(a, 4); }

into:

	...
	movsd	16(%esp), %xmm0
	...
	call	L___powidf2$stub
	...

when the pow is really just two multiplies (a2 = a*a; return a2*a2), e.g.:

	movsd	16(%esp), %xmm0
	mulsd	%xmm0, %xmm0
	mulsd	%xmm0, %xmm0
	...

//===---------------------------------------------------------------------===//

We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

Into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//

We miss some instcombines for stuff like this:

void bar (void);
void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2). */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.

//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mullo (cheaper). Testcase:

void bar(unsigned n) {
  true(n % 3 == 0);
}

I think this basically amounts to a dag combine to simplify comparisons against
multiply hi's into a comparison against the mullo.
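
The mullo form of the n % 3 == 0 test (a sketch; 0xAAAAAAAB is the
multiplicative inverse of 3 mod 2**32):

int divisible_by_3(unsigned n) {
  return n * 0xAAAAAAABu <= 0x55555555u;  /* same as n % 3 == 0 */
}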

//===---------------------------------------------------------------------===//

SROA is not promoting the union on the stack in this example, we should end
up with no allocas at all:

union vec2d {
    double e[2];
    double v __attribute__((vector_size(16)));
};
typedef union vec2d vec2d;

static vec2d a={{1,2}}, b={{3,4}};

vec2d foo () {
    return (vec2d){ .v = a.v + b.v * (vec2d){{5,5}}.v };
}

//===---------------------------------------------------------------------===//

This C++ code:

void g(); struct A { int n; int m; A& operator++(void) { ++n; if (n == m) g();
return *this; } A() : n(0), m(0) { } friend bool operator!=(A const& a1,
A const& a2) { return a1.n != a2.n; } }; void testfunction(A& iter) { A const
end; while (iter != end) ++iter; }

compiles down to a loop containing (in part):

bb:		; preds = %bb3.backedge, %bb.nph
	%.rle = phi i32 [ %1, %bb.nph ], [ %7, %bb3.backedge ]	; <i32> [#uses=1]
	%4 = add i32 %.rle, 1		; <i32> [#uses=2]
	store i32 %4, i32* %0, align 4
	%5 = load i32* %3, align 4		; <i32> [#uses=1]
	%6 = icmp eq i32 %4, %5		; <i1> [#uses=1]
	br i1 %6, label %bb1, label %bb3.backedge

bb1:		; preds = %bb
	tail call void @_Z1gv()
	br label %bb3.backedge

bb3.backedge:		; preds = %bb, %bb1
	%7 = load i32* %0, align 4		; <i32> [#uses=2]
	...

The %7 load is partially redundant with the store of %4 to %0; GVN's PRE
should remove it, but PRE doesn't currently apply to memory objects.

//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

#include <cstdio>
struct t {
    int val;
};
int main() {
    struct t t;
    std::scanf("%d", &t.val);
    std::printf("%d\n", t.val);
}

//===---------------------------------------------------------------------===//

Instcombine will merge comparisons like (x >= 10) && (x < 20) by producing (x -
10) u< 10, but only when the comparisons have matching sign.

This could be converted with a similar technique (PR1941):

define i1 @test(i8 %x) {
  %A = icmp uge i8 %x, 5
  %B = icmp slt i8 %x, 20
  %C = and i1 %A, %B
  ret i1 %C
}

//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6	;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//

These two functions should compile to the same thing:

int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

#define PMD_MASK    (~((1UL << 23) - 1))

void clear_pmd_range(unsigned long start, unsigned long end)
{
	if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
		f();
}

The expression should optimize to something like
"!((start|end)&~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

void
foo (unsigned int a, unsigned int b)
{
  if (a <= 7 && b <= 7)
    baz ();
}

Should combine to "(a|b) <= 7". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int
f (int n)
{
  return (n >= 0 ? 1 : -1);
}

Should combine to (n >> 31) | 1. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//

int test(int a, int b)
{
  return (a < b) || (a == b);
}

Should combine to "a <= b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//

void a(int variable)
{
    if (variable == 4 || variable == 6)
        bar();
}

This should optimize to "if ((variable | 2) == 6)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

These should combine to the same thing. Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//

#define abs(x) x>0?x:-x
int f(int x)
{
  return (abs(x)) >= 0;
}

This should optimize to x != INT_MIN (with -fwrapv; INT_MIN is the only
value whose negation stays negative). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned int
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}

unsigned int
minus_cst (unsigned int a)
{
  unsigned int tem;

  tem = 20 - a;
  if (tem == 5)
    bar ();
}

unsigned int
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15. */
  if ((a & ~7) > 8)
    bar ();
}

unsigned int
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23. */
  if ((a >> 2) > 5)
    bar ();
}

All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//

int c(int* x) {return (char*)x+2 == (char*)x;}

Should combine to 0. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned char* b) {return *b > 99;}

There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}

Should be combined to "((b >> 1) | b) & 1". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}

Should combine to "x | (y & 3)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return ((a | 1) & 3) | (a & -4);}

Should combine to "a | 1". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}

Should fold to "(~a & c) | (a & b)". Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}

Should fold to "a|~b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}

Should fold to "a != 0" (the && and || operators already produce 0 or 1).
Currently not optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}

Should fold to "a ? b : c", or at least something sane. Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}

Should fold to a && (b || c). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}

Should combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}

Should also combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -1 : -9;}

Should combine to (x | -9) ^ 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -9 : -1;}

Should combine to x | -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}

Should combine to x & -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}

Should combine to "a * 0x88888888 >> 31". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}

There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}

Should combine to "20 * (((unsigned)x) & -2)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

We would like to do the following transform in the instcombiner:

  -X/C -> X/-C

However, this isn't valid if (-X) overflows. We can implement this when we
have the concept of a "C signed subtraction" operator that is undefined on
overflow.

//===---------------------------------------------------------------------===//

This was noticed in the entryblock for grokdeclarator in 403.gcc:

	%tmp = icmp eq i32 %decl_context, 4
	%decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
	%tmp1 = icmp eq i32 %decl_context_addr.0, 1
	%decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to:
  (decl_context == 1)
since the select can only produce 1 when %tmp is false. This allows
recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

	%tmp23 = icmp eq i32 %decl_context_addr.1, 0		; <i1> [#uses=1]
	%tmp24 = xor i1 %tmp1, true		; <i1> [#uses=1]
	%or.cond8 = and i1 %tmp23, %tmp24		; <i1> [#uses=1]

//===---------------------------------------------------------------------===//

Store sinking: This code:

void f (int n, int *cond, int *res) {
    int i;
    *res = 0;
    for (i = 0; i < n; i++)
        if (*cond)
            *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out. This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.
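
What the sunk-store version looks like in C (a sketch, assuming res and cond
don't alias):

void f2(int n, int *cond, int *res) {
  int r = 0, i;
  for (i = 0; i < n; i++)
    if (*cond)
      r ^= 234;
  *res = r;  /* one store, outside the loop */
}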

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path:

for () {
  *P += 1;
  if ()
    call();
  else
    ...
->
tmp = *P
for () {
  tmp += 1;
  if () {
    *P = tmp;
    call();
    tmp = *P;
  } else ...
}
*P = tmp;

//===---------------------------------------------------------------------===//

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation. The code we get looks like (g is on the stack):

bb2:		; preds = %bb
	...
	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is fully redundant, and in BB2 it should have the value %8.

GCC PR33344 is a similar case.

//===---------------------------------------------------------------------===//

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite, and many more PRE testcases named ssa-pre-*.c.

Other simple load PRE cases:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35287
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34677 (licm does this)
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29789 (SPEC2K6)
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=23455
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

//===---------------------------------------------------------------------===//

When GVN/PRE finds a store of a float to a pointer that must-aliases an
expected int*, it should turn the mismatch into a bitcast. This is a nice
generalization of the SROA hack that would apply to other cases, e.g.:

int foo(int C, int *P, float X) {
  if (C) {
    bar();
    *P = 42;
  } else
    *(float*)P = X;

  return *P;
}

One example (that requires crazy phi translation) is:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16799

//===---------------------------------------------------------------------===//

A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store. This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629

//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, loading from null is undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//