Target Independent Opportunities:

//===---------------------------------------------------------------------===//
With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs. Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.

//===---------------------------------------------------------------------===//
Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//
Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :) This isn't
safe in general, even on darwin. See the libm implementation of hypot for
examples (which special-case when x/y are exactly zero to get signed zeros etc.
right).
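A minimal sketch of the intended expansion (fast_hypot is a hypothetical name,
for illustration only; unlike real hypot, this form can overflow/underflow for
extreme inputs, which is exactly why it is only valid under -ffast-math):

#include <math.h>

double fast_hypot(double x, double y) {
  /* hypot(x, y) with errno and precision concerns waived */
  return sqrt(x * x + y * y);
}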
//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

The problem is that the store's chain operand is not the load of X but rather
a TokenFactor of the load of X and the load of Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short term workaround
which will be removed once the proper fix is made.
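A small C example of the kind of code involved (a sketch, not the original
testcase): the store back to X cannot be folded with the load of X because the
store's chain is a TokenFactor of both loads.

int X, Y;

void f(void) {
  X |= Y << 3;   /* load X, load Y, then store X */
}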
//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

 for (i = ...; ++i)
   x = 1ULL << i;

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//
Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)

//===---------------------------------------------------------------------===//
Reassociate should turn: X*X*X*X -> t=(X*X) (t*t) to eliminate a multiply.

//===---------------------------------------------------------------------===//
An interesting testcase for add/shift/mul reassociation:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

Reassociate should handle the example in GCC PR16157.

//===---------------------------------------------------------------------===//
These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1, 2, 4, and 8 bytes.

//===---------------------------------------------------------------------===//
It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
things too.

//===---------------------------------------------------------------------===//
For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative, as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//
We should produce an unaligned load from code like this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}

//===---------------------------------------------------------------------===//
Add support for conditional increments, and other related patterns. Instead
of branching around the increment:

	je	LBB16_2	#cond_next
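In C terms, the transformation is roughly the following (a sketch; foo is a
hypothetical global):

extern int foo;

void inc_branchy(int x) {
  if (x)             /* compare + branch around the increment */
    ++foo;
}

void inc_branchless(int x) {
  foo += (x != 0);   /* conditional increment with no branch,
                        e.g. cmp/sbb style code on x86 */
}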
//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs. We could even make an intrinsic for this
if anyone cared enough about sincos.
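A sketch of the source pattern the combine would catch (note that sincos is a
GNU libm extension, not standard C):

#include <math.h>

void polar_to_xy(double r, double theta, double *x, double *y) {
  /* two libcalls today; should become one sincos(theta, &s, &c) call */
  *x = r * cos(theta);
  *y = r * sin(theta);
}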
//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

define void @test(i32* %P) {
  %tmp = load i32* %P
  %tmp14 = or i32 %tmp, 3305111552
  %tmp15 = and i32 %tmp14, 3321888767
  store i32 %tmp15, i32* %P
  ret void
}

//===---------------------------------------------------------------------===//
dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x. This shows up in code like:

  int t = __builtin_clz(x);
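The reasoning, sketched in C: for 32-bit x, clz yields a value in [0,32], and
only x==0 produces 32, so bit 5 of the result is exactly the x==0 test:

unsigned clz_form(unsigned x) {
  return __builtin_clz(x) >> 5;   /* the builtin is undefined for x == 0,
                                     but ISD::CTLZ can define clz(0) == 32 */
}

unsigned folded(unsigned x) {
  return x == 0;                  /* the equivalent single test */
}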
//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but this requires TBAA.

//===---------------------------------------------------------------------===//
This should be optimized to one 'and' and one 'or', from PR4216:

define i32 @test_bitfield(i32 %bf.prev.low) nounwind ssp {
entry:
  %bf.prev.lo.cleared10 = or i32 %bf.prev.low, 32962    ; <i32> [#uses=1]
  %0 = and i32 %bf.prev.low, -65536                     ; <i32> [#uses=1]
  %1 = and i32 %bf.prev.lo.cleared10, 40186             ; <i32> [#uses=1]
  %2 = or i32 %1, %0                                    ; <i32> [#uses=1]
  ret i32 %2
}

//===---------------------------------------------------------------------===//
This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//
254 unsigned countbits_slow(unsigned v) {
256 for (c = 0; v; v >>= 1)
260 unsigned countbits_fast(unsigned v){
263 v &= v - 1; // clear the least significant bit set
267 BITBOARD = unsigned long long
268 int PopCnt(register BITBOARD a) {
276 unsigned int popcount(unsigned int input) {
277 unsigned int count = 0;
278 for (unsigned int i = 0; i < 4 * 8; i++)
279 count += (input >> i) & i;
283 This is a form of idiom recognition for loops, the same thing that could be
284 useful for recognizing memset/memcpy.
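All of these should ideally collapse to a single population-count operation,
i.e. what this compiles to on targets with a popcount instruction:

unsigned countbits_ideal(unsigned v) {
  return __builtin_popcount(v);   /* a single ctpop node */
}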
//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors:

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}
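For the variant whose byte order matches the target, the result should be
equivalent to this sketch (one possibly-unaligned 16-bit load; the other
variant additionally needs a byte swap):

#include <string.h>

unsigned short read_16_native(const unsigned char *adr) {
  unsigned short v;
  memcpy(&v, adr, sizeof v);   /* lowers to a single 16-bit load */
  return v;
}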
//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X / C1), C2
when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//
viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass.
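The kind of loop such a pass would need to recognize, as a sketch:

void copy_history(int *dst, const int *src, int n) {
  for (int i = 0; i < n; i++)
    dst[i] = src[i];   /* should become memcpy(dst, src, n * sizeof(int))
                          when dst and src provably don't overlap */
}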
//===---------------------------------------------------------------------===//

Consider this code:

typedef unsigned U32;
typedef unsigned long long U64;

int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags. On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.

//===---------------------------------------------------------------------===//
LSR should know what GPR types a target has from TargetData. This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two near identical IVs (after promotion) on PPC/ARM:

	add r2, r2, #1 <- [0,+,1]
	sub r0, r0, #1 <- [0,-,1]

LSR should reuse the "+" IV for the exit test.

//===---------------------------------------------------------------------===//
Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
	                    [ %tmp.9, %then.1 ]
	ret i32 %result.0
}

//===---------------------------------------------------------------------===//
Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative. "return foo() << 1" can be tail recursion eliminated.
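What the eliminated form would look like, roughly (a sketch: the recursion
becomes a loop by treating the 2*f(n-1)+1 step as an accumulator update):

int pow2m1_iter(int n) {
  int acc = 0, scale = 1;
  for (; n != 0; --n) {
    acc += scale;    /* unwinds 2*acc + 1 from the innermost call outward */
    scale *= 2;
  }
  return acc;        /* == 2^n - 1, same as the recursive version */
}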
//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}

//===---------------------------------------------------------------------===//
We should investigate an instruction sinking pass. Consider this silly
example in pic mode, where the entry block materializes the PIC base and
then branches on an assertion:

	je	LBB1_2	# cond_true

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If not used in all paths through a
function, they should be sunk into the paths that do use them.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//
Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

//===---------------------------------------------------------------------===//
We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a yonah, this loop:

  double a[256];
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];

is twice as slow as this loop:

  long long a[256];
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.
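The scalar version of the transformation, as a sketch (assumes IEEE-754
doubles, where negation is exactly a flip of the top bit):

#include <stdint.h>
#include <string.h>

void fneg_inplace(double *p) {
  uint64_t bits;
  memcpy(&bits, p, sizeof bits);   /* integer load  */
  bits ^= 1ULL << 63;              /* flip sign bit */
  memcpy(p, &bits, sizeof bits);   /* integer store */
}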
//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer) a long sequence of
byte-sized loads and stores, e.g.:

	movb	_m_HotKey+3, %cl
	movb	_m_HotKey+4, %dl
	movb	_m_HotKey+2, %ch
	...
	movzwl	_m_HotKey+4, %edx
	...

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

	%struct.THotKey = type { i16, i8, i8, i8 }
	define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
	  %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
	  %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
	  %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
	  %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2
	  ...

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.

//===---------------------------------------------------------------------===//
We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//
Consider:

  long long input[8] = {1,1,1,1,1,1,1,1};

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able. This is good, but the memcpy
gets lowered to load/stores in the code generator. This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global. This gives us atrocious code like this:

	movl	_C.0.1444-"L1$pb"+32(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+20(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+36(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+44(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+40(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+12(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+4(%eax), %ecx
	...

//===---------------------------------------------------------------------===//
http://llvm.org/PR717:

The testcase in the PR should compile into "ret int undef". Instead, LLVM
produces "ret int 0".

//===---------------------------------------------------------------------===//
The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop. One trivial example is a loop whose body tests
the parity of the induction variable:

    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
      /* ... code conditional on (nLoop & 1) ... */
    }

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size. The resultant code would then also be suitable for
exit value computation.

//===---------------------------------------------------------------------===//
//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 9;
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 10;
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f2/f3/f4 right. On x86-32, a few of these
generate truly horrible code, instead of using shld and friends. On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness. PPC64 misses f, f5 and f6. CellSPU aborts in isel.

//===---------------------------------------------------------------------===//
We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy. This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.
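The pattern in C, as a sketch (the +1 copies the NUL terminator, making the
memcpy exactly equivalent to strcpy):

#include <string.h>

void copy_str(char *a, const char *b) {
  memcpy(a, b, strlen(b) + 1);   /* mergeable into strcpy(a, b), since b is
                                    not modified between the two calls */
}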
//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
   return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.
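A balanced multiplication tree for the x^8 case, sketched in C, uses 3
multiplies instead of 7:

double pow8(double x) {
  double x2 = x * x;
  double x4 = x2 * x2;
  return x4 * x4;   /* what expanding llvm.powi(x, 8) should produce */
}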
//===---------------------------------------------------------------------===//

We generate a horrible libcall for llvm.powi. For example, we compile:

double f(double a) { return std::pow(a, 4); }

into a call:

	movsd	16(%esp), %xmm0
	...
	call	L___powidf2$stub

while GCC simply squares the value twice:

	movsd	16(%esp), %xmm0
	mulsd	%xmm0, %xmm0
	mulsd	%xmm0, %xmm0

//===---------------------------------------------------------------------===//
We compile this program (from GCC PR11680):
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//
We miss some instcombines for stuff like this:

void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2). */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.

//===---------------------------------------------------------------------===//
Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mullo (cheaper). Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

I think this basically amounts to a dag combine to simplify comparisons against
multiply hi's into a comparison against the mullo.

//===---------------------------------------------------------------------===//
Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

    std::scanf("%d", &t.val);
    std::printf("%d\n", t.val);

//===---------------------------------------------------------------------===//
Instcombine will merge comparisons like (x >= 10) && (x < 20) by producing
(x - 10) u< 10, but only when the comparisons have matching sign.

This could be converted with a similar technique (PR1941):

define i1 @test(i8 %x) {
  %A = icmp uge i8 %x, 5
  %B = icmp slt i8 %x, 20
  %C = and i1 %A, %B
  ret i1 %C
}
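The already-handled matching-sign case, in C, for reference:

int in_range(unsigned x) {
  return x >= 10 && x < 20;   /* instcombine folds this to (x - 10) < 10,
                                 a single unsigned compare */
}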
//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6	;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
#define PMD_MASK    (~((1UL << 23) - 1))

void clear_pmd_range(unsigned long start, unsigned long end)
{
  if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
    f();
}

The expression should optimize to something like
"!((start|end)&~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
void
foo (unsigned int a, unsigned int b)
{
  if (a <= 7 && b <= 7)
    baz ();
}

Should combine to "(a|b) <= 7". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int
f (int n)
{
  return (n >= 0 ? 1 : -1);
}

Should combine to (n >> 31) | 1. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//
int test(int a, int b)
{
  return (a < b) || (a == b);
}

Should combine to "a <= b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//
void a(int variable)
{
  if (variable == 4 || variable == 6)
    bar();
}

This should optimize to "if ((variable | 2) == 6)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//
unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

These should combine to the same thing. Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//
#define abs(x) x>0?x:-x
int f(int x)
{
  return (abs(x)) >= 0;
}

This should optimize to x != INT_MIN. (With -fwrapv.) Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}

int
minus_cst (unsigned int a)
{
  unsigned int tem;

  tem = 20 - a;
  if (tem > 5)
    bar ();
}

void
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15. */
  if ((a & ~7) > 8)
    bar ();
}

void
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23. */
  if ((a >> 2) > 5)
    bar ();
}

All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//
int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned char* b) {return *b > 99;}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return ((a | 1) & 3) | (a & -4);}
Should combine to "a | 1". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)". Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane. Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -1 : -9;}
Should combine to (x | -9) ^ 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -9 : -1;}
Should combine to x | -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
  (!tmp && decl_context == 1)

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0 		; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true 		; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24 		; <i1> [#uses=1]

etc.

//===---------------------------------------------------------------------===//
Store sinking: This code:

void f (int n, int *cond, int *res) {
  int i;
  *res = 0;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out. This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.
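What the sunk form would look like at the source level (a sketch; valid
assuming *res doesn't alias *cond, which the optimizer would have to prove):

void f_sunk(int n, int *cond, int *res) {
  int i, r = 0;
  for (i = 0; i < n; i++)
    if (*cond)
      r ^= 234;     /* value stays in a register inside the loop */
  *res = r;         /* single store, sunk out of the loop */
}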
Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//
Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//
GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store. We need partially dead store sinking.

//===---------------------------------------------------------------------===//
[LOAD PRE with NON-AVAILABLE ADDRESS]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation. The code we get looks like (g is on the stack):

	...
	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is partially redundant; in BB2 it should have the value %8.

GCC PR33344 is a similar case.

//===---------------------------------------------------------------------===//
There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite. There are also many PRE testcases named ssa-pre-*.c.

//===---------------------------------------------------------------------===//
There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite. For example, predcom-1.c is:

 for (i = 2; i < 1000; i++)
    fib[i] = (fib[i-1] + fib[i - 2]) & 0xffff;

which compiles into:

bb1:		; preds = %bb1, %bb1.thread
	%indvar = phi i32 [ 0, %bb1.thread ], [ %0, %bb1 ]
	%i.0.reg2mem.0 = add i32 %indvar, 2
	%0 = add i32 %indvar, 1		; <i32> [#uses=3]
	%1 = getelementptr [1000 x i32]* @fib, i32 0, i32 %0
	%2 = load i32* %1, align 4		; <i32> [#uses=1]
	%3 = getelementptr [1000 x i32]* @fib, i32 0, i32 %indvar
	%4 = load i32* %3, align 4		; <i32> [#uses=1]
	%5 = add i32 %4, %2		; <i32> [#uses=1]
	%6 = and i32 %5, 65535		; <i32> [#uses=1]
	%7 = getelementptr [1000 x i32]* @fib, i32 0, i32 %i.0.reg2mem.0
	store i32 %6, i32* %7, align 4
	%exitcond = icmp eq i32 %0, 998		; <i1> [#uses=1]
	br i1 %exitcond, label %return, label %bb1

Instead of handling this as a loop or other xform, all we'd need to do is teach
load PRE to phi translate the %0 add (i+1) into the predecessor as (i'+1+1) =
(i'+2) (where i' is the previous iteration of i). This would find the store
which feeds it.

predcom-2.c is apparently the same as predcom-1.c.
predcom-3.c is very similar but needs loads feeding each other instead of
store->load.
predcom-4.c seems the same as the rest.

//===---------------------------------------------------------------------===//
Other simple load PRE cases:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35287 [LPRE crit edge splitting]

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34677 (licm does this, LPRE crit edge)
  llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as | opt -mem2reg -simplifycfg -gvn | llvm-dis

//===---------------------------------------------------------------------===//
Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

//===---------------------------------------------------------------------===//
A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store. This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

struct X { int i; };
...

//===---------------------------------------------------------------------===//
Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629

With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
      opt -mem2reg -gvn -instcombine | llvm-dis

we miss it because we need 1) GEP PHI TRAN, 2) CRIT EDGE, 3) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS.

//===---------------------------------------------------------------------===//
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, since loading from null is
undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//
simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "") -> strlen(x)
strspn(x, "") -> strlen(x)
strspn(x, "a") -> strchr(x, 'a')-x

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character. See PR3253 for some notes on
codegen.

456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.

//===---------------------------------------------------------------------===//
1332 "gas" uses this idiom:
1333 else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
1335 else if (strchr ("<>", *intel_parser.op_string)
1337 Those should be turned into a switch.
1339 //===---------------------------------------------------------------------===//
252.eon contains this interesting code:

	%3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
	%3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
	%strlen = call i32 @strlen(i8* %3072)    ; uses = 1
	%endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
	call void @llvm.memcpy.i32(i8* %endptr,
	  i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
	%3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple reasons. First, in this:

	%3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
	%strlen = call i32 @strlen(i8* %3072)

The strlen could be replaced with: %strlen = sub %3072, %3073, because the
strcpy call returns a pointer to the end of the string. Based on that, the
endptr GEP just becomes equal to 3073, which eliminates a strlen call and GEP.

Second, the strlen after the memcpy can be replaced with:

	%3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

because the destination was just copied into the specified memory buffer. This,
in turn, can be constant folded to "4".

In other code, it contains:

	%endptr6978 = bitcast i8* %endptr69 to i32*
	store i32 7107374, i32* %endptr6978, align 1
	%3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

Which could also be constant folded. Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:		; preds = %_ZN18eonImageCalculatorC1Ev.exit
	%682 = getelementptr i8** %argv, i32 6		; <i8**> [#uses=2]
	%683 = load i8** %682, align 4		; <i8*> [#uses=4]
	%684 = load i8* %683, align 1		; <i8> [#uses=1]
	%685 = icmp eq i8 %684, 0		; <i1> [#uses=1]
	br i1 %685, label %bb10, label %bb9

bb9:		; preds = %bb8
	%686 = call i32 @strlen(i8* %683) nounwind readonly
	%687 = icmp ugt i32 %686, 254		; <i1> [#uses=1]
	br i1 %687, label %bb10, label %bb11

bb10:		; preds = %bb9, %bb8
	%688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//
I see an interesting fully redundant call to strlen left in 186.crafty:InputMove:

	%movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0

bb62:		; preds = %bb55, %bb53
	%promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
	%171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
	%172 = add i32 %171, -1		; <i32> [#uses=1]
	%173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
	...
	br i1 %or.cond, label %bb65, label %bb72

bb65:		; preds = %bb62
	store i8 0, i8* %173, align 1
	...

bb72:		; preds = %bb65, %bb62
	%trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
	%177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call. At worst, we could shove the %177 strlen call
up into the bb65 block, moving it out of the bb62->bb72 path. However, note
that bb65 stores to the string, zeroing out the last byte. This means that on
that path the value of %177 is actually just %171-1. A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//
186.crafty contains this interesting pattern:

	%77 = call i8* @strstr(i8* getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0),
	                       i8* %30)
	%phitmp648 = icmp eq i8* %77, getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0)
	br i1 %phitmp648, label %bb70, label %bb76

bb70:		; preds = %OptionMatch.exit91, %bb69
	%78 = call i32 @strlen(i8* %30) nounwind readonly align 1	; <i32> [#uses=1]

which is basically:

  if (strstr(cststr, P) == cststr) {
    x = strlen(P);
    ...

The strstr call would be significantly cheaper written as:

  if (memcmp(P, str, strlen(P)))
    x = strlen(P);

This is memcmp+strlen instead of strstr. This also makes the strlen fully
redundant.

//===---------------------------------------------------------------------===//
186.crafty also contains this code:

	%1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0, i32 0))
	%1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
	%1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
	%1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0, i32 0))
	%1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909

The last strlen is computable as 1908-@pgn_event, which means 1910=1908.

//===---------------------------------------------------------------------===//
186.crafty has this interesting pattern with the "out.4543" variable:

	call void @llvm.memcpy.i32(
	        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
	        i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
	%101 = call @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(..., globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out". Once all the printfs
stop using "out", all that is left is the memcpy's into it. This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//
This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift.
For example, on x86 this currently gets this:

	movb	(%eax), %al
	sarb	$5, %al
	movsbl	%al, %eax

while it could get this:

	movsbl	(%eax), %eax
	sarl	$5, %eax

//===---------------------------------------------------------------------===//
int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants, what is the rule for even?

//===---------------------------------------------------------------------===//
PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into { {{}}, float } could be turned into a store
to the float directly.

//===---------------------------------------------------------------------===//
double foo(double a) { return sin(a); }

This compiles into this on x86-64 Linux:

	...

//===---------------------------------------------------------------------===//
The arg promotion pass should make use of nocapture to make its alias analysis
much more precise.

//===---------------------------------------------------------------------===//
The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//
int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128                            ; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
  %2 = or i32 %b, 128                             ; <i32> [#uses=1]
  %3 = and i32 %b, -129                           ; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2        ; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

b = (b & ~0x80) | (a & 0x80);

Which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129                           ; <i32> [#uses=1]
  %1 = and i32 %a, 128                            ; <i32> [#uses=1]
  %2 = or i32 %0, %1                              ; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

b = (b & ~0x80) | (a & 0x40) << 1;

//===---------------------------------------------------------------------===//
These two functions produce different code. They shouldn't:

#include <stdint.h>

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return b;
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return b;
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %1 = and i8 %a, -64                             ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %.masked = and i8 %a, 64                        ; <i8> [#uses=1]
  %1 = and i8 %a, -128                            ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  %3 = or i8 %2, %.masked                         ; <i8> [#uses=1]
  ret i8 %3
}

//===---------------------------------------------------------------------===//
IPSCCP does not currently propagate argument dependent constants through
functions where it does not see all of the callers. This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  ...
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant. Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.

//===---------------------------------------------------------------------===//
The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constantfolding logic. This would allow IPSCCP to be able to
handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }

//===---------------------------------------------------------------------===//
InstCombine should use SimplifyDemandedBits to remove the or instruction:

define i1 @test(i8 %x, i8 %y) {
  %A = or i8 %x, 1
  %B = icmp ugt i8 %A, 3
  ret i1 %B
}

Currently instcombine calls SimplifyDemandedBits with either all bits or just
the sign bit, if the comparison is obviously a sign test. In this case, we only
need all but the bottom two bits from %A, and if we gave that mask to SDB it
would delete the or instruction for us.

//===---------------------------------------------------------------------===//