Target Independent Opportunities:

//===---------------------------------------------------------------------===//
With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call'
instructions so that the .td files don't list all the call-clobbered
registers as implicit defs.  Instead, these should be added by the code
generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call
   instructions for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different
   clobber sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber
   sets of calls.

//===---------------------------------------------------------------------===//
Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//
Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (ffastmath).  Misc/mandel will like this. :)  This isn't
safe in general, even on darwin.  See the libm implementation of hypot for
examples (which special case when x/y are exactly zero to get signed zeros etc
right).
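I.e., under -ffast-math the call could expand to something like this sketch
(hypothetical name; ok only because errno and overflow don't matter there):

#include <math.h>

double fast_hypot(double x, double y) {
  return sqrt(x*x + y*y);   /* no errno, no overflow care: fast-math only */
}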
//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:
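A minimal case has two loads feeding one store, e.g. (globals X and Y
assumed, as the discussion below implies):

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}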
The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack.  But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//
On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i)
  x = 1ULL << i;

into:

long long tmp = 1;
for (i = ...; ++i, tmp+=tmp)
  x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//
Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 P_hi), 0), i.e. test the
sign by loading only the byte that holds the sign bit.

//===---------------------------------------------------------------------===//
Reassociate should turn X*X*X*X into t=(X*X); (t*t) to eliminate a multiply.

//===---------------------------------------------------------------------===//
Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

Reassociate should handle the example in GCC PR16157.

//===---------------------------------------------------------------------===//
These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

this could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.

//===---------------------------------------------------------------------===//
It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//
For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size.  It works but can be overly conservative as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//
We should add 'unaligned load/store' nodes, and produce them from code like
this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//
Add support for conditional increments, and other related patterns.  Instead
of a compare and a branch around the increment:

	cmpl $0, %eax
	je LBB16_2	#cond_next
LBB16_1:	#cond_true
	incl _foo
LBB16_2:	#cond_next

emit a branchless sequence (e.g. cmp/sbb or cmov on x86).

//===---------------------------------------------------------------------===//
Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs.  We could even make an intrinsic for this
if anyone cared enough about sincos.
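A source-level sketch of the combine (assuming the GNU void-returning
sincos; the prototypes above return a value):

#include <math.h>
extern void sincos(double x, double *s, double *c);

double f(double x) {
  double a = sin(x), b = cos(x);      /* two libcalls today           */
  /* double a, b; sincos(x, &a, &b);     one combined call instead    */
  return a - b;
}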
//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

define void @test(i32* %P) {
	%tmp = load i32* %P
	%tmp14 = or i32 %tmp, 3305111552
	%tmp15 = and i32 %tmp14, 3321888767
	store i32 %tmp15, i32* %P
	ret void
}

//===---------------------------------------------------------------------===//
dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x, e.g. for:

  int t = __builtin_clz(x);
  ... t >> 5 ...

t>>5 is nonzero exactly when x == 0, since only clz(0) == 32 has bit 5 set.

//===---------------------------------------------------------------------===//
Legalize should lower cttz like this:
  cttz(x) = popcnt((x-1) & ~x)

on targets that have popcnt but not cttz.  itanium, what else?
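A quick check of the identity in C ((x-1) & ~x keeps exactly the bits below
the lowest set bit; it also yields all-ones, i.e. 32, for x == 0):

#include <stdint.h>

unsigned cttz32(uint32_t x) {
  return __builtin_popcount((x - 1) & ~x);
}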
//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int.  On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias
reg->node[i].

//===---------------------------------------------------------------------===//
This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//
These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c=0;
  while(a) {
    c++;
    a &= a - 1;
  }
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

//===---------------------------------------------------------------------===//
These should turn into single 16-bit (unaligned?) loads on little/big endian
machines.

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//
-instcombine should handle this transform:
   icmp pred (sdiv X, C1), C2
when X, C1, and C2 are unsigned.  Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match.  See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//
viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level.  We need a "loops to memcpy"
pass.

//===---------------------------------------------------------------------===//
This code:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used.  On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags.  On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

//===---------------------------------------------------------------------===//
Promote for i32 bswap can use i64 bswap + shr.  Useful on targets with 64-bit
regs and bswap, like itanium.
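The idea in C, with the usual builtins:

#include <stdint.h>

uint32_t bswap32_via_64(uint32_t x) {
  /* byte-swap in a 64-bit reg, then shift the result back down */
  return (uint32_t)(__builtin_bswap64(x) >> 32);
}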
//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has.  This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two identical IV's (after promotion) on PPC/ARM:

LBB1_1: @bb.preheader
	...
	add r1, r1, #1    <- [0,+,1]
	...
	add r2, r2, #1    <- [0,+,1]
	...

//===---------------------------------------------------------------------===//
Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
	                    [ %tmp.9, %then.1 ]
	ret i32 %result.0
}

//===---------------------------------------------------------------------===//
Tail recursion elimination is not transforming this function, because it is
returning n, which fails the isDynamicConstant check in the accumulator
recursion transformation:

long long fib(const long long n) {
  switch(n) {
    case 0:
    case 1:
      return n;
    default:
      return fib(n-1) + fib(n-2);
  }
}

//===---------------------------------------------------------------------===//
Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative.  "return foo() << 1" can be tail recursion eliminated.
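For pow2m1 above, the accumulator version that tailcallelim could produce
would look like (hypothetical helper name):

int pow2m1_acc(int n, int acc) {
  if (n == 0)
    return acc;
  return pow2m1_acc(n - 1, (acc << 1) + 1);   /* proper tail call */
}
/* pow2m1(n) == pow2m1_acc(n, 0) */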
//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}

//===---------------------------------------------------------------------===//
460 "basicaa" should know how to look through "or" instructions that act like add
461 instructions. For example in this code, the x*4+1 is turned into x*4 | 1, and
462 basicaa can't analyze the array subscript, leading to duplicated loads in the
465 void test(int X, int Y, int a[]) {
467 for (i=2; i<1000; i+=4) {
468 a[i+0] = a[i-1+0]*a[i-2+0];
469 a[i+1] = a[i-1+1]*a[i-2+1];
470 a[i+2] = a[i-1+2]*a[i-2+2];
471 a[i+3] = a[i-1+3]*a[i-2+3];
475 BasicAA also doesn't do this for add. It needs to know that &A[i+1] != &A[i].
477 //===---------------------------------------------------------------------===//
We should investigate an instruction sinking pass.  Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

we compile this to:
_foo:
	subl	$28, %esp
	call	"L1$pb"
"L1$pb":
	popl	%eax
	cmpl	$0, 32(%esp)
	je	LBB1_2	# cond_true
	...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block.  It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it.  This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86.  If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//
Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

//===---------------------------------------------------------------------===//
We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations.  On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar.  On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.
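The same idea as a scalar sketch (bit 63 is the sign bit of an IEEE double):

#include <stdint.h>
#include <string.h>

void fneg_inplace(double *p) {
  uint64_t bits;
  memcpy(&bits, p, sizeof bits);   /* integer load       */
  bits ^= 1ULL << 63;              /* flip the sign bit  */
  memcpy(p, &bits, sizeof bits);   /* integer store      */
}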
//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable.  For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer):

	...
	movb	_m_HotKey+3, %cl
	movb	_m_HotKey+4, %dl
	movb	_m_HotKey+2, %ch
	...
	movzwl	_m_HotKey+4, %edx
	...

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

	%struct.THotKey = type { i16, i8, i8, i8 }
	define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
	  ...
	  %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
	  %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
	  %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
	  %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.

//===---------------------------------------------------------------------===//
We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//
This GCC bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34043
contains a testcase that compiles down to:

	%struct.XMM128 = type { <4 x float> }
	...
	%src = alloca %struct.XMM128
	...
	%tmp6263 = bitcast %struct.XMM128* %src to <2 x i64>*
	%tmp65 = getelementptr %struct.XMM128* %src, i32 0, i32 0
	store <2 x i64> %tmp5899, <2 x i64>* %tmp6263, align 16
	%tmp66 = load <4 x float>* %tmp65, align 16
	%tmp71 = add <4 x float> %tmp66, %tmp66

If the mid-level optimizer turned the bitcast of pointer + store of tmp5899
into a bitcast of the vector value and a store to the pointer, then the
store->load could be easily removed.

//===---------------------------------------------------------------------===//
Consider:

int test() {
  long long input[8] = {1,1,1,1,1,1,1,1};
  foo(input);
}

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able.  This is good, but the memcpy
gets lowered to load/stores in the code generator.  This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global.  This gives us atrocious code like this (each load is
followed by a store of %ecx to the stack):

	movl	_C.0.1444-"L1$pb"+32(%eax), %ecx
	...
	movl	_C.0.1444-"L1$pb"+20(%eax), %ecx
	...
	movl	_C.0.1444-"L1$pb"+36(%eax), %ecx
	...
	movl	_C.0.1444-"L1$pb"+44(%eax), %ecx
	...
	movl	_C.0.1444-"L1$pb"+40(%eax), %ecx
	...
	movl	_C.0.1444-"L1$pb"+12(%eax), %ecx
	...
	movl	_C.0.1444-"L1$pb"+4(%eax), %ecx
	...

instead of just storing the constants directly.

//===---------------------------------------------------------------------===//
http://llvm.org/PR717:

The following code should compile into "ret int undef". Instead, LLVM
produces "ret int 0":

int f() {
  int x = 4;
  int y;
  if (x == 3) y = 0;
  return y;
}

//===---------------------------------------------------------------------===//
The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop.  One trivial example is:

#include <stdio.h>
int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size.  The resultant code would then also be suitable for
exit value computation.
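Sketch of the unrolled-by-2 body, assuming the loop above:

    for ( nLoop = 0; nLoop < 1000; nLoop += 2 ) {
        nRet -= 1;      /* even iteration: nLoop & 1 == 0 */
        nRet += 2;      /* odd iteration:  nLoop & 1 == 1 */
    }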
//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc.  On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough.  Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 9;
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 10;
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f2/f3/f4 right.  On x86-32, a few of these
generate truly horrible code, instead of using shld and friends.  On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness.  PPC64 misses f, f5 and f6.  CellSPU aborts in isel.

//===---------------------------------------------------------------------===//
We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together.  For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy.  This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.
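Roughly (note the +1 so the nul terminator is copied too):

#include <string.h>

void copy(char *a, const char *b) {
  memcpy(a, b, strlen(b) + 1);   /* == strcpy(a, b) if b is unmodified */
}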
//===---------------------------------------------------------------------===//

We should be able to evaluate this loop:

int test(int x_offs) {
  while (x_offs > 4)
    x_offs -= 4;
  return x_offs;
}

//===---------------------------------------------------------------------===//
Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.
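I.e. the balanced form:

int pow8(int x) {
  int x2 = x * x;     /* X^2 */
  int x4 = x2 * x2;   /* X^4 */
  return x4 * x4;     /* X^8: 3 multiplies instead of 7 */
}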
//===---------------------------------------------------------------------===//

We generate a horrible libcall for llvm.powi.  For example, we compile:

double f(double a) { return std::pow(a, 4); }

into:

	...
	movsd	16(%esp), %xmm0
	...
	call	L___powidf2$stub
	...

while GCC simply expands the pow into two multiplies:

	...
	movsd	16(%esp), %xmm0
	mulsd	%xmm0, %xmm0
	mulsd	%xmm0, %xmm0
	...

//===---------------------------------------------------------------------===//
We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

Into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//
We miss some instcombines for stuff like this:

void bar (void);
void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2).  */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.

//===---------------------------------------------------------------------===//
Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mul lo (cheaper).  Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

I think this basically amounts to a dag combine to simplify comparisons against
multiply hi's into a comparison against the mullo.

//===---------------------------------------------------------------------===//
Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

#include <cstdio>
struct test {
  int val;
  virtual ~test() {}
};

int main() {
  test t;
  std::scanf("%d", &t.val);
  std::printf("%d\n", t.val);
}

//===---------------------------------------------------------------------===//
Instcombine will merge comparisons like (x >= 10) && (x < 20) by producing (x -
10) u< 10, but only when the comparisons have matching sign.

This could be converted with a similar technique. (PR1941)

define i1 @test(i8 %x) {
  %A = icmp uge i8 %x, 5
  %B = icmp slt i8 %x, 20
  %C = and i1 %A, %B
  ret i1 %C
}

//===---------------------------------------------------------------------===//
These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6	;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}

f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

Both should combine to ((a|b) & (c-1)) != 0.  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
#define PMD_MASK    (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
	if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
		...
}

The expression should optimize to something like
"!((start|end)&~PMD_MASK)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
foo (unsigned int a, unsigned int b)
{
  if (a <= 7 && b <= 7)
    ...
}

Should combine to "(a|b) <= 7".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
A function that does:

  return (n >= 0 ? 1 : -1);

Should combine to (n >> 31) | 1.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//
int test(int a, int b)
{
  return (a < b) || (a == b);
}

Should combine to "a <= b".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//
For a test like:

  if (variable == 4 || variable == 6)
    ...

This should optimize to "if ((variable | 2) == 6)".  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//
unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
These should combine to the same thing.  Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//
#define abs(x) x>0?x:-x
int f(int x) {
  return (abs(x)) >= 0;
}

This should optimize to x != INT_MIN.  (With -fwrapv.)  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    ...
}

int
minus_cst (unsigned int a)
{
  unsigned int tem;
  tem = 20 - a;
  if (tem == 5)
    ...
}

int
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15.  */
  if ((a & ~7) > 8)
    ...
}

int
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23.  */
  if ((a >> 2) > 5)
    ...
}

All should simplify to a single comparison.  All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//
int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned char* b) {return *b > 99;}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return ((a | 1) & 3) | (a & -4);}
Should combine to "a | 1".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)".  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a != 0".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane.  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c).  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -1 : -9;}
Should combine to (x | -9) ^ 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -9 : -1;}
Should combine to x | -9.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)".  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
We would like to do the following transform in the instcombiner:

  -X/C -> X/-C

However, this isn't valid if (-X) overflows.  We can implement this when we
have the concept of a "C signed subtraction" operator that is undefined on
overflow.

//===---------------------------------------------------------------------===//
This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to:
  (icmp eq i32 %decl_context, 1)
since decl_context == 1 already implies decl_context != 4.

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0		; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true		; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24		; <i1> [#uses=1]

//===---------------------------------------------------------------------===//
Store sinking: This code:

void f (int n, int *cond, int *res) {
  int i;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out.  This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395
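For the loop above, the sunk form would look something like this (a sketch,
assuming cond and res don't alias, which the pass would have to prove):

void f_sunk(int n, int *cond, int *res) {
  int i, toggles = 0;
  for (i = 0; i < n; i++)
    if (*cond)
      toggles ^= 234;          /* accumulate in a register  */
  if (toggles)                 /* store only if (*) ran     */
    *res ^= toggles;
}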
//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//
GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store.  We need partially dead store sinking.

//===---------------------------------------------------------------------===//
[PHI TRANSLATE GEPs]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic.  This could be handled by GVN with some crazy
symbolic phi translation.  The code we get looks like (g is on the stack):

bb2:
	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is fully redundant; in BB2 it should have the value %8.

GCC PR33344 is a similar case.

//===---------------------------------------------------------------------===//
There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite.  There are also many scalar PRE testcases there (ssa-pre-*.c).

//===---------------------------------------------------------------------===//
There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite.  For example, predcom-1.c is:

 for (i = 2; i < 1000; i++)
    fib[i] = (fib[i-1] + fib[i - 2]) & 0xffff;

which compiles into:

bb1:		; preds = %bb1, %bb1.thread
	%indvar = phi i32 [ 0, %bb1.thread ], [ %0, %bb1 ]
	%i.0.reg2mem.0 = add i32 %indvar, 2
	%0 = add i32 %indvar, 1		; <i32> [#uses=3]
	%1 = getelementptr [1000 x i32]* @fib, i32 0, i32 %0
	%2 = load i32* %1, align 4		; <i32> [#uses=1]
	%3 = getelementptr [1000 x i32]* @fib, i32 0, i32 %indvar
	%4 = load i32* %3, align 4		; <i32> [#uses=1]
	%5 = add i32 %4, %2		; <i32> [#uses=1]
	%6 = and i32 %5, 65535		; <i32> [#uses=1]
	%7 = getelementptr [1000 x i32]* @fib, i32 0, i32 %i.0.reg2mem.0
	store i32 %6, i32* %7, align 4
	%exitcond = icmp eq i32 %0, 998		; <i1> [#uses=1]
	br i1 %exitcond, label %return, label %bb1

instead of handling this as a loop or other xform, all we'd need to do is teach
load PRE to phi translate the %0 add (i+1) into the predecessor as (i'+1+1) =
(i'+2) (where i' is the previous iteration of i).  This would find the store
from the previous iteration.

predcom-2.c is apparently the same as predcom-1.c
predcom-3.c is very similar but needs loads feeding each other instead of
store->load.
predcom-4.c seems the same as the rest.

//===---------------------------------------------------------------------===//
Other simple load PRE cases:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35287 [LPRE crit edge splitting]

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34677 (licm does this, LPRE crit edge)
  llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as | opt -mem2reg -simplifycfg -gvn | llvm-dis

//===---------------------------------------------------------------------===//
Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

//===---------------------------------------------------------------------===//
When GVN/PRE finds a store of float* to a must aliases pointer when expecting
an int*, it should turn it into a bitcast.  This is a nice generalization of
the SROA hack that would apply to other cases, e.g.:

int foo(int C, int *P, float X) {
  if (C) {
    bar();
    *P = 42;
  } else
    *(float*)P = X;

  return *P;
}

One example (that requires crazy phi translation) is:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16799 [BITCAST PHI TRANS]

//===---------------------------------------------------------------------===//
A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store.  This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

//===---------------------------------------------------------------------===//

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
      opt -mem2reg -gvn -instcombine | llvm-dis
we miss it because we need 1) GEP PHI TRAN, 2) CRIT EDGE 3) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, loading from null is undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//
simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "") -> strlen(x)
strspn(x, "") -> 0
strcspn(x, "a") -> strchr(x, 'a')-x (valid when 'a' is known to occur in x)

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character.  See PR3253 for some notes on
codegen.

456.hmmer apparently uses strcspn and strspn a lot.  471.omnetpp uses strspn.

//===---------------------------------------------------------------------===//
1409 "gas" uses this idiom:
1410 else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
1412 else if (strchr ("<>", *intel_parser.op_string)
1414 Those should be turned into a switch.
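Roughly, the first one becomes (modulo strchr()'s match of the terminating
nul when the character is '\0'):

int is_op_char(int c) {
  switch (c) {
  case '+': case '-': case '/': case '*': case '%': case '|':
  case '&': case '^': case ':': case '[': case ']': case '(':
  case ')': case '~':
    return 1;   /* strchr("+-/*%|&^:[]()~", c) != NULL */
  default:
    return 0;
  }
}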
//===---------------------------------------------------------------------===//

252.eon contains this interesting code:

	%3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
	%3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
	%strlen = call i32 @strlen(i8* %3072)		; uses = 1
	%endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
	call void @llvm.memcpy.i32(i8* %endptr,
	   i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
	%3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple reasons.  First, in this:

	%3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
	%strlen = call i32 @strlen(i8* %3072)

The strlen could be replaced with: %strlen = sub %3072, %3073, because the
strcpy call returns a pointer to the end of the string.  Based on that, the
endptr GEP just becomes equal to 3073, which eliminates a strlen call and GEP.

Second, the memcpy+strlen strlen can be replaced with:

	%3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

Because the destination was just copied into the specified memory buffer.  This,
in turn, can be constant folded to "4".

In other code, it contains:

	%endptr6978 = bitcast i8* %endptr69 to i32*
	store i32 7107374, i32* %endptr6978, align 1
	%3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

Which could also be constant folded.  Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:		; preds = %_ZN18eonImageCalculatorC1Ev.exit
	%682 = getelementptr i8** %argv, i32 6		; <i8**> [#uses=2]
	%683 = load i8** %682, align 4		; <i8*> [#uses=4]
	%684 = load i8* %683, align 1		; <i8> [#uses=1]
	%685 = icmp eq i8 %684, 0		; <i1> [#uses=1]
	br i1 %685, label %bb10, label %bb9

bb9:		; preds = %bb8
	%686 = call i32 @strlen(i8* %683) nounwind readonly
	%687 = icmp ugt i32 %686, 254		; <i1> [#uses=1]
	br i1 %687, label %bb10, label %bb11

bb10:		; preds = %bb9, %bb8
	%688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//
I see an interesting fully redundant call to strlen left in 186.crafty:InputMove:

	%movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0

bb62:		; preds = %bb55, %bb53
	%promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
	%171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
	%172 = add i32 %171, -1		; <i32> [#uses=1]
	%173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
	...
	br i1 %or.cond, label %bb65, label %bb72

bb65:		; preds = %bb62
	store i8 0, i8* %173, align 1
	...

bb72:		; preds = %bb65, %bb62
	%trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
	%177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call.  At worst, we could shove the %177 strlen call
up into the bb65 block, moving it out of the bb62->bb72 path.  However, note
that bb65 stores to the string, zeroing out the last byte.  This means that on
that path the value of %177 is actually just %171-1.  A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);
  ...

where it is "obvious" that B = A-1.
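So the second strlen can simply be folded to a subtract:

  A = strlen(P);
  P[A-1] = 0;
  B = A - 1;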
//===---------------------------------------------------------------------===//

186.crafty contains this interesting pattern:

%77 = call i8* @strstr(i8* getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0),
                       i8* %30)
%phitmp648 = icmp eq i8* %77, getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0)
br i1 %phitmp648, label %bb70, label %bb76

bb70:		; preds = %OptionMatch.exit91, %bb69
  %78 = call i32 @strlen(i8* %30) nounwind readonly align 1	; <i32> [#uses=1]

This is basically:

  if (strstr(cststr, P) == cststr) {
    x = strlen(P);
    ...

The strstr call would be significantly cheaper written as:

  if (memcmp(P, cststr, strlen(P)) == 0) {
    x = strlen(P);
    ...

This is memcmp+strlen instead of strstr.  This also makes the strlen fully
redundant.

//===---------------------------------------------------------------------===//
186.crafty also contains this code:

	%1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
	%1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
	%1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
	%1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
	%1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909

The last strlen is computable as 1908-@pgn_event, which means 1910=1908.

//===---------------------------------------------------------------------===//
186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
        i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call i32 @printf(i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(..., globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out".  Once all the printfs
stop using "out", all that is left is the memcpy's into it.  This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//
This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift.
For example, on x86 this currently gets this:

	movb	(%eax), %al
	sarb	$5, %al
	movsbl	%al, %eax

while it could get this:

	movsbl	(%eax), %eax
	sarl	$5, %eax

//===---------------------------------------------------------------------===//
int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants, what is the rule for even?  (For an even
constant c, 2x == c also holds for x == c/2 + 2^31 due to wrap, so the fold
needs two compares.)

//===---------------------------------------------------------------------===//
PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into { {{}}, float } could be turned into a store
to the float directly.

//===---------------------------------------------------------------------===//
#include <math.h>
double foo(double a) { return sin(a); }

This compiles into this on x86-64 Linux:

foo:
	subq	$8, %rsp
	call	sin
	addq	$8, %rsp
	ret

instead of the simple tail call:

foo:
	jmp	sin

//===---------------------------------------------------------------------===//
The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.

//===---------------------------------------------------------------------===//
The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int  int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//
Instcombine should replace the load with a constant in:

static const char x[4] = {'a', 'b', 'c', 'd'};

unsigned int y(void) {
  return *(unsigned int *)x;
}

It currently only does this transformation when the size of the constant
is the same as the size of the integer (so, try x[5]) and the last byte
is a null (making it a C string).  There's no need for these restrictions.
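On a little-endian target the load folds to a constant directly (a sketch,
hypothetical name):

unsigned int y_folded(void) {
  return 0x64636261;   /* 'a','b','c','d' with 'a' in the low byte */
}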
//===---------------------------------------------------------------------===//

InstCombine's "turn load from constant into constant" optimization should be
more aggressive in the presence of bitcasts.  For example, because of unions,
this code:

union vec2d {
    double e[2];
    double v __attribute__((vector_size(16)));
};
typedef union vec2d vec2d;

static vec2d a={{1,2}}, b={{3,4}};

vec2d foo () {
    return (vec2d){ .v = a.v + b.v * (vec2d){{5,5}}.v };
}

compiles into:

@a = internal constant %0 { [2 x double]
     [double 1.000000e+00, double 2.000000e+00] }, align 16
@b = internal constant %0 { [2 x double]
     [double 3.000000e+00, double 4.000000e+00] }, align 16
...
define void @foo(%struct.vec2d* noalias nocapture sret %agg.result) nounwind {
entry:
	%0 = load <2 x double>* getelementptr (%struct.vec2d*
	       bitcast (%0* @a to %struct.vec2d*), i32 0, i32 0), align 16
	%1 = load <2 x double>* getelementptr (%struct.vec2d*
	       bitcast (%0* @b to %struct.vec2d*), i32 0, i32 0), align 16
	...

Instcombine should be able to optimize away the loads (and thus the globals).

//===---------------------------------------------------------------------===//