1 Target Independent Opportunities:
3 //===---------------------------------------------------------------------===//
5 We should make the various targets' "IMPLICIT_DEF" instructions be a single
6 target-independent opcode like TargetInstrInfo::INLINEASM. This would allow
7 us to eliminate the TargetInstrDesc::isImplicitDef() method, and would avoid
8 having to define this instruction for every register class on every target.
10 //===---------------------------------------------------------------------===//
12 With the recent changes to make the implicit def/use set explicit in
13 machineinstrs, we should change the target descriptions for 'call' instructions
14 so that the .td files don't list all the call-clobbered registers as implicit
15 defs. Instead, these should be added by the code generator (e.g. on the dag).
17 This has a number of uses:
19 1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
20    for their different implicit-def sets.
21 2. Targets with multiple calling conventions (e.g. x86) that have different
22    clobber sets don't need copies of call instructions.
23 3. 'Interprocedural register allocation' can be done to reduce the clobber sets
24    of calls.
26 //===---------------------------------------------------------------------===//
28 Make the PPC branch selector target independent.
30 //===---------------------------------------------------------------------===//
32 Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
33 precision don't matter (-ffast-math). Misc/mandel will like this. :)
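A minimal sketch of the source-level equivalence being asked for (function names
are illustrative, not from Misc/mandel):

  #include <math.h>

  double dist(double x, double y) {
    return hypot(x, y);       /* today: a libm call that may set errno */
  }

  double dist_fast(double x, double y) {
    return sqrt(x*x + y*y);   /* desired expansion when errno/precision don't matter */
  }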
35 //===---------------------------------------------------------------------===//
37 Solve this DAG isel folding deficiency:
55 The problem is the store's chain operand is not the load X but rather
56 a TokenFactor of the load X and load Y, which prevents the folding.
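A minimal C sketch (an assumed shape, not the original testcase) of the kind of
code involved; the read-modify-write of *X should fold the load and store of X
into one instruction on x86, but the store's chain is a TokenFactor of both loads:

  void f(int *X, int *Y) {
    *X += *Y;   /* load X, load Y, add, store X */
  }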
58 There are two ways to fix this:
60 1. The dag combiner can start using alias analysis to realize that y/x
61 don't alias, making the store to X not dependent on the load from Y.
62 2. The generated isel could be made smarter in the case it can't
63 disambiguate the pointers.
65 Number 1 is the preferred solution.
67 This has been "fixed" by a TableGen hack, but that is a short-term workaround
68 which will be removed once the proper fix is made.
70 //===---------------------------------------------------------------------===//
72 On targets with expensive 64-bit multiply, we could LSR this:
79 for (i = ...; ++i, tmp+=tmp)
82 This would be a win on ppc32, but not x86 or ppc64.
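A sketch of the intended strength reduction (an assumed shape, reconstructed
around the surviving "tmp+=tmp" fragment above): the 64-bit shift recomputed
on every iteration becomes a doubled accumulator that costs only a 64-bit add
per iteration.

  unsigned long long before(int n) {
    unsigned long long x = 0;
    for (int i = 0; i < n; ++i)
      x += 1ULL << i;              /* expensive 64-bit shift each iteration */
    return x;
  }

  unsigned long long after(int n) {
    unsigned long long x = 0, tmp = 1;
    for (int i = 0; i < n; ++i, tmp += tmp)
      x += tmp;                    /* one 64-bit add per iteration */
    return x;
  }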
84 //===---------------------------------------------------------------------===//
86 Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0); only the byte holding the sign bit needs to be loaded.
88 //===---------------------------------------------------------------------===//
90 Reassociate should turn X*X*X*X -> t=(X*X); (t*t), to eliminate a multiply.
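In C terms (the function name is illustrative):

  double pow4(double x) {
    double t = x * x;   /* t = X*X */
    return t * t;       /* (X*X)*(X*X): two multiplies instead of three */
  }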
92 //===---------------------------------------------------------------------===//
94 Interesting? testcase for add/shift/mul reassoc:
96 int bar(int x, int y) {
97   return x*x*x+y+x*x*x*x*x*y*y*y*y;
98 }
99 int foo(int z, int n) {
100   return bar(z, n) + bar(2*z, 2*n);
101 }
103 Reassociate should handle the example in GCC PR16157.
105 //===---------------------------------------------------------------------===//
107 These two functions should generate the same code on big-endian systems:
109 int g(int *j,int *l) { return memcmp(j,l,4); }
110 int h(int *j, int *l) { return *j - *l; }
112 This could be done in SelectionDAGISel.cpp, along with other special cases,
113 for 1,2,4,8 bytes.
115 //===---------------------------------------------------------------------===//
117 It would be nice to revert this patch:
118 http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html
120 And teach the dag combiner enough to simplify the code expanded before
121 legalize. It seems plausible that this knowledge would let it simplify other
122 cases as well.
124 //===---------------------------------------------------------------------===//
126 For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
127 to the type size. This works, but it can be overly conservative because the
128 alignment of specific vector types is target dependent.
130 //===---------------------------------------------------------------------===//
132 We should add 'unaligned load/store' nodes, and produce them from code like:
135 v4sf example(float *P) {
136   return (v4sf){P[0], P[1], P[2], P[3]};
137 }
139 //===---------------------------------------------------------------------===//
141 Add support for conditional increments, and other related patterns. Instead
142 of doing this:
146 je LBB16_2 #cond_next
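A sketch of the source pattern in question (an assumed shape, since the example
above is abbreviated); the branch around the increment should become a setcc
feeding an add rather than a compare-and-jump:

  int cond_inc(int x, int count) {
    if (x > 0)
      count++;          /* want: count += (x > 0), i.e. cmp + setcc + add */
    return count;
  }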
157 //===---------------------------------------------------------------------===//
159 Combine: a = sin(x), b = cos(x) into a,b = sincos(x).
161 Expand these to calls of sin/cos and stores:
162 double sincos(double x, double *sin, double *cos);
163 float sincosf(float x, float *sin, float *cos);
164 long double sincosl(long double x, long double *sin, long double *cos);
166 Doing so could allow SROA of the destination pointers. See also:
167 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
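A sketch of the combine direction (sincos() here is the GNU extension; names are
illustrative):

  #include <math.h>

  void polar_to_xy(double r, double theta, double *x, double *y) {
    /* today: two libm calls that each reduce the same argument */
    *x = r * cos(theta);
    *y = r * sin(theta);
    /* desired:
         double s, c;
         sincos(theta, &s, &c);
         *x = r * c;  *y = r * s;  */
  }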
169 //===---------------------------------------------------------------------===//
171 Scalar Repl cannot currently promote this testcase to 'ret long cst':
173 %struct.X = type { i32, i32 }
174 %struct.Y = type { %struct.X }
177 %retval = alloca %struct.Y, align 8
178 %tmp12 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 0
179 store i32 0, i32* %tmp12
180 %tmp15 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 1
181 store i32 1, i32* %tmp15
182 %retval.upgrd.1 = bitcast %struct.Y* %retval to i64*
183 %retval.upgrd.2 = load i64* %retval.upgrd.1
184 ret i64 %retval.upgrd.2
187 It should be extended to do so.
189 //===---------------------------------------------------------------------===//
191 -scalarrepl should promote this to be a vector scalar.
193 %struct..0anon = type { <4 x float> }
195 define void @test1(<4 x float> %V, float* %P) {
196 %u = alloca %struct..0anon, align 16
197 %tmp = getelementptr %struct..0anon* %u, i32 0, i32 0
198 store <4 x float> %V, <4 x float>* %tmp
199 %tmp1 = bitcast %struct..0anon* %u to [4 x float]*
200 %tmp.upgrd.1 = getelementptr [4 x float]* %tmp1, i32 0, i32 1
201 %tmp.upgrd.2 = load float* %tmp.upgrd.1
202 %tmp3 = mul float %tmp.upgrd.2, 2.000000e+00
203 	store float %tmp3, float* %P
204 	ret void
205 }
207 //===---------------------------------------------------------------------===//
209 Turn this into a single byte store with no load (the other 3 bytes are
210 unmodified):
212 void %test(uint* %P) {
213 	%tmp = load uint* %P
214 	%tmp14 = or uint %tmp, 3305111552
215 	%tmp15 = and uint %tmp14, 3321888767
216 	store uint %tmp15, uint* %P
217 	ret void
218 }
220 //===---------------------------------------------------------------------===//
222 dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.
228 int t = __builtin_clz(x);
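A sketch of the pattern (an assumed shape; it relies on ctlz(0) being defined as
32, as the hardware instruction is on many targets): for 32-bit x the count is in
[0,31] unless x is zero, so bit 5 of the count is exactly the x == 0 test.

  int is_zero(unsigned x) {
    int t = __builtin_clz(x);   /* note: __builtin_clz(0) is formally undefined in C */
    return t >> 5;              /* should combine to (x == 0) */
  }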
238 //===---------------------------------------------------------------------===//
240 Legalize should lower cttz like this:
241   cttz(x) = popcnt((x-1) & ~x)
242 on targets that have popcnt but not cttz (a similar trick, smearing the highest
243 set bit downward and popcounting the complement, handles ctlz). itanium, what else?
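A sketch of both lowerings in C (assuming a cheap popcnt; function names are
illustrative):

  unsigned cttz32(unsigned x) {
    return __builtin_popcount((x - 1) & ~x);   /* counts the bits below the lowest set bit */
  }

  unsigned ctlz32(unsigned x) {
    x |= x >> 1;  x |= x >> 2;  x |= x >> 4;   /* smear the highest set bit downward */
    x |= x >> 8;  x |= x >> 16;
    return __builtin_popcount(~x);             /* counts the bits above the highest set bit */
  }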
245 //===---------------------------------------------------------------------===//
247 quantum_sigma_x in 462.libquantum contains the following loop:
249 for(i=0; i<reg->size; i++)
251 /* Flip the target bit of each basis state */
252 reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
255 Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
256 so cool to turn it into something like:
258    long long Res = ((MAX_UNSIGNED) 1 << target);
259    if (target < 32) {
260      for(i=0; i<reg->size; i++)
261        reg->node[i].state ^= Res & 0xFFFFFFFFULL;
262    } else {
263      for(i=0; i<reg->size; i++)
264        reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
265    }
267 ... which would only do one 32-bit XOR per loop iteration instead of two.
269 It would also be nice to recognize that reg->size doesn't alias reg->node[i],
270 but that requires type-based alias analysis (TBAA).
272 //===---------------------------------------------------------------------===//
274 This isn't recognized as bswap by instcombine:
276 unsigned int swap_32(unsigned int v) {
277   v = ((v & 0x00ff00ffU) << 8) | ((v & 0xff00ff00U) >> 8);
278   v = ((v & 0x0000ffffU) << 16) | ((v & 0xffff0000U) >> 16);
279   return v;
280 }
282 Nor is this (yes, it really is bswap):
284 unsigned long reverse(unsigned v) {
285     unsigned t = v ^ ((v << 16) | (v >> 16));
287     t &= ~0xff0000;
288     v = (v << 24) | (v >> 8);
289     return v ^ (t >> 8);
290 }
292 //===---------------------------------------------------------------------===//
294 These should turn into single 16-bit (unaligned?) loads on little/big endian
295 machines:
297 unsigned short read_16_le(const unsigned char *adr) {
298   return adr[0] | (adr[1] << 8);
299 }
300 unsigned short read_16_be(const unsigned char *adr) {
301   return (adr[0] << 8) | adr[1];
302 }
304 //===---------------------------------------------------------------------===//
306 -instcombine should handle this transform:
307    icmp pred (sdiv X, C1), C2
308 when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.
310 Currently InstCombine avoids this transform but will do it when the signs of
311 the operands and the sign of the divide match. See the FIXME in
312 InstructionCombining.cpp in the visitSetCondInst method after the switch case
313 for Instruction::UDiv (around line 4447) for more details.
315 The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
316 this construct.
318 //===---------------------------------------------------------------------===//
320 Instcombine misses several of these cases (see the testcase in the patch):
321 http://gcc.gnu.org/ml/gcc-patches/2006-10/msg01519.html
323 //===---------------------------------------------------------------------===//
325 viterbi speeds up *significantly* if the various "history" related copy loops
326 are turned into memcpy calls at the source level. We need a "loops to memcpy"
327 pass.
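A sketch of the kind of copy loop that should become a memcpy call (names are
illustrative, not taken from viterbi):

  void copy_history(int *dst, const int *src, int n) {
    for (int i = 0; i < n; i++)
      dst[i] = src[i];      /* should become memcpy(dst, src, n * sizeof(int)) */
  }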
329 //===---------------------------------------------------------------------===//
333 typedef unsigned U32;
334 typedef unsigned long long U64;
335 int test (U32 *inst, U64 *regs) {
336     U64 effective_addr2;
337     U32 temp = *inst;
338     int r1 = (temp >> 20) & 0xf;
339     int b2 = (temp >> 16) & 0xf;
340     effective_addr2 = temp & 0xfff;
341     if (b2) effective_addr2 += regs[b2];
342     b2 = (temp >> 12) & 0xf;
343     if (b2) effective_addr2 += regs[b2];
344     effective_addr2 &= regs[4];
345     if ((effective_addr2 & 3) == 0)
346         return 1;
347     return 0;
348 }
350 Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
351 we don't eliminate the computation of the top half of effective_addr2 because
352 we don't have whole-function selection dags. On x86, this means we use one
353 extra register for the function when effective_addr2 is declared as U64 than
354 when it is declared U32.
356 //===---------------------------------------------------------------------===//
358 Promotion of i32 bswap can use i64 bswap + shr. Useful on targets with 64-bit
359 regs and bswap, like itanium.
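A sketch of the promotion (assuming a 64-bit bswap is available; the function
name is illustrative):

  unsigned bswap32_via_64(unsigned x) {
    /* zero-extend, byte-swap the full 64-bit register, then shift back down */
    return (unsigned)(__builtin_bswap64((unsigned long long)x) >> 32);
  }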
361 //===---------------------------------------------------------------------===//
363 LSR should know what GPR types a target has. This code:
365 volatile short X, Y; // globals
369 for (i = 0; i < N; i++) { X = i; Y = i*4; }
372 produces two identical IVs (after promotion) on PPC/ARM:
374 LBB1_1: @bb.preheader
385 add r1, r1, #1 <- [0,+,1]
387 add r2, r2, #1 <- [0,+,1]
392 //===---------------------------------------------------------------------===//
394 Tail call elim should be more aggressive, checking to see whether the call is
395 followed by an unconditional branch to an exit block.
397 ; This testcase is due to tail-duplication not wanting to copy the return
398 ; instruction into the terminating blocks because there was other code
399 ; optimized out of the function after the taildup happened.
400 ; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call
402 define i32 @t4(i32 %a) {
403 entry:
404 %tmp.1 = and i32 %a, 1 ; <i32> [#uses=1]
405 %tmp.2 = icmp ne i32 %tmp.1, 0 ; <i1> [#uses=1]
406 br i1 %tmp.2, label %then.0, label %else.0
408 then.0: ; preds = %entry
409 %tmp.5 = add i32 %a, -1 ; <i32> [#uses=1]
410 %tmp.3 = call i32 @t4( i32 %tmp.5 ) ; <i32> [#uses=1]
411 br label %return
413 else.0: ; preds = %entry
414 %tmp.7 = icmp ne i32 %a, 0 ; <i1> [#uses=1]
415 br i1 %tmp.7, label %then.1, label %return
417 then.1: ; preds = %else.0
418 %tmp.11 = add i32 %a, -2 ; <i32> [#uses=1]
419 %tmp.9 = call i32 @t4( i32 %tmp.11 ) ; <i32> [#uses=1]
420 br label %return
422 return: ; preds = %then.1, %else.0, %then.0
423 %result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
424             [ %tmp.9, %then.1 ]
425 ret i32 %result.0
426 }
428 //===---------------------------------------------------------------------===//
430 Tail recursion elimination is not transforming this function, because it is
431 returning n, which fails the isDynamicConstant check in the accumulator
432 recursion transformation.
434 long long fib(const long long n) {
440 return fib(n-1) + fib(n-2);
444 //===---------------------------------------------------------------------===//
446 Argument promotion should promote arguments for recursive functions, like
447 this one:
449 ; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val
451 define internal i32 @foo(i32* %x) {
453 	%tmp = load i32* %x ; <i32> [#uses=0]
454 	%tmp.foo = call i32 @foo( i32* %x ) ; <i32> [#uses=1]
455 	ret i32 %tmp.foo
456 }
458 define i32 @bar(i32* %x) {
460 	%tmp3 = call i32 @foo( i32* %x ) ; <i32> [#uses=1]
461 	ret i32 %tmp3
462 }
464 //===---------------------------------------------------------------------===//
466 "basicaa" should know how to look through "or" instructions that act like add
467 instructions. For example in this code, the x*4+1 is turned into x*4 | 1, and
468 basicaa can't analyze the array subscript, leading to duplicated loads in the
471 void test(int X, int Y, int a[]) {
472   int i;
473   for (i=2; i<1000; i+=4) {
474     a[i+0] = a[i-1+0]*a[i-2+0];
475     a[i+1] = a[i-1+1]*a[i-2+1];
476     a[i+2] = a[i-1+2]*a[i-2+2];
477     a[i+3] = a[i-1+3]*a[i-2+3];
478   }
479 }
481 //===---------------------------------------------------------------------===//
483 We should investigate an instruction sinking pass. Consider this silly
484 example in pic mode:
499 je LBB1_2 # cond_true
507 The PIC base computation (call+popl) is only used on one path through the
508 code, but is currently always computed in the entry block. It would be
509 better to sink the picbase computation down into the block for the
510 assertion, as it is the only one that uses it. This happens for a lot of
511 code with early outs.
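A sketch (not the original example) of the early-out shape where sinking helps;
only the assertion path needs the picbase to address the global string:

  #include <assert.h>

  void f(int x) {
    if (x != 0)
      return;                    /* common early out: the picbase is dead here */
    assert(0 && "x was zero");   /* only this path references global data */
  }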
513 Another example is loads of arguments, which are usually emitted into the
514 entry block on targets like x86. If they are not used in all paths through a
515 function, they should be sunk into the blocks that do use them.
517 Whole-function isel would also handle this case.
519 //===---------------------------------------------------------------------===//
521 Investigate lowering of sparse switch statements into perfect hash tables:
522 http://burtleburtle.net/bob/hash/perfect.html
524 //===---------------------------------------------------------------------===//
526 We should turn things like "load+fabs+store" and "load+fneg+store" into the
527 corresponding integer operations. On a yonah, this loop:
532 for (b = 0; b < 10000000; b++)
533 for (i = 0; i < 256; i++)
534   a[i] = -a[i];
537 is twice as slow as this loop:
542 for (b = 0; b < 10000000; b++)
543 for (i = 0; i < 256; i++)
544 a[i] ^= (1ULL << 63);
547 and I suspect other processors are similar. On X86 in particular this is a
548 big win because doing this with integers allows the use of read/modify/write
549 instructions.
551 //===---------------------------------------------------------------------===//
553 DAG Combiner should try to combine small loads into larger loads when
554 profitable. For example, we compile this C++ example:
556 struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
557 extern THotKey m_HotKey;
558 THotKey GetHotKey () { return m_HotKey; }
560 into (-O3 -fno-exceptions -static -fomit-frame-pointer):
565 movb _m_HotKey+3, %cl
566 movb _m_HotKey+4, %dl
567 movb _m_HotKey+2, %ch
582 movzwl _m_HotKey+4, %edx
586 The LLVM IR contains the needed alignment info, so we should be able to
587 merge the loads and stores into 4-byte loads:
589 %struct.THotKey = type { i16, i8, i8, i8 }
590 define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
592 %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
593 %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
594 %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
595 %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2
597 Alternatively, we should use a small amount of base-offset alias analysis
598 to make it so the scheduler doesn't need to hold all the loads in regs at
599 the same time.
601 //===---------------------------------------------------------------------===//
603 We should extend parameter attributes to capture more information about
604 pointer parameters for alias analysis. Some ideas:
606 1. Add a "nocapture" attribute, which indicates that the callee does not store
607 the address of the parameter into a global or any other memory location
608 visible to its caller. This can be used to make basicaa and other analyses
609 more powerful. It is true for things like memcpy, strcat, and many other
610 things, including structs passed by value, most C++ references, etc.
611 2. Generalize readonly to be set on parameters. This is important mod/ref
612 info for the function, which is important for basicaa and others. It can
613 also be used by the inliner to avoid inserting a memcpy for byval
614 arguments when the function is inlined.
616 These attributes can be inferred by various analysis passes such as the
617 globalsmodrefaa pass. Note that getting #2 right is actually really tricky.
618 Consider this example:
621 void caller(S byvalarg) { G.field = 1; ... }
622 void callee() { caller(G); }
624 The fact that the caller does not modify the byval arg is not enough; we need
625 to know that it doesn't modify G either. This is very tricky.
627 //===---------------------------------------------------------------------===//
629 We should add an FRINT node to the DAG to model targets that have legal
630 implementations of ceil/floor/rint.
632 //===---------------------------------------------------------------------===//
634 This GCC bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34043
635 contains a testcase that compiles down to:
637 %struct.XMM128 = type { <4 x float> }
639 %src = alloca %struct.XMM128
641 %tmp6263 = bitcast %struct.XMM128* %src to <2 x i64>*
642 %tmp65 = getelementptr %struct.XMM128* %src, i32 0, i32 0
643 store <2 x i64> %tmp5899, <2 x i64>* %tmp6263, align 16
644 %tmp66 = load <4 x float>* %tmp65, align 16
645 %tmp71 = add <4 x float> %tmp66, %tmp66
647 If the mid-level optimizer turned the bitcast of pointer + store of tmp5899
648 into a bitcast of the vector value and a store to the pointer, then the
649 store->load could be easily removed.
651 //===---------------------------------------------------------------------===//
656 long long input[8] = {1,1,1,1,1,1,1,1};
660 We currently compile this into a memcpy from a global array since the
661 initializer is fairly large and not memset'able. This is good, but the memcpy
662 gets lowered to load/stores in the code generator. This is also ok, except
663 that the codegen lowering for memcpy doesn't handle the case when the source
664 is a constant global. This gives us atrocious code like this:
669 movl _C.0.1444-"L1$pb"+32(%eax), %ecx
671 movl _C.0.1444-"L1$pb"+20(%eax), %ecx
673 movl _C.0.1444-"L1$pb"+36(%eax), %ecx
675 movl _C.0.1444-"L1$pb"+44(%eax), %ecx
677 movl _C.0.1444-"L1$pb"+40(%eax), %ecx
679 movl _C.0.1444-"L1$pb"+12(%eax), %ecx
681 movl _C.0.1444-"L1$pb"+4(%eax), %ecx
693 //===---------------------------------------------------------------------===//
695 http://llvm.org/PR717:
697 The following code should compile into "ret int undef". Instead, LLVM
698 produces "ret int 0":
707 //===---------------------------------------------------------------------===//
709 The loop unroller should partially unroll loops (instead of peeling them)
710 when code growth isn't too bad and when an unroll count allows simplification
711 of some code within the loop. One trivial example is:
717 for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
726 Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
727 reduction in code size. The resultant code would then also be suitable for
728 exit value computation.
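A sketch (not the original testcase) of how unrolling by 2 removes an '&1' whose
value alternates between iterations:

  int before(const int *a) {
    int sum = 0;
    for (int i = 0; i < 1000; i++) {
      if (i & 1) sum += a[i];    /* parity recomputed every iteration */
      else       sum -= a[i];
    }
    return sum;
  }

  int after(const int *a) {
    int sum = 0;
    for (int i = 0; i < 1000; i += 2) {
      sum -= a[i];               /* even iteration: (i & 1) == 0 */
      sum += a[i + 1];           /* odd iteration:  (i & 1) == 1 */
    }
    return sum;
  }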
730 //===---------------------------------------------------------------------===//