Target Independent Opportunities:

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs.  Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.

//===---------------------------------------------------------------------===//

FreeBench/mason contains code like this:

typedef struct { int a; int b; int c; } p_type;

p_type m0u(p_type *p) {
  int m[]={0, 8, 1, 2, 16, 5, 13, 7, 14, 9, 3, 4, 11, 12, 15, 10, 17, 6};
  p_type pu;
  pu.a = m[p->a];
  pu.b = m[p->b];
  pu.c = m[p->c];
  return pu;
}

We currently compile this into a memcpy from a static array into 'm', then
a bunch of loads from m.  It would be better to avoid the memcpy and just do
loads from the static array.

//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math).  Misc/mandel will like this. :)
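
As a source-level sketch (illustrative function name; assumes -ffast-math,
where errno and the extra precision of the libm hypot can be ignored):

#include <math.h>

/* Illustrative only: under -ffast-math the hypot() call could be expanded
   inline to a sqrt of the sum of squares (llvm.sqrt at the IR level). */
double fast_hypot(double x, double y) {
  return sqrt(x*x + y*y);
}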

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

compiles to

fn:
        movl Y, %eax
        shll $3, %eax
        orl X, %eax
        movl %eax, X
        ret

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack.  But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

 for (i = ...; ++i)
   x = 1ULL << i;

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
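
In source terms (hypothetical helper; the byte offset assumes a little-endian
layout), a signed compare of a loaded i32 against zero only needs the sign
byte:

#include <stdint.h>

/* Sketch: *P < 0 depends only on the most significant byte of *P, so a
   single i8 load of that byte (offset 3 on little-endian) is enough. */
int is_negative(const int32_t *P) {
  return (int8_t)((const uint8_t *)P)[3] < 0;   /* same result as *P < 0 */
}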

//===---------------------------------------------------------------------===//

Reassociate should turn: X*X*X*X -> t=(X*X) (t*t) to eliminate a multiply.

//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

this could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
sequences as well.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size.  It works but can be overly conservative as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should add 'unaligned load/store' nodes, and produce them from code like
this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//

We should constant fold vector type casts at the LLVM level, regardless of the
cast.  Currently we cannot fold some casts because we don't have TargetData
information in the constant folder, so we don't know the endianness of the
target.

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns.  Instead
of:

        cmpl $0, %eax
        je LBB16_2      #cond_next
LBB16_1:        #cond_true
        incl _foo
LBB16_2:        #cond_next

emit a branchless sequence (e.g. compare plus sbb/adc, or a conditional move);
a source-level sketch follows.
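
The sketch (hypothetical names; x stands in for the tested value, foo for the
_foo global above):

int foo;   /* hypothetical global, standing in for _foo */

void inc_if_nonzero(int x) {
  if (x)              /* today: compare + branch around the increment */
    foo++;
}

void inc_if_nonzero_branchless(int x) {
  foo += (x != 0);    /* what a branchless lowering computes */
}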

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      void sincos(double x, double *sin, double *cos);
      void sincosf(float x, float *sin, float *cos);
      void sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
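
A source-level sketch of the combine (sincos is the GNU libm extension; the
function and variable names here are illustrative):

#define _GNU_SOURCE
#include <math.h>

void polar_to_xy(double r, double theta, double *x, double *y) {
  double s, c;
  /* Instead of two calls s = sin(theta); c = cos(theta); combine them into
     one sincos call.  Expanding sincos back into sin/cos plus stores is
     what would let SROA remove the &s and &c temporaries. */
  sincos(theta, &s, &c);
  *x = r * c;
  *y = r * s;
}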

//===---------------------------------------------------------------------===//

Scalar Repl cannot currently promote this testcase to 'ret long cst':

        %struct.X = type { int, int }
        %struct.Y = type { %struct.X }

        %retval = alloca %struct.Y, align 8
        %tmp12 = getelementptr %struct.Y* %retval, int 0, uint 0, uint 0
        store int 0, int* %tmp12
        %tmp15 = getelementptr %struct.Y* %retval, int 0, uint 0, uint 1
        store int 1, int* %tmp15
        %retval1 = bitcast %struct.Y* %retval to ulong*
        %retval2 = load ulong* %retval1
        ret ulong %retval2

it should be extended to do so.

//===---------------------------------------------------------------------===//

-scalarrepl should promote this to be a vector scalar.

        %struct..0anon = type { <4 x float> }

implementation   ; Functions:

void %test1(<4 x float> %V, float* %P) {
        %u = alloca %struct..0anon, align 16
        %tmp = getelementptr %struct..0anon* %u, int 0, uint 0
        store <4 x float> %V, <4 x float>* %tmp
        %tmp1 = bitcast %struct..0anon* %u to [4 x float]*
        %tmp2 = getelementptr [4 x float]* %tmp1, int 0, int 1
        %tmp.2 = load float* %tmp2
        %tmp3 = mul float %tmp.2, 2.000000e+00
        store float %tmp3, float* %P
        ret void
}

//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

void %test(uint* %P) {
        %tmp = load uint* %P
        %tmp14 = or uint %tmp, 3305111552
        %tmp15 = and uint %tmp14, 3321888767
        store uint %tmp15, uint* %P
        ret void
}
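
In source terms the constants are 0xC5000000 and 0xC5FFFFFF, so the or/and
pair forces the top byte to 0xC5 and leaves the low three bytes alone; a
sketch of the equivalent single byte store (assuming a little-endian layout):

#include <stdint.h>

void test(uint32_t *P) {
  *P = (*P | 0xC5000000u) & 0xC5FFFFFFu;   /* what the IR above computes */
}

void test_bytestore(uint32_t *P) {
  ((uint8_t *)P)[3] = 0xC5;                /* same effect: one byte store */
}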

//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.

For example, in:

  int t = __builtin_clz(x);
  return t >> 5;

t>>5 is nonzero only when t == 32, i.e. when x == 0, so the clz and shift
should become a simple compare of x against zero.

//===---------------------------------------------------------------------===//

Legalize should lower cttz like this:
  cttz(x) = popcnt((x-1) & ~x)

on targets that have popcnt but not cttz (ctlz can be handled similarly, by
smearing the topmost set bit downward and counting the zeros that remain).
itanium, what else?
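
A sketch of both lowerings in C (assumes 32-bit values and a nonzero input,
which is the usual precondition for these expansions):

#include <stdint.h>

/* cttz: (x-1) & ~x has ones exactly in the trailing-zero positions of x. */
int cttz32(uint32_t x) {
  return __builtin_popcount((x - 1) & ~x);
}

/* ctlz: smear the highest set bit downward, then count the remaining zeros. */
int ctlz32(uint32_t x) {
  x |= x >> 1;  x |= x >> 2;  x |= x >> 4;
  x |= x >> 8;  x |= x >> 16;
  return __builtin_popcount(~x);
}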

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
        {
          /* Flip the target bit of each basis state */
          reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
        }

Where MAX_UNSIGNED/state is a 64-bit int.  On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);

   for(i=0; i<reg->size; i++)
     reg->node[i].state ^= Res & 0xFFFFFFFFULL;

   for(i=0; i<reg->size; i++)
     reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but that requires better (e.g. type-based) alias analysis.

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine:

unsigned int swap_32(unsigned int v) {
  v = ((v & 0x00ff00ffU) << 8)  | ((v & 0xff00ff00U) >> 8);
  v = ((v & 0x0000ffffU) << 16) | ((v & 0xffff0000U) >> 16);
  return v;
}

Nor is this (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors:

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X / C1 ), C2
when X, C1, and C2 are unsigned.  Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match.  See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.
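
A worked example of the unsigned case (constants chosen only for
illustration): since unsigned division is monotonic, the compare can be
rewritten as a range check on X, eliminating the divide.

/* For unsigned X:  (X / 10) == 5  <=>  50 <= X && X <= 59
                    (X / 10) <  5  <=>  X < 50
   (assuming the multiplied-out bounds do not overflow). */
int eq_form(unsigned X)    { return (X / 10) == 5; }
int range_form(unsigned X) { return X >= 50 && X <= 59; }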

//===---------------------------------------------------------------------===//

Instcombine misses several of these cases (see the testcase in the patch):
http://gcc.gnu.org/ml/gcc-patches/2006-10/msg01519.html

//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level.  We need a "loops to memcpy"
pass.
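
A sketch of the kind of copy loop meant here (illustrative names and size;
the real code is in viterbi's history-copying loops):

#include <string.h>

#define N 64

void copy_history_loop(int *dst, const int *src) {
  int i;
  for (i = 0; i < N; i++)              /* element-by-element copy loop... */
    dst[i] = src[i];
}

void copy_history_memcpy(int *dst, const int *src) {
  memcpy(dst, src, N * sizeof(int));   /* ...that a "loops to memcpy" pass would form */
}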

//===---------------------------------------------------------------------===//

typedef unsigned U32;
typedef unsigned long long U64;

int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used.  On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags.  On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

//===---------------------------------------------------------------------===//

Promote for i32 bswap can use i64 bswap + shr.  Useful on targets with 64-bit
regs and bswap, like itanium.
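
A sketch of the idea using the GCC builtins (zero-extend, do the wide bswap,
then shift the reversed bytes back down):

#include <stdint.h>

/* bswap32(x) == bswap64((uint64_t)x) >> 32: the four bytes of x end up
   byte-reversed in the high half of the 64-bit result. */
uint32_t bswap32_via_64(uint32_t x) {
  return (uint32_t)(__builtin_bswap64((uint64_t)x) >> 32);
}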

//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has.  This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two identical IV's (after promotion) on PPC/ARM:

LBB1_1: @bb.preheader
        ...
        add r1, r1, #1          <- [0,+,1]
        ...
        add r2, r2, #1          <- [0,+,1]
        ...

//===---------------------------------------------------------------------===//