Target Independent Opportunities:

//===---------------------------------------------------------------------===//
FreeBench/mason contains code like this:

static p_type m0u(p_type p) {
  int m[]={0, 8, 1, 2, 16, 5, 13, 7, 14, 9, 3, 4, 11, 12, 15, 10, 17, 6};
  p_type pu;
  pu.a = m[p.a];
  pu.b = m[p.b];
  pu.c = m[p.c];
  return pu;
}

We currently compile this into a memcpy from a static array into 'm', then
a bunch of loads from m.  It would be better to avoid the memcpy and just do
loads from the static array.

//===---------------------------------------------------------------------===//
Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//
Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math).  Misc/mandel will like this. :)
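
In source terms, the transform is (a sketch; only legal under -ffast-math,
since it ignores errno and intermediate overflow/underflow):

#include <math.h>

/* hypot() guards against overflow/underflow in x*x+y*y and may set errno... */
double slow(double x, double y) { return hypot(x, y); }

/* ...but when neither matters, it can become a bare sqrt, which maps
   directly onto llvm.sqrt. */
double fast(double x, double y) { return sqrt(x*x + y*y); }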

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:
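
A minimal reproducer of the pattern (a sketch; X and Y are globals, so the
or'd value is loaded from and stored back to memory):

int X, Y;

void fn1(void) {
  /* On x86 this could become a single memory-destination or (orl %reg, X),
     but the load of X currently fails to fold into the store. */
  X = X | (Y << 3);
}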

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack, but that is a short-term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

Turn this into a signed shift right in instcombine:

int f(unsigned x) {
  return x >> 31 ? -1 : 0;
}

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25600
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg01492.html
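
The desired result, in C terms (a sketch):

/* For unsigned x, x >> 31 is just the sign bit, so the select is a sign-bit
   broadcast: an arithmetic shift right of x reinterpreted as signed. */
int f_opt(unsigned x) {
  return (int)x >> 31;   /* -1 if the top bit is set, else 0 */
}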

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

   for (i = ...; ++i)
     x = 1ULL << i;

into:

   long long tmp = 1;
   for (i = ...; ++i, tmp+=tmp)
     x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0), where Phi is the
address of the sign-carrying (most significant) byte of *P.
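
In C terms (a sketch):

/* 'x < 0' depends only on the sign bit, so the 4-byte load can shrink to a
   1-byte load of the most significant byte; which address that is depends on
   the target's endianness. */
int is_neg(int *P) { return *P < 0; }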

//===---------------------------------------------------------------------===//

Reassociate should turn: X*X*X*X -> t=(X*X); t*t, to eliminate a multiply.
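
For example (a sketch):

int pow4(int x) {
  /* x*x*x*x naively takes three multiplies; reassociated as (x*x)*(x*x)
     with a temporary it takes two. */
  int t = x*x;
  return t*t;
}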

//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x + y + x*x*x*x*x*y*y*y*y;
}

int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1, 2, 4, and 8 bytes.

//===---------------------------------------------------------------------===//

int rot(unsigned char b) { int a = ((b>>1) ^ (b<<7)) & 0xff; return a; }

Can be improved in two ways:

1. The instcombiner should eliminate the type conversions.
2. The X86 backend should turn this into a rotate by one bit.
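
What the backend should see, in C terms (a sketch):

/* ((b>>1) ^ (b<<7)) & 0xff is an 8-bit rotate right by one: bit 0 moves to
   bit 7 and the rest shift down.  XOR is the same as OR here because the two
   shifted values have no bits in common. */
unsigned char ror1(unsigned char b) {
  return (unsigned char)((b >> 1) | (b << 7));
}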

//===---------------------------------------------------------------------===//

Add LSR exit value substitution. It'll probably be a win for Ackermann, etc.
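
In C terms, exit value substitution looks like this (a sketch):

int count_up(int n) {
  int i;
  for (i = 0; i < n; ++i)
    ;
  /* The exit value of i has the closed form (n > 0 ? n : 0), so this use
     can be rewritten to that and the loop deleted as dead. */
  return i;
}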

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
things too.

//===---------------------------------------------------------------------===//

For packed types, TargetData.cpp::getTypeInfo() returns alignment that is
equal to the type size.  This works, but can be overly conservative, as the
alignment of specific packed types is target dependent.

//===---------------------------------------------------------------------===//

We should add 'unaligned load/store' nodes, and produce them from code like
this:

typedef float v4sf __attribute__((vector_size(16)));  /* assumed typedef */

v4sf example(float *P) {
  return (v4sf){ P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//

We should constant fold packed type casts at the LLVM level, regardless of the
cast.  Currently we cannot fold some casts because we don't have TargetData
information in the constant folder, so we don't know the endianness of the
target!
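
A small illustration of why endianness matters (a sketch):

#include <string.h>

unsigned reinterpret(void) {
  unsigned char b[4] = {1, 2, 3, 4};
  unsigned u;
  /* Folding this load into a constant requires knowing the byte order:
     the result is 0x04030201 on a little-endian target and 0x01020304 on a
     big-endian one. */
  memcpy(&u, b, 4);
  return u;
}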

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns.  Instead
of:

        movl 136(%esp), %eax
        cmpl $0, %eax
        je LBB16_2      #cond_next
LBB16_1:        #cond_true
        incl _foo
LBB16_2:        #cond_next

emit:

        movl    _foo, %eax
        cmpl    $1, %edi
        sbbl    $-1, %eax
        movl    %eax, _foo

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
void sincos(double x, double *sin, double *cos);
void sincosf(float x, float *sin, float *cos);
void sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
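
The pattern to recognize, in source terms (a sketch):

#include <math.h>

/* Today this makes two libm calls; the combine would turn it into a single
   sincos(x, s, c) call.  Expanding in the other direction rewrites a sincos
   call as sin/cos plus stores, so the pointed-to temporaries can be SROA'd. */
void polar(double x, double *s, double *c) {
  *s = sin(x);
  *c = cos(x);
}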

//===---------------------------------------------------------------------===//

Scalar Repl cannot currently promote this testcase to 'ret long cst':

        %struct.X = type { int, int }
        %struct.Y = type { %struct.X }

ulong %bar() {
        %retval = alloca %struct.Y, align 8             ; <%struct.Y*> [#uses=3]
        %tmp12 = getelementptr %struct.Y* %retval, int 0, uint 0, uint 0
        store int 0, int* %tmp12
        %tmp15 = getelementptr %struct.Y* %retval, int 0, uint 0, uint 1
        store int 1, int* %tmp15
        %retval.1 = cast %struct.Y* %retval to ulong*
        %retval.2 = load ulong* %retval.1               ; <ulong> [#uses=1]
        ret ulong %retval.2
}

It should be extended to do so.

//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

void %test(uint* %P) {
        %tmp = load uint* %P
        %tmp14 = or uint %tmp, 3305111552
        %tmp15 = and uint %tmp14, 3321888767
        store uint %tmp15, uint* %P
        ret void
}

(3305111552 is 0xC5000000 and 3321888767 is 0xC5FFFFFF: the or/and only
change the top byte of %tmp, setting it to 0xC5, so a one-byte store of 0xC5
over that byte of *%P is enough.)

//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.

Compile:

int bar(int x) {
  int t = __builtin_clz(x);
  return -(t>>5);
}

to:

_bar:   addic r3,r3,-1
        subfe r3,r3,r3
        blr

(ctlz of an i32 is 32 exactly when x == 0, so clz(x)>>5 is the bit 'x == 0'.)

//===---------------------------------------------------------------------===//

Legalize should lower cttz like this:
  cttz(x) = popcnt((x-1) & ~x)
on targets that have popcnt but not cttz.  Itanium, what else?
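
A quick check of the identity (a sketch, using GCC builtins for illustration):

#include <assert.h>

/* For x != 0, (x-1) & ~x has exactly one bit set per trailing zero of x, so
   its population count is the number of trailing zeros. */
unsigned cttz_via_popcnt(unsigned x) {
  return (unsigned)__builtin_popcount((x - 1) & ~x);
}

int main(void) {
  assert(cttz_via_popcnt(8) == 3);   /* 0b1000 has 3 trailing zeros */
  assert(cttz_via_popcnt(6) == 1);   /* 0b0110 has 1 */
  assert(cttz_via_popcnt(1) == 0);
  return 0;
}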

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
        {
          /* Flip the target bit of each basis state */
          reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
        }

Here MAX_UNSIGNED (the type of state) is a 64-bit int.  On a 32-bit platform
it would be just so cool to turn it into something like:

   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= ((int) (1 << target));
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= (long long)((int) (1 << (target-32))) << 32;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias
reg->node[i].state, so the load of the loop bound could be hoisted out of the
loop.

//===---------------------------------------------------------------------===//