a bunch of loads from m. It would be better to avoid the memcpy and just do
loads from the static array.
-===-------------------------------------------------------------------------===
+//===---------------------------------------------------------------------===//
+
+Make the PPC branch selector target independent
+
+//===---------------------------------------------------------------------===//
Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :)
Number 1 is the preferred solution.
-//===---------------------------------------------------------------------===//
-
-DAG combine this into mul A, 8:
-
-int %test(int %A) {
- %B = mul int %A, 8 ;; shift
- %C = add int %B, 7 ;; dead, no demanded bits.
- %D = and int %C, -8 ;; dead once add is gone.
- ret int %D
-}
-
-This sort of thing occurs in the alloca lowering code and other places that
-are generating alignment of an already aligned value.
+This has been "fixed" by a TableGen hack, but that is a short-term workaround
+which will be removed once the proper fix is made.
//===---------------------------------------------------------------------===//
-We should reassociate:
-int f(int a, int b){ return a * a + 2 * a * b + b * b; }
-into:
-int f(int a, int b) { return a * (a + 2 * b) + b * b; }
-to eliminate a multiply.
-
-//===---------------------------------------------------------------------===//
-
On targets with expensive 64-bit multiply, we could LSR this:
for (i = ...; ++i) {
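The loop body is elided above; assuming it computes something like a 64-bit shift by the induction variable (a hypothetical body, not from the note), the strength reduction could look like this sketch:

```c
/* Hypothetical body: x = 1ULL << i.  On targets where variable 64-bit
   shifts/multiplies are expensive, LSR can replace the shift with a
   running value that doubles each iteration. */
unsigned long long last_shift(int n) {
    unsigned long long x = 0;
    for (int i = 0; i < n; ++i)
        x = 1ULL << i;          /* original: shift recomputed each time */
    return x;
}

unsigned long long last_shift_lsr(int n) {
    unsigned long long x = 0, tmp = 1;
    for (int i = 0; i < n; ++i, tmp += tmp)
        x = tmp;                /* strength-reduced: tmp == 1ULL << i */
    return x;
}
```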
//===---------------------------------------------------------------------===//
-Pull add through mul/shift to handle this:
+Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
+
+//===---------------------------------------------------------------------===//
+
+Reassociate should turn: X*X*X*X -> t=(X*X) (t*t) to eliminate a multiply.
+
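A minimal sketch of the equivalence (function names are illustrative): the naive form uses three multiplies, the reassociated form two.

```c
/* Naive: x*x*x*x is three multiplies. */
long long pow4_naive(long long x) { return x * x * x * x; }

/* Reassociated: t = x*x, then t*t -- two multiplies. */
long long pow4_reassoc(long long x) { long long t = x * x; return t * t; }
```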
+//===---------------------------------------------------------------------===//
-int foo(int P[4][4], int i) {
- return P[i+2][1];
+Interesting? testcase for add/shift/mul reassoc:
+
+int bar(int x, int y) {
+ return x*x*x+y+x*x*x*x*x*y*y*y*y;
+}
+int foo(int z, int n) {
+ return bar(z, n) + bar(2*z, 2*n);
}
-better than this (no addi needed):
+//===---------------------------------------------------------------------===//
+
+These two functions should generate the same code on big-endian systems:
+
+int g(int *j,int *l) { return memcmp(j,l,4); }
+int h(int *j, int *l) { return *j - *l; }
+
+This could be done in SelectionDAGISel.cpp, along with other special cases,
+for 1,2,4,8 bytes.
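A portable sketch of why this holds (the helpers below are illustrative, not proposed code): storing the ints in explicit big-endian byte order, the sign of memcmp matches the sign of ordinary comparison. Note memcmp compares bytes as unsigned, so the equivalence is only clean for non-negative values.

```c
#include <string.h>

/* Store v in explicit big-endian byte order (simulates a big-endian
   in-memory int so the demo runs on any host). */
static void store_be(unsigned char buf[4], unsigned v) {
    buf[0] = v >> 24; buf[1] = v >> 16; buf[2] = v >> 8; buf[3] = v;
}

static int sign(int v) { return (v > 0) - (v < 0); }

/* 1 iff memcmp on the big-endian bytes orders a and b the same way
   integer comparison does. */
int same_order(unsigned a, unsigned b) {
    unsigned char ba[4], bb[4];
    store_be(ba, a);
    store_be(bb, b);
    return sign(memcmp(ba, bb, 4)) == ((a > b) - (a < b));
}
```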
+
+//===---------------------------------------------------------------------===//
+
+This code:
+int rot(unsigned char b) { int a = ((b>>1) ^ (b<<7)) & 0xff; return a; }
+
+Can be improved in two ways:
+
+1. The instcombiner should eliminate the type conversions.
+2. The X86 backend should turn this into a rotate by one bit.
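Why this is a rotate (a sketch; `rotr8` is an illustrative reference function): the bits of `b>>1` and `(b<<7)&0xff` don't overlap, so the xor behaves as an or, giving an 8-bit rotate right by one.

```c
/* The function from the note. */
int rot(unsigned char b) { int a = ((b >> 1) ^ (b << 7)) & 0xff; return a; }

/* Reference rotate-right-by-1 on a byte: b>>1 covers bits 0-6 and
   (b<<7)&0xff covers bit 7 only, so rot() computes exactly this. */
int rotr8(unsigned char b) { return ((b >> 1) | ((b & 1) << 7)) & 0xff; }
```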
+
+//===---------------------------------------------------------------------===//
+
+Add LSR exit value substitution. It'll probably be a win for Ackermann, etc.
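A minimal sketch of what exit value substitution means here (hypothetical functions, assuming the induction variable's only post-loop use is its final value): the use of `i` after the loop is replaced by a closed-form expression, which can then let the loop itself be deleted if it has no other effects.

```c
/* Before: the loop's only observable result is i's exit value. */
int final_iv(int n) {
    int i;
    for (i = 0; i < n; ++i)
        ;               /* body elided; i's exit value is used below */
    return i;           /* equals n for n >= 0, else 0 */
}

/* After exit value substitution: the closed form replaces the use. */
int final_iv_subst(int n) {
    return n >= 0 ? n : 0;
}
```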
+
+//===---------------------------------------------------------------------===//
+
+It would be nice to revert this patch:
+http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html
+
+And teach the dag combiner enough to simplify the code expanded before
+legalize. It seems plausible that this knowledge would let it simplify other
+stuff too.
+
+//===---------------------------------------------------------------------===//
+
+The loop unroller should be enhanced to be able to unroll loops that aren't
+single basic blocks. It should be able to handle stuff like this:
+
+ for (i = 0; i < c1; ++i)
+ if (c2 & (1 << i))
+ foo
+
+where c1/c2 are constants.
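A sketch with assumed constants c1 = 4 and c2 = 5 (0b0101), modeling "foo" as bumping a counter: fully unrolling duplicates the body per iteration, and each `c2 & (1 << i)` guard folds to a constant, leaving only the taken bodies.

```c
enum { C1 = 4, C2 = 5 };   /* assumed example constants, 0b0101 */

/* The multi-block loop from the note. */
int looped(void) {
    int count = 0;
    for (int i = 0; i < C1; ++i)
        if (C2 & (1 << i))
            ++count;            /* "foo" */
    return count;
}

/* After full unrolling + folding each guard to a constant. */
int unrolled(void) {
    int count = 0;
    ++count;                    /* i = 0: C2 & 1 is true */
    /* i = 1: C2 & 2 is false -- body folded away */
    ++count;                    /* i = 2: C2 & 4 is true */
    /* i = 3: C2 & 8 is false -- body folded away */
    return count;
}
```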
-_foo:
- addi r2, r4, 2
- slwi r2, r2, 4
- add r2, r3, r2
- lwz r3, 4(r2)
- blr