//===- README_ALTIVEC.txt - Notes for improving Altivec code gen ----------===//

Implement PPCInstrInfo::isLoadFromStackSlot/isStoreToStackSlot for vector
registers, to generate better spill code.

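A minimal sketch of what the load half of the hook could look like, assuming
vector reloads are emitted as "LVX vD, 0, <frame-index>" by addFrameReference
(the operand shape and the single-opcode check are assumptions; the
isStoreToStackSlot/STVX case is symmetric):

  unsigned PPCInstrInfo::isLoadFromStackSlot(const MachineInstr &MI,
                                             int &FrameIndex) const {
    // Match "LVX vD, 0, <fi>": operand 1 is the zero offset added by
    // addFrameReference, operand 2 the frame index of the spill slot.
    if (MI.getOpcode() == PPC::LVX &&
        MI.getOperand(1).isImm() && MI.getOperand(1).getImm() == 0 &&
        MI.getOperand(2).isFI()) {
      FrameIndex = MI.getOperand(2).getIndex();
      return MI.getOperand(0).getReg();
    }
    return 0;
  }
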
//===----------------------------------------------------------------------===//

Altivec support.  The first example below should be a single lvx from the
constant pool, the second should be a xor/stvx:

 void foo(void) {
   int x[8] __attribute__((aligned(128))) = { 1, 1, 1, 17, 1, 1, 1, 1 };
   bar (x);
 }

 #include <string.h>
 void foo(void) {
   int x[8] __attribute__((aligned(128)));
   memset (x, 0, sizeof (x));
   bar (x);
 }

//===----------------------------------------------------------------------===//

Altivec: Codegen'ing MUL with vector FMADD should add -0.0, not 0.0:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=8763

The addend matters because (-0.0) + 0.0 is +0.0 in the default rounding mode,
so a 0.0 addend flips the sign of an exact -0.0 product, while x + -0.0 leaves
every x unchanged.  When -ffast-math is on, we can use 0.0.

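A plain-C sanity check of the signed-zero rule (nothing PPC-specific here):

  #include <stdio.h>

  int main(void) {
    float p = -0.0f * 1.0f;                  /* exact product is -0.0 */
    printf("%g %g\n", p + 0.0f, p + -0.0f);  /* prints "0 -0" */
    return 0;
  }
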
//===----------------------------------------------------------------------===//

Consider this:
  v4f32 Vector;
  v4f32 Vector2 = { Vector.X, Vector.X, Vector.X, Vector.X };

Since we know that "Vector" is 16-byte aligned and we know the element offset
of ".X", we should change the load into a lve*x instruction, instead of doing
a load/store/lve*x sequence.

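A sketch of the desired pattern written with AltiVec intrinsics (splat_x is a
hypothetical name; this assumes the address is 16-byte aligned so ".X" lands
in word element 0 of the lvewx result):

  #include <altivec.h>

  vector float splat_x(const float *vx) {  /* vx = &Vector.X */
    vector float t = vec_lde(0, vx);       /* lvewx: *vx into element 0 */
    return vec_splat(t, 0);                /* vspltw: broadcast element 0 */
  }
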
//===----------------------------------------------------------------------===//

There are a wide range of vector constants we can generate with combinations of
altivec instructions.  For example, GCC does "t = vsplti*, r = t+t" for
constants it can't generate with one vsplti: a splat of 16 is outside
vspltisw's -16..15 immediate range, but can be built as t = vspltisw 8
followed by r = vadduwm t, t.

This should be added to the ISD::BUILD_VECTOR case in
PPCTargetLowering::LowerOperation.

//===----------------------------------------------------------------------===//

FABS/FNEG can be codegen'd with the appropriate and/xor of -0.0.

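The scalar version of the trick, for reference; the vector form applies the
same masks lane-wise with vand/vxor (my_fneg/my_fabs are illustrative names):

  #include <stdint.h>
  #include <string.h>

  float my_fneg(float x) {        /* xor with -0.0's pattern: flip sign bit */
    uint32_t b; memcpy(&b, &x, 4);
    b ^= 0x80000000u;
    memcpy(&x, &b, 4);
    return x;
  }

  float my_fabs(float x) {        /* and with ~(-0.0): clear sign bit */
    uint32_t b; memcpy(&b, &x, 4);
    b &= 0x7fffffffu;
    memcpy(&x, &b, 4);
    return x;
  }
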
//===----------------------------------------------------------------------===//

Codegen the constant here with something better than a constant pool load.

void %test_f(<4 x float>* %P, <4 x float>* %Q, float %X) {
        %tmp = load <4 x float>* %Q
        %tmp0 = cast <4 x float> %tmp to <4 x int>
        %tmp1 = and <4 x int> %tmp0, < int 2147483647, int 2147483647, int 2147483647, int 2147483647 >
        %tmp2 = cast <4 x int> %tmp1 to <4 x float>
        store <4 x float> %tmp2, <4 x float>* %P
        ret void
}

//===----------------------------------------------------------------------===//

For functions that use altivec AND have calls, we are VRSAVE'ing all
call-clobbered registers.

//===----------------------------------------------------------------------===//

Implement passing/returning vectors by value.

//===----------------------------------------------------------------------===//

GCC apparently tries to codegen { C1, C2, Variable, C3 } as a constant pool load
of C1/C2/C3, then a load and vperm of Variable.

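The kind of source that produces such a build_vector (hypothetical constants
standing in for C1/C2/C3):

  #include <altivec.h>

  vector int build(int Variable) {
    return (vector int){ 10, 20, Variable, 30 };  /* { C1, C2, Variable, C3 } */
  }
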
//===----------------------------------------------------------------------===//

We currently codegen SCALAR_TO_VECTOR as a store of the scalar to a 16-byte
aligned stack slot, followed by a lve*x/vperm.  We should probably just store it
to a scalar stack slot, then use lvsl/vperm to load it.  If the value is already
in memory, this is a huge win.

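A sketch of the proposed lowering for the value-already-in-memory case, written
with AltiVec intrinsics (scalar_to_vector is an illustrative name):

  #include <altivec.h>

  vector float scalar_to_vector(const float *p) {
    vector float block = vec_ld(0, p);           /* lvx of the enclosing 16 bytes */
    vector unsigned char ctrl = vec_lvsl(0, p);  /* lvsl: left-rotate control */
    return vec_perm(block, block, ctrl);         /* *p lands in element 0 */
  }
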
//===----------------------------------------------------------------------===//

Do not generate the MFCR/RLWINM sequence for predicate compares when the
predicate compare is used immediately by a branch.  Just branch on the right
condition bit of CR6.

//===----------------------------------------------------------------------===//

SROA should turn "vector unions" into the appropriate insert/extract element
chains.

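For instance (f4vector/first_elem are illustrative names), this should reduce
to a single extractelement:

  #include <altivec.h>

  union f4vector {
    vector float v;
    float f[4];
  };

  float first_elem(vector float x) {
    union f4vector u;
    u.v = x;
    return u.f[0];   /* extractelement, not a store/reload round trip */
  }
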
//===----------------------------------------------------------------------===//

We need a way to teach tblgen that some operands of an intrinsic are required to
be constants.  The verifier should enforce this constraint.

//===----------------------------------------------------------------------===//

Instead of writing a pattern for type-agnostic operations (e.g. gen-zero, load,
store, and, ...) in every supported type, make legalize do the work.  We should
have a canonical type that we want operations changed to (e.g. v4i32 for
build_vector) and legalize should change non-identical types to these.  This is
similar to what it does for operations that are only supported in some types,
e.g. x86 cmov (not supported on bytes).

This would fix two problems:
1. Writing patterns multiple times.
2. Identical operations in different types are not getting CSE'd.

We already do this for shuffle and build_vector.  We need load, undef, and, or,
xor, etc.

//===----------------------------------------------------------------------===//

Implement multiply for vector integer types, to avoid the horrible scalarized
code produced by legalize.

void test(vector int *X, vector int *Y) {
  *X = *X * *Y;
}

//===----------------------------------------------------------------------===//

There are a wide variety of vector_shuffle operations that we can do with a pair
of instructions (e.g. a vsldoi + vpkuhum).  We should pattern match these, but
there are a huge number of these.  For example:

  C = vector_shuffle A, B, <0, 1, 2, 4>
  ->  t = vsldoi A, A, 12
  ->  C = vsldoi t, B, 4

//===----------------------------------------------------------------------===//