* implement do-loop -> bdnz transform
* implement powerpc-64 for darwin
* use stfiwx in float->int
* Fold add and sub with constant into non-extern, non-weak addresses so this:
        lis r2, ha16(l2__ZTV4Cell)
        la r2, lo16(l2__ZTV4Cell)(r2)
        addi r2, r2, 8
becomes:
        lis r2, ha16(l2__ZTV4Cell+8)
        la r2, lo16(l2__ZTV4Cell+8)(r2)
* Teach LLVM how to codegen this:
unsigned short foo(float a) { return a; }
        rlwinm r3, r2, 0, 16, 31
* Support 'update' load/store instructions.  These are cracked on the G5, but
are still a codesize win.
* should hint to the branch select pass that it doesn't need to print the
second unconditional branch, so we don't end up with things like:
        b .LBBl42__2E_expand_function_8_674     ; loopentry.24
        b .LBBl42__2E_expand_function_8_42      ; NewDefault
        b .LBBl42__2E_expand_function_8_42      ; NewDefault
===-------------------------------------------------------------------------===
if (X == 0x12345678) bar();
===-------------------------------------------------------------------------===
Lump the constant pool for each function into ONE pic object, and reference
pieces of it as offsets from the start.  For functions like this (contrived
to have lots of constants obviously):

double X(double Y) { return (Y*1.23 + 4.512)*2.34 + 14.38; }
        lis r2, ha16(.CPI_X_0)
        lfd f0, lo16(.CPI_X_0)(r2)
        lis r2, ha16(.CPI_X_1)
        lfd f2, lo16(.CPI_X_1)(r2)
        fmadd f0, f1, f0, f2
        lis r2, ha16(.CPI_X_2)
        lfd f1, lo16(.CPI_X_2)(r2)
        lis r2, ha16(.CPI_X_3)
        lfd f2, lo16(.CPI_X_3)(r2)
It would be better to materialize .CPI_X into a register, then use immediates
off of the register to avoid the lis's.  This is even more important in PIC
mode.
===-------------------------------------------------------------------------===
Implement Newton-Raphson method for improving estimate instructions to the
correct accuracy, and implement divide as multiply by reciprocal when it has
more than one use.  Itanium will want this too.
===-------------------------------------------------------------------------===
int foo(int a, int b) { return a == b ? 16 : 0; }

        rlwinm r2, r2, 31, 31, 31
If we exposed the srl & mask ops after the MFCR that we are doing to select
the correct CR bit, then we could fold the slwi into the rlwinm before it.
===-------------------------------------------------------------------------===
#define ARRAY_LENGTH 16

union bitfield {
    struct {
#ifndef __ppc__
        unsigned int field0 : 6;
        unsigned int field1 : 6;
        unsigned int field2 : 6;
        unsigned int field3 : 6;
        unsigned int field4 : 3;
        unsigned int field5 : 4;
        unsigned int field6 : 1;
#else
        unsigned int field6 : 1;
        unsigned int field5 : 4;
        unsigned int field4 : 3;
        unsigned int field3 : 6;
        unsigned int field2 : 6;
        unsigned int field1 : 6;
        unsigned int field0 : 6;
#endif
    } bitfields;
    unsigned int u32All;
};

typedef struct program_t {
    union bitfield array[ARRAY_LENGTH];
} program;

void AdjustBitfields(program* prog, unsigned int fmt1)
{
    unsigned int shift = 0;
    unsigned int texCount = 0;
    unsigned int i;

    for (i = 0; i < 8; i++)
    {
        prog->array[i].bitfields.field0 = texCount;
        prog->array[i].bitfields.field1 = texCount + 1;
        prog->array[i].bitfields.field2 = texCount + 2;
        prog->array[i].bitfields.field3 = texCount + 3;

        texCount += (fmt1 >> shift) & 0x7;
        shift += 3;
    }
}
In the loop above, the bitfield adds get generated as
(add (shl bitfield, C1), (shl C2, C1)) where C2 is 1, 2 or 3.

Since the input to the (or and, and) is an (add) rather than a (shl), the shift
doesn't get folded into the rlwimi instruction.  We should ideally see through
things like this, rather than forcing llvm to generate the equivalent
(shl (add bitfield, C2), C1) with some kind of mask.
===-------------------------------------------------------------------------===
int %f1(int %a, int %b) {
        %tmp.1 = and int %a, 15         ; <int> [#uses=1]
        %tmp.3 = and int %b, 240        ; <int> [#uses=1]
        %tmp.4 = or int %tmp.3, %tmp.1  ; <int> [#uses=1]
        ret int %tmp.4
}

without a copy.  We make this currently:

_f1:
        rlwinm r2, r4, 0, 24, 27
        rlwimi r2, r3, 0, 28, 31
        or r3, r2, r2
        blr
The two-addr pass or RA needs to learn when it is profitable to commute an
instruction to avoid a copy AFTER the 2-addr instruction.  The 2-addr pass
currently only commutes to avoid inserting a copy BEFORE the two addr instr.
===-------------------------------------------------------------------------===
Compile offsets from allocas:

int *%test() {
        %X = alloca { int, int }
        %Y = getelementptr {int,int}* %X, int 0, uint 1
        ret int* %Y
}

into a single add, not two:

--> important for C++.
===-------------------------------------------------------------------------===
int test3(int a, int b) { return (a < 0) ? a : 0; }

should be branch free code.  LLVM is turning it into < 1 because of the RHS.
===-------------------------------------------------------------------------===
No loads or stores of the constants should be needed:

struct foo { double X, Y; };
void xxx(struct foo F);
void bar() { struct foo R = { 1.0, 2.0 }; xxx(R); }
===-------------------------------------------------------------------------===
int h(int i, int j, int k) {
  return (i==0||j==0||k == 0);
}

We currently emit this:
The ctlz/shift instructions are created by the isel, so the dag combiner doesn't
have a chance to pull the shifts through the or's (eliminating two
instructions).  SETCC nodes should be custom lowered in this case, not expanded