* implement do-loop -> bdnz transform
* implement powerpc-64 for darwin
* use stfiwx in float->int

* Fold add and sub with constant into non-extern, non-weak addresses so this:
        lis r2, ha16(l2__ZTV4Cell)
        la r2, lo16(l2__ZTV4Cell)(r2)
        addi r2, r2, 8
  becomes:
        lis r2, ha16(l2__ZTV4Cell+8)
        la r2, lo16(l2__ZTV4Cell+8)(r2)
* Teach LLVM how to codegen this:
        unsigned short foo(float a) { return a; }
  without a trailing mask instruction like:
        rlwinm r3, r2, 0, 16, 31
* Support 'update' load/store instructions.  These are cracked on the G5, but
  are still a codesize win.

* Should hint to the branch select pass that it doesn't need to print the
  second unconditional branch, so we don't end up with things like:
        b .LBBl42__2E_expand_function_8_674     ; loopentry.24
        b .LBBl42__2E_expand_function_8_42      ; NewDefault
        b .LBBl42__2E_expand_function_8_42      ; NewDefault
===-------------------------------------------------------------------------===

Compile:

        if (X == 0x12345678) bar();

without materializing the full 32-bit constant, e.g. with an xoris of the high
half followed by a cmplwi of the low half.
===-------------------------------------------------------------------------===

Lump the constant pool for each function into ONE pic object, and reference
pieces of it as offsets from the start.  For functions like this (contrived
to have lots of constants obviously):

double X(double Y) { return (Y*1.23 + 4.512)*2.34 + 14.38; }

We generate:

_X:
        lis r2, ha16(.CPI_X_0)
        lfd f0, lo16(.CPI_X_0)(r2)
        lis r2, ha16(.CPI_X_1)
        lfd f2, lo16(.CPI_X_1)(r2)
        fmadd f0, f1, f0, f2
        lis r2, ha16(.CPI_X_2)
        lfd f1, lo16(.CPI_X_2)(r2)
        lis r2, ha16(.CPI_X_3)
        lfd f2, lo16(.CPI_X_3)(r2)
        fmadd f1, f0, f1, f2
        blr

It would be better to materialize .CPI_X into a register, then use immediates
off of the register to avoid the lis's.  This is even more important in PIC
mode.
===-------------------------------------------------------------------------===

Implement Newton-Raphson method for improving estimate instructions to the
correct accuracy, and implementing divide as multiply by reciprocal when it has
more than one use.  Itanium will want this too.
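For reference, the refinement math is simple: if x approximates 1/d with
relative error e, one Newton-Raphson step yields relative error -e^2, so each
step roughly doubles the number of correct bits.  A minimal C sketch (the
function name is illustrative, not LLVM code):

```c
#include <assert.h>
#include <math.h>

/* One Newton-Raphson step for a reciprocal: if x ~= 1/d with relative
 * error e, x*(2 - d*x) has relative error -e^2.  This is the step that
 * would refine a hardware low-precision estimate (e.g. fres). */
static double refine_recip(double d, double x) {
    return x * (2.0 - d * x);
}
```

Two steps take a ~1% estimate down to roughly 1e-8 relative error, which is
why a fres + a couple of refinements can beat a full fdiv when the reciprocal
has multiple uses.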
===-------------------------------------------------------------------------===

Compile this:

        int foo(int a, int b) { return a == b ? 16 : 0; }

We currently emit a cmpw/mfcr sequence that ends with:

        rlwinm r2, r2, 31, 31, 31
        slwi r3, r2, 4

If we exposed the srl & mask ops after the MFCR that we are doing to select
the correct CR bit, then we could fold the slwi into the rlwinm before it.
===-------------------------------------------------------------------------===

#define ARRAY_LENGTH 16

union bitfield {
    struct {
#ifndef __ppc__
        unsigned int field0 : 6;
        unsigned int field1 : 6;
        unsigned int field2 : 6;
        unsigned int field3 : 6;
        unsigned int field4 : 3;
        unsigned int field5 : 4;
        unsigned int field6 : 1;
#else
        unsigned int field6 : 1;
        unsigned int field5 : 4;
        unsigned int field4 : 3;
        unsigned int field3 : 6;
        unsigned int field2 : 6;
        unsigned int field1 : 6;
        unsigned int field0 : 6;
#endif
    } bitfields;
    unsigned int u32All;
};

typedef struct program_t {
    union bitfield array[ARRAY_LENGTH];
} program;
void AdjustBitfields(program* prog, unsigned int fmt1)
{
    unsigned int shift = 0;
    unsigned int texCount = 0;
    unsigned int i;

    for (i = 0; i < 8; i++)
    {
        prog->array[i].bitfields.field0 = texCount;
        prog->array[i].bitfields.field1 = texCount + 1;
        prog->array[i].bitfields.field2 = texCount + 2;
        prog->array[i].bitfields.field3 = texCount + 3;

        texCount += (fmt1 >> shift) & 0x7;
        shift += 3;
    }
}
In the loop above, the bitfield adds get generated as
(add (shl bitfield, C1), (shl C2, C1)) where C2 is 1, 2 or 3.

Since the input to the (or and, and) is an (add) rather than a (shl), the shift
doesn't get folded into the rlwimi instruction.  We should ideally see through
things like this, rather than forcing llvm to generate the equivalent
(shl (add bitfield, C2), C1) with some kind of mask.
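The rewrite the combiner would need to recognize is a plain arithmetic
identity (modulo 2^32).  A quick C check, with illustrative function names:

```c
#include <assert.h>

/* Form the loop currently produces for each bitfield store:
 * (add (shl bitfield, C1), (shl C2, C1)). */
static unsigned as_generated(unsigned bitfield, unsigned c1, unsigned c2) {
    return (bitfield << c1) + (c2 << c1);
}

/* The shl-of-add form that would let the shift fold into rlwimi:
 * (shl (add bitfield, C2), C1).  Identical modulo 2^32 by
 * distributivity of shl over add. */
static unsigned as_wanted(unsigned bitfield, unsigned c1, unsigned c2) {
    return (bitfield + c2) << c1;
}
```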
===-------------------------------------------------------------------------===

Compile this:

int %f1(int %a, int %b) {
        %tmp.1 = and int %a, 15         ; <int> [#uses=1]
        %tmp.3 = and int %b, 240        ; <int> [#uses=1]
        %tmp.4 = or int %tmp.3, %tmp.1  ; <int> [#uses=1]
        ret int %tmp.4
}

to a single rlwimi without a copy.  We make this currently:

_f1:
        rlwinm r2, r4, 0, 24, 27
        rlwimi r2, r3, 0, 28, 31
        or r3, r2, r2
        blr

The two-addr pass or RA needs to learn when it is profitable to commute an
instruction to avoid a copy AFTER the 2-addr instruction.  The 2-addr pass
currently only commutes to avoid inserting a copy BEFORE the two addr instr.
===-------------------------------------------------------------------------===

176.gcc contains a bunch of code like this (this occurs dozens of times):

int %test(uint %mode.0.i.0) {
        %tmp.79 = cast uint %mode.0.i.0 to sbyte  ; <sbyte> [#uses=1]
        %tmp.80 = cast sbyte %tmp.79 to int       ; <int> [#uses=1]
        %tmp.81 = shl int %tmp.80, ubyte 16       ; <int> [#uses=1]
        %tmp.82 = and int %tmp.81, 16711680       ; <int> [#uses=1]
        ret int %tmp.82
}

which we compile to:

_test:
        extsb r2, r3
        rlwinm r3, r2, 16, 8, 15
        blr

The extsb is obviously dead.  This can be handled by a future thing like
MaskedValueIsZero that checks to see if bits are ever demanded (in this case,
the sign bits are never used, so we can fold the sext_inreg to nothing).
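The demanded-bits argument can be checked directly in C: under the 16711680
(0x00FF0000) mask, the sign-extension bits never survive.  Function names are
illustrative:

```c
#include <assert.h>

/* What the IR computes: sext(trunc(x)) << 16, masked to 0x00FF0000. */
static unsigned with_sext(unsigned x) {
    int b = (signed char)(x & 0xFF);   /* sext of the low byte (assumes the
                                          usual wraparound conversion) */
    return ((unsigned)b << 16) & 16711680u;
}

/* The sign-copied bits all land at bit 24 and above after the shift,
 * and the mask kills them, so a plain shift-and-mask of x matches: */
static unsigned without_sext(unsigned x) {
    return (x << 16) & 16711680u;
}
```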
I'm seeing code like this:

        srwi r3, r3, 16
        extsb r3, r3
        rlwimi r4, r3, 16, 8, 15

in which the extsb is preventing the srwi from being nuked.
===-------------------------------------------------------------------------===

Another example that occurs is:

uint %test(int %specbits.6.1) {
        %tmp.2540 = shr int %specbits.6.1, ubyte 11   ; <int> [#uses=1]
        %tmp.2541 = cast int %tmp.2540 to uint        ; <uint> [#uses=1]
        %tmp.2542 = shl uint %tmp.2541, ubyte 13      ; <uint> [#uses=1]
        %tmp.2543 = and uint %tmp.2542, 8192          ; <uint> [#uses=1]
        ret uint %tmp.2543
}

which we compile to:

_test:
        srawi r2, r3, 11
        rlwinm r3, r2, 13, 18, 18
        blr

The srawi can be nuked by turning the SAR into a logical SHR (the sext bits are
dead), which I think can then be folded into the rlwinm.
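Why the sext bits are dead here is easy to verify in C: only bit 11 of the
input can reach bit 13 of the result, since 8192 = 1 << 13.  A sketch with
illustrative names (assumes signed >> is an arithmetic shift, as on the usual
compilers and as srawi is on PPC):

```c
#include <assert.h>

/* What the IR computes: arithmetic shift right by 11, shift left by 13,
 * mask with 8192 (= 1 << 13). */
static unsigned with_sar(int x) {
    return ((unsigned)(x >> 11) << 13) & 8192u;
}

/* The sign bits the arithmetic shift smears in sit above bit 13 after
 * the left shift, so the mask discards them; a logical shift matches: */
static unsigned with_shr(int x) {
    return (((unsigned)x >> 11) << 13) & 8192u;
}
```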
===-------------------------------------------------------------------------===

Compile offsets from allocas:

int *%test() {
        %X = alloca { int, int }
        %Y = getelementptr {int,int}* %X, int 0, uint 1
        ret int* %Y
}

into a single add, not two:

_test:
        addi r2, r1, -8
        addi r3, r2, 4
        blr

--> important for C++.
===-------------------------------------------------------------------------===

int test3(int a, int b) { return (a < 0) ? a : 0; }

should be branch free code.  LLVM is turning it into < 1 because of the RHS.
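For reference, the branch-free form is a shift-generated mask plus an AND.  A
hedged C sketch (assumes arithmetic shift for signed >>, which is what PPC's
srawi provides; the function name is made up):

```c
#include <assert.h>

/* Branch-free (a < 0) ? a : 0.  a >> 31 is all-ones when a is negative
 * and all-zeros otherwise (assuming arithmetic signed shift), so the
 * AND keeps a exactly when it was negative. */
static int test3_branchfree(int a) {
    return a & (a >> 31);
}
```

On PPC this is just srawi + and, with no compare or branch.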
===-------------------------------------------------------------------------===

No loads or stores of the constants should be needed:

struct foo { double X, Y; };
void xxx(struct foo F);
void bar() { struct foo R = { 1.0, 2.0 }; xxx(R); }
===-------------------------------------------------------------------------===

Darwin Stub LICM optimization:

        loop:
                call bar

Have to go through an indirect stub if bar is external or linkonce.  It would
be better to compile it as:

        load_the_stub_address_into_a_register
        loop:
                call_through_the_register

which only computes the address of bar once (instead of each time through the
stub).  This is Darwin specific and would have to be done in the code generator.
Probably not a win on x86.
===-------------------------------------------------------------------------===

PowerPC i1/setcc stuff (depends on subreg stuff):

Check out the PPC code we get for 'compare' in this testcase:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19672

oof.  on top of not doing the logical crnand instead of (mfcr, mfcr,
invert, invert, or), we then have to compare it against zero instead of
using the value already in a CR!

that should be something like:

        crnand ...
        bne cr0, LBB_compare_4

instead of:

        rlwinm r7, r7, 30, 31, 31
        rlwinm r8, r8, 30, 31, 31
        ...
        bne cr0, LBB_compare_4  ; loopexit
===-------------------------------------------------------------------------===

Simple IPO for argument passing, change:
  void foo(int X, double Y, int Z) -> void foo(int X, int Z, double Y)

The Darwin ABI specifies that any integer arguments in the first 32 bytes worth
of arguments get assigned to r3 through r10.  That is, if you have a function
foo(int, double, int) you get r3, f1, r6, since the 64 bit double ate up the
argument bytes for r4 and r5.  The trick then would be to shuffle the argument
order for functions we can internalize so that the maximum number of
integers/pointers get passed in regs before you see any of the fp arguments.

Instead of implementing this, it would actually probably be easier to just
implement a PPC fastcc, where we could do whatever we wanted to the CC,
including having this work sanely.
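The assignment rule described above can be modeled in a few lines of C.  This
is a toy sketch only (the helper name and the single-letter signature encoding
are made up for illustration; it is not the real ABI lowering):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Assign the first 32 bytes of arguments per the rule above: each
 * 4-byte slot maps to r3..r10; an 'i' (int) takes one slot, a 'd'
 * (double) goes in the next FP register but still burns two GPR
 * slots worth of argument bytes. */
static void assign_regs(const char *sig, char out[][4]) {
    int slot = 0, fpr = 1;
    for (int i = 0; sig[i] && slot < 8; i++) {
        if (sig[i] == 'i') {
            sprintf(out[i], "r%d", 3 + slot);
            slot += 1;
        } else {  /* 'd' */
            sprintf(out[i], "f%d", fpr++);
            slot += 2;
        }
    }
}
```

Running it on "idi" reproduces the r3, f1, r6 example from the text; reordering
to "iid" packs the ints into r3 and r4 first, which is exactly the shuffle the
note proposes.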
===-------------------------------------------------------------------------===

Fix Darwin FP-In-Integer Registers ABI

Darwin passes doubles in structures in integer registers, which is very very
bad.  Add something like a BIT_CONVERT to LLVM, then do an i-p transformation
that percolates these things out of functions.

Check out how horrible this is:
http://gcc.gnu.org/ml/gcc/2005-10/msg01036.html

This is an extension of "interprocedural CC unmunging" that can't be done with
just fastcc.
===-------------------------------------------------------------------------===

Code Gen IPO optimization:

Squish small scalar globals together into a single global struct, allowing the
address of the struct to be CSE'd, avoiding PIC accesses (also reduces the size
of the GOT on targets with one).
===-------------------------------------------------------------------------===

Generate lwbrx and other byteswapping load/store instructions when reasonable.

===-------------------------------------------------------------------------===

Implement TargetConstantVec, and set up PPC to custom lower ConstantVec into
TargetConstantVec's if it's one of the many forms that are algorithmically
computable using the spiffy altivec instructions.
383 double %test(double %X) {
384 %Y = cast double %X to long
385 %Z = cast long %Y to double
402 without the lwz/stw's.
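Note that in this old IR dialect, `cast double to long` is a value conversion
(round toward zero), not a bit reinterpretation, so the function computes
truncation-and-back; the complaint is only about doing the round trip through
a stack slot instead of keeping it in registers.  A C model of the value
computed (the function name is illustrative):

```c
#include <assert.h>

/* Value computed by the casts above: double -> 64-bit integer
 * (truncating toward zero, like fctidz) -> double.  The note asks for
 * this to stay in registers rather than bouncing through memory. */
static double trunc_roundtrip(double x) {
    return (double)(long long)x;
}
```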
===-------------------------------------------------------------------------===

Compile code containing:

        return b * 3;  // ignore the fact that this is always 3.

into something not this:

        ...
        rlwinm r2, r2, 29, 31, 31
        ...
        bgt cr0, LBB1_2 ; UnifiedReturnBlock
        ...
        rlwinm r2, r2, 0, 31, 31
        ...
LBB1_2: ; UnifiedReturnBlock
        ...

In particular, the two compares (marked 1) could be shared by reversing one.
This could be done in the dag combiner, by swapping a BR_CC when a SETCC of the
same operands (but backwards) exists.  In this case, this wouldn't save us
anything though, because the compares still wouldn't be shared.