//===- README.txt - Notes for improving PowerPC-specific code gen ---------===//

* implement do-loop -> bdnz transform (see the loop sketch below)
* Implement __builtin_trap (ISD::TRAP) as 'tw 31, 0, 0' aka 'trap'.
* lmw/stmw pass a la arm load store optimizer for prolog/epilog
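As an illustration of the first item, a counted loop of the following shape
(names are made up) is the kind the do-loop -> bdnz transform should catch:
the trip count goes into CTR via mtctr, and the decrement/test/branch triple
collapses into a single bdnz.

void scale(int *p, int n, int k) {
  do {                      /* counted do-loop; assumes n >= 1           */
    *p++ *= k;
  } while (--n != 0);       /* decrement + test + branch -> one bdnz     */
}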
===-------------------------------------------------------------------------===

Support 'update' load/store instructions. These are cracked on the G5, but are
still a codesize win.

With preinc enabled, this:

long *%test4(long *%X, long *%dest) {
        %Y = getelementptr long* %X, int 4
        %A = load long* %Y
        store long %A, long* %dest
        ret long* %Y
}

compiles, with -sched=list-burr, to code that uses the lwzu update form.
===-------------------------------------------------------------------------===

We compile the hottest inner loop of viterbi to a sequence that closes the
loop with:

        bne cr0, LBB1_83        ;bb420.i

The CBE manages to produce a loop closed with bdz instead. This could be much
better (bdnz instead of bdz) but it still beats us. If we produced this with
bdnz, the loop would be a single dispatch group.
===-------------------------------------------------------------------------===

This is effectively a simple form of predication.
===-------------------------------------------------------------------------===

Lump the constant pool for each function into ONE pic object, and reference
pieces of it as offsets from the start. For functions like this (contrived
to have lots of constants obviously):

double X(double Y) { return (Y*1.23 + 4.512)*2.34 + 14.38; }

we currently emit a separate lis/lfd pair for every constant:

        lis r2, ha16(.CPI_X_0)
        lfd f0, lo16(.CPI_X_0)(r2)
        lis r2, ha16(.CPI_X_1)
        lfd f2, lo16(.CPI_X_1)(r2)
        lis r2, ha16(.CPI_X_2)
        lfd f1, lo16(.CPI_X_2)(r2)
        lis r2, ha16(.CPI_X_3)
        lfd f2, lo16(.CPI_X_3)(r2)

It would be better to materialize .CPI_X into a register, then use immediates
off of the register to avoid the lis's. This is even more important in PIC
mode.

Note that this (and the static variable version) is discussed here for GCC:
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg00133.html
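A source-level sketch of the suggested layout (pool_X and X2 are illustrative
names, not anything we emit today): all four constants live in one object, its
base address is materialized once, and each load is a small fixed offset off
that base.

static const double pool_X[4] = { 1.23, 4.512, 2.34, 14.38 };

double X2(double Y) {
  const double *p = pool_X;                /* one address materialization */
  return (Y*p[0] + p[1])*p[2] + p[3];      /* lfd at offsets 0, 8, 16, 24 */
}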
Here's another example (the sgn function):

double testf(double a) {
       return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
}

It produces a BB like this:

        lis r2, ha16(LCPI1_0)
        lfs f0, lo16(LCPI1_0)(r2)
        lis r2, ha16(LCPI1_1)
        lis r3, ha16(LCPI1_2)
        lfs f2, lo16(LCPI1_2)(r3)
        lfs f3, lo16(LCPI1_1)(r2)
===-------------------------------------------------------------------------===

PIC Code Gen IPO optimization:

Squish small scalar globals together into a single global struct, allowing the
address of the struct to be CSE'd, avoiding PIC accesses (also reduces the size
of the GOT on targets with one).

Note that this is discussed here for GCC:
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg00133.html
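In C terms the transformation looks like this (hypothetical before/after; the
real change would be made at the IR level):

/* Before: three globals, three separate PIC address computations. */
int a, b, c;

/* After: one struct whose base address is computed once and CSE'd;
   a, b and c become fixed offsets from that base. */
static struct { int a, b, c; } merged_globals;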
===-------------------------------------------------------------------------===

Implement the Newton-Raphson method for improving the estimate instructions to
the correct accuracy, and implement divide as multiply by reciprocal when the
reciprocal has more than one use. Itanium will want this too.
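For reference, one Newton-Raphson step for a reciprocal estimate r of 1/x is
r' = r * (2 - x*r), which roughly doubles the number of correct bits. A minimal
sketch (refine_recip is an illustrative name; how many steps are needed depends
on the accuracy of the hardware estimate):

float refine_recip(float x, float r) {
  r = r * (2.0f - x*r);     /* first Newton-Raphson step  */
  r = r * (2.0f - x*r);     /* second step                */
  return r;
}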
===-------------------------------------------------------------------------===

Compile offsets from allocas:

        %X = alloca { int, int }
        %Y = getelementptr {int,int}* %X, int 0, uint 1

into a single add, not two (the alloca base address plus a separate add for
the field offset).

--> important for C++.
===-------------------------------------------------------------------------===

No loads or stores of the constants should be needed:

struct foo { double X, Y; };
void xxx(struct foo F);
void bar() { struct foo R = { 1.0, 2.0 }; xxx(R); }
===-------------------------------------------------------------------------===

Darwin Stub LICM optimization:

Loops that call a function bar have to go through an indirect stub if bar is
external or linkonce. It would be better to hoist the stub-address computation
out of the loop, so the address of bar is computed only once (instead of each
time through the stub). This is Darwin specific and would have to be done in
the code generator. Probably not a win on x86.
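The source-level analogue of the hoist (bar and many_calls are illustrative;
the real transformation would be done on the generated code):

extern void bar(void);

void many_calls(int n) {
  void (*f)(void) = bar;   /* stub address computed once, outside the loop */
  while (n--)
    f();                   /* indirect call, no per-iteration stub lookup  */
}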
===-------------------------------------------------------------------------===

Simple IPO for argument passing, change:
  void foo(int X, double Y, int Z) -> void foo(int X, int Z, double Y)

The Darwin ABI specifies that any integer arguments in the first 32 bytes worth
of arguments get assigned to r3 through r10. That is, if you have a function
foo(int, double, int) you get r3, f1, r6, since the 64 bit double ate up the
argument bytes for r4 and r5. The trick then would be to shuffle the argument
order for functions we can internalize so that the maximum number of
integers/pointers get passed in regs before you see any of the fp arguments.

Instead of implementing this, it would actually probably be easier to just
implement a PPC fastcc, where we could do whatever we wanted to the CC,
including having this work sanely.
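In source terms (the register comments follow the ABI rule quoted above):

void foo (int X, double Y, int Z);  /* X -> r3, Y -> f1, Z -> r6;
                                       Y's bytes burn r4 and r5      */
void foo2(int X, int Z, double Y);  /* X -> r3, Z -> r4, Y -> f1;
                                       no GPR argument slots wasted  */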
===-------------------------------------------------------------------------===

Fix Darwin FP-In-Integer Registers ABI

Darwin passes doubles in structures in integer registers, which is very very
bad. Add something like a BIT_CONVERT to LLVM, then do an interprocedural
transformation that percolates these things out of functions.

Check out how horrible this is:
http://gcc.gnu.org/ml/gcc/2005-10/msg01036.html

This is an extension of "interprocedural CC unmunging" that can't be done with
just fastcc.
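For example (hypothetical struct; register assignments follow the 32-byte rule
described above):

struct pt { double x, y; };

/* Passed by value: p.x travels in r3/r4 and p.y in r5/r6, not in FPRs,
   so FP code must first store and reload the halves to use them. */
double sum_fields(struct pt p) { return p.x + p.y; }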
===-------------------------------------------------------------------------===

We compile code like this:

        return b * 3; // ignore the fact that this is always 3.

into something not this:

        rlwinm r2, r2, 29, 31, 31
        bgt cr0, LBB1_2 ; UnifiedReturnBlock
        rlwinm r2, r2, 0, 31, 31
LBB1_2: ; UnifiedReturnBlock

In particular, the two compares could be shared by reversing one. This could be
done in the dag combiner, by swapping a BR_CC when a SETCC of the same operands
(but backwards) exists. In this case, this wouldn't save us anything though,
because the compares still wouldn't be shared.
===-------------------------------------------------------------------------===

We should custom expand setcc instead of pretending that we have it. That
would allow us to expose the access of the crbit after the mfcr, allowing
that access to be trivially folded into other ops. A simple example:

int foo(int a, int b) { return (a < b) << 4; }

currently compiles to a compare, an mfcr, and then a separate extraction of
the crbit:

        rlwinm r2, r2, 29, 31, 31
===-------------------------------------------------------------------------===

Fold add and sub with constant into non-extern, non-weak addresses so this:

static int a;
void bar(int b) { a = b; }
void foo(unsigned char *c) {
  *c = a;
}

folds the +3 into the relocation and loads the byte directly:

        lbz r2, lo16(_a+3)(r2)

instead of first materializing _a and then loading from offset 3.
===-------------------------------------------------------------------------===

We generate really bad code for this:

int f(signed char *a, _Bool b, _Bool c) {
   signed char t = 0;
   if (b)  t = *a;
   if (c)  *a = t;
   return t;
}
===-------------------------------------------------------------------------===

int test(unsigned *P) { return *P >> 24; }

compiles to a lwz/srwi pair instead of a single lbz of the byte at offset 0
(the high byte on a big-endian target).
===-------------------------------------------------------------------------===

On the G5, logical CR operations are more expensive in their three
address form: ops that read/write the same register are half as expensive as
those that read from two registers that are different from their destination.

We should model this with two separate instructions. The isel should generate
the "two address" form of the instructions. When the register allocator
detects that it needs to insert a copy due to the two-addressness of the CR
logical op, it will invoke PPCInstrInfo::convertToThreeAddress. At this point
we can convert to the "three address" instruction, to save code space.

This only matters when we start generating cr logical ops.
===-------------------------------------------------------------------------===

We should compile these two functions to the same thing:

void f(int a, int b, int *P) {
  *P = (a-b)>=0?(a-b):(b-a);
}
void g(int a, int b, int *P) {
  *P = abs(a-b);
}

Further, they should compile to something better than the compare-and-branch
sequence we emit today (bgt cr0, LBB2_2 and friends); a branch-free abs
sequence would be much nicer.
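The branch-free idiom in C, for reference (iabs is an illustrative name; this
assumes an arithmetic right shift of a signed int, which PowerPC provides as
srawi):

int iabs(int x) {
  int m = x >> 31;      /* srawi: all ones if negative, else zero */
  return (x ^ m) - m;   /* conditional negate: xor, then subtract */
}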
This theoretically may help improve twolf slightly (used in dimbox.c:142?).
===-------------------------------------------------------------------------===

int foo(int N, int ***W, int **TK, int X) {
  int t, i;

  for (t = 0; t < N; ++t)
    for (i = 0; i < 4; ++i)
      W[t / X][i][t % X] = TK[i][t];

  return 5;
}

We generate relatively atrocious code for this loop compared to gcc.

We could also strength reduce the rem and the div:
http://www.lcs.mit.edu/pubs/pdf/MIT-LCS-TM-600.pdf
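A sketch of the strength reduction at the source level (foo_sr, q and r are
illustrative names; assumes X > 0): the div and rem become an increment, a
compare, and an occasional reset.

int foo_sr(int N, int ***W, int **TK, int X) {
  int t, i, q = 0, r = 0;           /* maintain q == t / X, r == t % X */

  for (t = 0; t < N; ++t) {
    for (i = 0; i < 4; ++i)
      W[q][i][r] = TK[i][t];
    if (++r == X) {                 /* remainder wrapped: carry into q */
      r = 0;
      ++q;
    }
  }
  return 5;
}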
===-------------------------------------------------------------------------===

float foo(float X) { return (int)(X); }

currently goes through memory, reloading the converted integer with an lwz
followed by an extsw. We could use a target dag combine to turn the lwz/extsw
into an lwa when the lwz has a single use. Since LWA is cracked anyway, this
would be a codesize win only.
===-------------------------------------------------------------------------===

We generate ugly code for this:

void func(unsigned int *ret, float dx, float dy, float dz, float dw) {
  unsigned int code = 0;
  if(dx < -dw) code |= 1;
  if(dx > dw)  code |= 2;
  if(dy < -dw) code |= 4;
  if(dy > dw)  code |= 8;
  if(dz < -dw) code |= 16;
  if(dz > dw)  code |= 32;
  *ret = code;
}
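A branch-free formulation of the same computation shows what good codegen
could boil this down to: six compares whose results are shifted into place and
or'd together (func2 is an illustrative rewrite, not what the compiler sees):

void func2(unsigned int *ret, float dx, float dy, float dz, float dw) {
  unsigned int code = 0;
  code |= (unsigned int)(dx < -dw) << 0;
  code |= (unsigned int)(dx >  dw) << 1;
  code |= (unsigned int)(dy < -dw) << 2;
  code |= (unsigned int)(dy >  dw) << 3;
  code |= (unsigned int)(dz < -dw) << 4;
  code |= (unsigned int)(dz >  dw) << 5;
  *ret = code;
}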
===-------------------------------------------------------------------------===

Complete the signed i32 to FP conversion code using 64-bit registers
transformation, good for PI. See PPCISelLowering.cpp, this comment:

// FIXME: disable this lowered code. This generates 64-bit register values,
// and we don't model the fact that the top part is clobbered by calls. We
// need to flag these together so that the value isn't live across a call.
//setOperationAction(ISD::SINT_TO_FP, MVT::i32, Custom);

Also, if the registers are spilled to the stack, we have to ensure that all
64 bits of them are saved/restored, otherwise we will miscompile the code. It
sounds like we need to get the 64-bit register classes going.
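For context, the 32-bit-register expansion this would replace is the classic
bias trick; a sketch (i32_to_f64 is an illustrative name):

#include <stdint.h>

double i32_to_f64(int32_t x) {
  union { double d; uint64_t u; } val, bias;
  bias.u = 0x4330000080000000ULL;          /* 2^52 + 2^31               */
  val.u  = 0x4330000000000000ULL           /* 2^52 ...                  */
         | ((uint32_t)x ^ 0x80000000u);    /* ... plus (x + 2^31)       */
  return val.d - bias.d;                   /* exact for every int32_t   */
}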
===-------------------------------------------------------------------------===

%struct.B = type { i8, [3 x i8] }

define void @bar(%struct.B* %b) {
entry:
        %tmp = bitcast %struct.B* %b to i32*            ; <uint*> [#uses=1]
        %tmp = load i32* %tmp                           ; <uint> [#uses=1]
        %tmp3 = bitcast %struct.B* %b to i32*           ; <uint*> [#uses=1]
        %tmp4 = load i32* %tmp3                         ; <uint> [#uses=1]
        %tmp8 = bitcast %struct.B* %b to i32*           ; <uint*> [#uses=2]
        %tmp9 = load i32* %tmp8                         ; <uint> [#uses=1]
        %tmp4.mask17 = shl i32 %tmp4, i8 1              ; <uint> [#uses=1]
        %tmp1415 = and i32 %tmp4.mask17, 2147483648     ; <uint> [#uses=1]
        %tmp.masked = and i32 %tmp, 2147483648          ; <uint> [#uses=1]
        %tmp11 = or i32 %tmp1415, %tmp.masked           ; <uint> [#uses=1]
        %tmp12 = and i32 %tmp9, 2147483647              ; <uint> [#uses=1]
        %tmp13 = or i32 %tmp12, %tmp11                  ; <uint> [#uses=1]
        store i32 %tmp13, i32* %tmp8
        ret void
}

is compiled to code containing:

        rlwimi r2, r4, 0, 0, 0

We could collapse a bunch of those ORs and ANDs and generate the following
equivalent code:

        rlwinm r4, r2, 1, 0, 0
===-------------------------------------------------------------------------===

unsigned test6(unsigned x) {
  return ((x & 0x00FF0000) >> 16) | ((x & 0x000000FF) << 16);
}

We currently produce a full rotate (rlwinm r3, r3, 16, 0, 31) followed by an
explicit mask to select the two bytes, while GCC folds the byte masks into the
rotates themselves (e.g. rlwinm r3,r3,16,24,31) and just ORs the results.
===-------------------------------------------------------------------------===

Consider a function like this:

float foo(float X) { return X + 1234.4123f; }

The FP constant ends up in the constant pool, so we need to get the LR register.
This ends up producing code like this:

        addis r2, r2, ha16(.CPI_foo_0-"L00000$pb")
        lfs f0, lo16(.CPI_foo_0-"L00000$pb")(r2)

This is functional, but there is no reason to spill the LR register all the way
to the stack (the mflr/store pair in the prologue and the matching reload):
spilling it to a GPR is quite enough.

Implementing this will require some codegen improvements. Nate writes:

"So basically what we need to support the "no stack frame save and restore" is a
generalization of the LR optimization to "callee-save regs".

Currently, we have LR marked as a callee-save reg. The register allocator sees
that it's callee save, and spills it directly to the stack.

Ideally, something like this would happen:

LR would be in a separate register class from the GPRs. The class of LR would be
marked "unspillable". When the register allocator came across an unspillable
reg, it would ask "what is the best class to copy this into that I *can* spill?"
If it gets a class back, which it will in this case (the gprs), it grabs a free
register of that class. If it is then later necessary to spill that reg, so be
it."
===-------------------------------------------------------------------------===

We compile "return X ? 524288 : 0;" with a compare and a branch:

        beq cr0, LBB1_2 ;entry

instead of a branch-free sequence; since 524288 is 1 << 19, the result is just
(X != 0) << 19. This sort of thing occurs a lot due to globalopt.
===-------------------------------------------------------------------------===

We currently compile 32-bit bswap:

declare i32 @llvm.bswap.i32(i32 %A)
define i32 @test(i32 %A) {
        %B = call i32 @llvm.bswap.i32(i32 %A)
        ret i32 %B
}

to a longer-than-necessary rlwinm/rlwimi sequence that includes:

        rlwinm r2, r3, 24, 16, 23
        rlwimi r2, r3, 8, 24, 31
        rlwimi r4, r3, 8, 8, 15
        rlwimi r4, r2, 0, 16, 31

it would be more efficient to produce a sequence built around:

        rlwinm r3,r3,8,0xffffffff
        rlwimi r3,r0,24,16,23
===-------------------------------------------------------------------------===

test/CodeGen/PowerPC/2007-03-24-cntlzd.ll compiles to:

__ZNK4llvm5APInt17countLeadingZerosEv:
        or r2, r2, r2           <<-- silly.

The dead or is a 'truncate' from 64 to 32 bits.
===-------------------------------------------------------------------------===

We generate horrible ppc code for a simple counted loop over a large array:
the generated loop carries an extra induction variable just for the exit-value
compare:

        addi r5, r5, 1                 ;; Extra IV for the exit value compare.

and, because the trip count is a large immediate, the exit test takes a
xoris/cmplwi pair:

        xoris r6, r5, 30               ;; This is due to a large immediate.
        cmplwi cr0, r6, 33920
//===---------------------------------------------------------------------===//

We compile:

inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
{ return std::make_pair(a + b, a + b < a); }
bool no_overflow(unsigned a, unsigned b)
{ return !full_add(a, b).second; }

to code that extracts the carry flag via mfcr and a mask:

        rlwinm r2, r2, 29, 31, 31

rather than computing it directly with carry arithmetic.
//===---------------------------------------------------------------------===//

We compile some FP comparisons into an mfcr with two rlwinms and an or. For
example:

int test(double x, double y) { return islessequal(x, y);}
int test2(double x, double y) { return islessgreater(x, y);}
int test3(double x, double y) { return !islessequal(x, y);}

Compiles into (all three are similar, but the bits differ):

        rlwinm r3, r2, 29, 31, 31
        rlwinm r2, r2, 31, 31, 31

GCC compiles this into a sequence which is more efficient and can use mfocr.
See PR642 for some more context.
//===---------------------------------------------------------------------===//

void foo(float *data, float d) {
   long i;
   for (i = 0; i < 8000; i++)
      data[i] = d;
}

void foo2(float *data, float d) {
   long i;
   for (i = 0; i < 8000; i++) {
      data[i] = d;
   }
}

Both compile to loops whose exit test compares the byte offset against 32000:

        cmplwi cr0, r4, 32000

but one of the two picks up a stray 'mr'. The 'mr' could be eliminated by
folding the add into the cmp better.

//===---------------------------------------------------------------------===//