1 //===- README.txt - Notes for improving PowerPC-specific code gen ---------===//
4 * lmw/stmw pass a la arm load store optimizer for prolog/epilog
6 ===-------------------------------------------------------------------------===
10 long f2 (long x) { return 0xfffffff000000000UL; }
11 long f3 (long x) { return 0x1ffffffffUL; }
38 ===-------------------------------------------------------------------------===
unsigned add32carry(unsigned sum, unsigned x) {
  unsigned z = sum + x;
  if (sum + x < x)
    z++;
  return z;
}
Should compile to something like:

addc r3, r3, r4
addze r3, r3

instead of:

add r3, r4, r3
cmplw cr7, r3, r4
mfcr r4
rlwinm r4, r4, 29, 31, 31
add r3, r3, r4
blr
64 ===-------------------------------------------------------------------------===
Support 'update' load/store instructions. These are cracked on the G5, but are
still a codesize win.
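
For illustration, a sketch (mine; copy_words is just a made-up example) of the
kind of pointer-walking loop where update-form loads/stores pay off:

void copy_words(long *dst, const long *src, int n) {
  int i;
  /* Each iteration advances both pointers; update-form loads/stores
     (lwzu/stwu and friends) fold that pointer bump into the access itself. */
  for (i = 0; i < n; ++i)
    *dst++ = *src++;
}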
69 With preinc enabled, this:
long *%test4(long *%X, long *%dest) {
        %Y = getelementptr long* %X, int 4
        %A = load long* %Y
        store long %A, long* %dest
        ret long* %Y
}
89 with -sched=list-burr, I get:
98 ===-------------------------------------------------------------------------===
100 We compile the hottest inner loop of viterbi to:
111 bne cr0, LBB1_83 ;bb420.i
113 The CBE manages to produce:
124 This could be much better (bdnz instead of bdz) but it still beats us. If we
125 produced this with bdnz, the loop would be a single dispatch group.
127 ===-------------------------------------------------------------------------===
129 Lump the constant pool for each function into ONE pic object, and reference
130 pieces of it as offsets from the start. For functions like this (contrived
131 to have lots of constants obviously):
133 double X(double Y) { return (Y*1.23 + 4.512)*2.34 + 14.38; }
We generate:

lis r2, ha16(.CPI_X_0)
139 lfd f0, lo16(.CPI_X_0)(r2)
140 lis r2, ha16(.CPI_X_1)
141 lfd f2, lo16(.CPI_X_1)(r2)
143 lis r2, ha16(.CPI_X_2)
144 lfd f1, lo16(.CPI_X_2)(r2)
145 lis r2, ha16(.CPI_X_3)
146 lfd f2, lo16(.CPI_X_3)(r2)
150 It would be better to materialize .CPI_X into a register, then use immediates
off of the register to avoid the lis's. This is even more important in PIC
mode.
154 Note that this (and the static variable version) is discussed here for GCC:
155 http://gcc.gnu.org/ml/gcc-patches/2006-02/msg00133.html
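
A rough C-level analogy of the intended layout (a sketch; the CPI_X array name
and the constant ordering are assumptions): materialize one base address and
reference every constant as an offset from it.

static const double CPI_X[4] = { 1.23, 4.512, 2.34, 14.38 };

double X2(double Y) {
  const double *base = CPI_X;   /* one address computation instead of four lis/lo16 pairs */
  return (Y * base[0] + base[1]) * base[2] + base[3];
}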
157 Here's another example (the sgn function):
158 double testf(double a) {
  return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
}
162 it produces a BB like this:
164 lis r2, ha16(LCPI1_0)
165 lfs f0, lo16(LCPI1_0)(r2)
166 lis r2, ha16(LCPI1_1)
167 lis r3, ha16(LCPI1_2)
168 lfs f2, lo16(LCPI1_2)(r3)
169 lfs f3, lo16(LCPI1_1)(r2)
174 ===-------------------------------------------------------------------------===
176 PIC Code Gen IPO optimization:
178 Squish small scalar globals together into a single global struct, allowing the
179 address of the struct to be CSE'd, avoiding PIC accesses (also reduces the size
180 of the GOT on targets with one).
182 Note that this is discussed here for GCC:
183 http://gcc.gnu.org/ml/gcc-patches/2006-02/msg00133.html
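
A source-level sketch of the idea (illustrative names; it only applies to
globals we are allowed to internalize):

/* Before: three globals, each reached through its own GOT/PIC entry. */
int a, b, c;

/* After: one aggregate; its base address is computed once and CSE'd, and
   each former global becomes a fixed offset from that base. */
static struct { int a, b, c; } G;
/* a reference to 'a' becomes G.a, 'b' becomes G.b, and so on */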
185 ===-------------------------------------------------------------------------===
187 No loads or stores of the constants should be needed:
189 struct foo { double X, Y; };
190 void xxx(struct foo F);
191 void bar() { struct foo R = { 1.0, 2.0 }; xxx(R); }
193 ===-------------------------------------------------------------------------===
197 We still generate calls to foo$stub, and stubs, on Darwin. This is not
198 necessary when building with the Leopard (10.5) or later linker, as stubs are
199 generated by ld when necessary. Parameterizing this based on the deployment
target (-mmacosx-version-min) is probably enough. x86-32 does this right, see
its logic.
203 ===-------------------------------------------------------------------------===
205 Darwin Stub LICM optimization:
Loops like this:

  for (...)  bar();

have to go through an indirect stub if bar is external or linkonce. It would
be better to compile it as:

  void *P = &bar;
  for (...)  P();

which only computes the address of bar once (instead of each time through the
218 stub). This is Darwin specific and would have to be done in the code generator.
219 Probably not a win on x86.
221 ===-------------------------------------------------------------------------===
223 Simple IPO for argument passing, change:
224 void foo(int X, double Y, int Z) -> void foo(int X, int Z, double Y)
226 the Darwin ABI specifies that any integer arguments in the first 32 bytes worth
227 of arguments get assigned to r3 through r10. That is, if you have a function
228 foo(int, double, int) you get r3, f1, r6, since the 64 bit double ate up the
229 argument bytes for r4 and r5. The trick then would be to shuffle the argument
230 order for functions we can internalize so that the maximum number of
231 integers/pointers get passed in regs before you see any of the fp arguments.
233 Instead of implementing this, it would actually probably be easier to just
234 implement a PPC fastcc, where we could do whatever we wanted to the CC,
235 including having this work sanely.
237 ===-------------------------------------------------------------------------===
239 Fix Darwin FP-In-Integer Registers ABI
241 Darwin passes doubles in structures in integer registers, which is very very
242 bad. Add something like a BITCAST to LLVM, then do an i-p transformation that
243 percolates these things out of functions.
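
A minimal example of the pattern (my reading of the problem): a by-value
struct whose only member is a double.

struct S { double d; };

double get(struct S s) {
  /* On Darwin the aggregate arrives in integer registers, so the double has
     to be moved over to an FPR, typically via a stack round trip. */
  return s.d;
}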
245 Check out how horrible this is:
246 http://gcc.gnu.org/ml/gcc/2005-10/msg01036.html
248 This is an extension of "interprocedural CC unmunging" that can't be done with
251 ===-------------------------------------------------------------------------===
Compile this:

int foo(int a) {
  int b = (a < 8);
  if (b) {
    return b * 3;     // ignore the fact that this is always 3.
  } else {
    return 2;
  }
}
264 into something not this:
269 rlwinm r2, r2, 29, 31, 31
271 bgt cr0, LBB1_2 ; UnifiedReturnBlock
273 rlwinm r2, r2, 0, 31, 31
276 LBB1_2: ; UnifiedReturnBlock
280 In particular, the two compares (marked 1) could be shared by reversing one.
281 This could be done in the dag combiner, by swapping a BR_CC when a SETCC of the
282 same operands (but backwards) exists. In this case, this wouldn't save us
283 anything though, because the compares still wouldn't be shared.
285 ===-------------------------------------------------------------------------===
287 We should custom expand setcc instead of pretending that we have it. That
288 would allow us to expose the access of the crbit after the mfcr, allowing
289 that access to be trivially folded into other ops. A simple example:
291 int foo(int a, int b) { return (a < b) << 4; }
298 rlwinm r2, r2, 29, 31, 31
302 ===-------------------------------------------------------------------------===
304 Fold add and sub with constant into non-extern, non-weak addresses so this:
static int a;
void bar(int b) { a = b; }
void foo(unsigned char *c) {
  *c = a;
}
325 lbz r2, lo16(_a+3)(r2)
329 ===-------------------------------------------------------------------------===
331 We generate really bad code for this:
int f(signed char *a, _Bool b, _Bool c) {
  signed char t = 0;
  if (b)  t = *a;
  if (c)  *a = t;
}
339 ===-------------------------------------------------------------------------===
342 int test(unsigned *P) { return *P >> 24; }
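
On big-endian PPC the byte holding bits 31..24 is the first byte of the word,
so this is just a single byte load; a C sketch of the equivalent access (mine,
assuming a big-endian target):

int test_byte(unsigned *P) {
  return *(unsigned char *)P;   /* same value as *P >> 24 on big-endian */
}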
357 ===-------------------------------------------------------------------------===
359 On the G5, logical CR operations are more expensive in their three
360 address form: ops that read/write the same register are half as expensive as
361 those that read from two registers that are different from their destination.
363 We should model this with two separate instructions. The isel should generate
364 the "two address" form of the instructions. When the register allocator
detects that it needs to insert a copy due to the two-addressness of the CR
366 logical op, it will invoke PPCInstrInfo::convertToThreeAddress. At this point
367 we can convert to the "three address" instruction, to save code space.
369 This only matters when we start generating cr logical ops.
371 ===-------------------------------------------------------------------------===
373 We should compile these two functions to the same thing:
#include <stdlib.h>
void f(int a, int b, int *P) {
  *P = (a-b)>=0?(a-b):(b-a);
}
void g(int a, int b, int *P) {
  *P = abs(a-b);
}
383 Further, they should compile to something better than:
389 bgt cr0, LBB2_2 ; entry
406 ... which is much nicer.
408 This theoretically may help improve twolf slightly (used in dimbox.c:142?).
410 ===-------------------------------------------------------------------------===
define i32 @clamp0g(i32 %a) {
entry:
  %cmp = icmp slt i32 %a, 0
  %sel = select i1 %cmp, i32 0, i32 %a
  ret i32 %sel
}
Is compiled to this with the PowerPC (32-bit) backend:
432 This could be reduced to the much simpler:
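
For instance (a sketch in C rather than assembly, assuming arithmetic right
shift of signed values), a branch-free form that corresponds to a
two-instruction srawi/andc sequence:

int clamp0g_c(int a) {
  int m = a >> 31;   /* all ones when a is negative, zero otherwise */
  return a & ~m;     /* 0 when a < 0, a unchanged otherwise */
}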
439 ===-------------------------------------------------------------------------===
int foo(int N, int ***W, int **TK, int X) {
  int t, i;

  for (t = 0; t < N; ++t)
    for (i = 0; i < 4; ++i)
      W[t / X][i][t % X] = TK[i][t];

  return t;
}
451 We generate relatively atrocious code for this loop compared to gcc.
453 We could also strength reduce the rem and the div:
454 http://www.lcs.mit.edu/pubs/pdf/MIT-LCS-TM-600.pdf
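
A sketch of that strength reduction (it assumes X > 0 and nonnegative t;
foo_sr is an illustrative name): carry the quotient and remainder of t/X
across iterations instead of recomputing them.

int foo_sr(int N, int ***W, int **TK, int X) {
  int t, i, q = 0, r = 0;            /* invariant: q == t / X and r == t % X */

  for (t = 0; t < N; ++t) {
    for (i = 0; i < 4; ++i)
      W[q][i][r] = TK[i][t];
    if (++r == X) {                  /* step the (q, r) pair without div/rem */
      r = 0;
      ++q;
    }
  }
  return t;
}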
456 ===-------------------------------------------------------------------------===
458 float foo(float X) { return (int)(X); }
473 We could use a target dag combine to turn the lwz/extsw into an lwa when the
lwz has a single use. Since LWA is cracked anyway, this would be a codesize
win only.
477 ===-------------------------------------------------------------------------===
479 We generate ugly code for this:
void func(unsigned int *ret, float dx, float dy, float dz, float dw) {
  unsigned int code = 0;
  if(dx < -dw) code |= 1;
  if(dx > dw) code |= 2;
  if(dy < -dw) code |= 4;
  if(dy > dw) code |= 8;
  if(dz < -dw) code |= 16;
  if(dz > dw) code |= 32;
  *ret = code;
}
492 ===-------------------------------------------------------------------------===
494 %struct.B = type { i8, [3 x i8] }
define void @bar(%struct.B* %b) {
entry:
498 %tmp = bitcast %struct.B* %b to i32* ; <uint*> [#uses=1]
499 %tmp = load i32* %tmp ; <uint> [#uses=1]
500 %tmp3 = bitcast %struct.B* %b to i32* ; <uint*> [#uses=1]
501 %tmp4 = load i32* %tmp3 ; <uint> [#uses=1]
502 %tmp8 = bitcast %struct.B* %b to i32* ; <uint*> [#uses=2]
503 %tmp9 = load i32* %tmp8 ; <uint> [#uses=1]
504 %tmp4.mask17 = shl i32 %tmp4, i8 1 ; <uint> [#uses=1]
505 %tmp1415 = and i32 %tmp4.mask17, 2147483648 ; <uint> [#uses=1]
506 %tmp.masked = and i32 %tmp, 2147483648 ; <uint> [#uses=1]
507 %tmp11 = or i32 %tmp1415, %tmp.masked ; <uint> [#uses=1]
508 %tmp12 = and i32 %tmp9, 2147483647 ; <uint> [#uses=1]
509 %tmp13 = or i32 %tmp12, %tmp11 ; <uint> [#uses=1]
store i32 %tmp13, i32* %tmp8
ret void
}
520 rlwimi r2, r4, 0, 0, 0
We could collapse a bunch of those ORs and ANDs and generate the following
equivalent code:
529 rlwinm r4, r2, 1, 0, 0
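
Read as C (my paraphrase of the IR above, treating the struct as the single
32-bit word the bitcasts produce), the function reduces to one load, one OR of
a shifted mask, and one store:

void bar_c(unsigned *b) {
  unsigned w = *b;
  *b = w | ((w << 1) & 0x80000000u);   /* OR bit 30 of the word into the top bit */
}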
534 ===-------------------------------------------------------------------------===
536 Consider a function like this:
538 float foo(float X) { return X + 1234.4123f; }
540 The FP constant ends up in the constant pool, so we need to get the LR register.
541 This ends up producing code like this:
550 addis r2, r2, ha16(.CPI_foo_0-"L00000$pb")
551 lfs f0, lo16(.CPI_foo_0-"L00000$pb")(r2)
557 This is functional, but there is no reason to spill the LR register all the way
558 to the stack (the two marked instrs): spilling it to a GPR is quite enough.
560 Implementing this will require some codegen improvements. Nate writes:
562 "So basically what we need to support the "no stack frame save and restore" is a
563 generalization of the LR optimization to "callee-save regs".
565 Currently, we have LR marked as a callee-save reg. The register allocator sees
566 that it's callee save, and spills it directly to the stack.
568 Ideally, something like this would happen:
570 LR would be in a separate register class from the GPRs. The class of LR would be
571 marked "unspillable". When the register allocator came across an unspillable
572 reg, it would ask "what is the best class to copy this into that I *can* spill"
573 If it gets a class back, which it will in this case (the gprs), it grabs a free
register of that class. If it is then later necessary to spill that reg, so be
it.
577 ===-------------------------------------------------------------------------===
We compile:

int test(int X) {
  return X ? 524288 : 0;
}
589 beq cr0, LBB1_2 ;entry
602 This sort of thing occurs a lot due to globalopt.
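
Since 524288 is 1 << 19, the select is just a shifted comparison result; a
branch-free formulation in C (a sketch):

int test_shift(int X) {
  return (X != 0) << 19;   /* 524288 when X is nonzero, 0 otherwise */
}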
604 ===-------------------------------------------------------------------------===
define i32 @bar(i32 %x) nounwind readnone ssp {
entry:
  %0 = icmp eq i32 %x, 0          ; <i1> [#uses=1]
  %neg = sext i1 %0 to i32        ; <i32> [#uses=1]
  ret i32 %neg
}
623 it would be better to produce:
630 ===-------------------------------------------------------------------------===
632 test/CodeGen/PowerPC/2007-03-24-cntlzd.ll compiles to:
634 __ZNK4llvm5APInt17countLeadingZerosEv:
637 or r2, r2, r2 <<-- silly.
641 The dead or is a 'truncate' from 64- to 32-bits.
643 ===-------------------------------------------------------------------------===
We generate horrible ppc code for this:

#define N  2000000
double   a[N],c[N];
void simpleloop() {
  int j;
  for (j=0; j<N; j++)
    c[j] = a[j];
}

LBB1_1: ;bb
lfdx f0, r1, r2
657 addi r5, r5, 1 ;; Extra IV for the exit value compare.
661 xoris r6, r5, 30 ;; This is due to a large immediate.
662 cmplwi cr0, r6, 33920
665 //===---------------------------------------------------------------------===//
669 inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
670 { return std::make_pair(a + b, a + b < a); }
671 bool no_overflow(unsigned a, unsigned b)
672 { return !full_add(a, b).second; }
689 rlwinm r2, r2, 29, 31, 31
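
The desired lowering is essentially a carry-out check of the add. For
reference, the same predicate written with a GCC/Clang builtin (an aside;
no_overflow2 is a made-up name):

#include <stdbool.h>

bool no_overflow2(unsigned a, unsigned b) {
  unsigned sum;
  return !__builtin_add_overflow(a, b, &sum);   /* true iff the add did not wrap */
}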
693 //===---------------------------------------------------------------------===//
We compile some FP comparisons into an mfcr with two rlwinms and an or. For
example:
698 int test(double x, double y) { return islessequal(x, y);}
699 int test2(double x, double y) { return islessgreater(x, y);}
700 int test3(double x, double y) { return !islessequal(x, y);}
702 Compiles into (all three are similar, but the bits differ):
707 rlwinm r3, r2, 29, 31, 31
708 rlwinm r2, r2, 31, 31, 31
712 GCC compiles this into:
721 which is more efficient and can use mfocr. See PR642 for some more context.
723 //===---------------------------------------------------------------------===//
void foo(float *data, float d) {
  long i;
  for (i = 0; i < 8000; i++)
    data[i] = d;
}

void foo2(float *data, float d) {
  long i;
  for (i = 0; i < 8000; i++) {
    data[i] = d;
  }
}
746 cmplwi cr0, r4, 32000
755 cmplwi cr0, r4, 32000
The 'mr' could be eliminated by folding the add into the cmp better.
762 //===---------------------------------------------------------------------===//
763 Codegen for the following (low-probability) case deteriorated considerably
764 when the correctness fixes for unordered comparisons went in (PR 642, 58871).
765 It should be possible to recover the code quality described in the comments.
767 ; RUN: llvm-as < %s | llc -march=ppc32 | grep or | count 3
768 ; This should produce one 'or' or 'cror' instruction per function.
770 ; RUN: llvm-as < %s | llc -march=ppc32 | grep mfcr | count 3
define i32 @test(double %x, double %y) nounwind {
entry:
  %tmp3 = fcmp ole double %x, %y          ; <i1> [#uses=1]
  %tmp345 = zext i1 %tmp3 to i32          ; <i32> [#uses=1]
  ret i32 %tmp345
}
define i32 @test2(double %x, double %y) nounwind {
entry:
  %tmp3 = fcmp one double %x, %y          ; <i1> [#uses=1]
  %tmp345 = zext i1 %tmp3 to i32          ; <i32> [#uses=1]
  ret i32 %tmp345
}
define i32 @test3(double %x, double %y) nounwind {
entry:
  %tmp3 = fcmp ugt double %x, %y          ; <i1> [#uses=1]
  %tmp34 = zext i1 %tmp3 to i32           ; <i32> [#uses=1]
  ret i32 %tmp34
}
793 //===----------------------------------------------------------------------===//
794 ; RUN: llvm-as < %s | llc -march=ppc32 | not grep fneg
796 ; This could generate FSEL with appropriate flags (FSEL is not IEEE-safe, and
797 ; should not be generated except with -enable-finite-only-fp-math or the like).
798 ; With the correctness fixes for PR642 (58871) LowerSELECT_CC would need to
799 ; recognize a more elaborate tree than a simple SETxx.
801 define double @test_FNEG_sel(double %A, double %B, double %C) {
802 %D = fsub double -0.000000e+00, %A ; <double> [#uses=1]
803 %Cond = fcmp ugt double %D, -0.000000e+00 ; <i1> [#uses=1]
%E = select i1 %Cond, double %B, double %C ; <double> [#uses=1]
ret double %E
}
808 //===----------------------------------------------------------------------===//
809 The save/restore sequence for CR in prolog/epilog is terrible:
810 - Each CR subreg is saved individually, rather than doing one save as a unit.
811 - On Darwin, the save is done after the decrement of SP, which means the offset
812 from SP of the save slot can be too big for a store instruction, which means we
813 need an additional register (currently hacked in 96015+96020; the solution there
814 is correct, but poor).
815 - On SVR4 the same thing can happen, and I don't think saving before the SP
816 decrement is safe on that target, as there is no red zone. This is currently
817 broken AFAIK, although it's not a target I can exercise.
818 The following demonstrates the problem:
extern void bar(char *p);
void foo() {
  char x[100000];
  bar(x);
  __asm__("" ::: "cr2");
}