//===- README.txt - Notes for improving PowerPC-specific code gen ---------===//

* lmw/stmw pass a la arm load store optimizer for prolog/epilog

===-------------------------------------------------------------------------===

long f2 (long x) { return 0xfffffff000000000UL; }
long f3 (long x) { return 0x1ffffffffUL; }

===-------------------------------------------------------------------------===

unsigned add32carry(unsigned sum, unsigned x) {

Should compile to something like:

        rlwinm r4, r4, 29, 31, 31

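For reference, a branch-free way to write the same carry add (my restatement,
not from the original note); an addc/addze pair computes exactly this:

  unsigned add32carry_alt(unsigned sum, unsigned x) {
    unsigned z = sum + x;   /* low 32 bits of the sum */
    return z + (z < x);     /* fold the carry back in */
  }
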
===-------------------------------------------------------------------------===

Support 'update' load/store instructions. These are cracked on the G5, but are
still a codesize win.

With preinc enabled, this:

long *%test4(long *%X, long *%dest) {
  %Y = getelementptr long* %X, int 4
  %A = load long* %Y
  store long %A, long* %dest
  ret long* %Y
}

with -sched=list-burr, I get:

===-------------------------------------------------------------------------===

We compile the hottest inner loop of viterbi to:

        bne cr0, LBB1_83 ;bb420.i

The CBE manages to produce:

This could be much better (bdnz instead of bdz) but it still beats us. If we
produced this with bdnz, the loop would be a single dispatch group.

===-------------------------------------------------------------------------===

Lump the constant pool for each function into ONE pic object, and reference
pieces of it as offsets from the start. For functions like this (contrived
to have lots of constants obviously):

double X(double Y) { return (Y*1.23 + 4.512)*2.34 + 14.38; }

        lis r2, ha16(.CPI_X_0)
        lfd f0, lo16(.CPI_X_0)(r2)
        lis r2, ha16(.CPI_X_1)
        lfd f2, lo16(.CPI_X_1)(r2)
        lis r2, ha16(.CPI_X_2)
        lfd f1, lo16(.CPI_X_2)(r2)
        lis r2, ha16(.CPI_X_3)
        lfd f2, lo16(.CPI_X_3)(r2)

It would be better to materialize .CPI_X into a register, then use immediates
off of the register to avoid the lis's. This is even more important in PIC
mode.

Note that this (and the static variable version) is discussed here for GCC:
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg00133.html

Here's another example (the sgn function):

double testf(double a) {
  return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
}

it produces a BB like this:

        lis r2, ha16(LCPI1_0)
        lfs f0, lo16(LCPI1_0)(r2)
        lis r2, ha16(LCPI1_1)
        lis r3, ha16(LCPI1_2)
        lfs f2, lo16(LCPI1_2)(r3)
        lfs f3, lo16(LCPI1_1)(r2)

===-------------------------------------------------------------------------===

PIC Code Gen IPO optimization:

Squish small scalar globals together into a single global struct, allowing the
address of the struct to be CSE'd, avoiding PIC accesses (also reduces the size
of the GOT on targets with one).

Note that this is discussed here for GCC:
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg00133.html

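A sketch of the transformation in C terms (illustrative names, not from the
note):

  /* Before: each global needs its own PIC/GOT address computation. */
  static int x, y, z;
  int sum(void) { return x + y + z; }

  /* After: one base address is materialized (and CSE'd); fields are reached
     with small constant offsets from it. */
  static struct { int x, y, z; } g;
  int sum2(void) { return g.x + g.y + g.z; }
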
===-------------------------------------------------------------------------===

Compile offsets from allocas:

  %X = alloca { int, int }
  %Y = getelementptr {int,int}* %X, int 0, uint 1

into a single add, not two:

--> important for C++.

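A C-level illustration of the pattern (my example, not from the note): the
address of a field of a stack object should be a single add from the stack
pointer, not an "alloca address" add followed by a separate "field offset" add:

  struct P { int a, b; };
  void use(int *);
  void f(void) {
    struct P x;
    use(&x.b);   /* should be SP + (offset of x + 4): one add, not two */
  }
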
===-------------------------------------------------------------------------===

No loads or stores of the constants should be needed:

struct foo { double X, Y; };
void xxx(struct foo F);
void bar() { struct foo R = { 1.0, 2.0 }; xxx(R); }

===-------------------------------------------------------------------------===

We still generate calls to foo$stub, and stubs, on Darwin. This is not
necessary when building with the Leopard (10.5) or later linker, as stubs are
generated by ld when necessary. Parameterizing this based on the deployment
target (-mmacosx-version-min) is probably enough. x86-32 does this right, see

===-------------------------------------------------------------------------===

Darwin Stub LICM optimization:

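The motivating pattern, omitted in this excerpt, is a call inside a loop;
roughly (a representative sketch):

  void bar(void);
  void foo(int n) {
    int i;
    for (i = 0; i < n; ++i)
      bar();   /* bar is external or linkonce, so the call goes via a stub */
  }
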
Have to go through an indirect stub if bar is external or linkonce. It would
be better to compile it as:

which only computes the address of bar once (instead of each time through the
stub). This is Darwin specific and would have to be done in the code generator.
Probably not a win on x86.

===-------------------------------------------------------------------------===

Simple IPO for argument passing, change:
  void foo(int X, double Y, int Z) -> void foo(int X, int Z, double Y)

The Darwin ABI specifies that any integer arguments in the first 32 bytes worth
of arguments get assigned to r3 through r10. That is, if you have a function
foo(int, double, int) you get r3, f1, r6, since the 64 bit double ate up the
argument bytes for r4 and r5. The trick then would be to shuffle the argument
order for functions we can internalize so that the maximum number of
integers/pointers get passed in regs before you see any of the fp arguments.

Instead of implementing this, it would actually probably be easier to just
implement a PPC fastcc, where we could do whatever we wanted to the CC,
including having this work sanely.

===-------------------------------------------------------------------------===

Fix Darwin FP-In-Integer Registers ABI

Darwin passes doubles in structures in integer registers, which is very very
bad. Add something like a BITCAST to LLVM, then do an interprocedural
transformation that percolates these things out of functions.

Check out how horrible this is:
http://gcc.gnu.org/ml/gcc/2005-10/msg01036.html

This is an extension of "interprocedural CC unmunging" that can't be done with
just fastcc.

===-------------------------------------------------------------------------===

  return b * 3; // ignore the fact that this is always 3.

into something not this:

        rlwinm r2, r2, 29, 31, 31
        bgt cr0, LBB1_2 ; UnifiedReturnBlock
        rlwinm r2, r2, 0, 31, 31
LBB1_2: ; UnifiedReturnBlock

In particular, the two compares (marked 1) could be shared by reversing one.
This could be done in the dag combiner, by swapping a BR_CC when a SETCC of the
same operands (but backwards) exists. In this case, this wouldn't save us
anything though, because the compares still wouldn't be shared.

===-------------------------------------------------------------------------===

We should custom expand setcc instead of pretending that we have it. That
would allow us to expose the access of the crbit after the mfcr, allowing
that access to be trivially folded into other ops. A simple example:

int foo(int a, int b) { return (a < b) << 4; }

        rlwinm r2, r2, 29, 31, 31

===-------------------------------------------------------------------------===

Fold add and sub with constant into non-extern, non-weak addresses so this:

static int a;
void bar(int b) { a = b; }
void foo(unsigned char *c) {
  *c = a;
}

        lbz r2, lo16(_a+3)(r2)

===-------------------------------------------------------------------------===

We generate really bad code for this:

int f(signed char *a, _Bool b, _Bool c) {

===-------------------------------------------------------------------------===

int test(unsigned *P) { return *P >> 24; }

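On a big-endian target this is just a load of the first byte; an equivalent
formulation (my illustration) that makes the single-lbz codegen obvious:

  int test_byte(unsigned *P) {
    return ((unsigned char *)P)[0];   /* same value as *P >> 24 on big-endian */
  }
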
===-------------------------------------------------------------------------===

On the G5, logical CR operations are more expensive in their three
address form: ops that read/write the same register are half as expensive as
those that read from two registers that are different from their destination.

We should model this with two separate instructions. The isel should generate
the "two address" form of the instructions. When the register allocator
detects that it needs to insert a copy due to the two-addressness of the CR
logical op, it will invoke PPCInstrInfo::convertToThreeAddress. At this point
we can convert to the "three address" instruction, to save code space.

This only matters when we start generating cr logical ops.

===-------------------------------------------------------------------------===

We should compile these two functions to the same thing:

void f(int a, int b, int *P) {
  *P = (a-b)>=0?(a-b):(b-a);
}
void g(int a, int b, int *P) {
  *P = abs(a-b);
}

Further, they should compile to something better than:

        bgt cr0, LBB2_2 ; entry

... which is much nicer.

This theoretically may help improve twolf slightly (used in dimbox.c:142?).

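For reference, a branch-free formulation of the same computation (a sketch;
assumes 32-bit int and an arithmetic right shift of negative values):

  void h(int a, int b, int *P) {
    int d = a - b;
    int m = d >> 31;     /* 0 if d >= 0, -1 if d < 0 */
    *P = (d + m) ^ m;    /* abs(d) without a branch */
  }
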
===-------------------------------------------------------------------------===

define i32 @clamp0g(i32 %a) {
  %cmp = icmp slt i32 %a, 0
  %sel = select i1 %cmp, i32 0, i32 %a
  ret i32 %sel
}

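In C terms the function is just a clamp at zero (my restatement of the IR):

  int clamp0g(int a) { return a < 0 ? 0 : a; }
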
This compiles to the following with the PowerPC (32-bit) backend:

This could be reduced to the much simpler:

===-------------------------------------------------------------------------===

int foo(int N, int ***W, int **TK, int X) {
  for (t = 0; t < N; ++t)
    for (i = 0; i < 4; ++i)
      W[t / X][i][t % X] = TK[i][t];

We generate relatively atrocious code for this loop compared to gcc.

We could also strength reduce the rem and the div:
http://www.lcs.mit.edu/pubs/pdf/MIT-LCS-TM-600.pdf

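A sketch (mine, not from the note) of strength-reducing the div and rem by
carrying a running quotient and remainder of t / X and t % X through the loop
(assumes X > 0):

  void foo_sr(int N, int ***W, int **TK, int X) {
    int t, i, q = 0, r = 0;        /* q == t / X, r == t % X */
    for (t = 0; t < N; ++t) {
      for (i = 0; i < 4; ++i)
        W[q][i][r] = TK[i][t];
      if (++r == X) {              /* step the div/rem pair incrementally */
        r = 0;
        ++q;
      }
    }
  }
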
===-------------------------------------------------------------------------===

float foo(float X) { return (int)(X); }

We could use a target dag combine to turn the lwz/extsw into an lwa when the
lwz has a single use. Since LWA is cracked anyway, this would be a codesize
win only.

===-------------------------------------------------------------------------===

We generate ugly code for this:

void func(unsigned int *ret, float dx, float dy, float dz, float dw) {
  unsigned int code = 0;
  if(dx < -dw) code |= 1;
  if(dx > dw) code |= 2;
  if(dy < -dw) code |= 4;
  if(dy > dw) code |= 8;
  if(dz < -dw) code |= 16;
  if(dz > dw) code |= 32;
  *ret = code;
}

===-------------------------------------------------------------------------===

%struct.B = type { i8, [3 x i8] }

define void @bar(%struct.B* %b) {
  %tmp = bitcast %struct.B* %b to i32* ; <uint*> [#uses=1]
  %tmp = load i32* %tmp ; <uint> [#uses=1]
  %tmp3 = bitcast %struct.B* %b to i32* ; <uint*> [#uses=1]
  %tmp4 = load i32* %tmp3 ; <uint> [#uses=1]
  %tmp8 = bitcast %struct.B* %b to i32* ; <uint*> [#uses=2]
  %tmp9 = load i32* %tmp8 ; <uint> [#uses=1]
  %tmp4.mask17 = shl i32 %tmp4, i8 1 ; <uint> [#uses=1]
  %tmp1415 = and i32 %tmp4.mask17, 2147483648 ; <uint> [#uses=1]
  %tmp.masked = and i32 %tmp, 2147483648 ; <uint> [#uses=1]
  %tmp11 = or i32 %tmp1415, %tmp.masked ; <uint> [#uses=1]
  %tmp12 = and i32 %tmp9, 2147483647 ; <uint> [#uses=1]
  %tmp13 = or i32 %tmp12, %tmp11 ; <uint> [#uses=1]
  store i32 %tmp13, i32* %tmp8
  ret void
}

        rlwimi r2, r4, 0, 0, 0

We could collapse a bunch of those ORs and ANDs and generate the following

        rlwinm r4, r2, 1, 0, 0

===-------------------------------------------------------------------------===

unsigned test6(unsigned x) {
  return ((x & 0x00FF0000) >> 16) | ((x & 0x000000FF) << 16);
}

        rlwinm r3, r3, 16, 0, 31

        rlwinm r3,r3,16,24,31

===-------------------------------------------------------------------------===

Consider a function like this:

float foo(float X) { return X + 1234.4123f; }

The FP constant ends up in the constant pool, so we need to get the LR register.
This ends up producing code like this:

        addis r2, r2, ha16(.CPI_foo_0-"L00000$pb")
        lfs f0, lo16(.CPI_foo_0-"L00000$pb")(r2)

This is functional, but there is no reason to spill the LR register all the way
to the stack (the two marked instrs): spilling it to a GPR is quite enough.

Implementing this will require some codegen improvements. Nate writes:

"So basically what we need to support the "no stack frame save and restore" is a
generalization of the LR optimization to "callee-save regs".

Currently, we have LR marked as a callee-save reg. The register allocator sees
that it's callee save, and spills it directly to the stack.

Ideally, something like this would happen:

LR would be in a separate register class from the GPRs. The class of LR would be
marked "unspillable". When the register allocator came across an unspillable
reg, it would ask "what is the best class to copy this into that I *can* spill"
If it gets a class back, which it will in this case (the gprs), it grabs a free
register of that class. If it is then later necessary to spill that reg, so be
it.

===-------------------------------------------------------------------------===

  return X ? 524288 : 0;

        beq cr0, LBB1_2 ;entry

This sort of thing occurs a lot due to globalopt.

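A branch-free equivalent (my restatement; 524288 is 1 << 19), wrapped in a
hypothetical function for illustration:

  unsigned f(int X) {
    return (unsigned)(X != 0) << 19;
  }
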
===-------------------------------------------------------------------------===

define i32 @bar(i32 %x) nounwind readnone ssp {
  %0 = icmp eq i32 %x, 0 ; <i1> [#uses=1]
  %neg = sext i1 %0 to i32 ; <i32> [#uses=1]
  ret i32 %neg
}

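In C terms the function computes the negated comparison (my restatement):

  int bar(int x) { return -(x == 0); }   /* -1 when x == 0, otherwise 0 */
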
it would be better to produce:

===-------------------------------------------------------------------------===

We currently compile 32-bit bswap:

declare i32 @llvm.bswap.i32(i32 %A)
define i32 @test(i32 %A) {
  %B = call i32 @llvm.bswap.i32(i32 %A)
  ret i32 %B
}

        rlwinm r2, r3, 24, 16, 23
        rlwimi r2, r3, 8, 24, 31
        rlwimi r4, r3, 8, 8, 15
        rlwimi r4, r2, 0, 16, 31

it would be more efficient to produce:

        rlwinm r3,r3,8,0xffffffff
        rlwimi r3,r0,24,16,23

===-------------------------------------------------------------------------===

test/CodeGen/PowerPC/2007-03-24-cntlzd.ll compiles to:

__ZNK4llvm5APInt17countLeadingZerosEv:
        or r2, r2, r2     <<-- silly.

The dead or is a 'truncate' from 64- to 32-bits.

===-------------------------------------------------------------------------===

We generate horrible ppc code for this:

        addi r5, r5, 1     ;; Extra IV for the exit value compare.

        xoris r6, r5, 30   ;; This is due to a large immediate.
        cmplwi cr0, r6, 33920

//===---------------------------------------------------------------------===//

inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
{ return std::make_pair(a + b, a + b < a); }
bool no_overflow(unsigned a, unsigned b)
{ return !full_add(a, b).second; }

        rlwinm r2, r2, 29, 31, 31

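The simplification the optimizer should find (my restatement): the pair's
second element is exactly the carry out of a + b, so no_overflow reduces to a
single unsigned comparison of the sum:

  bool no_overflow2(unsigned a, unsigned b) { return a + b >= a; }
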
//===---------------------------------------------------------------------===//

We compile some FP comparisons into an mfcr with two rlwinms and an or. For
example:

int test(double x, double y) { return islessequal(x, y);}
int test2(double x, double y) { return islessgreater(x, y);}
int test3(double x, double y) { return !islessequal(x, y);}

Compiles into (all three are similar, but the bits differ):

        rlwinm r3, r2, 29, 31, 31
        rlwinm r2, r2, 31, 31, 31

GCC compiles this into:

which is more efficient and can use mfocr. See PR642 for some more context.

//===---------------------------------------------------------------------===//

void foo(float *data, float d) {
  for (i = 0; i < 8000; i++)

void foo2(float *data, float d) {
  for (i = 0; i < 8000; i++) {

        cmplwi cr0, r4, 32000

        cmplwi cr0, r4, 32000

The 'mr' could be eliminated by folding the add into the cmp better.

//===---------------------------------------------------------------------===//

Codegen for the following (low-probability) case deteriorated considerably
when the correctness fixes for unordered comparisons went in (PR 642, 58871).
It should be possible to recover the code quality described in the comments.

; RUN: llvm-as < %s | llc -march=ppc32 | grep or | count 3
; This should produce one 'or' or 'cror' instruction per function.

; RUN: llvm-as < %s | llc -march=ppc32 | grep mfcr | count 3

define i32 @test(double %x, double %y) nounwind {
  %tmp3 = fcmp ole double %x, %y ; <i1> [#uses=1]
  %tmp345 = zext i1 %tmp3 to i32 ; <i32> [#uses=1]
  ret i32 %tmp345
}

define i32 @test2(double %x, double %y) nounwind {
  %tmp3 = fcmp one double %x, %y ; <i1> [#uses=1]
  %tmp345 = zext i1 %tmp3 to i32 ; <i32> [#uses=1]
  ret i32 %tmp345
}

define i32 @test3(double %x, double %y) nounwind {
  %tmp3 = fcmp ugt double %x, %y ; <i1> [#uses=1]
  %tmp34 = zext i1 %tmp3 to i32 ; <i32> [#uses=1]
  ret i32 %tmp34
}

//===----------------------------------------------------------------------===//

; RUN: llvm-as < %s | llc -march=ppc32 | not grep fneg

; This could generate FSEL with appropriate flags (FSEL is not IEEE-safe, and
; should not be generated except with -enable-finite-only-fp-math or the like).
; With the correctness fixes for PR642 (58871) LowerSELECT_CC would need to
; recognize a more elaborate tree than a simple SETxx.

define double @test_FNEG_sel(double %A, double %B, double %C) {
  %D = fsub double -0.000000e+00, %A ; <double> [#uses=1]
  %Cond = fcmp ugt double %D, -0.000000e+00 ; <i1> [#uses=1]
  %E = select i1 %Cond, double %B, double %C ; <double> [#uses=1]
  ret double %E
}

//===----------------------------------------------------------------------===//

The save/restore sequence for CR in prolog/epilog is terrible:
- Each CR subreg is saved individually, rather than doing one save as a unit.
- On Darwin, the save is done after the decrement of SP, which means the offset
  from SP of the save slot can be too big for a store instruction, which means we
  need an additional register (currently hacked in 96015+96020; the solution there
  is correct, but poor).
- On SVR4 the same thing can happen, and I don't think saving before the SP
  decrement is safe on that target, as there is no red zone. This is currently
  broken AFAIK, although it's not a target I can exercise.

The following demonstrates the problem:

extern void bar(char *p);

  __asm__("" ::: "cr2");