//===- README.txt - Notes for improving PowerPC-specific code gen ---------===//
* implement do-loop -> bdnz transform
* lmw/stmw pass a la the ARM load/store optimizer for prolog/epilog
===-------------------------------------------------------------------------===

We could materialize the 64-bit constants returned by these functions with
shorter instruction sequences:

long f2 (long x) { return 0xfffffff000000000UL; }
long f3 (long x) { return 0x1ffffffffUL; }
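Both constants are contiguous runs of one bits, so they can be expressed as
shift patterns, which is what a li/rldic-style materialization would exploit.
A hedged C sketch (function names are illustrative only):

unsigned long k1(void) { return ~0UL << 36; }   /* 0xfffffff000000000 */
unsigned long k2(void) { return ~0UL >> 31; }   /* 0x00000001ffffffff */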
===-------------------------------------------------------------------------===
Support 'update' load/store instructions.  These are cracked on the G5, but are
still a codesize win.

With preinc enabled, this:

long *%test4(long *%X, long *%dest) {
        %Y = getelementptr long* %X, int 4
        %A = load long* %Y
        store long %A, long* %dest
        ret long* %Y
}

compiles to code that uses the update form; with -sched=list-burr, I get a
different schedule for it.
===-------------------------------------------------------------------------===

We compile the hottest inner loop of viterbi to a compare-and-branch loop
ending in:

        bne cr0, LBB1_83        ;bb420.i

The CBE manages to produce a count-register loop that ends in bdz.  This could
be much better (bdnz instead of bdz) but it still beats us.  If we produced
this with bdnz, the loop would be a single dispatch group.
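For reference, a hedged C sketch of the loop shape involved: a counted loop
whose trip count is known at entry, which is exactly what the do-loop -> bdnz
transform (mtctr + bdnz) wants to catch.  The function below is illustrative,
not the actual viterbi source:

/* The trip count is loop-invariant, so it can be moved into CTR with
   mtctr, turning the backedge into a single bdnz instead of an addi,
   cmplwi, and bne. */
void counted_copy(unsigned char *dst, const unsigned char *src, int n) {
    int i;
    for (i = 0; i < n; ++i)
        dst[i] = src[i];
}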
===-------------------------------------------------------------------------===

This is effectively a simple form of predication.
===-------------------------------------------------------------------------===

Lump the constant pool for each function into ONE pic object, and reference
pieces of it as offsets from the start.  For functions like this (contrived
to have lots of constants obviously):

double X(double Y) { return (Y*1.23 + 4.512)*2.34 + 14.38; }

We generate:

_X:
        lis r2, ha16(.CPI_X_0)
        lfd f0, lo16(.CPI_X_0)(r2)
        lis r2, ha16(.CPI_X_1)
        lfd f2, lo16(.CPI_X_1)(r2)
        fmadd f0, f1, f0, f2
        lis r2, ha16(.CPI_X_2)
        lfd f1, lo16(.CPI_X_2)(r2)
        lis r2, ha16(.CPI_X_3)
        lfd f2, lo16(.CPI_X_3)(r2)
        fmadd f1, f0, f1, f2
        blr

It would be better to materialize .CPI_X into a register, then use immediates
off of the register to avoid the lis's.  This is even more important in PIC
mode.

Note that this (and the static variable version) is discussed here for GCC:
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg00133.html
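In C terms, the effect of pooling the constants would be similar to the sketch
below (the pool layout is hypothetical; the backend would do this at the
machine level, not in source):

static const double pool[4] = { 1.23, 4.512, 2.34, 14.38 };

double X_pooled(double Y) {
    const double *p = pool;                  /* one address materialization */
    return (Y * p[0] + p[1]) * p[2] + p[3];  /* then fixed offsets off it   */
}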
Here's another example (the sgn function):

double testf(double a) {
       return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
}

It produces a BB like this:

LBB1_1: ; cond_true
        lis r2, ha16(LCPI1_0)
        lfs f0, lo16(LCPI1_0)(r2)
        lis r2, ha16(LCPI1_1)
        lis r3, ha16(LCPI1_2)
        lfs f2, lo16(LCPI1_2)(r3)
        lfs f3, lo16(LCPI1_1)(r2)
        fsub f0, f0, f1
        fsel f1, f0, f2, f3
        blr
===-------------------------------------------------------------------------===

PIC Code Gen IPO optimization:

Squish small scalar globals together into a single global struct, allowing the
address of the struct to be CSE'd, avoiding PIC accesses (also reduces the size
of the GOT on targets with one).

Note that this is discussed here for GCC:
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg00133.html
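A minimal C sketch of the transformation (names hypothetical):

/* Before: each global is a separate symbol, so each access in PIC code
   needs its own address computation (and possibly its own GOT entry). */
static int a, b, c;

/* After: one struct, one base address that can be CSE'd; the members are
   reached as constant offsets from that base. */
static struct { int a, b, c; } abc;

int sum(void) { return abc.a + abc.b + abc.c; }  /* one base, three offsets */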
===-------------------------------------------------------------------------===

Implement the Newton-Raphson method for improving estimate instructions to the
correct accuracy, and implement divide as multiply by reciprocal when it has
more than one use.  Itanium would want this too.
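As a reminder of the shape of the refinement, a hedged C sketch of one
Newton-Raphson step on a reciprocal estimate (on PPC the initial estimate
would come from fres/frsqrte; here it is simply a parameter):

/* One Newton-Raphson step for 1/a: x1 = x0 * (2 - a*x0).  Each step
   roughly doubles the number of correct bits in the estimate. */
float refine_recip(float a, float x0) {
    return x0 * (2.0f - a * x0);
}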
===-------------------------------------------------------------------------===

Compile offsets from allocas:

int *%test() {
        %X = alloca { int, int }
        %Y = getelementptr {int,int}* %X, int 0, uint 1
        ret int* %Y
}

into a single add, not two.

--> important for C++.
===-------------------------------------------------------------------------===

No loads or stores of the constants should be needed:

struct foo { double X, Y; };
void xxx(struct foo F);
void bar() { struct foo R = { 1.0, 2.0 }; xxx(R); }
===-------------------------------------------------------------------------===

We still generate calls to foo$stub, and stubs, on Darwin.  This is not
necessary when building with the Leopard (10.5) or later linker, as stubs are
generated by ld when necessary.  Parameterizing this based on the deployment
target (-mmacosx-version-min) is probably enough.  x86-32 does this right, see
its logic.
===-------------------------------------------------------------------------===

Darwin Stub LICM optimization:

Calls inside a loop have to go through an indirect stub if the callee is
external or linkonce.  It would be better to hoist the address computation out
of the loop, so the address of the callee is computed only once instead of
each time through the stub.  This is Darwin specific and would have to be done
in the code generator.  Probably not a win on x86.
===-------------------------------------------------------------------------===

Simple IPO for argument passing, change:

  void foo(int X, double Y, int Z) -> void foo(int X, int Z, double Y)

The Darwin ABI specifies that any integer arguments in the first 32 bytes worth
of arguments get assigned to r3 through r10.  That is, if you have a function
foo(int, double, int) you get r3, f1, r6, since the 64-bit double ate up the
argument bytes for r4 and r5.  The trick then would be to shuffle the argument
order for functions we can internalize so that the maximum number of
integers/pointers get passed in regs before you see any of the fp arguments.
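In C terms the shuffle would look like this (a sketch; the register
assignments follow the Darwin rules described above):

/* Before: foo(int, double, int) takes r3, f1, r6; the double burns the
   r4/r5 argument slots. */
void foo_before(int X, double Y, int Z);

/* After reordering an internalized function: X -> r3, Z -> r4, Y -> f1;
   no GPR slots are wasted before the integer arguments. */
void foo_after(int X, int Z, double Y);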
Instead of implementing this, it would actually probably be easier to just
implement a PPC fastcc, where we could do whatever we wanted to the CC,
including having this work sanely.
===-------------------------------------------------------------------------===

Fix the Darwin FP-In-Integer-Registers ABI

Darwin passes doubles in structures in integer registers, which is very very
bad.  Add something like a BIT_CONVERT to LLVM, then do an interprocedural
transformation that percolates these things out of functions.

Check out how horrible this is:
http://gcc.gnu.org/ml/gcc/2005-10/msg01036.html

This is an extension of "interprocedural CC unmunging" that can't be done with
just fastcc.
===-------------------------------------------------------------------------===

Compile this:

int foo(int a) {
  int b = (a < 8);
  if (b) {
    return b * 3;     // ignore the fact that this is always 3.
  } else {
    return 2;
  }
}

into something not this:

_foo:
1)      cmpwi cr7, r3, 8
        mfcr r2
        rlwinm r2, r2, 29, 31, 31
1)      cmpwi cr0, r3, 7
        bgt cr0, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; then
        rlwinm r2, r2, 0, 31, 31
        mulli r3, r2, 3
        blr
LBB1_2: ; UnifiedReturnBlock
        li r3, 2
        blr

In particular, the two compares (marked 1) could be shared by reversing one.
This could be done in the dag combiner, by swapping a BR_CC when a SETCC of the
same operands (but backwards) exists.  In this case, this wouldn't save us
anything though, because the compares still wouldn't be shared.
===-------------------------------------------------------------------------===

We should custom expand setcc instead of pretending that we have it.  That
would allow us to expose the access of the crbit after the mfcr, allowing
that access to be trivially folded into other ops.  A simple example:

int foo(int a, int b) { return (a < b) << 4; }

compiles into:

_foo:
        cmpw cr7, r3, r4
        mfcr r2
        rlwinm r2, r2, 29, 31, 31
        slwi r3, r2, 4
        blr
===-------------------------------------------------------------------------===

Fold add and sub with constant into non-extern, non-weak addresses so this:

static int a;
void bar(int b) { a = b; }
void foo(unsigned char *c) {
  *c = a;
}

so that this (the 3(r2) picks out the low byte of the big-endian int):

_foo:
        lis r2, ha16(_a)
        la r2, lo16(_a)(r2)
        lbz r2, 3(r2)
        stb r2, 0(r3)
        blr

becomes this:

_foo:
        lis r2, ha16(_a+3)
        lbz r2, lo16(_a+3)(r2)
        stb r2, 0(r3)
        blr
===-------------------------------------------------------------------------===

We generate really bad code for this:

int f(signed char *a, _Bool b, _Bool c) { ... }
===-------------------------------------------------------------------------===

We compile:

int test(unsigned *P) { return *P >> 24; }

into:

_test:
        lwz r2, 0(r3)
        srwi r3, r2, 24
        blr

it would be better to compile it as a single lbz (on this big-endian target
the byte we want is at offset 0):

_test:
        lbz r3, 0(r3)
        blr
===-------------------------------------------------------------------------===

On the G5, logical CR operations are more expensive in their three-address
form: ops that read/write the same register are half as expensive as those
that read from two registers that are different from their destination.

We should model this with two separate instructions.  The isel should generate
the "two address" form of the instructions.  When the register allocator
detects that it needs to insert a copy due to the two-addressness of the CR
logical op, it will invoke PPCInstrInfo::convertToThreeAddress.  At this point
we can convert to the "three address" instruction, to save code space.

This only matters when we start generating cr logical ops.
===-------------------------------------------------------------------------===

We should compile these two functions to the same thing:

#include <stdlib.h>
void f(int a, int b, int *P) {
  *P = (a-b)>=0?(a-b):(b-a);
}
void g(int a, int b, int *P) {
  *P = abs(a-b);
}

Further, they should compile to something better than the branchy code we
emit now (a compare, a "bgt cr0, LBB2_2", and a conditional move between
blocks).  GCC produces a branch-free subtract/srawi/xor/subtract sequence,
which is much nicer.

This theoretically may help improve twolf slightly (used in dimbox.c:142?).
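The branch-free pattern in question, written out as a C sketch of what both
functions could lower to:

/* Branch-free |a-b|: m is all ones exactly when d is negative (assuming
   arithmetic right shift of a negative int, as on PPC), and (d ^ m) - m
   conditionally negates d. */
int abs_diff(int a, int b) {
    int d = a - b;
    int m = d >> 31;      /* 0 or -1 */
    return (d ^ m) - m;
}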
===-------------------------------------------------------------------------===

This:

define i32 @clamp0g(i32 %a) {
entry:
        %cmp = icmp slt i32 %a, 0
        %sel = select i1 %cmp, i32 0, i32 %a
        ret i32 %sel
}

is compiled to this with the PowerPC (32-bit) backend:

_clamp0g:
        cmpwi cr0, r3, 0
        li r2, 0
        blt cr0, LBB1_2
; BB#1:                                 ; %entry
        mr r2, r3
LBB1_2:                                 ; %entry
        mr r3, r2
        blr

This could be reduced to the much simpler:

_clamp0g:
        srawi r2, r3, 31
        andc r3, r3, r2
        blr
===-------------------------------------------------------------------------===

int foo(int N, int ***W, int **TK, int X) {
  int t, i;

  for (t = 0; t < N; ++t)
    for (i = 0; i < 4; ++i)
      W[t / X][i][t % X] = TK[i][t];

  return 5;
}

We generate relatively atrocious code for this loop compared to gcc.

We could also strength-reduce the rem and the div (sketched below):
http://www.lcs.mit.edu/pubs/pdf/MIT-LCS-TM-600.pdf
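A hedged sketch of that strength reduction: carry the quotient and remainder
of t/X and t%X across iterations instead of recomputing them (assumes X > 0):

/* q tracks t / X and r tracks t % X incrementally, so the loop body
   contains no divide or remainder. */
void foo_sr(int N, int ***W, int **TK, int X) {
    int t, i, q = 0, r = 0;
    for (t = 0; t < N; ++t) {
        for (i = 0; i < 4; ++i)
            W[q][i][r] = TK[i][t];
        if (++r == X) {   /* remainder wrapped: bump the quotient */
            r = 0;
            ++q;
        }
    }
}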
===-------------------------------------------------------------------------===

float foo(float X) { return (int)(X); }

The conversion goes through memory, and the reload of the converted value is
an lwz followed by an extsw.  We could use a target dag combine to turn the
lwz/extsw pair into an lwa when the lwz has a single use.  Since LWA is
cracked anyway, this would be a codesize win only.
===-------------------------------------------------------------------------===

We generate ugly code for this:

void func(unsigned int *ret, float dx, float dy, float dz, float dw) {
  unsigned int code = 0;
  if(dx < -dw) code |= 1;
  if(dx > dw)  code |= 2;
  if(dy < -dw) code |= 4;
  if(dy > dw)  code |= 8;
  if(dz < -dw) code |= 16;
  if(dz > dw)  code |= 32;
  *ret = code;
}
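One branch-free reformulation, as a sketch (whether it actually wins depends
on how setcc is lowered; see the setcc item above):

/* Accumulate each predicate as a 0/1 value instead of branching six
   times. */
void func2(unsigned int *ret, float dx, float dy, float dz, float dw) {
    unsigned int code = 0;
    code |= (unsigned int)(dx < -dw) << 0;
    code |= (unsigned int)(dx >  dw) << 1;
    code |= (unsigned int)(dy < -dw) << 2;
    code |= (unsigned int)(dy >  dw) << 3;
    code |= (unsigned int)(dz < -dw) << 4;
    code |= (unsigned int)(dz >  dw) << 5;
    *ret = code;
}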
===-------------------------------------------------------------------------===

Complete the signed i32 to FP conversion code using 64-bit registers
transformation, good for PI.  See PPCISelLowering.cpp, this comment:

// FIXME: disable this lowered code.  This generates 64-bit register values,
// and we don't model the fact that the top part is clobbered by calls.  We
// need to flag these together so that the value isn't live across a call.
//setOperationAction(ISD::SINT_TO_FP, MVT::i32, Custom);

Also, if the registers are spilled to the stack, we have to ensure that all
64 bits of them are saved/restored, otherwise we will miscompile the code.  It
sounds like we need to get the 64-bit register classes going.
===-------------------------------------------------------------------------===

%struct.B = type { i8, [3 x i8] }

define void @bar(%struct.B* %b) {
entry:
        %tmp = bitcast %struct.B* %b to i32*            ; <uint*> [#uses=1]
        %tmp = load i32* %tmp                           ; <uint> [#uses=1]
        %tmp3 = bitcast %struct.B* %b to i32*           ; <uint*> [#uses=1]
        %tmp4 = load i32* %tmp3                         ; <uint> [#uses=1]
        %tmp8 = bitcast %struct.B* %b to i32*           ; <uint*> [#uses=2]
        %tmp9 = load i32* %tmp8                         ; <uint> [#uses=1]
        %tmp4.mask17 = shl i32 %tmp4, i8 1              ; <uint> [#uses=1]
        %tmp1415 = and i32 %tmp4.mask17, 2147483648     ; <uint> [#uses=1]
        %tmp.masked = and i32 %tmp, 2147483648          ; <uint> [#uses=1]
        %tmp11 = or i32 %tmp1415, %tmp.masked           ; <uint> [#uses=1]
        %tmp12 = and i32 %tmp9, 2147483647              ; <uint> [#uses=1]
        %tmp13 = or i32 %tmp12, %tmp11                  ; <uint> [#uses=1]
        store i32 %tmp13, i32* %tmp8
        ret void
}

is compiled into:

_bar:
        lwz r2, 0(r3)
        slwi r4, r2, 1
        or r4, r4, r2
        rlwimi r2, r4, 0, 0, 0
        stw r2, 0(r3)
        blr

We could collapse a bunch of those ORs and ANDs and generate the following
equivalent code (the whole computation is w | ((w << 1) & 0x80000000)):

_bar:
        lwz r2, 0(r3)
        rlwinm r4, r2, 1, 0, 0
        or r2, r4, r2
        stw r2, 0(r3)
        blr
===-------------------------------------------------------------------------===

We compile:

unsigned test6(unsigned x) {
  return ((x & 0x00FF0000) >> 16) | ((x & 0x000000FF) << 16);
}

into a plain rotate ("rlwinm r3, r3, 16, 0, 31") plus a 0x00FF00FF mask
materialized with lis/ori.  GCC gets it down to two rotate-and-mask rlwinms
and an or:

        rlwinm r0,r3,16,8,15
        rlwinm r3,r3,16,24,31
        or r3,r0,r3
===-------------------------------------------------------------------------===

Consider a function like this:

float foo(float X) { return X + 1234.4123f; }

The FP constant ends up in the constant pool, so we need to get the LR register.
This ends up producing code like this:

_foo:
.LBB_foo_0:     ; entry
        mflr r11
***     stw r11, 8(r1)
        bl "L00000$pb"
"L00000$pb":
        mflr r2
        addis r2, r2, ha16(.CPI_foo_0-"L00000$pb")
        lfs f0, lo16(.CPI_foo_0-"L00000$pb")(r2)
        fadds f1, f1, f0
***     lwz r11, 8(r1)
        mtlr r11
        blr

This is functional, but there is no reason to spill the LR register all the way
to the stack (the two marked instrs): spilling it to a GPR is quite enough.

Implementing this will require some codegen improvements.  Nate writes:

"So basically what we need to support the "no stack frame save and restore" is a
generalization of the LR optimization to "callee-save regs".

Currently, we have LR marked as a callee-save reg.  The register allocator sees
that it's callee save, and spills it directly to the stack.

Ideally, something like this would happen:

LR would be in a separate register class from the GPRs.  The class of LR would
be marked "unspillable".  When the register allocator came across an unspillable
reg, it would ask "what is the best class to copy this into that I *can* spill?"
If it gets a class back, which it will in this case (the gprs), it grabs a free
register of that class.  If it is then later necessary to spill that reg, so be
it."
===-------------------------------------------------------------------------===

We compile

  return X ? 524288 : 0;

to a branchy sequence ("beq cr0, LBB1_2" around the two values) instead of
branch-free code: since 524288 == 1 << 19, this is just the "X is nonzero"
bit shifted left by 19.

This sort of thing occurs a lot due to globalopt.
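The branch-free formulation, as a C sketch (function name hypothetical):

/* X ? 524288 : 0 without a branch: 524288 == 1 << 19, so this is just
   the "X is nonzero" bit shifted into place. */
int select_pow2(int X) {
    return (X != 0) << 19;
}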
===-------------------------------------------------------------------------===

define i32 @bar(i32 %x) nounwind readnone ssp {
entry:
  %0 = icmp eq i32 %x, 0                          ; <i1> [#uses=1]
  %neg = sext i1 %0 to i32                        ; <i32> [#uses=1]
  ret i32 %neg
}

it would be better to produce:

_bar:
        addic r3,r3,-1          ; CA = (x != 0)
        subfe r3,r3,r3          ; r3 = CA - 1  (-1 if x == 0, else 0)
        blr
===-------------------------------------------------------------------------===

We currently compile 32-bit bswap:

declare i32 @llvm.bswap.i32(i32 %A)
define i32 @test(i32 %A) {
        %B = call i32 @llvm.bswap.i32(i32 %A)
        ret i32 %B
}

to:

_test:
        rlwinm r2, r3, 24, 16, 23
        slwi r4, r3, 24
        rlwimi r2, r3, 8, 24, 31
        rlwimi r4, r3, 8, 8, 15
        rlwimi r4, r2, 0, 16, 31
        mr r3, r4
        blr

it would be more efficient to produce:

_foo:   mr r0,r3
        rlwinm r3,r3,8,0xffffffff
        rlwimi r3,r0,24,0,7
        rlwimi r3,r0,24,16,23
        blr
===-------------------------------------------------------------------------===

test/CodeGen/PowerPC/2007-03-24-cntlzd.ll compiles to a cntlzd whose result is
then copied onto itself:

__ZNK4llvm5APInt17countLeadingZerosEv:
        ...
        or r2, r2, r2     <<-- silly.
        ...

The dead or is a 'truncate' from 64- to 32-bits.
===-------------------------------------------------------------------------===

We generate horrible ppc code for this:

#define N  2000000
double   a[N],c[N];
void simpleloop() {
   int j;
   for (j=0; j<N; j++)
     c[j] = a[j];
}

LBB1_1: ;bb
        lfdx f0, r3, r4
        addi r5, r5, 1                 ;; Extra IV for the exit value compare.
        stfdx f0, r2, r4
        addi r4, r4, 8

        xoris r6, r5, 30               ;; This is due to a large immediate.
        cmplwi cr0, r6, 33920          ;; (30 << 16) | 33920 == 2000000 == N
        bne cr0, LBB1_1
//===---------------------------------------------------------------------===//

This:

inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
{ return std::make_pair(a + b, a + b < a); }
bool no_overflow(unsigned a, unsigned b)
{ return !full_add(a, b).second; }

should compile down to an add plus a direct read of the carry bit.  Instead we
do the add, compare the result against an operand, and extract the answer from
the condition register with an mfcr followed by:

        rlwinm r2, r2, 29, 31, 31
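In source terms the whole thing folds to one unsigned compare of the sum; a
hedged C sketch of the form this could be reduced to:

#include <stdbool.h>

/* a + b overflows exactly when the wrapped sum is smaller than an
   addend, so the pair can be folded away entirely. */
bool no_overflow_simple(unsigned a, unsigned b) {
    return a + b >= a;
}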
//===---------------------------------------------------------------------===//

We compile some FP comparisons into an mfcr with two rlwinms and an or.  For
example:

int test(double x, double y) { return islessequal(x, y);}
int test2(double x, double y) { return islessgreater(x, y);}
int test3(double x, double y) { return !islessequal(x, y);}

Compiles into (all three are similar, but the bits differ):

_test:
        fcmpu cr7, f1, f2
        mfcr r2
        rlwinm r3, r2, 29, 31, 31
        rlwinm r2, r2, 31, 31, 31
        or r3, r2, r3
        blr

GCC combines the two CR bits with a cror first and then extracts a single bit,
which is more efficient and can use mfocr.  See PR642 for some more context.
//===---------------------------------------------------------------------===//

void foo(float *data, float d) {
   long i;
   for (i = 0; i < 8000; i++)
      data[i] = d;
}

void foo2(float *data, float d) {
   long i;
   data--;
   for (i = 0; i < 8000; i++) {
      data[1] = d;
      data++;
   }
}

These compile to essentially identical loops of the form:

LBB1_1: ; bb
        addi r4, r2, 4
        stfsx f1, r3, r2
        cmplwi cr0, r4, 32000
        mr r2, r4
        bne cr0, LBB1_1 ; bb

The 'mr' could be eliminated by folding the add into the cmp better.
//===---------------------------------------------------------------------===//

Codegen for the following (low-probability) case deteriorated considerably
when the correctness fixes for unordered comparisons went in (PR 642, 58871).
It should be possible to recover the code quality described in the comments.
; RUN: llvm-as < %s | llc -march=ppc32 | grep or | count 3
; This should produce one 'or' or 'cror' instruction per function.

; RUN: llvm-as < %s | llc -march=ppc32 | grep mfcr | count 3
; This should produce one 'mfcr' instruction per function.

define i32 @test(double %x, double %y) nounwind {
entry:
        %tmp3 = fcmp ole double %x, %y          ; <i1> [#uses=1]
        %tmp345 = zext i1 %tmp3 to i32          ; <i32> [#uses=1]
        ret i32 %tmp345
}

define i32 @test2(double %x, double %y) nounwind {
entry:
        %tmp3 = fcmp one double %x, %y          ; <i1> [#uses=1]
        %tmp345 = zext i1 %tmp3 to i32          ; <i32> [#uses=1]
        ret i32 %tmp345
}

define i32 @test3(double %x, double %y) nounwind {
entry:
        %tmp3 = fcmp ugt double %x, %y          ; <i1> [#uses=1]
        %tmp34 = zext i1 %tmp3 to i32           ; <i32> [#uses=1]
        ret i32 %tmp34
}
//===----------------------------------------------------------------------===//

; RUN: llvm-as < %s | llc -march=ppc32 | not grep fneg

; This could generate FSEL with appropriate flags (FSEL is not IEEE-safe, and
; should not be generated except with -enable-finite-only-fp-math or the like).
; With the correctness fixes for PR642 (58871) LowerSELECT_CC would need to
; recognize a more elaborate tree than a simple SETxx.

define double @test_FNEG_sel(double %A, double %B, double %C) {
        %D = fsub double -0.000000e+00, %A              ; <double> [#uses=1]
        %Cond = fcmp ugt double %D, -0.000000e+00       ; <i1> [#uses=1]
        %E = select i1 %Cond, double %B, double %C      ; <double> [#uses=1]
        ret double %E
}
//===----------------------------------------------------------------------===//

The save/restore sequence for CR in prolog/epilog is terrible:
- Each CR subreg is saved individually, rather than doing one save as a unit.
- On Darwin, the save is done after the decrement of SP, which means the offset
  from SP of the save slot can be too big for a store instruction, which means
  we need an additional register (currently hacked in 96015+96020; the solution
  there is correct, but poor).
- On SVR4 the same thing can happen, and I don't think saving before the SP
  decrement is safe on that target, as there is no red zone.  This is currently
  broken AFAIK, although it's not a target I can exercise.

The following demonstrates the problem:

extern void bar(char *p);

void foo() {
  char x[100000];
  bar(x);
  __asm__("" ::: "cr2");
}