//===- README.txt - Notes for improving PowerPC-specific code gen ---------===//

TODO:
* implement do-loop -> bdnz transform (see the loop sketch below)
* lmw/stmw pass a la arm load store optimizer for prolog/epilog
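A minimal sketch of the kind of countable loop the transform targets (the
function is illustrative, not from any benchmark): the trip count would be
moved into CTR with mtctr, and bdnz would then decrement CTR and branch in a
single instruction, removing the separate add/compare/branch:

int sum(const int *a, int n) {
  int s = 0;
  int i;
  for (i = 0; i < n; ++i)    /* trip count known before the loop is entered */
    s += a[i];
  return s;
}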
===-------------------------------------------------------------------------===

On PPC64, these constant-returning functions should each compile to a li of -1
followed by a single rldicr/rldicl to carve out the mask, rather than a longer
materialization sequence:

long f2 (long x) { return 0xfffffff000000000UL; }
long f3 (long x) { return 0x1ffffffffUL; }
===-------------------------------------------------------------------------===

Support 'update' load/store instructions.  These are cracked on the G5, but are
still a codesize win.

With preinc enabled, this:

long *%test4(long *%X, long *%dest) {
        %Y = getelementptr long* %X, int 4
        %A = load long* %Y
        store long %A, long* %dest
        ret long* %Y
}
with -sched=list-burr, the generated code improves further.

===-------------------------------------------------------------------------===
We compile the hottest inner loop of viterbi to a compare-and-branch loop
ending in:

        bne cr0, LBB1_83 ;bb420.i

The CBE manages to produce a counted loop using bdz instead.  This could be
much better (bdnz instead of bdz) but it still beats us.  If we produced this
with bdnz, the loop would be a single dispatch group.

===-------------------------------------------------------------------------===

This is effectively a simple form of predication.

===-------------------------------------------------------------------------===
Lump the constant pool for each function into ONE pic object, and reference
pieces of it as offsets from the start.  For functions like this (contrived
to have lots of constants, obviously):

double X(double Y) { return (Y*1.23 + 4.512)*2.34 + 14.38; }

We currently generate:

        lis r2, ha16(.CPI_X_0)
        lfd f0, lo16(.CPI_X_0)(r2)
        lis r2, ha16(.CPI_X_1)
        lfd f2, lo16(.CPI_X_1)(r2)
        lis r2, ha16(.CPI_X_2)
        lfd f1, lo16(.CPI_X_2)(r2)
        lis r2, ha16(.CPI_X_3)
        lfd f2, lo16(.CPI_X_3)(r2)
It would be better to materialize .CPI_X into a register, then use immediates
off of the register to avoid the lis's.  This is even more important in PIC
mode.

Note that this (and the static variable version) is discussed here for GCC:
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg00133.html
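At the source level the lumped pool looks like the following sketch (CP is a
hypothetical stand-in for the merged per-function pool): one base address is
materialized, and every constant is then reached with a small immediate
offset, so a single address computation serves all four loads:

static const double CP[] = { 1.23, 4.512, 2.34, 14.38 };

double X2(double Y) {          /* same body as X above; X2 is illustrative */
  return (Y*CP[0] + CP[1])*CP[2] + CP[3];
}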
Here's another example (the sgn function):

double testf(double a) {
       return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
}

It produces a BB like this:

        lis r2, ha16(LCPI1_0)
        lfs f0, lo16(LCPI1_0)(r2)
        lis r2, ha16(LCPI1_1)
        lis r3, ha16(LCPI1_2)
        lfs f2, lo16(LCPI1_2)(r3)
        lfs f3, lo16(LCPI1_1)(r2)
===-------------------------------------------------------------------------===

PIC Code Gen IPO optimization:

Squish small scalar globals together into a single global struct, allowing the
address of the struct to be CSE'd, avoiding PIC accesses (also reduces the size
of the GOT on targets with one).

Note that this is discussed here for GCC:
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg00133.html
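A sketch of the idea at the C level (all names are made up): three separate
globals, each otherwise needing its own PIC/GOT access, become fields of one
struct whose base address can be computed once and CSE'd:

/* before: three globals, three GOT slots */
int ga, gb, gc;

/* after: one global, one GOT slot, fields at fixed offsets from one base */
static struct { int a, b, c; } g_merged;

int use_merged(void) {
  return g_merged.a + g_merged.b + g_merged.c;
}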
===-------------------------------------------------------------------------===

Implement the Newton-Raphson method for refining estimate instructions to the
required accuracy, and implement divide as multiply by reciprocal when the
reciprocal has more than one use.  Itanium would want this too.
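A sketch of one refinement step (refine_recip is illustrative; the initial
estimate would come from a hardware estimate instruction, here just a
parameter).  Each Newton-Raphson iteration roughly doubles the number of
correct bits in the estimate:

float refine_recip(float d, float est) {
  /* Newton-Raphson for f(x) = 1/x - d:  x1 = x0 * (2 - d*x0) */
  return est * (2.0f - d * est);
}

Once r ~= 1/d is accurate enough, each 'a / d' becomes 'a * r', which pays off
exactly when the reciprocal has more than one use.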
===-------------------------------------------------------------------------===

Compile offsets from allocas:

int *%test() {
        %X = alloca { int, int }
        %Y = getelementptr {int,int}* %X, int 0, uint 1
        ret int* %Y
}

into a single add, not two.

--> important for C++.

===-------------------------------------------------------------------------===
No loads or stores of the constants should be needed:

struct foo { double X, Y; };
void xxx(struct foo F);
void bar() { struct foo R = { 1.0, 2.0 }; xxx(R); }

===-------------------------------------------------------------------------===
We still generate calls to foo$stub, and stubs, on Darwin.  This is not
necessary when building with the Leopard (10.5) or later linker, as stubs are
generated by ld when necessary.  Parameterizing this based on the deployment
target (-mmacosx-version-min) is probably enough; x86-32 already does this
right.

===-------------------------------------------------------------------------===
Darwin Stub LICM optimization:

Loops like this:

  for (...)  bar();

have to go through an indirect stub if bar is external or linkonce.  It would
be better to compile it as:

  fp = &bar;
  for (...)  fp();
which only computes the address of bar once (instead of each time through the
stub).  This is Darwin specific and would have to be done in the code generator.
Probably not a win on x86.

===-------------------------------------------------------------------------===
Simple IPO for argument passing, change:
  void foo(int X, double Y, int Z) -> void foo(int X, int Z, double Y)

The Darwin ABI specifies that any integer arguments in the first 32 bytes worth
of arguments get assigned to r3 through r10.  That is, if you have a function
foo(int, double, int) you get r3, f1, r6, since the 64 bit double ate up the
argument bytes for r4 and r5.  The trick then would be to shuffle the argument
order for functions we can internalize so that the maximum number of
integers/pointers get passed in regs before you see any of the fp arguments.
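Concretely, with register assignments following the ABI rules just described
(declarations only, for illustration):

void foo (int X, double Y, int Z);  /* X->r3, Y->f1; r4+r5 burned; Z->r6 */
void foo2(int X, int Z, double Y);  /* X->r3, Z->r4, Y->f1; r5+r6 burned */

With only three arguments both variants fit in registers, but with more
integer arguments the shuffled order keeps them packed into r3-r10 instead of
pushing them to the stack.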
Instead of implementing this, it would actually probably be easier to just
implement a PPC fastcc, where we could do whatever we wanted to the CC,
including having this work sanely.

===-------------------------------------------------------------------------===
Fix Darwin FP-In-Integer Registers ABI

Darwin passes doubles in structures in integer registers, which is very very
bad.  Add something like a BIT_CONVERT to LLVM, then do an i-p transformation
that percolates these things out of functions.

Check out how horrible this is:
http://gcc.gnu.org/ml/gcc/2005-10/msg01036.html

This is an extension of "interprocedural CC unmunging" that can't be done with
local transformations alone.

===-------------------------------------------------------------------------===
Compile this:

int foo(int a) {
  int b = (a < 8);
  if (b) {
    return b * 3;     // ignore the fact that this is always 3.
  } else {
    return 2;
  }
}

into something not this:
        rlwinm r2, r2, 29, 31, 31
        bgt cr0, LBB1_2 ; UnifiedReturnBlock
        rlwinm r2, r2, 0, 31, 31
LBB1_2: ; UnifiedReturnBlock
In particular, the two compares (marked 1) could be shared by reversing one.
This could be done in the dag combiner, by swapping a BR_CC when a SETCC of the
same operands (but backwards) exists.  In this case, this wouldn't save us
anything though, because the compares still wouldn't be shared.
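A hypothetical source pattern where the combine would apply: the branch tests
the reverse of an existing setcc on the same operands, so one cmpw could feed
both users:

int shared_cmp(int a, int b) {
  int t = (a < b);     /* SETCC on (a, b) */
  if (a >= b)          /* BR_CC on the reversed predicate, same operands */
    t += 2;
  return t;
}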
===-------------------------------------------------------------------------===

We should custom expand setcc instead of pretending that we have it.  That
would allow us to expose the access of the crbit after the mfcr, allowing
that access to be trivially folded into other ops.  A simple example:

int foo(int a, int b) { return (a < b) << 4; }

compiles into:
        rlwinm r2, r2, 29, 31, 31

===-------------------------------------------------------------------------===
Fold add and sub with constant into non-extern, non-weak addresses so this:

static int a;
void bar(int b) { a = b; }
void foo(unsigned char *c) {
  *c = a;
}

generates code that folds the +3 into the relocation:

        lbz r2, lo16(_a+3)(r2)

===-------------------------------------------------------------------------===
We generate really bad code for this:

int f(signed char *a, _Bool b, _Bool c) {
  signed char t = 0;
  if (b)  t = *a;
  if (c)  *a = t;
}

===-------------------------------------------------------------------------===
This:

int test(unsigned *P) { return *P >> 24; }

should compile to a single lbz of the byte we want (shifting right by 24
selects the high byte, which is the first byte in memory on this big-endian
target), not an lwz followed by a srwi.

===-------------------------------------------------------------------------===
On the G5, logical CR operations are more expensive in their three-address
form: ops that read/write the same register are half as expensive as those
that read from two registers that are different from their destination.

We should model this with two separate instructions.  The isel should generate
the "two-address" form of the instructions.  When the register allocator
detects that it needs to insert a copy due to the two-addressness of the CR
logical op, it will invoke PPCInstrInfo::convertToThreeAddress.  At this point
we can convert to the "three-address" instruction, to save code space.

This only matters when we start generating cr logical ops.

===-------------------------------------------------------------------------===
We should compile these two functions to the same thing:

#include <stdlib.h>
void f(int a, int b, int *P) {
  *P = (a-b)>=0?(a-b):(b-a);
}
void g(int a, int b, int *P) {
  *P = abs(a-b);
}

Further, they should compile to something better than:
        bgt cr0, LBB2_2 ; entry

GCC instead emits a branch-free subf/srawi/xor/subf sequence, which is much
nicer.

This theoretically may help improve twolf slightly (used in dimbox.c:142?).
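The branch-free form at the source level (g_bf is illustrative; it relies on
arithmetic right shift of negative values, which C leaves
implementation-defined):

void g_bf(int a, int b, int *P) {
  int d = a - b;
  int m = d >> 31;     /* 0 if d >= 0, -1 if d < 0 */
  *P = (d + m) ^ m;    /* d when m == 0, -d when m == -1 */
}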
===-------------------------------------------------------------------------===

int foo(int N, int ***W, int **TK, int X) {
  int t, i;

  for (t = 0; t < N; ++t)
    for (i = 0; i < 4; ++i)
      W[t / X][i][t % X] = TK[i][t];

  return 5;
}

We generate relatively atrocious code for this loop compared to gcc.

We could also strength reduce the rem and the div:
http://www.lcs.mit.edu/pubs/pdf/MIT-LCS-TM-600.pdf
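A sketch of the strength reduction (foo_sr is hypothetical and assumes X > 0):
t / X and t % X become induction variables maintained with an add and a
compare, so the loop body contains no div or rem at all:

int foo_sr(int N, int ***W, int **TK, int X) {
  int t, i, q = 0, r = 0;          /* invariant: q == t / X, r == t % X */

  for (t = 0; t < N; ++t) {
    for (i = 0; i < 4; ++i)
      W[q][i][r] = TK[i][t];
    if (++r == X) { r = 0; ++q; }  /* replaces the div and the rem */
  }

  return 5;
}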
===-------------------------------------------------------------------------===

float foo(float X) { return (int)(X); }

The int-to-float half of this round trip currently goes through memory and
reloads with an lwz/extsw pair.  We could use a target dag combine to turn the
lwz/extsw into an lwa when the lwz has a single use.  Since LWA is cracked
anyway, this would be a codesize win only.

===-------------------------------------------------------------------------===
We generate ugly code for this:

void func(unsigned int *ret, float dx, float dy, float dz, float dw) {
  unsigned int code = 0;
  if(dx < -dw) code |= 1;
  if(dx > dw)  code |= 2;
  if(dy < -dw) code |= 4;
  if(dy > dw)  code |= 8;
  if(dz < -dw) code |= 16;
  if(dz > dw)  code |= 32;
  *ret = code;
}

===-------------------------------------------------------------------------===
Complete the signed i32 to FP conversion code using 64-bit registers
transformation, good for PI.  See PPCISelLowering.cpp, this comment:

  // FIXME: disable this lowered code.  This generates 64-bit register values,
  // and we don't model the fact that the top part is clobbered by calls.  We
  // need to flag these together so that the value isn't live across a call.
  //setOperationAction(ISD::SINT_TO_FP, MVT::i32, Custom);

Also, if the registers are spilled to the stack, we have to ensure that all
64 bits of them are saved/restored, otherwise we will miscompile the code.  It
sounds like we need to get the 64-bit register classes going.

===-------------------------------------------------------------------------===
%struct.B = type { i8, [3 x i8] }

define void @bar(%struct.B* %b) {
entry:
        %tmp = bitcast %struct.B* %b to i32*          ; <uint*> [#uses=1]
        %tmp = load i32* %tmp                         ; <uint> [#uses=1]
        %tmp3 = bitcast %struct.B* %b to i32*         ; <uint*> [#uses=1]
        %tmp4 = load i32* %tmp3                       ; <uint> [#uses=1]
        %tmp8 = bitcast %struct.B* %b to i32*         ; <uint*> [#uses=2]
        %tmp9 = load i32* %tmp8                       ; <uint> [#uses=1]
        %tmp4.mask17 = shl i32 %tmp4, i8 1            ; <uint> [#uses=1]
        %tmp1415 = and i32 %tmp4.mask17, 2147483648   ; <uint> [#uses=1]
        %tmp.masked = and i32 %tmp, 2147483648        ; <uint> [#uses=1]
        %tmp11 = or i32 %tmp1415, %tmp.masked         ; <uint> [#uses=1]
        %tmp12 = and i32 %tmp9, 2147483647            ; <uint> [#uses=1]
        %tmp13 = or i32 %tmp12, %tmp11                ; <uint> [#uses=1]
        store i32 %tmp13, i32* %tmp8
        ret void
}
        rlwimi r2, r4, 0, 0, 0

We could collapse a bunch of those ORs and ANDs and generate the following
equivalent code:

        rlwinm r4, r2, 1, 0, 0

===-------------------------------------------------------------------------===
We compile:

unsigned test6(unsigned x) {
  return ((x & 0x00FF0000) >> 16) | ((x & 0x000000FF) << 16);
}

into a sequence that rotates and then masks with separately materialized
constants:

        rlwinm r3, r3, 16, 0, 31

GCC gets it right by folding the byte masks into the rotates:

        rlwinm r3,r3,16,24,31

===-------------------------------------------------------------------------===
Consider a function like this:

float foo(float X) { return X + 1234.4123f; }

The FP constant ends up in the constant pool, so we need to get the LR register.
This ends up producing code like this:

_foo:
        mflr r11
***     stw r11, 8(r1)
        bcl 20,31,"L00000$pb"
"L00000$pb":
        mflr r2
        addis r2, r2, ha16(.CPI_foo_0-"L00000$pb")
        lfs f0, lo16(.CPI_foo_0-"L00000$pb")(r2)
        fadds f1, f1, f0
***     lwz r11, 8(r1)
        mtlr r11
        blr
This is functional, but there is no reason to spill the LR register all the way
to the stack (the two marked instrs): spilling it to a GPR is quite enough.

Implementing this will require some codegen improvements.  Nate writes:
594 "So basically what we need to support the "no stack frame save and restore" is a
595 generalization of the LR optimization to "callee-save regs".
597 Currently, we have LR marked as a callee-save reg. The register allocator sees
598 that it's callee save, and spills it directly to the stack.
600 Ideally, something like this would happen:
602 LR would be in a separate register class from the GPRs. The class of LR would be
603 marked "unspillable". When the register allocator came across an unspillable
604 reg, it would ask "what is the best class to copy this into that I *can* spill"
605 If it gets a class back, which it will in this case (the gprs), it grabs a free
606 register of that class. If it is then later necessary to spill that reg, so be
609 ===-------------------------------------------------------------------------===
Compile this:

int foo(int X) {
  return X ? 524288 : 0;
}

without the compare and branch we currently emit:

        beq cr0, LBB1_2 ;entry

This sort of thing occurs a lot due to globalopt.
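Since 524288 == 1 << 19, a branch-free form exists at the source level (foo_bf
is illustrative):

int foo_bf(int X) {
  return (X != 0) << 19;    /* setcc + shift, no branch */
}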
===-------------------------------------------------------------------------===

We currently compile 32-bit bswap:

declare i32 @llvm.bswap.i32(i32 %A)

define i32 @test(i32 %A) {
        %B = call i32 @llvm.bswap.i32(i32 %A)
        ret i32 %B
}

to:
        rlwinm r2, r3, 24, 16, 23
        rlwimi r2, r3, 8, 24, 31
        rlwimi r4, r3, 8, 8, 15
        rlwimi r4, r2, 0, 16, 31
It would be more efficient to produce:

        rlwinm r3,r3,8,0xffffffff
        rlwimi r3,r0,24,16,23

===-------------------------------------------------------------------------===
test/CodeGen/PowerPC/2007-03-24-cntlzd.ll compiles to:

__ZNK4llvm5APInt17countLeadingZerosEv:
        or r2, r2, r2     <<-- silly.

The dead or is a 'truncate' from 64 to 32 bits.

===-------------------------------------------------------------------------===
We generate horrible ppc code for this:

#define N  2000000
double   a[N], c[N];
void simpleloop() {
   int j;
   for (j = 0; j < N; j++)
     c[j] = a[j];
}

The loop we emit includes:

        addi r5, r5, 1                 ;; Extra IV for the exit value compare.

        xoris r6, r5, 30               ;; This is due to a large immediate.
        cmplwi cr0, r6, 33920

(the xoris/cmplwi pair tests r5 against 30*65536+33920 == 2000000).

//===---------------------------------------------------------------------===//
This:

inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
{ return std::make_pair(a + b, a + b < a); }
bool no_overflow(unsigned a, unsigned b)
{ return !full_add(a, b).second; }

should compile down to the carry bit of a single addc, but we currently do an
explicit compare and extract the result from the condition register:

        rlwinm r2, r2, 29, 31, 31
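The reduced scalar form the optimizer would need to reach (no_overflow_red is
illustrative): unsigned addition wraps, so a + b overflows exactly when the
sum is less than a, and the negation folds into the comparison:

int no_overflow_red(unsigned a, unsigned b) {
  return a + b >= a;    /* == !(a + b < a) */
}

On PPC, the carry bit produced by addc can be turned into this result with
carry-consuming arithmetic (e.g. subfe), avoiding the compare entirely.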
//===---------------------------------------------------------------------===//

We compile some FP comparisons into an mfcr with two rlwinms and an or.  For
example:

int test(double x, double y) { return islessequal(x, y);}
int test2(double x, double y) { return islessgreater(x, y);}
int test3(double x, double y) { return !islessequal(x, y);}
Compiles into (all three are similar, but the bits differ):

        rlwinm r3, r2, 29, 31, 31
        rlwinm r2, r2, 31, 31, 31
GCC compiles this into a shorter sequence, which is more efficient and can use
mfocrf.  See PR642 for some more context.

//===---------------------------------------------------------------------===//
void foo(float *data, float d) {
   long i;
   for (i = 0; i < 8000; i++)
      data[i] = -data[i] * d;
}

void foo2(float *data, float d) {
   long i;
   for (i = 0; i < 8000; i++) {
      data[i] = -data[i];
      data[i] *= d;
   }
}
Both compile to loops ending in the same exit test:

        cmplwi cr0, r4, 32000
The 'mr' could be eliminated by folding the add into the cmp.

//===---------------------------------------------------------------------===//
Codegen for the following (low-probability) case deteriorated considerably
when the correctness fixes for unordered comparisons went in (PR 642, 58871).
It should be possible to recover the code quality described in the comments.

; RUN: llvm-as < %s | llc -march=ppc32 | grep or | count 3
; This should produce one 'or' or 'cror' instruction per function.

; RUN: llvm-as < %s | llc -march=ppc32 | grep mfcr | count 3
define i32 @test(double %x, double %y) nounwind {
entry:
        %tmp3 = fcmp ole double %x, %y          ; <i1> [#uses=1]
        %tmp345 = zext i1 %tmp3 to i32          ; <i32> [#uses=1]
        ret i32 %tmp345
}

define i32 @test2(double %x, double %y) nounwind {
entry:
        %tmp3 = fcmp one double %x, %y          ; <i1> [#uses=1]
        %tmp345 = zext i1 %tmp3 to i32          ; <i32> [#uses=1]
        ret i32 %tmp345
}

define i32 @test3(double %x, double %y) nounwind {
entry:
        %tmp3 = fcmp ugt double %x, %y          ; <i1> [#uses=1]
        %tmp34 = zext i1 %tmp3 to i32           ; <i32> [#uses=1]
        ret i32 %tmp34
}
//===----------------------------------------------------------------------===//

; RUN: llvm-as < %s | llc -march=ppc32 | not grep fneg

; This could generate FSEL with appropriate flags (FSEL is not IEEE-safe, and
; should not be generated except with -enable-finite-only-fp-math or the like).
; With the correctness fixes for PR642 (58871) LowerSELECT_CC would need to
; recognize a more elaborate tree than a simple SETxx.
define double @test_FNEG_sel(double %A, double %B, double %C) {
        %D = sub double -0.000000e+00, %A               ; <double> [#uses=1]
        %Cond = fcmp ugt double %D, -0.000000e+00       ; <i1> [#uses=1]
        %E = select i1 %Cond, double %B, double %C      ; <double> [#uses=1]
        ret double %E
}