1 //===---------------------------------------------------------------------===//
2 // Random ideas for the ARM backend.
3 //===---------------------------------------------------------------------===//
5 Reimplement 'select' in terms of 'SEL'.
* We would really like to support UXTAB16, but we need to prove that the
add doesn't overflow from one 16-bit chunk into the other.
10 * Implement pre/post increment support. (e.g. PR935)
11 * Implement smarter constant generation for binops with large immediates.
13 //===---------------------------------------------------------------------===//
15 Crazy idea: Consider code that uses lots of 8-bit or 16-bit values. By the
16 time regalloc happens, these values are now in a 32-bit register, usually with
the top bits known to be sign or zero extended. If spilled, we should be able
to spill these to an 8-bit or 16-bit stack slot, zero or sign extending as part
of the reload.

Doing this reduces the size of the stack frame (important for Thumb etc.), and
22 also increases the likelihood that we will be able to reload multiple values
23 from the stack with a single load.
25 //===---------------------------------------------------------------------===//
27 The constant island pass is in good shape. Some cleanups might be desirable,
28 but there is unlikely to be much improvement in the generated code.
30 1. There may be some advantage to trying to be smarter about the initial
31 placement, rather than putting everything at the end.
33 2. There might be some compile-time efficiency to be had by representing
34 consecutive islands as a single block rather than multiple blocks.
3. Use a priority queue to sort constant pool users in inverse order of
position, so we always process the one closest to the end of the function
first. This may simplify CreateNewWater.
40 //===---------------------------------------------------------------------===//
42 Eliminate copysign custom expansion. We are still generating crappy code with
43 default expansion + if-conversion.
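
For reference, copysign is just sign-bit surgery. A minimal sketch for an f32
value held in GPRs under a soft-float ABI (x in r0, sign source in r1; all
register choices illustrative):

	bic	r0, r0, #0x80000000	@ clear the sign bit of x
	and	r1, r1, #0x80000000	@ isolate the sign bit of y
	orr	r0, r0, r1		@ combine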
45 //===---------------------------------------------------------------------===//
47 Eliminate one instruction from:
define i32 @_Z6slow4bii(i32 %x, i32 %y) {
	%tmp = icmp sgt i32 %x, %y
	%retval = select i1 %tmp, i32 %x, i32 %y
	ret i32 %retval
}
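
Presumably the goal is a compare plus a single predicated move (a sketch,
assuming AAPCS argument registers):

	cmp	r0, r1
	movle	r0, r1		@ r0 = max(x, y)
	bx	lr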
67 //===---------------------------------------------------------------------===//
69 Implement long long "X-3" with instructions that fold the immediate in. These
70 were disabled due to badness with the ARM carry flag on subtracts.
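
A sketch of the folded form, assuming the long long lives in r0/r1 (low/high):

	subs	r0, r0, #3	@ low word; sets the carry (borrow) flag
	sbc	r1, r1, #0	@ high word consumes the borrow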
72 //===---------------------------------------------------------------------===//
74 We currently compile abs:
75 int foo(int p) { return p < 0 ? -p : p; }
86 This is very, uh, literal. This could be a 3 operation sequence:
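
	t = p >> 31;		/* arithmetic shift: t is 0 or -1 */
	res = (p ^ t) - t;	/* conditionally negates p */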
90 Which would be better. This occurs in png decode.
92 //===---------------------------------------------------------------------===//
94 More load / store optimizations:
1) Better representation for block transfer? This shows up in Olden/power.
106 If we can spare the registers, it would be better to use fldm and fstm here.
107 Need major register allocator enhancement though.
2) Can we recognize the relative position of constantpool entries? i.e. treat
loads of consecutive entries as loads from a common base plus small offsets.
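A hypothetical shape (labels invented for illustration):

	ldr r0, .LCPI0_0
	ldr r1, .LCPI0_1
	ldr r2, .LCPI0_2

could become

	ldr r0, [rBase]
	ldr r1, [rBase, #4]
	ldr r2, [rBase, #8]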
120 Then the ldr's can be combined into a single ldm. See Olden/power.
Note that for ARM v4, gcc uses ldmia to load a pair of 32-bit values to
represent a 64-bit double FP constant.
133 3) struct copies appear to be done field by field
134 instead of by words, at least sometimes:
136 struct foo { int x; short s; char c1; char c2; };
137 void cpy(struct foo*a, struct foo*b) { *a = *b; }
152 In this benchmark poor handling of aggregate copies has shown up as
153 having a large effect on size, and possibly speed as well (we don't have
154 a good way to measure on ARM).
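
For an 8-byte struct like the one above, a word-oriented copy is just (a
sketch; a arrives in r0 and b in r1 under the AAPCS):

	ldmia	r1, {r2, r3}	@ load both words of *b
	stmia	r0, {r2, r3}	@ store them to *a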
156 //===---------------------------------------------------------------------===//
158 * Consider this silly example:
double bar(double x) {
	...
}

	stmfd sp!, {r4, r5, r7, lr}
	...
	mov r4, r0		@ incoming double copied to callee-saved regs
	mov r5, r1
	bl ...			@ the call made by the body
	...
	fmdrr d0, r4, r5	@ the only use of r4 / r5
	...
	ldmfd sp!, {r4, r5, r7, pc}
Ignore the prologue and epilogue stuff for a second. Note
the copies to callee-save registers and the fact that they are used only by the
fmdrr instruction. It would have been better had the fmdrr been scheduled
before the call, placing the result in a callee-save DPR register. The two
mov ops would not have been necessary.
188 //===---------------------------------------------------------------------===//
190 Calling convention related stuff:
192 * gcc's parameter passing implementation is terrible and we suffer as a result:
struct s { double d1; int s1; };

void foo(struct s S) {
	printf("%g, %d\n", S.d1, S.s1);
}
'S' is passed via registers r0, r1, r2. But gcc stores them to the stack, and
then reloads them into r1, r2, and r3 before issuing the call (r0 contains the
206 address of the format string):
	...
	stmia sp, {r0, r1, r2}
	...
219 Instead of a stmia, ldmia, and a ldr, wouldn't it be better to do three moves?
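
i.e. something like this sketch (the format-string label is hypothetical; the
moves run high-to-low so nothing is clobbered):

	mov	r3, r2
	mov	r2, r1
	mov	r1, r0
	ldr	r0, .Lformat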
* Returning an aggregate type is even worse:

struct s foo(void) {
	struct s S = {1.1, 2};
	return S;
}

	...
	@ lr needed for prologue
	ldmia r0, {r0, r1, r2}
	stmia sp, {r0, r1, r2}
	stmia ip, {r0, r1, r2}
	...
r0 (and later ip) is the hidden parameter from the caller that says where to
store the return value. The
243 first ldmia loads the constants into r0, r1, r2. The last stmia stores r0, r1,
244 r2 into the address passed in. However, there is one additional stmia that
245 stores r0, r1, and r2 to some stack location. The store is dead.
247 The llvm-gcc generated code looks like this:
csretcc void %foo(%struct.s* %agg.result) {
entry:
251 %S = alloca %struct.s, align 4 ; <%struct.s*> [#uses=1]
252 %memtmp = alloca %struct.s ; <%struct.s*> [#uses=1]
253 cast %struct.s* %S to sbyte* ; <sbyte*>:0 [#uses=2]
254 call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
255 cast %struct.s* %agg.result to sbyte* ; <sbyte*>:1 [#uses=2]
256 call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
257 cast %struct.s* %memtmp to sbyte* ; <sbyte*>:2 [#uses=1]
call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )
ret void
}
262 llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from
263 constantpool). Perhaps we should 1) fix llvm-gcc so the memcpy is translated
into a number of loads and stores, or 2) custom lower memcpy (of small size) to
265 be ldmia / stmia. I think option 2 is better but the current register
266 allocator cannot allocate a chunk of registers at a time.
A feasible temporary solution is to use specific physical registers at
lowering time for small (<= 4 words?) transfer sizes.
* The ARM CSRet calling convention requires the hidden argument to be returned
by the callee.
274 //===---------------------------------------------------------------------===//
276 We can definitely do a better job on BB placements to eliminate some branches.
277 It's very common to see llvm generated assembly code that looks like this:
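
LBB3:
	...
LBB4:
	...
	beq LBB3
	b LBB2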
If BB4 is the only predecessor of BB3, then we can emit BB3 after BB4. We can
then eliminate the beq and turn the unconditional branch to LBB2 into a bne.
289 See McCat/18-imp/ComputeBoundingBoxes for an example.
291 //===---------------------------------------------------------------------===//
293 Pre-/post- indexed load / stores:
1) We should not make the pre/post-indexed load/store transform if the base
ptr is guaranteed to be live beyond the load/store. This can happen if the
base ptr is live out of the block in which we are performing the optimization.
309 In most cases, this is just a wasted optimization. However, sometimes it can
negatively impact performance because two-address code is more restrictive
311 when it comes to scheduling.
Unfortunately, liveout information is currently unavailable at DAG combine
time.
2) Consider splitting an indexed load / store into a pair of add/sub +
load/store to solve #1 (in TwoAddressInstructionPass.cpp).
319 3) Enhance LSR to generate more opportunities for indexed ops.
4) Once we add support for multiple result patterns, write indexed load
patterns instead of C++ instruction selection code.
324 5) Use VLDM / VSTM to emulate indexed FP load / store.
326 //===---------------------------------------------------------------------===//
328 Implement support for some more tricky ways to materialize immediates. For
329 example, to get 0xffff8000, we can use:
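
One possible two-instruction sequence, as a sketch (both constants are valid
rotated immediates; the exact trick the note had in mind may differ):

	mov	r0, #0x8000		@ r0 = 0x00008000
	sub	r0, r0, #0x10000	@ r0 = 0xffff8000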
334 //===---------------------------------------------------------------------===//
We sometimes generate multiple add / sub instructions to update sp in the
prologue and epilogue if the inc / dec value is too large to fit in a single
immediate operand. In some cases it might be better to load the value from a
constantpool instead (see the sketch below).
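
A sketch of the idea (label and frame size hypothetical):

	ldr	r12, .LCPI_framesize	@ frame size doesn't fit an immediate
	sub	sp, sp, r12
	...
.LCPI_framesize:
	.long	68728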
341 //===---------------------------------------------------------------------===//
343 GCC generates significantly better code for this function.
int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
	int i = 0;
	if (StackPtr != 0) {
		while (StackPtr != 0 && i < (((LineLen) < (32768))? (LineLen) : (32768)))
			Line[i++] = Stack[--StackPtr];
		if (LineLen > 32768) {
			while (StackPtr != 0 && i < LineLen) {
				i++;
				--StackPtr;
			}
		}
	}
	return StackPtr;
}
363 //===---------------------------------------------------------------------===//
365 This should compile to the mlas instruction:
366 int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }
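
A sketch of the desired code (mlas sets the flags directly from the
multiply-accumulate result, so no separate compare is needed; register choices
illustrative):

	mlas	r3, r0, r1, r2	@ flags := x * y + z
	mov	r0, #13
	movmi	r0, #7		@ negative result selects 7
	bx	lr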
368 //===---------------------------------------------------------------------===//
370 At some point, we should triage these to see if they still apply to us:
372 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
373 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
374 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016
376 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
377 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
378 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
379 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
380 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
381 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
382 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982
384 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
385 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
386 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
387 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
388 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
389 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
390 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663
392 http://www.inf.u-szeged.hu/gcc-arm/
393 http://citeseer.ist.psu.edu/debus04linktime.html
395 //===---------------------------------------------------------------------===//
397 gcc generates smaller code for this function at -O2 or -Os:
void foo(signed char* p) {
	if (*p == 3)
		f();
	else if (*p == 4)
		g();
	else if (*p == 5)
		h();
}
llvm decides it's a good idea to turn the repeated if...else into a
binary tree, as if it were a switch; the resulting code requires one fewer
compare-and-branch when *p<=2 or *p==5, the same number when *p==4
or *p>6, and one more when *p==3. So it should be a speed win
(on balance). However, the revised code is larger, with 4 conditional
branches instead of 3.
415 More seriously, there is a byte->word extend before
416 each comparison, where there should be only one, and the condition codes
417 are not remembered when the same two values are compared twice.
419 //===---------------------------------------------------------------------===//
421 More LSR enhancements possible:
1. Teach LSR about pre- and post-indexed ops, to allow the iv increment to be
merged with a load / store.
425 2. Allow iv reuse even when a type conversion is required. For example, i8
426 and i32 load / store addressing modes are identical.
429 //===---------------------------------------------------------------------===//
433 int foo(int a, int b, int c, int d) {
434 long long acc = (long long)a * (long long)b;
435 acc += (long long)c * (long long)d;
return (int)(acc >> 32);
}
Should compile to use SMLAL (Signed Multiply Accumulate Long), which multiplies
two signed 32-bit values to produce a 64-bit product and accumulates it into a
64-bit accumulator.

We currently generate a much longer sequence with both v4 and v6.
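
Roughly the sequence we want (register choices illustrative; r4 would need to
be preserved per the AAPCS):

	smull	r12, r4, r0, r1	@ acc  = (long long)a * b
	smlal	r12, r4, r2, r3	@ acc += (long long)c * d
	mov	r0, r4		@ return the high 32 bits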
452 //===---------------------------------------------------------------------===//
This code:

std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
{ return std::make_pair(a + b, a + b < a); }

bool no_overflow(unsigned a, unsigned b)
{ return !full_add(a, b).second; }

should compile no_overflow down to a cmn plus a move conditional on the carry
flag.
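
A minimal sketch of the desired code (assuming AAPCS argument registers; the
exact conditional-move idiom may vary):

no_overflow:
	cmn	r0, r1		@ flags from a + b; carry set iff the add wraps
	mov	r0, #1
	movcs	r0, #0		@ return !carry
	bx	lr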
497 //===---------------------------------------------------------------------===//
499 Some of the NEON intrinsics may be appropriate for more general use, either
500 as target-independent intrinsics or perhaps elsewhere in the ARM backend.
501 Some of them may also be lowered to target-independent SDNodes, and perhaps
502 some new SDNodes could be added.
504 For example, maximum, minimum, and absolute value operations are well-defined
505 and standard operations, both for vector and scalar types.
507 The current NEON-specific intrinsics for count leading zeros and count one
508 bits could perhaps be replaced by the target-independent ctlz and ctpop
509 intrinsics. It may also make sense to add a target-independent "ctls"
510 intrinsic for "count leading sign bits". Likewise, the backend could use
511 the target-independent SDNodes for these operations.
513 ARMv6 has scalar saturating and halving adds and subtracts. The same
514 intrinsics could possibly be used for both NEON's vector implementations of
515 those operations and the ARMv6 scalar versions.
517 //===---------------------------------------------------------------------===//
519 ARM::MOVCCr is commutable (by flipping the condition). But we need to implement
520 ARMInstrInfo::commuteInstruction() to support it.
522 //===---------------------------------------------------------------------===//
Split out LDR (literal) from the normal ARM LDR instruction. Also consider
splitting LDR into imm12 and so_reg forms. This allows us to clean up some
code. e.g. ARMLoadStoreOptimizer does not need to look at LDR (literal) and
LDR (so_reg), while ARMConstantIslandPass only needs to worry about LDR
(literal).
529 //===---------------------------------------------------------------------===//
531 Constant island pass should make use of full range SoImm values for LEApcrel.
532 Be careful though as the last attempt caused infinite looping on lencod.
534 //===---------------------------------------------------------------------===//
Predication issue. This function:

extern unsigned array[ 128 ];

int foo( int x ) {
	int y;
	y = array[ x & 127 ];
	if ( ... )
		y = 123456789 & ( y >> 2 );
	...
}

compiles to a sequence containing, among other things:

	...
	ldr r1, [r2, +r1, lsl #2]
	...

It would be better to do something like this, to fold the shift into the and
that consumes it:

	...
	ldr r1, [r2, +r1, lsl #2]
	...

as it saves an instruction and a register.
578 //===---------------------------------------------------------------------===//
580 It might be profitable to cse MOVi16 if there are lots of 32-bit immediates
581 with the same bottom half.
583 //===---------------------------------------------------------------------===//
585 Robert Muth started working on an alternate jump table implementation that
586 does not put the tables in-line in the text. This is more like the llvm
587 default jump table implementation. This might be useful sometime. Several
588 revisions of patches are on the mailing list, beginning at:
589 http://lists.cs.uiuc.edu/pipermail/llvmdev/2009-June/022763.html
591 //===---------------------------------------------------------------------===//
593 Make use of the "rbit" instruction.
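
One natural use, as a sketch (rbit requires ARMv6T2 or later): count trailing
zeros via bit-reversal plus clz.

	rbit	r0, r0		@ trailing zeros become leading zeros
	clz	r0, r0		@ count them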
595 //===---------------------------------------------------------------------===//
597 Take a look at test/CodeGen/Thumb2/machine-licm.ll. ARM should be taught how
598 to licm and cse the unnecessary load from cp#1.