1 //===---------------------------------------------------------------------===//
2 // Random ideas for the ARM backend.
3 //===---------------------------------------------------------------------===//
5 Reimplement 'select' in terms of 'SEL'.
* We would really like to support UXTAB16, but we need to prove that the
add cannot overflow between the two 16-bit chunks (see the sketch after
this list).
10 * Implement pre/post increment support. (e.g. PR935)
11 * Implement smarter constant generation for binops with large immediates.
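
As a reference for the UXTAB16 item, a minimal C sketch of what the
instruction computes (the function name and shape are illustrative, not from
any source):

/* UXTAB16: zero-extend bytes 0 and 2 of b to 16 bits and add them to
   the two 16-bit lanes of a. A single 32-bit ADD matches this only
   when the low-lane sum cannot carry past bit 15. */
unsigned uxtab16_ref(unsigned a, unsigned b) {
  unsigned lo = (a & 0xffffu) + (b & 0xffu);
  unsigned hi = ((a >> 16) & 0xffffu) + ((b >> 16) & 0xffu);
  return ((hi & 0xffffu) << 16) | (lo & 0xffffu);
}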
13 //===---------------------------------------------------------------------===//
15 Crazy idea: Consider code that uses lots of 8-bit or 16-bit values. By the
16 time regalloc happens, these values are now in a 32-bit register, usually with
the top bits known to be sign- or zero-extended. If spilled, we should be able
to spill these to an 8-bit or 16-bit stack slot, zero- or sign-extending as
part of the reload.
21 Doing this reduces the size of the stack frame (important for thumb etc), and
22 also increases the likelihood that we will be able to reload multiple values
23 from the stack with a single load.
25 //===---------------------------------------------------------------------===//
27 The constant island pass is in good shape. Some cleanups might be desirable,
28 but there is unlikely to be much improvement in the generated code.
30 1. There may be some advantage to trying to be smarter about the initial
31 placement, rather than putting everything at the end.
33 2. There might be some compile-time efficiency to be had by representing
34 consecutive islands as a single block rather than multiple blocks.
3. Use a priority queue to sort constant pool users in inverse order of
   position, so we always process the one closest to the end of the function
   first. This may simplify CreateNewWater.
40 //===---------------------------------------------------------------------===//
42 Eliminate copysign custom expansion. We are still generating crappy code with
43 default expansion + if-conversion.
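
For reference, copysign is pure bit manipulation, so whatever expansion we
pick only has to beat a couple of mask-and-or operations; a minimal sketch of
the semantics (not the current expansion):

#include <stdint.h>
#include <string.h>

/* copysign(x, y): magnitude of x with the sign bit of y. */
double copysign_ref(double x, double y) {
  uint64_t xi, yi;
  memcpy(&xi, &x, sizeof xi);
  memcpy(&yi, &y, sizeof yi);
  xi = (xi & ~(1ULL << 63)) | (yi & (1ULL << 63));
  memcpy(&x, &xi, sizeof xi);
  return x;
}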
45 //===---------------------------------------------------------------------===//
47 Eliminate one instruction from:
49 define i32 @_Z6slow4bii(i32 %x, i32 %y) {
50 %tmp = icmp sgt i32 %x, %y
%retval = select i1 %tmp, i32 %x, i32 %y
ret i32 %retval
}
67 //===---------------------------------------------------------------------===//
69 Implement long long "X-3" with instructions that fold the immediate in. These
70 were disabled due to badness with the ARM carry flag on subtracts.
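
The pattern in question, as a sketch; the two-instruction sequence in the
comment is an assumption about what folding the immediate should look like:

/* Ideally: subs r0, r0, #3 followed by sbc r1, r1, #0 (remember the
   ARM carry flag is an inverted borrow on subtracts). */
long long sub3(long long x) { return x - 3; }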
72 //===---------------------------------------------------------------------===//
74 More load / store optimizations:
1) Better representation for block transfer? This shows up in Olden/power.
86 If we can spare the registers, it would be better to use fldm and fstm here.
87 Need major register allocator enhancement though.
2) Can we recognize the relative position of constant pool entries? i.e. treat
loads from consecutive constant pool entries as loads from adjacent memory
locations. Then the ldr's can be combined into a single ldm. See Olden/power.
Note that for ARM v4, gcc uses ldmia to load a pair of 32-bit values
representing a 64-bit double FP constant.
3) Struct copies appear to be done field by field
instead of by words, at least sometimes:
116 struct foo { int x; short s; char c1; char c2; };
117 void cpy(struct foo*a, struct foo*b) { *a = *b; }
In this benchmark, poor handling of aggregate copies has shown up as
having a large effect on size, and possibly speed as well (we don't have
a good way to measure speed on ARM).
136 //===---------------------------------------------------------------------===//
138 * Consider this silly example:
140 double bar(double x) {
146 stmfd sp!, {r4, r5, r7, lr}
158 ldmfd sp!, {r4, r5, r7, pc}
Ignore the prologue and epilogue stuff for a second. Note
the copies to callee-save registers and the fact that they are only used by
the fmdrr instruction. It would have been better had the fmdrr been scheduled
before the call, placing the result in a callee-save DPR register. The two
mov ops would not have been necessary.
168 //===---------------------------------------------------------------------===//
170 Calling convention related stuff:
172 * gcc's parameter passing implementation is terrible and we suffer as a result:
#include <stdio.h>

struct s { double d1; int s1; };

void foo(struct s S) {
  printf("%g, %d\n", S.d1, S.s1);
}
'S' is passed via registers r0, r1, r2. But gcc stores them to the stack, and
then reloads them to r1, r2, and r3 before issuing the call (r0 contains the
address of the format string):
191 stmia sp, {r0, r1, r2}
Instead of an stmia, an ldmia, and an ldr, wouldn't it be better to do three
moves?
* Returning an aggregate type is even worse:
struct s foo() {
  struct s S = {1.1, 2};
  return S;
}
214 @ lr needed for prologue
215 ldmia r0, {r0, r1, r2}
216 stmia sp, {r0, r1, r2}
217 stmia ip, {r0, r1, r2}
r0 (and later ip) is the hidden parameter from the caller giving the address
to store the value at. The first ldmia loads the constants into r0, r1, r2.
The last stmia stores r0, r1, r2 to the address passed in. However, there is
one additional stmia that stores r0, r1, and r2 to some stack location. That
store is dead.
227 The llvm-gcc generated code looks like this:
229 csretcc void %foo(%struct.s* %agg.result) {
231 %S = alloca %struct.s, align 4 ; <%struct.s*> [#uses=1]
232 %memtmp = alloca %struct.s ; <%struct.s*> [#uses=1]
233 cast %struct.s* %S to sbyte* ; <sbyte*>:0 [#uses=2]
234 call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
235 cast %struct.s* %agg.result to sbyte* ; <sbyte*>:1 [#uses=2]
236 call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
237 cast %struct.s* %memtmp to sbyte* ; <sbyte*>:2 [#uses=1]
call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )
ret void
}
llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from the
constant pool). Perhaps we should 1) fix llvm-gcc so the memcpy is translated
into a number of loads and stores, or 2) custom lower memcpy (of small size)
to be ldmia / stmia. I think option 2 is better, but the current register
allocator cannot allocate a chunk of registers at a time.
A feasible temporary solution is to use specific physical registers at
lowering time for small (<= 4 words?) transfer sizes.
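
The shape of the case in question, as a sketch (names are made up): a 12-byte
aggregate copy that could be one ldmia / stmia pair given three consecutive
scratch registers.

struct s3 { int a, b, c; };   /* 12 bytes, as in the example above */

/* Ideally: ldmia r1, {r2, r3, r12} followed by stmia r0, {r2, r3, r12} */
void copy3(struct s3 *d, struct s3 *s) { *d = *s; }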
* The ARM CSRet calling convention requires the hidden argument to be
returned by the function.
254 //===---------------------------------------------------------------------===//
We can definitely do a better job on BB placement to eliminate some branches.
It's very common to see llvm-generated assembly where a conditional branch
(e.g. a beq into BB3) is immediately followed by an unconditional branch
(e.g. a b to LBB2). If BB4 is the only predecessor of BB3, then we can emit
BB3 after BB4; we can then eliminate the beq and turn the unconditional
branch to LBB2 into a bne.
269 See McCat/18-imp/ComputeBoundingBoxes for an example.
271 //===---------------------------------------------------------------------===//
273 Pre-/post- indexed load / stores:
1) We should not perform the pre-/post-indexed load/store transform if the
base pointer is guaranteed to be live beyond the load/store. This can happen
if the base pointer is live out of the block where we perform the
optimization.
289 In most cases, this is just a wasted optimization. However, sometimes it can
290 negatively impact the performance because two-address code is more restrictive
291 when it comes to scheduling.
Unfortunately, liveout information is currently unavailable during DAG combine
time.
2) Consider splitting an indexed load / store into a pair of add/sub +
load/store to solve #1 (in TwoAddressInstructionPass.cpp).
3) Enhance LSR to generate more opportunities for indexed ops (see the loop
sketch after this list).
4) Once we add support for multiple result patterns, write indexed load
patterns instead of C++ instruction selection code.
304 5) Use VLDM / VSTM to emulate indexed FP load / store.
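
A sketch of the kind of loop item 3 is about (names illustrative): the
pointer bump can merge into a post-indexed load.

/* With post-indexed addressing, "ldrb r3, [r0], #1" performs the load
   and the increment in one instruction. */
int sum_bytes(unsigned char *p, int n) {
  int s = 0;
  while (n--)
    s += *p++;
  return s;
}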
306 //===---------------------------------------------------------------------===//
Implement support for some more tricky ways to materialize immediates. For
example, 0xffff8000 can be built with two data-processing instructions
instead of a constant pool load; see the sketch below.
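
One such sequence, as a sketch (my arithmetic, not necessarily the sequence
originally intended here): both #0x8000 and #0x10000 are encodable
rotated-imm8 constants, and 0x8000 - 0x10000 wraps to 0xffff8000.

/* mov r0, #0x8000        @ 0x8000  is imm8 0x80 ror 24, encodable
   sub r0, r0, #0x10000   @ 0x10000 is imm8 0x01 ror 16, encodable
   leaves r0 = 0x8000 - 0x10000 = 0xffff8000 (mod 2^32)            */
unsigned k0xffff8000(void) { return 0xffff8000u; }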
314 //===---------------------------------------------------------------------===//
We sometimes generate multiple add / sub instructions to update sp in the
prologue and epilogue if the inc / dec value is too large to fit in a single
immediate operand. In some cases it might be better to load the value from a
constant pool instead.
321 //===---------------------------------------------------------------------===//
323 GCC generates significantly better code for this function.
int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
  int i = 0;
  while (StackPtr != 0 && i < (((LineLen) < (32768))? (LineLen) : (32768)))
    Line[i++] = Stack[--StackPtr];
  while (StackPtr != 0 && i < LineLen) {
    i++;
    --StackPtr;
  }
  return StackPtr;
}
343 //===---------------------------------------------------------------------===//
345 This should compile to the mlas instruction:
346 int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }
348 //===---------------------------------------------------------------------===//
350 At some point, we should triage these to see if they still apply to us:
352 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
353 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
354 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016
356 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
357 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
358 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
359 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
360 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
361 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
362 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982
364 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
365 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
366 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
367 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
368 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
369 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
370 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663
372 http://www.inf.u-szeged.hu/gcc-arm/
373 http://citeseer.ist.psu.edu/debus04linktime.html
375 //===---------------------------------------------------------------------===//
377 gcc generates smaller code for this function at -O2 or -Os:
void bar(void), baz(void), quux(void);

void foo(signed char* p) {
  if (*p == 3)
    bar();
  else if (*p == 4)
    baz();
  else if (*p == 5)
    quux();
}
llvm decides it's a good idea to turn the repeated if...else into a binary
tree, as if it were a switch; the resulting code requires one fewer
compare-and-branch when *p<=2 or *p==5, the same number when *p==4 or *p>6,
and one more when *p==3. So it should be a speed win on balance. However,
the revised code is larger, with 4 conditional branches instead of 3.
395 More seriously, there is a byte->word extend before
396 each comparison, where there should be only one, and the condition codes
397 are not remembered when the same two values are compared twice.
399 //===---------------------------------------------------------------------===//
401 More LSR enhancements possible:
1. Teach LSR about pre- and post-indexed ops to allow the iv increment to be
merged with the load / store.
405 2. Allow iv reuse even when a type conversion is required. For example, i8
406 and i32 load / store addressing modes are identical.
409 //===---------------------------------------------------------------------===//
413 int foo(int a, int b, int c, int d) {
414 long long acc = (long long)a * (long long)b;
415 acc += (long long)c * (long long)d;
return (int)(acc >> 32);
}
Should compile to use SMLAL (Signed Multiply Accumulate Long), which
multiplies two signed 32-bit values to produce a 64-bit value and accumulates
it into a 64-bit accumulator. We currently fail to use SMLAL with both v4
and v6.
432 //===---------------------------------------------------------------------===//
#include <utility>

std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
{ return std::make_pair(a + b, a + b < a); }
bool no_overflow(unsigned a, unsigned b)
{ return !full_add(a, b).second; }
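
For reference, a sketch of why this should be cheap (the instruction sequence
in the comment is an assumption, not current output): a single adds produces
the carry flag, which already answers the question.

/* no_overflow(a, b) is just "the add didn't wrap":
     adds  r0, r0, r1
     mov   r0, #1
     movcs r0, #0
   No pair needs to be materialized at all. */
int no_overflow_ref(unsigned a, unsigned b) { return a + b >= a; }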
477 //===---------------------------------------------------------------------===//
479 Some of the NEON intrinsics may be appropriate for more general use, either
480 as target-independent intrinsics or perhaps elsewhere in the ARM backend.
481 Some of them may also be lowered to target-independent SDNodes, and perhaps
482 some new SDNodes could be added.
484 For example, maximum, minimum, and absolute value operations are well-defined
485 and standard operations, both for vector and scalar types.
487 The current NEON-specific intrinsics for count leading zeros and count one
488 bits could perhaps be replaced by the target-independent ctlz and ctpop
489 intrinsics. It may also make sense to add a target-independent "ctls"
490 intrinsic for "count leading sign bits". Likewise, the backend could use
491 the target-independent SDNodes for these operations.
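
A reference sketch of the proposed ctls semantics, assuming the ARM CLS
definition (the number of bits below the sign bit that equal it); the
intrinsic idea is from the paragraph above, the code is an illustration:

/* XOR-ing x with its sign fill turns leading sign bits into leading
   zeros; subtract 1 so the sign bit itself is not counted. Uses the
   GCC/Clang __builtin_clz and arithmetic shift of a negative int. */
int ctls_ref(int x) {
  unsigned y = (unsigned)(x ^ (x >> 31));
  return y ? (int)__builtin_clz(y) - 1 : 31;
}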
493 ARMv6 has scalar saturating and halving adds and subtracts. The same
494 intrinsics could possibly be used for both NEON's vector implementations of
495 those operations and the ARMv6 scalar versions.
497 //===---------------------------------------------------------------------===//
499 ARM::MOVCCr is commutable (by flipping the condition). But we need to implement
500 ARMInstrInfo::commuteInstruction() to support it.
502 //===---------------------------------------------------------------------===//
Split out LDR (literal) from the normal ARM LDR instruction. Also consider
splitting LDR into imm12 and so_reg forms. This would allow us to clean up
some code; e.g. ARMLoadStoreOptimizer would not need to look at LDR (literal)
and LDR (so_reg), while ARMConstantIslandPass would only need to worry about
LDR (literal).
509 //===---------------------------------------------------------------------===//
511 Constant island pass should make use of full range SoImm values for LEApcrel.
512 Be careful though as the last attempt caused infinite looping on lencod.
514 //===---------------------------------------------------------------------===//
516 Predication issue. This function:
518 extern unsigned array[ 128 ];
521 y = array[ x & 127 ];
523 y = 123456789 & ( y >> 2 );
535 ldr r1, [r2, +r1, lsl #2]
543 It would be better to do something like this, to fold the shift into the
549 ldr r1, [r2, +r1, lsl #2]
This saves an instruction and a register.
558 //===---------------------------------------------------------------------===//
560 It might be profitable to cse MOVi16 if there are lots of 32-bit immediates
561 with the same bottom half.
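
An illustration with made-up constants: both values would start from the same
movw, so one movw plus two movt's could replace two movw/movt pairs.

/* 0x11110005 and 0x22220005 share the bottom half: CSE the
   "movw rN, #5" and keep only the differing movt's. */
unsigned pick(int c) { return c ? 0x11110005u : 0x22220005u; }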
563 //===---------------------------------------------------------------------===//
565 Robert Muth started working on an alternate jump table implementation that
566 does not put the tables in-line in the text. This is more like the llvm
567 default jump table implementation. This might be useful sometime. Several
568 revisions of patches are on the mailing list, beginning at:
569 http://lists.cs.uiuc.edu/pipermail/llvmdev/2009-June/022763.html
571 //===---------------------------------------------------------------------===//
573 Make use of the "rbit" instruction.
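
For instance, the generic bit-reversal idiom below could be pattern-matched
to a single rbit (available on ARMv6T2 and later); the loop is a reference
implementation, not a claim about what currently matches:

/* Reverse the 32 bits of x; ideally the whole loop becomes
   "rbit r0, r0". */
unsigned bit_reverse(unsigned x) {
  unsigned r = 0;
  for (int i = 0; i < 32; ++i) {
    r = (r << 1) | (x & 1u);
    x >>= 1;
  }
  return r;
}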
575 //===---------------------------------------------------------------------===//
577 Take a look at test/CodeGen/Thumb2/machine-licm.ll. ARM should be taught how
578 to licm and cse the unnecessary load from cp#1.
580 //===---------------------------------------------------------------------===//
The CMN instruction sets the flags like an ADD instruction, while CMP sets
them like a subtract. Therefore, to be able to use CMN for comparisons other
than equality (the Z bit), we'll need additional logic to reverse the
conditionals associated with the comparison. Perhaps a pseudo-instruction
for the comparison, with a post-codegen pass to clean up and handle the
condition codes? See PR5694 for a testcase.
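
A small example of the issue (the lowering in the comment is the desired one,
assumed rather than taken from the PR):

/* "x == -1" can be "cmn r0, #1": the flags of r0 + 1, with no need to
   materialize -1. EQ/NE carry over directly; ordered conditions such
   as "x < -1" are where the condition-reversal logic above is needed. */
int is_minus_one(int x) { return x == -1; }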