//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend.
//===---------------------------------------------------------------------===//

Reimplement 'select' in terms of 'SEL'.

* We would really like to support UXTAB16, but we need to prove that the
add cannot overflow between the two 16-bit chunks (see the sketch after this
list).

* Implement pre/post increment support. (e.g. PR935)

* Implement smarter constant generation for binops with large immediates.
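
For the UXTAB16 item above, the source pattern in question looks something
like this (a sketch; the function name is illustrative):

unsigned f(unsigned a, unsigned b) {
  /* Candidate for "uxtab16 r0, r0, r1": add the zero-extended bytes 0 and 2
     of b into the two 16-bit halves of a.  This is only equivalent to the
     32-bit add if the low 16-bit add cannot carry into bit 16. */
  return a + (b & 0x00FF00FF);
}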

A few ARMv6T2 ops should be pattern matched: BFI, SBFX, and UBFX.
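
The kinds of patterns they should match are along these lines (a sketch; the
field positions and widths are just examples):

unsigned test_ubfx(unsigned x) {
  return (x >> 7) & 0x1f;                     /* ubfx r0, r0, #7, #5 */
}
int test_sbfx(unsigned x) {
  return (int)(x << 20) >> 27;                /* sbfx: bits [11:7], sign-extended */
}
unsigned test_bfi(unsigned x, unsigned y) {
  return (x & ~0xff0u) | ((y & 0xffu) << 4);  /* bfi: insert y[7:0] at bit 4 */
}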

//===---------------------------------------------------------------------===//

Crazy idea: Consider code that uses lots of 8-bit or 16-bit values. By the
time regalloc happens, these values are now in a 32-bit register, usually with
the top bits known to be sign or zero extended. If spilled, we should be able
to spill these to an 8-bit or 16-bit stack slot, zero or sign extending as
part of the spill.

Doing this reduces the size of the stack frame (important for thumb etc), and
also increases the likelihood that we will be able to reload multiple values
from the stack with a single load.

//===---------------------------------------------------------------------===//

The constant island pass is in good shape. Some cleanups might be desirable,
but there is unlikely to be much improvement in the generated code.

1. There may be some advantage to trying to be smarter about the initial
placement, rather than putting everything at the end.

2. There might be some compile-time efficiency to be had by representing
consecutive islands as a single block rather than multiple blocks.

3. Use a priority queue to sort constant pool users in inverse order of
position, so we always process the one closest to the end of the function
first. This may simplify CreateNewWater.

//===---------------------------------------------------------------------===//

Eliminate copysign custom expansion. We are still generating crappy code with
default expansion + if-conversion.
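
For reference, the operation being expanded is just a bit transfer (a sketch
of the semantics, not of the backend code):

#include <stdint.h>
#include <string.h>

double my_copysign(double x, double y) {
  uint64_t xb, yb;
  memcpy(&xb, &x, sizeof xb);                        /* magnitude from x */
  memcpy(&yb, &y, sizeof yb);                        /* sign bit from y  */
  xb = (xb & ~(1ULL << 63)) | (yb & (1ULL << 63));
  memcpy(&x, &xb, sizeof xb);
  return x;
}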

//===---------------------------------------------------------------------===//

Eliminate one instruction from:

define i32 @_Z6slow4bii(i32 %x, i32 %y) {
        %tmp = icmp sgt i32 %x, %y
        %retval = select i1 %tmp, i32 %x, i32 %y
        ret i32 %retval
}

//===---------------------------------------------------------------------===//

Implement long long "X-3" with instructions that fold the immediate in. These
were disabled due to badness with the ARM carry flag on subtracts.
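
The case in question is a 64-bit subtract of a small constant, which should
become a subs/sbc pair with the immediate folded in (a sketch):

long long sub3(long long x) {
  return x - 3;   /* want: subs r0, r0, #3 / sbc r1, r1, #0 */
}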

//===---------------------------------------------------------------------===//

More load / store optimizations:

1) Better representation for block transfer? In code like Olden/power, if we
can spare the registers, it would be better to use fldm and fstm. That needs a
major register allocator enhancement, though.

2) Can we recognize the relative position of constantpool entries? i.e. treat
loads of consecutive constantpool entries as loads off a common base register,
so that the ldr's can be combined into a single ldm. See Olden/power.

Note that for ARM v4, gcc uses ldmia to load a pair of 32-bit values to
represent a 64-bit double FP constant.

3) Struct copies appear to be done field by field instead of by words, at
least sometimes:

struct foo { int x; short s; char c1; char c2; };
void cpy(struct foo*a, struct foo*b) { *a = *b; }

In this benchmark, poor handling of aggregate copies has shown up as having a
large effect on size, and possibly on speed as well (we don't have a good way
to measure on ARM).

//===---------------------------------------------------------------------===//

* Consider this silly example:

double bar(double x) {

        stmfd sp!, {r4, r5, r7, lr}

        ldmfd sp!, {r4, r5, r7, pc}

Ignore the prologue and epilogue stuff for a second. Note the copies to
callee-save registers and the fact that they are only being used by the fmdrr
instruction. It would have been better had the fmdrr been scheduled before the
call, placing the result in a callee-save DPR register. The two mov ops would
not have been necessary.

//===---------------------------------------------------------------------===//

Calling convention related stuff:

* gcc's parameter passing implementation is terrible and we suffer as a result:

void foo(struct s S) {
  printf("%g, %d\n", S.d1, S.s1);
}

'S' is passed via registers r0, r1, r2. But gcc stores them to the stack, and
then reloads them to r1, r2, and r3 before issuing the call (r0 contains the
address of the format string):

        stmia sp, {r0, r1, r2}

Instead of a stmia, ldmia, and a ldr, wouldn't it be better to do three moves?

* Returning an aggregate type is even worse:

struct s S = {1.1, 2};

        @ lr needed for prologue
        ldmia r0, {r0, r1, r2}
        stmia sp, {r0, r1, r2}
        stmia ip, {r0, r1, r2}

r0 (and later ip) is the hidden parameter from the caller giving the address
to store the value in. The first ldmia loads the constants into r0, r1, r2.
The last stmia stores r0, r1, r2 into the address passed in. However, there is
one additional stmia that stores r0, r1, and r2 to some stack location. That
store is dead.

The llvm-gcc generated code looks like this:

csretcc void %foo(%struct.s* %agg.result) {
        %S = alloca %struct.s, align 4          ; <%struct.s*> [#uses=1]
        %memtmp = alloca %struct.s              ; <%struct.s*> [#uses=1]
        cast %struct.s* %S to sbyte*            ; <sbyte*>:0 [#uses=2]
        call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
        cast %struct.s* %agg.result to sbyte*   ; <sbyte*>:1 [#uses=2]
        call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
        cast %struct.s* %memtmp to sbyte*       ; <sbyte*>:2 [#uses=1]
        call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )

llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from the
constantpool). Perhaps we should 1) fix llvm-gcc so the memcpy is translated
into a number of loads and stores, or 2) custom lower memcpy (of small size)
to ldmia / stmia. I think option 2 is better, but the current register
allocator cannot allocate a chunk of registers at a time.

A feasible temporary solution is to use specific physical registers at
lowering time for small (<= 4 words?) transfer sizes.
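
As a concrete instance of option 2 (illustrative only), a three-word aggregate
copy like this one could become a single ldmia / stmia pair if the allocator
could hand us three consecutive scratch registers:

struct three_words { int a, b, c; };

void copy3(struct three_words *dst, const struct three_words *src) {
  *dst = *src;   /* want: ldmia from src, stmia to dst */
}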

* ARM CSRet calling convention requires the hidden argument to be returned by
the callee.

//===---------------------------------------------------------------------===//

We can definitely do a better job on BB placements to eliminate some branches.
It's very common to see llvm generated assembly code that looks like this:

If BB4 is the only predecessor of BB3, then we can emit BB3 after BB4. We can
then eliminate the beq and turn the unconditional branch to LBB2 into a bne.

See McCat/18-imp/ComputeBoundingBoxes for an example.

//===---------------------------------------------------------------------===//

Pre-/post- indexed load / stores (a small motivating example follows this
list):

1) We should not make the pre/post-indexed load/store transform if the base
ptr is guaranteed to be live beyond the load/store. This can happen if the
base ptr is live out of the block we are performing the optimization in.

In most cases, this is just a wasted optimization. However, sometimes it can
negatively impact performance because two-address code is more restrictive
when it comes to scheduling.

Unfortunately, liveout information is currently unavailable during DAG combine
time.

2) Consider splitting an indexed load / store into a pair of add/sub +
load/store to solve #1 (in TwoAddressInstructionPass.cpp).

3) Enhance LSR to generate more opportunities for indexed ops.

4) Once we add support for multiple result patterns, write patterns for
indexed loads instead of C++ instruction selection code.

5) Use VLDM / VSTM to emulate indexed FP load / store.
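
The motivating example mentioned above (a sketch): the pointer updates in a
copy loop like this should fold into post-indexed loads and stores such as
"ldr r3, [r1], #4".

void copy_words(int *dst, const int *src, int n) {
  while (n-- > 0)
    *dst++ = *src++;   /* want: ldr rX, [src], #4 / str rX, [dst], #4 */
}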

//===---------------------------------------------------------------------===//

Implement support for some more tricky ways to materialize immediates. For
example, to get 0xffff8000, we can use:

mov r9, #&3f8000
sub r9, r9, #&400000

//===---------------------------------------------------------------------===//

We sometimes generate multiple add / sub instructions to update sp in the
prologue and epilogue if the inc / dec value is too large to fit in a single
immediate operand. In some cases, perhaps it might be better to load the value
from a constantpool instead.

//===---------------------------------------------------------------------===//

GCC generates significantly better code for this function:

int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
    int i = 0;
    while (StackPtr != 0 && i < (((LineLen) < (32768))? (LineLen) : (32768)))
        Line[i++] = Stack[--StackPtr];
    while (StackPtr != 0 && i < LineLen) {
        i++;
        --StackPtr;
    }
    return StackPtr;
}

//===---------------------------------------------------------------------===//

This should compile to the mlas instruction:

int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }

//===---------------------------------------------------------------------===//

At some point, we should triage these to see if they still apply to us:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663

http://www.inf.u-szeged.hu/gcc-arm/
http://citeseer.ist.psu.edu/debus04linktime.html

//===---------------------------------------------------------------------===//

gcc generates smaller code for this function at -O2 or -Os:

void foo(signed char* p) {

llvm decides it's a good idea to turn the repeated if...else into a binary
tree, as if it were a switch; the resulting code requires one fewer
compare-and-branch when *p<=2 or *p==5, the same number when *p==4 or *p>6,
and one more when *p==3. So it should be a speed win on balance. However, the
revised code is larger, with 4 conditional branches instead of 3.

More seriously, there is a byte->word extend before each comparison, where
there should be only one, and the condition codes are not remembered when the
same two values are compared twice.

//===---------------------------------------------------------------------===//

More LSR enhancements possible:

1. Teach LSR about pre- and post-indexed ops to allow the iv increment to be
merged with a load / store.

2. Allow iv reuse even when a type conversion is required. For example, i8
and i32 load / store addressing modes are identical.

//===---------------------------------------------------------------------===//

This:

int foo(int a, int b, int c, int d) {
  long long acc = (long long)a * (long long)b;
  acc += (long long)c * (long long)d;
  return (int)(acc >> 32);
}

should compile to use SMLAL (Signed Multiply Accumulate Long), which
multiplies two signed 32-bit values to produce a 64-bit value and accumulates
it into a 64-bit accumulator.

We currently get this with both v4 and v6:

//===---------------------------------------------------------------------===//

std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
{ return std::make_pair(a + b, a + b < a); }
bool no_overflow(unsigned a, unsigned b)
{ return !full_add(a, b).second; }

//===---------------------------------------------------------------------===//

Some of the NEON intrinsics may be appropriate for more general use, either
as target-independent intrinsics or perhaps elsewhere in the ARM backend.
Some of them may also be lowered to target-independent SDNodes, and perhaps
some new SDNodes could be added.

For example, maximum, minimum, and absolute value operations are well-defined
and standard operations, both for vector and scalar types.

The current NEON-specific intrinsics for count leading zeros and count one
bits could perhaps be replaced by the target-independent ctlz and ctpop
intrinsics. It may also make sense to add a target-independent "ctls"
intrinsic for "count leading sign bits". Likewise, the backend could use
the target-independent SDNodes for these operations.

ARMv6 has scalar saturating and halving adds and subtracts. The same
intrinsics could possibly be used for both NEON's vector implementations of
those operations and the ARMv6 scalar versions.
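
For instance, the scalar operation behind ARMv6 QADD, which such a shared
intrinsic could cover for both the scalar and NEON forms, is just this (a
sketch of the semantics):

#include <stdint.h>

int32_t sat_add(int32_t a, int32_t b) {
  int64_t sum = (int64_t)a + (int64_t)b;   /* widen, then clamp */
  if (sum > INT32_MAX) return INT32_MAX;
  if (sum < INT32_MIN) return INT32_MIN;
  return (int32_t)sum;                     /* want: qadd r0, r0, r1 */
}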

//===---------------------------------------------------------------------===//

ARM::MOVCCr is commutable (by flipping the condition). But we need to
implement ARMInstrInfo::commuteInstruction() to support it.

//===---------------------------------------------------------------------===//

Split out LDR (literal) from the normal ARM LDR instruction. Also consider
splitting LDR into imm12 and so_reg forms. This allows us to clean up some
code; e.g. ARMLoadStoreOptimizer does not need to look at LDR (literal) and
LDR (so_reg), while ARMConstantIslandPass only needs to worry about LDR
(literal).

//===---------------------------------------------------------------------===//

The constant island pass should make use of full-range SoImm values for
LEApcrel. Be careful, though, as the last attempt caused infinite looping on
lencod.

//===---------------------------------------------------------------------===//

Predication issue. This function:

extern unsigned array[ 128 ];

  y = array[ x & 127 ];

  y = 123456789 & ( y >> 2 );

compiles to code containing:

        ldr r1, [r2, +r1, lsl #2]

It would be better to do something like this, to fold the shift in:

        ldr r1, [r2, +r1, lsl #2]

It saves an instruction and a register.

//===---------------------------------------------------------------------===//

It might be profitable to cse MOVi16 if there are lots of 32-bit immediates
with the same bottom half.
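
For example (illustrative constants), both masks below share the bottom half
0x5678, so a single movw could be CSE'd and only the movt's would differ:

unsigned g(unsigned a, unsigned b) {
  return (a & 0x12345678) | (b & 0xABCD5678);
}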

//===---------------------------------------------------------------------===//

Robert Muth started working on an alternate jump table implementation that
does not put the tables in-line in the text. This is more like the llvm
default jump table implementation. This might be useful sometime. Several
revisions of patches are on the mailing list, beginning at:
http://lists.cs.uiuc.edu/pipermail/llvmdev/2009-June/022763.html

//===---------------------------------------------------------------------===//

Make use of the "rbit" instruction.
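
For example, a generic 32-bit bit reversal like this one (a sketch) should
become a single rbit on targets that have it:

unsigned reverse_bits(unsigned x) {
  x = ((x & 0x55555555u) << 1) | ((x >> 1) & 0x55555555u);  /* swap odd/even bits */
  x = ((x & 0x33333333u) << 2) | ((x >> 2) & 0x33333333u);  /* swap bit pairs     */
  x = ((x & 0x0F0F0F0Fu) << 4) | ((x >> 4) & 0x0F0F0F0Fu);  /* swap nibbles       */
  x = (x << 24) | ((x & 0xFF00u) << 8) |                    /* reverse bytes      */
      ((x >> 8) & 0xFF00u) | (x >> 24);
  return x;
}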

//===---------------------------------------------------------------------===//

Take a look at test/CodeGen/Thumb2/machine-licm.ll. ARM should be taught how
to licm and cse the unnecessary load from cp#1.

//===---------------------------------------------------------------------===//

The CMN instruction sets the flags like an ADD instruction, while CMP sets
them like a subtract. Therefore, to be able to use CMN for comparisons other
than the Z bit, we'll need additional logic to reverse the conditionals
associated with the comparison. Perhaps a pseudo-instruction for the
comparison, with a post-codegen pass to clean up and handle the condition
codes? See PR5694 for a testcase.
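
A comparison where CMN applies (a sketch): testing equality against a negated
value. The Z-bit case below is already safe; for signed orderings the
condition code would have to be reversed as described above.

int eq_neg(int a, int b) {
  return a == -b;   /* want: cmn r0, r1 (flags of a + b), then moveq/movne */
}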