1 //===---------------------------------------------------------------------===//
2 // Random ideas for the ARM backend.
3 //===---------------------------------------------------------------------===//
5 Reimplement 'select' in terms of 'SEL'.
* We would really like to support UXTAB16, but we need to prove that the
  add cannot carry between the two 16-bit chunks (see the sketch after this
  list).
* Implement predication support.
11 * Implement pre/post increment support. (e.g. PR935)
12 * Coalesce stack slots!
13 * Implement smarter constant generation for binops with large immediates.
* Consider materializing FP constants like 0.0f and 1.0f using integer
  immediate instructions and then copying to the FPU. Is that slower than a
  load into the FPU?
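
A minimal sketch, in C, of the pattern UXTAB16 could cover; the function name
is illustrative, and the masking makes the per-lane wrap explicit (the proof
obligation above is showing that the low-lane add never carries into bit 16):

unsigned uxtab16_like(unsigned acc, unsigned bytes) {
  /* low 16-bit lane: acc<15:0> + zero-extended bytes<7:0> */
  unsigned lo = (acc & 0xffff) + (bytes & 0xff);
  /* high 16-bit lane: acc<31:16> + zero-extended bytes<23:16> */
  unsigned hi = (acc >> 16) + ((bytes >> 16) & 0xff);
  return ((hi & 0xffff) << 16) | (lo & 0xffff);
}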
18 //===---------------------------------------------------------------------===//
20 The constant island pass is in good shape. Some cleanups might be desirable,
21 but there is unlikely to be much improvement in the generated code.
23 1. There may be some advantage to trying to be smarter about the initial
24 placement, rather than putting everything at the end.
26 2. The handling of 2-byte padding for Thumb is overly conservative. There
27 would be a small gain to keeping accurate track of the padding (which would
28 require aligning functions containing constant pools to 4-byte boundaries).
30 3. There might be some compile-time efficiency to be had by representing
31 consecutive islands as a single block rather than multiple blocks.
4. Use a priority queue to sort constant pool users in inverse order of
   position, so we always process the one closest to the end of the function
   first. This may simplify CreateNewWater.
37 //===---------------------------------------------------------------------===//
39 We need to start generating predicated instructions. The .td files have a way
40 to express this now (see the PPC conditional return instruction), but the
41 branch folding pass (or a new if-cvt pass) should start producing these, at
42 least in the trivial case.
Among the obvious wins, doing so can eliminate the need to custom expand
copysign (i.e. we won't need to custom expand it to get the conditional
negate).
48 This allows us to eliminate one instruction from:
define i32 @_Z6slow4bii(i32 %x, i32 %y) {
entry:
        %tmp = icmp sgt i32 %x, %y
        %retval = select i1 %tmp, i32 %x, i32 %y
        ret i32 %retval
}
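
A sketch of the win (exact register assignments illustrative):

        @ today (roughly):
        cmp   r0, r1
        movgt r1, r0
        mov   r0, r1
        bx    lr

        @ selecting the predicated move directly into r0:
        cmp   r0, r1
        movle r0, r1
        bx    lr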
62 //===---------------------------------------------------------------------===//
64 Implement long long "X-3" with instructions that fold the immediate in. These
65 were disabled due to badness with the ARM carry flag on subtracts.
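
For reference, a sketch of the folded form for x - 3 with x an i64 in the
r1:r0 pair:

        subs r0, r0, #3         @ low word; sets carry (= not-borrow on ARM)
        sbc  r1, r1, #0         @ high word consumes the borrow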
67 //===---------------------------------------------------------------------===//
69 We currently compile abs:
70 int foo(int p) { return p < 0 ? -p : p; }
This is very, uh, literal. This could be a 3 operation sequence:

  t = (p sra 31);
  res = (p xor t) - t
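
A sketch of that sequence in ARM assembly:

        mov r1, r0, asr #31     @ t = p >> 31 (0 or all-ones)
        eor r0, r0, r1          @ p ^ t
        sub r0, r0, r1          @ (p ^ t) - t == |p|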
85 Which would be better. This occurs in png decode.
87 //===---------------------------------------------------------------------===//
89 More load / store optimizations:
1) Look past instructions without side effects (i.e. anything that is not a
load, store, branch, etc.) when forming the list of loads / stores to optimize.
93 2) Smarter register allocation?
We are probably missing some opportunities to use ldm / stm. Consider:

        ldr r5, [r0]
        ldr r4, [r0, #4]
99 This cannot be merged into a ldm. Perhaps we will need to do the transformation
100 before register allocation. Then teach the register allocator to allocate a
101 chunk of consecutive registers.
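
If the allocator picked an ascending pair for the two loads above, they could
merge (illustrative):

        ldmia r0, {r4, r5}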
103 3) Better representation for block transfer? This is from Olden/power:
114 If we can spare the registers, it would be better to use fldm and fstm here.
115 Need major register allocator enhancement though.
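
A sketch of the target shape, assuming four consecutive S-registers are free
(base registers illustrative):

        fldmias r3, {s0, s1, s2, s3}   @ four words loaded in one instruction
        fstmias r2, {s0, s1, s2, s3}   @ ... and stored in one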
4) Can we recognize the relative position of constantpool entries? i.e. treat
consecutive entries as a single block, so that several pc-relative ldr's of
adjacent entries become loads from one base address at fixed offsets. Then the
ldr's can be combined into a single ldm. See Olden/power.
Note that for ARM v4, gcc uses ldmia to load a pair of 32-bit values to
represent a double 64-bit FP constant:
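
A sketch of the pattern, using 1.5 as the constant (label and registers
illustrative):

        adr   r0, .L6
        ldmia r0, {r0, r1}      @ r0 = low word, r1 = high word of the double
        ...
.L6:
        .long 0x00000000        @ low 32 bits of 1.5
        .long 0x3ff80000        @ high 32 bits of 1.5 (IEEE-754)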
5) Can we make use of ldrd and strd? Instead of generating ldm / stm, use
ldrd / strd when there are only two destination registers that form an
odd/even pair. However, we would probably pay a penalty if the address is not
aligned on an 8-byte boundary. This requires more information on load / store
nodes (and MI's?) than we currently carry.
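
e.g. (UAL syntax; the constraints live in the comment):

        ldrd r4, r5, [r0]       @ needs an even/odd pair (r4, r5) and an
                                @ 8-byte aligned address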
6) Struct copies appear to be done field by field
148 instead of by words, at least sometimes:
150 struct foo { int x; short s; char c1; char c2; };
151 void cpy(struct foo*a, struct foo*b) { *a = *b; }
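
struct foo is 8 bytes, so the whole copy could be two words (a sketch,
assuming r0 = a and r1 = b):

        ldmia r1, {r2, r3}      @ both words of *b
        stmia r0, {r2, r3}      @ ... stored to *a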
166 In this benchmark poor handling of aggregate copies has shown up as
167 having a large effect on size, and possibly speed as well (we don't have
168 a good way to measure on ARM).
170 //===---------------------------------------------------------------------===//
172 * Consider this silly example:
double bar(double x) {
  double r = foo(3.1);
  return x+r;
}
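
A sketch of the relevant part of the output (reconstructed from the note
below; prologue, epilogue, and the FP arithmetic elided):

        mov   r4, r0            @ copy incoming x (soft-float pair) ...
        mov   r5, r1            @ ... into callee-saves to survive the call
        bl    _foo
        ...
        fmdrr d1, r4, r5        @ the only use of r4 / r5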
Ignore the prologue and epilogue stuff for a second. Note
the copies to callee-save registers and the fact that they are only being used
by the fmdrr instruction. It would have been better had the fmdrr been
scheduled before the call, placing the result in a callee-save DPR register.
The two mov ops would not have been necessary.
207 //===---------------------------------------------------------------------===//
209 Calling convention related stuff:
211 * gcc's parameter passing implementation is terrible and we suffer as a result:
struct s {
  double d1;
  int s1;
};

void foo(struct s S) {
  printf("%g, %d\n", S.d1, S.s1);
}
'S' is passed via registers r0, r1, r2. But gcc stores them to the stack and
then reloads them to r1, r2, and r3 before issuing the call (r0 contains the
address of the format string):
        stmia sp, {r0, r1, r2}
        ldmia sp, {r1, r2}
        ldr   r3, [sp, #8]
238 Instead of a stmia, ldmia, and a ldr, wouldn't it be better to do three moves?
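
i.e. (moving in reverse order so no value is clobbered):

        mov r3, r2
        mov r2, r1
        mov r1, r0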
* Returning an aggregate type is even worse:
struct s foo(void) {
  struct s S = {1.1, 2};
  return S;
}
        @ lr needed for prologue
        mov   ip, r0
        ldr   r0, .L3           @ address of the constant {1.1, 2}
        ldmia r0, {r0, r1, r2}
        stmia sp, {r0, r1, r2}
        stmia ip, {r0, r1, r2}
r0 (and later ip) is the hidden parameter from the caller indicating where to
store the return value. The first ldmia loads the constants into r0, r1, r2.
The last stmia stores r0, r1, r2 into the address passed in. However, there is
one additional stmia that stores r0, r1, and r2 to some stack location. That
store is dead.
266 The llvm-gcc generated code looks like this:
268 csretcc void %foo(%struct.s* %agg.result) {
270 %S = alloca %struct.s, align 4 ; <%struct.s*> [#uses=1]
271 %memtmp = alloca %struct.s ; <%struct.s*> [#uses=1]
272 cast %struct.s* %S to sbyte* ; <sbyte*>:0 [#uses=2]
273 call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
274 cast %struct.s* %agg.result to sbyte* ; <sbyte*>:1 [#uses=2]
275 call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
276 cast %struct.s* %memtmp to sbyte* ; <sbyte*>:2 [#uses=1]
call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )
ret void
}
281 llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from
constantpool). Perhaps we should 1) fix llvm-gcc so the memcpy is translated
into a number of loads and stores, or 2) custom lower small memcpys into
ldmia / stmia. I think option 2 is better, but the current register
285 allocator cannot allocate a chunk of registers at a time.
A feasible temporary solution is to use specific physical registers at
lowering time for small (<= 4 words?) transfer sizes.
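
A sketch of that lowering for the 12-byte case, with r2 / r3 / r12 as the
fixed scratch registers (the particular choice is illustrative):

        ldmia r1, {r2, r3, r12}   @ three words from the source
        stmia r0, {r2, r3, r12}   @ ... straight into the destination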
* The ARM CSRet calling convention requires the hidden argument to be returned
  by the callee.
293 //===---------------------------------------------------------------------===//
We can definitely do a better job on BB placement to eliminate some branches.
It's very common to see llvm generated assembly code that looks like this:

LBB3:
        ...
LBB4:
        ...
        beq LBB3
        b   LBB2

If BB4 is the only predecessor of BB3, then we can emit BB3 immediately after
BB4. We can then eliminate the beq and turn the unconditional branch to LBB2
into a bne.
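
That is, the layout becomes:

LBB4:
        ...
        bne LBB2
LBB3:
        ...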
308 See McCat/18-imp/ComputeBoundingBoxes for an example.
310 //===---------------------------------------------------------------------===//
312 Register scavenging is now implemented. The example in the previous version
313 of this document produces optimal code at -O2.
315 //===---------------------------------------------------------------------===//
317 Pre-/post- indexed load / stores:
1) We should not make the pre/post-indexed load/store transform if the base
ptr is guaranteed to be live beyond the load/store. This can happen if the
base ptr is live out of the block in which we are performing the optimization,
e.g.:
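
A sketch of the shape (registers illustrative):

        ldr r3, [r1]
        add r2, r1, #4          @ the base r1 stays live out of the block

The post-indexed rewrite must then copy the old base first:

        mov r2, r1
        ldr r3, [r2], #4        @ same instruction count, and the load is now
                                @ tied to the address update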
333 In most cases, this is just a wasted optimization. However, sometimes it can
334 negatively impact the performance because two-address code is more restrictive
335 when it comes to scheduling.
Unfortunately, liveout information is currently unavailable during DAG combine
time.
2) Consider splitting an indexed load / store into a pair of add/sub +
load/store to solve #1 (in TwoAddressInstructionPass.cpp).
343 3) Enhance LSR to generate more opportunities for indexed ops.
4) Once we add support for multiple result patterns, write indexed load
patterns instead of C++ instruction selection code.
348 5) Use FLDM / FSTM to emulate indexed FP load / store.
350 //===---------------------------------------------------------------------===//
352 We should add i64 support to take advantage of the 64-bit load / stores.
353 We can add a pseudo i64 register class containing pseudo registers that are
354 register pairs. All other ops (e.g. add, sub) would be expanded as usual.
356 We need to add pseudo instructions (i.e. gethi / getlo) to extract i32 registers
357 from the i64 register. These are single moves which can be eliminated if the
358 destination register is a sub-register of the source. We should implement proper
359 subreg support in the register allocator to coalesce these away.
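
A sketch of how the extraction ops would expand, assuming a hypothetical i64
pseudo register P0 that aliases the pair (r4, r5):

        ldmia r0, {r4, r5}      @ one 8-byte load fills the whole pair
        mov   r2, r4            @ getlo P0: a plain move
        mov   r3, r5            @ gethi P0: likewise; both moves disappear
                                @ once the allocator understands subregs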
There are other minor issues, such as needing multiple instructions to spill /
restore an i64 register pair.
364 //===---------------------------------------------------------------------===//
366 Implement support for some more tricky ways to materialize immediates. For
example, to get 0xffff8000, we can use:

        mov r9, #0x3f8000
        sub r9, r9, #0x400000   @ 0x3f8000 - 0x400000 = 0xffff8000
372 //===---------------------------------------------------------------------===//
We sometimes generate multiple add / sub instructions to update sp in the
prologue and epilogue if the inc / dec value is too large to fit in a single
immediate operand. In some cases, it might be better to load the value from a
constantpool instead.
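
For instance, an adjustment of 1036 does not fit one rotated immediate, so
today it takes (sizes illustrative):

        sub sp, sp, #1024
        sub sp, sp, #12

which could instead become a load from a hypothetical pool entry:

        ldr r12, .LCPI_fsize    @ hypothetical constantpool entry holding 1036
        sub sp, sp, r12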
379 //===---------------------------------------------------------------------===//
381 GCC generates significantly better code for this function.
int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
    int i = 0;
    while (StackPtr != 0 && i < (((LineLen) < (32768))? (LineLen) : (32768)))
        Line[i++] = Stack[--StackPtr];
    while (StackPtr != 0 && i < LineLen) {
        i++;
        --StackPtr;
    }
    return StackPtr;
}
401 //===---------------------------------------------------------------------===//
403 This should compile to the mlas instruction:
404 int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }
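
A sketch of the hoped-for output (register choice illustrative):

        mlas  r3, r0, r1, r2    @ r3 = x*y + z, setting N and Z
        mov   r0, #13
        movmi r0, #7            @ a negative result selects 7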
406 //===---------------------------------------------------------------------===//
408 At some point, we should triage these to see if they still apply to us:
410 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
411 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
412 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016
414 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
415 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
416 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
417 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
418 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
419 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
420 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982
422 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
423 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
424 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
425 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
426 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
427 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
428 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663
430 http://www.inf.u-szeged.hu/gcc-arm/
431 http://citeseer.ist.psu.edu/debus04linktime.html
433 //===---------------------------------------------------------------------===//
435 gcc generates smaller code for this function at -O2 or -Os:
void foo(signed char* p) {
  if (*p == 3)
    bar();
  else if (*p == 4)
    baz();
  else if (*p == 5)
    quux();
}
llvm decides it's a good idea to turn the repeated if...else into a
binary tree, as if it were a switch; the resulting code requires one fewer
compare-and-branch when *p<=2 or *p==5, the same number if *p==4
or *p>6, and one more if *p==3. So it should be a speed win
(on balance). However, the revised code is larger, with 4 conditional
branches instead of 3.
453 More seriously, there is a byte->word extend before
454 each comparison, where there should be only one, and the condition codes
455 are not remembered when the same two values are compared twice.
457 //===---------------------------------------------------------------------===//
459 More register scavenging work:
1. Use the register scavenger to track frame indices materialized into
registers (those that do not fit in addressing modes) to allow reuse in the
same BB.
463 2. Finish scavenging for Thumb.
3. We know some spills and restores are unnecessary. The issue is that once
live intervals are merged, they are never split. So every def is spilled
and every use requires a restore if the register allocator decides the
resulting live interval cannot be assigned a physical register. It may be
468 possible (with the help of the scavenger) to turn some spill / restore
469 pairs into register copies.
471 //===---------------------------------------------------------------------===//
473 Teach LSR about ARM addressing modes.
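
For example, ARM can fold a scaled index into the address, so LSR should know
that

        ldr r0, [r1, r2, lsl #2]   @ base + index*4 in a single instruction

is free, rather than strength-reducing every i*4 into a separate pointer
induction variable.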