//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend.
//===---------------------------------------------------------------------===//
Reimplement 'select' in terms of 'SEL'.

* We would really like to support UXTAB16, but we need to prove that the
add cannot overflow between the two 16-bit chunks (see the C model after this
list).

* Implement predication support.
* Implement pre/post increment support. (e.g. PR935)
* Coalesce stack slots!
* Implement smarter constant generation for binops with large immediates.

* Consider materializing FP constants like 0.0f and 1.0f using integer
immediate instructions and then copying the result to the FPU. Would that be
slower than loading directly into the FPU?
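
As a reference for the UXTAB16 item above, here is a C model of what the
instruction computes (a sketch; the helper name is made up). The proof
obligation is that a plain 32-bit add must not let a carry leak from the low
16-bit lane into the high one:

unsigned uxtab16(unsigned a, unsigned b) {
  /* Zero-extend bytes 0 and 2 of b to 16 bits and add them to the two
     16-bit lanes of a; no carry propagates between the lanes. */
  unsigned lo = ((a & 0xffffu) + (b & 0xffu)) & 0xffffu;
  unsigned hi = (((a >> 16) & 0xffffu) + ((b >> 16) & 0xffu)) & 0xffffu;
  return (hi << 16) | lo;
}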
//===---------------------------------------------------------------------===//

The constant island pass has been much improved; all the todo items in the
previous version of this document have been addressed. However, there are still
things that can be done:

1. When there isn't existing water, the current MBB is split right after
the use. It would be profitable to look farther forward, especially on Thumb,
where negative offsets won't work.
(Partially fixed: it will put the island at the end of the block if that is
in range. If it is not in range, things still work as above, which is poor on
Thumb.)

2. There may be some advantage to trying to be smarter about the initial
placement, rather than putting everything at the end.

3. The handling of 2-byte padding for Thumb is overly conservative. There
would be a small gain from keeping accurate track of the padding (which would
require aligning functions containing constant pools to 4-byte boundaries).
//===---------------------------------------------------------------------===//

We need to start generating predicated instructions. The .td files have a way
to express this now (see the PPC conditional return instruction), but the
branch folding pass (or a new if-cvt pass) should start producing these, at
least in the trivial case.

Among the obvious wins, doing so can eliminate the need to custom expand
copysign (i.e. we won't need to custom expand it to get the conditional
move).

This allows us to eliminate one instruction from:

define i32 @_Z6slow4bii(i32 %x, i32 %y) {
  %tmp = icmp sgt i32 %x, %y
  %retval = select i1 %tmp, i32 %x, i32 %y
  ret i32 %retval
}
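
For reference, @_Z6slow4bii demangles to slow4b(int, int); a C equivalent is
sketched below. With predication the whole body is a cmp plus one conditional
mov, with no branch:

int slow4b(int x, int y) {
  return x > y ? x : y;   /* cmp r0, r1; predicated mov; return */
}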
//===---------------------------------------------------------------------===//

Implement long long "X-3" with instructions that fold the immediate in. These
were disabled due to badness with the ARM carry flag on subtracts.
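
A minimal C reproducer of the pattern (the function name is illustrative).
Folding the immediate means the subtraction lowers to a subs/sbc pair with
the constant in the immediate fields instead of materialized in a register:

long long sub3(long long x) {
  return x - 3;   /* ideally: subs lo, lo, #3; sbc hi, hi, #0 */
}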
//===---------------------------------------------------------------------===//

We currently compile abs:

int foo(int p) { return p < 0 ? -p : p; }

The code we generate for this is very, uh, literal. It could instead be a
3-operation sequence (one branchless variant is sketched below), which would
be better. This occurs in png decode.
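
A sketch of the classic branchless 3-operation abs; this may not be the exact
sequence originally listed here, but it has the right shape (it assumes the
usual arithmetic right shift for signed values, which ARM provides):

int abs3op(int p) {
  int t = p >> 31;      /* asr #31: 0 for p >= 0, -1 for p < 0 */
  return (p ^ t) - t;   /* conditionally complement, then add one */
}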
//===---------------------------------------------------------------------===//

More load / store optimizations:

1) Look past instructions without side-effects (not load, store, branch, etc.)
when forming the list of loads / stores to optimize.

2) Smarter register allocation?
We are probably missing some opportunities to use ldm / stm. A sequence of
loads cannot be merged into an ldm if the loaded values end up in
non-consecutive registers. Perhaps we will need to do the transformation
before register allocation, and then teach the register allocator to allocate
a chunk of consecutive registers.

3) Better representation for block transfer? This comes up in Olden/power: if
we can spare the registers, it would be better to use fldm and fstm there.
That needs a major register allocator enhancement, though.

4) Can we recognize the relative position of constantpool entries? i.e. treat
loads from consecutive constantpool entries as a unit, so that the ldr's can
be combined into a single ldm. See Olden/power.

Note that for ARM v4, gcc uses ldmia to load a pair of 32-bit values to
represent a 64-bit double FP constant.

5) Can we make use of ldrd and strd? Instead of generating ldm / stm, use
ldrd/strd instead if there are only two destination registers that form an
even/odd pair. However, we would probably pay a penalty if the address is not
aligned on an 8-byte boundary. This requires more information on load / store
nodes (and MIs?) than we currently carry. (See the sketch after this list.)
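
A C-level sketch of a ldrd/strd candidate (names are illustrative): a 64-bit
copy through 8-byte aligned pointers can be one ldrd plus one strd instead of
two ldr's and two str's, provided the allocator hands back an even/odd
register pair:

void copy64(long long *dst, const long long *src) {
  *dst = *src;   /* ldrd r2, [r1] then strd r2, [r0], given alignment */
}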
//===---------------------------------------------------------------------===//

* Consider this silly example:

double bar(double x) {
  double r = foo(3.1);
  return x + r;
}

Ignore the prologue and epilogue stuff for a second. Note
the copies to callee-save registers and the fact that they are only used by the
fmdrr instruction. It would have been better had the fmdrr been scheduled
before the call, placing the result in a callee-save DPR register. The two
mov ops would not have been necessary.
//===---------------------------------------------------------------------===//

Calling convention related stuff:

* gcc's parameter passing implementation is terrible and we suffer as a result:

#include <stdio.h>

struct s { double d1; int s1; };

void foo(struct s S) {
  printf("%g, %d\n", S.d1, S.s1);
}

'S' is passed via registers r0, r1, r2. But gcc stores them to the stack, and
then reloads them to r1, r2, and r3 before issuing the call (r0 contains the
address of the format string):

	stmia sp, {r0, r1, r2}

Instead of a stmia, ldmia, and a ldr, wouldn't it be better to do three moves?
* Returning an aggregate type is even worse:

struct s foo(void) {
  struct s S = {1.1, 2};
  return S;
}

	@ lr needed for prologue
	ldmia r0, {r0, r1, r2}
	stmia sp, {r0, r1, r2}
	stmia ip, {r0, r1, r2}

r0 (and later ip) is the hidden parameter from the caller telling foo where to
store the value. The first ldmia loads the constants into r0, r1, r2. The last
stmia stores r0, r1, r2 into the address passed in. However, there is one
additional stmia that stores r0, r1, and r2 to some stack location. That store
is dead.
The llvm-gcc generated code looks like this:

csretcc void %foo(%struct.s* %agg.result) {
entry:
	%S = alloca %struct.s, align 4		; <%struct.s*> [#uses=1]
	%memtmp = alloca %struct.s		; <%struct.s*> [#uses=1]
	cast %struct.s* %S to sbyte*		; <sbyte*>:0 [#uses=2]
	call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
	cast %struct.s* %agg.result to sbyte*		; <sbyte*>:1 [#uses=2]
	call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
	cast %struct.s* %memtmp to sbyte*		; <sbyte*>:2 [#uses=1]
	call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )
	ret void
}
llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from the
constantpool). Perhaps we should 1) fix llvm-gcc so that the memcpy is
translated into a number of loads and stores, or 2) custom lower small
memcpys to ldmia / stmia. I think option 2 is better, but the current
register allocator cannot allocate a chunk of registers at a time.

A feasible temporary solution is to use specific physical registers at
lowering time for small (<= 4 words?) transfer sizes.
* The ARM CSRet calling convention requires the hidden argument to be returned
by the callee.
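
A C model of what CSRet lowering amounts to, using struct s from above (the
names are made up for illustration): the caller passes a hidden pointer, and
the callee both fills it in and returns it:

struct s *foo_lowered(struct s *agg_result) {
  struct s S = {1.1, 2};
  *agg_result = S;     /* the one 12-byte copy; the stack copy is dead */
  return agg_result;   /* CSRet: the callee returns the hidden pointer */
}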
//===---------------------------------------------------------------------===//

We can definitely do a better job of BB placement to eliminate some branches.
It's very common to see LLVM-generated assembly in which a block (call it BB4)
ends with a conditional beq into BB3 immediately followed by an unconditional
branch to LBB2. If BB4 is the only predecessor of BB3, then we can emit BB3
right after BB4: the beq goes away and the unconditional branch to LBB2 turns
into a bne.

See McCat/18-imp/ComputeBoundingBoxes for an example.
//===---------------------------------------------------------------------===//

We need register scavenging. Currently, the 'ip' register is reserved in case
frame indexes are too big. This means that we generate extra code for stuff
like this:

void foo(unsigned x, unsigned y, unsigned z, unsigned *a, unsigned *b, unsigned *c) {
  short Rconst = (short) (16384.0f * 1.40200 + 0.5);

*** stmfd sp!, {r4, r7}
    orr r4, r4, #89, 24    @ 22784
*** ldmfd sp!, {r4, r7}

This is apparently all because we couldn't use ip here.
//===---------------------------------------------------------------------===//

Pre-/post- indexed load / stores:

1) We should not make the pre/post-indexed load/store transform if the base
pointer is guaranteed to be live beyond the load/store. This can happen if the
base pointer is live out of the block in which we are performing the
optimization.

In most cases, this is just a wasted optimization. However, sometimes it can
negatively impact performance, because two-address code is more restrictive
when it comes to scheduling.

Unfortunately, liveout information is currently unavailable during DAG combine
time.

2) Consider splitting an indexed load / store into a pair of add/sub +
load/store to solve #1 (in TwoAddressInstructionPass.cpp).

3) Enhance LSR to generate more opportunities for indexed ops (see the loop
sketch after this list).

4) Once we have added support for multiple result patterns, write indexed load
patterns instead of C++ instruction selection code.

5) Use FLDM / FSTM to emulate indexed FP load / store.
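
A C sketch of the kind of loop that should end up using post-indexed loads
(names are illustrative):

int sum(const int *p, int n) {
  int s = 0;
  for (int i = 0; i < n; i++)
    s += *p++;   /* ldr rX, [rP], #4: load, then bump the base */
  return s;
}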
//===---------------------------------------------------------------------===//

We should add i64 support to take advantage of the 64-bit load / stores.
We can add a pseudo i64 register class containing pseudo registers that are
register pairs. All other ops (e.g. add, sub) would be expanded as usual.

We need to add pseudo instructions (i.e. gethi / getlo) to extract i32 registers
from the i64 register. These are single moves which can be eliminated if the
destination register is a sub-register of the source. We should implement proper
subreg support in the register allocator to coalesce these away.
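
A C model of what the gethi / getlo pseudos compute (illustrative names; on a
register pair these are plain moves, and become no-ops once subreg coalescing
kicks in):

unsigned getlo(unsigned long long v) { return (unsigned)v; }
unsigned gethi(unsigned long long v) { return (unsigned)(v >> 32); }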
There are other minor issues, such as needing multiple instructions for a
spill / restore of a register pair.
//===---------------------------------------------------------------------===//

Implement support for some more tricky ways to materialize immediates. For
example, 0xffff8000 is not encodable as a rotated 8-bit immediate, and neither
is its complement 0x7fff, so neither mov nor mvn can produce it in one
instruction; it can still be built in two.
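
One possible two-instruction sequence (a sketch, not necessarily the one
originally listed here), written as the C arithmetic it performs:

unsigned k = 0x8000;    /* mov r9, #0x8000: 0x80 rotated, so encodable */
unsigned imm = 0u - k;  /* rsb r9, r9, #0: negating gives 0xffff8000 */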
//===---------------------------------------------------------------------===//

We sometimes generate multiple add / sub instructions to update sp in the
prologue and epilogue if the inc / dec value is too large to fit in a single
immediate operand. For example, a 4104-byte frame takes "sub sp, sp, #4096"
plus "sub sp, sp, #8", since 4104 is not an encodable immediate. In some
cases, it might be better to load the value from a constantpool instead.
//===---------------------------------------------------------------------===//

GCC generates significantly better code for this function:

int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
  int i = 0;
  while (StackPtr != 0 && i < (((LineLen) < (32768)) ? (LineLen) : (32768)))
    Line[i++] = Stack[--StackPtr];
  while (StackPtr != 0 && i < LineLen) {
    i++;
    --StackPtr;
  }
  return StackPtr;
}
//===---------------------------------------------------------------------===//

This should compile to the mlas instruction:

int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }
//===---------------------------------------------------------------------===//

At some point, we should triage these to see if they still apply to us:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663

http://www.inf.u-szeged.hu/gcc-arm/
http://citeseer.ist.psu.edu/debus04linktime.html

//===---------------------------------------------------------------------===//