//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend.
//===---------------------------------------------------------------------===//
Reimplement 'select' in terms of 'SEL'.

* We would really like to support UXTAB16, but we need to prove that the
  add doesn't need to overflow between the two 16-bit chunks (see the
  sketch after this list).

* Implement predication support.
* Implement pre/post increment support. (e.g. PR935)
* Coalesce stack slots!
* Implement smarter constant generation for binops with large immediates.

* Consider materializing FP constants like 0.0f and 1.0f using integer
  immediate instructions then copy to FPU. Slower than load into FPU?
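
For the UXTAB16 item above, a sketch of the pattern and of the proof
obligation (function names are illustrative, not from the backend):

/* Lane-exact reference: UXTAB16 adds bytes 0 and 2 of b, zero-extended,
   into the two 16-bit halves of a, with no carry crossing bit 15. */
unsigned uxtab16_ref(unsigned a, unsigned b) {
  unsigned lo = ((a & 0xffffu) + (b & 0xffu)) & 0xffffu;
  unsigned hi = ((a >> 16) + ((b >> 16) & 0xffu)) & 0xffffu;
  return lo | (hi << 16);
}

/* The plain 32-bit add we would like to match. It equals
   uxtab16_ref(a, b) exactly when the low-half sum cannot carry into
   bit 16; that is the overflow proof the item above asks for. */
unsigned candidate(unsigned a, unsigned b) {
  return a + (b & 0x00ff00ffu);
}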
//===---------------------------------------------------------------------===//

The constant island pass is in good shape. Some cleanups might be desirable,
but there is unlikely to be much improvement in the generated code.

1. There may be some advantage to trying to be smarter about the initial
   placement, rather than putting everything at the end.

2. The handling of 2-byte padding for Thumb is overly conservative. There
   would be a small gain to keeping accurate track of the padding (which would
   require aligning functions containing constant pools to 4-byte boundaries).

3. There might be some compile-time efficiency to be had by representing
   consecutive islands as a single block rather than multiple blocks.
//===---------------------------------------------------------------------===//

We need to start generating predicated instructions. The .td files have a way
to express this now (see the PPC conditional return instruction), but the
branch folding pass (or a new if-cvt pass) should start producing these, at
least in the trivial case.

Among the obvious wins, doing so can eliminate the need to custom expand
copysign (i.e. we won't need to custom expand it to get the conditional
negate).

This allows us to eliminate one instruction from:

define i32 @_Z6slow4bii(i32 %x, i32 %y) {
	%tmp = icmp sgt i32 %x, %y
	%retval = select i1 %tmp, i32 %x, i32 %y
	ret i32 %retval
}
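
A minimal C view of the same select, with the predicated lowering we would
like sketched in the comment (the r0/r1 register assignment is assumed from
the argument order):

/* With predication this becomes a compare plus one conditional move:
       cmp   r0, r1
       movle r0, r1
       bx    lr
   instead of a conditional branch around a move. */
int max_sel(int x, int y) { return x > y ? x : y; }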
//===---------------------------------------------------------------------===//

Implement long long "X-3" with instructions that fold the immediate in. These
were disabled due to badness with the ARM carry flag on subtracts.
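
A minimal example of the case in question (the carry semantics are the known
part: on ARM, the carry flag after a subtract is the inverted borrow):

/* Wants the immediate folded in: subs r0, r0, #3 followed by
   sbc r1, r1, #0, rather than materializing 3 in registers first. */
long long sub3(long long x) { return x - 3; }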
//===---------------------------------------------------------------------===//

We currently compile abs:

int foo(int p) { return p < 0 ? -p : p; }

The code we generate for this is very, uh, literal. It could instead be a
3 operation sequence:

	t = (p >> 31);		(arithmetic shift right)
	res = (p ^ t) - t

which would be better. This occurs in png decode.
//===---------------------------------------------------------------------===//

More load / store optimizations:

1) Look past instructions without side-effects (not load, store, branch, etc.)
   when forming the list of loads / stores to optimize.

2) Smarter register allocation?
   We are probably missing some opportunities to use ldm / stm. Consider:

	ldr r5, [r0]
	ldr r4, [r0, #4]

   This cannot be merged into a ldm, since ldm wants ascending register
   numbers at ascending addresses. Perhaps we will need to do the
   transformation before register allocation. Then teach the register
   allocator to allocate a chunk of consecutive registers.

3) Better representation for block transfer? Olden/power contains a long run
   of fldd / fstd pairs copying one double at a time through a single
   D-register. If we can spare the registers, it would be better to use fldm
   and fstm here. Need major register allocator enhancement though.

4) Can we recognize the relative position of constantpool entries? i.e. treat
   loads of adjacent entries as loads from a common base at offsets 0, 4, 8,
   and so on. Then the ldr's can be combined into a single ldm. See
   Olden/power.

   Note for ARM v4 gcc uses ldmia to load a pair of 32-bit values to
   represent a double 64-bit FP constant (both words are fetched from the
   constant pool with one ldmia into a register pair).

5) Can we make use of ldrd and strd? Instead of generating ldm / stm, use
   ldrd/strd instead if there are only two destination registers that form an
   odd/even pair. However, we would probably pay a penalty if the address is
   not aligned on an 8-byte boundary. This requires more information on
   load / store nodes (and MI's?) than we currently carry. (See the sketch
   after this list.)
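
A sketch of the two-word case from items 2 and 5 (function name
hypothetical):

/* With an even/odd destination register pair and an 8-byte aligned
   address this could be a single ldrd + strd; otherwise an ldm + stm. */
void copy2(unsigned *dst, const unsigned *src) {
  unsigned a = src[0];
  unsigned b = src[1];
  dst[0] = a;
  dst[1] = b;
}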
//===---------------------------------------------------------------------===//

* Consider this silly example:

double bar(double x) {
  ...
}

where x is live across a call made in the body. Ignore the prologue and
epilogue stuff for a second. In the code we generate, x is copied to a pair
of callee-save registers before the call, and those copies are only used by
the fmdrr instruction that rebuilds the double afterwards. It would have been
better had the fmdrr been scheduled before the call, placing the result in a
callee-save DPR register; the two mov ops would not have been necessary.
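
A minimal reproducer of the pattern (the names and the 3.1 constant are
illustrative; assumes the soft-float ABI where a double argument arrives
in r0/r1):

extern double foo(double);

/* x is live across the call; building it into a callee-save DPR with
   fmdrr before the bl would avoid the two GPR mov's. */
double bar2(double x) { return x + foo(3.1); }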
//===---------------------------------------------------------------------===//

Calling convention related stuff:

* gcc's parameter passing implementation is terrible and we suffer as a
  result. For example:

struct s {
    double d1;
    int s1;
};

void foo(struct s S) {
    printf("%g, %d\n", S.d1, S.s1);
}

'S' is passed via registers r0, r1, r2. But gcc stores them to the stack, and
then reloads them to r1, r2, and r3 before issuing the call (r0 contains the
address of the format string):

	stmia	sp, {r0, r1, r2}
	ldmia	sp, {r1-r2}
	ldr	r3, [sp, #8]

Instead of a stmia, ldmia, and a ldr, wouldn't it be better to do three moves?

* Returning an aggregate type is even worse:

struct s foo(void) {
  struct s S = {1.1, 2};
  return S;
}

	...
	@ lr needed for prologue
	ldmia	r0, {r0, r1, r2}
	stmia	sp, {r0, r1, r2}
	stmia	ip, {r0, r1, r2}
	...

r0 (and later ip) is the hidden parameter from the caller: the address to
store the return value in. The first ldmia loads the constants into r0, r1,
r2. The last stmia stores r0, r1, r2 into the address passed in. However,
there is one additional stmia that stores r0, r1, and r2 to some stack
location. The store is dead.
The llvm-gcc generated code looks like this:

csretcc void %foo(%struct.s* %agg.result) {
	%S = alloca %struct.s, align 4		; <%struct.s*> [#uses=1]
	%memtmp = alloca %struct.s		; <%struct.s*> [#uses=1]
	cast %struct.s* %S to sbyte*		; <sbyte*>:0 [#uses=2]
	call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
	cast %struct.s* %agg.result to sbyte*		; <sbyte*>:1 [#uses=2]
	call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
	cast %struct.s* %memtmp to sbyte*		; <sbyte*>:2 [#uses=1]
	call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )
	ret void
}

llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from the
constantpool). Perhaps we should 1) fix llvm-gcc so the memcpy is translated
into a number of loads and stores, or 2) custom lower memcpy (of small size)
to be ldmia / stmia. I think option 2 is better but the current register
allocator cannot allocate a chunk of registers at a time.

A feasible temporary solution is to use specific physical registers at
lowering time for small (<= 4 words?) transfer sizes.
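
The small-memcpy case as source code (a sketch; the struct matches the
12-byte example above, the function name is hypothetical):

struct s { double d1; int s1; };

/* A 12-byte copy that option 2 would lower to one ldmia + one stmia,
   given a chunk of three consecutive scratch registers. */
void copy_s(struct s *dst, const struct s *src) { *dst = *src; }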
* ARM CSRet calling convention requires the hidden argument to be returned by
  the callee.
//===---------------------------------------------------------------------===//

We can definitely do a better job on BB placements to eliminate some branches.
It's very common to see llvm generated assembly where a block BB4 ends with a
conditional beq to BB3 followed by an unconditional branch to LBB2. If BB4 is
the only predecessor of BB3, then we can emit BB3 right after BB4, eliminate
the beq, and turn the unconditional branch to LBB2 into a bne.

See McCat/18-imp/ComputeBoundingBoxes for an example.
//===---------------------------------------------------------------------===//

We need register scavenging. Currently, the 'ip' register is reserved in case
frame indexes are too big. This means that we generate extra code for stuff
like this:

void foo(unsigned x, unsigned y, unsigned z, unsigned *a, unsigned *b, unsigned *c) {
    short Rconst = (short) (16384.0f * 1.40200 + 0.5);
    ...
}

The generated code includes (*** marks the extra instructions):

*** stmfd	sp!, {r4, r7}
	...
	orr	r4, r4, #89, 24	@ 22784
	...
*** ldmfd	sp!, {r4, r7}

This is apparently all because we couldn't use ip here.
//===---------------------------------------------------------------------===//

Pre-/post- indexed load / stores:

1) We should not make the pre/post- indexed load/store transform if the base
   ptr is guaranteed to be live beyond the load/store. This can happen if the
   base ptr is live out of the block where we are performing the optimization.

   In most cases, this is just a wasted optimization. However, sometimes it
   can negatively impact performance because two-address code is more
   restrictive when it comes to scheduling.

   Unfortunately, liveout information is currently unavailable during DAG
   combine time.

2) Consider splitting an indexed load / store into a pair of add/sub +
   load/store to solve #1 (in TwoAddressInstructionPass.cpp).

3) Enhance LSR to generate more opportunities for indexed ops (see the loop
   sketch after this list).

4) Once we add support for multiple result patterns, write indexed load
   patterns instead of C++ instruction selection code.

5) Use FLDM / FSTM to emulate indexed FP load / store.
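
A typical producer of the indexed ops item 3 asks for (function name
hypothetical):

/* Each pointer bump can fold into the memory access as a post-indexed
   ldr/str, e.g. ldr r3, [r1], #4. */
void double_all(int *dst, const int *src, int n) {
  int i;
  for (i = 0; i < n; ++i)
    *dst++ = *src++ * 2;
}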
//===---------------------------------------------------------------------===//

We should add i64 support to take advantage of the 64-bit load / stores.
We can add a pseudo i64 register class containing pseudo registers that are
register pairs. All other ops (e.g. add, sub) would be expanded as usual.

We need to add pseudo instructions (i.e. gethi / getlo) to extract i32
registers from the i64 register. These are single moves which can be
eliminated if the destination register is a sub-register of the source. We
should implement proper subreg support in the register allocator to coalesce
these away.

There are other minor issues such as needing multiple instructions for a
spill / restore / copy of a register pair.
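
What the register pairs would buy us, as source (function name
hypothetical):

/* With an i64 pair class, *p can be one ldm (or ldrd) into a pair,
   followed by adds / adc; today this is glued together from i32 parts. */
long long add64(const long long *p, long long y) { return *p + y; }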
//===---------------------------------------------------------------------===//

Implement support for some more tricky ways to materialize immediates. For
example, to get 0xffff8000, we can use:

	mov	r9, #0x3f8000
	sub	r9, r9, #0x400000

Both 0x3f8000 and 0x400000 are valid rotated 8-bit immediates, and the
subtraction wraps to 0xffff8000, which is not.
//===---------------------------------------------------------------------===//

We sometimes generate multiple add / sub instructions to update sp in the
prologue and epilogue if the inc / dec value is too large to fit in a single
immediate operand. In some cases, perhaps it might be better to load the
value from a constantpool instead.
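
An illustration of the oversized adjustment (names hypothetical):

extern void use(char *);

/* 70000 is not expressible as a rotated 8-bit immediate, so the prologue
   and epilogue currently adjust sp with multiple add / sub instructions;
   the note above suggests a constantpool load instead. */
void big_frame(void) {
  char buf[70000];
  use(buf);
}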
//===---------------------------------------------------------------------===//

GCC generates significantly better code for this function.

int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
    int i = 0;
    while (StackPtr != 0 && i < (((LineLen) < (32768))? (LineLen) : (32768)))
        Line[i++] = Stack[--StackPtr];
    if (LineLen > 32768) {
        while (StackPtr != 0 && i < LineLen) {
            i++;
            --StackPtr;
        }
    }
    return StackPtr;
}
//===---------------------------------------------------------------------===//

This should compile to the mlas instruction:

int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }

mlas performs the multiply-accumulate and sets the condition flags, so the
separate compare of the result against zero goes away.
//===---------------------------------------------------------------------===//

At some point, we should triage these to see if they still apply to us:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663

http://www.inf.u-szeged.hu/gcc-arm/
http://citeseer.ist.psu.edu/debus04linktime.html

//===---------------------------------------------------------------------===//