//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend.
//===---------------------------------------------------------------------===//

Reimplement 'select' in terms of 'SEL'.

* We would really like to support UXTAB16, but we need to prove that the
  add doesn't need to overflow between the two 16-bit chunks (see the
  sketch after this list).

* Implement predication support.
* Implement pre/post increment support. (e.g. PR935)
* Coalesce stack slots!
* Implement smarter constant generation for binops with large immediates.

* Consider materializing FP constants like 0.0f and 1.0f using integer
  immediate instructions, then copying to the FPU. Is that slower than a
  load into the FPU?
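
As a (hypothetical) illustration of the UXTAB16 item: UXTAB16 adds a
zero-extended byte of one operand into each 16-bit half of the other.
The kind of source pattern involved is a single 32-bit add over two
packed 16-bit lanes, which is only equivalent to the lane-wise
instruction if the low-lane add provably cannot carry into bit 16. A
minimal C sketch (the function name is made up):

unsigned add_packed_bytes(unsigned n, unsigned m) {
  /* Candidate for: uxtab16 rd, rn, rm -- but only if we can prove
     (n & 0xffff) + (m & 0xff) < 0x10000, i.e. the add never overflows
     from the low 16-bit chunk into the high one. */
  return n + (m & 0x00FF00FFu);
}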

//===---------------------------------------------------------------------===//

The constant island pass is in good shape. Some cleanups might be desirable,
but there is unlikely to be much improvement in the generated code.

1. There may be some advantage to trying to be smarter about the initial
placement, rather than putting everything at the end.

2. The handling of 2-byte padding for Thumb is overly conservative. There
would be a small gain to keeping accurate track of the padding (which would
require aligning functions containing constant pools to 4-byte boundaries).

3. There might be some compile-time efficiency to be had by representing
consecutive islands as a single block rather than multiple blocks.

//===---------------------------------------------------------------------===//

We need to start generating predicated instructions. The .td files have a way
to express this now (see the PPC conditional return instruction), but the
branch folding pass (or a new if-cvt pass) should start producing these, at
least in the trivial case.

Among the obvious wins, doing so can eliminate the need to custom expand
copysign (i.e. we won't need to custom expand it to get the conditional
negate).

This allows us to eliminate one instruction from:

define i32 @_Z6slow4bii(i32 %x, i32 %y) {
        %tmp = icmp sgt i32 %x, %y
        %retval = select i1 %tmp, i32 %x, i32 %y
        ret i32 %retval
}

//===---------------------------------------------------------------------===//

Implement long long "X-3" with instructions that fold the immediate in. These
were disabled due to badness with the ARM carry flag on subtracts.
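
The pattern in question is just a 64-bit subtract of a small immediate.
A sketch (the function name is made up):

long long dec3(long long x) {
  /* Ideally: subs r0, r0, #3 / sbc r1, r1, #0.  Note that on ARM the
     carry flag after subs means "no borrow occurred", which is what sbc
     consumes; modeling that inverted-borrow convention correctly is
     presumably the carry flag badness referred to above. */
  return x - 3;
}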

//===---------------------------------------------------------------------===//

We currently compile abs:
int foo(int p) { return p < 0 ? -p : p; }

into:

_foo:
        rsb r1, r0, #0
        cmn r0, #1
        movgt r1, r0
        mov r0, r1
        bx lr

This is very, uh, literal. This could be a 3 operation sequence:
  t = (p sra 31);
  res = (p xor t) - t

Which would be better. This occurs in png decode.
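
In C, the branchless form of that sequence looks like this (a sketch;
the name is made up, and it assumes arithmetic right shift of negative
ints, which ARM provides):

int abs_branchless(int p) {
  int t = p >> 31;      /* sra: t = 0 if p >= 0, t = -1 if p < 0 */
  return (p ^ t) - t;   /* complement-and-increment only when negative */
}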

//===---------------------------------------------------------------------===//

More load / store optimizations:
1) Look past instructions without side-effects (not load, store, branch, etc.)
   when forming the list of loads / stores to optimize.

2) Smarter register allocation?
We are probably missing some opportunities to use ldm / stm. Consider:

        ldr r5, [r0]
        ldr r4, [r0, #4]

This cannot be merged into an ldm, since ldm requires the register list to be
in ascending order for ascending addresses. Perhaps we will need to do the
transformation before register allocation. Then teach the register allocator
to allocate a chunk of consecutive registers.

3) Better representation for block transfer? This is from Olden/power:

        fldd d0, [r4]
        fstd d0, [r4, #+32]
        fldd d0, [r4, #+8]
        fstd d0, [r4, #+40]
        fldd d0, [r4, #+16]
        fstd d0, [r4, #+48]
        fldd d0, [r4, #+24]
        fstd d0, [r4, #+56]

If we can spare the registers, it would be better to use fldm and fstm here.
Need major register allocator enhancement though.

4) Can we recognize the relative position of constantpool entries? i.e., treat

        ldr r0, LCPI17_3
        ldr r1, LCPI17_4
        ldr r2, LCPI17_5

   as

        ldr r0, LCPI17
        ldr r1, LCPI17+4
        ldr r2, LCPI17+8

   Then the ldr's can be combined into a single ldm. See Olden/power.

Note that for ARM v4, gcc uses ldmia to load a pair of 32-bit values to
represent a double 64-bit FP constant:

        adr     r0, L6
        ldmia   r0, {r0-r1}
        ...
L6:
        .long   -858993459
        .long   1074318540

5) Can we make use of ldrd and strd? Instead of generating ldm / stm, use
ldrd / strd when there are only two destination registers and they form an
even/odd pair. However, we would probably pay a penalty if the address is not
aligned on an 8-byte boundary. This requires more information on load / store
nodes (and MIs?) than we currently carry.

6) (From CoreGraphics): struct copies appear to be done field by field
instead of by words, at least sometimes:

struct foo { int x; short s; char c1; char c2; };
void cpy(struct foo*a, struct foo*b) { *a = *b; }

llvm code (-O2):
        ldrb r3, [r1, #+6]
        ldr r2, [r1]
        ldrb r12, [r1, #+7]
        ldrh r1, [r1, #+4]
        str r2, [r0]
        strh r1, [r0, #+4]
        strb r3, [r0, #+6]
        strb r12, [r0, #+7]

gcc code (-O2):
        ldmia   r1, {r1-r2}
        stmia   r0, {r1-r2}

In this benchmark poor handling of aggregate copies has shown up as
having a large effect on size, and possibly speed as well (we don't have
a good way to measure on ARM).

//===---------------------------------------------------------------------===//

* Consider this silly example:

double bar(double x) {
  double r = foo(3.1);
  return x + r;
}

_bar:
        ...
        mov r4, r0
        mov r5, r1
        ...
        bl _foo
        fmdrr d0, r4, r5
        ...

Ignore the prologue and epilogue stuff for a second. Note
        mov r4, r0
        mov r5, r1
the copies to callee-save registers and the fact that they are only used by
the fmdrr instruction. It would have been better had the fmdrr been scheduled
before the call, placing the result in a callee-save DPR register; the two
mov ops would not have been necessary.

//===---------------------------------------------------------------------===//

Calling convention related stuff:

* gcc's parameter passing implementation is terrible and we suffer as a result:

e.g.
struct s {
        double d1;
        int s1;
};

void foo(struct s S) {
        printf("%g, %d\n", S.d1, S.s1);
}

'S' is passed via registers r0, r1, r2. But gcc stores them to the stack, and
then reloads them to r1, r2, and r3 before issuing the call (r0 contains the
address of the format string):

        stmia   sp, {r0, r1, r2}
        ldmia   sp, {r1-r2}
        ldr     r3, [sp, #8]

Instead of a stmia, an ldmia, and an ldr, wouldn't it be better to do three
moves (mov r3, r2, then mov r2, r1, then mov r1, r0, so nothing is clobbered)?

* Returning an aggregate type is even worse:

e.g.
struct s foo(void) {
        struct s S = {1.1, 2};
        return S;
}

        mov     ip, r0
        ...
        @ lr needed for prologue
        ldmia   r0, {r0, r1, r2}
        stmia   sp, {r0, r1, r2}
        stmia   ip, {r0, r1, r2}
        ...

r0 (and later ip) is the hidden parameter from the caller telling us where to
store the value. The first ldmia loads the constants into r0, r1, r2. The last
stmia stores r0, r1, r2 into the address passed in. However, there is one
additional stmia that stores r0, r1, and r2 to some stack location. The store
is dead.

The llvm-gcc generated code looks like this:

csretcc void %foo(%struct.s* %agg.result) {
entry:
        %S = alloca %struct.s, align 4          ; <%struct.s*> [#uses=1]
        %memtmp = alloca %struct.s              ; <%struct.s*> [#uses=1]
        cast %struct.s* %S to sbyte*            ; <sbyte*>:0 [#uses=2]
        call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
        cast %struct.s* %agg.result to sbyte*   ; <sbyte*>:1 [#uses=2]
        call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
        cast %struct.s* %memtmp to sbyte*       ; <sbyte*>:2 [#uses=1]
        call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )
        ret void
}

llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from the
constantpool). Perhaps we should 1) fix llvm-gcc so the memcpy is translated
into a number of loads and stores, or 2) custom lower small memcpy's to
ldmia / stmia. I think option 2 is better, but the current register allocator
cannot allocate a chunk of registers at a time.

A feasible temporary solution is to use specific physical registers at
lowering time for small (<= 4 words?) transfer sizes.

* The ARM CSRet calling convention requires the hidden argument to be returned
by the callee.

//===---------------------------------------------------------------------===//

We can definitely do a better job on BB placements to eliminate some branches.
It's very common to see llvm generated assembly code that looks like this:

LBB3:
        ...
LBB4:
        ...
        beq LBB3
        b LBB2

If BB4 is the only predecessor of BB3, then we can emit BB3 after BB4. We can
then eliminate the beq and turn the unconditional branch to LBB2 into a bne.

See McCat/18-imp/ComputeBoundingBoxes for an example.

//===---------------------------------------------------------------------===//

Register scavenging is now implemented. The example in the previous version
of this document produces optimal code at -O2.

//===---------------------------------------------------------------------===//

Pre-/post- indexed load / stores:

1) We should not make the pre/post-indexed load/store transform if the base
pointer is guaranteed to be live beyond the load/store. This can happen if the
base pointer is live out of the block where we are performing the
optimization, e.g.

        mov r1, r2
        ldr r3, [r1], #4
        ...

vs.

        ldr r3, [r2]
        add r1, r2, #4
        ...

In most cases, this is just a wasted optimization. However, sometimes it can
negatively impact performance because two-address code is more restrictive
when it comes to scheduling.

Unfortunately, liveout information is currently unavailable during DAG combine
time.

2) Consider splitting an indexed load / store into a pair of add/sub +
   load/store to solve #1 (in TwoAddressInstructionPass.cpp).

3) Enhance LSR to generate more opportunities for indexed ops; see the loop
   sketch after this list.

4) Once we add support for multiple result patterns, write indexed load
   patterns instead of C++ instruction selection code.

5) Use FLDM / FSTM to emulate indexed FP load / store.
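
For item 3, this is the kind of loop LSR should expose as post-indexed
loads / stores (a sketch; the names are made up):

void copy_words(int *d, const int *s, int n) {
  /* Each iteration should become something like:
         ldr r3, [r1], #4
         str r3, [r0], #4
     i.e. the pointer increments fold into the load / store. */
  while (n-- > 0)
    *d++ = *s++;
}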

//===---------------------------------------------------------------------===//

We should add i64 support to take advantage of the 64-bit load / stores.
We can add a pseudo i64 register class containing pseudo registers that are
register pairs. All other ops (e.g. add, sub) would be expanded as usual.
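
For instance, a plain i64 copy ought to become a load / store multiple on a
register pair (a sketch; the name is made up):

void copy64(long long *d, const long long *s) {
  /* With paired registers this could presumably select to
         ldmia r1, {r2, r3}
         stmia r0, {r2, r3}
     instead of two independent ldr / str pairs. */
  *d = *s;
}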

We need to add pseudo instructions (i.e. gethi / getlo) to extract i32
registers from the i64 register. These are single moves which can be
eliminated if the destination register is a sub-register of the source. We
should implement proper subreg support in the register allocator to coalesce
these away.

There are other minor issues, such as needing multiple instructions for a
spill / restore / move.

//===---------------------------------------------------------------------===//

Implement support for some more tricky ways to materialize immediates. For
example, to get 0xffff8000 we can use:

        mov r9, #&3f8000
        sub r9, r9, #&400000

(0x003f8000 - 0x00400000 wraps to 0xffff8000, and both operands are
representable as rotated 8-bit immediates.)

//===---------------------------------------------------------------------===//

We sometimes generate multiple add / sub instructions to update sp in the
prologue and epilogue if the inc / dec value is too large to fit in a single
immediate operand. In some cases it might be better to load the value from a
constantpool instead.

//===---------------------------------------------------------------------===//

GCC generates significantly better code for this function:

int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
    int i = 0;

    if (StackPtr != 0) {
        while (StackPtr != 0 && i < (((LineLen) < (32768)) ? (LineLen) : (32768)))
            Line[i++] = Stack[--StackPtr];
        if (LineLen > 32768) {
            while (StackPtr != 0 && i < LineLen) {
                i++;
                --StackPtr;
            }
        }
    }
    return StackPtr;
}

//===---------------------------------------------------------------------===//

This should compile to the mlas instruction:
int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }

//===---------------------------------------------------------------------===//

At some point, we should triage these to see if they still apply to us:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663

http://www.inf.u-szeged.hu/gcc-arm/
http://citeseer.ist.psu.edu/debus04linktime.html

//===---------------------------------------------------------------------===//

(CoreGraphics): gcc generates smaller code for this function at -O2 or -Os:

void foo(signed char* p) {
  if (*p == 3)
    *p = 1;
  else if (*p == 4)
    *p = 2;
  else if (*p == 5)
    *p = 3;
}

llvm decides it's a good idea to turn the repeated if...else into a binary
tree, as if it were a switch; the resulting code requires one fewer
compare-and-branch when *p<=2 or *p==5, the same number if *p==4 or *p>6, and
one more if *p==3. So it should be a speed win on balance. However, the
revised code is larger, with 4 conditional branches instead of 3.

More seriously, there is a byte->word extend before each comparison, where
there should be only one, and the condition codes are not remembered when the
same two values are compared twice.
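
A source-level sketch of the single-extend form we would like to reach
(a hypothetical workaround, assuming the body reconstructed above): widen
*p into an int once, so every comparison reuses the already-extended value:

void foo2(signed char* p) {
  int v = *p;   /* one ldrsb performs the byte->word extend */
  if (v == 3)
    *p = 1;
  else if (v == 4)
    *p = 2;
  else if (v == 5)
    *p = 3;
}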