1 //===- README_X86_64.txt - Notes for X86-64 code gen ----------------------===//
Implement different PIC models? Right now we only support Mac OS X with the
small PIC code model.
6 //===---------------------------------------------------------------------===//
8 Make use of "Red Zone".
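For reference, the SysV AMD64 ABI reserves the 128 bytes below %rsp as a "red
zone" that leaf functions may use without adjusting the stack pointer. A
minimal sketch (the function name is hypothetical):

int leaf(int x) {
  int buf[4];                 /* 16 bytes of locals, well under 128 */
  buf[0] = x;
  buf[1] = x + 1;
  return buf[0] + buf[1];
}

A codegen exploiting the red zone can address buf at negative offsets from
%rsp (e.g. movl %edi, -8(%rsp)) and drop the subq/addq stack adjustment
entirely.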
10 //===---------------------------------------------------------------------===//
12 Implement __int128 and long double support.
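A sketch of what needs to work (function names are hypothetical), assuming the
usual x86-64 mapping of __int128 to a register pair and long double to the
80-bit x87 format:

#include <stdint.h>

__int128 mul128(int64_t a, int64_t b) {
  return (__int128)a * b;     /* one imulq: 64x64 -> 128 */
}

long double scale(long double x) {
  return x * 3.0L;            /* 80-bit x87 arithmetic */
}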
14 //===---------------------------------------------------------------------===//
29 We need to do the tailcall optimization as well.
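A minimal sketch of the case in question (names are hypothetical): a call in
tail position should compile to a single jmp rather than call+ret:

extern int callee(int x);

int caller(int x) {
  return callee(x + 1);       /* tail call: jmp callee after the add */
}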
31 //===---------------------------------------------------------------------===//
41 leal (%edi,%edi,2), %eax
43 We should be generating
44 leal (%rdi,%rdi,2), %eax
instead. The latter form does not require the 67H address-size prefix.

It's probably ok to simply emit the corresponding 64-bit super class registers
in this case: since the destination register is 32 bits, the low 32 bits of
the result are the same either way.
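For reference, a source sketch that typically produces this lea (assuming the
argument arrives zero-extended in %rdi per the ABI):

int times3(int x) {
  return x * 3;               /* want: leal (%rdi,%rdi,2), %eax */
}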
52 //===---------------------------------------------------------------------===//
54 AMD64 Optimization Manual 8.2 has some nice information about optimizing integer
55 multiplication by a constant. How much of it applies to Intel's X86-64
56 implementation? There are definite trade-offs to consider: latency vs. register
57 pressure vs. code size.
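As a sketch of the kind of decomposition the manual discusses (the constants 9
and 45 are arbitrary examples), multiplication can be rewritten in
lea-friendly shift-and-add form:

long mul9(long x)  { return (x << 3) + x; }        /* leaq (%rdi,%rdi,8), %rax */
long mul45(long x) { return ((x << 3) + x) * 5; }  /* two leaq instructions    */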
59 //===---------------------------------------------------------------------===//
Are we better off using branches instead of cmove to implement FP to
unsigned i64 conversion?
65 ucomiss LC0(%rip), %xmm0
66 cvttss2siq %xmm0, %rdx
68 subss LC0(%rip), %xmm0
69 movabsq $-9223372036854775808, %rax
70 cvttss2siq %xmm0, %rdx
79 movss LCPI1_0(%rip), %xmm1
80 cvttss2siq %xmm0, %rcx
83 cvttss2siq %xmm2, %rax
84 movabsq $-9223372036854775808, %rdx
Seems like the jb branch has a high likelihood of being taken. It would have
saved a few instructions.
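For reference, the conversion being lowered is float to unsigned i64;
cvttss2siq only covers the signed range, so the standard expansion is (a
sketch, with the compare constant 2^63 matching LC0 above):

#include <stdint.h>

uint64_t f2u64(float x) {
  if (x < 9223372036854775808.0f)                 /* 2^63: ucomiss + jb */
    return (uint64_t)(int64_t)x;                  /* fast path          */
  return (uint64_t)(int64_t)(x - 9223372036854775808.0f)
         | 0x8000000000000000ULL;                 /* the movabsq constant */
}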
93 //===---------------------------------------------------------------------===//
100 memset(X, b, 2*sizeof(X[0]));
104 movq _b@GOTPCREL(%rip), %rax
114 movq _X@GOTPCREL(%rip), %rdx
120 movq _b@GOTPCREL(%rip), %rax
121 movabsq $72340172838076673, %rdx
124 movq _X@GOTPCREL(%rip), %rdx
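The movabsq constant above is 0x0101010101010101; multiplying the byte b by it
splats b across a 64-bit word, which is how gcc expands this small memset
inline. A sketch (the declarations of X and b are assumptions):

#include <stdint.h>

uint64_t X[2];
uint8_t  b;

void set_X(void) {
  uint64_t splat = (uint64_t)b * 0x0101010101010101ULL;  /* 72340172838076673 */
  X[0] = splat;                                          /* two 8-byte stores */
  X[1] = splat;                                          /* instead of memset */
}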
128 //===---------------------------------------------------------------------===//
Vararg function prologues can be further optimized. Currently all XMM registers
are stored into the register save area. Most of these stores can be eliminated
since an upper bound on the number of XMM registers used is passed in %al. gcc
produces something like the following:
136 leaq 0(,%rdx,4), %rax
137 leaq 4+L2(%rip), %rdx
140 movaps %xmm7, -15(%rax)
141 movaps %xmm6, -31(%rax)
142 movaps %xmm5, -47(%rax)
143 movaps %xmm4, -63(%rax)
144 movaps %xmm3, -79(%rax)
145 movaps %xmm2, -95(%rax)
146 movaps %xmm1, -111(%rax)
147 movaps %xmm0, -127(%rax)
It jumps over the movaps stores that are not needed. It is hard to see this
being significant, as it adds 5 instructions (including an indirect branch) to
avoid executing 0 to 8 stores in the function prologue.
Perhaps we can optimize for the common case where no XMM registers are used for
parameter passing, i.e. if %al == 0, jump over all the stores. Or, in the case
of a leaf function where we can determine that no XMM input parameter is
needed, avoid emitting the stores at all.
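For reference, the ABI contract that makes this possible: at each call to a
varargs function the caller sets %al to an upper bound on the number of XMM
registers used by the call, so the prologue may consult it before spilling. A
sketch (the function name is hypothetical):

#include <stdarg.h>

double sum(int n, ...) {
  va_list ap;
  double total = 0.0;
  va_start(ap, n);                     /* needs the register save area */
  for (int i = 0; i < n; i++)
    total += va_arg(ap, double);
  va_end(ap);
  return total;
}

A call such as sum(2, 1.0, 2.0) is preceded by movl $2, %eax; a call passing
no FP arguments gets xorl %eax, %eax, which is exactly the %al == 0 fast path
described above.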
159 //===---------------------------------------------------------------------===//
AMD64 has a complex calling convention for passing aggregates by value (a code
sketch of the merge rules follows the list below):
1. If the size of an object is larger than two eightbytes, or if, in C++, it is
a non-POD structure or union type, or contains unaligned fields, it has
class MEMORY.
166 2. Both eightbytes get initialized to class NO_CLASS.
167 3. Each field of an object is classified recursively so that always two fields
168 are considered. The resulting class is calculated according to the classes
169 of the fields in the eightbyte:
170 (a) If both classes are equal, this is the resulting class.
(b) If one of the classes is NO_CLASS, the resulting class is the other
class.
173 (c) If one of the classes is MEMORY, the result is the MEMORY class.
(d) If one of the classes is INTEGER, the result is INTEGER.
(e) If one of the classes is X87, X87UP, or COMPLEX_X87, MEMORY is used as
the resulting class.
177 (f) Otherwise class SSE is used.
4. Then a post-merge cleanup is done:
(a) If one of the classes is MEMORY, the whole argument is passed in memory.
(b) If SSEUP is not preceded by SSE, it is converted to SSE.
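A minimal sketch of the merge step in rule 3, assuming a hypothetical enum
whose names follow the ABI document:

enum Class { NO_CLASS, INTEGER, SSE, SSEUP, X87, X87UP,
             COMPLEX_X87, MEMORY };

enum Class merge(enum Class a, enum Class b) {
  if (a == b) return a;                              /* rule (a) */
  if (a == NO_CLASS) return b;                       /* rule (b) */
  if (b == NO_CLASS) return a;
  if (a == MEMORY || b == MEMORY) return MEMORY;     /* rule (c) */
  if (a == INTEGER || b == INTEGER) return INTEGER;  /* rule (d) */
  if (a == X87 || a == X87UP || a == COMPLEX_X87 ||
      b == X87 || b == X87UP || b == COMPLEX_X87)
    return MEMORY;                                   /* rule (e) */
  return SSE;                                        /* rule (f) */
}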
Currently the LLVM frontend does not handle this correctly.

Problem 1:
185 typedef struct { int i; double d; } QuadWordS;
It is currently passed in two i64 integer registers. However, a gcc-compiled
callee expects the second element 'd' to be passed in XMM0.

Problem 2:
190 typedef struct { int32_t i; float j; double d; } QuadWordS;
The first two fields together have the size of one i64, so they will be
combined and passed in an integer register (RDI). The third field is still
passed in XMM0.

Problem 3:
195 typedef struct { int64_t i; int8_t j; int64_t d; } S;
The size of this aggregate is greater than two i64s, so it should be passed in
memory. Currently LLVM breaks it down and passes it in three integer registers.
Problem 4:
Taking problem 3 one step further, a function expects an aggregate value in
memory followed by more parameters passed in registers:
204 void test(S s, int b)
LLVM IR does not allow passing aggregates as parameters, so the frontend must
break the aggregate value (in problems 3 and 4) into a number of scalar values:
208 void %test(long %s.i, byte %s.j, long %s.d);
However, if the backend were to lower this code literally, it would pass the 3
values in integer registers. To force them to be passed in memory, the frontend
should change the function signature to:
213 void %test(long %undef1, long %undef2, long %undef3, long %undef4,
214 long %undef5, long %undef6,
215 long %s.i, byte %s.j, long %s.d);
And the call site would look something like this:
217 call void %test( undef, undef, undef, undef, undef, undef,
218 %tmp.s.i, %tmp.s.j, %tmp.s.d );
The first 6 undef parameters would exhaust the 6 integer registers used for
parameter passing. The following three integer values would then be forced into
memory.
For problem 4, the parameter 'b' would be moved to the front of the parameter
list so that it will be passed in a register:
void %test(int %b,
226 long %undef1, long %undef2, long %undef3, long %undef4,
227 long %undef5, long %undef6,
228 long %s.i, byte %s.j, long %s.d);
230 //===---------------------------------------------------------------------===//
We generate this code for the static relocation model:
244 leaq _dst(%rip), %rax
245 movq %rax, _ptr(%rip)
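For reference, a minimal source sketch that produces this pattern (dst and ptr
are assumptions inferred from the labels):

char  dst[16];
char *ptr;

void set_ptr(void) {
  ptr = dst;    /* leaq _dst(%rip), %rax ; movq %rax, _ptr(%rip) */
}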
If we are in the small code model, then we can treat _dst as a 32-bit constant:
249 movq $_dst, _ptr(%rip)
Note, however, that we should continue to use RIP-relative addressing as much
as possible. The above is actually one byte shorter than
movq $_dst, _ptr
A better example is the code from PR1018. We are generating:
leaq xcalloc2(%rip), %rax
movq %rax, 8(%rsp)
when we should be generating:
movq $xcalloc2, 8(%rsp)
The reason the better codegen isn't done now is support for the static small
code model in JIT mode. The JIT cannot ensure that all GVs are placed in the
lower 4G, so we are not treating GV labels as 32-bit values.
265 //===---------------------------------------------------------------------===//
Right now the asm printer assumes GlobalAddress nodes are accessed via
RIP-relative addressing. Therefore, it is not possible to generate this:
269 movabsq $__ZTV10polynomialIdE+16, %rax
That is ok for now since we currently only support the small code model. So
instead of the above, we emit:
273 leaq __ZTV10polynomialIdE+16(%rip), %rax
This is probably slightly slower, but it is much shorter than movabsq. However,
if we were to support medium or larger code models, we would need to use the
movabs instruction. We should probably introduce something like AbsoluteAddress
to distinguish it from GlobalAddress so that the asm printer and JIT code
emitter can do the right thing.