//===---------------------------------------------------------------------===//
// Random ideas for the X86 backend.
//===---------------------------------------------------------------------===//
Add MUL2U and MUL2S nodes to represent a multiply that returns both the
Hi and Lo parts (a combination of MUL and MULH[SU] in one node). Add this to
X86, and make the dag combiner produce it when needed. This will eliminate one
imul from the code generated for:

long long test(long long X, long long Y) { return X*Y; }

by using the EAX result from the mul. We should add a similar node for
int*int -> long long multiplies:

long long test(int X, int Y) { return (long long)X*Y; }

... which should only be one imul instruction.

//===---------------------------------------------------------------------===//

This should be one DIV/IDIV instruction, not a libcall:

unsigned test(unsigned long long X, unsigned Y) {
  return X/Y;
}

This can be done trivially with a custom legalizer. What about overflow
though? http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14224
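
For concreteness, a sketch of the overflow hazard (an assumed example, not
from the original note): a quotient that does not fit in 32 bits makes a bare
32-bit DIV fault, while the libcall just truncates the full 64-bit quotient.

/* assumed illustration: the quotient 0x100000000 does not fit in EAX, so a
   single DIV would raise #DE, whereas the current libcall computes the full
   64-bit quotient and the C code truncates it */
unsigned overflow_case(void) {
  unsigned long long X = 0x100000000ULL;   /* 2^32 */
  unsigned Y = 1;
  return X / Y;   /* C semantics: 64-bit divide, then truncate to 32 bits */
}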

//===---------------------------------------------------------------------===//

Improvements to the multiply -> shift/add algorithm:
http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html

//===---------------------------------------------------------------------===//

Improve code like this (occurs fairly frequently, e.g. in LLVM):

long long foo(int x) { return 1LL << x; }

http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html

Another useful one would be ~0ULL >> X and ~0ULL << X.

//===---------------------------------------------------------------------===//

_Bool f(_Bool a) { return a!=1; }

//===---------------------------------------------------------------------===//

1. Dynamic programming based approach when compile time is not an issue.
2. Code duplication (addressing mode) during isel.
3. Other ideas from "Register-Sensitive Selection, Duplication, and
   Sequencing of Instructions".
4. Scheduling for reduced register pressure. E.g. "Minimum Register
   Instruction Sequence Problem: Revisiting Optimal Code Generation for DAGs"
   and other related papers.
   http://citeseer.ist.psu.edu/govindarajan01minimum.html

//===---------------------------------------------------------------------===//

Should we promote i16 to i32 to avoid partial register update stalls?
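
For illustration (an assumed example, not from this note): if arithmetic like
the following is done with 16-bit instructions, only the low half of the
register is written, and a later read of the full 32-bit register stalls on
some processors; promoting to i32 avoids that.

/* assumed example: computed with 16-bit ops this writes only %ax;
   promoted to i32 it becomes a full 32-bit add plus a truncating use */
unsigned short inc16(unsigned short x) { return x + 1; }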

//===---------------------------------------------------------------------===//

Leave any_extend as a pseudo instruction and hint to the register
allocator. Delay codegen until post register allocation.

//===---------------------------------------------------------------------===//

Model X86 EFLAGS as a real register to avoid redundant cmp / test. e.g.

        testb %al, %al          # unnecessary
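
A source-level pattern that can produce this (an assumed example; the exact
output depends on how the select is lowered) is a compare whose boolean result
is immediately re-tested:

/* assumed example: the compare already sets EFLAGS, but the boolean is
   materialized with setcc and then re-tested before the conditional branch */
int sel(int x) { return (x > 1) ? 3 : 4; }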

//===---------------------------------------------------------------------===//

Count leading zeros and count trailing zeros:

int clz(int X) { return __builtin_clz(X); }
int ctz(int X) { return __builtin_ctz(X); }

$ gcc t.c -S -o - -O3 -fomit-frame-pointer -masm=intel

        bsr %eax, DWORD PTR [%esp+4]
        bsf %eax, DWORD PTR [%esp+4]

however, check that these are defined for 0 and 32. Our intrinsics are, GCC's
aren't.
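
The zero case is the interesting one: bsr/bsf leave their destination
undefined when the source is 0, and __builtin_clz(0) is undefined in GCC, so
a fully-defined clz has to handle it explicitly. A minimal sketch (assumed,
not from the original note):

/* assumed sketch: guard the zero input explicitly if the intrinsic
   semantics require a defined result for 0 */
int clz_checked(unsigned X) { return X ? __builtin_clz(X) : 32; }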

//===---------------------------------------------------------------------===//

Use push/pop instructions in prolog/epilog sequences instead of stores off
ESP (certain code size win, perf win on some [which?] processors).
Also, it appears icc uses push for parameter passing. Need to investigate.

//===---------------------------------------------------------------------===//

Only use inc/neg/not instructions on processors where they are faster than
add/sub/xor. They are slower on the P4 due to only updating some of the
processor flags.

//===---------------------------------------------------------------------===//

The instruction selector sometimes misses folding a load into a compare. The
pattern is written as (cmp reg, (load p)). Because the compare isn't
commutative, it is not matched with the load on both sides. The dag combiner
should be made smart enough to canonicalize the load into the RHS of a compare
when it can invert the result of the compare for free.
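
A source-level case (assumed here for illustration) is a compare whose memory
operand sits on the left-hand side, so folding it requires flipping the
condition:

/* assumed example: this produces (cmp (load p), x); under the premise that
   the pattern only matches the load on the RHS, the combiner would have to
   rewrite it as (cmp x, (load p)) with the condition inverted */
int cmp_load(int *p, int x) { return *p < x; }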

How about intrinsics? An example is:

  *res = _mm_mulhi_epu16(*A, _mm_mul_epu32(*B, *C));

which compiles to:

  pmuludq (%eax), %xmm0

The transformation probably requires an X86-specific pass or a DAG combiner
target-specific hook.

//===---------------------------------------------------------------------===//

The DAG Isel doesn't fold the loads into the adds in this testcase. The
pattern selector does. This is because the chain value of the load gets
selected first, and the loads aren't checking to see if they are only used by
the add.

int %test(int* %x, int* %y, int* %z) {

This is bad for register pressure, though the dag isel is producing a
better schedule.

//===---------------------------------------------------------------------===//

In many cases, LLVM generates a cmp/setcc followed by a movzbl to materialize
a boolean result. On some processors (which ones?), it is more efficient to
xor the destination register to zero first, then cmp and setcc into its low
byte, avoiding the movzbl. Doing this correctly is tricky though, as the xor
clobbers the flags, so it has to be emitted before the compare.
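
A source pattern that produces the sequence in question (an assumed example,
not from the original note):

/* assumed example: returns a bool materialized via cmp/setl followed by a
   zero extension of the setcc result into the full 32-bit register */
int lt(int a, int b) { return a < b; }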

//===---------------------------------------------------------------------===//

We should generate 'test' instead of 'cmp' in various cases, e.g.:

  %Y = shl int %X, ubyte 1

This may just be a matter of using 'test' to write bigger patterns for X86cmp.

An important case is comparison against zero.
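
For concreteness, an assumed C-level instance (not taken from the original
note) where 'test' should be used instead of 'cmp $0':

/* assumed example: x == 0 needs no immediate; test sets ZF from x & x */
int is_zero(int x) { return x == 0; }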

//===---------------------------------------------------------------------===//

We should generate bts/btr/etc instructions on targets where they are cheap or
when codesize is important. e.g., for:

void setbit(int *target, int bit) {
  *target |= (1 << bit);
}
void clearbit(int *target, int bit) {
  *target &= ~(1 << bit);
}

//===---------------------------------------------------------------------===//

Instead of the following for memset char*, 1, 10:

  movl $16843009, 4(%edx)
  movl $16843009, (%edx)

it might be better to materialize the 0x01010101 constant in a register once
and store from that register, when we can spare one. It reduces code size.

//===---------------------------------------------------------------------===//

Evaluate what the best way to codegen sdiv X, (2^C) is. For X/8, we currently
generate the usual sign-copy / bias / arithmetic-shift-right sequence. GCC
knows several different ways to codegen it; one of its alternatives is
probably slower, but it's interesting at least :)
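
The bias trick in question, written out in C (a sketch, assuming an
arithmetic right shift for signed ints, which X86 codegen can rely on):

/* assumed sketch of branch-free x/8 with round-toward-zero:
   add 7 to negative values before the arithmetic shift right by 3 */
int sdiv8(int x) {
  int bias = (x >> 31) & 7;   /* 7 if x is negative, 0 otherwise */
  return (x + bias) >> 3;
}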

//===---------------------------------------------------------------------===//

Should generate min/max for stuff like:

void minf(float a, float b, float *X) {
  *X = (a < b) ? a : b;
}

Make use of floating point min / max instructions. Perhaps introduce ISD::FMIN
and ISD::FMAX node types?

//===---------------------------------------------------------------------===//

The first BB of this code:

  %V = call bool %foo()
  br bool %V, label %T, label %F

It would be better to emit "cmp %al, 1" than a xor and test.

//===---------------------------------------------------------------------===//

Enable X86InstrInfo::convertToThreeAddress().

//===---------------------------------------------------------------------===//

Investigate whether it is better to codegen the following

  %tmp.1 = mul int %x, 9

as

  leal (%eax,%eax,8), %eax

as opposed to what llc is currently generating:

  imull $9, 4(%esp), %eax

Currently the load folding imull has a higher complexity than the LEA32 pattern.

//===---------------------------------------------------------------------===//

We are currently lowering large (1MB+) memmove/memcpy to rep/stosl and rep/movsl.
We should leave these as libcalls for everything over a much lower threshold,
since libc is hand tuned for medium and large mem ops (avoiding RFO for large
stores, TLB preheating, etc.).

//===---------------------------------------------------------------------===//

Optimize this into something reasonable:
 x * copysign(1.0, y) * copysign(1.0, z)
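
One possible target form (a sketch of what "reasonable" might mean here, not
a statement of the intended output): the two copysign factors only flip x's
sign, so they reduce to a sign-bit comparison of y and z.

#include <math.h>
/* assumed sketch: the product is -x when y and z have opposite sign bits,
   and x otherwise */
double reduced(double x, double y, double z) {
  return ((!!signbit(y)) != (!!signbit(z))) ? -x : x;
}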

//===---------------------------------------------------------------------===//

Optimize copysign(x, *y) to use an integer load from y.
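
A rough C model of the idea (assumed, for illustration only): the sign of *y
can be fetched with an integer load and spliced into x's bits, so no FP load
of *y is needed.

#include <stdint.h>
#include <string.h>
/* assumed sketch: read the sign bit of *y via an integer load and merge it
   into x, instead of loading *y into an FP register for copysign */
double copysign_intload(double x, const double *y) {
  uint64_t xbits, ybits;
  memcpy(&xbits, &x, sizeof xbits);
  memcpy(&ybits, y, sizeof ybits);
  xbits = (xbits & ~(1ULL << 63)) | (ybits & (1ULL << 63));
  memcpy(&x, &xbits, sizeof x);
  return x;
}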

//===---------------------------------------------------------------------===//

%X = weak global int 0

%N = cast int %N to uint
%tmp.24 = setgt int %N, 0
br bool %tmp.24, label %no_exit, label %return

%indvar = phi uint [ 0, %entry ], [ %indvar.next, %no_exit ]
%i.0.0 = cast uint %indvar to int
volatile store int %i.0.0, int* %X
%indvar.next = add uint %indvar, 1
%exitcond = seteq uint %indvar.next, %N
br bool %exitcond, label %return, label %no_exit

        jl LBB_foo_4    # return
LBB_foo_1:      # no_exit.preheader
        movl L_X$non_lazy_ptr, %edx
        jne LBB_foo_2   # no_exit
LBB_foo_3:      # return.loopexit

We should hoist "movl L_X$non_lazy_ptr, %edx" out of the loop after
rematerialization is implemented. This can be accomplished with 1) a target
dependent LICM pass or 2) making SelectionDAG represent the whole function.

//===---------------------------------------------------------------------===//

The following tests perform worse with LSR:

lambda, siod, optimizer-eval, ackermann, hash2, nestedloop, strcat, and Treesort.

//===---------------------------------------------------------------------===//

Teach the coalescer to coalesce vregs of different register classes. e.g. FR32 /

//===---------------------------------------------------------------------===//

Obviously it would have been better for the first mov (or any op) to store
directly to (%esp) if there are no other uses.

//===---------------------------------------------------------------------===//

Adding to the list of cmp / test poor codegen issues:

int test(__m128 *A, __m128 *B) {
  if (_mm_comige_ss(*A, *B))

Note the setae, movzbl, cmpl, and cmove can be replaced with a single cmovae.
There are a number of issues. 1) We are introducing a setcc between the result
of the intrinsic call and the select. 2) The intrinsic is expected to produce
an i32 value, so an any_extend (which becomes a zero extend) is added.

We probably need some kind of target DAG combine hook to fix this.

//===---------------------------------------------------------------------===//

We generate significantly worse code for this than GCC:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21150
http://gcc.gnu.org/bugzilla/attachment.cgi?id=8701

There is also one case we do worse on PPC.

//===---------------------------------------------------------------------===//

If shorter, we should use things like:

  movzwl %ax, %eax

instead of:

  andl $65535, %eax

The former can also be used when the two-addressy nature of the 'and' would
require a copy to be inserted (in X86InstrInfo::convertToThreeAddress).

//===---------------------------------------------------------------------===//

This testcase generates ugly code, probably due to costs being off or something:

void %test(float* %P, <4 x float>* %P2 ) {
  %xFloat0.688 = load float* %P
  %loadVector37.712 = load <4 x float>* %P2
  %inFloat3.713 = insertelement <4 x float> %loadVector37.712, float 0.000000e+00, uint 3
  store <4 x float> %inFloat3.713, <4 x float>* %P2
  ret void
}

  movd %xmm0, %eax        ;; EAX = 0!
  pinsrw $6, %eax, %xmm0
  shrl $16, %eax          ;; EAX = 0 again!
  pinsrw $7, %eax, %xmm0

It would be better to generate:

  pinsrw $6, %eax, %xmm0
  pinsrw $7, %eax, %xmm0

or use pxor (to make a zero vector) and shuffle (to insert it).
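
A sketch of the pxor + shuffle form at the intrinsics level (an assumed
illustration; the exact shuffles are not from the original note):

#include <xmmintrin.h>
/* assumed sketch: build a zero vector once and shuffle it into lane 3,
   rather than round-tripping zero through an integer register and pinsrw */
__m128 insert_zero_lane3(__m128 v) {
  __m128 zero = _mm_setzero_ps();                                /* xorps */
  __m128 hi = _mm_shuffle_ps(v, zero, _MM_SHUFFLE(0, 0, 2, 2));  /* [v2, v2, 0, 0] */
  return _mm_shuffle_ps(v, hi, _MM_SHUFFLE(2, 0, 1, 0));         /* [v0, v1, v2, 0] */
}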

//===---------------------------------------------------------------------===//

char foo(int x) { return x; }

//===---------------------------------------------------------------------===//