//===---------------------------------------------------------------------===//
// Random ideas for the X86 backend.
//===---------------------------------------------------------------------===//

Add MUL2U and MUL2S nodes to represent a multiply that returns both the
Hi and Lo parts (a combination of MUL and MULH[SU] in one node). Add this to
X86, and make the dag combiner produce it when needed. This will eliminate one
imul from the code generated for:

long long test(long long X, long long Y) { return X*Y; }

by using the EAX result from the mul. We should add a similar node for
DIV.

Another example is:

long long test(int X, int Y) { return (long long)X*Y; }

... which should only be one imul instruction.

//===---------------------------------------------------------------------===//

This should be one DIV/IDIV instruction, not a libcall:

unsigned test(unsigned long long X, unsigned Y) {
        return X/Y;
}

This can be done trivially with a custom legalizer. What about overflow
though? http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14224

//===---------------------------------------------------------------------===//

Some targets (e.g. Athlons) prefer ffreep to fstp ST(0):
http://gcc.gnu.org/ml/gcc-patches/2004-04/msg00659.html

//===---------------------------------------------------------------------===//

This should use fiadd on chips where it is profitable:
double foo(double P, int *I) { return P+*I; }

We have fiadd patterns now, but the following two have the same cost and
complexity. We need a way to specify that the latter is more profitable.

def FpADD32m  : FpI<(ops RFP:$dst, RFP:$src1, f32mem:$src2), OneArgFPRW,
                    [(set RFP:$dst, (fadd RFP:$src1,
                                     (extloadf64f32 addr:$src2)))]>;
                    // ST(0) = ST(0) + [mem32]

def FpIADD32m : FpI<(ops RFP:$dst, RFP:$src1, i32mem:$src2), OneArgFPRW,
                    [(set RFP:$dst, (fadd RFP:$src1,
                                     (X86fild addr:$src2, i32)))]>;
                    // ST(0) = ST(0) + [mem32int]

//===---------------------------------------------------------------------===//

The FP stackifier needs to be global. Also, it should handle simple
permutations to reduce the number of shuffle instructions, e.g. turning:

fld P       ->      fld Q
fld Q               fld P
fxch

or:

fxch        ->      fucomi
fucomi              jl X
jl X

http://gcc.gnu.org/ml/gcc-patches/2004-11/msg02410.html

//===---------------------------------------------------------------------===//

Improvements to the multiply -> shift/add algorithm:
http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html

//===---------------------------------------------------------------------===//

Improve code like this (occurs fairly frequently, e.g. in LLVM):

long long foo(int x) { return 1LL << x; }

http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html

Another useful one would be ~0ULL >> X and ~0ULL << X.
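
For instance (illustrative helpers; the names are made up):

unsigned long long mask_low(int x)  { return ~0ULL >> x; }  /* low 64-x bits set */
unsigned long long mask_high(int x) { return ~0ULL << x; }  /* high 64-x bits set */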

//===---------------------------------------------------------------------===//

Generate better code for:

_Bool f(_Bool a) { return a!=1; }

//===---------------------------------------------------------------------===//

Some isel ideas:

1. Dynamic programming based approach when compile time is not an
   issue.
2. Code duplication (addressing mode) during isel.
3. Other ideas from "Register-Sensitive Selection, Duplication, and
   Sequencing of Instructions".
4. Scheduling for reduced register pressure. E.g. "Minimum Register
   Instruction Sequence Problem: Revisiting Optimal Code Generation for DAGs"
   and other related papers.
   http://citeseer.ist.psu.edu/govindarajan01minimum.html

//===---------------------------------------------------------------------===//

Should we promote i16 to i32 to avoid partial register update stalls?
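
For example (a sketch of the kind of code affected; the function is made up):

unsigned short sum16(unsigned short a, unsigned short b) {
  return a + b;  /* a 16-bit add writes only the low half of a 32-bit reg */
}

Promoting the add to i32 and truncating only at the use would avoid the stall.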

//===---------------------------------------------------------------------===//

Leave any_extend as a pseudo instruction and hint to the register
allocator. Delay codegen until post-register allocation.

//===---------------------------------------------------------------------===//

Add a target specific hook to the DAG combiner to handle SINT_TO_FP and
FP_TO_SINT when the source operand is already in memory.
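
For example (a made-up function):

double load_and_convert(int *p) {
  return (double)*p;  /* fild can convert directly from [mem] */
}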

//===---------------------------------------------------------------------===//

Model X86 EFLAGS as a real register to avoid redundant cmp / test. e.g. after a
setcc the current code re-derives the flags from the materialized result before
branching:

        testb %al, %al  # unnecessary

//===---------------------------------------------------------------------===//

Count leading zeros and count trailing zeros:

int clz(int X) { return __builtin_clz(X); }
int ctz(int X) { return __builtin_ctz(X); }

$ gcc t.c -S -o - -O3 -fomit-frame-pointer -masm=intel
clz:
        bsr %eax, DWORD PTR [%esp+4]
        xor %eax, 31
        ret
ctz:
        bsf %eax, DWORD PTR [%esp+4]
        ret

however, check that these are defined for 0 and 32. Our intrinsics are, GCC's
aren't.

//===---------------------------------------------------------------------===//

Use push/pop instructions in prolog/epilog sequences instead of stores off
ESP (certain code size win, perf win on some [which?] processors).
Also, it appears icc uses push for parameter passing. Need to investigate.

//===---------------------------------------------------------------------===//

Only use inc/neg/not instructions on processors where they are faster than
add/sub/xor. They are slower on the P4 due to only updating some processor
flags.

//===---------------------------------------------------------------------===//

Open code rint, floor, ceil, trunc:
http://gcc.gnu.org/ml/gcc-patches/2004-08/msg02006.html
http://gcc.gnu.org/ml/gcc-patches/2004-08/msg02011.html

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
double sincos(double x, double *sin, double *cos);
float sincosf(float x, float *sin, float *cos);
long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
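
A sketch of the pattern to catch (the function is made up):

#include <math.h>

void polar_to_xy(double r, double t, double *x, double *y) {
  double s = sin(t);  /* same argument as the cos below ... */
  double c = cos(t);  /* ... so a single sincos(t, &s, &c) call would do */
  *x = r * c;
  *y = r * s;
}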

//===---------------------------------------------------------------------===//

The instruction selector sometimes misses folding a load into a compare. The
pattern is written as (cmp reg, (load p)). Because the compare isn't
commutative, it is not matched with the load on both sides. The dag combiner
should be made smart enough to canonicalize the load into the RHS of a compare
when it can invert the result of the compare for free.
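
For example (a made-up function):

int less_than_mem(int x, int *p) {
  return *p < x;  /* the load ends up on the LHS of the compare */
}

Swapping the operands and inverting the condition would let the existing
(cmp reg, (load p)) pattern fold the load.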

How about intrinsics? An example is:
  *res = _mm_mulhi_epu16(*A, _mm_mul_epu32(*B, *C));

The loads should be folded into the multiplies, e.g.:

        pmuludq (%eax), %xmm0
        ...

The transformation probably requires an X86-specific pass or a DAG combiner
target-specific hook.

//===---------------------------------------------------------------------===//

LSR should be turned on for the X86 backend and tuned to take advantage of its
addressing modes.

//===---------------------------------------------------------------------===//

When compiled with unsafe math enabled, "main" should enable SSE DAZ mode and
other fast SSE modes.

//===---------------------------------------------------------------------===//

Think about doing i64 math in SSE regs.
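
e.g. (illustrative):

unsigned long long add64(unsigned long long a, unsigned long long b) {
  return a + b;  /* add/adc in GPRs today; SSE2 paddq could do it in one op */
}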

//===---------------------------------------------------------------------===//

The DAG Isel doesn't fold the loads into the adds in this testcase. The
pattern selector does. This is because the chain value of the load gets
selected first, and the loads aren't checking to see if they are only used by
an add.

int %test(int* %x, int* %y, int* %z) {
        %X = load int* %x
        %Y = load int* %y
        %Z = load int* %z
        %a = add int %X, %Y
        %b = add int %a, %Z
        ret int %b
}

This is bad for register pressure, though the dag isel is producing a
better schedule.

//===---------------------------------------------------------------------===//

This testcase should have no SSE instructions in it, and only one load from
a constant pool:

double %test3(bool %B) {
        %C = select bool %B, double 123.412, double 523.01123123
        ret double %C
}

Currently, the select is being lowered, which prevents the dag combiner from
turning 'select (load CPI1), (load CPI2)' -> 'load (select CPI1, CPI2)'.

The pattern isel got this one right.

//===---------------------------------------------------------------------===//

We need to lower switch statements to tablejumps when appropriate instead of
always lowering them into binary branch trees.
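
For example, a dense switch like this (made up) should become an indirect jump
through a table rather than a compare-and-branch tree:

int classify(int c) {
  switch (c) {
  case 0: return 10;
  case 1: return 11;
  case 2: return 12;
  case 3: return 13;
  case 4: return 14;
  default: return -1;
  }
}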

//===---------------------------------------------------------------------===//

SSE doesn't have [mem] op= reg instructions. If we have an SSE instruction
like this:
  X += y

and the register allocator decides to spill X, it is cheaper to emit this as:

Y += [xslot]
store Y -> [xslot]

than as:

tmp = [xslot]
tmp += y
store tmp -> [xslot]

..and this uses one fewer register (so this should be done at load folding
time, not at spiller time). *Note* however that this can only be done
if Y is dead. Here's a testcase:

%.str_3 = external global [15 x sbyte]          ; <[15 x sbyte]*> [#uses=0]
implementation   ; Functions:
declare void %printf(int, ...)

void %main() {
build_tree.exit:
        br label %no_exit.i7

no_exit.i7:             ; preds = %no_exit.i7, %build_tree.exit
        %tmp.0.1.0.i9 = phi double [ 0.000000e+00, %build_tree.exit ], [ %tmp.34.i18, %no_exit.i7 ]            ; <double> [#uses=1]
        %tmp.0.0.0.i10 = phi double [ 0.000000e+00, %build_tree.exit ], [ %tmp.28.i16, %no_exit.i7 ]           ; <double> [#uses=1]
        %tmp.28.i16 = add double %tmp.0.0.0.i10, 0.000000e+00
        %tmp.34.i18 = add double %tmp.0.1.0.i9, 0.000000e+00
        br bool false, label %Compute_Tree.exit23, label %no_exit.i7

Compute_Tree.exit23:            ; preds = %no_exit.i7
        tail call void (int, ...)* %printf( int 0 )
        store double %tmp.34.i18, double* null
        ret void
}

We currently emit this loop body:

***     movsd %XMM2, QWORD PTR [%ESP + 8]
***     addsd %XMM2, %XMM1
***     movsd QWORD PTR [%ESP + 8], %XMM2
        jmp .BBmain_1   # no_exit.i7

This is a bugpoint-reduced testcase, which is why the testcase doesn't make
much sense (e.g. it's an infinite loop). :)

//===---------------------------------------------------------------------===//

None of the FPStack instructions are handled in
X86RegisterInfo::foldMemoryOperand, which prevents the spiller from
folding spill code into the instructions.

//===---------------------------------------------------------------------===//

In many cases, LLVM generates code like this:

_test:
        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        setl %al
        movzbl %al, %eax
        ret

on some processors (which ones?), it is more efficient to do this:

_test:
        movl 8(%esp), %ebx
        xor %eax, %eax
        cmpl %ebx, 4(%esp)
        setl %al
        ret

Doing this correctly is tricky though, as the xor clobbers the flags.

//===---------------------------------------------------------------------===//

We should generate 'test' instead of 'cmp' in various cases, e.g.:

bool %test(int %X) {
        %Y = shl int %X, ubyte 1
        %C = seteq int %Y, 0
        ret bool %C
}
bool %test(int %X) {
        %Y = and int %X, 8
        %C = seteq int %Y, 0
        ret bool %C
}

This may just be a matter of using 'test' to write bigger patterns for X86cmp.

//===---------------------------------------------------------------------===//

SSE should implement 'select_cc' using 'emulated conditional moves' that use
pcmp/pand/pandn/por to do a selection instead of a conditional branch:

double %X(double %Y, double %Z, double %A, double %B) {
        %C = setlt double %A, %B
        %z = add double %Z, 0.0    ;; select operand is not a load
        %D = select bool %C, double %Y, double %z
        ret double %D
}

We currently emit (excerpt):

        addsd 24(%esp), %xmm0
        movsd 32(%esp), %xmm1
        movsd 16(%esp), %xmm2
        ucomisd 40(%esp), %xmm1

followed by a conditional branch to pick between the two values, instead of a
branchless compare-and-mask sequence.

//===---------------------------------------------------------------------===//

We should generate bts/btr/etc instructions on targets where they are cheap or
when codesize is important. e.g., for:

void setbit(int *target, int bit) {
  *target |= (1 << bit);
}
void clearbit(int *target, int bit) {
  *target &= ~(1 << bit);
}

//===---------------------------------------------------------------------===//

Instead of the following for memset char*, 1, 10:

        movl $16843009, 4(%edx)
        movl $16843009, (%edx)
        movw $257, 8(%edx)

It might be better to generate

        movl $16843009, %eax
        movl %eax, 4(%edx)
        movl %eax, (%edx)
        movw %ax, 8(%edx)

when we can spare a register. It reduces code size.

//===---------------------------------------------------------------------===//

It's not clear whether we should use pxor or xorps / xorpd to clear XMM
registers. The choice may depend on subtarget information. We should do some
more experiments on different x86 machines.

//===---------------------------------------------------------------------===//

Evaluate what the best way to codegen sdiv X, (2^C) is. For X/8, we currently
get:

int %test1(int %X) {
        %Y = div int %X, 8
        ret int %Y
}

_test1:
        movl 4(%esp), %eax
        movl %eax, %ecx
        sarl $31, %ecx
        shrl $29, %ecx
        addl %ecx, %eax
        sarl $3, %eax
        ret

GCC knows several different ways to codegen it, one of which is this:

_test1:
        movl 4(%esp), %eax
        cmpl $-1, %eax
        leal 7(%eax), %ecx
        cmovle %ecx, %eax
        sarl $3, %eax
        ret

which is probably slower, but it's interesting at least :)

//===---------------------------------------------------------------------===//

Currently the x86 codegen isn't very good at mixing SSE and FPStack
code:

unsigned int foo(double x) { return x; }

foo:
        subl $20, %esp
        movsd 24(%esp), %xmm0
        movsd %xmm0, 8(%esp)
        fldl 8(%esp)
        fisttpll (%esp)
        movl (%esp), %eax
        addl $20, %esp
        ret

This will be solved when we go to a dynamic programming based isel.

//===---------------------------------------------------------------------===//

Should generate min/max for stuff like:

void minf(float a, float b, float *X) {
  *X = a <= b ? a : b;
}

Make use of floating point min / max instructions. Perhaps introduce ISD::FMIN
and ISD::FMAX node types?

//===---------------------------------------------------------------------===//

The first BB of this code:

declare bool %foo()

int %bar() {
        %V = call bool %foo()
        br bool %V, label %T, label %F
        ...
}

currently inverts the call result with a xor and then re-tests it before
branching. It would be better to emit "cmp %al, 1" than a xor and test.

//===---------------------------------------------------------------------===//

Enable X86InstrInfo::convertToThreeAddress().

//===---------------------------------------------------------------------===//

Investigate whether it is better to codegen the following

        %tmp.1 = mul int %x, 9

as

        movl 4(%esp), %eax
        leal (%eax,%eax,8), %eax

as opposed to what llc is currently generating:

        imull $9, 4(%esp), %eax

Currently the load folding imull has a higher complexity than the LEA32 pattern.

//===---------------------------------------------------------------------===//

We are currently lowering large (1MB+) memmove/memcpy to rep/stosl and
rep/movsl. We should leave these as libcalls for everything over a much lower
threshold, since libc is hand-tuned for medium and large mem ops (avoiding RFO
for large stores, TLB preheating, etc.).

//===---------------------------------------------------------------------===//

Lower memcpy / memset to a series of SSE 128 bit move instructions when it's
feasible.

//===---------------------------------------------------------------------===//

Teach the coalescer to commute 2-addr instructions, allowing us to eliminate
the reg-reg copy in this example:

float foo(int *x, float *y, unsigned c) {
  float res = 0.0;
  unsigned i;
  for (i = 0; i < c; i++) {
    float xx = (float)x[i];
    xx = xx * y[i];
    xx += res;
    res = xx;
  }
  return res;
}

LBB_foo_3:      # no_exit
        cvtsi2ss %XMM0, DWORD PTR [%EDX + 4*%ESI]
        mulss %XMM0, DWORD PTR [%EAX + 4*%ESI]
        addss %XMM0, %XMM1
        inc %ESI
        cmp %ESI, %ECX
****    movaps %XMM1, %XMM0
        jb LBB_foo_3    # no_exit

//===---------------------------------------------------------------------===//

Codegen:

  if (copysign(1.0, x) == copysign(1.0, y))

into:

  if (x^y & mask)

when using SSE.

//===---------------------------------------------------------------------===//

Optimize this into something reasonable:
 x * copysign(1.0, y) * copysign(1.0, z)

//===---------------------------------------------------------------------===//

Optimize copysign(x, *y) to use an integer load from y.
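
A sketch of the intended transformation at the C level (hypothetical helper,
not the proposed codegen itself):

#include <stdint.h>
#include <string.h>

double copysign_intload(double x, double *y) {
  uint64_t xb, yb;
  memcpy(&xb, &x, sizeof xb);
  memcpy(&yb, y, sizeof yb);  /* integer load of *y; only its sign bit is used */
  xb = (xb & ~(1ULL << 63)) | (yb & (1ULL << 63));
  memcpy(&x, &xb, sizeof x);
  return x;
}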

//===---------------------------------------------------------------------===//

%X = weak global int 0

void %foo(int %N) {
entry:
        %N = cast int %N to uint
        %tmp.24 = setgt int %N, 0
        br bool %tmp.24, label %no_exit, label %return

no_exit:                ; preds = %no_exit, %entry
        %indvar = phi uint [ 0, %entry ], [ %indvar.next, %no_exit ]
        %i.0.0 = cast uint %indvar to int
        volatile store int %i.0.0, int* %X
        %indvar.next = add uint %indvar, 1
        %exitcond = seteq uint %indvar.next, %N
        br bool %exitcond, label %return, label %no_exit

return:         ; preds = %no_exit, %entry
        ret void
}

compiles into:

_foo:
        movl 4(%esp), %eax
        cmpl $1, %eax
        jl LBB_foo_4    # return
LBB_foo_1:      # no_exit.preheader
        xorl %ecx, %ecx
LBB_foo_2:      # no_exit
        movl L_X$non_lazy_ptr, %edx
        movl %ecx, (%edx)
        incl %ecx
        cmpl %eax, %ecx
        jne LBB_foo_2   # no_exit
LBB_foo_3:      # return.loopexit
LBB_foo_4:      # return
        ret

We should hoist "movl L_X$non_lazy_ptr, %edx" out of the loop after
rematerialization is implemented. This can be accomplished with 1) a target
dependent LICM pass or 2) making SelectionDAG represent the whole function.

//===---------------------------------------------------------------------===//

The following tests perform worse with LSR:

lambda, siod, optimizer-eval, ackermann, hash2, nestedloop, strcat, and Treesor.

//===---------------------------------------------------------------------===//

Teach the coalescer to coalesce vregs of different register classes, e.g. FR32 /
FR64.

//===---------------------------------------------------------------------===//

Obviously it would have been better for the first mov (or any op) to store
directly to %esp[0] if there are no other uses.

//===---------------------------------------------------------------------===//

Use movhps to update the upper 64-bits of a v4sf value. Also movlps on the
lower half.
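
With intrinsics, the desired mapping looks like this (a sketch; the function
name is made up):

#include <xmmintrin.h>

__m128 set_high_half(__m128 v, const float *p) {
  /* should compile to a single movhps load into the upper 64 bits of v */
  return _mm_loadh_pi(v, (const __m64 *)p);
}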

//===---------------------------------------------------------------------===//

Better codegen for vector_shuffles like this { x, 0, 0, 0 } or { x, 0, x, 0 }.
Perhaps use pxor / xorp* to clear an XMM register first?

//===---------------------------------------------------------------------===//

Compare the codegen for:

void f(float a, float b, vector float * out) { *out = (vector float){ a, 0.0, 0.0, b}; }
void f(float a, float b, vector float * out) { *out = (vector float){ a, b, 0.0, 0}; }

For the latter we generate (excerpt):

        unpcklps %xmm1, %xmm2
        ...
        unpcklps %xmm0, %xmm1
        unpcklps %xmm2, %xmm1

This seems like it should use shufps, one for each of a & b.

//===---------------------------------------------------------------------===//

Adding to the list of cmp / test poor codegen issues:

int test(__m128 *A, __m128 *B) {
  if (_mm_comige_ss(*A, *B))
    return 3;
  else
    return 4;
}

_test:
        movl 8(%esp), %eax
        movaps (%eax), %xmm0
        movl 4(%esp), %eax
        movaps (%eax), %xmm1
        comiss %xmm0, %xmm1
        setae %al
        movzbl %al, %ecx
        movl $3, %eax
        movl $4, %edx
        cmpl $0, %ecx
        cmove %edx, %eax
        ret

Note the setae, movzbl, cmpl, cmove can be replaced with a single cmovae. There
are a number of issues: 1) We are introducing a setcc between the result of the
intrinsic call and the select. 2) The intrinsic is expected to produce an i32
value so an any_extend (which becomes a zero extend) is added.

We probably need some kind of target DAG combine hook to fix this.

//===---------------------------------------------------------------------===//

How to decide when to use the "floating point version" of logical ops? Here are
some examples (excerpts):

        movaps LCPI5_5, %xmm2
        ...
        mulps 8656(%ecx), %xmm3
        addps 8672(%ecx), %xmm3
        ...

vs.

        movaps LCPI5_5, %xmm1
        ...
        mulps 8656(%ecx), %xmm3
        addps 8672(%ecx), %xmm3
        ...
        movaps %xmm3, 112(%esp)

Due to some minor source change, the latter case ended up using orps and movaps
instead of por and movdqa. Does it matter?

//===---------------------------------------------------------------------===//

Use movddup to splat a v2f64 directly from a memory source. e.g.

#include <emmintrin.h>

void test(__m128d *r, double A) {
  *r = _mm_set1_pd(A);
}

We currently generate the splat with a load plus an unpack (excerpt):

        movsd 8(%esp), %xmm0
        unpcklpd %xmm0, %xmm0

where a single instruction would do:

        movddup 8(%esp), %xmm0

//===---------------------------------------------------------------------===//

A Mac OS X IA-32 specific ABI bug wrt returning values > 8 bytes:
http://llvm.org/bugs/show_bug.cgi?id=729

//===---------------------------------------------------------------------===//

X86RegisterInfo::copyRegToReg() returns X86::MOVAPSrr for VR128. Is it possible
to choose between movaps, movapd, and movdqa based on types of source and
destination?

How about andps, andpd, and pand? Do we really care about the type of the packed
elements? If not, why not always use the "ps" variants, which are likely to be
shorter (they need no 0x66 prefix byte)?

//===---------------------------------------------------------------------===//

We are emitting bad code for this:

float %test(float* %V, int %I, int %D, float %V) {
entry:
        %tmp = seteq int %D, 0
        br bool %tmp, label %cond_true, label %cond_false23

cond_true:              ; preds = %entry
        %tmp3 = getelementptr float* %V, int %I
        %tmp = load float* %tmp3
        %tmp5 = setgt float %tmp, %V
        %tmp6 = tail call bool %llvm.isunordered.f32( float %tmp, float %V )
        %tmp7 = or bool %tmp5, %tmp6
        br bool %tmp7, label %UnifiedReturnBlock, label %cond_next

cond_next:              ; preds = %cond_true
        %tmp10 = add int %I, 1
        %tmp12 = getelementptr float* %V, int %tmp10
        %tmp13 = load float* %tmp12
        %tmp15 = setle float %tmp13, %V
        %tmp16 = tail call bool %llvm.isunordered.f32( float %tmp13, float %V )
        %tmp17 = or bool %tmp15, %tmp16
        %retval = select bool %tmp17, float 0.000000e+00, float 1.000000e+00
        ret float %retval

cond_false23:           ; preds = %entry
        %tmp28 = tail call float %foo( float* %V, int %I, int %D, float %V )
        ret float %tmp28

UnifiedReturnBlock:             ; preds = %cond_true
        ret float 0.000000e+00
}

declare bool %llvm.isunordered.f32(float, float)

declare float %foo(float*, int, int, float)

It exposes a known load folding problem:

        movss (%edx,%ecx,4), %xmm1
        ucomiss %xmm1, %xmm0

The load could be folded into the (commutable) compare:

        ucomiss (%edx,%ecx,4), %xmm0

LBB_test_2:     # cond_next
        ...
        jbe LBB_test_6  # cond_next
LBB_test_5:     # cond_next
        ...
LBB_test_6:     # cond_next
        movss %xmm3, 40(%esp)
        ...

Clearly it's unnecessary to clear %xmm3. It's also not clear why we are emitting
three moves (movss, movaps, movss).

//===---------------------------------------------------------------------===//

External test Nurbs exposed some problems. Look for
__ZN15Nurbs_SSE_Cubic17TessellateSurfaceE, bb cond_next140. This is what icc
generates:

        movaps (%edx), %xmm2                    #59.21
        movaps (%edx), %xmm5                    #60.21
        movaps (%edx), %xmm4                    #61.21
        movaps (%edx), %xmm3                    #62.21
        movl 40(%ecx), %ebp                     #69.49
        shufps $0, %xmm2, %xmm5                 #60.21
        movl 100(%esp), %ebx                    #69.20
        movl (%ebx), %edi                       #69.20
        imull %ebp, %edi                        #69.49
        addl (%eax), %edi                       #70.33
        shufps $85, %xmm2, %xmm4                #61.21
        shufps $170, %xmm2, %xmm3               #62.21
        shufps $255, %xmm2, %xmm2               #63.21
        lea (%ebp,%ebp,2), %ebx                 #69.49
        lea -3(%edi,%ebx), %ebx                 #70.33
        addl 32(%ecx), %ebx                     #68.37
        testb $15, %bl                          #91.13
        jne L_B1.24     # Prob 5%               #91.13

This is the llvm code after instruction scheduling:

cond_next140 (0xa910740, LLVM BB @0xa90beb0):
        %reg1078 = MOV32ri -3
        %reg1079 = ADD32rm %reg1078, %reg1068, 1, %NOREG, 0
        %reg1037 = MOV32rm %reg1024, 1, %NOREG, 40
        %reg1080 = IMUL32rr %reg1079, %reg1037
        %reg1081 = MOV32rm %reg1058, 1, %NOREG, 0
        %reg1038 = LEA32r %reg1081, 1, %reg1080, -3
        %reg1036 = MOV32rm %reg1024, 1, %NOREG, 32
        %reg1082 = SHL32ri %reg1038, 4
        %reg1039 = ADD32rr %reg1036, %reg1082
        %reg1083 = MOVAPSrm %reg1059, 1, %NOREG, 0
        %reg1034 = SHUFPSrr %reg1083, %reg1083, 170
        %reg1032 = SHUFPSrr %reg1083, %reg1083, 0
        %reg1035 = SHUFPSrr %reg1083, %reg1083, 255
        %reg1033 = SHUFPSrr %reg1083, %reg1083, 85
        %reg1040 = MOV32rr %reg1039
        %reg1084 = AND32ri8 %reg1039, 15
        JE mbb<cond_next204,0xa914d30>

Still ok. After register allocation:

cond_next140 (0xa910740, LLVM BB @0xa90beb0):
        %EAX = MOV32ri -3
        %EDX = MOV32rm <fi#3>, 1, %NOREG, 0
        ADD32rm %EAX<def&use>, %EDX, 1, %NOREG, 0
        %EDX = MOV32rm <fi#7>, 1, %NOREG, 0
        %EDX = MOV32rm %EDX, 1, %NOREG, 40
        IMUL32rr %EAX<def&use>, %EDX
        %ESI = MOV32rm <fi#5>, 1, %NOREG, 0
        %ESI = MOV32rm %ESI, 1, %NOREG, 0
        MOV32mr <fi#4>, 1, %NOREG, 0, %ESI
        %EAX = LEA32r %ESI, 1, %EAX, -3
        %ESI = MOV32rm <fi#7>, 1, %NOREG, 0
        %ESI = MOV32rm %ESI, 1, %NOREG, 32
        SHL32ri %EDI<def&use>, 4
        ADD32rr %EDI<def&use>, %ESI
        %XMM0 = MOVAPSrm %ECX, 1, %NOREG, 0
        %XMM1 = MOVAPSrr %XMM0
        SHUFPSrr %XMM1<def&use>, %XMM1, 170
        %XMM2 = MOVAPSrr %XMM0
        SHUFPSrr %XMM2<def&use>, %XMM2, 0
        %XMM3 = MOVAPSrr %XMM0
        SHUFPSrr %XMM3<def&use>, %XMM3, 255
        SHUFPSrr %XMM0<def&use>, %XMM0, 85
        AND32ri8 %EBX<def&use>, 15
        JE mbb<cond_next204,0xa914d30>

This looks really bad. The problem is that shufps is a destructive opcode: since
the same value appears as operand two in more than one shufps op, a number of
copies result. Note icc also suffers from the same problem. Either the
instruction selector should select pshufd, or the register allocator could make
the two-address to three-address transformation.

It also exposes some other problems. See MOV32ri -3 and the spills.

//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25500

LLVM is producing bad code:

LBB_main_4:     # cond_true44
        ...
        jne LBB_main_4  # cond_true44

There are two problems: 1) there is no need for two loop induction variables;
we can compare against 262144 * 16. 2) poor register allocation decisions; we
should be able to eliminate one of the movaps:

        movaps %xmm2, %xmm2     <=== Eliminate!
        ...
        jne LBB_main_4  # cond_true44