==========================
Auto-Vectorization in LLVM
==========================
LLVM has two vectorizers: the :ref:`Loop Vectorizer <loop-vectorizer>`,
which operates on loops, and the :ref:`SLP Vectorizer
<slp-vectorizer>`, which optimizes straight-line code. These vectorizers
focus on different optimization opportunities and use different techniques.
The SLP Vectorizer merges multiple scalars found in the code into
vectors, while the Loop Vectorizer widens instructions in the original loop
to operate on multiple consecutive loop iterations.
.. _loop-vectorizer:

The Loop Vectorizer
===================

Usage
-----

LLVM's Loop Vectorizer is now enabled by default for -O3.
We plan to enable parts of the Loop Vectorizer on -O2 and -Os in future releases.
The vectorizer can be disabled using the command line flag:

.. code-block:: console

   $ clang ... -fno-vectorize file.c
Command line flags
------------------

The loop vectorizer uses a cost model to decide on the optimal vectorization
factor and unroll factor. However, users of the vectorizer can force the
vectorizer to use specific values. Both ``clang`` and ``opt`` support the
flags below.

Users can control the vectorization SIMD width using the command line flag
``-force-vector-width``.

.. code-block:: console

  $ clang -mllvm -force-vector-width=8 ...
  $ opt -loop-vectorize -force-vector-width=8 ...

Users can control the unroll factor using the command line flag
``-force-vector-unroll``.

.. code-block:: console

  $ clang -mllvm -force-vector-unroll=2 ...
  $ opt -loop-vectorize -force-vector-unroll=2 ...
Features
--------

The LLVM Loop Vectorizer has a number of features that allow it to vectorize
complex loops.
Loops with unknown trip count
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Loop Vectorizer supports loops with an unknown trip count.
In the loop below, the iteration ``start`` and ``end`` points are unknown,
and the Loop Vectorizer has a mechanism to vectorize loops that do not start
at zero. In this example, the trip count may not be a multiple of the vector
width, and the vectorizer has to execute the last few iterations as scalar
code. Keeping a scalar copy of the loop increases the code size.

.. code-block:: c++

  void bar(float *A, float* B, float K, int start, int end) {
    for (int i = start; i < end; ++i)
      A[i] *= B[i] + K;
  }
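Conceptually, the transformed loop splits into a vector body plus a scalar
epilogue for the leftover iterations. A rough sketch in plain C (the function
name and the 4-wide body are illustrative; each group of four statements
stands in for a single vector instruction):

```c
/* Illustrative sketch only: a 4-wide "vector" body followed by a
 * scalar epilogue that handles the remaining (end - start) % 4
 * iterations, mirroring the structure the vectorizer emits. */
void bar_sketch(float *A, float *B, float K, int start, int end) {
  int i = start;
  int vec_end = start + ((end - start) / 4) * 4;
  for (; i < vec_end; i += 4) {   /* each group of 4 would be one vector op */
    A[i]     *= B[i]     + K;
    A[i + 1] *= B[i + 1] + K;
    A[i + 2] *= B[i + 2] + K;
    A[i + 3] *= B[i + 3] + K;
  }
  for (; i < end; ++i)            /* scalar epilogue */
    A[i] *= B[i] + K;
}
```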
Runtime Checks of Pointers
^^^^^^^^^^^^^^^^^^^^^^^^^^

In the example below, if the pointers A and B point to consecutive addresses,
then it is illegal to vectorize the code because some elements of A will be
written before they are read from array B.

Some programmers use the ``restrict`` keyword to notify the compiler that the
pointers are disjoint, but in our example, the Loop Vectorizer has no way of
knowing that the pointers A and B are unique. The Loop Vectorizer handles this
loop by placing code that checks, at runtime, if the arrays A and B point to
disjoint memory locations. If arrays A and B overlap, then the scalar version
of the loop is executed.

.. code-block:: c++

  void bar(float *A, float* B, float K, int n) {
    for (int i = 0; i < n; ++i)
      A[i] *= B[i] + K;
  }
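For comparison, a hypothetical ``restrict``-qualified variant of the same loop
(not part of the original example): here the programmer promises that A and B
never overlap, so the compiler can vectorize without emitting the runtime
overlap checks.

```c
/* 'restrict' asserts that A and B do not alias, so the vectorizer
 * can skip the runtime overlap check and go straight to vector code. */
void bar_restrict(float *restrict A, float *restrict B, float K, int n) {
  for (int i = 0; i < n; ++i)
    A[i] *= B[i] + K;
}
```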
Reductions
^^^^^^^^^^

In this example the ``sum`` variable is used by consecutive iterations of
the loop. Normally, this would prevent vectorization, but the vectorizer can
detect that ``sum`` is a reduction variable. The variable ``sum`` becomes a
vector of integers, and at the end of the loop the elements of the vector are
added together to create the correct result. A number of different reduction
operations are supported, such as addition, multiplication, XOR, AND and OR.

.. code-block:: c++

  int foo(int *A, int *B, int n) {
    unsigned sum = 0;
    for (int i = 0; i < n; ++i)
      sum += A[i] + 5;
    return sum;
  }
Floating-point reduction operations are supported when ``-ffast-math`` is used.
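As an illustration (the function below is hypothetical, not from the original
text), vectorizing this float summation computes several partial sums in
vector lanes and combines them at the end, a reordering of the additions that
strict IEEE semantics forbid without ``-ffast-math``:

```c
/* Vectorization reassociates this chain of additions into partial
 * sums, which is only legal when -ffast-math relaxes FP semantics. */
float sum_floats(float *A, int n) {
  float sum = 0.0f;
  for (int i = 0; i < n; ++i)
    sum += A[i];
  return sum;
}
```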
Inductions
^^^^^^^^^^

In this example the value of the induction variable ``i`` is saved into an
array. The Loop Vectorizer knows how to vectorize induction variables.

.. code-block:: c++

  void bar(float *A, float* B, float K, int n) {
    for (int i = 0; i < n; ++i)
      A[i] = i;
  }
If Conversion
^^^^^^^^^^^^^

The Loop Vectorizer is able to "flatten" the IF statement in the code and
generate a single stream of instructions. The Loop Vectorizer supports any
control flow in the innermost loop. The innermost loop may contain complex
nesting of IFs, ELSEs and even GOTOs.

.. code-block:: c++

  int foo(int *A, int *B, int n) {
    unsigned sum = 0;
    for (int i = 0; i < n; ++i)
      if (A[i] > B[i])
        sum += A[i] + 5;
    return sum;
  }
Pointer Induction Variables
^^^^^^^^^^^^^^^^^^^^^^^^^^^

This example uses the ``std::accumulate`` function of the standard C++
library. This loop uses C++ iterators, which are pointers, and not integer
indices. The Loop Vectorizer detects pointer induction variables and can
vectorize this loop. This feature is important because many C++ programs use
iterators.

.. code-block:: c++

  int baz(int *A, int n) {
    return std::accumulate(A, A + n, 0);
  }
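The call above is roughly equivalent to the following hand-written loop (an
illustrative expansion, not the actual library code): the loop variable is a
pointer rather than an integer index, and the vectorizer treats it as an
induction variable all the same.

```c
/* Pointer-based induction: 'p' advances through the array, playing
 * the role the integer index 'i' plays in the earlier examples. */
int baz_expanded(int *A, int n) {
  int sum = 0;
  for (int *p = A; p != A + n; ++p)
    sum += *p;
  return sum;
}
```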
Reverse Iterators
^^^^^^^^^^^^^^^^^

The Loop Vectorizer can vectorize loops that count backwards.

.. code-block:: c++

  void foo(int *A, int *B, int n) {
    for (int i = n; i > 0; --i)
      A[i] += 1;
  }
Scatter / Gather
^^^^^^^^^^^^^^^^

The Loop Vectorizer can vectorize code that becomes a sequence of scalar
instructions that scatter/gather memory.

.. code-block:: c++

  void foo(int *A, int *B, int n, int k) {
    for (int i = 0; i < n; ++i)
      A[i*7] += B[i*k];
  }
Vectorization of Mixed Types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Loop Vectorizer can vectorize programs with mixed types. The vectorizer's
cost model can estimate the cost of the type conversion and decide whether
vectorization is profitable.

.. code-block:: c++

  void foo(int *A, char *B, int n, int k) {
    for (int i = 0; i < n; ++i)
      A[i] += 4 * B[i];
  }
Global Structures Alias Analysis
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Access to global structures can also be vectorized, with alias analysis being
used to make sure accesses don't alias. Run-time checks can also be added on
pointer access to structure members.

Many variations are supported, but some that rely on undefined behaviour being
ignored (as other compilers do) are still left un-vectorized.

.. code-block:: c++

  struct { int A[100], K, B[100]; } Foo;

  void foo() {
    for (int i = 0; i < 100; ++i)
      Foo.A[i] = Foo.B[i] + 100;
  }
Vectorization of function calls
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Loop Vectorizer can vectorize intrinsic math functions.
See the table below for a list of these functions.

+-----+-----+---------+
| pow | exp |  exp2   |
+-----+-----+---------+
| sin | cos |  sqrt   |
+-----+-----+---------+
| log |log2 | log10   |
+-----+-----+---------+
|fabs |floor|  ceil   |
+-----+-----+---------+
|fma  |trunc|nearbyint|
+-----+-----+---------+
|     |     | fmuladd |
+-----+-----+---------+
The loop vectorizer knows about special instructions on the target and will
vectorize a loop containing a function call that maps to such an instruction.
For example, the loop below will be vectorized on Intel x86 if the SSE4.1
roundps instruction is available.

.. code-block:: c++

  void foo(float *f) {
    for (int i = 0; i != 1024; ++i)
      f[i] = floorf(f[i]);
  }
Partial unrolling during vectorization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Modern processors feature multiple execution units, and only programs that
contain a high degree of parallelism can fully utilize the entire width of
the machine. The Loop Vectorizer increases the instruction level parallelism
(ILP) by performing partial unrolling of loops.

In the example below the entire array is accumulated into the variable
``sum``. This is inefficient because only a single execution port can be used
by the processor. By unrolling the code the Loop Vectorizer allows two or more
execution ports to be used simultaneously.

.. code-block:: c++

  int foo(int *A, int *B, int n) {
    unsigned sum = 0;
    for (int i = 0; i < n; ++i)
      sum += A[i];
    return sum;
  }

The Loop Vectorizer uses a cost model to decide when it is profitable to
unroll loops. The decision to unroll the loop depends on the register pressure
and the generated code size.
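As a sketch of the idea (unrolled by hand here, purely illustrative): the two
partial sums below have no dependence on each other inside the loop, so the
two additions can issue on different execution ports in the same cycle.

```c
/* Manual 2-way unroll of the reduction: sum0 and sum1 are independent
 * inside the loop, exposing instruction-level parallelism. A scalar
 * remainder loop handles an odd trailing element. */
int foo_unrolled(int *A, int n) {
  int sum0 = 0, sum1 = 0;
  int i = 0;
  for (; i + 1 < n; i += 2) {
    sum0 += A[i];
    sum1 += A[i + 1];
  }
  for (; i < n; ++i)   /* scalar remainder */
    sum0 += A[i];
  return sum0 + sum1;
}
```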
Performance
-----------

This section shows the execution time of Clang on a simple benchmark:
`gcc-loops <http://llvm.org/viewvc/llvm-project/test-suite/trunk/SingleSource/UnitTests/Vectorizer/>`_.
This benchmark is a collection of loops from the GCC autovectorization
`page <http://gcc.gnu.org/projects/tree-ssa/vectorization.html>`_ by Dorit Nuzman.

The chart below compares GCC-4.7, ICC-13, and Clang-SVN with and without loop
vectorization at -O3, tuned for "corei7-avx", running on a Sandybridge iMac.
The Y-axis shows the time in msec. Lower is better. The last column shows the
geomean of all the kernels.

.. image:: gcc-loops.png

And Linpack-pc with the same configuration. Results are in MFLOPS; higher is
better.

.. image:: linpack-pc.png
.. _slp-vectorizer:

The SLP Vectorizer
==================

Details
-------

The goal of SLP vectorization (a.k.a. superword-level parallelism) is
to combine similar independent instructions within simple control-flow regions
into vector instructions. Memory accesses, arithmetic operations, comparison
operations and some math functions can all be vectorized using this technique
(subject to the capabilities of the target architecture).
For example, the following function performs very similar operations on its
inputs (a1, b1) and (a2, b2). The basic-block vectorizer may combine these
into vector operations.

.. code-block:: c++

  void foo(int a1, int a2, int b1, int b2, int *A) {
    A[0] = a1*(a1 + b1)/b1 + 50*b1/a1;
    A[1] = a2*(a2 + b2)/b2 + 50*b2/a2;
  }
The SLP vectorizer has two phases: bottom-up and top-down. The top-down
vectorization phase is more aggressive, but takes more time to run.
Usage
-----

The SLP Vectorizer is not enabled by default, but it can be enabled
through clang using the command line flag:

.. code-block:: console

   $ clang -fslp-vectorize file.c

LLVM has a second basic block vectorization phase
which is more compile-time intensive (the BB vectorizer). This optimization
can be enabled through clang using the command line flag:

.. code-block:: console

   $ clang -fslp-vectorize-aggressive file.c
Performance
-----------

The SLP vectorizer is in its early development stages, but it can already
vectorize and accelerate many programs in the LLVM test suite.

======================= ============
Name                    Gain
======================= ============
Misc/matmul_f64_4x4     -23.23%
ASC_Sequoia/AMGmk       -13.85%
TSVC/LoopRerolling-flt  -11.76%
TSVC/NodeSplitting-dbl  -6.96%
Misc-C++/sphereflake    -6.74%
======================= ============