==========================
Auto-Vectorization in LLVM
==========================
LLVM has two vectorizers: The :ref:`Loop Vectorizer <loop-vectorizer>`,
which operates on loops, and the :ref:`Basic Block Vectorizer
<bb-vectorizer>`, which optimizes straight-line code. These vectorizers
focus on different optimization opportunities and use different techniques.
The BB vectorizer merges multiple scalars that are found in the code into
vectors while the Loop Vectorizer widens instructions in the original loop
to operate on multiple consecutive loop iterations.
.. _loop-vectorizer:

The Loop Vectorizer
===================

Usage
-----

LLVM's Loop Vectorizer is now enabled by default for -O3.
The vectorizer can be disabled from the command line:

.. code-block:: console

  $ clang ... -fno-vectorize file.c
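Newer Clang releases also allow per-loop control through pragmas instead of a file-wide flag. The sketch below is illustrative (the function name is hypothetical, and availability of the pragma depends on the Clang version; other compilers simply ignore it):

```c
#include <stddef.h>

/* Sketch: per-loop control of the vectorizer with Clang's loop pragma.
   The pragma asks Clang not to vectorize this particular loop while
   leaving -O3 vectorization enabled for the rest of the file. */
void scale(float *A, size_t n) {
#pragma clang loop vectorize(disable)
  for (size_t i = 0; i < n; ++i)
    A[i] *= 2.0f;
}
```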
The loop vectorizer uses a cost model to decide on the optimal vectorization
factor and unroll factor. However, users of the vectorizer can force it to use
specific values. Both ``clang`` and ``opt`` support the flags below.
Users can control the vectorization SIMD width using the command line flag
``-force-vector-width``.

.. code-block:: console

  $ clang -mllvm -force-vector-width=8 ...
  $ opt -loop-vectorize -force-vector-width=8 ...
Users can control the unroll factor using the command line flag
``-force-vector-unroll``.

.. code-block:: console

  $ clang -mllvm -force-vector-unroll=2 ...
  $ opt -loop-vectorize -force-vector-unroll=2 ...
Features
--------

The LLVM Loop Vectorizer has a number of features that allow it to vectorize
complex loops.
Loops with unknown trip count
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Loop Vectorizer supports loops with an unknown trip count.
In the loop below, the iteration ``start`` and ``end`` points are unknown,
and the Loop Vectorizer has a mechanism to vectorize loops that do not start
at zero. In this example, the trip count may not be a multiple of the vector
width, and the vectorizer has to execute the last few iterations as scalar
code. Keeping a scalar copy of the loop increases the code size.
.. code-block:: c++

  void bar(float *A, float* B, float K, int start, int end) {
    for (int i = start; i < end; ++i)
      A[i] *= B[i] + K;
  }
Runtime Checks of Pointers
^^^^^^^^^^^^^^^^^^^^^^^^^^

In the example below, if the pointers A and B point to consecutive addresses,
then it is illegal to vectorize the code because some elements of A will be
written before they are read from array B.

Some programmers use the ``restrict`` keyword to notify the compiler that the
pointers are disjoint, but in our example, the Loop Vectorizer has no way of
knowing that the pointers A and B do not alias. The Loop Vectorizer handles
this loop by emitting code that checks, at runtime, whether the arrays A and B
point to disjoint memory locations. If arrays A and B overlap, then the scalar
version of the loop is executed.
.. code-block:: c++

  void bar(float *A, float* B, float K, int n) {
    for (int i = 0; i < n; ++i)
      A[i] *= B[i] + K;
  }
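For comparison, a sketch of the ``restrict`` variant mentioned above (the function name is illustrative, not from the original text): with ``restrict``-qualified parameters the programmer promises the compiler that the arrays do not overlap, so no runtime overlap check is needed before vectorizing.

```c
/* Sketch: the same loop with C99 'restrict'. The qualifier asserts that
   A and B refer to disjoint memory, letting the vectorizer skip the
   runtime pointer-overlap checks. */
void bar_restrict(float *restrict A, float *restrict B, float K, int n) {
  for (int i = 0; i < n; ++i)
    A[i] *= B[i] + K;
}
```

If the promise is violated and the arrays do overlap, the behavior is undefined, which is exactly why the vectorizer cannot infer this property on its own.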
Reductions
^^^^^^^^^^

In this example the ``sum`` variable is used by consecutive iterations of
the loop. Normally, this would prevent vectorization, but the vectorizer can
detect that ``sum`` is a reduction variable. The variable ``sum`` becomes a
vector of integers, and at the end of the loop the elements of the vector are
added together to create the correct result. We support a number of different
reduction operations, such as addition, multiplication, XOR, AND and OR.
.. code-block:: c++

  int foo(int *A, int *B, int n) {
    unsigned sum = 0;
    for (int i = 0; i < n; ++i)
      sum += A[i] + 5;
    return sum;
  }
We support floating point reduction operations when ``-ffast-math`` is used.
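For illustration, here is a sketch of such a floating-point reduction (the function name is hypothetical): vectorizing it reassociates the additions, which changes rounding, so the transformation is only legal when ``-ffast-math`` permits it.

```c
/* Sketch: a floating-point reduction. Vectorizing this loop reorders
   (reassociates) the additions, which is only allowed under
   -ffast-math or equivalent fast-math flags. */
float sum_array(float *A, int n) {
  float sum = 0.0f;
  for (int i = 0; i < n; ++i)
    sum += A[i];
  return sum;
}
```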
Inductions
^^^^^^^^^^

In this example the value of the induction variable ``i`` is saved into an
array. The Loop Vectorizer knows to vectorize induction variables.

.. code-block:: c++

  void bar(float *A, float* B, float K, int n) {
    for (int i = 0; i < n; ++i)
      A[i] = i;
  }
If Conversion
^^^^^^^^^^^^^

The Loop Vectorizer is able to "flatten" the IF statement in the code and
generate a single stream of instructions. The Loop Vectorizer supports any
control flow in the innermost loop. The innermost loop may contain complex
nesting of IFs, ELSEs and even GOTOs.

.. code-block:: c++

  int foo(int *A, int *B, int n) {
    unsigned sum = 0;
    for (int i = 0; i < n; ++i)
      if (A[i] > B[i])
        sum += A[i] + 5;
    return sum;
  }
Pointer Induction Variables
^^^^^^^^^^^^^^^^^^^^^^^^^^^

This example uses the ``std::accumulate`` function of the standard C++
library. This loop uses C++ iterators, which are pointers, and not integer
indices. The Loop Vectorizer detects pointer induction variables and can
vectorize this loop. This feature is important because many C++ programs use
iterators.

.. code-block:: c++

  int baz(int *A, int n) {
    return std::accumulate(A, A + n, 0);
  }
Reverse Iterators
^^^^^^^^^^^^^^^^^

The Loop Vectorizer can vectorize loops that count backwards.

.. code-block:: c++

  void foo(int *A, int *B, int n) {
    for (int i = n; i > 0; --i)
      A[i] += 1;
  }
Scatter / Gather
^^^^^^^^^^^^^^^^

The Loop Vectorizer can vectorize code that becomes a sequence of scalar
instructions that scatter/gather memory.

.. code-block:: c++

  void foo(int *A, int *B, int n, int k) {
    for (int i = 0; i < n; ++i)
      A[i*7] += B[i*k];
  }
Vectorization of Mixed Types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Loop Vectorizer can vectorize programs with mixed types. The vectorizer's
cost model can estimate the cost of the type conversion and decide if
vectorization is profitable.

.. code-block:: c++

  void foo(int *A, char *B, int n, int k) {
    for (int i = 0; i < n; ++i)
      A[i] += 4 * B[i];
  }
Global Structures Alias Analysis
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Access to global structures can also be vectorized, with alias analysis being
used to make sure accesses don't alias. Run-time checks can also be added on
pointer access to structure members.

Many variations are supported, but some that rely on undefined behaviour being
ignored (as other compilers do) are still being left un-vectorized.

.. code-block:: c++

  struct { int A[100], K, B[100]; } Foo;

  void foo() {
    for (int i = 0; i < 100; ++i)
      Foo.A[i] = Foo.B[i] + 100;
  }
Vectorization of function calls
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Loop Vectorizer can vectorize intrinsic math functions.
See the table below for a list of these functions.

+---------+---------+-----------+
| pow     | exp     | exp2      |
+---------+---------+-----------+
| sin     | cos     | sqrt      |
+---------+---------+-----------+
| log     | log2    | log10     |
+---------+---------+-----------+
| fabs    | floor   | ceil      |
+---------+---------+-----------+
| fma     | trunc   | nearbyint |
+---------+---------+-----------+
| fmuladd |         |           |
+---------+---------+-----------+
The loop vectorizer knows about special instructions on the target and will
vectorize a loop containing a function call that maps to such an instruction.
For example, the loop below will be vectorized on Intel x86 if the SSE4.1
roundps instruction is available.

.. code-block:: c++

  void foo(float *f) {
    for (int i = 0; i != 1024; ++i)
      f[i] = floorf(f[i]);
  }
Partial unrolling during vectorization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Modern processors feature multiple execution units, and only programs that
contain a high degree of parallelism can fully utilize the entire width of the
machine. The Loop Vectorizer increases the instruction level parallelism (ILP)
by performing partial unrolling of loops.

In the example below the entire array is accumulated into the variable
``sum``. This is inefficient because only a single execution port can be used
by the processor. By unrolling the code the Loop Vectorizer allows two or more
execution ports to be used simultaneously.
.. code-block:: c++

  int foo(int *A, int *B, int n) {
    unsigned sum = 0;
    for (int i = 0; i < n; ++i)
      sum += A[i];
    return sum;
  }
The Loop Vectorizer uses a cost model to decide when it is profitable to
unroll loops. The decision to unroll the loop depends on the register pressure
and the generated code size.
Performance
-----------

This section shows the execution time of Clang on a simple benchmark:
`gcc-loops <http://llvm.org/viewvc/llvm-project/test-suite/trunk/SingleSource/UnitTests/Vectorizer/>`_.
This benchmark is a collection of loops from the GCC autovectorization
`page <http://gcc.gnu.org/projects/tree-ssa/vectorization.html>`_ by Dorit Nuzman.

The chart below compares GCC-4.7, ICC-13, and Clang-SVN with and without loop
vectorization at -O3, tuned for "corei7-avx", running on a Sandybridge iMac.
The Y-axis shows the time in msec. Lower is better. The last column shows the
geomean of all the kernels.

.. image:: gcc-loops.png

And Linpack-pc with the same configuration. Result is Mflops, higher is better.

.. image:: linpack-pc.png
.. _bb-vectorizer:

The SLP Vectorizer
==================

Details
-------

The goal of SLP vectorization (a.k.a. superword-level parallelism) is
to combine similar independent instructions within simple control-flow regions
into vector instructions. Memory accesses, arithmetic operations, comparison
operations and some math functions can all be vectorized using this technique
(subject to the capabilities of the target architecture).

For example, the following function performs very similar operations on its
inputs (a1, b1) and (a2, b2). The basic-block vectorizer may combine these
into vector operations.

.. code-block:: c++

  void foo(int a1, int a2, int b1, int b2, int *A) {
    A[0] = a1*(a1 + b1)/b1 + 50*b1/a1;
    A[1] = a2*(a2 + b2)/b2 + 50*b2/a2;
  }
Usage
-----

The SLP Vectorizer is not enabled by default, but it can be enabled
through clang using the command line flag:

.. code-block:: console

  $ clang -fslp-vectorize file.c

LLVM has a second basic block vectorization phase
which is more compile-time intensive (the BB vectorizer). This optimization
can be enabled through clang using the command line flag:

.. code-block:: console

  $ clang -fslp-vectorize-aggressive file.c
The SLP vectorizer is in early development stages but can already vectorize
and accelerate many programs in the LLVM test suite.

======================= ============
Benchmark Name          Gain
======================= ============
Misc/matmul_f64_4x4     -23.23%
ASC_Sequoia/AMGmk       -13.85%
TSVC/LoopRerolling-flt  -11.76%
TSVC/NodeSplitting-dbl  -6.96%
Misc-C++/sphereflake    -6.74%
======================= ============