LLVM's Analysis and Transform Passes
  1. Introduction
  2. Analysis Passes
  3. Transform Passes
  4. Utility Passes

Written by Reid Spencer and Gordon Henriksen

Introduction

This document serves as a high level summary of the optimization features that LLVM provides. Optimizations are implemented as Passes that traverse some portion of a program to either collect information or transform the program. The tables below divide the passes that LLVM provides into three categories. Analysis passes compute information that other passes can use, or that is useful for debugging or program visualization. Transform passes can use (or invalidate) the analysis passes; they all mutate the program in some way. Utility passes provide some utility but don't otherwise fit the categorization; for example, passes to extract functions to bitcode or to write a module to bitcode are neither analysis nor transform passes.

The tables below provide a quick summary of each pass; the more complete pass descriptions follow later in the document.

ANALYSIS PASSES
Option                  Name
-aa-eval                Exhaustive Alias Analysis Precision Evaluator
-anders-aa              Andersen's Interprocedural Alias Analysis
-basicaa                Basic Alias Analysis (default AA impl)
-basiccg                Basic CallGraph Construction
-basicvn                Basic Value Numbering (default GVN impl)
-callgraph              Print a call graph
-callscc                Print SCCs of the Call Graph
-cfgscc                 Print SCCs of each function CFG
-codegenprepare         Optimize for code generation
-count-aa               Count Alias Analysis Query Responses
-debug-aa               AA use debugger
-domfrontier            Dominance Frontier Construction
-domtree                Dominator Tree Construction
-externalfnconstants    Print external fn callsites passed constants
-globalsmodref-aa       Simple mod/ref analysis for globals
-instcount              Counts the various types of Instructions
-intervals              Interval Partition Construction
-load-vn                Load Value Numbering
-loops                  Natural Loop Construction
-memdep                 Memory Dependence Analysis
-no-aa                  No Alias Analysis (always returns 'may' alias)
-no-profile             No Profile Information
-postdomfrontier        Post-Dominance Frontier Construction
-postdomtree            Post-Dominator Tree Construction
-print                  Print function to stderr
-print-alias-sets       Alias Set Printer
-print-callgraph        Print Call Graph to 'dot' file
-print-cfg              Print CFG of function to 'dot' file
-print-cfg-only         Print CFG of function to 'dot' file (with no function bodies)
-printm                 Print module to stderr
-printusedtypes         Find Used Types
-profile-loader         Load profile information from llvmprof.out
-scalar-evolution       Scalar Evolution Analysis
-targetdata             Target Data Layout

TRANSFORM PASSES
Option                          Name
-adce                           Aggressive Dead Code Elimination
-argpromotion                   Promote 'by reference' arguments to scalars
-block-placement                Profile Guided Basic Block Placement
-break-crit-edges               Break critical edges in CFG
-cee                            Correlated Expression Elimination
-condprop                       Conditional Propagation
-constmerge                     Merge Duplicate Global Constants
-constprop                      Simple constant propagation
-dce                            Dead Code Elimination
-deadargelim                    Dead Argument Elimination
-deadtypeelim                   Dead Type Elimination
-die                            Dead Instruction Elimination
-dse                            Dead Store Elimination
-gcse                           Global Common Subexpression Elimination
-globaldce                      Dead Global Elimination
-globalopt                      Global Variable Optimizer
-gvn                            Global Value Numbering
-gvnpre                         Global Value Numbering/Partial Redundancy Elimination
-indmemrem                      Indirect Malloc and Free Removal
-indvars                        Canonicalize Induction Variables
-inline                         Function Integration/Inlining
-insert-block-profiling         Insert instrumentation for block profiling
-insert-edge-profiling          Insert instrumentation for edge profiling
-insert-function-profiling      Insert instrumentation for function profiling
-insert-null-profiling-rs       Measure profiling framework overhead
-insert-rs-profiling-framework  Insert random sampling instrumentation framework
-instcombine                    Combine redundant instructions
-internalize                    Internalize Global Symbols
-ipconstprop                    Interprocedural constant propagation
-ipsccp                         Interprocedural Sparse Conditional Constant Propagation
-lcssa                          Loop-Closed SSA Form Pass
-licm                           Loop Invariant Code Motion
-loop-extract                   Extract loops into new functions
-loop-extract-single            Extract at most one loop into a new function
-loop-index-split               Index Split Loops
-loop-reduce                    Loop Strength Reduction
-loop-rotate                    Rotate Loops
-loop-unroll                    Unroll loops
-loop-unswitch                  Unswitch loops
-loopsimplify                   Canonicalize natural loops
-lower-packed                   lowers packed operations to operations on smaller packed datatypes
-lowerallocs                    Lower allocations from instructions to calls
-lowergc                        Lower GC intrinsics, for GCless code generators
-lowerinvoke                    Lower invoke and unwind, for unwindless code generators
-lowerselect                    Lower select instructions to branches
-lowersetjmp                    Lower Set Jump
-lowerswitch                    Lower SwitchInst's to branches
-mem2reg                        Promote Memory to Register
-mergereturn                    Unify function exit nodes
-predsimplify                   Predicate Simplifier
-prune-eh                       Remove unused exception handling info
-raiseallocs                    Raise allocations from calls to instructions
-reassociate                    Reassociate expressions
-reg2mem                        Demote all values to stack slots
-scalarrepl                     Scalar Replacement of Aggregates
-sccp                           Sparse Conditional Constant Propagation
-simplify-libcalls              Simplify well-known library calls
-simplifycfg                    Simplify the CFG
-strip                          Strip all symbols from a module
-tailcallelim                   Tail Call Elimination
-tailduplicate                  Tail Duplication

UTILITY PASSES
Option                  Name
-deadarghaX0r           Dead Argument Hacking (BUGPOINT USE ONLY; DO NOT USE)
-extract-blocks         Extract Basic Blocks From Module (for bugpoint use)
-emitbitcode            Bitcode Writer
-verify                 Module Verifier
-view-cfg               View CFG of function
-view-cfg-only          View CFG of function (with no function bodies)
Analysis Passes

This section describes the LLVM Analysis Passes.

Exhaustive Alias Analysis Precision Evaluator

This is a simple N^2 alias analysis accuracy evaluator. Basically, for each function in the program, it simply queries to see how the alias analysis implementation answers alias queries between each pair of pointers in the function.

This is inspired and adapted from code by: Naveen Neelakantam, Francesco Spadini, and Wojciech Stryjewski.

Andersen's Interprocedural Alias Analysis

This is an implementation of Andersen's interprocedural alias analysis.

In pointer analysis terms, this is a subset-based, flow-insensitive, field-sensitive, and context-insensitive pointer analysis algorithm.

This algorithm is implemented as four stages:

  1. Object identification.
  2. Inclusion constraint identification.
  3. Offline constraint graph optimization.
  4. Inclusion constraint solving.

The object identification stage identifies all of the memory objects in the program, which includes globals, heap allocated objects, and stack allocated objects.

The inclusion constraint identification stage finds all inclusion constraints in the program by scanning the program, looking for pointer assignments and other statements that affect the points-to graph. For a statement like A = B, this statement is processed to indicate that A can point to anything that B can point to. Constraints can handle copies, loads, stores, and address-taking.
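
As an illustration, the hypothetical C fragment below (not taken from the pass itself) shows the statement forms that give rise to the four constraint kinds:

void constraint_kinds(void) {
  int x;
  int *p, *q, **pp;

  p = &x;    /* address-of: x is added to p's points-to set                */
  q = p;     /* copy: q may point to anything p may point to               */
  pp = &q;
  *pp = p;   /* store: anything pp points to may point to anything p does  */
  q = *pp;   /* load: q may point to anything the targets of pp point to   */
}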

The offline constraint graph optimization portion includes offline variable substitution algorithms intended to compute pointer and location equivalences. Pointer equivalences are those pointers that will have the same points-to sets, and location equivalences are those variables that always appear together in points-to sets.

The inclusion constraint solving phase iteratively propagates the inclusion constraints until a fixed point is reached. This is an O(n³) algorithm.

Function constraints are handled as if they were structs with X fields. Thus, an access to argument X of function Y is an access to node index getNode(Y) + X. This representation allows handling of indirect calls without any issues. To wit, an indirect call Y(a,b) is equivalent to *(Y + 1) = a, *(Y + 2) = b. The return node for a function F is always located at getNode(F) + CallReturnPos. The arguments start at getNode(F) + CallArgPos.

Basic Alias Analysis (default AA impl)

This is the default implementation of the Alias Analysis interface that simply implements a few identities (two different globals cannot alias, etc), but otherwise does no analysis.

Basic CallGraph Construction

Yet to be written.

Basic Value Numbering (default GVN impl)

This is the default implementation of the ValueNumbering interface. It walks the SSA def-use chains to trivially identify lexically identical expressions. This does not require any ahead of time analysis, so it is a very fast default implementation.

Print a call graph

This pass, only available in opt, prints the call graph to standard output in a human-readable form.

Print SCCs of the Call Graph

This pass, only available in opt, prints the SCCs of the call graph to standard output in a human-readable form.

Print SCCs of each function CFG

This pass, only available in opt, prints the SCCs of each function CFG to standard output in a human-readable form.

Optimize for code generation

This pass munges the code in the input function to better prepare it for SelectionDAG-based code generation. This works around limitations in its basic-block-at-a-time approach. It should eventually be removed.

Count Alias Analysis Query Responses

A pass which can be used to count how many alias queries are being made and how the alias analysis implementation being used responds.

AA use debugger

This simple pass checks alias analysis users to ensure that if they create a new value, they do not query AA without informing it of the value. It acts as a shim over any other AA pass you want.

Yes, keeping track of every value in the program is expensive, but this is a debugging pass.

Dominance Frontier Construction

This pass is a simple dominator construction algorithm for finding forward dominance frontiers.

Dominator Tree Construction

This pass is a simple dominator construction algorithm for finding forward dominators.

Print external fn callsites passed constants

This pass, only available in opt, prints out call sites to external functions that are called with constant arguments. This can be useful when looking for standard library functions we should constant fold or handle in alias analyses.

Simple mod/ref analysis for globals

This simple pass provides alias and mod/ref information for global values that do not have their address taken, and keeps track of whether functions read or write memory (are "pure"). For this simple (but very common) case, we can provide pretty accurate and useful information.

Counts the various types of Instructions

This pass collects the count of all instructions and reports them.

Interval Partition Construction

This analysis calculates and represents the interval partition of a function, or a preexisting interval partition.

In this way, the interval partition may be used to reduce a flow graph down to its degenerate single node interval partition (unless it is irreducible).

Load Value Numbering

This pass value numbers load and call instructions. To do this, it finds lexically identical load instructions, and uses alias analysis to determine which loads are guaranteed to produce the same value. To value number call instructions, it looks for calls to functions that do not write to memory and that have no intervening instructions clobbering the memory they read.

This pass builds off of another value numbering pass to implement value numbering for non-load and non-call instructions. It uses Alias Analysis so that it can disambiguate the load instructions. The more powerful these base analyses are, the more powerful the resultant value numbering will be.
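
For example (an illustrative C fragment; the pass itself works on LLVM load and call instructions), if alias analysis can prove that q never aliases p, the two loads of *p receive the same value number and the second becomes redundant:

int two_loads(int *p, int *q) {
  int a = *p;
  *q = 42;      /* if AA proves q does not alias p, this store does not clobber *p */
  int b = *p;   /* value numbered the same as the first load                       */
  return a + b;
}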

Natural Loop Construction

This analysis is used to identify natural loops and determine the loop depth of various nodes of the CFG. Note that the loops identified may actually be several natural loops that share the same header node... not just a single natural loop.

Memory Dependence Analysis

An analysis that determines, for a given memory operation, what preceding memory operations it depends on. It builds on alias analysis information, and tries to provide a lazy, caching interface to a common kind of alias information query.

No Alias Analysis (always returns 'may' alias)

Always returns "I don't know" for alias queries. NoAA is unlike other alias analysis implementations, in that it does not chain to a previous analysis. As such it doesn't follow many of the rules that other alias analyses must.

No Profile Information

The default "no profile" implementation of the abstract ProfileInfo interface.

Post-Dominance Frontier Construction

This pass is a simple post-dominator construction algorithm for finding post-dominator frontiers.

Post-Dominator Tree Construction

This pass is a simple post-dominator construction algorithm for finding post-dominators.

Print function to stderr

The PrintFunctionPass class is designed to be pipelined with other FunctionPasses, and prints out the functions of the module as they are processed.

Alias Set Printer

Yet to be written.

Print Call Graph to 'dot' file

This pass, only available in opt, prints the call graph into a .dot graph. This graph can then be processed with the "dot" tool to convert it to postscript or some other suitable format.

Print CFG of function to 'dot' file

This pass, only available in opt, prints the control flow graph into a .dot graph. This graph can then be processed with the "dot" tool to convert it to postscript or some other suitable format.

Print CFG of function to 'dot' file (with no function bodies)

This pass, only available in opt, prints the control flow graph into a .dot graph, omitting the function bodies. This graph can then be processed with the "dot" tool to convert it to postscript or some other suitable format.

Print module to stderr

This pass simply prints out the entire module when it is executed.

Find Used Types

This pass is used to seek out all of the types in use by the program. Note that this analysis explicitly does not include types only used by the symbol table.

Load profile information from llvmprof.out

A concrete implementation of profiling information that loads the information from a profile dump file.

Scalar Evolution Analysis

The ScalarEvolution analysis can be used to analyze and categorize scalar expressions in loops. It specializes in recognizing general induction variables, representing them with the abstract and opaque SCEV class. Given this analysis, trip counts of loops and other important properties can be obtained.

This analysis is primarily useful for induction variable substitution and strength reduction.

Target Data Layout

Provides other passes access to information about the size and alignment required by the target ABI for the various data types.

Transform Passes

This section describes the LLVM Transform Passes.

Aggressive Dead Code Elimination

ADCE aggressively tries to eliminate code. This pass is similar to DCE but it assumes that values are dead until proven otherwise. This is similar to SCCP, except applied to the liveness of values.

Promote 'by reference' arguments to scalars

This pass promotes "by reference" arguments to be "by value" arguments. In practice, this means looking for internal functions that have pointer arguments. If it can prove, through the use of alias analysis, that an argument is *only* loaded, then it can pass the value into the function instead of the address of the value. This can cause recursive simplification of code and lead to the elimination of allocas (especially in C++ template code like the STL).

This pass also handles aggregate arguments that are passed into a function, scalarizing them if the elements of the aggregate are only loaded. Note that it refuses to scalarize aggregates which would require passing in more than three operands to the function, because passing thousands of operands for a large array or structure is unprofitable!
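
A source-level sketch of the transformation (hypothetical functions, not from the LLVM sources; the pass itself rewrites the LLVM IR of internal functions):

static int callee(const int *p) { return *p + 1; }  /* argument is only loaded */

int caller(void) {
  int x = 10;
  return callee(&x);            /* before: the address of x is passed */
}

/* Conceptually, -argpromotion rewrites this to: */

static int callee2(int p) { return p + 1; }         /* promoted to "by value"  */

int caller2(void) {
  int x = 10;
  return callee2(x);            /* after: the value is passed, and the
                                   alloca for x can later be eliminated */
}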

Note that this transformation could also be done for arguments that are only stored to (returning the value instead), but it does not do so currently. This case would be best handled when and if LLVM starts supporting multiple return values from functions.

Profile Guided Basic Block Placement

This pass is a very simple profile guided basic block placement algorithm. The idea is to put frequently executed blocks together at the start of the function and hopefully increase the number of fall-through conditional branches. If there is no profile information for a particular function, this pass basically orders blocks in depth-first order.

Break critical edges in CFG

Break all of the critical edges in the CFG by inserting a dummy basic block. It may be "required" by passes that cannot deal with critical edges. This transformation obviously invalidates the CFG, but can update forward dominator (set, immediate dominators, tree, and frontier) information.

Correlated Expression Elimination

Correlated Expression Elimination propagates information from conditional branches to blocks dominated by destinations of the branch. It propagates information from the condition check itself into the body of the branch, allowing transformations like these for example:

if (i == 7)
  ... 4*i;  // constant propagation

M = i+1; N = j+1;
if (i == j)
  X = M-N;  // = M-M == 0;

This is called Correlated Expression Elimination because we eliminate or simplify expressions that are correlated with the direction of a branch. In this way we use static information to give us some information about the dynamic value of a variable.

Conditional Propagation

This pass propagates information about conditional expressions through the program, allowing it to eliminate conditional branches in some cases.

Merge Duplicate Global Constants

Merges duplicate global constants together into a single constant that is shared. This is useful because some passes (e.g., TraceValues) insert a lot of string constants into the program, regardless of whether or not an existing string is available.

Simple constant propagation

This file implements constant propagation and merging. It looks for instructions involving only constant operands and replaces them with a constant value instead of an instruction. For example:

add i32 1, 2

becomes

i32 3

NOTE: this pass has a habit of making definitions be dead. It is a good idea to run a DIE (Dead Instruction Elimination) pass sometime after running this pass.

Dead Code Elimination

Dead code elimination is similar to dead instruction elimination, but it rechecks instructions that were used by removed instructions to see if they are newly dead.
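
For example (an illustrative fragment): deleting the directly dead value c leaves b with no remaining uses, so -dce rechecks b and removes it as well, whereas a single pass of -die would not:

int cascade(int x) {
  int a = x + 1;
  int b = a * 2;   /* used only by c; becomes dead once c is removed */
  int c = b - 3;   /* directly dead: its value is never used         */
  return a;
}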

Dead Argument Elimination

This pass deletes dead arguments from internal functions. Dead argument elimination removes arguments which are directly dead, as well as arguments only passed into function calls as dead arguments of other functions. This pass also deletes dead return values in a similar way.

This pass is often useful as a cleanup pass to run after aggressive interprocedural passes, which add possibly-dead arguments.

Dead Type Elimination

This pass is used to clean up the output of GCC. It eliminates names for types that are unused in the entire translation unit, using the Find Used Types pass.

Dead Instruction Elimination

Dead instruction elimination performs a single pass over the function, removing instructions that are obviously dead.

Dead Store Elimination

A trivial dead store elimination that only considers basic-block local redundant stores.
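
For example (illustrative only): in the fragment below, the first store is overwritten before any load can observe it, so it is deleted:

void redundant_store(int *p) {
  *p = 1;   /* dead: overwritten by the next store with no intervening load */
  *p = 2;   /* only this store remains                                      */
}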

Global Common Subexpression Elimination

This pass is designed to be a very quick global transformation that eliminates global common subexpressions from a function. It does this by using an existing value numbering implementation to identify the common subexpressions, eliminating them when possible.

Dead Global Elimination

This transform is designed to eliminate unreachable internal globals from the program. It uses an aggressive algorithm, searching out globals that are known to be alive. After it finds all of the globals which are needed, it deletes whatever is left over. This allows it to delete recursive chunks of the program which are unreachable.

Global Variable Optimizer

This pass transforms simple global variables that never have their address taken. If obviously true, it marks read/write globals as constant, deletes variables only stored to, etc.
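
For example (a hypothetical fragment): the internal global below is never written after its initializer and never has its address taken, so it can be marked constant and its loads folded away:

static int limit = 100;              /* internal, never stored to again */

int clamp(int x) {
  return x > limit ? limit : x;      /* loads of limit fold to 100      */
}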

Global Value Numbering

This pass performs global value numbering to eliminate fully redundant instructions. It also performs simple dead load elimination.

Global Value Numbering/Partial Redundancy Elimination

This pass performs a hybrid of global value numbering and partial redundancy elimination, known as GVN-PRE. It performs partial redundancy elimination on values, rather than lexical expressions, allowing a more comprehensive view of the optimization. It replaces redundant values with uses of earlier occurrences of the same value. While this is beneficial in that it eliminates unneeded computation, it also increases register pressure by creating large live ranges, and should be used with caution on platforms that are very sensitive to register pressure.

Indirect Malloc and Free Removal

This pass finds places where memory allocation functions may escape into indirect land. Some transforms are much easier (or only possible) if free or malloc are not called indirectly.

Thus, this pass finds places where the address of a memory function is taken and constructs a bounce function containing a direct call to that function.

Canonicalize Induction Variables

This transformation analyzes and transforms the induction variables (and computations derived from them) into simpler forms suitable for subsequent analysis and transformation.

This transformation makes the following changes to each loop with an identifiable induction variable:

  1. All loops are transformed to have a single canonical induction variable which starts at zero and steps by one.
  2. The canonical induction variable is guaranteed to be the first PHI node in the loop header block.
  3. Any pointer arithmetic recurrences are raised to use array subscripts.

If the trip count of a loop is computable, this pass also makes the following changes:

  1. The exit condition for the loop is canonicalized to compare the induction value against the exit value. This turns loops like:
    for (i = 7; i*i < 1000; ++i)
    into
    for (i = 0; i != 25; ++i)
  2. Any use outside of the loop of an expression derived from the indvar is changed to compute the derived value outside of the loop, eliminating the dependence on the exit value of the induction variable. If the only purpose of the loop is to compute the exit value of some derived expression, this transformation will make the loop dead.

This transformation should be followed by strength reduction after all of the desired loop transformations have been performed. Additionally, on targets where it is profitable, the loop could be transformed to count down to zero (the "do loop" optimization).

Function Integration/Inlining

Bottom-up inlining of functions into callers.

Insert instrumentation for block profiling

This pass instruments the specified program with counters for basic block profiling, which counts the number of times each basic block executes. This is the most basic form of profiling, which can tell which blocks are hot, but cannot reliably detect hot paths through the CFG.

Note that this implementation is very naïve. Control-equivalent regions of the CFG should not require duplicate counters, but this pass inserts them anyway.

Insert instrumentation for edge profiling

This pass instruments the specified program with counters for edge profiling. Edge profiling can give a reasonable approximation of the hot paths through a program, and is used for a wide variety of program transformations.

Note that this implementation is very naïve. It inserts a counter for every edge in the program, instead of using control flow information to prune the number of counters inserted.

Insert instrumentation for function profiling

This pass instruments the specified program with counters for function profiling, which counts the number of times each function is called.

Measure profiling framework overhead

The basic profiler that does nothing. It is the default profiler and thus terminates RSProfiler chains. It is useful for measuring framework overhead.

Insert random sampling instrumentation framework

The second stage of the random-sampling instrumentation framework, duplicates all instructions in a function, ignoring the profiling code, then connects the two versions together at the entry and at backedges. At each connection point a choice is made as to whether to jump to the profiled code (take a sample) or execute the unprofiled code.

After this pass, it is highly recommended to run mem2reg and adce. instcombine, load-vn, gdce, and dse are also good to run afterwards.

Combine redundant instructions

Combines instructions to form fewer, simpler instructions. This pass does not modify the CFG; it is where algebraic simplification happens.

This pass combines things like:

%Y = add i32 %X, 1
%Z = add i32 %Y, 1

into:

%Z = add i32 %X, 2

This is a simple worklist driven algorithm.

This pass also guarantees that a number of canonicalizations are performed on the program; for example, if a binary operator has a constant operand, the constant is moved to the right-hand side.

Internalize Global Symbols

This pass loops over all of the functions in the input module, looking for a main function. If a main function is found, all other functions and all global variables with initializers are marked as internal.

Interprocedural constant propagation

This is an extremely simple interprocedural constant propagation pass. It could certainly be improved in many different ways, such as by using a worklist. This pass makes arguments dead, but does not remove them; the existing dead argument elimination pass should be run after it to clean up the mess.
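
For example (hypothetical code): every call site passes the constant 8, so uses of n inside scale can be replaced with 8, and the now-dead argument is left for -deadargelim to remove:

static int scale(int x, int n) { return x * n; }  /* n is always 8 */

int f(int a) { return scale(a, 8); }
int g(int b) { return scale(b + 1, 8); }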

Interprocedural Sparse Conditional Constant Propagation

An interprocedural variant of Sparse Conditional Constant Propagation.

Loop-Closed SSA Form Pass

This pass transforms loops by placing phi nodes at the end of the loops for all values that are live across the loop boundary. For example, it turns the left into the right code:

for (...)                for (...)
  if (c)                   if (c)
    X1 = ...                 X1 = ...
  else                     else
    X2 = ...                 X2 = ...
  X3 = phi(X1, X2)         X3 = phi(X1, X2)
... = X3 + 4              X4 = phi(X3)
                          ... = X4 + 4

This is still valid LLVM; the extra phi nodes are purely redundant, and will be trivially eliminated by InstCombine. The major benefit of this transformation is that it makes many other loop optimizations, such as LoopUnswitching, simpler.

Loop Invariant Code Motion

This pass performs loop invariant code motion, attempting to remove as much code from the body of a loop as possible. It does this by either hoisting code into the preheader block, or by sinking code to the exit blocks if it is safe. This pass also promotes must-aliased memory locations in the loop to live in registers, thus hoisting and sinking "invariant" loads and stores.

This pass uses alias analysis for two purposes:

  1. Moving loop-invariant loads and calls out of loops. If it can be determined that a load or call inside a loop never aliases anything stored to within the loop, the instruction can be hoisted or sunk like any other instruction.
  2. Scalar promotion of memory. If a memory location accessed inside the loop is must-aliased by every access to it, it can be kept in a register for the duration of the loop, with the loads and stores moved outside the loop body.
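
A source-level sketch of both effects (hypothetical code; the pass operates on LLVM IR): assuming alias analysis shows the stores in the loop cannot modify *bound and that sum does not alias out or bound, the load of *bound is hoisted into the preheader and the running total in *sum is promoted to a register for the duration of the loop:

void licm_example(int *out, int *sum, const int *bound, int n) {
  for (int i = 0; i < n; ++i) {
    out[i] = i * (*bound);   /* invariant load of *bound is hoisted        */
    *sum += out[i];          /* *sum lives in a register inside the loop,
                                with a single store after the loop         */
  }
}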

Extract loops into new functions

A pass wrapper around the ExtractLoop() scalar transformation to extract each top-level loop into its own new function. If the loop is the only loop in a given function, it is not touched. This is a pass most useful for debugging via bugpoint.

Extract at most one loop into a new function

Similar to Extract loops into new functions, this pass extracts one natural loop from the program into a function if it can. This is used by bugpoint.

Index Split Loops

This pass divides a loop's iteration range by splitting the loop into multiple loops, such that each individual loop is executed efficiently.

Loop Strength Reduction

This pass performs a strength reduction on array references inside loops that have as one or more of their components the loop induction variable. This is accomplished by creating a new value to hold the initial value of the array access for the first iteration, and then creating a new GEP instruction in the loop to increment the value by the appropriate amount.
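
A source-level sketch of the effect on an array reference indexed by the induction variable (illustrative functions, not from the LLVM sources):

int sum_before(int *A, int n) {
  int s = 0;
  for (int i = 0; i < n; ++i)
    s += A[i];                          /* address recomputed from A and i  */
  return s;
}

int sum_after(int *A, int n) {
  int s = 0;
  for (int *p = A, *e = A + n; p != e; ++p)
    s += *p;                            /* address kept in p and incremented
                                           each iteration                   */
  return s;
}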

Rotate Loops

A simple loop rotation transformation.
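
A source-level sketch of the rotation (hypothetical code): the loop's exit test moves to the bottom of the body, guarded by a single test before entry:

void rotate_before(int *a, int n) {
  for (int i = 0; i < n; ++i)   /* test at the top of every iteration */
    a[i] = 0;
}

void rotate_after(int *a, int n) {
  int i = 0;
  if (i < n) {                  /* guard, executed once               */
    do {
      a[i] = 0;
      ++i;
    } while (i < n);            /* test in the latch block            */
  }
}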

Unroll loops

This pass implements a simple loop unroller. It works best when loops have been canonicalized by the -indvars pass, allowing it to determine the trip counts of loops easily.
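
For example (illustrative code), a loop whose trip count is known to be 4 can be unrolled completely, eliminating the back edge:

void unroll_before(int *a) {
  for (int i = 0; i < 4; ++i)   /* trip count of 4 is known statically */
    a[i] = i;
}

void unroll_after(int *a) {     /* fully unrolled: no loop remains     */
  a[0] = 0;
  a[1] = 1;
  a[2] = 2;
  a[3] = 3;
}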

Unswitch loops

This pass transforms loops that contain branches on loop-invariant conditions to have multiple loops. For example, it turns the left into the right code:

for (...)                  if (lic)
  A                          for (...)
  if (lic)                     A; B; C
    B                      else
  C                          for (...)
                               A; C

This can increase the size of the code exponentially (doubling it every time a loop is unswitched) so we only unswitch if the resultant code will be smaller than a threshold.

This pass expects LICM to be run before it to hoist invariant conditions out of the loop, to make the unswitching opportunity obvious.

Canonicalize natural loops

This pass performs several transformations to transform natural loops into a simpler form, which makes subsequent analyses and transformations simpler and more effective.

Loop pre-header insertion guarantees that there is a single, non-critical entry edge from outside of the loop to the loop header. This simplifies a number of analyses and transformations, such as LICM.

Loop exit-block insertion guarantees that all exit blocks from the loop (blocks which are outside of the loop that have predecessors inside of the loop) only have predecessors from inside of the loop (and are thus dominated by the loop header). This simplifies transformations such as store-sinking that are built into LICM.

This pass also guarantees that loops will have exactly one backedge.

Note that the simplifycfg pass will clean up blocks which are split out but end up being unnecessary, so usage of this pass should not pessimize generated code.

This pass obviously modifies the CFG, but updates loop information and dominator information.

lowers packed operations to operations on smaller packed datatypes

Lowers operations on vector datatypes into operations on more primitive vector datatypes, and finally to scalar operations.

Lower allocations from instructions to calls

Turn malloc and free instructions into @malloc and @free calls.

This is a target-dependent transformation because it depends on the size of data types and alignment constraints.

Lower GC intrinsics, for GCless code generators

This file implements lowering for the llvm.gc* intrinsics for targets that do not natively support them (which includes the C backend). Note that the code generated is not as efficient as it would be for targets that natively support the GC intrinsics, but it is useful for getting new targets up-and-running quickly.

This pass implements the code transformation described in this paper:

"Accurate Garbage Collection in an Uncooperative Environment" Fergus Henderson, ISMM, 2002

Lower invoke and unwind, for unwindless code generators

This transformation is designed for use by code generators which do not yet support stack unwinding. This pass supports two models of exception handling lowering, the 'cheap' support and the 'expensive' support.

'Cheap' exception handling support gives the program the ability to execute any program which does not "throw an exception", by turning 'invoke' instructions into calls and by turning 'unwind' instructions into calls to abort(). If the program does dynamically use the unwind instruction, the program will print a message then abort.

'Expensive' exception handling support gives the full exception handling support to the program at the cost of making the 'invoke' instruction really expensive. It basically inserts setjmp/longjmp calls to emulate the exception handling as necessary.

Because the 'expensive' support slows down programs a lot, and EH is only used for a subset of the programs, it must be specifically enabled by the -enable-correct-eh-support option.

Note that after this pass runs the CFG is not entirely accurate (exceptional control flow edges are not correct anymore) so only very simple things should be done after the lowerinvoke pass has run (like generation of native code). This should not be used as a general purpose "my LLVM-to-LLVM pass doesn't support the invoke instruction yet" lowering pass.

Lower select instructions to branches

Lowers select instructions into conditional branches for targets that do not have conditional moves or that have not implemented the select instruction yet.

Note that this pass could be improved. In particular it turns every select instruction into a new conditional branch, even though some common cases have select instructions on the same predicate next to each other. It would be better to use the same branch for the whole group of selects.

Lower Set Jump

Lowers setjmp and longjmp to use the LLVM invoke and unwind instructions as necessary.

Lowering of longjmp is fairly trivial. We replace the call with a call to the LLVM library function __llvm_sjljeh_throw_longjmp(). This unwinds the stack for us calling all of the destructors for objects allocated on the stack.

At a setjmp call, the basic block is split and the setjmp is removed. The calls in a function that contains a setjmp are converted to invoke instructions, whose exception-handling branch checks whether the exception is a longjmp exception and, if so, whether it is handled in this function. If it is, the code retrieves the value returned by the longjmp and branches to the point where the basic block was split. invoke instructions are handled in a similar fashion, with the original exception block being executed if the exception is not a longjmp that is handled by this function.

Lower SwitchInst's to branches

Rewrites switch instructions with a sequence of branches, which allows targets to get away with not implementing the switch instruction until it is convenient.
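
A source-level sketch of the rewrite (hypothetical code); the pass may arrange the comparisons as a balanced tree rather than the simple chain shown here:

int with_switch(int x) {
  switch (x) {
  case 0:  return 10;
  case 1:  return 20;
  default: return -1;
  }
}

int with_branches(int x) {      /* conceptually what the pass produces */
  if (x == 0) return 10;
  if (x == 1) return 20;
  return -1;
}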

Promote Memory to Register

This file promotes memory references to be register references. It promotes alloca instructions which only have loads and stores as uses. An alloca is transformed by using dominator frontiers to place phi nodes, then traversing the function in depth-first order to rewrite loads and stores as appropriate. This is just the standard SSA construction algorithm to construct "pruned" SSA form.

Unify function exit nodes

Ensure that functions have at most one ret instruction in them. Additionally, it keeps track of which node is the new exit node of the CFG.

Predicate Simplifier

Path-sensitive optimizer. In a branch where x == y, replace uses of x with y. Permits further optimization, such as the elimination of the unreachable call:

void test(int *p, int *q)
{
  if (p != q)
    return;

  if (*p != *q)
    foo(); // unreachable
}

Remove unused exception handling info

This file implements a simple interprocedural pass which walks the call-graph, turning invoke instructions into call instructions if and only if the callee cannot throw an exception. It implements this as a bottom-up traversal of the call-graph.

Raise allocations from calls to instructions

Converts @malloc and @free calls to malloc and free instructions.

Reassociate expressions

This pass reassociates commutative expressions in an order that is designed to promote better constant propagation, GCSE, LICM, PRE, etc.

For example: 4 + (x + 5) ⇒ x + (4 + 5)

In the implementation of this algorithm, constants are assigned rank = 0, function arguments are rank = 1, and other values are assigned ranks corresponding to the reverse post order traversal of current function (starting at 2), which effectively gives values in deep loops higher rank than values not in loops.

Demote all values to stack slots

This file demotes all registers to memory references. It is intended to be the inverse of -mem2reg. By converting to load instructions, the only values live across basic blocks are alloca instructions and load instructions before phi nodes. It is intended that this should make CFG hacking much easier. To make later hacking easier, the entry block is split into two, such that all introduced alloca instructions (and nothing else) are in the entry block.

Scalar Replacement of Aggregates

The well-known scalar replacement of aggregates transformation. This transform breaks up alloca instructions of aggregate type (structure or array) into individual alloca instructions for each member if possible. Then, if possible, it transforms the individual alloca instructions into nice clean scalar SSA form.

This combines a simple scalar replacement of aggregates algorithm with the mem2reg algorithm because the two often interact, especially for C++ programs. As such, iterating between scalarrepl and mem2reg until we run out of things to promote works well.
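
A source-level sketch (hypothetical code): the aggregate below never escapes, so its alloca can be split into one alloca per field, which mem2reg then turns into SSA values:

struct point { int x; int y; };

int manhattan(int a, int b) {
  struct point p;     /* one aggregate alloca...                         */
  p.x = a;            /* ...split into independent scalars for p.x and   */
  p.y = b;            /*    p.y, which mem2reg then keeps in registers   */
  return p.x + p.y;
}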

Sparse Conditional Constant Propagation

Sparse conditional constant propagation and merging, which can be summarized as:

  1. Assumes values are constant unless proven otherwise
  2. Assumes BasicBlocks are dead unless proven otherwise
  3. Proves values to be constant, and replaces them with constants
  4. Proves conditional branches to be unconditional

Note that this pass has a habit of making definitions be dead. It is a good idea to run a DCE pass sometime after running this pass.

Simplify well-known library calls

Applies a variety of small optimizations to calls to specific well-known functions (e.g., runtime library functions). For example, a call to exit(3) that occurs within the main() function can be transformed into simply return 3.
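
Two more illustrative rewrites of this kind (assumed examples of library-call simplifications, not an exhaustive list):

#include <stdio.h>
#include <string.h>

size_t demo(void) {
  printf("hello\n");        /* can become: puts("hello")  */
  return strlen("hello");   /* can fold to the constant 5 */
}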

Simplify the CFG

Performs dead code elimination and basic block merging. Specifically:

  1. Removes basic blocks with no predecessors.
  2. Merges a basic block into its predecessor if there is only one and the predecessor only has one successor.
  3. Eliminates PHI nodes for basic blocks with a single predecessor.
  4. Eliminates a basic block that only contains an unconditional branch.

Strip all symbols from a module

Performs code stripping. This transformation can delete:

  1. names for virtual registers
  2. symbols for internal globals and functions
  3. debug information

Note that this transformation makes code much less readable, so it should only be used in situations where the strip utility would be used, such as reducing code size or making it harder to reverse engineer code.

Tail Call Elimination

This file transforms calls of the current function (self recursion) followed by a return instruction into a branch to the entry of the function, creating a loop. This pass also implements several extensions to the basic algorithm, most notably the introduction of accumulator variables so that functions whose result is combined with an associative operation after the recursive call can still be turned into loops.
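
For example (illustrative code), the self-recursive tail call below can be replaced by a branch back to the entry, giving the loop shown in the second function:

int count_down(int n, int acc) {
  if (n == 0)
    return acc;
  return count_down(n - 1, acc + n);   /* tail call to the current function       */
}

int count_down_loop(int n, int acc) {  /* conceptually what -tailcallelim produces */
  while (n != 0) {
    acc = acc + n;
    n = n - 1;
  }
  return acc;
}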

Tail Duplication

This pass performs a limited form of tail duplication, intended to simplify CFGs by removing some unconditional branches. This pass is necessary to straighten out loops created by the C front-end, but also is capable of making other code nicer. After this pass is run, the CFG simplify pass should be run to clean up the mess.

Utility Passes

This section describes the LLVM Utility Passes.

Dead Argument Hacking (BUGPOINT USE ONLY; DO NOT USE)

Same as dead argument elimination, but deletes arguments to functions which are external. This is only for use by bugpoint.

Extract Basic Blocks From Module (for bugpoint use)

This pass is used by bugpoint to extract all blocks from the module into their own functions.

Bitcode Writer

Yet to be written.

Module Verifier

Verifies LLVM IR code. This is useful to run after an optimization which is undergoing testing. Note that llvm-as verifies its input before emitting bitcode, and also that malformed bitcode is likely to make LLVM crash. All language front-ends are therefore encouraged to verify their output before performing optimizing transformations.

Note that this does not provide full security verification (like Java), but instead just tries to ensure that code is well-formed.

View CFG of function

Displays the control flow graph using the GraphViz tool.

View CFG of function (with no function bodies)

Displays the control flow graph using the GraphViz tool, but omitting function bodies.

