1 ==============================================
2 Kaleidoscope: Adding JIT and Optimizer Support
3 ==============================================
11 Welcome to Chapter 4 of the "`Implementing a language with
12 LLVM <index.html>`_" tutorial. Chapters 1-3 described the implementation
13 of a simple language and added support for generating LLVM IR. This
14 chapter describes two new techniques: adding optimizer support to your
15 language, and adding JIT compiler support. These additions will
demonstrate how to get nice, efficient code for the Kaleidoscope
programming language.
19 Trivial Constant Folding
20 ========================
22 Our demonstration for Chapter 3 is elegant and easy to extend.
23 Unfortunately, it does not produce wonderful code. The IRBuilder,
24 however, does give us obvious optimizations when compiling simple code:
28 ready> def test(x) 1+2+x;
29 Read function definition:
define double @test(double %x) {
entry:
  %addtmp = fadd double 3.000000e+00, %x
  ret double %addtmp
}
This code is not a literal transcription of the AST built by parsing the
input. That would be:
41 ready> def test(x) 1+2+x;
42 Read function definition:
define double @test(double %x) {
entry:
  %addtmp = fadd double 2.000000e+00, 1.000000e+00
  %addtmp1 = fadd double %addtmp, %x
  ret double %addtmp1
}
Constant folding, as seen above, is a very common and very important
optimization: so much so that many language implementors implement
constant folding support in their AST representation.
With LLVM, you don't need this support in the AST. Since all calls to
build LLVM IR go through the LLVM IR builder, the builder itself checks
to see if there is a constant folding opportunity when you call it. If
so, it just does the constant fold and returns the constant instead of
creating an instruction.
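To make this concrete, here is a minimal sketch of the folding at the API
level, using the tutorial's ``Builder`` global and the ``getGlobalContext()``
convention (illustrative only, not code we add to Kaleidoscope):

.. code-block:: c++

    // Both operands are constants, so CreateFAdd folds them itself and
    // returns a ConstantFP holding 3.0 instead of emitting an fadd.
    Value *L = ConstantFP::get(getGlobalContext(), APFloat(1.0));
    Value *R = ConstantFP::get(getGlobalContext(), APFloat(2.0));
    Value *Sum = Builder.CreateFAdd(L, R, "addtmp");
    assert(isa<ConstantFP>(Sum) && "no instruction was created");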
60 Well, that was easy :). In practice, we recommend always using
61 ``IRBuilder`` when generating code like this. It has no "syntactic
62 overhead" for its use (you don't have to uglify your compiler with
63 constant checks everywhere) and it can dramatically reduce the amount of
LLVM IR that is generated in some cases (particularly for languages with a
65 macro preprocessor or that use a lot of constants).
67 On the other hand, the ``IRBuilder`` is limited by the fact that it does
68 all of its analysis inline with the code as it is built. If you take a
69 slightly more complex example:
73 ready> def test(x) (1+2+x)*(x+(1+2));
74 ready> Read function definition:
define double @test(double %x) {
entry:
  %addtmp = fadd double 3.000000e+00, %x
  %addtmp1 = fadd double %x, 3.000000e+00
  %multmp = fmul double %addtmp, %addtmp1
  ret double %multmp
}
83 In this case, the LHS and RHS of the multiplication are the same value.
84 We'd really like to see this generate "``tmp = x+3; result = tmp*tmp;``"
85 instead of computing "``x+3``" twice.
87 Unfortunately, no amount of local analysis will be able to detect and
88 correct this. This requires two transformations: reassociation of
expressions (to make the adds lexically identical) and Common
90 Subexpression Elimination (CSE) to delete the redundant add instruction.
91 Fortunately, LLVM provides a broad range of optimizations that you can
92 use, in the form of "passes".
94 LLVM Optimization Passes
95 ========================
97 LLVM provides many optimization passes, which do many different sorts of
98 things and have different tradeoffs. Unlike other systems, LLVM doesn't
99 hold to the mistaken notion that one set of optimizations is right for
100 all languages and for all situations. LLVM allows a compiler implementor
101 to make complete decisions about what optimizations to use, in which
102 order, and in what situation.
104 As a concrete example, LLVM supports both "whole module" passes, which
look across as large a body of code as they can (often a whole file,
106 but if run at link time, this can be a substantial portion of the whole
107 program). It also supports and includes "per-function" passes which just
108 operate on a single function at a time, without looking at other
109 functions. For more information on passes and how they are run, see the
110 `How to Write a Pass <../WritingAnLLVMPass.html>`_ document and the
111 `List of LLVM Passes <../Passes.html>`_.
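For instance, while we will build a per-function pass manager below, a
whole-module pass would be driven through a module-level pass manager
instead. Here is a rough sketch using the legacy pass manager API of this
tutorial's vintage (illustrative only; the inlining pass is just an example
of a cross-function optimization, and namespaces may vary by LLVM version):

.. code-block:: c++

    // A module-level pass manager hands each pass the entire module, so
    // cross-function transformations such as inlining become possible.
    legacy::PassManager MPM;
    MPM.add(createFunctionInliningPass());
    MPM.run(*TheModule);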
113 For Kaleidoscope, we are currently generating functions on the fly, one
114 at a time, as the user types them in. We aren't shooting for the
115 ultimate optimization experience in this setting, but we also want to
116 catch the easy and quick stuff where possible. As such, we will choose
117 to run a few per-function optimizations as the user types the function
118 in. If we wanted to make a "static Kaleidoscope compiler", we would use
119 exactly the code we have now, except that we would defer running the
120 optimizer until the entire file has been parsed.
122 In order to get per-function optimizations going, we need to set up a
`FunctionPassManager <../WritingAnLLVMPass.html#what-passmanager-does>`_ to hold
124 and organize the LLVM optimizations that we want to run. Once we have
125 that, we can add a set of optimizations to run. We'll need a new
126 FunctionPassManager for each module that we want to optimize, so we'll
write a function to create and initialize both the module and pass manager
for us:
void InitializeModuleAndPassManager(void) {
  // Open a new module.
  TheModule = llvm::make_unique<Module>("my cool jit", getGlobalContext());
  TheModule->setDataLayout(TheJIT->getTargetMachine().createDataLayout());

  // Create a new pass manager attached to it.
  TheFPM = llvm::make_unique<FunctionPassManager>(TheModule.get());

  // Provide basic AliasAnalysis support for GVN.
  TheFPM->add(createBasicAliasAnalysisPass());
  // Do simple "peephole" optimizations and bit-twiddling optzns.
  TheFPM->add(createInstructionCombiningPass());
  // Reassociate expressions.
  TheFPM->add(createReassociatePass());
  // Eliminate Common SubExpressions.
  TheFPM->add(createGVNPass());
  // Simplify the control flow graph (deleting unreachable blocks, etc).
  TheFPM->add(createCFGSimplificationPass());

  TheFPM->doInitialization();
}
154 This code initializes the global module ``TheModule``, and the function pass
155 manager ``TheFPM``, which is attached to ``TheModule``. Once the pass manager is
156 set up, we use a series of "add" calls to add a bunch of LLVM passes.
158 In this case, we choose to add five passes: one analysis pass (alias analysis),
159 and four optimization passes. The passes we choose here are a pretty standard set
160 of "cleanup" optimizations that are useful for a wide variety of code. I won't
161 delve into what they do but, believe me, they are a good starting place :).
163 Once the PassManager is set up, we need to make use of it. We do this by
164 running it after our newly created function is constructed (in
165 ``FunctionAST::codegen()``), but before it is returned to the client:
if (Value *RetVal = Body->codegen()) {
  // Finish off the function.
  Builder.CreateRet(RetVal);

  // Validate the generated code, checking for consistency.
  verifyFunction(*TheFunction);

  // Optimize the function.
  TheFPM->run(*TheFunction);

  return TheFunction;
}
182 As you can see, this is pretty straightforward. The
183 ``FunctionPassManager`` optimizes and updates the LLVM Function\* in
184 place, improving (hopefully) its body. With this in place, we can try
185 our test above again:
189 ready> def test(x) (1+2+x)*(x+(1+2));
190 ready> Read function definition:
define double @test(double %x) {
entry:
  %addtmp = fadd double %x, 3.000000e+00
  %multmp = fmul double %addtmp, %addtmp
  ret double %multmp
}
198 As expected, we now get our nicely optimized code, saving a floating
199 point add instruction from every execution of this function.
201 LLVM provides a wide variety of optimizations that can be used in
202 certain circumstances. Some `documentation about the various
203 passes <../Passes.html>`_ is available, but it isn't very complete.
204 Another good source of ideas can come from looking at the passes that
205 ``Clang`` runs to get started. The "``opt``" tool allows you to
experiment with passes from the command line, so you can see if they do
anything.
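For example, assuming you have saved some Kaleidoscope IR to a file named
``test.ll`` (a hypothetical name), something like this runs the same cleanup
passes from the command line:

::

    $ opt -S -instcombine -reassociate -gvn -simplifycfg test.ll -o test-opt.ll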
Now that we have reasonable code coming out of our front-end, let's talk
about executing it!
212 Adding a JIT Compiler
213 =====================
215 Code that is available in LLVM IR can have a wide variety of tools
216 applied to it. For example, you can run optimizations on it (as we did
217 above), you can dump it out in textual or binary forms, you can compile
218 the code to an assembly file (.s) for some target, or you can JIT
219 compile it. The nice thing about the LLVM IR representation is that it
220 is the "common currency" between many different parts of the compiler.
222 In this section, we'll add JIT compiler support to our interpreter. The
223 basic idea that we want for Kaleidoscope is to have the user enter
224 function bodies as they do now, but immediately evaluate the top-level
225 expressions they type in. For example, if they type in "1 + 2;", we
226 should evaluate and print out 3. If they define a function, they should
227 be able to call it from the command line.
229 In order to do this, we first declare and initialize the JIT. This is
done by adding a global variable ``TheJIT``, and initializing it in
``main``:
235 static std::unique_ptr<KaleidoscopeJIT> TheJIT;
239 TheJIT = llvm::make_unique<KaleidoscopeJIT>();
  // Run the main "interpreter loop" now.
  MainLoop();
247 The KaleidoscopeJIT class is a simple JIT built specifically for these
248 tutorials. In later chapters we will look at how it works and extend it with
new features, but for now we will take it as given. Its API is very simple:
250 ``addModule`` adds an LLVM IR module to the JIT, making its functions
251 available for execution; ``removeModule`` removes a module, freeing any
252 memory associated with the code in that module; and ``findSymbol`` allows us
253 to look up pointers to the compiled code.
We can take this simple API and change our code that parses top-level expressions to
look like this:
static void HandleTopLevelExpression() {
  // Evaluate a top-level expression into an anonymous function.
  if (auto FnAST = ParseTopLevelExpr()) {
    if (FnAST->codegen()) {

      // JIT the module containing the anonymous expression, keeping a handle so
      // we can free it later.
      auto H = TheJIT->addModule(std::move(TheModule));
      InitializeModuleAndPassManager();

      // Search the JIT for the __anon_expr symbol.
      auto ExprSymbol = TheJIT->findSymbol("__anon_expr");
      assert(ExprSymbol && "Function not found");

      // Get the symbol's address and cast it to the right type (takes no
      // arguments, returns a double) so we can call it as a native function.
      double (*FP)() = (double (*)())(intptr_t)ExprSymbol.getAddress();
      fprintf(stderr, "Evaluated to %f\n", FP());

      // Delete the anonymous expression module from the JIT.
      TheJIT->removeModule(H);
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}
If parsing and codegen succeed, the next step is to add the module containing
284 the top-level expression to the JIT. We do this by calling addModule, which
285 triggers code generation for all the functions in the module, and returns a
286 handle that can be used to remove the module from the JIT later. Once the module
287 has been added to the JIT it can no longer be modified, so we also open a new
288 module to hold subsequent code by calling ``InitializeModuleAndPassManager()``.
290 Once we've added the module to the JIT we need to get a pointer to the final
291 generated code. We do this by calling the JIT's findSymbol method, and passing
292 the name of the top-level expression function: ``__anon_expr``. Since we just
293 added this function, we assert that findSymbol returned a result.
295 Next, we get the in-memory address of the ``__anon_expr`` function by calling
296 ``getAddress()`` on the symbol. Recall that we compile top-level expressions
297 into a self-contained LLVM function that takes no arguments and returns the
298 computed double. Because the LLVM JIT compiler matches the native platform ABI,
299 this means that you can just cast the result pointer to a function pointer of
that type and call it directly. This means there is no difference between JIT
compiled code and native machine code that is statically linked into your
application.
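As an aside, the same pattern extends to any signature. If we had JIT'd a
function taking two doubles, a sketch of the cast and call would look like
this (hypothetical; our top-level expressions always take no arguments):

.. code-block:: c++

    // Cast the symbol's address to 'double (double, double)' and call it.
    double (*FP2)(double, double) =
        (double (*)(double, double))(intptr_t)ExprSymbol.getAddress();
    double Result = FP2(4.0, 10.0);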
304 Finally, since we don't support re-evaluation of top-level expressions, we
305 remove the module from the JIT when we're done to free the associated memory.
306 Recall, however, that the module we created a few lines earlier (via
``InitializeModuleAndPassManager``) is still open and waiting for new code to be
added.
With just these two changes, let's see how Kaleidoscope works now!
ready> 4+5;
Read top-level expression:
define double @__anon_expr() {
entry:
  ret double 9.000000e+00
}

Evaluated to 9.000000
323 Well this looks like it is basically working. The dump of the function
324 shows the "no argument function that always returns double" that we
325 synthesize for each top-level expression that is typed in. This
326 demonstrates very basic functionality, but can we do more?
330 ready> def testfunc(x y) x + y*2;
331 Read function definition:
define double @testfunc(double %x, double %y) {
entry:
  %multmp = fmul double %y, 2.000000e+00
  %addtmp = fadd double %multmp, %x
  ret double %addtmp
}
339 ready> testfunc(4, 10);
Read top-level expression:
define double @__anon_expr() {
entry:
  %calltmp = call double @testfunc(double 4.000000e+00, double 1.000000e+01)
  ret double %calltmp
}

Evaluated to 24.000000
349 ready> testfunc(5, 10);
350 ready> LLVM ERROR: Program used external function 'testfunc' which could not be resolved!
353 Function definitions and calls also work, but something went very wrong on that
last line. The call looks valid, so what happened? As you may have guessed from
the API, a Module is a unit of allocation for the JIT, and testfunc was part
of the same module that contained the anonymous expression. When we removed that
357 module from the JIT to free the memory for the anonymous expression, we deleted
358 the definition of ``testfunc`` along with it. Then, when we tried to call
359 testfunc a second time, the JIT could no longer find it.
361 The easiest way to fix this is to put the anonymous expression in a separate
362 module from the rest of the function definitions. The JIT will happily resolve
363 function calls across module boundaries, as long as each of the functions called
364 has a prototype, and is added to the JIT before it is called. By putting the
365 anonymous expression in a different module we can delete it without affecting
366 the rest of the functions.
368 In fact, we're going to go a step further and put every function in its own
369 module. Doing so allows us to exploit a useful property of the KaleidoscopeJIT
370 that will make our environment more REPL-like: Functions can be added to the
371 JIT more than once (unlike a module where every function must have a unique
372 definition). When you look up a symbol in KaleidoscopeJIT it will always return
373 the most recent definition:
ready> def foo(x) x + 1;
Read function definition:
define double @foo(double %x) {
entry:
  %addtmp = fadd double %x, 1.000000e+00
  ret double %addtmp
}

ready> foo(2);
Evaluated to 3.000000

ready> def foo(x) x + 2;
define double @foo(double %x) {
entry:
  %addtmp = fadd double %x, 2.000000e+00
  ret double %addtmp
}

ready> foo(2);
Evaluated to 4.000000
399 To allow each function to live in its own module we'll need a way to
400 re-generate previous function declarations into each new module we open:
static std::map<std::string, std::unique_ptr<PrototypeAST>> FunctionProtos;

...
Function *getFunction(std::string Name) {
  // First, see if the function has already been added to the current module.
  if (auto *F = TheModule->getFunction(Name))
    return F;

  // If not, check whether we can codegen the declaration from some existing
  // prototype.
  auto FI = FunctionProtos.find(Name);
  if (FI != FunctionProtos.end())
    return FI->second->codegen();

  // If no existing prototype exists, return null.
  return nullptr;
}
425 Value *CallExprAST::codegen() {
426 // Look up the name in the global module table.
427 Function *CalleeF = getFunction(Callee);
Function *FunctionAST::codegen() {
  // Transfer ownership of the prototype to the FunctionProtos map, but keep a
  // reference to it for use below.
  auto &P = *Proto;
  FunctionProtos[Proto->getName()] = std::move(Proto);
  Function *TheFunction = getFunction(P.getName());
441 To enable this, we'll start by adding a new global, ``FunctionProtos``, that
442 holds the most recent prototype for each function. We'll also add a convenience
443 method, ``getFunction()``, to replace calls to ``TheModule->getFunction()``.
444 Our convenience method searches ``TheModule`` for an existing function
445 declaration, falling back to generating a new declaration from FunctionProtos if
446 it doesn't find one. In ``CallExprAST::codegen()`` we just need to replace the
447 call to ``TheModule->getFunction()``. In ``FunctionAST::codegen()`` we need to
448 update the FunctionProtos map first, then call ``getFunction()``. With this
449 done, we can always obtain a function declaration in the current module for any
450 previously declared function.
452 We also need to update HandleDefinition and HandleExtern:
static void HandleDefinition() {
  if (auto FnAST = ParseDefinition()) {
    if (auto *FnIR = FnAST->codegen()) {
      fprintf(stderr, "Read function definition:");
      FnIR->dump();
      TheJIT->addModule(std::move(TheModule));
      InitializeModuleAndPassManager();
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}
static void HandleExtern() {
  if (auto ProtoAST = ParseExtern()) {
    if (auto *FnIR = ProtoAST->codegen()) {
      fprintf(stderr, "Read extern: ");
      FnIR->dump();
      FunctionProtos[ProtoAST->getName()] = std::move(ProtoAST);
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}
483 In HandleDefinition, we add two lines to transfer the newly defined function to
484 the JIT and open a new module. In HandleExtern, we just need to add one line to
485 add the prototype to FunctionProtos.
With these changes made, let's try our REPL again (I removed the dump of the
488 anonymous functions this time, you should get the idea by now :) :
ready> def foo(x) x + 1;
ready> foo(2);
Evaluated to 3.000000

ready> def foo(x) x + 2;
ready> foo(2);
Evaluated to 4.000000
Even with this simple code, we get some surprisingly powerful capabilities -
check this out:
ready> extern sin(x);
Read extern:
declare double @sin(double)

ready> extern cos(x);
Read extern:
declare double @cos(double)

ready> sin(1.0);
Read top-level expression:
define double @__anon_expr() {
entry:
  ret double 0x3FEAED548F090CEE
}

Evaluated to 0.841471
ready> def foo(x) sin(x)*sin(x) + cos(x)*cos(x);
Read function definition:
define double @foo(double %x) {
entry:
  %calltmp = call double @sin(double %x)
  %multmp = fmul double %calltmp, %calltmp
  %calltmp2 = call double @cos(double %x)
  %multmp4 = fmul double %calltmp2, %calltmp2
  %addtmp = fadd double %multmp, %multmp4
  ret double %addtmp
}
ready> foo(4.0);
Read top-level expression:
define double @__anon_expr() {
entry:
  %calltmp = call double @foo(double 4.000000e+00)
  ret double %calltmp
}

Evaluated to 1.000000
546 Whoa, how does the JIT know about sin and cos? The answer is surprisingly
547 simple: The KaleidoscopeJIT has a straightforward symbol resolution rule that
548 it uses to find symbols that aren't available in any given module: First
549 it searches all the modules that have already been added to the JIT, from the
550 most recent to the oldest, to find the newest definition. If no definition is
551 found inside the JIT, it falls back to calling "``dlsym("sin")``" on the
552 Kaleidoscope process itself. Since "``sin``" is defined within the JIT's
553 address space, it simply patches up calls in the module to call the libm
554 version of ``sin`` directly.
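For the curious, this rule corresponds roughly to the ORC lambda resolver
inside KaleidoscopeJIT. A simplified sketch follows (an approximation of the
real header, not an exact copy; ``findMangledSymbol`` is the JIT's internal
newest-first search):

.. code-block:: c++

    // Resolve a symbol for a module being added to the JIT: search the
    // already-JIT'd modules first (newest definition wins), then fall
    // back to symbols in the host process (e.g. libm's sin/cos).
    auto Resolver = orc::createLambdaResolver(
        [&](const std::string &Name) {
          if (auto Sym = findMangledSymbol(Name))
            return RuntimeDyld::SymbolInfo(Sym.getAddress(), Sym.getFlags());
          if (auto Addr = RTDyldMemoryManager::getSymbolAddressInProcess(Name))
            return RuntimeDyld::SymbolInfo(Addr, JITSymbolFlags::Exported);
          return RuntimeDyld::SymbolInfo(nullptr);
        },
        [](const std::string &Name) { return nullptr; });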
556 In the future we'll see how tweaking this symbol resolution rule can be used to
557 enable all sorts of useful features, from security (restricting the set of
558 symbols available to JIT'd code), to dynamic code generation based on symbol
559 names, and even lazy compilation.
561 One immediate benefit of the symbol resolution rule is that we can now extend
562 the language by writing arbitrary C++ code to implement operations. For example,
/// putchard - putchar that takes a double and returns 0.
extern "C" double putchard(double X) {
  fputc((char)X, stderr);
  return 0;
}
573 Now we can produce simple output to the console by using things like:
574 "``extern putchard(x); putchard(120);``", which prints a lowercase 'x'
575 on the console (120 is the ASCII code for 'x'). Similar code could be
used to implement file I/O, console input, and many other capabilities
in Kaleidoscope.
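A hypothetical session (abridged; the extern's IR dump is omitted) might look
like:

::

    ready> extern putchard(x);
    ready> putchard(120);
    x
    Evaluated to 0.000000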
579 This completes the JIT and optimizer chapter of the Kaleidoscope
580 tutorial. At this point, we can compile a non-Turing-complete
581 programming language, optimize and JIT compile it in a user-driven way.
582 Next up we'll look into `extending the language with control flow
constructs <LangImpl5.html>`_, tackling some interesting LLVM IR issues
along the way.
589 Here is the complete code listing for our running example, enhanced with
590 the LLVM JIT and optimizer. To build this example, use:
595 clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core mcjit native` -O3 -o toy
599 If you are compiling this on Linux, make sure to add the "-rdynamic"
option as well. This makes sure that the external functions are resolved
properly at runtime.
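On Linux the full command might therefore look like:

.. code-block:: bash

    clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core mcjit native` -O3 -rdynamic -o toy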
605 .. literalinclude:: ../../examples/Kaleidoscope/Chapter4/toy.cpp
608 `Next: Extending the language: control flow <LangImpl5.html>`_