1 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
2 "http://www.w3.org/TR/html4/strict.dtd">
6 <title>Kaleidoscope: Adding JIT and Optimizer Support</title>
7 <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
8 <meta name="author" content="Chris Lattner">
9 <link rel="stylesheet" href="../llvm.css" type="text/css">
14 <div class="doc_title">Kaleidoscope: Adding JIT and Optimizer Support</div>
17 <li><a href="index.html">Up to Tutorial Index</a></li>
20 <li><a href="#intro">Chapter 4 Introduction</a></li>
21 <li><a href="#trivialconstfold">Trivial Constant Folding</a></li>
22 <li><a href="#optimizerpasses">LLVM Optimization Passes</a></li>
23 <li><a href="#jit">Adding a JIT Compiler</a></li>
24 <li><a href="#code">Full Code Listing</a></li>
<li><a href="LangImpl5.html">Chapter 5</a>: Extending the Language: Control Flow</li>
31 <div class="doc_author">
32 <p>Written by <a href="mailto:sabre@nondot.org">Chris Lattner</a></p>
35 <!-- *********************************************************************** -->
36 <div class="doc_section"><a name="intro">Chapter 4 Introduction</a></div>
37 <!-- *********************************************************************** -->
39 <div class="doc_text">
41 <p>Welcome to Chapter 4 of the "<a href="index.html">Implementing a language
42 with LLVM</a>" tutorial. Chapters 1-3 described the implementation of a simple
43 language and added support for generating LLVM IR. This chapter describes
44 two new techniques: adding optimizer support to your language, and adding JIT
compiler support.  This shows how to get nice, efficient code for your language.</p>
50 <!-- *********************************************************************** -->
<div class="doc_section"><a name="trivialconstfold">Trivial Constant Folding</a></div>
53 <!-- *********************************************************************** -->
55 <div class="doc_text">
58 Our demonstration for Chapter 3 is elegant and easy to extend. Unfortunately,
59 it does not produce wonderful code. For example, when compiling simple code,
60 we don't get obvious optimizations:</p>
62 <div class="doc_code">
64 ready> <b>def test(x) 1+2+x;</b>
65 Read function definition:
66 define double @test(double %x) {
68 %addtmp = add double 1.000000e+00, 2.000000e+00
69 %addtmp1 = add double %addtmp, %x
<p>This code is a very, very literal transcription of the AST built by parsing
76 our code, and as such, lacks optimizations like constant folding (we'd like to
77 get "<tt>add x, 3.0</tt>" in the example above) as well as other more important
78 optimizations. Constant folding in particular is a very common and very
79 important optimization: so much so that many language implementors implement
80 constant folding support in their AST representation.</p>
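<p>To make that alternative concrete, here is a hedged sketch (plain C++, with illustrative names like <tt>Node</tt> and <tt>fold</tt> that are not part of this tutorial's AST) of the kind of AST-level constant folding a front-end would otherwise have to hand-roll:</p>

```cpp
#include <cassert>

// Illustrative only: a tiny expression tree of doubles.  A node with Op == 0
// is a numeric leaf; Op == '+' is an addition of its two children.
struct Node {
  char Op;
  double Val;       // meaningful only for leaves (Op == 0)
  Node *LHS, *RHS;  // meaningful only for operators
};

// fold() walks the tree and collapses an add of two constant leaves into a
// single constant leaf -- folding logic you'd otherwise bake into the AST.
static Node *fold(Node *N) {
  if (N->Op == '+') {
    N->LHS = fold(N->LHS);
    N->RHS = fold(N->RHS);
    if (N->LHS->Op == 0 && N->RHS->Op == 0)
      return new Node{0, N->LHS->Val + N->RHS->Val, nullptr, nullptr};
  }
  return N;
}
```

<p>Keeping this logic out of the AST, and letting the IR builder do it instead, is exactly the simplification described next.</p>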
82 <p>With LLVM, you don't need to. Since all calls to build LLVM IR go through
83 the LLVM builder, it would be nice if the builder itself checked to see if there
84 was a constant folding opportunity when you call it. If so, it could just do
85 the constant fold and return the constant instead of creating an instruction.
This is exactly what the <tt>LLVMFoldingBuilder</tt> class does.  Let's make one
89 <div class="doc_code">
91 static LLVMFoldingBuilder Builder;
95 <p>All we did was switch from <tt>LLVMBuilder</tt> to
<tt>LLVMFoldingBuilder</tt>.  Though we changed no other code, all of our
instructions are now implicitly constant folded, without us having to do anything
98 about it. For example, our example above now compiles to:</p>
100 <div class="doc_code">
102 ready> <b>def test(x) 1+2+x;</b>
103 Read function definition:
104 define double @test(double %x) {
106 %addtmp = add double 3.000000e+00, %x
112 <p>Well, that was easy :). In practice, we recommend always using
113 <tt>LLVMFoldingBuilder</tt> when generating code like this. It has no
114 "syntactic overhead" for its use (you don't have to uglify your compiler with
115 constant checks everywhere) and it can dramatically reduce the amount of
LLVM IR that is generated in some cases (particularly for languages with a macro
117 preprocessor or that use a lot of constants).</p>
119 <p>On the other hand, the <tt>LLVMFoldingBuilder</tt> is limited by the fact
120 that it does all of its analysis inline with the code as it is built. If you
121 take a slightly more complex example:</p>
123 <div class="doc_code">
125 ready> <b>def test(x) (1+2+x)*(x+(1+2));</b>
126 ready> Read function definition:
127 define double @test(double %x) {
129 %addtmp = add double 3.000000e+00, %x
130 %addtmp1 = add double %x, 3.000000e+00
131 %multmp = mul double %addtmp, %addtmp1
137 <p>In this case, the LHS and RHS of the multiplication are the same value. We'd
138 really like to see this generate "<tt>tmp = x+3; result = tmp*tmp;</tt>" instead
of computing "<tt>x+3</tt>" twice.</p>
141 <p>Unfortunately, no amount of local analysis will be able to detect and correct
142 this. This requires two transformations: reassociation of expressions (to
make the adds lexically identical) and Common Subexpression Elimination (CSE)
144 to delete the redundant add instruction. Fortunately, LLVM provides a broad
145 range of optimizations that you can use, in the form of "passes".</p>
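<p>To see why these two transformations compose, here is a hedged, self-contained sketch (plain C++ with invented names; this is not how LLVM implements either pass): canonicalizing commutative operand order stands in for reassociation, and a table of already-emitted expressions stands in for CSE.</p>

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// A toy three-address expression: Op applied to operands A and B.
struct Expr { char Op; std::string A, B; };

// "Reassociation" stand-in: order commutative operands so that x+3 and 3+x
// become lexically identical.
static Expr canonicalize(Expr E) {
  if ((E.Op == '+' || E.Op == '*') && E.B < E.A)
    std::swap(E.A, E.B);
  return E;
}

// "CSE" stand-in: if we've already emitted this expression, reuse its
// temporary instead of emitting a new computation.
static std::string emit(const Expr &E,
                        std::map<std::string, std::string> &Seen,
                        int &NextTmp) {
  Expr C = canonicalize(E);
  std::string Key = C.Op + C.A + "," + C.B;
  std::map<std::string, std::string>::iterator It = Seen.find(Key);
  if (It != Seen.end())
    return It->second;  // redundant computation eliminated
  std::string Tmp = "%t" + std::to_string(NextTmp++);
  Seen[Key] = Tmp;
  return Tmp;
}
```

<p>Without the canonicalization step the table never sees the two adds as equal, which is why reassociation has to run before CSE can delete the duplicate.</p>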
149 <!-- *********************************************************************** -->
<div class="doc_section"><a name="optimizerpasses">LLVM Optimization Passes</a></div>
152 <!-- *********************************************************************** -->
154 <div class="doc_text">
156 <p>LLVM provides many optimization passes which do many different sorts of
157 things and have different tradeoffs. Unlike other systems, LLVM doesn't hold
158 to the mistaken notion that one set of optimizations is right for all languages
159 and for all situations. LLVM allows a compiler implementor to make complete
decisions about what optimizations to use, in which order, and in what situation.</p>
<p>As a concrete example, LLVM supports both "whole module" passes, which look
across as large a body of code as they can (often a whole file, but if run
at link time, this can be a substantial portion of the whole program), and
"per-function" passes, which just operate on a single function at a time,
without looking at other functions.  For more information on passes and how
they get run, see the <a href="../WritingAnLLVMPass.html">How
to Write a Pass</a> document and the <a href="../Passes.html">List of LLVM
Passes</a>.</p>
172 <p>For Kaleidoscope, we are currently generating functions on the fly, one at
173 a time, as the user types them in. We aren't shooting for the ultimate
174 optimization experience in this setting, but we also want to catch the easy and
175 quick stuff where possible. As such, we will choose to run a few per-function
176 optimizations as the user types the function in. If we wanted to make a "static
177 Kaleidoscope compiler", we would use exactly the code we have now, except that
178 we would defer running the optimizer until the entire file has been parsed.</p>
180 <p>In order to get per-function optimizations going, we need to set up a
181 <a href="../WritingAnLLVMPass.html#passmanager">FunctionPassManager</a> to hold and
182 organize the LLVM optimizations that we want to run. Once we have that, we can
183 add a set of optimizations to run. The code looks like this:</p>
185 <div class="doc_code">
187 ExistingModuleProvider OurModuleProvider(TheModule);
188 FunctionPassManager OurFPM(&OurModuleProvider);
190 // Set up the optimizer pipeline. Start with registering info about how the
191 // target lays out data structures.
192 OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
193 // Do simple "peephole" optimizations and bit-twiddling optzns.
194 OurFPM.add(createInstructionCombiningPass());
195 // Reassociate expressions.
196 OurFPM.add(createReassociatePass());
197 // Eliminate Common SubExpressions.
198 OurFPM.add(createGVNPass());
199 // Simplify the control flow graph (deleting unreachable blocks, etc).
200 OurFPM.add(createCFGSimplificationPass());
202 // Set the global so the code gen can use this.
203 TheFPM = &OurFPM;
205 // Run the main "interpreter loop" now.
<p>This code defines two objects, an <tt>ExistingModuleProvider</tt> and a
211 <tt>FunctionPassManager</tt>. The former is basically a wrapper around our
212 <tt>Module</tt> that the PassManager requires. It provides certain flexibility
that we're not going to take advantage of here, so I won't dive into what it is.</p>
216 <p>The meat of the matter is the definition of "<tt>OurFPM</tt>". It
217 requires a pointer to the <tt>Module</tt> (through the <tt>ModuleProvider</tt>)
218 to construct itself. Once it is set up, we use a series of "add" calls to add
a bunch of LLVM passes.  The first pass is basically boilerplate: it adds a pass
so that later optimizations know how the data structures in the program are
laid out.  The "<tt>TheExecutionEngine</tt>" variable is related to the JIT,
222 which we will get to in the next section.</p>
<p>In this case, we choose to add four optimization passes.  The passes we chose
225 here are a pretty standard set of "cleanup" optimizations that are useful for
226 a wide variety of code. I won't delve into what they do, but believe me that
227 they are a good starting place :).</p>
229 <p>Once the PassManager is set up, we need to make use of it. We do this by
230 running it after our newly created function is constructed (in
231 <tt>FunctionAST::Codegen</tt>), but before it is returned to the client:</p>
233 <div class="doc_code">
235 if (Value *RetVal = Body->Codegen()) {
236 // Finish off the function.
237 Builder.CreateRet(RetVal);
239 // Validate the generated code, checking for consistency.
240 verifyFunction(*TheFunction);
242 <b>// Optimize the function.
243 TheFPM->run(*TheFunction);</b>
<p>As you can see, this is pretty straightforward.  The
<tt>FunctionPassManager</tt> optimizes and updates the LLVM Function* in place,
improving (hopefully) its body.  With this in place, we can try our test above again:</p>
255 <div class="doc_code">
257 ready> <b>def test(x) (1+2+x)*(x+(1+2));</b>
258 ready> Read function definition:
259 define double @test(double %x) {
261 %addtmp = add double %x, 3.000000e+00
262 %multmp = mul double %addtmp, %addtmp
268 <p>As expected, we now get our nicely optimized code, saving a floating point
269 add instruction from every execution of this function.</p>
271 <p>LLVM provides a wide variety of optimizations that can be used in certain
272 circumstances. Some <a href="../Passes.html">documentation about the various
273 passes</a> is available, but it isn't very complete. Another good source of
274 ideas is to look at the passes that <tt>llvm-gcc</tt> or
275 <tt>llvm-ld</tt> run to get started. The "<tt>opt</tt>" tool allows you to
experiment with passes from the command line, so you can see if they do anything.</p>
<p>Now that we have reasonable code coming out of our front-end, let's talk about executing it!</p>
284 <!-- *********************************************************************** -->
285 <div class="doc_section"><a name="jit">Adding a JIT Compiler</a></div>
286 <!-- *********************************************************************** -->
288 <div class="doc_text">
290 <p>Code that is available in LLVM IR can have a wide variety of tools
291 applied to it. For example, you can run optimizations on it (as we did above),
292 you can dump it out in textual or binary forms, you can compile the code to an
293 assembly file (.s) for some target, or you can JIT compile it. The nice thing
294 about the LLVM IR representation is that it is the "common currency" between
295 many different parts of the compiler.
298 <p>In this section, we'll add JIT compiler support to our interpreter. The
299 basic idea that we want for Kaleidoscope is to have the user enter function
300 bodies as they do now, but immediately evaluate the top-level expressions they
301 type in. For example, if they type in "1 + 2;", we should evaluate and print
302 out 3. If they define a function, they should be able to call it from the
305 <p>In order to do this, we first declare and initialize the JIT. This is done
306 by adding a global variable and a call in <tt>main</tt>:</p>
308 <div class="doc_code">
310 <b>static ExecutionEngine *TheExecutionEngine;</b>
314 <b>// Create the JIT.
315 TheExecutionEngine = ExecutionEngine::create(TheModule);</b>
321 <p>This creates an abstract "Execution Engine" which can be either a JIT
322 compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler
for you if one is available for your platform, otherwise it will fall back to the interpreter.</p>
326 <p>Once the <tt>ExecutionEngine</tt> is created, the JIT is ready to be used.
There are a variety of APIs that are useful, but the simplest one is the
328 "<tt>getPointerToFunction(F)</tt>" method. This method JIT compiles the
329 specified LLVM Function and returns a function pointer to the generated machine
330 code. In our case, this means that we can change the code that parses a
331 top-level expression to look like this:</p>
333 <div class="doc_code">
335 static void HandleTopLevelExpression() {
336 // Evaluate a top level expression into an anonymous function.
337 if (FunctionAST *F = ParseTopLevelExpr()) {
338 if (Function *LF = F->Codegen()) {
339 LF->dump(); // Dump the function for exposition purposes.
341 <b>// JIT the function, returning a function pointer.
342 void *FPtr = TheExecutionEngine->getPointerToFunction(LF);
344 // Cast it to the right type (takes no arguments, returns a double) so we
345 // can call it as a native function.
346 double (*FP)() = (double (*)())FPtr;
347 fprintf(stderr, "Evaluated to %f\n", FP());</b>
352 <p>Recall that we compile top-level expressions into a self-contained LLVM
353 function that takes no arguments and returns the computed double. Because the
354 LLVM JIT compiler matches the native platform ABI, this means that you can just
355 cast the result pointer to a function pointer of that type and call it directly.
356 As such, there is no difference between JIT compiled code and native machine
357 code that is statically linked into your application.</p>
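<p>The cast itself involves no LLVM machinery at all, so it can be sketched with any native function standing in for the JIT'd code (a hedged example: <tt>anonExpr</tt> and <tt>callAsDouble</tt> are illustrative names, and casting a function pointer through <tt>void*</tt> is technically implementation-defined, though fine on the platforms the JIT targets):</p>

```cpp
#include <cassert>

// Stand-in for JIT-compiled code: getPointerToFunction would hand back the
// address of machine code with exactly this signature.
static double anonExpr() { return 4.0 + 5.0; }

// The cast the tutorial performs: a raw void* becomes a typed function
// pointer, which can then be called like any statically linked function.
static double callAsDouble(void *FPtr) {
  double (*FP)() = (double (*)())FPtr;
  return FP();
}
```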
<p>With just these two changes, let's see how Kaleidoscope works now!</p>
361 <div class="doc_code">
363 ready> <b>4+5;</b>
364 define double @""() {
366 ret double 9.000000e+00
369 <em>Evaluated to 9.000000</em>
<p>Well, this looks like it is basically working.  The dump of the function
shows the "no argument function that always returns double" that we synthesize
for each top level expression that is typed in.  This demonstrates very basic
376 functionality, but can we do more?</p>
378 <div class="doc_code">
380 ready> <b>def testfunc(x y) x + y*2; </b>
381 Read function definition:
382 define double @testfunc(double %x, double %y) {
384 %multmp = mul double %y, 2.000000e+00
385 %addtmp = add double %multmp, %x
389 ready> <b>testfunc(4, 10);</b>
390 define double @""() {
392 %calltmp = call double @testfunc( double 4.000000e+00, double 1.000000e+01 )
396 <em>Evaluated to 24.000000</em>
400 <p>This illustrates that we can now call user code, but it is a bit subtle what
is going on here.  Note that we only invoke the JIT on the anonymous function
that <em>calls testfunc</em>, but we never invoked it on <em>testfunc itself</em>.</p>
405 <p>What actually happened here is that the anonymous function is
406 JIT'd when requested. When the Kaleidoscope app calls through the function
407 pointer that is returned, the anonymous function starts executing. It ends up
408 making the call to the "testfunc" function, and ends up in a stub that invokes
409 the JIT, lazily, on testfunc. Once the JIT finishes lazily compiling testfunc,
410 it returns and the code re-executes the call.</p>
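<p>The lazy-compilation dance can be modeled in a few lines of plain C++ (a hedged sketch with invented names; the real JIT patches machine-code call sites, not a function-pointer slot):</p>

```cpp
#include <cassert>

typedef double (*FnPtr)();

static int CompileCount = 0;  // how many times "compilation" ran
static FnPtr Slot;            // the call site goes through this slot

// Stand-in for the machine code the JIT eventually produces for testfunc.
static double compiledTestFunc() { return 24.0; }

// The stub the JIT initially plants: on first call it "compiles" the real
// body, patches the slot so later calls skip the stub, then re-executes.
static double stub() {
  ++CompileCount;
  Slot = compiledTestFunc;
  return Slot();
}
```

<p>After the first call through <tt>Slot</tt>, every subsequent call runs the compiled body directly, so the compile step happens exactly once.</p>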
412 <p>In summary, the JIT will lazily JIT code on the fly as it is needed. The
413 JIT provides a number of other more advanced interfaces for things like freeing
414 allocated machine code, rejit'ing functions to update them, etc. However, even
415 with this simple code, we get some surprisingly powerful capabilities - check
this out (I removed the dump of the anonymous functions, you should get the idea by now):</p>
419 <div class="doc_code">
421 ready> <b>extern sin(x);</b>
423 declare double @sin(double)
425 ready> <b>extern cos(x);</b>
427 declare double @cos(double)
429 ready> <b>sin(1.0);</b>
430 <em>Evaluated to 0.841471</em>
432 ready> <b>def foo(x) sin(x)*sin(x) + cos(x)*cos(x);</b>
433 Read function definition:
434 define double @foo(double %x) {
436 %calltmp = call double @sin( double %x )
437 %multmp = mul double %calltmp, %calltmp
438 %calltmp2 = call double @cos( double %x )
439 %multmp4 = mul double %calltmp2, %calltmp2
440 %addtmp = add double %multmp, %multmp4
444 ready> <b>foo(4.0);</b>
445 <em>Evaluated to 1.000000</em>
<p>Whoa, how does the JIT know about sin and cos?  The answer is surprisingly simple: in this
451 example, the JIT started execution of a function and got to a function call. It
452 realized that the function was not yet JIT compiled and invoked the standard set
453 of routines to resolve the function. In this case, there is no body defined
454 for the function, so the JIT ended up calling "<tt>dlsym("sin")</tt>" on the
455 Kaleidoscope process itself.
456 Since "<tt>sin</tt>" is defined within the JIT's address space, it simply
patches up calls in the module to call the libm version of <tt>sin</tt> directly.</p>
460 <p>The LLVM JIT provides a number of interfaces (look in the
461 <tt>ExecutionEngine.h</tt> file) for controlling how unknown functions get
462 resolved. It allows you to establish explicit mappings between IR objects and
463 addresses (useful for LLVM global variables that you want to map to static
464 tables, for example), allows you to dynamically decide on the fly based on the
465 function name, and even allows you to have the JIT abort itself if any lazy
466 compilation is attempted.</p>
468 <p>One interesting application of this is that we can now extend the language
469 by writing arbitrary C++ code to implement operations. For example, if we add:
472 <div class="doc_code">
474 /// putchard - putchar that takes a double and returns 0.
extern "C"
double putchard(double X) {
  putchar((char)X);
  return 0;
}
483 <p>Now we can produce simple output to the console by using things like:
484 "<tt>extern putchard(x); putchard(120);</tt>", which prints a lowercase 'x' on
485 the console (120 is the ASCII code for 'x'). Similar code could be used to
implement file I/O, console input, and many other capabilities in Kaleidoscope.</p>
489 <p>This completes the JIT and optimizer chapter of the Kaleidoscope tutorial. At
490 this point, we can compile a non-Turing-complete programming language, optimize
491 and JIT compile it in a user-driven way. Next up we'll look into <a
492 href="LangImpl5.html">extending the language with control flow constructs</a>,
493 tackling some interesting LLVM IR issues along the way.</p>
497 <!-- *********************************************************************** -->
498 <div class="doc_section"><a name="code">Full Code Listing</a></div>
499 <!-- *********************************************************************** -->
501 <div class="doc_text">
504 Here is the complete code listing for our running example, enhanced with the
505 LLVM JIT and optimizer. To build this example, use:
508 <div class="doc_code">
511 g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
517 <p>Here is the code:</p>
519 <div class="doc_code">
521 #include "llvm/DerivedTypes.h"
522 #include "llvm/ExecutionEngine/ExecutionEngine.h"
523 #include "llvm/Module.h"
524 #include "llvm/ModuleProvider.h"
525 #include "llvm/PassManager.h"
526 #include "llvm/Analysis/Verifier.h"
527 #include "llvm/Target/TargetData.h"
528 #include "llvm/Transforms/Scalar.h"
529 #include "llvm/Support/LLVMBuilder.h"
530 #include <cstdio>
531 #include <string>
533 #include <vector>
534 using namespace llvm;
536 //===----------------------------------------------------------------------===//
538 //===----------------------------------------------------------------------===//
540 // The lexer returns tokens [0-255] if it is an unknown character, otherwise one
541 // of these for known things.
546 tok_def = -2, tok_extern = -3,
549 tok_identifier = -4, tok_number = -5,
552 static std::string IdentifierStr; // Filled in if tok_identifier
553 static double NumVal; // Filled in if tok_number
555 /// gettok - Return the next token from standard input.
556 static int gettok() {
557 static int LastChar = ' ';
559 // Skip any whitespace.
560 while (isspace(LastChar))
561 LastChar = getchar();
563 if (isalpha(LastChar)) { // identifier: [a-zA-Z][a-zA-Z0-9]*
564 IdentifierStr = LastChar;
565 while (isalnum((LastChar = getchar())))
566 IdentifierStr += LastChar;
568 if (IdentifierStr == "def") return tok_def;
569 if (IdentifierStr == "extern") return tok_extern;
570 return tok_identifier;
573 if (isdigit(LastChar) || LastChar == '.') { // Number: [0-9.]+
577 LastChar = getchar();
578 } while (isdigit(LastChar) || LastChar == '.');
580 NumVal = strtod(NumStr.c_str(), 0);
584 if (LastChar == '#') {
585 // Comment until end of line.
586 do LastChar = getchar();
  while (LastChar != EOF && LastChar != '\n' && LastChar != '\r');
593 // Check for end of file. Don't eat the EOF.
597 // Otherwise, just return the character as its ascii value.
598 int ThisChar = LastChar;
599 LastChar = getchar();
603 //===----------------------------------------------------------------------===//
604 // Abstract Syntax Tree (aka Parse Tree)
605 //===----------------------------------------------------------------------===//
607 /// ExprAST - Base class for all expression nodes.
610 virtual ~ExprAST() {}
611 virtual Value *Codegen() = 0;
614 /// NumberExprAST - Expression class for numeric literals like "1.0".
615 class NumberExprAST : public ExprAST {
618 NumberExprAST(double val) : Val(val) {}
619 virtual Value *Codegen();
622 /// VariableExprAST - Expression class for referencing a variable, like "a".
623 class VariableExprAST : public ExprAST {
626 VariableExprAST(const std::string &name) : Name(name) {}
627 virtual Value *Codegen();
630 /// BinaryExprAST - Expression class for a binary operator.
631 class BinaryExprAST : public ExprAST {
635 BinaryExprAST(char op, ExprAST *lhs, ExprAST *rhs)
636 : Op(op), LHS(lhs), RHS(rhs) {}
637 virtual Value *Codegen();
640 /// CallExprAST - Expression class for function calls.
641 class CallExprAST : public ExprAST {
643 std::vector<ExprAST*> Args;
645 CallExprAST(const std::string &callee, std::vector<ExprAST*> &args)
646 : Callee(callee), Args(args) {}
647 virtual Value *Codegen();
650 /// PrototypeAST - This class represents the "prototype" for a function,
651 /// which captures its argument names as well as if it is an operator.
654 std::vector<std::string> Args;
656 PrototypeAST(const std::string &name, const std::vector<std::string> &args)
657 : Name(name), Args(args) {}
662 /// FunctionAST - This class represents a function definition itself.
667 FunctionAST(PrototypeAST *proto, ExprAST *body)
668 : Proto(proto), Body(body) {}
673 //===----------------------------------------------------------------------===//
675 //===----------------------------------------------------------------------===//
677 /// CurTok/getNextToken - Provide a simple token buffer. CurTok is the current
/// token the parser is looking at.  getNextToken reads another token from the
679 /// lexer and updates CurTok with its results.
681 static int getNextToken() {
682 return CurTok = gettok();
685 /// BinopPrecedence - This holds the precedence for each binary operator that is
687 static std::map<char, int> BinopPrecedence;
689 /// GetTokPrecedence - Get the precedence of the pending binary operator token.
690 static int GetTokPrecedence() {
691 if (!isascii(CurTok))
694 // Make sure it's a declared binop.
695 int TokPrec = BinopPrecedence[CurTok];
696 if (TokPrec <= 0) return -1;
700 /// Error* - These are little helper functions for error handling.
701 ExprAST *Error(const char *Str) { fprintf(stderr, "Error: %s\n", Str);return 0;}
702 PrototypeAST *ErrorP(const char *Str) { Error(Str); return 0; }
703 FunctionAST *ErrorF(const char *Str) { Error(Str); return 0; }
705 static ExprAST *ParseExpression();
709 /// ::= identifier '(' expression* ')'
710 static ExprAST *ParseIdentifierExpr() {
711 std::string IdName = IdentifierStr;
713 getNextToken(); // eat identifier.
715 if (CurTok != '(') // Simple variable ref.
716 return new VariableExprAST(IdName);
719 getNextToken(); // eat (
720 std::vector<ExprAST*> Args;
723 ExprAST *Arg = ParseExpression();
727 if (CurTok == ')') break;
730 return Error("Expected ')'");
738 return new CallExprAST(IdName, Args);
741 /// numberexpr ::= number
742 static ExprAST *ParseNumberExpr() {
743 ExprAST *Result = new NumberExprAST(NumVal);
744 getNextToken(); // consume the number
748 /// parenexpr ::= '(' expression ')'
749 static ExprAST *ParseParenExpr() {
750 getNextToken(); // eat (.
751 ExprAST *V = ParseExpression();
755 return Error("expected ')'");
756 getNextToken(); // eat ).
761 /// ::= identifierexpr
764 static ExprAST *ParsePrimary() {
766 default: return Error("unknown token when expecting an expression");
767 case tok_identifier: return ParseIdentifierExpr();
768 case tok_number: return ParseNumberExpr();
769 case '(': return ParseParenExpr();
774 /// ::= ('+' primary)*
775 static ExprAST *ParseBinOpRHS(int ExprPrec, ExprAST *LHS) {
776 // If this is a binop, find its precedence.
778 int TokPrec = GetTokPrecedence();
780 // If this is a binop that binds at least as tightly as the current binop,
781 // consume it, otherwise we are done.
782 if (TokPrec < ExprPrec)
785 // Okay, we know this is a binop.
787 getNextToken(); // eat binop
789 // Parse the primary expression after the binary operator.
790 ExprAST *RHS = ParsePrimary();
793 // If BinOp binds less tightly with RHS than the operator after RHS, let
794 // the pending operator take RHS as its LHS.
795 int NextPrec = GetTokPrecedence();
796 if (TokPrec < NextPrec) {
797 RHS = ParseBinOpRHS(TokPrec+1, RHS);
798 if (RHS == 0) return 0;
802 LHS = new BinaryExprAST(BinOp, LHS, RHS);
807 /// ::= primary binoprhs
809 static ExprAST *ParseExpression() {
810 ExprAST *LHS = ParsePrimary();
813 return ParseBinOpRHS(0, LHS);
817 /// ::= id '(' id* ')'
818 static PrototypeAST *ParsePrototype() {
819 if (CurTok != tok_identifier)
820 return ErrorP("Expected function name in prototype");
822 std::string FnName = IdentifierStr;
826 return ErrorP("Expected '(' in prototype");
828 std::vector<std::string> ArgNames;
829 while (getNextToken() == tok_identifier)
830 ArgNames.push_back(IdentifierStr);
832 return ErrorP("Expected ')' in prototype");
835 getNextToken(); // eat ')'.
837 return new PrototypeAST(FnName, ArgNames);
840 /// definition ::= 'def' prototype expression
841 static FunctionAST *ParseDefinition() {
842 getNextToken(); // eat def.
843 PrototypeAST *Proto = ParsePrototype();
844 if (Proto == 0) return 0;
846 if (ExprAST *E = ParseExpression())
847 return new FunctionAST(Proto, E);
851 /// toplevelexpr ::= expression
852 static FunctionAST *ParseTopLevelExpr() {
853 if (ExprAST *E = ParseExpression()) {
854 // Make an anonymous proto.
855 PrototypeAST *Proto = new PrototypeAST("", std::vector<std::string>());
856 return new FunctionAST(Proto, E);
861 /// external ::= 'extern' prototype
862 static PrototypeAST *ParseExtern() {
863 getNextToken(); // eat extern.
864 return ParsePrototype();
867 //===----------------------------------------------------------------------===//
869 //===----------------------------------------------------------------------===//
871 static Module *TheModule;
872 static LLVMFoldingBuilder Builder;
873 static std::map<std::string, Value*> NamedValues;
874 static FunctionPassManager *TheFPM;
876 Value *ErrorV(const char *Str) { Error(Str); return 0; }
878 Value *NumberExprAST::Codegen() {
879 return ConstantFP::get(Type::DoubleTy, APFloat(Val));
882 Value *VariableExprAST::Codegen() {
883 // Look this variable up in the function.
884 Value *V = NamedValues[Name];
885 return V ? V : ErrorV("Unknown variable name");
888 Value *BinaryExprAST::Codegen() {
889 Value *L = LHS->Codegen();
890 Value *R = RHS->Codegen();
891 if (L == 0 || R == 0) return 0;
894 case '+': return Builder.CreateAdd(L, R, "addtmp");
895 case '-': return Builder.CreateSub(L, R, "subtmp");
896 case '*': return Builder.CreateMul(L, R, "multmp");
898 L = Builder.CreateFCmpULT(L, R, "cmptmp");
899 // Convert bool 0/1 to double 0.0 or 1.0
900 return Builder.CreateUIToFP(L, Type::DoubleTy, "booltmp");
901 default: return ErrorV("invalid binary operator");
905 Value *CallExprAST::Codegen() {
906 // Look up the name in the global module table.
907 Function *CalleeF = TheModule->getFunction(Callee);
909 return ErrorV("Unknown function referenced");
911 // If argument mismatch error.
912 if (CalleeF->arg_size() != Args.size())
913 return ErrorV("Incorrect # arguments passed");
915 std::vector<Value*> ArgsV;
916 for (unsigned i = 0, e = Args.size(); i != e; ++i) {
917 ArgsV.push_back(Args[i]->Codegen());
918 if (ArgsV.back() == 0) return 0;
921 return Builder.CreateCall(CalleeF, ArgsV.begin(), ArgsV.end(), "calltmp");
924 Function *PrototypeAST::Codegen() {
925 // Make the function type: double(double,double) etc.
926 std::vector<const Type*> Doubles(Args.size(), Type::DoubleTy);
927 FunctionType *FT = FunctionType::get(Type::DoubleTy, Doubles, false);
929 Function *F = new Function(FT, Function::ExternalLinkage, Name, TheModule);
931 // If F conflicted, there was already something named 'Name'. If it has a
932 // body, don't allow redefinition or reextern.
933 if (F->getName() != Name) {
934 // Delete the one we just made and get the existing one.
935 F->eraseFromParent();
936 F = TheModule->getFunction(Name);
938 // If F already has a body, reject this.
939 if (!F->empty()) {
940 ErrorF("redefinition of function");
944 // If F took a different number of args, reject.
945 if (F->arg_size() != Args.size()) {
946 ErrorF("redefinition of function with different # args");
951 // Set names for all arguments.
953 for (Function::arg_iterator AI = F->arg_begin(); Idx != Args.size();
955 AI->setName(Args[Idx]);
957 // Add arguments to variable symbol table.
958 NamedValues[Args[Idx]] = AI;
964 Function *FunctionAST::Codegen() {
967 Function *TheFunction = Proto->Codegen();
968 if (TheFunction == 0)
971 // Create a new basic block to start insertion into.
972 BasicBlock *BB = new BasicBlock("entry", TheFunction);
973 Builder.SetInsertPoint(BB);
975 if (Value *RetVal = Body->Codegen()) {
976 // Finish off the function.
977 Builder.CreateRet(RetVal);
979 // Validate the generated code, checking for consistency.
980 verifyFunction(*TheFunction);
982 // Optimize the function.
983 TheFPM->run(*TheFunction);
988 // Error reading body, remove function.
989 TheFunction->eraseFromParent();
993 //===----------------------------------------------------------------------===//
994 // Top-Level parsing and JIT Driver
995 //===----------------------------------------------------------------------===//
997 static ExecutionEngine *TheExecutionEngine;
999 static void HandleDefinition() {
1000 if (FunctionAST *F = ParseDefinition()) {
1001 if (Function *LF = F->Codegen()) {
1002 fprintf(stderr, "Read function definition:");
1006 // Skip token for error recovery.
1011 static void HandleExtern() {
1012 if (PrototypeAST *P = ParseExtern()) {
1013 if (Function *F = P->Codegen()) {
1014 fprintf(stderr, "Read extern: ");
1018 // Skip token for error recovery.
1023 static void HandleTopLevelExpression() {
1024 // Evaluate a top level expression into an anonymous function.
1025 if (FunctionAST *F = ParseTopLevelExpr()) {
1026 if (Function *LF = F->Codegen()) {
1027 // JIT the function, returning a function pointer.
1028 void *FPtr = TheExecutionEngine->getPointerToFunction(LF);
1030 // Cast it to the right type (takes no arguments, returns a double) so we
1031 // can call it as a native function.
1032 double (*FP)() = (double (*)())FPtr;
1033 fprintf(stderr, "Evaluated to %f\n", FP());
1036 // Skip token for error recovery.
1041 /// top ::= definition | external | expression | ';'
1042 static void MainLoop() {
1044 fprintf(stderr, "ready> ");
1046 case tok_eof: return;
1047 case ';': getNextToken(); break; // ignore top level semicolons.
1048 case tok_def: HandleDefinition(); break;
1049 case tok_extern: HandleExtern(); break;
1050 default: HandleTopLevelExpression(); break;
1057 //===----------------------------------------------------------------------===//
1058 // "Library" functions that can be "extern'd" from user code.
1059 //===----------------------------------------------------------------------===//
1061 /// putchard - putchar that takes a double and returns 0.
extern "C"
double putchard(double X) {
  putchar((char)X);
  return 0;
}
1068 //===----------------------------------------------------------------------===//
1069 // Main driver code.
1070 //===----------------------------------------------------------------------===//
1073 // Install standard binary operators.
1074 // 1 is lowest precedence.
1075 BinopPrecedence['<'] = 10;
1076 BinopPrecedence['+'] = 20;
1077 BinopPrecedence['-'] = 20;
1078 BinopPrecedence['*'] = 40; // highest.
1080 // Prime the first token.
1081 fprintf(stderr, "ready> ");
1084 // Make the module, which holds all the code.
1085 TheModule = new Module("my cool jit");
1088 TheExecutionEngine = ExecutionEngine::create(TheModule);
1091 ExistingModuleProvider OurModuleProvider(TheModule);
1092 FunctionPassManager OurFPM(&OurModuleProvider);
1094 // Set up the optimizer pipeline. Start with registering info about how the
1095 // target lays out data structures.
1096 OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
1097 // Do simple "peephole" optimizations and bit-twiddling optzns.
1098 OurFPM.add(createInstructionCombiningPass());
1099 // Reassociate expressions.
1100 OurFPM.add(createReassociatePass());
1101 // Eliminate Common SubExpressions.
1102 OurFPM.add(createGVNPass());
1103 // Simplify the control flow graph (deleting unreachable blocks, etc).
1104 OurFPM.add(createCFGSimplificationPass());
1106 // Set the global so the code gen can use this.
1107 TheFPM = &OurFPM;
1109 // Run the main "interpreter loop" now.
1113 } // Free module provider and pass manager.
1116 // Print out all of the generated code.
1117 TheModule->dump();
1125 <!-- *********************************************************************** -->
1128 <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
1129 src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
1130 <a href="http://validator.w3.org/check/referer"><img
1131 src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>
1133 <a href="mailto:sabre@nondot.org">Chris Lattner</a><br>
1134 <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
1135 Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $