Read function definition:
define double @test(double %x) {
entry:
- %addtmp = add double 3.000000e+00, %x
+ %addtmp = fadd double 3.000000e+00, %x
ret double %addtmp
}
</pre>
Read function definition:
define double @test(double %x) {
entry:
- %addtmp = add double 2.000000e+00, 1.000000e+00
- %addtmp1 = add double %addtmp, %x
+ %addtmp = fadd double 2.000000e+00, 1.000000e+00
+ %addtmp1 = fadd double %addtmp, %x
ret double %addtmp1
}
</pre>
ready> Read function definition:
define double @test(double %x) {
entry:
- %addtmp = add double 3.000000e+00, %x
- %addtmp1 = add double %x, 3.000000e+00
- %multmp = mul double %addtmp, %addtmp1
+ %addtmp = fadd double 3.000000e+00, %x
+ %addtmp1 = fadd double %x, 3.000000e+00
+ %multmp = fmul double %addtmp, %addtmp1
ret double %multmp
}
</pre>
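+<p>(A quick aside on the instruction changes above: LLVM split floating-point
+arithmetic out of <tt>add</tt>/<tt>mul</tt> into dedicated
+<tt>fadd</tt>/<tt>fmul</tt> instructions, so Kaleidoscope's binary-operator
+codegen now emits, roughly:)</p>
+
+<div class="doc_code">
+<pre>
+case '+': return Builder.CreateFAdd(L, R, "addtmp");
+case '*': return Builder.CreateFMul(L, R, "multmp");
+</pre>
+</div>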
<div class="doc_code">
<pre>
- ExistingModuleProvider *OurModuleProvider =
- new ExistingModuleProvider(TheModule);
-
- FunctionPassManager OurFPM(OurModuleProvider);
+ FunctionPassManager OurFPM(TheModule);
// Set up the optimizer pipeline. Start with registering info about how the
// target lays out data structures.
</pre>
</div>
-<p>This code defines two objects, an <tt>ExistingModuleProvider</tt> and a
-<tt>FunctionPassManager</tt>. The former is basically a wrapper around our
-<tt>Module</tt> that the PassManager requires. It provides certain flexibility
-that we're not going to take advantage of here, so I won't dive into any details
-about it.</p>
-
-<p>The meat of the matter here, is the definition of "<tt>OurFPM</tt>". It
-requires a pointer to the <tt>Module</tt> (through the <tt>ModuleProvider</tt>)
-to construct itself. Once it is set up, we use a series of "add" calls to add
-a bunch of LLVM passes. The first pass is basically boilerplate, it adds a pass
-so that later optimizations know how the data structures in the program are
-laid out. The "<tt>TheExecutionEngine</tt>" variable is related to the JIT,
-which we will get to in the next section.</p>
+<p>This code defines a <tt>FunctionPassManager</tt>, "<tt>OurFPM</tt>". It
+requires a pointer to the <tt>Module</tt> to construct itself. Once it is set
+up, we use a series of "add" calls to add a bunch of LLVM passes. The first
+pass is basically boilerplate: it adds a pass so that later optimizations know
+how the data structures in the program are laid out. The
+"<tt>TheExecutionEngine</tt>" variable is related to the JIT, which we will get
+to in the next section.</p>
<p>In this case, we choose to add 4 optimization passes. The passes we chose
here are a pretty standard set of "cleanup" optimizations that are useful for
a wide variety of code. I won't delve into what they do but, believe me, they
are a good starting place :).</p>
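+<p>For reference, here is the rest of the pipeline setup from the tutorial,
+sketched (the pass creation functions live
+in <tt>llvm/Transforms/Scalar.h</tt>):</p>
+
+<div class="doc_code">
+<pre>
+// Set up the optimizer pipeline. Start with registering info about how the
+// target lays out data structures.
+OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
+// Do simple "peephole" optimizations and bit-twiddling optzns.
+OurFPM.add(createInstructionCombiningPass());
+// Reassociate expressions.
+OurFPM.add(createReassociatePass());
+// Eliminate Common SubExpressions.
+OurFPM.add(createGVNPass());
+// Simplify the control flow graph (deleting unreachable blocks, etc).
+OurFPM.add(createCFGSimplificationPass());
+
+OurFPM.doInitialization();
+</pre>
+</div>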
ready> Read function definition:
define double @test(double %x) {
entry:
- %addtmp = add double %x, 3.000000e+00
- %multmp = mul double %addtmp, %addtmp
+ %addtmp = fadd double %x, 3.000000e+00
+ %multmp = fmul double %addtmp, %addtmp
ret double %multmp
}
</pre>
...
int main() {
..
- <b>// Create the JIT. This takes ownership of the module and module provider.
- TheExecutionEngine = EngineBuilder(OurModuleProvider).create();</b>
+ <b>// Create the JIT. This takes ownership of the module.
+ TheExecutionEngine = EngineBuilder(TheModule).create();</b>
..
}
</pre>
Read function definition:
define double @testfunc(double %x, double %y) {
entry:
- %multmp = mul double %y, 2.000000e+00
- %addtmp = add double %multmp, %x
+ %multmp = fmul double %y, 2.000000e+00
+ %addtmp = fadd double %multmp, %x
ret double %addtmp
}
</pre>
</div>
-<p>This illustrates that we can now call user code, but there is something a bit subtle
-going on here. Note that we only invoke the JIT on the anonymous functions
-that <em>call testfunc</em>, but we never invoked it on <em>testfunc
-</em>itself.</p>
-
-<p>What actually happened here is that the anonymous function was
-JIT'd when requested. When the Kaleidoscope app calls through the function
-pointer that is returned, the anonymous function starts executing. It ends up
-making the call to the "testfunc" function, and ends up in a stub that invokes
-the JIT, lazily, on testfunc. Once the JIT finishes lazily compiling testfunc,
-it returns and the code re-executes the call.</p>
-
-<p>In summary, the JIT will lazily JIT code, on the fly, as it is needed. The
-JIT provides a number of other more advanced interfaces for things like freeing
-allocated machine code, rejit'ing functions to update them, etc. However, even
-with this simple code, we get some surprisingly powerful capabilities - check
-this out (I removed the dump of the anonymous functions, you should get the idea
-by now :) :</p>
+<p>This illustrates that we can now call user code, but there is something a bit
+subtle going on here. Note that we only invoke the JIT on the anonymous
+functions that <em>call testfunc</em>, but we never invoke it
+on <em>testfunc</em> itself. What actually happened here is that the JIT
+scanned for all non-JIT'd functions transitively called from the anonymous
+function and compiled all of them before returning
+from <tt>getPointerToFunction()</tt>.</p>
+
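+<p>Concretely, the tutorial's top-level handler grabs a pointer to the
+anonymous function and calls through it roughly like this
+(<tt>LF</tt> is the <tt>Function*</tt> produced by codegen):</p>
+
+<div class="doc_code">
+<pre>
+// JIT the anonymous function, returning a raw pointer to its machine code.
+void *FPtr = TheExecutionEngine->getPointerToFunction(LF);
+
+// Cast it to the right prototype (takes no arguments, returns a double)
+// so we can call it natively.
+double (*FP)() = (double (*)())(intptr_t)FPtr;
+fprintf(stderr, "Evaluated to %f\n", FP());
+</pre>
+</div>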
+<p>The JIT provides a number of other more advanced interfaces for things like
+freeing allocated machine code, rejit'ing functions to update them, etc.; a
+rough sketch of those calls follows below.</p>
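+<p>Assuming <tt>F</tt> is a <tt>Function*</tt> the JIT has already compiled,
+those interfaces look roughly like this (a sketch, not from the tutorial):</p>
+
+<div class="doc_code">
+<pre>
+// Throw away F's machine code; the IR stays in the module.
+TheExecutionEngine->freeMachineCodeForFunction(F);
+
+// After updating F's IR, compile it again and relink callers to the new code.
+void *NewAddr = TheExecutionEngine->recompileAndRelinkFunction(F);
+</pre>
+</div>
+
+<p>However, even with this simple code, we get some surprisingly powerful
+capabilities - check this out (I removed the dump of the anonymous functions;
+you should get the idea by now :) :</p>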
<div class="doc_code">
<pre>
define double @foo(double %x) {
entry:
%calltmp = call double @sin( double %x )
- %multmp = mul double %calltmp, %calltmp
+ %multmp = fmul double %calltmp, %calltmp
%calltmp2 = call double @cos( double %x )
- %multmp4 = mul double %calltmp2, %calltmp2
- %addtmp = add double %multmp, %multmp4
+ %multmp4 = fmul double %calltmp2, %calltmp2
+ %addtmp = fadd double %multmp, %multmp4
ret double %addtmp
}
resolved. It allows you to establish explicit mappings between IR objects and
addresses (useful for LLVM global variables that you want to map to static
tables, for example), allows you to dynamically decide on the fly based on the
-function name, and even allows you to have the JIT abort itself if any lazy
-compilation is attempted.</p>
+function name, and even allows you to have the JIT compile functions lazily the
+first time they're called.</p>
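+<p>A couple of those hooks, sketched (<tt>GV</tt> and <tt>StaticTable</tt> are
+hypothetical names for this example; the calls are members
+of <tt>ExecutionEngine</tt>):</p>
+
+<div class="doc_code">
+<pre>
+// Pin an IR global to a fixed address, e.g. a static table in the host app.
+TheExecutionEngine->addGlobalMapping(GV, &StaticTable);
+
+// Decide at JIT time, by name, where an unresolved function should point.
+static void *MyFunctionCreator(const std::string &Name) {
+  if (Name == "mysin") return (void*)(double(*)(double))sin;
+  return 0;  // fall back to the normal resolution rules
+}
+...
+TheExecutionEngine->InstallLazyFunctionCreator(MyFunctionCreator);
+</pre>
+</div>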
<p>One interesting application of this is that we can now extend the language
by writing arbitrary C++ code to implement operations. For example, if we add:
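+<div class="doc_code">
+<pre>
+/// putchard - putchar that takes a double and returns 0.
+extern "C"
+double putchard(double X) {
+  putchar((char)X);
+  return 0;
+}
+</pre>
+</div>
+
+<p>(This is the tutorial's example of such an operation: the function is
+declared <tt>extern "C"</tt> so the JIT can resolve calls
+to <tt>putchard</tt> against the host process by name.)</p>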
<div class="doc_code">
<pre>
# Compile
- g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit interpreter native` -O3 -o toy
+ g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
# Run
./toy
</pre>
<pre>
#include "llvm/DerivedTypes.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
-#include "llvm/ExecutionEngine/Interpreter.h"
#include "llvm/ExecutionEngine/JIT.h"
#include "llvm/LLVMContext.h"
#include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
#include "llvm/PassManager.h"
#include "llvm/Analysis/Verifier.h"
#include "llvm/Target/TargetData.h"
// Make the module, which holds all the code.
TheModule = new Module("my cool jit", Context);
- ExistingModuleProvider *OurModuleProvider =
- new ExistingModuleProvider(TheModule);
-
- // Create the JIT. This takes ownership of the module and module provider.
- TheExecutionEngine = EngineBuilder(OurModuleProvider).create();
+ // Create the JIT. This takes ownership of the module.
+ std::string ErrStr;
+ TheExecutionEngine = EngineBuilder(TheModule).setErrorStr(&ErrStr).create();
+ if (!TheExecutionEngine) {
+ fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
+ exit(1);
+ }
- FunctionPassManager OurFPM(OurModuleProvider);
+ FunctionPassManager OurFPM(TheModule);
// Set up the optimizer pipeline. Start with registering info about how the
// target lays out data structures.
<a href="mailto:sabre@nondot.org">Chris Lattner</a><br>
<a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
- Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+ Last modified: $Date$
</address>
</body>
</html>