================================
Frequently Asked Questions (FAQ)
================================

Does the University of Illinois Open Source License really qualify as an "open source" license?
--------------------------------------------------------------------------------------------------
Yes, the license is `certified
<http://www.opensource.org/licenses/UoI-NCSA.php>`_ by the Open Source
Initiative (OSI).

Can I modify LLVM source code and redistribute the modified source?
---------------------------------------------------------------------
Yes. The modified source distribution must retain the copyright notice and
follow the three bulleted conditions listed in the `LLVM license
<http://llvm.org/svn/llvm-project/llvm/trunk/LICENSE.TXT>`_.

Can I modify the LLVM source code and redistribute binaries or other tools based on it, without redistributing the source?
-----------------------------------------------------------------------------------------------------------------------------
Yes. This is why we distribute LLVM under a less restrictive license than GPL,
as explained in the first question above.

In what language is LLVM written?
---------------------------------
All of the LLVM tools and libraries are written in C++ with extensive use of
the STL.

How portable is the LLVM source code?
-------------------------------------
The LLVM source code should be portable to most modern Unix-like operating
systems. Most of the code is written in standard C++ with operating system
services abstracted to a support library. The tools required to build and
test LLVM have been ported to a plethora of platforms.

Some porting problems may exist in the following areas:

* The autoconf/makefile build system relies heavily on UNIX shell tools,
  like the Bourne Shell and sed. Porting to systems without these tools
  (MacOS 9, Plan 9) will require more effort.

What API do I use to store a value to one of the virtual registers in LLVM IR's SSA representation?
------------------------------------------------------------------------------------------------------
In short: you can't. It's actually kind of a silly question once you grok
what's going on. Basically, in code like:

.. code-block:: llvm

    %result = add i32 %foo, %bar

``%result`` is just a name given to the ``Value`` of the ``add``
instruction. In other words, ``%result`` *is* the add instruction. The
"assignment" doesn't explicitly "store" anything to any "virtual register";
the "``=``" is more like the mathematical sense of equality.

Longer explanation: In order to generate a textual representation of the
IR, some kind of name has to be given to each instruction so that other
instructions can textually reference it. However, the isomorphic in-memory
representation that you manipulate from C++ has no such restriction, since
instructions can simply keep pointers to any other ``Value``\ s that they
reference. In fact, the names of dummy numbered temporaries like ``%1`` are
not explicitly represented in the in-memory representation at all (see
``Value::getName()``).

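To make this concrete, below is a minimal sketch using the C++ ``IRBuilder``
API (the function and variable names are illustrative). Nothing is ever
"stored into" ``%result``: the pointer returned by ``CreateAdd`` *is* the add
instruction, and later instructions simply take that pointer as an operand.

.. code-block:: c++

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    int main() {
      LLVMContext Ctx;
      Module M("example", Ctx);

      // Create "define i32 @f(i32 %foo, i32 %bar)".
      Type *I32 = Type::getInt32Ty(Ctx);
      Function *F = Function::Create(FunctionType::get(I32, {I32, I32}, false),
                                     Function::ExternalLinkage, "f", &M);
      IRBuilder<> B(BasicBlock::Create(Ctx, "entry", F));

      Function::arg_iterator AI = F->arg_begin();
      Value *Foo = &*AI++;                              // %foo
      Value *Bar = &*AI;                                // %bar

      // This *is* "%result = add i32 %foo, %bar"; there is no separate store.
      Value *Result = B.CreateAdd(Foo, Bar, "result");

      // "Reading %result" is just passing the pointer to another instruction.
      B.CreateRet(Result);

      M.print(outs(), nullptr);  // names like %result only matter for printing
      return 0;
    }
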
When I run configure, it finds the wrong C compiler.
----------------------------------------------------
The ``configure`` script first attempts to locate ``gcc`` and then ``cc``,
unless it finds compiler paths set in the ``CC`` and ``CXX`` environment
variables for the C and C++ compiler, respectively.

If ``configure`` finds the wrong compiler, either adjust your ``PATH``
environment variable or set ``CC`` and ``CXX`` explicitly.

The ``configure`` script finds the right C compiler, but it uses the LLVM tools from a previous build. What do I do?
------------------------------------------------------------------------------------------------------------------------
The ``configure`` script uses the ``PATH`` to find executables, so if it's
grabbing the wrong linker/assembler/etc., there are two ways to fix it:

#. Adjust your ``PATH`` environment variable so that the correct program
   appears first in the ``PATH``. This may work, but may not be convenient
   when you want them *first* in your path for other work.

#. Run ``configure`` with an alternative ``PATH`` that is correct. In a
   Bourne-compatible shell, the syntax would be:

   .. code-block:: console

      % PATH=[the path without the bad program] $LLVM_SRC_DIR/configure ...

   This is still somewhat inconvenient, but it allows ``configure`` to do its
   work without having to adjust your ``PATH`` permanently.

When creating a dynamic library, I get a strange GLIBC error.
-------------------------------------------------------------
Under some operating systems (e.g. Linux), libtool does not work correctly if
GCC was compiled with the ``--disable-shared`` option. To work around this,
install your own version of GCC that has shared libraries enabled by default.

I've updated my source tree from Subversion, and now my build is trying to use a file/directory that doesn't exist.
----------------------------------------------------------------------------------------------------------------------
You need to re-run ``configure`` in your object directory. When new Makefiles
are added to the source tree, they have to be copied over to the object tree
in order to be used by the build.

I've modified a Makefile in my source tree, but my build tree keeps using the old version. What do I do?
------------------------------------------------------------------------------------------------------------
If the Makefile already exists in your object tree, you can just run the
following command in the top level directory of your object tree:

.. code-block:: console

   % ./config.status <relative path to Makefile>

If the Makefile is new, you will have to modify the ``configure`` script to
copy it over.

I've upgraded to a new version of LLVM, and I get strange build errors.
-----------------------------------------------------------------------
Sometimes, changes to the LLVM source code alter how the build system works.
Changes in ``libtool``, ``autoconf``, or header file dependencies are
especially prone to this sort of problem.

The best thing to try is to remove the old files and re-build. In most cases,
this takes care of the problem. To do this, just type ``make clean`` and then
``make`` in the directory that fails to build.

I've built LLVM and am testing it, but the tests freeze.
--------------------------------------------------------
This is most likely occurring because you built a profile or release
(optimized) build of LLVM and have not specified the same information on the
``gmake`` command line.

For example, if you built LLVM with the command:

.. code-block:: console

   % gmake ENABLE_PROFILING=1

...then you must run the tests with the following commands:

.. code-block:: console

   % cd llvm/test
   % gmake ENABLE_PROFILING=1

Why do test results differ when I perform different types of builds?
--------------------------------------------------------------------
The LLVM test suite is dependent upon several features of the LLVM tools and
libraries.

First, the debugging assertions in code are not enabled in optimized or
profiling builds. Hence, tests that used to fail may pass.

Second, some tests may rely upon debugging options or behavior that is only
available in the debug build. These tests will fail in an optimized or
profile build.

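As an illustration (ordinary C++, not LLVM-specific), a debug-only check like
the following is compiled away entirely when ``NDEBUG`` is defined, as it
typically is for optimized and profile builds, so a test that depends on the
check aborting behaves differently between build types:

.. code-block:: c++

    #include <cassert>

    int safe_divide(int a, int b) {
      // Active only in assertion-enabled (debug) builds; when NDEBUG is
      // defined this line compiles to nothing and the bad input is ignored.
      assert(b != 0 && "division by zero");
      return b != 0 ? a / b : 0;
    }
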
Compiling LLVM with GCC 3.3.2 fails, what should I do?
------------------------------------------------------
This is `a bug in GCC <http://gcc.gnu.org/bugzilla/show_bug.cgi?id=13392>`_,
and affects projects other than LLVM. Try upgrading or downgrading your GCC.

After Subversion update, rebuilding gives the error "No rule to make target".
-----------------------------------------------------------------------------
If the error is of the form:

.. code-block:: console

   gmake[2]: *** No rule to make target `/path/to/somefile',
               needed by `/path/to/another/file.d'.
   Stop.

This may occur anytime files are moved within the Subversion repository or
removed entirely. In this case, the best solution is to erase all ``.d``
files, which list dependencies for source files, and rebuild:

.. code-block:: console

   % rm -f `find . -name \*\.d`
   % gmake

In other cases, it may be necessary to run ``make clean`` before rebuilding.

What source languages are supported?
------------------------------------
LLVM currently has full support for C and C++ source languages. These are
available through both `Clang <http://clang.llvm.org/>`_ and `DragonEgg
<http://dragonegg.llvm.org/>`_.

The PyPy developers are working on integrating LLVM into the PyPy backend so
that PyPy can translate to LLVM.

I'd like to write a self-hosting LLVM compiler. How should I interface with the LLVM middle-end optimizers and back-end code generators?
--------------------------------------------------------------------------------------------------------------------------------------------
Your compiler front-end will communicate with LLVM by creating a module in the
LLVM intermediate representation (IR) format. Assuming you want to write your
language's compiler in the language itself (rather than C++), there are three
major ways to tackle generating LLVM IR from a front-end:

1. **Call into the LLVM libraries using your language's FFI (foreign
   function interface).**

   * *for:* best tracks changes to the LLVM IR, .ll syntax, and .bc format

   * *for:* enables running LLVM optimization passes without an emit/parse
     overhead

   * *for:* adapts well to a JIT context

   * *against:* lots of ugly glue code to write

2. **Emit LLVM assembly from your compiler's native language.**

   * *for:* very straightforward to get started

   * *against:* the .ll parser is slower than the bitcode reader when
     interfacing to the middle end

   * *against:* it may be harder to track changes to the IR

3. **Emit LLVM bitcode from your compiler's native language.**

   * *for:* can use the more-efficient bitcode reader when interfacing to the
     middle end

   * *against:* you'll have to re-engineer the LLVM IR object model and bitcode
     writer in your language

   * *against:* it may be harder to track changes to the IR

If you go with the first option, the C bindings in ``include/llvm-c`` should
help a lot, since most languages have strong support for interfacing with C.
The most common hurdle with calling C from managed code is interfacing with
the garbage collector. The C interface was designed to require very little
memory management, and so is straightforward in this regard.

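For a sense of what the first option looks like in practice, here is a minimal
sketch written directly against the C bindings in ``include/llvm-c`` (the same
entry points a foreign function interface would wrap); it builds a trivial
function that adds its two ``i32`` arguments:

.. code-block:: c++

    #include <llvm-c/Core.h>

    int main() {
      LLVMModuleRef Mod = LLVMModuleCreateWithName("frontend_output");

      // Create "define i32 @sum(i32, i32)" and give it a body.
      LLVMTypeRef I32 = LLVMInt32Type();
      LLVMTypeRef Params[] = {I32, I32};
      LLVMValueRef Sum =
          LLVMAddFunction(Mod, "sum", LLVMFunctionType(I32, Params, 2, 0));

      LLVMBuilderRef B = LLVMCreateBuilder();
      LLVMPositionBuilderAtEnd(B, LLVMAppendBasicBlock(Sum, "entry"));
      LLVMValueRef Add =
          LLVMBuildAdd(B, LLVMGetParam(Sum, 0), LLVMGetParam(Sum, 1), "tmp");
      LLVMBuildRet(B, Add);

      LLVMDumpModule(Mod);       // print the generated IR
      LLVMDisposeBuilder(B);
      LLVMDisposeModule(Mod);
      return 0;
    }
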
What support is there for higher level source language constructs for building a compiler?
---------------------------------------------------------------------------------------------
Currently, there isn't much. LLVM supports an intermediate representation
which is useful for code representation but does not support the high level
(abstract syntax tree) representation needed by most compilers. There are no
facilities for lexical or semantic analysis.

I don't understand the ``GetElementPtr`` instruction. Help!
-----------------------------------------------------------
See `The Often Misunderstood GEP Instruction <GetElementPtr.html>`_.

Using the C and C++ Front Ends
==============================

Can I compile C or C++ code to platform-independent LLVM bitcode?
-----------------------------------------------------------------
No. C and C++ are inherently platform-dependent languages. The most obvious
example of this is the preprocessor. A very common way that C code is made
portable is by using the preprocessor to include platform-specific code. In
practice, information about other platforms is lost after preprocessing, so
the result is inherently dependent on the platform that the preprocessing was
targeting.

Another example is ``sizeof``. It's common for ``sizeof(long)`` to vary
between platforms. In most C front-ends, ``sizeof`` is expanded to a
constant immediately, thus hard-wiring a platform-specific detail.

Also, since many platforms define their ABIs in terms of C, and since LLVM is
lower-level than C, front-ends currently must emit platform-specific IR in
order to have the result conform to the platform ABI.

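As a small, hypothetical illustration of both points, consider compiling the
following file: after preprocessing only one of the two string definitions
exists, and ``sizeof(long)`` has already been folded to a constant for the
target being compiled for, so the resulting bitcode is tied to that platform.

.. code-block:: c++

    #include <cstdio>

    #ifdef _WIN32
    const char *path_separator = "\\";   // only one branch survives preprocessing
    #else
    const char *path_separator = "/";
    #endif

    int main() {
      // sizeof(long) becomes a target-specific constant in the emitted IR
      // (for example, 4 on 64-bit Windows, 8 on most 64-bit Unix systems).
      std::printf("sizeof(long) = %zu, separator = %s\n",
                  sizeof(long), path_separator);
      return 0;
    }
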
Questions about code generated by the demo page
================================================

What is this ``llvm.global_ctors`` and ``_GLOBAL__I_a...`` stuff that happens when I ``#include <iostream>``?
-----------------------------------------------------------------------------------------------------------------
If you ``#include`` the ``<iostream>`` header into a C++ translation unit,
the file will probably use the ``std::cin``/``std::cout``/... global objects.
However, C++ does not guarantee an order of initialization between static
objects in different translation units, so if a static ctor/dtor in your .cpp
file used ``std::cout``, for example, the object would not necessarily be
automatically initialized before your use.

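For example, a translation unit like the following (the class name is made up)
relies on ``std::cout`` being usable during static initialization, before
``main()`` ever runs:

.. code-block:: c++

    #include <iostream>

    struct Logger {
      Logger() { std::cout << "starting up\n"; }  // runs during static init
    };

    static Logger TheLogger;  // a static constructor in this .cpp file

    int main() { return 0; }
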
To make ``std::cout`` and friends work correctly in these scenarios, the STL
that we use declares a static object that gets created in every translation
unit that includes ``<iostream>``. This object has a static constructor
and destructor that initialize and destroy the global iostream objects
before they could possibly be used in the file. The code that you see in the
``.ll`` file corresponds to the constructor and destructor registration code.

If you would like to make it easier to *understand* the LLVM code generated
by the compiler in the demo page, consider using ``printf()`` instead of
``iostream``\ s to print values.

Where did all of my code go??
-----------------------------
If you are using the LLVM demo page, you may often wonder what happened to
all of the code that you typed in. Remember that the demo script is running
the code through the LLVM optimizers, so if your code doesn't actually do
anything useful, it might all be deleted.

To prevent this, make sure that the code is actually needed. For example, if
you are computing some expression, return the value from the function instead
of leaving it in a local variable. If you really want to constrain the
optimizer, you can read from and assign to ``volatile`` global variables.

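A small, hypothetical example of the difference: the first function below can
be optimized away to nothing, while the other two keep the computation alive
by making its result observable.

.. code-block:: c++

    volatile int Sink;   // reads and writes of this cannot be optimized away

    void compute_and_drop(int N) {
      int Total = 0;
      for (int I = 0; I < N; ++I)
        Total += I;      // never observed: the whole loop is dead code
    }

    int compute_and_return(int N) {
      int Total = 0;
      for (int I = 0; I < N; ++I)
        Total += I;
      return Total;      // the result escapes, so the computation survives
    }

    void compute_and_store(int N) {
      int Total = 0;
      for (int I = 0; I < N; ++I)
        Total += I;
      Sink = Total;      // a volatile store also keeps the computation
    }
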
What is this "``undef``" thing that shows up in my code?
--------------------------------------------------------
``undef`` is the LLVM way of representing a value that is not defined. You
can get these if you do not initialize a variable before you use it. For
example, the C function:

.. code-block:: c

   int X() { int i; return i; }

is compiled to "``ret i32 undef``" because "``i``" never has a value specified
for it.

Why does instcombine + simplifycfg turn a call to a function with a mismatched calling convention into "unreachable"? Why not make the verifier reject it?
------------------------------------------------------------------------------------------------------------------------------------------------------------
This is a common problem run into by authors of front-ends that are using
custom calling conventions: you need to make sure to set the right calling
convention on both the function and on each call to the function. For
example, this code:

.. code-block:: llvm

   define fastcc void @foo() {
       ret void
   }
   define void @bar() {
       call void @foo()
       ret void
   }

is optimized to:

.. code-block:: llvm

   define fastcc void @foo() {
       ret void
   }
   define void @bar() {
       unreachable
   }

... with "``opt -instcombine -simplifycfg``". This often bites people because
"all their code disappears". Setting the calling convention on the caller and
callee is required for indirect calls to work, so people often ask why not
make the verifier reject this sort of thing.

The answer is that this code has undefined behavior, but it is not illegal.
If we made it illegal, then every transformation that could potentially create
this would have to ensure that it doesn't, and there is valid code that can
create this sort of construct (in dead code). The sorts of things that can
cause this to happen are fairly contrived, but we still need to accept them.
For example:

.. code-block:: llvm

   define fastcc void @foo() {
       ret void
   }
   define internal void @bar(void()* %FP, i1 %cond) {
       br i1 %cond, label %T, label %F
   T:
       call void %FP()
       ret void
   F:
       call fastcc void %FP()
       ret void
   }
   define void @test() {
       %X = or i1 false, false
       call void @bar(void()* @foo, i1 %X)
       ret void
   }

In this example, "test" always passes ``@foo``/``false`` into ``bar``, which
ensures that it is dynamically called with the right calling convention (thus,
the code is perfectly well defined). If you run this through the inliner, you
get this (the explicit "or" is there so that the inliner doesn't dead-code
eliminate a bunch of stuff):

.. code-block:: llvm

   define fastcc void @foo() {
       ret void
   }
   define void @test() {
       %X = or i1 false, false
       br i1 %X, label %T.i, label %F.i
   T.i:
       call void @foo()
       br label %bar.exit
   F.i:
       call fastcc void @foo()
       br label %bar.exit
   bar.exit:
       ret void
   }

Here you can see that the inlining pass made an undefined call to ``@foo``
with the wrong calling convention. We really don't want to make the inliner
have to know about this sort of thing, so it needs to be valid code. In this
case, dead code elimination can trivially remove the undefined code. However,
if ``%X`` was an input argument to ``@test``, the inliner would produce this:

.. code-block:: llvm

   define fastcc void @foo() {
       ret void
   }
   define void @test(i1 %X) {
       br i1 %X, label %T.i, label %F.i
   T.i:
       call void @foo()
       br label %bar.exit
   F.i:
       call fastcc void @foo()
       br label %bar.exit
   bar.exit:
       ret void
   }

The interesting thing about this is that ``%X`` *must* be false for the
code to be well-defined, but no amount of dead code elimination will be able
to delete the broken call as unreachable. However, since
``instcombine``/``simplifycfg`` turns the undefined call into unreachable, we
end up with a branch on a condition that goes to unreachable: a branch to
unreachable can never happen, so "``-inline -instcombine -simplifycfg``" is
able to produce:

.. code-block:: llvm

   define fastcc void @foo() {
       ret void
   }
   define void @test(i1 %X) {
   F.i:
       call fastcc void @foo()
       ret void
   }

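For front-end authors running into this in practice, here is a minimal,
hypothetical sketch (using the C++ API) of keeping the two in sync: set the
convention once on the ``Function`` and again on every ``CallInst`` that
calls it.

.. code-block:: c++

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    int main() {
      LLVMContext Ctx;
      Module M("cc_example", Ctx);
      IRBuilder<> B(Ctx);
      FunctionType *VoidFnTy = FunctionType::get(B.getVoidTy(), false);

      // define internal fastcc void @foo()
      Function *Foo =
          Function::Create(VoidFnTy, Function::InternalLinkage, "foo", &M);
      Foo->setCallingConv(CallingConv::Fast);
      B.SetInsertPoint(BasicBlock::Create(Ctx, "entry", Foo));
      B.CreateRetVoid();

      // define void @bar() { call fastcc void @foo() ... }
      Function *Bar =
          Function::Create(VoidFnTy, Function::ExternalLinkage, "bar", &M);
      B.SetInsertPoint(BasicBlock::Create(Ctx, "entry", Bar));
      CallInst *Call = B.CreateCall(Foo);
      Call->setCallingConv(CallingConv::Fast);  // must match @foo's convention
      B.CreateRetVoid();

      M.print(outs(), nullptr);
      return 0;
    }
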