@@ -66,24 +68,20 @@ current one. To see the release notes for a specific release, please see the
Almost dead code.
include/llvm/Analysis/LiveValues.h => Dan
lib/Transforms/IPO/MergeFunctions.cpp => consider for 2.8.
- llvm/Analysis/PointerTracking.h => Edwin wants this, consider for 2.8.
- ABCD, GEPSplitterPass
- MSIL backend?
- lib/Transforms/Utils/SSI.cpp -> ABCD depends on it.
+ GEPSplitterPass
-->
-
-
-
+
+
+
@@ -116,44 +114,32 @@ through expressive diagnostics, a high level of conformance to language
standards, fast compilation, and low memory use. Like LLVM, Clang provides a
modular, library-based architecture that makes it suitable for creating or
integrating with other development tools. Clang is considered a
-production-quality compiler for C and Objective-C on x86 (32- and 64-bit).
-
-
In the LLVM 2.7 time-frame, the Clang team has made many improvements:
-
-
-
-
C++ Support: Clang is now capable of self-hosting! While still
-alpha-quality, Clang's C++ support has matured enough to build LLVM and Clang,
-and C++ is now enabled by default. See the Clang C++ compatibility
-page for common C++ migration issues.
-
-
Objective-C: Clang now includes experimental support for an updated
-Objective-C ABI on non-Darwin platforms. This includes support for non-fragile
-instance variables and accelerated proxies, as well as greater potential for
-future optimisations. The new ABI is used when compiling with the
--fobjc-nonfragile-abi and -fgnu-runtime options. Code compiled with these
-options may be mixed with code compiled with GCC or clang using the old GNU ABI,
-but requires the libobjc2 runtime from the GNUstep project.
-
-
New warnings: Clang contains a number of new warnings, including
-control-flow warnings (unreachable code, missing return statements in a
-non-void function, etc.), sign-comparison warnings, and improved
-format-string warnings.
-
-
CIndex API and Python bindings: Clang now includes a C API as part of the
-CIndex library. Although we may make some changes to the API in the future, it
-is intended to be stable and has been designed for use by external projects. See
-the Clang
-doxygen CIndex
-documentation for more details. The CIndex API also includes a preliminary
-set of Python bindings.
-
-
ARM Support: Clang now has ABI support for both the Darwin and Linux ARM
-ABIs. Coupled with many improvements to the LLVM ARM backend, Clang is now
-suitable for use as a beta quality ARM compiler.
-
-
+production-quality compiler for C, Objective-C, C++ and Objective-C++ on x86
+(32- and 64-bit), and for darwin-arm targets.
+
+
In the LLVM 2.8 time-frame, the Clang team has made many improvements:
+
+
+
Clang C++ is now feature-complete with respect to the ISO C++ 1998 and 2003 standards.
+
Added support for Objective-C++.
+
Clang now uses LLVM-MC to directly generate object code and to parse inline assembly (on Darwin).
+
Introduced many new warnings, including -Wmissing-field-initializers, -Wshadow, -Wno-protocol, -Wtautological-compare, -Wstrict-selector-match, -Wcast-align, -Wunused improvements, and greatly improved format-string checking.
+
Introduced the "libclang" library, a C interface to Clang intended to support IDE clients.
+
Added support for #pragma GCC visibility, #pragma align, and others.
+
Added support for SSE, AVX, ARM NEON, and AltiVec.
+
Improved support for many Microsoft extensions.
+
Implemented support for blocks in C++.
+
Implemented precompiled headers for C++.
+
Improved abstract syntax trees to retain more accurate source information.
+
Added driver support for handling LLVM IR and bitcode files directly.
+
Major improvements to compiler correctness for exception handling.
+
Improved generated code quality in some areas:
+
+
Good code generation for X86-32 and X86-64 ABI handling.
+
Improved code generation for bit-fields, although important work remains.
+
+
+
@@ -170,48 +156,64 @@ suitable for use as a beta quality ARM compiler.
future!). The tool is very good at finding bugs that occur on specific
paths through code, such as on error conditions.
-
In the LLVM 2.7 time-frame, the analyzer core has made several major and
- minor improvements, including better support for tracking the fields of
- structures, initial support (not enabled by default yet) for doing
- interprocedural (cross-function) analysis, and new checks have been added.
+
The LLVM 2.8 release fixes a number of bugs and slightly improves precision
+ over 2.7, but there are no major new features in the release.
-The VMKit project is an implementation of
-a JVM and a CLI Virtual Machine (Microsoft .NET is an
-implementation of the CLI) using LLVM for static and just-in-time
-compilation.
+DragonEgg is a port of llvm-gcc to
+gcc-4.5. Unlike llvm-gcc, dragonegg in theory does not require any gcc-4.5
+modifications whatsoever (currently one small patch is needed) thanks to the
+new gcc plugin architecture.
+DragonEgg is a gcc plugin that makes gcc-4.5 use the LLVM optimizers and code
+generators instead of gcc's, just like with llvm-gcc.
+
-With the release of LLVM 2.7, VMKit has shifted to a great framework for writing
-virtual machines. VMKit now offers precise and efficient garbage collection with
-multi-threading support, thanks to the MMTk memory management toolkit, as well
-as just in time and ahead of time compilation with LLVM. The major changes in
-VMKit 0.27 are:
+DragonEgg is still a work in progress, but it is able to compile a lot of code,
+for example all of gcc, LLVM and clang. Currently Ada, C, C++ and Fortran work
+well, while all other languages either don't work at all or only work poorly.
+For the moment only the x86-32 and x86-64 targets are supported, and only on
+linux and darwin (darwin may need additional gcc patches).
+
+
+The 2.8 release has the following notable changes:
+
The plugin loads faster due to exporting fewer symbols.
+
Additional vector operations such as addps256 are now supported.
+
Ada global variables with no initial value are no longer zero initialized,
+resulting in better optimization.
+
The '-fplugin-arg-dragonegg-enable-gcc-optzns' flag now runs all gcc
+optimizers, rather than just a handful.
+
Fortran programs using common variables now link correctly.
+
GNU OMP constructs no longer crash the compiler.
+
-
Garbage collection: VMKit now uses the MMTk toolkit for garbage collectors.
- The first collector to be ported is the MarkSweep collector, which is precise,
- and drastically improves the performance of VMKit.
-
Line number information in the JVM: by using the debug metadata of LLVM, the
- JVM now supports precise line number information, useful when printing a stack
- trace.
-
Interface calls in the JVM: we implemented a variant of the Interface Method
- Table technique for interface calls in the JVM.
-
+The VMKit project is an implementation of
+a Java Virtual Machine (Java VM or JVM) that uses LLVM for static and
+just-in-time compilation. As of LLVM 2.8, VMKit now supports copying garbage
+collectors, and can be configured to use MMTk's copy mark-sweep garbage
+collector. In LLVM 2.8, the VMKit .NET VM is no longer being maintained.
+
+
@@ -231,80 +233,96 @@ libgcc routines).
All of the code in the compiler-rt project is available under the standard LLVM
-License, a "BSD-style" license. New in LLVM 2.7: compiler_rt now
-supports ARM targets.
+License, a "BSD-style" license. New in LLVM 2.8, compiler_rt now supports
+soft floating point (for targets that don't have a real floating point unit),
+and includes an extensive testsuite for the "blocks" language feature and the
+blocks runtime included in compiler_rt.
-DragonEgg is a port of llvm-gcc to
-gcc-4.5. Unlike llvm-gcc, which makes many intrusive changes to the underlying
-gcc-4.2 code, dragonegg in theory does not require any gcc-4.5 modifications
-whatsoever (currently one small patch is needed). This is thanks to the new
-gcc plugin architecture, which
-makes it possible to modify the behaviour of gcc at runtime by loading a plugin,
-which is nothing more than a dynamic library which conforms to the gcc plugin
-interface. DragonEgg is a gcc plugin that causes the LLVM optimizers to be run
-instead of the gcc optimizers, and the LLVM code generators instead of the gcc
-code generators, just like llvm-gcc. To use it, you add
-"-fplugin=path/dragonegg.so" to the gcc-4.5 command line, and gcc-4.5 magically
-becomes llvm-gcc-4.5!
-
+LLDB is a brand new member of the LLVM
+umbrella of projects. LLDB is a next generation, high-performance debugger. It
+is built as a set of reusable components which highly leverage existing
+libraries in the larger LLVM Project, such as the Clang expression parser, the
+LLVM disassembler and the LLVM JIT.
-DragonEgg is still a work in progress. Currently C works very well, while C++,
-Ada and Fortran work fairly well. All other languages either don't work at all,
-or only work poorly. For the moment only the x86-32 and x86-64 targets are
-supported, and only on linux and darwin (darwin needs an additional gcc patch).
+LLDB is in early development and not included as part of the LLVM 2.8 release,
+but is mature enough to support basic debugging scenarios on Mac OS X in C,
+Objective-C and C++. We'd really like help extending and expanding LLDB to
+support new platforms, new languages, new architectures, and new features.
-DragonEgg is a new project which is seeing its first release with llvm-2.7.
+libc++ is another new member of the LLVM
+family. It is an implementation of the C++ standard library, written from the
+ground up to specifically target the forthcoming C++'0x standard and focus on
+delivering great performance.
+
+
+As of the LLVM 2.8 release, libc++ is virtually feature complete, but would
+benefit from more testing and better integration with Clang++. It is also
+looking forward to the C++ committee finalizing the C++'0x standard.
-The LLVM Machine Code (aka MC) sub-project of LLVM was created to solve a number
-of problems in the realm of assembly, disassembly, object file format handling,
-and a number of other related areas that CPU instruction-set level tools work
-in. It is a sub-project of LLVM which provides it with a number of advantages
-over other compilers that do not have tightly integrated assembly-level tools.
-For a gentle introduction, please see the Intro to the
-LLVM MC Project Blog Post.
+KLEE is a symbolic execution framework for
+programs in LLVM bitcode form. KLEE tries to symbolically evaluate "all" paths
+through the application and records state transitions that lead to fault
+states. This allows it to construct testcases that lead to faults and can even
+be used to verify some algorithms.
-
2.7 includes major parts of the work required by the new MC Project. A few
- targets have been refactored to support it, and work is underway to support a
- native assembler in LLVM. This work is not complete in LLVM 2.7, but it has
- made substantially more progress on LLVM mainline.
-
-
One minor example of what MC can do is to transcode an AT&T syntax
- X86 .s file into intel syntax. You can do this with something like:
Although KLEE does not have any major new features as of 2.8, we have made
+various minor improvements, particularly to ease development:
+
+
Added support for LLVM 2.8. KLEE currently maintains compatibility with
+ LLVM 2.6, 2.7, and 2.8.
+
Added a buildbot for 2.6, 2.7, and trunk. A 2.8 buildbot will be coming
+ soon after the release.
+
Fixed many C++ code issues to allow building with Clang++. Mostly
+ complete, except for the version of MiniSAT that ships inside KLEE's
+ copy of STP.
+
Improved support for building with separate source and build
+ directories.
+
Added support for "long double" on x86.
+
Initial work on KLEE support for using the 'lit' test runner instead of
+ DejaGNU.
+
Added configure support for using an external version of
+ STP.
@@ -312,214 +330,274 @@ LLVM MC Project Blog Post.
An exciting aspect of LLVM is that it is used as an enabling technology for
a lot of other language and tools projects. This section lists some of the
- projects that have already been updated to work with LLVM 2.7.
+ projects that have already been updated to work with LLVM 2.8.
-Pure
-is an algebraic/functional programming language based on term rewriting.
-Programs are collections of equations which are used to evaluate expressions in
-a symbolic fashion. Pure offers dynamic typing, eager and lazy evaluation,
-lexical closures, a hygienic macro system (also based on term rewriting),
-built-in list and matrix support (including list and matrix comprehensions) and
-an easy-to-use C interface. The interpreter uses LLVM as a backend to
- JIT-compile Pure programs to fast native code.
+TCE is a toolset for designing
+application-specific processors (ASP) based on the Transport triggered
+architecture (TTA). The toolset provides a complete co-design flow from C/C++
+programs down to synthesizable VHDL and parallel program binaries. Processor
+customization points include the register files, function units, supported
+operations, and the interconnection network.
-
Pure versions 0.43 and later have been tested and are known to work with
-LLVM 2.7 (and continue to work with older LLVM releases >= 2.5).
+
TCE uses llvm-gcc/Clang and LLVM for C/C++ language support, target
+independent optimizations and also for parts of code generation. It generates
+new LLVM-based code generators "on the fly" for the designed TTA processors and
+loads them into the compiler backend as runtime libraries to avoid per-target
+recompilation of larger parts of the compiler chain.
-Roadsend PHP (rphp) is an open
-source implementation of the PHP programming
-language that uses LLVM for its optimizer, JIT and static compiler. This is a
-reimplementation of an earlier project that is now based on LLVM.
-
+Horizon is a bytecode
+language and compiler written on top of LLVM, intended for producing
+single-address-space managed code operating systems that
+run faster than the equivalent multiple-address-space C systems.
+A more in-depth description is available on the wiki.
+
-Unladen Swallow is a
-branch of Python intended to be fully
-compatible and significantly faster. It uses LLVM's optimization passes and JIT
-compiler.
+Clam AntiVirus is an open source (GPL)
+anti-virus toolkit for UNIX, designed especially for e-mail scanning on mail
+gateways. Since version 0.96 it has bytecode
+signatures that allow writing detections for complex malware. It
+uses LLVM's JIT to speed up the execution of bytecode on
+X86, X86-64, and PPC32/64, falling back to its own interpreter otherwise.
+The git version was updated to work with LLVM 2.8.
+
+
The
+ClamAV bytecode compiler uses Clang and LLVM to compile a C-like
+language, insert runtime checks, and generate ClamAV bytecode.
-TCE is a toolset for designing
-application-specific processors (ASP) based on the Transport triggered
-architecture (TTA). The toolset provides a complete co-design flow from C/C++
-programs down to synthesizable VHDL and parallel program binaries. Processor
-customization points include the register files, function units, supported
-operations, and the interconnection network.
+Pure
+is an algebraic/functional
+programming language based on term rewriting. Programs are collections
+of equations which are used to evaluate expressions in a symbolic
+fashion. Pure offers dynamic typing, eager and lazy evaluation, lexical
+closures, a hygienic macro system (also based on term rewriting),
+built-in list and matrix support (including list and matrix
+comprehensions) and an easy-to-use C interface. The interpreter uses
+LLVM as a backend to JIT-compile Pure programs to fast native code.
-
TCE uses llvm-gcc/Clang and LLVM for C/C++ language support, target
-independent optimizations and also for parts of code generation. It generates
-new LLVM-based code generators "on the fly" for the designed TTA processors and
-loads them in to the compiler backend as runtime libraries to avoid per-target
-recompilation of larger parts of the compiler chain.
+
Pure versions 0.44 and later have been tested and are known to work with
+LLVM 2.8 (and continue to work with older LLVM releases >= 2.5).
-SAFECode is a memory safe C
-compiler built using LLVM. It takes standard, unannotated C code, analyzes the
-code to ensure that memory accesses and array indexing operations are safe, and
-instruments the code with run-time checks when safety cannot be proven
-statically.
-
+GHC is an open source,
+state-of-the-art programming suite for
+Haskell, a standard lazy functional programming language. It includes
+an optimizing static compiler generating good code for a variety of
+platforms, together with an interactive system for convenient, quick
+development.
+
+
In addition to the existing C and native code generators, GHC 7.0 now
+supports an LLVM
+code generator. GHC supports LLVM 2.7 and later.
-IcedTea provides a
-harness to build OpenJDK using only free software build tools and to provide
-replacements for the not-yet free parts of OpenJDK. One of the extensions that
-IcedTea provides is a new JIT compiler named Shark which uses LLVM
-to provide native code generation without introducing processor-dependent
-code.
-
-
Icedtea6 1.8 and later have been tested and are known to work with
-LLVM 2.7 (and continue to work with older LLVM releases >= 2.6 as well).
-
+Clay is a new systems programming
+language that is specifically designed for generic programming. It makes
+generic programming very concise thanks to whole program type propagation. It
+uses LLVM as its backend.
+
-LLVM-Lua uses LLVM
- to add JIT and static compiling support to the Lua VM. Lua
-bytecode is analyzed to remove type checks, then LLVM is used to compile the
-bytecode down to machine code.
-
-
LLVM-Lua 1.2.0 have been tested and is known to work with LLVM 2.7.
-
+llvm-py has been updated to work
+with LLVM 2.8. llvm-py provides Python bindings for LLVM, allowing you to write a
+compiler backend or a VM in Python.
+
-MacRuby is an implementation of Ruby based on
-core Mac OS technologies, sponsored by Apple Inc. It uses LLVM at runtime for
-optimization passes, JIT compilation and exception handling. It also allows
-static (ahead-of-time) compilation of Ruby code straight to machine code.
-
-
The upcoming MacRuby 0.6 release works with LLVM 2.7.
-
+FAUST is a compiled language for real-time
+audio signal processing. The name FAUST stands for Functional AUdio STream. Its
+programming model combines two approaches: functional programming and block
+diagram composition. In addition to the C, C++, and Java output formats, the
+Faust compiler can now generate LLVM bitcode, and works with LLVM 2.7 and
+2.8.
+
-GHC is an open source,
-state-of-the-art programming suite for Haskell, a standard lazy
-functional programming language. It includes an optimizing static
-compiler generating good code for a variety of platforms, together
-with an interactive system for convenient, quick development.
+
Jade
+(Just-in-time Adaptive Decoder Engine) is a generic video decoder engine using
+LLVM for just-in-time compilation of video decoder configurations. Those
+configurations are designed by the MPEG Reconfigurable Video Coding (RVC)
+committee. The MPEG RVC standard is built on a stream-based dataflow
+representation of decoders. It is composed of a standard library of coding
+tools written in the RVC-CAL language and a dataflow configuration (a block
+diagram) of a decoder.
-
In addition to the existing C and native code generators, GHC now
-supports an LLVM
-code generator. GHC supports LLVM 2.7.
+
The Jade project is hosted as part of the Open
+RVC-CAL Compiler and requires it to translate the RVC-CAL standard library
+of video coding tools into LLVM assembly code.
Neko LLVM JIT
+replaces the standard Neko JIT with an LLVM-based implementation. While not
+fully complete, it is already providing a 1.5x speedup on 64-bit systems.
+Neko LLVM JIT requires LLVM 2.8 or later.
+Crack aims to provide
+the ease of development of a scripting language with the performance of a
+compiled language. The language derives concepts from C++, Java and Python,
+incorporating object-oriented programming, operator overloading and strong
+typing. Crack 0.2 works with LLVM 2.7, and the forthcoming Crack 0.2.1 release
+builds on LLVM 2.8.
-
This release includes a huge number of bug fixes, performance tweaks and
-minor improvements. Some of the major improvements and new features are listed
-in this section.
-
+DTMC provides support for
+Transactional Memory, which is an easy-to-use and efficient way to synchronize
+accesses to shared memory. Transactions can contain normal C/C++ code (e.g.,
+__transaction { list.remove(x); x.refCount--; }) and will be executed
+virtually atomically and isolated from other transactions.
+Kai (Japanese 会 for
+meeting/gathering) is an experimental interpreter that provides a highly
+extensible runtime environment and explicit control over the compilation
+process. Programs are defined using nested symbolic expressions, which are all
+parsed into first-class values with minimal intrinsic semantics. Kai can
+generate optimised code at run-time (using LLVM) in order to exploit the nature
+of the underlying hardware and to integrate with external software libraries.
+It is a unique exploration into the world of dynamic code compilation, and the
+interaction between high level and low level semantics.
In addition to changes to the code, between LLVM 2.6 and 2.7, a number of
-organization changes have happened:
+
+
+OSL is a shading
+language designed for use in physically based renderers and in particular
+production rendering. By using LLVM instead of the interpreter, it was able to
+meet its performance goals (>= C-code) while retaining the benefits of
+runtime specialization and a portable high-level language.
Ted Kremenek and Doug Gregor have stepped forward as Code Owners of the
- Clang static analyzer and the Clang frontend, respectively.
-
LLVM now has an official Blog at
- http://blog.llvm.org. This is a great way
- to learn about new LLVM-related features as they are implemented. Several
- features in this release are already explained on the blog.
-
The LLVM web pages are now checked into the SVN server, in the "www",
- "www-pubs" and "www-releases" SVN modules. Previously they were hidden in a
- largely inaccessible old CVS server.
This release includes a huge number of bug fixes, performance tweaks and
+minor improvements. Some of the major improvements and new features are listed
+in this section.
+
-
llvm.org is now hosted on a new (and much
- faster) server. It is still graciously hosted at the University of Illinois
- of Urbana Champaign.
-
@@ -529,43 +607,19 @@ organization changes have happened:
-
LLVM 2.7 includes several major new capabilities:
+
LLVM 2.8 includes several major new capabilities:
-
2.7 includes initial support for the MicroBlaze target.
- MicroBlaze is a soft processor core designed for Xilinx FPGAs.
-
-
2.7 includes a new LLVM IR "extensible metadata" feature. This feature
- supports many different use cases, including allowing front-end authors to
- encode source level information into LLVM IR, which is consumed by later
- language-specific passes. This is a great way to do high-level optimizations
- like devirtualization, type-based alias analysis, etc. See the
- Extensible Metadata Blog Post for more information.
-
-
2.7 encodes debug information
-in a completely new way, built on extensible metadata. The new implementation
-is much more memory efficient and paves the way for improvements to optimized
-code debugging experience.
-
-
2.7 now directly supports taking the address of a label and doing an
- indirect branch through a pointer. This is particularly useful for
- interpreter loops, and is used to implement the GCC "address of label"
- extension. For more information, see the
-Address of Label and Indirect Branches in LLVM IR Blog Post.
-
-
2.7 is the first release to start supporting APIs for assembling and
- disassembling target machine code. These APIs are useful for a variety of
- low level clients, and are surfaced in the new "enhanced disassembly" API.
- For more information see the The X86
- Disassembler Blog Post for more information.
-
-
2.7 includes major parts of the work required by the new MC Project,
- see the MC update above for more information.
-
+
As mentioned above, libc++ and LLDB are major new additions to the LLVM collective.
+
LLVM 2.8 now has pretty decent support for debugging optimized code. You
+ should be able to reliably get debug info for function arguments, assuming
+ that the value is actually available where you have stopped.
+
A new 'llvm-diff' tool is available that does a semantic diff of .ll
+ files.
+
The MC subproject has made major progress in this release.
+ Direct .o file writing support for darwin/x86[-64] is now reliable and
+ support for other targets and object file formats is in progress.
@@ -580,39 +634,19 @@ Address of Label and Indirect Branches in LLVM IR Blog Post.
expose new optimization opportunities:
-
LLVM IR now supports a 16-bit "half float" data type through two new intrinsics and APFloat support.
-
LLVM IR supports two new function
- attributes: inlinehint and alignstack(n). The former is a hint to the
- optimizer that a function was declared 'inline' and thus the inliner should
- weight it higher when considering inlining it. The later
- indicates to the code generator that the function diverges from the platform
- ABI on stack alignment.
-
The new llvm.objectsize intrinsic
- allows the optimizer to infer the sizes of memory objects in some cases.
- This intrinsic is used to implement the GCC __builtin_object_size
- extension.
-
LLVM IR now supports marking load and store instructions with "non-temporal" hints (building on the new
- metadata feature). This hint encourages the code
- generator to generate non-temporal accesses when possible, which are useful
- for code that is carefully managing cache behavior. Currently, only the
- X86 backend provides target support for this feature.
-
-
LLVM 2.7 has pre-alpha support for unions in LLVM IR.
- Unfortunately, this support is not really usable in 2.7, so if you're
- interested in pushing it forward, please help contribute to LLVM mainline.
-
-
-
-
LLVM 2.8 changes the internal order of operands in InvokeInst
- and CallInst.
- To be portable across releases, resort to CallSite and the
- high-level accessors, such as getCalledValue and setUnwindDest.
+
The memcpy, memmove, and memset
+ intrinsics now take address space qualified pointers and a bit to indicate
+ whether the transfer is "volatile" or not.
-
+
Per-instruction debug info metadata is much faster and uses less memory by
+ using the new DebugLoc class.
+
LLVM IR now has a more formalized concept of "trap values", which allow the optimizer
+ to optimize more aggressively in the presence of undefined behavior, while
+ still producing predictable results.
+
LLVM IR now supports two new linkage
+ types (linker_private_weak and linker_private_weak_def_auto) which map
+ onto some obscure MachO concepts.
@@ -628,80 +662,82 @@ expose new optimization opportunities:
release includes a few major enhancements and additions to the optimizers:
-
-
The inliner now merges arrays stack objects in different callees when
- inlining multiple call sites into one function. This reduces the stack size
- of the resultant function.
-
The -basicaa alias analysis pass (which is the default) has been improved to
- be less dependent on "type safe" pointers. It can now look through bitcasts
- and other constructs more aggressively, allowing better load/store
- optimization.
The module target data string now
- includes a notion of 'native' integer data types for the target. This
- helps mid-level optimizations avoid promoting complex sequences of
- operations to data types that are not natively supported (e.g. converting
- i32 operations to i64 on 32-bit chips).
-
The mid-level optimizer is now conservative when operating on a module with
- no target data. Previously, it would default to SparcV9 settings, which is
- not what most people expected.
-
Jump threading is now much more aggressive at simplifying correlated
- conditionals and threading blocks with otherwise complex logic. It has
- subsumed the old "Conditional Propagation" pass, and -condprop has been
- removed from LLVM 2.7.
-
The -instcombine pass has been refactored from being one huge file to being
- a library of its own. Internally, it uses a customized IRBuilder to clean
- it up and simplify it.
-
-
The optimal edge profiling pass is reliable and much more complete than in
- 2.6. It can be used with the llvm-prof tool but isn't wired up to the
- llvm-gcc and clang command line options yet.
-
-
A new experimental alias analysis implementation, -scev-aa, has been added.
- It uses LLVM's Scalar Evolution implementation to do symbolic analysis of
- pointer offset expressions to disambiguate pointers. It can catch a few
- cases that basicaa cannot, particularly in complex loop nests.
-
-
The default pass ordering has been tweaked for improved optimization
- effectiveness.
-
+
As mentioned above, the optimizer now has support for updating debug
+ information as it goes. A key aspect of this is the new llvm.dbg.value
+ intrinsic. This intrinsic represents debug info for variables that are
+ promoted to SSA values (typically by mem2reg or the -scalarrepl passes).
+
+
The JumpThreading pass is now much more aggressive about implied value
+ relations, allowing it to thread conditions like "a == 4" when a is known to
+ be 13 in one of the predecessors of a block. It does this in conjunction
+ with the new LazyValueInfo analysis pass.
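The correlated-conditional case above can be sketched in C (a hypothetical function, chosen only for illustration): on the path where x is known to be 13, jump threading with LazyValueInfo can fold the later "x == 4" test to false and remove the dead branch.

```c
#include <assert.h>

/* Hypothetical example: inside the "x == 13" block the compiler knows
   the exact value of x, so the nested "x == 4" comparison is provably
   false and that branch (and its dead block) can be deleted entirely. */
int classify(int x) {
    if (x == 13) {
        if (x == 4)      /* folded to false once x == 13 is known */
            return -1;   /* unreachable */
        return 1;
    }
    return 0;
}
```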
+
The new RegionInfo analysis pass identifies single-entry single-exit regions
+ in the CFG. You can play with it with the "opt -regions -analyze" or
+ "opt -view-regions" commands.
+
The loop optimizer has significantly improved strength reduction and analysis
+ capabilities. Notably it is able to build on the trap value and signed
+ integer overflow information to optimize <= and >= loops.
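As a hedged illustration of why signed-overflow information matters for "<=" loops (a hypothetical function, not taken from the release): because signed overflow is undefined behavior, the optimizer may assume the induction variable never wraps, which lets it treat the trip count of this loop as exactly n + 1 and strength-reduce it.

```c
#include <assert.h>

/* Hypothetical example: with a signed induction variable, the compiler
   may assume i <= n eventually fails (no wraparound), so the loop below
   runs exactly n + 1 times for any non-negative n. */
int trip_count(int n) {
    int count = 0;
    for (int i = 0; i <= n; i++)
        count++;
    return count;
}
```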
+
The CallGraphSCCPassManager now has some basic support for iterating within
+ an SCC when an optimizer devirtualizes a function call. This allows inlining
+ through indirect call sites that are devirtualized by store-load forwarding
+ and other optimizations.
+
The new -loweratomic pass is available
+ to lower atomic instructions into their non-atomic form. This can be useful
+ to optimize generic code that expects to run in a single-threaded
+ environment.
+The LLVM Machine Code (aka MC) subsystem was created to solve a number
+of problems in the realm of assembly, disassembly, object file format handling,
+and a number of other related areas that CPU instruction-set level tools work
+in.
-
-
The JIT now supports generating debug information and is compatible with
-the new GDB 7.0 (and later) interfaces for registering dynamically generated
-debug info.
+
The MC subproject has made great leaps in LLVM 2.8. For example, support for
+ directly writing .o files from LLC (and clang) now works reliably for
+ darwin/x86[-64] (including inline assembly support) and the integrated
+ assembler is turned on by default in Clang for these targets. This provides
+ improved compile times among other things.
-
The JIT now defaults
-to compiling eagerly to avoid a race condition in the lazy JIT.
-Clients that still want the lazy JIT can switch it on by calling
-ExecutionEngine::DisableLazyCompilation(false).
+
+
The entire compiler has converted over to using the MCStreamer assembler API
+ instead of writing out a .s file textually.
+
The "assembler parser" is far more mature than in 2.7, supporting a full
+ complement of directives as well as assembler macros.
+
The "assembler backend" has been completed, including support for relaxation,
+ relocation processing, and all the other things that an assembler does.
+
The MachO file format support is now fully functional.
+
The MC disassembler now fully supports ARM and Thumb. ARM assembler support
+ is still in early development though.
+
The X86 MC assembler now supports the X86 AES and AVX instruction sets.
+
Work on ELF and COFF object files and ARM target support is well underway,
+ but isn't useful yet in LLVM 2.8. Please contact the llvmdev mailing list
+ if you're interested in this.
@@ -715,49 +751,58 @@ infrastructure, which allows us to implement more aggressive algorithms and make
it run faster:
-
The 'llc -asm-verbose' option (which is now the default) has been enhanced
- to emit many useful comments to .s files indicating information about spill
- slots and loop nest structure. This should make it much easier to read and
- understand assembly files. This is wired up in llvm-gcc and clang to
- the -fverbose-asm option.
-
-
New LSR with "full strength reduction" mode, which can reduce address
- register pressure in loops where address generation is important.
-
-
A new codegen level Common Subexpression Elimination pass (MachineCSE)
- is available and enabled by default. It catches redundancies exposed by
- lowering.
-
A new pre-register-allocation tail duplication pass is available and enabled
- by default, it can substantially improve branch prediction quality in some
- cases.
-
A new sign and zero extension optimization pass (OptimizeExtsPass)
- is available and enabled by default. This pass can take advantage of
- architecture features like x86-64 implicit zero extension behavior and
- sub-registers.
-
The code generator now supports a mode where it attempts to preserve the
- order of instructions in the input code. This is important for source that
- is hand scheduled and extremely sensitive to scheduling. It is compatible
- with the GCC -fno-schedule-insns option.
-
The target-independent code generator now supports generating code with
- arbitrary numbers of result values. Returning more values than was
- previously supported is handled by returning through a hidden pointer. In
- 2.7, only the X86 and XCore targets have adopted support for this
- though.
The "DAG instruction
- selection" phase of the code generator has been largely rewritten for
- 2.7. Previously, tblgen spit out tons of C++ code which was compiled and
- linked into the target to do the pattern matching, now it emits a much
- smaller table which is read by the target-independent code. The primary
- advantages of this approach is that the size and compile time of various
- targets is much improved. The X86 code generator shrunk by 1.5MB of code,
- for example.
-
Almost the entire code generator has switched to emitting code through the
- MC interfaces instead of printing textually to the .s file. This led to a
- number of cleanups and speedups. In 2.7, debug and exception handling
- information does not go through MC yet.
+
The clang/gcc -momit-leaf-frame-pointer argument is now supported.
+
The clang/gcc -ffunction-sections and -fdata-sections arguments are now
+ supported on ELF targets (like GCC).
+
The MachineCSE pass is now tuned and on by default. It eliminates common
+ subexpressions that are exposed when lowering to machine instructions.
+
The "local" register allocator was replaced by a new "fast" register
+ allocator. This new allocator (which is often used at -O0) is substantially
+ faster and produces better code than the old local register allocator.
+
A new LLC "-regalloc=default" option is available, which automatically
+ chooses a register allocator based on the -O optimization level.
+
The common code generator code was modified to promote illegal argument and
+ return value vectors to wider ones when possible instead of scalarizing
+ them. For example, <3 x float> will now pass in one SSE register
+ instead of 3 on X86. This generates substantially better code since the
+ rest of the code generator was already expecting this.
+
The code generator uses a new "COPY" machine instruction. This speeds up
+ the code generator and eliminates the need for targets to implement the
+ isMoveInstr hook. Also, the copyRegToReg hook was renamed to copyPhysReg
+ and simplified.
+
The code generator now has a "LocalStackSlotPass", which optimizes stack
+ slot access for targets (like ARM) that have limited stack displacement
+ addressing.
+
A new "PeepholeOptimizer" is available, which eliminates sign and zero
+ extends, and optimizes away compare instructions when the condition result
+ is available from a previous instruction.
+
Atomic operations now get legalized into simpler atomic operations if not
+ natively supported, easing the implementation burden on targets.
+
We have added two new bottom-up pre-allocation register-pressure-aware schedulers:
+
+
The hybrid scheduler schedules aggressively to minimize schedule length when registers are available and avoids overscheduling in high-pressure situations.
+
The instruction-level-parallelism scheduler schedules for maximum ILP when registers are available and avoids overscheduling in high-pressure situations.
+
+
The tblgen type inference algorithm was rewritten to be more consistent and
+ diagnose more target bugs. If you have an out-of-tree backend, you may
+ find that it finds bugs in your target description. This work also adds
+ limited support for writing patterns for instructions that return
+ multiple results (e.g. a virtual register and a flag result). The
+ 'parallel' modifier in tblgen was removed; use the new support
+ for multiple results instead.
+
A new (experimental) "-rendermf" pass is available which renders a
+ MachineFunction into HTML, showing live ranges and other useful
+ details.
+
The new SubRegIndex tablegen class allows subregisters to be indexed
+ symbolically instead of numerically. If your target uses subregisters you
+ will need to adapt to use SubRegIndex when you upgrade to 2.8.
+
+
+
The -fast-isel instruction selection path (used at -O0 on X86) was rewritten
+ to work bottom-up on basic blocks instead of top-down. This makes it
+ slightly faster (because the MachineDCE pass is not needed any longer) and
+ allows it to generate better code in some cases.
+
@@ -767,16 +812,46 @@ it run faster:
-
New features of the X86 target include:
+
New features and major changes in the X86 target include:
-
The X86 backend now optimizes tail calls much more aggressively for
- functions that use the standard C calling convention.
-
The X86 backend now models scalar SSE registers as subregs of the SSE vector
- registers, making the code generator more aggressive in cases where scalars
- and vector types are mixed.
-
+
The X86 backend now supports holding X87 floating point stack values
+ in registers across basic blocks, dramatically improving performance of code
+ that uses long double, and when targeting CPUs that don't support SSE.
+
+
The X86 backend now uses a SSEDomainFix pass to optimize SSE operations. On
+ Nehalem ("Core i7") and newer CPUs there is a two-cycle latency penalty for
+ using a register in a different domain than where it was defined. This pass
+ optimizes away these stalls.
+
+
The X86 backend now promotes 16-bit integer operations to 32-bits when
+ possible. This avoids 0x66 prefixes, which are slow on some
+ microarchitectures and bloat the code on all of them.
+
+
The X86 backend now supports the Microsoft "thiscall" calling convention,
+ and a calling convention to support GHC (the Glasgow Haskell
+ Compiler).
+
+
The X86 backend supports a new "llvm.x86.int" intrinsic, which maps onto
+ the X86 "int $42" and "int3" instructions.
+
+
At the IR level, the <2 x float> datatype is now promoted and passed
+ around as a <4 x float> instead of being passed and returned as an MMX
+ vector. If you have a frontend that uses this, please pass and return a
+ <2 x i32> instead (using bitcasts).
+
+
When printing .s files in verbose assembly mode (the default for clang -S),
+ the X86 backend now decodes X86 shuffle instructions and prints
+ human-readable comments after the most inscrutable of them.
+
+
NEON support has been improved to model instructions which operate on
+ multiple consecutive registers more aggressively. This avoids lots of
+ extraneous register copies.
+
The ARM backend now uses a new "ARMGlobalMerge" pass, which merges several
+ global variables into one, saving extra address computation (all the global
+ variables can be accessed via the same base address) and potentially reducing
+ register pressure.
+
+
The ARM backend has received many minor improvements and tweaks which lead
+ to substantially better performance in a wide range of different scenarios.
+
-
The ARM backend now generates instructions in unified assembly syntax.
-
-
llvm-gcc now has complete support for the ARM v7 NEON instruction set. This
- support differs slightly from the GCC implementation. Please see the
-
- ARM Advanced SIMD (NEON) Intrinsics and Types in LLVM Blog Post for
- helpful information if migrating code from GCC to LLVM-GCC.
-
-
The ARM and Thumb code generators now use register scavenging for stack
- object address materialization. This allows the use of R3 as a general
- purpose register in Thumb1 code, as it was previously reserved for use in
- stack address materialization. Secondly, sequential uses of the same
- value will now re-use the materialized constant.
-
-
The ARM backend now has good support for ARMv4 targets and has been tested
- on StrongARM hardware. Previously, LLVM only supported ARMv4T and
- newer chips.
-
-
Atomic builtins are now supported for ARMv6 and ARMv7 (__sync_synchronize,
- __sync_fetch_and_add, etc.).
This release includes a number of new APIs that are used internally, which
- may also be useful for external clients.
-
-
-
-
The optimizer uses the new CodeMetrics class to measure the size of code.
- Various passes (like the inliner, loop unswitcher, etc) all use this to make
- more accurate estimates of the code size impact of various
- optimizations.
-
A new
- llvm/Analysis/InstructionSimplify.h interface is available for doing
- symbolic simplification of instructions (e.g. a+0 -> a)
- without requiring the instruction to exist. This centralizes a lot of
- ad-hoc symbolic manipulation code scattered in various passes.
-
The optimizer now uses a new SSAUpdater
- class which efficiently supports
- doing unstructured SSA update operations. This centralized a bunch of code
- scattered throughout various passes (e.g. jump threading, lcssa,
- loop rotate, etc) for doing this sort of thing. The code generator has a
- similar
- MachineSSAUpdater class.
-
The
- llvm/Support/Regex.h header exposes a platform independent regular
- expression API. Building on this, the FileCheck utility now supports
- regular expressions.
-
raw_ostream now supports a circular "debug stream" accessed with "dbgs()".
- By default, this stream works the same way as "errs()", but if you pass
- -debug-buffer-size=1000 to opt, the debug stream is capped to a
- fixed sized circular buffer and the output is printed at the end of the
- program's execution. This is helpful if you have a long lived compiler
- process and you're interested in seeing snapshots in time.
You can now build LLVM as a big dynamic library (e.g. "libllvm2.7.so"). To
- get this, configure LLVM with the --enable-shared option.
-
-
LLVM command line tools now overwrite their output by default. Previously,
- they would only do this with -f. This makes them more convenient to use, and
- behave more like standard unix tools.
+
The ARM NEON intrinsics have been substantially reworked to reduce
+ redundancy and improve code generation. Some of the major changes are:
+
+
+ All of the NEON load and store intrinsics (llvm.arm.neon.vld* and
+ llvm.arm.neon.vst*) take an extra parameter to specify the alignment in bytes
+ of the memory being accessed.
+
+
+ The llvm.arm.neon.vaba intrinsic (vector absolute difference and
+ accumulate) has been removed. This operation is now represented using
+ the llvm.arm.neon.vabd intrinsic (vector absolute difference) followed by a
+ vector add.
+
+
+ The llvm.arm.neon.vabdl and llvm.arm.neon.vabal intrinsics (lengthening
+ vector absolute difference with and without accumulation) have been removed.
+ They are represented using the llvm.arm.neon.vabd intrinsic (vector absolute
+ difference) followed by a vector zero-extend operation, and for vabal,
+ a vector add.
+
+
+ The llvm.arm.neon.vmovn intrinsic has been removed. Calls of this intrinsic
+ are now replaced by vector truncate operations.
+
+
+ The llvm.arm.neon.vmovls and llvm.arm.neon.vmovlu intrinsics have been
+ removed. They are now represented as vector sign-extend (vmovls) and
+ zero-extend (vmovlu) operations.
+
+
+ The llvm.arm.neon.vaddl*, llvm.arm.neon.vaddw*, llvm.arm.neon.vsubl*, and
+ llvm.arm.neon.vsubw* intrinsics (lengthening vector add and subtract) have
+ been removed. They are replaced by vector add and vector subtract operations
+ where one (vaddw, vsubw) or both (vaddl, vsubl) of the operands are either
+ sign-extended or zero-extended.
+
+
+ The llvm.arm.neon.vmulls, llvm.arm.neon.vmullu, llvm.arm.neon.vmlal*, and
+ llvm.arm.neon.vmlsl* intrinsics (lengthening vector multiply with and without
+ accumulation and subtraction) have been removed. These operations are now
+ represented as vector multiplications where the operands are either
+ sign-extended or zero-extended, followed by a vector add for vmlal or a
+ vector subtract for vmlsl. Note that the polynomial vector multiply
+ intrinsic, llvm.arm.neon.vmullp, remains unchanged.
+
+
+
-
The opt and llc tools now autodetect whether their input is a .ll or .bc
- file, and automatically do the right thing. This means you don't need to
- explicitly use the llvm-as tool for most things.
If you're already an LLVM user or developer with out-of-tree changes based
-on LLVM 2.6, this section lists some "gotchas" that you may run into upgrading
+on LLVM 2.7, this section lists some "gotchas" that you may run into upgrading
from the previous release.
-
-
-The Andersen's alias analysis ("anders-aa") pass, the Predicate Simplifier
-("predsimplify") pass, the LoopVR pass, the GVNPRE pass, and the random sampling
-profiling ("rsprofiling") passes have all been removed. They were not being
-actively maintained and had substantial problems. If you are interested in
-these components, you are welcome to resurrect them from SVN, fix the
-correctness problems, and resubmit them to mainline.
-
-
LLVM now defaults to building most libraries with RTTI turned off, providing
-a code size reduction. Packagers who are interested in building LLVM to support
-plugins that require RTTI information should build with "make REQUIRE_RTTI=1"
-and should read the new Advice on Packaging LLVM
-document.
-
-
The LLVM interpreter now defaults to not using libffi even
-if you have it installed. This makes it more likely that an LLVM built on one
-system will work when copied to a similar system. To use libffi,
-configure with --enable-libffi.
-
-
Debug information uses a completely different representation, an LLVM 2.6
-.bc file should work with LLVM 2.7, but debug info won't come forward.
-
-
The LLVM 2.6 (and earlier) "malloc" and "free" instructions got removed,
- along with LowerAllocations pass. Now you should just use a call to the
- malloc and free functions in libc. These calls are optimized as well as
- the old instructions were.
+
The build configuration machinery changed the output directory names. It
+ wasn't clear to many people that a "Release-Asserts" build was a release build
+ without asserts. To make this more clear, "Release" does not include
+ assertions and "Release+Asserts" does (likewise, "Debug" and
+ "Debug+Asserts").
+
The MSIL backend was removed; it was unsupported and broken.
+
The ABCD, SSI, and SCCVN passes were removed. These were not fully
+ functional and their behavior has been or will be subsumed by the
+ LazyValueInfo pass.
+
The LLVM IR 'Union' feature was removed. While this is a desirable feature
+ for LLVM IR to support, the existing implementation was half-baked and
+ barely useful. We'd really like anyone interested to resurrect the work and
+ finish it for a future release.
+
If you're used to reading .ll files, you'll probably notice that .ll file
+ dumps don't produce #uses comments anymore. To get them, run a .bc file
+ through "llvm-dis --show-annotations".
+
Target triples are now stored in a normalized form, and all inputs from
+ humans are expected to be normalized by Triple::normalize before being
+ stored in a module triple or passed to another library.
+
+
In addition, many APIs have changed in this release. Some of the major LLVM
API changes are:
-
+
LLVM 2.8 changes the internal order of operands in InvokeInst
+ and CallInst.
+ To be portable across releases, please use the CallSite class and the
+ high-level accessors, such as getCalledValue and
+ setUnwindDest.
+
+
+ You can no longer pass use_iterators directly to cast<> (and similar),
+ because these routines tend to perform costly dereference operations more
+ than once. You have to dereference the iterators yourself and pass them in.
+
+
+ llvm.memcpy.*, llvm.memset.*, llvm.memmove.* intrinsics take an extra
+ parameter now ("i1 isVolatile"), totaling 5 parameters, and the pointer
+ operands are now address-space qualified.
+ If you were creating these intrinsic calls and prototypes yourself (as opposed
+ to using Intrinsic::getDeclaration), you can use
+ UpgradeIntrinsicFunction/UpgradeIntrinsicCall to be portable across releases.
+
+
+ SetCurrentDebugLocation takes a DebugLoc now instead of a MDNode.
+ Change your code to use
+ SetCurrentDebugLocation(DebugLoc::getFromDILocation(...)).
+
+
+ The RegisterPass and RegisterAnalysisGroup templates are
+ considered deprecated, but continue to function in LLVM 2.8. Clients are
+ strongly advised to use the upcoming INITIALIZE_PASS() and
+ INITIALIZE_AG_PASS() macros instead.
+
+
+ The constructor for the Triple class no longer tries to understand odd triple
+ specifications. Frontends should ensure that they only pass valid triples to
+ LLVM. The Triple::normalize utility method has been added to help front-ends
+ deal with funky triples.
+
+
+ The signature of the GCMetadataPrinter::finishAssembly virtual
+ function changed: the raw_ostream and MCAsmInfo arguments
+ were dropped. GC plugins which compute stack maps must be updated to avoid
+ having the old definition overload the new signature.
+
+
+ The signature of MemoryBuffer::getMemBuffer changed. Unfortunately
+ calls intended for the old version still compile, but will not work correctly,
+ leading to a confusing error about an invalid header in the bitcode.
+
The add, sub, and mul instructions no longer
-support floating-point operands. The fadd, fsub, and
-fmul instructions should be used for this purpose instead.
-
+
+ Some public headers were renamed:
+
+
llvm/Assembly/AsmAnnotationWriter.h was renamed
+ to llvm/Assembly/AssemblyAnnotationWriter.h
+
This section lists changes to the LLVM development infrastructure. This
+mostly impacts users who actively work on LLVM or follow development on
+mainline, but may also impact users who leverage the LLVM build infrastructure
+or are interested in LLVM qualification.
-
Intel and AMD machines (IA32, X86-64, AMD64, EMT-64) running Red Hat
- Linux, Fedora Core, FreeBSD and AuroraUX (and probably other unix-like
- systems).
-
PowerPC and X86-based Mac OS X systems, running 10.4 and above in 32-bit
- and 64-bit modes.
-
Intel and AMD machines running on Win32 using MinGW libraries (native).
-
Intel and AMD machines running on Win32 with the Cygwin libraries (limited
- support is available for native builds with Visual C++).
-
Sun x86 and AMD64 machines running Solaris 10, OpenSolaris 0906.
-
Alpha-based machines running Debian GNU/Linux.
+
The default for make check is now to use
+ the lit testing tool, which is
+ part of LLVM itself. You can use lit directly as well, or use
+ the llvm-lit tool which is created as part of a Makefile or CMake
+ build (and knows how to find the appropriate tools). See the lit
+ documentation and the blog
+ post, and PR5217
+ for more information.
+
+
The LLVM test-suite infrastructure has a new "simple" test format
+ (make TEST=simple). The new format is intended to require only a
+ compiler and not a full set of LLVM tools. This makes it useful for testing
+ released compilers, for running the test suite with other compilers (for
+ performance comparisons), and makes sure that we are testing the compiler as
+ users would see it. The new format is also designed to work using reference
+ outputs instead of comparison to a baseline compiler, which makes it run much
+ faster and makes it less system dependent.
+
+
Significant progress has been made on a new interface to running the
+ LLVM test-suite (aka the LLVM "nightly tests") using
+ the LNT infrastructure. The LNT
+ interface to the test-suite brings significantly improved reporting
+ capabilities for monitoring the correctness and generated code quality
+ produced by LLVM over time.
-
-
The core LLVM infrastructure uses GNU autoconf to adapt itself
-to the machine and operating system on which it is built. However, minor
-porting may be required to get LLVM to work on new platforms. We welcome your
-portability patches and reports of successful builds or error messages.
-
@@ -987,15 +1097,6 @@ listed by component. If you run into a problem, please check the LLVM bug database and submit a bug if
there isn't already one.
-
-
LLVM will not correctly compile on Solaris and/or OpenSolaris
-using the stock GCC 3.x.x series 'out the box',
-See: Broken versions of GCC and other tools.
-However, A Modern GCC Build
-for x86/x86-64 has been made available from the third party AuroraUX Project
-that has been meticulously tested for bootstrapping LLVM & Clang.
-
-
@@ -1013,11 +1114,10 @@ components, please contact us on the LLVMdev list.
-
The MSIL, Alpha, SPU, MIPS, PIC16, Blackfin, MSP430, SystemZ and MicroBlaze
- backends are experimental.
-
llc "-filetype=asm" (the default) is the only
- supported value for this option. The MachO writer is experimental, and
- works much better in mainline SVN.
+
The Alpha, Blackfin, CellSPU, MicroBlaze, MSP430, MIPS, SystemZ
+ and XCore backends are experimental.
+
llc "-filetype=obj" is experimental on all targets
+ other than darwin-i386 and darwin-x86_64.
The X86 backend generates inefficient floating point code when configured
- to generate code for systems that don't have SSE2.
Win64 code generation wasn't widely tested. Everything should work, but we
expect small issues to happen. Also, llvm-gcc cannot build the mingw64
runtime currently due to lack of support for the 'u' inline assembly
@@ -1127,6 +1225,9 @@ appropriate nops inserted to ensure restartability.
+
The C backend has numerous problems and is not being actively maintained.
+Depending on it for anything serious is not advised.
The only major language feature of GCC not supported by llvm-gcc is
- the __builtin_apply family of builtins. However, some extensions
- are only supported on some targets. For example, trampolines are only
- supported on some targets (these are used when you take the address of a
- nested function).
-The llvm-gcc 4.2 Ada compiler works fairly well; however, this is not a mature
-technology, and problems should be expected.
-
-
The Ada front-end currently only builds on X86-32. This is mainly due
-to lack of trampoline support (pointers to nested functions) on other platforms.
-However, it also fails to build on X86-64
-which does support trampolines.
-
The Ada front-end fails to bootstrap.
-This is due to lack of LLVM support for setjmp/longjmp style
-exception handling, which is used internally by the compiler.
-Workaround: configure with --disable-bootstrap.
-
The c380004, c393010
-and cxg2021 ACATS tests fail
-(c380004 also fails with gcc-4.2 mainline).
-If the compiler is built with checks disabled then c393010
-causes the compiler to go into an infinite loop, using up all system memory.
-
Some GCC specific Ada tests continue to crash the compiler.
-
The -E binder option (exception backtraces)
-does not work and will result in programs
-crashing if an exception is raised. Workaround: do not use -E.
-
Only discrete types are allowed to start
-or finish at a non-byte offset in a record. Workaround: do not pack records
-or use representation clauses that result in a field of a non-discrete type
-starting or finishing in the middle of a byte.
llvm-gcc is generally very stable for the C family of languages. The only
+ major language feature of GCC not supported by llvm-gcc is the
+ __builtin_apply family of builtins. However, some extensions
+ are only supported on some targets. For example, trampolines are only
+ supported on some targets (these are used when you take the address of a
+ nested function).
+
+
Fortran support generally works, but there are still several unresolved bugs
+ in Bugzilla. Please see the
+ tools/gfortran component for details. Note that llvm-gcc is missing major
+ Fortran performance work in the frontend and library that went into GCC after
+ 4.2. If you are interested in Fortran, we recommend that you consider using
+ dragonegg instead.
+
+
The llvm-gcc 4.2 Ada compiler has basic functionality, but is no longer being
+actively maintained. If you are interested in Ada, we recommend that you
+consider using dragonegg instead.