diff --git a/docs/TestingGuide.html b/docs/TestingGuide.html
index 584b632e04c..c39065a2824 100644
--- a/docs/TestingGuide.html
+++ b/docs/TestingGuide.html
@@ -2,32 +2,46 @@
-  LLVM Test Suite Guide
+  LLVM Testing Infrastructure Guide
-  LLVM Test Suite Guide
+  LLVM Testing Infrastructure Guide
   * Overview
-  * Requirements
-  * Quick Start
-  * LLVM Test Suite Organization
-      * LLVM Test Suite Tree
-      * DejaGNU Structure
-      * llvm-test Structure
-  * Running the LLVM Tests
+  * Requirements
+  * LLVM testing infrastructure organization
+      * DejaGNU tests
+      * Test suite
+  * Quick start
+      * DejaGNU tests
+      * Test suite
+  * DejaGNU structure
+  * Test suite structure
+  * Running the test suite
   * Running the nightly tester
@@ -43,19 +57,19 @@
-This document is the reference manual for the LLVM test suite. It documents
-the structure of the LLVM test suite, the tools needed to use it, and how to
-add and run tests.
+This document is the reference manual for the LLVM testing infrastructure. It
+documents the structure of the LLVM testing infrastructure, the tools needed
+to use it, and how to add and run tests.
Requirements

-In order to use the LLVM test suite, you will need all of the software
-required to build LLVM, plus the following:
+In order to use the LLVM testing infrastructure, you will need all of the
+software required to build LLVM, plus the following:
@@ -65,222 +79,232 @@ required to build LLVM, plus the following:

Expect
    Expect is required by DejaGNU.
tcl
    Tcl is required by DejaGNU.
-F2C
-    For now, LLVM does not have a Fortran front-end, but using F2C, we can
-    run Fortran benchmarks. F2C support must be enabled via configure if not
-    installed in a standard place. F2C requires three items: the f2c
-    executable, f2c.h to compile the generated code, and libf2c.a to link
-    generated code. By default, given an F2C directory $DIR, the configure
-    script will search $DIR/bin for f2c, $DIR/include for f2c.h, and $DIR/lib
-    for libf2c.a. The default $DIR values are: /usr, /usr/local, /sw, and
-    /opt. If you installed F2C in a different location, you must tell
-    configure:
-Darwin (Mac OS X) developers can simplify the installation of Expect and tcl
-by using fink. fink install expect will install both. Alternatively,
-Darwinports users can use sudo port install expect to install Expect and
-tcl.

LLVM testing infrastructure organization

The LLVM testing infrastructure contains two major categories of tests: code
fragments and whole programs. Code fragments are referred to as the "DejaGNU
tests" and are in the llvm module in subversion under the llvm/test
directory. The whole-program tests are referred to as the "Test suite" and
are in the test-suite module in subversion.

DejaGNU tests
Code fragments are small pieces of code that test a specific +feature of LLVM or trigger a specific bug in LLVM. They are usually +written in LLVM assembly language, but can be written in other +languages if the test targets a particular language front end (and the +appropriate --with-llvmgcc options were used +at configure time of the llvm module). These tests +are driven by the DejaGNU testing framework, which is hidden behind a +few simple makefiles.

+ +

These code fragments are not complete programs. The code generated +from them is never executed to determine correct behavior.

+ +

These code fragment tests are located in the llvm/test +directory.

+ +

Typically when a bug is found in LLVM, a regression test containing +just enough code to reproduce the problem should be written and placed +somewhere underneath this directory. In most cases, this will be a small +piece of LLVM assembly language code, often distilled from an actual +application or benchmark.

+ +
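For example, a minimal regression test might look like the following sketch.
The RUN line syntax is explained in the DejaGNU structure section below; the
specific pass and output pattern here are purely illustrative:

; RUN: llvm-as < %s | opt -instcombine | llvm-dis | grep {ret i32 0}

define i32 @test(i32 %x) {
        %tmp = xor i32 %x, %x
        ret i32 %tmp
}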
Test suite
The test suite contains whole programs, which are pieces of +code which can be compiled and linked into a stand-alone program that can be +executed. These programs are generally written in high level languages such as +C or C++, but sometimes they are written straight in LLVM assembly.

+ +

These programs are compiled and then executed using several different +methods (native compiler, LLVM C backend, LLVM JIT, LLVM native code generation, +etc). The output of these programs is compared to ensure that LLVM is compiling +the program correctly.

+ +

In addition to compiling and executing programs, whole program tests serve as +a way of benchmarking LLVM performance, both in terms of the efficiency of the +programs generated as well as the speed with which LLVM compiles, optimizes, and +generates code.

+ +

The test-suite is located in the test-suite Subversion module.

-Quick Start
+Quick start

-The tests are located in two separate CVS modules. The basic feature and
-regression tests are in the main "llvm" module under the directory
-llvm/test. A more comprehensive test suite that includes whole programs in C
-and C++ is in the llvm-test module. This module should be checked out to the
-llvm/projects directory. When you configure the llvm module, the llvm-test
-module will be automatically configured. Alternatively, you can configure
-the llvm-test module manually.
+The tests are located in two separate Subversion modules. The DejaGNU tests
+are in the main "llvm" module under the directory llvm/test (so you get
+these tests for free with the main llvm tree). The more comprehensive test
+suite that includes whole programs in C and C++ is in the test-suite module.
+This module should be checked out to the llvm/projects directory (don't use
+a name other than the default "test-suite", or the test suite will be run
+every time you run make in the main llvm directory). When you configure the
+llvm module, the test-suite directory will be automatically configured.
+Alternatively, you can configure the test-suite module manually.
DejaGNU tests

To run all of the simple tests in LLVM using DejaGNU, use the master Makefile
in the llvm/test directory:
 % gmake -C llvm/test
 
or
% gmake check

-To run only a subdirectory of tests in llvm/test using DejaGNU (i.e.
-Regression/Transforms), just set the TESTSUITE variable to the path of the
-subdirectory (relative to llvm/test):
+To run only a subdirectory of tests in llvm/test using DejaGNU (i.e.
+Transforms), just set the TESTSUITE variable to the path of the subdirectory
+(relative to llvm/test):
-% gmake -C llvm/test TESTSUITE=Regression/Transforms
+% gmake TESTSUITE=Transforms check

Note: If you are running the tests with objdir != subdir, you must have run
the complete test suite before you can specify a subdirectory.
-To run the comprehensive test suite (tests that compile and execute whole
-programs), run the llvm-test tests:
+To run only a single test, set TESTONE to its path (relative to llvm/test)
+and make the check-one target:
-% cd llvm/projects
-% cvs co llvm-test
-% cd llvm-test
-% ./configure --with-llvmsrc=$LLVM_SRC_ROOT --with-llvmobj=$LLVM_OBJ_ROOT
-% gmake
+% gmake TESTONE=Feature/basictest.ll check-one
-LLVM Test Suite Organization
-The LLVM test suite contains two major categories of tests: code fragments
-and whole programs. Code fragments are in the llvm module under the
-llvm/test directory. The whole programs test suite is in the llvm-test
-module under the main directory.
To run the tests with Valgrind (Memcheck by default), just append +VG=1 to the commands above, e.g.:

+
+
+% gmake check VG=1
+
- + -
-Code fragments are small pieces of code that test a specific feature of LLVM
-or trigger a specific bug in LLVM. They are usually written in LLVM assembly
-language, but can be written in other languages if the test targets a
-particular language front end.
-Code fragments are not complete programs, and they are never executed to
-determine correct behavior.
-These code fragment tests are located in the llvm/test/Features and
-llvm/test/Regression directories.
To run the comprehensive test suite (tests that compile and execute whole
programs), first checkout and setup the test-suite module:

% cd llvm/projects
% svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite
% cd ..
% ./configure --with-llvmgccdir=$LLVM_GCC_DIR
where $LLVM_GCC_DIR is the directory where +you installed llvm-gcc, not it's src or obj +dir. The --with-llvmgccdir option assumes that +the llvm-gcc-4.2 module was configured with +--program-prefix=llvm-, and therefore that the C and C++ +compiler drivers are called llvm-gcc and llvm-g++ +respectively. If this is not the case, +use --with-llvmgcc/--with-llvmgxx to specify each +executable's location.

Then, run the entire test suite by running make in the test-suite directory:
-Whole Programs are pieces of code which can be compiled and linked into a
-stand-alone program that can be executed. These programs are generally
-written in high level languages such as C or C++, but sometimes they are
-written straight in LLVM assembly.
% cd projects/test-suite
% gmake
-These programs are compiled and then executed using several different
-methods (native compiler, LLVM C backend, LLVM JIT, LLVM native code
-generation, etc). The output of these programs is compared to ensure that
-LLVM is compiling the program correctly.

Usually, running the "nightly" set of tests is a good idea, and you can also
let it generate a report by running:
-In addition to compiling and executing programs, whole program tests serve
-as a way of benchmarking LLVM performance, both in terms of the efficiency
-of the programs generated as well as the speed with which LLVM compiles,
-optimizes, and generates code.

% cd projects/test-suite
% gmake TEST=nightly report report.html
-All "whole program" tests are located in the llvm-test CVS module.

Any of the above commands can also be run in a subdirectory of
projects/test-suite to run the specified test only on the programs in that
subdirectory.
DejaGNU structure

The LLVM DejaGNU tests are driven by DejaGNU together with GNU Make and are
located in the llvm/test directory.
-Each type of test in the LLVM test suite has its own directory. The major
-subtrees of the test suite directory tree are as follows:
  * llvm/test
    This directory contains a large array of small tests that exercise
    various features of LLVM and ensure that regressions do not occur. The
    directory is broken into several sub-directories, each focused on a
    particular area of LLVM. A few of the important ones are:

      * Analysis: checks Analysis passes.
      * Archive: checks the Archive library.
      * Assembler: checks Assembly reader/writer functionality.
-     * Bytecode: checks Bytecode reader/writer functionality.
+     * Bitcode: checks Bitcode reader/writer functionality.
      * CodeGen: checks code generation and each target.
      * Features: checks various features of the LLVM language.
-     * Linker: tests bytecode linking.
+     * Linker: tests bitcode linking.
      * Transforms: tests each of the scalar, IPO, and utility transforms to
        ensure they make the right transformations.
      * Verifier: tests the IR verifier.

-   Typically when a bug is found in LLVM, a regression test containing just
-   enough code to reproduce the problem should be written and placed
-   somewhere underneath this directory. In most cases, this will be a small
-   piece of LLVM assembly language code, often distilled from an actual
-   application or benchmark.
-  * llvm-test
-    The llvm-test CVS module contains programs that can be compiled with
-    LLVM and executed. These programs are compiled using the native compiler
-    and various LLVM backends. The output from the program compiled with the
-    native compiler is assumed correct; the results from the other programs
-    are compared to the native program output and pass if they match.
-    In addition for testing correctness, the llvm-test directory also
-    performs timing tests of various LLVM optimizations. It also records
-    compilation times for the compilers and the JIT. This information can be
-    used to compare the effectiveness of LLVM's optimizations and code
-    generation.
-  * llvm-test/SingleSource
-    The SingleSource directory contains test programs that are only a single
-    source file in size. These are usually small benchmark programs or small
-    programs that calculate a particular value. Several such programs are
-    grouped together in each directory.
-  * llvm-test/MultiSource
-    The MultiSource directory contains subdirectories which contain entire
-    programs with multiple source files. Large benchmarks and whole
-    applications go here.
-  * llvm-test/External
-    The External directory contains Makefiles for building code that is
-    external to (i.e., not distributed with) LLVM. The most prominent
-    members of this directory are the SPEC 95 and SPEC 2000 benchmark
-    suites. The presence and location of these external programs is
-    configured by the llvm-test configure script.
-The LLVM test suite is partially driven by DejaGNU and partially driven by
-GNU Make. Specifically, the Features and Regression tests are all driven by
-DejaGNU. The llvm-test module is currently driven by a set of Makefiles.

The DejaGNU structure is very simple, but does require some information to
be set. This information is gathered via configure and is written to a file,
site.exp, in llvm/test. The llvm/test
@@ -289,7 +313,9 @@
Makefile does this work for you.

In order for DejaGNU to work, each directory of tests must have a dg.exp
file. DejaGNU looks for this file to determine how to run the tests. This
file is just a Tcl script and it can do anything you want, but we've
standardized it for the LLVM regression tests. If you're adding a directory
of tests, just copy dg.exp from another directory to get running. The
standard dg.exp simply loads a Tcl library (test/lib/llvm.exp) and calls the
llvm_runtests function defined in that library with a list of file names to
run. The names are obtained by using Tcl's glob command. Any directory that
contains only
@@ -318,21 +344,361 @@
line to be concatenated with the next one. In this way you can build up long
pipelines of commands without making huge line lengths. The lines ending in
\ are concatenated until a RUN line that doesn't end in \ is found. This
concatenated set of RUN lines then constitutes one execution. Tcl will
substitute variables and arrange for the pipeline to be executed. If any
process in the pipeline fails, the entire line (and test case) fails too.

Below is an example of legal RUN lines in a .ll file:

-
-  ; RUN: llvm-as < %s | llvm-dis > %t1
-  ; RUN: llvm-dis < %s.bc-13 > %t2
-  ; RUN: diff %t1 %t2
-  
+ +
+
+; RUN: llvm-as < %s | llvm-dis > %t1
+; RUN: llvm-dis < %s.bc-13 > %t2
+; RUN: diff %t1 %t2
+
+

As with a Unix shell, the RUN: lines permit pipelines and I/O redirection + to be used. However, the usage is slightly different than for Bash. To check + what's legal, see the documentation for the + Tcl exec + command and the + tutorial. + The major differences are:

+
    +
  • You can't do 2>&1. That will cause Tcl to write to a + file named &1. Usually this is done to get stderr to go through + a pipe. You can do that in tcl with |& so replace this idiom: + ... 2>&1 | grep with ... |& grep
  • +
  • You can only redirect to a file, not to another descriptor and not from + a here document.
  • +
  • tcl supports redirecting to open files with the @ syntax but you + shouldn't use that here.
  • +
+ +

There are some quoting rules that you must pay attention to when writing + your RUN lines. In general nothing needs to be quoted. Tcl won't strip off any + ' or " so they will get passed to the invoked program. For example:

+ +
+
+... | grep 'find this string'
+
+
+ +

This will fail because the ' characters are passed to grep. This would + instruction grep to look for 'find in the files this and + string'. To avoid this use curly braces to tell Tcl that it should + treat everything enclosed as one value. So our example would become:

+ +
+
+... | grep {find this string}
+
+
+ +

Additionally, the characters [ and ] are treated + specially by Tcl. They tell Tcl to interpret the content as a command to + execute. Since these characters are often used in regular expressions this can + have disastrous results and cause the entire test run in a directory to fail. + For example, a common idiom is to look for some basicblock number:

+ +
+
+... | grep bb[2-8]
+
+
+ +

This, however, will cause Tcl to fail because its going to try to execute + a program named "2-8". Instead, what you want is this:

+ +
+
+... | grep {bb\[2-8\]}
+
+
+ +

Finally, if you need to pass the \ character down to a program, + then it must be doubled. This is another Tcl special character. So, suppose + you had: + +

+
+... | grep 'i32\*'
+
+
+ +

This will fail to match what you want (a pointer to i32). First, the + ' do not get stripped off. Second, the \ gets stripped off + by Tcl so what grep sees is: 'i32*'. That's not likely to match + anything. To resolve this you must use \\ and the {}, like + this:

+ +
+
+... | grep {i32\\*}
+
+
+ +

If your system includes GNU grep, make sure +that GREP_OPTIONS is not set in your environment. Otherwise, +you may get invalid results (both false positives and false +negatives).

+ +
+ + + + + +
+ +

A powerful feature of the RUN: lines is that it allows any arbitrary commands + to be executed as part of the test harness. While standard (portable) unix + tools like 'grep' work fine on run lines, as you see above, there are a lot + of caveats due to interaction with Tcl syntax, and we want to make sure the + run lines are portable to a wide range of systems. Another major problem is + that grep is not very good at checking to verify that the output of a tools + contains a series of different output in a specific order. The FileCheck + tool was designed to help with these problems.

+ +

FileCheck (whose basic command line arguments are described in the FileCheck man page is + designed to read a file to check from standard input, and the set of things + to verify from a file specified as a command line argument. A simple example + of using FileCheck from a RUN line looks like this:

+ +
; RUN: llvm-as < %s | llc -march=x86-64 | FileCheck %s
This syntax says to pipe the current file ("%s") into llvm-as, pipe that into +llc, then pipe the output of llc into FileCheck. This means that FileCheck will +be verifying its standard input (the llc output) against the filename argument +specified (the original .ll file specified by "%s"). To see how this works, +lets look at the rest of the .ll file (after the RUN line):

+ +
define void @sub1(i32* %p, i32 %v) {
entry:
; CHECK: sub1:
; CHECK: subl
        %0 = tail call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %p, i32 %v)
        ret void
}

define void @inc4(i64* %p) {
entry:
; CHECK: inc4:
; CHECK: incq
        %0 = tail call i64 @llvm.atomic.load.add.i64.p0i64(i64* %p, i64 1)
        ret void
}
Here you can see some "CHECK:" lines specified in comments. Now you can see +how the file is piped into llvm-as, then llc, and the machine code output is +what we are verifying. FileCheck checks the machine code output to verify that +it matches what the "CHECK:" lines specify.

+ +

The syntax of the CHECK: lines is very simple: they are fixed strings that +must occur in order. FileCheck defaults to ignoring horizontal whitespace +differences (e.g. a space is allowed to match a tab) but otherwise, the contents +of the CHECK: line is required to match some thing in the test file exactly.

+ +

One nice thing about FileCheck (compared to grep) is that it allows merging +test cases together into logical groups. For example, because the test above +is checking for the "sub1:" and "inc4:" labels, it will not match unless there +is a "subl" in between those labels. If it existed somewhere else in the file, +that would not count: "grep subl" matches if subl exists anywhere in the +file.

+ +
+ + + + +
+ +

The FileCheck -check-prefix option allows multiple test configurations to be +driven from one .ll file. This is useful in many circumstances, for example, +testing different architectural variants with llc. Here's a simple example:

+ +
+
+; RUN: llvm-as < %s | llc -mtriple=i686-apple-darwin9 -mattr=sse41 \
+; RUN:              | FileCheck %s -check-prefix=X32
+; RUN: llvm-as < %s | llc -mtriple=x86_64-apple-darwin9 -mattr=sse41 \
+; RUN:              | FileCheck %s -check-prefix=X64
+
+define <4 x i32> @pinsrd_1(i32 %s, <4 x i32> %tmp) nounwind {
+        %tmp1 = insertelement <4 x i32> %tmp, i32 %s, i32 1
+        ret <4 x i32> %tmp1
+; X32: pinsrd_1:
+; X32:    pinsrd $1, 4(%esp), %xmm0
+
+; X64: pinsrd_1:
+; X64:    pinsrd $1, %edi, %xmm0
+}
+
+
+ +

In this case, we're testing that we get the expected code generation with +both 32-bit and 64-bit code generation.

+ +
+ + + + +
+ +

Sometimes you want to match lines and would like to verify that matches +happen on exactly consequtive lines with no other lines in between them. In +this case, you can use CHECK: and CHECK-NEXT: directives to specify this. If +you specified a custom check prefix, just use "<PREFIX>-NEXT:". For +example, something like this works as you'd expect:

+ +
define void @t2(<2 x double>* %r, <2 x double>* %A, double %B) {
        %tmp3 = load <2 x double>* %A, align 16
        %tmp7 = insertelement <2 x double> undef, double %B, i32 0
        %tmp9 = shufflevector <2 x double> %tmp3,
                              <2 x double> %tmp7,
                              <2 x i32> < i32 0, i32 2 >
        store <2 x double> %tmp9, <2 x double>* %r, align 16
        ret void

; CHECK: t2:
; CHECK:        movl    8(%esp), %eax
; CHECK-NEXT:   movapd  (%eax), %xmm0
; CHECK-NEXT:   movhpd  12(%esp), %xmm0
; CHECK-NEXT:   movl    4(%esp), %eax
; CHECK-NEXT:   movapd  %xmm0, (%eax)
; CHECK-NEXT:   ret
}
CHECK-NEXT: directives reject the input unless there is exactly one newline +between it an the previous directive. A CHECK-NEXT cannot be the first +directive in a file.

+ +
+ + + + +
+ +

The CHECK-NOT: directive is used to verify that a string doesn't occur +between two matches (or the first match and the beginning of the file). For +example, to verify that a load is removed by a transformation, a test like this +can be used:

+ +
define i8 @coerce_offset0(i32 %V, i32* %P) {
  store i32 %V, i32* %P

  %P2 = bitcast i32* %P to i8*
  %P3 = getelementptr i8* %P2, i32 2

  %A = load i8* %P3
  ret i8 %A
; CHECK: @coerce_offset0
; CHECK-NOT: load
; CHECK: ret i8
}

FileCheck Pattern Matching Syntax
The CHECK: and CHECK-NOT: directives both take a pattern to match. For most +uses of FileCheck, fixed string matching is perfectly sufficient. For some +things, a more flexible form of matching is desired. To support this, FileCheck +allows you to specify regular expressions in matching strings, surrounded by +double braces: {{yourregex}}. Because we want to use fixed string +matching for a majority of what we do, FileCheck has been designed to support +mixing and matching fixed string matching with regular expressions. This allows +you to write things like this:

+ +
; CHECK: movhpd {{[0-9]+}}(%esp), {{%xmm[0-7]}}
In this case, any offset from the ESP register will be allowed, and any xmm +register will be allowed.

+ +

Because regular expressions are enclosed with double braces, they are +visually distinct, and you don't need to use escape characters within the double +braces like you would in C. In the rare case that you want to match double +braces explicitly from the input, you can use something ugly like +{{[{][{]}} as your pattern.

+ +
+ + + + +
+ +

It is often useful to match a pattern and then verify that it occurs again +later in the file. For codegen tests, this can be useful to allow any register, +but verify that that register is used consistently later. To do this, FileCheck +allows named variables to be defined and substituted into patterns. Here is a +simple example:

+ +
; CHECK: test5:
; CHECK:    notw     [[REGISTER:%[a-z]+]]
; CHECK:    andw     {{.*}}[[REGISTER]]
The first check line matches a regex (%[a-z]+) and captures it into +the variables "REGISTER". The second line verifies that whatever is in REGISTER +occurs later in the file after an "andw". FileCheck variable references are +always contained in [[ ]] pairs, are named, and their names can be +formed with the regex "[a-zA-Z][a-zA-Z0-9]*". If a colon follows the +name, then it is a definition of the variable, if not, it is a use.

+ +

FileCheck variables can be defined multiple times, and uses always get the +latest value. Note that variables are all read at the start of a "CHECK" line +and are all defined at the end. This means that if you have something like +"CHECK: [[XYZ:.*]]x[[XYZ]]" that the check line will read the previous +value of the XYZ variable and define a new one after the match is performed. If +you need to do something like this you can probably take advantage of the fact +that FileCheck is not actually line-oriented when it matches, this allows you to +define two separate CHECK lines that match on the same line. +

+ +
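For example, here is a sketch of two CHECK lines that both match within a
single line of input (the instruction text is hypothetical):

; CHECK: addw [[REG:%[a-z]+]]
; CHECK: , [[REG]]

Against an input line like "addw %ax, %ax", the first directive matches
"addw %ax" and defines REG; the second match continues on the same input
line, verifying that the second operand is the same register.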
+ + + -

With a RUN line there are a number of substitutions that are permitted. In general, any Tcl variable that is available in the substitute @@ -342,70 +708,77 @@ location of these external programs is configured by the llvm-test library, certain names can be accessed with an alternate syntax: a % prefix. These alternates are deprecated and may go away in a future version.

- Here are the available variable names. The alternate syntax is listed in +

Here are the available variable names. The alternate syntax is listed in parentheses.

+
$test (%s)
    The full path to the test case's source. This is suitable for passing on
    the command line as the input to an llvm tool.
$srcdir
    The source directory from where the "make check" was run.
objdir
    The object directory that corresponds to the $srcdir.
subdir
    A partial path from the test directory that contains the sub-directory
    that contains the test source being executed.
srcroot
    The root directory of the LLVM src tree.
objroot
    The root directory of the LLVM object tree. This could be the same as
    the srcroot.
path
    The path to the directory that contains the test case source. This is
    for locating any supporting files that are not generated by the test,
    but used by the test.
tmp
    The path to a temporary file name that could be used for this test case.
    The file name won't conflict with other test cases. You can append to it
    if you need multiple temporaries. This is useful as the destination of
    some redirected output.
llvmlibsdir (%llvmlibsdir)
    The directory where the LLVM libraries are located.
target_triplet (%target_triplet)
    The target triplet that corresponds to the current host machine (the one
    running the test cases). This should probably be called "host".
-prcontext (%prcontext)
-    Path to the prcontext tcl script that prints some context around a line
-    that matches a pattern. This isn't strictly necessary as the test suite
-    is run with its PATH altered to include the test/Scripts directory
-    where the prcontext script is located. Note that this script is similar
-    to grep -C but you should use the prcontext script because not all
-    platforms support grep -C.
llvmgcc (%llvmgcc)
    The full path to the llvm-gcc executable as specified in the configured
    LLVM environment.
llvmgxx (%llvmgxx)
    The full path to the llvm-g++ executable as specified in the configured
    LLVM environment.
-llvmgcc_version (%llvmgcc_version)
-    The full version number of the llvm-gcc executable.
-llvmgccmajvers (%llvmgccmajvers)
-    The major version number of the llvm-gcc executable.
gccpath
    The full path to the C compiler used to build LLVM. Note that this might
    not be gcc.
gxxpath
    The full path to the C++ compiler used to build LLVM. Note that this
    might not be g++.
compile_c (%compile_c)
    The full command line used to compile LLVM C source code. This has all
    the configured -I, -D and optimization options.
compile_cxx (%compile_cxx)
    The full command used to compile LLVM C++ source code. This has all the
    configured -I, -D and optimization options.
link (%link)
    The full link command used to link LLVM executables. This has all the
    configured -I, -L and -l options.
shlibext (%shlibext)
    The suffix for the host platform's shared library (dll) files. This
    includes the period as the first character.
@@ -420,9 +793,12 @@

To make RUN line writing easier, there are several shell scripts located - in the llvm/test/Scripts directory. For example:

+ in the llvm/test/Scripts directory. This directory is in the PATH + when running tests, so you can just call these scripts using their name. For + example:

ignore
This script runs its arguments and then always returns 0. This is useful @@ -431,6 +807,7 @@ location of these external programs is configured by the llvm-test non-zero result will cause the test to fail. This script overcomes that issue and nicely documents that the test case is purposefully ignoring the result code of the tool
+
not
This script runs its arguments and then inverts the result code from it. Zero result codes become 1. Non-zero result codes become 0. This is @@ -439,26 +816,27 @@ location of these external programs is configured by the llvm-test
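For example, a hypothetical test that should pass only when llvm-as rejects
its input could use a RUN line like this (illustrative; the point is that
not inverts the tool's exit status):

; RUN: not llvm-as < %s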

Sometimes it is necessary to mark a test case as "expected fail" or XFAIL. - You can easily mark a test as XFAIL just by including XFAIL: on a + You can easily mark a test as XFAIL just by including XFAIL: on a line near the top of the file. This signals that the test case should succeed if the test fails. Such test cases are counted separately by DejaGnu. To specify an expected fail, use the XFAIL keyword in the comments of the test program followed by a colon and one or more regular expressions (separated by - a comma). The regular expressions allow you to XFAIL the test conditionally - by host platform. The regular expressions following the : are matched against - the target triplet or llvmgcc version number for the host machine. If there is - a match, the test is expected to fail. If not, the test is expected to - succeed. To XFAIL everywhere just specify XFAIL: *. When matching - the llvm-gcc version, you can specify the major (e.g. 3) or full version - (i.e. 3.4) number. Here is an example of an XFAIL line:

-
-   ; XFAIL: darwin,sun,llvmgcc4
-  
+ a comma). The regular expressions allow you to XFAIL the test conditionally by + host platform. The regular expressions following the : are matched against the + target triplet for the host machine. If there is a match, the test is expected + to fail. If not, the test is expected to succeed. To XFAIL everywhere just + specify XFAIL: *. Here is an example of an XFAIL line:

+ +
+
+; XFAIL: darwin,sun
+
+

To make the output more useful, the llvm_runtest function wil scan the lines of the test case for ones that contain a pattern that matches PR[0-9]+. This is the syntax for specifying a PR (Problem Report) number that - is related to the test case. The numer after "PR" specifies the LLVM bugzilla + is related to the test case. The number after "PR" specifies the LLVM bugzilla number. When a PR number is specified, it will be used in the pass/fail reporting. This is useful to quickly get some context when a test fails.

@@ -472,64 +850,75 @@ location of these external programs is configured by the llvm-test
Test suite structure

-As mentioned previously, the llvm-test module provides three types of
-tests: MultiSource, SingleSource, and External. Each tree is then
-subdivided into several categories, including applications, benchmarks,
-regression tests, code that is strange grammatically, etc. These
-organizations should be relatively self explanatory.

The test-suite module contains a number of programs that can be compiled
with LLVM and executed. These programs are compiled using the native
compiler and various LLVM backends. The output from the program compiled
with the native compiler is assumed correct; the results from the other
programs are compared to the native program output and pass if they match.
In addition to the regular "whole program" tests, the llvm-test -module also provides a mechanism for compiling the programs in different ways. -If the variable TEST is defined on the gmake command line, the test system will -include a Makefile named TEST.<value of TEST variable>.Makefile. -This Makefile can modify build rules to yield different results.

+

When executing tests, it is usually a good idea to start out with a subset of +the available tests or programs. This makes test run times smaller at first and +later on this is useful to investigate individual test failures. To run some +test only on a subset of programs, simply change directory to the programs you +want tested and run gmake there. Alternatively, you can run a different +test using the TEST variable to change what tests or run on the +selected programs (see below for more info).

-

For example, the LLVM nightly tester uses TEST.nightly.Makefile to -create the nightly test reports. To run the nightly tests, run gmake -TEST=nightly.

+

In addition for testing correctness, the llvm-test directory also +performs timing tests of various LLVM optimizations. It also records +compilation times for the compilers and the JIT. This information can be +used to compare the effectiveness of LLVM's optimizations and code +generation.

-

There are several TEST Makefiles available in the tree. Some of them are -designed for internal LLVM research and will not work outside of the LLVM -research group. They may still be valuable, however, as a guide to writing your -own TEST Makefile for any optimization or analysis passes that you develop with -LLVM.

+

llvm-test tests are divided into three types of tests: MultiSource, +SingleSource, and External.

+ +
    +
  • llvm-test/SingleSource +

    The SingleSource directory contains test programs that are only a single +source file in size. These are usually small benchmark programs or small +programs that calculate a particular value. Several such programs are grouped +together in each directory.

  • + +
  • llvm-test/MultiSource +

    The MultiSource directory contains subdirectories which contain entire +programs with multiple source files. Large benchmarks and whole applications +go here.

  • + +
  • llvm-test/External +

    The External directory contains Makefiles for building code that is external +to (i.e., not distributed with) LLVM. The most prominent members of this +directory are the SPEC 95 and SPEC 2000 benchmark suites. The External +directory does not contain these actual tests, but only the Makefiles that know +how to properly compile these programs from somewhere else. The presence and +location of these external programs is configured by the llvm-test +configure script.

  • +
+ +

Each tree is then subdivided into several categories, including applications, +benchmarks, regression tests, code that is strange grammatically, etc. These +organizations should be relatively self explanatory.

+ +

Some tests are known to fail. Some are bugs that we have not fixed yet; +others are features that we haven't added yet (or may never add). In DejaGNU, +the result for such tests will be XFAIL (eXpected FAILure). In this way, you +can tell the difference between an expected and unexpected failure.

+ +

The tests in the test suite have no such feature at this time. If the +test passes, only warnings and other miscellaneous output will be generated. If +a test fails, a large <program> FAILED message will be displayed. This +will help you separate benign warnings from actual test failures.

-

Note, when configuring the llvm-test module, you might want to -specify the following configuration options:

-
-
--enable-spec2000 -
--enable-spec2000=<directory> -
- Enable the use of SPEC2000 when testing LLVM. This is disabled by default - (unless configure finds SPEC2000 installed). By specifying - directory, you can tell configure where to find the SPEC2000 - benchmarks. If directory is left unspecified, configure - uses the default value - /home/vadve/shared/benchmarks/speccpu2000/benchspec. -

-

--enable-spec95 -
--enable-spec95=<directory> -
- Enable the use of SPEC95 when testing LLVM. It is similar to the - --enable-spec2000 option. -

-

--enable-povray -
--enable-povray=<directory> -
- Enable the use of Povray as an external test. Versions of Povray written - in C should work. This option is similar to the --enable-spec2000 - option. -
- +
@@ -538,80 +927,151 @@ specify the following configuration options:

are not executed inside of the LLVM source tree. This is because the test suite creates temporary files during execution.

-

The master Makefile in llvm/test is capable of running only the DejaGNU -driven tests. By default, it will run all of these tests.

+

To run the test suite, you need to use the following steps:

-

To run only the DejaGNU driven tests, run gmake at the -command line in llvm/test. To run a specific directory of tests, use -the TESTSUITE variable. -

+
    +
  1. cd into the llvm/projects directory in your source tree. +
  2. -

    For example, to run the Regression tests, type -gmake TESTSUITE=Regression in llvm/tests.

    +
  3. Check out the test-suite module with:

    -

    Note that there are no Makefiles in llvm/test/Features and -llvm/test/Regression. You must use DejaGNU from the llvm/test -directory to run them.

    +
    +
    +% svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite
    +
    +
    +

    This will get the test suite into llvm/projects/test-suite.

    +
  4. +
  5. Configure and build llvm.

  6. +
  7. Configure and build llvm-gcc.

  8. +
  9. Install llvm-gcc somewhere.

  10. +
  11. Re-configure llvm from the top level of + each build tree (LLVM object directory tree) in which you want + to run the test suite, just as you do before building LLVM.

    +

    During the re-configuration, you must either: (1) + have llvm-gcc you just built in your path, or (2) + specify the directory where your just-built llvm-gcc is + installed using --with-llvmgccdir=$LLVM_GCC_DIR.

    +

    You must also tell the configure machinery that the test suite + is available so it can be configured for your build tree:

    +
    +
    +% cd $LLVM_OBJ_ROOT ; $LLVM_SRC_ROOT/configure [--with-llvmgccdir=$LLVM_GCC_DIR]
    +
    +
    +

    [Remember that $LLVM_GCC_DIR is the directory where you + installed llvm-gcc, not its src or obj directory.]

    +
-To run the llvm-test suite, you need to use the following steps:
-  1. cd into the llvm/projects directory
-  2. check out the llvm-test module with:
-     cvs -d :pserver:anon@llvm.org:/var/cvs/llvm co -PR llvm-test
-     This will get the test suite into llvm/projects/llvm-test
-  3. configure the test suite. You can do this one of two ways:
-     1. Use the regular llvm configure:
-        cd $LLVM_OBJ_ROOT ; $LLVM_SRC_ROOT/configure
-        This will ensure that the projects/llvm-test directory is also
-        properly configured.
-     2. Use the configure script found in the llvm-test source directory:
-        $LLVM_SRC_ROOT/projects/llvm-test/configure
-        --with-llvmsrc=$LLVM_SRC_ROOT --with-llvmobj=$LLVM_OBJ_ROOT
-  4. gmake
  7. You can now run the test suite from your build tree as follows:

     % cd $LLVM_OBJ_ROOT/projects/test-suite
     % make

Note that the second and third steps only need to be done once. After you
have the suite checked out and configured, you don't need to do it again
(unless the test code or configure script changes).
-To make a specialized test (use one of the
-llvm-test/TEST.<type>.Makefiles), just run: gmake TEST=<type> test. For
-example, you could run the nightly tester tests using the following
-commands:
-
- % cd llvm/projects/llvm-test
- % gmake TEST=nightly test
-
-Regardless of which test you're running, the results are printed on
-standard output and standard error. You can redirect these results to a
-file if you choose.

Configuring External Tests
In order to run the External tests in the test-suite module, you must
specify --with-externals. This must be done during the re-configuration step
(see above), and the llvm re-configuration must recognize the
previously-built llvm-gcc. If any of these is missing or neglected, the
External tests won't work.

--with-externals
--with-externals=<directory>
    This tells LLVM where to find any external tests. They are expected to
    be in specifically named subdirectories of <directory>. If directory is
    left unspecified, configure uses the default value
    /home/vadve/shared/benchmarks/speccpu2000/benchspec. Subdirectory names
    known to LLVM include:
        spec95
        speccpu2000
        speccpu2006
        povray31
    Others are added from time to time, and can be determined from
    configure.
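For example, a re-configure line with externals enabled might look like this
sketch (the externals path is illustrative):

% cd $LLVM_OBJ_ROOT ; $LLVM_SRC_ROOT/configure --with-llvmgccdir=$LLVM_GCC_DIR \
      --with-externals=/path/to/externals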

Some tests are known to fail. Some are bugs that we have not fixed yet; -others are features that we haven't added yet (or may never add). In DejaGNU, -the result for such tests will be XFAIL (eXpected FAILure). In this way, you -can tell the difference between an expected and unexpected failure.

+ + + +
+

In addition to the regular "whole program" tests, the test-suite +module also provides a mechanism for compiling the programs in different ways. +If the variable TEST is defined on the gmake command line, the test system will +include a Makefile named TEST.<value of TEST variable>.Makefile. +This Makefile can modify build rules to yield different results.

-

The tests in llvm-test have no such feature at this time. If the -test passes, only warnings and other miscellaneous output will be generated. If -a test fails, a large <program> FAILED message will be displayed. This -will help you separate benign warnings from actual test failures.

+

For example, the LLVM nightly tester uses TEST.nightly.Makefile to +create the nightly test reports. To run the nightly tests, run gmake +TEST=nightly.

+ +

There are several TEST Makefiles available in the tree. Some of them are +designed for internal LLVM research and will not work outside of the LLVM +research group. They may still be valuable, however, as a guide to writing your +own TEST Makefile for any optimization or analysis passes that you develop with +LLVM.

+Generating test output
+ +
There are a number of ways to run the tests and generate output. The
simplest one is running gmake with no arguments. This will compile and run
all programs in the tree using a number of different methods and compare
results. Any failures are reported in the output, but are likely drowned in
the other output. Passes are not reported explicitly.

Somewhat better is running gmake TEST=sometest test, which runs the
specified test and usually adds per-program summaries to the output
(depending on which sometest you use). For example, the nightly test
explicitly outputs TEST-PASS or TEST-FAIL for every test after each program.
Though these lines are still drowned in the output, it's easy to grep the
output logs in the Output directories.

Even better are the report and report.format targets (where format is one of
html, csv, text or graphs). The exact contents of the report are dependent
on which TEST you are running, but the text results are always shown at the
end of the run and the results are always stored in the
report.<type>.format file (when running with TEST=<type>). The report also
generates a file called report.<type>.raw.out containing the output of the
entire test run.
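For instance, to get the nightly results as CSV, and then to fish the
pass/fail lines back out of the logs afterwards (the grep target path is
illustrative):

% gmake TEST=nightly report.csv
% grep -r TEST-FAIL Output/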
Writing custom tests for the test suite

-Assuming you can run llvm-test, (e.g. "gmake TEST=nightly report" should
+Assuming you can run the test suite (e.g. "gmake TEST=nightly report" should
 work), it is really easy to run optimizations or code generator components
 against every program in the tree, collecting statistics or running custom
 checks for correctness. At base, this is how the nightly tester works, it's
 just one example of a general framework.
@@ -624,10 +1084,10 @@
will tally counts of things you care about.

Following this, you can set up a test and a report that collects these and
formats them for easy viewing. This consists of two files, a
"test-suite/TEST.XXX.Makefile" fragment (where XXX is the name of your test)
and a "test-suite/TEST.XXX.report" file that indicates how to format the
output into a table. There are many example reports of various levels of
sophistication included with the test suite, and the framework is very
general.

If you are interested in testing an optimization pass, check out the
"libcalls" test as an example. It can be run like this:
@@ -635,7 +1095,7 @@

-% cd llvm/projects/llvm-test/MultiSource/Benchmarks  # or some other level
+% cd llvm/projects/test-suite/MultiSource/Benchmarks  # or some other level
 % make TEST=libcalls report
@@ -666,7 +1126,7 @@ Prolangs-C/bison/mybison | 74 | * |
You can also use the "TEST=libcalls report.html" target to get the table in
HTML form, similarly for report.csv and report.tex.

-

The source for this is in llvm-test/TEST.libcalls.*. The format is pretty +

The source for this is in test-suite/TEST.libcalls.*. The format is pretty simple: the Makefile indicates how to run the test (in this case, "opt -simplify-libcalls -stats"), and the report contains one line for each column of the output. The first value is the header for the column and the @@ -675,7 +1135,6 @@ example reports that can do fancy stuff.

- @@ -685,7 +1144,7 @@ example reports that can do fancy stuff.

The LLVM Nightly Testers automatically check out an LLVM tree, build it, run the "nightly" -program test (described above), run all of the feature and regression tests, +program test (described above), run all of the DejaGNU tests, delete the checked out tree, and then submit the results to http://llvm.org/nightlytest/. After test results are submitted to @@ -700,24 +1159,15 @@ as keep track of LLVM's progress over time.

machine, take a look at the comments at the top of the utils/NewNightlyTest.pl file. If you decide to set up a nightly tester please choose a unique nickname and invoke utils/NewNightlyTest.pl -with the "-nickname [yournickname]" command line option. We usually run it -from a crontab entry that looks like this:

- -
-5 3 * * *  $HOME/llvm/utils/NewNightlyTest.pl -parallel -nickname Nickname \
-           $CVSROOT $HOME/buildtest $HOME/cvs/testresults

Or, you can create a shell script to encapsulate the running of the script. +

You can create a shell script to encapsulate the running of the script. The optimized x86 Linux nightly test is run from just such a script:

#!/bin/bash
BASE=/proj/work/llvm/nightlytest
-export CVSROOT=:pserver:anon@llvm.org:/var/cvs/llvm
export BUILDDIR=$BASE/build
export WEBDIR=$BASE/testresults
export LLVMGCCDIR=/proj/work/llvm/cfrontend/install
@@ -726,7 +1176,7 @@ export LD_LIBRARY_PATH=/proj/install/lib
cd $BASE
cp /proj/work/llvm/llvm/utils/NewNightlyTest.pl .
nice ./NewNightlyTest.pl -nice -release -verbose -parallel -enable-linscan \
-   -nickname NightlyTester -noexternals 2>&1 > output.log
+   -nickname NightlyTester -noexternals > output.log 2>&1
@@ -750,12 +1200,12 @@ know. Thanks!


John T. Criswell, Reid Spencer, and Tanya Lattner
The LLVM Compiler Infrastructure
Last modified: $Date$