This document is the reference manual for the LLVM testing infrastructure. It documents the structure of the testing infrastructure, the tools needed to use it, and how to add and run tests.
In order to use the LLVM testing infrastructure, you will need all of the software required to build LLVM, plus the following: DejaGNU, Expect, and tcl.
The LLVM testing infrastructure contains two major categories of tests: code fragments and whole programs. Code fragments are referred to as the "DejaGNU tests" and are in the llvm module in subversion under the llvm/test directory. The whole-program tests are referred to as the "Test suite" and are in the test-suite module in subversion.
Code fragments are small pieces of code that test a specific feature of LLVM or trigger a specific bug in LLVM. They are usually written in LLVM assembly language, but can be written in other languages if the test targets a particular language front end (and the appropriate --with-llvmgcc options were used when the llvm module was configured). These tests are driven by the DejaGNU testing framework, which is hidden behind a few simple makefiles.

These code fragments are not complete programs. The code generated from them is never executed to determine correct behavior.

These code fragment tests are located in the llvm/test directory.

Typically when a bug is found in LLVM, a regression test containing just enough code to reproduce the problem should be written and placed somewhere underneath this directory. In most cases, this will be a small piece of LLVM assembly language code, often distilled from an actual application or benchmark.
The test suite contains whole programs, which are pieces of code that can be compiled and linked into a stand-alone program that can be executed. These programs are generally written in high level languages such as C or C++, but sometimes they are written straight in LLVM assembly.

These programs are compiled and then executed using several different methods (native compiler, LLVM C backend, LLVM JIT, LLVM native code generation, etc). The output of these programs is compared to ensure that LLVM is compiling the program correctly.

In addition to compiling and executing programs, whole program tests serve as a way of benchmarking LLVM performance, both in terms of the efficiency of the programs generated and the speed with which LLVM compiles, optimizes, and generates code.

The test suite is located in the test-suite Subversion module.

The tests are located in two separate Subversion modules. The DejaGNU tests are in the main "llvm" module under the directory llvm/test (so you get these tests for free with the main llvm tree). The more comprehensive test suite that includes whole programs in C and C++ is in the test-suite module. This module should be checked out to the llvm/projects directory (don't use another name than the default "test-suite", for then the test suite will be run every time you run make in the main llvm directory). When you configure the llvm module, the test-suite directory will be automatically configured. Alternatively, you can configure the test-suite module manually.
To run all of the simple tests in LLVM using DejaGNU, use the master Makefile in the llvm/test directory:

% gmake -C llvm/test

or

% gmake check
To run only a subdirectory of tests in llvm/test using DejaGNU (e.g. Transforms), just set the TESTSUITE variable to the path of the subdirectory (relative to llvm/test):

% gmake TESTSUITE=Transforms check
Note: If you are running the tests with objdir != srcdir, you must have run the complete test suite before you can specify a subdirectory.
To run only a single test, set TESTONE to its path (relative to llvm/test) and make the check-one target:

% gmake TESTONE=Feature/basictest.ll check-one

To run the tests with Valgrind (Memcheck by default), just append VG=1 to the commands above, e.g.:

% gmake check VG=1

To run the comprehensive test suite (tests that compile and execute whole programs), first check out and set up the test-suite module:

% cd llvm/projects
% svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite
% cd ..
% ./configure --with-llvmgccdir=$LLVM_GCC_DIR
where $LLVM_GCC_DIR is the directory where you installed llvm-gcc, not its src or obj dir. The --with-llvmgccdir option assumes that the llvm-gcc-4.2 module was configured with --program-prefix=llvm-, and therefore that the C and C++ compiler drivers are called llvm-gcc and llvm-g++ respectively. If this is not the case, use --with-llvmgcc/--with-llvmgxx to specify each executable's location.
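For instance, if the drivers are installed under a prefix that is not on your PATH, the configure invocation might look like this (the paths are illustrative):

% ./configure --with-llvmgcc=/opt/llvm-gcc/bin/llvm-gcc \
              --with-llvmgxx=/opt/llvm-gcc/bin/llvm-g++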
Then, run the entire test suite by running make in the test-suite directory:

% cd projects/test-suite
% gmake

Usually, running the "nightly" set of tests is a good idea, and you can also let it generate a report by running:

% cd projects/test-suite
% gmake TEST=nightly report report.html

Any of the above commands can also be run in a subdirectory of projects/test-suite to run the specified test only on the programs in that subdirectory.
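For example, to run the nightly report over just the SingleSource benchmarks (the directory is chosen purely for illustration):

% cd projects/test-suite/SingleSource/Benchmarks
% gmake TEST=nightly report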
The LLVM DejaGNU tests are driven by DejaGNU together with GNU Make and are located in the llvm/test directory.

This directory contains a large array of small tests that exercise various features of LLVM and ensure that regressions do not occur. The directory is broken into several sub-directories, each focused on a particular area of LLVM.
The DejaGNU structure is very simple, but does require some information to be set. This information is gathered via configure and is written to a file, site.exp, in llvm/test. The llvm/test Makefile does this work for you.

In order for DejaGNU to work, each directory of tests must have a dg.exp file. DejaGNU looks for this file to determine how to run the tests. This file is just a Tcl script and it can do anything you want, but we've standardized it for the LLVM regression tests. If you're adding a directory of tests, just copy dg.exp from another directory to get running. The standard dg.exp simply loads a Tcl library (test/lib/llvm.exp) and calls the llvm_runtests function defined in that library with a list of file names to run. The names are obtained by using Tcl's glob command. Any directory that contains only directories does not need the dg.exp file.
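As a rough sketch consistent with the description above (the exact glob patterns vary from directory to directory), a dg.exp looks something like:

load_lib llvm.exp

# Gather the test files in this directory and hand them to the driver.
llvm_runtests [lsort [glob -nocomplain $srcdir/$subdir/*.{ll,c,cpp}]]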
The llvm_runtests function looks at each file that is passed to it and gathers together any lines that match "RUN:". These are the "RUN" lines that specify how the test is to be run. So, each test script must contain RUN lines if it is to do anything. If there are no RUN lines, the llvm_runtests function will issue an error and the test will fail.

RUN lines are specified in the comments of the test program using the keyword RUN followed by a colon, and lastly the command (pipeline) to execute. Together, these lines form the "script" that llvm_runtests executes to run the test case. The syntax of the RUN lines is similar to a shell's syntax for pipelines, including I/O redirection and variable substitution. However, even though these lines may look like a shell script, they are not. RUN lines are interpreted directly by the Tcl exec command. They are never executed by a shell. Consequently, the syntax differs from normal shell script syntax in a few ways. You can specify as many RUN lines as needed.
Each RUN line is executed on its own, distinct from other lines unless its last character is \. This continuation character causes the RUN line to be concatenated with the next one. In this way you can build up long pipelines of commands without making huge line lengths. The lines ending in \ are concatenated until a RUN line that doesn't end in \ is found. This concatenated set of RUN lines then constitutes one execution. Tcl will substitute variables and arrange for the pipeline to be executed. If any process in the pipeline fails, the entire line (and test case) fails too.

Below is an example of legal RUN lines in a .ll file:

; RUN: llvm-as < %s | llvm-dis > %t1
; RUN: llvm-dis < %s.bc-13 > %t2
; RUN: diff %t1 %t2
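A continued RUN line, using the \ rule described above, might look like this (the passes and pattern are purely illustrative):

; RUN: llvm-as < %s | opt -instcombine | llvm-dis | \
; RUN:   grep {ret i32 0}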
As with a Unix shell, the RUN: lines permit pipelines and I/O redirection to be used. However, the usage is slightly different from Bash. To check what's legal, see the documentation for the Tcl exec command and the tutorial.
There are some quoting rules that you must pay attention to when writing your RUN lines. In general nothing needs to be quoted. Tcl won't strip off any ' or " so they will get passed to the invoked program. For example:

... | grep 'find this string'

This will fail because the ' characters are passed to grep. This would instruct grep to look for 'find in the files this and string'. To avoid this use curly braces to tell Tcl that it should treat everything enclosed as one value. So our example would become:

... | grep {find this string}
Additionally, the characters [ and ] are treated specially by Tcl. They tell Tcl to interpret the content as a command to execute. Since these characters are often used in regular expressions this can have disastrous results and cause the entire test run in a directory to fail. For example, a common idiom is to look for some basic block number:

... | grep bb[2-8]

This, however, will cause Tcl to fail because it's going to try to execute a program named "2-8". Instead, what you want is this:

... | grep {bb\[2-8\]}
Finally, if you need to pass the \ character down to a program, then it must be doubled. This is another Tcl special character. So, suppose you had:

... | grep 'i32\*'

This will fail to match what you want (a pointer to i32). First, the ' characters do not get stripped off. Second, the \ gets stripped off by Tcl so what grep sees is: 'i32*'. That's not likely to match anything. To resolve this you must use \\ and the {}, like this:

... | grep {i32\\*}

If your system includes GNU grep, make sure that GREP_OPTIONS is not set in your environment. Otherwise, you may get invalid results (both false positives and false negatives).
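For example, in a Bourne-style shell you can clear it before running the tests:

% unset GREP_OPTIONS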
A powerful feature of the RUN: lines is that they allow arbitrary commands to be executed as part of the test harness. While standard (portable) unix tools like 'grep' work fine on run lines, as you see above, there are a lot of caveats due to interaction with Tcl syntax, and we want to make sure the run lines are portable to a wide range of systems. Another major problem is that grep is not very good at verifying that the output of a tool contains a series of different strings in a specific order. The FileCheck tool was designed to help with these problems.

FileCheck (whose basic command line arguments are described in the FileCheck man page) is designed to read a file to check from standard input, and the set of things to verify from a file specified as a command line argument. A simple example of using FileCheck from a RUN line looks like this:
; RUN: llvm-as < %s | llc -march=x86-64 | FileCheck %s

This syntax says to pipe the current file ("%s") into llvm-as, pipe that into llc, then pipe the output of llc into FileCheck. This means that FileCheck will be verifying its standard input (the llc output) against the filename argument specified (the original .ll file specified by "%s"). To see how this works, let's look at the rest of the .ll file (after the RUN line):

define void @sub1(i32* %p, i32 %v) {
entry:
; CHECK: sub1:
; CHECK: subl
  %0 = tail call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %p, i32 %v)
  ret void
}

define void @inc4(i64* %p) {
entry:
; CHECK: inc4:
; CHECK: incq
  %0 = tail call i64 @llvm.atomic.load.add.i64.p0i64(i64* %p, i64 1)
  ret void
}
Here you can see some "CHECK:" lines specified in comments. Now you can see how the file is piped into llvm-as, then llc, and the machine code output is what we are verifying. FileCheck checks the machine code output to verify that it matches what the "CHECK:" lines specify.

The syntax of the CHECK: lines is very simple: they are fixed strings that must occur in order. FileCheck defaults to ignoring horizontal whitespace differences (e.g. a space is allowed to match a tab) but otherwise, the contents of the CHECK: line is required to match something in the test file exactly.

One nice thing about FileCheck (compared to grep) is that it allows merging test cases together into logical groups. For example, because the test above is checking for the "sub1:" and "inc4:" labels, it will not match unless there is a "subl" in between those labels. If it existed somewhere else in the file, that would not count: "grep subl" matches if subl exists anywhere in the file.
The FileCheck -check-prefix option allows multiple test configurations to be driven from one .ll file. This is useful in many circumstances, for example, testing different architectural variants with llc. Here's a simple example:

; RUN: llvm-as < %s | llc -mtriple=i686-apple-darwin9 -mattr=sse41 \
; RUN:   | FileCheck %s -check-prefix=X32
; RUN: llvm-as < %s | llc -mtriple=x86_64-apple-darwin9 -mattr=sse41 \
; RUN:   | FileCheck %s -check-prefix=X64

define <4 x i32> @pinsrd_1(i32 %s, <4 x i32> %tmp) nounwind {
  %tmp1 = insertelement <4 x i32> %tmp, i32 %s, i32 1
  ret <4 x i32> %tmp1
; X32: pinsrd_1:
; X32: pinsrd $1, 4(%esp), %xmm0

; X64: pinsrd_1:
; X64: pinsrd $1, %edi, %xmm0
}

In this case, we're testing that we get the expected code generation with both 32-bit and 64-bit code generation.
Sometimes you want to match lines and would like to verify that matches happen on exactly consecutive lines with no other lines in between them. In this case, you can use CHECK: and CHECK-NEXT: directives to specify this. If you specified a custom check prefix, just use "<PREFIX>-NEXT:". For example, something like this works as you'd expect:

define void @t2(<2 x double>* %r, <2 x double>* %A, double %B) {
  %tmp3 = load <2 x double>* %A, align 16
  %tmp7 = insertelement <2 x double> undef, double %B, i32 0
  %tmp9 = shufflevector <2 x double> %tmp3,
                        <2 x double> %tmp7,
                        <2 x i32> < i32 0, i32 2 >
  store <2 x double> %tmp9, <2 x double>* %r, align 16
  ret void

; CHECK: t2:
; CHECK: movl 8(%esp), %eax
; CHECK-NEXT: movapd (%eax), %xmm0
; CHECK-NEXT: movhpd 12(%esp), %xmm0
; CHECK-NEXT: movl 4(%esp), %eax
; CHECK-NEXT: movapd %xmm0, (%eax)
; CHECK-NEXT: ret
}

CHECK-NEXT: directives reject the input unless there is exactly one newline between it and the previous directive. A CHECK-NEXT cannot be the first directive in a file.
The CHECK-NOT: directive is used to verify that a string doesn't occur between two matches (or the first match and the beginning of the file). For example, to verify that a load is removed by a transformation, a test like this can be used:

define i8 @coerce_offset0(i32 %V, i32* %P) {
  store i32 %V, i32* %P

  %P2 = bitcast i32* %P to i8*
  %P3 = getelementptr i8* %P2, i32 2

  %A = load i8* %P3
  ret i8 %A
; CHECK: @coerce_offset0
; CHECK-NOT: load
; CHECK: ret i8
}
The CHECK: and CHECK-NOT: directives both take a pattern to match. For most uses of FileCheck, fixed string matching is perfectly sufficient. For some things, a more flexible form of matching is desired. To support this, FileCheck allows you to specify regular expressions in matching strings, surrounded by double braces: {{yourregex}}. Because we want to use fixed string matching for a majority of what we do, FileCheck has been designed to support mixing and matching fixed string matching with regular expressions. This allows you to write things like this:

; CHECK: movhpd {{[0-9]+}}(%esp), {{%xmm[0-7]}}

In this case, any offset from the ESP register will be allowed, and any xmm register will be allowed.

Because regular expressions are enclosed with double braces, they are visually distinct, and you don't need to use escape characters within the double braces like you would in C. In the rare case that you want to match double braces explicitly from the input, you can use something ugly like {{[{][{]}} as your pattern.
It is often useful to match a pattern and then verify that it occurs again later in the file. For codegen tests, this can be useful to allow any register, but verify that that register is used consistently later. To do this, FileCheck allows named variables to be defined and substituted into patterns. Here is a simple example:

; CHECK: test5:
; CHECK:    notw     [[REGISTER:%[a-z]+]]
; CHECK:    andw     {{.*}}[[REGISTER]]

The first check line matches a regex (%[a-z]+) and captures it into the variable "REGISTER". The second line verifies that whatever is in REGISTER occurs later in the file after an "andw". FileCheck variable references are always contained in [[ ]] pairs, and their names can be formed with the regex "[a-zA-Z][a-zA-Z0-9]*". If a colon follows the name, then it is a definition of the variable; if not, it is a use.

FileCheck variables can be defined multiple times, and uses always get the latest value. Note that variables are all read at the start of a "CHECK" line and are all defined at the end. This means that if you have something like "CHECK: [[XYZ:.*]]x[[XYZ]]", the check line will read the previous value of the XYZ variable and define a new one after the match is performed. If you need to do something like this you can probably take advantage of the fact that FileCheck is not actually line-oriented when it matches; this allows you to define two separate CHECK lines that match on the same line.
With a RUN line there are a number of substitutions that are permitted. In general, any Tcl variable that is available in the substitute function (in test/lib/llvm.exp) can be substituted into a RUN line. To make a substitution just write the variable's name preceded by a $. Additionally, for compatibility reasons with previous versions of the test library, certain names can be accessed with an alternate syntax: a % prefix. These alternates are deprecated and may go away in a future version. The complete set of available variable names is defined by the substitute function in test/lib/llvm.exp.
To add more variables, two things need to be changed. First, add a line in the test/Makefile that creates the site.exp file. This will "set" the variable as a global in the site.exp file. Second, in the test/lib/llvm.exp file, in the substitute proc, add the variable name to the list of "global" declarations at the beginning of the proc. That's it; the variable can then be used in test scripts.
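As a hypothetical sketch (the variable name "mytool" is invented for illustration; match the surrounding code in both files rather than copying this verbatim):

# In test/Makefile, next to the other lines that write site.exp:
@echo 'set mytool "$(MYTOOL)"' >> site.exp

# In test/lib/llvm.exp, inside proc substitute, alongside the other globals:
global mytool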
To make RUN line writing easier, there are several shell scripts located in the llvm/test/Scripts directory. This directory is in the PATH when running tests, so you can just call these scripts using their names.
Sometimes it is necessary to mark a test case as "expected fail" or XFAIL. You can easily mark a test as XFAIL just by including XFAIL: on a line near the top of the file. This signals that the test case is expected to fail, and such test cases are counted separately by DejaGNU. To specify an expected fail, use the XFAIL keyword in the comments of the test program followed by a colon and one or more regular expressions (separated by a comma). The regular expressions allow you to XFAIL the test conditionally by host platform. The regular expressions following the : are matched against the target triplet or llvmgcc version number for the host machine. If there is a match, the test is expected to fail. If not, the test is expected to succeed. To XFAIL everywhere just specify XFAIL: *. When matching the llvm-gcc version, you can specify the major (e.g. 3) or full version (i.e. 3.4) number. Here is an example of an XFAIL line:

; XFAIL: darwin,sun,llvmgcc4
To make the output more useful, the llvm_runtests function will scan the lines of the test case for ones that contain a pattern that matches PR[0-9]+. This is the syntax for specifying a PR (Problem Report) number that is related to the test case. The number after "PR" specifies the LLVM bugzilla number. When a PR number is specified, it will be used in the pass/fail reporting. This is useful to quickly get some context when a test fails.
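For example, a comment line like the following anywhere in the test associates it with a bugzilla entry (the number here is hypothetical):

; PR1234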
Finally, any line that contains "END." will cause the special interpretation of lines to terminate. This is generally done right after the last RUN: line. This has two side effects: (a) it prevents special interpretation of lines that are part of the test program, not the instructions to the test case, and (b) it speeds things up for really big test cases by avoiding interpretation of the remainder of the file.
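A sketch of the typical placement:

; RUN: llvm-as < %s | llvm-dis > %t1
; END.
; ...the rest of the test program follows and is not scanned...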
The test-suite module contains a number of programs that can be compiled with LLVM and executed. These programs are compiled using the native compiler and various LLVM backends. The output from the program compiled with the native compiler is assumed correct; the results from the other programs are compared to the native program output and pass if they match.

When executing tests, it is usually a good idea to start out with a subset of the available tests or programs. This makes test run times smaller at first and later on this is useful to investigate individual test failures. To run tests only on a subset of programs, simply change directory to the programs you want tested and run gmake there. Alternatively, you can run a different test using the TEST variable to change what tests are run on the selected programs (see below for more info).

In addition to testing correctness, the test-suite directory also performs timing tests of various LLVM optimizations. It also records compilation times for the compilers and the JIT. This information can be used to compare the effectiveness of LLVM's optimizations and code generation.

The test-suite tests are divided into three types of tests: MultiSource, SingleSource, and External.
The SingleSource directory contains test programs that are only a single source file in size. These are usually small benchmark programs or small programs that calculate a particular value. Several such programs are grouped together in each directory.

The MultiSource directory contains subdirectories which contain entire programs with multiple source files. Large benchmarks and whole applications go here.

The External directory contains Makefiles for building code that is external to (i.e., not distributed with) LLVM. The most prominent members of this directory are the SPEC 95 and SPEC 2000 benchmark suites. The External directory does not contain these actual tests, but only the Makefiles that know how to properly compile these programs from somewhere else. The presence and location of these external programs is configured by the test-suite configure script.
Each tree is then subdivided into several categories, including applications, benchmarks, regression tests, code that is strange grammatically, etc. These organizations should be relatively self explanatory.

Some tests are known to fail. Some are bugs that we have not fixed yet; others are features that we haven't added yet (or may never add). In DejaGNU, the result for such tests will be XFAIL (eXpected FAILure). In this way, you can tell the difference between an expected and unexpected failure.

The tests in the test suite have no such feature at this time. If the test passes, only warnings and other miscellaneous output will be generated. If a test fails, a large <program> FAILED message will be displayed. This will help you separate benign warnings from actual test failures.

Note that all tests are executed within the LLVM object directory tree. They are not executed inside of the LLVM source tree. This is because the test suite creates temporary files during execution.
To run the test suite, you need to use the following steps:

1. Check out the test-suite module with:

   % svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite

   This will get the test suite into llvm/projects/test-suite.

2. Configure and build llvm.

3. Configure and build llvm-gcc.

4. Install llvm-gcc somewhere.

5. Re-configure llvm from the top level of each build tree (LLVM object directory tree) in which you want to run the test suite, just as you do before building LLVM.

   During the re-configuration, you must either: (1) have the llvm-gcc you just built in your path, or (2) specify the directory where your just-built llvm-gcc is installed using --with-llvmgccdir=$LLVM_GCC_DIR.

   You must also tell the configure machinery that the test suite is available so it can be configured for your build tree:

   % cd $LLVM_OBJ_ROOT ; $LLVM_SRC_ROOT/configure [--with-llvmgccdir=$LLVM_GCC_DIR]

   [Remember that $LLVM_GCC_DIR is the directory where you installed llvm-gcc, not its src or obj directory.]

6. You can now run the test suite from your build tree as follows:

   % cd $LLVM_OBJ_ROOT/projects/test-suite
   % make

Note that the checkout and configuration steps only need to be done once. After you have the suite checked out and configured, you don't need to do it again (unless the test code or configure script changes).
+In addition to the regular "whole program" tests, the test-suite +module also provides a mechanism for compiling the programs in different ways. +If the variable TEST is defined on the gmake command line, the test system will +include a Makefile named TEST.<value of TEST variable>.Makefile. +This Makefile can modify build rules to yield different results.
+ +For example, the LLVM nightly tester uses TEST.nightly.Makefile to +create the nightly test reports. To run the nightly tests, run gmake +TEST=nightly.
+ +There are several TEST Makefiles available in the tree. Some of them are +designed for internal LLVM research and will not work outside of the LLVM +research group. They may still be valuable, however, as a guide to writing your +own TEST Makefile for any optimization or analysis passes that you develop with +LLVM.
+ +There are a number of ways to run the tests and generate output. The most + simple one is simply running gmake with no arguments. This will + compile and run all programs in the tree using a number of different methods + and compare results. Any failures are reported in the output, but are likely + drowned in the other output. Passes are not reported explicitely.
+ +Somewhat better is running gmake TEST=sometest test, which runs + the specified test and usually adds per-program summaries to the output + (depending on which sometest you use). For example, the nightly test + explicitely outputs TEST-PASS or TEST-FAIL for every test after each program. + Though these lines are still drowned in the output, it's easy to grep the + output logs in the Output directories.
+ +Even better are the report and report.format targets + (where format is one of html, csv, text or + graphs). The exact contents of the report are dependent on which + TEST you are running, but the text results are always shown at the + end of the run and the results are always stored in the + report.<type>.format file (when running with + TEST=<type>). + + The report also generate a file called + report.<type>.raw.out containing the output of the entire test + run. +
Assuming you can run the test suite, (e.g. "gmake TEST=nightly report" +should work), it is really easy to run optimizations or code generator +components against every program in the tree, collecting statistics or running +custom checks for correctness. At base, this is how the nightly tester works, +it's just one example of a general framework.
+ +Lets say that you have an LLVM optimization pass, and you want to see how +many times it triggers. First thing you should do is add an LLVM +statistic to your pass, which +will tally counts of things you care about.
+ +Following this, you can set up a test and a report that collects these and +formats them for easy viewing. This consists of two files, an +"test-suite/TEST.XXX.Makefile" fragment (where XXX is the name of your +test) and an "llvm-test/TEST.XXX.report" file that indicates how to +format the output into a table. There are many example reports of various +levels of sophistication included with the test suite, and the framework is very +general.
+ +If you are interested in testing an optimization pass, check out the +"libcalls" test as an example. It can be run like this:
+ +
+% cd llvm/projects/test-suite/MultiSource/Benchmarks # or some other level +% make TEST=libcalls report ++
This will do a bunch of stuff, then eventually print a table like this:
+ ++Name | total | #exit | +... +FreeBench/analyzer/analyzer | 51 | 6 | +FreeBench/fourinarow/fourinarow | 1 | 1 | +FreeBench/neural/neural | 19 | 9 | +FreeBench/pifft/pifft | 5 | 3 | +MallocBench/cfrac/cfrac | 1 | * | +MallocBench/espresso/espresso | 52 | 12 | +MallocBench/gs/gs | 4 | * | +Prolangs-C/TimberWolfMC/timberwolfmc | 302 | * | +Prolangs-C/agrep/agrep | 33 | 12 | +Prolangs-C/allroots/allroots | * | * | +Prolangs-C/assembler/assembler | 47 | * | +Prolangs-C/bison/mybison | 74 | * | +... ++
This basically is grepping the -stats output and displaying it in a table. You can also use the "TEST=libcalls report.html" target to get the table in HTML form, similarly for report.csv and report.tex.

The source for this is in test-suite/TEST.libcalls.*. The format is pretty simple: the Makefile indicates how to run the test (in this case, "opt -simplify-libcalls -stats"), and the report contains one line for each column of the output. The first value is the header for the column and the second is the regex to grep the output of the command for. There are lots of example reports that can do fancy stuff.
The LLVM Nightly Testers automatically check out an LLVM tree, build it, run the "nightly" program test (described above), run all of the DejaGNU tests, delete the checked out tree, and then submit the results to http://llvm.org/nightlytest/. After test results are submitted, they are processed and displayed on the tests page. An email to llvm-testresults@cs.uiuc.edu summarizing the results is also generated. This testing scheme is designed to ensure that programs don't break as well as keep track of LLVM's progress over time.

If you'd like to set up an instance of the nightly tester to run on your machine, take a look at the comments at the top of the utils/NewNightlyTest.pl file. If you decide to set up a nightly tester please choose a unique nickname and invoke utils/NewNightlyTest.pl with the "-nickname [yournickname]" command line option.

You can create a shell script to encapsulate the invocation. The optimized x86 Linux nightly test is run from just such a script:
#!/bin/bash
BASE=/proj/work/llvm/nightlytest
export BUILDDIR=$BASE/build
export WEBDIR=$BASE/testresults
export LLVMGCCDIR=/proj/work/llvm/cfrontend/install
export PATH=/proj/install/bin:$LLVMGCCDIR/bin:$PATH
export LD_LIBRARY_PATH=/proj/install/lib
cd $BASE
cp /proj/work/llvm/llvm/utils/NewNightlyTest.pl .
nice ./NewNightlyTest.pl -nice -release -verbose -parallel -enable-linscan \
   -nickname NightlyTester -noexternals > output.log 2>&1
It is also possible to specify the location to which your nightly test results are submitted. You can do this by passing the command line options "-submit-server [server_address]" and "-submit-script [script_on_server]" to utils/NewNightlyTest.pl. For example, to submit to the llvm.org nightly test results page, you would invoke the nightly test script with "-submit-server llvm.org -submit-script /nightlytest/NightlyTestAccept.cgi". If these options are not specified, the nightly test script sends the results to the llvm.org nightly test results page.
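Putting the flags from this section together, an invocation might look like this (the nickname and paths are illustrative):

% ./NewNightlyTest.pl -nickname MyTester \
    -submit-server llvm.org -submit-script /nightlytest/NightlyTestAccept.cgi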
Take a look at the NewNightlyTest.pl file to see what all of the flags and strings do. If you start running the nightly tests, please let us know. Thanks!