X-Git-Url: http://demsky.eecs.uci.edu/git/?a=blobdiff_plain;f=docs%2FTestingGuide.html;h=1150ac8f962e41e23c03e9aee21bf9cdad0839a6;hb=a75ce9f5d2236d93c117e861e60e6f3f748c9555;hp=28a5e8a983d91097030a2b81f52b249e1bfd8d80;hpb=4d0764da03b6b229c3bf26e5ac47fe82104d30d7;p=oota-llvm.git diff --git a/docs/TestingGuide.html b/docs/TestingGuide.html index 28a5e8a983d..1150ac8f962 100644 --- a/docs/TestingGuide.html +++ b/docs/TestingGuide.html @@ -16,22 +16,24 @@
This document is the reference manual for the LLVM testing infrastructure. It documents -the structure of the LLVM testing infrastructure, the tools needed to use it, -and how to add and run tests.
+This document is the reference manual for the LLVM testing infrastructure. It +documents the structure of the LLVM testing infrastructure, the tools needed to +use it, and how to add and run tests.
In order to use the LLVM testing infrastructure, you will need all of the software -required to build LLVM, plus the following:
- -In order to use the LLVM testing infrastructure, you will need all of the +software required to build LLVM, as well +as Python 2.4 or later.
The LLVM testing infrastructure contains two major categories of tests: code -fragments and whole programs. Code fragments are referred to as the "DejaGNU -tests" and are in the llvm module in subversion under the -llvm/test directory. The whole programs tests are referred to as the -"Test suite" and are in the test-suite module in subversion. +
The LLVM testing infrastructure contains two major categories of tests: +regression tests and whole programs. The regression tests are contained inside +the LLVM repository itself under llvm/test and are expected to always +pass -- they should be run before every commit. The whole programs tests are +referred to as the "LLVM test suite" and are in the test-suite module +in subversion.
Code fragments are small pieces of code that test a specific -feature of LLVM or trigger a specific bug in LLVM. They are usually -written in LLVM assembly language, but can be written in other -languages if the test targets a particular language front end (and the -appropriate --with-llvmgcc options were used -at configure time of the llvm module). These tests -are driven by the DejaGNU testing framework, which is hidden behind a -few simple makefiles.
+The regression tests are small pieces of code that test a specific feature of +LLVM or trigger a specific bug in LLVM. They are usually written in LLVM +assembly language, but can be written in other languages if the test targets a +particular language front end (and the appropriate --with-llvmgcc +options were used at configure time of the llvm module). These +tests are driven by the 'lit' testing tool, which is part of LLVM.
These code fragments are not complete programs. The code generated from them is never executed to determine correct behavior.
@@ -152,29 +143,46 @@ generates code.The test suite contains tests to check the quality of debugging information. +The tests are written in C-based languages or in LLVM assembly language.
+ +These tests are compiled and run under a debugger. The debugger output +is checked to validate the debugging information. See README.txt in the +test suite for more information. This test suite is located in the +debuginfo-tests Subversion module.
+ +The tests are located in two separate Subversion modules. The - DejaGNU tests are in the main "llvm" module under the directory +
The tests are located in two separate Subversion modules. The regression + tests are in the main "llvm" module under the directory llvm/test (so you get these tests for free with the main llvm tree). The more comprehensive test suite that includes whole programs in C and C++ is in the test-suite module. This module should be checked out to the llvm/projects directory (don't use another name -then the default "test-suite", for then the test suite will be run every time +than the default "test-suite", for then the test suite will be run every time you run make in the main llvm directory). When you configure the llvm module, the test-suite directory will be automatically configured. Alternatively, you can configure the test-suite module manually.
- + -To run all of the simple tests in LLVM using DejaGNU, use the master Makefile - in the llvm/test directory:
+To run all of the LLVM regression tests, use the master Makefile in + the llvm/test directory:
@@ -190,38 +198,47 @@ Alternatively, you can configure the test-suite module manually.
To run only a subdirectory of tests in llvm/test using DejaGNU (ie. -Transforms), just set the TESTSUITE variable to the path of the -subdirectory (relative to llvm/test):
+If you have Clang checked out and built, +you can run the LLVM and Clang tests simultaneously using:
+ +or
-% gmake TESTSUITE=Transforms check +% gmake check-all
Note: If you are running the tests with objdir != subdir, you -must have run the complete testsuite before you can specify a -subdirectory.
+To run the tests with Valgrind (Memcheck by default), just append +VG=1 to the commands above, e.g.:
+ ++% gmake check VG=1 ++
To run only a single test, set TESTONE to its path (relative to -llvm/test) and make the check-one target:
+To run individual tests or subsets of tests, you can use the 'llvm-lit' +script, which is built as part of LLVM. For example, to run the +'Integer/BitCast.ll' test by itself you can run:
-% gmake TESTONE=Feature/basictest.ll check-one +% llvm-lit ~/llvm/test/Integer/BitCast.ll
To run the tests with Valgrind (Memcheck by default), just append -VG=1 to the commands above, e.g.:
+or to run all of the ARM CodeGen tests:
-% gmake check VG=1 +% llvm-lit ~/llvm/test/CodeGen/ARM
For more information on using the 'lit' tool, see 'llvm-lit --help' or the +'lit' man page.
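The way a lit-style driver turns a test file into commands can be pictured with a short sketch. The following Python is an illustrative approximation written for this guide, not lit's actual implementation, and the helper name extract_run_lines is hypothetical: it scans a test file for lines beginning with "; RUN:" and substitutes %s with the path of the test file before the command would be handed to the shell.

```python
import re

def extract_run_lines(test_source, test_path):
    """Collect RUN commands from an LLVM test file and substitute '%s'
    with the path of the test file itself.  Illustrative sketch only --
    the real parsing lives inside the 'lit' tool."""
    commands = []
    for line in test_source.splitlines():
        match = re.match(r'\s*;\s*RUN:\s*(.*)', line)
        if match:
            commands.append(match.group(1).replace('%s', test_path))
    return commands

example = """\
; RUN: llvm-as < %s | llvm-dis > /dev/null
define i32 @f() {
  ret i32 0
}
"""
print(extract_run_lines(example, 'test/Feature/basictest.ll'))
# prints: ['llvm-as < test/Feature/basictest.ll | llvm-dis > /dev/null']
```

The real tool supports more substitutions and line continuations; this sketch only shows the basic idea behind the RUN lines discussed later in this guide.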
+ @@ -239,7 +256,7 @@ programs), first checkout and setup the test-suite module:where $LLVM_GCC_DIR is the directory where -you installed llvm-gcc, not it's src or obj +you installed llvm-gcc, not its src or obj dir. The --with-llvmgccdir option assumes that the llvm-gcc-4.2 module was configured with --program-prefix=llvm-, and therefore that the C and C++ @@ -274,12 +291,31 @@ that subdirectory.
+ + + + +To run the debugging information tests, simply check out the tests inside +the clang/test directory.
+ ++% cd clang/test +% svn co http://llvm.org/svn/llvm-project/debuginfo-tests/trunk debuginfo-tests ++
These tests are already set up to run as part of the Clang regression tests.
+ + + - +The LLVM DejaGNU tests are driven by DejaGNU together with GNU Make and are - located in the llvm/test directory. +
The LLVM regression tests are driven by 'lit' and are located in + the llvm/test directory.
This directory contains a large array of small tests that exercise various features of LLVM and to ensure that regressions do not @@ -302,23 +338,24 @@ that subdirectory.
The DejaGNU structure is very simple, but does require some information to - be set. This information is gathered via configure and is written - to a file, site.exp in llvm/test. The llvm/test - Makefile does this work for you.
- -In order for DejaGNU to work, each directory of tests must have a - dg.exp file. DejaGNU looks for this file to determine how to run the - tests. This file is just a Tcl script and it can do anything you want, but - we've standardized it for the LLVM regression tests. If you're adding a +
The regression test structure is very simple, but does require some + information to be set. This information is gathered via configure and + is written to a file, lit.site.cfg + in llvm/test. The llvm/test Makefile does this work for + you.
+ +In order for the regression tests to work, each directory of tests must + have a dg.exp file. Lit looks for this file to determine how to + run the tests. This file is just a Tcl script and it can do anything you want, + but we've standardized it for the LLVM regression tests. If you're adding a directory of tests, just copy dg.exp from another directory to get - running. The standard dg.exp simply loads a Tcl - library (test/lib/llvm.exp) and calls the llvm_runtests - function defined in that library with a list of file names to run. The names - are obtained by using Tcl's glob command. Any directory that contains only + running. The standard dg.exp simply loads a Tcl library + (test/lib/llvm.exp) and calls the llvm_runtests function + defined in that library with a list of file names to run. The names are + obtained by using Tcl's glob command. Any directory that contains only directories does not need the dg.exp file.
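The glob-based gathering of test files that the standard dg.exp performs can be pictured with a small Python analogy. This is only a sketch for illustration; the real logic is the Tcl code in test/lib/llvm.exp, and the patterns chosen here are hypothetical.

```python
import fnmatch
import os

def find_tests(directory, patterns=('*.ll', '*.c', '*.cpp')):
    """Return the test files in a directory, mimicking how the standard
    dg.exp gathers file names with Tcl's glob command.  Illustrative
    sketch only; the real harness uses test/lib/llvm.exp."""
    names = sorted(os.listdir(directory))
    return [os.path.join(directory, n)
            for n in names
            if any(fnmatch.fnmatch(n, p) for p in patterns)]
```

Each discovered file would then be handed to the harness, which scans it for RUN lines as described below.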
The llvm_runtests function looks at each file that is passed to @@ -379,7 +416,8 @@ that subdirectory.
There are some quoting rules that you must pay attention to when writing your RUN lines. In general nothing needs to be quoted. Tcl won't strip off any - ' or " so they will get passed to the invoked program. For example:
+ quote characters so they will get passed to the invoked program. For + example:@@ -625,7 +663,78 @@ define i8 @coerce_offset0(i32 %V, i32* %P) {
To make RUN line writing easier, there are several shell scripts located @@ -751,22 +854,20 @@ substitutions
Sometimes it is necessary to mark a test case as "expected fail" or XFAIL. - You can easily mark a test as XFAIL just by including XFAIL: on a + You can easily mark a test as XFAIL just by including XFAIL: on a line near the top of the file. This signals that the test case should succeed - if the test fails. Such test cases are counted separately by DejaGnu. To + if the test fails. Such test cases are counted separately by the testing tool. To specify an expected fail, use the XFAIL keyword in the comments of the test program followed by a colon and one or more regular expressions (separated by - a comma). The regular expressions allow you to XFAIL the test conditionally - by host platform. The regular expressions following the : are matched against - the target triplet or llvmgcc version number for the host machine. If there is - a match, the test is expected to fail. If not, the test is expected to - succeed. To XFAIL everywhere just specify XFAIL: *. When matching - the llvm-gcc version, you can specify the major (e.g. 3) or full version - (i.e. 3.4) number. Here is an example of an XFAIL line:
+ a comma). The regular expressions allow you to XFAIL the test conditionally by + host platform. The regular expressions following the : are matched against the + target triplet for the host machine. If there is a match, the test is expected + to fail. If not, the test is expected to succeed. To XFAIL everywhere just + specify XFAIL: *. Here is an example of an XFAIL line:-; XFAIL: darwin,sun,llvmgcc4 +; XFAIL: darwin,sun
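The conditional-XFAIL rule just described can be sketched in a few lines of Python. This fragment illustrates only the matching rule, written for this guide; the helper name expect_failure is hypothetical, and the real logic lives in the test harness.

```python
import re

def expect_failure(xfail_line, target_triple):
    """Return True if the XFAIL line says this target should fail.
    Each pattern after the colon is treated as a regular expression and
    matched against the target triplet; '*' expects failure everywhere.
    Illustrative sketch only, not the harness's real code."""
    patterns = [p.strip() for p in xfail_line.split(':', 1)[1].split(',')]
    if '*' in patterns:
        return True
    return any(re.search(p, target_triple) for p in patterns)

print(expect_failure('; XFAIL: darwin,sun', 'i386-apple-darwin9'))   # True
print(expect_failure('; XFAIL: darwin,sun', 'x86_64-pc-linux-gnu'))  # False
```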
In addition for testing correctness, the llvm-test directory also +
In addition to testing correctness, the test-suite directory also performs timing tests of various LLVM optimizations. It also records compilation times for the compilers and the JIT. This information can be used to compare the effectiveness of LLVM's optimizations and code generation.
-llvm-test tests are divided into three types of tests: MultiSource, +
test-suite tests are divided into three types of tests: MultiSource, SingleSource, and External.
The SingleSource directory contains test programs that are only a single source file in size. These are usually small benchmark programs or small programs that calculate a particular value. Several such programs are grouped together in each directory.
The MultiSource directory contains subdirectories which contain entire programs with multiple source files. Large benchmarks and whole applications go here.
The External directory contains Makefiles for building code that is external to (i.e., not distributed with) LLVM. The most prominent members of this directory are the SPEC 95 and SPEC 2000 benchmark suites. The External directory does not contain these actual tests, but only the Makefiles that know how to properly compile these programs from somewhere else. The presence and -location of these external programs is configured by the llvm-test +location of these external programs is configured by the test-suite configure script.
Some tests are known to fail. Some are bugs that we have not fixed yet; -others are features that we haven't added yet (or may never add). In DejaGNU, -the result for such tests will be XFAIL (eXpected FAILure). In this way, you -can tell the difference between an expected and unexpected failure.
+others are features that we haven't added yet (or may never add). In the +regression tests, the result for such tests will be XFAIL (eXpected FAILure). +In this way, you can tell the difference between an expected and unexpected +failure.The tests in the test suite have no such feature at this time. If the test passes, only warnings and other miscellaneous output will be generated. If @@ -1020,9 +1122,9 @@ many times it triggers. First thing you should do is add an LLVM will tally counts of things you care about.
Following this, you can set up a test and a report that collects these and -formats them for easy viewing. This consists of two files, an +formats them for easy viewing. This consists of two files, a "test-suite/TEST.XXX.Makefile" fragment (where XXX is the name of your -test) and an "llvm-test/TEST.XXX.report" file that indicates how to +test) and a "test-suite/TEST.XXX.report" file that indicates how to format the output into a table. There are many example reports of various levels of sophistication included with the test suite, and the framework is very general.
@@ -1072,67 +1174,6 @@ example reports that can do fancy stuff. - - - - - --The LLVM Nightly Testers -automatically check out an LLVM tree, build it, run the "nightly" -program test (described above), run all of the DejaGNU tests, -delete the checked out tree, and then submit the results to -http://llvm.org/nightlytest/. -After test results are submitted to -http://llvm.org/nightlytest/, -they are processed and displayed on the tests page. An email to - -llvm-testresults@cs.uiuc.edu summarizing the results is also generated. -This testing scheme is designed to ensure that programs don't break as well -as keep track of LLVM's progress over time.
- -If you'd like to set up an instance of the nightly tester to run on your -machine, take a look at the comments at the top of the -utils/NewNightlyTest.pl file. If you decide to set up a nightly tester -please choose a unique nickname and invoke utils/NewNightlyTest.pl -with the "-nickname [yournickname]" command line option. - -
You can create a shell script to encapsulate the running of the script. -The optimized x86 Linux nightly test is run from just such a script:
- --#!/bin/bash -BASE=/proj/work/llvm/nightlytest -export BUILDDIR=$BASE/build -export WEBDIR=$BASE/testresults -export LLVMGCCDIR=/proj/work/llvm/cfrontend/install -export PATH=/proj/install/bin:$LLVMGCCDIR/bin:$PATH -export LD_LIBRARY_PATH=/proj/install/lib -cd $BASE -cp /proj/work/llvm/llvm/utils/NewNightlyTest.pl . -nice ./NewNightlyTest.pl -nice -release -verbose -parallel -enable-linscan \ - -nickname NightlyTester -noexternals > output.log 2>&1 --
It is also possible to specify the the location your nightly test results -are submitted. You can do this by passing the command line option -"-submit-server [server_address]" and "-submit-script [script_on_server]" to -utils/NewNightlyTest.pl. For example, to submit to the llvm.org -nightly test results page, you would invoke the nightly test script with -"-submit-server llvm.org -submit-script /nightlytest/NightlyTestAccept.cgi". -If these options are not specified, the nightly test script sends the results -to the llvm.org nightly test results page.
- -Take a look at the NewNightlyTest.pl file to see what all of the -flags and strings do. If you start running the nightly tests, please let us -know. Thanks!
- -