Introduction
The analysis release build system, SConsTools, provides a mechanism for integrating unit tests. Each package in the release, or a package that a user is developing, can have its own tests, and all tests can be run with a single command. Users may find this useful for testing their packages. For packages that psana developers add to the analysis release, these tests are automatically run during the nightly build. This page is primarily for psana developers; it covers how to add unit tests to packages in the release so they run during nightly builds. Special considerations for tests that are part of the nightly build are discussed below.
Creating a Package test directory
As an example, let's make a package with both a Python and a C++ unit test. For the example below, I am starting from the directory rel in my home directory. I make a new release and a new package in the release, and the crucial step is that I make a subdirectory called test in my package:
psanacs051:~/rel $ newrel ana-current unitTestTutorial
psanacs051:~/rel $ cd unitTestTutorial/
psanacs051:~/rel/unitTestTutorial $ sit_setup
psanacs051:~/rel/unitTestTutorial $ newpkg MyPkg
psanacs051:~/rel/unitTestTutorial $ mkdir MyPkg/test
Now when one does
scons test
the test target is built and run. SConsTools looks in the test subdirectory of every package for unit tests. It looks for:
- Any file without an extension is treated as a test script. This script will be installed and run.
- Any file that looks like C or C++ code (.c, .cpp extension, etc) is treated as a test program. It will be compiled, installed, and run.
- If a test script or program returns non-zero, it failed and scons will report this.
Other files in the test directory are ignored.
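The scan rules above can be illustrated with a short sketch. This is not the actual SConsTools code, just a plain-Python rendering of the classification described above (the extension list is an assumption beyond .c and .cpp):

```python
import os

# Sketch of the test-directory scan rules (not the real SConsTools code):
# no extension -> test script; C/C++ extension -> test program; else ignored.
def classify(name):
    root, ext = os.path.splitext(name)
    if ext == '':
        return 'script'
    if ext in ('.c', '.cpp', '.cc', '.cxx'):
        return 'program'
    return 'ignored'

print(classify('myfirst'))        # script
print(classify('mysecond.cpp'))   # program
print(classify('notes.txt'))      # ignored
```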
Python Unit Test
Add the file MyPkg/test/myfirst
#!@PYTHON@

import sys

if __name__ == '__main__':
    print "Running my test."
    sys.exit(1)
When you do scons test, you will get the output
Running UnitTest: "build/x86_64-rhel5-gcc41-opt/MyPkg/myfirst"
************************************************************************************
*** Unit test failed, check log file build/x86_64-rhel5-gcc41-opt/MyPkg/myfirst.utest ***
************************************************************************************
This is because the script returned something non-zero. If you look at the file build/x86_64-rhel5-gcc41-opt/MyPkg/myfirst.utest you see the output of the script.
The syntax @PYTHON@ is explained in the SConsTools page.
Change the sys.exit(1) to sys.exit(0) and the test will succeed. You can also take out the sys.exit line; by default Python returns 0 after the script runs.
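The exit-code convention can be demonstrated without scons at all. The sketch below runs a throwaway inline script (hypothetical, standing in for an installed test script) and checks its return code the same way scons test does:

```python
import subprocess
import sys

# Sketch: scons test treats a zero exit code as success. This throwaway
# inline script exits 0, so it would count as a passing test.
script = "import sys; print('Running my test.'); sys.exit(0)"
rc = subprocess.call([sys.executable, "-c", script])
print(rc)  # 0 -> the test passed
```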
C++ Unit Test
A simple C++ test would look like this; create the file MyPkg/test/mysecond.cpp:
#include <iostream>
int main() {
std::cout << "Cpp test" << std::endl;
return -1;
}
This test will also fail. Note that scons test stops after the first test failure. If you have not changed myfirst to return 0, only one of myfirst and mysecond will be run before failure is reported.
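The stop-on-first-failure behavior noted above can be sketched as a small runner. This is an illustration of the behavior, not the actual scons logic:

```python
import subprocess
import sys

# Sketch of stop-on-first-failure: run test commands in order, halting as
# soon as one returns a non-zero exit code (as scons test does).
def run_until_failure(cmds):
    for cmd in cmds:
        rc = subprocess.call(cmd)
        if rc != 0:
            return cmd, rc
    return None, 0

failed, rc = run_until_failure([
    [sys.executable, '-c', 'import sys; sys.exit(0)'],  # passes
    [sys.executable, '-c', 'import sys; sys.exit(1)'],  # fails -> stop here
    [sys.executable, '-c', 'import sys; sys.exit(0)'],  # never runs
])
print(rc)  # 1
```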
Using Frameworks for Unit Tests
It is worthwhile to learn how to use a testing framework. Psana developers are encouraged to use unittest for Python and boost::unit_test for C++, in order to be consistent with existing tests in the release. However, this is not required; you can use whatever framework you like.
Python unittest Framework
Below is an example of using unittest with Python. Add the file MyPkg/test/using_python_framework with the following content:
#!@PYTHON@

import sys
import unittest

class MyTest( unittest.TestCase ) :

    def setUp(self) :
        """ Method called to prepare the test fixture. This is called immediately
        before calling the test method; any exception raised by this method will
        be considered an error rather than a test failure.
        """
        pass

    def tearDown(self) :
        """ Method called immediately after the test method has been called and
        the result recorded. This is called even if the test method raised an
        exception, so the implementation in subclasses may need to be
        particularly careful about checking internal state. Any exception raised
        by this method will be considered an error rather than a test failure.
        This method will only be called if the setUp() succeeds, regardless of
        the outcome of the test method.
        """
        pass

    def test_mytest(self):
        a = 3
        b = 4
        self.assertEqual(a, b)

if __name__ == '__main__':
    unittest.main(argv=[sys.argv[0], '-v'])
After running scons test, the test will fail. Looking at the .utest output file, one will find:
********************************************************************************************************
*** Unit test failed, check log file build/x86_64-rhel5-gcc41-opt/MyPkg/using_python_framework.utest ***
********************************************************************************************************
scons: *** [build/x86_64-rhel5-gcc41-opt/MyPkg/using_python_framework.utest] Error 256
scons: building terminated because of errors.
psana1302:~/rel/unitTestTutorial $ cat build/x86_64-rhel5-gcc41-opt/MyPkg/using_python_framework.utest
test_mytest (__main__.MyTest) ... FAIL

======================================================================
FAIL: test_mytest (__main__.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "build/x86_64-rhel5-gcc41-opt/MyPkg/using_python_framework", line 31, in test_mytest
    self.assertEqual(a,b)
AssertionError: 3 != 4

----------------------------------------------------------------------
Ran 1 test in 0.000s

FAILED (failures=1)
Refer to the documentation https://docs.python.org/2/library/unittest.html for more information on unittest.
Boost unit_test Framework
For an example of using the boost C++ unit test framework, create the file MyPkg/test/using_boost_framework.cpp with the following contents (this is mostly copied from the boost website):
/**
 * Simple test suite for module psevt-unit-test.
 * See http://www.boost.org/doc/libs/1_36_0/libs/test/doc/html/index.html
 */
#define BOOST_TEST_MODULE MyTest
#include <boost/test/included/unit_test.hpp>

int add( int i, int j ) { return i+j; }

BOOST_AUTO_TEST_CASE( my_test )
{
    // seven ways to detect and report the same error:
    BOOST_CHECK( add( 2,2 ) == 4 );        // #1 continues on error

    BOOST_REQUIRE( add( 2,2 ) == 4 );      // #2 throws on error

    if( add( 2,2 ) != 4 )
      BOOST_ERROR( "Ouch..." );            // #3 continues on error

    if( add( 2,2 ) != 4 )
      BOOST_FAIL( "Ouch..." );             // #4 throws on error

    if( add( 2,2 ) != 4 ) throw "Ouch..."; // #5 throws on error

    BOOST_CHECK_MESSAGE( add( 2,2 ) == 4,  // #6 continues on error
                         "add(..) result: " << add( 2,2 ) );

    BOOST_CHECK_EQUAL( add( 2,2 ), 4 );    // #7 continues on error
}

BOOST_AUTO_TEST_CASE( my_test_fail )
{
    BOOST_CHECK_EQUAL( add( 2,2 ), 5 );
}
The second test is designed to fail, and the output in the .utest file is
psana1302:~/rel/unitTestTutorial $ cat build/x86_64-rhel5-gcc41-opt/MyPkg/using_boost_framework.utest
Running 2 test cases...
MyPkg/test/using_boost_framework.cpp(38): error in "my_test_fail": check add( 2,2 ) == 5 failed [4 != 5]
For more examples, one can look in the test subdirectories of packages like AppUtils, ConfigSvc, XtcInput, psana, psana_test, and Translator.
Nightly Build Considerations - External Test Data
There are several things psana developers need to consider when writing tests for packages that are part of the analysis release that will be run as part of the nightly build. This mostly involves how to work with external test data files.
- The nightly build is (presently) run on psdev, on both rhel5 and rhel6 machines.
- psdev has no access to the experiment data
- The same unit test may be running under both rhel5 and rhel6 at the same time, from the same release directory, but on different host machines.
- The nightly build runs under the user account psrel
- psrel cannot read files private to your directories. It may not have the same group permissions that you do.
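Since psrel cannot read files private to your directories, it is worth checking that any shared test data is world-readable before relying on it in a nightly-build test. A minimal sketch, using a temporary file to stand in for real test data:

```python
import os
import stat
import tempfile

# Sketch: check that a data file is world-readable, so an account such as
# psrel (running the nightly build) can open it. A temp file stands in
# for real test data here.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o644)
world_readable = bool(os.stat(path).st_mode & stat.S_IROTH)
print(world_readable)  # True for mode 0644
os.remove(path)
```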
Unit tests should not reference experiment data. In addition to the points above, experiment data may be removed under the data retention policy. In light of this, there are several choices for incorporating test data:
- Check it in as part of your package
- Make a copy of it in a place accessible to psrel running on the psdev machines
Test Data Checked into the Package
We do not want to keep large amounts of data under version control. I think 10 kilobytes or so is Ok, but when it gets larger one should use the external location discussed below. Files that you do check in could go right in the test directory alongside the test, or you can create new directories for test data. Another standard directory in the SConsTools system is data, which is fine, but it is intended for package data as opposed to testing data. One could also create a subdirectory of the test directory, such as
- test/data or
- test/fixtures
To hold small amounts of test data.
To construct the correct path to read test data, note that during the nightly build, the current working directory will be the release directory. Hence a Python unit test might look like
def test_mytest(self):
    text = file('MyPkg/test/fixtures/myfile.txt','r').read()
    self.assertTrue(text.startswith('file text'))
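To see why the package-relative path works, the sketch below mimics the nightly build's working directory with a throwaway directory tree (the directory and file contents are illustrative, not real release data):

```python
import os
import tempfile

# Sketch: the package-relative path resolves because scons test runs with
# the release directory as the current working directory. A temp directory
# tree stands in for the release here.
release = tempfile.mkdtemp()
fixtures = os.path.join(release, 'MyPkg', 'test', 'fixtures')
os.makedirs(fixtures)
with open(os.path.join(fixtures, 'myfile.txt'), 'w') as f:
    f.write('file text example')

os.chdir(release)  # mimic the nightly build's working directory
text = open(os.path.join('MyPkg', 'test', 'fixtures', 'myfile.txt')).read()
print(text.startswith('file text'))  # True
```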
External Test Data Location
For xtc files, we have a directory,
/reg/g/psdm/data_test
that was created expressly for the purpose of storing xtc test data. However, we do not want to copy entire xtc files from the experiments into this location; we need to select the parts of the xtc file necessary for testing. The current organization of the data_test directory is
- data_test/Translator: samples from approximately 80 different xtc files that cover a broad range of psana types and Translator issues. A unit test works with one of these xtc files at a time.
- data_test/multifile: samples from 8 different experiments; unit tests use the psana datasource string specification to work with a set of xtc files from an experiment.
- data_test/types: soft links to files in data_test/Translator, to easily identify a file with a given type.
- data_test/calib: calibration test data, with the same structure as the calib directory of an experiment.
Keeping the test data files small makes the preparation of xtc test data tedious. One must identify the part of the xtc file to test and copy it out. Presently the largest xtc test file in data_test is about 1GB, which is bigger than it needs to be. For the testing that I have done, I typically want to run psana on a few datagrams in an xtc file to test how it parses a new type or handles some damaged data. I need to identify the file offsets of the beginnings and ends of those datagrams in the xtc file, as well as of the transition datagrams that make the xtc file correct. I have some tools in the psana_test package for this purpose. An example is below.
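The copy-out step described above amounts to extracting selected byte ranges from a large file into a small one. A minimal sketch (the offsets and file contents below are illustrative, not real xtc offsets; the real work is done with the psana_test tools):

```python
import os
import tempfile

# Sketch: build a small test file by copying selected byte ranges (e.g. the
# datagrams identified by their offsets) out of a large source file.
def copy_ranges(src, dst, ranges):
    with open(src, 'rb') as fin, open(dst, 'wb') as fout:
        for start, end in ranges:
            fin.seek(start)
            fout.write(fin.read(end - start))

tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, 'big.xtc')    # stand-in for a large xtc file
dst = os.path.join(tmpdir, 'small.xtc')  # the small test file
with open(src, 'wb') as f:
    f.write(b'HEADERdatagram1FILLERdatagram2')

# Keep the "header" and two "datagrams", skipping the filler in between.
copy_ranges(src, dst, [(0, 6), (6, 15), (21, 30)])
print(open(dst, 'rb').read())  # b'HEADERdatagram1datagram2'
```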
Once you have prepared some test data, you can add it to the Translator subdirectory or the multifile subdirectory, or create a new subdirectory, perhaps named after your package (as I did when I made the Translator subdirectory). If you want to add it to Translator or multifile, please contact me (davidsch), as these files have specific naming conventions and there are unit tests in the psana_test package that access them. Creating a new subdirectory requires less coordination; however, if you think the test data is going to be useful to others, we should work together on it. One of the benefits of using the psana_test package is that I have a mechanism for checking the md5 checksums of the test data into svn. This allows the unit tests to verify that the test data has not changed.
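The checksum idea can be sketched in a few lines. This is not the psana_test implementation, just an illustration of recording an md5 checksum that a unit test could later compare against:

```python
import hashlib
import os
import tempfile

# Sketch: compute an md5 checksum of a test data file so a unit test can
# verify the data has not changed (psana_test records such checksums in svn).
def md5_of(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk), b''):
            h.update(block)
    return h.hexdigest()

# Demo on a tiny temp file standing in for real test data.
fd, path = tempfile.mkstemp()
os.write(fd, b'abc')
os.close(fd)
print(md5_of(path))  # 900150983cd24fb0d6963f7d28e17f72
```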
Preparing an Xtc File for a New Data Type
Here I'll go through an example of preparing an xtc file with a new data type.