Using create_test
Robert Jacob edited this page Jan 30, 2017
The new create_test tool is located in the $CCSMROOT/scripts/ directory and can be used to set up entire CESM test suites as well as single standalone test cases. To see the list of test cases and the available script options, run create_test -help or create_test without any arguments. Creating a single test looks like:
```
> cd $CCSMROOT/scripts
> ./create_test -testname ERS.f19_g16.X.yellowstone_intel -testid t01
> cd ERS.f19_g16.X.yellowstone_intel.t01
> ./ERS.f19_g16.X.yellowstone_intel.t01.test_build
> ./ERS.f19_g16.X.yellowstone_intel.t01.submit
```
Check your test results. A successful test produces "PASS" as the first word in the file TestStatus.
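As a sketch of what that check can look like, the snippet below inspects the first word of TestStatus. The test directory and file contents here are mocks created for illustration; a real run produces TestStatus automatically.

```shell
# Mock a test directory; a real run creates TestStatus itself.
dir=$(mktemp -d)/ERS.f19_g16.X.yellowstone_intel.t01
mkdir -p "$dir" && cd "$dir"
printf 'PASS ERS.f19_g16.X.yellowstone_intel.t01\n' > TestStatus
# The first word of TestStatus is the overall result.
result=$(awk '{print $1; exit}' TestStatus)
echo "$result"
```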
As an example, to create an entire suite of tests on yellowstone for the 'prebeta' test category, do the following:
```
> cd $CCSMROOT/scripts
> ./create_test \
    -xml_mach yellowstone \
    -xml_compiler intel \
    -xml_category prebeta \
    -testid alpha01a \
    -testroot /glade/scratch/$USER/alpha01a
```
Some things to note about CESM tests:
- For usage information about the create_test tool, run "create_test -help".
- Test results are output in the TestStatus file. The TestStatus.out file provides additional details of the test run, which may aid in debugging failed tests.
- At this time, tests are not always easily re-runnable from an existing test directory. Rather than rerun a previous test case, it is recommended to set up a clean test case (i.e., create one with a new testid).
- Tests are built using the .test_build script. This differs from regular production cases, which are built using the .build script. Some tests require more than one executable, so the .test_build script builds all required executables up front, interactively.
- The costs of tests vary widely. Some are short and some are long.
- If a test fails, see the Section called Debugging Tests That Fail for debugging assistance.
- There are -compare, -generate, and -baselineroot options for the create_test tool that support regression testing. These options allow one to accomplish several goals:
  - -generate will save log files as well as coupler history NetCDF files in the baselineroot under the current case name. Later tests will compare their coupler history files against these baselines to check for numerical differences.
  - -compare will compare the current tag's tests against a previous tag's results, again for numerical accuracy.
  - -baselineroot simply specifies where your baseline files are stored. By default, the test system will choose the configured baseline root for your machine.
- There are extra test options that can be added to the test such as _D, _E, or _P*. These are described in more detail in the create_test -help output.
- There is also a new option to create_test, -nlcompareonly. This allows one to create a suite of Smoke Build Namelist tests. These tests are not compiled or run; the test cases are simply generated. They are useful because you can create a suite for a previous CESM tag and then compare the current CESM tag's generated namelists against it. This can spot hard-to-find answer-changing errors and/or unintended changes in development.
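The generate/compare workflow those options support can be sketched as follows. The tag names (cesm_old) and baseline path below are placeholders invented for illustration, not real CESM tags or site defaults, and the create_test invocations are shown as comments:

```shell
# Placeholder names: cesm_old stands in for an actual tag name, and the
# baseline path is an example, not a configured site default.
baselineroot=/glade/scratch/someuser/baselines
old_tag=cesm_old
# 1. On the previous tag, save baselines:
#    ./create_test -xml_mach yellowstone -xml_compiler intel \
#        -xml_category prebeta -testid base01 \
#        -generate $old_tag -baselineroot $baselineroot
# 2. On the current tag, compare coupler history files against them:
#    ./create_test -xml_mach yellowstone -xml_compiler intel \
#        -xml_category prebeta -testid new01 \
#        -compare $old_tag -baselineroot $baselineroot
echo "baselines stored under $baselineroot/$old_tag"
```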
The test status results have the following meanings:
Test Result | Description |
---|---|
BFAIL | compare test couldn't find the baseline directory for the testcase |
BUILD | build succeeded, test not submitted |
CFAIL | environment variable or build error |
CHECK | manual review of data is required |
ERROR | test checker failed; test may or may not have passed |
FAIL | test failed |
GEN | test has been generated |
PASS | test passed |
PEND | test has been submitted |
RUN | test is currently running OR it hung, timed out, exceeded its allotted walltime, or exited abnormally |
SFAIL | generation of test failed in scripts |
TFAIL | test setup error |
UNDEF | undefined result |
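Across a whole suite, these per-test statuses can be tallied with standard tools, since each test directory under the testroot holds its own TestStatus file. A small sketch using mock directories and files (names and contents fabricated for illustration):

```shell
# Mock a testroot with two test directories; real suites are laid out
# the same way, one TestStatus file per test directory.
testroot=$(mktemp -d)
mkdir -p "$testroot/ERS.demo.t01" "$testroot/SMS.demo.t01"
echo 'PASS ERS.demo.t01' > "$testroot/ERS.demo.t01/TestStatus"
echo 'FAIL SMS.demo.t01' > "$testroot/SMS.demo.t01/TestStatus"
# Count how many checks ended in each status across the suite.
summary=$(awk '{print $1}' "$testroot"/*/TestStatus | sort | uniq -c)
echo "$summary"
```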
The following tests are available at the time of writing:
Test | Description |
---|---|
SMS | smoke test |
ERS | exact restart from startup, default 6 days initial + 5 days restart |
ERB | branch/exact restart test |
ERH | hybrid/exact restart test |
ERI | hybrid/branch/exact restart test |
ERT | exact restart from startup, default 2 months + 1 month |
SEQ | sequencing bit-for-bit test |
PEA | single processor testing with mpi and mpi-serial |
PEM | pe counts mpi bit-for-bit test |
PET | pe counts mpi/openmp bit-for-bit test |
CME | compare mct and esmf interfaces test |
NCK | single vs multi instance validation test |
SBN | smoke build namelist test |
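The test names used throughout this page combine one of the test types above with a grid, compset, and machine_compiler, separated by dots (with optional modifiers such as _D attached to the test type, as noted earlier). A small sketch of decomposing such a name; the parsing is illustrative, not part of the CESM scripts:

```shell
# Example name: test type (with _D modifier), grid, compset, machine_compiler.
testname=ERS_D.f19_g16.X.yellowstone_intel
# Split on the dots into the four components.
IFS=. read -r ttype grid compset mach <<EOF
$testname
EOF
echo "$ttype on grid $grid, compset $compset, for $mach"
```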