Part of the general development workflow is creating example models that exercise various parts of the NHES Modelica package. Periodically running these tests and comparing the results against previously stored known results allows unintended changes (regressions) to be caught early in the development process. A test framework written in the Python language has been created to perform these checks automatically. The framework uses Dymola to build and run Modelica models, collects the results, and compares them to previously stored reference results. For automated use, the framework provides a single cumulative pass/fail result for all test models together.
The test framework is supported for use on both Linux and Windows operating systems. The successful operation of the test framework requires the following software dependencies:
- Python language version 2.7
- The matplotlib.pyplot package for Python (provides screen plots showing deviations between model output and reference results)
- The scipy.io package for Python (provides the ability to read simulation output files having the .mat file extension)
- The Dymola executable must be found on the current system path
- Dymola must be set up to automatically load the ThermoPower library for Modelica on startup, so that it is available when the NHES package is loaded by the test framework. This is done by adding script commands to Dymola's setup.mos file. On Windows this file is located in folder %APPDATA%/Dynasim. On Linux, this file may be found in folder ~/.dynasim. The commands needed to load ThermoPower automatically are included below:
// ===== Add path to ThermoPower as default (So that auto-test stuff can find it)
AddModelicaPath("Path/to/ThermoPower/Library/ThermoPower 3.1", erase=false);
openModel("Path/to/ThermoPower/Library/ThermoPower 3.1/package.mo");
On the Windows operating system, the Anaconda Python distribution for Python 2.7 has been verified to provide the Python interpreter, Matplotlib, and SciPy needed to run the test framework in a single installation package.
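A quick way to confirm that the Python-side dependencies are importable is the one-line check below (illustrative only; it is not part of the framework itself):
hybrid> python -c "import matplotlib.pyplot, scipy.io; print('dependencies OK')"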
Tests are defined by creating Modelica Script (*.mos) files in the directory hybrid/NHES/Resources/Scripts/Dymola. Each of these script files does two things:
- Specify what model in the NHES package to run: This is done by writing a simulateModel() command. This command has the form shown below, which allows the specification of simulation parameters:
simulateModel(problem="", startTime=0.0, stopTime=1.0, numberOfIntervals=0, outputInterval=0.0, method="Dassl", tolerance=0.0001, fixedstepsize=0.0, resultFile="dsres")
- Specify what model parameters and/or variables to monitor for changes: One or more values of interest from the test model are defined in a createPlot() command. One way to write such a command is to use the Dymola variable browser to create a plot, use the context menu on the plot to show it in the command window (Show In -> Command Window), and then copy the resulting createPlot command from there.
simulateModel("NHES.Systems.BalanceOfPlant.Turbine.Examples.IdealTurbine_Test", stopTime=100, numberOfIntervals=100, method="dassl", resultFile="IdealTurbine_Test");
createPlot(id=1, position={75, 70, 584, 421}, y={"BOP.sensorBus.subBus_BOP.W_total"}, range={0.0, 100.0, 402000000.0, 415000000.0}, grid=true, colors={{28,108,200}});
After creating a test, it is necessary to generate reference results by running the test framework manually in interactive mode (see Running Tests below). The output of future automated runs of this test will be compared against this reference to determine whether the test has passed or failed. Each reference result is stored in a text file in hybrid/NHES/Resources/ReferenceResults/Dymola. When adding or updating a test, the corresponding reference result file must be part of the merge request to allow automated testing of the branch to pass. The contents of an example reference result file are shown below:
last-generated=2017-04-25
statistics-initialization=
{
"numerical Jacobians": "0",
"nonlinear": "1",
"linear": "0, 0, 2, 0"
}
statistics-simulation=
{
"numerical Jacobians": "0",
"nonlinear": "6",
"number of continuous time states": "4",
"linear": "2, 0"
}
BOP.sensorBus.subBus_BOP.W_total=[402450880., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656., 414202656.]
time=[0., 100.]
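The reference files use the simple key=value layout shown above. The Python sketch below illustrates how such a file could be read and one variable trace compared against fresh results; the helper names and the 1e-3 relative tolerance are hypothetical, and the framework's actual parsing and comparison logic may differ:

# Illustrative sketch only: parse the key=value reference format shown above and
# compare one stored variable trace against newly generated results. The helper
# names and the 1e-3 relative tolerance are hypothetical, not the framework's own.
def read_reference(path):
    """Return a dict mapping each key to a string or a list of floats."""
    data = {}
    for line in open(path):
        line = line.strip()
        if '=' not in line:
            continue  # skip the brace-delimited statistics bodies in this sketch
        key, _, value = line.partition('=')
        if value.startswith('['):  # a numeric array such as time=[0., 100.]
            data[key] = [float(v) for v in value.strip('[]').split(',') if v.strip()]
        else:
            data[key] = value
    return data

def max_relative_error(reference, actual):
    """Largest point-wise relative deviation between two equal-length traces."""
    return max(abs(a - r) / max(abs(r), 1e-30) for r, a in zip(reference, actual))

ref = read_reference('NHES_IdealTurbine_Test.txt')
new_trace = ref['BOP.sensorBus.subBus_BOP.W_total']  # stand-in for fresh results
if max_relative_error(ref['BOP.sensorBus.subBus_BOP.W_total'], new_trace) > 1e-3:
    print('FAILED: results deviate from the stored reference')
else:
    print('PASSED')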
The test framework is accessed by running the Python script run_tests.py in the root directory of the project:
hybrid> python run_tests.py [-h | --help] [-i | --interactive] [-g | --gui] [-s | --st <single test name>] [-p | --proc <# of processes>]
There are several command-line parameters available:
- '-h' or '--help' shows help text about the script and then exits
- '-i' or '--interactive' causes tests to be run in interactive mode. In this mode, deviations from expected test outputs are shown as plots, and the user is prompted to optionally store the test output as the new expected values.
- '-g' or '--gui' will display the Dymola graphical user interface while test runs are in progress. This may be useful for diagnosing failures.
- '-s' or '--st' <single test name> runs only the test whose script file name is given. The name is case-sensitive. It is not necessary to include '.mos' as part of the name.
- '-p' or '--proc' <number of processes> limits the number of concurrent Dymola processes (default is to use one per available processor). This is useful to limit the number of licenses required to run multiple tests.
- To run all defined tests in the NHES package in interactive mode while limiting the number of concurrent simulation runs to two (the output below reflects the case where no reference results have yet been stored):
hybrid> python run_tests.py -i -p 2
############################################################
#
# HYBRID SYSTEMS MODELICA TESTING
# Run: 2017-04-25 09:59:23.412207
#
# Current Directory: /home/bobk/projects/hybrid
# Interactive Mode Selected
# Using at most 2 processor(s)
#
# Searching for ModelicaPy in: /home/bobk/projects/hybrid/Testing/ModelicaPy
#
Directory with .mo /home/bobk/projects/hybrid/NHES/package.mo
Directory with Resources is /home/bobk/projects/hybrid/NHES
Beginning to run tests
Executable is on path
Directory with .mo /home/bobk/projects/hybrid/NHES/package.mo
Directory with Resources is /home/bobk/projects/hybrid/NHES
Current working directory is good
Using 2 of 24 processors to run unit tests.
Number of models : 274
blocks : 9
functions: 58
Preparing simulations
Temp Directories set... preparing to write runscripts...
Generated 8 regression tests.
number of processors is greater than 1
multiprocessing pool set up.... executing...
subprocess popen complete
waiting..
subprocess popen complete
waiting..
closing logfil...
closing logfil...
Preparing to concatenate simulator output files
Preparing to concatenate simulator statistics files
Simulation Complete
Checking logfil is amc...
Checking refrence files...
checking fmu statistics...
check reference points...
*** Error: Reference file NHES_Logical_Test.txt does not yet exist.
Create new file?
Enter: y(yes), n(no), Y(yes for all), N(no for all): Y
*** Error: Reference file NHES_SteamManifold_Test.txt does not yet exist.
*** Error: Reference file NHES_TightlyCoupled_PowerCtrl.txt does not yet exist.
*** Error: Reference file NHES_TightlyCoupled_SteamFlowCtrl.txt does not yet exist.
*** Error: Reference file NHES_Infinite_Test.txt does not yet exist.
*** Error: Reference file NHES_SimpleBreakers_Test.txt does not yet exist.
*** Error: Reference file NHES_IdealTurbine_Test.txt does not yet exist.
*** Error: Reference file NHES_IRIS_Test.txt does not yet exist.
*** Warning: Output file of PHS_CS_Test.mos is excluded from result test.
Deleting Temp Directories..
Checking for errors...
*** Warning: Numerical Jacobian in 'RunScript("Resources/Scripts/Dymola//TightlyCoupled_PowerCtrl.mos");'.
*** Warning: Unspecified initial conditions in 'RunScript("Resources/Scripts/Dymola//TightlyCoupled_PowerCtrl.mos");'.
*** Warning: Unspecified initial conditions in 'RunScript("Resources/Scripts/Dymola//IdealTurbine_Test.mos");'.
*** Warning: Numerical Jacobian in 'RunScript("Resources/Scripts/Dymola//SteamManifold_Test.mos");'.
*** Warning: Unspecified initial conditions in 'RunScript("Resources/Scripts/Dymola//SteamManifold_Test.mos");'.
*** Warning: Numerical Jacobian in 'RunScript("Resources/Scripts/Dymola//TightlyCoupled_SteamFlowCtrl.mos");'.
*** Warning: Unspecified initial conditions in 'RunScript("Resources/Scripts/Dymola//TightlyCoupled_SteamFlowCtrl.mos");'.
*** Warning: Unspecified initial conditions in 'RunScript("Resources/Scripts/Dymola//SimpleBreakers_Test.mos");'.
*** Warning: Numerical Jacobian in 'RunScript("Resources/Scripts/Dymola//IRIS_Test.mos");'.
Number of models with numerical Jacobian : 4
Number of models with unspecified initial conditions : 5
Script that runs unit tests had 10 warnings and 8 errors.
See 'simulator.log' for details.
Printing list of files that may be excluded...
Execution time = 621.518 s
#
# BuildingsPy Test Framework Returns: 1 (FAILED)
#
############################################################
In this example, after determining that a reference result does not exist, the user is asked whether it should be created using the results generated from the model. By entering a capital 'Y', all such instances in the current run are also accepted automatically. Note that the test framework reports failure because there were no previous results to compare against.
- To run only the test with the script name "IdealTurbine_Test.mos" in interactive mode (for this example the reference results have been deliberately changed to illustrate what happens when the model output does not match):
hybrid> python run_tests.py -i -s IdealTurbine_Test
############################################################
#
# HYBRID SYSTEMS MODELICA TESTING
# Run: 2017-04-25 10:38:51.606751
#
# Current Directory: /home/bobk/projects/hybrid
# Interactive Mode Selected
# Using all available processors
#
# Single Test Requested: IdealTurbine_Test
#
# Searching for ModelicaPy in: /home/bobk/projects/hybrid/Testing/ModelicaPy
#
Directory with .mo /home/bobk/projects/hybrid/NHES/package.mo
Directory with Resources is /home/bobk/projects/hybrid/NHES
/home/bobk/projects/hybrid/NHES/Resources/Scripts/Dymola/IdealTurbine_Test.mos
Beginning to run tests
Testing One Single Package
Executable is on path
Directory with .mo /home/bobk/projects/hybrid/NHES/package.mo
Directory with Resources is /home/bobk/projects/hybrid/NHES
Current working directory is good
Using 1 of 24 processors to run unit tests.
Number of models : 274
blocks : 9
functions: 58
Preparing simulations
Temp Directories set... preparing to write runscripts...
Generated 1 regression tests.
running simulation
subprocess popen complete
waiting..
closing logfil...
Preparing to concatenate simulator output files
Preparing to concatenate simulator statistics files
Simulation Complete
Checking logfil is amc...
Checking refrence files...
checking fmu statistics...
check reference points...
*** Error: IdealTurbine_Test.mat: BOP.sensorBus.subBus_BOP.W_total has absolute and relative error = 4.000e+06, 9.565e-03.
Maximum error is at t = 41.0
For NHES_IdealTurbine_Test.txt,
accept new file and update reference files? (Close plot window to continue.)
At this point (while in interactive mode) a plot illustrating the difference between the stored reference results and those from running the model is displayed.
After the plot window is closed by the user, the framework asks whether the new result should be stored as the new reference result (it is not in this case, so 'n' is entered).
Enter: y(yes), n(no), Y(yes for all), N(no for all): n
Deleting Temp Directories..
Checking for errors...
*** Warning: Unspecified initial conditions in 'RunScript("Resources/Scripts/Dymola//IdealTurbine_Test.mos");'.
Number of models with unspecified initial conditions : 1
Script that runs unit tests had 1 warnings and 1 errors.
See 'simulator.log' for details.
Printing list of files that may be excluded...
Execution time = 250.027 s
#
# BuildingsPy Test Framework Returns: 1 (FAILED)
#
############################################################
Since the original reference is left in place, the test result is failure because the output does not match.
- To run all tests in non-interactive mode, limiting the number of concurrent sessions to two to conserve licenses:
hybrid> python run_tests.py -p 2
############################################################
#
# HYBRID SYSTEMS MODELICA TESTING
# Run: 2017-04-25 12:18:53.299871
#
# Current Directory: /home/bobk/projects/hybrid
# Using at most 2 processor(s)
#
# Searching for ModelicaPy in: /home/bobk/projects/hybrid/Testing/ModelicaPy
#
Directory with .mo /home/bobk/projects/hybrid/NHES/package.mo
Directory with Resources is /home/bobk/projects/hybrid/NHES
Beginning to run tests
Executable is on path
Directory with .mo /home/bobk/projects/hybrid/NHES/package.mo
Directory with Resources is /home/bobk/projects/hybrid/NHES
Current working directory is good
Using 2 of 24 processors to run unit tests.
Number of models : 274
blocks : 9
functions: 58
Preparing simulations
Temp Directories set... preparing to write runscripts...
Generated 8 regression tests.
number of processors is greater than 1
multiprocessing pool set up.... executing...
subprocess popen complete
waiting..
subprocess popen complete
waiting..
closing logfil...
closing logfil...
Preparing to concatenate simulator output files
Preparing to concatenate simulator statistics files
Simulation Complete
Checking logfil is amc...
Checking refrence files...
checking fmu statistics...
check reference points...
*** Warning: Output file of PHS_CS_Test.mos is excluded from result test.
Deleting Temp Directories..
Checking for errors...
*** Warning: Numerical Jacobian in 'RunScript("Resources/Scripts/Dymola//TightlyCoupled_PowerCtrl.mos");'.
*** Warning: Unspecified initial conditions in 'RunScript("Resources/Scripts/Dymola//TightlyCoupled_PowerCtrl.mos");'.
*** Warning: Unspecified initial conditions in 'RunScript("Resources/Scripts/Dymola//IdealTurbine_Test.mos");'.
*** Warning: Numerical Jacobian in 'RunScript("Resources/Scripts/Dymola//SteamManifold_Test.mos");'.
*** Warning: Unspecified initial conditions in 'RunScript("Resources/Scripts/Dymola//SteamManifold_Test.mos");'.
*** Warning: Numerical Jacobian in 'RunScript("Resources/Scripts/Dymola//TightlyCoupled_SteamFlowCtrl.mos");'.
*** Warning: Unspecified initial conditions in 'RunScript("Resources/Scripts/Dymola//TightlyCoupled_SteamFlowCtrl.mos");'.
*** Warning: Unspecified initial conditions in 'RunScript("Resources/Scripts/Dymola//SimpleBreakers_Test.mos");'.
*** Warning: Numerical Jacobian in 'RunScript("Resources/Scripts/Dymola//IRIS_Test.mos");'.
Number of models with numerical Jacobian : 4
Number of models with unspecified initial conditions : 5
Script that runs unit tests had 10 warnings and 0 errors.
See 'simulator.log' for details.
Printing list of files that may be excluded...
Execution time = 746.254 s
#
# BuildingsPy Test Framework Returns: 2 (PASSED WITH WARNINGS)
#
############################################################
This output indicates that all tests passed, though with warnings.
The automated test system for Modelica models, ModelicaPy, is based on the BuildingsPy system from Lawrence Berkeley National Laboratory (LBNL, http://simulationresearch.lbl.gov/modelica/buildingspy/), as modified by Scott Greenwood (@greemich) et al. of Oak Ridge National Laboratory (ORNL).
Regression testing is a development methodology used to verify the correct performance of software after modifications are made. As software is created, developers write additional small programs (tests) that exercise its various features. The tests and their expected results are stored as part of the repository. Before new software changes are accepted into the repository, all of the tests are run against the modified code, and their outputs are compared against the expected results stored in the repository. If a result differs from what is expected for any test, or if any test does not run successfully, the test is considered to have failed. When tests fail, a developer must determine why before a proposed change may be accepted. Regression tests are one way to assure the quality of the software produced.
As described in the HYBRID project report for FY16, the RAVEN code is used to optimize system economics. A key RAVEN capability used in performing this analysis is the ability to incorporate multiple user-developed external models. RAVEN is able to combine multiple such models in a single run to solve a larger combined problem. For this project two types of external models are used:
- Those written in the Modelica programming language, used in HYBRID to simulate the behavior of physical systems that make up the N-R HES under varying conditions.
- Additional code written in the Python scripting language that describes economic aspects of the problem including electricity dispatch and costing.
Added in FY17 is the ability to perform test runs of RAVEN from outside its normal file system location. This allows external models stored in the HYBRID repository (such as the economic and dispatch modules) to be tested using RAVEN without having to be made a part of RAVEN itself.
Significant effort has been invested in developing the economic and dispatch models. The RAVEN code used to execute them is under active development. Having the ability to test these modules in an automated manner provides a mechanism to detect any regression problems inadvertently introduced by ongoing RAVEN development. Each time the HYBRID repository is to be updated with a new RAVEN version, these modules will be tested with it before the update is accepted. Any failures in those tests provide a warning about a regression issue before the update to RAVEN becomes a part of the HYBRID repository.
The module test system consists of scripts written in the bash shell language and Python. It may be used on any platform that is supported by RAVEN (Linux, Mac, and Windows). The following conditions must be satisfied for the module test system to function properly:
- The RAVEN submodule of the HYBRID repository must be initialized and fully updated (see the example command following this list); it is this installation of RAVEN that will be used to run the module tests.
- The MOOSE submodule of the RAVEN submodule must also be initialized and updated. RAVEN (and by extension Hybrid) makes use of the testing system contained in MOOSE.
- The system running the tests must be configured with the software prerequisites necessary to build and run RAVEN. These include a Python interpreter, Python libraries (h5py, matplotlib, numpy, scipy, and scikit-learn), and development tools (C++ compiler, Miniconda package manager for Python, and git source code control).
- The RAVEN submodule must be built with the appropriate compiler before it can be used to run the tests.
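The submodule initialization and update described in the first two items can typically be performed with a standard git command run from the root of the HYBRID repository; the --recursive flag also covers the MOOSE submodule nested inside RAVEN:
hybrid> git submodule update --init --recursive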
The RAVEN submodule that is already part of the HYBRID repository contains a complete testing system used to provide regression testing for itself. HYBRID module testing is achieved by extending the RAVEN system so that it can run tests stored in the parent repository (which it would not normally be able to do). Files associated with RAVEN module testing reside in the raven_module_testing directory located immediately below the root of the HYBRID project repository. The main script file that configures the RAVEN test system for Hybrid-specific functionality is placed in this directory, along with a "tests" folder. The "tests" folder contains the test inputs and the corresponding expected outputs (in the "gold" folders), which are used to verify that the behavior of the code has not changed at each regression test. For organizational purposes, the tests are usually collected in subfolders based on their characteristics.
Hybrid-specific module tests are defined in the same manner as they are for RAVEN. A single test consists of a RAVEN input file along with any associated data needed to perform that run, which can include input data, external models, and Python files. These may be placed in the tests directory or in subdirectories below it. Every directory that contains tests to be run by the framework must contain a test specification file named "tests". The syntax of these files is defined by the MOOSE test framework, which controls how each test is run and sets the criteria used to determine whether it passed or failed.
Comparison criteria are also defined in the “tests” file. In most cases, one or more output files generated by running the specified input file with RAVEN are compared against a gold standard provided by the developer and stored in the repository. Typically, comparisons are performed on numeric values contained in Comma-Separated Values (CSV) files to a defined tolerance. It is also possible to compare output files written in eXtensible Markup Language (XML). When these file comparisons are specified by the test developer, reference files must have the same name and be placed in the gold subdirectory below that containing the “tests” file.
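For illustration, a specification for a hypothetical test named economic_dispatch might look like the sketch below. The tester type and parameter names follow the general MOOSE/RAVEN conventions but should be verified against the RAVEN test framework documentation:

[Tests]
  # Hypothetical sketch: run RAVEN on test_dispatch.xml and compare the named
  # CSV output against gold/dispatch_out.csv to a relative tolerance.
  [./economic_dispatch]
    type = 'RavenFramework'
    input = 'test_dispatch.xml'
    csv = 'dispatch_out.csv'
    rel_err = 0.0001
  [../]
[]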
The test system is accessed by running an operating system shell script located in the top directory of the HYBRID project repository. This script, run_raven_module_tests, accepts the same parameters controlling its function as the RAVEN test script. The format of the command with the most commonly used parameters is provided below:
run_raven_module_tests [-h | --help]
[-j <number of processes> |
--jobs <number of processes>]
[--re <regular expression>]
Where:
-h or --help Prints the complete list of parameters the script accepts and then exits without running any tests. Note that there are many other parameters provided by the RAVEN/MOOSE test framework, many of which do not apply to Hybrid module testing.
-j or --jobs <number of processes> Runs multiple tests in parallel, allowing at most <number of processes> of them to be active concurrently. Using this option when the computer running the tests has several logical processors reduces the total time needed to run multiple tests.
--re <regular expression> When specified, only tests whose names match the given regular expression are executed. This is typically used to run single tests.
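For example, to run only the tests whose names match "dispatch" (a hypothetical test name used here for illustration), with at most four tests active concurrently:
hybrid> run_raven_module_tests -j 4 --re dispatch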
As described in the FY16 report, the purpose of having a single top-level test script is to provide a way for an automated system to run all of the module tests together. The HYBRID and RAVEN projects make use of a tool called Civet, developed at INL, that provides regression testing services. Each time a developer proposes a modification to the contents of the HYBRID repository, Civet causes the automated tests to be run on the modified version. These tests (which are performed separately on the Modelica models and on the HYBRID add-on modules) must all pass before a proposed change may become part of the official repository. In this way the Hybrid project is protected from the accidental introduction of flaws into software that required a significant investment of resources to develop.