Model Testing Standards
Unit tests are the hardest tests to write, as they primarily focus on testing aspects of the current model or tool only - that is, they do not test integration with other models or tools. This is especially challenging for Unit Models, which inherently depend upon property packages and control volumes to provide critical information and infrastructure. To help with this, IDAES provides some testing utilities (Testing Utility Functions) that include minimal property and reaction packages for testing purposes, which allow testing to focus on the Unit Model alone.
- Only focus on code in the module being tested. Code in other necessary modules should be tested separately in tests for those modules; i.e. every module should have its own set of unit tests, and tests for multiple modules should not be combined.
- Execute in less than 2 seconds - if it takes longer than this to execute tests for something, then the thing being tested should probably be broken up.
- Unit tests should operate on the simplest possible instance of a modeling component that demonstrates the functionality being tested.
- Each unit test should be commented to indicate what is being tested.
- Must not involve a solver (that is the purpose of component tests). For routines that rely on solvers (e.g. initialization), developers should break the routine into smaller pieces so that most of it can be unit tested.
- Aim to cover as much of the code as possible (subject to limitations on testing of initialization routines).
- Confirm the model was constructed as expected; i.e. test for the presence, type and shape (indexing sets) of expected model components.
- Test all possible if/else branches, and ideally all combinations of if/else branches.
- Test for all Exceptions that can be raised by the code.
Unit tests should not include tests of unit consistency, as these can be computationally expensive; unit consistency is tested as part of the component tests.
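As an illustration of these guidelines, a minimal unit test might check for the presence and type of expected components and assert on a documented exception. The `DummyUnit` class below is a hypothetical stand-in, not an IDAES component; real tests would build the model using the minimal test property packages instead:

```python
import pytest


class DummyUnit:
    """Hypothetical stand-in for a unit model under test."""

    def __init__(self, dynamic=False):
        # A documented construction-time Exception to test for
        if dynamic:
            raise ValueError("DummyUnit does not support dynamic flowsheets")
        # Components the constructor is expected to create
        self.volume = 1.0
        self.material_balance = "flow_in - flow_out == 0"


@pytest.mark.unit
def test_build():
    # Confirm the model was constructed as expected: presence and type
    m = DummyUnit()
    assert hasattr(m, "volume")
    assert isinstance(m.volume, float)
    assert hasattr(m, "material_balance")


@pytest.mark.unit
def test_dynamic_exception():
    # Every Exception the code can raise should have a corresponding test
    with pytest.raises(ValueError, match="does not support dynamic"):
        DummyUnit(dynamic=True)
```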
For Unit Models, unit tests:
- Should use the Test Property Packages from the testing utilities
- Confirm construction of key model components, i.e. any components constructed by the model directly. Components created by sub-models do not need to be checked unless they are important.
- If not using Control Volumes, check for the form of key constraints and expressions.
- Overall 85% coverage of code when combined with component tests
- Testing of all possible configurations. I.e. all if/else decisions should be tested.
- Testing of all possible Exceptions
- Unit tests must be OS agnostic (i.e. should be written such that they work on any OS).
For Property Packages, unit tests should include:
- Overall 85% coverage of code when combined with component tests
- Test construction of general/common components (e.g. parameters, coefficients for equations of state and departure functions)
- Test the correct form of constraints/expressions for all supported properties (best done by casting the constraint body or expression to a string and comparing it to the expected form). Testing of expected values is also important, but can be combined into component tests after a solver is applied.
- Tests should indicate the range of (state variable) values over which property calculations are valid
- Tests for all supported options
- Tests for all possible Exceptions
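The string-comparison approach for constraint forms can be sketched as follows. `ToyExpr` is a hypothetical stand-in for an expression object; with Pyomo, the body of a real constraint can be cast to a string in the same way:

```python
class ToyExpr:
    """Hypothetical expression object whose structure is visible via str()."""

    def __init__(self, text):
        self._text = text

    def __str__(self):
        return self._text


# A saturation-pressure constraint body as the property package might build it
# (an Antoine-style form; the exact expression here is illustrative only)
pressure_sat_body = ToyExpr("log(pressure_sat) - (A - B/(temperature + C))")


def test_pressure_sat_constraint_form():
    # Compare the structural form of the constraint, not its numeric value;
    # numeric values are checked in component tests once a solver is applied
    assert str(pressure_sat_body) == "log(pressure_sat) - (A - B/(temperature + C))"
```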
Component tests are used to test integration of models and tools (e.g. solvers, but also other IDAES tools such as scaling and UI) for a set of well-defined test cases for which it is known the model can be solved. By necessity, this often involves integration of multiple models (e.g. Unit Models with property packages). Component tests are also the point where initialization routines and solutions can be tested, and thus can (but do not need to) involve the use of a solver. Component tests also include time-consuming tests that involve large amounts of expression parsing, such as asserting unit of measure consistency.
- Aim to cover all lines of code not covered by unit tests; i.e. all initialization routines and other code that requires a solver.
- Execute in less than 10 seconds (including solver time). Limited exceptions can be made for complex models, but ideally test cases should be kept small to meet this limit.
- Test for consistent units of measurement. Asserting unit consistency is expensive, which is why these checks are not included in unit tests.
- Include a test case for all possible model configurations (i.e. for every if/else option available during model construction).
- Test model results/outputs against expected values to a sufficient tolerance (generally 1-2 orders of magnitude greater than solver tolerance).
- Test values for as many key variables as possible (not just inputs and outputs).
- Component tests must use only open-source solvers supported through `idaes get-extensions`. This ensures that they will be available on the automated testing servers and for most users. If a commercial solver is required, then this should be an integration test.
- Tests involving solvers should always confirm convergence before checking results.
- Confirm conservation of mass and energy (and momentum if applicable).
- Component tests should be OS agnostic wherever possible. There may be cases where solvers on different OS’s might give slightly different results, but any differences should be within the tolerance of a given test (i.e. the same test should still pass for any OS).
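The convergence-then-compare pattern above can be sketched as below. `FakeResults` is a hypothetical stand-in for a solver results object; with Pyomo one would check the termination condition (e.g. via `assert_optimal_termination`) before comparing values:

```python
import math


class FakeResults:
    """Hypothetical stand-in for a solver results object."""

    termination_condition = "optimal"


def check_solution(results, value, expected, rel_tol=1e-5):
    # 1) Always confirm convergence before checking any results
    assert results.termination_condition == "optimal", "solver did not converge"
    # 2) Compare against the expected value at a tolerance 1-2 orders of
    #    magnitude looser than the solver tolerance (e.g. 1e-5 vs 1e-7)
    assert math.isclose(value, expected, rel_tol=rel_tol)


# Outlet pressure from a converged solve vs the expected value (made-up numbers)
check_solution(FakeResults(), value=101325.0008, expected=101325.0)
```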
For Unit Models, component tests should:
- Test one specific case study for each unit configuration using an example property package from the core library
- Test and confirm successful initialization of model for each case study
- Test and verify the solution of the model for each case study using the expected solution. This solution does not need to be sourced from literature, although this is a good idea. The idea here is to confirm the solution does not change as IDAES continues to develop, rather than to validate the model against data.
- Test for conservation of material and energy (or comment on why these are not applicable).
- Test for unit consistency
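A conservation check can be sketched as below; the flow values are made up, and a real test would read them from the solved model's inlet and outlet streams:

```python
def check_mass_balance(flows_in, flows_out, tol=1e-6):
    # Total material entering must equal total material leaving,
    # to within a small absolute tolerance
    assert abs(sum(flows_in) - sum(flows_out)) <= tol


# A mixer-like case: two inlet streams combining into one outlet (mol/s)
check_mass_balance(flows_in=[10.0, 5.0], flows_out=[15.0])
```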
For Property Packages, component tests should:
- Test one specific case study for each possible configuration. There should be one test case for each potential phase equilibrium condition supported by the package (e.g. liquid-only, vapor-only, VLE).
- Test and confirm successful initialization of model for each case study
- Test and verify solution of model for case study using expected solution. This solution does not need to be sourced from literature, although this is a good idea. The idea here is to confirm the solution does not change as IDAES continues to develop, rather than to validate the model against data.
- Test results for all supported properties
- Test for unit consistency
Integration tests are used for longer running verification and validation testing. These types of tests are critical for ensuring user confidence in the tools, and for ensuring that the tools and models are robust across a wide range of different model forms and conditions. As such, more integration tests are always better (if integration testing takes too long, we will look at running these automatically overnight or on weekends, rather than limiting the number of tests we run).
- Test model performance over as wide a range of conditions and configurations as possible.
- Compare model results/outputs with literature data, including source information (i.e. references).
- Commercial modeling tools should not be used as sources of testing data. Many of these have licensing clauses that prohibit their use for benchmarking and testing of other tools.
- Results/outputs should be tested to the accuracy of the literature data. More accurate data is always preferred where possible.
- Should always confirm solver convergence before testing results.
- Should include tests for model robustness and reliability (TBD).
- There is no limit on execution time for individual integration tests, but time should be kept to a minimum to avoid excessively long run times for the test suite.
- If a commercial solver is required for testing a model or tool, then the developer needs to ensure that the solver is available on the IDAES testing platform, and the test must be marked with a `pytest.mark.skipif` which checks for solver availability.
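Such a skip condition might look like the sketch below. `shutil.which` is used here as a simple availability check, and `"gurobi"` is only an example executable name; the real check should match however the solver is invoked on the testing platform:

```python
import shutil

import pytest

# Skip the test when the commercial solver's executable is not on PATH
# ("gurobi" is an example name, not a requirement of these standards)
gurobi_available = shutil.which("gurobi") is not None


@pytest.mark.integration
@pytest.mark.skipif(not gurobi_available, reason="Gurobi solver not available")
def test_flowsheet_with_commercial_solver():
    # Body would build and solve the model with the commercial solver
    pass
```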
For Unit Models, integration tests should:
- Comprehensively verify the model against at least one literature source. Data should cover the full range of operating conditions in the literature source. Case specific property packages may be required.
- Literature source(s) must be documented in the test file header.
- Experimental literature sources are preferred.
- Model results should be confirmed for all available data points, and to within an acceptable error margin (ideally to within less than experimental error where available).
- Application of the IDAES convergence testing tools for at least one test case. This test case can use core property packages if desired.