
Gridded routing with lakes fails perfect restart #174

Closed
aubreyd opened this issue Sep 6, 2018 · 10 comments
Labels
bug Something isn't working
Milestone

Comments

@aubreyd
Collaborator

aubreyd commented Sep 6, 2018

As shown in CI testing, the Croton gridded routing configuration with lakes fails the perfect restart test.

Expected Behavior

Perfect restarts in all variables.

Current Behavior

Per CI testing, lake_inflort does not restart perfectly in the Croton test case gridded routing configuration. All other variables pass, so this may just be a reporting issue (i.e., it may not be impacting the physics/solution).

Test log here: https://travis-ci.org/NCAR/wrf_hydro_nwm_public/builds/425358497
hydro restart diff:
Variable Group Count ... Range Mean StdDev
0 lake_inflort / 3 ... 3.74963 2.08033 2.13999
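Conceptually, the diff table above reduces to an element-wise comparison of each variable across two restart files, summarizing only the elements that differ. A minimal sketch of that kind of comparison, assuming the variables have already been read into NumPy arrays (the `diff_summary` helper and the sample values are illustrative, not the actual test-suite code):

```python
import numpy as np

def diff_summary(candidate, reference, atol=0.0):
    """Summarize element-wise differences between two runs.

    candidate/reference: dicts mapping variable name -> np.ndarray.
    Returns one row per variable with differences exceeding atol,
    mirroring the columns of the perfect-restart diff table.
    """
    rows = []
    for name in sorted(candidate):
        d = candidate[name] - reference[name]
        mask = np.abs(d) > atol
        if not mask.any():
            continue  # this variable restarts perfectly
        vals = d[mask]
        rows.append({
            "Variable": name,
            "Count": int(mask.sum()),
            "Sum": float(vals.sum()),
            "Min": float(vals.min()),
            "Max": float(vals.max()),
            "Range": float(vals.max() - vals.min()),
            "Mean": float(vals.mean()),
            "StdDev": float(vals.std()),
        })
    return rows

# Illustrative data: one variable differs, one matches exactly.
ref = {"lake_inflort": np.array([1.0, 2.0, 3.0]), "qlakeo": np.array([0.5, 0.5])}
can = {"lake_inflort": np.array([1.0, 2.5, 3.1]), "qlakeo": np.array([0.5, 0.5])}
print(diff_summary(can, ref))
```

In the real test the arrays would come from the candidate and reference NetCDF restart files; only variables that fail the comparison appear in the table, which is why a clean pass prints nothing.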

@kafitzgerald kafitzgerald added this to the v5.0.x milestone Sep 6, 2018
@kafitzgerald kafitzgerald added bug Something isn't working testing Issues related to model testing and removed testing Issues related to model testing labels Sep 6, 2018
@tjmills

tjmills commented Sep 17, 2018

Updating this with new information...

On a longer-duration run, this failure becomes more severe, with multiple variables in multiple outputs showing diffs in the perfect restart test. Thus, all testing on CI has been extended to 5 days in an attempt to better capture these diffs.

@tjmills

tjmills commented Sep 17, 2018

I dug a little deeper on this and found the following:

  1. In contrast to v5.1.0-alphaX, the v5.0.2 release fails perfect restarts only for the variable lake_inflort.
  2. Gridded v5.0.2 passes perfect restarts when lakes are not included in the domain, indicating the problem is in code exercised by lakes.
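For context on what the perfect-restart test asserts: a cold-start run and a run restarted from a mid-run state must produce identical output. A toy sketch of that logic (the model, variable names, and update rules here are hypothetical stand-ins, not WRF-Hydro code); note that accumulated fields like the `lake_inflort`-style accumulator below only restart perfectly if they are carried through the restart state:

```python
# Toy "perfect restart" check: stepping 6 steps from the initial state
# must equal stepping 3 steps, snapshotting, then stepping 3 more.

def step(state):
    """One model time step over the toy state (a dict of variables)."""
    return {
        "inflow": state["inflow"],
        "cvol": state["cvol"] * 0.99 + state["inflow"],      # prognostic store
        "lake_inflort": state["lake_inflort"] + state["inflow"],  # accumulator
    }

def run(state, n_steps):
    for _ in range(n_steps):
        state = step(state)
    return state

init = {"inflow": 1.0, "cvol": 10.0, "lake_inflort": 0.0}

full = run(init, 6)          # cold-start run, 6 steps
midpoint = run(init, 3)      # "restart file" written at step 3
restarted = run(midpoint, 3) # restart run: 3 more steps from the snapshot

# A perfect restart requires every variable to match exactly.
print("perfect restart:", full == restarted)
```

If the snapshot dropped (or zeroed) `lake_inflort`, only that variable would diverge while the prognostic fields still matched, which is the single-variable failure pattern seen in the v5.0.2 logs.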
---------------- Starting WRF-Hydro Testing ----------------
Testing the configs: gridded, gridded_no_lakes


####################################
### TESTING:  ---  gridded  ---  ###
####################################

pytest -vv --tb=no --ignore=local -p no:cacheprovider  -s --ignore=tests/test_supp_1_channel_only.py  --html=/home/docker/test_out/wrfhydro_testing-gfort-gridded.html --self-contained-html --config gridded --compiler gfort --domain_dir /home/docker/example_case --candidate_dir /home/docker/test_out/candidate_can_pytest/trunk/NDHMS --reference_dir /home/docker/test_out/reference_ref_pytest/trunk/NDHMS --output_dir /home/docker/test_out --ncores 2
======================================== test session starts ========================================
platform linux -- Python 3.6.5, pytest-3.8.0, py-1.6.0, pluggy-0.7.1 -- /home/docker/miniconda3/bin/python
metadata: {'Python': '3.6.5', 'Platform': 'Linux-4.9.93-linuxkit-aufs-x86_64-with-debian-stretch-sid', 'Packages': {'pytest': '3.8.0', 'py': '1.6.0', 'pluggy': '0.7.1'}, 'Plugins': {'metadata': '1.7.0', 'html': '1.19.0', 'datadir-ng': '1.1.0'}}
rootdir: /home/docker, inifile:
plugins: metadata-1.7.0, html-1.19.0, datadir-ng-1.1.0
collected 9 items

tests/test_1_fundamental.py::test_compile_candidate
Question: The candidate compiles?


Model successfully compiled into /home/docker/test_out/gridded/compile_candidate
PASSED
tests/test_1_fundamental.py::test_compile_reference
Question: The reference compiles?


Model successfully compiled into /home/docker/test_out/gridded/compile_reference
PASSED
tests/test_1_fundamental.py::test_run_candidate
Question: The candidate runs successfully?


Composing simulation into directory:'/home/docker/test_out/gridded/run_candidate'
Getting domain files...
Making job directories...
Validating job input files
run_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_candidate:
    Wall start time: 2018-09-17 20:30:26
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_run_reference
Question: The reference runs successfully?


Composing simulation into directory:'/home/docker/test_out/gridded/run_reference'
Getting domain files...
Making job directories...
Validating job input files
run_reference
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_reference:
    Wall start time: 2018-09-17 20:30:41
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_ncores_candidate
Question: The candidate outputs from a ncores run match outputs from ncores-1 run?


Composing simulation into directory:'/home/docker/test_out/gridded/ncores_candidate'
Getting domain files...
Making job directories...
Validating job input files
ncores_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job ncores_candidate:
    Wall start time: 2018-09-17 20:30:56
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_perfrestart_candidate
Question: The candidate outputs from a restart run match the outputs from standard run?


Composing simulation into directory:'/home/docker/test_out/gridded/restart_candidate'
Getting domain files...
Making job directories...
Validating job input files
run_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_candidate:
    Wall start time: 2018-09-17 20:31:12
    Model start time: 2011-08-29 00:00
    Model end time: 2011-09-01 00:00
{'channel_rt': 0, 'channel_rt_grid': 0, 'chanobs': 0, 'lakeout': 0, 'gwout': 0, 'restart_hydro': 3, 'restart_lsm': 0}

restart_hydro

None
None
       Variable Group  Count     Sum  AbsSum       Min      Max    Range    Mean   StdDev
0  lake_inflort     /     10  983.35  983.35  0.093915  271.337  271.243  98.335  111.747
       Variable Group  Count     Sum  AbsSum       Min      Max    Range    Mean   StdDev
0  lake_inflort     /     10  983.35  983.35  0.093915  271.337  271.243  98.335  111.747
       Variable Group  Count     Sum  AbsSum       Min      Max    Range    Mean   StdDev
0  lake_inflort     /     10  983.35  983.35  0.093915  271.337  271.243  98.335  111.747
FAILED
tests/test_2_regression.py::test_regression_data
Question: The candidate run data values match the reference run?


PASSED
tests/test_2_regression.py::test_regression_metadata
Question: The candidate run output metadata match the reference run?


PASSED
tests/test_3_outputs.py::test_output_has_nans
Question: Outputs from all tests are free of nans in data and attributes


PASSED

---------- generated html file: /home/docker/test_out/wrfhydro_testing-gfort-gridded.html -----------
================================ 1 failed, 8 passed in 71.41 seconds ================================


#############################################
### TESTING:  ---  gridded_no_lakes  ---  ###
#############################################

pytest -vv --tb=no --ignore=local -p no:cacheprovider  -s --ignore=tests/test_supp_1_channel_only.py  --html=/home/docker/test_out/wrfhydro_testing-gfort-gridded_no_lakes.html --self-contained-html --config gridded_no_lakes --compiler gfort --domain_dir /home/docker/example_case --candidate_dir /home/docker/test_out/candidate_can_pytest/trunk/NDHMS --reference_dir /home/docker/test_out/reference_ref_pytest/trunk/NDHMS --output_dir /home/docker/test_out --ncores 2
======================================== test session starts ========================================
platform linux -- Python 3.6.5, pytest-3.8.0, py-1.6.0, pluggy-0.7.1 -- /home/docker/miniconda3/bin/python
metadata: {'Python': '3.6.5', 'Platform': 'Linux-4.9.93-linuxkit-aufs-x86_64-with-debian-stretch-sid', 'Packages': {'pytest': '3.8.0', 'py': '1.6.0', 'pluggy': '0.7.1'}, 'Plugins': {'metadata': '1.7.0', 'html': '1.19.0', 'datadir-ng': '1.1.0'}}
rootdir: /home/docker, inifile:
plugins: metadata-1.7.0, html-1.19.0, datadir-ng-1.1.0
collected 9 items

tests/test_1_fundamental.py::test_compile_candidate
Question: The candidate compiles?


Model successfully compiled into /home/docker/test_out/gridded_no_lakes/compile_candidate
PASSED
tests/test_1_fundamental.py::test_compile_reference
Question: The reference compiles?


Model successfully compiled into /home/docker/test_out/gridded_no_lakes/compile_reference
PASSED
tests/test_1_fundamental.py::test_run_candidate
Question: The candidate runs successfully?


Composing simulation into directory:'/home/docker/test_out/gridded_no_lakes/run_candidate'
Getting domain files...
Making job directories...
Validating job input files
run_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_candidate:
    Wall start time: 2018-09-17 20:31:38
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_run_reference
Question: The reference runs successfully?


Composing simulation into directory:'/home/docker/test_out/gridded_no_lakes/run_reference'
Getting domain files...
Making job directories...
Validating job input files
run_reference
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_reference:
    Wall start time: 2018-09-17 20:31:53
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_ncores_candidate
Question: The candidate outputs from a ncores run match outputs from ncores-1 run?


Composing simulation into directory:'/home/docker/test_out/gridded_no_lakes/ncores_candidate'
Getting domain files...
Making job directories...
Validating job input files
ncores_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job ncores_candidate:
    Wall start time: 2018-09-17 20:32:07
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_perfrestart_candidate
Question: The candidate outputs from a restart run match the outputs from standard run?


Composing simulation into directory:'/home/docker/test_out/gridded_no_lakes/restart_candidate'
Getting domain files...
Making job directories...
Validating job input files
run_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_candidate:
    Wall start time: 2018-09-17 20:32:23
    Model start time: 2011-08-29 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_2_regression.py::test_regression_data
Question: The candidate run data values match the reference run?


PASSED
tests/test_2_regression.py::test_regression_metadata
Question: The candidate run output metadata match the reference run?


PASSED
tests/test_3_outputs.py::test_output_has_nans
Question: Outputs from all tests are free of nans in data and attributes


PASSED

------ generated html file: /home/docker/test_out/wrfhydro_testing-gfort-gridded_no_lakes.html ------
===================================== 9 passed in 69.67 seconds =====================================


##################################
###  ---  TESTING FAILED  ---  ###
##################################

@laurareads @dnyates

@tjmills

tjmills commented Sep 17, 2018

There also appears to be a domain issue; this could be in the files or in the namelists. Need to do more digging.

Running v5.1.0 code with the v5.0.2 domain yields the same minimal failure: perfect restarts fail only on the lake_inflort variable.

Running v5.1.0 code with the dev domain yields numerous perfect restart failures.

Testing the configs: gridded


####################################
### TESTING:  ---  gridded  ---  ###
####################################

pytest -vv --tb=no --ignore=local -p no:cacheprovider  -s --ignore=tests/test_supp_1_channel_only.py  --html=/home/docker/test_out/wrfhydro_testing-gfort-gridded.html --self-contained-html --config gridded --compiler gfort --domain_dir /home/docker/test_out/example_case --candidate_dir /home/docker/test_out/candidate_can_pytest/trunk/NDHMS --reference_dir /home/docker/test_out/reference_ref_pytest/trunk/NDHMS --output_dir /home/docker/test_out --ncores 2
======================================== test session starts ========================================
platform linux -- Python 3.6.5, pytest-3.8.0, py-1.6.0, pluggy-0.7.1 -- /home/docker/miniconda3/bin/python
metadata: {'Python': '3.6.5', 'Platform': 'Linux-4.9.93-linuxkit-aufs-x86_64-with-debian-stretch-sid', 'Packages': {'pytest': '3.8.0', 'py': '1.6.0', 'pluggy': '0.7.1'}, 'Plugins': {'metadata': '1.7.0', 'html': '1.19.0', 'datadir-ng': '1.1.0'}}
rootdir: /home/docker/test_out, inifile:
plugins: metadata-1.7.0, html-1.19.0, datadir-ng-1.1.0
collected 9 items

tests/test_1_fundamental.py::test_compile_candidate
Question: The candidate compiles?


Model successfully compiled into /home/docker/test_out/gridded/compile_candidate
PASSED
tests/test_1_fundamental.py::test_compile_reference
Question: The reference compiles?


Model successfully compiled into /home/docker/test_out/gridded/compile_reference
PASSED
tests/test_1_fundamental.py::test_run_candidate
Question: The candidate runs successfully?


Composing simulation into directory:'/home/docker/test_out/gridded/run_candidate'
Getting domain files...
Making job directories...
Validating job input files
run_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_candidate:
    Wall start time: 2018-09-17 21:07:56
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_run_reference
Question: The reference runs successfully?


Composing simulation into directory:'/home/docker/test_out/gridded/run_reference'
Getting domain files...
Making job directories...
Validating job input files
run_reference
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_reference:
    Wall start time: 2018-09-17 21:08:12
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_ncores_candidate
Question: The candidate outputs from a ncores run match outputs from ncores-1 run?


Composing simulation into directory:'/home/docker/test_out/gridded/ncores_candidate'
Getting domain files...
Making job directories...
Validating job input files
ncores_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job ncores_candidate:
    Wall start time: 2018-09-17 21:08:29
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_perfrestart_candidate
Question: The candidate outputs from a restart run match the outputs from standard run?


Composing simulation into directory:'/home/docker/test_out/gridded/restart_candidate'
Getting domain files...
Making job directories...
Validating job input files
run_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_candidate:
    Wall start time: 2018-09-17 21:08:46
    Model start time: 2011-08-29 00:00
    Model end time: 2011-09-01 00:00
{'channel_rt': 3, 'channel_rt_grid': 2, 'chanobs': 1, 'lakeout': 3, 'gwout': 0, 'restart_hydro': 3, 'restart_lsm': 0}

channel_rt

    Variable Group  Count       Sum    AbsSum       Min       Max  Range      Mean  StdDev
0  q_lateral     /      1  0.000555  0.000555  0.000555  0.000555      0  0.000555       0
     Variable Group  Count       Sum    ...          Max     Range      Mean    StdDev
0  streamflow     /      9 -0.004829    ...     0.007244  0.015204 -0.000537  0.005906
1   q_lateral     /      1  0.000578    ...     0.000578  0.000000  0.000578  0.000000
2        Head     /      6 -0.000229    ...     0.000229  0.000458 -0.000038  0.000213

[3 rows x 10 columns]
     Variable Group  Count       Sum    ...          Max     Range      Mean    StdDev
0  streamflow     /      2  0.003782    ...     0.004499  0.005217  0.001891  0.003689
1   q_lateral     /      1  0.000566    ...     0.000566  0.000000  0.000566  0.000000
2        Head     /      1  0.000114    ...     0.000114  0.000000  0.000114  0.000000

[3 rows x 10 columns]

channel_rt_grid

None
     Variable Group  Count       Sum    ...          Max     Range      Mean    StdDev
0  streamflow     /      9 -0.004829    ...     0.007244  0.015204 -0.000537  0.005906

[1 rows x 10 columns]
     Variable Group  Count       Sum    ...          Max     Range      Mean    StdDev
0  streamflow     /      2  0.003782    ...     0.004499  0.005217  0.001891  0.003689

[1 rows x 10 columns]

chanobs

None
     Variable Group  Count       Sum    AbsSum       Min       Max  Range      Mean  StdDev
0  streamflow     /      1  0.000883  0.000883  0.000883  0.000883      0  0.000883       0
None

lakeout

         Variable Group  Count       Sum   ...         Max  Range      Mean  StdDev
0  water_sfc_elev     /      1  0.000061   ...    0.000061      0  0.000061       0
1         outflow     /      1  0.000557   ...    0.000557      0  0.000557       0

[2 rows x 10 columns]
         Variable Group  Count       Sum   ...         Max  Range      Mean  StdDev
0  water_sfc_elev     /      1  0.000061   ...    0.000061      0  0.000061       0
1         outflow     /      1  0.000578   ...    0.000578      0  0.000578       0

[2 rows x 10 columns]
         Variable Group  Count       Sum   ...         Max  Range      Mean  StdDev
0  water_sfc_elev     /      1  0.000061   ...    0.000061      0  0.000061       0
1         outflow     /      1  0.000568   ...    0.000568      0  0.000568       0

[2 rows x 10 columns]

restart_hydro

None
None
       Variable Group  Count          Sum    ...            Max     Range        Mean    StdDev
0          cvol     /     14    73.696500    ...      72.000000   71.9995    5.264040   19.2081
1         resht     /      1     0.000061    ...       0.000061    0.0000    0.000061    0.0000
2        qlakeo     /      1     0.000557    ...       0.000557    0.0000    0.000557    0.0000
3  lake_inflort     /      9  2151.540000    ...     665.156000  661.2950  239.060000  237.0410

[4 rows x 10 columns]
       Variable Group  Count          Sum     ...             Max       Range        Mean      StdDev
0         hlink     /      6    -0.000229     ...        0.000229    0.000458   -0.000038    0.000213
1        qlink1     /      9    -0.004829     ...        0.007244    0.015204   -0.000537    0.005906
2          cvol     /     15   425.245000     ...      424.000000  424.085000   28.349700  109.454000
3         resht     /      1     0.000061     ...        0.000061    0.000000    0.000061    0.000000
4        qlakeo     /      1     0.000578     ...        0.000578    0.000000    0.000578    0.000000
5  lake_inflort     /      9  2151.510000     ...      665.156000  661.295000  239.057000  237.043000

[6 rows x 10 columns]
       Variable Group  Count          Sum     ...             Max       Range        Mean      StdDev
0         hlink     /      1     0.000114     ...        0.000114    0.000000    0.000114    0.000000
1        qlink1     /      2     0.003782     ...        0.004499    0.005217    0.001891    0.003689
2          cvol     /     15   449.676000     ...      448.000000  448.000000   29.978400  115.642000
3         resht     /      1     0.000061     ...        0.000061    0.000000    0.000061    0.000000
4        qlakeo     /      1     0.000568     ...        0.000568    0.000000    0.000568    0.000000
5  lake_inflort     /      9  2151.480000     ...      665.155000  661.295000  239.053000  237.044000

[6 rows x 10 columns]
FAILED
tests/test_2_regression.py::test_regression_data
Question: The candidate run data values match the reference run?


PASSED
tests/test_2_regression.py::test_regression_metadata
Question: The candidate run output metadata match the reference run?


PASSED
tests/test_3_outputs.py::test_output_has_nans
Question: Outputs from all tests are free of nans in data and attributes


PASSED

---------- generated html file: /home/docker/test_out/wrfhydro_testing-gfort-gridded.html -----------
========================================= warnings summary ==========================================
/home/docker/miniconda3/lib/python3.6/site-packages/wrfhydropy/core/simulation.py:225: UserWarning: Model minor versions v5.1.0-alpha9
 do not match domain minor versions v5.1.0-alpha8
  domain.compatible_version)

/home/docker/miniconda3/lib/python3.6/site-packages/wrfhydropy/core/simulation.py:225: UserWarning: Model minor versions v5.1.0-alpha9
 do not match domain minor versions v5.1.0-alpha8
  domain.compatible_version)

-- Docs: https://docs.pytest.org/en/latest/warnings.html
========================== 1 failed, 8 passed, 2 warnings in 78.53 seconds ==========================


##################################
###  ---  TESTING FAILED  ---  ###
##################################


docker@7ef6a8957a3e:~$ mv test_out/ test_out_dev_domain_v5_1_0
docker@7ef6a8957a3e:~$ python run_tests_docker.py --config gridded --domain_tag v5.0.2
downloading asset croton_NY_example_testcase.tar.gz to /home/docker/test_out


---------------- Starting WRF-Hydro Testing ----------------
Testing the configs: gridded


####################################
### TESTING:  ---  gridded  ---  ###
####################################

pytest -vv --tb=no --ignore=local -p no:cacheprovider  -s --ignore=tests/test_supp_1_channel_only.py  --html=/home/docker/test_out/wrfhydro_testing-gfort-gridded.html --self-contained-html --config gridded --compiler gfort --domain_dir /home/docker/test_out/example_case --candidate_dir /home/docker/test_out/candidate_can_pytest/trunk/NDHMS --reference_dir /home/docker/test_out/reference_ref_pytest/trunk/NDHMS --output_dir /home/docker/test_out --ncores 2
======================================== test session starts ========================================
platform linux -- Python 3.6.5, pytest-3.8.0, py-1.6.0, pluggy-0.7.1 -- /home/docker/miniconda3/bin/python
metadata: {'Python': '3.6.5', 'Platform': 'Linux-4.9.93-linuxkit-aufs-x86_64-with-debian-stretch-sid', 'Packages': {'pytest': '3.8.0', 'py': '1.6.0', 'pluggy': '0.7.1'}, 'Plugins': {'metadata': '1.7.0', 'html': '1.19.0', 'datadir-ng': '1.1.0'}}
rootdir: /home/docker/test_out, inifile:
plugins: metadata-1.7.0, html-1.19.0, datadir-ng-1.1.0
collected 9 items

tests/test_1_fundamental.py::test_compile_candidate
Question: The candidate compiles?


Model successfully compiled into /home/docker/test_out/gridded/compile_candidate
PASSED
tests/test_1_fundamental.py::test_compile_reference
Question: The reference compiles?


Model successfully compiled into /home/docker/test_out/gridded/compile_reference
PASSED
tests/test_1_fundamental.py::test_run_candidate
Question: The candidate runs successfully?


Composing simulation into directory:'/home/docker/test_out/gridded/run_candidate'
Getting domain files...
Making job directories...
Validating job input files
run_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_candidate:
    Wall start time: 2018-09-17 21:10:36
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_run_reference
Question: The reference runs successfully?


Composing simulation into directory:'/home/docker/test_out/gridded/run_reference'
Getting domain files...
Making job directories...
Validating job input files
run_reference
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_reference:
    Wall start time: 2018-09-17 21:10:52
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_ncores_candidate
Question: The candidate outputs from a ncores run match outputs from ncores-1 run?


Composing simulation into directory:'/home/docker/test_out/gridded/ncores_candidate'
Getting domain files...
Making job directories...
Validating job input files
ncores_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job ncores_candidate:
    Wall start time: 2018-09-17 21:11:07
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_perfrestart_candidate
Question: The candidate outputs from a restart run match the outputs from standard run?


Composing simulation into directory:'/home/docker/test_out/gridded/restart_candidate'
Getting domain files...
Making job directories...
Validating job input files
run_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_candidate:
    Wall start time: 2018-09-17 21:11:23
    Model start time: 2011-08-29 00:00
    Model end time: 2011-09-01 00:00
{'channel_rt': 0, 'channel_rt_grid': 0, 'chanobs': 0, 'lakeout': 0, 'gwout': 0, 'restart_hydro': 3, 'restart_lsm': 0}

restart_hydro

None
None
       Variable Group  Count      Sum   AbsSum       Min      Max    Range     Mean   StdDev
0  lake_inflort     /     10  983.381  983.381  0.093915  271.353  271.259  98.3381  111.752
       Variable Group  Count      Sum   AbsSum       Min      Max    Range     Mean   StdDev
0  lake_inflort     /     10  983.381  983.381  0.093915  271.353  271.259  98.3381  111.752
       Variable Group  Count      Sum   AbsSum       Min      Max    Range     Mean   StdDev
0  lake_inflort     /     10  983.381  983.381  0.093915  271.353  271.259  98.3381  111.752
FAILED
tests/test_2_regression.py::test_regression_data
Question: The candidate run data values match the reference run?


PASSED
tests/test_2_regression.py::test_regression_metadata
Question: The candidate run output metadata match the reference run?


PASSED
tests/test_3_outputs.py::test_output_has_nans
Question: Outputs from all tests are free of nans in data and attributes


PASSED

---------- generated html file: /home/docker/test_out/wrfhydro_testing-gfort-gridded.html -----------
========================================= warnings summary ==========================================
/home/docker/miniconda3/lib/python3.6/site-packages/wrfhydropy/core/simulation.py:225: UserWarning: Model minor versions v5.1.0-alpha9
 do not match domain minor versions v5.0.1
  domain.compatible_version)

/home/docker/miniconda3/lib/python3.6/site-packages/wrfhydropy/core/simulation.py:225: UserWarning: Model minor versions v5.1.0-alpha9
 do not match domain minor versions v5.0.1
  domain.compatible_version)

-- Docs: https://docs.pytest.org/en/latest/warnings.html
========================== 1 failed, 8 passed, 2 warnings in 74.16 seconds ==========================


##################################
###  ---  TESTING FAILED  ---  ###
##################################```

@tjmills

tjmills commented Sep 17, 2018

Next step is to run the v5.0.2 code with the dev domain.

@tjmills

tjmills commented Sep 17, 2018

v5.0.2 code with the dev domain produces many failures. So, the numerous failures in the v5.1.0 code could be due to the dev domain or the associated namelist JSON files.

---------------- Starting WRF-Hydro Testing ----------------
Testing the configs: gridded


####################################
### TESTING:  ---  gridded  ---  ###
####################################

pytest -vv --ignore=local -p no:cacheprovider  -s --ignore=tests/test_supp_1_channel_only.py  --html=/home/docker/test_out/wrfhydro_testing-gfort-gridded.html --self-contained-html --config gridded --compiler gfort --domain_dir /home/docker/example_case --candidate_dir /home/docker/test_out/candidate_can_pytest/trunk/NDHMS --reference_dir /home/docker/test_out/reference_ref_pytest/trunk/NDHMS --output_dir /home/docker/test_out --ncores 2
======================================== test session starts ========================================
platform linux -- Python 3.6.5, pytest-3.8.0, py-1.6.0, pluggy-0.7.1 -- /home/docker/miniconda3/bin/python
metadata: {'Python': '3.6.5', 'Platform': 'Linux-4.9.93-linuxkit-aufs-x86_64-with-debian-stretch-sid', 'Packages': {'pytest': '3.8.0', 'py': '1.6.0', 'pluggy': '0.7.1'}, 'Plugins': {'metadata': '1.7.0', 'html': '1.19.0', 'datadir-ng': '1.1.0'}}
rootdir: /home/docker, inifile:
plugins: metadata-1.7.0, html-1.19.0, datadir-ng-1.1.0
collected 9 items

tests/test_1_fundamental.py::test_compile_candidate
Question: The candidate compiles?


Model successfully compiled into /home/docker/test_out/gridded/compile_candidate
PASSED
tests/test_1_fundamental.py::test_compile_reference
Question: The reference compiles?


Model successfully compiled into /home/docker/test_out/gridded/compile_reference
PASSED
tests/test_1_fundamental.py::test_run_candidate
Question: The candidate runs successfully?


Composing simulation into directory:'/home/docker/test_out/gridded/run_candidate'
Getting domain files...
Making job directories...
Validating job input files
run_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_candidate:
    Wall start time: 2018-09-17 22:20:15
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_run_reference
Question: The reference runs successfully?


Composing simulation into directory:'/home/docker/test_out/gridded/run_reference'
Getting domain files...
Making job directories...
Validating job input files
run_reference
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_reference:
    Wall start time: 2018-09-17 22:20:31
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_ncores_candidate
Question: The candidate outputs from a ncores run match outputs from ncores-1 run?


Composing simulation into directory:'/home/docker/test_out/gridded/ncores_candidate'
Getting domain files...
Making job directories...
Validating job input files
ncores_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job ncores_candidate:
    Wall start time: 2018-09-17 22:20:48
    Model start time: 2011-08-26 00:00
    Model end time: 2011-09-01 00:00
PASSED
tests/test_1_fundamental.py::test_perfrestart_candidate
Question: The candidate outputs from a restart run match the outputs from standard run?


Composing simulation into directory:'/home/docker/test_out/gridded/restart_candidate'
Getting domain files...
Making job directories...
Validating job input files
run_candidate
Model already compiled, copying files...
Simulation successfully composed

waiting for job to complete...
Running job run_candidate:
    Wall start time: 2018-09-17 22:21:05
    Model start time: 2011-08-29 00:00
    Model end time: 2011-09-01 00:00
{'channel_rt': 3, 'channel_rt_grid': 3, 'chanobs': 1, 'lakeout': 3, 'gwout': 0, 'restart_hydro': 3, 'restart_lsm': 0}

channel_rt

     Variable Group  Count       Sum    ...          Max     Range      Mean    StdDev
0  streamflow     /      2  0.003963    ...     0.004059  0.004154  0.001982  0.002937
1   q_lateral     /      1  0.000617    ...     0.000617  0.000000  0.000617  0.000000

[2 rows x 10 columns]
     Variable Group  Count       Sum    ...          Max     Range      Mean    StdDev
0  streamflow     /      4  0.009365    ...     0.007380  0.009766  0.002341  0.004411
1   q_lateral     /      1  0.000696    ...     0.000696  0.000000  0.000696  0.000000

[2 rows x 10 columns]
     Variable Group  Count       Sum    ...          Max     Range      Mean    StdDev
0  streamflow     /      8  0.008157    ...     0.006170  0.010797  0.001020  0.003674
1   q_lateral     /      1  0.000698    ...     0.000698  0.000000  0.000698  0.000000

[2 rows x 10 columns]

channel_rt_grid

     Variable Group  Count       Sum    ...          Max     Range      Mean    StdDev
0  streamflow     /      2  0.003963    ...     0.004059  0.004154  0.001982  0.002937

[1 rows x 10 columns]
     Variable Group  Count       Sum    ...         Max     Range      Mean    StdDev
0  streamflow     /      4  0.009365    ...     0.00738  0.009766  0.002341  0.004411

[1 rows x 10 columns]
     Variable Group  Count       Sum    ...         Max     Range     Mean    StdDev
0  streamflow     /      8  0.008157    ...     0.00617  0.010797  0.00102  0.003674

[1 rows x 10 columns]

chanobs

None
     Variable Group  Count      Sum   AbsSum      Min      Max  Range     Mean  StdDev
0  streamflow     /      1  0.00738  0.00738  0.00738  0.00738      0  0.00738       0
None

lakeout

         Variable Group  Count       Sum   ...         Max  Range      Mean  StdDev
0  water_sfc_elev     /      1  0.000076   ...    0.000076      0  0.000076       0
1         outflow     /      1  0.000617   ...    0.000617      0  0.000617       0

[2 rows x 10 columns]
         Variable Group  Count       Sum   ...         Max  Range      Mean  StdDev
0  water_sfc_elev     /      1  0.000076   ...    0.000076      0  0.000076       0
1         outflow     /      1  0.000696   ...    0.000696      0  0.000696       0

[2 rows x 10 columns]
         Variable Group  Count       Sum   ...         Max  Range      Mean  StdDev
0  water_sfc_elev     /      1  0.000076   ...    0.000076      0  0.000076       0
1         outflow     /      1  0.000698   ...    0.000698      0  0.000698       0

[2 rows x 10 columns]

restart_hydro

None
None
       Variable Group  Count          Sum     ...             Max       Range        Mean      StdDev
0         hlink     /      1     0.000114     ...        0.000114    0.000000    0.000114    0.000000
1        qlink1     /      2     0.003963     ...        0.004059    0.004154    0.001982    0.002937
2          cvol     /     15    41.965300     ...       40.000000   40.000300    2.797690   10.292100
3         resht     /      1     0.000076     ...        0.000076    0.000000    0.000076    0.000000
4        qlakeo     /      1     0.000617     ...        0.000617    0.000000    0.000617    0.000000
5  lake_inflort     /      9  2151.660000     ...      665.250000  661.390000  239.073000  237.071000

[6 rows x 10 columns]
       Variable Group  Count          Sum     ...             Max       Range        Mean      StdDev
0         hlink     /      2     0.000343     ...        0.000229    0.000114    0.000172    0.000081
1        qlink1     /      4     0.009365     ...        0.007380    0.009766    0.002341    0.004411
2          cvol     /     15    90.276300     ...       88.000000   88.013900    6.018420   22.680000
3         resht     /      1     0.000076     ...        0.000076    0.000000    0.000076    0.000000
4        qlakeo     /      1     0.000696     ...        0.000696    0.000000    0.000696    0.000000
5  lake_inflort     /      9  2151.610000     ...      665.250000  661.390000  239.068000  237.074000

[6 rows x 10 columns]
       Variable Group  Count          Sum     ...             Max       Range        Mean      StdDev
0         hlink     /      4     0.000343     ...        0.000229    0.000343    0.000086    0.000144
1        qlink1     /      8     0.008157     ...        0.006170    0.010797    0.001020    0.003674
2          cvol     /     15   298.135000     ...      296.000000  296.050000   19.875700   76.387600
3         resht     /      1     0.000076     ...        0.000076    0.000000    0.000076    0.000000
4        qlakeo     /      1     0.000698     ...        0.000698    0.000000    0.000698    0.000000
5  lake_inflort     /      9  2151.580000     ...      665.251000  661.390000  239.064000  237.075000

[6 rows x 10 columns]
FAILED
tests/test_2_regression.py::test_regression_data
Question: The candidate run data values match the reference run?


PASSED
tests/test_2_regression.py::test_regression_metadata
Question: The candidate run output metadata match the reference run?


PASSED
tests/test_3_outputs.py::test_output_has_nans
Question: Outputs from all tests are free of nans in data and attributes


PASSED
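The per-variable diff tables above (Count, Sum, Min, Max, Range, Mean, StdDev of restart-vs-candidate differences) can be reproduced with a minimal sketch like the following. This is not the actual wrf_hydro_nwm test harness code; the input dicts and function name are hypothetical, and only nonzero differences are reported, matching how perfectly-restarting variables (e.g. `gwout`, `restart_lsm` with count 0) drop out of the tables.

```python
# Hypothetical sketch of the diff-summary computation shown in the log above.
# Variables whose values match exactly are omitted, mirroring the zero counts
# reported for gwout and restart_lsm.
import math

def diff_stats(ref, cand):
    """Summarize nonzero differences (cand - ref) per variable."""
    rows = {}
    for name in ref:
        diffs = [b - a for a, b in zip(ref[name], cand[name]) if a != b]
        if not diffs:
            continue  # variable restarts perfectly; leave it out of the report
        n = len(diffs)
        mean = sum(diffs) / n
        # Sample standard deviation, consistent with the StdDev column above
        std = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1)) if n > 1 else 0.0
        rows[name] = {
            "Count": n,
            "Sum": sum(diffs),
            "Min": min(diffs),
            "Max": max(diffs),
            "Range": max(diffs) - min(diffs),
            "Mean": mean,
            "StdDev": std,
        }
    return rows

# Example with made-up values: qlakeo differs at one point, resht matches exactly.
stats = diff_stats(
    {"qlakeo": [1.0, 2.0], "resht": [5.0]},
    {"qlakeo": [1.0, 2.000617], "resht": [5.0]},
)
```

In the real test, the inputs would come from the NetCDF restart/output files of the continuous and restarted runs; any nonzero row fails the perfect-restart check.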

@tjmills


tjmills commented Sep 17, 2018

The v5.0.2 domain has the following namelist settings for gridded

            "dxrt": 1000,
            "aggfactrt": 4

Whereas the dev domain has these:

            "dxrt": 250,
            "aggfactrt": 4

Changing the dev domain settings to match the v5.0.2 settings results in consistent gridded behavior between the v5.0.2 and v5.1.0 code with the v5.0.2 and dev domains, respectively.

Unfortunately, I believe the dev domain settings are correct.

@aubreyd @laurareads ?
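The two settings above imply different land-surface grid spacings, which is why only one can be right. A minimal consistency check, assuming (as in the NWM Croton domain) a 1 km LSM grid; the namelist names `dxrt` (routing grid spacing in meters) and `aggfactrt` (LSM-to-routing aggregation factor) are taken from the snippets above, while the helper name is hypothetical:

```python
# Hedged sketch: the routing grid must tile the LSM grid, so
# dxrt * aggfactrt should equal the LSM grid spacing.
LSM_DX_M = 1000  # assumed 1 km LSM grid, per the NWM convention

def routing_settings_consistent(dxrt, aggfactrt, lsm_dx=LSM_DX_M):
    """Check that the routing grid spacing and aggregation factor imply lsm_dx."""
    return dxrt * aggfactrt == lsm_dx

# v5.0.2 domain: dxrt=1000, aggfactrt=4 -> implies a 4 km LSM grid (inconsistent)
# dev domain:    dxrt=250,  aggfactrt=4 -> implies a 1 km LSM grid (consistent)
```

Under that assumption, the dev settings (dxrt=250, aggfactrt=4) are the consistent pair.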

@aubreyd
Collaborator Author

aubreyd commented Sep 18, 2018

As far as I know, the gridded routing files are consistent with the NWM full routing, so should be 250 with an agg factor of 4. That is what I have been using for my local tests and appears to match the domain files.

@tjmills

tjmills commented Sep 18, 2018

OK, this change has been applied to the bugfix-5.0.x release branch, with an updated croton_NY release asset.

@aubreyd
Collaborator Author

aubreyd commented Sep 21, 2018

Just linking this more recent error report, since the variables impacted have changed a bit since the first report. You have to run a few days for the diffs to show up. @dnyates is digging in.

https://travis-ci.org/NCAR/wrf_hydro_nwm_public/builds/430627267

@kafitzgerald
Contributor

Fixed in PR #205
