From fa824dfad3468d3ecb9fa6a67a83e28075697c9c Mon Sep 17 00:00:00 2001 From: Katelyn FitzGerald Date: Mon, 12 Nov 2018 20:31:21 -0700 Subject: [PATCH] Add the coupled lesson to internal folder --- lessons/internal/Lesson-coupled.ipynb | 539 ++++++++++++++++++++++++++ 1 file changed, 539 insertions(+) create mode 100644 lessons/internal/Lesson-coupled.ipynb diff --git a/lessons/internal/Lesson-coupled.ipynb b/lessons/internal/Lesson-coupled.ipynb new file mode 100644 index 000000000..9415fa7d4 --- /dev/null +++ b/lessons/internal/Lesson-coupled.ipynb @@ -0,0 +1,539 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "toc": true + }, + "source": [ + "

<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n",
+ "<div class=\"toc\"><ul class=\"toc-item\"></ul></div>
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Lesson - Running the coupled WRF | WRF-Hydro modeling system\n", + "\n", + "## Overview\n", + "In this lesson, we cover the basics of how to run a coupled WRF | WRF-Hydro simulation using a prepared domain from the provided coupled test case. In particular, we cover how to run the necessary WRF Preprocessing System (WPS) utitilies and the coupled modeling system itself. For more information regarding the [WRF](http://www2.mmm.ucar.edu/wrf/users/docs/user_guide_v4/contents.html) and [WRF-Hydro](https://ral.ucar.edu/projects/wrf_hydro/technical-description-user-guide) modeling systems please consult the relevant documentation. \n", + "\n", + "### Software and conventions\n", + "The easiest way to run this lesson is via the [wrfhydro/coupled_training](https://hub.docker.com/r/wrfhydro/coupled_training/) Docker container, which has the pre-compiled modeling system and required WPS geographic data installed.\n", + "\n", + "You may either execute commands by running each cell of this notebook or alternatively you may open a terminal in Jupyter Lab by selecting `New -> Terminal` in your `Home` tab of Jupyter Lab and input the commands manually if you prefer. You can also use your own terminal by logging into the container with the command `docker exec -it wrf-hydro-coupled-training bash`\n", + "\n", + "All paths used in this lesson assume that the lesson materials are located under your home directory in a folder named *wrf-hydro-training*. If your materials are located in another directory, you will not be able to run the commands in this notebook inside Jupyter and will need to type them manually in your terminal session. \n", + "\n", + "### Container contents\n", + "The [wrfhydro/coupled_training](https://hub.docker.com/r/wrfhydro/coupled_training/) Docker container developed for this lesson contains the following:\n", + "- The precompiled WRF Preprocessing System (WPS) and required data\n", + "- The precompiled coupled WRF | WRF-Hydro modeling system\n", + "- A coupled test case compatible with the specified version\n", + "- Training lesson as a Jupyter Notebook\n", + "- Jupyter Notebook server\n", + "\n", + "## Running the WRF Preprocessing System (WPS)\n", + "\n", + "We'll start the lesson by running the WRF Preprocessing System (WPS) utilities.\n", + "\n", + "WPS consists of a set of utilities developed to aid users in model domain generation and formatting/processing of relevant input datasets for both the land and atmosphere.\n", + "\n", + "The three primary utilities included in the WPS are geogrid, ungrib, and metgrid. The following sections briefly describe each of these utilities step through the process of running each of them.\n", + "\n", + "### Running the geogrid utility\n", + "\n", + "The first of the WPS utilities is geogrid. The geogrid utility creates geogrid domain files from geospatial data (distributed by the WRF developers) based upon model domain parameters specified in the geogrid portion of the namelist.wps file as well as interpolation options specified in the GEOGRID.TBL file (distributed with WPS).\n", + "\n", + "Before running the utility we need to create and populate our run directory as shown below." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Create a run directory for WPS and navigate there\n", + "mkdir -p ~/wrf-hydro-training/output/run/WPS\n", + "cd ~/wrf-hydro-training/output/run/WPS\n", + "\n", + "# Link the required geospatial data to the proper folder\n", + "# Note that only a subset of the full dataset is included in this container\n", + "ln -sf ~/WRF_WPS/geog_conus geog\n", + "\n", + "# Link the precompiled executable\n", + "ln -sf ~/WRF_WPS/WPS/geogrid.exe .\n", + "\n", + "# Copy over the required geogrid parameter table from the WPS/geogrid directory\n", + "cp ~/WRF_WPS/WPS/geogrid/GEOGRID.TBL .\n", + "\n", + "# Copy over the preconfigured WPS namelist from the example case\n", + "cp ~/wrf-hydro-training/example_case_coupled/namelist.wps ." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next we'll take a look at the WPS namelist. Recall from above that the relevant portion of the namelist for this utility is called *geogrid*." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "cat ~/wrf-hydro-training/output/run/WPS/namelist.wps" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we will go ahead and run the geogrid utility and view the output log file." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Navigate to the run directory\n", + "cd ~/wrf-hydro-training/output/run/WPS\n", + "\n", + "# Run the geogrid utility and pipe the output to a log file\n", + "./geogrid.exe > geogrid.log 2>&1\n", + "\n", + "# View the last few lines of the log file to check for successful completion\n", + "tail geogrid.log" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Running the ungrib utility\n", + "\n", + "The next WPS utility is ungrib. The ungrib utility takes meteorological forcing data (included in the example case) to be used for simulation initial and boundary conditions and converts the files to an intermediate file format used by the metgrid utility.\n", + "\n", + "Again, we first populate the run directory with the required input files." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Navigate to the run directory\n", + "cd ~/wrf-hydro-training/output/run/WPS\n", + "\n", + "# Link the required shell script and precompiled executable from WPS\n", + "ln -sf ~/WRF_WPS/WPS/link_grib.csh .\n", + "ln -sf ~/WRF_WPS/WPS/ungrib.exe ." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next we link the meteorological forcing data from the example case using the provided shell script. This script links the files to the run directory and renames them appropriately.\n", + "\n", + "We also link the relevant variable table (NAM in this case) provided with WPS at this point." 
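+ "\n",
+ "For reference, `link_grib.csh` simply creates symbolic links named GRIBFILE.AAA, GRIBFILE.AAB, and so on, which is the naming that `ungrib.exe` expects. If you want to confirm that the links and the Vtable are in place after running the next cell, a quick listing is enough (an optional check, not a required part of the workflow):\n",
+ "\n",
+ "```bash\n",
+ "# Optional check after running the linking cell below\n",
+ "cd ~/wrf-hydro-training/output/run/WPS\n",
+ "ls -l GRIBFILE.* Vtable\n",
+ "```"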
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Navigate to the run directory\n", + "cd ~/wrf-hydro-training/output/run/WPS\n", + "\n", + "# Link the example case meteorological forcing data with the provided shell script\n", + "./link_grib.csh ~/wrf-hydro-training/example_case_coupled/WRF_FORCING/*\n", + "\n", + "# Link the relevant variable table for the meteorological forcing data we are using\n", + "ln -sf ~/WRF_WPS/WPS/ungrib/Variable_Tables/Vtable.NAM Vtable" + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we are ready to run the ungrib utility as shown below. Recall that there is also an ungrib section in the namelist.wps. Again, these are preconfigured for your example case. " + ] + },
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Navigate to the run directory\n", + "cd ~/wrf-hydro-training/output/run/WPS\n", + "\n", + "# Run the ungrib utility and pipe the output to a log file\n", + "./ungrib.exe > ungrib.log 2>&1\n", + "\n", + "# View the last few lines of the log file to check for successful completion\n", + "tail ungrib.log" + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Running the metgrid utility\n", + "\n", + "Finally, we will run the metgrid utility. The metgrid utility horizontally interpolates the meteorological forcing data to the model domain, creating metgrid files that are used as input to the WRF real program.\n", + "\n", + "Once again, our first step is to populate the run directory with the required input files." + ] + },
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Navigate to the run directory\n", + "cd ~/wrf-hydro-training/output/run/WPS\n", + "\n", + "# Copy over the required metgrid parameter table from the WPS/metgrid directory\n", + "cp ~/WRF_WPS/WPS/metgrid/METGRID.TBL .\n", + "\n", + "# Link the relevant executable\n", + "ln -sf ~/WRF_WPS/WPS/metgrid.exe ." + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we are ready to run the metgrid utility as shown below. Recall that there is also a metgrid section in the namelist.wps. Again, these are preconfigured for your example case." + ] + },
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Navigate to the run directory\n", + "cd ~/wrf-hydro-training/output/run/WPS\n", + "\n", + "# Run the metgrid utility and pipe the output to a log file\n", + "./metgrid.exe > metgrid.log 2>&1\n", + "\n", + "# View the last few lines of the log file to check for successful completion\n", + "tail metgrid.log" + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Running the coupled WRF | WRF-Hydro modeling system\n", + "\n", + "Now that we have generated the geogrid and metgrid files for the model domains using the WPS utilities, we are ready to start the process of running the coupled modeling system. \n", + "\n", + "Note that at this point in a real simulation (over a domain of your choice), you would need to take the geogrid file from the innermost domain and run the WRF-Hydro GIS preprocessing tools to generate your hydro domain files.
In this example case, all of the required files are generated for you and included in the DOMAIN directory within the example case.\n", + "\n", + "**Setting up your run directory**\n", + "\n", + "First, we'll go ahead and set up the run directory for the coupled model simulation as shown below." + ] + },
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Copy the run directory from the precompiled WRF directory (this was compiled with hydro)\n", + "cp -RL ~/WRF_WPS/WRF/run ~/wrf-hydro-training/output/run/WRF\n", + "\n", + "# Navigate to the new run directory\n", + "cd ~/wrf-hydro-training/output/run/WRF\n", + "\n", + "# Copy over required parameter tables for the hydro components\n", + "cp ~/WRF_WPS/wrf_hydro_nwm_public-5.0.3/trunk/NDHMS/template/HYDRO/*TBL .\n", + "\n", + "# Copy over the namelist used for real and WRF as well as the hydro\n", + "# namelist from the example case directory\n", + "cp ~/wrf-hydro-training/example_case_coupled/namelist.input .\n", + "cp ~/wrf-hydro-training/example_case_coupled/hydro.namelist .\n", + "\n", + "# Copy over the relevant executables\n", + "cp ~/WRF_WPS/WRF/run/wrf.exe .\n", + "cp ~/WRF_WPS/WRF/run/real.exe .\n", + "\n", + "# Link the geogrid and metgrid files you just created using the WPS\n", + "ln -sf ../WPS/geo_em* .\n", + "ln -sf ../WPS/met_em* .\n", + "\n", + "# Link the hydro domain files from the coupled example case directory\n", + "ln -sf ~/wrf-hydro-training/example_case_coupled/DOMAIN ." + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Running real\n", + "\n", + "Next we'll run real. This utility is used for real data (as opposed to idealized) WRF simulations and uses the geogrid and metgrid files as input. Based upon these input files and the specifications in namelist.input, real.exe generates the initial (wrfinput*) and boundary (wrfbdy*) condition files required for the simulation. " + ] + },
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Navigate to the run directory\n", + "cd ~/wrf-hydro-training/output/run/WRF\n", + "\n", + "# Run the real utility and pipe the output to a log file\n", + "# Note that this binary was compiled using distributed memory parallelism (thus the mpirun -np 2).\n", + "# If you are running on your own system, the run command may be different.\n", + "mpirun -np 2 ./real.exe > real.log 2>&1" + ] + },
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**View the end of your log file to ensure the simulation completed**\n", + "\n", + "Depending upon how you run the real and wrf utilities, a number of log files are created. In this case there is one standard out (rsl.out.*) and one standard error (rsl.error.*) file for each processor. We recommend looking for the successful completion statement at the end of the rsl.out.* files as shown below and checking for any errors within these log files. For additional output from WRF and WRF-Hydro, compile-time diagnostic flags can be set to aid in troubleshooting. See the relevant documentation for information regarding these options.\n",
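+ "\n",
+ "A quick way to scan all of the rsl files at once for obvious problems is a simple keyword search (a convenience sketch only, not a required step):\n",
+ "\n",
+ "```bash\n",
+ "# Optional: scan the rsl log files for common error keywords\n",
+ "cd ~/wrf-hydro-training/output/run/WRF\n",
+ "grep -i -E 'error|fatal' rsl.* | head\n",
+ "```\n",
+ "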
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Navigate to the run directory\n", + "cd ~/wrf-hydro-training/output/run/WRF\n", + "\n", + "# View the last few lines of the log file to check for successful completion\n", + "tail rsl.out.0000" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Running the coupled model\n", + "\n", + "Now we're ready to run the coupled WRF | WRF-Hydro simulation. Note that WRF-Hydro has been compiled as a library referenced by WRF and therefore we can just run a single wrf.exe executable. Namelist options are specified in namelist.input for the WRF model (options typically set in the namelist.hrldas for standalone WRF-Hydro simulations can also be found in here) and hydro.namelist for hydro components. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Navigate to the run directory\n", + "cd ~/wrf-hydro-training/output/run/WRF\n", + "\n", + "# Run the coupled WRF|WRF-Hydro model via the wrf.exe executable and pipe the output to a log file\n", + "# Note that this binary was compiled using distributed memory parallelism (thus the -np 2)\n", + "# if you are running on your own system the run command may be different\n", + "mpirun -np 2 ./wrf.exe > wrf.log 2>&1" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**View the end of your log file to ensure the simulation completed**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Navigate to the run directory\n", + "cd ~/wrf-hydro-training/output/run/WRF\n", + "\n", + "# View the last few lines of the log file to check for successful completion\n", + "tail rsl.out.0000" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Now list the files in your run directory**\n", + "\n", + "Note the hydro output file follow the YYYYMMDDHHMM.* and wrfout_d0x_*. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%bash\n", + "\n", + "# Navigate to the run directory\n", + "cd ~/wrf-hydro-training/output/run/WRF\n", + "\n", + "# List the files\n", + "ls " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Viewing your model output\n", + "\n", + "All of the output data from this WRF | WRF-Hydro simulation are stored in netCDF files. Note that there are a number of utilites available for viewing and manipulating data in this format (e.g. ncview, NCO, Panoply) and packages available for reading these files using common languages for data analysis (e.g. R, Python, NCL). \n", + "\n", + "Here we will take a look at several of the output files using a couple of useful open source Python packages.\n", + "\n", + "**Import the required Python modules**\n", + "\n", + "First we import the required Python modules for this section. These include [xarray](http://xarray.pydata.org), a Python package for data analysis with labeled arrays, and pyplot from [matplotlib](https://matplotlib.org/), a commonly used Python plotting package." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import xarray as xr\n", + "from matplotlib import pyplot as plt" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Open your dataset**\n", + "\n", + "Next we open a multifile dataset consisting of our gridded channel output files using xarray. You'll note that rather than explicitly specifying all of the filepaths we can use a wildcard to grab files for all dates." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ds = xr.open_mfdataset('/home/docker/wrf-hydro-training/output/run/WRF/*.CHRTOUT_GRID2')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Plot your data**\n", + "\n", + "Now we'll go ahead and plot the streamflow data as faceted grids (one for each output time step). While we don't see a large response, you can see some streamflow particularly in the upper right portion of the domain." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "ds.streamflow[1:].plot(col='time', col_wrap=2, aspect=ds.x.size/ds.y.size)\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This concludes the lesson. \n", + "\n", + "If you have extra time, consider plotting some of the other outputs from this simulation. " + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.1" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": true, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": true, + "toc_position": { + "height": "calc(100% - 180px)", + "left": "10px", + "top": "150px", + "width": "240px" + }, + "toc_section_display": true, + "toc_window_display": true + }, + "toc-autonumbering": true, + "toc-showcode": false, + "toc-showmarkdowntxt": false, + "toc-showtags": false + }, + "nbformat": 4, + "nbformat_minor": 2 +}