From 0dc2dd9c0c4a58aafcb11408db806e869fe3d6af Mon Sep 17 00:00:00 2001
From: tkknight <2108488+tkknight@users.noreply.github.com>
Date: Thu, 9 Nov 2023 12:04:40 +0000
Subject: [PATCH] fixed spacing (#5572)

---
 .../dask_best_practices/index.rst             | 10 ++---
 docs/src/further_topics/ugrid/data_model.rst  | 28 ++++++-------
 docs/src/techpapers/um_files_loading.rst      | 42 +++++++++----------
 docs/src/userguide/navigating_a_cube.rst      |  8 ++--
 docs/src/userguide/real_and_lazy_data.rst     | 23 +++++-----
 5 files changed, 55 insertions(+), 56 deletions(-)

diff --git a/docs/src/further_topics/dask_best_practices/index.rst b/docs/src/further_topics/dask_best_practices/index.rst
index eb3321345b..f126427d3f 100644
--- a/docs/src/further_topics/dask_best_practices/index.rst
+++ b/docs/src/further_topics/dask_best_practices/index.rst
@@ -144,8 +144,8 @@ Iris provides a basic chunking shape to Dask, attempting to set the shape for
 best performance. The chunking that is used can depend on the file format that
 is being loaded. See below for how chunking is performed for:
 
-  * :ref:`chunking_netcdf`
-  * :ref:`chunking_pp_ff`
+* :ref:`chunking_netcdf`
+* :ref:`chunking_pp_ff`
 
 It can in some cases be beneficial to re-chunk the arrays in Iris cubes.
 For information on how to do this, see :ref:`dask_rechunking`.
@@ -208,9 +208,9 @@ If you feel you have an example of a Dask best practice that you think may be he
 please share them with us by raising a new
 `discussion on the Iris repository `_.
 
-  * :doc:`dask_pp_to_netcdf`
-  * :doc:`dask_parallel_loop`
-  * :doc:`dask_bags_and_greed`
+* :doc:`dask_pp_to_netcdf`
+* :doc:`dask_parallel_loop`
+* :doc:`dask_bags_and_greed`
 
 .. toctree::
    :hidden:
diff --git a/docs/src/further_topics/ugrid/data_model.rst b/docs/src/further_topics/ugrid/data_model.rst
index cc3cc7b793..208254ada6 100644
--- a/docs/src/further_topics/ugrid/data_model.rst
+++ b/docs/src/further_topics/ugrid/data_model.rst
@@ -484,20 +484,20 @@ How UGRID information is stored
   | Described in detail in `MeshCoords`_.
   | Stores the following information:
 
-    * | :attr:`~iris.experimental.ugrid.MeshCoord.mesh`
-      | The :class:`~iris.experimental.ugrid.Mesh` associated with this
-        :class:`~iris.experimental.ugrid.MeshCoord`. This determines the
-        :attr:`~iris.cube.Cube.mesh` attribute of any :class:`~iris.cube.Cube`
-        this :class:`~iris.experimental.ugrid.MeshCoord` is attached to (see
-        `The Basics`_)
-
-    * | :attr:`~iris.experimental.ugrid.MeshCoord.location`
-      | ``node``/``edge``/``face`` - the element detailed by this
-        :class:`~iris.experimental.ugrid.MeshCoord`. This determines the
-        :attr:`~iris.cube.Cube.location` attribute of any
-        :class:`~iris.cube.Cube` this
-        :class:`~iris.experimental.ugrid.MeshCoord` is attached to (see
-        `The Basics`_).
+  * | :attr:`~iris.experimental.ugrid.MeshCoord.mesh`
+    | The :class:`~iris.experimental.ugrid.Mesh` associated with this
+      :class:`~iris.experimental.ugrid.MeshCoord`. This determines the
+      :attr:`~iris.cube.Cube.mesh` attribute of any :class:`~iris.cube.Cube`
+      this :class:`~iris.experimental.ugrid.MeshCoord` is attached to (see
+      `The Basics`_)
+
+  * | :attr:`~iris.experimental.ugrid.MeshCoord.location`
+    | ``node``/``edge``/``face`` - the element detailed by this
+      :class:`~iris.experimental.ugrid.MeshCoord`. This determines the
+      :attr:`~iris.cube.Cube.location` attribute of any
+      :class:`~iris.cube.Cube` this
+      :class:`~iris.experimental.ugrid.MeshCoord` is attached to (see
+      `The Basics`_).
 
 
 .. _ugrid MeshCoords:
diff --git a/docs/src/techpapers/um_files_loading.rst b/docs/src/techpapers/um_files_loading.rst
index f8c94cab08..f94898b3aa 100644
--- a/docs/src/techpapers/um_files_loading.rst
+++ b/docs/src/techpapers/um_files_loading.rst
@@ -125,21 +125,21 @@ with latitude and longitude axes are also supported).
 For an ordinary latitude-longitude grid, the cubes have coordinates called
 'longitude' and 'latitude':
 
-  * These are mapped to the appropriate data dimensions.
-  * They have units of 'degrees'.
-  * They have a coordinate system of type :class:`iris.coord_systems.GeogCS`.
-  * The coordinate points are normally set to the regular sequence
-    ``ZDX/Y + BDX/Y * (1 .. LBNPT/LBROW)`` (*except*, if BDX/BDY is zero, the
-    values are taken from the extra data vector X/Y, if present).
-  * If X/Y_LOWER_BOUNDS extra data is available, this appears as bounds values
-    of the horizontal coordinates.
+* These are mapped to the appropriate data dimensions.
+* They have units of 'degrees'.
+* They have a coordinate system of type :class:`iris.coord_systems.GeogCS`.
+* The coordinate points are normally set to the regular sequence
+  ``ZDX/Y + BDX/Y * (1 .. LBNPT/LBROW)`` (*except*, if BDX/BDY is zero, the
+  values are taken from the extra data vector X/Y, if present).
+* If X/Y_LOWER_BOUNDS extra data is available, this appears as bounds values
+  of the horizontal coordinates.
 
 For **rotated** latitude-longitude coordinates (as for LBCODE=101), the
 horizontal coordinates differ only slightly --
 
-  * The names are 'grid_latitude' and 'grid_longitude'.
-  * The coord_system is a :class:`iris.coord_systems.RotatedGeogCS`, created
-    with a pole defined by BPLAT, BPLON.
+* The names are 'grid_latitude' and 'grid_longitude'.
+* The coord_system is a :class:`iris.coord_systems.RotatedGeogCS`, created
+  with a pole defined by BPLAT, BPLON.
 
 For example:
 
     >>> # Load a PP field.
@@ -304,10 +304,9 @@ For hybrid height levels (LBVC=65):
     multidimensional or non-monotonic.
 
 See an example printout of a hybrid height cube,
-:ref:`here `:
-
-  Notice that this contains all of the above coordinates --
-  'model_level_number', 'sigma', 'level_height' and the derived 'altitude'.
+:ref:`here `. Notice that this contains all of the
+above coordinates -- ``model_level_number``, ``sigma``, ``level_height`` and
+the derived ``altitude``.
 
 .. note::
 
@@ -364,7 +363,7 @@ Data at a single measurement timepoint (LBTIM.IB=0):
     defined according to LBTIM.IC.
 
 Values forecast from T2, valid at T1 (LBTIM.IB=1):
-    Coordinates ``time` and ``forecast_reference_time`` are created from the T1
+    Coordinates ``time`` and ``forecast_reference_time`` are created from the T1
     and T2 values, respectively. These have no bounds, and units of
     'hours since 1970-01-01 00:00:00', with the appropriate calendar.
     A ``forecast_period`` coordinate is also created, with values T1-T2, no
@@ -383,12 +382,11 @@ these may become dimensions of the resulting data cube. This will depend on
 the values actually present in the source fields for each of the elements.
 
 See an example printout of a forecast data cube,
-:ref:`here ` :
-
-  Notice that this example contains all of the above coordinates -- 'time',
-  'forecast_period' and 'forecast_reference_time'. In this case the data are
-  forecasts, so 'time' is a dimension, 'forecast_period' varies with time and
-  'forecast_reference_time' is a constant.
+:ref:`here `. Notice that this example
+contains all of the above coordinates -- ``time``, ``forecast_period`` and
+``forecast_reference_time``. In this case the data are forecasts, so ``time``
+is a dimension, ``forecast_period`` varies with time and
+``forecast_reference_time`` is a constant.
 
 
 Statistical Measures
diff --git a/docs/src/userguide/navigating_a_cube.rst b/docs/src/userguide/navigating_a_cube.rst
index b4c16b094b..ec3cd8e0dc 100644
--- a/docs/src/userguide/navigating_a_cube.rst
+++ b/docs/src/userguide/navigating_a_cube.rst
@@ -191,10 +191,10 @@ Adding and Removing Metadata to the Cube at Load Time
 
 Sometimes when loading a cube problems occur when the amount of metadata is more or less than expected.
 This is often caused by one of the following:
 
-  * The file does not contain enough metadata, and therefore the cube cannot know everything about the file.
-  * Some of the metadata of the file is contained in the filename, but is not part of the actual file.
-  * There is not enough metadata loaded from the original file as Iris has not handled the format fully. *(in which case,
-    please let us know about it)*
+* The file does not contain enough metadata, and therefore the cube cannot know everything about the file.
+* Some of the metadata of the file is contained in the filename, but is not part of the actual file.
+* There is not enough metadata loaded from the original file as Iris has not handled the format fully. *(in which case,
+  please let us know about it)*
 
 To solve this, all of :func:`iris.load`, :func:`iris.load_cube`, and :func:`iris.load_cubes` support a callback keyword.
diff --git a/docs/src/userguide/real_and_lazy_data.rst b/docs/src/userguide/real_and_lazy_data.rst
index ef4de0c429..e4c041886c 100644
--- a/docs/src/userguide/real_and_lazy_data.rst
+++ b/docs/src/userguide/real_and_lazy_data.rst
@@ -247,20 +247,21 @@ output file, to be performed by `Dask `_ lat
 thus enabling parallel save operations.
 
 This works in the following way :
 
-  1. an :func:`iris.save` call is made, with a NetCDF file output and the additional
-     keyword ``compute=False``.
-     This is currently *only* available when saving to NetCDF, so it is documented in
-     the Iris NetCDF file format API. See: :func:`iris.fileformats.netcdf.save`.
-  2. the call creates the output file, but does not fill in variables' data, where
-     the data is a lazy array in the Iris object. Instead, these variables are
-     initially created "empty".
+1. an :func:`iris.save` call is made, with a NetCDF file output and the additional
+   keyword ``compute=False``.
+   This is currently *only* available when saving to NetCDF, so it is documented in
+   the Iris NetCDF file format API. See: :func:`iris.fileformats.netcdf.save`.
 
-  3. the :meth:`~iris.save` call returns a ``result`` which is a
-     :class:`~dask.delayed.Delayed` object.
+2. the call creates the output file, but does not fill in variables' data, where
+   the data is a lazy array in the Iris object. Instead, these variables are
+   initially created "empty".
 
-  4. the save can be completed later by calling ``result.compute()``, or by passing it
-     to the :func:`dask.compute` call.
+3. the :meth:`~iris.save` call returns a ``result`` which is a
+   :class:`~dask.delayed.Delayed` object.
+
+4. the save can be completed later by calling ``result.compute()``, or by passing it
+   to the :func:`dask.compute` call.
 
 The benefit of this, is that costly data transfer operations can be performed
 in parallel with writes to other data files. Also, where array contents are calculated
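The numbered steps in the ``real_and_lazy_data.rst`` hunk above describe a deferred-save pattern: ``iris.save(..., compute=False)`` returns a :class:`dask.delayed.Delayed` whose ``compute()`` finishes the write later. The control flow can be sketched in plain Python (a minimal stand-in only: the ``Delayed`` and ``save`` below are hypothetical miniatures mimicking the pattern, not the real Iris or Dask APIs):

```python
# Illustrative miniature of the ``compute=False`` save pattern.
# NOTE: ``Delayed`` and ``save`` here are hypothetical stand-ins that only
# mimic the control flow of iris.save / dask.delayed.Delayed.

class Delayed:
    """Placeholder for a deferred operation, completed via .compute()."""
    def __init__(self, fn):
        self._fn = fn

    def compute(self):
        # Perform the deferred work now.
        return self._fn()


def save(values, path, compute=True):
    """Mimic a save call: with compute=False, defer the write instead."""
    def do_write():
        # In real Iris this step would stream the lazy arrays into the
        # already-created (initially "empty") NetCDF variables.
        return f"wrote {len(values)} values to {path}"

    if compute:
        return do_write()        # immediate, like an ordinary save
    return Delayed(do_write)     # deferred, like compute=False


result = save([1, 2, 3], "out.nc", compute=False)  # nothing written yet
print(result.compute())                            # completes the save later
```

In real code the shape is the same: the ``Delayed`` returned by :func:`iris.save` can be completed individually with ``result.compute()``, or several such results can be finished together (and potentially in parallel) by handing them all to :func:`dask.compute`.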