Use m2r2 instead of m2r #164

Merged (2 commits), Sep 24, 2020
doc/source/api/api-overview.md (1 addition & 1 deletion):

@@ -1,6 +1,6 @@
 Documentation for each API endpoint is automatically generated from the code and docstring for that API's main function and may not be entirely user-friendly. There are some minor differences between the internal workings of the API function and the process of querying them over the web.
 
-The query URL is constructed from a base url ending in a slash, followed by the name of the endpoint, a question mark, and then one or more parameters of the form `attribute=value', seperated by ampersands. Parameters supplied via query URL should be web-encoded so that they will be correctly parsed.
+The query URL is constructed from a base url ending in a slash, followed by the name of the endpoint, a question mark, and then one or more parameters of the form `attribute=value`, seperated by ampersands. Parameters supplied via query URL should be web-encoded so that they will be correctly parsed.
 
 The automatically generated API documentation describes a `sesh` (database session) argument to each API function. Database sessions are supplied by the query parser and does not need to be given in the query URL.
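The query-URL construction described in the documentation above can be sketched in Python. The base URL, endpoint name, and parameter values here are placeholders for illustration, not values taken from this project:

```python
from urllib.parse import urlencode

# Hypothetical base URL and endpoint; the real values depend on the deployment.
base_url = "https://example.org/api/"
endpoint = "data"

# urlencode web-encodes each value and joins the attribute=value pairs
# with ampersands, as the documentation describes.
params = {"ensemble_name": "bc_moti", "variable": "streamflow"}

query_url = f"{base_url}{endpoint}?{urlencode(params)}"
print(query_url)
# → https://example.org/api/data?ensemble_name=bc_moti&variable=streamflow
```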
doc/source/conf.py (1 addition & 1 deletion):

@@ -33,7 +33,7 @@
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
-extensions = ["sphinx.ext.autodoc", "m2r"]
+extensions = ["sphinx.ext.autodoc", "m2r2"]
 
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ["_templates"]
doc/source/workflow.md (12 additions & 14 deletions):

@@ -19,8 +19,8 @@
 aggregates. Each individual value in these files represents a mean or stanard
 deviation calculated across multiple years, typically thirty years, which is
 standard in climate science. For example, a monthly climatological mean might
 cover 1961-1990, but feature only twelve timestamps. The January timestamp is
-the mean of the value for January 1961, January 1962, and so on up to January
-1990. The February timestamp is the mean of the values for February 1961,
+the mean of the value for January 1961, January 1962, and so on up to January 1990. The February timestamp is the mean of the values for
+February 1961,
 February 1962, and so on. Climatological means may be monthly, seasonal,
 or annual. This API primarily supports analysis of climatological datasets,
 and more analysis options are available for them.

@@ -79,23 +79,21 @@
 ensemble named `bc_moti`, with variable name equal to `streamflow`.
 
 When requesting streamflow data from a point, please be cautious and aware
 of potential differences in precision. Note that while you may supply a point
-with arbitrarily precise longitude and latitude values to the API, the
-streamflow data is a gridded dataset and has only one value available per
-grid cell. If you supply the coordinates of a small creek inside the same
-grid cell as a river, the data you receive is for the total flow through
-that grid cell; it is primarily influenced by the river. Accordingly,
-this dataset is more accurate for larger streams, and should not be
-considered accurate for streams that drain watersheds of less than 200
-kilometers area. Information on the grid corresponding to a dataset can
-be obtained from the `grid` API, which will provide a list of the longitudes
-and latitudes corresponding to the centroids of each grid cell.
+with arbitrarily precise longitude and latitude values to the API, the
+streamflow data is a gridded dataset and has only one value available per
+grid cell. If you supply the coordinates of a small creek inside the same
+grid cell as a river, the data you receive is for the total flow through
+that grid cell; it is primarily influenced by the river. Accordingly,
+this dataset is more accurate for larger streams, and should not be
+considered accurate for streams that drain watersheds of less than 200
+kilometers area. Information on the grid corresponding to a dataset can
+be obtained from the `grid` API, which will provide a list of the longitudes and latitudes corresponding to the centroids of each grid cell.
 
 To get contextual information on the watershed that drains to the streamflow
 location of interest, the `watershed` API can be invoked with the same
 WKT Point used to request streamflow data. This API provides information on
 the topology and extent of the watershed that drains to the grid cell
-containing the specified point. The outline of the watershed is provided as a
-GeoJSON object. This outline, like the streamflow data itself, has a
+containing the specified point. The outline of the watershed is provided as a GeoJSON object. This outline, like the streamflow data itself, has a
 resolution determined by the size of a grid cell.
 
 The GeoJSON watershed polygon can furthermore be used to request data on
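The documentation above stresses that WKT Point parameters must be web-encoded. A minimal sketch of that step in Python; the base URL and the `point` parameter name are assumptions for illustration, and only the `watershed` endpoint name comes from the text:

```python
from urllib.parse import urlencode

# Hypothetical base URL and parameter name. The same WKT Point string
# would be sent to both the streamflow data request and the watershed request.
base_url = "https://example.org/api/"
point = "POINT(-122.36 50.32)"  # example coordinates, arbitrarily precise

# urlencode web-encodes the parentheses and space in the WKT string
# so the query parser can recover the original value.
watershed_url = f"{base_url}watershed?{urlencode({'point': point})}"
print(watershed_url)
# → https://example.org/api/watershed?point=POINT%28-122.36+50.32%29
```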
requirements.txt (2 additions & 2 deletions):

@@ -11,5 +11,5 @@
 sqlalchemy==1.3.17
 contexttimer==0.3.3
 
 # For documentation
-Sphinx==2.4.4
-m2r==0.2.1
+Sphinx==3.2.1
+m2r2==0.2.5