Showing 11 changed files with 111 additions and 21 deletions.
@@ -0,0 +1,35 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.http://sphinx-doc.org/
	exit /b 1
)

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd
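For example, running `make.bat html` from the directory containing this file invokes `sphinx-build -M html source build`, writing the rendered HTML under `build` (assuming `sphinx-build` is on `PATH` or `SPHINXBUILD` points to it).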
@@ -1,4 +1,11 @@
Documentation for each API endpoint is automatically generated from the code and docstring for that API's main function and may not be entirely user-friendly. There are some very minor differences between arguments for each API function and the parameters needed for a web query.
Documentation for each API endpoint is automatically generated from the code and docstring for that API's main function and may not be entirely user-friendly. There are some minor differences between the internal workings of the API functions and the process of querying them over the web.

1. Web queries do not supply a `sesh` (database session) as an argument; that will be automatically done by the query parser.
2. Parameters supplied in a query URL should be web-encoded.
The query URL is constructed from a base URL ending in a slash, followed by the name of the endpoint, a question mark, and then one or more parameters of the form `attribute=value`, separated by ampersands. Parameters supplied via query URL should be web-encoded so that they will be correctly parsed.

The automatically generated API documentation describes a `sesh` (database session) argument to each API function. Database sessions are supplied by the query parser and do not need to be given in the query URL.

For example, the `multimeta` function has a signature of `ce.api.multimeta(sesh, ensemble_name='ce_files', model='')`.

The query URL `https://base_url/multimeta?ensemble_name=ce_files&model=CanESM2` calls the `multimeta` endpoint and supplies two arguments for the `multimeta` function: `ensemble_name` is "ce_files" and `model` is "CanESM2". `sesh` is not supplied in the query URL.

The API function return values are converted to JSON for the endpoint response.
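As an illustrative sketch (not part of the changed files), the query above could be issued from Python. The base URL is a placeholder and the `requests` library is assumed to be available; `requests` web-encodes the parameters, as the endpoint expects.

```python
# A minimal sketch, assuming the `requests` library and a placeholder base URL.
import requests

BASE_URL = "https://base_url/"  # placeholder; substitute the real service root

# The parameters are web-encoded automatically before being appended to the URL.
resp = requests.get(BASE_URL + "multimeta",
                    params={"ensemble_name": "ce_files", "model": "CanESM2"})
resp.raise_for_status()
metadata = resp.json()  # endpoint responses are JSON-encoded return values
```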
@@ -1,32 +1,51 @@
# Overview

## Raster Data
This set of APIs provides queries for retrieving statistical and summary data from multidimensional raster datasets containing climate data.
## Data Matrix
This set of APIs provides queries for retrieving statistical and summary data from a multidimensional data matrix containing climate data.

Raster datasets treat data as a regular matrix. Each dataset has a list of latitudes, a list of longitudes, a list of timestamps, and a list of variables such as temperature, precipitation, or derived values like growing degree days. A value may be accessed for every combination of latitude, longitude, timestamp, and variable, which represents the value of that variable at that particular point in time and space.
The data is presented as a regular matrix. Each dataset has a list of latitudes, a list of longitudes, a list of timestamps, and a list of variables such as temperature, precipitation, or derived values like growing degree days. A value may be accessed for every combination of latitude, longitude, timestamp, and variable, which represents the value of that variable at that particular point in time and space.
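As a rough illustration of this layout (not part of the changed files), the sketch below builds a small matrix with `xarray`; the variable name `tasmax` and all coordinate values are invented for the example.

```python
# A minimal sketch, assuming xarray; names and values are illustrative only.
import numpy as np
import pandas as pd
import xarray as xr

lats = np.arange(48.0, 60.0, 0.5)
lons = np.arange(-140.0, -110.0, 0.5)
times = pd.date_range("1961-01-01", periods=12, freq="MS")

ds = xr.Dataset(
    {"tasmax": (("time", "lat", "lon"),
                np.zeros((len(times), len(lats), len(lons))))},
    coords={"time": times, "lat": lats, "lon": lons},
)

# One value exists for every (variable, time, lat, lon) combination.
value = ds["tasmax"].sel(time="1961-06-01", lat=50.0, lon=-120.0)
```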
These data are interpolated from real-world observations or generated by General Circulation Models or Regional Climate Models, which are able to provide complete data coverage. Individual points may be masked off as missing, but this is typically only done to mark the boundaries of irregular geographic extents, like coastlines. The data should be assumed to be dense within the spatial and temporal boundaries.

## Data Access
This system is not designed to provide individual access to every point in the raster data matrix, but calculates and makes available summary data over user-defined spatial and temporal scales to support analysis and visualization. All numerical API endpoints accept a spatial area of interest (specified in URL-encoded WKT as a point or polygon) and return the mean of all cells within the designated area. If no area is designated, the mean across the entire spatial extent of the dataset is returned.
This system is not designed to provide individual access to every point in the data matrix, but calculates and makes available summary data over user-defined spatial and temporal scales to support analysis and visualization. All numerical API endpoints accept a spatial area of interest (specified in URL-encoded WKT as a point or polygon) and return the mean of all cells within the designated area. If no area is designated, the mean across the entire spatial extent of the dataset is returned.
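The sketch below (not part of the changed files) shows how an area of interest could be web-encoded before being placed in a query URL; the endpoint name `stats` and the parameter name `area` are assumptions used only for illustration.

```python
# A minimal sketch; the endpoint and parameter names are placeholders, not confirmed API names.
from urllib.parse import urlencode

area_wkt = "POLYGON ((-125 48, -125 54, -118 54, -118 48, -125 48))"
params = urlencode({"ensemble_name": "ce_files", "area": area_wkt})  # URL-encodes the WKT

url = "https://base_url/stats?" + params  # hypothetical numerical endpoint
print(url)
```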
Datafiles are organized by temporal range and resolution.

## Modeled Data

Most data accessible by the API is climate data from a General Circulation Model (GCM) or Regional Climate Model (RCM). These numerical models represent physical processes in the atmosphere, ocean, cryosphere, and land surface of the earth. They simulate the response of the global climate system. The models output meteorological projections like temperature and precipitation. Other available variables represent statistical indices derived from the meteorological outputs.
Most data accessible by the API is climate data from a General Circulation Model (GCM) or Regional Climate Model (RCM). These numerical models represent physical processes in the atmosphere, ocean, cryosphere, and land surface of the earth. They simulate the response of the global climate system. Model outputs are meteorological variables such as temperature and precipitation. Temporally, they typically cover the period 1950 to 2100. Data for any dates in the future are termed "projections." Other variables represent statistical indices calculated from the meteorological variables output by models, and are associated with the same model that output the base data.

A small minority of the datasets accessible by this API are not GCM or RCM outputs, but are intended to help provide additional context to the GCM and RCM outputs. These contextual data were instead created by models that interpolate observed data to fill spatial and temporal gaps. These non-GCM interpolated datasets have no projection data associated with dates in the future.

### Emissions Scenarios

Each dataset has an associated emissions scenario, sometimes called `experiment`. Emissions scenarios represent a range of possible future projections for greenhouse gas concentrations. GCMs and RCMs are typically driven by historical data on greenhouse gas concentrations when modeling the past and present, before using an emissions scenario to provide input on greenhouse gas concentrations to drive future projections.
Each dataset has an associated emissions scenario. Emissions scenarios represent a range of possible future projections for greenhouse gas concentrations. GCMs and RCMs are typically driven by historical data on greenhouse gas concentrations when modeling the past and present; for future projections they are driven by an emissions scenario, typically a Representative Concentration Pathway scenario defined by the IPCC.

Datasets created by interpolative models feature only historical emissions scenarios.

The parameter name `experiment` is used to designate an emissions scenario in the API and code.

### Runs

Each dataset has an associated run string, sometimes called `ensemble_member`, which represents the initialization settings of the simulating model. A GCM or RCM may be run multiple times with different initialization conditions.
Each dataset has an associated run string, which represents the initialization settings of the simulating model. A GCM or RCM may be run multiple times with different initialization conditions. A collection of related runs with different initializations of the same model comprises a statistical ensemble, which can be used to give some idea of the range of possible outcomes of the model system.

GCMs and RCMs follow a standard encoding for the members of a statistical ensemble, `rXiXpX`, which is provided to the API as the parameter `ensemble_member` (a short parsing sketch follows this list):
* rX where X is an integer representing the realization of the run
* iX where X is an integer representing the initialization method of this run
* pX where X is an integer representing the physics version used for this run.
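As sketched below (not from the source), the encoding can be unpacked with a simple regular expression; the helper name is invented for illustration.

```python
# A minimal sketch; `parse_ensemble_member` is a hypothetical helper, not a library function.
import re

def parse_ensemble_member(code: str):
    """Split an rXiXpX run code into realization, initialization, and physics integers."""
    match = re.fullmatch(r"r(\d+)i(\d+)p(\d+)", code)
    if match is None:
        return None  # e.g. "nominal" or "n/a" for interpolated datasets
    return tuple(int(x) for x in match.groups())

print(parse_ensemble_member("r1i1p1"))   # (1, 1, 1)
print(parse_ensemble_member("nominal"))  # None
```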
Interpolative models typically cannot be run multiple times to create a statistical ensemble, so the concept of an `ensemble_member` code does not apply. Nevertheless, for uniformity, we have generalized from the common GCM case, and datasets output by interpolative models have an `ensemble_member` string that does not follow the `rXiXpX` encoding, such as "n/a" or "nominal".

## Climatological Aggregates

While GCMs and RCMs output daily or sub-daily projections, most data accessible via this API is normalized as multi-year climatological statistics with monthly, seasonal, or annual resolution.

The statistical methods used to generate the monthly, seasonal, or annual values vary with the nature of the data. For example, a monthly mean precipitation value (`pr`) may be calculated by taking the mean daily precipitation for every day in a month. A maximum one-day monthly precipitation value (`rx1day`) may be calculated by taking the single largest total precipitation that falls on one day in the month.
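A small sketch (not from the source) of the two statistics just described, computed with pandas from an invented daily precipitation series.

```python
# A minimal sketch; the daily series is synthetic and only illustrates the two statistics.
import numpy as np
import pandas as pd

days = pd.date_range("2000-01-01", "2000-12-31", freq="D")
daily_pr = pd.Series(np.random.default_rng(0).gamma(2.0, 1.5, len(days)), index=days)

monthly_pr = daily_pr.resample("MS").mean()      # pr: mean daily precipitation per month
monthly_rx1day = daily_pr.resample("MS").max()   # rx1day: wettest single day per month
```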
## Climatological Means
Most of the data is then further aggregated between years over a specified amount of time, typically 30 years, the standard period in climate science. For example, the January value of a climatological aggregate dataset for 1961-1990 represents the statistically aggregated values for January 1961, January 1962, and so on up to January 1990. The February value represents February 1961, February 1962, and so on.

Most data accessible via this API is normalized as multi-year climatological means. These datasets have been averaged between years over a specified amount of time, typically thirty years. For example, the January value of a precipitation climatological mean dataset for 1961-1990 represents the mean precipitation of January 1961, January 1962, and so on up to January 1990.
A series of these overlapping 30-year aggregated climatologies is generated from the entire timespan of the model output data. The aggregating function used to create multi-year climatological datasets is either `mean`, to show long-term trends in the numerical quantity being measured, or `stdev`, to show long-term variability of the numerical quantity being measured.
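For example, a 1961-1990 climatological mean could be formed from a monthly series as in the sketch below (not from the source; the data are invented).

```python
# A minimal sketch; a synthetic monthly series stands in for real model output.
import numpy as np
import pandas as pd

months = pd.date_range("1961-01-01", "1990-12-01", freq="MS")
monthly = pd.Series(np.random.default_rng(1).normal(size=len(months)), index=months)

# Mean of all Januaries, all Februaries, ... over the 30-year window: 12 values.
climatology_1961_1990 = monthly.groupby(monthly.index.month).mean()
```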
A series of overlapping climatologies is generated from the entire timespan of the model output data.
A small number of non-aggregated datasets are available, but they are not the primary purpose of this system, and many of the queries are not available for these datasets. These datasets are removed from circulation and replaced with aggregated datasets when possible.
@@ -12,7 +12,6 @@ contexttimer==0.3.3
# For documentation
sphinx
recommonmark
m2r

# For testing