Doc improvements #1184

Merged · 7 commits · May 18, 2020
Changes from all commits
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -42,6 +42,9 @@
* Add new examples for `plot_pair` (#1110)
* Add examples for `psislw` and `r2_score` (#1129)
* Add more examples on 2D kde customization (#1158)
* Make docs compatible with sphinx3 and configure `intersphinx` for better
references (#1184)
* Extend the developer guide and add it to the website (#1184)

## v0.7.0 (2020 Mar 2)

2 changes: 1 addition & 1 deletion arviz/data/inference_data.py
@@ -224,7 +224,7 @@ def sel(self, inplace=False, chain_prior=False, warmup=False, **kwargs):
One example could be performing a burn in cut on the InferenceData object
or discarding a chain. The selection is performed on all relevant groups (like
posterior, prior, sample stats) while non relevant groups like observed data are
omitted.
omitted. See :meth:`xarray.Dataset.sel <xarray:xarray.Dataset.sel>`

Parameters
----------
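The `sel` docstring above describes a burn-in cut that touches only the sample-holding groups while leaving groups like `observed_data` alone. A minimal pure-Python sketch of that behaviour (hypothetical names; ArviZ itself delegates the selection to `xarray.Dataset.sel`):

```python
# Sketch of a burn-in cut in the spirit of InferenceData.sel
# (hypothetical helper, NOT ArviZ's implementation).

def burn_in(groups, cutoff):
    """Drop the first `cutoff` draws from every selectable group.

    `groups` maps group names to {var_name: [chains of draws]};
    groups without a draw dimension (e.g. observed_data) are kept as-is.
    """
    selectable = {"posterior", "prior", "sample_stats"}
    trimmed = {}
    for name, variables in groups.items():
        if name in selectable:
            trimmed[name] = {
                var: [chain[cutoff:] for chain in chains]
                for var, chains in variables.items()
            }
        else:
            trimmed[name] = variables  # observed data: left untouched
    return trimmed

data = {
    "posterior": {"mu": [[0.1, 0.5, 0.4, 0.3], [0.2, 0.6, 0.5, 0.4]]},
    "observed_data": {"y": [1.0, 2.0]},
}
result = burn_in(data, cutoff=2)
```

In real usage the same effect comes from something like `idata.sel(draw=slice(100, None))`, which forwards the slice to every relevant group's `xarray.Dataset.sel`.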
3 changes: 3 additions & 0 deletions arviz/data/io_cmdstan.py
@@ -791,6 +791,9 @@ def from_cmdstan(
) -> InferenceData:
"""Convert CmdStan data into an InferenceData object.

For a usage example read the
:doc:`Cookbook section on from_cmdstan </notebooks/InferenceDataCookbook>`

Parameters
----------
posterior : str or list of str, optional
3 changes: 3 additions & 0 deletions arviz/data/io_cmdstanpy.py
@@ -347,6 +347,9 @@ def from_cmdstanpy(
):
"""Convert CmdStanPy data into an InferenceData object.

For a usage example read the
:doc:`Cookbook section on from_cmdstanpy </notebooks/InferenceDataCookbook>`

Parameters
----------
posterior : CmdStanMCMC object
3 changes: 3 additions & 0 deletions arviz/data/io_dict.py
@@ -207,6 +207,9 @@ def from_dict(
):
"""Convert Dictionary data into an InferenceData object.

For a usage example read the
:doc:`Cookbook section on from_dict </notebooks/InferenceDataCookbook>`

Parameters
----------
posterior : dict
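Converters such as `from_dict` expect each variable's samples laid out as `(chain, draw, *shape)`. A hedged sketch of a checker for that convention (hypothetical helper, not ArviZ code):

```python
# Sketch of the (chain, draw, *shape) layout that from_dict-style
# converters expect (hypothetical checker, not ArviZ code).

def check_chain_draw_layout(samples):
    """Return (n_chains, n_draws) if every variable shares them, else raise."""
    shapes = set()
    for name, array in samples.items():
        n_chains = len(array)        # first axis: chain
        n_draws = len(array[0])      # second axis: draw
        shapes.add((n_chains, n_draws))
    if len(shapes) != 1:
        raise ValueError("inconsistent chain/draw dimensions: %r" % shapes)
    return shapes.pop()

posterior = {
    "mu": [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],  # 2 chains, 3 draws, scalar
    "theta": [[[1, 2], [1, 2], [1, 2]],        # 2 chains, 3 draws, shape (2,)
              [[3, 4], [3, 4], [3, 4]]],
}
dims = check_chain_draw_layout(posterior)
```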
3 changes: 3 additions & 0 deletions arviz/data/io_numpyro.py
@@ -301,6 +301,9 @@ def from_numpyro(
):
"""Convert NumPyro data into an InferenceData object.

For a usage example read the
:doc:`Cookbook section on from_numpyro </notebooks/InferenceDataCookbook>`

Parameters
----------
posterior : numpyro.mcmc.MCMC
5 changes: 4 additions & 1 deletion arviz/data/io_pymc3.py
@@ -442,11 +442,14 @@ def from_pymc3(

All three of them are optional arguments, but at least one of ``trace``,
``prior`` and ``posterior_predictive`` must be present.
For a usage example read the
:doc:`Cookbook section on from_pymc3 </notebooks/InferenceDataCookbook>`

Parameters
----------
trace : pymc3.MultiTrace, optional
Trace generated from MCMC sampling.
Trace generated from MCMC sampling. Output of
:py:func:`pymc3:pymc3.sampling.sample`.
prior : dict, optional
Dictionary with the variable names as keys, and values numpy arrays
containing prior and prior predictive samples.
3 changes: 3 additions & 0 deletions arviz/data/io_pyro.py
@@ -283,6 +283,9 @@ def from_pyro(
):
"""Convert Pyro data into an InferenceData object.

For a usage example read the
:doc:`Cookbook section on from_pyro </notebooks/InferenceDataCookbook>`

Parameters
----------
posterior : pyro.infer.MCMC
3 changes: 3 additions & 0 deletions arviz/data/io_pystan.py
@@ -735,6 +735,9 @@ def from_pystan(
):
"""Convert PyStan data into an InferenceData object.

For a usage example read the
:doc:`Cookbook section on from_pystan </notebooks/InferenceDataCookbook>`

Parameters
----------
posterior : StanFit4Model or stan.fit.Fit
10 changes: 6 additions & 4 deletions arviz/plots/kdeplot.py
@@ -66,7 +66,8 @@ def plot_kde(
Text size scaling factor for labels, titles and lines. If None it will be autoscaled based
on figsize. Not implemented for bokeh backend.
plot_kwargs : dict
Keywords passed to the pdf line of a 1D KDE.
Keywords passed to the pdf line of a 1D KDE. See :meth:`mpl:matplotlib.axes.Axes.plot`
or :meth:`bokeh:bokeh.plotting.figure.Figure.line` for a description of accepted values.
fill_kwargs : dict
Keywords passed to the fill under the line (use fill_kwargs={'alpha': 0} to disable fill).
Ignored for 2D KDE
@@ -162,9 +163,10 @@ def plot_kde(
.. plot::
:context: close-figs

>>> az.plot_kde(mu_posterior, values2=tau_posterior,
... contour_kwargs={"levels":3}
... contourf_kwargs={"levels":3};
>>> az.plot_kde(
... mu_posterior, values2=tau_posterior,
... contour_kwargs={"levels":3}, contourf_kwargs={"levels":3}
... );

Plot 2d smooth KDE

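`plot_kwargs` and `fill_kwargs` are plain dicts forwarded to the backend's line and fill calls, with user values taking precedence over defaults. A toy sketch of this forwarding pattern (hypothetical function, not the actual `plot_kde` internals):

```python
# Sketch of the kwargs-forwarding pattern used by plot_kde-style APIs
# (hypothetical function; real ArviZ forwards to matplotlib/bokeh calls).

def plot_line(x, y, plot_kwargs=None, fill_kwargs=None):
    """Merge user kwargs with defaults and return what each call would get."""
    plot_kwargs = {} if plot_kwargs is None else dict(plot_kwargs)
    fill_kwargs = {} if fill_kwargs is None else dict(fill_kwargs)
    plot_kwargs.setdefault("linewidth", 1.5)  # default unless the user set it
    fill_kwargs.setdefault("alpha", 0.3)
    # a real backend would now call e.g. Axes.plot(**plot_kwargs)
    return {"line": plot_kwargs, "fill": fill_kwargs}

calls = plot_line(
    [0, 1], [1, 0],
    plot_kwargs={"color": "red"},
    fill_kwargs={"alpha": 0},  # the documented way to disable the fill
)
```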
7 changes: 3 additions & 4 deletions arviz/plots/rankplot.py
@@ -111,7 +111,6 @@ def plot_rank(
.. plot::
:context: close-figs

>>> import arviz as az
>>> data = az.load_arviz_data('centered_eight')
>>> az.plot_rank(data, var_names='tau')

@@ -120,12 +119,12 @@
.. plot::
:context: close-figs

>>> import arviz as az
>>> import matplotlib.pyplot as plt
>>> centered_data = az.load_arviz_data('centered_eight')
>>> noncentered_data = az.load_arviz_data('non_centered_eight')
>>> _, ax = plt.subplots(1, 2, figsize=(12, 3))
>>> az.plot_rank(centered_data, var_names="mu", kind='vlines', axes=ax[0])
>>> az.plot_rank(noncentered_data, var_names="mu", kind='vlines', axes=ax[1])
>>> az.plot_rank(centered_data, var_names="mu", kind='vlines', ax=ax[0])
>>> az.plot_rank(noncentered_data, var_names="mu", kind='vlines', ax=ax[1])

"""
if transform is not None:
2 changes: 1 addition & 1 deletion arviz/plots/traceplot.py
@@ -141,7 +141,7 @@ def plot_trace(

>>> az.plot_trace(
>>> data, var_names=('^theta'), filter_vars="regex", coords=coords, combined=True
>>>> )
>>> )


Plot reference lines against distribution and trace
17 changes: 11 additions & 6 deletions arviz/stats/stats.py
@@ -135,6 +135,11 @@ def compare(

In [1]: az.compare(compare_dict, ic="loo", method="stacking", scale="log")

See Also
--------
loo : Compute the Pareto Smoothed importance sampling Leave One Out cross-validation.
waic : Compute the widely applicable information criterion.

"""
names = list(dataset_dict.keys())
scale = rcParams["stats.ic_scale"] if scale is None else scale.lower()
@@ -349,16 +354,16 @@ def hpd(
skipna: bool
If true ignores nan values when computing the hpd interval. Defaults to false.
group: str, optional
Specifies which InferenceData group should be used to calculate hpd.
Defaults to 'posterior'
Specifies which InferenceData group should be used to calculate hpd.
Defaults to 'posterior'
var_names: list, optional
Names of variables to include in the hpd report. Prefix the variables by `~`
when you want to exclude them from the report: `["~beta"]` instead of `["beta"]`
(see `az.summary` for more details).
filter_vars: {None, "like", "regex"}, optional, default=None
If `None` (default), interpret var_names as the real variables names. If "like",
interpret var_names as substrings of the real variables names. If "regex",
interpret var_names as regular expressions on the real variables names. A la
interpret var_names as substrings of the real variables names. If "regex",
interpret var_names as regular expressions on the real variables names. A la
`pandas.filter`.
coords: mapping, optional
Specifies the subset over to calculate hpd.
@@ -960,8 +965,8 @@ def summary(
examples below).
filter_vars: {None, "like", "regex"}, optional, default=None
If `None` (default), interpret var_names as the real variables names. If "like",
interpret var_names as substrings of the real variables names. If "regex",
interpret var_names as regular expressions on the real variables names. A la
interpret var_names as substrings of the real variables names. If "regex",
interpret var_names as regular expressions on the real variables names. A la
`pandas.filter`.
fmt: {'wide', 'long', 'xarray'}
Return format is either pandas.DataFrame {'wide', 'long'} or xarray.Dataset {'xarray'}.
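The `filter_vars` semantics documented above (`None` exact match, `"like"` substring, `"regex"` regular expression, with a leading `~` to exclude) can be sketched in a few lines of plain Python (an illustration, not ArviZ's implementation):

```python
import re

def filter_var_names(all_names, var_names, filter_vars=None):
    """Select variable names the way the docstring describes.

    None -> exact match; "like" -> substring; "regex" -> re.search.
    A leading "~" on every pattern excludes the matches instead.
    """
    exclude = all(name.startswith("~") for name in var_names)
    patterns = [n.lstrip("~") for n in var_names] if exclude else var_names
    if filter_vars == "like":
        match = lambda name: any(p in name for p in patterns)
    elif filter_vars == "regex":
        match = lambda name: any(re.search(p, name) for p in patterns)
    else:
        match = lambda name: name in patterns
    # keep names whose match status differs from the exclusion flag
    return [n for n in all_names if match(n) != exclude]

names = ["mu", "tau", "theta", "theta_t"]
```

For example, `["theta"]` with `"like"` would also pick up `theta_t`, while `["^t"]` with `"regex"` matches every name starting with `t`.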
4 changes: 2 additions & 2 deletions arviz/stats/stats_refitting.py
@@ -30,8 +30,8 @@ def reloo(wrapper, loo_orig=None, k_thresh=0.7, scale=None, verbose=True):
LOO-CV with only a handful of refits, which in most cases is still much less computationally
expensive than exact LOO-CV, which needs one refit per observation.

Arguments
---------
Parameters
----------
wrapper: SamplingWrapper-like
Class (preferably a subclass of ``az.SamplingWrapper``, see :ref:`wrappers_api`
for details) implementing the methods described
2 changes: 1 addition & 1 deletion arviz/wrappers/wrap_pystan.py
@@ -7,7 +7,7 @@
class PyStanSamplingWrapper(SamplingWrapper):
"""PyStan sampling wrapper base class.

See the documentation on :ref:`wrappers_api` for a more detailed
See the documentation on :py:class:`arviz.SamplingWrapper` for a more detailed
description. An example of ``PyStanSamplingWrapper`` usage can be found
in the :doc:`pystan_refitting <../notebooks/pystan_refitting>`.

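`PyStanSamplingWrapper` specializes the generic `SamplingWrapper` interface that `reloo` calls into when a refit is needed. A hedged sketch of the pattern (toy class and method names; the real interface has more methods, see the ArviZ `SamplingWrapper` docs):

```python
# Toy sketch of the SamplingWrapper pattern: a base class defines the
# refitting interface, and backend-specific subclasses fill in the
# sampling calls. Hypothetical names, NOT the real ArviZ API surface.

class SamplingWrapperSketch:
    def __init__(self, model):
        self.model = model

    def sample(self, modified_observed_data):
        # each backend (PyStan, PyMC3, ...) overrides this with a real refit
        raise NotImplementedError

class ToyWrapper(SamplingWrapperSketch):
    def sample(self, modified_observed_data):
        # stand-in for an actual MCMC refit on the reduced data set
        return {"n_obs": len(modified_observed_data)}

wrapper = ToyWrapper(model="toy")
fit = wrapper.sample([1.0, 2.0, 3.0])
```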
5 changes: 5 additions & 0 deletions doc/_static/custom.css
@@ -37,3 +37,8 @@ h5, .h5 {
ul li p {
margin: 0 0 0 0;
}

.alert-info{
color: #555555;
background-color: #d1ecf1;
}
File renamed without changes.
File renamed without changes.
12 changes: 3 additions & 9 deletions doc/api.rst
@@ -85,15 +85,10 @@ Stats utils
Data
----

.. autosummary::
:toctree: generated/
:template: class_members.rst

InferenceData

.. autosummary::
:toctree: generated/

InferenceData
convert_to_inference_data
load_arviz_data
to_netcdf
@@ -115,7 +110,7 @@ Utils

.. autosummary::
:toctree: generated/
:template: class.rst
:template: class_no_members.rst

Numba
interactive_backend
@@ -125,7 +120,7 @@ rcParams

.. autosummary::
:toctree: generated/
:template: class.rst
:template: class_no_members.rst

rc_context

@@ -137,7 +132,6 @@ Experimental feature

.. autosummary::
:toctree: generated/
:template: class_members.rst

SamplingWrapper
PyStanSamplingWrapper
19 changes: 15 additions & 4 deletions doc/conf.py
@@ -21,9 +21,14 @@
import sys

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from recommonmark.parser import CommonMarkParser
import sphinx_bootstrap_theme
import arviz

class CustomCommonMarkParser(CommonMarkParser):
def visit_document(self, node):
pass

arviz.rcParams["data.load"] = "eager"

# -- General configuration ------------------------------------------------
@@ -54,7 +59,6 @@
"bokeh.sphinxext.bokeh_plot",
"numpydoc",
"nbsphinx",
"m2r",
"IPython.sphinxext.ipython_directive",
"IPython.sphinxext.ipython_console_highlighting",
"gallery_generator",
@@ -181,7 +185,9 @@


def setup(app):
app.add_stylesheet("custom.css")
app.add_css_file("custom.css")
app.add_source_suffix('.md', 'markdown')
app.add_source_parser(CustomCommonMarkParser)


# -- Options for LaTeX output ---------------------------------------------
@@ -253,5 +259,10 @@ def setup(app):
epub_exclude_files = ["search.html"]


# Example configuration for intersphinx: refer to the Python standard library.
# intersphinx_mapping = {'https://docs.python.org/': None}
# Example configuration for intersphinx
intersphinx_mapping = {
"xarray": ("http://xarray.pydata.org/en/stable/", None),
"pymc3": ("https://docs.pymc.io/", None),
"mpl": ("https://matplotlib.org/", None),
"bokeh": ("https://docs.bokeh.org/en/latest/", None),
}
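The new `intersphinx_mapping` is what lets roles like `:meth:`xarray.Dataset.sel`` in the docstrings above resolve to absolute URLs on the target project's site. A rough sketch of that resolution (hypothetical helper; real intersphinx looks names up in each project's `objects.inv` inventory file):

```python
# Sketch of intersphinx-style resolution (hypothetical helper; the real
# extension fetches and parses each project's objects.inv inventory).

intersphinx_mapping = {
    "xarray": ("http://xarray.pydata.org/en/stable/", None),
    "mpl": ("https://matplotlib.org/", None),
}

def resolve(role_target, inventory):
    """Map e.g. 'xarray:xarray.Dataset.sel' to a full documentation URL."""
    project, _, name = role_target.partition(":")
    base_url, _ = intersphinx_mapping[project]
    relative_uri = inventory[name]  # normally read from objects.inv
    return base_url + relative_uri

# toy inventory entry standing in for the real objects.inv contents
inventory = {"xarray.Dataset.sel": "generated/xarray.Dataset.sel.html"}
url = resolve("xarray:xarray.Dataset.sel", inventory)
```

The `project:` prefix in a role (as in `:meth:`xarray:xarray.Dataset.sel``) pins the lookup to one entry of the mapping instead of searching all of them.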