[REVIEW] Change docs theme to pydata-sphinx theme #4985

Merged: 8 commits, Nov 17, 2022
2 changes: 2 additions & 0 deletions .gitignore
@@ -35,6 +35,8 @@ _skbuild/

 ## files pickled in notebook when ran during python docstring generation
 docs/source/*.model
+docs/source/*.pkl
+docs/source/*.tl

 ## eclipse
 .project
2 changes: 1 addition & 1 deletion conda/environments/builddocs_py37.yml
@@ -19,7 +19,7 @@ dependencies:
   - pytest
   # required for building rapids project docs
   - sphinx
-  - sphinx_rtd_theme
+  - pydata-sphinx-theme
   - sphinx-markdown-tables
   - sphinxcontrib-websupport
   - nbsphinx
2 changes: 1 addition & 1 deletion cpp/include/cuml/manifold/tsne.h
@@ -85,7 +85,7 @@ struct TSNEParams {
   // The momentum used after the exaggeration phase.
   float post_momentum = 0.8;

-  // Set this to -1 for pure random intializations or >= 0 for
+  // Set this to -1 for pure random initializations or >= 0 for
   // reproducible outputs. This sets random seed correctly, but there
   // may still be some variance due to the parallel nature of this algorithm.
   long long random_state = -1;
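The `random_state` convention documented above (-1 for a fresh random seed, >= 0 for a reproducible one) can be sketched in Python. This is a hypothetical helper for illustration, not cuML's implementation:

```python
import random

def resolve_seed(random_state: int) -> int:
    """Mimic the TSNEParams convention: a negative value means draw a
    fresh random seed; any value >= 0 is used verbatim so runs are
    reproducible (modulo parallel-execution variance)."""
    if random_state < 0:
        return random.randrange(2**31)  # fresh, non-reproducible seed
    return random_state

assert resolve_seed(42) == 42         # fixed seed is passed through
assert 0 <= resolve_seed(-1) < 2**31  # -1 yields some valid seed
```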
42 changes: 0 additions & 42 deletions docs/source/_static/copybutton.css

This file was deleted.

61 changes: 0 additions & 61 deletions docs/source/_static/example_mod.js

This file was deleted.

87 changes: 0 additions & 87 deletions docs/source/_static/infoboxes.css

This file was deleted.

26 changes: 10 additions & 16 deletions docs/source/conf.py
@@ -50,6 +50,7 @@
     "nbsphinx",
     "recommonmark",
     "sphinx_markdown_tables",
+    "sphinx_copybutton"
 ]

 ipython_mplbackend = "str"
@@ -107,31 +108,26 @@
 # a list of builtin themes.
 #

-html_theme = 'sphinx_rtd_theme'
-
-# on_rtd is whether we are on readthedocs.org
-on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
-
-if not on_rtd:
-    # only import and set the theme if we're building docs locally
-    # otherwise, readthedocs.org uses their theme by default,
-    # so no need to specify it
-    import sphinx_rtd_theme
-    html_theme = 'sphinx_rtd_theme'
-    html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
+html_theme = 'pydata_sphinx_theme'

 # Theme options are theme-specific and customize the look and feel of a theme
 # further. For a list of options available for each theme, see the
 # documentation.
 #
-# html_theme_options = {}
+html_theme_options = {
+    "external_links": [],
+    "github_url": "https://github.com/rapidsai/cuml",
+    "twitter_url": "https://twitter.com/rapidsai",
+    "show_toc_level": 1,
+    "navbar_align": "right",
+}

 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
 html_static_path = ['_static']

-html_js_files = ["example_mod.js"]
+html_js_files = []

 # -- Options for HTMLHelp output ------------------------------------------

@@ -193,8 +189,6 @@


 def setup(app):
-    app.add_css_file("copybutton.css")
-    app.add_css_file("infoboxes.css")
     app.add_css_file("references.css")
     app.add_css_file("https://docs.rapids.ai/assets/css/custom.css")
     app.add_js_file("https://docs.rapids.ai/assets/js/custom.js", loading_method="defer")
2 changes: 1 addition & 1 deletion notebooks/kmeans_demo.ipynb
@@ -8,7 +8,7 @@
     "\n",
     "KMeans is a basic but powerful clustering method which is optimized via Expectation Maximization. It randomly selects K data points in X, and computes which samples are close to these points. For every cluster of points, a mean is computed, and this becomes the new centroid.\n",
     "\n",
-    "cuML’s KMeans supports the scalable KMeans++ intialization method. This method is more stable than randomnly selecting K points.\n",
+    "cuML’s KMeans supports the scalable KMeans++ initialization method. This method is more stable than randomly selecting K points.\n",
     " \n",
     "The model can take array-like objects, either in host as NumPy arrays or in device (as Numba or cuda_array_interface-compliant), as well as cuDF DataFrames as the input.\n",
    "\n",
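The Expectation-Maximization loop the notebook text describes (assign points to the nearest centroid, then recompute each centroid as its cluster mean) can be sketched in plain Python. A toy 1-D version for illustration only, not cuML's GPU implementation:

```python
def kmeans_1d(xs, centroids, iters=10):
    """Toy 1-D k-means: the E-step assigns each point to its nearest
    centroid; the M-step moves each centroid to its cluster mean."""
    for _ in range(iters):
        clusters = {c: [] for c in centroids}
        for x in xs:
            nearest = min(centroids, key=lambda c: abs(x - c))
            clusters[nearest].append(x)
        centroids = [sum(pts) / len(pts) if pts else c
                     for c, pts in clusters.items()]
    return sorted(centroids)

# Two well-separated groups converge to their group means.
print(kmeans_1d([1, 2, 3, 10, 11, 12], [0.0, 8.0]))  # → [2.0, 11.0]
```

The KMeans++ initialization the notebook mentions addresses the sensitivity of this loop to the starting `centroids`; with poorly chosen starts, plain random initialization can converge to a worse local optimum.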
@@ -432,18 +432,18 @@ class ColumnTransformer(TransformerMixin, BaseComposition, BaseEstimator):
     ----------
     transformers : list of tuples
         List of (name, transformer, columns) tuples specifying the
-        transformer objects to be applied to subsets of the data.
+        transformer objects to be applied to subsets of the data:

-        name : str
+        * name : str
             Like in Pipeline and FeatureUnion, this allows the transformer and
             its parameters to be set using ``set_params`` and searched in grid
             search.
-        transformer : {'drop', 'passthrough'} or estimator
+        * transformer : {'drop', 'passthrough'} or estimator
             Estimator must support `fit` and `transform`.
             Special-cased strings 'drop' and 'passthrough' are accepted as
             well, to indicate to drop the columns or to pass them through
             untransformed, respectively.
-        columns : str, array-like of str, int, array-like of int, \
+        * columns : str, array-like of str, int, array-like of int, \
             array-like of bool, slice or callable
             Indexes the data on its second axis. Integers are interpreted as
             positional columns, while strings can reference DataFrame columns
@@ -1028,14 +1028,14 @@ def make_column_transformer(*transformers,
     ----------
     *transformers : tuples
         Tuples of the form (transformer, columns) specifying the
-        transformer objects to be applied to subsets of the data.
+        transformer objects to be applied to subsets of the data:

-        transformer : {'drop', 'passthrough'} or estimator
+        * transformer : {'drop', 'passthrough'} or estimator
             Estimator must support `fit` and `transform`.
             Special-cased strings 'drop' and 'passthrough' are accepted as
             well, to indicate to drop the columns or to pass them through
             untransformed, respectively.
-        columns : str, array-like of str, int, array-like of int, slice, \
+        * columns : str, array-like of str, int, array-like of int, slice, \
             array-like of bool or callable
             Indexes the data on its second axis. Integers are interpreted as
             positional columns, while strings can reference DataFrame columns
@@ -1097,7 +1097,6 @@ def make_column_transformer(*transformers,
         ['numerical_column']),
         ('onehotencoder', OneHotEncoder(...),
         ['categorical_column'])])
-
     """
     # transformer_weights keyword is not passed through because the user
     # would need to know the automatically generated names of the transformers
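The `(name, transformer, columns)` tuple format and the special-cased `'drop'`/`'passthrough'` strings documented above can be illustrated with a toy column dispatcher. This is a hypothetical sketch of the dispatch rules only, not the actual ColumnTransformer:

```python
def apply_transformers(rows, transformers):
    """Toy column dispatcher: rows is a list of dicts; transformers is a
    list of (name, transformer, columns) tuples as in the docstring.
    'drop' discards the columns, 'passthrough' keeps them unchanged,
    and any callable is applied value by value."""
    out = []
    for row in rows:
        new_row = {}
        for name, transformer, columns in transformers:
            for col in columns:
                if transformer == "drop":
                    continue            # special-cased string: drop the column
                elif transformer == "passthrough":
                    new_row[col] = row[col]  # keep untransformed
                else:
                    new_row[col] = transformer(row[col])
        out.append(new_row)
    return out

rows = [{"a": 1, "b": 2, "c": 3}]
spec = [("keep", "passthrough", ["a"]),
        ("square", lambda v: v * v, ["b"]),
        ("unused", "drop", ["c"])]
print(apply_transformers(rows, spec))  # → [{'a': 1, 'b': 4}]
```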
14 changes: 6 additions & 8 deletions python/cuml/benchmark/datagen.py
@@ -352,26 +352,24 @@ def gen_data(
 ):
     """Returns a tuple of data from the specified generator.

-    Output
-    -------
-    (train_features, train_labels, test_features, test_labels) tuple
-    containing matrices or dataframes of the requested format.
-    test_features and test_labels may be None if no splitting was done.
-
     Parameters
     ----------
     dataset_name : str
         Dataset to use. Can be a synthetic generator (blobs or regression)
         or a specified dataset (higgs currently, others coming soon)
     dataset_format : str
         Type of data to return. (One of cudf, numpy, pandas, gpuarray)
     n_samples : int
         Number of samples to include in training set (regardless of test split)
     test_fraction : float
         Fraction of the dataset to partition randomly into the test set.
         If this is 0.0, no test set will be created.

+    Returns
+    -------
+    (train_features, train_labels, test_features, test_labels) tuple
+    containing matrices or dataframes of the requested format.
+    test_features and test_labels may be None if no splitting was done.
     """
     data = _data_generators[dataset_name](
         int(n_samples / (1 - test_fraction)),
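The `int(n_samples / (1 - test_fraction))` call at the end of the hunk oversamples so that the training set still holds `n_samples` rows after the test split. A minimal sketch of that arithmetic (hypothetical helper name):

```python
def total_rows_needed(n_samples: int, test_fraction: float) -> int:
    """How many rows to generate so that, after holding out
    test_fraction of them, roughly n_samples remain for training
    (mirrors the int(n_samples / (1 - test_fraction)) call above)."""
    return int(n_samples / (1 - test_fraction))

print(total_rows_needed(800, 0.2))  # → 1000: a 20% test split leaves 800 train rows
```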
4 changes: 2 additions & 2 deletions python/cuml/cluster/dbscan.pyx
@@ -132,7 +132,7 @@ class DBSCAN(Base,
     dtype: int32

     Parameters
-    -----------
+    ----------
     eps : float (default = 0.5)
         The maximum distance between 2 points such they reside in the same
         neighborhood.
@@ -187,7 +187,7 @@
     calc_core_sample_indices==True

     Notes
-    ------
+    -----
     DBSCAN is very sensitive to the distance metric it is used with, and a
     large assumption is that datapoints need to be concentrated in groups for
     clusters to be constructed.
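The `eps` neighborhood described in the docstring drives DBSCAN's core-sample notion: a point is a core sample when enough points lie within `eps` of it. A toy 1-D sketch of that test (illustrative only, not cuML's implementation):

```python
def is_core_point(points, idx, eps=0.5, min_samples=5):
    """Toy DBSCAN core-point test in 1-D: a point is a core sample if
    at least min_samples points (itself included) lie within eps."""
    neighbors = [p for p in points if abs(p - points[idx]) <= eps]
    return len(neighbors) >= min_samples

pts = [0.0, 0.1, 0.2, 0.3, 0.4, 5.0]
print(is_core_point(pts, 2))  # → True: the point sits in a dense cluster
print(is_core_point(pts, 5))  # → False: 5.0 is an isolated point
```

This also shows why the docstring's note matters: whether two points are "within eps" depends entirely on the distance metric in use.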
2 changes: 1 addition & 1 deletion python/cuml/cluster/kmeans.pyx
@@ -220,7 +220,7 @@ class KMeans(Base,
     Which cluster each datapoint belongs to.

     Notes
-    ------
+    -----
     KMeans requires `n_clusters` to be specified. This means one needs to
     approximately guess or know how many clusters a dataset has. If one is not
     sure, one can start with a small number of clusters, and visualize the