
Add docs build job #3157

Merged on Feb 28, 2023 (45 commits)

Commits
a1ebed9  initial docs build draft (AyodeAwe, Jan 18, 2023)
50b27ba  add dgl (AyodeAwe, Jan 19, 2023)
1c8e795  dgl (AyodeAwe, Jan 20, 2023)
eeb3e17  dgl fix (AyodeAwe, Jan 23, 2023)
bd9eb12  pin sphinx (AyodeAwe, Jan 24, 2023)
0531b18  Merge branch 'branch-23.02' into add-docs-build (AyodeAwe, Jan 24, 2023)
9083ec2  fix typos (AyodeAwe, Jan 27, 2023)
2b43932  Merge branch 'branch-23.02' of github.com:rapidsai/cugraph into add-d… (AyodeAwe, Jan 27, 2023)
fc28fce  update conda yaml (AyodeAwe, Jan 27, 2023)
be224af  Merge branch 'branch-23.02' of github.com:rapidsai/cugraph into add-d… (AyodeAwe, Jan 27, 2023)
4fe5a46  fix commit (AyodeAwe, Jan 27, 2023)
899f274  log install cmd (AyodeAwe, Jan 28, 2023)
c5d5e9b  Merge branch 'branch-23.02' into add-docs-build (AyodeAwe, Jan 29, 2023)
9c37bbb  add -W flag (AyodeAwe, Jan 29, 2023)
43fff75  Merge branch 'add-docs-build' of github.com:AyodeAwe/cugraph into add… (AyodeAwe, Jan 29, 2023)
9e332c9  cast html_theme_path to list (AyodeAwe, Jan 30, 2023)
820fbc4  Merge branch 'branch-23.04' into add-docs-build (AyodeAwe, Feb 10, 2023)
42b9b56  update shared workflows branch (ajschmidt8, Feb 13, 2023)
ad6cc26  Merge branch 'branch-23.04' into add-docs-build (ajschmidt8, Feb 13, 2023)
0d299a7  Merge branch 'branch-23.04' into add-docs-build (ajschmidt8, Feb 17, 2023)
5bc74bd  prevent nightly docs runs (AyodeAwe, Feb 17, 2023)
c14be0d  update `.gitignore` to ignore doc build dirs (ajschmidt8, Feb 17, 2023)
e04a25c  add `graphviz` to doc build deps (ajschmidt8, Feb 17, 2023)
2031345  rm unnecessary `conf.py` header lines (ajschmidt8, Feb 17, 2023)
a59089e  remove `conf.py` imports (ajschmidt8, Feb 17, 2023)
a88b1a0  rm unnecessary `rtd` lines in `conf.py` (ajschmidt8, Feb 17, 2023)
f1d70dd  fix some easy formatting warnings (ajschmidt8, Feb 17, 2023)
f689804  fix "critical" build warnings (ajschmidt8, Feb 17, 2023)
31bf955  apply `cugraph-dgl` patch from Vibhu (ajschmidt8, Feb 17, 2023)
c93d31b  fix copyright header (ajschmidt8, Feb 17, 2023)
ccc006a  rm dead doc import (ajschmidt8, Feb 18, 2023)
d656d56  fix "critical" build warnings (ajschmidt8, Feb 18, 2023)
0ef649b  rm `-W` from `sphinx-build` since too many warnings (ajschmidt8, Feb 18, 2023)
445c1ee  unrelated CI fix for wheels (ajschmidt8, Feb 18, 2023)
bb8b4c4  re-add erroneously removed final new lines (ajschmidt8, Feb 18, 2023)
10c37b3  update dependency list name (ajschmidt8, Feb 18, 2023)
2e5ac81  rm redundancies (AyodeAwe, Feb 21, 2023)
eecdc70  Merge branch 'branch-23.04' into add-docs-build (AyodeAwe, Feb 22, 2023)
baefd2b  ignore flake warning (AyodeAwe, Feb 22, 2023)
166f29c  Merge branch 'branch-23.04' into add-docs-build (AyodeAwe, Feb 22, 2023)
434b1ba  Revert "ignore flake warning" (AyodeAwe, Feb 22, 2023)
bbde5be  dgl.dataloading fix (AyodeAwe, Feb 22, 2023)
7aa61c8  less verbosity (AyodeAwe, Feb 23, 2023)
20a0eab  Merge branch 'branch-23.04' into add-docs-build (AyodeAwe, Feb 27, 2023)
9531f96  rm set cmds (AyodeAwe, Feb 27, 2023)
Files changed

11 changes: 11 additions & 0 deletions .github/workflows/build.yaml
@@ -52,6 +52,17 @@ jobs:
branch: ${{ inputs.branch }}
date: ${{ inputs.date }}
sha: ${{ inputs.sha }}
docs-build:
if: github.ref_type == 'branch' && github.event_name == 'push'
needs: python-build
secrets: inherit
uses: rapidsai/shared-action-workflows/.github/workflows/[email protected]
with:
build_type: branch
node_type: "gpu-latest-1"
arch: "amd64"
container_image: "rapidsai/ci:latest"
run_script: "ci/build_docs.sh"
wheel-build-pylibcugraph:
secrets: inherit
uses: rapidsai/shared-action-workflows/.github/workflows/[email protected]
11 changes: 11 additions & 0 deletions .github/workflows/pr.yaml
@@ -18,6 +18,7 @@ jobs:
- conda-notebook-tests
- conda-python-build
- conda-python-tests
- docs-build
- wheel-build-pylibcugraph
- wheel-tests-pylibcugraph
- wheel-build-cugraph
@@ -64,6 +65,16 @@ jobs:
arch: "amd64"
container_image: "rapidsai/ci:latest"
run_script: "ci/test_notebooks.sh"
docs-build:
needs: conda-python-build
secrets: inherit
uses: rapidsai/shared-action-workflows/.github/workflows/[email protected]
with:
build_type: pull-request
node_type: "gpu-latest-1"
arch: "amd64"
container_image: "rapidsai/ci:latest"
run_script: "ci/build_docs.sh"
wheel-build-pylibcugraph:
needs: checks
secrets: inherit
2 changes: 2 additions & 0 deletions .gitignore
@@ -93,3 +93,5 @@ python/cugraph/cugraph/tests/dask-worker-space

# Sphinx docs & build artifacts
docs/cugraph/source/api_docs/api/*
_html
_text
58 changes: 58 additions & 0 deletions ci/build_docs.sh
@@ -0,0 +1,58 @@
#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.

set -euo pipefail

rapids-logger "Create test conda environment"
. /opt/conda/etc/profile.d/conda.sh

rapids-dependency-file-generator \
--output conda \
--file_key docs \
--matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml

rapids-mamba-retry env create --force -f env.yaml -n docs
conda activate docs

rapids-print-env

rapids-logger "Downloading artifacts from previous jobs"
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
PYTHON_CHANNEL=$(rapids-download-conda-from-s3 python)
VERSION_NUMBER=$(rapids-get-rapids-version-from-git)

rapids-mamba-retry install \
--channel "${CPP_CHANNEL}" \
--channel "${PYTHON_CHANNEL}" \
libcugraph \
pylibcugraph \
cugraph \
cugraph-pyg \
cugraph-service-server \
cugraph-service-client \
libcugraph_etl

# This command installs `cugraph-dgl` without its dependencies
# since this package can currently only run in `11.6` CTK environments
# due to the dependency version specifications in its conda recipe.
rapids-logger "Install cugraph-dgl"
rapids-mamba-retry install "${PYTHON_CHANNEL}/linux-64/cugraph-dgl-*.tar.bz2"

rapids-logger "Build Doxygen docs"
pushd cpp/doxygen
doxygen Doxyfile
popd

rapids-logger "Build Sphinx docs"
pushd docs/cugraph
sphinx-build -b dirhtml source _html
sphinx-build -b text source _text
popd


if [[ "${RAPIDS_BUILD_TYPE}" == "branch" ]]; then
rapids-logger "Upload Docs to S3"
aws s3 sync --no-progress --delete docs/cugraph/_html "s3://rapidsai-docs/cugraph/${VERSION_NUMBER}/html"
aws s3 sync --no-progress --delete docs/cugraph/_text "s3://rapidsai-docs/cugraph/${VERSION_NUMBER}/txt"
aws s3 sync --no-progress --delete cpp/doxygen/html "s3://rapidsai-docs/libcugraph/${VERSION_NUMBER}/html"
fi
3 changes: 2 additions & 1 deletion conda/environments/all_cuda-118_arch-x86_64.yaml
@@ -22,6 +22,7 @@ dependencies:
- doxygen
- gcc_linux-64=9.*
- gmock=1.10.0
- graphviz
- gtest=1.10.0
- ipython
- libcudf=23.04.*
@@ -54,9 +55,9 @@ dependencies:
- scikit-build>=0.13.1
- scikit-learn>=0.23.1
- scipy
- sphinx
- sphinx-copybutton
- sphinx-markdown-tables
- sphinx<6
- sphinxcontrib-websupport
- ucx-proc=*=gpu
- ucx-py=0.31.*
23 changes: 22 additions & 1 deletion dependencies.yaml
@@ -11,7 +11,7 @@ files:
- common_python_test
- cpp_build
- cudatoolkit
- doc
- docs
- python_build
- test_notebook
- test_python
@@ -20,6 +20,12 @@
includes:
- checks
- py_version
docs:
output: none
includes:
- cudatoolkit
- docs
- py_version
test_cpp:
output: none
includes:
@@ -113,6 +119,21 @@ dependencies:
cuda: "11.8"
packages:
- nvcc_linux-aarch64=11.8
docs:
common:
- output_types: [conda]
packages:
- doxygen
- graphviz
- ipython
- nbsphinx
- numpydoc
- pydata-sphinx-theme
- recommonmark
- sphinx-copybutton
- sphinx-markdown-tables
- sphinx<6
- sphinxcontrib-websupport
py_version:
specific:
- output_types: [conda]
2 changes: 1 addition & 1 deletion docs/cugraph/source/api_docs/centrality.rst
@@ -28,7 +28,7 @@ Katz Centrality (MG)
cugraph.dask.centrality.katz_centrality.katz_centrality

Degree Centrality
---------------
-----------------
.. autosummary::
:toctree: api/

1 change: 0 additions & 1 deletion docs/cugraph/source/api_docs/cugraph_pyg.rst
@@ -10,6 +10,5 @@ cugraph-pyg
:toctree: api/

cugraph_pyg.data.cugraph_store.EXPERIMENTAL__CuGraphStore
cugraph_pyg.loader.dispatch.call_cugraph_algorithm
cugraph_pyg.sampler.cugraph_sampler.EXPERIMENTAL__CuGraphSampler

2 changes: 1 addition & 1 deletion docs/cugraph/source/api_docs/index.rst
@@ -1,5 +1,5 @@
Python API reference
=============
====================

This page provides a list of all publicly accessible modules, methods and classes through
the ``cugraph.*`` namespace.
17 changes: 0 additions & 17 deletions docs/cugraph/source/conf.py
@@ -1,6 +1,3 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
#
# pygdf documentation build configuration file, created by
@@ -27,8 +24,6 @@
# is relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
sys.path.insert(0, os.path.abspath('sphinxext'))
sys.path.insert(0, os.path.abspath('../../python'))
sys.path.insert(0, os.path.abspath('../..'))

from github_link import make_linkcode_resolve # noqa

@@ -112,18 +107,6 @@

html_theme = 'pydata_sphinx_theme'

# on_rtd is whether we are on readthedocs.org
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'

if not on_rtd:
# only import and set the theme if we're building docs locally
# otherwise, readthedocs.org uses their theme by default,
# so no need to specify it
import pydata_sphinx_theme
import sphinx_rtd_theme
html_theme = 'pydata_sphinx_theme'
html_theme_path = sphinx_rtd_theme.get_html_theme_path()

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False

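For orientation, here is a hypothetical minimal docs/cugraph/source/conf.py consistent with the simplified theme handling above and with the doc dependencies this PR declares; the extension list and option values are illustrative assumptions, not copied from the real file.

# Hypothetical minimal Sphinx configuration (sketch only); the real conf.py
# carries more settings such as project metadata and linkcode resolution.
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.autosummary",   # powers the `.. autosummary:: :toctree: api/` pages
    "numpydoc",
    "nbsphinx",
    "recommonmark",
    "sphinx_copybutton",
    "sphinx_markdown_tables",
    "sphinxcontrib.websupport",
]

# The ReadTheDocs conditional is gone; the theme is set unconditionally.
html_theme = "pydata_sphinx_theme"

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
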
1 change: 1 addition & 0 deletions python/cugraph-dgl/cugraph_dgl/cugraph_storage.py
@@ -83,6 +83,7 @@ def __init__(
information this can be ``torch.int32`` or ``torch.int64``
for PyTorch.
Defaults to ``torch.int64`` if pytorch is installed

Examples
--------
The following example uses `CuGraphStorage` :
13 changes: 8 additions & 5 deletions python/cugraph-dgl/cugraph_dgl/dataloading/dataloader.py
@@ -13,21 +13,22 @@
from __future__ import annotations
import os
import shutil
import torch
import cugraph_dgl
import cupy as cp
import cudf
from cugraph.experimental import BulkSampler
from dask.distributed import default_client, Event
import dgl
from dgl.dataloading import WorkerInitWrapper, create_tensorized_dataset
from cugraph_dgl.dataloading import (
HomogenousBulkSamplerDataset,
HetrogenousBulkSamplerDataset,
)
from cugraph_dgl.dataloading.utils.extract_graph_helpers import (
create_cugraph_graph_from_edges_dict,
)
from cugraph.utilities.utils import import_optional

dgl = import_optional("dgl")
torch = import_optional("torch")


class DataLoader(torch.utils.data.DataLoader):
@@ -128,7 +129,9 @@ def __init__(
self.shuffle = shuffle
self.drop_last = drop_last
self.graph_sampler = graph_sampler
worker_init_fn = WorkerInitWrapper(kwargs.get("worker_init_fn", None))
worker_init_fn = dgl.dataloading.WorkerInitWrapper(
kwargs.get("worker_init_fn", None)
)
self.other_storages = {}
self.epoch_number = 0
self._batch_size = batch_size
@@ -138,7 +141,7 @@ def __init__(

indices = _dgl_idx_to_cugraph_idx(indices, graph)

self.tensorized_indices_ds = create_tensorized_dataset(
self.tensorized_indices_ds = dgl.dataloading.create_tensorized_dataset(
indices,
batch_size,
drop_last,
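The switch above from plain `import dgl` / `import torch` to `import_optional` follows cugraph's pattern for optional dependencies. As a rough illustration, a minimal helper of this kind might look like the sketch below; it is written from scratch here, and the actual behavior of `cugraph.utilities.utils.import_optional` may differ.

import importlib


class _MissingModule:
    """Placeholder returned when an optional dependency is not installed."""

    def __init__(self, name):
        self._name = name

    def __getattr__(self, attr):
        # Fail only when the missing package is actually used.
        raise ModuleNotFoundError(
            f"'{self._name}' is required for this feature but is not installed"
        )


def import_optional(name):
    """Return the module if importable, otherwise a placeholder that errors on use."""
    try:
        return importlib.import_module(name)
    except ModuleNotFoundError:
        return _MissingModule(name)


dgl = import_optional("dgl")      # the real module if installed, a placeholder otherwise
torch = import_optional("torch")  # attribute access on the placeholder raises
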
7 changes: 4 additions & 3 deletions python/cugraph-dgl/cugraph_dgl/dataloading/dataset.py
@@ -20,9 +20,10 @@
create_heterogeneous_sampled_graphs_from_dataframe,
)

# TODO: Make optional imports
import torch
import dgl
from cugraph.utilities.utils import import_optional

dgl = import_optional("dgl")
torch = import_optional("torch")


# Todo: maybe should switch to __iter__
@@ -12,16 +12,16 @@
# limitations under the License.
from __future__ import annotations
from typing import Tuple, Dict
from torch.utils.dlpack import from_dlpack
from collections import defaultdict
import cudf
import torch
import dgl
from cugraph.utilities.utils import import_optional

dgl = import_optional("dgl")
torch = import_optional("torch")


def cast_to_tensor(ser: cudf.Series):
# TODO: Maybe use torch.as_tensor
return from_dlpack(ser.values.toDlpack())
return torch.as_tensor(ser.values, device="cuda")


def create_homogeneous_sampled_graphs_from_dataframe(
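The updated `cast_to_tensor` above hands a cudf column to PyTorch while keeping the data on the GPU. A small sketch of the interop it relies on, assuming reasonably recent cudf, CuPy, and PyTorch versions where `torch.as_tensor` understands the CUDA array interface:

import cudf
import torch
from torch.utils.dlpack import from_dlpack

ser = cudf.Series([1, 2, 3], dtype="int64")

# ser.values is a CuPy array resident on the GPU; torch.as_tensor can wrap it
# directly (typically without a copy) via __cuda_array_interface__.
t_new = torch.as_tensor(ser.values, device="cuda")

# The DLPack route that the PR replaces is roughly equivalent:
t_old = from_dlpack(ser.values.toDlpack())

assert t_new.device.type == "cuda" and t_old.device.type == "cuda"
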
1 change: 1 addition & 0 deletions python/cugraph-dgl/cugraph_dgl/nn/conv/gatconv.py
@@ -13,6 +13,7 @@
"""Torch Module for graph attention network layer using the aggregation
primitives in cugraph-ops"""
# pylint: disable=no-member, arguments-differ, invalid-name, too-many-arguments
from __future__ import annotations
from typing import Optional

from cugraph.utilities.utils import import_optional
1 change: 1 addition & 0 deletions python/cugraph-dgl/cugraph_dgl/nn/conv/relgraphconv.py
@@ -13,6 +13,7 @@
"""Torch Module for Relational graph convolution layer using the aggregation
primitives in cugraph-ops"""
# pylint: disable=no-member, arguments-differ, invalid-name, too-many-arguments
from __future__ import annotations
import math
from typing import Optional

1 change: 1 addition & 0 deletions python/cugraph-dgl/cugraph_dgl/nn/conv/sageconv.py
@@ -13,6 +13,7 @@
"""Torch Module for GraphSAGE layer using the aggregation primitives in
cugraph-ops"""
# pylint: disable=no-member, arguments-differ, invalid-name, too-many-arguments
from __future__ import annotations
from typing import Optional

from cugraph.utilities.utils import import_optional
7 changes: 5 additions & 2 deletions python/cugraph/cugraph/link_analysis/pagerank.py
@@ -1,4 +1,4 @@
# Copyright (c) 2019-2022, NVIDIA CORPORATION.
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
@@ -77,7 +77,9 @@ def pagerank(
increases when the tolerance decreases and/or alpha increases toward the
limiting value of 1. The user is free to use default values or to provide
inputs for the initial guess, tolerance and maximum number of iterations.
Parameters. All edges will have an edge_attr value of 1.0 if not provided.
All edges will have an edge_attr value of 1.0 if not provided.

Parameters
----------
G : cugraph.Graph or networkx.Graph
cuGraph graph descriptor, should contain the connectivity information
@@ -158,6 +160,7 @@ def pagerank(
Contains the vertex identifiers
df['pagerank'] : cudf.Series
Contains the PageRank score

Examples
--------
>>> from cugraph.experimental.datasets import karate
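As a usage illustration of the return columns documented above, a minimal hedged example with a made-up edge list (the karate-dataset snippet in the docstring remains the canonical one):

import cudf
import cugraph

# Tiny illustrative directed graph; each edge gets an edge_attr of 1.0 by default.
edges = cudf.DataFrame({"src": [0, 1, 2, 2], "dst": [1, 2, 0, 1]})
G = cugraph.Graph(directed=True)
G.from_cudf_edgelist(edges, source="src", destination="dst")

df = cugraph.pagerank(G, alpha=0.85, tol=1.0e-5)
# df['vertex'] holds the vertex identifiers, df['pagerank'] the scores.
print(df.sort_values("pagerank", ascending=False))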