setup a documentation website auto-generated from code docstrings #12

Closed
wyli opened this issue Jan 10, 2020 · 10 comments

@wyli
Contributor

wyli commented Jan 10, 2020

Additional comment from #21

We should also nail down what format docstrings should be in. I would suggest the NumPy format (https://numpydoc.readthedocs.io/en/latest/format.html) as a good hybrid of the Google docstring and reStructuredText formats; however, PyTorch uses Google-style docstrings, so maybe we should stick with that.
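
For reference, the two candidate formats look like this on a small hypothetical function (an illustrative sketch only; the function and its arguments are made up):

```python
def resample(image, spacing):
    """Resample an image to a new voxel spacing (NumPy-style docstring).

    Parameters
    ----------
    image : numpy.ndarray
        Input image array.
    spacing : tuple of float
        Target voxel spacing in millimetres.

    Returns
    -------
    numpy.ndarray
        The resampled image.
    """


def resample_google(image, spacing):
    """Resample an image to a new voxel spacing (Google-style docstring).

    Args:
        image (numpy.ndarray): Input image array.
        spacing (tuple of float): Target voxel spacing in millimetres.

    Returns:
        numpy.ndarray: The resampled image.
    """
```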

@atbenmurray
Contributor

Copying in the initial email for complete issue history; hope that's ok.
Hi Ben,
This item was assigned to you and me. If you don’t mind, I can start and put together a proposal based on what we did before.

Let me know if this is okay with you.

Thanks,
Yan

@atbenmurray
Contributor

I don't have experience with Python doc generation; do you have some go-to tools for it?

@vfdev-5
Member

vfdev-5 commented Jan 15, 2020

If I could put my two cents into this issue: PyTorch, torchvision (and Ignite) all use Sphinx for docs generation. For example, for torchvision:

Documentation pages are written in reStructuredText format. Docstrings, as mentioned in the first message, are in Google style.

HTH

@yanchengnv

yanchengnv commented Jan 15, 2020 via email

@yanchengnv

yanchengnv commented Jan 15, 2020 via email

@IsaacYangSLA
Contributor

I forked this repo and created the MONAI API documentation here. It was built with Sphinx, and github.io automatically publishes the master branch's /docs folder as the repository's GitHub Pages site. I tried to avoid polluting this repo, so once that page looks OK, I will create a PR.

There are at least three sub-tasks in this task.

  1. Creating source RST files. Usually, these are tutorials, examples, and user guides.
  2. Running Sphinx's build process to generate the API documentation from docstrings, and building HTML files from the combination of the API documentation and the RST files.
  3. Publishing the HTML files to a documentation website (github.io, currently).

At the current stage, we are doing most of these steps manually. In the future, we would like to automate them as part of CD.
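
To make sub-task 2 concrete, a minimal Sphinx conf.py might look roughly like the sketch below. This is an illustration only, not the actual MONAI configuration; the paths, theme, and project metadata are assumptions.

```python
# docs/source/conf.py -- minimal Sphinx configuration (illustrative sketch)
import os
import sys

# Make the package importable so autodoc can read the docstrings.
sys.path.insert(0, os.path.abspath("../.."))

project = "MONAI"
author = "MONAI Consortium"

extensions = [
    "sphinx.ext.autodoc",   # pull API documentation from docstrings
    "sphinx.ext.napoleon",  # parse Google/NumPy docstring styles
    "sphinx.ext.viewcode",  # add links to highlighted source
]

# Accept Google-style docstrings, as used by PyTorch.
napoleon_google_docstring = True
napoleon_numpy_docstring = False

html_theme = "sphinx_rtd_theme"  # assumption: Read the Docs theme installed
```

The HTML build itself is then `sphinx-build -b html docs/source docs/build` (or `make html` from the docs directory), and sub-task 3 is just publishing the contents of docs/build.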

@pdogra89

pdogra89 commented Feb 7, 2020

Isaac to investigate the Read the Docs service.

@IsaacYangSLA
Contributor

Read the Docs limits build memory to 1 GB, and the pytorch package alone is around 800 MB, so installing it pushes the build over that limit. With some mocked packages, I managed to build all the pages and pushed them to https://monai-readthedocstest.readthedocs.io/en/latest/. The process required manually creating mocked packages/classes/functions and did not scale well. Please take a look at https://gitlab.com/project-monai/readthedocstest/-/tree/master for the mocked code.
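
As a possibly lighter-weight alternative to hand-written mocks, Sphinx's autodoc can stub out heavy imports directly from conf.py. A sketch (the exact module list here is an assumption):

```python
# docs/source/conf.py (excerpt) -- autodoc mocks these modules at build time,
# so the documented package can be imported without installing them.
autodoc_mock_imports = [
    "torch",   # assumption: too heavy to install within the Read the Docs memory limit
    "ignite",
]
```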

@vfdev-5
Member

vfdev-5 commented Feb 10, 2020

@IsaacYangSLA maybe it is possible to build the docs with the CPU-only PyTorch package, which takes < 100 MB

@wyli
Contributor Author

wyli commented Feb 10, 2020

Fixed via #68.

@wyli wyli closed this as completed Feb 10, 2020
Nic-Ma added a commit that referenced this issue Jun 26, 2020
* adds network

* adds basic training

* update loading

* working prototype

* update validation set

* [MONAI] Add author; paper info; PDDCA18 (#6)

+ Author
+ Early accept
+ PDDCA18 link

* Update README.md

* adds network

* adds basic training

* update loading

* working prototype

* update validation set

* [MONAI] Update TRAIN_PATH, VAL_PATH (#8)

+ Update TRAIN_PATH, VAL_PATH

* [MONAI] Add data link (#7)

+ Add data link https://drive.google.com/file/d/1A2zpVlR3CkvtkJPvtAF3-MH0nr1WZ2Mn/view?usp=sharing

* fixes typos

* tested new dataset

* print more info, checked new dataset

* [MONAI] Add paper link (#9)

Add paper link https://arxiv.org/abs/2006.12575

* [MONAI] Use dice loss + focal loss to train (#10)

Use dice loss + focal loss to train

* [MONAI] Support non-one-hot ground truth (#11)

Support non-one-hot ground truth

* fixes format and docstrings, adds argparser options

* resume the focal_loss

* adds tests

* [MONAI] Support non-one-hot ground truth (#11)

Support non-one-hot ground truth

* adds tests

* update docstring

* [MONAI] Keep track of best validation scores (#12)

Keep track of best validation scores

* model saving

* adds window sampling

* update readme

* update docs

* fixes flake8 error

* update window sampling

* fixes model name

* fixes channel size issue

* [MONAI] Update --pretrain, --lr (#13)

+ lr from 5e-4 to 1e-3 because we use mean for class channel instead of sum for class channel.
+ pretrain path is consistent with current model_name.

* [MONAI] Pad image; elastic; best class model (#14)

* [MONAI] Pad image; elastic; best class model

+ Pad image bigger than crop_size, avoid potential issues in RandCropByPosNegLabeld
+ Use Rand3DElasticd
+ Save best model for each class

* Update train.py

Co-authored-by: Wenqi Li <[email protected]>

* flake8 fixes

* removes -1 cropsize deform

* testing commands

* fixes unit tests

* update spatial padding

* [MONAI] Add full image deform augmentation (#15)

+ Add full image deform augmentation by Rand3DElasticd
+ Please use latest MONAI in #623

* Adding py.typed

* updating setup.py to comply with black

* update based on comments

* excluding research from packaging

* update tests

* update setup.py

Co-authored-by: Wentao Zhu <[email protected]>
Co-authored-by: Neil Tenenholtz <[email protected]>
Co-authored-by: Nic Ma <[email protected]>