[Enhance] refine example_project README (#2002)
Ben-Louis authored Mar 1, 2023
1 parent 7220749 commit 4e62b72
Showing 3 changed files with 96 additions and 30 deletions.
4 changes: 2 additions & 2 deletions projects/README.md
@@ -2,9 +2,9 @@

In this folder, we welcome all contributions of keypoint detection techniques from the community.

Here, these requirements, e.g. code standards, are not that strict as in core package. Thus, developers from the community can implement their algorithms much more easily and efficiently in MMPose. We appreciate all contributions from community to make MMPose greater.
Here, these requirements, e.g. code standards, are not as strict as in the core package. Thus, developers from the community can implement their algorithms much more easily and efficiently in MMPose. We appreciate all contributions from the community that make MMPose greater.

Here is an [example project](./example_project) about how to add your algorithms easily.
Here is an [example project](./example_project) about how to add your algorithms easily. For common questions about projects, please read our [faq](faq.md).

We also provide some documentation listed below:

99 changes: 71 additions & 28 deletions projects/example_project/README.md
@@ -1,18 +1,35 @@
# Example Project

This is an example README for community `projects/`. You can write your README in your own project. Here are
some recommended parts of a README for others to understand and use your project, you can copy or modify them
according to your project.
> A README.md template for releasing a project.
>
> All the fields in this README are **mandatory** for others to understand what you have achieved in this implementation.
> Please read our [Projects FAQ](../faq.md) if you are still unclear about the requirements, or raise an [issue](https://github.com/open-mmlab/mmpose/issues)!
## Description

> Share any information you would like others to know. For example:
>
> Author: @xxx.
>
> This is an implementation of \[XXX\].
Author: @xxx.

This project implements a top-down pose estimator with a custom head and a custom loss function, both inherited from existing MMPose modules.
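
For illustration only, such a custom head and loss can be created by subclassing existing MMPose modules and registering them with `MODELS`, so that a config can refer to them by name. The sketch below is an assumption about how this might look, not necessarily this project's actual code; the class names, base classes, and config keys are illustrative.

```python
# Minimal sketch (assumption, not necessarily this project's actual code):
# subclass existing MMPose modules and register them so that configs can
# refer to them by name.
from mmpose.models.heads import HeatmapHead
from mmpose.models.losses import KeypointMSELoss
from mmpose.registry import MODELS


@MODELS.register_module()
class ExampleHead(HeatmapHead):
    """Behaves like HeatmapHead; override its methods to customize."""


@MODELS.register_module()
class ExampleLoss(KeypointMSELoss):
    """Behaves like KeypointMSELoss; override forward() to customize."""
```

Once such classes are importable (e.g. via `PYTHONPATH` as described below), a config can select them with `head=dict(type='ExampleHead', ..., loss=dict(type='ExampleLoss', ...))`.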

## Usage

### Setup Environment
> For a typical model, this section should contain the commands for training and testing.
> You are also encouraged to dump your environment specification to env.yml via `conda env export > env.yml`.
Please refer to [Installation](https://mmpose.readthedocs.io/en/1.x/installation.html) to install MMPose.
### Prerequisites

At first, add the current folder to `PYTHONPATH`, so that Python can find your code. Run command in the current directory to add it.
- Python 3.7
- PyTorch 1.6 or higher
- [MIM](https://github.com/open-mmlab/mim) v0.33 or higher
- [MMPose](https://github.com/open-mmlab/mmpose) v1.0.0rc0 or higher

> Please run it every time after you opened a new shell.
All the commands below rely on the correct configuration of `PYTHONPATH`, which should point to the project's directory so that Python can locate the module files. In the `example_project/` root directory, run the following line to add the current directory to `PYTHONPATH`:

```shell
export PYTHONPATH=`pwd`:$PYTHONPATH
@@ -26,19 +43,19 @@ Prepare the COCO dataset according to the [instruction](https://mmpose.readthedo

**To train with a single GPU:**

```bash
```shell
mim train mmpose configs/example-head-loss_hrnet-w32_8xb64-210e_coco-256x192.py
```

**To train with multiple GPUs:**

```bash
```shell
mim train mmpose configs/example-head-loss_hrnet-w32_8xb64-210e_coco-256x192.py --launcher pytorch --gpus 8
```

**To train with multiple GPUs by slurm:**

```bash
```shell
mim train mmpose configs/example-head-loss_hrnet-w32_8xb64-210e_coco-256x192.py --launcher slurm \
--gpus 16 --gpus-per-node 8 --partition $PARTITION
```
@@ -47,32 +64,36 @@ mim train mmpose configs/example-head-loss_hrnet-w32_8xb64-210e_coco-256x192.py

**To test with a single GPU:**

```bash
```shell
mim test mmpose configs/example-head-loss_hrnet-w32_8xb64-210e_coco-256x192.py $CHECKPOINT
```

**To test with multiple GPUs:**

```bash
```shell
mim test mmpose configs/example-head-loss_hrnet-w32_8xb64-210e_coco-256x192.py $CHECKPOINT --launcher pytorch --gpus 8
```

**To test with multiple GPUs by slurm:**

```bash
```shell
mim test mmpose configs/example-head-loss_hrnet-w32_8xb64-210e_coco-256x192.py $CHECKPOINT --launcher slurm \
--gpus 16 --gpus-per-node 8 --partition $PARTITION
```

## Results

| Model | Backbone | AP | AR | Config | Download |
| :-----------------------: | :-------: | :---: | :---: | :------------------------------------------------------------------------: | :--------------------------------------------------------------------------------: |
| ExampleHead + ExampleLoss | HRNet-w32 | 0.749 | 0.804 | [config](./configs/example-head-loss_hrnet-w32_8xb64-210e_coco-256x192.py) | [model](https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192-81c58e40_20220909.pth) \| [log](https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192_20220909.log) |
> List the results as usually done in other models' READMEs. Here is an [Example](https://github.com/open-mmlab/mmpose/blob/dev-1.x/configs/body_2d_keypoint/topdown_heatmap/coco/hrnet_coco.md).
> You should state whether the results are based on pre-trained weights converted from the official release, or reproduced by retraining the model in this project.
| Model | Backbone | Input Size | AP | AP<sup>50</sup> | AP<sup>75</sup> | AR | AR<sup>50</sup> | Download |
| :-----------------------------------------------------------: | :-------: | :--------: | :---: | :-------------: | :-------------: | :---: | :-------------: | :---------------------------------------------------------------: |
| [ExampleHead + ExampleLoss](./configs/example-head-loss_hrnet-w32_8xb64-210e_coco-256x192.py) | HRNet-w32 | 256x192 | 0.749 | 0.906 | 0.821 | 0.804 | 0.945 | [model](https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192-81c58e40_20220909.pth) \| [log](https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192_20220909.log) |

## Citation

<!-- Replace to the citation of the paper your project refers to. -->
> You may remove this section if not applicable.
```bibtex
@misc{mmpose2020,
@@ -88,36 +109,58 @@ mim test mmpose configs/example-head-loss_hrnet-w32_8xb64-210e_coco-256x192.py $
Here is a checklist of this project's progress. And you can ignore this part if you don't plan to contribute
to MMPose projects.

> The PIC (person in charge) or contributors of this project should check all the items that they believe have been finished, which will further be verified by codebase maintainers via a PR.
> OpenMMLab's maintainers will review the code to ensure the project's quality. Reaching the first milestone means that this project satisfies the minimum requirements for being merged into `projects/`. But this project is only eligible to become a part of the core package upon attaining the last milestone.
> Note that keeping this section up-to-date is crucial not only for this project's developers but also for the entire community, since other contributors may join this project and decide their starting point from this list. It also helps maintainers accurately estimate the time and effort needed for further code polishing.
> A project does not necessarily have to be finished in a single PR, but it's essential for the project to at least reach the first milestone in its very first PR.
- [ ] Milestone 1: PR-ready, and acceptable to be one of the `projects/`.

- [ ] Finish the code

<!-- The code's design shall follow existing interfaces and convention. For example, each model component should be registered into `mmpose.registry.MODELS` and configurable via a config file. -->
> The code's design shall follow existing interfaces and conventions. For example, each model component should be registered into `mmpose.registry.MODELS` and configurable via a config file.
- [ ] Basic docstrings & proper citation

<!-- Each major class should contains a docstring, describing its functionality and arguments. If your code is copied or modified from other open-source projects, don't forget to cite the source project in docstring and make sure your behavior is not against its license. Typically, we do not accept any code snippet under GPL license. [A Short Guide to Open Source Licenses](https://medium.com/nationwide-technology/a-short-guide-to-open-source-licenses-cf5b1c329edd) -->
> Each major class should contain a docstring describing its functionality and arguments. If your code is copied or modified from other open-source projects, don't forget to cite the source project in the docstring and make sure your usage does not violate its license. Typically, we do not accept any code snippet under a GPL license. [A Short Guide to Open Source Licenses](https://medium.com/nationwide-technology/a-short-guide-to-open-source-licenses-cf5b1c329edd)
- [ ] Test-time correctness

> If you are reproducing the result from a paper, make sure your model's inference-time performance matches that in the original paper. The weights can usually be obtained by simply renaming the keys in the official pre-trained weights. This test can be skipped, though, if you are able to prove training-time correctness and check the second milestone.
- [ ] Converted checkpoint and results (Only for reproduction)
- [ ] A full README

<!-- If you are reproducing the result from a paper, make sure the model in the project can match that results. Also please provide checkpoint links or a checkpoint conversion script for others to get the pre-trained model. -->
> As this template does.
- [ ] Milestone 2: Indicates a successful model implementation.

- [ ] Training results
- [ ] Training-time correctness

<!-- If you are reproducing the result from a paper, train your model from scratch and verified that the final result can match the original result. Usually, ±0.1% mAP is acceptable for the keypoint detections task on COCO. -->
> If you are reproducing the result from a paper, checking this item means that you should have trained your model from scratch based on the original paper's specification and verified that the final result matches the reported result within a minor error range.
- [ ] Milestone 3: Good to be a part of our core package!

- [ ] Type hints and docstrings

> Ideally *all* the methods should have [type hints](https://www.pythontutorial.net/python-basics/python-type-hints/) and [docstrings](https://google.github.io/styleguide/pyguide.html#381-docstrings). [Example](https://github.com/open-mmlab/mmpose/blob/0fb7f22000197181dc0629f767dd99d881d23d76/mmpose/utils/tensor_utils.py#L53)
- [ ] Unit tests

<!-- Unit tests for the major module are required. [Example](https://github.com/open-mmlab/mmpose/blob/1.x/tests/test_models/test_heads/test_heatmap_heads/test_heatmap_head.py) -->
> Unit tests for the major module are required. [Example](https://github.com/open-mmlab/mmpose/blob/1.x/tests/test_models/test_heads/test_heatmap_heads/test_heatmap_head.py)
- [ ] Code polishing

> Refactor your code according to the reviewers' comments.
- [ ] Metafile.yml

- [ ] Code style
> It will be parsed by MIM and Inferencer. [Example](https://github.com/open-mmlab/mmpose/blob/dev-1.x/configs/body_2d_keypoint/topdown_heatmap/coco/hrnet_coco.yml)
<!-- Refactor your code according to reviewer's comment. -->
- [ ] Move your modules into the core package following the codebase's file hierarchy structure.

- [ ] `metafile.yml` and `README.md`
> In particular, you may have to refactor this README into a standard one. [Example](https://github.com/open-mmlab/mmpose/blob/dev-1.x/configs/body_2d_keypoint/topdown_heatmap/README.md)
<!-- It will used for MMPose to acquire your models. [Example](https://github.com/open-mmlab/mmpose/blob/1.x/configs/body_2d_keypoint/topdown_heatmap/coco/hrnet_coco.yml). In particular, you may have to refactor this README into a standard one. [Example](https://github.com/open-mmlab/mmpose/blob/1.x/configs/body_2d_keypoint/topdown_heatmap/README.md) -->
- [ ] Refactor your modules into the core package following the codebase's file hierarchy structure.
23 changes: 23 additions & 0 deletions projects/faq.md
@@ -0,0 +1,23 @@
# FAQ

To help users better understand the `projects/` folder and how to use it effectively, we've created this FAQ page. Here, users can find answers to common questions and learn more about various aspects of the `projects/` folder, such as its usage and contribution guidance.

## Q1: Why set up the `projects/` folder?

Implementing new models and features in OpenMMLab's algorithm libraries can be troublesome due to the rigorous code-quality requirements, which may hinder the fast iteration of SOTA models and discourage community members from sharing their latest work here. That is why we set up the `projects/` folder: it hosts experimental features, frameworks and models that only need to meet minimum code-quality requirements and can be used as standalone libraries. Users are welcome to use them if they [use MMPose from source](https://mmpose.readthedocs.io/en/dev-1.x/installation.html#best-practices).

## Q2: Why should there be a checklist for a project?

This checklist is crucial not only for the project's developers but also for the entire community, since other contributors may join the project and decide their starting point from this list. It also helps maintainers accurately estimate the time and effort needed for further code polishing.

## Q3: What kind of PR will be merged?

Reaching the first milestone means that the project satisfies the minimum requirements for being merged into `projects/`. That is, the very first PR of a project must have all the items in the first milestone checked. We do not have any extra requirements on the project's subsequent PRs, so they can be a minor bug fix or update and do not have to achieve a whole milestone at once. But keep in mind that a project is only eligible to become a part of the core package upon attaining the last milestone.

## Q4: Compared to other models in the core packages, why do the model implementations in projects have different training/testing commands?

Projects are organized independently from the core package, and therefore their modules cannot be directly imported by `train.py` and `test.py`. Each model implementation in projects should either use `mim` for training/testing as suggested in the example project, or provide a custom `train.py`/`test.py`.
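
For illustration, a custom entry point could look like the sketch below. It assumes MMEngine's `Runner` API and a local `models` package that registers the project's modules on import; both the file layout and the package name are assumptions, not a prescribed structure.

```python
# Hypothetical minimal train.py for a project; 'models' is a placeholder
# package that registers the project's custom modules on import.
import argparse

from mmengine.config import Config
from mmengine.runner import Runner

import models  # noqa: F401  # triggers registration of the custom modules


def main():
    parser = argparse.ArgumentParser(description='Train a project model')
    parser.add_argument('config', help='path to the config file')
    parser.add_argument(
        '--work-dir',
        default='./work_dirs/example_project',
        help='directory to save logs and checkpoints')
    args = parser.parse_args()

    cfg = Config.fromfile(args.config)
    cfg.work_dir = args.work_dir  # Runner requires a work_dir
    Runner.from_cfg(cfg).train()


if __name__ == '__main__':
    main()
```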

## Q5: How to debug a project with a debugger?

A debugger makes our lives easier, but using one becomes a bit tricky if we have to train/test a model via `mim`. The way to circumvent this is to import the modules via their dotted path relative to the repository root. Assuming that we are developing a project X and its core modules are placed under `projects/X/modules`, simply adding `custom_imports = dict(imports='projects.X.modules')` to the config allows us to debug from the usual entry points (e.g. `tools/train.py`) in the root directory of the algorithm library. Just don't forget to remove 'projects.X' before publishing the project.
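
As a concrete illustration (the project name `X` and the module path are placeholders), the temporary config fragment could look like:

```python
# Hypothetical config fragment for debugging project X from the repository root.
# 'projects.X.modules' is a placeholder; point it at your project's package and
# remove it before publishing the project.
custom_imports = dict(
    imports=['projects.X.modules'],
    allow_failed_imports=False,  # fail loudly if the import path is wrong
)
```

With this in place, the usual entry points such as `python tools/train.py path/to/config.py` can be run under a debugger from the repository root.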
