[Docs] Refine README (#2207)
Tau-J authored Apr 12, 2023
1 parent 21181f6 commit 8412899
Showing 6 changed files with 81 additions and 9 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -140,13 +140,13 @@ MMPose v1.0.0 is a major update, including many API and config file changes. Cur
| DeepPose (CVPR 2014) | done |
| RLE (ICCV 2021) | done |
| SoftWingloss (TIP 2021) | |
| VideoPose3D (CVPR 2019) | |
| VideoPose3D (CVPR 2019) | in progress |
| Hourglass (ECCV 2016) | done |
| LiteHRNet (CVPR 2021) | done |
| AdaptiveWingloss (ICCV 2019) | done |
| SimpleBaseline2D (ECCV 2018) | done |
| PoseWarper (NeurIPS 2019) | |
| SimpleBaseline3D (ICCV 2017) | |
| SimpleBaseline3D (ICCV 2017) | in progress |
| HMR (CVPR 2018) | |
| UDP (CVPR 2020) | done |
| VIPNAS (CVPR 2021) | done |
4 changes: 2 additions & 2 deletions README_CN.md
@@ -138,13 +138,13 @@ MMPose v1.0.0 is a major update, including a large number of API and config file
| DeepPose (CVPR 2014) | done |
| RLE (ICCV 2021) | done |
| SoftWingloss (TIP 2021) | |
| VideoPose3D (CVPR 2019) | |
| VideoPose3D (CVPR 2019) | in progress |
| Hourglass (ECCV 2016) | done |
| LiteHRNet (CVPR 2021) | done |
| AdaptiveWingloss (ICCV 2019) | done |
| SimpleBaseline2D (ECCV 2018) | done |
| PoseWarper (NeurIPS 2019) | |
| SimpleBaseline3D (ICCV 2017) | |
| SimpleBaseline3D (ICCV 2017) | in progress |
| HMR (CVPR 2018) | |
| UDP (CVPR 2020) | done |
| VIPNAS (CVPR 2021) | done |
6 changes: 6 additions & 0 deletions projects/README.md
@@ -48,4 +48,10 @@ We also provide some documentation listed below to help you get started:
<img src="https://user-images.githubusercontent.com/26127467/226655503-3cee746e-6e42-40be-82ae-6e7cae2a4c7e.jpg" width="800" style="width: 800px; height: 200px; object-fit: cover"/>
</div><br/>

- **[📖Awesome MMPose](./awesome-mmpose/)**: A list of tutorials, papers, and datasets related to MMPose

<div align=center>
<img src="https://user-images.githubusercontent.com/13503330/231416285-5467d313-0732-4ada-97e1-12be6ec69a28.png" width="800"/>
</div><br/>

- **What's next? Join the ranks of <span style="color:blue"> *MMPose contributors* </span> by creating a new project**!
35 changes: 35 additions & 0 deletions projects/awesome-mmpose/README.md
@@ -0,0 +1,35 @@
# Awesome MMPose

A list of resources related to MMPose. Feel free to contribute!

## Contents

- [Tutorials](#tutorials)
- [Papers](#papers)
- [Datasets](#datasets)
- [Projects](#projects)

## Tutorials

- [MMPose Tutorial (Chinese)](https://github.com/TommyZihao/MMPose_Tutorials)
Chinese Jupyter notebook tutorials for MMPose, by 同济子豪兄
- [OpenMMLab Course](https://github.com/open-mmlab/OpenMMLabCourse)
This repository hosts articles, lectures, and tutorials on computer vision and OpenMMLab, helping learners understand algorithms and master our toolboxes in a systematic way.

## Papers

- [\[paper\]](https://arxiv.org/abs/2207.10387) [\[code\]](https://github.com/luminxu/Pose-for-Everything) ECCV 2022, Pose for Everything: Towards Category-Agnostic Pose Estimation
- [\[paper\]](https://arxiv.org/abs/2201.04676) [\[code\]](https://github.com/Sense-X/UniFormer) ICLR 2022, UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning
- [\[paper\]](https://arxiv.org/abs/2201.07412) [\[code\]](https://github.com/aim-uofa/Poseur) ECCV 2022, Poseur: Direct Human Pose Regression with Transformers
- [\[paper\]](https://arxiv.org/abs/2106.03348) [\[code\]](https://github.com/ViTAE-Transformer/ViTAE-Transformer) NeurIPS 2022, ViTAEv2: Vision Transformer Advanced by Exploring Inductive Bias for Image Recognition and Beyond
- [\[paper\]](https://arxiv.org/abs/2204.10762) [\[code\]](https://github.com/ZiyiZhang27/Dite-HRNet) IJCAI-ECAI 2022, Dite-HRNet: Dynamic Lightweight High-Resolution Network for Human Pose Estimation
- [\[paper\]](https://arxiv.org/abs/2302.08453) [\[code\]](https://github.com/TencentARC/T2I-Adapter) T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
- [\[paper\]](https://arxiv.org/pdf/2303.11638.pdf) [\[code\]](https://github.com/Gengzigang/PCT) CVPR 2023, Human Pose as Compositional Tokens

## Datasets

Waiting for your contribution!

## Projects

Waiting for your contribution!
28 changes: 23 additions & 5 deletions projects/rtmpose/README.md
@@ -267,10 +267,12 @@ bash opencv.sh
# Compile executable programs
bash build.sh

# inference for an image
# Inference for an image
# Please pass the folder of the model, not the model file
./bin/det_pose {det work-dir} {pose work-dir} {your_img.jpg} --device cpu

# inference for a video
# Inference for a video
# Please pass the folder of the model, not the model file
./bin/pose_tracker {det work-dir} {pose work-dir} {your_video.mp4} --device cpu
```

@@ -296,10 +298,12 @@ bash opencv.sh
# Compile executable programs
bash build.sh

# inference for an image
# Inference for an image
# Please pass the folder of the model, not the model file
./bin/det_pose {det work-dir} {pose work-dir} {your_img.jpg} --device cuda

# inference for a video
# Inference for a video
# Please pass the folder of the model, not the model file
./bin/pose_tracker {det work-dir} {pose work-dir} {your_video.mp4} --device cuda
```

@@ -312,11 +316,21 @@ For details, see [Pipeline Inference](#-step4-pipeline-inference).
1. Download the [pre-compiled SDK](https://github.com/open-mmlab/mmdeploy/releases).
2. Unzip the SDK and go to the `sdk/python` folder.
3. Install `mmdeploy_python` via `.whl` file.

```shell
pip install {file_name}.whl
```

4. Download the [SDK models](https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmpose-cpu.zip) and unzip them.
5. Run inference with `pose_tracker.py`:

**Note:**

- If you encounter `ImportError: DLL load failed while importing mmdeploy_python`, copy `thirdparty/onnxruntime/lib/onnxruntime.dll` to `site-packages/mmdeploy_python/` in your current Python environment.

```shell
# go to ./sdk/example/python
# Please pass the folder of the model, not the model file
python pose_tracker.py cpu {det work-dir} {pose work-dir} {your_video.mp4}
```
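
Beyond the bundled `pose_tracker.py` demo, the SDK classes can also be scripted directly. Below is a minimal sketch that assumes the `PoseDetector` class from `mmdeploy_python` with `model_path` / `device_name` / `device_id` arguments; binding and argument names can differ across mmdeploy versions, so treat the paths and return shape as illustrative and check the demos in `sdk/example/python` for the exact API.

```python
# Minimal sketch (assumed mmdeploy_python API): run one image through an
# exported RTMPose SDK model. Verify class and argument names against the
# demos in sdk/example/python for your mmdeploy version.
import cv2
from mmdeploy_python import PoseDetector

img = cv2.imread('your_img.jpg')          # placeholder image path
pose = PoseDetector(
    model_path='{pose work-dir}',         # pass the folder of the model, not the model file
    device_name='cpu',
    device_id=0)
keypoints = pose(img)                     # per-person keypoints with confidence scores
print(keypoints.shape)
```

For multi-person images, a top-down pipeline would normally feed detection boxes first (as `det_pose` and `pose_tracker.py` do); this sketch only illustrates the call shape.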

@@ -363,7 +377,11 @@ example\cpp\build\Release

### MMPose demo scripts

MMPose provides demo scripts to conduct [inference with existing models](https://mmpose.readthedocs.io/en/1.x/user_guides/inference.html).
MMPose provides demo scripts to conduct [inference with existing models](https://mmpose.readthedocs.io/en/latest/user_guides/inference.html).

**Note:**

- Inference with PyTorch cannot reach the maximum speed of RTMPose; use it only to verify model accuracy.

```shell
# go to the mmpose folder
cd ${PATH_TO_MMPOSE}
```
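
For quick checks from Python rather than the command line, MMPose 1.x also ships a high-level inferencer. The sketch below assumes the `MMPoseInferencer` API from `mmpose.apis` and the `'human'` model alias; like the demo scripts, it runs plain PyTorch and is meant for verifying results, not for measuring RTMPose's deployed speed.

```python
# Minimal sketch (assumed MMPose 1.x API): PyTorch-based inference for a quick
# accuracy check. Not representative of RTMPose's deployed (ONNX/TensorRT) speed.
from mmpose.apis import MMPoseInferencer

inferencer = MMPoseInferencer('human')              # 'human' is an assumed model alias
results = inferencer('your_img.jpg', show=False)    # returns a generator over inputs
result = next(results)
print(result['predictions'])                        # predicted keypoints and scores per person
```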
13 changes: 13 additions & 0 deletions projects/rtmpose/README_CN.md
@@ -262,9 +262,11 @@ bash opencv.sh
bash build.sh

# Inference for an image
# Please pass the folder of the model, not the model file
./bin/det_pose {det work-dir} {pose work-dir} {your_img.jpg} --device cpu

# Inference for a video
# Please pass the folder of the model, not the model file
./bin/pose_tracker {det work-dir} {pose work-dir} {your_video.mp4} --device cpu
```

@@ -290,9 +292,11 @@ bash opencv.sh
bash build.sh

# Inference for an image
# Please pass the folder of the model, not the model file
./bin/det_pose {det work-dir} {pose work-dir} {your_img.jpg} --device cuda

# Inference for a video
# Please pass the folder of the model, not the model file
./bin/pose_tracker {det work-dir} {pose work-dir} {your_video.mp4} --device cuda
```

@@ -313,8 +317,13 @@ pip install {file_name}.whl
4. Download the [SDK models](https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmpose-cpu.zip) and unzip them.
5. Run inference with `pose_tracker.py`:

**Note:**

- If you encounter `ImportError: DLL load failed while importing mmdeploy_python`, copy `thirdparty/onnxruntime/lib/onnxruntime.dll` to `site-packages/mmdeploy_python/` in your current Python environment.

```shell
# go to ./sdk/example/python
# Please pass the folder of the model, not the model file
python pose_tracker.py cpu {det work-dir} {pose work-dir} {your_video.mp4}
```

@@ -363,6 +372,10 @@ example\cpp\build\Release

MMPose provides demo scripts to quickly run [model inference](https://mmpose.readthedocs.io/en/latest/user_guides/inference.html) with PyTorch and verify results.

**Note:**

- Inference with PyTorch cannot reach the real inference speed of RTMPose; it is only for verifying model accuracy.

```shell
# go to the mmpose folder
cd ${PATH_TO_MMPOSE}
```
