diff --git a/configs/body/3d_kpt_sview_rgb_img/pose_lift/mpi_inf_3dhp/simplebaseline3d_mpi-inf-3dhp.md b/configs/body/3d_kpt_sview_rgb_img/pose_lift/mpi_inf_3dhp/simplebaseline3d_mpi-inf-3dhp.md
index 635d4e1399..c520c51e7e 100644
--- a/configs/body/3d_kpt_sview_rgb_img/pose_lift/mpi_inf_3dhp/simplebaseline3d_mpi-inf-3dhp.md
+++ b/configs/body/3d_kpt_sview_rgb_img/pose_lift/mpi_inf_3dhp/simplebaseline3d_mpi-inf-3dhp.md
@@ -37,6 +37,6 @@ Results on MPI-INF-3DHP dataset with ground truth 2D detections
| Arch | MPJPE | P-MPJPE | 3DPCK | 3DAUC | ckpt | log |
| :---------------------------------------------------------- | :---: | :-----: | :---: | :---: | :----------------------------------------------------------: | :---------------------------------------------------------: |
-| [simple_baseline_3d_tcn1](configs/body/3d_kpt_sview_rgb_img/pose_lift/mpi_inf_3dhp/simplebaseline3d_mpi-inf-3dhp.py) | 84.3 | 53.2 | 85.0 | 52.0 | [ckpt](https://download.openmmlab.com/mmpose/body3d/simple_baseline/simplebaseline3d_mpi-inf-3dhp-b75546f6_20210603.pth) | [log](https://download.openmmlab.com/mmpose/body3d/simplebaseline3d/simplebaseline3d_mpi-inf-3dhp_20210603.log.json) |
+| [simple_baseline_3d_tcn1](/configs/body/3d_kpt_sview_rgb_img/pose_lift/mpi_inf_3dhp/simplebaseline3d_mpi-inf-3dhp.py) | 84.3 | 53.2 | 85.0 | 52.0 | [ckpt](https://download.openmmlab.com/mmpose/body3d/simple_baseline/simplebaseline3d_mpi-inf-3dhp-b75546f6_20210603.pth) | [log](https://download.openmmlab.com/mmpose/body3d/simplebaseline3d/simplebaseline3d_mpi-inf-3dhp_20210603.log.json) |
1 Differing from the original paper, we didn't apply the `max-norm constraint`, as we found this led to better convergence and performance.
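
For context on the footnote above: the `max-norm constraint` refers to the weight-norm clipping used in the original SimpleBaseline3D paper, which this config deliberately omits. The snippet below is only an illustrative PyTorch sketch of that constraint (not mmpose code; the function name and the `max_norm=1.0` default are assumptions), showing what is being left out:

```python
# Illustrative sketch of a max-norm weight constraint (the technique the footnote
# says was dropped). After each optimizer step, every nn.Linear weight matrix is
# renormalized so its L2 norm does not exceed `max_norm`.
import torch


def apply_max_norm_(module: torch.nn.Module, max_norm: float = 1.0) -> None:
    """Clamp the L2 norm of every nn.Linear weight in-place."""
    with torch.no_grad():
        for m in module.modules():
            if isinstance(m, torch.nn.Linear):
                norm = m.weight.norm()
                if norm > max_norm:
                    m.weight.mul_(max_norm / norm)


# Hypothetical usage inside a training loop:
#   loss.backward()
#   optimizer.step()
#   apply_max_norm_(model, max_norm=1.0)
```
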
diff --git a/docs/en/tasks/3d_body_mesh.md b/docs/en/tasks/3d_body_mesh.md
index aced63c802..5f12ddf789 100644
--- a/docs/en/tasks/3d_body_mesh.md
+++ b/docs/en/tasks/3d_body_mesh.md
@@ -52,6 +52,11 @@ mmpose
### SMPL Model
+
+
+<details>
+<summary>SMPL (TOG'2015)</summary>
+
```bibtex
@article{loper2015smpl,
title={SMPL: A skinned multi-person linear model},
@@ -65,6 +70,8 @@ mmpose
}
```
+
+</details>
For human mesh estimation, the SMPL model is used to generate the human mesh.
Please download the [gender neutral SMPL model](http://smplify.is.tue.mpg.de/),
[joints regressor](https://download.openmmlab.com/mmpose/datasets/joints_regressor_cmr.npy)
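
As a rough illustration of how the downloaded files fit together, the sketch below builds a mesh with the gender neutral SMPL model and applies the joints regressor to recover 3D joints. It is only a sketch: it assumes the third-party `smplx` package, that the regressor is a `(K, 6890)` vertex-to-joint matrix, and placeholder file paths; mmpose's own mesh pipeline wraps these steps internally.

```python
# Minimal sketch: mean-shape SMPL mesh plus regressed 3D joints.
# Assumptions: `smplx` is installed, SMPL_NEUTRAL.pkl lives under path/to/smpl/,
# and joints_regressor_cmr.npy is a (K, 6890) matrix mapping vertices to joints.
import numpy as np
import torch
import smplx

smpl = smplx.SMPL(model_path='path/to/smpl', gender='neutral')
regressor = torch.from_numpy(
    np.load('path/to/joints_regressor_cmr.npy')).float()  # (K, 6890)

# Zero pose and zero shape give the template ("T-pose") mesh.
output = smpl(betas=torch.zeros(1, 10),
              body_pose=torch.zeros(1, 69),
              global_orient=torch.zeros(1, 3))
vertices = output.vertices                                   # (1, 6890, 3)
joints = torch.einsum('kv,bvc->bkc', regressor, vertices)    # (1, K, 3)
```
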
@@ -180,18 +187,23 @@ extract the images by themselves.
+<details>
+<summary>MPI-INF-3DHP (3DV'2017)</summary>
+
```bibtex
@inproceedings{mono-3dhp2017,
- author = {Mehta, Dushyant and Rhodin, Helge and Casas, Dan and Fua, Pascal and Sotnychenko, Oleksandr and Xu, Weipeng and Theobalt, Christian},
- title = {Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision},
- booktitle = {3D Vision (3DV), 2017 Fifth International Conference on},
- url = {http://gvv.mpi-inf.mpg.de/3dhp_dataset},
- year = {2017},
- organization={IEEE},
- doi={10.1109/3dv.2017.00064},
+ author = {Mehta, Dushyant and Rhodin, Helge and Casas, Dan and Fua, Pascal and Sotnychenko, Oleksandr and Xu, Weipeng and Theobalt, Christian},
+ title = {Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision},
+ booktitle = {3D Vision (3DV), 2017 Fifth International Conference on},
+ url = {http://gvv.mpi-inf.mpg.de/3dhp_dataset},
+ year = {2017},
+ organization={IEEE},
+ doi={10.1109/3dv.2017.00064},
}
```
+
+</details>
For [MPI-INF-3DHP](http://gvv.mpi-inf.mpg.de/3dhp-dataset/), please follow the
[preprocess procedure](https://github.com/nkolot/SPIN/tree/master/datasets/preprocess)
of SPIN to sample images, and make them like this:
@@ -241,6 +253,9 @@ mmpose
+<details>
+<summary>LSP (BMVC'2010)</summary>
+
```bibtex
@inproceedings{johnson2010clustered,
title={Clustered Pose and Nonlinear Appearance Models for Human Pose Estimation.},
@@ -254,6 +269,8 @@ mmpose
}
```
+
+</details>
For [LSP](https://sam.johnson.io/research/lsp.html), please download the high resolution version
[LSP dataset original](http://sam.johnson.io/research/lsp_dataset_original.zip).
Extract them under `$MMPOSE/data`, and make them look like this:
@@ -277,6 +294,9 @@ mmpose
+<details>
+<summary>LSPET (CVPR'2011)</summary>
+
```bibtex
@inproceedings{johnson2011learning,
title={Learning effective human pose estimation from inaccurate annotation},
@@ -288,6 +308,8 @@ mmpose
}
```
+
+</details>
For [LSPET](https://sam.johnson.io/research/lspet.html), please download its high resolution form
[HR-LSPET](http://datasets.d2.mpi-inf.mpg.de/hr-lspet/hr-lspet.zip).
Extract them under `$MMPOSE/data`, and make them look like this:
@@ -313,6 +335,9 @@ mmpose
+<details>
+<summary>CMU MoShed (CVPR'2018)</summary>
+
```bibtex
@inproceedings{kanazawa2018end,
title={End-to-end recovery of human shape and pose},
@@ -323,6 +348,8 @@ mmpose
}
```
+
+</details>
Real-world SMPL parameters are used for the adversarial training in human mesh estimation.
The MoShed data provided in [HMR](https://github.com/akanazawa/hmr) is included in this
[zip file](https://download.openmmlab.com/mmpose/datasets/mesh_annotation_files.zip).
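
To make the role of these MoShed parameters concrete, here is a minimal, HMR-style sketch of the adversarial prior: a small discriminator learns to separate real (MoShed) pose and shape parameters from the mesh regressor's predictions. Everything below — the network size, the least-squares GAN losses, and the 72+10 parameter split — is an illustrative assumption, not mmpose's actual implementation.

```python
# Illustrative HMR-style adversarial prior on SMPL parameters.
import torch
import torch.nn as nn

# Tiny stand-in discriminator over concatenated (72 pose + 10 shape) parameters.
disc = nn.Sequential(
    nn.Linear(72 + 10, 256), nn.ReLU(),
    nn.Linear(256, 1))


def adversarial_losses(real_params: torch.Tensor, fake_params: torch.Tensor):
    """Least-squares GAN losses: real params come from the MoShed data,
    fake params come from the mesh regressor."""
    d_real = disc(real_params)
    d_fake = disc(fake_params.detach())
    d_loss = ((d_real - 1) ** 2).mean() + (d_fake ** 2).mean()
    g_loss = ((disc(fake_params) - 1) ** 2).mean()
    return d_loss, g_loss


# Random placeholders standing in for a real training batch:
real = torch.randn(8, 82)   # sampled from the MoShed annotations
fake = torch.randn(8, 82)   # predicted by the mesh regressor
d_loss, g_loss = adversarial_losses(real, fake)
```
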