Xinxin Zuo, Sen Wang, Qiang Sun, Minglun Gong, Li Cheng
We have tested the code on Ubuntu 18.04/20.04 with CUDA 10.2.
First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.
You can create a conda environment called fit3d using:
conda env create -f environment.yaml
conda activate fit3d
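To quickly confirm that the environment works, here is a minimal sanity-check sketch (it assumes environment.yaml installs PyTorch; the script itself is not part of this repository):

```python
# sanity_check.py -- minimal sketch to confirm PyTorch and CUDA are usable
# (assumes environment.yaml installs PyTorch; not part of this repository)
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```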
Download SMPL Female and Male and SMPL Neutral, rename the files, and extract them to <current directory>/smpl_models/smpl/. Eventually, the <current directory>/smpl_models folder should have the following structure:
smpl_models
└-- smpl
    ├-- SMPL_FEMALE.pkl
    ├-- SMPL_MALE.pkl
    └-- SMPL_NEUTRAL.pkl
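To double-check that the model files are in the right place, the sketch below loads each model with the smplx package (the SMPL layer credited in the acknowledgements comes from SMPL-X; whether the smplx package is installed by environment.yaml is an assumption):

```python
# check_smpl.py -- minimal sketch to verify the SMPL model files load correctly
# (assumes the smplx package is available; paths follow the tree shown above)
import smplx

for gender in ("female", "male", "neutral"):
    model = smplx.create(
        "./smpl_models",      # root folder containing smpl/SMPL_*.pkl
        model_type="smpl",
        gender=gender,
    )
    print(gender, "->", type(model).__name__)
```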
- Download the two pretrained weights (point cloud and depth) from: Point Cloud and Depth
- Put the downloaded weights in <current directory>/pretrained/ (a quick sketch for inspecting a downloaded checkpoint follows)
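A minimal sketch for peeking inside a downloaded checkpoint with PyTorch (the file name used below is hypothetical; substitute whichever weight you actually downloaded):

```python
# inspect_ckpt.py -- minimal sketch to peek inside a downloaded checkpoint
# (the file name is hypothetical; point it at whichever weight you downloaded)
import torch

ckpt = torch.load("./pretrained/model_pt.pth", map_location="cpu")
if isinstance(ckpt, dict):
    # Print the first few top-level keys to see how the checkpoint is organized.
    for key in list(ckpt.keys())[:10]:
        print(key)
else:
    print(type(ckpt))
```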
python generate_pt.py --filename ./demo/demo_pt/00010805.ply --gender female
python generate_depth.py --filename ./demo/demo_depth/shortshort_flying_eagle.000075_depth.ply --gender male
The head of the input point cloud or depth map should point along the Y-axis. Rotation in the X-Z plane is acceptable.
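If your own scan is not oriented this way, the sketch below rotates a Z-up point cloud to Y-up (it assumes open3d is available, which is not guaranteed by environment.yaml; the paths and the 90° rotation are examples only):

```python
# reorient_scan.py -- minimal sketch to rotate a Z-up point cloud to Y-up
# (assumes open3d is installed; input/output paths and rotation are examples)
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("./demo/demo_pt/00010805.ply")
print("points:", np.asarray(pcd.points).shape[0])

# Rotate -90 degrees about the X-axis so a head originally along +Z
# ends up along +Y, which is what the demo scripts expect.
R = o3d.geometry.get_rotation_matrix_from_xyz((-np.pi / 2, 0.0, 0.0))
pcd.rotate(R, center=(0.0, 0.0, 0.0))
o3d.io.write_point_cloud("./demo/demo_pt/00010805_yup.ply", pcd)
```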
If you find this project useful for your research, please consider citing:
@article{zuo2021selfsupervised,
  title={Self-supervised 3D Human Mesh Recovery from Noisy Point Clouds},
  author={Zuo, Xinxin and Wang, Sen and Sun, Qiang and Gong, Minglun and Cheng, Li},
  journal={arXiv preprint arXiv:2107.07539},
  year={2021}
}
We indicate inside each file if a function or script is borrowed from an external source. Here are some great resources we benefit from:
- Shape/Pose prior and some functions are borrowed from VIBE.
- SMPL models and the SMPL layer are from the SMPL-X model.
- Some functions are borrowed from HMR-pytorch.
- Some functions are borrowed from pointnet-pytorch.
- CAPE dataset for training on CAPE.
- CMU Panoptic Studio dataset for training on CMU Panoptic.
- SURREAL dataset for training on SURREAL.