Authors: Aleksis Pirinen*, Erik Gärtner*, and Cristian Sminchisescu (* denotes joint first authorship).
Official implementation of the AAAI 2020 paper Deep Reinforcement Learning for Active Human Pose Estimation. This repository contains code for reproducing the Pose-DRL and baseline results reported in the paper, as well as for training Pose-DRL on Panoptic. The paper is also available on arXiv. A video overview of the paper is available here, with step-by-step visualizations here and here.
Pose-DRL is implemented in Caffe. The experiments are performed in the CMU Panoptic multi-camera framework. The Pose-DRL model in this repository uses MubyNet as the underlying 3D human pose estimator.
If you find this implementation and/or our paper interesting or helpful, please consider citing:
```
@inproceedings{gartner2020activehpe,
  title={Deep Reinforcement Learning for Active Human Pose Estimation},
  author={G\"{a}rtner, Erik and Pirinen, Aleksis and Sminchisescu, Cristian},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2020}
}
```
- Clone the repository.
- Read the following documentation on how to set up our system, assuming you are using pre-computed feature maps and pose predictions from MubyNet (see the next item for how to pre-compute these). It covers the prerequisites and how to install our framework.
- See this dataset documentation for how to download and preprocess the Panoptic data, pre-compute MubyNet deep features and pose estimates, and train/download instance features for matching.
Pretrained model weights for Pose-DRL can be downloaded here.
To train the model, run:

```
run_train_agent('train')
```

The results and weights will be stored in the location given by `CONFIG.output_dir`.
Given the model weights (either the provided weights or your own):

- Set flag `CONFIG.evaluation_mode = 'test';`
- Set flag `CONFIG.agent_random_init = 0;`
- Set flag `CONFIG.agent_weights = '<your-weights-path>';`
- Set flag `CONFIG.training_agent_nbr_eps = 1;` (note that this will not update the weights, since they are only updated every 40 episodes)
- Run `run_train_agent('train');` The results will be stored in the location given by `CONFIG.output_dir`.
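For convenience, the evaluation settings above can be collected into a single MATLAB snippet. This is a sketch based on the flags listed in this README; it assumes the `CONFIG` struct used by the training scripts is in scope, and the weights path is a placeholder you must fill in yourself:

```matlab
% Evaluation configuration for Pose-DRL (sketch; flag names taken from this README).
CONFIG.evaluation_mode = 'test';               % evaluate on the test split
CONFIG.agent_random_init = 0;                  % disable random agent initialization
CONFIG.agent_weights = '<your-weights-path>';  % placeholder: path to trained weights
CONFIG.training_agent_nbr_eps = 1;             % weights are not updated (updates occur every 40 eps)

run_train_agent('train');                      % results are written to CONFIG.output_dir
```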
This work was supported by the European Research Council Consolidator grant SEED, CNCS-UEFISCDI PN-III-P4-ID-PCE-2016-0535, the EU Horizon 2020 Grant DE-ENIGMA, SSF, as well as the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Finally, we would like to thank Alin Popa, Andrei Zanfir, Mihai Zanfir and Elisabeta Oneata for helpful discussions and support.