This paper received the Best Paper Honorable Mention at Eurographics 2019.
If you find our work useful in your research, please consider citing:
@inproceedings{wang2019learning,
  title={Learning a Generative Model for Multi-Step Human-Object Interactions from Videos},
  author={Wang, He and Pirk, S{\"o}ren and Yumer, Ersin and Kim, Vladimir G and Sener, Ozan and Sridhar, Srinath and Guibas, Leonidas J},
  booktitle={Computer Graphics Forum},
  volume={38},
  number={2},
  pages={367--378},
  year={2019},
  organization={Wiley Online Library}
}
This is a TensorFlow implementation of the Action Plot RNN. The model generates action plots, i.e., sequences of multi-step human-object interactions (an illustrative sketch follows the list below).
The repository includes:
- Source code of the Action Plot RNN
- Training code
- Pre-trained weights
- Sampling code for generating action plots
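For orientation, here is a minimal, illustrative sketch of a token-level generative RNN in TensorFlow 1.x. It is not the repository's actual model: it assumes action plots are encoded as sequences of discrete tokens (action/object IDs), and all names and hyperparameters (`NUM_TOKENS`, `HIDDEN_SIZE`, etc.) are placeholders.

```python
import tensorflow as tf

NUM_TOKENS = 64    # assumed vocabulary size (action/object tokens)
HIDDEN_SIZE = 256  # assumed LSTM state size

# inputs[t] is the current token, targets[t] the next token in the plot.
inputs = tf.placeholder(tf.int32, [None, None])   # [batch, time]
targets = tf.placeholder(tf.int32, [None, None])  # [batch, time]

embedding = tf.get_variable("embedding", [NUM_TOKENS, HIDDEN_SIZE])
embedded = tf.nn.embedding_lookup(embedding, inputs)

# LSTM unrolled over the plot; each step predicts a distribution
# over the next token.
cell = tf.nn.rnn_cell.LSTMCell(HIDDEN_SIZE)
outputs, _ = tf.nn.dynamic_rnn(cell, embedded, dtype=tf.float32)
logits = tf.layers.dense(outputs, NUM_TOKENS)

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=targets, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```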
# Requirements
- Python 3.5
- TensorFlow 1.3.0
- tflearn
- pickle (the built-in module in Python 3; `cPickle` is the Python 2 name)
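Assuming pip is available, the dependencies can typically be installed with the command below (this TensorFlow version is old, so a matching Python/pip may be required):

pip3 install tensorflow==1.3.0 tflearn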
If you are interested in the interaction videos, you can download our dataset from https://drive.google.com/drive/folders/1vBazEJhfXeAZ06xR1T2QbmnxVbbRaE5S?usp=sharing.
# Train a new Action Plot model from scratch
python3 train.py
# Sampling action plots using a checkpoint
python3 sample.py --save_dir=/ckpts/ckpts_dir --obj_list="book phone bowl bottle cup orange"
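Conceptually, sampling restores the trained weights and draws one token at a time from the model's predicted distribution. The sketch below continues the illustrative model above; the checkpoint path matches the example command, but the sampling loop is a placeholder, not the repository's actual sample.py logic.

```python
import numpy as np

probs_op = tf.nn.softmax(logits)  # per-step distribution over next tokens

with tf.Session() as sess:
    saver = tf.train.Saver()
    saver.restore(sess, tf.train.latest_checkpoint("/ckpts/ckpts_dir"))

    plot = [0]           # assumed start-of-plot token ID
    for _ in range(20):  # assumed maximum plot length
        # Feed the partial plot; sample the next token from the
        # distribution at the last time step.
        p = sess.run(probs_op, {inputs: [plot]})[0, -1]
        p = p / p.sum()  # renormalize against float rounding
        plot.append(int(np.random.choice(NUM_TOKENS, p=p)))
```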