This is a PyTorch implementation of MiE-X, a large-scale synthetic dataset that improves data-driven micro-expression methods. MiE-X introduces three types of effective Action Units (AUs) that constitute trainable micro-expressions. This repository provides the implementation for acquiring these AUs and using them to build MiE-X.
Related material: Paper, Poster, 5min Intro.
MiE-X uses the same libraries as GANimation:
- python 3.7+
- pytorch 1.6+ & torchvision
- numpy
- matplotlib
- tqdm
- dlib
- face_recognition
- opencv-contrib-python
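A quick way to verify the environment is a small check that reports which of the dependencies above are missing (note that `opencv-contrib-python` is imported as `cv2`); this is an illustrative sketch, not a script shipped with the repository:

```python
# Hypothetical environment check for the MiE-X dependencies listed above.
# It only probes whether each package is importable; nothing is imported.
import importlib.util

deps = ["torch", "torchvision", "numpy", "matplotlib",
        "tqdm", "dlib", "face_recognition", "cv2"]
missing = [d for d in deps if importlib.util.find_spec(d) is None]
print("missing:", missing or "none")
```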
Download links on Baidu Netdisk are available. Google Drive links are coming soon.
| Variant | MiE-X (AU_mie) | MiE-X (AU_mae) | MiE-X (AU_exp) |
| --- | --- | --- | --- |
| Access | Baidu, Google | Baidu, Google | Baidu |
Go to the `ganimation_replicate` directory:

```shell
cd ganimation_replicate
```
This work uses OpenFace to extract Action Units from real-world expressions. If you would like to extract AUs yourself, please follow the official OpenFace repo. We have extracted AUs from MEGC, MMEW, and Oulu and placed them in the MER/Data folder.
E.g., prepare the AU pool from the AUs extracted from the MMEW dataset:

```shell
python prepare_AUMMEW_pool.py
```
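Conceptually, an AU pool is just a collection of AU intensity vectors gathered from a real dataset, from which vectors are later sampled to condition the generator. The sketch below is a hypothetical illustration (the array shapes and the `sample_au` helper are assumptions, not the repository's code); OpenFace reports 17 AU intensity columns with values roughly in [0, 5]:

```python
# Hypothetical sketch of an AU pool: stacked AU intensity vectors,
# from which one row is sampled to drive expression synthesis.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for intensities read from OpenFace CSVs (17 AUs, values in [0, 5]).
pool = rng.uniform(0.0, 5.0, size=(100, 17))

def sample_au(pool, rng):
    """Draw one AU intensity vector uniformly from the pool."""
    return pool[rng.integers(len(pool))]

au = sample_au(pool, rng)
print(au.shape)  # (17,)
```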
Simulate image-based micro-expressions:

```shell
python simulate.py --mode test --data_root datasets/celebA --gpu_ids 2,3 --ckpt_dir ckpts/emotionNet/ganimation/190327_160828/ --load_epoch 30
```
Preliminary: you need to download a face source dataset and pass its path to `--data_root`. The pretrained GANimation model should be placed in the directory given by `--ckpt_dir`. You can train your own GANimation model by following the official GANimation repo, or directly download the pretrained model from the third-party implementation.
Simulate video-based micro-expressions:

```shell
python simulate_video_AUexp.py --mode test --data_root datasets/celebA --gpu_ids 2,3 --ckpt_dir ckpts/emotionNet/ganimation/190327_160828/ --load_epoch 30
```
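A video-based micro-expression follows an onset-apex-offset profile: AU intensities ramp up from neutral to an apex and back. The sketch below is a hypothetical illustration of that idea using simple linear interpolation (the `au_sequence` helper and frame count are assumptions, not the repository's implementation):

```python
# Hypothetical sketch: drive a clip by interpolating AU intensities
# from neutral to an apex vector and back (onset / apex / offset),
# yielding one AU vector per frame for the generator.
import numpy as np

def au_sequence(apex_au, n_frames=9):
    """Neutral -> apex -> neutral AU trajectory (n_frames should be odd)."""
    half = n_frames // 2
    up = np.linspace(0.0, 1.0, half + 1)   # onset ramp, includes apex
    down = up[::-1][1:]                    # offset ramp, excludes apex
    weights = np.concatenate([up, down])   # length n_frames
    return weights[:, None] * apex_au[None, :]

seq = au_sequence(np.ones(17), n_frames=9)
print(seq.shape)  # (9, 17)
```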
Go to the `MER` directory and run training:

```shell
cd MER
bash run_train_fold_fineT.sh
```
If you find this code useful, please kindly cite:

```bibtex
@inproceedings{liu2021synthesize,
  title={How to Synthesize a Large-Scale and Trainable Micro-Expression Dataset?},
  author={Liu, Yuchi and Wang, Zhongdao and Gedeon, Tom and Zheng, Liang},
  booktitle={ECCV},
  year={2022}
}
```