
Synthesize large-scale trainable data for the micro-expression recognition task


liuyvchi/MiE-X


MiE-X

Introduction

This is a PyTorch implementation of MiE-X. MiE-X is a large-scale synthetic dataset that improves data-driven micro-expression recognition methods. MiE-X introduces three types of effective Action Units (AUs) that constitute trainable micro-expressions. This repository provides code for acquiring these AUs and for using them to build MiE-X.

Related material: Paper, Poster, 5min Intro.

Dependencies

MiE-X uses the same libraries as GANimation

  • python 3.7+
  • pytorch 1.6+ & torchvision
  • numpy
  • matplotlib
  • tqdm
  • dlib
  • face_recognition
  • opencv-contrib-python
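Assuming a pip-based environment, the list above can be installed in one step (package names are taken from the list; exact pinned versions are not specified by the repo):

```shell
# Install the dependencies listed above. Note: dlib (and hence
# face_recognition) needs a C++ toolchain and CMake to build.
pip install "torch>=1.6" torchvision numpy matplotlib tqdm dlib \
    face_recognition opencv-contrib-python
```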

Datasets

Download links on Baidu Netdisk are available; Google Drive links are coming soon.

Variant   MiE-X (AU_mie)   MiE-X (AU_mae)   MiE-X (AU_exp)
Access    Baidu, Google    Baidu, Google    Baidu

Usage

Go to the ganimation_replicate directory:

cd ganimation_replicate

Extract AUs by the OpenFace toolkit

This work uses OpenFace to extract Action Units from real-world expressions. If you would like to extract AUs yourself, please follow the official OpenFace repo. We have already extracted AUs from MEGC, MMEW, and Oulu and placed them in the MER/Data folder.

Prepare AUs pools for simulation

For example, to prepare the pool for the AUs extracted from the MMEW dataset:

python prepare_AUMMEW_pool.py
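The pool-building step can be sketched roughly as follows. This is a hypothetical illustration, not the repo's actual script: the column names follow OpenFace's CSV convention, but `build_au_pool` and the chosen AU subset are assumptions.

```python
import csv
import io

# Illustrative subset of OpenFace's AU intensity columns (AUxx_r).
AU_COLUMNS = ["AU01_r", "AU02_r", "AU04_r", "AU06_r", "AU12_r"]

def build_au_pool(csv_text):
    """Collect per-frame AU intensity vectors into a pool for sampling."""
    pool = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        # OpenFace CSV headers often carry leading spaces; strip them.
        row = {k.strip(): v for k, v in row.items()}
        pool.append([float(row[c]) for c in AU_COLUMNS])
    return pool

# Tiny made-up sample in OpenFace's output format.
sample = (
    "frame, AU01_r, AU02_r, AU04_r, AU06_r, AU12_r\n"
    "1, 0.10, 0.00, 0.35, 0.00, 0.20\n"
    "2, 0.05, 0.02, 0.30, 0.10, 0.25\n"
)
pool = build_au_pool(sample)
print(len(pool), pool[0])
```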

Simulate MiEs

Simulate image-based micro-expressions:

python simulate.py --mode test --data_root datasets/celebA --gpu_ids 2,3 --ckpt_dir ckpts/emotionNet/ganimation/190327_160828/ --load_epoch 30

Preliminary: download your face source dataset and pass its path to --data_root. The pretrained GANimation model should be placed in the directory given by --ckpt_dir. You can train your own GANimation model by following the official GANimation repo, or directly download the pretrained model from the third-party implementation.
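Conceptually, each synthesized image is driven by an AU vector. The sketch below samples a low-intensity vector, reflecting the subtlety of micro-expressions; the function name, thresholds, and activation probability are illustrative assumptions, not the paper's actual sampling rules (the 17-dimensional vector matches GANimation's conditioning).

```python
import random

def sample_micro_au(n_aus=17, max_intensity=0.3, active_prob=0.2, rng=random):
    """Return an AU vector with a few weakly activated units (assumed scheme)."""
    return [
        # Activate each AU with small probability, at low intensity.
        rng.uniform(0.05, max_intensity) if rng.random() < active_prob else 0.0
        for _ in range(n_aus)
    ]

au = sample_micro_au()
print(au)
```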

Simulate video-based micro-expressions:

python simulate_video_AUexp.py --mode test --data_root datasets/celebA --gpu_ids 2,3 --ckpt_dir ckpts/emotionNet/ganimation/190327_160828/ --load_epoch 30
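A video micro-expression can be viewed as an AU trajectory that rises from a neutral onset to an apex and falls back to offset. The sketch below uses simple linear interpolation as an assumed illustration; the repo's script may shape the trajectory differently.

```python
def au_trajectory(apex_au, n_frames=9):
    """Interpolate neutral -> apex -> neutral over n_frames (assumed scheme)."""
    half = n_frames // 2
    frames = []
    for t in range(n_frames):
        # Weight rises linearly to 1.0 at the apex frame, then falls back.
        w = t / half if t <= half else (n_frames - 1 - t) / half
        frames.append([w * a for a in apex_au])
    return frames

traj = au_trajectory([0.3, 0.0, 0.2], n_frames=5)
print(traj[0], traj[2], traj[4])
```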

Train MiE classifiers

Go to the MER directory:

cd MER

Train on MiE-X and fine-tune on real-world data

bash run_train_fold_fineT.sh
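The pretrain-then-fine-tune recipe can be illustrated on a toy 1-D model: fit on abundant synthetic data first, then continue on scarce real data with a lower learning rate. This is a hypothetical sketch of the idea only; the actual script trains a PyTorch classifier.

```python
def sgd_fit(w, data, lr, epochs):
    """Minimize squared error of y = w * x with plain per-sample SGD."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

synthetic = [(x, 2.0 * x) for x in range(1, 6)]  # abundant, slightly off-target
real = [(1.0, 2.1), (2.0, 4.2)]                  # scarce, true target (y = 2.1x)

w = sgd_fit(0.0, synthetic, lr=0.01, epochs=20)  # pretrain on synthetic data
w = sgd_fit(w, real, lr=0.001, epochs=20)        # fine-tune on real data, lower lr
print(round(w, 2))
```

Pretraining brings the weight close to the synthetic target; fine-tuning then nudges it toward the real one without discarding what was learned.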

If you find this code useful, please kindly cite:

@inproceedings{liu2021synthesize,
  title={How to Synthesize a Large-Scale and Trainable Micro-Expression Dataset?},
  author={Liu, Yuchi and Wang, Zhongdao and Gedeon, Tom and Zheng, Liang},
  booktitle={ECCV},
  year={2022}
}
