Xingxun Jiang, Yuan Zong, Wenming Zheng, Chuangao Tang, Wanchuang Xia, Cheng Lu, Jiateng Liu "DFEW: A Large-Scale Database for Recognizing Dynamic Facial Expressions in the Wild". ACM MM'20
[Paper / Chinese version] [Download DFEW] [PPT] [Poster] [Video]
- Python == 3.6.0
- PyTorch == 1.8.0
- Torchvision == 0.8.0
- Step 1: download the single-labeled samples of the DFEW dataset, and make sure they are organized as follows:
```
/data/jiangxingxun/Data/DFEW/data_affine/single_label/
├── data/
│   ├── 00001/
│   │   ├── 00001_00001.jpg
│   │   ├── ...
│   │   └── 00001_00144.jpg
│   └── 16372/
│       ├── 16372_00001.jpg
│       ├── ...
│       └── 16372_00039.jpg
└── label/
    ├── single_trainset_1.csv
    ├── ...
    ├── single_trainset_5.csv
    ├── single_testset_1.csv
    ├── ...
    └── single_testset_5.csv
```
[Note] Expression labels: 1=Happy, 2=Sad, 3=Neutral, 4=Angry, 5=Surprise, 6=Disgust, 7=Fear
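Given the layout above, the split CSVs can be paired with their frame folders in a few lines. This is only a sketch: the CSV column order (clip ID, then label) and the absence of a header row are assumptions, not the official schema — check the actual contents of `single_trainset_1.csv` before using it.

```python
import csv
import glob
import os

def load_split(root, csv_path):
    """Return a list of (sorted_frame_paths, label) pairs for one split.

    Assumes each CSV row is "clip_id,label" with no header row, and that
    frames for a clip live under <root>/data/<clip_id>/*.jpg as shown above.
    """
    samples = []
    with open(csv_path, newline="") as f:
        for clip_id, label in csv.reader(f):
            frame_dir = os.path.join(root, "data", clip_id)
            frames = sorted(glob.glob(os.path.join(frame_dir, "*.jpg")))
            samples.append((frames, int(label)))
    return samples
```

A `torch.utils.data.Dataset` wrapper would then read `frames` with PIL and stack them into a clip tensor.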
- Step 2: run `run.sh`
- We provide pretrained weights for each fold under the 5-fold cross-validation protocol.
- To achieve better metrics, we provide models trained on frames extracted directly from the videos, rather than on frames generated by the Time Interpolation Method (MATLAB code).
- We use two metrics to evaluate models: WAR and UAR. WAR (weighted average recall) is the overall accuracy; UAR (unweighted average recall) is the sum of per-class recalls divided by the number of classes, ignoring the number of instances per class.
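The two metrics above can be computed with plain NumPy; this sketch uses toy labels, with class IDs following the DFEW mapping (1–7):

```python
import numpy as np

def war_uar(y_true, y_pred, num_classes=7):
    """Compute WAR (overall accuracy) and UAR (mean per-class recall)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # WAR: recall weighted by class frequency, i.e. plain accuracy
    war = float(np.mean(y_true == y_pred))
    # UAR: average the recall of each class that occurs, equally weighted
    recalls = []
    for c in range(1, num_classes + 1):
        mask = y_true == c
        if mask.any():
            recalls.append(np.mean(y_pred[mask] == c))
    uar = float(np.mean(recalls))
    return war, uar
```

Because UAR weights every class equally, it penalizes models that ignore rare expressions (e.g. Disgust, Fear), which WAR alone would hide.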
- You can download the pretrained weights from Baidu Disk (access code: 8azt) or Google Drive.
model_name | ref | WAR(%) | UAR(%) | Note |
---|---|---|---|---|
r3d_18 | paper | 55.70 | 45.11 | - |
mc3_18 | paper | 57.02 | 46.50 | - |
i3d | paper | 59.24 | 47.61 | I3D-RGB |
@inproceedings{jiang2020dfew,
  title={{DFEW}: A large-scale database for recognizing dynamic facial expressions in the wild},
author={Jiang, Xingxun and Zong, Yuan and Zheng, Wenming and Tang, Chuangao and Xia, Wanchuang and Lu, Cheng and Liu, Jiateng},
booktitle={Proceedings of the 28th ACM International Conference on Multimedia},
pages={2881--2889},
year={2020}
}