This is a PyTorch implementation of the paper:
Zengqun Zhao, Yu Cao, Shaogang Gong, and Ioannis Patras. "Enhancing Zero-Shot Facial Expression Recognition by LLM Knowledge Transfer", IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2025.
Current facial expression recognition (FER) models are often designed in a supervised learning manner and are thus constrained by the lack of large-scale facial expression images with high-quality annotations. Consequently, these models often fail to generalize well, performing poorly on unseen images at inference time. Vision-language-based zero-shot models demonstrate promising potential for addressing such challenges. However, these models lack task-specific knowledge and are therefore not optimized for the nuances of recognizing facial expressions. To bridge this gap, this work proposes a novel method, Exp-CLIP, to enhance zero-shot FER by transferring task knowledge from large language models (LLMs). Specifically, on top of the pre-trained vision-language encoders, we incorporate a projection head designed to map the initial joint vision-language space into a space that captures representations of facial actions. To train this projection head for subsequent zero-shot predictions, we propose to align the projected visual representations with task-specific semantic meanings derived from the LLM encoder, and a text instruction-based strategy is employed to customize the LLM knowledge. Given unlabelled facial data and efficient training of the projection head, Exp-CLIP achieves superior zero-shot results to the CLIP models and several other large vision-language models (LVLMs) on seven in-the-wild FER datasets.
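To make the core idea concrete, the following is a minimal sketch (not the authors' exact code) of a projection head on top of frozen CLIP features, trained to align with LLM-derived text embeddings of the same unlabelled samples. The encoder names, dimensions, and the contrastive alignment loss are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Maps the joint vision-language space to a facial-action-aware space."""
    def __init__(self, in_dim=512, out_dim=512):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)

def alignment_loss(img_emb, llm_emb, temperature=0.07):
    """Symmetric InfoNCE-style alignment between projected image features and
    LLM-derived text features of the same (unlabelled) samples."""
    logits = img_emb @ llm_emb.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Training step (encoders frozen, only the head is updated):
# img_feat = clip_image_encoder(images).detach()              # frozen CLIP image features
# llm_feat = llm_text_encoder(instructed_captions).detach()   # frozen LLM text features
# loss = alignment_loss(head(img_feat), F.normalize(llm_feat, dim=-1))
```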
1. Install the required packages: pip install -r requirements.txt
2. Set the training data path in train.py: train_data_file_path = 'change to yours'
3. Set the dataset paths in test.py: DATASET_PATH_MAPPING = {change to yours} (a hedged example of the settings in steps 2 and 3 is given after this list)
4. Prepare the txt annotation files in ./annotation/: please check the preprocessing instructions.
5. Run training and evaluation: sh runner.sh
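For reference, the path settings in steps 2 and 3 might look like the following; all file locations and dataset names here are hypothetical examples and should be replaced with your own.

```python
# Hypothetical examples of the path settings (replace with your own locations).

# In train.py: path to the unlabelled training list used to fit the projection head.
train_data_file_path = '/data/face/unlabelled_train_list.txt'   # example path only

# In test.py: mapping from dataset name to its image root for the FER benchmarks.
DATASET_PATH_MAPPING = {
    'RAF-DB':    '/data/face/RAF-DB/aligned',    # example path only
    'AffectNet': '/data/face/AffectNet/val',     # example path only
    # ... add the remaining benchmark datasets you evaluate on
}
```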
The pre-trained projection heads are available in ./checkpoint
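A rough sketch of how such a checkpoint could be loaded for zero-shot inference is shown below; the file name, head dimensions, and state-dict keys are assumptions, not the repository's exact API.

```python
import torch
import torch.nn as nn

# A linear head matching the sketch above; dimensions and checkpoint name are assumptions.
head = nn.Linear(512, 512, bias=False)
state = torch.load('./checkpoint/projection_head.pth', map_location='cpu')
head.load_state_dict(state)   # assumes the checkpoint stores the head's state_dict
head.eval()

# Zero-shot prediction: project frozen CLIP image and prompt features, then pick the
# class whose projected prompt embedding is most similar to the image embedding.
# image_emb = head(clip_image_encoder(image))          # (1, D)
# text_emb  = head(clip_text_encoder(class_prompts))   # (C, D)
# pred = (image_emb @ text_emb.t()).argmax(dim=-1)
```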
UAR: Unweighted Average Recall (the mean of per-class recalls, so every class contributes equally regardless of how many instances it has); WAR: Weighted Average Recall (overall accuracy, i.e., recall weighted by the number of instances per class)
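As a quick reference, both metrics can be computed from predictions and labels as in the small sketch below (using scikit-learn; not tied to the repository's evaluation code).

```python
import numpy as np
from sklearn.metrics import recall_score, accuracy_score

y_true = np.array([0, 0, 1, 1, 1, 2])
y_pred = np.array([0, 1, 1, 1, 0, 2])

uar = recall_score(y_true, y_pred, average='macro')   # mean of per-class recalls
war = accuracy_score(y_true, y_pred)                  # overall accuracy
print(f'UAR: {uar:.4f}, WAR: {war:.4f}')
```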
If you find our work useful, please consider citing our paper:
@inproceedings{zhao2025enhancing,
title={Enhancing Zero-Shot Facial Expression Recognition by LLM Knowledge Transfer},
author={Zhao, Zengqun and Cao, Yu and Gong, Shaogang and Patras, Ioannis},
booktitle={IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
pages={1--10},
year={2025}
}