This repository contains the official implementation code of the paper Transformer-based Feature Reconstruction Network for Robust Multimodal Sentiment Analysis, accepted at ACMMM 2021.
Note: We strongly recommend that you browse the overall structure of our code first. If you have any questions, feel free to contact us.
In this framework, we support the following methods:
| Type | Model Name | From |
| --- | --- | --- |
| Baselines | TFN | Tensor-Fusion-Network |
| Baselines | MulT (without CTC) | Multimodal-Transformer |
| Baselines | MISA | MISA |
| Missing-Task | TFR-Net | TFR-Net |
- Clone this repo and install requirements.

  ```bash
  git clone https://github.com/Columbine21/TFR-Net.git
  cd TFR-Net
  ```
- Download datasets from the following links.

  - MOSI: download from CMU-MultimodalSDK
  - SIMS: download from Baidu Yun Disk [code: mfet] or Google Drive

  Note: Please download the new feature file `unaligned_39.pkl` from Baidu Yun Disk [code: mfet] or Google Drive, which is compatible with our new code structure. Its md5 checksum is `a5b2ed3844200c7fb3b8ddc750b77feb`.
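  As a minimal sketch, you can verify the download with the standard `md5sum` tool (use `md5` on macOS):

  ```bash
  # Expected output: a5b2ed3844200c7fb3b8ddc750b77feb
  md5sum unaligned_39.pkl
  ```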
- Download Bert-Base, Chinese from Google-Bert.
- Convert the TensorFlow checkpoint into a PyTorch model using transformers-cli (a sketch of the command follows).
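  A minimal sketch of the conversion, assuming the checkpoint was unpacked into `chinese_L-12_H-768_A-12/` (the directory name depends on your download; adjust the paths accordingly):

  ```bash
  transformers-cli convert --model_type bert \
      --tf_checkpoint chinese_L-12_H-768_A-12/bert_model.ckpt \
      --config chinese_L-12_H-768_A-12/bert_config.json \
      --pytorch_dump_output chinese_L-12_H-768_A-12/pytorch_model.bin
  ```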
- Install Python dependencies (see the command below).
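  Assuming the repository ships a `requirements.txt` at its root (adjust if your checkout names it differently):

  ```bash
  pip install -r requirements.txt
  ```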
- Organize features and save them as pickle files with the following structure.

  Note: `unaligned_39.pkl` is already compatible with this structure.
  ```python
  {
      "train": {
          "raw_text": [],
          "audio": [],
          "vision": [],
          "id": [],  # [video_id$_$clip_id, ..., ...]
          "text": [],
          "text_bert": [],
          "audio_lengths": [],
          "vision_lengths": [],
          "annotations": [],
          "classification_labels": [],  # Negative(< 0), Neutral(0), Positive(> 0)
          "regression_labels": []
      },
      "valid": {***},  # same as "train"
      "test": {***},   # same as "train"
  }
  ```
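  A minimal sketch of writing custom features into this layout (all array contents, feature dimensions, and the output file name below are placeholders, not values from this repo):

  ```python
  import pickle
  import numpy as np

  def make_split(n):
      """Build one split in the layout above; every value is a placeholder."""
      return {
          "raw_text": ["a placeholder sentence"] * n,
          "audio": np.zeros((n, 400, 33), dtype=np.float32),   # (samples, audio_len, audio_dim) -- dims illustrative
          "vision": np.zeros((n, 55, 709), dtype=np.float32),  # (samples, vision_len, vision_dim) -- dims illustrative
          "id": [f"video_{i}$_${i}" for i in range(n)],        # video_id$_$clip_id
          "text": np.zeros((n, 39, 768), dtype=np.float32),
          "text_bert": np.zeros((n, 3, 39), dtype=np.int64),   # token ids / attention mask / segment ids
          "audio_lengths": [400] * n,
          "vision_lengths": [55] * n,
          "annotations": ["Neutral"] * n,
          "classification_labels": np.zeros(n, dtype=np.float32),
          "regression_labels": np.zeros(n, dtype=np.float32),
      }

  data = {split: make_split(8) for split in ("train", "valid", "test")}
  with open("custom_features.pkl", "wb") as f:
      pickle.dump(data, f)
  ```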
- Modify `config/config_regression.py` to update the dataset paths (a hypothetical sketch of the kind of edit follows).
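  The exact attribute names inside `config/config_regression.py` are repo-specific; the snippet below is only a hypothetical illustration of the edit involved (pointing each dataset at the local path of its feature pickle), not the file's real contents:

  ```python
  # Hypothetical illustration -- match the actual keys used in config/config_regression.py.
  dataset_paths = {
      "sims": "/path/to/SIMS/unaligned_39.pkl",
      "mosi": "/path/to/MOSI/unaligned.pkl",
  }
  ```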
- Run the test script:

  ```bash
  sh test.sh
  ```
- CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotations of Modality
- Transformer-based Feature Reconstruction Network for Robust Multimodal Sentiment Analysis
Please cite our papers if you find our work useful for your research:
```bibtex
@inproceedings{yu2020ch,
  title={CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotation of Modality},
  author={Yu, Wenmeng and Xu, Hua and Meng, Fanyang and Zhu, Yilin and Ma, Yixiao and Wu, Jiele and Zou, Jiyun and Yang, Kaicheng},
  booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  pages={3718--3727},
  year={2020}
}

@inproceedings{yuan2021transformer,
  title={Transformer-based Feature Reconstruction Network for Robust Multimodal Sentiment Analysis},
  author={Yuan, Ziqi and Li, Wei and Xu, Hua and Yu, Wenmeng},
  booktitle={Proceedings of the 29th ACM International Conference on Multimedia},
  pages={4400--4407},
  year={2021}
}
```