Extracting Motion and Appearance via Inter-Frame Attention for Efficient Video Frame Interpolation
Accepted by CVPR 2023
Guozhen Zhang, Yuhan Zhu, Haonan Wang, Youxin Chen, Gangshan Wu, Limin Wang
- [2023.03.12] We compared our method with other methods (VFIFormer and M2M) in extreme cases such as large motion and scene transitions. A video demonstrating our results can be found here (Bilibili).
- [2023.03.12] Thanks to @jhogsett, our model now has a more user-friendly WebUI!
In this work, we propose to exploit inter-frame attention for extracting motion and appearance information in video frame interpolation. In particular, we utilize the correlation information hidden within the attention map to simultaneously enhance the appearance information and model motion. Meanwhile, we devise a hybrid CNN-and-Transformer framework to achieve a better trade-off between performance and efficiency. Experimental results show that our proposed module achieves state-of-the-art performance on both fixed- and arbitrary-timestep interpolation while being more efficient than the previous SOTA method.
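To make the core idea concrete, here is a minimal, self-contained PyTorch sketch (not the actual model code; all names are illustrative) of how a single inter-frame attention map can yield both cues: aggregating the other frame's features gives appearance, and the attention-weighted average displacement of matched positions gives a coarse motion field.

```python
import torch

def inter_frame_attention(feat0, feat1):
    # feat0, feat1: (B, C, H, W) features of two consecutive frames.
    B, C, H, W = feat0.shape
    q = feat0.flatten(2).transpose(1, 2)   # (B, HW, C) queries from frame 0
    k = feat1.flatten(2).transpose(1, 2)   # (B, HW, C) keys from frame 1
    attn = torch.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)  # (B, HW, HW)

    # Appearance cue: attention-weighted aggregation of frame-1 features.
    appearance = (attn @ k).transpose(1, 2).reshape(B, C, H, W)

    # Motion cue: expected matched coordinate minus the query coordinate.
    ys = torch.arange(H, dtype=torch.float32).view(H, 1).expand(H, W)
    xs = torch.arange(W, dtype=torch.float32).view(1, W).expand(H, W)
    coords = torch.stack([xs, ys], dim=-1).reshape(1, H * W, 2)  # (1, HW, 2)
    motion = (attn @ coords - coords).transpose(1, 2).reshape(B, 2, H, W)
    return appearance, motion
```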
Runtime and memory usage compared with the previous SOTA method:

Dependencies:

- torch 1.8.0
- python 3.8
- skimage 0.19.2
- numpy 1.23.1
- opencv-python 4.6.0
- timm 0.6.11
- tqdm
- Download the model checkpoints (baidu, code: gi5j) and put the `ckpt` folder into the root dir.
- Run the following commands to generate 2x and Nx (arbitrary) frame interpolation demos:
python demo_2x.py # for 2x interpolation
python demo_Nx.py --n 8 # for 8x interpolation
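For reference, `--n` controls the interpolation factor: with `--n 8` the model is queried at seven intermediate timesteps between each input pair. The snippet below is a sketch of that sampling, not the demo script's actual code:

```python
# Hypothetical illustration of arbitrary-timestep sampling for --n 8.
n = 8
timesteps = [i / n for i in range(1, n)]  # [0.125, 0.25, ..., 0.875]
# Each t in `timesteps` asks the model for the frame at time t between
# the two inputs, yielding n - 1 = 7 new frames per input pair.
```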
By running the above commands, you should get the following examples by default:
- Download the Vimeo90K dataset
- Run the following command at the root dir:
python -m torch.distributed.launch --nproc_per_node=4 train.py --world_size 4 --batch_size 8 --data_path **YOUR_VIMEO_DATASET_PATH**
The default training setting is Ours. If you want to train Ours_small or your own model, you can modify the MODEL_CONFIG in config.py.
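For illustration only (the authoritative keys and helpers live in config.py; the values and the `init_model_config` helper below are assumptions, not verbatim), switching to a smaller variant generally means pointing MODEL_CONFIG at a reduced architecture:

```python
# config.py -- hypothetical sketch; check the real file for the exact keys.
MODEL_CONFIG = {
    'LOGNAME': 'ours_small',            # name used for logs/checkpoints
    'MODEL_ARCH': init_model_config(    # helper assumed to exist in config.py
        F=16,                           # smaller base channel width
        depth=[2, 2, 2, 2, 2],          # fewer blocks per stage
    ),
}
```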
- Download the dataset you need.
- Download the model checkpoints and put the `ckpt` folder into the root dir.
For 2x interpolation benchmarks:
python benchmark/**dataset**.py --model **model[ours/ours_small]** --path /where/is/your/**dataset**
For 4x interpolation benchmarks:
python benchmark/**dataset**.py --model **model[ours_t/ours_small_t]** --path /where/is/your/dataset
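The benchmark scripts report the usual interpolation metrics internally; purely as an illustration of the main one, here is a minimal PSNR computation using the skimage dependency listed above (the file names are placeholders, not outputs of the scripts):

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio

# Placeholder file names -- substitute an interpolated frame and its ground truth.
pred = cv2.imread('pred.png')
gt = cv2.imread('gt.png')
print('PSNR: %.2f dB' % peak_signal_noise_ratio(gt, pred))
```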
You can also test the inference time of our methods on an input of a given size with the following command:
python benchmark/TimeTest.py --model **model[ours/ours_small]** --H **SIZE** --W **SIZE**
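TimeTest.py handles the measurement for you; for the curious, GPU latency is typically measured with warm-up iterations and explicit synchronization, roughly like the sketch below (`model` is a placeholder for a loaded network, not the script's actual code):

```python
import time
import torch

def measure_latency(model, H=256, W=256, runs=100):
    # Synthetic input; real measurements should match your target resolution.
    x = torch.randn(1, 3, H, W, device='cuda')
    with torch.no_grad():
        for _ in range(10):           # warm-up to exclude one-time CUDA costs
            model(x)
        torch.cuda.synchronize()      # wait for all queued kernels to finish
        start = time.time()
        for _ in range(runs):
            model(x)
        torch.cuda.synchronize()
    return (time.time() - start) / runs * 1000  # average ms per forward pass
```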
If you think this project is helpful for your research or applications, please feel free to leave a star ⭐️ and cite our paper:
@inproceedings{zhang2023extracting,
title={Extracting motion and appearance via inter-frame attention for efficient video frame interpolation},
author={Zhang, Guozhen and Zhu, Yuhan and Wang, Haonan and Chen, Youxin and Wu, Gangshan and Wang, Limin},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={5682--5692},
year={2023}
}
This project is released under the Apache 2.0 license. The code is based on RIFE, PvT, IFRNet, Swin, and HRFormer; please also follow their licenses. Thanks for their awesome work.