[[Paper]]
Bangrui Jiang, Zhihuai Xie, Zhen Xia, Songnan Li, Shan Liu
Tencent Media Lab, Shenzhen, China
- Python >= 3.6 (Anaconda or Miniconda is recommended)
- PyTorch >= 1.1
- NVIDIA GPU + CUDA
- Linux
Clone the repo

```shell
git clone xxx
cd ERDN
```
Install the dependent packages
- torchvision
- tqdm
- imageio
- numpy
- opencv-python
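The packages above can be installed with pip in one step (pin versions to match your PyTorch build if needed):

```shell
# Install the Python dependencies listed above
pip install torchvision tqdm imageio numpy opencv-python
```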
We use Deformable-Convolution-v2, which is built and installed as follows

```shell
cd dcn
bash make.sh
cd ..
```
We use the DVD dataset for training and testing. The dataset can be downloaded as follows

```shell
wget http://www.cs.ubc.ca/labs/imager/tr/2017/DeepVideoDeblurring/DeepVideoDeblurring_Dataset.zip
unzip DeepVideoDeblurring_Dataset.zip
```
The data should be arranged in the following structure
```
|--DVD
   |--Train
      |--blur
         |--video 1
            |--frame 1
            |--frame 2
            :
         |--video 2
         :
         |--video n
      |--gt
         |--video 1
            |--frame 1
            |--frame 2
            :
         |--video 2
         :
         |--video n
   |--Test
      |--blur
         |--video 1
         :
      |--gt
         |--video 1
         :
```
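As a quick sanity check, a short script can confirm that a folder follows the layout above (`verify_dvd_layout` is a hypothetical helper, not part of this repo):

```python
import os

def verify_dvd_layout(root):
    """Check that root follows the DVD layout: Train/Test -> blur/gt -> video dirs."""
    for split in ("Train", "Test"):
        blur = os.path.join(root, split, "blur")
        gt = os.path.join(root, split, "gt")
        if not (os.path.isdir(blur) and os.path.isdir(gt)):
            return False
        # Every blurred video must have a matching ground-truth video
        if sorted(os.listdir(blur)) != sorted(os.listdir(gt)):
            return False
    return True
```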
We provide a preprocessing script for the DVD dataset

```shell
python script/arrange.py --data_path path_to_origin_DVD_dataset --out_path path_to_DVD
```
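For reference, the rearrangement amounts to copying each video's frames into the blur/gt structure. This is an illustrative sketch only, assuming the unzipped dataset keeps the original `quantitative_datasets/<video>/input` and `GT` layout; use `script/arrange.py` for the actual conversion and train/test split:

```python
import os
import shutil

def arrange(data_path, out_path, split="Train", videos=None):
    """Copy frames from the original DVD layout (<video>/input, <video>/GT)
    into the blur/gt structure expected above. Illustrative sketch only."""
    src = os.path.join(data_path, "quantitative_datasets")
    videos = videos if videos is not None else sorted(os.listdir(src))
    for video in videos:
        # Blurred inputs go under blur/, sharp references under gt/
        shutil.copytree(os.path.join(src, video, "input"),
                        os.path.join(out_path, split, "blur", video))
        shutil.copytree(os.path.join(src, video, "GT"),
                        os.path.join(out_path, split, "gt", video))
```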
Download the pre-trained model from DVD (key: csig).
Run the following command for quick inference

```shell
python inference.py \
    --data_path path_to_DVD \
    --model_path path_to_model \
    --result_path path_to_save_result \
    --save_image whether_to_save_image
```
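Deblurring quality is typically reported as PSNR against the ground truth. A minimal sketch for scoring saved results (not part of this repo; assumes 8-bit frames loaded as NumPy arrays):

```python
import numpy as np

def psnr(result, gt, max_val=255.0):
    """Peak signal-to-noise ratio between a restored frame and its ground truth."""
    mse = np.mean((result.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Frames written with `--save_image` can be loaded (e.g. with `cv2.imread`), scored per frame, and averaged per video.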
The training script will be released soon.