
Reference-based Image Super-Resolution with Deformable Attention Transformer (ECCV 2022)

Jiezhang Cao, Jingyun Liang, Kai Zhang, Yawei Li, Yulun Zhang, Wenguan Wang, Luc Van Gool

Computer Vision Lab, ETH Zurich.


arxiv | supplementary | pretrained models | visual results


This repository is the official PyTorch implementation of "Reference-based Image Super-Resolution with Deformable Attention Transformer" (arxiv, supp, pretrained models, visual results).


Reference-based image super-resolution (RefSR) aims to exploit auxiliary reference (Ref) images to super-resolve low-resolution (LR) images. RefSR has recently attracted great attention as it provides an alternative way to surpass single-image SR. However, the RefSR problem poses two critical challenges: (i) it is difficult to match correspondences between LR and Ref images when they differ significantly; (ii) it is hard to transfer relevant textures from the Ref images to compensate for the missing details in the LR images. To address these issues, this paper proposes a deformable attention Transformer, namely DATSR, with multiple scales, each of which consists of a texture feature encoder (TFE) module, a reference-based deformable attention (RDA) module, and a residual feature aggregation (RFA) module. Specifically, TFE first extracts features that are insensitive to image transformations (e.g., brightness changes) from the LR and Ref images; RDA then exploits multiple relevant textures to provide more information for the LR features; and RFA finally aggregates the LR features and the relevant textures to produce a more visually pleasing result. Extensive experiments demonstrate that DATSR achieves state-of-the-art performance on benchmark datasets both quantitatively and qualitatively.
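
To make the module roles concrete, below is a minimal single-scale sketch of the described data flow in PyTorch. All module internals are simplifying assumptions for illustration only (plain cross-attention stands in for deformable attention); this is not the authors' implementation, which lives under datsr/ in this repository.

import torch
import torch.nn as nn

class TFE(nn.Module):
    """Placeholder texture feature encoder for LR and Ref images (assumption, not the real TFE)."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.body(x)

class RDA(nn.Module):
    """Placeholder for reference-based deformable attention; plain cross-attention is substituted
    here purely to show the data flow (LR features attend to Ref features)."""
    def __init__(self, channels=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=4)

    def forward(self, lr_feat, ref_feat):
        b, c, h, w = lr_feat.shape
        q = lr_feat.flatten(2).permute(2, 0, 1)    # (HW, B, C)
        kv = ref_feat.flatten(2).permute(2, 0, 1)
        out, _ = self.attn(q, kv, kv)
        return out.permute(1, 2, 0).reshape(b, c, h, w)

class RFA(nn.Module):
    """Placeholder residual aggregation of LR features and the transferred textures."""
    def __init__(self, channels=64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, lr_feat, texture):
        return lr_feat + self.fuse(torch.cat([lr_feat, texture], dim=1))

# Single-scale forward pass on dummy tensors.
tfe, rda, rfa = TFE(), RDA(), RFA()
lr, ref = torch.randn(1, 3, 40, 40), torch.randn(1, 3, 40, 40)
lr_feat, ref_feat = tfe(lr), tfe(ref)
out = rfa(lr_feat, rda(lr_feat, ref_feat))
print(out.shape)  # torch.Size([1, 64, 40, 40])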

Contents

  1. Requirements
  2. Quick Testing
  3. Training
  4. Results
  5. Citation
  6. License and Acknowledgement

TODO

  • Add pretrained model
  • Add results of test set

Requirements

  • Python 3.8, PyTorch >= 1.7.1
  • CUDA 10.0 or CUDA 10.1
  • GCC 5.4.0

Quick Testing

The following commands will download the pretrained models and test datasets.

  1. Clone Repo and Install Dependencies
    git clone https://github.com/caojiezhang/DATSR.git
    cd DATSR
    conda install pytorch=1.7.1 torchvision cudatoolkit=10.1 -c pytorch
    pip install mmcv==0.4.4
    pip install -r requirements.txt
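
After installing, an optional sanity check from Python can confirm that the expected versions are in place (this is an extra convenience, not part of the original instructions):

# Optional environment check (assumes the conda/pip steps above were run).
import torch
import mmcv

print("PyTorch:", torch.__version__)        # expected 1.7.1
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)  # expected 10.1
print("mmcv:", mmcv.__version__)            # expected 0.4.4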

Dataset

Please refer to Datasets.md for pre-processing and more details.

Get Started

Pretrained Models

Download the pretrained models from this link and put them under the experiments/pretrained_model folder.
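
Optionally, you can create that folder and list whichever checkpoint files you have placed there from Python (a small convenience sketch; no specific checkpoint filenames are assumed):

# Create the expected folder and list the downloaded checkpoints, if any.
from pathlib import Path

model_dir = Path("experiments/pretrained_model")
model_dir.mkdir(parents=True, exist_ok=True)
for ckpt in sorted(model_dir.glob("*.pth")):
    print(ckpt.name)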

Test

We provide quick test code with the pretrained model.

# Run test code for models trained using only **reconstruction loss**.
PYTHONPATH="./:${PYTHONPATH}" python datsr/test.py -opt "options/test/test_restoration_mse.yml"

# Run test code for models trained using **GAN loss**.
PYTHONPATH="./:${PYTHONPATH}" python datsr/test.py -opt "options/test/test_restoration.yml"
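
If you want to score an individual output against its ground truth outside the provided test pipeline, a generic PSNR computation looks like the sketch below (the file paths are placeholders, and this is not the repository's own evaluation code):

# Generic PSNR between a restored image and its ground truth (standalone metric sketch).
import numpy as np
from PIL import Image

def psnr(img1, img2, max_val=255.0):
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

# Placeholder paths for one output / ground-truth pair.
sr = np.array(Image.open("results/example_SR.png"))
gt = np.array(Image.open("datasets/example_HR.png"))
print(f"PSNR: {psnr(sr, gt):.2f} dB")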

Training

Train restoration network

# Train the restoration network with only mse loss
PYTHONPATH="./:${PYTHONPATH}" python datsr/train.py -opt "options/train/train_restoration_mse.yml"

# Train the restoration network with all losses (including GAN loss)
PYTHONPATH="./:${PYTHONPATH}" python datsr/train.py -opt "options/train/train_restoration_gan.yml"

Visual Results

For more results on the benchmarks, you can directly download our DATSR results from here.


Citation

@inproceedings{cao2022datsr,
  title={Reference-based Image Super-Resolution with Deformable Attention Transformer},
  author={Cao, Jiezhang and Liang, Jingyun and Zhang, Kai and Li, Yawei and Zhang, Yulun and Wang, Wenguan and Van Gool, Luc},
  booktitle={European Conference on Computer Vision},
  year={2022}
}

License and Acknowledgement

This project is released under the CC-BY-NC license. We refer to code from C2-Matching and BasicSR; thanks for their awesome works. The majority of DATSR is licensed under CC-BY-NC; however, portions of the project are available under separate license terms: C2-Matching is licensed under the MIT License, and BasicSR is licensed under the Apache 2.0 license.
