This is an unofficial PyTorch implementation of Image Super-Resolution via Iterative Refinement (SR3). Some implementation details were inferred from the paper's description and may differ from the actual SR3 structure, since the paper omits some details.
- We use the ResNet block and channel-concatenation style of the vanilla DDPM.
- We apply the attention mechanism at the low-resolution features (16×16), as in the vanilla DDPM.
- We encode $\gamma$ with a FiLM structure, as WaveGrad does, and embed it without an affine transformation (see the sketch below).
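As a minimal sketch of that conditioning (module and parameter names here are ours, not necessarily this repo's): the continuous noise level $\gamma$ is sinusoidally encoded and injected into the feature maps as a per-channel shift, with no learned scale.

```python
import math
import torch
import torch.nn as nn

class NoiseLevelEmbedding(nn.Module):
    """Sketch: encode a continuous noise level gamma sinusoidally, pass it
    through a small MLP, and inject it as a per-channel shift only
    (no learned scale, i.e. without the full FiLM affine transformation)."""

    def __init__(self, channels: int, emb_dim: int = 64):
        super().__init__()
        self.emb_dim = emb_dim
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim, channels),
            nn.SiLU(),
            nn.Linear(channels, channels),
        )

    def forward(self, feat: torch.Tensor, gamma: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) feature map; gamma: (B,) noise levels in (0, 1]
        half = self.emb_dim // 2
        freqs = torch.exp(
            torch.arange(half, device=gamma.device)
            * -(math.log(10000.0) / (half - 1))
        )
        angles = gamma[:, None] * freqs[None, :]           # (B, half)
        emb = torch.cat([angles.sin(), angles.cos()], -1)  # (B, emb_dim)
        shift = self.mlp(emb)[:, :, None, None]            # (B, C, 1, 1)
        return feat + shift                                # shift only, no scale
```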
If you just want to upscale 64×64px -> 512×512px images using the pre-trained model, check out this Google Colab script.
- 16×16 -> 128×128 on FFHQ-CelebaHQ
- 64×64 -> 512×512 on FFHQ-CelebaHQ
- 128×128 face generation on FFHQ
- 1024×1024 face generation by a cascade of 3 models
- log / logger
- metrics evaluation
- multi-gpu support
- resume training / pretrained model
- standalone validation script
Note: We currently cap the reverse process at 2,000 steps. Because model size is limited by an Nvidia 1080Ti, image noise and hue deviation occasionally appear in high-resolution outputs, resulting in low scores. There is a lot of room for optimization, and contributions of more extensive experiments and code enhancements are welcome.
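The step budget lives in the schedule section of the config JSON. A sketch of the relevant fields, with illustrative values (check your config file for the exact names and numbers used per task):

```json
"beta_schedule": { // inside the "model" section
    "val": {
        "schedule": "linear",
        "n_timestep": 2000, // reverse-step budget at inference
        "linear_start": 1e-6,
        "linear_end": 1e-2
    }
}
```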
Tasks/Metrics | SSIM (↑) | PSNR (↑) | FID (↓) | IS (↑)
---|---|---|---|---
16×16 -> 128×128 | 0.675 | 23.26 | - | -
64×64 -> 512×512 | 0.445 | 19.87 | - | -
128×128 | - | - | - | -
1024×1024 | - | - | - | -
- 16×16 -> 128×128 on FFHQ-CelebaHQ [More Results]
- 64×64 -> 512×512 on FFHQ-CelebaHQ [More Results]
- 128×128 face generation on FFHQ [More Results]
This project is based on "Denoising Diffusion Probabilistic Models", and we built both the DDPM and SR3 network structures, which use timesteps and gamma ($\gamma$) as the model's embedding input, respectively. In our experiments, the SR3 model achieves better visual results with the same number of reverse steps and the same learning rate. You can select the JSON files with annotated suffix names to train the different models.
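As a simplified illustration of that difference (not this repo's exact code): DDPM conditions on the discrete timestep index, while SR3 samples a continuous noise level $\gamma$ uniformly between $\bar\alpha_{t-1}$ and $\bar\alpha_t$, as in the SR3 and WaveGrad papers.

```python
import torch

def sample_condition(gammas: torch.Tensor, batch: int, use_gamma: bool):
    """Sketch of the two conditioning inputs.

    gammas: (T,) cumulative products of alphas, one per diffusion step.
    DDPM:  return the discrete timestep index t (embedded sinusoidally later).
    SR3:   return a continuous noise level sampled uniformly between
           gammas[t - 1] and gammas[t].
    """
    T = gammas.shape[0]
    t = torch.randint(1, T, (batch,))  # one random step per sample
    if not use_gamma:
        return t                       # DDPM: integer timestep
    lo, hi = gammas[t], gammas[t - 1]  # gammas decrease with t
    return lo + torch.rand(batch) * (hi - lo)  # SR3: continuous gamma
```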
Tasks | Platform (Baidu Yun code: qwer)
---|---
16×16 -> 128×128 on FFHQ-CelebaHQ | Google Drive / Baidu Yun
64×64 -> 512×512 on FFHQ-CelebaHQ | Google Drive / Baidu Yun
128×128 face generation on FFHQ | Google Drive / Baidu Yun
```
# Download the pretrained model and edit [sr|sample]_[ddpm|sr3]_[resolution option].json, setting "resume_state":
"resume_state": [your pretrained model's path]
```
If you don't have the data, you can prepare it with the following steps:
Download the dataset and convert it to LMDB or PNG format using the prepare.py script.
```
# Resize to get 16×16 LR_IMGS and 128×128 HR_IMGS, then prepare 128×128 Fake SR_IMGS by bicubic interpolation
python prepare.py --path [dataset root] --out [output root] --size 16,128 -l
```
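After the script finishes, [output root] should contain subdirectories (or the corresponding LMDB databases, when -l is set) along these lines:

```
[output root]/
├── hr_128      # 128×128 ground-truth high-resolution images
├── lr_16       # 16×16 low-resolution images
└── sr_16_128   # 16×16 images upsampled to 128×128 by bicubic interpolation
```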
Then you need to change the datasets config to your data path and image resolution:

```json
"datasets": {
    "train": {
        "dataroot": "dataset/ffhq_16_128", // [output root] from the prepare.py script
        "l_resolution": 16, // low resolution to be super-resolved
        "r_resolution": 128, // high resolution
        "datatype": "lmdb" // lmdb or img: path of image files
    },
    "val": {
        "dataroot": "dataset/celebahq_16_128" // [output root] from the prepare.py script
    }
},
```
You can also use your own image data with the following steps.
First, organize the image layout like this:

```
# set the paths for the high-resolution, low-resolution, and bicubic-interpolated images
dataset/celebahq_16_128/
├── hr_128
├── lr_16
└── sr_16_128
```
Then you need to change the dataset config to your data path and image resolution:

```json
"datasets": {
    "train|val": { // train and validation parts
        "dataroot": "dataset/celebahq_16_128",
        "l_resolution": 16, // low resolution to be super-resolved
        "r_resolution": 128, // high resolution
        "datatype": "img" // lmdb or img: path of image files
    }
},
```
```
# Use sr.py and sample.py to train the super-resolution task and the unconditional generation task, respectively.
# Edit the json files to adjust the network structure and hyperparameters
python sr.py -p train -c config/sr_sr3.json

# Edit the json to add the pretrained model path and run the evaluation
python sr.py -p val -c config/sr_sr3.json
```

```
# Standalone quantitative evaluation using SSIM/PSNR metrics on a given result root
python eval.py -p [result root]
```
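As a minimal sketch of what this standalone evaluation computes, assuming the result root holds paired *_hr.png / *_sr.png images (the pairing convention here is an assumption; adapt it to your result folder):

```python
import glob
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(result_root: str) -> None:
    """Average PSNR/SSIM over paired HR/SR images in result_root."""
    psnrs, ssims = [], []
    for hr_path in sorted(glob.glob(f"{result_root}/*_hr.png")):
        sr_path = hr_path.replace("_hr.png", "_sr.png")  # assumed naming
        hr = np.asarray(Image.open(hr_path).convert("RGB"))
        sr = np.asarray(Image.open(sr_path).convert("RGB"))
        psnrs.append(peak_signal_noise_ratio(hr, sr, data_range=255))
        ssims.append(structural_similarity(hr, sr, channel_axis=-1, data_range=255))
    print(f"PSNR: {np.mean(psnrs):.2f}  SSIM: {np.mean(ssims):.3f}")

evaluate("[result root]")  # same root passed to eval.py -p
```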
Set the HR (original high-resolution images) and SR (images to be processed) paths as in the Own Data step above. The HR directory's contents can be copied from SR, and the LR directory is unnecessary.
```
# run the script
python infer.py -c [config file]
```
Our work is based on the following theoretical works:
- Denoising Diffusion Probabilistic Models
- Image Super-Resolution via Iterative Refinement
- WaveGrad: Estimating Gradients for Waveform Generation
- Large Scale GAN Training for High Fidelity Natural Image Synthesis
and we benefited a lot from the following projects: