Commit c767ea3: Merge branch 'master' into groups-conv2d
plyfager authored Jan 16, 2023
2 parents 7dd7d2a + 5a09a5b commit c767ea3
Showing 13 changed files with 790 additions and 12 deletions.
38 changes: 38 additions & 0 deletions docs/en/tutorials/inception_eval.md
# Tutorial 5: How to compute FID and KID for measuring the difference between the distributions of real data and restored data

<!-- TOC -->

- [Tutorial 5: How to compute FID and KID for measuring the difference between the distributions of real data and restored data](#tutorial-5-how-to-compute-fid-and-kid-for-measuring-the-difference-between-the-distributions-of-real-data-and-restored-data)
- [Why FID and KID for image restoration tasks?](#why-fid-and-kid-for-image-restoration-tasks)
- [Set Config File](#set-config-file)

<!-- TOC -->

## Why FID and KID for image restoration tasks?

Commonly used metrics for image/video restoration are PSNR, SSIM, and LPIPS, which directly compare the restored data with the high-quality original data.
While these metrics are good for ranking the restoration performance of different models, they require the undegraded images to be available.

Recently, some unpaired restoration methods have been proposed to restore real-world images in settings where only degraded images are accessible.
In such settings, it is difficult to compare the proposed models with common metrics such as PSNR and SSIM, since there is no corresponding ground-truth data.

An alternative way to evaluate these models in real-world settings is to compare the distributions of real data and restored data, rather than directly comparing corresponding images.

To this end, MMEditing provides functions to compute the *Fréchet Inception Distance* (FID) and the *Kernel Inception Distance* (KID), metrics that measure the difference between two distributions and are commonly used in image generation tasks to check the fidelity of generated images.
Currently, computing FID and KID is only available for restoration tasks.
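
For reference, FID fits a multivariate Gaussian to the InceptionV3 features of each image set and computes the Fréchet distance between the two Gaussians: `||mu_r - mu_f||^2 + Tr(Sigma_r + Sigma_f - 2 * (Sigma_r @ Sigma_f)^(1/2))`. Below is a minimal NumPy/SciPy sketch of that formula; it illustrates the metric itself, not MMEditing's `FID` class, and the function name `compute_fid` is made up for this example.

```python
import numpy as np
from scipy import linalg


def compute_fid(feats_real, feats_restored):
    """Fréchet distance between Gaussians fitted to two (N, D) feature sets."""
    mu_r = feats_real.mean(axis=0)
    mu_f = feats_restored.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_f = np.cov(feats_restored, rowvar=False)
    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(sigma_r.dot(sigma_f), disp=False)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_f
    return diff.dot(diff) + np.trace(sigma_r + sigma_f - 2 * covmean)
```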

## Set Config File

FID and KID can be measured after images from the two distributions are extracted as feature vectors with the InceptionV3 model.
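
The sketch below shows one plausible way to obtain such pooled features with torchvision's `inception_v3`; MMEditing ships its own `InceptionV3` wrapper (exported from `mmedit.core`), whose exact interface is not shown in this commit, so treat this purely as an illustration of the feature-extraction step.

```python
import torch
from torchvision.models import inception_v3

# ImageNet-pretrained InceptionV3; replacing the classifier head with
# Identity leaves the 2048-d pooled features as the model output.
model = inception_v3(pretrained=True)
model.fc = torch.nn.Identity()
model.eval()

with torch.no_grad():
    images = torch.rand(8, 3, 299, 299)  # dummy batch of RGB images in [0, 1]
    feats = model(images)                # shape (8, 2048)
```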

To compute the distance between the extracted feature vectors, we can add the `FID` and `KID` metrics to `test_cfg` as follows:

```python
test_cfg = dict(
    metrics=[
        'PSNR', 'SSIM', 'FID',
        dict(type='KID', num_repeats=100, sample_size=1000)
    ],
    inception_style='StyleGAN',  # or pytorch
    crop_border=0)
```
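
For reference, KID is the squared maximum mean discrepancy (MMD) between the two feature sets under a polynomial kernel, usually averaged over repeated random subsamples; plausibly this is what `num_repeats` and `sample_size` control here. A minimal NumPy sketch with the commonly used cubic kernel `k(x, y) = (x·y / d + 1)^3` (d the feature dimension) follows; `compute_kid` is a hypothetical name, not MMEditing's API, and reporting a mean and a standard deviation over repeats is an assumption.

```python
import numpy as np


def poly_kernel(x, y, degree=3):
    """Cubic polynomial kernel over (N, D) feature matrices."""
    return (x @ y.T / x.shape[1] + 1.0) ** degree


def compute_kid(feats_real, feats_restored, num_repeats=100, sample_size=1000):
    # Assumes both feature sets contain at least `sample_size` samples.
    rng = np.random.default_rng(0)
    mmds = []
    for _ in range(num_repeats):
        x = feats_real[rng.choice(len(feats_real), sample_size, replace=False)]
        y = feats_restored[rng.choice(len(feats_restored), sample_size, replace=False)]
        k_xx, k_yy, k_xy = poly_kernel(x, x), poly_kernel(y, y), poly_kernel(x, y)
        n = sample_size
        # Unbiased MMD^2 estimator: exclude the diagonals of the within-set kernels.
        mmd = ((k_xx.sum() - np.trace(k_xx)) / (n * (n - 1))
               + (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
               - 2 * k_xy.mean())
        mmds.append(mmd)
    return {'KID_mean': float(np.mean(mmds)), 'KID_std': float(np.std(mmds))}
```

Returning a dict of statistics would match the evaluation-hook change later in this commit, which merges dict-valued metric results into the log buffer.
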
10 changes: 6 additions & 4 deletions mmedit/core/__init__.py
```diff
@@ -1,14 +1,16 @@
 # Copyright (c) OpenMMLab. All rights reserved.
-from .evaluation import (DistEvalIterHook, EvalIterHook, L1Evaluation, mae,
-                         mse, psnr, reorder_image, sad, ssim)
+from .evaluation import (FID, KID, DistEvalIterHook, EvalIterHook, InceptionV3,
+                         L1Evaluation, mae, mse, psnr, reorder_image, sad,
+                         ssim)
 from .hooks import MMEditVisualizationHook, VisualizationHook
 from .misc import tensor2img
 from .optimizer import build_optimizers
+from .registry import build_metric
 from .scheduler import LinearLrUpdaterHook, ReduceLrUpdaterHook
 
 __all__ = [
     'build_optimizers', 'tensor2img', 'EvalIterHook', 'DistEvalIterHook',
     'mse', 'psnr', 'reorder_image', 'sad', 'ssim', 'LinearLrUpdaterHook',
-    'VisualizationHook', 'MMEditVisualizationHook', 'L1Evaluation',
-    'ReduceLrUpdaterHook', 'mae'
+    'VisualizationHook', 'MMEditVisualizationHook', 'L1Evaluation', 'FID',
+    'KID', 'InceptionV3', 'build_metric', 'ReduceLrUpdaterHook', 'mae'
 ]
```
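
With this change, the new evaluation symbols are importable from the package root, e.g.:

```python
from mmedit.core import FID, KID, InceptionV3, build_metric
```
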
3 changes: 2 additions & 1 deletion mmedit/core/evaluation/__init__.py
```diff
@@ -1,10 +1,11 @@
 # Copyright (c) OpenMMLab. All rights reserved.
 from .eval_hooks import DistEvalIterHook, EvalIterHook
+from .inceptions import FID, KID, InceptionV3
 from .metrics import (L1Evaluation, connectivity, gradient_error, mae, mse,
                       niqe, psnr, reorder_image, sad, ssim)
 
 __all__ = [
     'mse', 'sad', 'psnr', 'reorder_image', 'ssim', 'EvalIterHook',
     'DistEvalIterHook', 'L1Evaluation', 'gradient_error', 'connectivity',
-    'niqe', 'mae'
+    'niqe', 'mae', 'FID', 'KID', 'InceptionV3'
 ]
```
3 changes: 3 additions & 0 deletions mmedit/core/evaluation/eval_hooks.py
```diff
@@ -57,6 +57,9 @@ def evaluate(self, runner, results):
         eval_res = self.dataloader.dataset.evaluate(
             results, logger=runner.logger, **self.eval_kwargs)
         for name, val in eval_res.items():
+            if isinstance(val, dict):
+                runner.log_buffer.output.update(val)
+                continue
             runner.log_buffer.output[name] = val
         runner.log_buffer.ready = True
         # call `after_val_epoch` after evaluation.
```
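
The added branch lets a metric return a dict of named values and merges them into the log buffer instead of logging the dict under a single key; this matches a KID result that reports several statistics. A small illustration of the resulting behavior, with made-up metric values and hypothetical key names:

```python
# Hypothetical evaluation result: two scalar metrics plus one dict-valued metric.
eval_res = {'PSNR': 28.3, 'SSIM': 0.85,
            'KID': {'KID_mean': 0.021, 'KID_std': 0.003}}

output = {}  # stands in for runner.log_buffer.output
for name, val in eval_res.items():
    if isinstance(val, dict):
        output.update(val)  # flatten dict-valued metrics into top-level keys
        continue
    output[name] = val

# output == {'PSNR': 28.3, 'SSIM': 0.85, 'KID_mean': 0.021, 'KID_std': 0.003}
```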
