From 591f78a1b2d2f8b7a6b731601a0b84e6a9f62dd1 Mon Sep 17 00:00:00 2001
From: ys-li <56712176+Yshuo-Li@users.noreply.github.com>
Date: Fri, 19 Nov 2021 15:07:45 +0800
Subject: [PATCH] [Doc] Chinese translation of configs/synthesizers/pix2pix/README.md (#596)

---
 configs/synthesizers/pix2pix/README.md       |  8 +++-
 configs/synthesizers/pix2pix/README_zh-CN.md | 41 +++++++++-----------
 2 files changed, 25 insertions(+), 24 deletions(-)

diff --git a/configs/synthesizers/pix2pix/README.md b/configs/synthesizers/pix2pix/README.md
index 705b93e898..267bd4b325 100644
--- a/configs/synthesizers/pix2pix/README.md
+++ b/configs/synthesizers/pix2pix/README.md
@@ -33,8 +33,12 @@ We use `FID` and `IS` metrics to evaluate the generation performance of pix2pix.
 | official average | 111.678 | 2.624 | - |
 | ours average | **106.139** | **2.664** | - |

-Note: we strictly follow the [paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.pdf) setting in Section 3.3: "*At inference time, we run the generator net in exactly
+Note: we strictly follow the [paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.pdf) setting in Section 3.3:
+
+"*At inference time, we run the generator net in exactly
 the same manner as during the training phase. This differs
 from the usual protocol in that we apply dropout at test
 time, and we apply batch normalization using the statistics of
-the test batch, rather than aggregated statistics of the training batch.*" (i.e., use model.train() mode), thus may lead to slightly different inference results every time.
+the test batch, rather than aggregated statistics of the training batch.*"
+
+i.e., we use `model.train()` mode, which may lead to slightly different inference results every time.
diff --git a/configs/synthesizers/pix2pix/README_zh-CN.md b/configs/synthesizers/pix2pix/README_zh-CN.md
index 90c0fc3ec5..4dd77ece09 100644
--- a/configs/synthesizers/pix2pix/README_zh-CN.md
+++ b/configs/synthesizers/pix2pix/README_zh-CN.md
@@ -2,7 +2,7 @@
 <div align="center">
-Pix2Pix (CVPR'2017) +Pix2Pix (CVPR'2017) ```bibtex @inproceedings{isola2017image, @@ -18,30 +18,27 @@
-We use `FID` and `IS` metrics to evaluate the generation performance of pix2pix. +我们使用 `FID` 和 `IS` 指标来评估 pix2pix 的生成表现。 -`FID` evaluation: +| 算法 | FID | IS | 下载 | +| :----: | :-: | :-: | :------: | +| 官方 facades | **119.135** | 1.650 | - | +| [复现 facades](/configs/synthesizers/pix2pix/pix2pix_vanilla_unet_bn_1x1_80k_facades.py) | 127.792 | **1.745** | [模型](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_facades/pix2pix_vanilla_unet_bn_1x1_80k_facades_20200524-6206de67.pth) \| [日志](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_facades/pix2pix_vanilla_unet_bn_1x1_80k_facades_20200524_185039.log.json) | +| 官方 maps-a2b | 149.731 | 2.529 | - | +| [复现 maps-a2b](/configs/synthesizers/pix2pix/pix2pix_vanilla_unet_bn_a2b_1x1_219200_maps.py) | **118.552** | **2.689** | [模型](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_maps_a2b/pix2pix_vanilla_unet_bn_a2b_1x1_219200_maps_20200524-b29c4538.pth) \| [日志](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_maps_a2b/pix2pix_vanilla_unet_bn_a2b_1x1_219200_maps_20200524_191918.log.json) | +| 官方 maps-b2a | 102.072 | **3.552** | - | +| [复现 maps-b2a](/configs/synthesizers/pix2pix/pix2pix_vanilla_unet_bn_b2a_1x1_219200_maps.py) | **92.798** | 3.473 | [模型](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_maps_b2a/pix2pix_vanilla_unet_bn_b2a_1x1_219200_maps_20200524-17882ec8.pth) \| [日志](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_maps_b2a/pix2pix_vanilla_unet_bn_b2a_1x1_219200_maps_20200524_192641.log.json) | +| 官方 edges2shoes | **75.774** | **2.766** | - | +| [复现 edges2shoes](/configs/synthesizers/pix2pix/pix2pix_vanilla_unet_bn_wo_jitter_flip_1x4_186840_edges2shoes.py) | 85.413 | 2.747 | [模型](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_edges2shoes_wo_jitter_flip/pix2pix_vanilla_unet_bn_wo_jitter_flip_1x4_186840_edges2shoes_20200524-b35fa9c0.pth) \| [日志](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_edges2shoes_wo_jitter_flip/pix2pix_vanilla_unet_bn_wo_jitter_flip_1x4_186840_edges2shoes_20200524_193117.log.json) | +| 官方平均值 | 111.678 | 2.624 | - | +| 复现平均值 | **106.139** | **2.664** | - | -| Dataset | [facades](/configs/synthesizers/pix2pix/pix2pix_vanilla_unet_bn_1x1_80k_facades.py) | [maps-a2b](/configs/synthesizers/pix2pix/pix2pix_vanilla_unet_bn_a2b_1x1_219200_maps.py) | [maps-b2a](/configs/synthesizers/pix2pix/pix2pix_vanilla_unet_bn_b2a_1x1_219200_maps.py) | [edges2shoes](/configs/synthesizers/pix2pix/pix2pix_vanilla_unet_bn_wo_jitter_flip_1x4_186840_edges2shoes.py) | average | -| :------: | :--------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------: | :---------: | -| official | **119.135** | 149.731 | 102.072 | **75.774** | 111.678 | -| ours | 127.792 | **118.552** | **92.798** | 85.413 | **106.139** | +注:我们严格遵守[论文](http://openaccess.thecvf.com/content_cvpr_2017/papers/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.pdf)第3.3节中的设置: -`IS` evaluation: - -| Dataset | facades | maps-a2b | maps-b2a | edges2shoes | average | -| :------: | :-------: | :-------: | :-------: | :---------: | :-------: | -| official | 1.650 | 2.529 | **3.552** | **2.766** | 2.624 | -| 
ours | **1.745** | **2.689** | 3.473 | 2.747 | **2.664** | - -Model and log downloads: - -| Dataset | facades | maps-a2b | maps-b2a | edges2shoes | -| :------: | :-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| download | [model](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_facades/pix2pix_vanilla_unet_bn_1x1_80k_facades_20200524-6206de67.pth) \| [log](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_facades/pix2pix_vanilla_unet_bn_1x1_80k_facades_20200524_185039.log.json) | [model](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_maps_a2b/pix2pix_vanilla_unet_bn_a2b_1x1_219200_maps_20200524-b29c4538.pth) \| [log](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_maps_a2b/pix2pix_vanilla_unet_bn_a2b_1x1_219200_maps_20200524_191918.log.json) | [model](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_maps_b2a/pix2pix_vanilla_unet_bn_b2a_1x1_219200_maps_20200524-17882ec8.pth) \| [log](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_maps_b2a/pix2pix_vanilla_unet_bn_b2a_1x1_219200_maps_20200524_192641.log.json) | [model](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_edges2shoes_wo_jitter_flip/pix2pix_vanilla_unet_bn_wo_jitter_flip_1x4_186840_edges2shoes_20200524-b35fa9c0.pth) \| [log](https://download.openmmlab.com/mmediting/synthesizers/pix2pix/pix2pix_edges2shoes_wo_jitter_flip/pix2pix_vanilla_unet_bn_wo_jitter_flip_1x4_186840_edges2shoes_20200524_193117.log.json) | - -Note: we strictly follow the [paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.pdf) setting in Section 3.3: "*At inference time, we run the generator net in exactly +"*At inference time, we run the generator net in exactly the same manner as during the training phase. This differs from the usual protocol in that we apply dropout at test time, and we apply batch normalization using the statistics of -the test batch, rather than aggregated statistics of the training batch.*" (i.e., use model.train() mode), thus may lead to slightly different inference results every time. 
+the test batch, rather than aggregated statistics of the training batch.*"
+
+即使用 `model.train()` 模式，因此可能会导致每次推理结果略有不同。
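
The note in both READMEs keeps dropout and batch normalization in training mode at inference time, which is why repeated runs can give slightly different results. The PyTorch sketch below illustrates that behavior under stated assumptions: the generator here is a generic stand-in (not the actual pix2pix U-Net or any MMEditing API), and the layer sizes and names are illustrative only.

```python
import torch
import torch.nn as nn

# Generic stand-in generator with the two layer types the note cares about:
# dropout and batch normalization. Any pix2pix-style U-Net behaves the same
# way with respect to train()/eval() mode. Illustrative only, not MMEditing code.
generator = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Conv2d(8, 3, kernel_size=3, padding=1),
)

inputs = torch.randn(1, 3, 256, 256)

# Section 3.3 protocol from the paper: keep the network in training mode at
# test time, so dropout stays active and BatchNorm uses the statistics of the
# current test batch instead of the aggregated running statistics.
generator.train()
with torch.no_grad():
    out_a = generator(inputs)
    out_b = generator(inputs)

# Dropout is still stochastic, so two passes over the same input usually differ.
print(torch.allclose(out_a, out_b))  # typically False

# The usual protocol would instead call generator.eval(), which disables
# dropout and switches BatchNorm to its running statistics, making the
# output deterministic.
```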