
Reproducing results in the paper #4

Open
z-fabian opened this issue Dec 5, 2022 · 7 comments


z-fabian commented Dec 5, 2022

Hi, I am trying to reproduce the results from the paper and I cannot find exactly which 1k images of the FFHQ and ImageNet datasets were used for the tables in the paper. Can you please clarify the exact split used for comparing DPS with the other methods?
Thank you!


z-fabian commented Dec 6, 2022

The score network for FFHQ (taken from this repository according to the paper) has been trained on the full FFHQ dataset with all 70k images. If the experiments in the paper have been performed on a subset of the FFHQ dataset using the above model, it seems to me that the model has been trained on the images it is being evaluated on and therefore the results might be misleading. Thanks in advance for clarifying this!


DPS2022 commented Dec 13, 2022

Hello @z-fabian,

Actually, we trained the score network on a subset of the FFHQ dataset with 50k images, where the validation set is excluded. Specifically, we used the first 1k images of the FFHQ dataset as the validation set.

This was stated incorrectly in the paper by mistake, and it will be corrected in the camera-ready version. We are sorry for the confusion, and thank you for pointing out this important detail!

z-fabian commented

Great, thank you for the clarification. I assume folder '00000' has been used as the validation set and folders '01000'-'49000' for training (using the default folder names after downloading the dataset). Is that correct?


DPS2022 commented Dec 16, 2022

Exactly :)
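
For anyone else reproducing this, here is a minimal sketch of assembling that split, assuming the default FFHQ images1024x1024 download layout (folders '00000'-'69000', each holding 1,000 PNGs); the root path here is a placeholder:

```python
import os

# Placeholder root; point this at the downloaded images1024x1024 directory.
ffhq_root = "ffhq/images1024x1024"

val_folders = ["00000"]  # the first 1k images, used as the validation set
train_folders = [f"{i:05d}" for i in range(1000, 50000, 1000)]  # '01000'-'49000'

def list_images(folders):
    """Collect sorted PNG paths from the given FFHQ folders."""
    paths = []
    for folder in folders:
        folder_path = os.path.join(ffhq_root, folder)
        paths.extend(
            os.path.join(folder_path, name)
            for name in sorted(os.listdir(folder_path))
            if name.endswith(".png")
        )
    return paths

val_paths = list_images(val_folders)      # 1,000 images
train_paths = list_images(train_folders)  # 49,000 images
```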


z-fabian commented Feb 3, 2023

Hi,

Could you please also publish which part of the ImageNet validation set was used as the ImageNet-1k validation set for the results in the paper? Has center cropping been applied to the images after loading? I would like to reproduce the results and the plots from the paper. Thanks a lot in advance!
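
For context, this is a sketch of the kind of center cropping being asked about, assuming the resize-then-center-crop preprocessing common in 256x256 diffusion codebases; whether DPS used exactly this pipeline is the open question here, and the filename is just an illustrative ImageNet validation image:

```python
from PIL import Image
from torchvision import transforms

# Common ImageNet preprocessing for 256x256 models (an assumption,
# not confirmed for DPS): resize the shorter side, then center-crop.
preprocess = transforms.Compose([
    transforms.Resize(256),      # shorter side -> 256, aspect ratio kept
    transforms.CenterCrop(256),  # 256x256 crop from the center
    transforms.ToTensor(),       # [0, 1] float tensor, CHW layout
])

img = Image.open("ILSVRC2012_val_00000001.JPEG").convert("RGB")
x = preprocess(img)  # shape: (3, 256, 256)
```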

z-fabian reopened this Feb 3, 2023
sojinleeme commented

Hello,
I have a question about the FFHQ split with 50k images.

Validation: '00000'
Training: '01000'-'49000' (49k)

Shouldn't folder '50000' also be included to reach 50k images?
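
For reference, a quick check of the folder arithmetic behind this question, assuming 1,000 images per FFHQ folder (the two ranges are the two possible readings, not a confirmed answer):

```python
# Reading 1: training is '01000'-'49000' (as confirmed above).
train_49k = [f"{i:05d}" for i in range(1000, 50000, 1000)]
# Reading 2: training extends through '50000'.
train_50k = [f"{i:05d}" for i in range(1000, 51000, 1000)]

print(len(train_49k))  # 49 folders -> 49,000 images
print(len(train_50k))  # 50 folders -> 50,000 images
```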


OriLifschitz commented Apr 25, 2024

Hi,

> Could you please also publish which part of the ImageNet validation set was used as the ImageNet-1k validation set for the results in the paper? Has center cropping been applied to the images after loading? I would like to reproduce the results and the plots from the paper. Thanks a lot in advance!

Any thoughts on this? I too would like to know.
