
Unconditional Cifar10 generation FID higher than reported in the paper. #12

Open
kongwanbianjinyu opened this issue May 5, 2024 · 0 comments


kongwanbianjinyu commented May 5, 2024

Hello @feizc ,

Thanks for the great work! I trained the DiS-S/2 model for 200K steps using train.py and sampled with unconditional DDPM for 250 timesteps to generate 10K samples. This gives FID-10K = 16.446, which is higher than the result reported in Figure 2.

I kept the hyperparameters in train.py unchanged. For sampling, I followed the DiT code and modified sample_ddp.py and sample_ddp.sh, which can be found in the attached sample_ddp.zip. For FID computation, I pass both the reference cifar10-train folder and the generated-sample folder to test.py.
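For context, here is a minimal sketch of the Fréchet distance that standard FID scripts compute over Inception feature statistics. This assumes test.py follows the usual pytorch-fid-style computation; `frechet_distance` is an illustrative name, not a function from this repo:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 sqrt(S1 S2))."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Toy check: identical statistics should give a distance near zero.
rng = np.random.default_rng(0)
feats = rng.normal(size=(64, 8))          # stand-in for Inception features
mu, sigma = feats.mean(axis=0), np.cov(feats, rowvar=False)
print(frechet_distance(mu, sigma, mu, sigma))  # ≈ 0
```

Since FID is sensitive to the reference statistics, a mismatch between the cifar10-train reference folder and the statistics used in the paper (e.g. number of reference images, or 10K vs. 50K generated samples) could account for part of the gap.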

Could you advise on how to reproduce the unconditional CIFAR-10 generation results? I would really appreciate it.

@kongwanbianjinyu kongwanbianjinyu changed the title Unconditional Cifar10 dataset FID higher than reported in the paper. Unconditional Cifar10 generation FID higher than reported in the paper. May 5, 2024