
How to reproduce your FLOPs calculation? #3

Open
rub2xr6vef opened this issue Feb 18, 2024 · 2 comments

Comments

@rub2xr6vef
Hi @feizc! Thanks for open-sourcing this work. As the title says, I am curious how I can reproduce your FLOPs calculation (as reported in Table 1).


feizc commented Feb 19, 2024

Hi, you can see it in test.py here:

DiS/test.py

Line 12 in c3977f4

def test_dismodel():

It enumerates all model variants and computes their parameter counts and FLOPs using the thop package.
Since thop returns MACs, we multiply by 2 to get the FLOPs.

Referring to: https://cvnote.ddlee.cc/2019/09/04/thop-pytorch-mac-flop-counter.html
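To illustrate the MACs-to-FLOPs conversion described above, here is a minimal pure-Python sketch (not the repository's actual test.py, and without the thop dependency). It uses a fully connected layer as an example: such a layer performs one multiply-accumulate (MAC) per weight, and each MAC is one multiply plus one add, hence FLOPs = 2 × MACs. The function names `linear_macs` and `macs_to_flops` are illustrative, not from the repo.

```python
def linear_macs(in_features: int, out_features: int) -> int:
    """MACs for one forward pass of a fully connected layer:
    one multiply-accumulate per weight entry."""
    return in_features * out_features


def macs_to_flops(macs: int) -> int:
    """thop-style counters report MACs; each MAC is a multiply
    plus an add, so FLOPs = 2 * MACs."""
    return 2 * macs


macs = linear_macs(1024, 512)
print(macs)                 # 524288 MACs
print(macs_to_flops(macs))  # 1048576 FLOPs
```

With thop itself, the equivalent would be `macs, params = thop.profile(model, inputs=(x,))` followed by the same `2 * macs` doubling.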

@rub2xr6vef (Author)

Thanks for your reply! One more question: what batch size did you use in training?
The technical report says conditional generation on ImageNet-1K was trained with a batch size of 1024, but the training commands given in the README use the default batch size of 128. Which one is correct?
Looking forward to your reply, and thanks in advance.
