
What model was used as a regular ResNet-50, mentioned in the paper as "Fine-Tuning"? #2

Open
demidovd98 opened this issue Nov 21, 2022 · 1 comment



demidovd98 commented Nov 21, 2022

Hello, thanks for the code.

For some reason, I can't reproduce the accuracy for the regular ResNet-50 (without SAM).
With 10% of the data I'm only getting about 32-33% (while the paper reports 37%), but for 15/30/50/100% the results are close to the paper.

Could you please specify which model exactly was used as the regular ResNet-50 (referred to in the paper as "Fine-Tuning")?
Was it a conventional full ResNet-50 simply fine-tuned, or was the classifier trained on top of the features from the 4th conv layer (projected onto 2048-d)?

Thank you.

GANPerf (Owner) commented Nov 26, 2022


Hi guys, thank you for your interest. "Fine-tuning" means training the whole network, including the ResNet-50 backbone and the classifier. If you are only getting about 32-33% with 10% of the data, you may need to check whether you are freezing the backbone and training only the classifier, which would result in lower accuracy. Hope this is helpful to you.
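To illustrate the difference, here is a minimal PyTorch sketch, assuming a torchvision ResNet-50; the class count (200, as for a CUB-200-2011-style dataset) and the optimizer settings are illustrative assumptions, not the repository's exact configuration:

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# ImageNet-pretrained ResNet-50 with a new classification head.
# num_classes = 200 is an illustrative assumption, not the repo's setting.
num_classes = 200
model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# "Fine-tuning" as described in the reply: every parameter, backbone
# included, stays trainable and is handed to the optimizer.
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# The low-accuracy failure mode warned about above would instead look like
# freezing the backbone and optimizing only the new classifier:
# for p in model.parameters():
#     p.requires_grad = False
# model.fc = nn.Linear(model.fc.in_features, num_classes)  # fc stays trainable
# optimizer = optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```

If the frozen-backbone variant above is what was run, switching to the full fine-tuning setup should close most of the gap at the 10% split.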
