Reproducibility Problem (ResNet-50 on ImageNet) #9
Comments
According to the author's email reply, I changed some parameters.

With epoch 180 (the RandAugment paper's epoch count): the performance with N=2, M=9 (the reported optimal values) did not match the reported performance (top-1 error = 22.4). There is also no model that performs as well as the paper claims, even though we evaluated all models in the same search space.

With epoch 270 (AutoAugment's epoch count): if we increase the training epochs to 270, one of the results outperforms both AutoAugment and RandAugment.
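For context on what N and M mean here, the core RandAugment sampling step can be sketched as below. The op names follow the list in the RandAugment paper; the actual transform implementations live in the TF code linked later in this thread, so this is only an illustrative skeleton, not the released implementation.

```python
import random

# Op names from the RandAugment paper's search space (transforms omitted).
AUG_OPS = [
    "Identity", "AutoContrast", "Equalize", "Rotate", "Solarize",
    "Color", "Posterize", "Contrast", "Brightness", "Sharpness",
    "ShearX", "ShearY", "TranslateX", "TranslateY",
]

def randaugment_policy(n=2, m=9, rng=random):
    """Sample N ops uniformly at random; every op shares the single
    global magnitude M. Returns a list of (op_name, magnitude) pairs."""
    return [(rng.choice(AUG_OPS), m) for _ in range(n)]
```

With N=2, M=9 this samples two ops per image, each applied at magnitude 9, which is the setting being compared against the paper's top-1 error of 22.4.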
Have you tried using the randaugment code in https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet? This was the code used for training the ResNet model. Also, we are almost done open-sourcing the ResNet model.
@BarretZoph Is this the code you mentioned? Some parts are mismatched with this code; I will change them and conduct experiments soon.
@BarretZoph Is this correct? Since SolarizeAdd has an addition parameter of 0 (zero), it doesn't affect the image.
Yes, that is the code I mentioned. Also, the code for ResNet-50 will be open-sourced soon! No, I believe the addition is what the magnitude hyperparameter controls. It is the threshold value that does not change.
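To make the SolarizeAdd semantics concrete, here is a NumPy sketch of the op as it appears in the TF autoaugment code: the magnitude maps to `addition`, while `threshold` stays fixed, so `addition=0` is indeed a no-op.

```python
import numpy as np

def solarize_add(image, addition=0, threshold=128):
    """SolarizeAdd: pixels below `threshold` get `addition` added
    (clipped to [0, 255]); pixels at or above the threshold are left
    unchanged. The magnitude hyperparameter controls `addition`;
    `threshold` is the fixed value."""
    img = image.astype(np.int64)
    added = np.clip(img + addition, 0, 255)
    return np.where(img < threshold, added, img).astype(np.uint8)
```

So with `addition=0` the output equals the input, which is what prompted the question above.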
@BarretZoph Thanks for open-sourcing. I really look forward to it! Also, sorry for my misunderstanding about SolarizeAdd. I will update this repo and start new experiments. Thanks.
Great! Let me know how everything goes. The open-sourcing will be done in a week or two!
I need to examine this further, since the performance still doesn't match after I changed the augmentation search space to match RandAugment's code. The best top-1 error is 22.65 after training for 180 epochs. cc @BarretZoph
Hmm, well the code in https://github.com/tensorflow/tpu/tree/master/models/official/resnet will be open-sourced shortly. Hopefully that will resolve all of your issues.
@BarretZoph Thanks, I will look into your open-sourced code. From checking the code you provided, I'm not sure what is different. The items below are a few things that might hurt the performance.
I guess if you open-source your code as well as the configuration, it would be very helpful. Many thanks!
Thanks, I just fixed the image size in the paper! Yes, it should be 224x224. Yes, that is the base config, but a few things were changed (180 epochs and a batch size of 4096 with 32 replicas). DropBlock is actually not used here: since 'dropblock_groups': '' specifies no groups, it will not be applied. https://github.com/tensorflow/tpu/blob/master/models/official/resnet/resnet_main.py#L88 I believe that changing the precision will not make much of a difference. No, LARS is not used with the 4K batch size.
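Collecting the settings described in this reply into one place may help anyone reproducing the run. This dict is only an illustrative summary of the thread, not the released config file, and the key names are assumptions loosely modeled on the `resnet_main.py` flags.

```python
# Illustrative summary of the training setup described above (not the
# actual released config; key names are assumed, not verified).
train_config = {
    "train_epochs": 180,        # 270 was AutoAugment's setting
    "train_batch_size": 4096,   # with 32 TPU replicas
    "image_size": 224,          # 224x224, as corrected in the paper
    "dropblock_groups": "",     # empty string => DropBlock disabled
    "use_lars": False,          # LARS not used even at 4K batch size
}

def dropblock_enabled(cfg):
    # Mirrors the described behavior: no groups listed => DropBlock off.
    return bool(cfg["dropblock_groups"])
```

Under this reading, `dropblock_enabled(train_config)` is false, matching the statement that DropBlock is not applied.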
Hi @ildoonet, thanks for contributing this nice code! I wonder if the performance gap is related to two possible misalignments? These two gaps were my impression when I came across both repos, but I am not sure whether they are real, or, if they are, how much they would affect the performance.
@HobbitLong Thanks! Those are certainly things that can cause differences. As @BarretZoph mentioned, TensorFlow's RandAugment will be open-sourced this week or next; I will examine the code and share what the problem was.
The code is now updated with RandAugment and AutoAugment: https://github.com/tensorflow/tpu/tree/master/models/official/resnet
@BarretZoph Thanks for the update. It will help a lot. I found that the preprocessing for ResNet/EfficientNet implemented by Google is a bit different from what most people use: it regularizes harder by using a smaller cropping region during training, and it uses a center-cropped image with the same aspect ratio at test time. I believe this discrepancy causes some degradation. I will try it with this preprocessing.
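The eval-time crop being described can be sketched as follows. This is an approximation of the TPU preprocessing: it center-crops a region covering `image_size / (image_size + 32)` of the shorter side (aspect ratio preserved) and then resizes to `image_size`; the nearest-neighbor resize here is a simplification of the TF resize the real code uses.

```python
import numpy as np

CROP_PADDING = 32  # padding constant used by the TPU preprocessing

def eval_center_crop(image, image_size=224):
    """Approximate eval-time crop: center-crop a square covering
    image_size / (image_size + CROP_PADDING) of the shorter side,
    then resize to image_size x image_size (nearest-neighbor here;
    the real code resizes with TF image ops)."""
    h, w = image.shape[:2]
    crop = int((image_size / (image_size + CROP_PADDING)) * min(h, w))
    top, left = (h - crop) // 2, (w - crop) // 2
    patch = image[top:top + crop, left:left + crop]
    # Nearest-neighbor resize via index mapping.
    rows = np.arange(image_size) * crop // image_size
    cols = np.arange(image_size) * crop // image_size
    return patch[rows][:, cols]
```

This differs from the common "resize shorter side to 256, center-crop 224" recipe, which is one plausible source of the discrepancy mentioned above.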
I fixed the above problem and got a 23.22 top-1 error with ResNet-50 and N=2, M=9. I guess there are more things to match, but due to my current work, I will have to look into this again in a few weeks.
Any news?
Any news? |
Perhaps related to the issue I just posted about lower magnitudes leading to greater distortions? #24
Has the code been updated for ImageNet reproduction?
Below are the top-1 errors from the grid search over N and M at epoch 180.
But when I changed the epochs from 180 (the paper's) to 270 (AutoAugment's), the result with N=2, M=9 was similar to the reported value (top-1 error = 22.4).
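The grid search referred to above can be sketched as a simple double loop. `train_and_eval` is a placeholder for a full training run returning top-1 error, and the candidate N/M values here are illustrative, not the exact grid used in this thread.

```python
def grid_search(ns=(1, 2, 3), ms=(5, 7, 9, 11), train_and_eval=None):
    """Evaluate every (N, M) pair with the supplied train_and_eval
    callable (expected to return top-1 error) and return the pair
    with the lowest error plus the full result table."""
    results = {}
    for n in ns:
        for m in ms:
            results[(n, m)] = train_and_eval(n, m)
    best = min(results, key=results.get)
    return best, results
```

In practice each call to `train_and_eval` is a full 180-epoch ImageNet run, which is why sweeping the grid is expensive.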