Why not provide the 22k-supervised fine-tuned model? I am really surprised that every available ConvNeXt-V2 pre-trained weight has been fine-tuned on ImageNet-1K. Please release the 22k-supervised ConvNeXt-V2 weights, just like ConvNeXt-V1! 🙏 #72
Hi, I am looking for the 22k-supervised fine-tuned ConvNeXt-V2-H model without 1k-supervised fine-tuning. I want to fine-tune it on ADE20K to reproduce the result in Table 7 of the paper.

@yan-hao-tian:
Thanks for the reply. What I need is the ConvNeXt-V2-Huge weights whose name ends with '22k', meaning the model's last pre-training step is supervised fine-tuning on ImageNet-22K, with no subsequent fine-tuning on ImageNet-1K. The last row of Table 7 in the ConvNeXt-V2 paper reaches 57.0 mIoU on ADE20K with this model, and I cannot find the checkpoint anywhere.
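For reference, here is a minimal sketch of how such a checkpoint could be loaded as a segmentation backbone, assuming the official repo's `convnextv2_huge` constructor from `models/convnextv2.py`. The checkpoint URL is hypothetical: no 22k-only ConvNeXt-V2-Huge weights are currently published, which is exactly the problem this issue raises.

```python
# Sketch: load a (hypothetical) 22k-only ConvNeXt-V2-Huge checkpoint into the
# backbone, dropping the classification head before ADE20K fine-tuning.
import torch
from models.convnextv2 import convnextv2_huge  # from the ConvNeXt-V2 repo

# num_classes matches the ImageNet-22K head (21841 classes); the head is
# discarded below anyway, so the exact value only has to match the checkpoint.
model = convnextv2_huge(num_classes=21841)

ckpt_url = "https://example.com/convnextv2_huge_22k_224_ema.pt"  # hypothetical URL
ckpt = torch.hub.load_state_dict_from_url(ckpt_url, map_location="cpu")
state = ckpt.get("model", ckpt)  # official checkpoints typically nest weights under "model"

# Keep only backbone weights; the 22k classifier head is not used for segmentation.
state = {k: v for k, v in state.items() if not k.startswith("head.")}
missing, unexpected = model.load_state_dict(state, strict=False)
print("missing:", missing, "unexpected:", unexpected)
```

With the head stripped, the remaining backbone weights could then be plugged into an UperNet setup for ADE20K, as in the paper's Table 7 experiments.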