I also tried using the transformer with a DeepLabV3+ architecture myself, but it performs much worse than the official UPerNet code. In your visdom screenshot the mIoU is only around 0.7, while the official code reaches about 0.83. With both your code and my own port, the loss fluctuates severely during training. What could be causing this? Every other unofficial implementation I've found that uses Swin Transformer as the encoder also reports low accuracy — what is the reason?
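One frequent cause of loss instability when moving a transformer backbone into a new training pipeline is applying weight decay to parameters that the official configs deliberately exclude (LayerNorm weights, biases, and the position-embedding tables). Below is a minimal sketch of that parameter-grouping logic; the key-name patterns (`absolute_pos_embed`, `relative_position_bias_table`, `norm`, `bias`) follow the custom keys used in the official Swin segmentation configs, but the helper itself is illustrative, not code from either repository:

```python
def split_decay_groups(named_params,
                       no_decay_keys=("absolute_pos_embed",
                                      "relative_position_bias_table",
                                      "norm", "bias")):
    """Split (name, param) pairs into weight-decay and no-decay groups.

    Mirrors the common practice (and the official Swin configs) of setting
    decay_mult=0 for norms, biases, and position-embedding tables.
    """
    decay, no_decay = [], []
    for name, param in named_params:
        if any(key in name for key in no_decay_keys):
            no_decay.append(name)
        else:
            decay.append(name)
    return decay, no_decay
```

With a real model you would pass `model.named_parameters()` and feed the two groups to AdamW with `weight_decay=0.01` and `weight_decay=0.0` respectively; decaying the norm and position-bias parameters can by itself destabilize training of a ported backbone.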
Also, unlike the official code, this DeepLab-based architecture's loss decreases extremely slowly, and accuracy rises just as slowly. The official code basically converges within thirty or forty epochs. I tried reproducing the official optimizer and learning-rate settings, but the results are still unsatisfactory.
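When reproducing those settings, the warmup phase is easy to overlook and its absence is a common source of early loss spikes with AdamW-trained transformers. A pure-Python sketch of the schedule shape used in the official Swin segmentation configs (linear warmup for 1500 iterations from a 1e-6 ratio, then polynomial decay with power 1.0 over 160k iterations at base lr 6e-5) — treat the exact constants as assumptions to check against the config you are copying:

```python
def lr_at_step(step, base_lr=6e-5, warmup_steps=1500, total_steps=160000,
               warmup_ratio=1e-6, power=1.0, min_lr=0.0):
    """Linear warmup followed by polynomial ('poly') decay of the learning rate."""
    if step < warmup_steps:
        # Ramp linearly from base_lr * warmup_ratio up to base_lr.
        frac = step / warmup_steps
        return base_lr * (warmup_ratio + (1.0 - warmup_ratio) * frac)
    # Polynomial decay over the remaining iterations down to min_lr.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + (base_lr - min_lr) * (1.0 - progress) ** power
```

If the port skips warmup (or uses SGD with a CNN-style learning rate around 0.01), the first few hundred iterations can push the attention weights into a bad region, which would match both the oscillating loss and the slow convergence described above.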
Sorry, I haven't run tests on that myself!
I've recently started working on segmentation again; feel free to add me on WeChat or QQ (11355664) to discuss.