
performance for stage2 #33

Open
xup16 opened this issue Dec 13, 2019 · 7 comments

Comments

@xup16

xup16 commented Dec 13, 2019

Hi~ Thank you for the excellent work.
I have reproduced the stage 1 performance by following your code, but I cannot reproduce the stage 2 performance reported in the paper (50 SAD).
Would you share your stage 2 performance and model if you have tried it?
Thanks!

@huochaitiantang
Owner

I tried stage 2 training (resuming from the SAD=54.42 model and training only the convolutions of the refine stage), but the performance is not as good as the paper's (our best SAD=53.74). There may be some mistakes in the stage 2 training code (loss function or network structure).
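For reference, "training only the convolutions of the refine stage" usually means freezing the encoder/decoder parameters and passing only the refine-stage parameters to the optimizer. A minimal PyTorch sketch of that pattern (the module names `encoder`, `decoder`, `refine` are illustrative stand-ins, not necessarily the names used in this repo):

```python
import torch
import torch.nn as nn

class MattingNet(nn.Module):
    # Minimal stand-in for an encoder-decoder + refine-stage network;
    # the real model's layers and module names may differ.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(4, 8, 3, padding=1)
        self.decoder = nn.Conv2d(8, 1, 3, padding=1)
        self.refine = nn.Sequential(
            nn.Conv2d(5, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )

net = MattingNet()

# Stage 2: freeze everything, then unfreeze only the refine convolutions.
for p in net.parameters():
    p.requires_grad = False
for p in net.refine.parameters():
    p.requires_grad = True

# Give the optimizer only the parameters that remain trainable.
opt = torch.optim.Adam(p for p in net.parameters() if p.requires_grad)
```

If the stage 2 SAD is worse than expected, it is worth checking that the frozen modules are also put in `eval()` mode so that batch-norm statistics (if any) are not updated while their weights are frozen.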

@xup16
Author

xup16 commented Dec 13, 2019

Thank you for the reply.
Did you try stage 3, i.e., training the encoder, decoder, and refine stage end to end?

@huochaitiantang
Owner

Yes, I also tried stage 3 training (resuming from the stage 2 SAD=53.74 model and training the whole network end to end), but got worse performance (best SAD=55.48). There must be some mistake in the refine-stage training. Maybe you could help check the implementation code.

@xup16
Author

xup16 commented Dec 13, 2019

OK. Thank you very much.

@AstonyJ

AstonyJ commented Dec 14, 2019

Hi~ Thank you for the excellent work.
1. I trained stage 1 from scratch following your code, but I got 59.40. I kept the lr constant at 0.00001, but I see your code adjusts the lr. Could this be the reason?
2. How do you train stage 2? I tried, but the result was bad (resuming from the stage 1 59.40 model, batch_size=4, 4 cards). Could you share the parameter settings for stage 2?
Thank you.
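A constant lr versus a decayed one can easily account for a gap like this. As a generic illustration of the kind of adjustment a training script typically does (step decay; the actual schedule in this repo's code may differ):

```python
def adjust_learning_rate(base_lr, epoch, step=10, gamma=0.1):
    """Step decay: multiply the lr by `gamma` every `step` epochs.
    Illustrative only; the repo's actual schedule/hyperparameters may differ."""
    return base_lr * (gamma ** (epoch // step))

# With base_lr=1e-5, the lr drops by 10x every 10 epochs:
lr_epoch_0 = adjust_learning_rate(1e-5, 0)    # 1e-5
lr_epoch_10 = adjust_learning_rate(1e-5, 10)  # ~1e-6
```

Keeping the lr fixed at 1e-5 for the whole run means the later epochs train with a 10-100x larger step than a decayed schedule would use, which can prevent the final SAD from converging as low.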

@wrrJasmine

@huochaitiantang I also ran into this problem. I found that the composition loss is really hard to train. Have you found the reason?
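For anyone debugging this: the composition loss from the Deep Image Matting paper composites the predicted alpha with the ground-truth foreground and background and penalizes the distance to the real image with a smoothed (Charbonnier) L1 penalty. A minimal NumPy sketch of that formula, useful for sanity-checking a training implementation (array shapes and normalization are assumptions; the repo's code may scale images to [0, 1] or by 1/255 differently):

```python
import numpy as np

def composition_loss(alpha_pred, fg, bg, image, eps=1e-6):
    """Composition loss: composite predicted alpha over ground-truth
    fg/bg and compare against the real image with sqrt((.)^2 + eps^2).
    All inputs assumed to be float arrays in [0, 1]."""
    comp = alpha_pred * fg + (1.0 - alpha_pred) * bg
    return float(np.mean(np.sqrt((comp - image) ** 2 + eps ** 2)))

# Sanity check: a perfect alpha should give a loss of ~eps.
fg = np.ones((2, 2, 3))
bg = np.zeros((2, 2, 3))
alpha = np.full((2, 2, 1), 0.5)
image = alpha * fg + (1.0 - alpha) * bg
loss_perfect = composition_loss(alpha, fg, bg, image)  # ~1e-6
loss_wrong = composition_loss(np.zeros_like(alpha), fg, bg, image)
```

A common source of a "hard to train" composition loss is an inconsistent scale between the alpha (often [0, 1]) and the images (often [0, 255]), which makes this term dominate or vanish relative to the alpha-prediction loss.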

@SahadevPoudel

> Hi~ Thank you for the excellent work.
> 1.I have trained stage1 from scratch followed you codes, but I get 59.40.I set the lr 0.00001 constantly,but i see your code would adjust the lr.Is this the reason?
> 2.How do you train stage 2?I tried, but the effect was bad.(resume from stage1 59.40,batch_size=4,4cards).Can you show me the parameter set in stage2?
> Thank you.

Hi, I have trained stage 1 from scratch running the same code, but I get 86.77. Did you make any changes?
