
Can't achieve the performance reported in the paper #30

Open
JaringAu opened this issue Oct 30, 2019 · 4 comments

Comments

@JaringAu

Hi,

This is interesting work.
But we can't achieve the performance reported in PRM (mAP50: 26.8 with MCG proposals).
We can only get 11.5 mAP50 with MCG proposals downloaded from https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/mcg/
and 21.5 mAP50 with COB proposals downloaded from http://www.vision.ee.ethz.ch/~cvlsegmentation/cob/code.html.

We use the default parameters of PRM (https://github.com/ZhouYanzhao/PRM/blob/pytorch/demo/config.yml) to train the classification network (changing train_splits from trainval to trainaug, of course). But we notice that both the quality of the peaks and the instance masks are worse than those reported in the paper.
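For reference, the split change described above amounts to a one-line edit in the linked config.yml, roughly like the following (only the train_splits key is taken from the issue text; the surrounding layout of the file is not reproduced here):

```yaml
# demo/config.yml — excerpt sketch, not the full config
train_splits: trainaug  # changed from the default "trainval"
```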

So we wonder if you use other hyper-parameter settings in your experiments?

Besides, according to our observations, the MCG proposals from https://data.vision.ee.ethz.ch/jpont/mcg/MCG-Pascal-Segmentation_trainvaltest_2012-proposals.tgz are much worse than those shown in the paper and the supplementary material. Do we need to retrain MCG on the PASCAL train set to generate better proposals?

Would you please point out the differences between our experiments and yours that may result in this gap? Or could you give us some advice to boost the performance?

Thanks a lot.

@ZhouYanzhao
Owner

Hi @JaringAu, please check FAQs.

@JaringAu
Author

JaringAu commented Nov 4, 2019

Thanks for your kind help. @ZhouYanzhao

We achieved the reported performance using the reference model, but failed to get comparable results with our own model (also trained on trainaug).

So we wonder if you could kindly share the training strategy or key hyper-parameter settings in your experiments?

Thanks.

@scott870430

Hi @JaringAu, how did you get the MCG proposals (w/ COB signal)?

Or do you already have any proposal you can share with me?

Thanks a lot.

@zwy1996

zwy1996 commented Apr 5, 2021

Hello, @JaringAu

I am sorry to bother you. When I reproduced the results of PRM (CVPR 2018), I got 20.8 mAP on the VOC 2012 val set with ResNet50 and 17.1 mAP on the val set with VGG16, slightly lower than the paper. Did you manage to reproduce the results?

Thank you very much!
