High loss for Text CNN in Stage 1 and COCO dataset questions #6

Kabnoory opened this issue Jul 13, 2018 · 5 comments

@Kabnoory

Hey @layumi, I am trying to replicate your MS-COCO results in TensorFlow, and I have some questions about data processing and the loss:

  1. At the end of Stage 1 my text CNN ('objective_txt') loss is high, around 5.5. What loss did you get at the end of Stage 1?

  2. In dataset/MSCOCO-prepare/prepare_wordcnn_feature2.m you create
    wordcnn = zeros(32,611765,'int16')
    and then loop over all the captions in MS-COCO, but there are 616,767 captions in MS-COCO. What is the reason for this difference of 5,002? When I implemented this in TensorFlow it threw an out-of-range error, because there are more captions than columns in the wordcnn matrix (see the sketch after this list).

  3. The coco_dictionary.mat dimension is 29,972 in your code, but my dictionary has a different size. I wonder whether this is why the loss is high, or whether it is because TensorFlow uses a different random generator than MATLAB. Any suggestions would be great.
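
For reference, here is a rough NumPy analogue of that preallocation (a sketch; 32 is assumed to be the per-caption word limit used by the repo):

    import numpy as np

    # NumPy analogue of the MATLAB call wordcnn = zeros(32,611765,'int16'):
    # one column per training caption, 32 word slots per caption.
    wordcnn = np.zeros((32, 611765), dtype=np.int16)

    # Writing all 616,767 MS-COCO captions into this buffer overruns it
    # by 616,767 - 611,765 = 5,002 columns, which is the out-of-range
    # error described above.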

Thank you!

@layumi (Owner) commented Jul 14, 2018

Hi @Kabnoory
Thank you for following our work!

  1. I do not remember the loss exactly. The training accuracy of the text branch converges to about 70%-80%.
  2. I randomly selected 1k images for testing. Those 1k images and their roughly 5k captions are not included in the dictionary learning, since dictionary learning is part of training (a minimal sketch follows this list).
  3. You may use the TensorFlow random generator; that is fine. I do not think the training/test split plays an important role in training convergence.
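
A minimal sketch of the split in point 2 (the function name and the captions_by_image mapping are hypothetical; the actual split in the repo is done in MATLAB):

    import random

    def split_captions(captions_by_image, n_test=1000, seed=0):
        """Hold out n_test images, and all of their captions, from
        dictionary learning; the rest become the training captions."""
        rng = random.Random(seed)
        image_ids = sorted(captions_by_image)
        test_images = set(rng.sample(image_ids, n_test))
        train_captions = [c for img in image_ids if img not in test_images
                          for c in captions_by_image[img]]
        return train_captions, test_images

With about 5 captions per image, roughly 5k captions are excluded, which matches the 616,767 - 611,765 = 5,002 difference above.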

@layumi (Owner) commented Jul 14, 2018

I will upload a second version of the paper soon with more technical details.
It will be easier to follow.

@Kabnoory (Author)

Thanks for your response! By training accuracy, do you mean top1err_txt or top5err_txt for the text branch in Stage 1? That would be an error of 0.2-0.3, right? I think I will have to reimplement my text CNN, because my loss is much higher than that.

@Kabnoory (Author)

I think the issue was that I set the learning rate to 0.001, but I found here https://github.com/layumi/Image-Text-Embedding/blob/master/matlab/%2Bdagnn/%40DagNN/initParams.m
that each layer has a different learning rate.

I wanted to ask: what is the purpose of
obj.params(p(1)).value = obj.params(p(1)).value * 0.1;
on line 78 of initParams.m?
Also, are there any other changes you made to DagNN, besides the custom learning rates, that I should implement myself in TensorFlow?
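
For context, MatConvNet's DagNN keeps a per-parameter learning-rate multiplier; a minimal sketch of the same idea in plain Python (the layer names and base rate here are illustrative, not the repo's actual values):

    # Per-parameter learning-rate multipliers, mimicking the
    # learningRate field of DagNN parameters in MatConvNet.
    lr_base = 0.1
    lr_mult = {'conv1_f': 1.0, 'fc_final_f': 0.1}  # illustrative names

    def sgd_step(params, grads):
        """One SGD update with a per-layer effective learning rate."""
        for name in params:
            mult = lr_mult.get(name, 1.0)  # default multiplier is 1
            params[name] = params[name] - lr_base * mult * grads[name]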

@layumi (Owner) commented Jul 17, 2018

Hi @Kabnoory,
Yes, it is a trick from early image classification works: they usually use a small initialization for the final fully-connected layer.
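
A minimal NumPy sketch of that trick (the Glorot-style base initializer is an assumption; the 0.1 scaling mirrors line 78 of initParams.m):

    import numpy as np

    def init_fc(fan_in, fan_out, is_final=False):
        """Glorot-style weight init; the final FC layer is shrunk by 0.1,
        as in obj.params(p(1)).value * 0.1 in initParams.m."""
        scale = np.sqrt(2.0 / (fan_in + fan_out))
        w = np.random.randn(fan_in, fan_out) * scale
        if is_final:
            w *= 0.1  # small initialization for the final classifier
        return w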
