pdb.set_trace() if opt.inference_only #31

Open
yongduek opened this issue Mar 1, 2019 · 1 comment

Comments


yongduek commented Mar 1, 2019

main.py calls pdb.set_trace() in the opt.inference_only case, here in the code.

It is a debugger call for Python programs.

Because of this, the COCO evaluation command you give stops after that line, printing the following:

computing CIDEr score...
CIDEr: 1.030
computing SPICE score...
Parsing reference captions
Parsing test captions
SPICE evaluation took: 4.016 s
SPICE: 0.194
Saving the predictions
> /workspace/neuralbabytalk/main.py(179)eval()
-> if tf is not None:
(Pdb)

This seems to be the end of the COCO evaluation.
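
For context, pdb.set_trace() drops the running process into Python's interactive debugger at that point, which is why the evaluation appears to hang at the (Pdb) prompt. Below is a minimal, self-contained sketch of the pattern; the surrounding eval() code is my reconstruction from the trace above, not the actual main.py:

import pdb

class Opt:
    inference_only = True    # hypothetical stand-in for the real opt namespace

opt = Opt()
tf = None                    # placeholder; in main.py this holds the TensorFlow module or None

def eval():
    print('Saving the predictions')
    if opt.inference_only:
        pdb.set_trace()      # execution pauses here and a (Pdb) prompt appears
    if tf is not None:       # the "next line" shown in the (Pdb) trace above
        pass                 # presumably summary logging when TensorFlow is available
    print('eval() finished')

eval()

Typing c (continue) at the (Pdb) prompt resumes execution, as the comment below shows.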

My questions are:

  1. Is there a purpose to using pdb here?
  2. Where is the file with the saved predictions?
@giangnguyen2412

This is what I got when pressing c to continue the program from pdb.

computing Bleu score...
{'testlen': 44929, 'reflen': 45457, 'guess': [44929, 39929, 34929, 29929], 'correct': [225, 8, 0, 0]}
ratio: 0.9883846272301079
Bleu_1: 0.005
Bleu_2: 0.001
Bleu_3: 0.000
Bleu_4: 0.000
computing METEOR score...
METEOR: 0.008
computing Rouge score...
ROUGE_L: 0.004
computing CIDEr score...
CIDEr: 0.002
Saving the predictions
> /home/resl/NeuralBabyTalk/NeuralBabyTalk/main.py(181)eval()
-> if tf is not None:
(Pdb) c
model saved to save/normal_coco_1024_adam/model.pth
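
For what it's worth, if you do not want the run to pause at all, one option (my suggestion, not something the repo documents) is to comment out the pdb.set_trace() line in main.py, or guard it behind an explicit flag, roughly:

import pdb

# hypothetical guard; a `debug` option does not exist in the repo's opts
if opt.inference_only and getattr(opt, 'debug', False):
    pdb.set_trace()   # only drop into the debugger when explicitly requested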
