DPR training is broken #2885

Closed
bogdankostic opened this issue Jul 26, 2022 · 5 comments · Fixed by #2886 or #2908

@bogdankostic
Contributor

Describe the bug
It seems that DPR training does not work at the moment. I suspect that this bug was introduced by #2703.

Error message

  File "/Users/bogdan/Repositories/haystack/tutorials/Tutorial9_DPR_training.py", line 92, in <module>
    tutorial9_dpr_training()
  File "/Users/bogdan/Repositories/haystack/tutorials/Tutorial9_DPR_training.py", line 71, in tutorial9_dpr_training
    retriever.train(
  File "/Users/bogdan/Repositories/haystack/haystack/nodes/retriever/dense.py", line 680, in train
    trainer.train()
  File "/Users/bogdan/Repositories/haystack/haystack/modeling/training/base.py", line 290, in train
    loss = self.compute_loss(batch, step)
  File "/Users/bogdan/Repositories/haystack/haystack/modeling/training/base.py", line 374, in compute_loss
    logits = self.model.forward(**batch)
TypeError: forward() got an unexpected keyword argument 'label_ids'

To Reproduce
Execute Tutorial 9.
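
To make the failure mode concrete, here is a minimal, self-contained sketch with a toy model (the class and its signature are illustrative, not Haystack's actual code). The trainer unpacks the whole batch dict into forward(), so any key the signature does not declare, such as label_ids here, raises the TypeError before any computation runs:

import torch

# Toy stand-in for the bi-encoder; the signature below is hypothetical.
class ToyBiEncoder(torch.nn.Module):
    def forward(self, query_input_ids, passage_input_ids):
        # Dummy similarity matrix, just to make the example runnable.
        return query_input_ids.float() @ passage_input_ids.float().T

batch = {
    "query_input_ids": torch.ones(2, 4, dtype=torch.long),
    "passage_input_ids": torch.ones(2, 4, dtype=torch.long),
    "label_ids": torch.zeros(2, dtype=torch.long),  # extra key from the data loader
}

model = ToyBiEncoder()
model(**batch)  # TypeError: forward() got an unexpected keyword argument 'label_ids'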

@ZanSara
Contributor

ZanSara commented Jul 26, 2022

Related to #2881

@vijayirlapati

Hi @ZanSara,
I see the changes corresponding to this issue are now available. I installed the master branch version and ran the DPR training tutorial on custom data. Training completes without any issue, but I am getting the same kind of error during evaluation.

Error message:

/usr/local/lib/python3.7/dist-packages/haystack/modeling/evaluation/eval.py in eval(self, model, return_preds_and_labels, calibrate_conf_scores, use_confidence_scores_for_ranking, use_no_answer_legacy_confidence)
     75                 padding_mask=batch.get("padding_mask", None),
     76                 output_hidden_states=batch.get("output_hidden_states", False),
---> 77                 output_attentions=batch.get("output_attentions", False),
     78             )
     79         losses_per_head = model.logits_to_loss_per_head(logits=logits, **batch)

TypeError: forward() got an unexpected keyword argument 'input_ids'

Doesn't the same fix need to be applied to the eval function as well?
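
As a sketch of what a shared fix could look like (filter_batch_for_forward is a hypothetical helper, not an existing Haystack function), both the train and eval loops could filter the batch against the model's signature in one place:

import inspect
from typing import Any, Dict

import torch

def filter_batch_for_forward(model: torch.nn.Module, batch: Dict[str, Any]) -> Dict[str, Any]:
    """Drop batch keys that the model's forward() does not declare."""
    params = inspect.signature(model.forward).parameters
    # If forward() accepts **kwargs, pass the batch through untouched.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return batch
    return {k: v for k, v in batch.items() if k in params}

# Usage in a train or eval loop:
# logits = model(**filter_batch_for_forward(model, batch))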

ZanSara reopened this Jul 28, 2022
@ZanSara
Contributor

ZanSara commented Jul 28, 2022

Hey @vijayirlapati, thanks for reporting this. We're still hunting these arguments across the codebase, so if you find more, please keep reporting. They're going to be fixed one by one. I will link the new PR to this issue.
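
In case it helps the hunt, a crude scan like the following (just a sketch; it assumes it runs from the repo root) lists every call site that unpacks **batch and could therefore hit this TypeError when the batch carries a key the callee's signature no longer declares:

import pathlib

# Flag every call site that unpacks **batch into a function call.
for path in sorted(pathlib.Path("haystack").rglob("*.py")):
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if "**batch" in line:
            print(f"{path}:{lineno}: {line.strip()}")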

@vijayirlapati

Sure @ZanSara, thanks for your response.

@ZanSara
Contributor

ZanSara commented Jul 29, 2022

@vijayirlapati Let me know if it works now. If not, feel free to reopen 🙂
