A RuntimeError occurred when use_bert_only==True #33

Open
DingQiang2018 opened this issue Sep 19, 2024 · 0 comments
Hi authors,

I ran into a runtime error when I ran bash best_parser_training_script.sh with only --use-xlnet replaced by --use-bert-only. Here are my environment and the traceback of the error.

cython                    0.29.13
nltk                      3.5
numpy                     1.17.2
pytorch-pretrained-bert   0.6.2
sentencepiece             0.2.0
torch                     2.4.1
transformers              2.8.0
Training...
This is  joint_xlnet_clean_large_3_layers_no_resdrop_lambda
Traceback (most recent call last):
  File "src_joint/main.py", line 788, in <module>
    main()
  File "src_joint/main.py", line 784, in main
    args.callback(args)
  File "src_joint/main.py", line 727, in <lambda>
    subparser.set_defaults(callback=lambda args: run_train(args, hparams))
  File "src_joint/main.py", line 491, in run_train
    _, loss = parser.parse_batch(subbatch_sentences, subbatch_trees)
  File "/data/dingqiang/LAL-Parser/src_joint/KM_parser.py", line 1827, in parse_batch
    = self.parse_from_annotations(fencepost_annotations_start[start:end,:], fencepost_annotations_end[start:end,:], sentences[i], i, gold=golds[i])
  File "/data/dingqiang/LAL-Parser/src_joint/KM_parser.py", line 1900, in parse_from_annotations
    label_scores_chart = self.label_scores_from_annotations(fencepost_annotations_start, fencepost_annotations_end)
  File "/data/dingqiang/LAL-Parser/src_joint/KM_parser.py", line 1885, in label_scores_from_annotations
    label_scores_chart = self.f_label(span_features)
  File "/home/dingqiang/miniconda3/envs/lal_parser/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/dingqiang/miniconda3/envs/lal_parser/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/dingqiang/miniconda3/envs/lal_parser/lib/python3.8/site-packages/torch/nn/modules/container.py", line 219, in forward
    input = module(input)
  File "/home/dingqiang/miniconda3/envs/lal_parser/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/dingqiang/miniconda3/envs/lal_parser/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/dingqiang/miniconda3/envs/lal_parser/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 117, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (16x1024 and 14336x250)
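For what it's worth, the shapes in the message suggest that the first Linear layer in f_label was constructed for 14336-dimensional span features, while the BERT-only path seems to produce 1024-dimensional ones (the BERT-large hidden size). Here is a minimal sketch (not the repository's code; the dimensions are only taken from the error message) that reproduces the same class of failure:

```python
import torch
import torch.nn as nn

d_bert_only = 1024    # assumed span-feature width on the --use-bert-only path (BERT-large hidden size)
d_expected  = 14336   # in_features that f_label's first Linear was built with, per the error message

# Stand-in for f_label: its first Linear expects 14336-dimensional inputs.
f_label = nn.Sequential(
    nn.Linear(d_expected, 250),  # 250 taken from the reported mat2 shape (14336x250)
    nn.ReLU(),
)

span_features = torch.randn(16, d_bert_only)  # 16 spans, as in the traceback

# Raises: RuntimeError: mat1 and mat2 shapes cannot be multiplied (16x1024 and 14336x250)
f_label(span_features)
```

So it looks like the span-feature width computed for the BERT-only configuration does not match the width f_label is initialized with.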

Could you look into and fix this bug? I would be very grateful.
