When the model makes a prediction, the following error occurs:

    outputs = model(texts)
    ......
    batch_size, seq_length = input_shape
    ValueError: not enough values to unpack (expected 2, got 1)
It seems that there is a problem with the dimensionality of the data fed into the model; more precisely, one particular batch has the wrong number of dimensions.
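For context, a minimal way to see why the message says "expected 2, got 1": an empty batch collates to a 1-D tensor, whose shape has only one entry, so it cannot be unpacked into `(batch_size, seq_length)`. The snippet below is a hypothetical reproduction for illustration, not the repo's code:

```python
import torch

# An empty batch collates to a 1-D tensor of shape torch.Size([0]),
# so unpacking its shape into two values fails exactly as in the
# traceback above.
empty_batch = torch.tensor([])
batch_size, seq_length = empty_batch.shape
# ValueError: not enough values to unpack (expected 2, got 1)
```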
Looking up some information, it is often recommended to apply `unsqueeze(0)` to the ids and masks produced by BERT's tokenizer to add a dimension, but that does not work here.
I finally found that the bug appears at the DatasetIterator stage: if your dataset size is an exact integer multiple of the batch size, the following code in `class DatasetIterator(object)` should be modified.
`self.index > self.n_batches` should be changed to `self.index >= self.n_batches`; otherwise the last batch is an empty tensor. There is a lot of code on GitHub for dataset processing before model training; if you have time, it may be worth checking these basic pieces of code :)
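For reference, here is a minimal sketch of the off-by-one. The class body is a simplified reconstruction for illustration (only the stopping condition comes from this issue), assuming an index-based iterator that slices a list of samples batch by batch:

```python
class DatasetIterator(object):
    """Simplified sketch of an index-based batch iterator."""

    def __init__(self, batches, batch_size):
        self.batches = batches                       # list of samples
        self.batch_size = batch_size
        self.n_batches = len(batches) // batch_size  # full batches only
        self.index = 0                               # next batch to emit

    def __iter__(self):
        return self

    def __next__(self):
        # Buggy condition: when len(batches) is an exact multiple of
        # batch_size, index == n_batches slips past `>` and slices an
        # empty batch, which later fails shape unpacking in the model.
        # Fix: stop as soon as index reaches n_batches.
        if self.index >= self.n_batches:  # was: self.index > self.n_batches
            self.index = 0
            raise StopIteration
        batch = self.batches[self.index * self.batch_size:
                             (self.index + 1) * self.batch_size]
        self.index += 1
        return batch


# With 4 samples and batch_size 2 (an exact multiple), the `>` version
# would yield a third, empty batch; the `>=` version yields exactly two.
for batch in DatasetIterator(list(range(4)), batch_size=2):
    print(batch)  # [0, 1] then [2, 3]
```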